\begin{document}
\title{A SIMPLE SCHEME FOR QUANTUM NON-DEMOLITION MEASUREMENT OF THE PHONON NUMBER OF A NANOELECTROMECHANICAL SYSTEM}
\author{F. R. de S. Nunes$^{1}$, J. J. I. de Souza$^{1}$, D. A. Souza$^{1}$, R. C. Viana$^{2}$, and O. P. de S\'a Neto$^{1}$}
\affiliation{$^{1}$ Coordenação de Ciência da Computação,
Universidade Estadual do Piau\'i, CEP: 64202220, Parnaíba, Piau\'i, Brazil.}
\affiliation{$^{2}$ Centro Cir\'urgico do Hospital Dirceu Arcoverde, Parnaíba, Piau\'i, Brazil.}
\date{\today}
\begin{abstract}
In this work we describe a scheme to perform a continuous-in-time quantum non-demolition (QND) measurement of the phonon number of a nanoelectromechanical system (NEMS). Our scheme also gives access to the phonon-number statistics.
\end{abstract}
\maketitle
\section{THE MEASUREMENT PROBLEM IN QUANTUM MECHANICS}
In general, the measurement of an observable in a given quantum system disturbs its state, such that the variance of that observable is larger in a subsequent measurement \cite{Milburn}. This is easily illustrated with a simple system: a harmonic oscillator of mass $m$ and momentum operator $p$, prepared in a thermal state, as considered previously in references \cite{meu0,qnd}. One can initially make a precise measurement of the position $x$, the operator canonically conjugate to the momentum $p$. However, due to Heisenberg's uncertainty principle, $\delta p\geq\hbar/(2\delta x)$, the momentum $p$ is disturbed. In the evolution following this measurement, $p$ induces a variation in $x$: $\dot{x}=[x,p^{2}/2m]/i\hbar$, resulting in $x(t)=x(0)+p(0)t/m$. Therefore, using the uncertainty relation to estimate the uncertainty of $x$ in future measurements, $(\delta x(t))^{2}\geq(\delta x(0))^{2}+(\hbar/2m\delta x(0))^{2}t^{2}$, where position and momentum are assumed to be initially uncorrelated. The measurement apparatus has acted back, randomly disturbing the observable being measured.
\section{GENERAL QND MEASUREMENT PROTOCOL}
A quantum non-demolition (QND) measurement is one that can be performed without disturbing the measured observable. In a QND measurement, the observable $\mathcal{O}_{S}$ of the system $S$ is inferred by measuring an observable $\mathcal{O}_{A}$ of an auxiliary system $A$, without disturbing the subsequent evolution of $\mathcal{O}_{S}$. After a finite number of successive steps, the final state of $S$ remains an eigenstate of $\mathcal{O}_{S}$.
Formally, if we have the total Hamiltonian:
\begin{eqnarray}
H&=&H_{S}+H_{A}+H_{I},
\end{eqnarray}
where $H_{S}$ is the system Hamiltonian, $H_{A}$ the apparatus Hamiltonian, and $H_{I}$ the apparatus-system interaction Hamiltonian. A QND measurement of $O_{S}$ must satisfy the following properties:
\begin{enumerate}
\item {$\frac{\partial H_{I}}{\partial O_{S}}\neq0$ and $[O_{A},H_{I}]\neq0$. This condition arises because we want to measure $O_{S}$ through $O_{A}$: the interaction Hamiltonian must be a function of $O_{S}$, and $O_{A}$ must vary accordingly in order to interact with the system. In fact, this condition must hold for any type of measurement, since it simply requires that the pointer system *CITAR* vary depending on the eigenvectors of the observable being measured;}
\item {The operator of the observable $O_{S}$ must commute with $H_{I}$: this observable must not be changed during the measurement process;}
\item {$\frac{\partial H_{S}}{\partial O_{S}^{C}}=0$. This is the main feature of a QND measurement: after the interaction of $S$ with $A$, the conjugate observable $O_{S}^{C}$ is changed uncontrollably. So that this increase in variance does not affect the observable being measured, we must demand that the system Hamiltonian does not depend on the conjugate observable. A more restrictive way is to require $[H_{S},O_{S}]=0$, because then the observable being measured is a constant of motion.}
\end{enumerate}
\section{MODEL}
The capacitive coupling between a quantum bit (qubit) and a nanoelectromechanical system (NEMS) \cite{ML,ML0} is illustrated in figure \ref{Qubit+NEMS}. In quantum-bit notation, the Hamiltonian of the Cooper-pair box (\ref{55}) is written as
\begin{equation}
H_{qb}=(E_{1}-E_{0})\sigma^{z}-\frac{E_{J}}{2}\sigma^{x},
\label{56}
\end{equation}
where $\sigma^{x}=\left|1\right\rangle\left\langle 0\right|+\left|0\right\rangle\left\langle 1\right|$, $\sigma^{z}=\left|0\right\rangle\left\langle 0\right|-\left|1\right\rangle\left\langle 1\right|$, and $e$ is the electron charge. $E_{n}=2E_{C}(n-n_{g})^{2}$ is the charging energy of $n$ Cooper pairs, with $E_{C}=e^{2}/2C_{\Sigma}$ and $C_{\Sigma}=C_{N}+C_{cpb}+C_{J}$. Also, $n_{g}=n_{N}+n_{cpb}$, where $n_{cpb}=C_{cpb}V_{cpb}/2e$ is the gate charge, $C_{cpb}$ the capacitance, and $V_{cpb}$ the potential difference of the Cooper-pair box; $n_{N}=C_{N}V_{N}/2e$ is the gate charge, $C_{N}$ the capacitance, and $V_{N}$ the potential difference of the NEMS. $E_{J}$ is the Josephson energy of the qubit junction. Therefore, the charging energy required for the transfer of one Cooper pair is:
\begin{eqnarray*}
E_{n+1}-E_{n}&=&2E_{C}\left[(n+1-n_{g})^{2}-(n-n_{g})^{2}\right],
\end{eqnarray*}
and for $n=0$,
\begin{eqnarray*}
E_{1}-E_{0}&=&2E_{C}(1-2n_{g})\nonumber\\
&=&2E_{C}(1-2n_{N}-2n_{cpb}).
\end{eqnarray*}
Assuming a small NEMS oscillation amplitude, we can write $C_{N}=C_{N}(0)+(\frac{\partial C_{N}}{\partial x})x$, with $x$ the displacement of the NEMS along its flexural axis. Thus the capacitive interaction between the qubit and the NEMS mode is:
\begin{eqnarray}
H_{Q-N}&=&\hbar g \sigma^{z}(b+b^{\dag}),
\label{57}
\end{eqnarray}
where $g=\sqrt{\frac{\hbar}{2m\omega}}\times[4n_{N}(0)E_{C}(\frac{\partial C_{N}}{\partial x})]/(\hbar C_{N})$, and $\hbar$ is the Planck constant divided by $2\pi$.
The complete Hamiltonian for this model is:
\begin{eqnarray}
H_{\left|0\right\rangle,\left|1\right\rangle}&=&-\frac{E_{J}}{2}\sigma^{x}+\hbar\omega b^{\dag} b+\hbar g\sigma^{z}\left(b+b^{\dag}\right).
\label{58}
\end{eqnarray}
\begin{figure}[h]
\includegraphics[scale=0.5]{Qubit+NEMS}
\caption{Schematic of the model.}
\label{Qubit+NEMS}
\end{figure}
The Hamiltonian $H_{\left|0\right\rangle,\left|1\right\rangle}$ is written in the Cooper-pair basis. Changing the atomic basis to the new representation,
\begin{equation}
\sigma^{z}\rightarrow \sigma^{x}, \quad \sigma^{x}\rightarrow -\sigma^{z},
\label{60}
\end{equation}
the terms of the Hamiltonian $H_{\left|0\right\rangle,\left|1\right\rangle}$ become:
\begin{equation}
H_{\left|-\right\rangle,\left|+\right\rangle}=\frac{E_{J}}{2} \sigma^{z}+\hbar\omega b^{\dag}b+\hbar g \sigma^{x}\left(b+b^{\dag}\right)
\label{61}
\end{equation}
with $\sigma^{x}=\left|+\right\rangle\left\langle -\right|+\left|-\right\rangle\left\langle +\right|$ and $\sigma^{z}=\left|-\right\rangle\left\langle -\right|-\left|+\right\rangle\left\langle +\right|$.
Making the rotating-wave approximation in the Hamiltonian (\ref{61}), we have
\begin{equation}
\tilde{\mathcal{H}}=\hbar\omega b^{\dag}b+\frac{E_{J}}{2}\sigma^{z}+\hbar g \left(\sigma_{-}b^{\dag}+\sigma_{+}b\right),
\end{equation}
where $\sigma_{+}=\left|+\right\rangle\left\langle -\right|$, $\sigma_{-}=\sigma_{+}^{\dag}$, $\left|-\right\rangle$ is the atomic ground state, and $\left|+\right\rangle$ is the atomic excited state.
For our case, considering a strongly dispersive regime, we can expand the Hamiltonian using the Baker-Campbell-Hausdorff formula as follows,
\begin{eqnarray*}
e^{-\lambda X}\tilde{\mathcal{H}}e^{\lambda X}&=&\tilde{\mathcal{H}}+\lambda\left[\tilde{\mathcal{H}},X\right]+\frac{\lambda^{2}}{2!}\left[\left[\tilde{\mathcal{H}},X\right],X\right]+\ldots
\end{eqnarray*}
where $\lambda=g/\Delta$, $\Delta=\omega-\nu_{a}$, $\nu_{a}=E_{J}/\hbar$, and $X=b^{\dag}\sigma_{-}+b\sigma_{+}$, resulting in the effective Hamiltonian
\begin{eqnarray}
\mathcal{H}_{eff}&\approx&\hbar\left[\omega+\frac{g^{2}}{\Delta}\sigma_{z}\right]b^{\dag}b+\frac{\hbar}{2}\left[\nu_{a}+\frac{g^{2}}{\Delta}\right]\sigma_{z}.
\label{eff}
\end{eqnarray}
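As a quick numerical consistency check (a minimal sketch using QuTiP; the Fock-space truncation and the parameter values below are illustrative assumptions, not experimental values), one can verify that the phonon-number operator commutes with the effective Hamiltonian (\ref{eff}), which is exactly the QND condition $[H_{S},O_{S}]=0$ discussed above:
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, sigmaz, tensor, commutator

# Illustrative (assumed) parameters, hbar = 1, angular-frequency units
N = 10                          # phonon Fock-space truncation (assumption)
omega, nu_a, g = 1.0, 1.3, 0.05
chi = g**2 / (omega - nu_a)     # dispersive shift g^2 / Delta

b = tensor(destroy(N), qeye(2))    # phonon annihilation operator
sz = tensor(qeye(N), sigmaz())     # qubit sigma_z

# Effective Hamiltonian of Eq. (eff)
H_eff = (omega + chi * sz) * b.dag() * b + 0.5 * (nu_a + chi) * sz

# QND condition: the measured observable commutes with the Hamiltonian
print(commutator(H_eff, b.dag() * b).norm())  # ~0: phonon number conserved
print(commutator(H_eff, sz).norm())           # ~0: sigma_z also conserved
\end{verbatim}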
Now, the dynamics of the density operator is governed by the master equation
\begin{eqnarray}
\dot{\rho}&=&\frac{-i}{\hbar}[H_{eff},\rho]+\kappa\mathcal{D}[b]+\gamma\mathcal{D}[\sigma_{-}]+\frac{\gamma_{\varphi}}{2}\mathcal{D}[\sigma_{z}]\nonumber\\
&=&\mathcal{L}\rho,
\label{em}
\end{eqnarray}
where $\mathcal{D}[\alpha]=(2\alpha\rho\alpha^{\dag}-\alpha^{\dag}\alpha\rho-\rho\alpha^{\dag}\alpha)/2$.
\section{RESULTS}
With this we can calculate the correlation
\begin{eqnarray}
\left\langle \sigma_{-}(t)\sigma_{+}(0)\right\rangle_{s}&=&Tr\left[\sigma_{-}e^{\mathcal{L}t}(\left|+\right\rangle\left\langle -\right|)\right],
\label{cor}
\end{eqnarray}
and finally the qubit absorption spectrum,
\begin{eqnarray}
S(\omega)&=&\frac{1}{2\pi}\int_{-\infty}^{\infty}dt\,e^{i\omega t}\left\langle \sigma_{-}(t)\sigma_{+}(0)\right\rangle_{s}.
\label{abspe}
\end{eqnarray}
We used the QuTiP \cite{qutip} package to obtain numerical results for the correlation (fig. \ref{result0}.a), the spectrum (fig. \ref{result0}.b), and its statistical distribution (fig. \ref{result0}.c). For the present calculation we took the qubit in the excited state and the NEMS in the vacuum state, with the thermal occupation number of its reservoir equal to one.
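For reference, a minimal QuTiP sketch of such a calculation is shown below. The truncation, the parameter values, and the use of the standard two-operator correlation and spectrum routines are illustrative assumptions; they are not the exact settings used to produce the figures.
\begin{verbatim}
import numpy as np
from qutip import (destroy, qeye, tensor, sigmaz, sigmam, basis,
                   correlation_2op_1t, spectrum)

# Illustrative (assumed) parameters, hbar = 1, angular-frequency units
N = 15                          # phonon Fock-space truncation
omega, nu_a, g = 1.0, 1.3, 0.05
chi = g**2 / (omega - nu_a)     # dispersive shift g^2 / Delta
kappa, gamma, gamma_phi = 1e-3, 1e-3, 5e-4
n_th = 1.0                      # thermal occupation of the NEMS reservoir

b = tensor(destroy(N), qeye(2))
sz = tensor(qeye(N), sigmaz())
sm = tensor(qeye(N), sigmam())

# Effective Hamiltonian of Eq. (eff)
H = (omega + chi * sz) * b.dag() * b + 0.5 * (nu_a + chi) * sz

# Collapse operators implementing the dissipators of Eq. (em);
# the phonon bath is split into emission and absorption parts
c_ops = [np.sqrt(kappa * (1 + n_th)) * b,
         np.sqrt(kappa * n_th) * b.dag(),
         np.sqrt(gamma) * sm,
         np.sqrt(gamma_phi / 2) * sz]

# Qubit excited (upper sigma_z eigenstate in QuTiP's convention), NEMS in vacuum
psi0 = tensor(basis(N, 0), basis(2, 0))

# Correlation <sigma_-(t) sigma_+(0)> and qubit absorption spectrum
taus = np.linspace(0.0, 5.0 / kappa, 2000)
corr = correlation_2op_1t(H, psi0, taus, c_ops, sm, sm.dag())
wlist = np.linspace(nu_a - 10 * abs(chi), nu_a + 10 * abs(chi), 400)
S = spectrum(H, wlist, c_ops, sm, sm.dag())
\end{verbatim}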
\begin{figure}[t]
\includegraphics[scale=0.3]{linha21}\\(a)\\
\includegraphics[scale=0.45]{linha24}\\(b)\\
\includegraphics[scale=0.45]{linha26}\\(c)\\
\caption{(a) Excited-state correlation as a function of time, for $\chi=g^{2}/\Delta\gg\kappa,\gamma$; (b) qubit absorption spectrum showing the resolved number states of the NEMS in a thermal state; (c) visualization of the quantum states.}
\label{result0}
\end{figure}
In this QND measurement protocol, measuring the phonon number amounts to resolving the Stark-shifted qubit frequencies $\nu_{n}= \nu_{a}+ng^{2}/\Delta$, followed by an independent measurement of the qubit state, since the phonon number is not changed in this process.
\section{DISCUSSION}
Motivated by a set of discoveries \cite{JG,jg0,ML,ML0,jg1}, we explored an electromechanical interaction in a highly dispersive regime to build a QND measurement scheme. We have shown that the qubit absorption spectrum resolves the phonon number states of the NEMS, thereby giving access to each number state and to the Bose-Einstein statistics of the resonator.
\begin{thebibliography}{99}
\bibitem{Milburn} D. F. Walls and G. J. Milburn, {\it Quantum Optics}, 2nd edition, Springer (2007).
\bibitem{meu0} O. P. de S\'a Neto, M. C. de Oliveira, and G. J. Milburn, Temperature measurement and phonon number statistics of a nanoelectromechanical resonator, New Journal of Physics 17, 093010 (2015).
\bibitem{qnd} G. J. Milburn and D. F. Walls, Quantum nondemolition measurements via quadratic coupling, Phys. Rev. A 28, 2065 (1983).
\bibitem{JG} D. I. Schuster, A. A. Houck, J. A. Schreier, A. Wallraff, J. M. Gambetta, A. Blais, L. Frunzio, J. Majer, B. Johnson, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Resolving photon number states in a superconducting circuit, Nature \textbf{445}, 515 (2007).
\bibitem{jg0} J. Gambetta, A. Blais, D. I. Schuster, A. Wallraff, L. Frunzio, J. Majer, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Qubit-photon interactions in a cavity: Measurement-induced dephasing and number splitting, Phys. Rev. A 74, 042318 (2006).
\bibitem{ML} M. D. LaHaye, O. Buu, B. Camarota, and K. C. Schwab, Approaching the quantum limit of a nanomechanical resonator, Science 304 (5667), 74-77 (2004).
\bibitem{ML0} M. D. LaHaye, J. Suh, P. M. Echternach, K. C. Schwab, and M. L. Roukes, Nanomechanical measurements of a superconducting qubit, Nature 459 (7249), 960-964 (2009).
\bibitem{jg1} J. Gambetta, W. A. Braff, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Protocols for optimal readout of qubits using a continuous quantum nondemolition measurement, Phys. Rev. A 76, 012325 (2007).
\bibitem{qutip} J. R. Johansson, P. D. Nation, and F. Nori, QuTiP: An open-source Python framework for the dynamics of open quantum systems, Comp. Phys. Comm. 183, 1760-1772 (2012).
\end{thebibliography}
\end{document}
\begin{document}
\title{Big Free Groups are Almost Free}
\begin{abstract}
\noindent
\textit{It is shown that the big free group (the set of countably-long words over a countable alphabet) is almost free, in the sense that any function from the alphabet to a compact topological group factors through a homomorphism. This statement is in fact a simple corollary of the more general result proven below on the extendability of homomorphisms from subgroups (of a certain kind) of the big free group to a compact topological group.\\ \\
\let\thefootnote\relax\footnotetext{2014 \textit{Mathematics Subject Classification} 20E05, 20E18, 57M05. Keywords: Free groups, Hawaiian earring, homomorphism extension.}}
\end{abstract}
It is an elementary fact that the free group over a set $A$, $F(A)$ can be defined in two equivalent ways:
\begin{definition}
$F(A)$ is the set of all finite, reduced words over the alphabet $A$.
\end{definition}
\begin{definition}
\label{def:free}
$F(A)$ is the unique, up to an isomorphism, group such that any function $f : A \to G$, where $G$ is some group, factors through a homomorphism from $F(A)$ to $G$.
\end{definition}
The first definition suggests a natural generalization of the concept of a free group: what happens if the finiteness requirement on the words is dropped? Indeed, the study of such generalizations can be traced as far back as \cite{higman}. Recently, such groups have been under intense study as it was realized that, in addition to their intrinsic interest, they play an important role in the study of the fundamental groups of spaces which are not semilocally simply-connected \cite{desmit, cannon}. Such groups also appear in the study of smooth loop groups \cite{tlas}, and thus are relevant for the theory of gauge connections on principal bundles. \\
Let us give now the precise definitions of the group of transfinite words. We follow \cite{cannon} closely, to which the reader is referred to for more details if needed.
\begin{definition}
Let $A$ be the alphabet set and let $A^{-1}$ be the set of formal inverses of elements of $A$. A transfinite word is a map $w$ from a countable, linearly ordered set $S$ into $A \cup A^{-1}$ such that the preimage of any element of $A \cup A^{-1}$ is finite.
\end{definition}
Intuitively, a transfinite word is a countable string of letters such that each letter appears at most finitely many times. Two words $w_1 : S_1 \to A \cup A^{-1}$ and $w_2 : S_2 \to A \cup A^{-1}$ are considered identical if there is a bijection $f : S_1 \to S_2$ such that $w_1 = w_2 \circ f$. In this paper we will only deal with words for which both $A$ and $S$ are countable. We thus take $A = \{a_1, a_2, \dots \}$.\\
Transfinite words can be multiplied in essentially the same way as the finite ones:
\begin{definition}
If $w_1 : S_1 \to A \cup A^{-1}$ and $w_2 : S_2 \to A \cup A^{-1}$ are two words, then $w_1 w_2 : S_1S_2 \to A \cup A^{-1}$ is the transfinite word which acts in the obvious way on the domain $S_1S_2$ consisting of the disjoint union of the elements of $S_1$ with $S_2$ with all elements of $S_1$ preceding those of $S_2$.
\end{definition}
Reduction is a little more involved to formulate in the transfinite case:
\begin{definition}
Denoting $\{s \in S : a \leq s \leq b \}$ by $[a,b]_S$, we say that the word $w : S \to A \cup A^{-1}$ admits a cancellation if there is a subset $T$ of $S$ and a mapping $\ast : T \to T$ satisfying the following four conditions for all $t \in T$:
\begin{itemize}
\item $\ast$ is an involution.
\item $[t, t^\ast]_S = [t, t^\ast]_T$.
\item $[t,t^\ast]_T = ([t, t^{\ast}]_T)^\ast$.
\item $w(t^\ast) = w(t)^{-1}$.
\end{itemize}
Denoting $S - T$ by $S/\ast$ and $w$ restricted to $S/\ast$ by $w/\ast$ we say that $w/\ast$ arises by a cancellation from $w$. If a word does not admit cancellations, it will be called reduced.
\end{definition}
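To illustrate the definition with a simple finite example, take $S = \{1,2,3,4\}$ with the usual order and let $w(1) = a$, $w(2) = b$, $w(3) = b^{-1}$, $w(4) = a^{-1}$. Choosing $T = \{2,3\}$ with the involution $2^\ast = 3$, all four conditions hold: $[2,3]_S = [2,3]_T = \{2,3\}$, this interval is mapped to itself by $\ast$, and $w(3) = w(2)^{-1}$. The word $w/\ast$ is then $a a^{-1}$, which admits a further cancellation (now with $T = \{1,4\}$), so $w$ reduces to the empty word.\\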
We shall consider all the words which are related to each other by a cancellation to be equivalent. It can be shown that any such equivalence class contains a unique reduced word and that the set of such equivalence classes becomes a group \cite{cannon}, which is called the big free group over $A$, denoted by $BF(A)$.\\
The big free group is known to be not free, and there is quite some work on its free subgroups \cite{desmit, cannon, eda}. In this paper, we show that the big free group is almost free in the sense that it almost satisfies definition \ref{def:free}. We are going to show that any function from $A$ to a compact topological\footnote{As is customary, we assume that $G$ is Hausdorff.} group $G$ can be factored through a homomorphism from $BF(A)$ to $G$, with the only difference being that this homomorphism is not unique. In fact, we are going to show more: Any homomorphism from a subgroup of $BF(A)$, of a special kind, to a compact topological group $G$ can be extended to a homomorphism from the whole of $BF(A)$ to $G$, which will make the factorizability of a function from $A$ a simple special case. Let us define the special class of subgroups of $BF(A)$ that we need:
\begin{definition}
A subgroup $H$ of $BF(A)$ is called tame if for any reduced $w \in H$ we have that every subword of $w$ is also in $H$, where a subword is the restriction of the word $w: S \to A \cup A^{-1}$ to a set of the form $[a,b]_S$.
\end{definition}
We can now state the main result:
\begin{theorem*}
Let $H$ be a tame subgroup of $BF(A)$ and let $f : H \to G$ be a homomorphism, where $G$ is a compact topological group. Then $f$ extends to a homomorphism from $BF(A)$ to $G$.
\end{theorem*}
The main idea of the proof of the theorem is that one `excises' small intervals around singular (to be defined below) points, thus replacing the initial word with a finite string of elements from the tame subgroup. This string is then mapped to $G$ by applying $f$ to the product of the elements in this string. After this, one essentially `takes the limit' as the lengths of these removed intervals go to zero.\\
Let us begin with the first lemma:
\begin{lemma}
Let $\mathcal{I}$ be a directed set and let $\mathcal{G}$ be the group (under pointwise multiplication) of all functions from $\mathcal{I}$ to a compact, topological group $G$ (i.e. the group of all nets in $G$ indexed by $\mathcal{I}$). Let $\mathcal{G}_0$ be the subgroup of $\mathcal{G}$ consisting of those nets which are eventually constant. If $\pi_\mathcal{I}: \mathcal{G}_0 \to G$ is the natural homomorphism given by $\pi_\mathcal{I}(g_\alpha) = \lim_\alpha g_\alpha$ then $\pi_{\mathcal{I}}$ has an extension to all of $\mathcal{G}$.
\end{lemma}
\begin{proof}
Choose an ultrafilter on $\mathcal{I}$ and let $\ast G$ stand for the set of equivalence classes of $G$-valued nets where we consider two nets equivalent if they are identical on an element of the ultrafilter. Proceeding as is customary in nonstandard analysis \cite{goldblatt, encyclo}, it is easy to see that $\ast G$ is a group. Since it is compact, any element of $\ast G$ is near standard, which allows us to define the standard part map from $\ast G \to G$. This map is a homomorphism and is an extension of $\pi_\mathcal{I}$. From here onwards, $\pi_\mathcal{I}$ denotes this extension.
\end{proof}
Note that the extension above is not unique since it depends on the ultrafilter\footnote{For a concrete example of this nonuniqueness take $G = U(1)$ and $\mathcal{I} = \mathbb{N}$. Then if the odd naturals are in our ultrafilter, $\pi_{\mathcal{I}}$ will assign the value $-1$ to the net $g_n = (-1)^n$. While, if the ultrafilter contained the even naturals, this same net will be assigned the value of $+1$.}.\\
It suffices to prove the Theorem in the case when the tame subgroup $H$ contains all the letters of the alphabet $A$. This follows at once from:
\begin{lemma}
If $A' \subset A$, then any homomorphism $f$ from $BF(A')$ to a compact group $G$ extends to $BF(A)$.
\end{lemma}
\begin{proof}
This is an immediate consequence of the fact that there is a retraction from $BF(A)$ to $BF(A')$ (obtained simply by deleting the letters not contained in $A'$)\footnote{We would like to thank the referee for pointing this out.}. However, let us give an alternative proof since it illustrates in a simple context an idea which will be used later. \\
Consider the set of all finite collections of words in $BF(A')$. Order this set by inclusion making it into a directed set $\mathcal{J}$. Suppose $w \in BF(A)$. By deleting the letters not appearing in $A'$, this word splits into a string of elements in $BF(A')$. Let us denote this string by $w'$ (note that $w'$ is \textit{not} a single element of $BF(A')$, rather it is a string of elements of $BF(A')$). For example, if $A = \{a, b, c\}$, $A' = \{a, b\}$ and $w = a^2ba^{-1}cb^2c^2a^3$, then $w'$ is the string of three words $(a^2ba^{-1}) (b^2) (a^3)$. \\
Now pick any element $j \in \mathcal{J}$ and associate to it the group element $f(w_1) f(w_2) \dots $ where $w_1, w_2, \dots$ are the elements of $j$ appearing in the string $w'$ in the order in which they appear in it (the appearance of an inverse of an element of $j$ is counted as an appearance of the element). Note that since any given letter can appear only finitely many times, the product is a finite one. If no element of $j$ appears in the string $w'$, associate to $j$ the identity element. We thus get a $G$-valued net indexed by $\mathcal{J}$ which we shall denote by $\{ w_j \}_{j \in \mathcal{J}}$. Note that if $w \in BF(A)$ happens to be in $BF(A')$, then this net is eventually constant and is equal to $f(w)$. It is easy to check that if $w_1$ and $w_2$ are any two elements in $BF(A)$ then eventually $ (w_1 \cdot w_2)_j = (w_1)_j (w_2)_j$. This is because the only case when $ (w_1 \cdot w_2)_j = (w_1)_j (w_2)_j$ may fail to hold is when $w_1 = a b c d$ and $w_2 = d^{-1} c^{-1} e f$ where $b, c, e \in BF(A')$ and the word $be$ is irreducible. However, if we let $j_0 = \{b, c, e, bc , c^{-1} e, b e \}$ then for any $j$ containing $j_0$ we do have the equality we want. It follows at once that the map $w \to \{ w_j \}_{j \in \mathcal{J}} \stackrel{\pi_\mathcal{J}}{\to} G$ is a homomorphism which extends $f$. \end{proof}
In view of the above discussion, we shall assume below that $H$ includes all the letters of $A$.\\
Let now $\mathcal{I}$ be the following set
\begin{equation*}
\mathcal{I} = \bigg \{ \{l_n\}_{n=1}^\infty \quad : \quad l_{n+1} \leq l_n \quad , \quad l_n > 0 \quad , \quad \sum_{n=1}^\infty l_n < \infty \bigg \},
\end{equation*}
i.e. the set of all monotone non-increasing, strictly positive, real-valued sequences whose sum is convergent. Order this set by stipulating that $\{l_n \}_{n=1}^\infty \prec \{ l'_n\}_{n=1}^\infty \iff l_n \geq l'_n \, , \, \forall n$. It is obvious that $(\mathcal{I}, \prec)$ becomes a directed set. \\
Let $w \in BF(A)$ be a reduced word. Denote by $k_n$ the total number of times the letter $a_n$ appears in $w$, where we count both the letter and its inverse. Suppose that $\iota \in \mathcal{I}$ is such that $\sum_{n=1}^\infty k_n l_n \equiv L_w < \infty$. We decompose $[0,L_w]$ into two sets, a countable collection of disjoint open intervals and its complement, where the intervals correspond to the letters of the word $w$. This is done in the following way: For every letter in the word, associate an open interval of length $l_n$, if the letter is $a_n$ or $a_n^{-1}$, whose starting point is equal to the sum of the lengths of the intervals corresponding to all the letters preceding the given letter. Thus, if for example our word is $a_2 a_1^2 a_2^{-1}$, we get the following intervals $\{(0,l_2) , (l_2,l_2+l_1), (l_2+l_1, l_2 + 2 l_1), (l_2 + 2l_1, 2l_2 + 2 l_1) \}$. It is obvious in this example and, as is easy to check, true generally, that any two such obtained intervals are disjoint and that each one is a subset of $[0, L_w]$. Denote the complement (in $[0,L_w]$) of the union of these intervals by $C$. $C$ is clearly a closed set. There is an obvious correspondence between subwords of $w$ and subintervals of $[0,L_w]$ with endpoints\footnote{It is irrelevant whether the endpoints are included or not in these subintervals as no letters correspond to endpoints.} in $C$. \\
Let $x \in C$. Note that any such point naturally splits $w$ into the part `before $x$' and the part `after $x$'. We now make:
\begin{definition}
We say that $x$ is \textit{regular on the right/left} if there is an initial/final segment of the word after/before $x$ which is contained in $H$. If a point is both regular on the left and on the right, then we shall simply say that it is \textit{regular}. Points which are not regular will be called \textit{singular} (note that a singular point can be regular on the right or on the left). The set of all singular points will be denoted by $C'$.
\end{definition}
\begin{lemma}
$C'$ is closed.
\end{lemma}
\begin{proof}
This is an immediate consequence of the fact that $H$ is a tame subgroup. To see this, pick any regular point. By definition this means that there is an open interval around it (with endpoints in $C$) such that the word corresponding to this interval is in $H$. This implies that any element of $C$ in this interval is also regular for there is an interval around it whose word is contained in $H$ (and all subintervals are also in $H$). Thus the set of regular points is open (in $C$), which means that $C'$ is closed.
\end{proof}
Fix now $m \in \mathbb{N}$. To every $x \in C'$ we associate two intervals, $I_{x,m}^r$ and $I_{x,m}^l$, in the following way:
\begin{itemize}
\item If $x$ is regular on the right, then $I_{x,m}^r = [x,x] = \{x \}$.
\item If $x$ is not regular on the right, let $\alpha_{x,m} = \sup \left( C' \cap [x, x + \frac{1}{m} ] \right)$. We now have two subcases:
\begin{itemize}
\item $\alpha_{x,m} = x$. In this case $I_{x,m}^r = [x, x + \frac{1}{m}] \cap [0, L_w]$.
\item $\alpha_{x,m} \neq x$. In this case $I_{x,m}^r = [x , \alpha_{x,m}]$.
\end{itemize}
\item $I_{x,m}^l$ is defined with obvious changes in an analogous way.
\end{itemize}
Consider now the set $C_m$ given by: $$C_m = \bigcup_{x \in C'} (I_{x,m}^r \cup I_{x,m}^l). $$ Note that $C' \subset C_m$ and that any connected component of $C_m$, being a subset of $\mathbb{R}$, is an interval.\\
Now that we know that the connected components of $C_m$ are all intervals, we shall classify them into two classes. The first class are those whose length is greater or equal to $\frac{1}{m}$, while the second one are those whose length is strictly less. We have the following lemma:
\begin{lemma}
If $I$ is an interval of the second class, then its endpoints are in $C$. Additionally, unless one of the points involved is an endpoint of $[0,L_w]$, the distance between the two left endpoints of any two intervals of the second class is always greater or equal to $\frac{1}{m}$ with the same being true for right endpoints.
\end{lemma}
\begin{proof}
We will use the following basic fact from point-set topology:\\
\textit{If an interval is a union of a collection of intervals, then the left endpoint of the original interval is contained in the closure of the left endpoints of the intervals in the collection, with the same being true for right endpoints.}\\
We know that $I$ is a union of intervals of the form $I_{x,m}^r$ and $I_{x,m}^l$ where $m$ is held fixed and $x$ ranges over a subset of $C'$. It is clear that the right endpoint of any $I_{x,m}^l$, which is just $x$, is in $C'$ and thus in $C$. On the other hand, the right endpoint of any $I_{x,m}^r$ can be:
\begin{itemize}
\item Equal to $L_w$ and thus is in $C$.
\item Equal to $\alpha_{x,m}$ in which case it is in $C'$, since $C'$ is closed (recall the definition of $\alpha_{x,m}$ above). In this case the right endpoint is also in $C$.
\item Equal to a point not in $C$.
\end{itemize}
The last case however, can only happen when $\alpha_{x,m} = x$, in which case the length of $I^r_{x,m}$ is equal to $\frac{1}{m}$. Since $I$ is assumed to be an interval of the second class, i.e. its length is strictly less than $\frac{1}{m}$, this case cannot occur. We thus have that the right endpoints of all the intervals whose union is equal to $I$ are all contained in $C$. It follows that the right endpoint of $I$ is in $C$ as well. Needless to say, the same argument shows that the left endpoint of $I$ is in $C$ as well. Note that the discussion above shows that if an endpoint of $I$ is not an endpoint of $[0,L_w]$, then the endpoint is in fact in $C'$. Moreover, since $C' \subset C_m$, then $I$ is closed since its endpoints cannot be in any other connected component of $C_m$. \\
Now suppose we take two intervals of the second class, $[a_1, b_1]$, $[a_2, b_2]$ and assume $a_1 \neq 0$. Since $a_1 \in C'$, we know that it cannot be regular. It has to be regular on the left since otherwise $I_{a_1, m}^l$ being an interval of nonzero length with right endpoint equal to $a_1$ would not be contained in $[a_1, b_1]$. It follows that $a_1$ is not regular on the right. This implies that $a_2 > a_1 + \frac{1}{m}$, for otherwise $[a_1, a_2] \subset I_{a_1,m}^r \subset [a_1, b_1]$ which is a contradiction. Thus the left endpoints of the intervals of second class (if they are not endpoints of $[0,L_w]$) are always at least $\frac{1}{m}$ apart. The same argument shows that the same is true for the right endpoints. This concludes the proof of this lemma. \end{proof}
In view of the above lemma, and since all intervals are a subset of $[0,L_w]$, it is clear that the number of intervals of the second class must be finite. \\
The number of intervals of the first class is also finite. This is because the sum of their lengths (each of which is greater or equal to $\frac{1}{m}$) has to be finite, being bounded from above by $L_w$.\\
It could happen that an interval of the first class has its two endpoints not in $C$. In this case replace it by the smallest closed interval containing it whose endpoints are in $C$.\\
Summarizing the above construction we have obtained, for a fixed word $w$, a choice of $\iota \in \mathcal{I}$ (with the condition that $L_w < \infty$) and for an $m \in \mathbb{N}$, a finite collection of disjoint intervals which contain all the singular points in $[0, L_w]$ and whose endpoints are always in $C$. Now, note that if we delete all the subwords of $w$ corresponding to these intervals\footnote{Recall that there is a correspondence between subwords and subintervals with endpoints in $C$.}, we will be left with a finite string of subwords $w_1 w_2 \dots w_n$. We now state the following:
\begin{lemma}
Each of $w_1, w_2, \dots, w_n$ is in $H$.
\end{lemma}
\begin{proof}
Fix any particular letter $a$ in the subword $w_k$ and consider the set of all words in $H$ which are subwords of $w_k$ containing this particular letter. Every one of such subwords corresponds to a subinterval of the interval corresponding to $w_k$. Let $\alpha$ be the infimum of the left endpoints of these intervals. Similarly, let $\beta$ be the supremum of the right endpoints. We claim that $\alpha$ is regular on the right. This is the case for otherwise $I_{\alpha,m}^r$ would be a nontrivial interval contained in the interval corresponding to $w_k$ which is impossible since all such intervals were deleted. The same argument shows that $\beta$ must be regular on the left.\\
We claim that the word corresponding to $[\alpha, \beta]$ is in $H$. Since $\alpha$ is regular on the right, there is an initial segment of the word corresponding to $[\alpha, \beta]$ such that the word corresponding to it is in $H$. If it was not possible to choose this segment to include the given fixed letter $a$, it would follow that there could be no overlap between the interval corresponding to this initial segment and the interval corresponding to any subword of $w_k$ which is in $H$ and which contains this letter (this is due to the fact that $H$ is a tame subgroup and thus if the intervals corresponding to two words in $H$ overlap, then the word corresponding to the union of the two intervals is also in $H$). However $\alpha$ is the infimum of such intervals and we have a contradiction. The same argument shows that there is a final segment of the word corresponding to $[\alpha, \beta]$ containing the given fixed letter. Again using the fact that $H$ is tame we have that the word corresponding to $[\alpha, \beta]$ is in $H$.\\
If this word was not equal to $w_k$, i.e. if e.g. $\alpha$ was not the left endpoint of $w_k$, it would follow that $\alpha$ is regular and there would be a strictly longer subword than $[\alpha, \beta]$ which would still be in $H$. This would contradict the way $\alpha$ was defined.
\end{proof}
We are now ready to finish:
\begin{proof}[Proof of the Theorem]
The discussion above shows that for any element $\iota \in \mathcal{I}$ (with $L_w < \infty$) and any $m \in \mathbb{N}$ we can associate to $w$ a finite string of words in $H$. Multiplying these words in the order in which they appear in the string gives a word in $H$. Let us denote this word by $h_{\iota, m} (w)$, where we have kept the dependence on $\iota$ and $m$ explicit. It is obvious that if $w \in H$ then $h_{\iota, m} (w) = w$. This is simply because if $w \in H$, there are no singular points and thus no intervals to delete. \\
Let us begin by observing that $h_{\iota, m} (w^{-1}) = ( h_{\iota, m} (w) )^{-1}$. To see this, note that $L_w = L_{w^{-1}}$ as a consequence of the fact that we associate intervals of the same length to a letter and to its inverse. Also, if we have a decomposition of $[0,L_w]$ into intervals corresponding to the letters of $w$, then the decomposition of $[0, L_{w^{-1}}]$ is obtained from it by reflecting through the origin and then shifting to the right by $L_w$ (simply because we exchange the sum of the lengths of the intervals `before' a letter, with those `after' it). Note that if an interval in $[0,L_w]$ corresponded to a letter in $w$, then the reflected and shifted interval in $[0, L_{w^{-1}}]$ corresponds now to the inverse of the letter in $w^{-1}$. \\
Now, it follows from the fact that the definitions of regularity on the right/left are mirror images of each other and from a similar symmetry in the definitions of $I^r_{x,m}, I^l_{x,m}$, that the set $C_m$ we shall delete from $[0, L_{w^{-1}}]$ is obtained from the $C_m$ deleted from $[0,L_w]$ by reflection and translation. This however, means that the finite string of words obtained from $[0,L_{w^{-1}}]$ is the string of words one obtains from $[0,L_w]$ by `reflection', i.e. by rewriting the words in the opposite order, by rewriting the letters in each word in the opposite order and replacing each letter with its inverse (all the intervals are in the opposite order and each interval now corresponds to the inverse of the original letter). It is immediate that the product of the string of words obtained from $[0, L_{w^{-1}}]$ (which is $h_{\iota,m}(w^{-1})$) is the product of the `reflected' words, which is equal to $( h_{\iota, m} (w) )^{-1}$, and we have the equality we want. \\
Keeping $\iota$ fixed for now, we claim that if $w$ and $\tilde{w}$ are two words such that $L_w, L_{\tilde{w}} < \infty$ then eventually, i.e. for all sufficiently large $m$, we have
\begin{equation}
\label{eq:eq}
h_{\iota, m}(w \cdot \tilde{w}) = h_{\iota,m} (w) \cdot h_{\iota,m}(\tilde{w}).
\end{equation}
It is enough to prove this eventual equality for words whose concatenation is irreducible. To see this, assume that this was done. Let $w$ and $\tilde{w}$ be two words and write them as $w=w' \cdot w''$ and $\tilde{w} = w''^{-1} \cdot \tilde{w}'$, where $w''$ is the part that gets reduced when $w$ and $\tilde{w}$ are concatenated. We then have that eventually
\begin{eqnarray*}
h_{\iota, m} (w \cdot \tilde{w}) & = & h_{\iota, m} (w' \cdot \tilde{w}') \\
& = & h_{\iota, m} (w') \cdot h_{\iota, m} (\tilde{w}') \\
& = & h_{\iota, m} (w') \cdot h_{\iota, m}(w'') \cdot h_{\iota, m}(w''^{-1}) \cdot h_{\iota, m} (\tilde{w}')\\
& = & h_{\iota, m} (w' \cdot w'') \cdot h_{\iota, m} (w''^{-1} \cdot \tilde{w}') \\
& = & h_{\iota,m} (w) \cdot h_{\iota,m}(\tilde{w})
\end{eqnarray*}
Therefore assume that $w$ and $\tilde{w}$ are two words whose concatenation is irreducible. How can equality (\ref{eq:eq}) fail to hold? The only way this could happen is when there is a mismatch between the intervals deleted in $w$ and $\tilde{w}$ and the intervals deleted in $w \cdot \tilde{w}$. For instance $\tilde{w}$ could be an element in $H$ (so no subintervals of it should be deleted), while there could be a point in the interval corresponding to $w$ which is not regular on the right and which is sufficiently close to $L_w$ such that the excised interval at this point `spills' over to the interval corresponding to $\tilde{w}$ in $w \cdot \tilde{w}$. This would cause a part of $\tilde{w}$ to be deleted causing (\ref{eq:eq}) to fail.\\
Let us see that this does not happen when $m$ is sufficiently large. There are three cases to consider:
\begin{itemize}
\item $L_w$ is regular on the right and on the left in $[0,L_{w \cdot \tilde{w}}]$: In this case choose $m$ to be large enough so that $L_w$ is more than $\frac{1}{m}$ from the nearest singular point.
\item $L_w$ is regular on the right but not on the left (or vice versa): Let $m$ be large enough so that there are no singular points in $(L_w, L_w +\frac{1}{m}]$. Note that no $I_{x,m}^r$ for $x < L_w$ can have its right endpoint larger than $L_w$, since $\alpha_{x,m} \leq L_w$ (because $L_w \in C'$) for any such $x$.
\item $L_w$ is not regular on the right nor on the left: In this case any $m$ works. Suppose that $x \in [0, L_w]$ and that $I^r_{x,m}$ `spills over' to $[L_w, L_{w \cdot \tilde{w}}]$, i.e. more precisely $I^r_{x,m}\cap [L_w, L_{w \cdot \tilde{w}}] \neq \phi$ (note that here we are considering $I^r_{x,m}$ for the word $w \cdot \tilde{w}$). However, by assumption $L_w$ is not regular on the right. We claim that $I^r_{L_w,m} \supset I^r_{x,m}\cap [L_w, L_{w \cdot \tilde{w}}]$. To see this, consider the two cases:
\begin{itemize}
\item $\alpha_{L_w,m} = L_w$: In this case $I^r_{L_w,m} = [L_w, L_w + \frac{1}{m}] \supset \{L_w \} = I^r_{x,m}\cap [L_w, L_{w \cdot \tilde{w}}]$. Note that the last equality holds because $\alpha_{x,m} = L_w$.
\item $\alpha_{L_w,m} \neq L_w$: In this case $I^r_{L_w,m} = [L_w, \alpha_{L_w,m}] \supset [L_w, \alpha_{x,m} ]$. This follows trivially from $\alpha_{L_w,m} \geq \alpha_{x,m}$.
\end{itemize}
Above, we only considered `right' intervals. Needless to say symmetric statements are true regarding $I^l_{L_w,m}$. Thus, the `spillovers' are contained in $I^r_{L_w,m}$ and $I^l_{L_w,m}$. Therefore, in this case, there is no mismatch between the intervals removed from the words whether the words are considered individually or are concatenated.
\end{itemize}
Thus we can always choose $m$ to be large enough so that the intervals deleted from $w$ and from $\tilde{w}$ match those which are deleted from $w \cdot \tilde{w}$. This means that the string of elements of $H$ obtained from $w \cdot \tilde{w}$ is the concatenation of the strings obtained from $w$ and $\tilde{w}$. It follows that (\ref{eq:eq}) holds.\\
Keeping $\iota$ fixed for now, and using $f$ (the given homomorphism from $H$ to $G$), we can associate to $w$ a sequence of elements in $G$, $ m \to g_{\iota, m} = f( h_{\iota,m}(w) ) $, where we have kept the dependence on $\iota$ explicit. In view of (\ref{eq:eq}) we have that, eventually, the sequence that corresponds to $w \cdot \tilde{w}$ is equal to the pointwise product of the sequences corresponding to $w$ and to $\tilde{w}$. \\
Using $\pi_\mathbb{N}$ (nets are simply sequences here), we can map the sequence to a single group element $g_\iota$. Let $g_\iota$ be equal to the identity element of $G$ if $L_w = \infty$ (note that for any word $w$, we do have eventually $L_w < \infty$). We thus get for any word a net of group elements $\{ g_\iota \}_{\iota \in \mathcal{I}}$ such that the net corresponding to the product of two words is equal eventually to the pointwise product of the two individual nets. Using $\pi_\mathcal{I}$ again (the nets here are indexed by $\mathcal{I}$ of course), we see that the map $w \to \{ g_\iota \}_{\iota \in \mathcal{I}} \stackrel{\pi_\mathcal{I}}{\to} G$ is the homomorphism extension that we seek.
\end{proof}
We have the following immediate corollary:
\begin{corollary*}
$BF(A)$ satisfies definition $2$, if $G$ is a compact, topological group.
\end{corollary*}
\begin{proof}
This follows at once from the fact that $F(A)$ is a tame subgroup of $BF(A)$.
\end{proof}
Let us finish by noting that the extension in the Corollary is never unique. To see this let $A = \{a_1, a_2, \dots \}, \alpha = a_1 a_2 \dots$ and $H = F(A \cup \{\alpha\})$ (in other words, $H$ is the free group generated by $A \cup \{\alpha\}$). Since any subword of $\alpha$ belongs to $H$, $H$ is tame. The theorem guarantees that any homomorphism from $H$ to a compact $G$ extends to $BF(A)$. However, since $H$ is free, there are infinitely many homomorphisms on it which coincide when restricted to $F(A)$ (they only differ in their action on $\alpha$). Thus no extension of a homomorphism from $F(A)$ to $G$ is unique.\\
\textbf{Acknowledgements:} The author would like to thank an anonymous referee for several suggestions which have greatly improved the manuscript and its readability.
\begin{thebibliography}{99}
\bibitem{cannon} J. W. Cannon, G. R. Conner, `The combinatorial structure of the Hawaiian earring group', Topology Appl. 106 (2000), no. 3, 225--271.
\bibitem{eda} G. Conner, K. Eda, `Free subgroups of free complete products', J. Algebra 250 (2002), no. 2, 696--708.
\bibitem{desmit} B. de Smit, `The fundamental group of the Hawaiian earring is not free', Internat. J. Algebra Comput. 2 (1) (1992), 33--37.
\bibitem{goldblatt} R. Goldblatt, \textit{``Lectures on the hyperreals''}, Graduate Texts in Mathematics, 188, Springer-Verlag, New York, \textbf{1998}.
\bibitem{encyclo} K. P. Hart, J. Nagata, J. E. Vaughan, \textit{``Encyclopedia of General Topology''}, Elsevier Science, 2004.
\bibitem{higman} G. Higman, `Unrestricted free products and varieties of topological groups', J. London Math. Soc 27, (1952), 73--81.
\bibitem{tlas} T. Tlas, `On the Holonomic Equivalence of Two Curves', arXiv:1311.6611 [math.DG].
\end{thebibliography}
\texttt{{\footnotesize Department of Mathematics, American University of Beirut, Beirut, Lebanon.}
}\\ \texttt{\footnotesize{Email address}} : \textbf{\footnotesize{[email protected]}}
\end{document}
\begin{document}
\title{Nonclassical microwave radiation from the dynamical Casimir effect}
\author{J.R. Johansson}
\email{[email protected]}
\affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama, 351-0198 Japan}
\author{G. Johansson}
\affiliation{Microtechnology and Nanoscience, MC2, Chalmers University of Technology, SE-412 96 G{\"o}teborg, Sweden}
\author{C.M. Wilson}
\affiliation{Microtechnology and Nanoscience, MC2, Chalmers University of Technology, SE-412 96 G{\"o}teborg, Sweden}
\author{P. Delsing}
\affiliation{Microtechnology and Nanoscience, MC2, Chalmers University of Technology, SE-412 96 G{\"o}teborg, Sweden}
\author{F. Nori}
\affiliation{Advanced Science Institute, RIKEN, Wako-shi, Saitama, 351-0198 Japan}
\affiliation{Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA}
\date{\today}
\begin{abstract}
We investigate quantum correlations in microwave radiation produced by the dynamical Casimir effect in a superconducting waveguide terminated and modulated by a superconducting quantum interference device. We apply nonclassicality tests and evaluate the entanglement for the predicted field states. For realistic circuit parameters, including thermal background noise, the results indicate that the produced radiation can be strictly nonclassical and can have a measurable amount of intermode entanglement. If measured experimentally, these nonclassicality indicators could give further evidence of the quantum nature of the dynamical Casimir radiation in these circuits.
\end{abstract}
\pacs{85.25.Cp, 42.50.Lc, 03.70.+k}
\maketitle
Vacuum fluctuations are fundamental in quantum mechanics, yet they have not so far played an active role in the rapidly advancing field of engineered quantum devices, e.g., for quantum information processing and communication. The main reason is that it has been notably difficult to observe dynamical consequences of the vacuum fluctuations \cite{nation:2012}, let alone use them for applications. The dynamical Casimir effect (DCE) \cite{dodonov:2010, dalvit:2011} is a vacuum amplification process that can produce pairs of photons from quantum vacuum fluctuations by means of nonadiabatic changes in the mode structure of the quantum field, e.g., by a changing boundary condition \cite{moore:1970,fulling:1976} or index of refraction \cite{yablonovitch:1989,uhlmann:2004}. As such it could potentially be applied as a source of entangled microwave photons.
For decades the DCE eluded experimental demonstration, largely due to the challenging prerequisite of nonadiabatic changes in the mode structure with respect to the speed of light. However, using a varying boundary condition in a superconducting waveguide \cite{johansson:2009, johansson:2010}, the experimental observation of the DCE was recently reported \cite{wilson:2011}. This experiment also demonstrated that the dynamical Casimir radiation exhibits the expected two-mode squeezing \cite{dodonov:1990,dodonov:1999,dezael:2010,johansson:2010}, which is a consequence of a nonclassical pairwise photon-creation process.
The microwave radiation produced by the DCE in superconducting circuits therefore has high potential of being distinctly nonclassical. Whether the state of a quantum field is nonclassical, or if it could be produced by a classical process, may be demarcated by evaluating certain carefully-designed inequalities \cite{miranowicz:2010} for the field observables (nonclassicality tests). In this paper, we apply such nonclassicality tests to show that the microwave radiation produced by the DCE in these superconducting circuits can be distinctly nonclassical, even when taking into account the background thermal noise \cite{plunien:2000,schutzhold:2002} and higher-order scattering processes. Using auxiliary quantum systems as detectors \cite{dodonov:2012a,dodonov:2012b} could be an alternative to directly measure the field quadratures, which could provide further opportunities to detect nonclassical correlations, e.g., on the single photon-pair level \cite{chen:2011}.
{\it DCE in superconducting circuits.---}
Superconducting circuits are strikingly favorable for amplifying vacuum fluctuations because of their inherently low dissipation, which allows the vacuum state to be reached, and the in-situ tunability of an essential circuit element, namely the Josephson junction (JJ). A JJ is characterized by its Josephson energy, and by arranging two such junctions in a superconducting loop -- a superconducting quantum interference device (SQUID) -- an effective tunable JJ can be produced. The Josephson energy of the effective junction can be tuned by the applied magnetic flux through the SQUID-loop. This in-situ tunability can be used to produce waveguide circuits with tunable boundary conditions \cite{wallquist:2006,laloy:2008,sandberg:2008,yamamoto:2008}, as employed in the DCE experiment in Ref.~\cite{wilson:2011}, and tunable index of refraction \cite{castellanos:2008,nation:2009,latheenmaki:2011}. Tunable JJs are also essential in related DCE proposals based on circuit QED with tunable coupling \cite{deliberato:2009}.
The electromagnetic field confined by a superconducting waveguide, such as a coplanar or strip-line waveguide, can be described quantum mechanically in terms of the flux operator $\Phi(x,t)$. It is related to the voltage operator by $\Phi(x,t) = \int^tdt'V(x,t')$, and to the gauge-invariant superconducting phase operator $\varphi = 2\pi\Phi/\Phi_0$, where $\Phi_0 = h/2e$ is the magnetic flux quantum. The flux field in the transmission line obeys the massless, one-dimensional Klein-Gordon wave equation, $\partial_{xx}\Phi(x,t)-v^{-2}\partial_{tt}\Phi(x,t)=0$, which has independent left- and right-propagating components. Using this decomposition, the field can be written in the form
\begin{eqnarray}
\label{eq:field}
\Phi(x,t) &=& \sqrt{\frac{\hbar Z_0}{4\pi}}\int_{-\infty}^{\infty} \frac{d\omega}{\sqrt{|\omega|}}\times\nonumber\\
&&
\left[a(\omega) e^{-i(-k_\omega x +\omega t)} + b(\omega)e^{-i(k_\omega x +\omega t)}\right],
\end{eqnarray}
where $a(\omega)$ and $b(\omega)$ are the annihilation operators for photons with frequency $\omega/2\pi>0$ propagating to the right (incoming) and left (outgoing), respectively. Here we have used the notation $a(-\omega)=a^\dag(\omega)$, and $k_\omega = \omega/v$ is the wavenumber, $v$ is the speed of light in the waveguide, and $Z_0$ the characteristic impedance.
Using the previously discussed flux-tunable SQUID termination of the waveguide, one can produce a tunable boundary condition (see also Refs.~\cite{alves:2006,silva:2011}) for the quantum field [Eq.~(\ref{eq:field})],
\begin{eqnarray}
\Phi(0,t) + \left.L_{\rm eff}(t)\partial_x\Phi(x,t)\right|_{x=0} = 0,
\end{eqnarray}
that can be characterized by an effective length $L_{\rm eff}(t) = \left(\Phi_0/2\pi\right)^2/(E_J(t)L_0)$, where $L_0$ is the characteristic inductance per unit length of the waveguide and $E_J(t)=E_J[\Phi_{\rm ext}(t)]$ is the flux-dependent effective Josephson energy. To arrive at this boundary condition we have neglected the capacitance of the SQUID and assumed small phase fluctuations, which is justified for a large SQUID plasma frequency \cite{johansson:2009,johansson:2010}. For sinusoidal modulation with frequency $\omega_d/2\pi$ and normalized amplitude $\epsilon$, $E_J(t) = E_J^0 [1 + \epsilon \sin \omega_d t]$, we obtain an effective length modulation amplitude $\delta\!L_{\rm eff} = \epsilon L^0_{\rm eff}$, where $L^0_{\rm eff} = L_{\rm eff}(0)$. A strong modulation (corresponding to an effective velocity $v_{\rm eff}=\delta\!L_{\rm eff}\omega_d$ that is a significant fraction of the speed of light in the waveguide $v$), results in nonadiabatic changes in the mode structure of the quantum field, and the emission of photons as described by the DCE.
The DCE can be analyzed using scattering theory that describes how the time-dependent boundary condition, or region of the waveguide with a time-dependent index of refraction, mixes the otherwise independent left and right propagating modes \cite{lambrecht:1996}. The superconducting circuits considered here were analyzed using this method in Refs.~\cite{johansson:2009,johansson:2010}, where the weak-modulation regime was studied analytically using perturbation theory, and the strong-modulation regime was studied using a higher-order numerical method.
In the perturbative regime, the resulting output field is correlated at modes with angular frequencies $\omega$ and $\omega_d-\omega$, i.e., symmetrically around half the driving frequency. This intermode symmetry is emphasized when the output field is written for two such correlated modes:
\begin{eqnarray}
\label{eq:output-field-perturbation-simplified-notation}
b_\pm = -a_\pm -i\frac{\delta\!L_{\rm eff}}{v}\sqrt{\omega_+\omega_-}a^\dag_\mp,
\end{eqnarray}
where we have introduced the short-hand notation $a_\pm=a(\omega_\pm)$ and $b_\pm=b(\omega_\pm)$, and where $\omega_\pm = \omega_d/2 \pm \delta\omega$ and $\delta\omega$ is the symmetric detuning. In this perturbation calculation, the small parameter is $\delta\!L_{\rm eff}\sqrt{\omega_-\omega_+}/v \approx \epsilon L_{\rm eff}(0)\omega_d/2v$. Here, even if the input field is in the vacuum state, $\left<a^\dag_\pm a_\pm\right> = 0$, the output field Eq.~(\ref{eq:output-field-perturbation-simplified-notation}) has a nonzero, symmetric photon flux $\left<b^\dag_\pm b_\pm\right> = (\delta\!L_{\rm eff}/v)^2\omega_+\omega_-$, i.e., the dynamical Casimir radiation. Furthermore, the photons in the two modes have bunching-like statistics, where the probability of simultaneously observing one photon in each mode is equal to the probability of observing a photon in one of the modes
$\left<b_+^\dag b_+ b_-^\dag b_-\right> \approx \left<b_\pm^\dag b_\pm\right>$, i.e., they appear in pairs.
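To give a sense of scale, the following minimal numerical sketch evaluates the effective length modulation, the effective boundary velocity, and the perturbative output photon flux above. The circuit parameters used are representative assumptions in the spirit of the experiment of Ref.~\cite{wilson:2011}, not values quoted in this paper.
\begin{verbatim}
import numpy as np

# Representative (assumed) circuit parameters
v = 1.2e8            # speed of light in the waveguide [m/s] (assumption)
L_eff0 = 0.4e-3      # static effective length L_eff(0) [m] (assumption)
eps = 0.25           # normalized modulation amplitude (assumption)
f_d = 10e9           # drive frequency omega_d / (2 pi) [Hz] (assumption)

omega_d = 2 * np.pi * f_d
delta_L = eps * L_eff0         # effective length modulation amplitude
v_eff = delta_L * omega_d      # effective boundary velocity
print("v_eff / v =", v_eff / v)

# Perturbative output-mode occupation at symmetric detuning delta_omega
delta_omega = 2 * np.pi * 1e9
omega_p = omega_d / 2 + delta_omega
omega_m = omega_d / 2 - delta_omega
n_out = (delta_L / v) ** 2 * omega_p * omega_m   # <b^dag b> per output mode
print("output photon occupation per mode:", n_out)
\end{verbatim}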
For finite temperatures, where thermal noise is present in the input field, and for not so weak modulation, when, for example, $\delta\!L_{\rm eff}\sqrt{\omega_+\omega_-}/v$ is no longer a small parameter, it is not obvious if or to what extent the above results apply. In these cases there are both classical and nonclassical contributions to the photon flux in the output field, and it becomes necessary to systematically compare the relative importance of such contributions in order to tell if the resulting output field remains nonclassical or not. In the following, we carry out such an analysis using nonclassicality tests and by evaluating the degree of entanglement in the predicted output field.
{\it Nonclassicality tests.---}
The theory of nonclassicality tests has been well developed in quantum optics, and here we briefly review the important results in the notation introduced above for superconducting waveguides. We consider an operator $\hat{f}$ which is defined as a function of the creation and annihilation operators. For the Hermitian operator $\hat{f}^\dagger\hat{f}$ it can then be shown \cite{miranowicz:2010}, using the Glauber-Sudarshan $P$ function formalism, that any classical state of the field satisfies
\begin{equation}
\label{eq:fdf_ineq}
\left<:\hat{f}^\dagger\hat{f}:\right> \geq 0,
\end{equation}
where the condition for classicality that has been used is that the $P$ function must always be non-negative. The $::$ denotes normal ordering.
For the two-mode quadrature-squeezed states that the DCE is known to produce, the natural definition of $\hat{f}$ is
\begin{equation}
\label{eq:f_def}
\hat{f}_\theta = e^{i\theta}\hat{b}_- + e^{-i\theta}\hat{b}_-^\dag +i(e^{i\theta}\hat{b}_+ - e^{-i\theta}\hat{b}_+^\dag),
\end{equation}
where $\theta$ is the angle that defines the principal squeezing axis. With this definition of $\hat{f}_\theta$, a pure two-mode squeezed state is known to violate the inequality (\ref{eq:fdf_ineq}), see, e.g., Ref.~\cite{miranowicz:2010} and references therein. This choice of $\hat{f}_\theta$ is also suitable from an experimental point of view, since $\left<:\hat{f}^\dagger_\theta\hat{f}_\theta:\right>$ can be evaluated from experimentally-accessible quadrature correlations.
We now evaluate the quantum-classical indicator $\left<:\hat{f}^\dagger\hat{f}:\right> = \min\limits_{\theta} \left<:\hat{f}^\dagger_\theta\hat{f}_\theta:\right>$ for the field state produced by the DCE, and discuss the conditions under which this nonclassicality test is violated. For weak driving, using output field Eq.~(\ref{eq:output-field-perturbation-simplified-notation}), and a thermal input field we obtain
\begin{eqnarray}
\left<:f^\dag_\theta f_\theta:\right>
&=&
2(n^{\rm th}_+ + n^{\rm th}_-)\nonumber\\
&-&
4 \cos2\theta \frac{\delta L_{\rm eff}}{v}\sqrt{\omega_+\omega_-} (1 + n^{\rm th}_+ + n^{\rm th}_-),
\end{eqnarray}
where $n^{\rm th}_\pm = \left<a_\pm^\dag a_\pm\right> = (\exp(\hbar\omega_\pm/k_BT)-1)^{-1}$ is the thermal photon flux of the input mode with frequency $\omega_\pm$. In this case, $\left<:f^\dag_\theta f_\theta:\right>$ is minimized by taking $\theta = 0$, and it is negative if $(\delta L_{\rm eff}/v)\sqrt{\omega_+\omega_-} \gtrsim (n^{\rm th}_+ + n^{\rm th}_-)/2$, or, equivalently, $\epsilon\gtrsim 2v/(L^0_{\rm eff}\omega_d)(n^{\rm th}_++n^{\rm th}_-)/2$. This indicates that the field state in the form Eq.~(\ref{eq:output-field-perturbation-simplified-notation}) is distinctly nonclassical for a vacuum input field, and potentially also for low-temperature thermal input fields.
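As a rough numerical illustration of this threshold (a sketch only; the parameter values below are assumed for illustration and are not taken from any particular device), the thermal occupations and the corresponding minimum modulation amplitude can be estimated as follows:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K

def n_th(omega, T):
    # Thermal (Bose-Einstein) photon flux of an input mode at frequency omega.
    return 1.0 / np.expm1(hbar * omega / (kB * T))

# Assumed, illustrative parameters (not taken from any particular experiment):
omega_p = 2 * np.pi * 5.2e9   # rad/s, upper mode omega_+
omega_m = 2 * np.pi * 5.0e9   # rad/s, lower mode omega_-
T       = 50e-3               # K, input-field temperature
L_eff0  = 0.5e-3              # m, unmodulated effective length L_eff(0)
v       = 1.2e8               # m/s, propagation speed in the waveguide
omega_d = omega_p + omega_m

n_p, n_m = n_th(omega_p, T), n_th(omega_m, T)
# Perturbative nonclassicality condition:
#   (delta L_eff / v) * sqrt(w+ w-) > (n+ + n-) / 2,
# with delta L_eff ~ eps * L_eff(0), i.e. eps > v (n+ + n-) / (L_eff(0) omega_d).
eps_min = v * (n_p + n_m) / (L_eff0 * omega_d)
print(n_p, n_m, eps_min)
\end{verbatim}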
\begin{figure}
\caption{
(color online) (a) The quantum-classical indicator $\left<:f^\dag_\theta f_\theta:\right>$ as a function of driving amplitude $\epsilon$ for a range of $\theta$ values in the interval $[0,2\pi]$ (blue), and for $\theta = 0$ (red), which is the optimal $\theta$ in the perturbation regime. Due to the thermal input field, $\left<:f^\dag_\theta f_\theta:\right> > 0$ for small $\epsilon$. However, when $\epsilon$ is sufficiently large, $\left<:f^\dag f:\right> < 0$, which conclusively rules out that the field state is of classical origin. (b) The two-mode squeezing $\sigma_2$ as a function of the dimensionless driving amplitude $\epsilon$ (red), together with the right-hand side of Eq.~(\ref{eq:sigma-2-ineq}).}
\label{fig:dce-fdf-range-and-sigma2-vs-eps}
\end{figure}
To investigate whether the nonclassical characteristics of the DCE radiation remain for realistic input field temperatures and when the driving amplitude is increased beyond the perturbative regime, we also evaluate $\left<:f^\dag_\theta f_\theta:\right>$ by solving the scattering problem numerically. The results of this calculation are presented in Fig.~\ref{fig:dce-fdf-range-and-sigma2-vs-eps}(a), showing that for sufficiently large driving amplitude $\left<:f^\dag f:\right> < 0$ even at typical temperatures for superconducting circuits, and including higher-order scattering processes. We therefore conclude that nonclassical characteristics of the DCE radiation can be sufficiently robust to remain important in realistic experimental situations. Evaluating $\left<:f^\dag f:\right>$ from experimentally measured field quadratures therefore appears to be a viable method to conclusively demonstrate the quantum statistics of the dynamical Casimir radiation.
{\it The nonclassicality test in terms of $\sigma_2$.---}
To further relate to the experimental demonstrations of the DCE, it is instructive to formulate the nonclassicality test in terms of the two-mode squeezing $\sigma_2$, which was measured in Ref.~\cite{wilson:2011}. The two-mode squeezing is defined as $\sigma_2 = (\left<I_-I_+\right> - \left<Q_-Q_+\right>)/\left((\left<I_-^2\right>+\left<I_+^2\right>+\left<Q_-^2\right>+\left<Q_+^2\right>)/2\right)$, where $I_\pm = \left(\hbar\omega_\pm Z_0/8\pi\right)^{1/2}\left(e^{i\phi}b_\pm +e^{-i\phi}b_\pm^\dag\right)$ and $Q_\pm = -i\left(\hbar\omega_\pm Z_0/8\pi\right)^{1/2}\left(e^{i\phi} b_\pm - e^{-i\phi} b_\pm^\dag\right)$ are the voltage quadratures. Using this expression for $\sigma_2$, we can write the inequality $\left<:f^\dag_\theta f_\theta:\right> < 0$ as
\begin{eqnarray}
\label{eq:sigma-2-ineq}
\sigma_2
&>&
\frac{2\sqrt{\omega_+\omega_-}\left(n_+ + n_-\right)}
{\omega_+ \left[2n_+ + 1\right] + \omega_-\left[2n_- + 1\right]},
\end{eqnarray}
where $n_\pm = \left<b_\pm^\dag b_\pm\right>$ is the photon flux (thermal and DCE) for the output mode with frequency $\omega_\pm$, and where we have taken $\theta=\phi+\pi/4$ to relate $\sigma_2$ and $\left<:f^\dag_\theta f_\theta:\right>$.
Equation (\ref{eq:sigma-2-ineq}) suggests that a non-zero two-mode squeezing does not necessarily imply that the field is a strictly nonclassical state [by the criterion of Eq.~(\ref{eq:fdf_ineq}) and the current definition of the operator $\hat{f}$]. However, if the magnitude of the two-mode squeezing exceeds the right-hand side of Eq.~(\ref{eq:sigma-2-ineq}), the field is {\it guaranteed} to be distinctly nonclassical (i.e., squeezed vacuum rather than a squeezed thermal state). Since the expectation values on the right-hand side of Eq.~(\ref{eq:sigma-2-ineq}) can be measured experimentally, this could be a practical formulation for the experimental evaluation of the nonclassicality test.
Figure \ref{fig:dce-fdf-range-and-sigma2-vs-eps}(b) shows the two-mode squeezing together with the boundary between the classical and quantum regimes, as defined by Eq.~(\ref{eq:sigma-2-ineq}). With the parameters used in Fig.~\ref{fig:dce-fdf-range-and-sigma2-vs-eps}, the boundary corresponds to the squeezing $\sigma_2 \approx 0.04$. Experimental measurements \cite{wilson:2011} have demonstrated significantly larger squeezing for the dynamical Casimir radiation, but at the same time the measured photon flux was larger than in the current calculations due to the presence of low-$Q$ resonances in the transmission line. An increased photon flux increases the value of the boundary in Eq.~(\ref{eq:sigma-2-ineq}) and makes the violation of the inequality more demanding. However, by reducing the driving strength to get a lower photon flux a violation of the nonclassicality test Eq.~(\ref{eq:sigma-2-ineq}) should be achievable with an experimental setup like the one in Ref.~\cite{wilson:2011}, although increased measurement time and averaging may be necessary to obtain sufficient sensitivity.
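Since every quantity entering the right-hand side of Eq.~(\ref{eq:sigma-2-ineq}) is experimentally accessible, the classical-quantum boundary can be computed directly from measured photon fluxes. The short sketch below (with assumed, purely illustrative numbers) shows this computation:
\begin{verbatim}
import numpy as np

def sigma2_bound(omega_p, omega_m, n_p, n_m):
    # Right-hand side of the nonclassicality inequality for the two-mode
    # squeezing sigma_2: a measured sigma_2 exceeding this value certifies
    # that the output field is nonclassical.
    num = 2.0 * np.sqrt(omega_p * omega_m) * (n_p + n_m)
    den = omega_p * (2.0 * n_p + 1.0) + omega_m * (2.0 * n_m + 1.0)
    return num / den

# Assumed, illustrative output photon fluxes n_+ and n_- (thermal plus DCE):
print(sigma2_bound(2 * np.pi * 5.2e9, 2 * np.pi * 5.0e9, n_p=0.02, n_m=0.021))
\end{verbatim}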
{\it Entanglement.---}
The two-mode squeezing and the nonclassicality tests discussed above demonstrate that the DCE radiation is nonclassical. The quantum nature of the radiation originates from the entanglement in {\it individual pairs of photons}. To quantify the entanglement between two {\it entire modes} with frequencies adding up to the driving frequency, we evaluate the logarithmic negativity $\mathcal{N}$ \cite{adesso:2007}, which is an entanglement measure for Gaussian states that is frequently used in quantum optics, and recently also in microwave circuits \cite{flurin:2012} and nanomechanical systems \cite{joshi:2012}. The logarithmic negativity is positive for entangled states, and it can be calculated from the covariance matrix $V_{\alpha\beta} = \frac{1}{2}\left<R_\alpha R_\beta+R_\beta R_\alpha\right>$, where $R^{\rm T} = \left(q_-, p_-, q_+, p_+\right)$ is a vector with the quadratures as elements: $q_\pm = (b_\pm + b_\pm^\dag)/\sqrt{2}$ and $p_\pm = -i(b_\pm - b_\pm^\dag)/\sqrt{2}$.
\begin{figure}
\caption{(color online) The logarithmic negativity $\mathcal{N}$ of the two matching output modes as a function of the driving amplitude $\epsilon$; the inset shows the numerically calculated covariance matrix (see text).}
\label{fig:dce-log-neg-vs-eps}
\end{figure}
The covariance matrix can be evaluated both analytically and numerically, and also constructed from experimental quadrature measurements. The numerically calculated covariance matrix is shown in the inset in Fig.~\ref{fig:dce-log-neg-vs-eps} for typical parameters. Given the covariance matrix for the two selected modes, it is straightforward to evaluate the logarithmic negativity, defined as $\mathcal{N} = \max[0, -\log(2\nu_-)],$ where $\nu_- = (\sigma/2-(\sigma^2 - 4 \det V)^{1/2}/2)^{1/2},$ and $\sigma = \det A + \det B -2 \det C,$ where the $A, B$, and $C$ are $2\times2$ submatrices of the covariance matrix $V = \left(A, C; C^T, B\right)$.
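As a minimal sketch of this evaluation, the following fragment implements the formulas above for a $4\times4$ covariance matrix in the $(q_-,p_-,q_+,p_+)$ ordering; the vacuum covariance matrix used as a sanity check (the identity matrix divided by two in the present convention) describes a separable state, so the logarithmic negativity should vanish:
\begin{verbatim}
import numpy as np

def log_negativity(V):
    # Logarithmic negativity of a two-mode Gaussian state, computed from its
    # 4x4 covariance matrix V = [[A, C], [C^T, B]] via the formulas above.
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    sigma = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    nu_minus = np.sqrt(sigma / 2.0
                       - np.sqrt(sigma ** 2 - 4.0 * np.linalg.det(V)) / 2.0)
    return max(0.0, -np.log(2.0 * nu_minus))

# Sanity check: the two-mode vacuum (V = identity / 2) is separable.
print(log_negativity(np.eye(4) / 2.0))   # -> 0.0
\end{verbatim}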
The logarithmic negativity for the DCE (see also Ref.~\cite{guerreiro:2012}) is shown in Fig.~\ref{fig:dce-log-neg-vs-eps}. At zero temperature and small drive amplitudes, it is proportional to the driving amplitude $\mathcal{N} = \epsilon L^0_{\rm eff}\omega_d/v$. For finite temperatures and small detuning $\delta\omega$, the onset of nonzero logarithmic negativity is at $\epsilon_0 \approx 2v/(L^0_{\rm eff}\omega_d)(n^{\rm th}_+n^{\rm th}_-)^{1/2}$, after which it increases with the driving amplitude. For sufficiently large driving amplitude, $\epsilon \gtrsim 0.06$, the quantum correlations overcome the thermal noise and the two matching output modes are entangled. Comparing Figs.~\ref{fig:dce-fdf-range-and-sigma2-vs-eps} and \ref{fig:dce-log-neg-vs-eps} implies that the logarithmic negativity is a stronger indicator of the nonclassicality of the field state than the inequality (\ref{eq:fdf_ineq}) with our definition of $\hat{f}$. This is also shown in Fig.~\ref{fig:combined-qci-region}, which visualizes the nonclassical regions as a function of temperature and detuning, as well as the sensitivity to uncorrelated classical quadrature noise introduced in the detector (the one-$\sigma$ contour line). However, when taking this sensitivity into consideration, the two measures appear to be of similar practical usefulness.
\begin{figure}
\caption{(color online) The region of nonclassical radiation (blue), visualized using $-\left<:f^\dag f:\right>$ (left) and the logarithmic negativity $\mathcal{N}$ (right), as functions of temperature and detuning (see text).}
\label{fig:combined-qci-region}
\end{figure}
{\it Conclusion.---}
We have theoretically investigated quantum correlations in the radiation produced by the DCE in a superconducting waveguide by evaluating nonclassicality tests and the logarithmic negativity. These measures indicate that the devices used in Ref.~\cite{wilson:2011} should have access to regimes where the produced radiation is strictly nonclassical. We have formulated practical inequalities with experimentally obtainable observables that could be used to directly verify the quantum nature of the measured radiation in future DCE experiments. We also note that recently two-mode squeezed states have been generated in microwave circuits using other mechanisms, for example parametric amplification using the nonlinear response \cite{castellanos:2008,eichler:2011} or time-varying index of refraction \cite{latheenmaki:2011} of SQUID arrays and JJs \cite{bergeal:2010,flurin:2012}. The nonclassicality tests discussed here could also be applied to analyze the radiation produced in these experiments. We believe that a demonstration of a nonclassicality violation in superconducting circuits, or other promising systems \cite{braggio:2005,naylor:2009,faccio:2011,carusotto:2012}, could pave the way to the experimental exploration of the continuous production of entangled microwave photons by the DCE, and possible applications thereof in, for example, quantum information processing \cite{you:2011,you:2005,buluta:2011}. As such it could become a novel practical application of microwave quantum vacuum fluctuations.
JRJ is supported by the JSPS Foreign Postdoctoral Fellowship No. P11501.
GJ, CMW and PD acknowledge financial support from the Swedish Research Council, the European Research Council, as well as the European Commission through the FET Open project PROMISCE.
FN is partially supported by the ARO, NSF No. 0726909, JSPS-RFBR No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS-FIRST program.
\end{document}
|
\begin{document}
\title
[fractional Schr\"{o}dinger equations with a general nonlinearity]
{Ground state solution of fractional Schr\"{o}dinger equations with a general nonlinearity*}\footnotetext{*This work is supported by Natural Science Foundation of China (Grant No. 11601530, 11371159).}
\maketitle
\begin{center}
\author{Yi He}
\footnote{Corresponding Author: Yi He. Email addresses: [email protected] (Y. He).}
\end{center}
\begin{center}
\address{School of Mathematics and Statistics, South-Central University For Nationalities, Wuhan, 430074, P. R. China}
\end{center}
\maketitle
\begin{abstract}
In this paper, we study the following fractional Schr\"{o}dinger equation:
\[
\left\{ \begin{gathered}
{( - \Delta )^s}u + mu = f(u){\text{ in }}{\mathbb{R}^N},
\\
u \in {H^s}({\mathbb{R}^N}),{\text{ }}u > 0{\text{ on }}{\mathbb{R}^N},
\\
\end{gathered} \right.
\]
where $m>0$, $N>2s$, and ${( - \Delta )^s}$, $s \in (0,1)$, is the fractional Laplacian. Using minimax arguments, we obtain a positive ground state solution under general conditions on $f$ which we believe to be almost optimal.
{\bf Key words }: ground state solution; fractional Schr\"{o}dinger equation; critical growth.
{\bf 2010 Mathematics Subject Classification }: Primary 35J20, 35J60, 35J92
\end{abstract}
\maketitle
\section{Introduction and Main Result}
\setcounter{equation}{0}
We consider the following fractional Schr\"{o}dinger equation:
\begin{equation}\lambdabel{1.1}
\left\{ \begin{gathered}
{( - \Delta )^s}u + mu = f(u){\text{ in }}{\mathbb{R}^N},
\\
u \in {H^s}({\mathbb{R}^N}),{\text{ }}u > 0{\text{ on }}{\mathbb{R}^N},
\\
\end{gathered} \right.
\end{equation}
where $m>0$, $N>2s$, and ${( - \Delta )^s}$, $s \in (0,1)$, is the fractional Laplacian. The nonlinearity $f:\mathbb{R} \to \mathbb{R}$ is a continuous function. Since we are looking for positive solutions, we assume that $f(t)=0$ for $t<0$. Furthermore, we need the following conditions:\\
$(f_1)$ $\mathop {\lim }\limits_{t \to {0^ + }} f(t)/t = 0$;\\
$(f_2)$ $\mathop {\lim }\limits_{t \to + \infty } f(t)/{t^{2_s^ * - 1}} = 1$ where $2_s^ * = 2N/(N - 2s)$;\\
$(f_3)$ $\exists \lambda > 0$ and $2 < q < {2_s^ *}$ such that $f(t) \ge \lambda {t^{q - 1}} + {t^{2_s^ * - 1}}$ for $t \ge 0$.\\
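For instance, the model nonlinearity $f(t) = \lambda t^{q - 1} + t^{2_s^ * - 1}$ for $t \ge 0$ (extended by $f(t) = 0$ for $t < 0$) satisfies $(f_1)$-$(f_3)$.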
Note that, for the case $s=1$, $(f_1)$-$(f_3)$ were first introduced by J. Zhang, Z. Chen and W. Zou \cite{zcz}. These hypotheses can be regarded as an extension of the celebrated Berestycki-Lions type nonlinearity (see \cite{bl1,bl2}) to fractional Schr\"{o}dinger equations with critical growth.
Equation \eqref{1.1} arises as a model of many physical phenomena, such as phase transitions and conservation laws, and especially in fractional quantum mechanics \cite{fqt}. Equation \eqref{1.1} was introduced by N. Laskin \cite{l2,l3} as an extension of the classical nonlinear Schr\"{o}dinger equation ($s=1$) in which the Brownian motion of the quantum paths is replaced by a L\'{e}vy flight. We refer to \cite{dpv} for more physical background.
In recent years, the study of fractional Schr\"{o}dinger equations has attracted much attention from many mathematicians. In \cite{crs,css,s1}, L. Caffarelli, L. Silvestre \emph{et al.} investigated free boundary problems of fractional Schr\"{o}dinger equations and obtained some regularity estimates. In \cite{cs1,cs2}, X. Cabr\'{e} and Y. Sire studied the existence, uniqueness, symmetry, regularity, maximum principle and qualitative properties of solutions to the fractional Schr\"{o}dinger equations in the whole space. For more results, we refer to \cite{ap,bcps1,bcps,cw,dpv,fqt,jlx,rs}.
Our main result is as follows:
\begin{theorem}\lambdabel{1.1.}
Assume that the nonlinearity $f$ satisfies $(f_1)$-$(f_3)$. If $N \ge 4s$, $2<q<{2_s^ * }$ or $2s< N < 4s$, $4s/(N - 2s) < q < 2_s^ * $, then for every $\lambda > 0$, \eqref{1.1} possesses a positive ground state solution. Moreover, the same conclusion holds provided that $2s< N < 4s$, $2 < q \le 4s/(N - 2s)$ and $\lambda > 0$ sufficiently large.
\end{theorem}
We note that, to the best of our knowledge, there is no result on the existence of positive ground state solutions for fractional Schr\"{o}dinger equation under $(f_1)$-$(f_3)$.
The proof of Theorem~\ref{1.1.} is based on the variational method. The main difficulties lie in two aspects: (i) the facts that the nonlinearity $f(u)$ does not satisfy the $({\text{AR}})$ condition and that the function $f(s)/s$ is not increasing for $s > 0$ prevent us from obtaining a bounded Palais-Smale sequence ((PS) sequence in short) and from using the Nehari manifold, respectively; (ii) the unboundedness of the domain $\mathbb{R}^N$ and the critical growth of the nonlinearity $f(u)$ lead to a lack of compactness.
To conclude this section, we sketch our proof.
To treat the nonlocal problem \eqref{1.1}, we use the L. Caffarelli and L. Silvestre extension method \cite{cs} to study a corresponding extension problem
\begin{equation}\lambdabel{1.5}
\left\{ \begin{gathered}
- {\text{div}}({y^{1 - 2s}}\nabla w) = 0{\text{ in }}\mathbb{R}_ + ^{N + 1},
\\
- {k_s}\mathop {\lim }\limits_{y \to {0^ + }} {y^{1 - 2s}}\frac{{\partial w}}
{{\partial y}}(x,y) = - mw + f(w){\text{ on }}{\mathbb{R}^N} \times \{ 0\} .
\\
\end{gathered} \right.
\end{equation}
with the corresponding functional
\[
{I_m}(w) = \frac{{{k_s}}}
{2}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy + \frac{m}
{2}\int_{{\mathbb{R}^N}} {{w^2}(x,0)} dx - \int_{{\mathbb{R}^N}} {F(w(x,0))} dx,{\text{ }}w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1}).
\]
where $F(s): = \int_0^s {f(t)} dt$ and ${X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ is defined as the completion of $C_0^\infty (\overline {\mathbb{R}_ + ^{N + 1}} )$ under the norm
\[
{\| w \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}} = {\Bigl( {\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy + \int_{{\mathbb{R}^N}} {{w^2}(x,0)} dx} \Bigr)^{1/2}}.
\]
Motivated by J. Hirata, N. Ikoma and K. Tanaka \cite{hit}, by applying the General Minimax principle (Theorem~2.8 of \cite{w1}) to the composite functional
\[
{I_m} \circ \Phi (\theta ,w): = {I_m}(w({e^{ - \theta }}x,{e^{ - \theta }}y)),{\text{ }}(\theta ,w) \in \mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1}),
\]
we construct a bounded ${{\text{(PS)}}_{{c_m}}}$ sequence $\{ {w_n}\} _{n = 1}^\infty \subset {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ with an extra property ${P_m}({w_n}) \to 0$ as $n \to \infty $ where $c_m$ is the mountain pass level of $I_m$ and ${P_m}(w)=0$ is the Pohozaev's identity of \eqref{1.5} (Proposition~\ref{3.2.} below). Proceeding by standard arguments, the existence of ground state solutions for \eqref{1.5} follows.
This paper is organized as follows: in Section 2, we give some preliminary results; in Section 3, we prove the main result, Theorem~\ref{1.1.}.\\
\section{Preliminaries}
\setcounter{equation}{0}
In this section, we collect some preliminary results. Recall that for $s \in (0,1)$, ${D^s}({\mathbb{R}^N})$ is defined by the completion of $C_0^\infty ({\mathbb{R}^N})$ with respect to the Gagliardo norm
\[
{\| u \|_{{D^s}({\mathbb{R}^N})}} = {\left( {\int_{{\mathbb{R}^{2N}}} {\frac{{|u(x) - u(y){|^2}}}
{{|x - y{|^{N + 2s}}}}} dxdy} \right)^{1/2}}
\]
and the embedding ${D^s}({\mathbb{R}^N}) \hookrightarrow {L^{2_s^ * }}({\mathbb{R}^N})$ is continuous, that is
\[
{\| u \|_{{L^{2_s^ * }}({\mathbb{R}^N})}} \le C(N,s){\| u \|_{{D^s}({\mathbb{R}^N})}}
\]
by Theorem~1 of \cite{ms}. The fractional Sobolev space ${H^s}({\mathbb{R}^N})$ is defined by
\[
{H^s}({\mathbb{R}^N}) = \left\{ {u \in {L^2}({\mathbb{R}^N}):\int_{{\mathbb{R}^{2N}}} {\frac{{|u(x) - u(y){|^2}}}
{{|x - y{|^{N + 2s}}}}dxdy}  < \infty } \right\}
\]
endowed with the norm
\[
{\| u \|_{{H^s}({\mathbb{R}^N})}} = {\| u \|_{{D^s}({\mathbb{R}^N})}} + {\| u \|_{{L^2}({\mathbb{R}^N})}}.
\]
For $N>2s$, we see from Lemma~2.1 of \cite{ap} that
\begin{equation}\lambdabel{c4}
{H^s}({\mathbb{R}^N}){\text{ is continuously embedded in }}{L^p}({\mathbb{R}^N}){\text{ for }}p \in [2,2_s^*].
\end{equation}
An important feature of the operator ${( - \Delta )^s}$ $(0<s<1)$ is its nonlocal character. A common approach to deal with this difficulty was proposed by L. Caffarelli and L. Silvestre \cite{cs}, which allows one to transform \eqref{1.1} into a local problem via the Dirichlet-Neumann map in the domain $\mathbb{R}_ + ^{N + 1}: = \{ (x,y) \in {\mathbb{R}^{N + 1}}:y > 0\} $. For $u \in {D^s}({\mathbb{R}^N})$, the solution $w \in {X^s}(\mathbb{R}_ + ^{N + 1})$ of
\[\left\{ \begin{gathered}
- {\text{div}}({y^{1 - 2s}}\nabla w) = 0{\text{ in }}\mathbb{R}_ + ^{N + 1},
\\
w = u{\text{ on }}{\mathbb{R}^N} \times \{ 0\}
\\
\end{gathered} \right.\]
is called $s$-harmonic extension of $u$, denoted by $w = {E_s}(u)$. The $s$-harmonic extension and the fractional Laplacian have explicit expressions in terms of the Poisson and the Riesz kernels, respectively
\begin{equation}\lambdabel{2.1}
w(x,y) = P_y^s * u(x) = \int_{{\mathbb{R}^N}} {P_y^s(x - \xi)u(\xi )} d\xi ,
\end{equation}
where
\[
P_y^s(x): = c(N,s)\frac{{{y^{2s}}}}
{{{{(|x{|^2} + {y^2})}^{(N + 2s)/2}}}}
\]
with a constant $c(N,s)$ such that $\int_{{\mathbb{R}^N}} {P_1^s(x)} dx = 1$ (see \cite{jlx}).
Here, the space ${X^s}(\mathbb{R}_ + ^{N + 1})$ is defined as the completion of $C_0^\infty (\overline {\mathbb{R}_ + ^{N + 1}} )$ under the norm
\[
{\| w \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}}: = {\Bigl( {\int_{\mathbb{R}_ + ^{N + 1}} {{k_s}{y^{1 - 2s}}|\nabla w{|^2}} dxdy} \Bigr)^{1/2}}.
\]
From \cite{bcps}, the map ${E_s}( \cdot )$ is an isometry between ${D^s}({\mathbb{R}^N})$ and ${X^s}(\mathbb{R}_ + ^{N + 1})$, i.e. for $w = {E_s}(u)$,
\begin{equation}\lambdabel{2.3}
{\| u \|_{{D^s}({\mathbb{R}^N})}} = {\| w \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}}.
\end{equation}
On the other hand, for a function $w \in {X^s}(\mathbb{R}_ + ^{N + 1})$, we shall denote its trace on ${\mathbb{R}^N} \times \{ 0\} $ as $u(x) : = {\text{Tr}}(w) = w(x,0)$. This trace operator is also well defined and it satisfies
\begin{equation}\lambdabel{2.4}
{\| u \|_{{D^s}({\mathbb{R}^N})}} \le {\| w \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}}.
\end{equation}
\begin{lemma}\lambdabel{2.2.}
(Theorem~2.1 of \cite{bcps}) For every $w \in {X^s}(\mathbb{R}_ + ^{N + 1})$, it holds that
\[
S(s,N){\Bigl( {\int_{{\mathbb{R}^N}} {|u{|^{2_s^ * }}} dx} \Bigr)^{2/2_s^ * }} \le \int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy,
\]
where $u = {\text{Tr}}(w)$. The best constant takes the exact value
\[
S(s,N) = \frac{{2{\pi ^s}\Gamma (1 - s)\Gamma ((N + 2s)/2)\Gamma {{(N/2)}^{2s/N}}}}
{{\Gamma (s)\Gamma ((N - 2s)/2)\Gamma {{(N)}^{2s/N}}}}
\]
and it is achieved when $u_{\delta}$ takes the form
\[
u_{\delta}(x) = {\delta ^{(N - 2s)/2}}{(|x {|^2} + {\delta ^2})^{ - (N - 2s)/2}}
\]
for some $\delta > 0$ and $w_{\delta} = {E_s}(u_{\delta})$.
\end{lemma}
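For concreteness, the numerical value of $S(s,N)$ can be evaluated directly from the expression above. The following short Python sketch (illustrative only; it plays no role in the arguments below) simply transcribes the formula:
\begin{verbatim}
from math import gamma, pi

def S(s, N):
    # Best constant of the trace inequality above, transcribed from its statement.
    num = 2 * pi ** s * gamma(1 - s) * gamma((N + 2 * s) / 2) \
          * gamma(N / 2) ** (2 * s / N)
    den = gamma(s) * gamma((N - 2 * s) / 2) * gamma(N) ** (2 * s / N)
    return num / den

print(S(0.5, 3))   # example: s = 1/2, N = 3
\end{verbatim}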
\section{Proof of the main results}
In view of \cite{cs}, \eqref{1.1} can be transformed into
\begin{equation}\lambdabel{3.1}
\left\{ \begin{gathered}
- {\text{div}}({y^{1 - 2s}}\nabla w) = 0{\text{ in }}\mathbb{R}_ + ^{N + 1},
\\
- {k_s}\mathop {\lim }\limits_{y \to {0^ + }} {y^{1 - 2s}}\frac{{\partial w}}
{{\partial y}}(x,y) = - mw + f(w){\text{ on }}{\mathbb{R}^N} \times \{ 0\}
\\
\end{gathered} \right.
\end{equation}
with the corresponding functional
\[
{I_m}(w) = \frac{{{k_s}}}
{2}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy + \frac{m}
{2}\int_{{\mathbb{R}^N}} {{w^2}(x,0)} dx - \int_{{\mathbb{R}^N}} {F(w(x,0))} dx,{\text{ }}w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1}).
\]
In view of \cite{cw,rs}, if $w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ is a weak solution to \eqref{3.1}, the following Pohozaev's identity holds:
\begin{equation}\lambdabel{3.2}
{P_m}(w) = \frac{{{k_s}(N - 2s)}}
{2}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy + \frac{{mN}}
{2}\int_{{\mathbb{R}^N}} {{w^2}(x,0)} dx - N\int_{{\mathbb{R}^N}} {F(w(x,0))} dx = 0.
\end{equation}
\begin{lemma}\lambdabel{3.1.}
${I_m}$ possesses the Mountain-Pass geometry (see \cite{ar}), i.e.\\
$(i)$ There exist ${\rho _0},{\alpha _0} > 0$ such that ${I_m}(w) \ge {\alpha _0}$ for all $w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ with ${\| w \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}} = {\rho _0}$.\\
$(ii)$ $\exists {w _0} \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ such that ${I_m}({w _0}) < 0$.
\end{lemma}
\begin{proof}
$(i)$ By $(f_1)$ and $(f_2)$, $\forall \delta > 0$, $\exists {C_\delta } > 0$ such that
\begin{equation}\lambdabel{3.3}
f(w) \le \delta |w| + {C_\delta }|w{|^{2_s^ * - 1}}{\text{ and }}F(w) \le \delta |w{|^2} + {C_\delta }|w{|^{2_s^ * }}.
\end{equation}
Choosing $\delta = m/4$ in \eqref{3.3}, we see from Lemma~\ref{2.2.} that
\[
{I_m}(w) \ge \frac{1}
{4}\| w \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}^2 - C\| w \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}^{2_s^ * },
\]
then taking ${\rho _0},{\alpha _0} > 0$ small, $(i)$ holds.
$(ii)$ For $R>0$, $T>0$, we define
\[{w_{R,T}}(x,y) = \left\{ \begin{gathered}
T,{\text{ if }}(x,y) \in {B_R^ + (0)} ,
\\
T(R + 1 - {{\text{(}}|x{|^2} + {y^2})^{1/2}}),{\text{ if }}(x,y) \in B_{R + 1}^ + (0)\backslash B_R^ + (0),
\\
0,{\text{ if }} (x,y) \in \mathbb{R}_ + ^{N + 1}\backslash {B_{R + 1}^ + (0)} ,
\\
\end{gathered} \right.\]
then ${w_{R,T}} \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$. By $(f_3)$ and the polar coordinate transformation, we have
\[\begin{gathered}
{\text{ }}{I_m}({w_{R,T}}(x/\theta ,y/\theta ))
\\
= \frac{{{k_s}}}
{2}{\theta ^{N - 2s}}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla {w_{R,T}}{|^2}} dxdy + {\theta ^N}\Bigl[ {\frac{m}
{2}\int_{{\mathbb{R}^N}} {w_{R,T}^2(x,0)} dx - \int_{{\mathbb{R}^N}} {F({w_{R,T}}(x,0))} dx} \Bigr]
\\
\le \frac{{{k_s}}}
{2}{\theta ^{N - 2s}}{T^2}\int_{B_{R + 1}^ + (0)\backslash B_R^ + (0)} {{y^{1 - 2s}}} dxdy + {\theta ^N}\Bigl[ {\frac{m}
{2}\int_{\Gamma _{R + 1}^0(0)} {w_{R,T}^2(x,0)} dx - \frac{1}
{{2_s^ * }}\int_{\Gamma _R^0(0)} {w_{R,T}^{2_s^ * }(x,0)} dx} \Bigr]
\\
\le C{\theta ^{N - 2s}}{T^2}\int_R^{R + 1} {{r^{N + 1 - 2s}}} dr + {\theta ^N}\Bigl[ {C\Bigl( {\frac{m}
{2}{T^2} - \frac{1}
{{2_s^ * }}{T^{2_s^ * }}} \Bigr){R^N} + C{T^2}({{(R + 1)}^N} - {R^N})} \Bigr]
\\
\le C{T^2}{R^{N + 1 - 2s}}{\theta ^{N - 2s}} + \Bigl( {C\Bigl( {\frac{m}
{2}{T^2} - \frac{1}
{{2_s^ * }}{T^{2_s^ * }}} \Bigr){R^N} + C{T^2}{R^{N - 1}}} \Bigr){\theta ^N}.
\\
\end{gathered} \]
Choosing a large ${T_0} > 0$ such that $\frac{m}
{2}T_0^2 - \frac{1}
{{2_s^ * }}T_0^{2_s^ * } < 0$, then we can choose a large ${R_0} > 0$ such that $C\Bigl( {\frac{m}
{2}T_0^2 - \frac{1}
{{2_s^ * }}T_0^{2_s^ * }} \Bigr)R_0^N + CT_0^2R_0^{N - 1} < 0$; finally, we select a large $\bar \theta > 0$ to ensure that ${I_m}({w_{{R_0},{T_0}}}(x/\bar \theta ,y/\bar \theta )) < 0$, and ${w_{{R_0},{T_0}}}(x/\bar \theta ,y/\bar \theta )$ is the desired $w_0$.
\end{proof}
Hence we define the Mountain-Pass level of ${I_m}$:
\begin{equation}\lambdabel{3.4}
{c_m}: = \mathop {\inf }\limits_{\gamma \in {\Gamma _m}} \mathop {\sup }\limits_{t \in [0,1]} {I_m}(\gamma (t)),
\end{equation}
where the set of paths is defined as
\begin{equation}\lambdabel{3.5}
{\Gamma_m}: = \left\{ {\gamma \in C([0,1],{X^{1,s}}(\mathbb{R}_ + ^{N + 1})):\gamma (0) = 0{\text{ and }}{I_m}(\gamma (1)) < 0} \right\}.
\end{equation}
By Lemma~\ref{3.1.}(i), we see that ${c_m} > 0$. Moreover, we denote
\[
{b_m}: = \inf \{ {I_m}(w):w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})\backslash \{ 0\} {\text{ is a nontrivial solution of \eqref{3.1}}}\} .
\]
Next, we will construct a (PS) sequence $\{ {w_n}\} _{n = 1}^\infty $ for $I_m$ at the level $c_m$ that satisfies ${P_m}({w_n}) \to 0$ as $n \to \infty $, i.e.
\begin{proposition}\lambdabel{3.2.}
There exists a sequence $\{ {w_n}\} _{n = 1}^\infty $ in ${X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ such that, as $n \to \infty $,
\begin{equation}\lambdabel{3.6}
{I_m}({w_n}) \to {c_m},{\text{ }}{I'_m}({w_n}) \to 0,{\text{ }}{P_m}({w_n}) \to 0.
\end{equation}
\end{proposition}
\begin{proof}
Define the map $\Phi :\mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1}) \to {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ for $\theta \in \mathbb{R}$, $w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ and $(x,y) \in \mathbb{R}_ + ^{N + 1}$ by $\Phi (\theta ,w) = w({e^{ - \theta }}x,{e^{ - \theta }}y)$. For every $\theta \in \mathbb{R}$, $w \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$, the functional ${I_m} \circ \Phi $ is computed as
\[\begin{gathered}
{I_m} \circ \Phi (\theta ,w) = \frac{{{k_s}}}
{2}{e^{(N - 2s)\theta }}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla w{|^2}} dxdy + \frac{m}
{2}{e^{N\theta }}\int_{{\mathbb{R}^N}} {{w^2}(x,0)} dx
\\
{\text{ }} - {e^{N\theta }}\int_{{\mathbb{R}^N}} {F(w(x,0))} dx.
\\
\end{gathered} \]
By Lemma~\ref{3.1.}, $({I_m} \circ \Phi) (\theta ,w) > 0$ for all $(\theta ,w)$ with $|\theta |$, ${\| w \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}}$ small and $({I_m} \circ \Phi )(0,{w_0}) < 0$, i.e. ${I_m} \circ \Phi $ possesses the Mountain-Pass geometry in $\mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$. The Mountain-Pass level of ${I_m} \circ \Phi $ is defined by
\begin{equation}\lambdabel{3.7}
{{\tilde c}_m}: = \mathop {\inf }\limits_{\tilde \gamma \in {{\tilde \Gamma }_m}} \mathop {\sup }\limits_{t \in [0,1]} ({I_m} \circ \Phi )(\tilde \gamma (t)),
\end{equation}
where the set of paths is
\begin{equation}\lambdabel{3.8}
{{\tilde \Gamma }_m}: = \{ {\tilde \gamma \in C([0,1],\mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1}) ):\tilde \gamma (0) = (0,0){\text{ and }}({I_m} \circ \Phi )(\tilde \gamma (1)) < 0} \}.
\end{equation}
As ${\Gamma _m} = \{ {\Phi \circ \tilde \gamma :\tilde \gamma \in {{\tilde \Gamma }_m}} \}$, the Mountain-Pass levels of ${I_m}$ and ${I_m} \circ \Phi $ coincide, i.e. ${c_m} = {{\tilde c}_m}$.
By the General Minimax principle (Theorem~2.8 of \cite{w1}), there exists a sequence $\{ ({\theta _n},{v_n})\} _{n = 1}^\infty $ in $\mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ such that as $n \to \infty $,
\begin{equation}\lambdabel{3.9}
({I_m} \circ \Phi )({\theta _n},{v_n}) \to {c_m},
\end{equation}
\begin{equation}\lambdabel{3.10}
({I_m} \circ \Phi )'({\theta _n},{v_n}) \to 0{\text{ in (}}\mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1}){)^{ - 1}},
\end{equation}
\begin{equation}\lambdabel{3.11}
{\theta _n} \to 0.
\end{equation}
Indeed, setting $\varepsilon = {\varepsilon _n}: = 1/{n^2}$ and $\delta = {\delta _n}: = 1/n$ in Theorem~2.8 of \cite{w1}, \eqref{3.9} and \eqref{3.10} are direct conclusions of $(a)$ and $(c)$ in Theorem~2.8 of \cite{w1}. By \eqref{3.4} and \eqref{3.5}, for $\varepsilon = {\varepsilon _n}: = 1/{n^2}$, there exists ${\gamma _n} \in {\Gamma_m}$ such that
$\mathop {\sup }\limits_{t \in [0,1]} {I_m}({\gamma _n}(t)) \le {c_m} + 1/{n^2}$. Set ${{\tilde \gamma }_n}(t) = (0,{\gamma _n}(t))$, then
\[
\mathop {\sup }\limits_{t \in [0,1]} ({I_m} \circ \Phi) ({{\tilde \gamma }_n}(t)) = \mathop {\sup }\limits_{t \in [0,1]} {I_m}({\gamma _n}(t)) \le {c_m} + 1/{n^2}.
\]
From $(b)$ in Theorem~2.8 of \cite{w1}, $\exists ({\theta _n},{v_n}) \in \mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$ such that ${\text{dist}}(({\theta _n},{v_n}),(0,{\gamma _n}(t))) \le 2/n$, then \eqref{3.11} holds.
For every $(h,w) \in \mathbb{R} \times {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$,
\begin{equation}\lambdabel{3.12}
\langle {({I_m} \circ \Phi )'({\theta _n},{v_n}),(h,w)} \rangle = \langle {{I'_m}(\Phi ({\theta _n},{v_n})),\Phi ({\theta _n},w)} \rangle + {P_m}(\Phi ({\theta _n},{v_n}))h.
\end{equation}
Taking $h=1$, $w=0$ in \eqref{3.12}, we have
\begin{equation}\lambdabel{3.13}
{P_m}(\Phi ({\theta _n},{v_n})) \to 0 \quad (n \to \infty).
\end{equation}
For every $v \in {X^{1,s}}(\mathbb{R}_ + ^{N + 1})$, set $w(x,y) = v({e^{{\theta _n}}}x,{e^{{\theta _n}}}y)$, $h=0$ in \eqref{3.12}, by \eqref{3.11}, we get
\begin{equation}\lambdabel{3.14}
\langle {{I'_m}(\Phi ({\theta _n},{v_n})),v} \rangle = o(1){\| {v({e^{{\theta _n}}}x,{e^{{\theta _n}}}y)} \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}} = o(1){\| v \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}}.
\end{equation}
Denoting ${w_n}: = \Phi ({\theta _n},{v_n})$ in \eqref{3.9}, \eqref{3.13} and \eqref{3.14}, we get \eqref{3.6}.
\end{proof}
\begin{lemma}\lambdabel{3.3.}
Every sequence $\{ {w_n}\} _{n = 1}^\infty $ satisfying \eqref{3.6} is bounded in ${X^{1,s}}(\mathbb{R}_ + ^{N + 1})$.
\end{lemma}
\begin{proof}
By \eqref{3.6},
\begin{equation}\lambdabel{3.15}
{c_m} + o(1) = {I_m}({w_n}) - \frac{1}
{N}{P_m}({w_n}) = \frac{s}
{N}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla {w_n}{|^2}} dxdy,
\end{equation}
we get the upper bound of ${\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}}$, then by Lemma~\ref{2.2.}, we see that $\{ {w_n}(x,0)\} $ is bounded in ${L^{2_s^ * }}({\mathbb{R}^N})$. From \eqref{3.6} and \eqref{3.3}, we see that
\begin{equation}\lambdabel{3.16}
\begin{gathered}
{\text{ }}\frac{{{k_s}(N - 2s)}}
{2}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla {w_n}{|^2}} dxdy + \frac{{mN}}
{2}\int_{{\mathbb{R}^N}} {w_n^2(x,0)} dx
\\
= N\int_{{\mathbb{R}^N}} {F({w_n}(x,0))} dx + o(1) \le \frac{{mN}}
{4}\int_{{\mathbb{R}^N}} {w_n^2(x,0)} dx + C\int_{{\mathbb{R}^N}} {w_n^{2_s^ * }(x,0)} dx + o(1),
\\
\end{gathered}
\end{equation}
hence $\{ {w_n}\} $ is bounded in ${{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}$.
\end{proof}
For the Mountain-Pass level $c_m$, we have the following estimate:
\begin{lemma}\lambdabel{3.4.}
If $N \ge 4s$, $2 < q < 2_s^ * $ or $2s< N < 4s$, $4s/(N - 2s) < q < 2_s^ * $, then for all $\lambda > 0$, ${c_m} < \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}}$. Moreover, if $2s< N < 4s$ and $2 < q \le 4s/(N - 2s)$, then for $\lambda > 0$ sufficiently large, the same conclusion holds.
\end{lemma}
\begin{proof}
Let ${\phi _0} \in {C^\infty }({\mathbb{R}_ + })$ satisfy
\[
{\phi _0}(t) = 1{\text{ if }}0 \le t \le 1,{\text{ }}{\phi _0}(t) = 0{\text{ if }}t \ge 2,{\text{ }}0 \le {\phi _0}(t) \le 1{\text{ and }}|{\phi '_0}(t)| \le C
\]
and denote ${w_\delta }$ the $s$-harmonic extension of ${u_\delta }$ in Lemma~\ref{2.2.}. Denote
$${v_\delta }(x,y) = \frac{{{\phi _0}({{(|x{|^2} + {y^2})}^{1/2}}){w_\delta }(x,y)}}
{{{{\| {{\phi _0}(|x|){u_\delta }(x)} \|}_{{L^{2_ s ^*}}({\mathbb{R}^N})}}}},$$
by \cite{bcps1,dms}, we see that
\begin{equation}\lambdabel{3.17}
\| {{v_\delta }} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2 ={k_s} {S(s,N)} + O({\delta ^{N - 2s}}),
\end{equation}
and for any $p \in [2,2_s^ * )$,
\begin{equation}\lambdabel{3.18}
\left\| {{v_\delta }(x,0)} \right\|_{{L^p}({\mathbb{R}^N})}^p = \left\{ \begin{gathered}
O({\delta ^{(2N - (N - 2s)p)/2}}),{\text{ if }}p > N/(N - 2s),
\\
O({\delta ^{N/2}}|\log \delta |),{\text{ if }}p = N/(N - 2s),
\\
O({\delta ^{(N - 2s)p/2}}),{\text{ if }}p < N/(N - 2s).
\\
\end{gathered} \right.
\end{equation}
By $(f_3)$,
\[\begin{gathered}
{I_m}({v_\delta }(x/t)) \le {g_\delta }(t)
\\
: = \frac{{{k_s}}}
{2}{t^{N - 2s}}\int_{\mathbb{R}_ + ^{N + 1}} {{y^{1 - 2s}}|\nabla {v_\delta }{|^2}} dxdy - \frac{1}
{{2_s^ * }}{t^N} + \frac{m}
{2}{t^N}\int_{{\mathbb{R}^N}} {v_\delta ^2(x,0)} dx - \frac{\lambda }
{q}{t^N}\int_{{\mathbb{R}^N}} {v_\delta ^q(x,0)} dx
\\
: = {h_\delta }(t) + \frac{m}
{2}{t^N}\int_{{\mathbb{R}^N}} {v_\delta ^2(x,0)} dx - \frac{\lambda }
{q}{t^N}\int_{{\mathbb{R}^N}} {v_\delta ^q(x,0)} dx.
\\
\end{gathered} \]
In view of \eqref{3.17} and \eqref{3.18}, for $\delta > 0$ small, ${g_\delta }(t)$ has a unique critical point ${t_\delta } > 0$ which corresponds to its maximum. Therefore, we check from ${g'_\delta }({t_\delta }) = 0$ that
\[
{t_\delta } = \frac{{{{\left( {\frac{{N - 2s}}
{2}} \right)}^{1/2s}}\left\| {{v_\delta }} \right\|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^{1/s}}}
{{{{\left( {\frac{{N - 2s}}
{2} - \frac{{mN}}
{2}\left\| {{v_\delta }(x,0)} \right\|_{{L^2}({\mathbb{R}^N})}^2 + \frac{{\lambda N}}
{q}\left\| {{v_\delta }(x,0)} \right\|_{{L^q}({\mathbb{R}^N})}^q} \right)}^{1/2s}}}},
\]
then we see from \eqref{3.17} and \eqref{3.18} that
\begin{equation}\lambdabel{3.19}
0 < {C_1} \le {t_\delta } \le {C_2}{\text{ for }}\delta > 0{\text{ small.}}
\end{equation}
By \eqref{3.17} and \eqref{3.18}, we get
\begin{equation}\lambdabel{3.20}
\mathop {\max }\limits_{t \ge 0} {h_\delta }(t) = {h_\delta }({t'_\delta }) = \frac{s}
{N}{({k_s}S(s,N))^{N/2s}} + O({\delta ^{N - 2s}}),
\end{equation}
where ${t'_\delta } = \| {{v_\delta }} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^{1/s}$ is the maximum point of ${h_\delta }(t)(t>0)$.
By \eqref{3.19} and \eqref{3.20}, we have
\begin{equation}\lambdabel{3.21}
\begin{gathered}
{\text{ }}{c_m} \le\mathop {\sup }\limits_{t > 0} {I_m}({v_\delta }(x/t)) \le \mathop {\sup }\limits_{t > 0} {g_\delta }(t)
\\
= {h_\delta }({t_\delta }) + \frac{m}
{2}t_\delta ^N\int_{{\mathbb{R}^N}} {v_\delta ^2(x,0)} dx - \frac{\lambda }
{q}t_\delta ^N\int_{{\mathbb{R}^N}} {v_\delta ^q(x,0)} dx
\\
\le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{N - 2s}}) + C\| {{v_\delta }(x,0)} \|_{{L^2}({\mathbb{R}^N})}^2 - C\lambda \| {{v_\delta }(x,0)} \|_{{L^q}({\mathbb{R}^N})}^q.
\\
\end{gathered}
\end{equation}
Next, we distinguish the following cases:\\
(i) If $N>4s$, then $q > 2 > N/(N - 2s)$, by \eqref{3.18} and \eqref{3.21}, we get
\[
{c_m} \le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{N - 2s}}) + O({\delta ^{2s}}) - \lambda \cdot O({\delta ^{(2N - (N - 2s)q)/2}}).
\]
In view of $(2N - (N - 2s)q)/2 < 2s < (N - 2s)$, we get the conclusion for $\delta>0$ small.\\
(ii) If $N=4s$, then $q > 2 = N/(N - 2s)$, by \eqref{3.18} and \eqref{3.21}, we have
\[
{c_m} \le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{2s}}(1 + |\log \delta |)) - \lambda \cdot O({\delta ^{4s - sq}}).
\]
Since $4s-sq<2s$, we get the conclusion for $\delta>0$ small.\\
(iii) If $2s < N < 4s$ and $N/(N - 2s) < q < 2_s^ * $, we see from \eqref{3.18} and \eqref{3.21} that
\[
{c_m} \le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{N - 2s}}) - \lambda \cdot O({\delta ^{(2N - (N - 2s)q)/2}}).
\]
If $4s/(N - 2s) < q < 2_s^ * $, then $(N - 2s) > (2N - (N - 2s)q)/2$, and we get the conclusion for $\delta>0$ small. If $N/(N - 2s) < q \le 4s/(N - 2s)$, then $(N - 2s) \le (2N - (N - 2s)q)/2$; choosing $\lambda = {\delta ^{ - \theta }}$ with $\theta > (2N - (q + 2)(N - 2s))/2 > 0$, we still get the conclusion for $\delta>0$ small.\\
(iv) If $2s < N < 4s$ and $q =N/(N - 2s)$, \eqref{3.18} and \eqref{3.21} yield
\[
{c_m} \le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{N - 2s}}) - \lambda \cdot O({\delta ^{N/2}}|\log \delta |).
\]
Since $(N - 2s) < N/2$, choosing $\lambda = {\delta ^{ - \theta }}$ with $\theta > 2s - (N/2)$, we get the conclusion for $\delta>0$ small.\\
(v) If $2s < N < 4s$ and $2<q <N/(N - 2s)$, \eqref{3.18} and \eqref{3.21} show that
\[
{c_m} \le \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}} + O({\delta ^{N - 2s}}) - \lambda \cdot O({\delta ^{(N - 2s)q/2}}).
\]
Choosing $\lambda = {\delta ^{ - \theta }}$ with $\theta > (q - 2)(N - 2s)/2$, we get the conclusion for $\delta>0$ small.
\end{proof}
\begin{lemma}\lambdabel{3.5.}
There is a sequence $\{ {x_n}\} _{n = 1}^\infty \subset {\mathbb{R}^N}$ and $R > 0$, $\beta > 0$ such that
\[
\int_{\Gamma _R^0({x_n})} {w_n^2(x,0)} dx \ge \beta ,
\]
where $\{ {w_n}\} _{n = 1}^\infty $ is the sequence given in \eqref{3.6}.
\end{lemma}
\begin{proof}
Assume, on the contrary, that the lemma does not hold. Then, by Lemma~2.2 of \cite{fqt}, it follows that
\[
\int_{{\mathbb{R}^N}} {|{w_n}(x,0){|^p}} dx \to 0{\text{ as }}n \to \infty {\text{ for all }}2 < p < 2_s^ * .
\]
Since $\langle {I'_m({w_n}),{w_n}} \rangle = o(1)$ and ${I_m}({w_n}) \to {c_m}$, by $(f_1)$ and $(f_2)$, we get
\[
\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2 + m\| {{w_n}(x,0)} \|_{{L^2}({\mathbb{R}^N})}^2 - \| {{w_n}(x,0)} \|_{{L^{2_s^ * }}({\mathbb{R}^N})}^{2_s^ * } = o(1),
\]
\begin{equation}\lambdabel{3.22}
\frac{1}
{2}\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2 + \frac{m}
{2}\| {{w_n}(x,0)} \|_{{L^2}({\mathbb{R}^N})}^2 - \frac{1}
{{2_s^ * }}\| {{w_n}(x,0)} \|_{{L^{2_s^ * }}({\mathbb{R}^N})}^{2_s^ * } = {c_m} + o(1).
\end{equation}
Let $l \ge 0$ be such that
\begin{equation}\lambdabel{3.23}
\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2 + m\| {{w_n}(x,0)} \|_{{L^2}({\mathbb{R}^N})}^2 \to l{\text{ and }}\| {{w_n}(x,0)} \|_{{L^{2_s^ * }}({\mathbb{R}^N})}^{2_s^ * } \to l.
\end{equation}
It is trivial that $l > 0$, otherwise ${\| {{w_n}} \|_{{X^{1,s}}(\mathbb{R}_ + ^{N + 1})}} \to 0$ as $n \to \infty $ which contradicts ${c_m} > 0$. By \eqref{3.22}, we get
\begin{equation}\lambdabel{3.24}
{c_m} = \frac{s}
{N}l.
\end{equation}
By Lemma~\ref{2.2.}, we see that
\begin{equation}\lambdabel{3.25}
\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2 + m\| {{w_n}(x,0)} \|_{{L^2}({\mathbb{R}^N})}^2 \ge {k_s}S(s,N)\| {{w_n}(x,0)} \|_{{L^{2_s^ * }}({\mathbb{R}^N})}^2.
\end{equation}
Letting $n \to \infty $ in \eqref{3.25}, we get $l \ge {({k_s}S(s,N))^{N/2s}}$, then by \eqref{3.24}, ${c_m} \ge \frac{s}
{N}{({k_s}S(s,N))^{N/(2s)}}$, which contradicts Lemma~\ref{3.4.}.
\end{proof}
\begin{proof}[\bf Proof of Theorem~\ref{1.1.}]
Let $\{ {w_n}\} _{n = 1}^\infty $ be the sequence given in \eqref{3.6} and denote ${\tilde w_n}(x,y) = {w_n}(x + {x_n},y)$, where $\{{x_n}\} _{n = 1}^\infty$ is the sequence given in Lemma~\ref{3.5.}. By Lemma~\ref{3.3.} and Lemma~\ref{3.5.}, we see that, up to a subsequence, $\exists \tilde w(x,y) \in {X^{1,s}}(\mathbb{R}_ + ^{N+1})\backslash \{ 0\} $ such that
${{\tilde w}_n}(x,y) \rightharpoonup \tilde w(x,y)$ in ${X^{1,s}}(\mathbb{R}_ + ^{N+1})$, ${{\tilde w}_n}(x,0) \to \tilde w(x,0)$ in $L_{{\text{loc}}}^p({\mathbb{R}^N})(1 \le p < 2_s^ * )$, ${{\tilde w}_n}(x,0) \to \tilde w(x,0)$ a.e. in ${\mathbb{R}^N}$ and ${\tilde w}$ satisfies \eqref{3.1}. Hence
\begin{equation}\lambdabel{3.26}
\begin{gathered}
{b_m} \le {I_m}(\tilde w) = {I_m}(\tilde w) - \frac{1}
{N}{P_m}(\tilde w) = \frac{s}
{N}\| {\tilde w} \|_{{X^s}(\mathbb{R}_ + ^{N+1})}^2 \le \mathop {\underline {\lim } }\limits_{n \to \infty } \frac{s}
{N}\| {{{\tilde w}_n}} \|_{{X^s}(\mathbb{R}_ + ^{N+1})}^2
\\
{\text{ }} = \mathop {\underline {\lim } }\limits_{n \to \infty } \frac{s}
{N}\| {{w_n}} \|_{{X^s}(\mathbb{R}_ + ^{N+1})}^2 = \mathop {\underline {\lim } }\limits_{n \to \infty } \Bigl[ {{I_m}({w_n}) - \frac{1}
{N}{P_m}({w_n})} \Bigr] = {c_m}.
\\
\end{gathered}
\end{equation}
For any $\bar w \in {X^{1,s}}(\mathbb{R}_ + ^{N+1})\backslash \{ 0\} $ a solution of \eqref{3.1}, we set the path
\[\bar \gamma (t) = \left\{ \begin{gathered}
\bar w(x/t,y/t),{\text{ if }}t > 0,
\\
0,{\text{ if }}t = 0.
\\
\end{gathered} \right.\]
Since
\begin{equation}\lambdabel{3.27}
{I_m}(\bar \gamma (t)) = {I_m}(\bar \gamma (t)) - \frac{1}
{N}{t^N}{P_m}(\bar w) = \Bigl( {\frac{1}
{2}{t^{N - 2s}} - \frac{{N - 2s}}
{{2N}}{t^N}} \Bigr)\| {\bar w} \|_{{X^s}(\mathbb{R}_ + ^{N + 1})}^2,
\end{equation}
there exists ${T_0}>0$ large such that ${I_m}(\bar \gamma ({T_0})) < 0$ and ${I_m}(\bar \gamma (t))$ achieves its strict global maximum at $t=1$. By the definition of $c_m$, we see that ${I_m}(\bar w) \ge {c_m}$. Since $\bar w$ is arbitrary, we see that ${b_m} \ge {c_m}$. Hence, we conclude from \eqref{3.26} that ${I_m}(\tilde w) = {c_m} = {b_m}$ and ${I'_m}(\tilde w) = 0$. Arguing as in Proposition~4.1.1 of \cite{dmv}, we see that $\tilde w \in {L^\infty }({\mathbb{R}^N})$. Since $\tilde w$ is nonnegative and nontrivial and $f$ is continuous, we can apply the Harnack inequality in Lemma~4.9 of \cite{cs1} to conclude that $\tilde w$ is positive; that is, $\tilde w$ is in fact a positive ground state solution of \eqref{3.1}, and hence $u(x): = \tilde w(x,0)$ is a positive ground state solution of \eqref{1.1}.
\end{proof}
\end{document}
|
\begin{document}
\title{Some Hopf Algebras related to $\mathfrak{sl}_2$}
\author{Jing Wang}
\address{Mathematics Department, Beijing Forestry University, Beijing, 100083, P.R.China} \email{wang\[email protected]}
\author{Zhixiang Wu}
\address{Mathematics Department, Zhejiang University,
Hangzhou, 310027, P.R.China} \email{[email protected]}
\author{Yan Tan}
\address{College of Science, Zhejiang Agriculture and Forestry University, Hangzhou, 311300, P.R.China} \email{[email protected]}
\thanks{The author Jing Wang is supported by NNSFC (No.11901034) and the Fundamental Research Funds for the Central Universities (No.BLX201721). The work of Zhixiang Wu and Yan Tan is sponsored
by ZJNSF (No. LY17A010015) and NNSFC (No.11871421).}
\subjclass[2000]{Primary 17B37, 81R50, 16E30, 06B15, 16T05}
\keywords{ Hopf algebra, finite-dimensional representation, Grothendieck ring}
\begin{abstract}We give a series of infinite dimensional noncommutative and noncocommutative pointed Hopf algebras, which are Artin-Schelter Gorenstein Hopf algebras of injective dimension 3. Radford's Hopf algebra and Gelaki's Hopf algebra are homomorphic images of these Hopf algebras.
We determine their irreducible representations and describe their Grothendieck rings. We show that two non-isomorphic Hopf algebras can have isomorphic Grothendieck rings.
\\
Key Words: Hopf algebra, irreducible module, Grothendieck ring, generators, relations
\end{abstract}
\maketitle
\section{Introduction}
The tensor product of representations of a Hopf algebra is important in the representation theory of Hopf algebras and quantum groups. In particular, the decomposition of the tensor product of indecomposable modules into a direct sum of indecomposable modules has received enormous attention. However, little is known about how a tensor product of two indecomposable modules decomposes into a direct sum of indecomposable modules over a Hopf algebra or a quantum group. There are some results for the decompositions of tensor products of modules over a Hopf algebra or a quantum group \cite{C1,CMS,Gu,KS,Y}. Recently, Green rings and their homomorphic images, which are called Grothendieck rings, of various finite-dimensional Hopf algebras have attracted considerable attention \cite{CFZ,E,WLZ,ZWLC}. However, most of these results concern finite-dimensional Hopf algebras or quantum groups.
In this paper, we describe an infinite-dimensional pointed Hopf algebra $H_\beta$ for any $\beta\in({\bf K^*})^3$ and its Grothendieck ring, which differs from the Green ring. The Hopf algebra is constructed by adding group-like elements to the restricted quantum enveloping algebra of $\mathfrak {sl}_2$. The Hopf algebra $H_\beta$ has
many interesting finite dimensional images, such as Gelaki's Hopf algebra ${\mathcal
U}_{(n,N,\nu,q,\alpha,\beta,\gamma)}$ and Radford's Hopf algebra $U_{(N,\nu,\omega)}$. Here, all irreducible representations of $H_\beta$ and the decomposition of the tensor product of two irreducible $H_\beta$-modules are determined as in \cite{C}, but we consider the decomposition in the sense of Grothendieck group. Thus we obtain the Grothendieck ring of the infinite-dimensional Hopf algebras $H_\beta$. Meanwhile we obtain the Grothendieck rings
of Gelaki's Hopf algebra ${\mathcal
U}_{(n,N,\nu,q,\alpha,\beta,\gamma)}$ and Radford's Hopf algebra $U_{(N,\nu,\omega)}$.
This paper is organized as follows. In Section 2, we introduce the infinite dimensional noncommutative and noncocommutative Hopf algebra $H_\beta$, which is a pointed $AS$-Gorenstein Hopf algebra of injective dimension 3. We also prove that the homological integral of $H_\beta$ is isomorphic to ${\bf K}$ as $H_\beta$-bimodules. To construct irreducible representations of $H_\beta$, we define a finite-dimensional quotient Hopf algebra $H_{\alpha,\beta}$ of the Hopf algebra $H_\beta$ in Section 3, where $\alpha=(n,m,n_1,n_2,n_3)\in {\bf N}^5$,
$\beta=(\beta_1,\beta_2,\beta_3)\in{\bf K}^3$. We prove that the algebra $H_{\alpha,\beta}$ can be decomposed into a direct sum of matrix rings over some commutative local rings. Based on this result, all irreducible representations of $H_\beta$ are illustrated in Section 4. Besides, we point out that the category of all finite-dimensional representations of $H_\beta$ is not semisimple. Meanwhile, we obtain all irreducible representations of Radford's Hopf
algebra $U_{(N,\nu,\omega)}$ and Gelaki's Hopf algebra ${\mathcal
U}_{(n,N,\nu,q,\alpha,\beta,\gamma)}$. In Section 5, the Grothendieck ring $G_0(H_\beta)$ of $H_\beta$ is constructed. We illustrate the structure of the Grothendieck ring $G_0(H_\beta)$ of $H_\beta$ in several cases. In detail, Theorem~\ref{them52}(iv) describes the case in which $\beta_1=\beta_2=\beta_3=0$. Theorems~\ref{L55}, \ref{thmVI} and \ref{thmV} illustrate the cases in which exactly two of the three equations $\beta_1=0$, $\beta_2=0$ and $\beta_3=0$ hold. In Theorems~\ref{V}, \ref{T9} and \ref{T10}, we give the cases in which exactly one of the conditions $\beta_1=0$, $\beta_2=0$ and $\beta_3=0$ holds. In addition, we also obtain the Grothendieck rings $G_0(U_{(N,\nu,\omega)})$ and $G_0({\mathcal U}_{(n,N,\nu,q,\alpha,\beta,\gamma)})$. We prove that there exist non-isomorphic Hopf algebras with isomorphic Grothendieck rings.
Throughout this paper, ${\bf K}$ denotes an algebraically closed field of characteristic zero, ${\bf K}^*$ the group of all nonzero elements of ${\bf K}$ under the product of ${\bf K}$, and $q\in{\bf K}$ a primitive $n$-th root of unity, $n\geq 2$. For any $z\in {\bf K}^*$, we denote by $\langle z\rangle$ the subgroup of ${\bf K}^*$ generated by $z$, by ${\bf K}^*/N$ the quotient group of ${\bf K}^*$ with respect to a normal subgroup $N$, and by $\bar{z}$ the element $zN$.
We simplify the tensor product $V\otimes_{\bf K}W$ of two vector spaces over ${\bf K}$ as $V\otimes W$. The ring of integers is denoted by ${\mathbb Z}$ and $(N_1,\cdots,N_k)$ denotes the greatest common divisor of $N_1, \cdots, N_k$, for any $N_1,\cdots, N_k\in\mathbb{Z}$.
\section{Definition of Hopf algebra $H_{\beta}$}\label{sec-2}
In this section, we give the definition of Hopf algebra $H_\beta$. Then we prove that $H_\beta$ is an Artin-Schelter Gorenstein Hopf algebra with injective dimension 3.
\begin{definition}
Let $n_1$ be a positive integer such that $2\nmid (n,n_1)$ and $1\leq n_1\leq n$. Assume that $q\in{\bf K}$ is a primitive $n$-th root of unity. Suppose that $H_{\beta}$ is an algebra generated by $a,b,c,x,y$ with the relations
$$ab=ba,\quad ac=ca,\quad bc=cb,\quad xa=qax,\quad ya=q^{-1}ay,\quad bx=xb,\quad cx=xc,\quad by=yb,\quad cy=yc,$$
$$yx-q^{-n_1}xy=\beta_3(a^{2n_1}-bc),\quad x^n=\beta_1(a^{nn_1}-b^n), \quad y^n=\beta_2(a^{nn_1}-c^n),$$
for $\beta=(\beta_1,\beta_2,\beta_3)\in {\bf K}^3$. The coproduct $\Delta$ and counit $\varepsilon$ of $H_{\beta}$ are determined by
$$\Delta(a)=a\otimes a,\quad
\Delta(b)=b\otimes b,\quad
\Delta(c)=c\otimes c,\quad
\Delta(x)=x\otimes a^{n_1}+b\otimes
x,\quad
\Delta(y)=y\otimes a^{n_1}+c\otimes y,$$
and
$$\varepsilon(a)=\varepsilon(b)=\varepsilon(c)=1,\quad
\varepsilon(x)=\varepsilon(y)=0$$respectively.
An anti-automorphism $s$ of $H_{\beta}$ is determined by
$$s(a)=a^{-1},\quad
s(b)=b^{-1},\quad
s(c)=c^{-1},\quad
s(x)=-q^{-n_1}a^{-n_1}b^{-1}x,\quad s(y)=-q^{n_1}a^{-n_1}c^{-1}y.$$
\end{definition}
In the following theorem, we prove that $H_\beta$ is an infinite dimensional Hopf algebra with the above coproduct $\Delta,$ counit $ \varepsilon$ and antipode $s$.
\begin{theorem} \label{thm-basis}
The algebra $(H_{\beta},\ \Delta,\ \varepsilon,\ s)$ is a pointed Hopf algebra with a basis $$\{a^ib^jc^kx^uy^v|i,j,k\in{\mathbb Z},0\leq u,v\leq n-1\}.$$
\end{theorem}
\begin{proof} First, we prove that $\Delta$ can determine a homomorphism of
algebras from $H_{\beta}$ to $H_{\beta}\otimes H_{\beta}$.
By the definition of $H_{\beta}$, we obtain that $\Delta(x)^k=\sum\limits^{k}_{l=0}\binom{k}{l}_{q^{n_1}}b^lx^{k-l}\otimes a^{(k-l)n_1}x^l$, where $\binom{k}{l}_{q^{n_1}}=\frac{(k)_{q^{n_1}}!}{(l)_{q^{n_1}}!(k-l)_{q^{n_1}}!}$ and $(p)_{q^{n_1}}!=(p)_{q^{n_1}}(p-1)_{q^{n_1}}\cdots (1)_{q^{n_1}}$ for $(p)_{q^{n_1}}=1+q^{n_1}+\cdots +q^{(p-1)n_1}$.
Hence
$$\begin{array}{lll}\Delta(x)^{n}&=&(x\otimes a^{n_1}+b\otimes x)^{n}
=x^{n}\otimes a^{n_1n}+b^n\otimes x^{n}\\
&=&\beta_1((a^{n_1n}-b^{n})\otimes
a^{n_1n}+b^{n}\otimes (a^{n_1n}-b^{n}))\\
&=&\beta_1(a^{n_1n}\otimes a^{n_1n}-b^{n}\otimes
b^{n})
=\Delta(x^{n}).\end{array} $$ Similarly, we can prove
$\Delta(y^n)=\Delta(y)^n$. Moreover,
$$\begin{array}{lll}&&\Delta(y)\Delta(x)-q^{-n_1}\Delta(x)\Delta(y)\\
&=&(y\otimes
a^{n_1}+c\otimes y)(x\otimes a^{n_1}+b\otimes x)
-q^{-n_1}(x\otimes a^{n_1}+b\otimes
x)(y\otimes
a^{n_1}+c\otimes y)\\
&=&yx\otimes a^{2n_1}-q^{-n_1}xy\otimes a^{2n_1}+bc\otimes
yx-q^{-n_1}bc\otimes xy\\
&=&\beta_3((a^{2n_1}-bc)\otimes
a^{2n_1}+bc\otimes(a^{2n_1}-bc))\\
&=&\beta_3(a^{2n_1}\otimes a^{2n_1}-bc\otimes
bc)\\
&=&\Delta(yx-q^{-n_1}xy).\end{array}$$ It is easy to verify that
$$\Delta(x)\Delta(a)=q\Delta(a)\Delta(x),\quad\Delta(y)\Delta(a)=q^{-1}\Delta(a)\Delta(y),\quad\Delta(x)\Delta(b)=\Delta(b)\Delta(x),$$
$$\Delta(x)\Delta(c)=\Delta(c)\Delta(x),\quad\Delta(y)\Delta(b)=\Delta(b)\Delta(y),\quad\Delta(y)\Delta(c)=\Delta(c)\Delta(y),$$
$$\Delta(a)\Delta(b)=\Delta(b)\Delta(a),\quad\Delta(a)\Delta(c)=\Delta(c)\Delta(a),\quad\Delta(b)\Delta(c)=\Delta(c)\Delta(b).$$ Thus
$\Delta$ induces a homomorphism of algebras.
Second, it is easy to prove that $\varepsilon$ is a
homomorphism from the algebra $H_{\beta}$ to ${\bf K}$ and
$$(\Delta\otimes 1)\Delta(K)=(1\otimes \Delta)\Delta(K),\qquad
(1\otimes \varepsilon)\Delta(K)=(\varepsilon\otimes
1)\Delta(K)=K$$
for any $K\in \{a,b,c,x,y\}$.
Thus $H_{\beta}$ is a bialgebra.
At last, we need to prove that $s$ is an antipode of the algebra $H_{\beta}$.
By $s(x)=-q^{-n_1}a^{-n_1}b^{-1}x,$
we get that $$\begin{array}{lll}(s(x))^n&=&(-q^{-n_1})^nb^{-n}(a^{-n_1}x)^n\\
&=&(-1)^nb^{-n}q^{\frac12n(n-1)n_1}a^{-n_1n}x^n\\
&=&(-1)^nq^{\frac12n(n-1)n_1}a^{-nn_1}b^{-n}\beta_1(a^{nn_1}-b^n)\\
&=&(-1)^{n+1}q^{\frac12n(n-1)n_1}\beta_1( a^{-nn_1}-b^{-n})\\
&=&(-1)^{n+1}q^{\frac12n(n-1)n_1}s(x^n).\end{array}$$
It is not difficult to prove that $s(x)^n=s(x^n)$. Indeed,
if $n$ is odd then $(-1)^{n+1}q^{\frac12n(n-1)n_1}=1$.
If $n$ is even then we obtain that $n_1$ is odd by $2\nmid(n,n_1)$ and
$$(-1)^{n+1}q^{\frac12n(n-1)n_1}=(-1)^{n+1}q^{\frac12n(n-2)n_1
+\frac12nn_1}=(-1)^{n+n_1+1}= 1.$$
Similarly, we have
$s(y^n)=s(y)^n$. Moreover,
$$\begin{array}{lll}&&s(x)s(y)-q^{-n_1}s(y)s(x)\\
&=&a^{-n_1}b^{-1}xa^{-n_1}c^{-1}y-q^{-n_1}a^{-n_1}c^{-1}ya^{-n_1}b^{-1}x\\
&=&a^{-2n_1}(bc)^{-1}(q^{-n_1}xy-yx)
=-\beta_3a^{-2n_1}(bc)^{-1}(a^{2n_1}-bc)\\
&=&\beta_3( a^{-2n_1}-(bc)^{-1})
=s(yx-q^{-n_1}xy).\end{array}$$
It is easy to verify that
$$s(a)s(x)=qs(x)s(a),\quad s(a)s(y)=q^{-1}s(y)s(a),\quad s(a)s(b)=s(b)s(a),$$
$$s(c)s(b)=s(b)s(c),\quad s(c)s(a)=s(a)s(c),\quad s(b)s(x)=s(x)s(b),$$
$$s(b)s(y)=s(y)s(b),\quad s(c)s(x)=s(x)s(c),\quad s(c)s(y)=s(y)s(c).$$
Thus $s$ is an anti-automorphism of $H_{\beta}$.
Hence, we only need to verify the antipode axiom on the generators $x, y, a, b, c$, which is straightforward.
Altogether, we have proved that $H_{\beta}$ is a Hopf algebra.
By \cite[Lemma 1]{R} or \cite[Lemma 1.1]{G}, we obtain that the Hopf algebra $H_{\beta}$ is pointed and
that $\{a^ib^jc^k\mid i,j,k\in{\mathbb Z}\}$ is the set of all group-like elements of $H_{\beta}$.
Similarly to \cite{G,R}, we can prove that
$\{a^ib^jc^kx^uy^v\mid i,j,k\in{\mathbb Z},0\leq u,v\leq n-1\}$ is a basis
of $H_{\beta}$ by the Diamond Lemma \cite{B}.
\end{proof}
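The Gaussian binomial coefficients appearing in the proof above can be checked numerically. The following Python sketch (illustrative only; the chosen root of unity is an assumed example) implements the standard product formula and shows, in particular, that the intermediate coefficients vanish when the base is a primitive $n$-th root of unity, which is the kind of vanishing used in the computation of $\Delta(x)^{n}$ above:
\begin{verbatim}
import cmath

def gauss_binom(k, l, omega):
    # Gaussian binomial coefficient [k choose l]_omega via the product formula
    #   prod_{j=1}^{l} (1 - omega**(k-l+j)) / (1 - omega**j),
    # valid whenever no denominator factor vanishes (i.e. l < order of omega).
    num, den = 1 + 0j, 1 + 0j
    for j in range(1, l + 1):
        num *= 1 - omega ** (k - l + j)
        den *= 1 - omega ** j
    return num / den

omega = cmath.exp(2j * cmath.pi / 5)   # a primitive 5th root of unity (assumed example)
print(abs(gauss_binom(5, 2, omega)))   # ~ 0: intermediate coefficients vanish
print(gauss_binom(4, 2, omega))        # generic nonzero value
\end{verbatim}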
\begin{remark}\label{rem-Gelaki}
Notice that if the generators of $H_\beta$
satisfy the relations
$$a^N=1,\ b=c=1,\ N\in \mathbb{Z}\ \text{and}\ n\mid N, $$
then the resulting quotient Hopf algebra is Gelaki's Hopf algebra ${\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)}$, which is defined in \cite{G}.
\end{remark}
\begin{remark}[S. Gelaki \cite{G}]
Note that if $\omega$ is a primitive $N$-th root of unity and
$N\nmid n_1^2$, then $$ {\mathcal
U}_{(N/(N,n_1),N,n_1,\omega^{n_1},0,0,\gamma)}\simeq
U_{(N,n_1,\omega)}$$ as Hopf algebras for any $\gamma\in {\bf
K}^*$.
Especially, $$U_{(N,n_1,\omega)}={\mathcal U}_{(N/(N,n_1),N,n_1,\omega^{n_1},0,0,1)}.$$
Here, $U_{(N,n_1,\omega)}$ is called Radford's Hopf algebra.
\end{remark}
Let $\frak S=\{a^{kn}b^lc^t| k,l,t$ are nonnegative integers$\}$.
Then $\frak S$ is a multiplicatively closed set of ${\bf K}[a^{n},b,c]$. Since ${\bf K}[a^{\pm n},b^{\pm 1},c^{\pm 1}]$ is the localization of ${\bf K}[a^{n},b,c]$ with respect to $\frak S$, we get that the Gelfand-Kirillov dimensions of ${\bf K}[a^{\pm n},b^{\pm 1},c^{\pm 1}]$ and ${\bf K}[a^{n},b,c]$ are equal by \cite[Proposition 8.2.13]{CR}.
For simplification, we write the Gelfand-Kirillov dimension as GK dim in this section.
Recall that a Hopf algebra $A$ over the field ${\bf K}$ is Artin-Schelter Gorenstein (AS-Gorenstein for short), as defined in \cite[Definition 3.1]{WZ}, if
(AS1) injdim $_AA=d< \infty,$
(AS2) dim$_{\bf K}Ext^d_A(_A{\bf K},_AA)=1,$ $Ext^i_A(_A{\bf
K},_AA)=0$ for all $i\neq d$,
(AS3) the right $A$-module versions of the conditions (AS1,AS2)
hold.
\begin{theorem} $H_{\beta}$ is an AS-Gorenstein Hopf algebra of injective dimension 3.
\end{theorem}
\begin{proof} Let $T={\bf K}[a^{\pm n},b^{\pm 1},c^{\pm 1}]$. Then $H_{\beta}$ is a finitely generated module over the Noetherian ring $T$. So $H_{\beta}$ is both right and left Noetherian. By \cite[Corollary 13.1.13]{CR}, $H_{\beta}$ is a $PI$ Hopf algebra. Moreover, by \cite[Propositions 8.2.9, 8.2.13 and 8.1.15]{CR} we obtain that
$$\begin{array}{lll}GKdim(H_{\beta})&=&GKdim({\bf K}[a^{\pm n},b^{\pm 1},c^{\pm 1}])\\
&=&GKdim({\bf K}[a^{n},b,c]) \\
&=&3.\end{array}$$
By \cite[Theorem 0.1]{WZ}, $H_{\beta}$ is an AS-Gorenstein Hopf algebra of injective dimension $3$.
\end{proof}
We recall the homological integral of $H$ from \cite{LWZ}. Let $H$ be an $AS$-Gorenstein Hopf algebra of injective dimension $d$ over field ${\bf K}$.
The vector space of left homological integrals of $H$ is defined as $\int_H^l:=Ext_H^d(_H{\bf K},_HH)$ and the right homological integrals of $H$ as $\int_H^r:=Ext_H^d({\bf K}_H,H_H)$.
Moreover, homological integrals agree with the classical integrals for finite-dimensional Hopf algebras; see \cite[Section 1]{LWZ}. Then we get the following corollary.
\begin{corollary} $Ext^3_{H_{\beta}^{op}}({\bf K}_{H_\beta},{H_{\beta}}_{H_\beta})\simeq Ext^3_{H_{\beta}}(_{H_{\beta}}{\bf K},_{H_{\beta}}H_{\beta})\simeq {\bf K}$ as two sided $H_{\beta}$-modules.
\end{corollary}
\begin{proof}Let $H^{\prime}=H_{\beta}/(b-1)$, $H^{\prime\prime}=H^{\prime}/(c-1)$ and $H^{\prime\prime\prime}=H^{\prime\prime}/(a^{n(n-1)}-1)$. Then $H^{\prime\prime\prime}$ is isomorphic to Gelaki's Hopf algebra ${\mathcal U}_{(n,n(n-1),n_1,q,\beta_1,\beta_2,\beta_3)}$. Since
$a^{(n-1)n}-1,b-1,$ $c-1$ are in the center of $H_{\beta}$, we get $\int^l_{H_{\beta}}=\int^l_{H^{\prime}}=\int^l_{H^{\prime\prime}}=\int^l_{H^{\prime\prime\prime}}$ by \cite[Lemma 2.6]{LWZ}.
Since $H^{\prime\prime\prime}$ is finite dimensional, by \cite[Proposition 3.2]{G} we obtain that $\int^l_{H^{\prime\prime\prime}}={\bf K}\lambda$, where
$\lambda=\frac1{n(n-1)}(\sum\limits_{i=0}^{n(n-1)-1}a^i)x^{n-1}y^{n-1}$. It follows that ${\bf K}\lambda\simeq {\bf K}$ as $H_{\beta}$-bimodules, since $k\cdot\lambda=\varepsilon(k)\lambda$ for any $k\in H_\beta$, which is the same action as on $1\in{\bf K}$.
\end{proof}
\section{Properties of $H_{\alpha, \beta}$}
In this section, we construct and illustrate some properties of the Hopf algebra $H_{\alpha,\beta}$, which is a quotient of $H_{\beta}$.
This quotient is needed for determining all irreducible representations of $H_{\beta}$ in the next section.
\begin{definition}
Assume that $N=mn(n-1)$, where $m\geq 1$. Let $\alpha=(n,m,n_1,n_2,n_3)\in
{\bf N}^5$, where $1\leq n_1<m(n-1)$, $0\leq n_2,n_3<n-1$. Let $I$ be an ideal of $H_{\beta} $ generated by $a^N-1, $ $b-a^{mnn_2}$ and $c-a^{mnn_3}$.
Let $$H_{\alpha,\beta}=H_{\beta} /I.$$ Then $H_{\alpha,\beta}$ is a Hopf algebra
generated by $a,x,y$ satisfying
$$a^{mn(n-1)}=1, \quad x^{n}=\beta_1(a^{n_1n}-a^{mnn_2}),\quad
y^{n}=\beta_2(a^{n_1n}-a^{mnn_3}),$$ $$ xa=qax,\quad ya=q^{-1}ay,\quad
yx-q^{-n_1}xy=\beta_3(a^{2n_1}-a^{mn(n_2+n_3)}).$$
The coalgebra structure of $H_{\alpha,\beta}$ is determined by
$$\Delta(a)=a\otimes a,\quad \Delta(x)=x\otimes a^{n_1}+a^{mnn_2}\otimes
x,$$
$$\Delta(y)=y\otimes a^{n_1}+a^{mnn_3}\otimes
y,\quad \varepsilon(a)=1,\quad \varepsilon(x)=\varepsilon(y)=0.$$ The antipode of $H_{\alpha,\beta}$ is determined by
$$s(a)=a^{-1}=a^{N-1},\quad s(x)=-q^{-n_1}a^{-mnn_2-n_1}x,\quad s(y)=-q^{n_1}a^{-mnn_3-n_1}y.$$
\end{definition}
Besides, we can prove that $\lambda=\sum\limits_{i=0}^{N-1}\frac{a^i}{N}x^{n-1}y^{n-1}$ is a two-sided integral of $H_{\alpha,\beta}$. Indeed,
by $yx-q^{-n_1}xy=\beta_3(a^{2n_1}-a^{mn(n_2+n_3)})$ we get that
\begin{eqnarray}y^kx=q^{-kn_1}xy^k+\beta_3u_k(q^{-(k-1)n_1}a^{2n_1}-a^{mn(n_2+n_3)})y^{k-1},\end{eqnarray}
where $u_k=q^{-(k-1)n_1}+\cdots+q^{-n_1}+1.$
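As a consistency check of this formula (a direct computation, assuming $q^{n}=1$ so that $a^{mn(n_2+n_3)}$ commutes with $y$), the case $k=2$ reads
$$y^2x=y\bigl(q^{-n_1}xy+\beta_3(a^{2n_1}-a^{mn(n_2+n_3)})\bigr)
=q^{-2n_1}xy^2+\beta_3(1+q^{-n_1})\bigl(q^{-n_1}a^{2n_1}-a^{mn(n_2+n_3)}\bigr)y,$$
which is the displayed relation with $u_2=1+q^{-n_1}$.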
Let $\alpha=(n,m,n_1,n_2,n_3),\alpha'=(n',m',n_1',n_2',n_3')$,
$\beta=(1,1,\beta_3)$ and $\beta'=(1,1,\beta_3')$. Suppose that $\beta_1\beta_2\neq 0$. Then we can assume that $\beta_1=\beta_2=1$. Moreover, we assume that $x^n\neq 0$, $y^n\neq 0$ and $yx\neq q^{-n_1}xy$. Under these assumptions, we have the following proposition.
\begin{proposition} $H_{\alpha,\beta}\simeq H_{\alpha',\beta'}$ if and only if $\alpha=\alpha'$ and $\beta_3=c\beta_3'$,
where $c^n=1$.
\end{proposition}
\begin{proof} Let $f$ be an isomorphism from $H_{\alpha,\beta}$ to $H_{\alpha',\beta'}$ and $H_{\alpha',\beta'}$ be a Hopf algebra generated
by $g=a+I'$, $z=x+I'$ and $w=y+I'$, where $I'$ is the ideal generated by $a^{n'(n'-1)m'}-1, $ $b-a^{n'm'n'_2}$ and $c-a^{m'n'n'_3}$.
Notice that the subset $G(H_{\alpha,\beta})$ of all group-like elements of $H_{\alpha,\beta}$ is equal to
$\{a^t|0\leq t\leq n(n-1)m-1\}$, and the subset $G(H_{\alpha',\beta'})$ of all group-like elements of $H_{\alpha',\beta'}$ is equal to
$\{g^t|0\leq t\leq n'(n'-1)m'-1\}$. Since
$f$ induces an isomorphism from $G(H_{\alpha,\beta})$ to
$G(H_{\alpha',\beta'})$, we have $mn(n-1)=m'n'(n'-1)$. Moreover, we may assume that $f(a)=g$. Since $f(a)f(x)=qf(x)f(a)$, we can write $f(x)=\sum\limits_{i=0}^{N-1}\sum\limits_{j=1}^{n'-1}x_{ij}g^iz^jw^{j-1}$ for some $x_{ij}\in {\bf K}$.
By $\Delta(f(x))=(f\otimes f)\Delta(x)$, we get $f(x)=x_{01}z$. It is obvious that $x_{01}\neq 0$.
Similarly, we can prove that $f(y)=uy$ for some nonzero $u\in {\bf
K}$. Since $f(s(x))=s(f(x))$, we have $q^{-n_1}a^{-mnn_2-n_1}x=q^{-n'_1}a^{-m'n'n'_2-n'_1}x$. Then $mnn_2=m'n'n'_2$ and $n_1=n'_1$. Since $x^n=a^{nn_1}-a^{mnn_2}$, we have
$$x_{01}^n(g^{n'n_1'}-g^{m'n'n'_2})=g^{nn_1}-g^{mnn_2} \quad \text{i.e.} \quad (x_{01}^ng^{n'-n}-1)g^{nn_1-mnn_2}=x_{01}^n-1.$$ Since $g^{nn_1-mnn_2}\neq0$, we have $x_{01}^n=1$ and $n=n'$. Thus $m=m'$ and $n_2=n'_2$. Similarly, we can prove that $u^n(g^{n'n_1'}-g^{m'n'n'_3})=g^{nn_1}-g^{mnn_3}$.
Hence $n_3=n_3'$ and $u^n=1$. Applying $f$ to the equation $yx-q^{-n_1}xy=\beta_3(a^{2n_1}-a^{mn(n_2+n_3)})$, we obtain
$\beta_3=ux_{01}\beta_3'.$ Since $u^n=x_{01}^n=1$, the scalar $c=ux_{01}$ satisfies $c^n=1$, as claimed.
\end{proof}
\begin{remark}
The Hopf algebra $H_{\alpha,\beta}$ is finite dimensional and by Theorem~\ref{thm-basis} we get
$$dim_{\bf K} H_{\alpha,\beta}=n^3(n-1)m.$$
In particular, if $n_2=n_3=0$, then $H_{\alpha,\beta}$ is Gelaki's Hopf algebra by Remark \ref{rem-Gelaki}.
\end{remark}
Next, we determine the algebraic structure of $H_{\alpha,\beta}$. For this purpose, we construct some idempotent elements of $H_{\alpha,\beta}$. Suppose that $\omega_0$ is a primitive $m(n-1)$-th root of unity and $$e_i=\frac1{m(n-1)}\sum_{j=0}^{m(n-1)-1}(\omega_0^ia^n)^j,\qquad i=0,1,\cdots,m(n-1)-1.$$ Note that $\sum\limits_{j=0}^{m(n-1)-1}(\omega_0^i)^j=0$ for any
$i\neq 0$. Thus we have the following claim.
\begin{lemma} \label{lem41}$1=e_0+e_1+\cdots +e_{m(n-1)-1}$, where $e_0,e_1,\cdots,e_{m(n-1)-1}$ are pairwise orthogonal central idempotents.\end{lemma}
\begin{proof} Since $a^nx=xa^n,a^ny=ya^n$, $e_i$ are in the center
of $H_{\alpha,\beta}$. Hence
$$\begin{array}{lll}\sum\limits_{i=0}^{m(n-1)-1}e_i&=&\frac1{m(n-1)}\sum\limits_{i=0}^{m(n-1)-1}\sum\limits_{j=0}^{m(n-1)-1}\omega_0^{ij}a^{nj}\\
&=&\frac1{m(n-1)}\sum\limits_{j=0}^{m(n-1)-1}a^{nj}(\sum\limits_{i=0}^{m(n-1)-1}(\omega_0^j)^i)\\
&=&\frac1{m(n-1)}m(n-1)=1\end{array}$$ and
$$\begin{array}{lll}e_ie_j&=&\frac1{(m(n-1))^2}
\sum\limits_{k=0}^{m(n-1)-1}\sum\limits_{l=0}^{m(n-1)-1}\omega_0^{ik+jl}a^{n(k+l)}\\
&=&\frac1{(m(n-1))^2}\sum\limits_{u=0}^{2(m(n-1)-1)}a^{nu}\omega_0^{ui}\sum\limits_{0\leq
l\leq min\{u,m(n-1)-1\},0\leq u-l\leq m(n-1)-1}\omega_0^{(j-i)l}\\
&=&\frac1{(m(n-1))^2}(\sum\limits_{u=0}^{(m(n-1)-1)}a^{nu}\omega_0^{ui}\sum\limits_{0\leq
l\leq
u}\omega_0^{(j-i)l}\\
&&+\sum\limits_{u=m(n-1)}^{2(mn-m-1)}a^{nu}\omega_0^{iu}\sum\limits_{u-(mn-m-1)\leq
l\leq (mn-m-1)}\omega_0^{(j-i)l})\\
&=&\frac1{(m(n-1))^2}(\sum\limits_{u=0}^{(m(n-1)-1)}a^{nu}\omega_0^{ui}\sum\limits_{0\leq
l\leq
u}\omega_0^{(j-i)l}\\
&&+\sum\limits_{u'=0}^{mn-m-1}a^{nu'}\omega_0^{iu'}\sum\limits_{1+u'\leq
l\leq {mn-m-1}}\omega_0^{(j-i)l})\\
&=&\frac1{(m(n-1))^2}(\sum\limits_{u=0}^{(m(n-1)-1)}a^{nu}\omega_0^{ui}\sum\limits_{0\leq
l\leq {mn-m-1}}\omega_0^{(j-i)l})\\
&=&\frac1{m(n-1)}\delta_{ij}\sum\limits_{u=0}^{m(n-1)-1}a^{nu}\omega_0^{iu}
=\delta_{ij}e_i,\end{array}$$where $\delta_{ij}$ is the
Kronecker symbol. The claim is true.\end{proof}
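For instance, in the small illustrative case $m(n-1)=2$ (e.g. $m=1$, $n=3$), we have $\omega_0=-1$ and $a^{2n}=1$ in $H_{\alpha,\beta}$, and the idempotents are simply
$$e_0=\tfrac12(1+a^{n}),\qquad e_1=\tfrac12(1-a^{n}),$$
so that $e_0+e_1=1$, $e_ie_j=\delta_{ij}e_i$, and $e_0e_1=\tfrac14(1-a^{2n})=0$.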
Let $A_i=e_iH_{\alpha,\beta}$, where $i\in \{0,1,\cdots,m(n-1)-1\}$. Since
$e_ia^n=\omega_0^{-i}e_i$, we obtain that the set $\{e_ia^jx^ly^t|0\leq j,l,t\leq n-1\}$ is
a basis of $A_i$ and $dim_{\bf{K}}A_i=n^3$. It follows that $H_{\alpha,\beta}$ is a direct sum of algebras $A_i$ by Lemma \ref{lem41}.
\begin{lemma} Let $\omega$ be a primitive $N$-th root of unity such that $\omega^n=
\omega_0$. Then $A_i$ is generated by $g=\omega^ie_ia,x',y'$,
satisfying the following relations
$$g^n=e_i,\quad x'^n=\beta_1'e_i,\quad y'^n=\beta_2'e_i,\quad gx'=q^{-1}x'g,\quad gy'=qy'g$$
and
$$y'x'-q^{-n_1}x'y'=\left\{\begin{array}{ll}0&\beta_3=0\\
g^{2n_1}-\omega^{2n_1i-m(n_2+n_3)i}e_i&\beta_3\neq0\end{array}\right.,$$
where $\beta_1'=\beta_1(\omega_0^{-n_1i}-\omega_0^{-mn_2i}),\ $
$\beta_2'=\left\{\begin{array}{ll}\beta_2(\omega_0^{-n_1i}-\omega_0^{-mn_3i})&\beta_3=0\\
\beta_2\beta_3^{-n}(\omega_0^{n_1i}-\omega_0^{(2n_1-mn_3)i})&\beta_3\neq0\end{array}\right.$.
\end{lemma}
\begin{proof}Let $x'=xe_i$ and $y'=\left\{\begin{array}{ll}ye_i&\beta_3=0\\
\beta_3^{-1}\omega^{2n_1i}ye_i&\beta_3\neq 0\end{array}\right.$.
It is obvious that $A_i$ is generated by $g,x',y'$. It is easy to
check the relations on these generators $g, x', y'$ in this lemma.
\end{proof}
For the rest of this section, we illustrate the properties of $A_i$.
For simplicity, we denote by $A$ the algebra
generated by $g,x,y$, satisfying
\begin{equation}\label{eq-yn}
g^n=1,\quad x^n=\beta_1',\quad y^n=\beta_2',\quad gx=q^{-1}xg,\quad gy=qyg
\end{equation}
and
$$yx-q^{-n_1}xy=\left\{\begin{array}{ll}0&\beta_3=0\\
g^{2n_1}-\omega^{2n_1i-mn(n_2+n_3)i}&\beta_3\neq0\end{array}\right..$$
Let $f_i=\frac1n\sum\limits_{j=0}^{n-1}(q^ig)^j,i=0,1,\cdots,n-1$. Then
$$f_0+f_1+\cdots+f_{n-1}=1$$ and $f_if_j=\delta_{ij}f_i$. Thus
$$A=Af_0 +Af_1+\cdots+Af_{n-1}$$as a direct sum of left ideals.
\begin{lemma}\label{lem43} Suppose that either $\beta_1'\neq 0$ or $\beta_2'\neq 0$. Then $A$ is isomorphic to $$M_n(R_1)\oplus M_n(R_2)\oplus\cdots\oplus
M_n(R_t),$$ where $R_i$ are commutative local rings.\end{lemma}
\begin{proof} It is easy to verify that $f_ix=xf_{i-1}$ and
$f_iy=yf_{i+1}$. If $\beta_1'\neq 0$, then $Af_i$ is isomorphic to
$Af_{i-1}$ as left $A$-modules by multiplying $x$ from the right.
If $\beta_2'\neq 0$, then $Af_i$ is isomorphic to $Af_{i+1}$ as
left $A$-modules by multiplying $y$ from the right. Thus $A$ is
isomorphic to $Hom_A(A,A)\simeq M_n(End_A(Af_i))$. Since
$$f_ig=\frac1n\sum_{j=1}^nq^{ij}g^{j+1}=\frac1nq^{-i}\sum_{j=1}^nq^{i(j+1)}g^{j+1}=q^{-i}f_i,$$
$End_A(Af_i)=f_iAf_i=span\{f_ix^ty^tf_i|t=0,1,\cdots,n-1\}$.
Notice that
$f_lyxf_l-q^{-n_1}f_lxyf_l=
(q^{-2n_1l}-\omega_0^{2n_1i-m(n_2+n_3)i})f_l$ when $\beta_3\neq0$.
Let
$\gamma_l=0$ if $\beta_3=0$ and $\gamma_l=q^{-2n_1l}-\omega_0^{2n_1i-m(n_2+n_3)i}$ if $\beta_3\neq0$.
Then
$$\begin{array}{lll}(f_ixyf_i)^2&=&f_ixf_{i-1}yxf_{i-1}yf_i\\
&=&f_ix(q^{-n_1}f_{i-1}xyf_{i-1}+\gamma_{i-1}f_{i-1})yf_i\\
&=&q^{-n_1}f_ix^2y^2f_i+\gamma_{i-1} f_ixyf_i.\end{array}$$
Similarly, we can prove that
$(f_ixyf_i)^k=q^{-(k-1)n_1}f_ix^ky^kf_i+b_kf_ix^{k-1}y^{k-1}f_i+\cdots+b_1$
for $k\geq 2$. Hence $R=End_A(Af_i)$ is generated by
$f_ixyf_i$ over the field ${\bf K}$ as an algebra. Since $dim_{\bf{K}}
R=n$, there exists a minimal polynomial $p(x)$ such that
$p(f_ixyf_i)=0$. Therefore, $R=R_1\oplus\cdots\oplus R_t$ is a
direct sum of local rings.\end{proof}
\begin{lemma}\label{lem44} Suppose that $\beta_1'=\beta_2'=\beta_3=0$. Then the Jacobson radical $J(A)$ of $A$ is
spanned by $\{g^ix^jy^k|0\leq i,j,k\leq n-1,j+k>0\}$
and $A/J(A)\cong {\bf K}[{\mathbb Z}_n]$.
\end{lemma}
\begin{proof} Since the ideal $I=\langle x,y\rangle$ is nilpotent, we have $I\subseteq J(A).$ Moreover, there is a ${\bf K}$-algebra isomorphism $A/I={\bf{K}}[g]/\langle g^n-1 \rangle\cong {\bf{K}}[{\mathbb Z}_{n}]$, which is semisimple. It follows that $J(A)=I.$
\end{proof}
\begin{lemma} \label{lem45}Suppose that $\beta_1'=\beta_2'=0$ and $\beta_3\not=0$. Then $A$ is generated by $K,E,F$ with relations
$$K^n=1,\ E^n=F^n=0,\ EK=q^{-1}KE,\ FK=qKF,$$
$$EF-FE=K^{2n_1}\frac{K^{n_1}-\rho K^{-n_1}}{q^{n_1}-q^{-n_1}},$$
where $\rho=\omega_0^{2n_1i-mn(n_2+n_3)i}$.
\end{lemma}
\begin{proof} Let $E=\frac 1{q^{n_1}-1}yg^{n-n_1},
F=\frac1{q^{n_1}+1}x$, $K=g$, and $K^{-1}=g^{n-1}$. It is obvious that
$A$ is generated by $E,F,K$. Thus we have $EF-FE=K^{2n_1}\frac{K^{n_1}-\rho K^{-n_1}}{q^{n_1}-q^{-n_1}}$ from $yx-q^{-n_1}xy=g^{2n_1}-\omega^{2n_1i-mn(n_2+n_3)i}.$
Since $E=\frac 1{q^{n_1}-1}yg^{n-n_1}$, we have
$E^n=(\frac{1}{q^{n_1}-1}yg^{n-n_1})^n=\frac{1}{(q^{n_1}-1)^n}q^{\frac{n(n-1)(n-n_1)}{2}}y^{n}g^{(n-n_1)n}$. Hence $E^n=0$ by the relation $y^n=\beta_2'=0$ in (\ref{eq-yn}).
The other relations are verified directly.
\end{proof}
Notice that a weak Hopf algebra related to the algebra $A_i$ in Lemma \ref{lem45} has been studied in \cite{YW}. In particular, if $N\mid (2n_1i-mn(n_2+n_3)i)$, then the algebra $A$ is isomorphic to Radford's Hopf algebra $U_{(n,n_1,\omega)}$.
\section{Irreducible representations of $H_{\beta}$}
In this section, we determine all irreducible representations of $H_{\beta}$. The following key lemma is needed.
\begin{lemma} \label{lem51}
Every irreducible representation of $H_{\beta}$ is finite-dimensional.
\end{lemma}
\begin{proof} The proof is similar to that of \cite[Corollary 1.2]{W}. For the sake of completeness, we give a sketch. Let $M$ be a simple module over $H_{\beta}$ and $T:={\bf
K}[a^{\pm n},b^{\pm1},c^{\pm1}]$. Since $T$ is contained in the center
of $H_{\beta}$, every element $t$ of $T$ induces an endomorphism of the
$H_{\beta}$-module $M$. We denote $\varphi(t)$ as the endomorphism
of $M$ induced by $t\in T$. Set ${\mathcal K}=\{\varphi(t)\mid t\in
T\}$. Then $kM=M$ for any nonzero $k\in {\mathcal K}$, since $kM$ is a nonzero submodule of the simple module $M$. Since $H_{\beta}$ is finitely
generated over $T$, $M$ is a finitely generated ${\mathcal
K}$-module. Let $m_1,m_2,\cdots,m_r$ be generators of $M$.
By $kM=M$, we obtain
$$\left\{\begin{array}{l}
m_1=ka_{11}m_1+ka_{12}m_2+\cdots+ka_{1r}m_r\\
m_2=ka_{21}m_1+ka_{22}m_2+\cdots+ka_{2r}m_r\\
\quad \quad \quad \quad \vdots\\
m_r=ka_{r1}m_1+ka_{r2}m_2+\cdots+ka_{rr}m_r
\end{array}\right.$$
for some $a_{ij}\in {\mathcal {K}}$.
Then $det(I-k(a_{ij}))=0$, where $I$ is the identity matrix with
order $r$. Thus, there exists $h\in{\mathcal
K}$ such that $kh=1$. Hence ${\mathcal K}$ is a field. Notice that
${\mathcal K}={\bf K}[\varphi(a^{\pm n}),\varphi(b^{\pm 1}),\varphi(c^{\pm 1})]$, which is an algebraic extension of ${\bf K}$.
Since ${\bf K}$ is an algebraically closed field, ${\bf
K}={\mathcal K}$. Consequently, $M$ is finite-dimensional.
\end{proof}
From the proof of Lemma \ref{lem51}, we can construct an algebra homomorphism $\lambda_M\in
Hom_{\bf K}(T,{\bf K})$ for any simple module $M$.
Let
$\lambda_M(a^n)=\gamma_1,\lambda_M(b)=\gamma_2$ and $\lambda_M(c)=\gamma_3$.
Since $a,b,c$ are invertible in $T$, $\gamma_i\neq 0$ for $i=1,2,3$.
Then $M$ can be
viewed as a module over $H_M:=H_{\beta}/(a^n-\gamma_1,b-\gamma_2,c-\gamma_3)$.
Let $a'=\frac1{\sqrt[n]{\gamma_1}}a$,\
$b'=\frac1{\sqrt[n]{\gamma_1^{n_1}}}b$,\
$c'=\frac1{\sqrt[n]{\gamma_1}^{n_1}}c$,\ $x'=x$ and
$y'=\left\{\begin{array}{ll}y&\beta_3=0\\
\beta_3^{-1}\gamma_1^{-\frac{2n_1}n}y&\beta_3\neq
0\end{array}\right.$. Then $H_{\beta}$ is generated by $a',b',c',x',y'$
with the relations
$$a'b'=b'a',\quad a'c'=c'a',\quad b'c'=c'b',\quad x'a'=qa'x',\quad y'a'=q^{-1}a'y',\quad b'x'=x'b',\quad $$ $$ c'x'=x'c', \quad b'y'=y'b',\quad c'y'=y'c',\quad x'^n=\beta_1'(a'^{nn_1}-b'^n),\quad y'^n=\beta_2'(a'^{nn_1}-c'^n),$$
$$y'x'-q^{-n_1}x'y'=\left\{\begin{array}{ll}0&\beta_3=0\\
a'^{2n_1}-b'c'&\beta_3\neq 0\end{array}\right.,$$
where
$\beta_1'=\gamma_1^{n_1}\beta_1,$\ $\beta_2'=\left\{\begin{array}{lll}\gamma_1^{n_1}\beta_2&&\beta_3=0\\
\beta_3^{-n}\gamma_1^{-n_1}\beta_2&&\beta_3\neq
0\end{array}\right.$. Thus the generators $a',x',y'$ in
$H_M$ satisfy
$$x'a'=qa'x',\quad y'a'=q^{-1}a'y',\quad a'^n=1,\quad x'^n=\beta_1'(1-\frac{\gamma_2^n}{\gamma_1^{n_1}}),\quad
y'^n=\beta_2'(1-\frac{\gamma_3^n}{\gamma_1^{n_1}}),$$
$$y'x'-q^{-n_1}x'y'=\left\{\begin{array}{ll}0&\beta_3=0\\
a'^{2n_1}-\frac{\gamma_2\gamma_3}{\sqrt[n]{\gamma_1^{2n_1}}}&\beta_3\neq
0\end{array}\right..$$
In the following we describe all irreducible representations of $H_\beta$.
\begin{lemma} \label{lem52}
Let $\beta_1^{\prime\prime}=\beta_1'(1-\frac{\gamma_2^n}{\gamma_1^{n_1}}),\
\beta_2^{\prime\prime}=\beta_2'(1-\frac{\gamma_3^n}{\gamma_1^{n_1}}),$ and
$$\beta_3^{\prime\prime}(i)=\left\{\begin{array}{ll}0&\beta_3=0\\
\gamma_1^{-\frac{2n_1}n}(\gamma_1^{\frac{2n_1}n}q^{2n_1i}-\gamma_2\gamma_3)&\beta_3\neq
0\end{array}\right.$$
\begin{itemize}
\itemsep=0pt
\item [(I)] Suppose that $\beta_1^{\prime\prime}\neq 0$. Then the irreducible
$H_\beta$-module $M$ has a basis $\{m_0,m_1,\cdots,m_{n-1}\}$ such that the actions of $a,b,c,x,y$ on $M$ with respect to this basis are given by
\begin{eqnarray}\label{eq8}am_j=\sqrt[n]{\gamma_1}q^{i-j}m_j,\quad
bm_j=\gamma_2m_j,\ cm_j=\gamma_3m_j\qquad for\quad 0\leq j\leq n-1;
\end{eqnarray}
\begin{eqnarray}\label{eq9}xm_j=m_{j+1},\qquad for\quad 0\leq j\leq n-2;\qquad
xm_{n-1}=(\gamma_1^{n_1}-\gamma_2^n)\beta_1m_0;
\end{eqnarray}
\begin{eqnarray}\label{eq10}ym_j=k_{n-j+1}m_{j-1},\qquad for\quad 1\leq j\leq
n-1;\qquad ym_{0}=k_1m_{n-1},
\end{eqnarray}
where $k_1,\cdots,k_n $ are determined by $$k_l=q^{(l-1)n_1}\beta_1(\gamma_1^{n_1}-\gamma_2^n)k_1+\sum^{n-l}_{j=0}q^{-jn_1}\beta_3(\gamma_1^{\frac{2n_1}{n}}q^{(2i+l)n_1}-\gamma_2\gamma_3)$$ for $2\leq l\leq n$ and $k_1k_2\cdots k_n=(\gamma_1^{n_1}-\gamma_3^n)\beta_2$. We denote this irreducible module $M$ by $V_I(\gamma_1,\gamma_2,\gamma_3;i)$.
\item [(II)]
Suppose that $\beta_{1}^{\prime\prime}=0$ and $\beta_2^{\prime\prime}\neq 0$. Then the irreducible
$H_\beta$-module $M$ has a basis $\{m_0,m_1,\cdots,m_{n-1}\}$ such that the actions of $a,b,c,x,y$ on $M$ with respect to this basis are given by
\begin{eqnarray}\label{eq11}
am_j=\sqrt[n]{\gamma_1}q^{i+j}m_j,\quad
bm_j=\gamma_2m_j,\quad cm_j=\gamma_3m_j\qquad for\quad 0\leq j\leq n-1;
\end{eqnarray}
\begin{eqnarray}\label{eq12}
ym_j=m_{j+1},\qquad for\quad 0\leq j\leq n-2;\qquad
ym_{n-1}=(\gamma_1^{n_1}-\gamma_3^n)\beta_2m_0;
\end{eqnarray}
\begin{eqnarray}\label{eq13}
xm_j=k_{j}m_{j-1},\qquad for\quad 1\leq j\leq
n-1;\qquad xm_0=k_nm_{n-1},
\end{eqnarray}
where $k_1,\cdots,k_n $ are determined by $$k_l=q^{ln_1}(\beta_2(\gamma_1^{n_1}-\gamma_3^n)k_n-\sum^{l-1}_{j=0}q^{-jn_1}\beta_3(\gamma_1^{\frac{2n_1}{n}}q^{(2i+l-1)n_1}-\gamma_2\gamma_3))$$
for $1\leq l\leq n-1$ and $k_1k_2\cdots k_n=(\gamma_1^{n_1}-\gamma_2^n)\beta_1$.
We denote this irreducible module $M$ by $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)$.
\end{itemize}
\end{lemma}
\begin{proof}The proof of (I) is similar to that of (II). We only
give the proof of (II). Similarly to the proof of Lemma \ref{lem43}, we can
prove that $H_M$ is a direct sum of $M_n(R_t)$, where $R_t$ are
commutative local rings. Hence every irreducible module over $H_M$
is of dimension $n$. Since the $H_\beta$-module $M$ can be viewed as an $H_M$-module, $dim(M)=n$. Let $m_0$ be an eigenvector of $a'$ with
eigenvalue $q^i$ for some $0\leq i\leq n-1$. Since
$y'^nm_0=\beta_2^{\prime\prime}m_0\neq 0$,
$\{m_0,y'm_0,\cdots,y'^{n-1}m_0\}$ is a basis of $M$. Moreover
$a'x'm_0=q^{-1}x'a'm_0=q^{i-1}x'm_0$. Therefore $x'm_0=k'_ny'^{n-1}m_0$
for some $k'_n\in{\bf K}$. Thus $M$ is an $H_\beta$-module with a basis $\{m_0,ym_0,\cdots,y^{n-1}m_0\}$ and the action of $a$ and
$y$ on $M$ with respect to this basis is given by (\ref{eq11}) and (\ref{eq12}) respectively. Meanwhile, $xm_0=k_ny^{n-1}m_0$
for some $k_n\in{\bf K}$. Since $xa=qax$ and
$y^lx-q^{-ln_1}xy^l=\beta_3\sum^{l-1}_{j=0}q^{-jn_1}(q^{-(l-1)n_1}a^{2n_1}-bc)y^{l-1}$, the action of $x$ can be
realized by $${\mathcal
X}=\left(\begin{array}{ccccc}0&k_1&0&\cdots&0\\
0&0&k_2&\dots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\dots&k_{n-1}\\
k_n&0&0&\cdots&0\end{array}\right),$$ where
$$k_l=q^{ln_1}(\beta_2(\gamma_1^{n_1}-\gamma_3^n)k_n-\sum^{l-1}_{j=0}q^{-jn_1}\beta_3(\gamma_1^{\frac{2n_1}{n}}q^{(2i+l-1)n_1}-\gamma_2\gamma_3)).$$
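Note that ${\mathcal X}$ is a weighted cyclic shift matrix, so its $n$-th power is a scalar matrix; for instance, when $n=3$,
$${\mathcal X}=\left(\begin{array}{ccc}0&k_1&0\\
0&0&k_2\\
k_3&0&0\end{array}\right),\qquad {\mathcal X}^3=k_1k_2k_3I_3.$$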
Since ${\mathcal X}^n=k_1k_2\cdots k_nI_n=\beta_1(\gamma_1^{n_1}-\gamma_2^n)I_n$, $k_n$ is a root of the equation
$$q^{\frac{n(n+1)}{2}n_1}\prod^n_{l=1}(\beta_2(\gamma_1^{n_1}-\gamma_3^n)k_n-\sum^{l-1}_{j=0}q^{-jn_1}\beta_3(\gamma_1^{\frac{2n_1}{n}}q^{(2i+l-1)n_1}-\gamma_2\gamma_3))=\beta_1(\gamma_1^{n_1}-\gamma_2^n).$$
Hence, the claim is true.
\end{proof}
\begin{lemma} \label{lem53} Suppose that $\beta_1^{\prime\prime}=\beta_2^{\prime\prime}=
\beta_3^{\prime\prime}(i)=0$ for some $0\leq i\leq n-1$. Then the
irreducible $H_\beta$-module $M={\bf K}$ is determined by $a\cdot 1=\sqrt[n]{\gamma_1}q^i,\ b\cdot 1=\gamma_2,\ c\cdot1=\gamma_3$ and
$x\cdot1=y\cdot1=0$. We denote this irreducible module $M$ by $V_0(\gamma_1,\gamma_2,\gamma_3;i)$ in the sequel.
\end{lemma}
\begin{proof} We first consider $M$ as an irreducible $H_M$-module. Suppose that $m_0$ is an eigenvector of $a'$. Since $x'^nm_0=0$, we can assume that
$x'm_0=0$. If $y'm_0\neq 0$, then there exists $j\in\{1,\cdots,n-1\}$ such that $y'^jm_0\neq 0$ and $y'^{j+1}m_0=0$. Hence $\{m_0, y'm_0,y'^2m_0,\cdots, y'^{j}m_0\}$ is a basis of the module $M$. Moreover, ${\bf K}y'^{j}m_0$ is a submodule of $M$, since $x'y'^{j}m_0=q^{jn_1}y'^{j}x'm_0=0$. This gives a contradiction. Thus $x\cdot m_0=y\cdot m_0=0$ and $M={\bf K}m_0\cong {\bf K}$.
\end{proof}
Suppose that $\beta_3''(i)=\beta_3(q^{2n_1i}-\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)\neq 0$. If there is an integer $v$ such that $$q^{(2i-v+1)n_1}= \gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3,$$ then there is a minimal positive integer $r$ such that $q^{(2i-r+1)n_1}= \gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3$. As $\beta_3''(i)\neq 0$, we have $q^{(2i-r+1)n_1}\neq q^{2n_1i}$ and $q^{(1-r)n_1}\neq 1$. Thus $2\leq r\leq t,$ where $t=|q^{n_1}|$ is the order of $q^{n_1}$.
If there exists no integer $v$ such that $q^{(2i-v+1)n_1}= \gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3$,
then we define $r:=t$.
\begin{lemma} \label{lem54}Suppose that $\beta_1^{\prime\prime}=\beta_2^{\prime\prime}=0$,
but $\beta_3^{\prime\prime}(i)\not=0$ for some $0\leq i\leq n-1$. Then the irreducible $H_\beta$-module $M$ has a basis $\{m_j|0\leq j\leq r-1\}$ such that the actions of $a,b,c,x,y$ are given by
\begin{eqnarray}\label{eq15}
am_j=\sqrt[n]{\gamma_1}q^{i-j}m_j,\quad
bm_j=\gamma_2m_j,\quad cm_j=\gamma_3m_j\qquad for\quad 0\leq j\leq r-1;
\end{eqnarray}
\begin{eqnarray}\label{eq16}
ym_j=k_jm_{j-1},\qquad for\quad 1\leq j\leq r-1;\qquad
ym_{0}=0;
\end{eqnarray}
\begin{eqnarray}\label{eq17}
xm_j=m_{j+1},\qquad for\quad 0\leq j\leq
r-2;\qquad xm_{r-1}=0,
\end{eqnarray}
where $k_1,\cdots,k_r $ are determined by $$k_l=\sum^{l-1}_{j=0}q^{-jn_1}\beta_3(q^{(2i-l+1)n_1}\gamma^{\frac{2n_1}{n}}_1-\gamma_2\gamma_3)$$
for $1\leq l\leq r-1.$
\end{lemma}
\begin{proof} We first consider $M$ as an irreducible $H_M$-module. Suppose that $m_0$ is an eigenvector of $a'$. Since $y'^nm_0=0$, we can assume that
$y'm_0=0$. Moreover, we can assume that $x'^jm_0\neq 0$ for $0\leq j\leq
r'-1$ and $x'^{r'}m_0=0$. It is obvious that $r'\leq n$ and
$a'm_0=q^im_0$ for some $1\leq i\leq n$. It is easy to check that
$$y'x'^um_0=q^{-un_1}x'^uy'm_0+\frac{1-q^{-un_1}}{1-q^{-n_1}}(q^{(2i-u+1)n_1}-
\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)x'^{u-1}m_0.$$ Since
$$0=y'x'^{r'}m_0=\frac{1-q^{-{r'}n_1}}{1-q^{-n_1}}(q^{(2i-r'+1)n_1}-
\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)x'^{r'-1}m_0,$$ either $q^{r'n_1}=1$, or
$q^{(2i-r'+1)n_1}=\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3$. If $r'>t$, then
$span\{x'^tm_0,\cdots,x'^{r'-1}m_0\}$ is a nonzero submodule of the simple module $M$. This is impossible.
Thus $r'\leq t$. Similarly, we have $r\leq r'$. Hence
$r'=min\{t,r\}.$ Since
$y'x'^jm_0=(\sum\limits_{v=0}^{j-1}q^{(2i-v)n_1}-\sum\limits_{v=0}^{j-1}q^{vn_1}\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)x'^{j-1}m_0$
for all $j\geq 1$, $y'(x'^lm_0)=k'_l(x'^{l-1}m_0)$ for $1\leq l\leq r-1$, where $k'_l=\sum^{l-1}_{j=0}q^{-jn_1}(q^{(2i-l+1)n_1}-\gamma^{-\frac{2n_1}{n}}_1\gamma_2\gamma_3).$
Finally, we prove that
$M=span\{x'^vm_0|0\leq v \leq r-1\}$ is a simple $H_M$-module. Let $V'$ be a nonzero submodule of $M$. Then there exists
$x'^{i-s}m_0\in V'$ for some $0\leq s\leq r-1$. If $s=i$, then $m_0\in
V'$. If $s\neq i$, then $y'^{i-s}x'^{i-s}m_0=k'_1k'_2\cdots
k'_{i-s}m_0\in V'$. Thus $m_0\in V'$
and $V'=M$.
Since $a'=\frac1{\sqrt[n]{\gamma_1}}a$,
$b'=\frac1{\sqrt[n]{\gamma_1^{n_1}}}b$,
$c'=\frac1{\sqrt[n]{\gamma_1}^{n_1}}c$, $x'=x$, and
$y'=\beta_3^{-1}\gamma_1^{-\frac{2n_1}n}y$, the action of $a,b,c,x,y$ on $M$ is given by (\ref{eq15})(\ref{eq16})(\ref{eq17}).
\end{proof}
Recall the modules $V_I(\gamma_1,\gamma_2,\gamma_3;i)$,\ $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)$ and $V_{0}(\gamma_1,\gamma_2,\gamma_3;i)$ defined in Lemma~\ref{lem52} and Lemma~\ref{lem53}.
We denote the irreducible module described in Lemma \ref{lem54} by $V_r(\gamma_1,\gamma_2,\gamma_3;i)$.
By all the above lemmas, we obtain the following classification theorem.
\begin{theorem} Let $M$ be an irreducible representation of $H_{\beta}$. Then
$M$ is isomorphic to one of the following types:
$V_I(\gamma_1,\gamma_2,\gamma_3;i)$, $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)$, $V_{0}(\gamma_1,\gamma_2,\gamma_3;i)$, and
$V_{r}(\gamma_1,\gamma_2,\gamma_3;i)$.
\end{theorem}
\begin{proof} It is easy to verify that $V_I(\gamma_1,\gamma_2,\gamma_3;i)$,
$V_{II}(\gamma_1,\gamma_2,\gamma_3;i)$,
$V_{0}(\gamma_1,\gamma_2,\gamma_3;i)$, and
$V_{r}(\gamma_1,\gamma_2,\gamma_3;i)$ are irreducible
representations of $H_{\beta}$. Let $M$ be an irreducible representation of
$H_{\beta}$. Then there exists an algebra homomorphism $\lambda\in Hom_{\bf{K}}(T,{\bf K})$ such that
$\lambda(a^n)=\gamma_1\neq 0$, $\lambda(b)=\gamma_2\neq 0$ and
$\lambda(c)=\gamma_3\neq 0$. Since $\gamma_1,\gamma_2,\gamma_3$
lie in one of the above four cases, $M$ can be viewed as a
representation of $H_M$ described by Lemma \ref{lem52}, Lemma \ref{lem53} and Lemma
\ref{lem54}. Keeping track of the change of generators, we see that $M$ is
isomorphic to one of the above four kinds of irreducible representations.
\end{proof}
The next proposition shows that the category of all $H_\beta$-modules is not semisimple.
\begin{proposition} Suppose that $\gamma_1^{-\frac {2n_1}n}\gamma_2\gamma_3\neq q^{v n_1}$
for any $v$ and
$(\gamma_1^{n_1}-\gamma_2^n)\beta_1=(\gamma_1^{n_1}-\gamma_3^n)\beta_2=0$.
Then
$Ext_{H_\beta}(V_{t}(\gamma_1,\gamma_2,\gamma_3;i-t+n),V_{t}(\gamma_1,\gamma_2,\gamma_3;i))\ne
0 $ for all $1\le i\le n-1$, where $t=|q^{n_1}|$.
\end{proposition}
\begin{proof} Let
$k_u(i)=\frac{1-q^{-un_1}}{1-q^{-n_1}}(q^{(2i-u+1)n_1}-\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)$ and $L$ be a vector
space with a basis $\{v_1,\cdots,v_{2t}\}$. Set
$$\begin{array}{llll} av_p=q^{i-p+1}v_p,&
av_{t+p}=q^{i-r+n-p+1}v_{t+p},& & 1\le p\le t,\\
xv_{2t}=0,& xv_p= v_{p+1},& &1\leq p\leq 2t-1,\\
yv_1=0,& yv_{t+1}=0,&&\\
yv_p=k_{p-1}(i)v_{p-1},&yv_{t+p}=k_{p-1}(i-t+n)v_{t+p-1},& & 2\leq
p\leq t,\\
bv_p=\gamma_2v_p,&cv_p=\gamma_3v_p,&&1\le p\le 2t.
\end{array}$$
Then $span\{v_{t+1},\cdots,v_{2t}\}$ is isomorphic to $V_{t}(\gamma_1,\gamma_2,\gamma_3;i-t+n)$ and
$$L/V_{t}(\gamma_1,\gamma_2,\gamma_3;i-t+n)\cong V_{t}(\gamma_1,\gamma_2,\gamma_3;i).$$ Obviously, the sequence
$0\rightarrow V_{t}(\gamma_1,\gamma_2,\gamma_3;i-t+n)\rightarrow
L\rightarrow V_{t}(\gamma_1,\gamma_2,\gamma_3;i)\rightarrow 0$ is
not split. Hence
$Ext_{H_\beta}(V_{t}(\gamma_1,\gamma_2,\gamma_3;i-t+n),V_{t}(\gamma_1,\gamma_2,\gamma_3;i))\ne
0$.\end{proof}
\begin{remark} Since $H_\beta$ is a Hopf algebra, the dual of any
$H_\beta$-module is still an $H_\beta$-module. It is easy to prove that the dual
of any irreducible representation is irreducible. Therefore the dual
modules $V_{I}(\gamma_1,\gamma_2,\gamma_3;i)^*$,
$V_{II}(\gamma_1,\gamma_2,\gamma_3;i)^*$,
$V_{0}(\gamma_1,\gamma_2,\gamma_3;i)^*$, and
$V_{r}(\gamma_1,\gamma_2,\gamma_3;i)^*$ are also irreducible
representations of $H_\beta$.
\end{remark}
\section{Grothendieck ring $G_0(H_\beta)$}
We recall some notation. Let $R$ be an algebra over the field ${\bf K}$.
Recall that the Grothendieck group $G_0(R)$ is the abelian group generated by the
set $\{[M]\mid M$ is a left $R$-module$\}$ of isomorphism classes of finite-dimensional $R$-modules with relations $[B]=[A]+[C]$ for every short exact sequence
$0\to A\to B\to C\to 0$ of $R$-modules. Unlike for ordinary algebras, the Grothendieck group $G_0(H)$ of a Hopf algebra $H$ carries a product given by $[M][N]=[M\otimes N]$. With this product, the Grothendieck group $G_0(H)$ becomes a ring, called the Grothendieck ring of the Hopf algebra $H$.
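As a standard illustration of this product (under the assumption that the characteristic of ${\bf K}$ does not divide $n$), consider the group algebra ${\bf K}[{\mathbb Z}_n]$: its irreducible modules are the one-dimensional characters of ${\mathbb Z}_n$, the tensor product of modules corresponds to the product of characters, and hence $G_0({\bf K}[{\mathbb Z}_n])\cong{\mathbb Z}[{\mathbb Z}_n]$; group rings of this form reappear in the results below.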
Assume that $H_\beta$ is the Hopf algebra defined in Section~\ref{sec-2} and the modules
$$V_I(\gamma_1,\gamma_2,\gamma_3;i),\,
V_{II}(\gamma_1,\gamma_2,\gamma_3;i), \,
V_{0}(\gamma_1,\gamma_2,\gamma_3;i) \, \,
\text{and}\;
V_r(\gamma_1,\gamma_2,\gamma_3;i)$$
are defined in Lemma~\ref{lem52}, Lemma~\ref{lem53} and Lemma~\ref{lem54}. In this section, we determine the Grothendieck ring $G_0(H_\beta)$ of the Hopf algebra $H_\beta$ in several cases, which is generated by
$$[V_I(\gamma_1,\gamma_2,\gamma_3;i)],\,
[V_{II}(\gamma_1,\gamma_2,\gamma_3;i)],\,
[V_0(\gamma_1,\gamma_2,\gamma_3;i)],\,
\text{and}\; [V_r(\gamma_1,\gamma_2,\gamma_3;i)],$$
where $0\leq i\leq n-1$, $2\leq r\leq t$,
$(\gamma_1,\gamma_2,\gamma_3)\in(\bf{K}^*)^3$ and $t=|q^{n_1}|$ is the order of $q^{n_1}$. The classification results for all cases are given in several lemmas and theorems. In detail, the Grothendieck rings in the cases where exactly two of the three equations $\beta_1=0$, $\beta_2=0$ and $\beta_3=0$ hold are stated in Theorems~\ref{L55},\,\ref{thmVI} and \ref{thmV}; for these, Lemmas~\ref{L54},\, \ref{L52} and \ref{L53} are necessary. In Theorems \ref{V}, \, \ref{T9} and \ref{T10}, we treat the cases in which exactly one of the conditions $\beta_1=0$, $\beta_2=0$ and $\beta_3=0$ holds. In addition, we also obtain properties of the Grothendieck rings $G_0(U_{(N,\nu,\omega)})$ and $G_0({\mathcal
U}_{(n,N,\nu,q,\alpha,\beta,\gamma)})$ in several corollaries. We also observe that there exist non-isomorphic Hopf algebras with isomorphic Grothendieck rings.
In order to discuss $G_0(H_\beta)$ in the case $\beta_1=\beta_2=\beta_3=0$, the analysis of the subring of $G_0(H_\beta)$ generated by all $[V_0(\gamma_1,\gamma_2,\gamma_3;i)]$ for all nonzero $\gamma_i\in{\bf K}$ and $i\in\mathbb{Z}_n$ is crucial.
We present the results of this case in Theorem~\ref{them52}; the following Lemma \ref{L51} is necessary.
\begin{lemma}\label{L51}
The product of the Grothendieck ring $G_0(H_\beta)$ satisfies
$$[V_0(\gamma_1,\gamma_2,\gamma_3;i)][V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)]= [V_0(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)]$$
for any $i,j\in\mathbb{Z}_n$ and
$\gamma_k,\gamma_k'\in{\bf K}^*$, $k\in\{1,2,3\}$.
\end{lemma}
\begin{proof}
By the definition of $V_0(\gamma_1,\gamma_2,\gamma_3;i)$ in Lemma~\ref{lem53}, we obtain that the tensor product $$V_0(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)$$ is generated by $1\otimes 1$. Since $$\beta_1(\gamma_1^{n_1}-\gamma_2^n)=\beta_2(\gamma_1^{n_1}-\gamma_3^n)=\beta_3(\gamma_1^\frac{2n_1}{n}q^{2n_1i}-\gamma_2\gamma_3)=0$$ and
$$\beta_1(\gamma_1'^{n_1}-\gamma_2'^n)=\beta_2(\gamma_1'^{n_1}-\gamma_3'^n)=\beta_3(\gamma_1'{^\frac{2n_1}{n}}q^{2n_1j}-\gamma_2'\gamma_3')=0,$$
we have
$$a\cdot(1\otimes 1)=(a\cdot 1)\otimes (a\cdot 1)=\sqrt[n]{\gamma_1\gamma'_1}q^{i+j}1\otimes 1,$$
$$b\cdot(1\otimes 1)=(b\cdot 1)\otimes (b\cdot 1)=(\gamma_2\gamma'_2)1\otimes 1,\quad
c\cdot(1\otimes 1)=(c\cdot 1)\otimes (c\cdot 1)=(\gamma_3\gamma'_3)1\otimes 1,$$
$$x\cdot(1\otimes 1)=(x\cdot 1)\otimes (a^{n_1}\cdot 1)+ (b\cdot 1)\otimes (x\cdot 1) =0,$$
$$y\cdot(1\otimes 1)=(y\cdot 1)\otimes (a^{n_1}\cdot 1)+ (c\cdot 1)\otimes (y\cdot 1) =0.$$
Therefore, $V_0(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\cong V_0(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j).$
\end{proof}
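In particular, iterating Lemma \ref{L51} gives the factorization
$$[V_0(\gamma_1,\gamma_2,\gamma_3;i)]=[V_0(\gamma_1,\gamma_2,\gamma_3;0)]\,[V_0(1,1,1;1)]^{i},$$
which is used repeatedly in the computations below.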
By Lemma \ref{L51}, we can prove the following result.
\begin{theorem}\label{them52}
Let $R_1$ be the subring of the Grothendieck ring $G_0(H_\beta)$ generated by all $[V_0(\gamma_1,\gamma_2,\gamma_3;i)]$ for all nonzero $\gamma_i\in{\bf K}$ and $i\in\mathbb{Z}_n$. Then we have the following.
\begin{itemize}
\itemsep=0pt
\item [(i)] If at most one of $\beta_1, \beta_2, \beta_3$ equals zero, then $R_1$ is isomorphic to $\mathbb{Z}[ \mathbb{Z}_{n_1}\times \mathbb{Z}_n^2\times {\bf K^*}]$;
\item [(ii)] If only $\beta_1\neq 0$ or $\beta_2\neq 0$, then $R_1$ is isomorphic to $\mathbb{Z}[ \mathbb{Z}_{n_1}\times \mathbb{Z}_n\times {(\bf K^*)}^2]$;
\item [(iii)] If only $\beta_3\neq 0$, then $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{2n_1}\times \mathbb{Z}_n\times {(\bf K^*)}^2]$;
\item [(iv)] If $\beta_1=\beta_2=\beta_3=0$, then $R_1$ is isomorphic to $\mathbb{Z}[ \mathbb{Z}_n\times {\bf K^*}\times {\bf K^*}\times {\bf K^*}]$.\\\noindent Thus $G_0(H_\beta)\cong\mathbb{Z}[ \mathbb{Z}_n\times {\bf K^*}\times {\bf K^*}\times {\bf K^*}]$ if $\beta=(\beta_1,\beta_2,\beta_3)=0$.
\end{itemize}
\end{theorem}
\begin{proof}
We prove this claim by several cases.
\begin{itemize}
\itemsep=0pt
\item[(i)]
First, we consider the case where $\beta_1\beta_2\beta_3\neq0$. By $\beta_1''=0$ and $\beta_2''=0$, we obtain $\beta_1'(1-\frac{\gamma_2^n}{\gamma_1^{n_1}})=0 $ and $\beta_2'(1-\frac{\gamma_3^n}{\gamma_1^{n_1}})=0$.
Hence $\gamma_1=\gamma_2^{\frac{n}{n_1}}\bar{\omega}^p=\gamma_3^{\frac{n}{n_1}}\bar{\omega}^l$ and $\gamma_3=q^u\gamma_2$ for some integers $p,l,u$, where $\bar{\omega}$ is a primitive $n_1$-th root of unity. Thus
\begin{equation}\label{EQ1}\begin{array}{lll}
V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(\bar{\omega}^p,1,q^u;i)\otimes V_0(\gamma_2^{\frac{n}{n_1}},\gamma_2, \gamma_2; 0)\\
&\cong&V_0(\bar{\omega},1,1;0)^{\otimes p}\otimes V_0(1,1,q^u;0)\otimes V_0(1,1,1;1)^{\otimes i}\otimes V_0(\gamma_2^{\frac{n}{n_1}},\gamma_2,\gamma_2; 0)\end{array}
\end{equation} for some nonzero $\gamma_2$ and $i\in \mathbb{Z}_n$. Then $R_1$ is isomorphic to the group algebra $\mathbb{Z}[ \mathbb{Z}_{n_1}\times \mathbb{Z}^2_n\times {\bf K^*}]$, where ${\bf K^*}$ is the multiplicative group of all nonzero elements of ${\bf K}$.
If $\beta_2\beta_3\neq 0$ and $\beta_1=0$, then $\gamma_1=\gamma_3^{\frac{n}{n_1}}\bar{\omega}^p$ for some integer $p$. Since $\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3=q^{2n_1i}$, $\gamma_2=q^{2n_1i+u}\gamma_3$ for some integer $u$. Thus
$$\begin{array}{lll}V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(\bar{\omega}^p,q^{2n_1i+u}, 1;i)\otimes V_0(\gamma_3^{\frac{n}{n_1}}, \gamma_3, \gamma_3; 0)\\
&\cong&V_0(\bar{\omega},1,1;0)^{\otimes p}\otimes V_0(1, q^{2n_1i+u},1;0)\otimes V_0(1,1,1;1)^{\otimes i}\otimes V_0(\gamma_3^{\frac{n}{n_1}}, \gamma_3, \gamma_3; 0).
\end{array}$$
Hence $R_1$ is isomorphic to the group algebra $\mathbb{Z}[ \mathbb{Z}_{n_1}\times\mathbb{Z}_n^2\times {\bf K^*}]$. Similarly, we can prove that $ R_1$ is also isomorphic to $\mathbb{Z}[\mathbb{Z}_{n_1}\times\mathbb{Z}_n^2\times {\bf K}^*]$ if $\beta_1\beta_3\neq0$ and $\beta_2=0$.
If $\beta_1\beta_2\neq 0$ and $\beta_3=0$, then $\gamma_1=\gamma_3^{\frac{n}{n_1}}\bar{\omega}^p$ and $\gamma_2=\gamma_3q^u$ for some integers $p, u$.
Thus
$$\begin{array}{lll}V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(\bar{\omega},1,1;0)^{\otimes p}\otimes V_0(1,1, 1;i)\otimes V_0(\gamma_3^{\frac{n}{n_1}}, q^{u}\gamma_3, \gamma_3; 0)\\
&\cong&V_0(\bar{\omega},1,1;0)^{\otimes p}\otimes V_0(1,q,1;0)^{\otimes u}\otimes V_0(1, 1,1;1)^{\otimes i}\otimes V_0(\gamma_3^{\frac{n}{n_1}}, \gamma_3, \gamma_3; 0).
\end{array}$$
Hence $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{n_1}\times \mathbb{Z}^2_n\times {\bf K}^*]$.
\item[(ii)]
If $\beta_1\neq 0$ and $\beta_2=\beta_3=0$, then $\gamma_1=\gamma_2^{\frac{n}{n_1}}\bar{\omega}^p$ for some integer $p$. Thus $$\begin{array}{lll}V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(\bar{\omega}^p,1, 1;i)\otimes V_0(\gamma_2^{\frac{n}{n_1}}, \gamma_2, \gamma_3; 0)\\
&\cong& V_0(\bar{\omega},1,1;0)^{\otimes p}\otimes V_0(1, 1,1;1)^{\otimes i}\otimes V_0(\gamma_2^{\frac{n}{n_1}}, \gamma_2, \gamma_2; 0)\otimes V_0(1,1,\gamma_2^{-1}\gamma_3;0).
\end{array}$$
Hence $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{n_1}\times \mathbb{Z}_n\times{({\bf K}^*)}^2]$. Similarly, we can prove that $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{n_1}\times \mathbb{Z}_n\times{({\bf K}^*)}^2]$ if $\beta_2\neq0$ and $\beta_1=\beta_3=0$.
\item[(iii)]
If $\beta_1=\beta_2=0$ and $\beta_3\neq 0$, then $\gamma_1=(\gamma_2\gamma_3)^{\frac{n}{2n_1}}\bar{\omega}^{\frac p2}$ for some integer $p$. Thus $$\begin{array}{lll}V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(\bar{\omega}^{\frac p2},1,1;0)\otimes V_0(1,1, 1;i)\otimes V_0((\gamma_2\gamma_3)^{\frac{n}{2n_1}}, \gamma_2, \gamma_3; 0)
\end{array}.$$
Hence $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{2n_1}\times \mathbb{Z}_n\times{({\bf K}^*)}^2]$.
\item[(iv)]
Finally, if $\beta=0$, then $\begin{array}{lll}V_0(\gamma_1,\gamma_2,\gamma_3,i)&\cong &V_0(1,1, 1;i)\otimes V_0(\gamma_1, \gamma_2, \gamma_3; 0).
\end{array}$
Hence $R_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_n\times{\bf K}^*\times{\bf K}^*\times{\bf K}^*]$.
\end{itemize}
\end{proof}
Since $H_{\beta}/(a^N-1,b-1,c-1)$ is isomorphic to Gelaki's Hopf algebra ${\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)}$, where $n|N$, we obtain the following properties of Gelaki's Hopf algebra.
\begin{corollary}\label{cor53}Suppose that $n|N$. Let $\mathfrak{q}$ be a primitive $N$-th root of unity and $S_1$ be the subring of $R_1$ generated by $[V_0(\gamma,1,1;i)]$, where $\gamma^{\frac{N}{n}}=1$. Then we have the following result.
\begin{itemize}
\itemsep=0pt
\item[(1)]
If either $\beta_1\beta_2\neq 0$ and $\beta_3=0$, or $\beta_1\neq 0$ and $\beta_2=\beta_3=0$, or $\beta_2\neq 0$ and $\beta_1=\beta_3=0$, then $S_1=\mathbb{Z}[g,h]$ for $g=[V_0(1,1,1;1)]$ and $h=[V_0(\mathfrak{q}^{\frac{N}{(N/n,n_1)}},1,1;0)]$. Moreover,
$$g^n=h^{(N/n,n_1)}=1 \quad \text{and} \quad \mathbb{Z}[g,h]\simeq \mathbb{Z}[\mathbb{Z}_{(N/n,n_1)}\times \mathbb{Z}_n].$$
\item[(2)]
If $\beta_3\neq 0$ and $\beta_1=\beta_2=0$, then $S_1=\mathbb{Z}[g,h_1]$ for $g=[V_0(1,1,1;1)]$ and $$h_1=[V_0(\mathfrak{q}^{\frac{Nn}{(N,2n_1)}},1, 1;0)].$$ Moreover, $g^n=h_1^{\mu}=1$ and $\mathbb{Z}[g,h_1]\simeq \mathbb{Z}[\mathbb{Z}_{\mu}\times \mathbb{Z}_n]$, where $\mu=\frac{(N,2n_1)}{(n,2n_1)}$.
\item[(3)]
If either $\beta_1\beta_3\neq 0$ and $\beta_2=0$, or $\beta_2\beta_3\neq 0$ and $\beta_1=0$, or $\beta_1\beta_2\beta_3\neq 0$, then $S_1=\mathbb{Z}[g,h_2]$ for $g=[V_0(1,1,1;1)]$ and $h_2=[V_0(\mathfrak{q}^{n\nu},1,1;0)]$, where $\nu=\frac{N}{(N,2n_1,nn_1)}$. Moreover, $g^n=h_2^{\mu'}=1$ and $\mathbb{Z}[g,h_2]\simeq \mathbb{Z}[\mathbb{Z}_{\mu'}\times\mathbb{Z}_n]$, where $\mu'=\frac{(N,2n_1,nn_1)}{(n,2n_1)}$.
\item[(4)]
If either $\beta_1=\beta_2=\beta_3=0$, then $S_1=\mathbb{Z}[g,h_3]$ for $g=[V_0(1,1,1;1)]$ and $h_3=[V_0(\mathfrak{q}^n,1,1;0)]$. Moreover, $g^n=h_3^\frac{N}{n}=1$ and $\mathbb{Z}[g,h_3]\simeq \mathbb{Z}[\mathbb{Z}_{N/n}\times \mathbb{Z}_n].$
\item[(5)]
If $\beta_1=\beta_2=\beta_3=0$, then the Grothendieck ring $G_0({\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)})$ of the Gelaki's Hopf algebra ${\mathcal U}_{(n,N,n_1,q,0,0,0)}$ is equal to $\mathbb{Z}[g,h_3]$.
\end{itemize}
\end{corollary}
\begin{proof}
If $\beta_1\beta_2\neq 0$ and $\beta_3=0$, then $\gamma=\bar{\omega}^p$ for some integer $p$ satisfying $\gamma^{\frac{N}{n}}=1$, where $\bar{\omega}$ is a primitive $n_1$-th root of unity. Thus $n_1|\frac{N}{n}p$ and $\frac{n_1}{(n_1,\frac{N}{n})}|p$.
Therefore $\gamma\in\langle\bar{\omega}^{\frac{n_1}{(N/n,n_1)}}\rangle=\langle\mathfrak {q}^{\frac{N}{(N/n,n_1)}}\rangle
$, where $\mathfrak{q}$ is a primitive $N$-th root of unity. Hence $S_1=\mathbb{Z}[g,h]\simeq \mathbb{Z}[\mathbb{Z}_{{(N/n,n_1)}}\times \mathbb{Z}_{n}]$ by (\ref{EQ1}), where $g=[V_0(1,1,1;1)]$, $h=[V_0( \mathfrak{q}^{\frac{N}{(N,n_1)}},1,1;0)]$. It is obvious that $g^n=h^{{(N/n,n_1)}}=1$.
If $\beta_3\neq 0$ and $\beta_1=\beta_2=0$, then $\gamma^{-\frac{2n_1}n}=1$ for $V_0(\gamma,1,1;0)$. Since $\gamma^{\frac{N}n}=1$, $\gamma=\mathfrak{q}^{nr}$ for some integer $r$, where $\mathfrak{q}$ is a primitive $N$-th root of unity. Then $\mathfrak{q}^{-2rn_1}=1$ and $N|2rn_1$. Thus $r=\frac{N}{(N,2n_1)}p$ for any integer $p$. Hence $\gamma\in\langle \mathfrak{q}^{\frac{Nn}{(N,2n_1)}}\rangle$. Thus
$V_0(\gamma,1,1;i)\cong V_0(\mathfrak {q}^{\frac{Nn}{(N,2n_1)}},1,1;0)^{\otimes p}\otimes V_0(1,1,1;1)^{\otimes i}.$
Hence $S_1=\mathbb{Z}[g,h_1]$ for $h_1=[V_0(\mathfrak{q}^{\frac{Nn}{(N,2n_1)}},1,1;0)].$ It is obvious that $S_1$ is isomorphic to the group algebra $\mathbb{Z}[ \mathbb{Z}_{\mu}\times\mathbb{Z}_n]$, where $\mu=\frac{(N,2n_1)}{(n,2n_1)}$.
If either $\beta_1\beta_3\neq 0$, or $\beta_2\beta_3\neq 0$, or $\beta_1\beta_2\beta_3\neq 0$, then $\gamma^{\frac{N}n}=\gamma^{n_1}=\gamma^{\frac{2n_1}n}=1$ for $V_0(\gamma,1,1;0)$. Thus $\gamma=\mathfrak{q}^{nr}$ for some integer $r$ and $\mathfrak{q}^{nrn_1}=\mathfrak{q}^{2n_1r}=1$. Hence $r=\frac{N}{(N,nn_1)}p=\frac{N}{(N,2n_1)}p'$ for some integers $p,p'$. Let $\bar{d}=(\frac{N}{(N,nn_1)},\frac{N}{(N,2n_1)})$ and $\nu=\frac{N^2}{\bar{d}(N,nn_1)(N,2n_1)}=\frac{N}{(N,2n_1,nn_1)}$. Then $\gamma\in\langle\mathfrak{q}^{n\nu}\rangle$. Thus $S_1=\mathbb{Z}[g,h_2]$ for $h_2=[V_0(\mathfrak{q}^{n\nu},1,1;0)].$ It is obvious that $S_1$ is isomorphic to the group algebra $\mathbb{Z}[ \mathbb{Z}_{\mu'}\times\mathbb{Z}_n]$, where $\mu'=|\mathfrak{q}^{n\nu}|=\frac{(N,2n_1,nn_1)}{(n,2n_1)}$.
If $\beta_1\neq 0$ and $\beta_2=\beta_3=0$, then $\gamma=\bar{\omega}^p$ for some integer $p$. Thus $S_1=\mathbb{Z}[g,h]$, which is isomorphic to $\mathbb{Z}[\mathbb{Z}_{(N/n, n_1)}\times \mathbb{Z}_n]$. Similarly, we can prove that $S_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{(N/n,n_1)}\times \mathbb{Z}_n]$ if $\beta_2\neq0$ and $\beta_1=\beta_3=0$.
Finally, if $\beta=0$, then $\begin{array}{lll}V_0(\gamma,1,1,i)&\cong &V_0(1,1, 1;i)\otimes V_0(\gamma, 1, 1; 0).
\end{array}$
Hence $S_1=\mathbb{Z}[g,h_3]$, where $h_3=[V_0(\mathfrak{q}^n,1,1;0)].$ Thus $S_1$ is isomorphic to $\mathbb{Z}[\mathbb{Z}_{N/n}\times \mathbb{Z}_n]$.
\end{proof}
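For a concrete instance of part (4) (with hypothetical parameter values $n=2$, $N=4$ and $\beta_1=\beta_2=\beta_3=0$), we have $g^2=h_3^{2}=1$ and $S_1=\mathbb{Z}[g,h_3]\simeq\mathbb{Z}[\mathbb{Z}_{2}\times\mathbb{Z}_2]$.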
Let $\mathfrak{q}$ be a primitive $N$-th root of unity. Set
$g:=[V_0(1,1,1;1)],\;
\mathfrak{s}:=\sum\limits_{p=0}^{n-1}g^p, \;
\mathfrak{s}':=\sum\limits_{k=1}^{u}g^{kt},\\
\mathfrak{s}'':=\sum\limits_{k=0}^{t-1}g^{n-k}, \;
h:=[V_0( \mathfrak{q}^{\frac{N}{(N,n_1)}},1,1;0)],\;
h_1:=[V_0( \mathfrak{q}^{\frac{Nn}{(N,2n_1)}},1,1;0)],\;
h_2:=[V_0(\mathfrak{q}^{n\nu},1,1;0)],\\
h_3:=[V_0(\mathfrak{q}^{n},1,1;0)],\;$
and
$ g_{\gamma_1,\gamma_2,\gamma_3}: =[V_0(\gamma_1,\gamma_2,\gamma_3;0)]\in G_0(H_\beta).$
To determine the Grothendieck ring $G_0(H_\beta)$ for $\beta_3\neq 0$, the following lemma is necessary.
\begin{lemma}\label{L54}
Suppose that $\beta_3(q^{2n_1i}-\gamma_1^{-\frac{2n_1}n}\gamma_2\gamma_3)\neq 0$, $\beta_1(\gamma_1'^{n_1}-\gamma_2'^{n})=\beta_2(\gamma_1'^{n_1}-\gamma_3'^{n})=0$ and
$\beta_3(\gamma_1'^{-\frac{2n_1}{n}}\gamma_2'\gamma_3'-q^{2n_1j})=0$. Then
\begin{eqnarray}
[V_r(\gamma_1,\gamma_2,\gamma_3;i)][V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)]&=&[V_r(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)]\nonumber\\ &=&[V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)][V_r(\gamma_1,\gamma_2,\gamma_3;i)].\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof} As $\beta_3(\gamma_1^\frac{2n_1}{n}q^{2n_1i}-\gamma_2\gamma_3)\neq0$, we have $\beta_3\neq0$.
Moreover,
$$\gamma_1'^{\frac{2n_1}{n}}q^{2n_1j}=\gamma_2'\gamma_3', ((\gamma_1\gamma_1')^{\frac{2n_1}{n}}q^{2n_1(i+j)}-(\gamma_2\gamma_2')(\gamma_3\gamma_3'))\beta_3=\beta_3(\gamma_1^{\frac{2n_1}{n}}q^{2n_1i}-\gamma_2\gamma_3)(\gamma_2'\gamma'_3)\neq0.$$
Since $\beta_1(\gamma_1^{n_1}-\gamma_2^n)=\beta_2(\gamma_1^{n_1}-\gamma_3^n)=0$ and $\beta_1(\gamma_1'^{n_1}-\gamma_2'^n)=\beta_2(\gamma_1'^{n_1}-\gamma_3'^n)=0$, we have $$\beta_1((\gamma_1\gamma'_1)^{n_1}-(\gamma_2\gamma'_2)^n)
=\beta_2((\gamma_1\gamma_1')^{n_1}-(\gamma_3\gamma_3')^n)=0.$$
If $\gamma_1^{-\frac {2n_1}n}\gamma_2\gamma_3= q^{n_1(2i-r+1)}$, then $(\gamma_1\gamma'_1)^{- \frac {2n_1}n}(\gamma_2\gamma_2')(\gamma_3\gamma_3')= q^{n_1(2(i+j)-r+1)}$. If $\gamma_1^{-\frac{2n_1}{n}}\gamma_2\gamma_3\neq q^{n_1v}$ for any integer $v$, then $(\gamma_1\gamma'_1)^{-\frac {2n_1}n}(\gamma_2\gamma_2')(\gamma_3\gamma_3')\neq q^{n_1v}$ for any integer $v$.
Let $\{m_0,m_1,\cdots,m_{r-1}\}$ be the basis of $V_r(\gamma_1,\gamma_2,\gamma_3;i)$ with the action given by (\ref{eq15})-(\ref{eq17}). Then $\{m_0\otimes 1,m_1\otimes 1,\cdots, m_{r-1}\otimes 1\}$ is a basis of $V_r(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)$ and the action on this basis is as follows:
$$a\cdot(m_l\otimes 1)=(a\cdot m_l)\otimes (a\cdot 1)=\sqrt[n]{\gamma_1\gamma'_1}q^{i+j-l}m_l\otimes 1,$$
$$b\cdot(m_l\otimes 1)=(b\cdot m_l)\otimes (b\cdot 1)=(\gamma_2\gamma'_2)m_l\otimes 1,\quad
c\cdot(m_l\otimes 1)=(c\cdot m_l)\otimes (c\cdot 1)=(\gamma_3\gamma'_3)m_l\otimes 1,$$
$$x\cdot(m_l\otimes 1)=(x\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (b\cdot m_l)\otimes (x\cdot 1) =\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l+1}\otimes 1,\; for\; 0\leq l\leq r-2,$$
$$x\cdot(m_{r-1}\otimes 1)=(x\cdot m_{r-1})\otimes (a^{n_1}\cdot 1)+ (b\cdot m_{r-1})\otimes (x\cdot 1) =0,$$
$$y\cdot(m_l\otimes 1)=(y\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (c\cdot m_l)\otimes (y\cdot 1) =k_l\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l-1}\otimes 1,\; for\; 1\leq l\leq r-1,$$
$$y\cdot(m_0\otimes 1)=(y\cdot m_0)\otimes (a^{n_1}\cdot 1)+ (c\cdot m_0)\otimes (y\cdot 1) =0.$$
Let $u_l:=\gamma'^{\frac{n_1l}{n}}_1q^{jn_1l}m_l\otimes 1$, then $\{u_0,\cdots, u_{r-1}\}$ is a new basis of $V_r(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)$. Under this basis,
\begin{eqnarray*}au_k=\sqrt[n]{\gamma_1\gamma_1'}q^{i+j-k}u_k;\ \
bu_k=\gamma_2\gamma_2'u_k;\ \ cu_k=\gamma_3\gamma_3'u_k\qquad for\quad 0\leq k\leq r-1;
\end{eqnarray*}
\begin{eqnarray*}xu_k=u_{k+1},\qquad for\quad 0\leq k\leq r-2;\qquad
xu_{r-1}=0;
\end{eqnarray*}
\begin{eqnarray*}yu_p=\gamma_1'^{\frac{2n_1}{n}}q^{2jn_1}k_pu_{p-1},\qquad for\quad 1\leq p\leq
r-1;\qquad yu_{0}=0.
\end{eqnarray*}
Hence $V_r(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\cong V_r(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)$.
Similarly, we get $$V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\otimes V_r(\gamma_1,\gamma_2,\gamma_3;i)\cong V_r(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j).$$
\end{proof}
Next, we discuss the case in which there is an integer $v$ such that $q^{(2i-v+1)n_1}=\gamma_1^{-\frac{2n_1}{n}}\gamma_2\gamma_3$. Let $r$ be the minimal positive integer such that $q^{(2i-r+1)n_1}=\gamma_1^{-\frac{2n_1}{n}}\gamma_2\gamma_3$. From Lemma \ref{L54}, we obtain
\begin{equation}\label{EQ41}
V_r(\gamma_1,\gamma_2,\gamma_3,i)\cong V_r(1,1,1;0)\otimes V_0(\gamma_1,\gamma_2,\gamma_3; i), \ \ \text{for} \ 2\leq r\leq t.
\end{equation}
By Equations (\ref{eq16}) and
(\ref{eq17}), the action of $H_\beta$ on $V_r(1,1,1;0)$ is given by $a\cdot m_j=q^{\frac12(r-1)-j}m_j$, $b\cdot m_j=c\cdot m_j=m_j$, where $k_1,\cdots,k_r $ are determined by $k_l=\sum\limits^{l-1}_{j=0}q^{-jn_1}\beta_3(q^{(r-l)n_1}-1)$
for $1\leq l\leq r-1.$
In the case that $\gamma_1^{\frac{2n_1}{n}}q^{vn_1}\neq\gamma_2\gamma_3$ for any integer $v$, we have
\begin{equation}\label{EQ42}
V_t(\gamma_1,\gamma_2,\gamma_3,i)\cong
\begin{cases}
V_t(\gamma_1(\gamma_2\gamma_3)^{-\frac{n}{2n_1}},1,1;0)\otimes V_0((\gamma_2\gamma_3)^{\frac{n}{2n_1}},\gamma_2, \gamma_3; i), \ & \beta_1=\beta_2=0\ \text{or} \ \beta_1\beta_2\neq0\\
V_t(1,\gamma_2\gamma_3^{-1},1;0)\otimes V_0(\gamma_1,\gamma_3, \gamma_3; i), \ & \beta_1=0, \beta_2\neq0\\
V_t(1,1,\gamma_3\gamma_2^{-1};0)\otimes V_0(\gamma_1,\gamma_2, \gamma_2; i), \ & \beta_1\neq 0, \beta_2=0
\end{cases}\nonumber
\end{equation}
where $\gamma_1'=(\gamma_3\gamma_2)^{-\frac{n}{2n_1}}\gamma_1 \neq \bar{\omega}^{\frac{p(n,n_1)}2}$ for any integer $p$.
Let $R_2$ be the subring of the Grothendieck ring $G_0(H_\beta)$ for $\beta_3\neq 0$ generated by
$R_1$ and $V_r(\gamma_1,\gamma_2,\gamma_3;i)$.
Then $R_2$ is isomorphic to a quotient ring of $R_1[z_r, z_{\xi}'| 2\leq r\leq t, \xi\in\widetilde{{\bf K}_0}]$, where
$\widetilde{{\bf K}_0}:=({\bf K}^*/\{\bar{\omega}^{\frac{p(n,n_1)}{2}}\mid p\in \mathbb{Z}\})\setminus \{\overline{1}\}$, $z_r=[V_r(1,1,1;0)]$ for $2\leq r\leq t$ and $z_{\xi}'=[V_t(\xi,1,1;0)]$ for $\xi\in\widetilde{{\bf K}_0}$. Moreover, if $\beta_3\neq 0$ and $\beta_1=\beta_2=0$, then $R_2=G_0(H_\beta)$.
Let $r_p\equiv(r-2p)\ mod\ (t)$ with $1\leq r_p\leq t$, where $r$ is the minimal positive integer such that $\xi\xi'=q^{(-r+1)n_1}$. Then we obtain the following theorem about the Grothendieck ring $G_0(H_\beta)$.
\begin{theorem}\label{L55}
Suppose that $\beta_1=\beta_2=0$, $\beta_3\neq 0$. Then the Grothendieck ring $G_0(H_\beta)$ is isomorphic to $R_2:=R_1[z_2, z_{\xi}'| \xi\in\widetilde{{\bf K}_0}]$ with relations \begin{eqnarray}\label{Eq*1}\sum\limits_{v=0}^{[\frac{t}{2}]}(-1)^v\frac{t}{t-v}\binom{t-v}{v}g^{(n-1)v}z_2^{t-2v}-g^{n-t}-1=0,\end{eqnarray}
$$z_2z_\xi'=z_{\xi q^\frac{n}{2}}'+g^{n-1}z_{\xi q^\frac{n}{2}}',\qquad z_\xi'z_{\xi'}'=\mathfrak{s}''z'_{\xi\xi'} \text { for }\ \xi\xi'\in \widetilde{\bf K}_0, $$
and
$$z_\xi'z_{\xi'}'=\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t, \text{ for }\ \xi\xi'\in\{q^{un_1}\mid u\in{\mathbb{Z}}\},$$
where $z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}$ for $3\leq r\leq t$.
\end{theorem}
\begin{proof} (1) Suppose that $\{m_0,\cdots,m_{r-1}\}$ and $\{m_0',m_{1}'\}$ are the basis of $V_{r}(1,1,1;0)$ and $V_{2}(1,1,1;0)$, respectively. Then $\{m_l\otimes m'_k,\; 0\leq l\leq r-1, 0\leq k\leq 1\}$ is a basis of $V_{r}(1,1,1;0)\otimes V_{2}(1,1,1;0)$. With this basis, we have
$$a\cdot(m_l\otimes m'_k)=(a\cdot m_l)\otimes (a\cdot m'_k)=q^{\frac{r}2-l-k}m_l\otimes m'_k,$$
$$b\cdot(m_l\otimes m'_k)=(b\cdot m_l)\otimes (b\cdot m'_k)=m_l\otimes m'_k,\quad
c\cdot(m_l\otimes m'_k)=(c\cdot m_l)\otimes (c\cdot m'_k)=m_l\otimes m'_k,$$
$$y\cdot(m_0\otimes m'_0)=(y\cdot m_0)\otimes (a^{n_1}\cdot m'_0)+ (c\cdot m_0)\otimes (y\cdot m'_0) =0.$$
Since
$\Delta(x^k)=\sum\limits^k_{l=0}\binom{k}{l}_{q^{n_1}}b^lx^{k-l}\otimes a^{(k-l)n_1}x^l
$ in $H_\beta$, where $\binom{k}{l}_{q^{n_1}}=\frac{(k)_{q^{n_1}}!}{(l)_{q^{n_1}}!(k-l)_{q^{n_1}}!}$ and $(p)_{q^{n_1}}!=(p)_{q^{n_1}}(p-1)_{q^{n_1}}\cdots (1)_{q^{n_1}}$ for $(p)_{q^{n_1}}=1+q^{n_1}+\cdots +q^{(p-1)n_1}$, we have
\begin{eqnarray}
x^{k'}(m_0\otimes m'_0)&=&(\sum\limits^{k'}_{l=0}\binom{k'}{l}_{q^{n_1}}b^lx^{k'-l}\otimes a^{(k'-l)n_1}x^l)(m_0\otimes m'_0)\nonumber\\
&=&\sum\limits^{k'}_{l=0}\binom{k'}{l}_{q^{n_1}}q^{(\frac{1}2-l)(k'-l)n_1}m_{k'-l}\otimes m'_l\nonumber.
\end{eqnarray}
Thus $x^{k'}(m_0\otimes m'_0)=0$ only if $k'\geq l_0:=min\{t,r+1\}$.
If $2\leq r\leq t-1$, then $x^{r+1}(m_0\otimes m_0')=0$. In addition,
\begin{eqnarray}\label{Eq21}\begin{array}{lll}
(yx^{k'})(m_0\otimes m'_0)&=&(q^{-n_1k'}x^{k'}y+\sum\limits^{k'-1}_{v=0}q^{-(k'-v-1)n_1}\beta_3x^{k'-1}(q^{-2n_1v}a^{2n_1}-bc))(m_0\otimes m'_0)\\
&=&\sum\limits^{k'-1}_{v=0}q^{-vn_1}\beta_3(q^{(r-k'+1)n_1}-1)x^{k'-1}\cdot(m_0\otimes m'_0) .
\end{array}\end{eqnarray}
Let $u_l=x^l(m_0\otimes m_0')$. Then $\{u_0,\cdots, u_{r}\}$ is a basis of $H_\beta(m_0\otimes m'_0)$, which is isomorphic to $V_{r+1}(1,1,1;0)$ for $r\leq t-1$. Let
$A_1:=H_\beta(m_0\otimes m'_1)/H_\beta(m_0\otimes m'_0)$. Then $$y(m_0\otimes m'_1)=(y\cdot m_0)\otimes (a^{n_1}\cdot m'_1)+ (c\cdot m_0)\otimes (y\cdot m'_1) =k'_1m_0\otimes m'_{0}=\overline{0}.$$
It is easy to prove that $x^p(m_0\otimes m_1')=q^{\frac{pn_1}2}m_p\otimes m_1'\notin H_\beta(m_0\otimes m_0')$ for $0\leq p\leq r-2$ and $x^{r-1}(m_0\otimes m_1')\in H_\beta(m_0\otimes m_0')$. Let $u_l=x^l(\overline{m_0\otimes m_1'})$. Then $\{u_0,\cdots, u_{r-2}\}$ is a basis of $H_\beta(\overline{m_0\otimes m'_1})$. Since $a(m_0\otimes m_1')=q^{\frac12(r-2)}m_0\otimes m_1',$ $H_\beta(\overline{m_0\otimes m_1'})\cong V_{r-1}(1,1,1;0)\otimes V_0(1,1,1,n-1)$.
Thus $z_rz_2=z_{r+1}+g^{n-1}z_{r-1}$ for $2\leq r\leq t-1$. In particular, $z_2^2=z_3+g^{n-1}.$
So $z_{r+1}=z_rz_2-g^{n-1}z_{r-1}$ for $2\leq r\leq t-1$.
If $r=t$, then $yx(m_0\otimes m_0')=0$ by (\ref{Eq21}). Since $ax(m_0\otimes m_0')=q^{\frac12(t-2)}x(m_0\otimes m_0')$,
$H_{\beta}x(m_0\otimes m_0')\cong V_{t-1}(1,1,1;0)\otimes V_0(1,1,1;n-1)$. Thus $H_\beta(m_0\otimes m'_0)/H_{\beta}x(m_0\otimes m_0')\cong V_0(1,1,1;0)$ and
$[H_\beta(m_0\otimes m_0')]=g^{n-1}z_{t-1}+1$, where $g=[V_0(1,1,1;1)]$. Let
$A_1:=H_\beta(m_0\otimes m'_1)/H_\beta(m_0\otimes m'_0)$. Then
$y(\overline{m_0\otimes m'_1})=\overline{0}$ and
$$x^{k'}(m_0\otimes m'_1)\in H_\beta(m_0\otimes m'_0)$$ only if $k'\geq t$. Since $yx^{t-1}(m_0\otimes m_1')=0$, we have $H_{\beta}x^{t-1}(\overline{m_0\otimes m_1'})\cong V_0(1,1,1;n-t)$ and $H_\beta(\overline{m_0\otimes m'_1})/H_{\beta}x^{t-1}(\overline{m_0\otimes m_1'})\cong V_{t-1}(1,1,1;n-1)\cong V_{t-1}(1,1,1;0)\otimes V_0(1,1,1;n-1)$. Hence $z_tz_2=2g^{n-1}z_{t-1}+g^{n-t}+1$.
Consequently, we have $z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}$ for $3\leq r\leq t$ and $\sum\limits_{v=0}^{[\frac{t}{2}]}(-1)^v\frac{t}{t-v}\binom{t-v}{v}g^{(n-1)v}z_2^{t-2v}-g^{n-t}-1=0$.
(2) Now we consider the product $z_2z_{\xi}'$. Similar to (1), we get $H_{\beta}(m_0\otimes m_0')\cong V_t(\xi q^\frac{n}{2},1,1;0)$ and $H_\beta(\overline{m_0\otimes m'_1})\cong V_t(\xi q^\frac{n}{2},1,1;n-1)\cong V_t(\xi q^\frac{n}{2},1,1;0)\otimes V_0(1,1,1;n-1)$. Thus $z_2z_\xi'=z_{\xi q^\frac{n}{2}}'+g^{n-1}z_{\xi q^\frac{n}{2}}'$.
(3) Here, we determine the product $z_{\xi}'z_{\xi'}'$. Similar to the proof of (1), we get $l_p=t$ and $V_t(\xi,1,1;0)\otimes V_t(\xi',1,1;0)=\sum\limits_{p=0}^{t-1}H_\beta(m_0\otimes m_p')$. Suppose that $\xi\xi'\in \widetilde{\bf K}_0$. Hence we get that $$H_\beta(\overline{m_0\otimes m_p'})\cong V_t(\xi\xi',1,1;n-p)\cong V_t(\xi\xi',1,1;0)\otimes V_0(1,1,1;n-p).$$
Thus $z_{\xi}'z_{\xi'}'=\sum\limits_{p=0}^{t-1}g^{n-p}z_{\xi\xi'}'$, where $g=[V_0(1,1,1;1)]\in R_1$.
(4) Finally, we assume that $\xi\xi'\in\{\bar{\omega}^{\frac{p(n,n_1)}{2}}\mid p\in \mathbb{Z}\}$.
Let $r$ be the minimal positive integer such that $(\xi\xi')^{\frac{2n_1}{n}}q^{(-r+1)n_1}=1$. Then $(\xi\xi')^{\frac{2n_1}{n}}q^{(-2p-r_p+1)n_1}=1$ for $1\leq r_p\leq t$ satisfying $r_p\equiv (r-2p)\ mod(t)$. Hence
$H_\beta(\overline{m_0\otimes m_p'})\cong V_t(\xi\xi',1,1;n-p)\cong V_t(\xi\xi',1,1;0)\otimes V_0(1,1,1;n-p)$ if $r_p=t$. Otherwise, we have $H_{\beta}x^{r_p}(\overline{m_0\otimes m_p'})\cong V_{t-r_p}(\xi\xi',1,1;n-p-r_p)\cong V_{t-r_p}(\xi\xi',1,1;0)\otimes V_0(1,1,1;n-p-r_p)$ and $H_\beta(\overline{m_0\otimes m'_p})/H_{\beta}x^{r_p}(\overline{m_0\otimes m_p'})\cong V_{r_p}(\xi\xi',1,1;n-p)\cong V_{r_p}(\xi\xi',1,1;0)\otimes V_0(1,1,1;n-p)$.
Thus $z_\xi'z_{\xi'}'=\sum\limits^{t-1}_{p=0}W'(p), $ where
$W'(p)=
g^{n-p}z_t$ if $r_p=t$ and
$W'(p)=g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p}$ if $r_p<t$.
The proof is completed.
\end{proof}
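To illustrate the relations of Theorem \ref{L55} for small $r$ (assuming $t\geq 4$), the recursion $z_{r+1}=z_rz_2-g^{n-1}z_{r-1}$ obtained in the proof gives
$$z_3=z_2^2-g^{n-1},\qquad z_4=z_2^3-2g^{n-1}z_2,$$
in agreement with the closed formula $z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}$.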
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_3\neq 0$.
\begin{corollary}\label{cor56}
Suppose that $\beta_3\neq 0$.
\begin{itemize}
\item[(1)] If $N\nmid 2n_1t$, then one of the following holds.
\item[(1.1)] If either $2n\mid N$, or $2n\nmid N$ and $2(n,n_1)\nmid n$, then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q, 0,0, \beta_3)})$ is isomorphic to $S_2:=\mathbb{Z}[g,h_1,z_2, z']$ with relations (\ref{Eq*1}), $z_2z'=(1+g^{n-1})z'$ and
\begin{eqnarray}\label{EQ23} {z'}^{v_0}={\mathfrak{s}''}^{v_0-2}\left(\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t\right),\end{eqnarray}
where $z_0=0$, $z_1=1$, $z_p=\sum\limits_{v=0}^{[\frac{p-1}{2}]}(-1)^v\binom{p-1-v}{v}g^{(n-1)v}z_2^{p-1-2v}$ for $3\leq p\leq t$ and $v_0=\frac{N}{(2n_1t,N)}$.
\item[(1.2)] If $2n\nmid N$ and $2(n,n_1)\mid n$, then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q, 0,0, \beta_3)})$ is isomorphic to $S_2:=\mathbb{Z}[g,h_1,z_3, z']$ with relations (\ref{EQ23}), $z_3z'=(1+g^{n-1}+g^{n-2})z'$ and
\begin{equation}\label{T3}
\sum\limits_{v=0}^{[\frac{t-2}{4}]}(-1)^v\binom{\frac{t-2}{2}-v}{v}g^{(n-2)v}(z_3-g^{n-1})^{\frac{t-2}{2}-2v}-g^{n-1}z_{t-1}-g^{n-t}-1=0,
\end{equation}
where $z_0=0$, $z_1=1$, $v_0=\frac{N}{(2n_1t,N)}$ and $z_{2l+1}=\sum\limits_{v=0}^{[\frac{l-1}{2}]}(-1)^v\binom{l-1-v}{v}g^{(n-2)v}(z_3-g^{n-1})^{l-1-2v}$ for $3\leq 2l+1\leq t-1$.
\item[(2)] If $N\mid 2n_1t$, then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q, 0,0, \beta_3)})$ is isomorphic to
$$\begin{cases}
\mathbb{Z}[g,h_1,z_2], & \text{if}\ \ 2n\mid N \ \text{or}\ 2n\nmid N \ \text{and}\ 2(n,n_1)\nmid n\\
\mathbb{Z}[g,h_1,z_3], & \text{if}\ \ 2n\nmid N\ \text{and}\ 2(n,n_1)\mid n
\end{cases}$$
which is a subring of $S_2$.
\end{itemize}
\end{corollary}
\begin{proof}
Since $\gamma^{\frac{N}n}=1$, $\gamma=\mathfrak{q}^{np}$ for some integer $p$. If $\gamma^{\frac{2n_1}{n}}= q^{vn_1}$ for some integer $v$, then $\mathfrak{q}^{2n_1p}=q^{vn_1}$. Hence $\mathfrak{q}^{2n_1pt}=1$, i.e. $N\mid 2n_1pt$. Thus $\frac{N}{(N,2n_1t)}\mid p$.
Then $z'_{\gamma}\in S_2$ only if $\gamma\in \langle \mathfrak{q}^n\rangle$ and $\gamma\notin \langle \mathfrak{q}^{nv_0}\rangle$ for $v_0=\frac{N}{(2n_1t,N)}$. If either $2n\mid N$, or $2n\nmid N$ and $2(n,n_1)\nmid n$, then we have the generator $z_2=[V_2(1,1,1;0)]$. If $v_0=1$, that is $N(n,n_1)\mid 2nn_1$, then $S_2=S_1[z_2]$. Otherwise, $S_2=S_1[z_2,z']$ for $z'=[V_t(\mathfrak{q}^n,1,1;0)]$. If $2n\nmid N$ and $2(n,n_1)\mid n$, then the fact that $[V_{r}(\gamma,1,1;0)]\in G_0(\mathcal{U}_{(n,N,n_1,q, 0,0, \beta_3)})$ for $\gamma^{\frac{2n_1}{n}}\in\langle q^{n_1}\rangle$ implies that $r$ is an odd integer. Let $r_p\equiv (r-2p)\ mod(t)$ for $1\leq p\leq t$ and $z'=[V_t(\mathfrak{q}^{n},1,1;0)]$. We further assume that $1\leq r_p\leq t$. Then $(z')^{v_0}=z'(z')^{v_0-1}=
{\mathfrak{s}''}^{v_0-2}z'z'_{\mathfrak{q}^{v_0-1}}$ by Theorem \ref{L55}. Since
$$\mathfrak{q}^{2n_1v_0}=q^{\frac{2nn_1}{(2n_1t,N)}}=q^{(r-1)n_1}$$ for some $2\leq r\leq t$, $z'z_{\mathfrak{q}^{v_0-1}}'=\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t,$ where $$z_p=\sum\limits_{v=0}^{[\frac{p-1}{2}]}(-1)^v\binom{p-1-v}{v}g^{(n-1)v}z_2^{p-1-2v}$$ for $3\leq p\leq t$.
Since $\mathfrak{q}^nq^{\frac{n}2}=\mathfrak{q}^n$, $z_2z'=(g^{n-1}+1)z'$ by Theorem \ref{L55}. Similarly to the proof of Theorem \ref{L55}, we have $z_3z'=(g^{n-2}+g^{n-1}+1)z'$,
\begin{equation}
\sum\limits_{v=0}^{[\frac{t-2}{4}]}(-1)^v\binom{\frac{t-2}{2}-v}{v}g^{(n-2)v}(z_3-g^{n-1})^{\frac{t-2}{2}-2v}-g^{n-1}z_{t-1}-g^{n-t}-1=0\nonumber
\end{equation}
and $z_{2l+1}=\sum\limits_{v=0}^{[\frac{l-1}{2}]}(-1)^v\binom{l-1-v}{v}g^{(n-2)v}(z_3-g^{n-1})^{l-1-2v}$ for $3\leq 2l+1\leq t-1$.
\end{proof}
We always assume that $z_0=0$, $z_1=1$, $z_2=[V_2(1,1,1;0)]$ and $$z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}\ \text{ for}\ 3\leq r\leq t.$$
To determine the Grothendieck ring $G_0(H_\beta)$ for $\beta_1\neq 0$, we need the following lemma.
\begin{lemma}\label{L52}
Suppose that $\beta_1(\gamma_1^{n_1}-\gamma_2^n)\neq 0$. Then
\begin{eqnarray}
[V_I(\gamma_1,\gamma_2,\gamma_3;i)][V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)]&=&[V_I(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)]\nonumber\\
&=&[V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)][V_I(\gamma_1,\gamma_2,\gamma_3;i)].\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof} Since $(\gamma_1^{n_1}-\gamma_2^n)\beta_1\neq 0$, $\beta_1\neq 0$. Thus $\gamma_1'^{n_1}=\gamma_2'^{n}$ and $\gamma_2'=\gamma_1'^{\frac{n_1}n}q^s$ for some $0\leq s\leq n-1$. Hence $((\gamma_1\gamma_1')^{n_1}-(\gamma_2\gamma_2')^n)\beta_1=(\gamma_1')^{n_1}(\gamma_1^{n_1}-\gamma_2^n)\beta_1\neq 0$.
Let $\{m_0,m_1,\cdots,m_{n-1}\}$ be the basis of $V_I(\gamma_1,\gamma_2,\gamma_3;i)$ with the action given by Equations (\ref{eq8})-(\ref{eq10}). Then $\{m_0\otimes 1,m_1\otimes 1,\cdots,m_{n-1}\otimes 1\}$ is a basis of $V_I(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma_1',\gamma_2',\gamma_3';j)$. With this basis, we have
$$a\cdot(m_l\otimes 1)=(a\cdot m_l)\otimes (a\cdot 1)=\sqrt[n]{\gamma_1\gamma'_1}q^{i+j-l}m_l\otimes 1,$$
$$b\cdot(m_l\otimes 1)=(b\cdot m_l)\otimes (b\cdot 1)=(\gamma_2\gamma'_2)m_l\otimes 1,\quad
c\cdot(m_l\otimes 1)=(c\cdot m_l)\otimes (c\cdot 1)=(\gamma_3\gamma'_3)m_l\otimes 1,$$
$$x\cdot(m_l\otimes 1)=(x\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (b\cdot m_l)\otimes (x\cdot 1) =\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l+1}\otimes 1,\; for\; 0\leq l\leq n-2,$$
$$x\cdot(m_{n-1}\otimes 1)=(x\cdot m_{n-1})\otimes (a^{n_1}\cdot 1)+ (b\cdot m_{n-1})\otimes (x\cdot 1) =\beta_1(\gamma_1^{n_1}-\gamma_2^n)\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_0\otimes 1,$$
$$y\cdot(m_l\otimes 1)=(y\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (c\cdot m_l)\otimes (y\cdot 1) =k_{n-l+1}\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l-1}\otimes 1,\; for\; 1\leq l\leq n-1,$$
$$y\cdot(m_0\otimes 1)=(y\cdot m_0)\otimes (a^{n_1}\cdot 1)+ (c\cdot m_0)\otimes (y\cdot 1) =k_1\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{n-1}\otimes 1.$$
Let $u_l=\gamma'^{\frac{n_1l}{n}}_1q^{jn_1l}m_l\otimes 1$. Then $\{u_0,\cdots, u_{n-1}\}$ is a new basis of $V_I(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma_1',\gamma_2',\gamma_3';j)$. Under this basis, we have
\begin{eqnarray*}au_k=\sqrt[n]{\gamma_1\gamma_1'}q^{i+j-k}u_k;\ \
bu_k=\gamma_2\gamma_2'u_k;\ \ cu_k=\gamma_3\gamma_3'u_k\qquad for\quad 0\leq k\leq n-1;
\end{eqnarray*}
\begin{eqnarray*}xu_k=u_{k+1},\qquad for\quad 0\leq k\leq n-2;\qquad
xu_{n-1}=((\gamma_1\gamma_1')^{n_1}-(\gamma_2\gamma_2')^n)\beta_1u_0;
\end{eqnarray*}
\begin{eqnarray*}yu_p=\gamma_1'^{\frac{2n_1}{n}}q^{2jn_1}k_{n-p+1}u_{p-1},\qquad for\quad 1\leq p\leq
n-1;\qquad yu_{0}=\gamma_1'^{\frac{n_1(2-n)}{n}}q^{2jn_1}k_1u_{n-1}.
\end{eqnarray*}
Thus $V_I(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\cong V_I(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)$.
Similarly, we get $$V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\otimes V_I(\gamma_1,\gamma_2,\gamma_3;i)\cong V_I(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j).$$
\end{proof}
\begin{remark}
Lemma \ref{L52} implies that
\begin{equation}\label{EQ2}
V_I(\gamma_1,\gamma_2,\gamma_3,i)\cong V_I(\gamma_1',1,\gamma_3';0)\otimes V_0(\gamma_2^{\frac{n}{n_1}},\gamma_2,\gamma_2q^{2n_1i}; i),
\end{equation}
where $\gamma_1'=\gamma_1\gamma_2^{-\frac{n}{n_1}}$ satisfying $\gamma_1'^{n_1}\neq 1$, $\gamma_3'=\gamma_3\gamma_2^{-1}q^{-2n_1i}$.
If $\beta_2=\beta_3=0$, then $V_I(\gamma_1',1,\gamma_3';0)\cong V_I(\gamma_1',1,1;0)\otimes V_0(1,1,\gamma_3';0)$. If $\beta_2\neq 0$, $\beta_3=0$ and $\gamma_3'=q^{u}$ for some integer $u$, then $V_I(\gamma_1',1,\gamma_3';0)\cong V_I(\gamma_1',1,1;0)\otimes V_0(1,1,\gamma_3';0)$. If $\beta_2=0$, $\beta_3\neq 0$ and $(\gamma_3')^{\frac{n}{2}}=1$, then $V_I(\gamma_1',1,\gamma_3';0)\cong V_I(\gamma_1'(\gamma_3')^{-\frac{n}{2n_1}},$ $1,1;0)\otimes V_0((\gamma_3')^{\frac{n}{2n_1}},1,\gamma_3';0)$. If $\beta_2\beta_3\neq 0$ and $(\gamma_3')^{\frac{n}{2}}=\gamma_3'^{n}=1$, then $V_I(\gamma_1',1,\gamma_3';0)\cong V_I(\gamma_1'(\gamma_3')^{-\frac{n}{2n_1}},$ $1,1;0)\otimes V_0((\gamma_3')^{\frac{n}{2n_1}},1,\gamma_3';0)$.
\end{remark}
\begin{remark}
Let $R_3$ be the subring of the Grothendieck ring $G_0(H_\beta)$ for $\beta_1\neq 0$ generated by
$R_2$ and $\{[V_I(\gamma_1,\gamma_2,\gamma_3;i)]|\gamma_1^{n_1}\neq \gamma_2^n,i\in\mathbb{Z}_n \}$. Then $R_3$ is isomorphic to a quotient ring of $R_2[x_{\zeta_1,\zeta_2}|\zeta_2\in{\bf K}^* ,\zeta_1\in \widehat{{\bf K}_0}]$, where
$\widehat{{\bf K}_0}=({\bf K}^*/\{\bar{\omega}^r|r\in\mathbb{Z}\})\setminus\{\overline{1}\}$ and $x_{\zeta_1,\zeta_2}=[V_I(\zeta_1,1,\zeta_2;0)]$. Moreover, if $\beta_2=0$ and $\beta_1\beta_3\neq 0$, then $R_3$ is the Grothendieck ring of $G_0(H_\beta)$.
Suppose that $\beta_2=\beta_3=0$ and $\beta_1\neq 0$. Then the Grothendieck ring $G_0(H_\beta)$ is isomorphic to a quotient ring of the algebra $R_2'=R_1[x_{\zeta,1}|\zeta\in \widehat{{\bf K}_0}]$.
\end{remark}
Let $r_p=(r-2p)\ mod(t)$ and $1\leq r_p\leq t$, where $r$ is the minimal positive integer such that $\zeta_2\zeta_2'(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}=q^{(-r+1)n_1}$. Then $r_p$ is the minimal positive integer such that $$\zeta_2\zeta_2'(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}=q^{(-2p-r+1)n_1}.$$
\begin{theorem}\label{V} Suppose that $\beta_1\beta_3\neq 0$, $\beta_2=0$ and ${\bf K}^{**}=({\bf K}^*/\{\zeta\in{\bf K}|\zeta^{\frac{n}{2}}=1\})\setminus\{\bar{1}\}$. Let $z_\xi''=[V_t(1,1,\xi;0)]$ for $\xi\in\overline{{\bf K}^*}:=({\bf K}^* /\langle q^{n_1}\rangle)\setminus\{\bar{1}\}$. Then the Grothendieck ring $R_3:=G_0(H_\beta)$ is isomorphic to the commutative ring
$R_1[z_2, x_{\zeta_1,\zeta_2},z_{\xi}''|\zeta_1\in \widehat{{\bf K}_0},\zeta_2\in{\bf K}^{**},\xi\in \overline{{\bf K}^*}]$ with relations (\ref{Eq*1}),
$$z_2x_{\zeta_1,\zeta_2}=x_{\zeta_1q^\frac{n}{2},\zeta_2}+g^{n-1}x_{\zeta_1q^\frac{n}{2},\zeta_2}, \quad z_2z_\xi''=\eta(z_{\xi q^{-n_1}}''+g^{n-1}z_{\xi q^{-n_1}}''),$$
$$x_{\zeta_1,\zeta_2}z_\xi''=\mathfrak{s}''x_{\zeta_1,\zeta_2\xi}, \quad z_\xi''z_{\xi'}''=\mathfrak{s}''z_{\xi\xi'}'' \ for\ \xi\xi'\in \overline{{\bf K}^*},$$
$$z_\xi''z_{\xi'}''=\sum\limits^{t-1}_{p=0,r_p'<t}g^{n-p}(g^{-r_p'}g_{q^{-\frac{(t-r_p'-1)n}{2}},1,\xi\xi'}z_{t-r_p'}+g_{q^{-\frac{(r_p'-1)n}{2}},1,\xi\xi'}z_{r_p'})+\sum\limits_{p=0,r_p'=t}^{t-1}g^{n-p}g_{q^{-\frac{(t-1)n}{2}},1,\xi\xi'}z_t$$ whenever $\xi\xi'=q^{r'n_1}$, where $r'$ is the minimal positive integer with this property and $r_p'\equiv (r'-2p)\ mod(t)$ with $1\leq r_p'\leq t$, and
\begin{equation*}
x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=
\begin{cases}
\mathfrak{s}x_{\zeta_1\zeta_1',\zeta_2\zeta_2'}, \ & \text{if}\ \ (\zeta_1\zeta_1')^{n_1}\neq 1\\
\mathfrak{s}'g_{\zeta_1\zeta_1',1,\zeta_2\zeta_2'}\sum\limits_{p=0}^{n-1}(g^{n-p-r_p}z_{t-r_p}+ g^{n-p}z_{r_p}), \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=1,(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}\zeta_2\zeta_2'\in\langle q^{n_1}\rangle,\\
u\mathfrak{s}g_{\zeta_1\zeta_1',1,1}z_{\zeta_2\zeta_2'}'', \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=1,(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}\zeta_2\zeta_2'\in \overline{{\bf K}^*},
\end{cases}
\end{equation*}
where $\mathfrak{s}'=\sum\limits_{k=1}^ug^{kt}\in R_1$,
$u=\frac{n}{t}$, $z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}$ for $3\leq r\leq t$ and $\eta=[V_0(q^\frac{n}{2},1,q^{n_1};0)]$.
\end{theorem}
\begin{proof} Since $\beta_2=0$, $G_0(H_\beta)$ is generated by $[V_I(\gamma_1,\gamma_2,\gamma_3;i)], [V_0(\gamma_1,\gamma_2,\gamma_3;i)]$ and $[V_r(\gamma_1,\gamma_2,\gamma_3;i)]$ for $r\leq t$. Suppose that $\gamma_1^{n_1}=\gamma_2^n$ and $\gamma_1^{-\frac{2n_1}{n}}\gamma_2\gamma_3\neq q^{n_1v}$ for any integer $v$.
Then $$[V_t(\gamma_1,\gamma_2,\gamma_3;i)]=[V_t(1,1,\gamma_3\gamma_2^{-1};0)][V_0(\gamma_1,\gamma_2,\gamma_2;i)],$$
where $\gamma_2^{-1}\gamma_3\neq q^{n_1v}$ for any integer $v$. In addition, we have
$$[V_t(1,1,\, \gamma_2^{-1}\gamma_3 q^{{n_1}}; 0)]=[V_t(1,1,\gamma_2^{-1}\gamma_3,0)][V_0(1,1,q^{{n_1}};0)].$$
Therefore $G_0(H_\beta)$ is generated by $R_1$, $z_2$, $x_{\zeta_1,\zeta_2}$ and $z_{\xi}''=[V_t(1,1,\xi;0)]$ for $\xi\in({\bf K}^*/\langle q^{n_1}\rangle)\setminus\{\bar{1}\}$.
In the following, we determine the relations of the generators in several subcases.
(1) Similar to the proof of Theorem \ref{L55}, we have Equation (\ref{Eq*1}), $$z_2z_\xi''=\eta(z_{\xi q^{-n_1}}''+g^{n-1}z_{\xi q^{-n_1}}''),\qquad z_\xi''z_{\xi'}''=\sum\limits_{p=0}^{t-1} g^{n-p}z_{\xi\xi'}'' \text { for }\ \xi\xi'\in \overline{{\bf K}^*},$$
and
$$z_\xi''z_{\xi'}''=\sum\limits^{t-1}_{p=0,r_p'<t}g^{n-p}(g^{-r_p'}g_{q^{-\frac{(t-r_p'-1)n}{2}},1,\xi\xi'}z_{t-r_p'}+g_{q^{-\frac{(r_p'-1)n}{2}},1,\xi\xi'}z_{r_p'})+\sum\limits_{p=0,r_p'=t}^{t-1}g^{n-p}g_{q^{-\frac{(t-1)n}{2}},1,\xi\xi'}z_t$$
for $\xi\xi'=q^{r'n_1}$, where $z_r=\sum\limits_{v=0}^{[\frac{r-1}{2}]}(-1)^v\binom{r-1-v}{v}g^{(n-1)v}z_2^{r-1-2v}$ for $3\leq r\leq t$, $\eta=[V_0(q^\frac{n}{2},1,q^{n_1};0)]$, and $r_p'=(r'-2p)(mod\ t)$ for $1\leq p\leq t$, $r'$ is the minimal positive integer such that $\xi\xi'=q^{r'n_1}$.\\
(2) Let $\{m_0,\cdots,m_{n-1}\}$ (resp. $\{m_0',m_1'\}$) be the basis of $V_I(\zeta_1,1,\zeta_2;0)$ (resp. $V_2(1,1,1;0)$) with the actions given by Equations (\ref{eq8})-(\ref{eq10})(resp. Equations (\ref{eq15})-(\ref{eq17}), where $k_i$ is replaced by $k_i'$). Then $\{m_l\otimes m'_v,\; 0\leq l\leq n-1,0\leq v\leq 1\}$ is a basis of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_2(1,1,1;0)$. With this basis, we have
$$a\cdot(m_l\otimes m'_v)=(a\cdot m_l)\otimes (a\cdot m'_v)=\zeta_1^{\frac{1}{n}}q^{\frac{1}{2}-l-v}m_l\otimes m'_v,$$
$$b\cdot(m_l\otimes m'_v)=(b\cdot m_l)\otimes (b\cdot m'_v)=m_l\otimes m'_v,\quad
c\cdot(m_l\otimes m'_v)=(c\cdot m_l)\otimes (c\cdot m'_v)=\zeta_2m_l\otimes m'_v,$$
\begin{eqnarray}y(m_l\otimes m'_v)=
\begin{cases}
q^\frac{n_1}{2}k_{n-l+1}m_{l-1}\otimes m'_0,\ & \text{if} \ 1\leq l\leq n-1,\;v=0\\
q^\frac{n_1}{2}k_1m_{n-1}\otimes m'_0, \ & \text{if}\ l=v=0\\
q^{\frac{n_1}{2}-kn_1}k_{n-l+1}m_{l-1}\otimes m'_1+\zeta_2k'_1m_l\otimes m'_0,\ & \text{if} \ 1\leq l\leq n-1,\;v=1\\
q^{\frac{n_1}{2}-kn_1}k_1m_{n-1}\otimes m'_1+\zeta_2k'_1m_0\otimes m'_0,\ & \text{if} \ l=0,\;v=1
\end{cases}\nonumber
\end{eqnarray}
and
\begin{eqnarray}x^t(m_l\otimes m'_v)=
\begin{cases}
m_{l+t}\otimes m'_v, &\text{if}\ 0\leq l\leq n-t-1
\cr \beta_1(\gamma_1^{n_1}-1)m_{l+t-n}\otimes m'_v, &\text{if}\ n-t\leq l\leq n-1
\end{cases}.\nonumber
\end{eqnarray}
If $(n_1,n)=u$, then $|q^{n_1}|=|q^u|=\frac{n}{u}$. Hence $n=ut$ and
\begin{eqnarray}
x^{n-1}(m_l\otimes m'_v)=\sum\limits^{1-v}_{l'=0}\binom{t-1}{l'}_{q^{n_1}}q^{\frac{(n-1-l')n_1}{2}+(-l-l')(t-1-l')n_1}x^{n-1-l'}m_{l}\otimes m'_{v+l'}.\nonumber
\end{eqnarray}
Let $v_0=m_0\otimes m'_0+\alpha_1m_{n-1}\otimes m'_1$ , $v_1=m_0\otimes m'_1$ and $yv_0=\theta x^{n-1}v_0$. Then
$$\begin{array}{lll}
yv_0&=&\alpha_1q^{-\frac{n_1}{2}}k_2m_{n-2}\otimes m'_1+(q^{\frac{n_1}{2}}k_1+\alpha_1\zeta_2k'_1)m_{n-1}\otimes m'_0,
\end{array}$$
and
$$\begin{array}{lll}
x^{n-1}v_0&=&q^\frac{(n-1)n_1}{2}m_{n-1}\otimes m'_0-q^{\frac{(n-2)n_1}{2}-(t-1)n_1}m_{n-2}\otimes m'_1\\
&&+\alpha_1\beta_1(\zeta_1^{n_1}-1)q^{\frac{(n-1)n_1}{2}-(t-1)n_1}m_{n-2}\otimes m'_1.
\end{array}$$
Thus
\begin{equation}
\alpha_1k_2-q^{-(t-1)n_1}(q^{\frac{n_1}{2}}k_1+\alpha_1\zeta_2k'_1)(\alpha_1\beta_1(\zeta_1^{n_1}-1)q^{\frac{n_1}{2}}-1)=0.\nonumber
\end{equation}
Hence we can determine $\alpha_1$. In addition, $H_\beta v_0\cong V_I(\zeta_1q^\frac{n}{2},1,\zeta_2;0)$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_2(1,1,1;0)$ by the proof of Lemma \ref{lem52}.
By the same method, we prove that $$(H_\beta v_1+H_\beta v_0)/H_\beta v_0\cong V_I(\zeta_1q^\frac{n}{2},1,\zeta_2;n-1)\cong V_I(\zeta_1q^\frac{n}{2},1,\zeta_2;0)\otimes V_0(1,1,1;n-1)$$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_2(1,1,1;0)/H_\beta v_0$.
Hence
\begin{equation*}
x_{\zeta_1,\zeta_2}z_2=x_{\zeta_1q^\frac{n}{2},\zeta_2}+g^{n-1}x_{\zeta_1q^\frac{n}{2},\zeta_2},
\end{equation*}
where $g=[V_0(1,1,1;1)]$. Likewise, we have
$z_2x_{\zeta_1,\zeta_2}=x_{\zeta_1q^\frac{n}{2},\zeta_2}+g^{n-1}x_{\zeta_1q^\frac{n}{2},\zeta_2}.$ \\
(3) Similarly, we obtain $$[V_I(\zeta_1,1,\zeta_2;0)][V_t(1,1,\xi;0)]=\sum\limits^{t-1}_{p=0}[V_I(\zeta_1,1,\zeta_2\xi;n-p)]=[V_t(1,1,\xi;0)][V_I(\zeta_1,1,\zeta_2;0)].$$
Hence $x_{\zeta_1,\zeta_2}z_\xi''=\sum\limits^{t-1}_{p=0}g^{n-p}x_{\zeta_1,\zeta_2\xi}$.
(4) Let $\{m_0,\cdots,m_{n-1}\}$ (resp. $\{m_0',\cdots,m_{n-1}'\}$) be a basis of $V_I(\zeta_1,1,\zeta_2;0)$ (resp. $V_I(\zeta_1',1,\zeta_2';0)$) with the action given by (\ref{eq8})-(\ref{eq10}) (resp. (\ref{eq8})-(\ref{eq10}) with $k_i$ replaced by $k_i'$). Then $\{m_l\otimes m'_v,\; 0\leq l,v\leq n-1\}$ is a basis of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)$ and the action on this basis is given by
$$a\cdot(m_l\otimes m'_v)=(a\cdot m_l)\otimes (a\cdot m'_v)=(\zeta_1\zeta_1')^\frac{1}{n}q^{-l-v}m_l\otimes m'_v,$$
$$b\cdot(m_l\otimes m'_v)=(b\cdot m_l)\otimes (b\cdot m'_v)=m_l\otimes m'_v,$$
$$c\cdot(m_l\otimes m'_v)=(c\cdot m_l)\otimes (c\cdot m'_v)=(\zeta_2\zeta_2')m_l\otimes m'_v,$$
\begin{eqnarray}y(m_l\otimes m'_v)=
\begin{cases}
\zeta_1'^{\frac{n_1}{n}}k_1m_{n-1}\otimes m'_0+\zeta_2k'_1m_0\otimes m'_{n-1}, &\text{if}\ l=v=0;
\cr \zeta_1'^{\frac{n_1}{n}}q^{-vn_1}k_1m_{n-1}\otimes m'_v+\zeta_2k'_{n-v+1}m_0\otimes m'_{v-1}, &\text{if}\ l=0,\;1\leq v\leq n-1;
\cr \zeta_1'^{\frac{n_1}{n}}k_{n-l+1}m_{l-1}\otimes m'_0+\zeta_2k'_1m_l\otimes m'_{n-1}, &\text{if}\ 1\leq l\leq n-1;\;
\cr \zeta_1'^{\frac{n_1}{n}}q^{-vn_1}k_{n-l+1}m_{l-1}\otimes m'_v+\zeta_2k'_{n-v+1}m_l\otimes m'_{v-1}, &\text{if}\ 1\leq l,v\leq n-1;
\end{cases}\nonumber
\end{eqnarray}
\begin{eqnarray}x^t(m_l\otimes m'_v)=
\begin{cases}
m_l\otimes m'_{t+v}+\zeta_1'^{\frac{tn_1}{n}}m_{l+t}\otimes m'_v, &\text{if}\ t+v,t+l<n;
\cr m_l\otimes m'_{t+v}+\beta_1(\zeta_1^{n_1}-1)\zeta_1'^{\frac{tn_1}{n}}m_{l+t-n}\otimes m'_v,
&\text{if}\ t+v<n,t+l\geq n;
\cr \beta_1((\zeta_1'^{n_1}-1)m_l\otimes m'_{t+v-n}+(\zeta_1^{n_1}-1)\zeta_1'^{\frac{tn_1}{n}}m_{l+t-n}\otimes m'_v),
&\text{if}\ t+v,t+l\geq n;
\cr \beta_1(\zeta_1'^{n_1}-1)m_l\otimes m'_{t+v-n}+\zeta_1'^{\frac{tn_1}{n}}m_{l+t}\otimes m'_v,
&\text{if}\ t+v\geq n,t+l<n.
\end{cases}\nonumber
\end{eqnarray}
(4.1) If $((\zeta_1\zeta_1')^{n_1}-1)\beta_1\neq0$, let $v_p=m_0\otimes m'_p+\sum\limits^{n-1}_{l=p+1}\alpha_lm_{n+p-l}\otimes m'_l$ for $0\leq p\leq n-1$. Assuming that $yv_0=\theta x^{n-1}v_0$ and arguing as in (1), we can determine $\alpha_l$ for $1\leq l\leq n-1$.
Hence, $A_0:=H_\beta v_0\cong V_I(\zeta_1\zeta_1',1,\zeta_2\zeta_2';0)$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)$.
By using the same method, we can determine that $$A_p:=(H_\beta v_p+\sum^{p-1}_{l=0}H_\beta v_l)/\sum\limits^{p-1}_{l=0}H_\beta v_l\cong V_I(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p)$$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)/\sum^{p-1}_{l=0}H_\beta v_l$ for $1\leq p\leq n-1$.
Hence $[V_I(\zeta_1,1,\zeta_2;0)][V_I(\zeta_1',1,\zeta_2';0)]=\sum\limits^{n-1}_{p=0}[V_I(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p)]=\sum\limits^{n-1}_{p=0}[V_I(\zeta_1\zeta_1',1,\zeta_2\zeta_2';0)][ V_0(1,1,1;n-p)]$.
Consequently, $x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=\sum\limits^{n-1}_{p=0}g^{n-p}x_{\zeta_1\zeta_1',\zeta_2\zeta_2'}= \sum\limits^{n-1}_{p=0}g^{p}x_{\zeta_1\zeta_1',\zeta_2\zeta_2'}$.
(4.2) We discuss the case in which $((\zeta_1\zeta_1')^{n_1}-1)\beta_1=0$. Since $\beta_2=0$, we have $y^n=0$, so there exists an element $v'\in V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)$ such that $yv'=0$. Since $ay=qya$, we may assume that $v'=\sum\limits^{d}_{l=0}\alpha_lm_{d-l}\otimes m'_l+\sum\limits^{n-1}_{l=d+1}\alpha_lm_{n+d-l}\otimes m'_l$. Writing $$yv'=(m_{d-1}\otimes m'_1,m_{d-2}\otimes m'_2,\cdots,m_d\otimes m'_0)C(\alpha_0,\cdots,\alpha_{n-1})^T,$$ we get $\det(C)=0$.
Let $v_p=\sum\limits^{p}_{l=0}\alpha_lm_{p-l}\otimes m'_l+\sum\limits^{n-1}_{l=p+1}\alpha_lm_{n+p-l}\otimes m'_l$ for $0\leq p\leq n-1$,
$$yv_p=(m_{p-1}\otimes m'_0,m_{p-2}\otimes m'_1,\cdots,m_p\otimes m'_{n-1})D_p(\alpha_0,\cdots,\alpha_{n-1})^T,$$
$$x^{n-1}v_p=(m_{p-1}\otimes m'_0,m_{p-2}\otimes m'_1,\cdots,m_p\otimes m'_{n-1})B_p(\alpha_0,\cdots,\alpha_{n-1})^T,$$
where
$$D_p=\left(\begin{array}{cccc}
\zeta_1'^{\frac{n_1}{n}}k_{n-p+1} & \zeta_2k'_n & \cdots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \zeta_1'^{\frac{n_1}{n}}k_{n-p-1}q^{(-n+2)n_1} & \zeta_2k'_2 \\
\zeta_2k'_1 & \cdots & 0 & \zeta_1'^{\frac{n_1}{n}}k_{n-p}q^{(-n+1)n_1} \\
\end{array}
\right),$$
$$B_p=\left(\begin{array}{ccc}
\zeta_1'^{\frac{(n-1)n_1}{n}} & \cdots & \binom{t-1}{1}_{q^{n_1}}\beta_1(\zeta_1'^{n_1}-1)\zeta_1'^{\frac{(n-2)n_1}{n}} \\
\vdots & \ddots & \vdots \\
\beta_1(\zeta_1^{n_1}-1) & \cdots & \zeta_1'^{\frac{(n-1)n_1}{n}}q^{(-n+1)(t-1)n_1} \\
\end{array}
\right).$$
Since $\det(D_p)=\det(C)=0$ and $D_p-\lambda B_p\neq 0$ for any $\lambda$, there exist $\alpha_l$ for $0\leq l\leq n-1$ such that $yv_p=0$ and $x^{n-1}v_p\neq0$. In fact, $H_\beta v_i\cap H_\beta v_j=0$ for any $0\leq i<j\leq n-1$: if there were a $\lambda\in \bf{K}$ such that $v_j=\lambda x^{j-i}v_i$, then $x^{n-1}v_j=\lambda x^{n-1+j-i}v_i=\lambda\beta_1((\zeta_1\zeta_1')^{n_1}-1)x^{j-i-1}v_i=0$, which is a contradiction. Thus, $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)=\bigoplus\limits^{n-1}_{p=0}H_\beta v_p$.
Notice that
\begin{eqnarray}
yx^{k'}(m_l\otimes m'_v)&=&q^{-n_1k'}x^{k'}y(m_l\otimes m'_v)\nonumber\\
&+&\frac{1-q^{-k'n_1}}{1-q^{-n_1}}\beta_3(q^{(2(-v-l)-k'+1)n_1}(\zeta_1\zeta_1')^{\frac{2n_1}{n}}-(\zeta_2\zeta_2'))x^{k'-1}(m_l\otimes m'_v).\nonumber
\end{eqnarray}
(4.2.1) Suppose first that there exists an integer $l$ such that $\zeta_2\zeta_2'-(\zeta_1\zeta_1')^{\frac{2n_1}{n}}q^{(-2p-l+1)n_1}=0$.
Let $r_p$ be the minimal positive integer such that $\zeta_2\zeta_2'(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}=q^{(-2p-r_p+1)n_1}$. Then $1\leq r_p\leq t$ and $r_p=(r-2p)\ mod(t)$, where $r$ is the minimal positive integer such that $\zeta_2\zeta_2'(\zeta_1\zeta_1')^{-\frac{2n_1}{n}}=q^{(-r+1)n_1}$.
So $r_p$, which satisfies $1\leq r_p\leq t$, is the minimal positive integer such that $yx^{r_p}v_p=0$. Thus $yx^{kt+r_p}v_p=0$ for $0\leq k\leq u-1$, where $u=\frac{n}{t}$. Notice that $yx^{kt}v_p=0$ for $0\leq k\leq u-1$. Hence
$A_{p,2k+1}:=H_\beta x^{(u-k-1)t+r_p}v_p/H_\beta x^{(u-k)t}v_p\cong V_{t-r_p}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p+(k+1)t-r_p)\cong V_{t-r_p}(1,1,1;0)\otimes V_0(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p+(k+1)t-r_p)\cong V_{t-r_p}(1,1,1;0)\otimes V_0(\zeta_1\zeta_1',1,\zeta_2\zeta_2';0)\otimes V_0(1,1,1;1)^{\otimes (n-p+(k+1)t-r_p)}$,
$A_{p,2(k+1)}:=H_\beta x^{(u-k-1)t}v_p/H_\beta x^{(u-k-1)t+r_p}v_p\cong V_{r_p}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p+(k+1)t)\cong V_{r_p}(1,1,1;0)\otimes V_0(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p+(k+1)t)\cong V_{r_p}(1,1,1;0)\otimes V_0(\zeta_1\zeta_1',1,\zeta_2\zeta_2';0)\otimes V_0(1,1,1;1)^{\otimes(n-p+(k+1)t)}$
for $0\leq k\leq u-1$. Therefore $[A_{p,2k+1}]=z_{t-r_p}g^{(k+1)t-p-r_p}g_{\zeta_1\zeta_1',1,\zeta_2\zeta_2'}$ and $[A_{p,2(k+1)}]=z_{r_p}g^{(k+1)t-p}g_{\zeta_1\zeta_1',1,\zeta_2\zeta_2'}$.
Consequently,
$$\begin{array}{lll}x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=\sum\limits_{p=0}^{n-1}[H_\beta v_p]&= &\sum\limits_{p=0}^{n-1}\sum\limits^{2u}_{v=1}[A_{p,v}]\\
&=&\sum\limits_{p=0}^{n-1}(\sum\limits_{k=0}^{u-1}g^{(k+1)t-p-r_p}z_{t-r_p}+\sum\limits_{k=0}^{u-1}g^{(k+1)t-p}z_{r_p})g_{\zeta_1\zeta_1',1,\zeta_2\zeta_2'}\\
&=&\sum\limits_{p=0}^{n-1}(g^{n-p-r_p}z_{t-r_p}+ g^{n-p}z_{r_p})\mathfrak{s}'g_{\zeta_1\zeta_1',1,\zeta_2\zeta_2'},\end{array}$$
where $\mathfrak{s}'=\sum\limits_{k=1}^{u}g^{kt}$.
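Here the inner sums over $k$ collapse since $g^{ut}=g^n=1$:
$$\sum\limits_{k=0}^{u-1}g^{(k+1)t-p}=g^{-p}\sum\limits_{k=1}^{u}g^{kt}=g^{n-p}\mathfrak{s}'\quad\text{and}\quad \sum\limits_{k=0}^{u-1}g^{(k+1)t-p-r_p}=g^{n-p-r_p}\mathfrak{s}'.$$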
(4.2.2) If $\zeta_2\zeta_2'\neq (\zeta_1\zeta_1')^{\frac{2n_1}{n}}q^{vn_1}$ for any integer $v$, then $yx^{kt}v_p=0$ for $0\leq k\leq u-1$, where $u=\frac{n}{t}$. Thus $A_{p,v}:=H_\beta x^{(u-v)t}v_p/H_\beta x^{(u-v+1)t}v_p \cong V_{t}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p+vt)\cong V_0(\zeta_1\zeta_1',1,1;n-p+vt)\otimes V_t(1,1,\zeta_2\zeta_2';0)$ for $1\leq v\leq u$. Hence $[A_{p,v}]=g_{\zeta_1\zeta_1',1,1}g^{n-p+vt}z_{\zeta_2\zeta_2'}''$.
Consequently, $x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=\sum\limits^{n-1}_{p=0}\sum\limits^{u}_{v=1}[A_{p,v}]=u\mathfrak{s}g_{\zeta_1\zeta_1',1,1}z_{\zeta_2\zeta_2'}''$, where $u=\frac{n}{t}$.
\end{proof}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_1\beta_3\neq 0$.
\begin{corollary}\label{cor59}
{\rm(1)}
Suppose that $\beta_1\beta_3\neq 0$ and $(N,nn_1,2n_1t)<(nn_1,N)$.
{\rm(1.1)} If either $2n\mid N$, or $2n\nmid N$ and $2(n,n_1)\nmid n$, then the Grothendieck ring $$S_3:=G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,0,\beta_3)})$$ is isomorphic to the commutative ring
$S_2[ x^*]=\mathbb{Z}[g,h_2,z_2,z'',x^*]$ with relations Equation (\ref{Eq*1}),
\begin{equation}\label{T2}
z''^{v_1}={\mathfrak{s}''}^{v_1-2}\left(\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t\right),
\end{equation}
$z_2z''=(1+g^{n-1})z'',$ $z_2x^*=(1+g^{n-1})x^*$ and ${x^*}^{\frac{N}{(N/n,n_1)}}=u\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}z''$.
{\rm(1.2)} If $2n\nmid N$ and $2(n,n_1)\mid n$, then the Grothendieck ring $S_3:=G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,0,\beta_3)})$ is isomorphic to the commutative ring
$S_2[ x^*]=\mathbb{Z}[g,h_2,z_3,z'',x^*]$ with relations (\ref{T3}), (\ref{T2}), $z_3z''=(1+g^{n-1}+g^{n-2})z'',$ $z_3x^*=(1+g^{n-1}+g^{n-2})x^*$ and ${x^*}^{\frac{N}{(N/n,n_1)}}=u\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}z''$.
{\rm(2)} Suppose that $\beta_1\beta_3\neq 0$ and $(N,nn_1,2n_1t)=(nn_1,N)$. Then the Grothendieck ring $$G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,0,\beta_3)})\cong
\begin{cases}
\mathbb{Z}[g,h_2,z_2,x^*], &\text{if}\ \ 2n\mid N \ \text{or}\ 2n\nmid N \ \text{and}\ 2(n,n_1)\nmid n\\
\mathbb{Z}[g,h_2,z_3,x^*], &\text{if}\ \ 2n\nmid N\ \text{and}\ 2(n,n_1)\mid n
\end{cases}$$ is a subring of $S_3$.
\end{corollary}
\begin{proof} Suppose that $V_I(\gamma,1,1;0)$ is an irreducible representation of
$\mathcal{U}_{(n,N,n_1,q,\beta_1,0,\beta_3)}$. Then $\gamma^{\frac{N}n}=1$ and $\gamma^{n_1}\neq 1$. Assume that $\gamma^{\frac{N}n}=\gamma^{n_1}=1$.
Then $\gamma=\mathfrak{q}^{nr}$ for some integer $r$ and $N|nn_1r$. Thus $\gamma\in\langle\mathfrak{q}^{\frac{Nn}{(N,nn_1)}}\rangle$. We further assume that $\gamma^\frac{2n_1}{n}\in \langle q^{n_1}\rangle$, then $\gamma\in \langle\mathfrak{q}^{\frac{Nn}{(N,nn_1)}}\rangle\cap \langle\mathfrak{q}^{\frac{Nn}{(N,2n_1t)}}\rangle=\langle\mathfrak{q}^{\frac{Nn}{(N,nn_1,2n_1t)}}\rangle$.
Let $v_1=\frac{(nn_1,N)}{(nn_1,2n_1t,N)}$ and $\mathfrak{q}^{\frac{2n_1N}{(N,nn_1,2n_1t)}}=q^{(r-1)n_1}$ for some $2\leq r\leq t$. Then
\begin{equation}
z''^{v_1}={\mathfrak{s}''}^{v_1-2}\left(\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t\right),\nonumber
\end{equation}
$z_2z''=(1+g^{n-1})z''$ and $z_3z''=(1+g^{n-1}+g^{n-2})z'',$ where $z''=[V_t(\mathfrak{q}^{\frac{Nn}{(N,nn_1)}},1,1;0)]$. Let $x^*=[V_I(\mathfrak{q}^n,1,1;0)]$. Then $z_2x^*=(1+g^{n-1})x^*$, $z_3x^*=(1+g^{n-1}+g^{n-2})x^*$, and ${x^*}^{\frac{N}{(N/n,n_1)}}=u\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}z''$
by Theorem \ref{V}.
\end{proof}
\begin{theorem}\label{thmVI} Suppose that $\beta_1\neq 0$ and $\beta_2=\beta_3=0$. Then the Grothendieck ring $G_0(H_\beta)=R_2':=R_1[x_{\zeta,1}|\zeta\in\widehat{{\bf K}_0}]$ with relations
\begin{equation}
x_{\zeta,1}x_{\zeta',1}=\begin{cases}
\mathfrak{s}x_{\zeta\zeta',1}, \ &\text{if} \ (\zeta\zeta')^{n_1}\neq 1\\
n\mathfrak{s}g_{\zeta\zeta',1,1},\ &\text{if} \ (\zeta\zeta')^{n_1}=1.
\end{cases}
\end{equation}
\end{theorem}
\begin{proof} In the case that $(\zeta_1\zeta_1')^{n_1}-1\neq0$, according to the proof of Theorem \ref{V}, we have
\begin{eqnarray*}
[V_I(\zeta_1,1,1;0)][V_I(\zeta_1',1,1;0) ]&= & \sum\limits^{n-1}_{p=0}[V_I(\zeta_1\zeta_1',1,1;n-p)]\\
& =& \sum\limits^{n-1}_{p=0}[V_I(\zeta_1\zeta_1',1,1;0)][V_0(1,1,1;n-p)].
\end{eqnarray*}
Hence, $x_{\zeta,1}x_{\zeta',1}=\sum\limits^{n-1}_{p=0}g^{n-p}x_{\zeta\zeta',1}=\mathfrak{s}x_{\zeta\zeta',1}$.
Next we discuss the case that $(\zeta_1\zeta_1')^{n_1}=1$. Let $\{m_0,\cdots,m_{n-1}\}$ and $\{m_0',\cdots,m_{n-1}'\}$ be the bases of $V_I(\zeta_1,1,1;0)$ and $V_I(\zeta_1',1,1;0)$ with the actions given by Equations (\ref{eq8})-(\ref{eq10}) respectively. Then $\{m_l\otimes m'_k,\; 0\leq l,k\leq n-1\}$ is a basis of $V_I(\zeta_1,1,1;0)\otimes V_I(\zeta_1',1,1;0)$. From the proof of Theorem \ref{V}, we have $V_I(\zeta_1,1,1;0)\otimes V_I(\zeta_1',1,1;0)=\bigoplus\limits^{n-1}_{p=0}H_\beta v_p$, where $v_p=\sum\limits^{p}_{l=0}\alpha_lm_{p-l}\otimes m'_l+\sum\limits^{n-1}_{l=p+1}\alpha_lm_{n+p-l}\otimes m'_l$ is as in the proof of Theorem \ref{V} for $0\leq p\leq n-1$. Since $\beta_3=0$, we obtain that $yxv_p=0$ for $0\leq p\leq n-1$. Hence
$$[V_I(\zeta_1,1,1;0)][V_I(\zeta_1',1,1;0)]=\sum\limits^{n-1}_{p=0}\sum\limits^{n}_{v=1}[V_0(\zeta_1\zeta_1',1,1;n-p-v)].$$ Since $g^n=1$, $\sum\limits_{p=0}^{n-1}\sum\limits_{v=1}^ng^{n-v-p}=\sum\limits_{p=0}^{n-1}g^{n-p}\sum\limits_{v=1}^{n}g^{n-v}=n\sum\limits_{p=0}^{n-1}g^p.$ Thus
\begin{eqnarray*}
[V_I(\zeta_1,1,1;0)][V_I(\zeta_1',1,1;0) ]&= & \sum\limits_{p=0}^{n-1}\sum\limits_{v=1}^ng^{n-p-v}[V_0(\zeta_1\zeta_1',1,1;0)]\\
&=& n\sum\limits_{p=1}^ng^p[V_0(\zeta_1\zeta_1',1,1;0)]\\
& =& n\mathfrak{s}g_{\zeta_1\zeta_1',1,1}.
\end{eqnarray*}
\end{proof}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_1\neq 0$.
\begin{corollary}\label{cor511} Suppose that $\beta_1\neq 0$. Then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,0,0)})=S_1[x^*]=\mathbb{Z}[g,h,x^*]$ with relations
\begin{equation}
{x^*}^{\frac{N}{(N/n,n_1)}}=
n^{\frac{N}{(N/n,n_1)}-1}\mathfrak{s}h.
\end{equation}
\end{corollary}
\begin{proof}Suppose that $V_I(\gamma,1,1;0)$ is an irreducible representation of $\mathcal{U}_{(n,N,n_1,q,\beta_1,0,0)}$. Then $\gamma^{\frac{N}n}=1$ and $\gamma^{n_1}\neq 1$.
Thus $\gamma\in\langle\mathfrak{q}^{n}\rangle\setminus\langle\mathfrak{q}^{\frac{N}{(N/n,n_1)}}\rangle$ by the proof of Corollary \ref{cor59}. Let $x^*=[V_I(\mathfrak{q}^n,1,1;0)]$.
Then $G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,0,0)})=\mathbb{Z}[g,h,x^*]$.
Since $\mathfrak{q}^{\frac{Nn_1}{(N/n,n_1)}}=1$, by Theorem \ref{thmVI} we get
$${x^*}^{\frac{N}{(N/n,n_1)}}={x^*}^{\frac{N}{(N/n,n_1)}-1}x^*=
n\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}h.$$
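Note that $g\mathfrak{s}=\mathfrak{s}$ and hence $\mathfrak{s}^2=n\mathfrak{s}$, so that
$$n\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}h=n\cdot n^{\frac{N}{(N/n,n_1)}-2}\mathfrak{s}h=n^{\frac{N}{(N/n,n_1)}-1}\mathfrak{s}h,$$
which is the relation stated in the corollary.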
\end{proof}
To determine the Grothendieck ring $G_0(H_\beta)$ for $\beta_2\neq 0$, we need the following lemma.
\begin{lemma}\label{L53}Suppose that $\beta_1(\gamma_1^{n_1}-\gamma_2^n)=0$, $\beta_2(\gamma_1^{n_1}-\gamma_3^n)\neq 0$ and $\beta_1(\gamma_1'^{n_1}-\gamma_2'^n)=\beta_2(\gamma_1'^{n_1}-\gamma_3'^n)=\beta_3({\gamma'_1}^{-\frac{2n_1}{n}}\gamma_2'\gamma_3'-q^{2n_1j})=0$. Then
\begin{eqnarray}
[V_{II}(\gamma_1,\gamma_2,\gamma_3;i)][V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)]&=&[V_{II}(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j)]\nonumber\\ &=&[V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)][V_{II}(\gamma_1,\gamma_2,\gamma_3;i)].\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
Since $\beta_2(\gamma_1^{n_1}-\gamma_3^n)\neq0$, $\beta_2\neq0$. Thus $\gamma_1'^{n_1}=\gamma_3'^n$ and $\gamma_3'=\gamma_1'^{\frac{n_1}{n}}q^s$ for some integer $1\leq s\leq n-1$. Hence $\beta_2((\gamma_1\gamma_1')^{n_1}-(\gamma_3\gamma_3')^n)=\beta_2(\gamma_1^{n_1}-\gamma_3^n)(\gamma'_3)^n\neq0$.
Since $\beta_1(\gamma_1^{n_1}-\gamma_2^n)=0$ and $\beta_1(\gamma_1'^{n_1}-\gamma_2'^n)=0$, $\beta_1((\gamma_1\gamma'_1)^{n_1}-(\gamma_2\gamma'_2)^n)=0$.
Let $\{m_0,m_1,\cdots,m_{n-1}\}$ be the basis of $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)$ with the actions given by Equations (\ref{eq11})-(\ref{eq13}). Then $\{m_0\otimes 1,m_1\otimes 1,\cdots, m_{n-1}\otimes 1\}$ is a basis of $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)$. With this basis, we have
$$a\cdot(m_l\otimes 1)=(a\cdot m_l)\otimes (a\cdot 1)=\sqrt[n]{\gamma_1\gamma'_1}q^{i+j+l}m_l\otimes 1,$$
$$b\cdot(m_l\otimes 1)=(b\cdot m_l)\otimes (b\cdot 1)=(\gamma_2\gamma'_2)m_l\otimes 1,$$
$$c\cdot(m_l\otimes 1)=(c\cdot m_l)\otimes (c\cdot 1)=(\gamma_3\gamma'_3)m_l\otimes 1,$$
$$x\cdot(m_l\otimes 1)=(x\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (b\cdot m_l)\otimes (x\cdot 1) =k_l\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l-1}\otimes 1,\; for\; 1\leq l\leq n-1,$$
$$x\cdot(m_0\otimes 1)=(x\cdot m_0)\otimes (a^{n_1}\cdot 1)+ (b\cdot m_0)\otimes (x\cdot 1) =k_0\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{n-1}\otimes 1,$$
$$y\cdot(m_l\otimes 1)=(y\cdot m_l)\otimes (a^{n_1}\cdot 1)+ (c\cdot m_l)\otimes (y\cdot 1) =\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_{l+1}\otimes 1,\; for\; 0\leq l\leq n-2,$$
$$y\cdot(m_{n-1}\otimes 1)=(y\cdot m_{n-1})\otimes (a^{n_1}\cdot 1)+ (c\cdot m_{n-1})\otimes (y\cdot 1) =\beta_2(\gamma_1^{n_1}-\gamma_3^n)\gamma'^{\frac{n_1}{n}}_1q^{jn_1}m_0\otimes 1.$$
Let $u_l=\gamma'^{\frac{n_1l}{n}}_1q^{jn_1l}m_l\otimes 1$. Then $\{u_0,\cdots, u_{n-1}\}$ is also a basis of $V_{II}(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)$.
Then we have
\begin{eqnarray*}au_p=\sqrt[n]{\gamma_1\gamma_1'}q^{i+j+p}u_p;\ \
bu_p=\gamma_2\gamma_2'u_p;\ \ cu_p=\gamma_3\gamma_3'u_p\qquad for\quad 0\leq p\leq n-1;
\end{eqnarray*}
\begin{eqnarray*}xu_p=\gamma_1'^{\frac{2n_1}{n}}q^{2jn_1}k_pu_{p-1},\qquad for\quad 1\leq p\leq n-1;\qquad
xu_0=\gamma_1'^{\frac{(2-n)n_1}{n}}q^{2jn_1}k_0u_{n-1};
\end{eqnarray*}
\begin{eqnarray*}yu_p=u_{p+1},\qquad for\quad 0\leq p\leq
n-2;\qquad yu_{n-1}=\beta_2((\gamma_1\gamma_1')^{n_1}-(\gamma_3\gamma_3')^n)u_0.
\end{eqnarray*}
Thus $$V_{II}(\gamma_1,\gamma_2,\gamma_3;i)\otimes V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\cong V_{II}(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j).$$
Similarly, we get $$V_0(\gamma'_1,\gamma'_2,\gamma'_3;j)\otimes V_{II}(\gamma_1,\gamma_2,\gamma_3;i)\cong V_{II}(\gamma_1\gamma'_1,\gamma_2\gamma'_2,\gamma_3\gamma'_3;i+j).$$
\end{proof}
\begin{remark}
By Lemma \ref{L53}, we get
\begin{equation}\label{EQ3}
V_{II}(\gamma_1,\gamma_2,\gamma_3,i)\cong V_{II}(\gamma_1',\gamma_2',1;0)\otimes V_0(\gamma_3^{\frac {n}{n_1}},\gamma_3q^{2n_1i},\gamma_3;i)
\end{equation}
where $\gamma_1'=\gamma_1\gamma_3^{-\frac {n}{n_1}}$ satisfying $\gamma_1'^{n_1}\neq 1$, $\gamma_2'=\gamma_2\gamma_3^{-1}q^{-2n_1i}$. If $\beta_1=\beta_3=0$, then $V_{II}(\gamma_1,\gamma_2,1;0)\cong V_{II}(\gamma_1,1,1;0)\otimes V_0(1,\gamma_2,1;0)$. If $\beta_1\neq 0$, then $V_{II}(\gamma_1,\gamma_2,1;0)=V_{II}(\gamma_1,\gamma_1^{\frac{n_1}{n}} q^u,1;0)=V_{II}(\gamma_1,\gamma_1^{\frac{n_1}{n}},1;0)\otimes V_0(1,q^u,1;0)$ for some integer $u$. If $\beta_1=0$, $\beta_3\neq 0$ and $\gamma_2=q^u$ for some integer $u$, then $V_{II}(\gamma_1,\gamma_2,1;0)\cong V_{II}(\gamma_1,1,1;0)\otimes V_0(1,\gamma_2,1;0)$.
\end{remark}
Let $y_{\epsilon_1,\epsilon_2}=[V_{II}(\epsilon_1,\epsilon_2,1;0)]$ for $\epsilon_1\in\widehat{{\bf K}_0}:=({\bf K}^*/\{\bar{\omega}^v\mid v\in \mathbb{Z}\})\setminus\{\overline{1}\}$, $\epsilon_2\in{\bf K}^*$. If $\beta_2\neq0$ and $\beta_1=\beta_3=0$, then the Grothendieck ring of $G_0(H_\beta)$ is isomorphic to a quotient ring of $R_2''=R_1[y_{\epsilon,1}|\epsilon\in\widehat{{\bf K}_0}]$. Similar to the proof of Theorem \ref{thmVI}, we have the following theorem.
\begin{theorem}\label{thmV} Suppose that $\beta_2\neq 0$ and $\beta_1=\beta_3=0$. Then the Grothendieck ring $G_0(H_\beta)=R_2'':=R_1[y_{\epsilon,1}|\epsilon\in\widehat{{\bf K}_0}]$ with the relations
\begin{equation}
y_{\epsilon_1,1}y_{\epsilon_2,1}=
\begin{cases}
\mathfrak{s}y_{\epsilon_1\epsilon_2,1},\ &(\epsilon_1\epsilon_2)^{n_1}\neq 1\\
n\mathfrak{s}g_{\epsilon_1\epsilon_2,1,1}, \ &(\epsilon_1\epsilon_2)^{n_1}=1.
\end{cases}
\end{equation}
\end{theorem}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_2\neq 0$.
\begin{corollary}\label{cor514} Suppose that $\beta_2\neq 0$. Then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q,0,\beta_2,0)})=S_2'':=S_1[y^*]$ with the relation
\begin{equation}{y^*}^{\frac{N}{(N/n,n_1)}}=
n^{\frac{N}{(N/n,n_1)}-1}\mathfrak{s}h.
\end{equation}
\end{corollary}
\begin{proof}Similar to the proof of Corollary \ref{cor511}.\end{proof}
\begin{theorem}\label{T9}Suppose that $\beta_1\beta_2\neq 0$ and $\beta_3=0$. Then the Grothendieck ring $R_3'$ of $H_\beta$ is equal to $R_1[x_{\zeta_1,\zeta_2}, y_{\epsilon_1,\epsilon_2} |(\zeta_1,\zeta_2)\in{\widehat {\bf K_0}\times \bf{K}^*}, \epsilon_1\in {\widehat {\bf K_0}}, \epsilon_2=\epsilon_1^{\frac{n_1}{n}}]$ with relations
\begin{equation}\label{T51}
x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=
\begin{cases}
\mathfrak{s}x_{\zeta_1\zeta_1',\zeta}, \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}-1\neq0\\
\mathfrak{s}g_{\zeta^{\frac{n}{n_1}},\zeta,\zeta}y_{\zeta_1\zeta_1'\zeta^{-\frac{n}{n_1}},\zeta^{-1}}, \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=1, (\zeta_1\zeta_1')^{n_1}\neq \zeta^n\\
n\mathfrak{s}g_{\zeta_1\zeta_1',1,\zeta}, \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=\zeta^n=1
\end{cases}
\end{equation}
\begin{equation}\label{T52}
y_{\epsilon_1,\epsilon_2}y_{\epsilon_1',\epsilon_2'}=
\begin{cases}
\mathfrak{s}y_{\epsilon_1\epsilon_1',\epsilon_2\epsilon_2'},\ &\text{if}\ \ (\epsilon_1\epsilon_1')^{n_1}\neq 1\\
n\mathfrak{s}g_{\epsilon_1\epsilon_1',\epsilon_2\epsilon_2',1}, \ &\text{if}\ \ (\epsilon_1\epsilon_1')^{n_1}=1
\end{cases}
\end{equation}
$$x_{\zeta_1,\zeta_2}y_{\epsilon_1,\epsilon_2}=\mathfrak{s}g_{\epsilon_1,\epsilon_2,\epsilon_2}x_{\zeta_1,\zeta_2\epsilon_2^{-1}}=y_{\epsilon_1,\epsilon_2}x_{\zeta_1,\zeta_2},$$
where $\zeta=\zeta_2\zeta_2'$.
\end{theorem}
\begin{proof} By the proofs of Theorem \ref{thmVI} and Theorem \ref{thmV}, we have Equation (\ref{T51}), except for the case $(\zeta_1\zeta_1')^{n_1}-1=0$, $(\zeta_1\zeta_1')^{n_1}-(\zeta_2\zeta_2')^n\neq0$, and Equation (\ref{T52}).
(1) Here, we discuss the case in which $((\zeta_1\zeta_1')^{n_1}-1)\beta_1=0$ and $((\zeta_1\zeta_1')^{n_1}-(\zeta_2\zeta_2')^n)\beta_2\neq0$. Then $(\zeta_1\zeta_1')^{n_1}=1$ and $(\zeta_2\zeta_2')^n\neq (\zeta_1\zeta_1')^{n_1}$.
Let $\{m_0,\cdots,m_{n-1}\}$ and $\{m_0',\cdots,m_{n-1}'\}$ be the basis of $V_I(\zeta_1,1,\zeta_2;0)$ and $V_I(\zeta_1',1,\zeta_2';0)$ with the actions given by Equations (\ref{eq8})-(\ref{eq10}) respectively. Then $\{m_l\otimes m'_k,\; 0\leq l,k\leq n-1\}$ is a basis of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)$. Let $v_p=m_0\otimes m'_p+\sum\limits^{n-1}_{l=p+1}\alpha_lm_{n+p-l}\otimes m'_l$ for $0\leq p\leq n-1$. Assume that $xv_0=\theta y^{n-1}v_0$. Similar to the proof of Theorem \ref{thmVI}, we can determine $\alpha_l, 1\leq l\leq n-1$.
Hence, $H_\beta v_0\cong V_{II}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';0)\cong V_0((\zeta_2\zeta_2')^\frac{n}{n_1},\zeta_2\zeta_2',\zeta_2\zeta_2';0)\otimes V_{II}(\zeta_1\zeta_1'(\zeta_2\zeta_2')^{-\frac{n}{n_1}},(\zeta_2\zeta_2')^{-1},1;0)$ and $H_\beta v_0$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)$.
Similarly, we can show that
\begin{eqnarray*}
(H_\beta v_p+\sum^{p-1}_{l=0}H_\beta v_l)/\sum\limits^{p-1}_{l=0}H_\beta v_l &\cong &V_{II}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p) \\
&\cong &V_{II}(\zeta_1\zeta_1'(\zeta_2\zeta_2')^{-\frac{n}{n_1}},(\zeta_2\zeta_2')^{-1},1;0)\otimes V_0((\zeta_2\zeta_2')^\frac{n}{n_1},\zeta_2\zeta_2',\zeta_2\zeta_2';n-p)
\end{eqnarray*}
and $(H_\beta v_p+\sum^{p-1}_{l=0}H_\beta v_l)/\sum\limits^{p-1}_{l=0}H_\beta v_l$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_I(\zeta_1',1,\zeta_2';0)/\sum\limits^{p-1}_{l=0}H_\beta v_l$ for $1\leq p\leq n-1$.
Hence $[V_I(\zeta_1,1,\zeta_2;0)][V_I(\zeta_1',1,\zeta_2';0)]=\sum\limits^{n-1}_{p=0}[V_{II}(\zeta_1\zeta_1',1,\zeta_2\zeta_2';n-p)]$. Consequently, $$x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=c_2y_{\zeta_1\zeta_1'(\zeta_2\zeta_2')^{-\frac{n}{n_1}},(\zeta_2\zeta_2')^{-1}}$$ where $c_2=\sum\limits^{n-1}_{p=0}[V_0((\zeta_2\zeta_2')^\frac{n}{n_1},\zeta_2\zeta_2',\zeta_2\zeta_2';n-p)]=\sum\limits^{n-1}_{p=0}g^p[V_0((\zeta_2\zeta_2')^\frac{n}{n_1},\zeta_2\zeta_2',\zeta_2\zeta_2';0)]
$.
(2) Let $\{m_0,\cdots,m_{n-1}\}$ and $\{m_0'',\cdots,m_{n-1}''\}$ be the bases of $V_I(\zeta_1,1,\zeta_2;0)$ and $V_{II}(\epsilon_1,\epsilon_2,1;0)$ with the actions given by Equations (\ref{eq8})-(\ref{eq10}) and (\ref{eq11})-(\ref{eq13}) respectively. Then $\{m_l\otimes m_k'',\; 0\leq l,k\leq n-1\}$ is a basis of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_{II}(\epsilon_1,\epsilon_2,1;0)$. Let $v_p'=\sum\limits^{n-p-1}_{v=0}\alpha_vm_{p+v}\otimes m_v''$ with $\alpha_0=1$.
Similar to the proof of Theorem \ref{V}, we can prove $$\sum^{p}_{l=0}H_\beta v_l'/\sum^{p-1}_{l=0}H_\beta v_l'\cong V_I(\zeta_1\epsilon_1,\epsilon_2,\zeta_2;n-p)\cong V_I(\zeta_1,1,\zeta_2\epsilon_2^{-1};0)\otimes V_0(\epsilon_1,\epsilon_2,\epsilon_2;n-p)$$
and $\sum^{p}_{l=0}H_\beta v_l'/\sum^{p-1}_{l=0}H_\beta v_l'$ is an irreducible submodule of $V_I(\zeta_1,1,\zeta_2;0)\otimes V_{II}(\epsilon_1,\epsilon_2,1;0)/\sum\limits^{p-1}_{l=0}H_\beta v_l'$. Hence
$$V_I(\zeta_1,1,\zeta_2;0)\otimes V_{II}(\epsilon_1,\epsilon_2,1;0)=\sum\limits^{n-1}_{p=0}V_I(\zeta_1,1,\zeta_2\epsilon_2^{-1};0)\otimes V_0(\epsilon_1,\epsilon_2,\epsilon_2;n-p).$$
Similarly, we have $V_{II}(\epsilon_1,\epsilon_2,1;0)\otimes V_I(\zeta_1,1,\zeta_2;0)=\sum\limits^{n-1}_{p=0}V_I(\zeta_1,1,\zeta_2\epsilon_2^{-1};0)\otimes V_0(\epsilon_1,\epsilon_2,\epsilon_2;n-p)$. Consequently, we obtain $x_{\zeta_1,\zeta_2}y_{\epsilon_1,\epsilon_2}=c_3x_{\zeta_1,\zeta_2\epsilon_2^{-1}}=y_{\epsilon_1,\epsilon_2}x_{\zeta_1,\zeta_2}$, where $c_3=\sum\limits^{n-1}_{p=0}[V_0(\epsilon_1,\epsilon_2,\epsilon_2;n-p)]$.
\end{proof}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_1\beta_2\neq 0$.
\begin{corollary}\label{cor516} Suppose that $\beta_1\beta_2\neq 0$. Then the Grothendieck ring $G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,\beta_2,0)})=S_3':=S_1[x^*]=\mathbb{Z}[g,h,x^*]$ with relation
\begin{equation}{x^*}^{\frac{N}{(N/n,n_1)}}=
n^{\frac{N}{(N/n,n_1)}-1}\mathfrak{s}h.
\end{equation}
\end{corollary}
\begin{proof}Since $\beta_1\neq 0$, the condition $\beta_1(\gamma^{n_1}-1)=0$ forces $\gamma^{n_1}=1$, so $\mathcal{U}_{(n,N,n_1,q,\beta_1,\beta_2,0)}$ has no irreducible representation $V_{II}(\gamma,1,1;0)$. Thus
$S_3'=S_1[x^*]$. The rest of the proof is
similar to that of Corollary \ref{cor511}.\end{proof}
Let $\{m_0',\cdots,m_{r-1}'\}$ be the basis of $V_r(\gamma'_1,\gamma'_2,\gamma'_3;j)$ with the actions given by Equations (\ref{eq15})-(\ref{eq17}). Since $k'_l\neq0$ for $1\leq l\leq r-1$, $\{m''_l:=\prod^{r-1}_{v=r-l}k'_vm'_{r-l-1}|0\leq l\leq r-1\}$ is also a basis of $V_r(\gamma'_1,\gamma'_2,\gamma'_3;j)$. The action of $H_\beta$ on this basis is given by
$am''_l=\prod\limits^{r-1}_{v=r-l}k'_vam'_{r-l-1}=(\gamma'_1)^\frac{1}{n}q^{j-r+1+l}m''_l,$ \quad for $0\leq l\leq r-1$;
$xm''_l=k'_{r-l}m''_{l-1}$,\quad\quad for $1\leq l\leq r-1$; \quad\quad\quad\quad$xm''_0=0$;
$ym''_l=m''_{l+1}$,\quad\quad\quad\quad for $0\leq l\leq r-2$; \quad\quad\quad\quad$ym''_{r-1}=0$.
Let $\overline{\bf{K_0}}:=({\bf{K}^*}/\langle q \rangle)\setminus \{\bar{1}\}$. By the same method as in the proof of Theorem \ref{V}, we obtain the following theorem.
\begin{theorem}\label{T10}
Suppose that $\beta_2\beta_3\neq 0$ and $\beta_1=0$. Let $\widetilde{z}_{\xi}=[V_t(1,\xi,1;0)]$, where $\xi\in\overline{{\bf K}^*}$.
Then the Grothendieck ring $R_3''$ of $H_\beta$ is isomorphic to the commutative ring $R_1[z_2,\tilde{z}_{\xi}, y_{\epsilon_1,\epsilon_2}|(\epsilon_1,\epsilon_2)\in\widehat{{\bf K}_0}\times \overline{{\bf K}_0 }, \xi\in\overline{{\bf K}^*}]$ with relations $$y_{\epsilon_1,\epsilon_2}z_2=y_{\epsilon_1q^\frac{n}{2},\epsilon_2}+g^{n-1}y_{\epsilon_1q^\frac{n}{2},\epsilon_2},\qquad y_{\epsilon_1,\epsilon_2}\widetilde{z}_{\xi}=\mathfrak{s}''y_{\epsilon_1,\epsilon_2\xi},$$
$$z_2\tilde{z}_\xi=\eta'(\tilde{z}_{\xi q^{-n_1}}+g^{n-1}\tilde{z}_{\xi q^{-n_1}}),\quad \tilde{z}_\xi\tilde{z}_{\xi'}=\mathfrak{s}''\tilde{z}_{\xi\xi'}, \ for \ \xi\xi'\in \overline{{\bf K}^*},$$
$$\tilde{z}_\xi\tilde{z}_{\xi'}=\sum\limits^{t-1}_{p=0,r_p<t}g^{n-p}(g^{-r_p}g_{q^{-\frac{(t-r_p-1)n}{2}},\xi\xi',1}z_{t-r_p}+g_{q^{-\frac{(r_p-1)n}{2}},\xi\xi',1}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}g_{q^{-\frac{(t-1)n}{2}},\xi\xi',1}z_t$$ for $\xi\xi'=q^{rn_1},$
\begin{equation}\label{eq29}
y_{\epsilon_1,\epsilon_2}y_{\epsilon_1',\epsilon_2'}=
\begin{cases}
\mathfrak{s}y_{\epsilon_1\epsilon_1',\epsilon_2\epsilon_2'},\ &\text{if}\ \ (\epsilon_1\epsilon_1')^{n_1}\neq 1\\
u\mathfrak{s}g_{\epsilon_1\epsilon_1',1,1}\widetilde{z}_{\epsilon_2\epsilon_2'}, \ &\text{if}\ \ (\epsilon_1\epsilon_1')^{n_1}=1, \epsilon_2\epsilon_2'\in \overline{{\bf K}^*}\\
\mathfrak{s}g_{\epsilon_1\epsilon_1',\epsilon_2\epsilon_2',1}\sum\limits_{p=0}^{n-1}g^{1-p}(z_{t-r_p}+g^{-r_p}z_{r_p}), \ &\text{if}\ \ (\epsilon_1\epsilon_1')^{n_1}=1, \epsilon_2\epsilon_2'=q^{rn_1}
\end{cases}
\end{equation}
where $\eta'=[V_0(q^\frac{n}{2},q^{n_1},1;0)]$ and $r_p=(r-2p)\ mod(t)$, $r$ is the minimal positive integer such that $\epsilon_2\epsilon_2'=q^{rn_1}$.
\end{theorem}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_2\beta_3\neq 0$.
\begin{corollary}\label{cor518}
Suppose that $\beta_2\beta_3\neq 0$.
\begin{itemize}
\item [(1)]Suppose $(N,nn_1,2n_1t)<(nn_1,N)$.
If either $2n\mid N$, or $2n\nmid N$ and $2(n,n_1)\nmid n$, then the Grothendieck ring $S_3'=G_0(\mathcal{U}_{(n,N,n_1,q,0,\beta_2,\beta_3)})$ is isomorphic to the commutative ring
$S_2[ y^*]=\mathbb{Z}[g,h_2,z_2,z'',y^*]$ with relations (\ref{Eq*1}), (\ref{T2}), $z_2z''=(1+g^{n-1})z''$,
$z_2y^*=(1+g^{n-1})y^*$ and ${y^*}^{\frac{N}{(N/n,n_1)}}=u\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}z''$.
If $2n\nmid N$ and $2(n,n_1)\mid n$, then the Grothendieck ring $S_3':=G_0(\mathcal{U}_{(n,N,n_1,q,0,\beta_2,\beta_3)})$ is isomorphic to the commutative ring
$S_2[ y^*]=\mathbb{Z}[g,h_2,z_3,z'',y^*]$ with relations (\ref{T3}), (\ref{T2}), $z_3z''=(1+g^{n-1}+g^{n-2})z''$, $z_3y^*=(1+g^{n-1}+g^{n-2})y^*$ and ${y^*}^{\frac{N}{(N/n,n_1)}}=u\mathfrak{s}^{\frac{N}{(N/n,n_1)}-1}z''$.
\item [(2)]Suppose that $\beta_2\beta_3\neq 0$ and $(N,nn_1,2n_1t)=(nn_1,N)$. Then the Grothendieck ring $$G_0(\mathcal{U}_{(n,N,n_1,q,0,\beta_2,\beta_3)})\cong
\begin{cases}
\mathbb{Z}[g,h_2,z_2,y^*], &\text{if}\ \ 2n\mid N \ \text{or}\ 2n\nmid N \ \text{and}\ 2(n,n_1)\nmid n\\
\mathbb{Z}[g,h_2,z_3,y^*], &\text{if}\ \ 2n\nmid N\ \text{and}\ 2(n,n_1)\mid n
\end{cases}$$ is a subring of $S_3'$.
\end{itemize}
\end{corollary}
Similar to the proofs of Theorems \ref{L55}-\ref{T10}, we have the following theorem.
\begin{theorem}\label{Th5.12}
Suppose that $\beta_1\beta_2\beta_3\neq 0$. Then the Grothendieck ring $R_4$ of $H_\beta$ is equal to $R_1[z_2,z_\xi', x_{\zeta_1,\zeta_2}, y_{\epsilon_1,\epsilon_2}|\xi\in\widetilde{\bf K_0}, (\zeta_1,\zeta_2)\in \overline{\bf K_0}\times{\bf K}^*, (\epsilon_1,\epsilon_2)\in {\widehat{ {\bf K}_0}}\times \overline{{\bf K}_0}, \epsilon_1^{n_1}=\epsilon_2^n]$ with relations (\ref{Eq*1}), (\ref{eq29}),
$$z_2z_\xi'=z_{\xi q^\frac{n}{2}}'+g^{n-1}z_{\xi q^\frac{n}{2}}',\qquad
z_2x_{\zeta_1,\zeta_2}=x_{\zeta_1q^\frac{n}{2},\zeta_2}+g^{n-1}x_{\zeta_1q^\frac{n}{2},\zeta_2},\qquad x_{\zeta_1,\zeta_2}z_\xi'=g^{n-t}\mathfrak{s}''x_{\zeta_1\xi,\zeta_2}, $$
$$ x_{\zeta_1,\zeta_2}y_{\epsilon_1,\epsilon_2}=\mathfrak{s}g_{\epsilon_1,\epsilon_2,\epsilon_2}x_{\zeta_1,\zeta_2\epsilon_2^{-1}},\qquad
y_{\epsilon_1,\epsilon_2}z_2=y_{\epsilon_1q^\frac{n}{2},\epsilon_2}+g^{n-1}y_{\epsilon_1q^\frac{n}{2},\epsilon_2}, \qquad y_{\epsilon_1,\epsilon_2}z_{\xi}'=\mathfrak{s}''y_{\epsilon_1\xi,\epsilon_2},$$
\begin{equation}
z_\xi'z_{\xi'}'=
\begin{cases}
g^{n-t}\mathfrak{s}''z'_{\xi\xi'} \ &\text{if}\ \ \xi\xi'\in \widetilde{\bf K}_0,\\
\sum\limits^{t-1}_{p=0,r_p<t}(g^{n-p-r_p}z_{t-r_p}+g^{n-p}z_{r_p})+\sum\limits_{p=0,r_p=t}^{t-1}g^{n-p}z_t, \ &\text{if}\ \ \xi\xi'\in\{q^{un_1}\mid u\in{\mathbb{Z}}\}
\end{cases},\nonumber
\end{equation}
and
\begin{equation}
x_{\zeta_1,\zeta_2}x_{\zeta_1',\zeta_2'}=
\begin{cases}
\mathfrak{s}x_{\zeta_1\zeta_1',\zeta_2\zeta_2'}, \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}\neq 1\\
\mathfrak{s}g_{\zeta,\zeta^*,\zeta^*}y_{\zeta''\zeta^{-1},(\zeta^*)^{-1}}, \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=1, (\zeta_1\zeta_1')^{n_1}\neq (\zeta_2\zeta_2')^n\\
\mathfrak{s}'g_{\zeta'',1,\zeta^*}(\sum\limits_{p=0}^{n-1}g^{r-p}z_{t-r_p}+g^pz_{r_p}), \ &\text{if}\ \ (\zeta'')^{n_1}=(\zeta^*)^n=1,\ \zeta^*=(\zeta'')^{\frac{2n_1}{n}}q^{(-r+1)n_1} \\
u\mathfrak{s}g_{\zeta^{\frac12},1,\zeta^*}z_{\zeta''\zeta^{-\frac12}}', \ &\text{if}\ \ (\zeta_1\zeta_1')^{n_1}=(\zeta_2\zeta_2')^n=1,(\zeta'')^{-\frac{2n_1}{n}}\zeta^*\notin\langle q^{n_1}\rangle,
\end{cases}\nonumber
\end{equation}
where $\zeta=(\zeta_2\zeta_2')^{\frac{n}{n_1}}$, $\zeta^*=\zeta_2\zeta_2'$, $\zeta''=\zeta_1\zeta_1'$,
$r_p=(r-2p)\ mod(t)$ and $1\leq r_p\leq t$, $r$ is the minimal positive integer such that $\zeta_2\zeta_2'=(\zeta_1\zeta_1')^{\frac{2n_1}{n}}q^{(-r+1)n_1} $.
\end{theorem}
In particular, we obtain the structure of the Grothendieck ring of Gelaki's Hopf algebra when $\beta_1\beta_2\beta_3\neq 0$.
\begin{corollary}\label{cor520}
Suppose that $\beta_1\beta_2\beta_3\neq 0$.
\begin{itemize}
\item[(1)] Suppose $(N,nn_1,2n_1t)<(nn_1,N)$. If either $2n\mid N$, or $2n\nmid N$ and $2(n,n_1)\nmid n$, then the Grothendieck ring $S_3''=G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)})$ is isomorphic to the commutative ring
$S_2[ x^*]=\mathbb{Z}[g,h_2,z_2,z'',x^*]$ with the relations in Corollary \ref{cor59}(1.1).
If $2n\nmid N$ and $2(n,n_1)\mid n$, then the Grothendieck ring $S_3'':=G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)})$ is isomorphic to the commutative ring
$S_2[ x^*]=\mathbb{Z}[g,h_2,z_3,z'',x^*]$ with the relations in Corollary \ref{cor59}(1.2).
\item[(2)] Suppose that $\beta_1\beta_2\beta_3\neq 0$ and $(N,nn_1,2n_1t)=(nn_1,N)$. Then the Grothendieck ring $$G_0(\mathcal{U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)})\cong
\begin{cases}
\mathbb{Z}[g,h_2,z_2,x^*], &\text{if}\ \ 2n\mid N \ \text{or}\ 2n\nmid N \ \text{and}\ 2(n,n_1)\nmid n\\
\mathbb{Z}[g,h_2,z_3,x^*], &\text{if}\ \ 2n\nmid N\ \text{and}\ 2(n,n_1)\mid n
\end{cases}$$ is a subring of $S_3''$.
\end{itemize}
\end{corollary}
Let $\omega$ be a primitive $N$-th root of unity. If
$N\nmid n_1^2$, then $U_{(N,n_1,\omega)}={\mathcal U}_{(N/(N,n_1),N,n_1,\omega^{n_1},0,0,1)}$ and ${\mathcal
U}_{(N/(N,n_1),N,n_1,\omega^{n_1},0,0,\gamma)}\simeq
U_{(N,n_1,\omega)}$ as Hopf algebras for any $\gamma\in {\bf
K}^*$. Hence $V_0(\gamma_1,1,1;i)$ and $V_r(\gamma_1,1,1;i)$ are irreducible representations of $U_{(N,n_1,\omega)}$, where $\gamma_1^{(N,n_1)}=1$. Thus, the Grothendieck ring of $U_{(N,n_1,\omega)}$ is the same as the Grothendieck ring $G_0({\mathcal U}_{(n,N,n_1,q,0,0,\beta_3)})$ with $\beta_3\neq0$ in Corollary \ref{cor56}.
\begin{corollary}\label{corford}
Let $h'=[V_0(\omega^{n'},1,1;0)]$ and $z'=[V_t(\omega^{n},1,1;0)]$. Then $g^n=h'^{\frac{N}{(N,n')}}=1$, where $n=\frac{N}{(N,\nu)}$ and $n'=\frac{N^2}{(N^2,N\nu,2\nu^2)}$.
\begin{itemize}
\item[(1)] Suppose that $(\nu^2,N)\nmid 2\nu$.
If either $2\mid (N,\nu)$, or $2\nmid (N,\nu)$ and $2(N,\nu^2)\nmid N$, then the Grothendieck ring $S=G_0(U_{(N,\nu,\omega)})\cong\mathbb{Z}[g,h',z_2,z']$ with relations (\ref{Eq*1}), (\ref{EQ23}) and $z_2z'=(1+g^{n-1})z'$.
If $2\nmid (N,\nu)$ and $2(N,\nu^2)\mid N$, then the Grothendieck ring $S=G_0(U_{(N,\nu,\omega)})\cong\mathbb{Z}[g,h',z_3,z']$ with relations (\ref{EQ23}), (\ref{T3}) and $z_3z'=(1+g^{n-1}+g^{n-2})z'$.
\item[(2)]Suppose that $2\nu=(\nu^2,N)$. Then the Grothendieck ring $$G_0(U_{(N,\nu,\omega)})\cong
\begin{cases}
\mathbb{Z}[g,h',z_2], &\text{if}\ \ 2\mid (N,\nu), \ \text{or}\ 2\nmid (N,\nu) \ \text{and}\ 2(N,\nu^2)\nmid N\\
\mathbb{Z}[g,h',z_3], &\text{if}\ \ 2\nmid (N,\nu)\ \text{and}\ 2(N,\nu^2)\mid N
\end{cases}$$ is a subring of $S$.
\end{itemize}
\end{corollary}
\begin{remark}
By Corollary \ref{cor56}-Corollary \ref{cor520}, we have
$$G_0({\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,0)})=G_0({\mathcal U}_{(n,N,n_1,q,\beta_1,0,0)})\cong G_0({\mathcal U}_{(n,N,n_1,q,0,\beta_2,0)})$$ and $$G_0({\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)})=G_0({\mathcal U}_{(n,N,n_1,q,\beta_1,0,\beta_3)})\cong G_0({\mathcal U}_{(n,N,n_1,q,0,\beta_2,\beta_3)}).$$
But ${\mathcal U}_{(n,N,n_1,q,\beta_1,\beta_2,\beta_3)}\ncong{\mathcal U}_{(n,N,n_1,q,\beta_1,0,\beta_3)}$ by \cite[Proposition 3.2]{G}.
\end{remark}
\end{document}
\begin{document}
\title{Efficient Representation for \penalty-1 Online Suffix Tree Construction}
\begin{abstract}
Suffix tree construction algorithms based on \emph{suffix links} are popular because
they are simple to implement, can operate \emph{online} in linear time, and because the suffix links are
often convenient for pattern matching. We present an approach using
\emph{edge-oriented} suffix links, which reduces the number of branch
lookup operations (known to be a bottleneck in construction time) with some
additional techniques to reduce construction cost. We discuss various effects of our approach and
compare it to previous techniques. An experimental evaluation shows that we
are able to reduce construction time to around half that of the original algorithm, and
about two thirds that of previously known branch-reduced construction.
\end{abstract}
\section{Introduction}\label{sec-intro}
The \emph{suffix tree} is arguably the most important data structure in string
processing, with a wide variety of
applications~\cite{Apostolico85,gusfield,SufComp}, and with a number of
available construction
algorithms~\cite{Weiner73,McR,UkkoOnli,FarFOCS,giegkurtzstoyetopdown,canovas2010practical},
each with its benefits. Improvements in its efficiency of construction and
representation continues to be a lively area of research, despite the fact that
from a classical asymptotic time complexity perspective, optimal solutions have
been known for decades. Pushing the edge of efficiency is critical for indexing
large inputs, and makes large-scale experiments feasible, e.g., in
genetics, where lengths of available genomes increase. Much work has been
dedicated to reducing the memory footprint with representations that are
compact~\cite{kurtzsuftree} or compressed (see C{\'a}novas and
Navarro~\cite{canovas2010practical} for a practical view, with references to
theoretical work), and to alternatives requiring less space, such as suffix
arrays~\cite{Manber93}. Other work addresses the growing performance gap
between cache and main memory, frequently using algorithms originally designed
for secondary
storage~\cite{ClarkMunro,ferraginaHierarchxs,TsirogiannisModernSuffix,TianSuffixVLDB}.
While memory-reduction is important, it typically requires elaborate operations
to access individual fields, with time overhead that can be deterring for some
applications. Furthermore, compaction by a reduced number of pointers per node
is ineffective in applications that use those pointers for pattern matching. Our
work ties in with the more direct approach to improving performance of the
conventional primary storage suffix tree representation, taken by Senft and
Dvořák~\cite{SenftBranching}. Classical representations required in Ukkonen's
algorithm~\cite{UkkoOnli} and the closely related predecessor of
McCreight~\cite{McR} remain important in application areas such as genetics,
data compression and data mining, since they allow online construction as well
as provide \emph{suffix links}, a feature useful not only in construction,
but also for string matching tasks~\cite{gusfield,kielbasa2011adaptive}. In
these algorithms, a critically time-consuming operation is \emph{branch}:
identifying the correct outgoing edge of a given node for a given
character~\cite{SenftBranching}. This work introduces and evaluates several
representation techniques to help reduce both the number of branch operations
and the cost of each such operation, focusing on running time, and taking an
overall liberal view on space usage.
Our experimental evaluation of runtime, memory locality, and the counts for
critical operations, shows that a well chosen combination of our presented
techniques consistently produce a significant advantage over the original
Ukkonen scheme as well as the branch-reduction technique of Senft and Dvořák.
\section{Suffix Trees and Ukkonen's Algorithm}\label{sec-defs}
We denote the \emph{suffix tree} (illustrated in fig.~\ref{fig-st}) over a
string $T=t_0\cdots t_{N-1}$ of length $|T|=N$ by $\ST$\!. Each edge in
$\ST$\!, directed downwards from the root, is labeled with a substring of
$T$\!, represented in constant space by reference to position and length in
$T$\!. We define a \emph{point} on an $\ST$ edge as the position between two
characters of its label, or -- when the point coincides with a node -- after
the whole edge label. Each point in the tree corresponds to precisely one
nonempty substring $t_i\cdots t_j$, $0\le i\leq j<N$, obtained by reading edge
labels on the path from the root to that point. A consequence is that the first
character of an edge label uniquely identifies it among the outgoing edges of a
node. The point corresponding to an arbitrary pattern can be located (or found
non-existent) by scanning characters left to right, matching edge labels from
the root down. For convenience, we add an auxiliary node $\topnode$ above the
root (following Ukkonen), with a single edge to the root. We denote this edge
$\topedge$ and label it with the empty string, which is denoted by
$\emptystring$. (Although $\topnode$ is the topmost node of the augmented tree,
we consistently refer to the root of the unaugmented tree as the root of
$\ST$\!.) Each leaf corresponds to some suffix~$t_i\cdots t_{N-1}$, $0\le
i<N$. Hence, the label endpoint of a leaf edge can be defined implicitly,
rather than updated during construction. Note, however, that any suffix that is
not a unique substring of~$T$ corresponds to a point higher up in the tree. (We
do not, as is otherwise common, require that $t_{N-1}$ is a unique character,
since this clashes with online construction.)
Except for $\topedge$, all edges are labeled with nonempty strings,
and the tree represents exactly the substrings of $T$ in the minimum number of
nodes. This implies that each node is either $\topnode$, the root, a leaf, or a
non-root node with at least two outgoing edges. Since the number of leaves is
at most $N$ (one for each suffix), the total number of nodes cannot exceed
$2N+1$ (with equality for $N=1$).
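To make the representation concrete, the following is a minimal sketch in Java
(field names are illustrative and not taken from our actual implementation) of
a combined node/incoming-edge record that stores its label as a position and
length in $T$\!, with an implicit end for leaf edges:
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a combined node/incoming-edge record.  The edge label is
// given by (labelStart, labelLength) into T; leaf edges use an implicit end,
// encoded here as labelLength = Integer.MAX_VALUE.
final class StNode {
    int labelStart;
    int labelLength;
    StNode suffixLink;                       // unused for leaves and the top node
    Map<Character, StNode> children = new HashMap<>();  // branch structure

    StNode(int labelStart, int labelLength) {
        this.labelStart = labelStart;
        this.labelLength = labelLength;
    }

    // The first character of the label identifies the edge among its siblings.
    char firstLabelChar(String t) { return t.charAt(labelStart); }
}
\end{verbatim}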
We generalize the definition to $\STi{i}$ over string $T_i=t_0 \cdots
t_{i-1}$, where $\STi{N}=\ST$. An \emph{online} construction algorithm
constructs $\ST$ in $N$ updates, where update $i$ reshapes
$\STi{i-1}$ into $\STi{i}$, without looking ahead any further than $t_{i-1}$.
We describe suffix tree construction based on Ukkonen's algorithm~\cite{UkkoOnli}. Please refer to Ukkonen's original presentation, which is
closer to an actual implementation, for details such as
correctness arguments.
Define the
\emph{active point} before update $i>1$ as the point corresponding to the
longest suffix of $T_{i-1}$ that is not a unique substring of
$T_{i-1}$. Thanks to the implicit label endpoint of leaf edges, this is the point
of the longest string where update $i$ might alter the tree. The active
point is moved once or more in update $i$, to reach the corresponding start
position for update $i+1$. (This diverges
slightly from Ukkonen's use, where the active point is only defined as the
start point of the update.) Since any leaf corresponds to a
suffix, the label end position of any point coinciding with a leaf in $\STi{i}$
is $i-1$. The tree is augmented with \emph{suffix links},
pointing upwards in the tree: Let $v$ be a non-leaf node that coincides with
the string $aA$ for some character $a$ and string $A$. Then the suffix link of
$v$ points to the node coinciding with the point of $A$. The suffix link of the
root leads to $\topnode$, which has no suffix link. Before the first update,
the tree is initialized to $\STi{0}$ consisting only of $\topnode$ and the
root, joined by $\topedge$, and the active point is set to the endpoint of
$\topedge$ (i.e.\@ the root). Update $i$ then proceeds as
follows:
\begin{enumerate}
\item If the active point coincides with $\topnode$, move it down one step to
the root, and finish the update.
\item Otherwise, attempt to move the active point one step down, by scanning
over character $t_i$. If the active point is at the end of an edge, this
requires a \emph{branch} operation, where we choose among the outgoing
edges of the node. Otherwise,
simply try matching the character following the point with $t_i$. If the move down
succeeds, i.e., $t_i$ is present just below the active point, the update
is finished. Otherwise, keep the current active point for now, and continue with
the next step.
\item\label{step-ukksplit} Unless the active point is at the end of an edge,
split the edge at the active point and introduce a new node. If there is a
saved node $v_p$ (from step~\ref{step-ukksavevp}), let $v_p$'s suffix link
point to the new node. The active point now coincides with a node, which we
denote $v$.
\item Create a new leaf $w$ and make it a child of $v$. Set the start pointer
of the label on the edge from $v$ to $w$ to $i$ (the end pointer of leaf
labels being implicit).
\item If the active point corresponds to the root, move it to
$\topnode$. Otherwise, we should move the active point to correspond
to the string $A$, where $aA$ is the string corresponding to $v$ for some character
$a$. There are two cases:
\begin{enumerate}
\item Unless $v$ was just created, it has a suffix link, which
we can simply follow to directly arrive at a node that coincides with the
point we seek.
\item\label{step-ukkrescan} Otherwise, i.e. if $v$'s suffix link is not yet set, let $u$ be
the parent of $v$, and follow the suffix link of $u$ to $u'$. Then locate
the edge below $u'$ containing the point that corresponds to $A$. Set this
as the active point. Moving down from $u'$ requires one or more branch
operations, a process referred to as \emph{rescanning} (see fig.~\ref{fig-linkstyles}). If
the active point now coincides with a node $v'$, set the suffix link of $v$
to point to $v'$. Otherwise, save $v$ as $v_p$ to have its suffix link set
to the node created next.\label{step-ukksavevp}
\end{enumerate}
\item Continue from step~1.
\end{enumerate}
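As an illustration of step~2 above, the following sketch (reusing the
\texttt{StNode} record sketched earlier; the active-point fields and method
names are ours and purely illustrative) attempts to move the active point down
over $t_i$, performing a branch operation only when the point coincides with a
node:
\begin{verbatim}
// Sketch of step 2: the active point is (activeNode, activeEdge, activeOffset),
// where activeEdge is the child edge being scanned and activeOffset counts the
// characters matched on it (activeEdge == null means the point is the node).
final class ActivePoint {
    StNode activeNode;
    StNode activeEdge;
    int activeOffset;

    // Try to scan over character t_i; returns true iff the move down succeeded.
    boolean tryMoveDown(String t, int i) {
        char c = t.charAt(i);
        if (activeEdge == null) {
            StNode e = activeNode.children.get(c);   // the branch operation
            if (e == null) return false;
            activeEdge = e;
            activeOffset = 1;
        } else {
            if (t.charAt(activeEdge.labelStart + activeOffset) != c) return false;
            activeOffset++;
        }
        // If an explicit label has been fully matched, the point coincides
        // with the child node; leaf labels (implicit end) never trigger this.
        if (activeEdge.labelLength != Integer.MAX_VALUE
                && activeOffset == activeEdge.labelLength) {
            activeNode = activeEdge;
            activeEdge = null;
            activeOffset = 0;
        }
        return true;
    }
}
\end{verbatim}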
\section{Reduced Branching Schemes}\label{sec-redbranch}
Senft and Dvořák~\cite{SenftBranching} observe that the \emph{branch} operation,
searching for the right outgoing edge of a node, typically dominates execution
time in Ukkonen's algorithm. Reducing the cost of branch can significantly improve
construction efficiency. Two paths are possible: attacking the cost of the
branch operation itself, through the data structures that support it, which
we consider in section~\ref{sec-hashvsll}, and reducing the \emph{number}
of branch operations in step~\ref{step-ukkrescan} of
the update algorithm.
We refer to Ukkonen's original method of maintaining and using suffix links as
\emph{node-oriented top-down} (\notd). Section~\ref{sec-nobu} discusses the
\emph{bottom-up} approach (\nobu) of Senft and Dvořák, and
sections~\ref{sec-eotd}--\ref{sec-eov} present our novel approach of
\emph{edge-oriented} suffix links, in two variants \emph{top-down} (\eotd) and
\emph{variable} (\eov).
\subsection{Node-Oriented Bottom-Up}\label{sec-nobu}
A branch operation comprises the rather expensive task of locating, given a
node $v$ and character $c$, $v$'s outgoing
edge whose edge label begins with $c$, if one exists. By contrast, following an
edge in the opposite direction can be made much cheaper, through a parent pointer. Senft and Dvořák~\cite{SenftBranching}
suggest the following simple modification to suffix tree representation and
construction:
\begin{itemize}
\item Maintain parents of nodes, and suffix links for
leaves as well as non-leaves.
\item In step~\ref{step-ukkrescan} of
update, follow the suffix link of $v$ to $v'$ rather than that of its
parent $u$ to $u'$, and locate the point corresponding to $A$ moving up,
\emph{climbing} from $v'$
rather than rescanning from $u'$ (see fig.~\ref{fig-linkstyles}).
\end{itemize}
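The climb itself amounts to a few pointer steps. The following is a minimal,
self-contained sketch (field names are assumptions, presuming the
representation also stores each node's string depth); the exact offset within
the edge below the node reached follows from the depths:
\begin{verbatim}
// Sketch of the NOBU climb: from the node v' reached via a suffix link, walk
// up via parent pointers until the string depth does not exceed that of the
// point sought.  No branch operations are needed.
final class ClimbNode {
    ClimbNode parent;        // maintained explicitly in this representation
    int stringDepth;         // length of the string the node coincides with
}

final class Nobu {
    static ClimbNode climb(ClimbNode vPrime, int targetDepth) {
        ClimbNode cur = vPrime;
        while (cur.stringDepth > targetDepth) {
            cur = cur.parent;
        }
        return cur;
    }
}
\end{verbatim}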
Senft and Dvořák experimentally demonstrate a runtime improvement across a
range of typical inputs. A drawback is that worst case time complexity is
not linear: a class of inputs with time complexity $\Omega(N^{1.5})$ is
easily constructed, and it is unknown whether actual worst case
complexity is even higher. To circumvent degenerate cases, Senft and Dvořák
suggest a hybrid scheme where climbing stops after $c$ steps, for constant $c$, falling back to rescan. (As an alternative, we
suggest bounding the number of edges to climb by using rescan iff the remaining edge label length below the active point
exceeds constant $c'$.) Some of the space overhead can be avoided in a representation using clever
leaf numbering.
\subsection{Edge-Oriented Top-Down}\label{sec-eotd}
We consider an alternative branch-saving strategy, slightly modifying
suffix links.
For each split edge, the \notd update algorithm follows a suffix link from
$u$ to $u'$, and immediately obtains the outgoing edge $e'$ of
$u'$ whose edge label starts with the same character as the edge just
visited. We can avoid this
\emph{first} branch operation in rescan (which constitutes a large part of
rescan work), by having $e'$ available from $e$ directly, without taking the
detour via $u$ and $u'$.
Define the string that \emph{marks} an edge as the shortest string represented
by the edge
(corresponding to the point after one character in its label). For edge $e$,
let $aA$, for character $a$ and string $A$, be the shortest string represented
by $e$ such that $A$ marks some other edge $e'$. (This is the same as saying
that $aA$ marks $e$, except when $e$ is an outgoing edge of the root and
$|A|=1$, in which case $a$ marks $e$.) Let the \emph{edge-oriented suffix link}
of $e$ point to $e'$ (illustrated in fig.~\ref{fig-st}).
\begin{figure}
\caption{\label{fig-st}A suffix tree $\ST$ over an example string, with edge-oriented suffix links as defined in section~\ref{sec-eotd}.}
\end{figure}
\label{sec-siblinglookup}Modifying the update algorithm for this variant of suffix links, we obtain an
\emph{edge-oriented top-down} (\eotd) variant. The update algorithm is
analogous to the original, except that edge suffix links are set and followed
rather than node suffix links, and the first branch operation of each rescan
avoided as a result. The following points deserve special attention:
\begin{itemize}
\item When an edge is split, the top part should remain the destination of
incoming suffix links, i.e., the new edge is the bottom part.
\item After splitting one or more edges in an update, finding the correct
destination for the suffix link of the last new edge (the bottom part of the
last edge split) requires a \emph{sibling lookup} branch operation, not
necessary in \notd.
\item Following a suffix link from the endpoint of an edge occasionally
requires one or more extra rescan operations, in relation to following the
node-oriented suffix link of the endpoint.
\end{itemize}
The first point raises some implementation issues. Efficient representations
(see e.g.\@ Kurtz's~\cite{kurtzsuftree}) do not implement nodes and edges as
separate records in memory. Instead, they use a single record for a node and
its incoming edge. Not only does this reduce the memory overhead, it cuts down
the number of pointers followed on traversing a path roughly by half. The
effect of our splitting rule is that while the top part of the split edge
should retain incoming suffix links, the new record, tied to the bottom part
should inherit the children. We solve this by adding a level of indirection,
allowing all children to be moved in a single assignment. In some settings
(e.g., if parent pointers are needed), this induces a one
pointer per node overhead, but it also has two potential efficiency
benefits. First, new node/edge pairs become siblings, which makes for a natural
memory-locality among siblings (cf.\@ \emph{child inlining} in
section~\ref{sec-inlining}).\label{sec-notechildcache} Second, the original bottom node stays where it was
in the branching data structure, saving one replace operation. These properties
are important for the efficiency of the \eotd representation.
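The following sketch (again with illustrative, non-authoritative names) shows
one way to realize the split rule with a level of indirection for the
children, so that the bottom part inherits them in a single assignment while
the top part keeps its incoming suffix links and its place in the parent's
branching structure:
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;

// Sketch of the EOTD split rule.  Children live behind an indirection
// (ChildSet), so a split moves all of them to the new bottom record with one
// assignment; incoming suffix links keep pointing at this (top) record.
final class EotdNode {
    int labelStart, labelLength;            // labelLength = MAX_VALUE on leaves
    ChildSet children = new ChildSet();

    static final class ChildSet {
        Map<Character, EotdNode> byFirstChar = new HashMap<>();
    }

    // Split this edge 'offset' label characters below its upper end.
    EotdNode split(String t, int offset) {
        EotdNode bottom = new EotdNode();
        bottom.labelStart = labelStart + offset;
        bottom.labelLength = (labelLength == Integer.MAX_VALUE)
                ? Integer.MAX_VALUE : labelLength - offset;
        bottom.children = this.children;    // one assignment moves all children
        this.labelLength = offset;
        this.children = new ChildSet();
        this.children.byFirstChar.put(t.charAt(bottom.labelStart), bottom);
        return bottom;
    }
}
\end{verbatim}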
The latter two points go against the reduction of branch operations that
motivated edge-oriented suffix links, but do not cancel it out. (Cf.\@
table~\ref{tab-opcounts}.)
These assertions are supported by experimental data in
section~\ref{sec-experiments}.
Furthermore, \eotd retains the $O(N)$ total construction time of \notd. To see this, note
first that the modification to edge-oriented suffix links clearly adds at most
constant-time operation to each operation, except possibly with regards to the
extra rescan operations after following a suffix link from the endpoint of an
edge. But Ukkonen's proof of total $O(N)$ rescan time still applies: Consider
the string $t_j\cdots t_i$, whose end corresponds to the active point, and
whose beginning is the beginning of the currently scanned edge. Each downward
move in rescanning deletes a nonempty string from the left of this string, and
characters are only added to the right as $i$ is incremented, once for each
online suffix tree update. Hence the number of downward moves are bounded by
$N$, the total number of characters added.
\begin{figure}
\caption{\label{fig-linkstyles}Suffix link styles: rescanning from $u'$ versus climbing from $v'$ after following a suffix link.}
\end{figure}
\subsection{Edge-Oriented Variable}\label{sec-eov}
Let $e$ be an edge from node $u$ to $v$, and let $v'$ and $u'$ be nodes
such that node suffix links would point from $u$ to $u'$ and from $v$ to
$v'$. If the path from $u'$ to $v'$ is longer than one edge, the \eotd suffix
link from $e$ would point to the first one, an outgoing edge of
$u'$. Another edge-oriented
approach, more closely resembling \nobu, would be to let $e$'s suffix link
point to the \emph{last} edge on the path, the incoming edge of $v'$, and use
climb rather than rescan for locating the right edge after following a suffix
link. But this approach does not promise any performance gain over
\nobu.
An approach worth investigating, however, is to allow some freedom in where
on the path between $u'$ and $v'$ to point $e$'s suffix link. We refer to the
path from $u'$ to $v'$ as the \emph{destination path} of $e$'s suffix link. Given that an
edge maintains the length of the longest string it represents (which is a
normal edge representation in any case), we can use climb or rescan as
required.
We suggest the following \emph{edge-oriented
variable} (\eov) scheme:
\begin{itemize}
\item When an edge is split, let the bottom part remain the destination of
incoming suffix links, i.e., let the top part be the new edge. (The
opposite of the \eotd splitting rule.) This sets a suffix link to the last
edge on its destination path, possibly requiring
climb operations after the link is followed.
\item When a suffix link is followed and $c$ edges on its destination path
climbed, if $c>k$ for a constant $k$, move the suffix link $c-k$ edges up.
\end{itemize}
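A minimal sketch of the second rule (in Java, with assumed field names;
\texttt{parent} denotes the edge one step closer to the root):
\begin{verbatim}
// Sketch of the EOV link maintenance: after following e's suffix link and
// climbing c edges on the destination path, move the link c-k edges upwards,
// so that later traversals through e climb at most about k edges.
final class EovEdge {
    EovEdge suffixLink;   // points somewhere on the destination path
    EovEdge parent;       // the edge one step closer to the root

    static void relinkAfterClimb(EovEdge e, int c, int k) {
        if (c <= k) return;
        EovEdge target = e.suffixLink;
        for (int i = 0; i < c - k; i++) {
            target = target.parent;
        }
        e.suffixLink = target;
    }
}
\end{verbatim}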
Intuitively, this approach looks promising, in that it avoids spending time on
adjusting suffix links that are never used, while eliminating the
$\Omega(N^{1.5})$ degeneration case demonstrated for
\nobu~\cite{SenftBranching}. Any node further than $k$ edges away from the top
of the destination path is followed only once per suffix link, and hence the
same destination path can only be climbed multiple times when multiple suffix
links point to the same path, and each corresponds to a separate occurrence of
the string corresponding to the climbed edge labels. We conjecture that the
amortized number of climbs per edge is thus~$O(1)$. However, our experimental
evaluation indicates that the typical savings gained by the \eov approach are
relatively small, and are surpassed by careful application of \eotd.
\section{Branching Data Structure}\label{sec-hashvsll}
Branching efficiency depends on the input alphabet size. Ukkonen proves $O(N)$ time complexity only under the
assumption that characters are drawn from an alphabet $\alphabet$ where
$|\alphabet|$ is $O(1)$. If $|\alphabet|$ is not constant, \emph{expected} linear
time can be achieved by hashing, as suggested in McCreight's seminal
work~\cite{McR}, and more recent dictionary data
structures~\cite{arbitman2010backyard,hagerup2001deterministic} can be applied
for bounds very close to deterministic linear time. Recursive suffix tree
construction, originally presented by Farach~\cite{FarFOCS} achieves the
same asymptotic time bound as character sorting, but does not support online
construction.
We limit our treatment to simple schemes based on linked lists or
hashing since, to our knowledge, asymptotically stronger results have not been shown to
yield a practical improvement. Kurtz~\cite{kurtzsuftree}
observed in 1999 that linked lists appear faster for practical inputs when
$|\alphabet|\leq 100$ and $N\leq 150\,000$. For a lower bound estimate of the
alphabet size breaking point, we tested suffix tree construction on random
strings of different size alphabets. We used a hash table of size $3N$ with linear probing
for collision resolution, which resulted
in an average of less than two hash table probes per insert or lookup
across all files. The results, shown in
fig.~\ref{fig-hashvsll}, indicate that hashing can outperform linked lists for
alphabet sizes at least as low as 16, and our experiments did indeed show hashing to be
advantageous for the \emph{protein} file, with this size of alphabet. However,
for many practical input types that produce a much lower average suffix tree
node out-degree, the breaking point would be at a much larger $|\alphabet|$.
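As a concrete illustration, the following is a Java sketch (simplified
relative to our actual implementation) of such a table: open addressing with
linear probing over roughly $3N$ slots, keyed on a (node, character) pair:
\begin{verbatim}
// Sketch of a branching hash table with linear probing.  Since the tree has
// at most about 2N edges, a table of size ~3N keeps the load factor below 1.
final class ChildHash {
    private final long[] keys;   // packed (parent node id, character); 0 = empty
    private final int[] vals;    // child node id

    ChildHash(int n) {
        keys = new long[3 * n + 1];
        vals = new int[keys.length];
    }

    private static long pack(int node, char c) { return ((long) (node + 1) << 16) | c; }

    private int slot(long key) {
        return (int) Math.floorMod(key * 0x9E3779B97F4A7C15L, (long) keys.length);
    }

    void put(int node, char c, int child) {
        long key = pack(node, c);
        int i = slot(key);
        while (keys[i] != 0 && keys[i] != key) i = (i + 1) % keys.length;  // probe
        keys[i] = key;
        vals[i] = child;
    }

    int get(int node, char c) {          // the branch operation
        long key = pack(node, c);
        int i = slot(key);
        while (keys[i] != 0) {
            if (keys[i] == key) return vals[i];
            i = (i + 1) % keys.length;
        }
        return -1;                        // no such outgoing edge
    }
}
\end{verbatim}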
\begin{figure}
\caption{\label{fig-hashvsll}Linked lists versus hashing for the branch operation, on random strings over alphabets of varying size.}
\end{figure}
\label{sec-inlining}
\paragraph*{Child Inlining}
An internal node has, by definition, at least two children. Naturally occurring
data is typically repetitive, causing some children to be accessed more
frequently than others. (This is the basis of the \textsc{ppm} compression
method, which has a direct connection to the suffix tree~\cite{SufComp}.) By
a simple probabilistic argument, children with a high traversal probability
also have a high probability of being encountered first. Hence, we
obtain a possible efficiency gain by storing the first two children of each
node, those that cause the node to be created, as \emph{inline} fields of the
node record together with their first character, instead of including them in
the overall child retrieval data structure. The effect should be particularly strong for \eotd, which, as
noted in section~\ref{sec-eotd}, eliminates the \emph{replace child}
operation that otherwise occurs when an edge is split, and the record
of the original child hence remains a child forever. Furthermore, if nodes are
laid out in memory in creation order, \eotd's
consecutive creation of the first two children can
produce an additional caching advantage. Note that inline space use is compensated by space
savings in the non-inlined child-retrieval data structure.
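A sketch of such a record (names are illustrative, not our exact layout):
\begin{verbatim}
// Sketch of child inlining: the first two children, which exist from the
// moment the node is created, are stored inline together with their first
// characters; later children go to the general branching structure.
final class InlinedNode {
    char firstChar0, firstChar1;          // first label characters of inline children
    InlinedNode child0, child1;           // the two inline children
    java.util.Map<Character, InlinedNode> overflow;   // created lazily if needed

    InlinedNode lookupChild(char c) {     // the branch operation
        if (child0 != null && firstChar0 == c) return child0;
        if (child1 != null && firstChar1 == c) return child1;
        return overflow == null ? null : overflow.get(c);
    }

    void addChild(char c, InlinedNode child) {
        if (child0 == null) { child0 = child; firstChar0 = c; return; }
        if (child1 == null) { child1 = child; firstChar1 = c; return; }
        if (overflow == null) overflow = new java.util.HashMap<>();
        overflow.put(c, child);
    }
}
\end{verbatim}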
When linked lists are used for branching, we can achieve an effect similar to inlining by always inserting new children at the back of the list. This change
has no significant cost, since an addition is made only after an unsuccessful
list scan.
\section{Performance Evaluation on Input Corpora}\label{sec-experiments}
Our target is to keep the number of branch operations low, and
their cost low through lookup data structures with low overhead and good cache
utilization. The overall goal is reducing construction time. Hence, we evaluate
these factors.
\subsection{Models, Measures, and Data}\label{sec-expmodel}
Practical runtime measurement is, on the one
hand, clearly directly relevant for evaluating algorithm behavior. On the other
hand, there is a risk of exaggerated effects dependent on specific hardware
characteristics, resulting in limited relevance for future hardware
development. Hence, we are reluctant to use execution time as the sole performance measure. Another important measure, less dependent on conditions at the
time of measuring, is memory probing during execution. Given the
central role of main memory access and caching in modern architectures, we
expect this to be directly relevant to the
runtime, and include several measures to capture it in our evaluation.
We measure level~3 cache misses using the \emph{Perf} performance counter
subsystem in Linux~\cite{PerfSystem}, which reports hardware events using the
performance monitoring unit of the \cpu. Clearly, with this hardware measure,
we are again at the mercy of hardware characteristics, not necessarily relevant on
a universal scale. Measuring cache misses in a
theoretically justified model such as the \emph{ideal-cache
model}~\cite{FrigoCacheObliv} would be attractive, but such a model does not easily lend itself
to experiments. Attempts of measuring emulated cache performance using a
virtual machine (\!\emph{Valgrind}) produced spurious results, and the overhead
made large-scale experiments infeasible. Instead, we concocted two simple cache
models to evaluate the locality of memory access: one minimal
cache of only ten 64~byte cache lines with a \emph{least recently used}
replacement scheme (intended as a baseline for the level~one cache of any
reasonable \cpu), and one with a larger amount of cache lines with a simplistic
direct mapping without usage estimation (providing a baseline expected to be at
least matched by any practical hardware).
We measure runtimes of Java implementations kept as similar as possible with
regard to aspects other than the techniques tested, with the 1.6.0\_27
Open\,\textsc{jdk} runtime, a common contemporary software environment. With current
\emph{hotspot} code generation, we achieve performance on par with compiled
languages by freeing critical code sections of language constructs that allocate
objects (which would trigger garbage collection) or produce unnecessary pointer
dereference. We repeat critical sections ten times per test run, to
even out fluctuation in time and caching. Experiment hardware was a Xeon
E3-1230 v2 3.3GHz quadcore with 32\,kB per core for each of data and
instructions level~1 cache, 256\,kB level~2 cache per core, 8\,MB shared level~3
cache, and 16\,GB 1600\,MHz \textsc{ddr}3 memory. Note that this configuration
influences only runtime and physical cache (table~\ref{tab-runtimes} and the
first two bars in each group of fig.~\ref{fig-maidiagram}); other measures are
system independent.
We evaluate over a variety of data in common use for testing string processing
performance, from the \emph{Pizza \& Chili}~\cite{PizzaChiliCorpus} and
\emph{lightweight}~\cite{LightweightCorpus} corpora. In order to evaluate a
degenerate case for \nobu, we also include an adversary input constructed for
the pattern $T=ab^{m^2}abab^2ab^3\cdots ab^ma$ (with $m=4082$ for a 25~million
character file), which has $\Omega(N^{1.5})$ performance in this
scheme~\cite{SenftBranching}.
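For reference, the adversary string can be generated as follows (a
straightforward sketch; \texttt{m = 4082} yields roughly 25 million
characters):
\begin{verbatim}
// Generates T = a b^{m^2} a b a b^2 a b^3 ... a b^m a.
final class AdversaryInput {
    static String generate(int m) {
        StringBuilder sb = new StringBuilder();
        sb.append('a');
        for (int i = 0; i < m * m; i++) sb.append('b');
        for (int k = 1; k <= m; k++) {
            sb.append('a');
            for (int i = 0; i < k; i++) sb.append('b');
        }
        sb.append('a');
        return sb.toString();
    }
}
\end{verbatim}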
\subsection{Results}\label{sec-results}
\begin{figure}
\caption{\label{fig-maidiagram}Runtime and memory-access measures for seven implementations, relative to the basic \notd implementation (averages over all files except \emph{adversary}).}
\end{figure}
\begin{table}
\begin{center}
\scriptsize
\addtolength{\tabcolsep}{.3ex}
\begin{tabular}{lrrrrrrr}
& $\text{size}\cdot$& \notd& \eotd & \eov & \eov & \nobu & move down\\
File& $10^{-6}$& rs& rs+sl& rs& climb& climb& branch ops \\[.5ex] \hline
chr22$^{\text{B1}}$& 34.55& 29\,569\,178& 18\,927\,812&
318\,499& 33\,064\,133& 33\,669\,019& 35\,053\,371 \tstrut \\
dna$^{\text{A1}}$& 104.86& 87\,681\,116& 58\,585\,203& 236\,634& 100\,172\,270& 100\,736\,053& 111\,372\,537 \\
dblp$^{\text{A2}}$& 104.86& 54\,757\,925& 14\,743\,305& 32\,980& 55\,418\,573& 55\,594\,399& 73\,784\,654 \\
rctail96$^{\text{B2}}$& 114.71& 74\,993\,651& 20\,777\,946& 86\,190& 71\,863\,312& 72\,128\,546& 70\,211\,436 \\
jdk13c$^{\text{B3}}$& 69.73& 50\,659\,938& 6\,678\,647& 54\,174& 49\,300\,828& 49\,413\,385& 28\,044\,490 \\
sources$^{\text{A3}}$& 104.86& 80\,270\,528& 30\,753\,392& 191\,755& 75\,419\,031& 75\,953\,764& 70\,537\,447 \\
w3c2$^{\text{B3}}$& 104.20& 80\,056\,887& 12\,933\,438& 57\,108& 75\,773\,161& 75\,904\,742& 41\,111\,077 \\
english$^{\text{A4}}$& 104.86& 86\,528\,338& 43\,803\,204& 109\,151& 78\,451\,578& 78\,998\,269& 85\,577\,767 \\
etext$^{\text{B4}}$& 105.28& 73\,446\,539& 40\,782\,335& 106\,811& 73\,482\,182& 74\,097\,636& 99\,131\,563 \\
howto$^{\text{B4}}$& 39.42& 28\,590\,381& 13\,523\,660& 89\,650& 27\,703\,460& 27\,944\,722& 32\,676\,237 \\
rfc$^{\text{B4}}$& 116.42& 88\,716\,588& 32\,739\,584& 452\,280& 83\,618\,480& 84\,486\,767& 77\,334\,572 \\
pitches$^{\text{A5}}$& 55.83& 47\,744\,716& 21\,303\,582& 279\,505& 42\,615\,067& 43\,081\,419& 46\,777\,918 \\
proteins$^{\text{A6}}$& 104.86& 74\,912\,821& 39\,662\,469& 31\,075& 70\,942\,644& 71\,016\,405& 111\,979\,688 \\
sprot34$^{\text{B7}}$& 109.62& 70\,190\,029& 20\,737\,274& 45\,034& 69\,274\,605& 69\,425\,197& 78\,927\,702 \\
adversary$^{\text{A8}}$& 25.00& 41\,662\,928& 16\,323& 8\,313\,003& 41\,654\,774& 68\,033\,898\,010& 12\,249 \\
\end{tabular}
\end{center}
\caption{\label{tab-opcounts} Operation counts. \emph{rs}: rescan branch
operations, \emph{sl}: extra
sibling lookup (see section~\ref{sec-siblinglookup}), \emph{move down}:
branch operations outside of rescan. Files from the
\emph{Pizza and Chili Corpus} ($^{\text{A}}$) and \emph{Lightweight
Corpus} ($^{\text{B}}$). File categories are \textsc{dna}~($^{\text{1}}$),
\textsc{xml}~($^{\text{2}}$), source code~($^{\text{3}}$), text~($^{\text{4}}$),
\textsc{midi}~($^{\text{5}}$), proteins~($^{\text{6}}$),
database~($^{\text{7}}$), and \nobu
adversary~($^{\text{8}}$).}
\end{table}
Fig.~\ref{fig-maidiagram} shows performance across seven implementations and
five performance measures (explained in section~\ref{sec-expmodel}), which we
deem to be relevant for comparison. It summarizes the runtimes (also in
table~\ref{tab-runtimes}) and memory access measures by taking averages across
all files except \emph{adversary}, with equal weight per file. The bars are
scaled to show percentages of the measures for the basic \notd implementation,
which is used as the benchmark. The order of the implementations when ranked by
performance is fairly consistent across the different measures, with some
deviation in particular for the hardware cache measure and smaller-cache
models. The hardware cache measurement comes out as a relatively poor predictor
of performance; by the numbers reported by Perf, the hardware cache even appears
to be outperformed by our simplistic theoretical cache model.
We detect only a minor improvement of \eotd \lili implementations in relation to
\nobu \lili, while inline \eotd \hata provides a more significant
improvement. Note, however, that for \nobu, the \hata implementation is much
worse than the \lili implementation, while the reverse is true for \eotd. This
can be attributed to the different hash table use and the particular
significance of inlining, noted in section~\ref{sec-inlining}. The fact that
\eotd \hata without inlining (not in the diagram) is not clearly better than
\nobu \hata supports this. Although table~\ref{tab-runtimes} shows that
\eotd \lili beats its \hata counterpart for files producing a low average
out-degree in $\ST$ (because of a small alphabet and/or high repetitiveness), the
robustness of hashing (cf.\@ fig.~\ref{fig-hashvsll}) has the greater impact on
average. We have included results to show the impact of the \emph{add to back}
heuristic in \eotd \lili, which also produced a slight improvement for \nobu (not
shown in diagram), as expected.
The operation counts shown in table~\ref{tab-opcounts} generally confirm our
expectations. (Branch counts include moves down from $\topnode$ to the
root, in order to match Senft and Dvořák's corresponding
counts~\cite{SenftBranching}.) \eov yields a large rescan reduction, even for the adversary
file, which makes it an attractive alternative to \nobu when branching is very
expensive. We found the exact choice of the $k$ parameter of \eov not to be
overly delicate. All values shown were obtained with $k=5$.
\section{Conclusion}\label{sec-concl}
It is possible to significantly improve online suffix tree construction time
through modifications that target reducing branch operations and cache
utilization, while maintaining linear worst-case time complexity. In many
applications, our representation variants should be directly applicable for
runtime reduction. Interesting topics remaining to explore are how our
techniques for, e.g., suffix link orientation, fit into the compromise game of
time versus space in succinct representations such as compressed suffix trees,
and comparison to off-line construction.
\begin{table}[t]
\begin{center}
\scriptsize
\addtolength{\tabcolsep}{.3ex}
\begin{tabular}{lrrrrrrrrrrr}
& \notd & \notd& \nobu& \nobu& \nobu&
\eov & \eov & \eotd& \eotd & \eotd& \eotd\\
File& & \hata& & back & \hata & \lili & \hata & \lili& \hata& back& inl. \hata \\ \hline
chr22& 11.43& 16.73& 8.66& 9.00 & 13.72& 9.08& 14.26& 8.96& 14.40& 8.80& 8.91 \\
dblp& 29.31& 35.56& 22.41& 21.90 & 30.60& 23.60& 32.15& 20.35& 26.55& 17.67& 16.91 \\
dna& 40.37& 60.76& 30.65& 32.12 & 51.66& 32.70& 53.77& 31.60& 53.37& 30.97& 32.89 \\
english& 64.26& 50.99& 45.65& 46.36 & 42.11& 47.47& 43.34& 42.70& 42.77& 36.64& 26.21 \\
etext& 64.96& 50.15& 47.68& 46.44 & 42.43& 50.37& 44.30& 45.56& 43.38& 39.06& 27.67 \\
howto& 21.74& 15.43& 16.12& 15.09 & 12.64& 16.48& 12.92& 15.33& 12.50& 12.56& 7.61 \\
jdk13c& 7.97& 23.24& 6.39& 6.53 & 19.29& 6.97& 20.24& 5.72& 14.46& 5.27& 6.76 \\
pitches& 46.65& 21.34& 34.66& 28.98 & 18.40& 35.38& 19.07& 34.08& 17.26& 26.34& 10.55 \\
proteins& 104.49& 49.60& 74.30& 74.46 & 41.95& 76.49& 44.27& 75.73& 46.18& 70.55& 31.97 \\
rctail96& 35.67& 44.76& 26.72& 26.54 & 37.44& 27.61& 38.35& 24.59& 31.32& 21.18& 18.35 \\
rfc& 52.58& 50.81& 38.78& 37.22 & 42.96& 40.55& 44.14& 37.18& 39.99& 29.45& 21.82 \\
sources& 44.21& 44.23& 32.71& 30.12 & 37.24& 34.27& 38.63& 31.49& 34.28& 24.70& 17.76 \\
sprot34& 50.19& 42.65& 38.40& 37.92 & 37.37& 39.82& 38.50& 36.71& 33.66& 33.24& 21.24 \\
w3c2& 18.98& 39.84& 14.38& 15.19 & 33.13& 14.89& 33.73& 12.91& 24.98& 11.41& 10.47 \\
adversary& 1.30& 7.89& 267.50& 266.16& 296.42& 1.64& 8.07& 1.40& 5.10& 1.39& 1.34 \\
\end{tabular}
\end{center}
\caption{\label{tab-runtimes}Running times in seconds for the same files as
table~\ref{tab-opcounts}.}
\end{table}
\end{document}
\begin{equation}gin{document}
\title{A Note on the Entropy of Mean Curvature Flow}
\begin{equation}gin{abstract}
The entropy of a hypersurface is given by the supremum over all F-functionals with varying centers and scales, and is invariant under rigid motions and dilations. As a consequence of Huisken's monotonicity formula, entropy is non-increasing under mean curvature flow. We show here that a compact mean convex hypersurface with sufficiently low entropy is diffeomorphic to a round sphere. We also prove that a smooth self-shrinker with low entropy is exactly a hyperplane.
\epsilonnd{abstract}
\keywords{entropy, self-shrinker, mean curvature flow, sphere}
\renewcommand{\subjclassname}{\textup{2010} Mathematics Subject Classification}
\subjclass[2010]{Primary 53C25; Secondary 58J05}
\author{Chao Bao}
\address{Chao Bao, Key Laboratory of Pure and Applied Mathematics, School of Mathematical Sciences, Peking University,
Beijing, 100871, P.R. China.} \epsilonmail{[email protected]}
\date{2014}
\maketitle
\markboth{Chao Bao}{}
\section{Introduction}
The F-functional of a hypersurface ${\mathfrak G}amma \subset \textbf{R}^{n+1}$ is defined as
$$F({\mathfrak G}amma) = (4 \partiali)^{-n/2} \int_{{\mathfrak G}amma} e^{-{\mathfrak f}rac{|x|^2}{4}}$$
whereas the entropy of ${\mathfrak G}amma$ is given by
\begin{equation}gin{equation}
\langlembda({\mathfrak G}amma) = \sup_{x_0 \in \textbf{R}^{n+1}, t_0 >0} (4 \partiali t_0)^{-n/2} \int_{{\mathfrak G}amma} e^{-{\mathfrak f}rac{|x-x_0|^2}{4t_0}}
\epsilonnd{equation}
By a change of variables in the integral, we can also write
\begin{equation}gin{equation}
\langlembda({\mathfrak G}amma) = \sup_{x_0 \in \textbf{R}^{n+1}, t_0 >0} (4 \partiali)^{-n/2} \int_{t_0{\mathfrak G}amma + x_0} e^{-{\mathfrak f}rac{|x|^2}{4}}
\epsilonnd{equation}
By section 7 in \cite{CM}, the entropy of a self-shrinker is equal to the value of the F-functional $F$ defined above, so no supremum is needed. In \cite{S}, Stone computed the entropy of the generalized cylinders $\textbf{S}^k \times \textbf{R}^{n-k}$. He showed that $\langlembda(\textbf{S}^n)$ is decreasing in $n$ and
$$\langlembda(\textbf{S}^1) = \sqrt{{\mathfrak f}rac{2\partiali}{e}} \approx 1.5203 > \langlembda(\textbf{S}^2) = {\mathfrak f}rac{4}{e} \approx 1.4715 > \langlembda(\textbf{S}^3) > \cdots > 1 = \langlembda(\textbf{R}^{n})$$
Moreover, a simple computation shows that $\langlembda(\Sigma \times \textbf{R}) = \langlembda(\Sigma)$.
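For the reader's convenience, the computation behind the last identity can be written out (in standard notation) for the F-functional centered at the origin with scale one; general centers and scales split in the same way. Writing $x = (x', s)$,
$$(4\pi)^{-(n+1)/2}\int_{\Sigma\times\textbf{R}} e^{-|x|^2/4}
= (4\pi)^{-(n+1)/2}\Big(\int_{\Sigma} e^{-|x'|^2/4}\Big)\Big(\int_{-\infty}^{\infty} e^{-s^2/4}\,ds\Big)
= (4\pi)^{-n/2}\int_{\Sigma} e^{-|x'|^2/4},$$
since $\int_{-\infty}^{\infty} e^{-s^2/4}\,ds = \sqrt{4\pi}$; taking suprema over all centers and scales shows that the two entropies agree.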
Mean curvature flow is a one-parameter family of hypersurfaces $\{M_t\} \subset \textbf{R}^{n+1}$ which evolves under the following equation:
\begin{equation}gin{equation}
(\partialartial_t X(p,t))^{\partialerp} = - H(p,t) \nu(p,t)
\epsilonnd{equation}
Here $\overrightarrow{H} = - H \nu$ is the mean curvature vector of $M_t$, $H = div_{M_t} \nu$, $\nu$ is the outward unit normal, $X$ is the position vector and $\cdot^{\partialerp}$ denotes the projection on the normal space.
We denote ${\mathfrak P}hi(x,t) = (4 \partiali t)^{-n/2} e^{-{\mathfrak f}rac{|x|^2}{4t}}$ and ${\mathfrak P}hi_{(y,\tau)}(x,t) = {\mathfrak P}hi(x-y, \tau - t)$. Huisken's monotonicity formula implies that for any $(y,\tau) \in \textbf{R}^{n+1} \times \textbf{R}$ and any $t_1$, $t_2$ with $t_2 < t_1 < \tau$ we have
\begin{equation}gin{equation}
\int_{M_{t_1}} {\mathfrak P}hi_{(y,\tau)} \leq \int_{M_{t_2}} {\mathfrak P}hi_{(y,\tau)}
\epsilonnd{equation}
As a consequence of Huisken's monotonicity formula, entropy is non-increasing under mean curvature flow.
A hypersurface ${\mathfrak G}amma \subset \textbf{R}^{n+1}$ is a self-shrinker if it satisfies
\begin{equation}gin{equation}\langlebel{selfshrinker}
H = {\mathfrak f}rac{\langlengle X,\nu \ranglengle}{2}
\epsilonnd{equation}
It can be proved that, if ${\mathfrak G}amma$ is a self-shrinker, then ${\mathfrak G}amma_t = \sqrt{-t} {\mathfrak G}amma$ satisfies the mean curvature flow equation, see lemma 2.2 in \cite{CM}.
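For the reader's convenience, here is the short verification (in standard notation): writing the flow as $X_t = \sqrt{-t}\, X$ for $X$ on the self-shrinker and $t<0$, and using that the unit normal is unchanged under dilation while the mean curvature scales by the inverse of the dilation factor,
$$(\partial_t X_t)^{\perp} = \Big(-\frac{1}{2\sqrt{-t}}\,X\Big)^{\perp}
= -\frac{\langle X,\nu\rangle}{2\sqrt{-t}}\,\nu
= -\frac{H}{\sqrt{-t}}\,\nu = -H_t\,\nu,$$
where $H$ and $H_t$ denote the mean curvatures of the original and the rescaled hypersurface respectively, and the self-shrinker equation (\ref{selfshrinker}) was used in the third equality.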
A non-compact hypersurface $\Sigma \subset \textbf{R}^{n+1}$ is said to have polynomial volume growth if there are constants $C$ and $d$ so that for all $r \mathfrak geq 1$
\begin{equation}gin{equation}
Vol(B_r(0) \cap \Sigma) \leq Cr^d.
\epsilonnd{equation}
where $B_r(0)$ denotes the ball of radius $r$ centered at the origin in $\textbf{R}^{n+1}$.
In \cite{H}, Huisken showed that mean curvature flow starting at any smooth compact convex initial hypersurface in $\textbf{R}^{n+1}$ remains convex and smooth until it becomes extinct at a point, and that if we rescale the flow about the point in space-time where it becomes extinct, then the rescalings converge to round spheres. In \cite{HS}, Huisken and Sinestrari developed a theory for mean curvature flow with surgery for two-convex hypersurfaces in $\textbf{R}^{n+1} (n \mathfrak geq 3)$, and classified all of the closed two-convex hypersurfaces. In \cite{CM}, Colding and Minicozzi found a piece-wise mean curvature flow, under which they could prove that, assuming a uniform diameter bound, the piece-wise mean curvature flow starting from any closed surface in $\textbf{R}^3$ will become extinct in a round point.
Inspired by \cite{CIMW}, we expect to study hypersurfaces from the perspective of entropy, i.e. to ask whether we can classify all mean convex hypersurfaces under some entropy condition. Specifically, when the entropy of a closed mean convex hypersurface is no more than $\langlembda(\textbf{S}^{n-2})$, we ask whether we can classify all such hypersurfaces in analogy with the result of Huisken and Sinestrari, see \cite{HS}. As a first step toward this goal, in this note we will prove that under some entropy condition a mean convex closed hypersurface is diffeomorphic to a round sphere.
It seems to the author that entropy plays a role similar to that of energy in harmonic map theory. For example, in harmonic map theory one has the $\epsilonpsilon$-regularity theorem \cite{L} \cite{SW}, a Liouville type theorem for harmonic maps with small energy \cite{EL}, and uniqueness of harmonic maps with small energy \cite{mS}, etc. If one compares self-shrinkers with harmonic maps, one has analogous results on the entropies of self-shrinkers. This also motivates the present work.
\begin{equation}gin{theo}\langlebel{main2}
Suppose $M_0 \subset \textbf{R}^{n+1}$ is a smooth closed embedded hypersurface with mean curvature $H > 0$. If $\langlembda(M_0) \leq \min\{\langlembda(\textbf{S}^{n-1}), {\mathfrak f}rac{3}{2}\}$, then $M_0$ is diffeomorphic to a round sphere $\textbf{S}^n$.
\epsilonnd{theo}
Moreover, we can get the following Bernstein type theorem for self-shrinkers under some low entropy condition.
\begin{equation}gin{theo}\langlebel{main1}
Suppose ${\mathfrak G}amma$ is a smooth non-compact embedded self-shrinker with polynomial volume growth. There exists a constant $\epsilonpsilon >0$ such that if $\langlembda({\mathfrak G}amma)< 1+ \epsilonpsilon$ then ${\mathfrak G}amma$ must be a hyperplane.
\epsilonnd{theo}
It should be pointed out that, under the assumption $\langlembda(M_0) < 2$, it is easy to check that all tangent flows must be multiplicity one, and consequently we will not need to mention this again in the proofs of the main theorems. In the proofs of the main theorems, we will use techniques similar to those of \cite{E}, \cite{CIMW}, etc.
{\bf Acknowledgements} The author is very grateful to Professor Yuguang Shi for discussing this result and many helpful comments on this problem.
\section{Tangent flows of mean curvature flows}Throughout this paper, unless otherwise mentioned, we will always assume $M_0$ is a smooth closed embedded hypersurface in $\textbf{R}^{n+1}$, and $\{M_t\}$ is a mean curvature flow starting from $M_0$.
Let $(x_0, t_0) \in \textbf{R}^{n+1} \times \textbf{R}$ be a fixed point in the space-time, and $\langlembda > 0$ be a positive constant in $\textbf{R}$. We say that $\{M_s^{\langlembda}\}$ is a parabolic rescaling of $\{M_t\}$ at $(x_0 , t_0)$ if it satisfies
\begin{equation}gin{equation}
M^{\langlembda}_s = \langlembda^{-1}(M_{\langlembda^{2}s + t_0} - x_0)
\epsilonnd{equation}
where $s \in (-\langlembda^{-2} t_0, 0)$. It is easy to check that $\{M_s^{\langlembda}\}$ also satisfies the mean curvature flow equation. For any hypersurface $M$ in $\textbf{R}^{n+1}$, we say $x_0$ is a regular point of $M$ if there is an open neighbourhood $U_0 \subset \textbf{R}^{n+1}$ of $x_0$ such that $M$ is smooth in $M \cap U_0$. Moreover, we say $M$ is regular if every point of $M$ is a regular point.
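Returning to the parabolic rescaling defined above, the verification that $\{M^{\lambda}_s\}$ again satisfies the mean curvature flow equation is immediate (in standard notation): with $X^{\lambda}(p,s) = \lambda^{-1}\big(X(p,\lambda^2 s + t_0) - x_0\big)$,
$$(\partial_s X^{\lambda})^{\perp} = \lambda\,(\partial_t X)^{\perp} = -\lambda H\,\nu = -H^{\lambda}\,\nu,$$
since the translation does not affect the flow, the unit normal is unchanged under dilation, and the mean curvature of the rescaled hypersurface is $H^{\lambda} = \lambda H$.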
\begin{equation}gin{defi}
We say that a one-parameter family of hypersurfaces $\{{\mathfrak G}amma_s\}_{s<0}$ is a tangent flow of $\{M_t\}$, if there exists a sequence of positive numbers $\{\langlembda_j\}$, $\langlembda_j \rightghtarrow$ 0 as $j \rightghtarrow \infty$, such that $M^{\langlembda_j}_s {\mathfrak h}ookrightarrow {\mathfrak G}amma_s$ as Radon measures for each $s<0$.
\epsilonnd{defi}
We will denote $M^j_s = M^{\langlembda_j}_s$ for simplicity without confusion. About the existence of tangent flows, we have the following lemma:
\begin{equation}gin{lemm}[see \cite{I2}] \langlebel{tang.exist}
Suppose $\{M_t\}$ is a mean curvature flow and $M_0$ is a smooth embedded hypersurface. Then for any space-time point $(x_0,t_0) \in \textbf{R}^{n+1} \times \textbf{R}$ there are a one-parameter family of hypersurfaces $\{{\mathfrak G}amma_s\}_{s<0}$ and a sequence of positive numbers $\{\langlembda_j\}$, $\langlembda_j \rightghtarrow$ 0 as $j \rightghtarrow \infty$, such that $M^{j}_s {\mathfrak h}ookrightarrow {\mathfrak G}amma_s$ as Radon measures for each $s<0$.
\epsilonnd{lemm}
Moreover, by Lemma 8 of \cite{I1}, we know that ${\mathfrak G}amma_s = \sqrt{-s}{\mathfrak G}amma_{-1}$, and ${\mathfrak G}amma_{-1}$ is a weak solution of self-shrinker equation (\ref{selfshrinker}). Furthermore by Huisken's monotonicity formula, we can prove the following point-wise convergence lemma:
\begin{equation}gin{lemm} \langlebel{tan.conv}
If $\{{\mathfrak G}amma_s\}_{s<0}$ is a tangent flow of $\{M_t\}$ at $(x_0,t_0)$, and $\{M^j_s\}$ is the corresponding sequence of parabolic rescalings of $\{M_t\}$, then $\{M^j_s\}$ converge to $\{{\mathfrak G}amma_s\}$ in Hausdorff distance for each $s<0$.
\epsilonnd{lemm}
\begin{equation}gin{proof}
Because $M_0$ is closed and embedded, we can prove that for any fixed $t$, $T < t < t_0$ for some $T > 0$, there is a constant $V = V(Vol(M_0),T)$ such that $Vol(B_{r}(0) \cap M_t) \leq Vr^n$ for all $r >0$, and all $T \leq t < t_0$, see Lemma 2.9 in \cite{CM}. Furthermore, it is easy to check that
\begin{equation}gin{equation}\langlebel{dis1}
Vol(B_{r}(0) \cap M_s^j) \leq Vr^n
\epsilonnd{equation}
for all $r >0$ and all $\langlembda_j^{-2}(T-t_0) \leq s < 0$.
Since $\{M_s^j\}$ is also a mean curvature flow, by Huisken's monotonicity formula for any $x_0 \in \textbf{R}^{n+1}$ and any $s_2 < s_1 < s_0$ we have
\begin{equation}gin{equation}\langlebel{dis2}
\int_{M_{s_1}^j} {\mathfrak P}hi_{(x_0,s_0)} \leq \int_{M_{s_2}^j} {\mathfrak P}hi_{(x_0,s_0)}
\epsilonnd{equation}
By (\ref{dis1}), smoothness of the function ${\mathfrak P}hi_{(x_0,s_0)}$, and the measure convergence of $\{M^j_s\}$, as $j \rightghtarrow \infty$ for every $s < s_0$ we have
\begin{equation}gin{equation} \langlebel{conv}
\int_{M^j_s} {\mathfrak P}hi_{(x_0,s_0)} \rightghtarrow \int_{{\mathfrak G}amma_s} {\mathfrak P}hi_{(x_0,s_0)}
\epsilonnd{equation}
Combining this with (\ref{dis2}), we have for any $s_2 < s_1 < s_0$,
\begin{equation}gin{equation}
\int_{{\mathfrak G}amma_{s_1}} {\mathfrak P}hi_{(x_0,s_0)} \leq \int_{{\mathfrak G}amma_{s_2}} {\mathfrak P}hi_{(x_0,s_0)}
\epsilonnd{equation}
so
$$\lim_{s \nearrow s_0} \int_{{\mathfrak G}amma_{s}} {\mathfrak P}hi_{(x_0,s_0)} $$ exists.
Suppose there are a sequence $\{x_j\}$ with $x_j \in M^j_{s_0}$ and a point $y \in \textbf{R}^{n+1}$ satisfying $\lim_{j \rightghtarrow \infty} x_j= y$. It is easy to see that if we prove $y \in {\mathfrak G}amma_{s_0}$, then we get the lemma.
For any smooth embedded mean curvature flow $\{\widehat{M}_s\}$, it is easy to check that if $\widehat{x} \in \widehat{M}_{s_0}$ then
\begin{equation}gin{equation}\langlebel{dis3}
\lim_{s \rightghtarrow s_0} \int_{\widehat{M}_s} {\mathfrak P}hi_{(\widehat{x}, s_0)} = 1
\epsilonnd{equation}
Moreover, it is also easy to check that for any $\widehat{x} \notin \widehat{M}_{s_0}$, we have
\begin{equation}gin{equation}\langlebel{dis4}
\lim_{s \rightghtarrow s_0} \int_{\widehat{M}_{s}} {\mathfrak P}hi_{(\widehat{x}, s_0)} = 0
\epsilonnd{equation}
That is to say, if
$$\lim_{s \rightghtarrow s_0} \int_{\widehat{M}_{s}} {\mathfrak P}hi_{(\widehat{x}, s_0)} \neq 0$$
we must have $\widehat{x} \in \widehat{M}_{s_0}$. Actually, we do not need to assume $\{\widehat{M}_s\}$ is smooth and embedded here.
Furthermore, if $\{\widehat{M}_s\}$ is only smooth in a neighbourhood of $\widehat{x}$, we also have (\ref{dis3}) and (\ref{dis4}), see p.~66 in \cite{E}.
For the sequence $\{x_j\}$ and $y$, it is also easy to prove the following analogue of (\ref{conv}), using (\ref{dis1}), the smoothness of the function ${\mathfrak P}hi_{(x_0,s_0)}$, and the measure convergence of $\{M^j_s\}$:
\begin{equation}gin{equation}\langlebel{conv2}
\int_{M^j_s} {\mathfrak P}hi_{(x_j,s_0)} \rightghtarrow \int_{{\mathfrak G}amma_s} {\mathfrak P}hi_{(y,s_0)}
\epsilonnd{equation}
as $j \rightghtarrow \infty$. By Huisken's monotonicity formula and (\ref{dis3}), from (\ref{conv2}) we get that
\begin{equation}gin{equation}
\int_{{\mathfrak G}amma_s} {\mathfrak P}hi_{(y,s_0)} \mathfrak geq 1
\epsilonnd{equation}
Letting $s \rightghtarrow s_0$, we have
$$\lim_{s \rightghtarrow s_0} \int_{{\mathfrak G}amma_s} {\mathfrak P}hi_{(y, s_0)} \mathfrak geq 1$$
Thus we must have $y \in {\mathfrak G}amma_{s_0}$, which completes the proof.
\epsilonnd{proof}
In the following subsections, see lemma {\ref{mainlemma}}, we will further prove that if a tangent flow is smooth and embedded, we even have smooth convergence.
\section{Partial regularity for mean curvature flows}
We will need a partial regularity theorem due to Ecker, see theorem 5.6 in \cite{E}.
Before stating Ecker's theorem, we need to introduce a test function, which plays an important role in Ecker's local monotonicity, see theorem 4.17 in \cite{E}. Define
$$\partialhi_{\rho}(x,t) = (1 - {\mathfrak f}rac{|x|^2 + 2nt}{\rho^2})^3_{+}$$
and its translates
$$\partialhi_{(x_0 , t_0), \rho}(x,t) = \partialhi_{\rho}(x - x_0 , t - t_0)$$
For an open subset $U$ of $\textbf{R}^{n+1}$, there is a radius $\rho_0 > 0$ such that
$$B_{\sqrt{1+2n} \rho_0}(x_0) \times (t_0 - \rho_0^2, t_0) \subset U \times (t_1 ,t_0).$$
For all $\rho \in (0, \rho_0)$ and $t \in (t_0 - \rho_0^2, t_0)$ we then have
$$spt \partialhi_{(x_0, t_0), \rho} \subset B_{\sqrt{\rho^2 - 2n(t -t_0)}}(x_0) \subset B_{\sqrt{1+2n} \rho_0}(x_0) \subset U$$
The Gaussian density at $(x_0 , t_0)$ of mean curvature flow $\{M_t\}$ is defined as
\begin{equation}gin{equation}\langlebel{gaussden}
\Theta(M_t ,x_0, t_0) = \lim_{t \nearrow t_0} \int_{M_t} {\mathfrak P}hi_{(x_0, t_0)}
\epsilonnd{equation}
It is easy to check that, if $x_0$ is a regular point of $M_{t_0}$ then $ \Theta(M_t ,x_0, t_0) = 1$.
\begin{equation}gin{theo}[Ecker's local monotonicity, \cite{E}]\langlebel{locmono}
Let $\{M_t\}_{t \in (t_1 ,t_0)}$ be a smooth, properly embedded solution of mean curvature flow in an open set $U \subset \textbf{R}^{n+1}$. Then for every $x_0 \in U$ there is a $\rho_0 \in (0, \sqrt{t_0 - t_1})$ such that for all $\rho \in (0, \rho_0]$ and $t \in (t_0 - \rho^2 , t_0)$ we have
$$spt\partialhi_{(x_0 , t_0), \rho}(\cdot , t) \subset U$$
and
$${\mathfrak f}rac{d}{dt} \int_{M_t} {\mathfrak P}hi_{(x_0 , t_0)} \partialhi_{(x_0 ,t_0), \rho} \leq - \int_{M_t} |\overrightarrow{H}(x) - {\mathfrak f}rac{(x - x_0)^{\partialerp}}{2(t - t_0)}|^2 {\mathfrak P}hi_{(x_0 , t_0)} \partialhi_{(x_0 ,t_0), \rho}$$
Since the right-hand side is non-positive and
$$\partialhi_{(x_0 , t_0) , \rho}(x_0 ,t_0) = 1$$
for every $\rho \in (0, \rho_0]$, this implies that the locally defined Gaussian density
$$\Theta(M_t, x_0 , t_0) \epsilonquiv \lim_{t\nearrow t_0} \int_{M_t} {\mathfrak P}hi_{(x_0, t_0)}\partialhi_{(x_0, t_0), \rho}$$
exists, is independent of $\rho$ and for global solutions agrees with the Gaussian density defined in (\ref{gaussden}). Furthermore, for every $t \in (t_0 - \rho^2, t_0),$
$$\Theta(M_t, x_0 , t_0) \leq \int_{M_t}{\mathfrak P}hi_{(x_0, t_0)}\partialhi_{(x_0, t_0), \rho} $$
\epsilonnd{theo}
The following partial regularity theorem is due to B. White \cite{W}; in \cite{E} Ecker proves a similar result using the local monotonicity formula, and here we present Ecker's version of B. White's partial regularity theorem.
\begin{equation}gin{theo}[Ecker, \cite{E}]\langlebel{Ecker}
Suppose $\{M_t\}$ is a smooth, properly embedded solution of mean curvature flow in $U \times (t_1 ,t_0)$ which reaches $x_0$ at time $t_0$, and $U$ is an open set in $\textbf{R}^{n+1}$. Then there exist constants $\epsilonpsilon_0 >0$ and $c_0 >0$ such that whenever
$$\Theta(M_t , x_0 , t_0) \leq 1+ \epsilonpsilon_0$$
holds at $x_0 \in U$, then
$$|A(x)|^2 \leq {\mathfrak f}rac{c_0}{\rho^2}$$
for some $\rho > 0$ and for all $x \in M_t \cap B_{\rho}(x_0)$ and $t \in (t_0 - \rho^2, t_0)$. In particular, $x_0$ is a regular point at time $t_0$.
\epsilonnd{theo}
In his proof, Ecker actually proved the following result:
\begin{equation}gin{theo}
Whenever $\{M_t\}$ is a smooth, properly embedded solution of mean curvature flow in $U \times (t_1, t_0)$ which reaches $x_0$ at time $t_0$, and $B_{\rho}(x_0) \times (t_0 - 2\rho^2 , t_0) \subset U \times (t_1 , t_0)$, there exist constants $\epsilonpsilon_0 >0$ and $c_0 > 0$ such that if
$$\int_{M_t} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} \leq \epsilonpsilon_0$$
for all $(y,\tau) \in B_{\rho}(x_0) \times (t_0 - \rho^2 , t_0)$ and $t \in (\tau - \rho^2, \tau)$, and $\rho_0$ is chosen to make sure that $spt \partialhi_{(y,\tau),\rho_0} \subset U$, then we have
$$|A(x)|^2 \leq {\mathfrak f}rac{c_0}{\rho^2}$$
for all $x \in M_t \cap B_{\rho}(x_0)$ and $t \in (t_0 - \rho^2, t_0)$.
\epsilonnd{theo}
\begin{equation}gin{rema}
In the original version of Ecker's theorem, he did not point out exactly what the constant $c_0$ depends on. However, judging from his proof, the author thinks $c_0$ depends on $\epsilonpsilon_0$, $U$ and $t_0 - t_1$. In any case, we can still prove Theorem \ref{main1} following his proof.
\epsilonnd{rema}
\subsection{Proof of Theorem \ref{main1}} Following the same technique used by Ecker in proving Theorem \ref{Ecker}, we now prove Theorem \ref{main1}.
\begin{equation}gin{lemm}\langlebel{thm1lemm}
Let $M_t$ be a smooth complete embedded ancient solution of mean curvature flow which exists on $(-\infty, 0]$, and assume the origin $0 \in M_0$. Then there exists a constant $\epsilonpsilon >0$ such that for any such ancient solution $M_t$ and any $ R >0$, if for all $(y, \tau) \in B_R(0) \times (-\infty, 0]$, $M_t$ satisfies
$$\int_{M_t} {\mathfrak P}hi_{(y,\tau)} < 1+\epsilonpsilon,$$
then we have
$$ (\sigma R)^2 \sup_{(-(1-\sigma)^2 R^2 , 0)} \sup_{M_t \cap B_{(1-\sigma)R}(0)} |A|^2 \leq C_0,$$
for all $\sigma \in (0,1)$, and $C_0$ does not depend on $R$ and $M_t$.
\epsilonnd{lemm}
\begin{equation}gin{proof}
Suppose the lemma is not correct. Then for every $j \in \epsilonmph{N}$ one can find a smooth, complete embedded solution $\{M^j_t\}$ which reaches $0 \in \textbf{R}^{n+1}$ at time 0 and some $R_j >0$ such that for all $(y, \tau) \in B_{R_j}(0) \times (-\infty , 0]$,
$$\int_{M^j_t} {\mathfrak P}hi_{(y,\tau)} \leq 1+ {\mathfrak f}rac{1}{j}$$
holds but
$$\mathfrak gamma^2_j \epsilonquiv \sup_{\sigma \in (0,1)}((\sigma R_j)^2 \sup_{(-(1-\sigma)^2 R^2_j,0)} \sup_{M^j_t \cap B_{(1-\sigma)R_j}} |A|^2) \rightghtarrow \infty$$
as $j \rightghtarrow \infty$. In particular, one can find a $\sigma_j \in (0,1)$ for which
$$\mathfrak gamma^2_j = (\sigma_j R_j)^2 \sup_{(-(1-\sigma_j)^2 R^2_j,0)} \sup_{M^j_t \cap B_{(1-\sigma_j)R_j}} |A|^2$$
and a point
$$y_j \in M^j_{\tau_j} \cap \overline{B}_{(1-\sigma_j)R_j}$$
at a time $\tau_j \in [-(1-\sigma_j)^2R_j^2, 0]$ so that
$$\mathfrak gamma_j^2 = \sigma_j^2 R_j^2 |A(y_j)|^2.$$
If we choose $\sigma = {\mathfrak f}rac{1}{2} \sigma_j$, we have
$$\sigma_j^2 R^2_j \sup_{(-(1-{\mathfrak f}rac{\sigma_j}{2})^2R_j^2, 0)} \sup_{M^j_t \cap B_{(1-\sigma_j/2)R_j}(0)} |A|^2 \leq 4\mathfrak gamma_j^2$$
that is
$$\sup_{(-(1-{\mathfrak f}rac{\sigma_j}{2})^2R_j^2, 0)} \sup_{M^j_t \cap B_{(1-\sigma_j/2)R_j}(0)} |A|^2 \leq 4|A(y_j)|^2.$$
Since $(\tau_j - {\mathfrak f}rac{\sigma_j^2}{4}R^2_j , \tau_j) \subset (-(1-{\mathfrak f}rac{\sigma_j}{2})^2R_j^2, 0)$ and $B_{\sigma_jR_j/2}(y_j) \subset B_{(1-\sigma_j/2)R_j}(0)$
so we can get
$$\sup_{(\tau_j - \sigma_j^2R^2_j/4 , \tau_j)} \sup_{M^j_t \cap B_{\sigma_jR_j/2(y_j)}} |A|^2 \leq 4|A(y_j)|^2. $$
Now let
$$\langlembda_j = |A(y_j)|^{-1}$$
and define
$$\widetilde{M}^j_s = {\mathfrak f}rac{1}{\langlembda_j} (M^j_{\langlembda^2_j s + \tau_j} - y_j)$$
for $s \in [-\langlembda^{-2}_j \sigma_j^2 R^2_j/4, 0]$.
Then $\{\widetilde{M}^j_s\}$ is a smooth solution of mean curvature flow satisfying
$$0 \in \widetilde{M}^j_0, |A(0)| = 1$$
and
$$\sup_{(-\langlembda^{-2}_j \sigma^2_j R^2_j/4, 0)} \sup_{\widetilde{M}^j_s \cap B_{\langlembda^{-1}_j \sigma_j R_j/2}(0)} |A|^2 \leq 4$$
for every $j \in \epsilonmph{N}$. Since
$$\langlembda_j^{-2} \sigma^2_j R^2_j = \mathfrak gamma^2_j \rightghtarrow \infty$$
we have for every $R>0$ and sufficiently large $j$ depending on $R$,
$$\sup_{(-R^2 , 0)}\sup_{\widetilde{M}^j_s \cap B_{R}(0)} |A|^2 \leq 4.$$
By curvature estimates for mean curvature flow we know that for every $j$, $\{\widetilde{M}^j_s\}$ is smooth and has uniform curvature estimates on any compact subset of space-time $\textbf{R}^{n+1} \times \textbf{R}$. This allows us to apply the Arzel\`a-Ascoli theorem to conclude that a subsequence of $\{\widetilde{M}^j_s\}$ converges smoothly on compact subsets of $\textbf{R}^{n+1} \times \textbf{R}$ to a smooth solution $\{M'_s\}_{s \leq 0}$ of mean curvature flow. Moreover, for $\{M'_s\}_{s \leq 0}$ we can get
$$0 \in M'_0, |A(0)| = 1$$
and
$$|A(y)| \leq 4$$
for $y \in M'_s, s \leq 0$.
By our assumption, we know that
$$\int_{\widetilde{M}^j_s} {\mathfrak P}hi \leq 1+ {\mathfrak f}rac{1}{j}$$
for all $s \in (-\langlembda^{-2}_j \sigma^2_j R^2_j/4, 0)$.
Since $\int_{M^j_t} {\mathfrak P}hi_{(y_j,\tau_j)}$ is non-increasing in $t$ and $\widetilde{M}^j_0$ is smooth, we also get
$$\int_{\widetilde{M}^j_s} {\mathfrak P}hi \mathfrak geq 1.$$
Now we take the limit for $j \rightghtarrow \infty$, we have
$$\int_{M'_s} {\mathfrak P}hi = 1$$
for all $s<0$.
Finally, following the same argument as in the proof of Theorem 5.6 in \cite{E}, we complete the proof.
\epsilonnd{proof}
\textit{Proof of Theorem \ref{main1}}: Under the condition of Theorem \ref{main1}, it is easy to check that ${\mathfrak G}amma_{t} = \sqrt{-t+1}{\mathfrak G}amma$ is a self-shrinking ancient solution of mean curvature flow. By Huisken's monotonicity formula and the definition of entropy, we see that ${\mathfrak G}amma_t$ satisfies all the condition needed in Lemma \ref{thm1lemm}, so we get the estimate for ${\mathfrak G}amma$
$$ (\sigma R)^2 \sup _{(-(1-\sigma)^2 R^2 , 0)} \sup_{{\mathfrak G}amma_t \cap B_{(1-\sigma)R}(0)} |A|^2 \leq C_0$$
for all $\sigma \in (0,1)$. If we take $\sigma = {\mathfrak f}rac{1}{2}$ and let $R \rightghtarrow \infty$, then we get $|A|^2 = 0$ everywhere on ${\mathfrak G}amma$, so ${\mathfrak G}amma$ is a hyperplane.
\section{Partial regularity for tangent flows}Suppose $\{{\mathfrak G}amma_s\}$ is a tangent flow of $\{M_t\}$ at the first singular time, and $\{M^j_s\}$ is the corresponding sequence of parabolic rescalings of $\{M_t\}$. We will need the following consequence of Theorem \ref{Ecker}:
\begin{equation}gin{lemm}\langlebel{mainlemma}
Let $\{M_t\} \subset \epsilonmph{R}^{n+1}$ be closed hypersurfaces flowing by mean curvature flow, and let $\{{\mathfrak G}amma_s\}$ and $\{M^j_s\}$ be defined as above. If ${\mathfrak G}amma_{-1}$ is multiplicity one, then for any compact subset $K \subset Reg({\mathfrak G}amma_{-1})$ there is a subsequence of $\{M^j_{-1}\}$ which converges smoothly to ${\mathfrak G}amma_{-1}$ on $K$.
\epsilonnd{lemm}
Before proving the lemma, we need the following result:
\begin{equation}gin{lemm}\langlebel{lemm}
Suppose $\{{\mathfrak G}amma_s\}$ and $\{M_s^j\}$ are defined as in the above lemma, ${\mathfrak G}amma_{-1}$ is multiplicity one, and $\epsilonpsilon > 0$ is any fixed positive constant. Let $Reg({\mathfrak G}amma_{-1})$ represent the regular part of ${\mathfrak G}amma_{-1}$. Then for any $x_0 \in Reg({\mathfrak G}amma_{-1})$, there exist $\rho_0 = \rho_0(x_0) > 0$ and some $\rho \in (0,\rho_0)$ and a sufficiently large $J$, such that
$$\int_{M^j_s} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} \leq 1 + \epsilonpsilon$$
for all $(y,\tau) \in B_{\rho}(x_0) \times (-1 - \rho^2, -1)$, $s \in (\tau - \rho^2, \tau)$ and $j > J$.
\epsilonnd{lemm}
\begin{equation}gin{proof}
Because $x_0$ is a regular point of ${\mathfrak G}amma_{-1}$ and $\{{\mathfrak G}amma_s\}$ is a self-shrinking mean curvature flow, we can find a $\rho_0 = \rho_0(x_0) > 0$ such that $\{{\mathfrak G}amma_s\}$ is smooth on $B_{\sqrt{1+2n}\rho_0}(x_0) \times (-1 - \rho_0^2 , -1)$.
Since ${\mathfrak G}amma_{-1}$ is multiplicity one, $\Theta({\mathfrak G}amma_s, x_0, -1) = 1$. By Theorem {\ref{locmono}}, we can find a $\rho_1 \in (0,\rho_0]$ such that
$$\int_{{\mathfrak G}amma_{-1 - \rho^2_{1}}} {\mathfrak P}hi_{(x_0 , -1)} \partialhi_{(x_0, -1),\rho_0} \leq 1+ {\mathfrak f}rac{1}{4} \epsilonpsilon.$$
The continuity of
$$(y,\tau) \longrightarrow \int_{{\mathfrak G}amma_{-1 - \rho_1^2}} {\mathfrak P}hi_{(y,\tau)}\partialhi_{(y,\tau),\rho_0}$$
implies that for some $\rho \in (0,\rho_0)$ and all $(y,\tau) \in B_{\rho}(x_0) \times (-1 -\rho^2, -1)$,
\begin{equation}gin{equation}\langlebel{gamma}
\int_{{\mathfrak G}amma_{-1 - \rho_1^2}}{\mathfrak P}hi_{(y, \tau)}\partialhi_{(y,\tau),\rho_0} \leq 1 + {\mathfrak f}rac{1}{2}\epsilonpsilon
\epsilonnd{equation}
and furthermore $(\tau - \rho^2, \tau) \subset (-1 - \rho_1^2 , -1)$.
Define a sequence of functions $g_j$ by
$$g_j(y,\tau) = \int_{M^j_{-1 - \rho_1^2}}{\mathfrak P}hi_{(y, \tau)}\partialhi_{(y,\tau),\rho_0}$$
We will only consider the $g_j$'s on the region $\overline{B}_{\rho}(x_0) \times [-1 -\rho^2, -1]$. It follows from the first variation formula, see lemma 3.7 in \cite{CM}, that the $g_j$'s are uniformly Lipschitz in this region with
$$\sup_{\overline{B}_{\rho}(x_0) \times [-1 -\rho^2, -1]} |\nabla_{y,\tau} g_j| < C,$$
where $C$ depends on $\rho$ and the scale-invariant local area bounds for the $M^j_{-1 - \rho_1^2}$'s, which are uniformly bounded. Since the $M^j_{-1 - \rho_1^2}$'s converge to ${\mathfrak G}amma_{-1 - \rho_1^2}$ as Radon measures and ${\mathfrak G}amma_{-1 - \rho_1^2}$ satisfies (\ref{gamma}), there exists some $J$ sufficiently large such that for all $j > J$ we have
$$\int_{M^j_{-1 - \rho_1^2}} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} \leq 1 + \epsilonpsilon$$
for all $(y,\tau) \in B_{\rho}(x_0) \times (-1 - \rho^2, -1)$. Since by lemma \ref{locmono},
$$s \mapsto \int_{M_s^j} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} $$
is non-increasing we obtain
$$\int_{M_s^j} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} \leq \int_{M^j_{-1 - \rho_1^2}} {\mathfrak P}hi_{(y,\tau)} \partialhi_{(y,\tau),\rho_0} \leq 1 + \epsilonpsilon$$
for all $(y,\tau) \in B_{\rho}(x_0) \times (-1 - \rho^2, -1)$, $s \in (\tau - \rho^2, \tau)$ and $j > J$.
\epsilonnd{proof}
\textit{Proof of Lemma \ref{mainlemma}}: By Lemma \ref{lemm} and Theorem \ref{Ecker}, we know that for any $x_0 \in Reg(\Gamma_{-1})$ there are a positive $\rho(x_0)$ and a sufficiently large number $J = J(x_0)$ such that the $\{M^j_{-1}\}$ have a uniform bound on the second fundamental form. From the curvature estimates for mean curvature flow, we also get uniform bounds on the higher derivatives of the second fundamental form of $\{M^j_{-1}\}$. Therefore, for any compact subset $K \subset Reg(\Gamma_{-1})$ we can choose a subsequence of $\{M^j_{-1}\}$, denoted by $\{M^{j_i}_{-1}\}$, such that $\{M^{j_i}_{-1}\}$ converges smoothly to $\Gamma_{-1}$ on $K$.
\subsection{Proof of Theorem \ref{main2}}We will prove Theorem \ref{main2} by means of mean curvature flow. Suppose $M_0$ is a hypersurface in $\textbf{R}^{n+1}$ satisfying all the conditions of Theorem \ref{main2}, and let $\{M_t\}$ denote the mean curvature flow starting from $M_0$ before the first singular time. Since the mean curvature satisfies $H > 0$ on $M_0$, Theorem 4.3 in \cite{H} immediately gives the following lemma:
\begin{lemm}\label{pinchlemma}
Suppose $\{M_t\}_{t \in [0,T)}$ is a mean curvature flow before the first singular time starting from $M_0$. If there is a constant $C$ such that $|A|^2 \leq CH^2$ on $M_0$, then
\begin{equation}\label{pinch}
|A|^2(x,t) \leq CH^2(x,t)
\end{equation}
holds on $M_t$ for every $t \in [0,T)$.
\end{lemm}
In preparation for proving Theorem \ref{main2}, we also need the following two important theorems.
\begin{theo}[see \cite{CM}] \label{CM1}
$\textbf{S}^{k} \times \textbf{R}^{n-k}$ are the only smooth complete embedded self-shrinkers without boundary, with polynomial volume growth and $H \geq 0$, in $\textbf{R}^{n+1}$.
\end{theo}
\begin{theo}[see \cite{CIMW}] \label{CM2}
If $\Gamma \subset \textbf{R}^{n+1}$ is a weak solution of the self-shrinker equation (\ref{selfshrinker}), $\lambda(\Gamma) < \frac{3}{2}$, and there is a constant $C >0$ such that
$$|A| \leq CH $$
on the regular set $ Reg(\Gamma)$, then $\Gamma$ is smooth.
\end{theo}
\textit{Proof of Theorem \ref{main2}}: Assume $\{M_t\}_{t \in [0,T)}$ is a mean curvature flow starting from $M_0$, $T$ is the first singular time and $x_0 \in \textbf{R}^{n+1}$ is a singular point. By Lemma \ref{tang.exist} and Lemma \ref{tan.conv}, there exist a tangent flow $\{\Gamma_s\}_{s<0}$ at $(x_0 , T)$ and a corresponding sequence $\{M^j_s\}$ of parabolic rescalings of $\{M_t\}$.
By Lemma \ref{pinchlemma} and the fact that inequality (\ref{pinch}) is scaling-invariant, we get that for every $j$,
$$|A_j| \leq C H_j$$
on $M^j_s$, where $A_j$ and $H_j$ are the second fundamental form and mean curvature on $M^j_s$ respectively. Combining this with Lemma \ref{mainlemma}, we have
$|A| \leq C H $
on the regular part of $\Gamma_{-1}$. By Theorem \ref{CM1} and Theorem \ref{CM2}, we know that
$\Gamma_{-1}$ must be of the form $\textbf{S}^{k} \times \textbf{R}^{n-k}$. Since the entropy is non-increasing under mean curvature flow, invariant under scaling and lower semi-continuous under limits, we have
$$\lambda(\Gamma_{-1}) \leq \min\Bigl\{\lambda(\textbf{S}^{n-1}), \frac{3}{2}\Bigr\}.$$
If $\lambda(\Gamma_{-1}) = \min\{\lambda(\textbf{S}^{n-1}), \frac{3}{2}\}$, then the entropy $\lambda(M_t)$ is constant along the flow $\{M_t\}$. By Huisken's monotonicity formula, $M_0$ must be a compact self-shrinker with $H > 0$, and then Theorem \ref{CM1} implies that $M_0$ must be a round sphere.
If $\lambda(\Gamma_{-1}) < \min\{\lambda(\textbf{S}^{n-1}), \frac{3}{2}\}$, then by Theorem \ref{CM1} and Theorem \ref{CM2} we know that $\Gamma_{-1}$ must be $\textbf{S}^{n}$.
Using Lemma \ref{mainlemma} again, for sufficiently large $j$ the hypersurface $M^j_{-1}$ can be written as a smooth graph over $\Gamma_{-1}$. Since $\Gamma_{-1}$ is a round sphere, $M^j_{-1}$ is diffeomorphic to a round sphere for sufficiently large $j$. By the definition of $M^j_{-1}$, we know that $M^j_{-1} = \lambda_j^{-1}(M_{T - \lambda_j^2} - x_0)$. Since the mean curvature flow $\{M_t\}$ is smooth up to the first singular time, $M_0$ is diffeomorphic to a round sphere, and this completes the proof of Theorem \ref{main2}.
\begin{rema}
We think the entropy may give some information about the hypersurface, so this work is an attempt to study the singularities of mean curvature flow via entropy. From Theorem \ref{main2} we see that if the entropy of a mean convex compact hypersurface is no more than $\min\{\lambda(\textbf{S}^{n-1}), \frac{3}{2}\}$, then it is diffeomorphic to a round sphere. We believe that if the entropy bound is slightly larger, one can also obtain a classification result, as mentioned at the beginning of this paper.
\end{rema}
\begin{thebibliography}{99}
\bibitem{E}
K.Ecker, Regularity theory for mean curvature flow. Birkh\"{a}user, Boston, 2004.
\bibitem{EL}
J.Eells, L.Lemaire, A report on harmonic maps. Bull. London Math. Soc., 10(1978), 1-68.
\bibitem{CM}
T.H.Colding and W.P.Minicozzi II, Generic mean curvature flow I; generic singularities. Annals of Math., 175(2012), 755-833.
\bibitem{CIMW}
T.H.Colding, T.Ilmanen, W.P.Minicozzi II and B.White, The round sphere minimizes entropy among closed self-shrinkers. J. Differential Geom. Volume 95, Number 1 (2013), 53-69.
\bibitem{H}
G.Huisken, Flow by mean curvature of convex surfaces into spheres. J. Differential Geom., 20(1984), 237-266.
\bibitem{HS}
G.Huisken and C.Sinestrari, Mean curvature flow with surgeries of two-convex hypersurfaces. Invent. Math., 175(2009), 137-221.
\bibitem{I1}
T.Ilmanen, Singularities of mean curvature flow of surfaces. preprint, 1995.
\bibitem{I2}
T.Ilmanen, Elliptic regularization and partial regularity for motion by mean curvature. preprint, 1993.
\bibitem{L}
G.J.Liao, A regularity theorem for harmonic maps with small energy. J. Differential Geometry, 22(1985), 233-241.
\bibitem{mS}
M.Struwe, Uniqueness of harmonic maps with small energy. Manuscripta Mathematica, 96(1998), 463-486.
\bibitem{S}
A.Stone, A density function and the structure of singularities of the mean curvature flow. Calc.
Var. 2(1994), 443-480.
\bibitem{SW}
J.Sacks, K.Uhlenbeck, The existence of minimal immersions of 2-spheres. Ann. of Math., (2)113(1981), 1-24.
\bibitem{W}
B.White, A local regularity theorem for mean curvature flow. Ann. of Math. (2)161(2005), 1487-1519.
\epsilonnd{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Wiener--Hopf factorization and distribution of extrema for a
family of L\'evy processes}
\runtitle{Wiener--Hopf factorization and distribution of extrema}
\begin{aug}
\author[A]{\fnms{Alexey} \snm{Kuznetsov}\corref{}\thanksref
{t1}\ead[label=e1]{[email protected]}}
\runauthor{A. Kuznetsov}
\affiliation{York University}
\address[A]{Department of Mathematics and Statistics\\
York University \\
Toronto, Ontario, M3J 1P3\\
Canada\\
\printead{e1}}
\end{aug}
\thankstext{t1}{Supported in part by the
Natural Sciences and Engineering Research Council of Canada.}
\received{\smonth{1} \syear{2009}}
\revised{\smonth{12} \syear{2009}}
\begin{abstract}
In this paper we introduce a ten-parameter family of L\'evy processes
for which we obtain Wiener--Hopf factors and distribution of the
supremum process in semi-explicit form. This family allows an
arbitrary behavior of small jumps and includes processes similar to the
generalized tempered stable, KoBoL and CGMY processes. Analytically it
is characterized by the property that the characteristic exponent is a
meromorphic function, expressed in terms of beta and digamma functions.
We prove that the Wiener--Hopf factors can be expressed as infinite
products over roots of a certain transcendental equation, and the
density of the supremum process can be computed as an exponentially
converging infinite series. In several special cases when the roots can
be found analytically, we are able to identify the Wiener--Hopf factors
and distribution of the supremum in closed form. In the general case we
prove that all the roots are real and simple, and we provide
localization results and asymptotic formulas which allow an efficient
numerical evaluation. We also derive a convergence acceleration
algorithm for infinite products and a simple and efficient procedure to
compute the Wiener--Hopf factors for complex values of parameters. As a
numerical example we discuss computation of the density of the supremum
process.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60G51}
\kwd[; secondary ]{60E10}.
\end{keyword}
\begin{keyword}
\kwd{L\'evy process}
\kwd{supremum process}
\kwd{Wiener--Hopf factorization}
\kwd{meromorphic function}
\kwd{infinite product}.
\end{keyword}
\pdfkeywords{60G51, 60E10, Levy process,
supremum process, Wiener--Hopf factorization,
meromorphic function, infinite product}
\end{frontmatter}
\section{Introduction}\label{section_introduction}
Wiener--Hopf factorization is a powerful tool in the study of various
functionals of a L\'evy process, such as
extrema of the process, the first passage time and the overshoot, the last
time the extremum was achieved, etc. These results
are very important from the theoretical point of view; for example,
they can be used to prove general theorems
about short/long time behavior (see \cite{Bertoin,Doney2007,Kyprianou} and
\cite{Sato}). However, in recent years, there has
also been a growing
interest in applications of Wiener--Hopf factorization, for example,
in Insurance Mathematics and the classical ruin problem (see
\cite{Asmussen2}) and in Mathematical Finance,
where the above-mentioned functionals are being used to describe the
payoff of a contract and
the corresponding probability distribution is used to compute its price
(see \cite{Asmussen,Boyarchenko,Mordecki3} and
\cite{Schoutens} and the references therein).
Let us summarize one of the most important results from Wiener--Hopf
factorization. Assume that
$X_t$ is a one-dimensional real-valued L\'evy process started from
$X_0=0$ and defined by a triple $(\mu,\sigma,\nu)$, where
$\mu\in{\mathbb R}$ specifies the linear component, $\sigma\ge0$ is
the volatility of the Gaussian component and
$\nu(\mathrm{d} x)$ is the L\'evy measure satisfying $\int_{{\mathbb R}} \min\{1,x^2\} \nu(\mathrm{d} x) < \infty$. The characteristic exponent $\Psi(z)$ is
defined by
\[
{\mathbb E} [e^{\mathrm{i} zX_t} ]=e^{-t\Psi(z)},\qquad z \in {\mathbb R},
\]
and the L\'evy--Khintchine representation (see \cite{Bertoin}) tells us
that $\Psi(z)$ can be expressed in terms of the generating triple $(\mu,\sigma,\nu)$ as follows:
\begin{equation}\label{levy_khintchine}
\Psi(z)=\frac12 \sigma^2z^2 -\mathrm{i}\mu z-\int_{{\mathbb R}}
\bigl(e^{\mathrm{i} zx}-1-\mathrm{i} zh(x) \bigr) \nu(\mathrm{d} x).
\end{equation}
Here $h(x)$ is the cut-off function, which in general can be taken to
be equal to $x\mathbf{I}_{\{|x|<1\}}$; however, in this paper we will use
$h(x)\equiv0$ (Section \ref{section_comp_poisson}) or $h(x)\equiv x$
(Sections \ref{section_results_nu3} and \ref{section_results_nu4}).
We define the extrema processes
\[
S_t=\sup\{X_s : 0\le s \le t\}, \qquad I_t=\inf\{X_s : 0\le s \le t\},
\]
introduce an exponential random variable $\tau=\tau(q)$ with parameter
$q>0$, which is independent of the process $X_t$,
and use the following notation for the characteristic functions of $S_{\tau}$ and $I_{\tau}$:
\[
\phi_q^{+}(z)={\mathbb E} [ e^{\mathrm{i} z S_{\tau(q)}} ],\qquad
\phi_q^{-}(z)={\mathbb E} [ e^{\mathrm{i} z I_{\tau(q)}} ].
\]
The Wiener--Hopf factorization states that the random variables $S_{\tau}$ and $X_{\tau}-S_{\tau}$ are independent, and that the
random variables $I_{\tau}$ and $X_{\tau}-S_{\tau}$ have the same
distribution; thus for $z\in{\mathbb R}$ we have
\begin{eqnarray}\label{eq_WH_factorization}
\frac{q}{q+\Psi(z)}&=&{\mathbb E} [ e^{\mathrm{i} zX_{\tau}} ]\nonumber\\[-8pt]\\[-8pt]
&=&{\mathbb E} [ e^{\mathrm{i} z S_{\tau}} ]{\mathbb E} \bigl[
e^{\mathrm{i} z(X_{\tau}-S_{\tau})} \bigr]=\phi_q^{+}(z)\phi_q^{-}(z).\nonumber
\end{eqnarray}
Moreover, the random variable $S_{\tau}$ ($I_{\tau}$) is infinitely
divisible, positive (negative) and has no linear component
in the L\'evy--Khintchine representation (\ref{levy_khintchine}).
There also exist several integral representations for $\phi_q^{\pm}$ in terms of ${\mathbb P}(X_t \in \mathrm{d} x)$ (see
\cite{Bertoin,Doney2007,Kyprianou} and \cite{Sato}) or
in terms of $\Psi(z)$ \cite{Baxter1957,Mordecki}.
The integral expressions for the Wiener--Hopf factors $\phi_q^{\pm}$ are quite complicated; however, in the case of a stable process
it is possible to obtain explicit formulas for a dense class of
parameters (see \cite{Doney1987}). It is
remarkable that in some cases
we can compute the Wiener--Hopf factors explicitly with the help of the
factorization identity~(\ref{eq_WH_factorization}). As an example, let us
consider the case when the L\'evy measure is of phase-type. A phase-type
distribution (see \cite{Asmussen2})
can be defined as the distribution of the first passage time of a
finite state continuous
time Markov chain. A L\'evy process $X_t$ whose jumps are phase-type
distributed enjoys the following analytical property:
its characteristic exponent $\Psi(z)$ is a rational function. Thus the function
$q(q+\Psi(z))^{-1}$ is also a rational function, and therefore it has a
finite number of zeros/poles in the complex plane ${\mathbb C}$.
And here is the main idea: since the random variable $S_{\tau}$
($I_{\tau}$) is positive (negative) and infinitely divisible, its
characteristic function must be analytic and have no zeros in ${\mathbb C}^{+}$ (${\mathbb C}^{-}$), where
\[
{\mathbb C}^{+}=\{ z\in{\mathbb C} : \operatorname{Im}(z) > 0 \},\qquad
{\mathbb C}^{-}=\{ z\in{\mathbb C} : \operatorname{Im}(z) < 0 \}, \qquad \bar{\mathbb C}^{\pm}={\mathbb C}^{\pm}\cup{\mathbb R}.
\]
Thus we can \textit{uniquely} identify $\phi_q^{+}(z)$ [$\phi_q^{-}(z)$] as a rational function, which has value one at $z=0$ and whose
poles/zeros coincide with the poles/zeros of $q(q+\Psi(z))^{-1}$ in
${\mathbb C}^{-}$ (${\mathbb C}^{+}$).
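To make this identification concrete, the following minimal Python sketch (ours, not part of the original argument; it assumes the \texttt{numpy} library, and the function name \texttt{rational\_wh\_plus} and its inputs are purely illustrative) splits the zeros and poles of a rational $q(q+\Psi(z))^{-1}$ by half-plane and assembles $\phi_q^{+}(z)$ normalised to one at $z=0$.
\begin{verbatim}
import numpy as np

def rational_wh_plus(num, den, z):
    """Assemble phi_q^+(z) for a rational q/(q+Psi(z)) = num(z)/den(z).

    num, den: polynomial coefficients in z (highest degree first).
    phi_q^+ keeps exactly the zeros/poles lying in the lower
    half-plane C^- and equals 1 at z = 0 by construction."""
    zeros = np.roots(num)
    poles = np.roots(den)
    val = 1.0 + 0j
    for z0 in zeros[zeros.imag < 0]:
        val *= 1.0 - z / z0
    for p0 in poles[poles.imag < 0]:
        val /= 1.0 - z / p0
    return val
\end{verbatim}
The factor $\phi_q^{-}(z)$ would be obtained in the same way from the zeros/poles lying in ${\mathbb C}^{+}$.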
While L\'evy processes with phase-type jumps are very convenient
objects to work with and one can implement efficient numerical schemes,
there are some unresolved difficulties. One of them is that by
definition a phase-type distribution has a smooth density on $[0,\infty)$;
in particular the density of the L\'evy measure cannot have a
singularity at zero. This means that if we want to work with a process
with infinite activity of jumps, we have to approximate its L\'evy measure
by a sequence of phase-type measures, but then the degree of the rational
function $\Psi(z)$ would go to infinity and the above algorithm
for computing the Wiener--Hopf factors would quickly become infeasible.
In this paper we address this problem and discuss Wiener--Hopf
factorization for processes whose L\'evy measure can have a
singularity of arbitrary order
at zero. The main idea is quite simple: if the characteristic exponent
$\Psi(z)$ is \textit{meromorphic} in ${\mathbb C}$ and if we have sufficient
information about the zeros/poles of $q+\Psi(z)$, we can still use
factorization identity (\ref{eq_WH_factorization}) essentially
in the same way as in the case of phase-type distributed jumps, except
that all the finite products will be replaced by infinite products,
and we have to be careful with the convergence issues. The main
analytical tools will be asymptotic expansion of solutions to $q+\Psi
(z)=0$ and
asymptotic results for infinite products.
The paper is organized as follows: in Section \ref
{section_comp_poisson} we introduce a simple example of a compound
Poisson process,
whose L\'evy measure has a density given by $\nu(x)=e^{\alpha x}\operatorname{sech}(x)$. We obtain
closed-form expressions for the Wiener--Hopf factors and the density of
$S_{\tau}$. Also, in this simple case we introduce many
ideas and tools which will be used in other sections. In Section \ref
{section_results_nu3} we introduce
a L\'evy process $X_t$
with jumps of infinite variation and the density of the L\'evy measure
$\nu(x)=e^{\alpha x}{\sinh(x/2)}^{-2}$. This process is a member of the
general $\beta$-family defined later in
Section \ref{section_results_nu4}; however, it is quite unique because
its characteristic
exponent $\Psi(z)$ is expressed in terms of simpler functions, and
thus all the formulas are easier and stronger results can be proved. In
this section we derive the
localization results and asymptotic expansion for the solutions of
$q+\Psi(iz)=0$, prove that
all of them are real and simple, obtain explicit formulas for sums of
inverse powers of
these solutions and finally obtain semi-explicit formulas for
Wiener--Hopf factors and distribution of supremum $S_{\tau}$.
In Section \ref{section_results_nu4} we define the ten-parameter
$\beta
$-family of L\'evy processes and derive
formulas for characteristic exponent and prove results similar to the
ones in Section \ref{section_results_nu3}.
Section \ref{section_implementation} deals with numerical issues: we
discuss acceleration of convergence of
infinite products and introduce an efficient method to compute roots of
$q+\Psi(z)$ for $q$ complex. As an example we compute
the distribution of the supremum process $S_t$.
\section{A compound Poisson process}\label{section_comp_poisson}
In this section we study a compound Poisson process $X_t$, defined by a
L\'evy measure having density
\[
\nu(x)=\frac{e^{\alpha x}}{\cosh(x)}.
\]
We take the cut-off function $h(x)$ in (\ref{levy_khintchine}) to be
equal to zero, and thus the characteristic exponent of $X_t$ is given by
\begin{equation}\label{eq_Psi_nu1}
\Psi(z)=-\int_{\mathbb R} \bigl(e^{\mathrm{i} xz}-1 \bigr)\nu(x)\,\mathrm{d} x=
\frac{\pi}{\cos ({\pi\alpha}/2 )}-\frac{\pi}{\cosh ({\pi}(z-\mathrm{i}\alpha)/2 )},
\end{equation}
and the above integral can be computed with the help of formula 3.981.3
in \cite{Jeffrey2007}.
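As a quick numerical sanity check (ours, not in the original text), one can compare the closed form (\ref{eq_Psi_nu1}) with a direct quadrature of the L\'evy--Khintchine integral with $h(x)\equiv0$. The sketch below assumes the \texttt{numpy} and \texttt{scipy} Python libraries, and the parameter values are chosen purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, z = 0.3, 1.7                      # illustrative values, |alpha| < 1

nu = lambda x: np.exp(alpha * x) / np.cosh(x)         # Levy density
f  = lambda x: (np.exp(1j * z * x) - 1.0) * nu(x)     # integrand, h(x) = 0

re, _ = quad(lambda x: f(x).real, -np.inf, np.inf)
im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf)
psi_quadrature = -(re + 1j * im)

psi_closed = (np.pi / np.cos(np.pi * alpha / 2)
              - np.pi / np.cosh(np.pi * (z - 1j * alpha) / 2))

print(abs(psi_quadrature - psi_closed))  # expected to be very small
\end{verbatim}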
Our main result in this section is the following theorem, which
provides closed-form expressions for the Wiener--Hopf factors
and the distribution of $S_{\tau}$.
\begin{theorem}\label{thm_nu1_1} Assume that $q>0$. Define
\begin{eqnarray}\label{def_eta}
\eta&=&\frac{2}{\pi} \arccos \biggl(\frac{\pi}{q+\pi\sec ({\pi\alpha}/2 ) } \biggr), \nonumber\\[-8pt]\\[-8pt]
p_0&=&\frac{\Gamma (1/4(1-\alpha) )\Gamma (1/4(3-\alpha) )}
{\Gamma (1/4(\eta-\alpha) )\Gamma (1/4(4-\eta-\alpha) )}.\nonumber
\end{eqnarray}
Then for $\operatorname{Im}(z)>(\alpha-\eta)$ we have
\begin{eqnarray}\label{eq_Mtauq}
\phi_q^{+}(z)=
p_0\frac{\Gamma (1/4(\eta-\alpha-\mathrm{i} z) )\Gamma (1/4(4-\eta-\alpha-\mathrm{i} z) )}
{\Gamma (1/4(1-\alpha-\mathrm{i} z) )\Gamma (1/4(3-\alpha-\mathrm{i} z) )}.
\end{eqnarray}
We have ${\mathbb P}(S_{\tau}=0)=p_0$, and the density of $S_{\tau}$
is given by
\begin{eqnarray}\label{eq_density_M1}
&&\frac{\mathrm{d}}{\mathrm{d} x}{\mathbb P}(S_{\tau}\le x)\nonumber\\
&&\qquad=\frac{2p_0}{\pi}\cot
\biggl(\frac{\pi
\eta}2 \biggr) \nonumber\\
&&\qquad\quad{}\times \biggl[ \frac{\Gamma (1/4(1+\eta) )\Gamma
(1/4(3+\eta) )}{\Gamma (1/2 \eta )}\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad\hspace*{15.16pt}{}\times
e^{(\alpha-\eta)x}{}_2F_1 \biggl( \frac{1+\eta}4,\frac{3+\eta
}4;\frac
{\eta}2;e^{-4x} \biggr)\nonumber\\
&&\qquad\quad\hspace*{15.16pt}{}-\frac{\Gamma (1/4(5-\eta) )\Gamma (
1/4(7-\eta)
)}{\Gamma (1/2(4-\eta) )}\nonumber\\
&&\qquad\quad\hspace*{26.28pt}{}\times
e^{(\alpha-4+\eta)x}{}_2F_1 \biggl( \frac{5-\eta}4,\frac{7-\eta
}4;\frac
{4-\eta}2;e^{-4x} \biggr) \biggr],\nonumber
\end{eqnarray}
where ${}_2F_1(a,b;c;z)$ is the Gauss hypergeometric function.
If $q=0$ and $\alpha<0$, equation (\ref{def_eta}) implies $\eta=|
\alpha|$, and formulas (\ref{eq_Mtauq}) and
(\ref{eq_density_M1}) are still valid. In this case the random variable
$S_{\tau(0)}$ should be interpreted as $S_{\infty}=\sup\{X_s : s\ge 0 \}$.
\end{theorem}
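Below is a small numerical sketch (ours, using the \texttt{mpmath} Python library; the parameter values and the function name \texttt{density} are illustrative only) that evaluates $\eta$, $p_0$ and the density of $S_{\tau}$ from the theorem above, and checks that the atom at zero plus the integral of the density equals one, up to numerical error.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30                            # extra digits for the check

q, alpha = mp.mpf(2), mp.mpf('0.3')       # illustrative parameter values

eta = 2 / mp.pi * mp.acos(mp.pi / (q + mp.pi * mp.sec(mp.pi * alpha / 2)))
p0 = (mp.gamma((1 - alpha) / 4) * mp.gamma((3 - alpha) / 4)
      / (mp.gamma((eta - alpha) / 4) * mp.gamma((4 - eta - alpha) / 4)))

def density(x):
    """Density of S_tau, following the theorem above."""
    t1 = (mp.gamma((1 + eta) / 4) * mp.gamma((3 + eta) / 4) / mp.gamma(eta / 2)
          * mp.exp((alpha - eta) * x)
          * mp.hyp2f1((1 + eta) / 4, (3 + eta) / 4, eta / 2, mp.exp(-4 * x)))
    t2 = (mp.gamma((5 - eta) / 4) * mp.gamma((7 - eta) / 4) / mp.gamma((4 - eta) / 2)
          * mp.exp((alpha - 4 + eta) * x)
          * mp.hyp2f1((5 - eta) / 4, (7 - eta) / 4, (4 - eta) / 2, mp.exp(-4 * x)))
    return 2 * p0 / mp.pi * mp.cot(mp.pi * eta / 2) * (t1 - t2)

# P(S_tau = 0) plus the integral of the density should be close to one
print(p0 + mp.quad(density, [0, 1, mp.inf]))
\end{verbatim}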
First we will state and prove the following lemma, which will be used
repeatedly in this paper. It is a variant of the Wiener--Hopf
argument, which we have borrowed from the proof of Lemma 45.6 in
\cite{Sato}.
\begin{lemma}\label{Lemma1}
Assume we have two functions $f^{+}(z)$ and $f^{-}(z)$, such
that $f^{\pm}(0)=1$,
$f^{\pm}(z)$ are analytic in ${\mathbb C}^{\pm}$, continuous and
have no roots in $\bar{\mathbb C}^{\pm}$ and $z^{-1}\ln(f^{\pm}(z))\to0$ as $z\to\infty$, $z\in\bar{\mathbb C}^{\pm}$.
If
\begin{equation}\label{WH_factorn_f}
\frac{q}{q+\Psi(z)}=f^{+}(z)f^{-}(z), \qquad z\in{\mathbb R},
\end{equation}
then $f^{\pm}(z)\equiv\phi_q^{\pm}(z)$.
\end{lemma}
\begin{pf}
We define function $F(z)$ as
\[
F(z)=\cases{
\dfrac{\phi_q^{-}(z)}{f^{-}(z)}, &\quad if $z \in\bar{\mathbb C}^{-}$, \cr
\dfrac{f^{+}(z)}{\phi_q^{+}(z)}, &\quad if $z \in\bar{\mathbb C}^{+}$.}
\]
Function $F(z)$ is well defined for $z$ real due to (\ref
{WH_factorn_f}) and (\ref{eq_WH_factorization}).
Using properties of $\phi_q^{\pm}$ and $f^{\pm}$ we
conclude that $F(z)$ is analytic in ${\mathbb C}^{+}$ and
${\mathbb
C}^{-}$ and continuous in ${\mathbb C}$, and therefore by analytic
continuation (see Theorem 16.8 on page 323 in \cite{Rudin1986}) it must
be analytic in the entire complex plane.
Moreover, by construction function $F(z)$ has no zeros in ${\mathbb
C}$, thus
its logarithm is also an entire function. All that is left to do is to
prove that function $\ln(F(z))$ is constant.
Using integration by parts and formula (\ref{levy_khintchine}) one
could prove the following result:
if $\xi$ is an infinitely divisible positive random variable with no
drift and $\Psi_{\xi}(z)$ is its characteristic exponent,
then $z^{-1} \Psi_{\xi}(z)\to0$ as $z\to\infty$, $z\in\bar{\mathbb C}^{+}$ (this statement is similar to Proposition 2 in \cite{Bertoin}). Thus
\[
z^{-1} \ln(\phi_q^{\pm}(z)) \to0, \qquad z \to\infty,\qquad
z\in\bar{\mathbb C}^{\pm}.
\]
Since functions $f^{\pm}$ also satisfy the above conditions, we
find that $z^{-1} \ln(F(z))\to0$ as $| z |\to\infty$ in the
entire complex plane. Thus we have an analytic function $\ln(F(z))$
which grows slower than $|z|$ as $z\to\infty$, and therefore we can
conclude that this function
must be constant (a rigorous way to prove this is to apply Cauchy's
estimates, see Proposition 2.14 on page 73 in \cite{Conway1978}). The
value of this constant is easily seen to be zero, since $f^{\pm
}(0)=\phi_q^{\pm}(0)=1$.
\end{pf}
\begin{pf*}{Proof of Theorem \ref{thm_nu1_1}}
Using expression (\ref{eq_Psi_nu1}) for $\Psi(z)$ we find that function
$q(q+\Psi(z))^{-1}$ has simple zeros at $\{\mathrm{i}(1+\alpha+4n),\mathrm{i}(3+\alpha+4n)\}$ and simple poles at $\{\mathrm{i}(\alpha+\eta+4n),\mathrm{i}(\alpha-\eta+4n)\}$, where $n\in{\mathbb Z}$ and $\eta$ is defined by (\ref{def_eta}).
Next we check that $|\alpha|< \eta<1$ and
define function $f^{+}(z)$ as product over all zeros/poles lying in
${\mathbb C}^{-}$
\begin{equation}\label{def_f_plus}
f^{+}(z)=\prod_{n\ge0} \frac
{ (1-{\mathrm{i} z}/({4n+1-\alpha}) ) (1-{\mathrm{i} z}/({4n+3-\alpha}) )}
{ (1-{\mathrm{i} z}/({4n+\eta-\alpha}) ) (1-{\mathrm{i} z}/({4n+4-\eta-\alpha}) )}
\end{equation}
and similarly $f^{-}(z)$ as product over zeros/poles in ${\mathbb
C}^{+}$. It is easy to see that the product
converges uniformly on compact subsets of ${\mathbb C}\setminus\mathrm{i}{\mathbb R}$ since
each term is $1+O(n^{-2})$
(see Corollary 5.6 on page 166 in \cite{Conway1978} for sufficient
conditions for the absolute convergence of infinite products).
The fact that $f^{+}(z)$ is equal to the right-hand side of formula
(\ref{eq_Mtauq}) can be seen by applying the following result
from \cite{Erdelyi1955V3}:
\begin{equation}
\prod _{n \ge0} \frac
{1+{x}/({n+a})}{1+{x}/({n+b})}
=\frac{\Gamma(a)\Gamma(b+x)}{\Gamma(b)\Gamma(a+x)}.
\end{equation}
The formula for $f^{-}(z)$ is identical to (\ref{eq_Mtauq}) with
$(z, \alpha)$
replaced by $(-z, -\alpha)$.
Now we will prove that $f^{\pm}(z)\equiv\phi_q^{\pm
}(z)$. First, using the reflection formula for the gamma function
(formula 8.334.3 in \cite{Jeffrey2007}), one can check that for $z {i}n
{\mathbb R}$ functions $f^{\pm}(z)$ satisfy factorization identity
(\ref
{WH_factorn_f}).
Next, using the following asymptotic expression (formula 6.1.47 in
\cite{AbramowitzStegun}):
\begin{equation}\label{eq_gamma_asympt}
\frac{\Gamma(a+x)}{\Gamma(b+x)}=x^{a-b}+O(x^{a-b-1}),
\end{equation}
we conclude that $z^{-1} \ln(f^{\pm}(z)) \to0$ as $z \to\infty$, $z\in\bar{\mathbb C}^{\pm}$, and thus all the conditions of
Lemma \ref{Lemma1} are satisfied, and we conclude that $f^{\pm
}(z)\equiv\phi_q^{\pm}(z)$.
To derive formula (\ref{eq_density_M1}) for the density of $S_{\tau}$
we use equations (\ref{eq_Mtauq}) and (\ref{eq_gamma_asympt})
to find that ${\mathbb E} [e^{-\zeta S_{\tau}} ]=\phi_q^{+}(\mathrm{i}\zeta)\to p_0$ as $\zeta\to\infty$, where $p_0$ is given by (\ref{def_eta}). This implies
that distribution of $S_{\tau}$ has an atom at $x=0$ (which should not
be surprising since $X_t$ is a compound Poisson process), and
${\mathbb P}(S_{\tau}=0)=p_0$. The density of $S_{\tau}$ can be
computed by the
inverse Fourier transform
\begin{eqnarray*}
&&\frac{\mathrm{d}}{\mathrm{d} x}{\mathbb P}(S_{\tau}\le x)\\
&&\qquad=\frac{1}{2\pi} \int_{\mathbb R} [\phi_q^{+}(z)-p_0 ] e^{-\mathrm{i} xz}\,\mathrm{d} z\\
&&\qquad=
\frac{p_0}{2\pi} \int_{\mathbb R}
\biggl[\frac{\Gamma (1/4(\eta-\alpha-\mathrm{i} z) )\Gamma (1/4(4-\eta-\alpha-\mathrm{i} z) )}
{\Gamma (1/4(1-\alpha-\mathrm{i} z) )\Gamma (1/4(3-\alpha-\mathrm{i} z) )}-1 \biggr]e^{-\mathrm{i} xz} \,\mathrm{d} z.
\end{eqnarray*}
Formula (\ref{eq_density_M1}) is obtained from the above expression by
replacing the contour of integration by $\mathrm{i} c+{\mathbb R}$, letting
$c\to-\infty$ and evaluating the residues at
$z \in\{-\mathrm{i}(4n+\eta-\alpha), -\mathrm{i}(4n+4-\eta-\alpha)\}$ for $n\ge0$.
Evaluating the residues can be made easier by using the reflection
formula for the gamma function.
\end{pf*}
\begin{remark}
There are other examples of L\'evy measures $\nu(x)\,dx$, which have
finite total mass (and thus can define a process with
a finite intensity of jumps), and for which the characteristic exponent
is a
simple meromorphic function. These are two examples based on theta
functions (see Section 8.18 in \cite{Jeffrey2007} for definition and
properties of theta functions):
\begin{eqnarray*}
\nu_1(x)&=&e^{-\alpha x} \theta_2 (0 ,e^{-x} )=e^{-\alpha
x} \biggl[2\sum _{n\ge0} e^{-(n+1/2)^2x} \biggr], \\
\nu_2(x)&=& e^{-\alpha x} \theta_3 (0 ,e^{-x}
)=e^{-\alpha
x} \biggl[1+2\sum _{n\ge0} e^{-n^2x} \biggr].
\end{eqnarray*}
These two jump densities are defined on $x>0$; they decay exponentially
as $x\to+\infty$ and behave as $x^{-1/2}$ as $x\to0^+$;
thus the total mass is finite. The Fourier transforms of these functions
can be computed using formulas 6.162 in \cite{Jeffrey2007}:
\begin{eqnarray*}
\int_{0}^{\infty} e^{\mathrm{i} xz} \nu_1(x)\,\mathrm{d} x&=&\frac{\pi}{\sqrt{\alpha-\mathrm{i} z}} \tanh \bigl(\pi\sqrt{\alpha-\mathrm{i} z} \bigr), \\
\int_{0}^{\infty} e^{\mathrm{i} xz} \nu_2(x)\,\mathrm{d} x&=&\frac{\pi}{\sqrt{\alpha-\mathrm{i} z}} \coth \bigl(\pi\sqrt{\alpha-\mathrm{i} z} \bigr).
\end{eqnarray*}
Unfortunately equation $q+\Psi(z)=0$ cannot be solved explicitly which
implies that we cannot obtain closed form results
as in Theorem \ref{thm_nu1_1}; however, these processes could be
treated using methods presented in the next sections.
\end{remark}
\section{A process with jumps of infinite variation}\label{section_results_nu3}
In this section we study a L\'evy process $X_t$, defined by a triple
$(\mu,\sigma,\nu)$, where the density of the L\'evy measure is given by
\[
\nu(x)=\frac{e^{\alpha x}}{ [\sinh({x}/2) ]^2}
\]
with $|\alpha|<1$ (it is a L\'evy measure of a Lamperti-stable
process with characteristics $(1,1+\alpha,1-\alpha)$, see \cite{KyPaRi}).
The jump part of $X_t$ is similar to the normal inverse Gaussian
process (see \cite{Barndorff1997,Cont}): it is also a process of infinite
variation, the jump measure decays exponentially as $|x| \to\infty$ and it has an
$O(x^{-2})$ singularity at $x=0$. Note that since
the L\'evy measure has exponential tails we can take the cut-off
function $h(x)\equiv x$ in (\ref{levy_khintchine}).
By definition the process $X_t$ has three parameters. However, if we want to
achieve greater generality for modeling purposes, we could introduce
two additional scaling parameters $a$ and $b>0$ and define
a process $Y_t=aX_{bt}$, thus obtaining a five-parameter family of L\'evy processes.
\begin{proposition}
The characteristic exponent of $X_t$ is given by
\begin{equation}\label{eq_psi_nu3}
\Psi(z)=\tfrac12 \sigma^2z^2+i\rho z
+4\pi(z-i\alpha)\coth \bigl(\pi(z-i\alpha) \bigr)-4\gamma,
\end{equation}
where
\[
\gamma=\pi\alpha\cot (\pi\alpha ), \qquad \rho=4\pi
^2 \alpha
+\frac{4\gamma(\gamma-1)}{\alpha}-\mu.
\]
\end{proposition}
\begin{pf}
We start with the series representation valid for $x>0$,
\begin{equation}\label{sinh_series}
\biggl[\sinh \biggl(\frac{x}2 \biggr) \biggr]^{-2}=4\frac
{e^{-x}}{
(1-e^{-x} )^2}=4\sum _{n\ge1} ne^{-nx},
\end{equation}
which can be easily obtained
using binomial series or by taking derivative of a geometric series.
The infinite series in (\ref{sinh_series}) converges uniformly on
$(\varepsilon,\infty)$ for every $\varepsilon>0$; thus
\begin{eqnarray*}
&&
\int_{0}^{\infty} ( e^{\mathrm{i} zx}-1-\mathrm{i} zx ) \frac
{e^{\alpha x}}{\sinh({x}/2)^2}\,\mathrm{d} x\\
&&\qquad=
4\sum_{n\ge1} \biggl[ \frac{n}{n-\alpha-\mathrm{i} z}-\frac{n}{n-\alpha}- \frac{\mathrm{i} nz}{(n-\alpha)^2} \biggr]\\
&&\qquad=
4\sum_{n\ge1} \biggl[\frac{\alpha+\mathrm{i} z}{n-\alpha-\mathrm{i} z}-\frac{\alpha+\mathrm{i} z}{n-\alpha}- \frac{\mathrm{i}\alpha z}{(n-\alpha)^2} \biggr].
\end{eqnarray*}
The integral in the L\'evy--Khintchine representation (\ref
{levy_khintchine}) for $\Psi(z)$ can now be computed as
\begin{eqnarray*}
&&
\int_{0}^{\infty} ( e^{\mathrm{i} zx}-1-\mathrm{i} zx ) \frac
{e^{\alpha x}}{\sinh({x}/2)^2}\,\mathrm{d} x+
\int_{0}^{\infty} ( e^{-\mathrm{i} zx}-1+\mathrm{i} zx ) \frac
{e^{-\alpha x}}{\sinh({x}/2)^2}\,\mathrm{d} x\\
&&\qquad=
8(\alpha+\mathrm{i} z)^2\sum_{n \ge1}\frac{1}{n^2-(\alpha+\mathrm{i} z)^2}-
8(\alpha+\mathrm{i} z)\alpha\sum_{n \ge1}\frac{1}{n^2-\alpha^2}\\
&&\qquad\quad{} -
4\mathrm{i}\alpha z \sum_{n \in{\mathbb Z}} \frac{1}{(n-\alpha)^2}+\frac{4\mathrm{i} z}{\alpha}.
\end{eqnarray*}
To complete the proof we need to use the following well-known series
expansions (see formulas 1.421.4 and 1.422.4 in \cite{Jeffrey2007})
\begin{eqnarray*}
\coth(\pi x)&=&\frac{1}{\pi x}+\frac{2x}{\pi}\sum _{n \ge
1}\frac
{1}{x^2+n^2}, \\
1+\cot(\pi x)^2&=&\operatorname{cosec}(\pi x)^2=\frac{1}{\pi^2} \sum_{n \in{\mathbb Z}} \frac{1}{(n-x)^2}.
\end{eqnarray*}
\upqed\end{pf}
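The formula (\ref{eq_psi_nu3}) can also be checked numerically against the L\'evy--Khintchine integral. The following Python sketch is ours and not part of the proof; it assumes \texttt{numpy} and \texttt{scipy}, uses arbitrary illustrative parameter values, and guards the removable singularity of the integrand at $x=0$ by a Taylor approximation.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

sigma, mu, alpha, z = 0.25, 0.5, 0.4, 1.3     # illustrative values only

gamma = np.pi * alpha / np.tan(np.pi * alpha)
rho   = 4 * np.pi**2 * alpha + 4 * gamma * (gamma - 1) / alpha - mu

def integrand(x):
    """(e^{i z x} - 1 - i z x) * nu(x), nu(x) = e^{alpha x} / sinh(x/2)^2."""
    if abs(x) < 1e-4:                         # Taylor guard near x = 0
        return -2.0 * z * z * np.exp(alpha * x)
    return ((np.exp(1j * z * x) - 1.0 - 1j * z * x)
            * np.exp(alpha * x) / np.sinh(x / 2) ** 2)

re, _ = quad(lambda x: integrand(x).real, -np.inf, np.inf)
im, _ = quad(lambda x: integrand(x).imag, -np.inf, np.inf)
psi_quadrature = 0.5 * sigma**2 * z**2 - 1j * mu * z - (re + 1j * im)

w = z - 1j * alpha
psi_closed = (0.5 * sigma**2 * z**2 + 1j * rho * z
              + 4 * np.pi * w / np.tanh(np.pi * w) - 4 * gamma)

print(abs(psi_quadrature - psi_closed))        # expected to be small
\end{verbatim}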
Note that it is impossible to find solutions to $q+\Psi(z)=0$
explicitly in the general case, even though the characteristic exponent
$\Psi(z)$ is quite simple. It is remarkable that in some special cases,
when $\sigma=0$ and parameters $\mu$, $\alpha$ and $q$ satisfy certain
conditions, we can still obtain closed-form results.
Below we present just one example of this type.
\begin{proposition}\label{prop_explicit_nu3} Assume that $\sigma
=\alpha
=0$. Define
\begin{equation}
\eta=\frac{1}{\pi}\operatorname{arccot} \biggl( \frac{\mu}{4\pi
} \biggr).
\end{equation}
Then Wiener--Hopf factor $\phi_q^{+}(z)$ can be computed in closed
form when $q=4$,
\begin{equation}
\phi_4^{+}(z)=\frac{\Gamma(\eta-\mathrm{i} z)}{\Gamma(\eta)\Gamma(1-\mathrm{i} z)}.
\end{equation}
The density of $S_{\tau(4)}$ is given by
\[
\frac{\mathrm{d}}{\mathrm{d} x} {\mathbb P}\bigl(S_{\tau(4)}\le x\bigr) =\frac{\sin(\pi\eta)}{\pi}
( e^x-1 )^{-\eta}.
\]
\end{proposition}
The proof of Proposition \ref{prop_explicit_nu3} is identical to the
proof of Theorem \ref{thm_nu1_1}.
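As a quick consistency check (ours, not part of the original argument), note that this density integrates to one: substituting $u=e^{-x}$ and using $\eta\in(0,1)$ gives
\[
\int_0^{\infty}\frac{\sin(\pi\eta)}{\pi} ( e^x-1 )^{-\eta}\,\mathrm{d} x
=\frac{\sin(\pi\eta)}{\pi}\int_0^{1} u^{\eta-1}(1-u)^{-\eta}\,\mathrm{d} u
=\frac{\sin(\pi\eta)}{\pi}\,\mathrm{B}(\eta,1-\eta)=1,
\]
so, in contrast with the compound Poisson case of Theorem \ref{thm_nu1_1}, the distribution of $S_{\tau(4)}$ has no atom at zero, as expected for a process whose jumps have infinite variation.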
\begin{remark}
This result is very similar to Proposition 1 in \cite{CaCha}, where the
authors are able to compute the law
of $I_{\tau(q)}$ in closed form only for a single value of~$q$, and
this law is essentially identical to the distribution of $S_{\tau(4)}$.
This coincidence seems to be rather surprising, since these
propositions study different processes: our Proposition \ref{prop_explicit_nu3}
is concerned with a Lamperti-stable process having characteristics
$(1,1,1)$ (see \cite{KyPaRi}) and completely arbitrary drift, while
Proposition~1 in \cite{CaCha} studies a Lamperti-stable process with
characteristics $(\alpha,1,\alpha)$ but with no freedom
in specifying the drift, which must be uniquely expressed in terms of
parameters of the L\'evy measure.
\end{remark}
The following theorem is one of the main results in this section. It
describes various properties of solutions to equation $q+\Psi(z)=0$,
which will be used later to compute Wiener--Hopf factors and the
distribution of the supremum process.
\begin{theorem}\label{thm_nu3_1} Assume that $q>0$ and that $\Psi(z)$
is given by (\ref{eq_psi_nu3}).
\begin{longlist}
\item Equation $q+\Psi(\mathrm{i}\zeta)=0$ has infinitely many solutions,
all of which are real and simple.
They are located as follows:
\begin{eqnarray}\label{eq_localization}
\zeta_0^{-} &\in& (\alpha-1,0),\nonumber\\
\zeta_0^{+} &\in& (0,\alpha+1), \nonumber\\[-8pt]\\[-8pt]
\zeta_n &\in& (n+\alpha, n+\alpha+1), \qquad n\ge1, \nonumber\\
\zeta_n &\in& (n+\alpha-1, n+\alpha), \qquad n\le-1.\nonumber
\end{eqnarray}
\item
If $\sigma\ne0$ we have, as $n\to\pm\infty$,
\begin{eqnarray}\label{eq_asympt_roots1_1}
\zeta_n&=&(n+\alpha)+\frac{8}{\sigma^2}(n+\alpha)^{-1}\nonumber\\[-8pt]\\[-8pt]
&&{}-\frac{8}{\sigma^2} \biggl(\frac{2\rho}{\sigma^2}+\alpha
\biggr)(n+\alpha)^{-2}+O(n^{-3}).\nonumber
\end{eqnarray}
\item
If $\sigma=0$ we have, as $n\to\pm\infty$,
\begin{eqnarray}\label{eq_asympt_roots1_2}
\zeta_{n+{d}elta}&=&(n+\alpha+\omega_0)+c_0(n+\alpha+\omega
_0)^{-1}\nonumber\\[-8pt]\\[-8pt]
&&{}-\frac{c_0}{\rho} (4\gamma-q-4\pi^2 c_0 )
(n+\alpha+\omega_0)^{-2}+O(n^{-3}),\nonumber
\end{eqnarray}
where
\[
c_0=-\frac{4(4\gamma-q+\alpha\rho) }{16\pi^2+\rho^2},\qquad
\omega
_0=\frac{1}{\pi}\operatorname{arccot} \biggl(\frac{\rho}{4\pi} \biggr)
\]
and $\delta\in\{-1,0,1\}$ depending on the signs of $n$ and $\rho$.
\item Function $q(q+\Psi(z))^{-1}$ can be factorized as follows:
\begin{equation}\label{eq_big_factorization}
\frac{q}{q+\Psi(z)}=\frac{1}{ (1+{\mathrm{i} z}/{\zeta_0^{+}} ) (1+{\mathrm{i} z}/{\zeta_0^{-}} )}
\prod_{| n |\ge1} \frac{1+{\mathrm{i} z}/({n+\alpha})}{1+{\mathrm{i} z}/{\zeta_n}},
\end{equation}
where the infinite product converges uniformly on the compact subsets
of the complex plane excluding zeros/poles of $q+\Psi(z)$.
\end{longlist}
\end{theorem}
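The localization intervals in (\ref{eq_localization}) make the numerical computation of the roots straightforward: on each interval the equation $q+\Psi(\mathrm{i}\zeta)=0$, written as a real equation in $\zeta$, changes sign and can be solved by bracketing. The Python sketch below is ours and only illustrative; it assumes \texttt{numpy}/\texttt{scipy}, takes $\sigma\ne0$ and arbitrary parameter values, and shifts the brackets slightly inside the intervals to stay away from the poles of the cotangent.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

sigma, mu, alpha, q = 0.3, 0.1, 0.4, 1.5       # illustrative values only

gamma = np.pi * alpha / np.tan(np.pi * alpha)
rho   = 4 * np.pi**2 * alpha + 4 * gamma * (gamma - 1) / alpha - mu

def F(zeta):
    """q + Psi(i*zeta), which is real-valued for real zeta."""
    return (q - 4 * gamma - rho * zeta - 0.5 * sigma**2 * zeta**2
            + 4 * np.pi * (zeta - alpha) / np.tan(np.pi * (zeta - alpha)))

eps = 1e-9                                      # stay away from the poles

def root_in(a, b):
    return brentq(F, a + eps, b - eps)

zeta0_minus = root_in(alpha - 1, 0)
zeta0_plus  = root_in(0, alpha + 1)
zetas_pos   = [root_in(n + alpha, n + alpha + 1) for n in range(1, 6)]
zetas_neg   = [root_in(n + alpha - 1, n + alpha) for n in range(-5, 0)]
print(zeta0_minus, zeta0_plus, zetas_pos, zetas_neg)
\end{verbatim}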
First we need to prove the following technical result.
\begin{lemma}\label{lemmma_asymp_for_product}
Assume that $\alpha$ and $\beta$ are not equal to a negative integer,
and $b_n=O(n^{-\varepsilon_1})$ for some $\varepsilon_1>0$ as $n\to\infty$. Then
\begin{equation}\label{eq_inf_prod}
\prod _{n \ge0} \frac{1+{z}/({n+\alpha})}{1+
{z}/({n+\beta
+b_n})} \approx C z^{\beta-\alpha}
\end{equation}
as $z\to{i}nfty$, $|{\operatorname{arg}}(z) |< \pi-\varepsilon_2<\pi
$, where
$C=\frac{\Gamma(\alpha)}{\Gamma(\beta)}\prod_{n \ge0}
(1+\frac
{b_n}{n+\beta} )$.
\end{lemma}
\begin{pf}
First we have to justify absolute convergence of infinite products.
A~product in the left-hand side of (\ref{eq_inf_prod}) converges since
each term is
$1+O(n^{-2})$ and the infinite product in the definition of constant
$C$ converges since each term is $1+O(n^{-1-\varepsilon_1})$
(see Corollary 5.6 on page 166 in \cite{Conway1978} for sufficient
conditions for the absolute convergence of infinite products). Thus we
can rewrite the left-hand side of (\ref{eq_inf_prod}) as
\begin{eqnarray}\label{asympt_proof1}\qquad
\prod _{n \ge0} \frac{1+{z}/({n+\alpha})}{1+
{z}/({n+\beta+b_n})}&=&
\prod _{n \ge0} \frac{1+{z}/({n+\alpha})}{1+
{z}/({n+\beta})}
\prod _{n \ge0} \frac{1+{z}/({n+\beta})}{1+
{z}/({n+\beta
+b_n})}\nonumber\\[-8pt]\\[-8pt]
&=&
C\frac{\Gamma(\beta+z)}{\Gamma(\alpha+z)} \prod _{n\ge0}
\frac
{z+n+\beta}
{z+n+\beta+b_n}.\nonumber
\end{eqnarray}
The ratio of gamma functions gives us the leading asymptotic term
$z^{\beta-\alpha}$ due to~(\ref{eq_gamma_asympt}). Now we need to prove that the last infinite
product in (\ref{asympt_proof1}) converges to one
as $z\to\infty$, $|{\operatorname{arg}}(z) |< \pi-\varepsilon_2<\pi$.
We take the logarithm of this product and estimate it as
\begin{eqnarray*}
\biggl| \sum _{n\ge1} \ln \biggl(\frac{z+n+\beta}
{z+n+\beta+b_n} \biggr) \biggr| &=&
\biggl| \sum _{n\ge1} \ln \biggl(1+\frac{b_n}
{z+n+\beta} \biggr) \biggr| \nonumber\\
&\le&
\sum _{n\ge1} \ln \biggl(1+\frac{| b_n |}
{| z+n+\beta|} \biggr) \nonumber\\
&\le& \sum _{n\ge1} \frac{| b_n |}
{| z+n+\beta|}\\
&\le& A\sum _{n\ge1} \frac{1}
{n^{\varepsilon_1}| z+n+\beta|},
\end{eqnarray*}
where we have used the fact that $\ln(1+x)<x$ for $x>0$ and $| b_n
|< An^{-\varepsilon_1}$ for some $A>0$.
Since $|{\operatorname{arg}}(z) |< \pi-\varepsilon_2<\pi$ we have for
$z$ sufficiently large
$|z + n + \beta| > \max\{1,|n- | z+\beta| |\}$. Let $m=[|z+\beta|]$,
where $[x]$ denotes the integer part of $x$.
Then
\begin{equation}\label{two_series}
\sum _{n\ge1} \frac{1}
{n^{\varepsilon_1}|z+n+\beta|}<\sum _{n=1}^{m} \frac{1}
{n^{\varepsilon_1}(m+1-n)}+\sum _{n=m+1}^{{i}nfty} \frac{1}
{n^{\varepsilon_1}(n-m)}.
\end{equation}
The first series in the right-hand side of (\ref{two_series}) converges
to zero as $m\to\infty$, since
\begin{eqnarray*}
\sum _{n=1}^{m} \frac{1}
{n^{\varepsilon_1}(m+1-n)}&=&\sum _{n=1}^{[\sqrt{m}]} \frac{1}
{n^{\varepsilon_1}(m+1-n)}+\sum _{n=[\sqrt{m}]+1}^m \frac{1}
{n^{\varepsilon_1}(m+1-n)} \\
&<&\frac{[\sqrt{m}]}{m+1-[\sqrt{m}]}+m^{-\varepsilon_1/2} \sum
_{n=[\sqrt{m}]+1}^m \frac{1}
{(m+1-n)} \\ &<& \frac{[\sqrt{m}]}{m+1-[\sqrt{m}]}+m^{-\varepsilon
_1/2} \ln(m).
\end{eqnarray*}
The second series in the right-hand side of (\ref{two_series}) can be
rewritten as $\sum_{n=1}^{\infty} (n+m)^{-\varepsilon_1}n^{-1}$, and we see that it is a convergent series
of positive terms, where each term converges to zero as $m \to\infty$.
By considering its partial sums it is easy to prove that the series
itself must converge to zero as $m \to\infty$.
\end{pf}
\begin{pf*}{Proof of Theorem \ref{thm_nu3_1}}
The proof consists of three steps. The first step is to study solutions
to equation $q+\Psi(\mathrm{i}\zeta)=0$.
We will produce a sequence of ``obvious'' solutions $\zeta_n$ and study
their asymptotics as $n\to\pm\infty$.
Note that this first step requires quite demanding computations, which
can be made much more enjoyable if one uses a symbolic computation package.
The second step is to represent the function $q(q+\Psi(z))^{-1}$ as a
general infinite product, which includes the poles of $\Psi(z)$, the zeros of
$q+\Psi(z)$ (given by the ``obvious'' ones
$\{\mathrm{i}\zeta_0^{\pm},\mathrm{i}\zeta_n\}$
and possibly some ``unaccounted'' zeros) and an exponential factor. Our
main tool will be Hadamard theorem (Theorem 1, page 26, in
\cite{Levin1996} or Theorem 3.4, page 289, in \cite{Conway1978}). We produce
entire functions $P(z)$ and $Q(z)$, such that
$P(z)$ has zeros at poles of $\Psi(z)$ and $q(q+\Psi
(z))^{-1}=P(z)/Q(z)$. After studying the growth rate of $P(z)$ and $Q(z)$
we apply Hadamard theorem and obtain an infinite product for function
$Q(z)$ [function $P(z)$ will have an explicit infinite product]. These
results give us an infinite product for $q(q+\Psi(z))^{-1}$. The last
step is to prove the absence of exponential factor and ``unaccounted''
zeros in this infinite product, and here the main tool will be
asymptotic relation (\ref{eq_inf_prod}) for infinite products provided
by Lemma \ref{lemmma_asymp_for_product}.
First we will prove the localization result (\ref{eq_localization}). We use
(\ref{eq_psi_nu3}) to rewrite equation $q+\Psi(\mathrm{i}\zeta)=0$ as
\begin{equation}\label{eq_Psiq0}
4\pi(\zeta-\alpha) \cot\bigl(\pi(\zeta-\alpha)\bigr)-(\rho+\mu)\zeta
-4\gamma
=\tfrac12 \sigma^2\zeta^2-\mu\zeta-q.
\end{equation}
Note that we have separated the jump part of $\Psi(z)$ on the left-hand
side and the diffusion part on the right-hand side of (\ref
{eq_Psiq0}). See Figure \ref{fig_plot}, where the jump part is
represented by black line and diffusion part by grey dotted line.
\begin{figure}
\caption{Illustration of the proof of Theorem \protect\ref{thm_nu3_1}.\label{fig_plot}}
\end{figure}
The left-hand side of (\ref{eq_Psiq0}) is zero at $\zeta=0$
and goes to $-\infty$ as $\zeta\nearrow\alpha+1$ or $\zeta\searrow
\alpha-1$ (see Figure \ref{fig_plot}). The right-hand side is negative
at $\zeta=0$ and continuous everywhere; thus we have at least one solution
$\zeta_0^{+} \in(0,\alpha+1)$ and at least one solution $\zeta_0^{-} \in(\alpha-1,0)$. In fact
it is easy to prove that we have
\textit{exactly} one solution on each of these intervals, since $4\pi
(\zeta-\alpha) \cot(\pi(\zeta-\alpha))$ is a concave function on
$(\alpha-1,\alpha+1)$, while
$\frac12 \sigma^2 \zeta^2-\mu\zeta-q$ is convex.
Next, for $n\ne0$ we have
\begin{eqnarray*}
&& 4\pi(\zeta-\alpha) \cot\bigl(\pi(\zeta-\alpha)\bigr) \nearrow+\infty
\qquad\mbox{as } \zeta\nearrow\alpha-n,
\zeta\searrow\alpha+n, \\
&& 4\pi(\zeta-\alpha) \cot\bigl(\pi(\zeta-\alpha)\bigr) \searrow-\infty
\qquad\mbox{as } \zeta\searrow\alpha-n,
\zeta\nearrow\alpha+n,
\end{eqnarray*}
thus there must exist at least one zero $\zeta_n$ on each interval
$(n+\alpha, n+\alpha+1)$,
$(n+\alpha-1, n+\alpha)$.
Next we will prove the asymptotic expansion (\ref{eq_asympt_roots1_1}).
Since we have assumed that $\sigma\ne0$ we can rearrange the terms
in (\ref{eq_Psiq0}) to obtain
\begin{eqnarray}\label{eq_asymp1_proof1}
\frac{1}{\pi} \tan\bigl(\pi(\zeta-\alpha)\bigr)&=&\frac{4(\zeta-\alpha
)}{1/2
\sigma^2 \zeta^2+\rho\zeta+4\gamma-q}
\nonumber\\
&=&\frac{8}{\sigma^2}\zeta^{-1}
\biggl[\frac{1-\alpha\zeta^{-1}}{1+2\rho\sigma^{-2} \zeta
^{-1}+O(\zeta
^{-2})} \biggr]\\
&=&
\frac{8}{\sigma^2}\zeta^{-1}-
\frac{8}{\sigma^2} \biggl(\frac{2\rho}{\sigma^2}+\alpha \biggr)\zeta^{-2}+O(\zeta^{-3}).\nonumber
\end{eqnarray}
The main idea in the above calculation is to expand the rational
function in the Taylor series centered at $\zeta={i}nfty$.
Now, the right-hand side of (\ref{eq_asymp1_proof1}) is small when
$\zeta$ is large, and thus the solution to (\ref{eq_asymp1_proof1})
should be close to the solution of $\tan(\pi(\zeta-\alpha))=0$,
which implies
\begin{equation}\label{eq_zeta_omega}
\zeta= n+\alpha+ \omega
\end{equation}
and $\omega=o(1)$ as $n\to\infty$. Next we expand the left-hand side
of (\ref{eq_asymp1_proof1}) in powers of $\omega$ as
\[
\frac{1}{\pi} \tan\bigl(\pi(\zeta-\alpha)\bigr)=\frac{1}{\pi} \tan(\pi
\omega
)=\omega+O(\omega^3)
\]
and, using the first two terms of the Maclaurin series for $\zeta^{-1}$
in powers of $\omega$
\[
\zeta^{-1}=(n+\alpha+\omega)^{-1}=(n+\alpha)^{-1}-\omega(n+\alpha
)^{-2}+O(\omega^2n^{-3}),
\]
we are able to
rewrite (\ref{eq_asymp1_proof1}) as
\[
\omega+O(\omega^3)=\frac{8}{\sigma^2} \bigl((n+\alpha)^{-1}-\omega(n+\alpha)^{-2} \bigr)-
\frac{8}{\sigma^2} \biggl(\frac{2\rho}{\sigma^2}+\alpha \biggr)(n+\alpha)^{-2}+O(n^{-3}).
\]
Asymptotic expansion (\ref{eq_asympt_roots1_1}) follows easily from the
above formula and (\ref{eq_zeta_omega}).
If $\sigma=0$, equation (\ref{eq_asymp1_proof1}) has to be modified
as follows:
\begin{eqnarray}\label{eq_asymp1_proof2}
\frac{1}{\pi} \tan\bigl(\pi(\zeta-\alpha)\bigr)&=&\frac{4(\zeta-\alpha
)}{\rho
\zeta+4\gamma-q}=
\frac{4}{\rho} \biggl[ \frac{1-\alpha\zeta^{-1}}{1+(4\gamma
-q)\rho
^{-1}\zeta^{-1}} \biggr]\nonumber\\
&=&
\frac{4}{\rho}-\frac{4}{\rho^2}(4\gamma-q+\alpha\rho) \zeta^{-1}\\
&&{}+
\frac{4(4\gamma-q)}{\rho^3} (4\gamma-q+\alpha\rho) \zeta
^{-2}+O(\zeta^{-3}),\nonumber
\end{eqnarray}
where again we have expanded the rational function in the Taylor series
centered at $\zeta={i}nfty$.
As before, when $\zeta$ is large the solution of (\ref{eq_Psiq0})
should be close to the solution of
\[
\frac{1}{\pi} \tan\bigl(\pi(\zeta-\alpha)\bigr)=\frac{4}{\rho},
\]
and thus we should expand both sides of (\ref{eq_asymp1_proof2}) in the
Taylor series centered at the solution to the above equation. We define
$\omega$ as
\begin{equation}\label{eq_zeta_omega2}
\zeta=n+\alpha+\frac{1}{\pi}\arctan \biggl(\frac{4\pi}{\rho
} \biggr)+\omega
=n+\alpha+\omega_0+\omega,
\end{equation}
and again $\omega=o(1)$ as $n \to\infty$. To expand the left-hand side
of (\ref{eq_asymp1_proof2}) in power series in $\omega$ we use an
addition formula for $\tan(\cdot)$ and find that
\begin{eqnarray}\label{eq_tan2}\quad
\frac{1}{\pi} \tan\bigl(\pi(\zeta-\alpha)\bigr)&=&\frac{1}{\pi}
\tan
\biggl(\arctan \biggl(\frac{4\pi}{\rho} \biggr)+\pi\omega \biggr)
\nonumber\\
&=&
\frac{1}{\pi}\frac{{4\pi}/{\rho}+\tan(\pi\omega)}{1-
{4\pi}/{\rho}\tan(\pi\omega)}
=\frac{1}{\pi}\frac{{4\pi}/{\rho}+\pi\omega+O(\omega
^3)}{1-
{4\pi}/{\rho}\pi\omega+O(\omega^3)} \nonumber\\[-8pt]\\[-8pt]
&=&\frac{4}{\rho}+
\frac{1}{\rho^2} (16 \pi^2+\rho^2 ) \omega\nonumber\\
&&{} +\frac{4 \pi
^2}{\rho
^3} (16 \pi^2+\rho^2 ) \omega^2+O(\omega^3).\nonumber
\end{eqnarray}
Again, we use (\ref{eq_zeta_omega2}) to obtain the Maclaurin series of
$\zeta^{-1}$ in powers of $\omega$
\begin{eqnarray*}
\zeta^{-1}&=&(n+\alpha+\omega_0+\omega)^{-1}\nonumber\\
&=&(n+\alpha+\omega_0)^{-1}-\omega(n+\alpha+\omega
_0)^{-2}+O(\omega^2n^{-3}).
\end{eqnarray*}
Using (\ref{eq_tan2}) and the above expansion we can rewrite
(\ref{eq_asymp1_proof2}) as
\begin{eqnarray*}
&& (16 \pi^2+\rho^2 ) \biggl[ \omega+\frac{4 \pi
^2}{\rho}\omega
^2 \biggr]\\
&&\qquad=
-4(4\gamma-q+\alpha\rho) \bigl((n+\alpha
+\omega
_0)^{-1}-\omega(n+\alpha+\omega_0)^{-2} \bigr)
\\
&&\qquad\quad{} +\frac{4(4\gamma-q)}{\rho} (4\gamma-q+\alpha\rho)
(n+\alpha+\omega_0)^{-2}+O(n^{-3})+O(\omega^3),
\end{eqnarray*}
and from this equation we obtain the second asymptotic expansion (\ref
{eq_asympt_roots1_2}).
Now we are ready to prove the factorization identity (\ref
{eq_big_factorization}) and the fact that all the
zeros of $q+\Psi(\mathrm{i}\zeta)$ are real and simple and that there are no
other zeros except for the ones described in (\ref{eq_localization}).
First we need to find
an analytic function $P(z)$ such that $P(0)=1$ and which has zeros at
all poles of $\Psi(z)$ (with the same multiplicity).
The choice is rather obvious due to (\ref{eq_psi_nu3}):
\begin{equation}\label{def_P}
P(z)=\frac{\alpha}{\sin(\pi\alpha)}\times\frac{\sinh(\pi(z-\mathrm{i}\alpha))}{ z-\mathrm{i}\alpha}.
\end{equation}
By definition, the function
\begin{equation}\label{def_Q}
Q(z)=q^{-1}\bigl(q+\Psi(z)\bigr)P(z)
\end{equation}
is also analytic in the entire complex plane.
Next, using the definition of $P(z)$ (\ref{def_P}) and $Q(z)$ (\ref
{def_Q}) we check that $Q(z)=0$ if and only if $q+\Psi(z)=0$.
We have proved already that the zeros of $q+\Psi(\mathrm{i}\zeta)$ include
$\zeta_n, \zeta_0^{\pm}$; however, some of them might
have multiplicity greater than one, and there also might exist other
roots (real and/or complex). Let us denote the set of
these unaccounted roots (counting with multiplicity) as
${\mathfrak Z}$. Using asymptotic expansions given by equations
(\ref{eq_asymp1_proof1}) and (\ref{eq_asymp1_proof2}) one can easily
prove that ${\mathfrak Z}$ is a finite set (possibly empty).
Using equations (\ref{def_P}), (\ref{def_Q}) and (\ref{eq_psi_nu3}) we
obtain an explicit formula for $Q(z)$ from which it easily follows
that $Q(z)$ has order equal to one, which means that one is the infimum
of all $\gamma>0$ such that
$Q(z)=O(\exp(|z|^{\gamma}))$ as $z\to\infty$; the rigorous definition
can be found in \cite{Levin1996}, page 4 or Chapter 11 in \cite
{Conway1978}. Since $Q(z)$ has order equal to one, we can use the
Hadamard theorem (see Theorem 1, page~26, in \cite{Levin1996} or Theorem
3.4, page 289, in \cite{Conway1978})
to represent it as an infinite product over its zeros
\begin{eqnarray*}
Q(z)&=&\exp(c_1 z) \biggl(1+\frac{\mathrm{i} z}{\zeta_0^{+}} \biggr)
\biggl(1+\frac{\mathrm{i} z}{\zeta_0^{-}} \biggr)
\\
&&{}\times
\prod_{z_k \in{\mathfrak Z}} \biggl(1+\frac{\mathrm{i} z}{z_k} \biggr)
\prod_{| n |\ge1} \biggl(1+\frac{\mathrm{i} z}{\zeta_n} \biggr)\exp\biggl(-\frac{\mathrm{i} z}{\zeta_n} \biggr)
\end{eqnarray*}
for some constant $c_1 \in{\mathbb C}$.
As the next step we rearrange the infinite product in the above formula
and obtain
\begin{eqnarray}\label{product_Q_zeta}
Q(z)&=&\exp(c_2 z) \biggl(1+\frac{\mathrm{i} z}{\zeta_0^{+}} \biggr)
\biggl(1+\frac{\mathrm{i} z}{\zeta_0^{-}} \biggr)
\nonumber\\[-8pt]\\[-8pt]
&&{}\times
\prod_{z_k \in{\mathfrak Z}} \biggl(1+\frac{\mathrm{i} z}{z_k} \biggr)
\prod_{n \ge1} \biggl(1+\frac{\mathrm{i} z}{\zeta_n} \biggr)
\biggl(1+\frac{\mathrm{i} z}{\zeta_{-n}} \biggr)\nonumber
\end{eqnarray}
for some other constant $c_2 \in{\mathbb C}$, where the infinite product
converges absolutely since each term is $1+O(n^{-2})$ as $n\to\infty$.
Using definition of $P(z)$ (\ref{def_P}) and infinite product
representation of trigonometric functions (see formulas 1.431 in \cite
{Jeffrey2007}) we find that
\begin{equation}\label{product_P}
P(z)=\prod_{n \ge1} \biggl(1+\frac{\mathrm{i} z}{n+\alpha} \biggr)
\biggl(1+\frac{\mathrm{i} z}{-n+\alpha} \biggr).
\end{equation}
Combining equations (\ref{def_P}), (\ref{def_Q}),
(\ref{product_Q_zeta}) and (\ref{product_P}) we finally conclude that
for all $z \in{\mathbb C}$
\begin{equation}\label{proof_factn}
\frac{q}{q+\Psi(z)}=\frac{\exp(c_2 z)}{ (1+{\mathrm{i} z}/{\zeta_0^{+}} ) (1+{\mathrm{i} z}/{\zeta_0^{-}} )}
\prod_{z_k \in{\mathfrak Z}} \frac{1}{1+{\mathrm{i} z}/{z_k}}
\prod_{| n |\ge1} \frac{1+{\mathrm{i} z}/({n+\alpha})}{1+{\mathrm{i} z}/{\zeta_n}}.\hspace*{-28pt}
\end{equation}
First let us prove that $c_2=0$. Denote the left-hand side of
(\ref{proof_factn})
as $F_1(z)$ and right-hand side as $F_2(z)$.
Since $\Psi(z)$ is a characteristic exponent,
it must be $O(z^2)$ as $z \to\infty$, $z \in{\mathbb R}$; thus clearly
$z^{-1}\ln(F_1(z)) \to0$ as $z \to\infty$, $z\in{\mathbb R}$.
Using Lem\-ma~\ref{lemmma_asymp_for_product} we find that $z^{-1}\ln(F_2(z)) \to c_2$ as $z \to\infty$, $z \in{\mathbb R}$, which implies that
$c_2=0$.
All that is left to do is to prove that ${\mathfrak Z}$ is an empty
set. The main tool is again Lemma \ref{lemmma_asymp_for_product}.
Assuming that $\sigma\ne0$ and using asymptotic expansion (\ref{eq_asympt_roots1_1})
and Lem\-ma~\ref{lemmma_asymp_for_product} we find that the infinite
product in (\ref{proof_factn}) converges to a
constant as $z\to\infty$, $z\in{\mathbb R}$. Thus the function $F_2(z) \approx A_2
z^{-2-M}$, where $M$ is equal to the number of
elements in the set ${\mathfrak Z}$.
However, the function $F_1(z) \approx A_1 z^{-2}$ as $z\to\infty$, $z\in{\mathbb R}$; thus $M=0$ and the set ${\mathfrak Z}$ must be empty.
In the case $\sigma=0$ the proof is identical,
except that both $F_1(z)$ and $F_2(z)$ behave like $Az^{-1}$, which can
be established by the asymptotic expression for
$\zeta_n$ given in
(\ref{eq_asympt_roots1_2})
and Lemma \ref{lemmma_asymp_for_product}.
\end{pf*}
Theorem \ref{thm_nu3_1} provides us with all the information about the
zeros of $q+\Psi(z)$ that we will need
later to prove results about Wiener--Hopf factors and perform numerical
computations. However, we can also compute explicitly
the sums of inverse powers of zeros. These results can be useful for
checking\vspace*{1pt} the accuracy, but more importantly, for approximating the
smallest solutions
$\zeta_0^{\pm}$. We assume that $\alpha\ne0$ and define for
$m\ge0$
\begin{equation}\label{def_Sm}
\Omega_m=\alpha^{-m-1}+ ( \zeta_0^{-}
)^{-m-1}+ (\zeta_0^{+} )^{-m-1}+\sum _{n\ge1}
[ \zeta_n^{-m-1}+\zeta_{-n}^{-m-1} ].
\end{equation}
Asymptotic expansions (\ref{eq_asympt_roots1_1}) and (\ref
{eq_asympt_roots1_2}) guarantee that the series converges absolutely
for $m\ge0$, thus the sequence $\{\Omega_m\}_{m\ge0}$ is correctly defined.
\begin{lemma}\label{Lemma_explicit_Sm}
The sequence $\{\Omega_m\}_{m\ge0}$ can be computed using the
following recurrence relation:
\[
\Omega_m=-\frac{1}{b_0} \Biggl[(m+1)b_{m+1}+\sum _{n=0}^{m-1}
\Omega
_n b_{m-n} \Biggr], \qquad m\ge0,
\]
where coefficients $\{b_n\}_{n\ge0}$ are defined as
\begin{eqnarray}\label{def_bn}
b_{2n}&=&\frac{(-1)^{n-1}\pi^{2n-1}}{(2n)!} [ n(2n-1)\alpha
\sigma
^2+\pi^2\alpha(q+8n)-2n\gamma\rho ] \nonumber\\
b_{2n+1}&=&\frac{(-1)^{n}\pi^{2n}}{(2n+1)!} \biggl[n(2n+1)\frac
{\gamma
\sigma^2}{\pi} -
\pi(4\pi^2 \alpha^2+4\gamma^2-\gamma q)\\
&&\hspace*{128.2pt}{}+\pi(2n+1)(4\gamma+\alpha
\rho)
\biggr].\nonumber
\end{eqnarray}
\end{lemma}
\begin{pf}
This statement is just an application of the following general result.
Assume that we have an entire function $H(z)$ which can be expressed
as an infinite product over the set of its zeros ${\mathfrak Z}$
\[
H(z)=\prod_{z_k \in{\mathfrak Z}} \biggl(1-\frac{z}{z_k} \biggr).
\]
Taking derivative of $\ln(H(z))$ we find
\[
H'(z)=-H(z)\sum_{z_k \in{\mathfrak Z}} (z_k-z)^{-1}=-H(z)
\sum_{m\ge0}
\biggl[ \sum_{z_k \in{\mathfrak Z}} z_k^{-m-1} \biggr] z^m,
\]
and the recurrence relation for $\sum_{z_k \in{\mathfrak Z}} z_k^{-m-1}$ is obtained by expanding $H(z)$ and $H'(z)$ as a Maclaurin
series, multiplying two series in the right-hand side and comparing the
coefficients in front of $z^m$. The statement of Lemma \ref
{Lemma_explicit_Sm} follows by considering an entire function
\[
H(z)=q \pi(z-\alpha) Q(\mathrm{i} z),
\]
where $Q(z)$ is defined by (\ref{def_Q}). Function $H(z)$ has zeros at
$\{\alpha, \zeta_0^{\pm}, \zeta_n \}$, and one can check
that the Maclaurin expansion
is given by $H(z)=\sum_{n\ge0} b_n z^n$ where coefficients $b_n$ are
defined in (\ref{def_bn}).
\end{pf}
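The recurrence of Lemma \ref{Lemma_explicit_Sm} is easy to implement. The sketch below (ours; plain Python) takes the Maclaurin coefficients of any entire function of the above form and returns the sums of inverse powers of its zeros; it is exercised on a toy polynomial rather than on the actual coefficients (\ref{def_bn}), which would be supplied in practice.
\begin{verbatim}
def inverse_power_sums(b, M):
    """Given Maclaurin coefficients b[0..M] of H(z) = b[0]*prod_k(1 - z/z_k),
    return Omega_m = sum_k z_k^{-m-1} for m = 0, ..., M-1."""
    Omega = []
    for m in range(M):
        s = (m + 1) * b[m + 1]
        for n in range(m):
            s += Omega[n] * b[m - n]
        Omega.append(-s / b[0])
    return Omega

# toy check with H(z) = (1 - z)(1 - z/2) = 1 - 1.5 z + 0.5 z^2,
# whose zeros are 1 and 2: Omega_0 = 1.5, Omega_1 = 1.25, Omega_2 = 1.125
print(inverse_power_sums([1.0, -1.5, 0.5, 0.0, 0.0], 3))
\end{verbatim}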
Finally we can state and prove our main results: expressions for
Wiener--Hopf factors and density of $S_{\tau}$.
\begin{theorem}\label{WH_factors_nu3} For $q>0$,
\begin{eqnarray}\label{explicit_WH_factors}
\phi_q^{-}(z)&=&
\frac{1}{1+{\mathrm{i} z}/{\zeta_0^{+}}}
\prod_{n \ge1} \frac{1+{\mathrm{i} z}/({n+\alpha})}{1+{\mathrm{i} z}/{\zeta_n}},\nonumber\\[-8pt]\\[-8pt]
\phi_q^{+}(z)&=&
\frac{1}{1+{\mathrm{i} z}/{\zeta_0^{-}}}
\prod_{n \le-1} \frac{1+{\mathrm{i} z}/({n+\alpha})}{1+{\mathrm{i} z}/{\zeta_n}}.\nonumber
\end{eqnarray}
The infinite products converge uniformly on compact subsets of ${\mathbb C}\setminus\mathrm{i}{\mathbb R}$.
The density of $S_{\tau}$ is given by
\begin{equation}\label{density_of_Mtauq}
\frac{\mathrm{d}}{\mathrm{d} x} {\mathbb P}(S_{\tau} \le x)=- c_0^{-}\zeta_0^{-}e^{\zeta_0^{-}x}-
\sum_{k \le-1} c_k^{-} \zeta_{k} e^{\zeta_{k} x},
\end{equation}
where
\begin{eqnarray}\label{eq_ck_minus}
c_0^{-}&=&
\prod_{n \le-1} \frac{1-{\zeta_0^{-}}/({n+\alpha})}{1-{\zeta_0^{-}}/{\zeta_n}},\nonumber\\[-8pt]\\[-8pt]
c_k^{-}&=&\frac{ 1-{\zeta_{k}}/({k+\alpha})}{1-{\zeta_{k}}/{\zeta_0^{-}}}
\prod_{n \le-1, n\ne k} \frac{1-{\zeta_{k}}/({n+\alpha})}{1-{\zeta_{k}}/{\zeta_n}}.\nonumber
\end{eqnarray}
\end{theorem}
\begin{pf}
Expressions (\ref{explicit_WH_factors}) for Wiener--Hopf factors are
obtained using factorization identity (\ref{eq_big_factorization})
and Lemmas \ref{Lemma1} and \ref{lemmma_asymp_for_product}. Expression
(\ref{density_of_Mtauq}) for the density of $S_{\tau}$ is
derived by computing the inverse Fourier transform via residues.
\end{pf}
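For numerical purposes the series (\ref{density_of_Mtauq}) can be truncated once the roots $\zeta_0^{-}, \zeta_{-1}, \dots, \zeta_{-N}$ have been computed, for instance by the bisection sketch following Theorem \ref{thm_nu3_1}. The Python fragment below is ours and purely illustrative (it assumes \texttt{numpy}; the function name and the argument \texttt{neg\_roots} are our own conventions).
\begin{verbatim}
import numpy as np

def supremum_density(x, zeta0_minus, neg_roots, alpha):
    """Truncated version of the density series for S_tau.

    neg_roots: list of pairs (k, zeta_k) for k = -1, -2, ..., -N,
    e.g. produced by the bisection sketch given earlier."""
    c0 = np.prod([(1 - zeta0_minus / (n + alpha)) / (1 - zeta0_minus / zn)
                  for n, zn in neg_roots])
    dens = -c0 * zeta0_minus * np.exp(zeta0_minus * x)
    for k, zk in neg_roots:
        ck = (1 - zk / (k + alpha)) / (1 - zk / zeta0_minus)
        ck *= np.prod([(1 - zk / (n + alpha)) / (1 - zk / zn)
                       for n, zn in neg_roots if n != k])
        dens -= ck * zk * np.exp(zk * x)
    return dens
\end{verbatim}
With the notation of the earlier sketch one could take, for example, \texttt{neg\_roots = [(n, root\_in(n + alpha - 1, n + alpha)) for n in range(-1, -41, -1)]} and \texttt{zeta0\_minus = root\_in(alpha - 1, 0)}; Section \ref{section_implementation} discusses how the convergence of such truncated products and series can be accelerated.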
\begin{remark}
Theorem \ref{WH_factors_nu3} remains true for $q=0$ if $\mu<0$. In this
case ${\mathbb E}X_1 <0$ and $S_{\tau}\to S_{\infty}$
and $I_{\tau} \to-\infty$ as $q\to0^+$. From the analytical point of
view we have $\zeta_0^{-}<0$ and
$\zeta_0^{+}=0$ [see Figure
\ref{fig_plot} and (\ref{eq_Psiq0})]. If $\mu=0$, then
${\mathbb E}X_1=0$ and the process $X_t$ oscillates; thus $S_{\infty}=-I_{\infty}=\infty$, which is expressed analytically by the fact that $\zeta_0^{+}=\zeta_0^{-}=0$.
\end{remark}
\section{A family of L\'evy processes}\label{section_results_nu4}
\begin{definition}
We define a $\beta$-family of L\'evy processes by the generating triple
$(\mu,\sigma,\nu)$, where the
density of the L\'evy measure is defined as
\begin{equation}\label{def_nu4}
\nu(x)=c_1\frac{e^{-\alpha_1 \beta_1 x}}{(1-e^{-\beta_1
x})^{\lambda
_1}}{\mathbf I}_{\{x>0\}}+c_2\frac{e^{\alpha_2 \beta_2
x}}{(1-e^{\beta
_2 x})^{\lambda_2}}{\mathbf I}_{\{x<0\}}
\end{equation}
and parameters satisfy $\alpha_i>0$, $\beta_i>0$, $c_i\ge0$ and
$\lambda_i \in(0,3)$. This L\'evy measure has exponential tails; thus
we will use
the cut-off function $h(x)\equiv x$ in (\ref{levy_khintchine}).
\end{definition}
The $\beta$-family is quite rich: in particular, by controlling
parameters $\lambda_i$,
we can obtain an arbitrary behavior of small jumps, and parameters
$\alpha_i$ and $\beta_i$ are responsible for the tails of the L\'evy measure
(which are always exponential). Parameters $c_i$ control the total
``intensity'' of positive/negative jumps. The processes in the $\beta
$-family are
similar to the generalized tempered stable processes (see \cite{Cont}),
which were also named KoBoL processes in \cite{BoLe} and \cite{Boyarchenko}:
\[
\nu(x)=
c_{+} \frac{e^{-\alpha_{+} x}}{ x ^{\lambda_{+
}}}{\mathbf
I}_{\{x>0\}}+
c_{-} \frac{e^{\alpha_{-} x}}{| x |^{\lambda_{-
}}}{\mathbf I}_{\{x<0\}}.
\]
In fact we can obtain the above measure as the limit of L\'evy measures
in $\beta$-family. If we set $c_1=c_{+}\beta^{\lambda_{+}}$,
$c_2=c_{-}\beta^{\lambda_{-}}$, $\alpha_1=\alpha_{+
}\beta^{-1}$,
$\alpha_2=\alpha_{-}\beta^{-1}$, $\beta_1=\beta_2=\beta$
and let
$\beta\to0^+$ we see that the L\'evy measure defined in
(\ref{def_nu4}) will converge to the L\'evy measure of the generalized
tempered stable process.
Next, when $\lambda_1=\lambda_2$, the processes in the $\beta$-family are
similar to the
tempered stable processes (see \cite{BaLe}). If we restrict the parameters
even further, taking $c_1=c_2$, $\lambda_1=\lambda_2$
and $\beta_1=\beta_2$, so that the small positive/negative jumps have
the same behavior while the large jumps, which are
controlled by $\alpha_i$, may be different, we obtain a process very
similar to the CGMY family defined in \cite{CGMY2002}.
Finally, if $c_i=4$, $\beta_i=1/2$, $\lambda_i=2$ and $\alpha
_1=1-\alpha
$ and $\alpha_2=1+\alpha$, we obtain the process $X_t$ discussed in
Section \ref{section_results_nu3}.
If we restrict parameters as $\sigma=0$, $\beta_1=\beta_2$, $\lambda
_1=\lambda_2$ (and $\mu$ uniquely specified in terms of these
parameters), then the $\beta$-family reduces to
a family of Lamperti-stable processes, which can be obtained by
Lamperti transformation from the stable processes conditioned to stay positive
(see the original paper \cite{Lamperti1972} by Lamperti for the
definition of this transformation and its various properties).
Spectrally one-sided Lamperti-stable processes appeared in
\cite{BertoinYor2001} and \cite{Patie2009}, and two-sided processes were
studied in \cite{CaCha,Caballero2008,Chaumont2009} and \cite{KyPaRi}.
Lamperti-stable processes are a particularly interesting subclass of
the $\beta$-family since they offer many examples of fluctuation
identities related to Wiener--Hopf factorization which can be computed
in closed form
(see \cite{CaCha,Chaumont2009} and \cite{KyPaRi}).
In the following proposition we derive a formula for the characteristic
exponent $\Psi(z)$ for processes in the $\beta$-family.
As we will see, the characteristic exponent can be expressed in terms
of beta and digamma functions (see Chapter 6 in \cite{AbramowitzStegun}
or Section 8.3 in \cite{Jeffrey2007})
\begin{equation}\label{eq_def_beta_digamma}
\mathrm{B}(x;y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)},\qquad
\psi
(x)=\frac{{d}}{{d} x}\ln(\Gamma(x)),
\end{equation}
which justifies the name of the family.
\begin{proposition} If $\lambda_i \in(0,3)\setminus\{1,2\}$, then
\begin{eqnarray}\label{def_psi_nu4}
\Psi(z)&=&\frac{\sigma^2z^2}2+{i}\rho z
-\frac{c_1}{\beta_1} \mathrm{B} \biggl(\alpha_1-\frac
{{i}
z}{\beta_1}; 1-\lambda_1 \biggr)\nonumber\\[-8pt]\\[-8pt]
&&{}-
\frac{c_2}{\beta_2} \mathrm{B} \biggl(\alpha_2+\frac{{i} z}{\beta_2};
1-\lambda
_2 \biggr)+\gamma,\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
\gamma&=&\frac{c_1}{\beta_1} \mathrm{B} (\alpha_1; 1-\lambda_1
)+\frac{c_2}{\beta_2} \mathrm{B} (\alpha_2; 1-\lambda_2),\nonumber\\
\rho&=&\frac{c_1}{\beta_1^2} \mathrm{B} (\alpha_1; 1-\lambda_1
)\bigl(\psi(1+\alpha_1-\lambda_1)-\psi(\alpha_1)\bigr)
\\
&&{}-
\frac{c_2}{\beta_2^2} \mathrm{B} (\alpha_2; 1-\lambda_2
)\bigl(\psi
(1+\alpha_2-\lambda_2)-\psi(\alpha_2)\bigr)-\mu.
\end{eqnarray*}
If $\lambda_1$ or $\lambda_2\in\{1,2\}$, the characteristic exponent can
be computed using the following two integrals:
\begin{eqnarray}\label{int_nu_lambda1}\qquad
&& \int_0^{\infty} (e^{{i} xy}-1-{i} xy)\frac{e^{-\alpha\beta x}}{1-e^{-\beta x}}\,{d} x\nonumber\\[-8pt]\\[-8pt]
&&\qquad=
-\frac{1}{\beta} \biggl[ \psi \biggl(\alpha-\frac{{i} y}{\beta} \biggr)-\psi(\alpha) \biggr]-\frac{{i} y}{\beta^2} \psi'(\alpha)
\nonumber\\
\label{int_nu_lambda2}
&& \int_0^{\infty} (e^{{i} xy}-1-{i} xy)\frac{e^{-\alpha\beta x}}{(1-e^{-\beta x})^2}\,{d} x
\nonumber\\[-8pt]\\[-8pt]
&&\qquad =-\frac{1}{\beta} \biggl(1-\alpha+\frac{{i} y}{\beta} \biggr) \biggl[ \psi \biggl(\alpha-\frac{{i} y}{\beta} \biggr)-\psi(\alpha) \biggr]-\frac{{i} y(1-\alpha)}{\beta^2} \psi'(\alpha).\nonumber
\end{eqnarray}
\end{proposition}
\begin{pf}
First we assume that $\lambda\in(0,1)$. Performing the change of
variables $u=\exp(-\beta x)$ and using the integral representation for the beta
function (formula 8.380.1 in
\cite{Jeffrey2007}) we find that
\begin{eqnarray*}
&&\beta\int_{0}^{\infty} (e^{{i} xz}-1-{i} xz )
\frac{e^{-\alpha\beta x}}{(1-e^{-\beta x})^{\lambda}}\,{d} x\\
&&\qquad=
\mathrm{B} \biggl( \alpha-\frac{{i} z}{\beta};1-\lambda \biggr)-
\mathrm{B} ( \alpha;1-\lambda )-z \biggl[\frac{{d}}{{d} z}
\mathrm{B} \biggl( \alpha-\frac{{i} z}{\beta};1-\lambda
\biggr) \biggr]_{z=0},
\end{eqnarray*}
and we obtain the desired result (\ref{def_psi_nu4}). The left-hand
side of the above equation is analytic in $\lambda$ for
$\operatorname{Re}(\lambda)<3$, and the right-hand side is analytic
and well defined
for $\operatorname{Re}(\lambda)<3$, $\lambda\notin\{1,2\}$; thus by analytic
continuation they should be equal for $\lambda\in(0,3)\setminus\{
1,2\}$.
Assume that $\lambda=2$. Then using the binomial series we can expand
\[
\bigl(1-\exp(-x)\bigr)^{-2}=\sum_{n\ge0} (n+1)\exp(-nx),
\]
which converges uniformly on $(\varepsilon,\infty)$ for every $\varepsilon>0$,
and obtain
\begin{eqnarray*}
&&\beta\int_{0}^{\infty} (e^{{i} xy}-1-{i} xy )
\frac{e^{-\alpha\beta x}}{(1-e^{-\beta x})^{2}} \,{d} x
\nonumber\\[-1pt]
&&\qquad=
\sum _{n\ge0} \biggl[\frac{n+1}{n+\alpha-{{i} y}/{\beta
}}-\frac
{n+1}{n+\alpha}-\frac{{i} y}{\beta}\frac{n+1}{(n+\alpha)^2}
\biggr]\\[-1pt]
&&\qquad= \biggl(1-\alpha+\frac{{i} y}{\beta} \biggr)\sum _{n\ge0}
\biggl[\frac{1}{n+\alpha-{{i} y}/{\beta}}-\frac{1}{n+\alpha} \biggr]\\[-1pt]
&&\qquad\quad{} - \frac{{i} y}{\beta}(1-\alpha)\sum _{n\ge0} \frac
{1}{(n+\alpha)^2},
\end{eqnarray*}
and using the series representation for digamma function (formula
8.362.1 in \cite{Jeffrey2007}) we obtain (\ref{int_nu_lambda2}).
Derivation of formula (\ref{int_nu_lambda1})
corresponding to the case $\lambda=1$ is identical.
\end{pf}
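As a practical illustration of formula (\ref{def_psi_nu4}), the
characteristic exponent can be evaluated with standard special-function
libraries. The following Python sketch is ours (it is not the code used for
the computations in this paper); the parameter values in the example are
arbitrary, and the check uses the fact that $\Psi(0)=0$.
\begin{verbatim}
# Numerical sketch of Psi(z) from (def_psi_nu4), for lambda_i in (0,3)
# excluding {1,2}.  Uses scipy for the gamma and digamma functions.
import numpy as np
from scipy.special import gamma, digamma

def B(x, y):
    # beta function B(x; y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

def make_psi(sigma, mu, c1, c2, a1, a2, b1, b2, l1, l2):
    gam = c1 / b1 * B(a1, 1 - l1) + c2 / b2 * B(a2, 1 - l2)
    rho = (c1 / b1**2 * B(a1, 1 - l1) * (digamma(1 + a1 - l1) - digamma(a1))
           - c2 / b2**2 * B(a2, 1 - l2) * (digamma(1 + a2 - l2) - digamma(a2))
           - mu)
    def psi(z):
        z = np.asarray(z, dtype=complex)
        return (0.5 * sigma**2 * z**2 + 1j * rho * z
                - c1 / b1 * B(a1 - 1j * z / b1, 1 - l1)
                - c2 / b2 * B(a2 + 1j * z / b2, 1 - l2)
                + gam)
    return psi

# arbitrary illustrative parameters; Psi(0) should be (numerically) zero
psi = make_psi(sigma=1.0, mu=-0.1, c1=4.0, c2=4.0, a1=0.75, a2=1.25,
               b1=0.5, b2=0.5, l1=1.5, l2=1.5)
print(psi(0.0))
\end{verbatim}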
The following theorem is the analogue of Theorem \ref{thm_nu3_1}, and
it is the main result in this section.
\begin{theorem}\label{thm_nu4_1} Assume that $q>0$ and that $\Psi(z)$
is given by (\ref{def_psi_nu4}).
\begin{longlist}
\item Equation $q+\Psi({i}\zeta)=0$ has infinitely many solutions,
all of which are real and simple.
They are located as follows:
\begin{eqnarray}
\zeta_0^{-} &\in&(-\beta_1\alpha_1,0), \nonumber\\[-1pt]
\zeta_0^{+} &\in& (0,\beta_2\alpha_2), \nonumber\\[-9pt]\\[-9pt]
\zeta_n &\in& \bigl(\beta_2(\alpha_2+n-1), \beta_2(\alpha_2+n)\bigr),\qquad
n\ge1 ,\nonumber\\[-1pt]
\zeta_n &\in&\bigl(\beta_1(-\alpha_1+n),\beta_1(-\alpha_1+
n+1)\bigr),\qquad
n\le-1.\nonumber
\end{eqnarray}
\item If $\sigma\ne0$ we have
\begin{eqnarray}\label{asympt_zetan_nu41}
\zeta_{n+1}&=&\beta_2(n+\alpha_2)
+\frac{2 c_2}{\sigma^2 \beta_2^2\Gamma(\lambda_2)}
(n+\alpha_2)^{\lambda_2-3}\nonumber\\
&&{}+O(n^{\lambda_2-3-\varepsilon}),\qquad
n\to+\infty,\nonumber\\[-8pt]\\[-8pt]
\zeta_{n-1}&=&\beta_1(n-\alpha_1)
-\frac{2 c_1}{\sigma^2\beta_1^2\Gamma(\lambda_1)}
(-n+\alpha_1)^{\lambda_1-3}\nonumber\\
&&{}+
O( n^{\lambda_1-3-\varepsilon}),\qquad
n\to-\infty.\nonumber
\end{eqnarray}
\item
If $\sigma=0$ we have
\begin{eqnarray}\label{asympt_zetan_nu42}
\zeta_{n+\delta}&=&\beta_2(n+\alpha_2+\omega_0)
+A(n+\alpha_2+\omega_0)^{\lambda}\nonumber\\[-8pt]\\[-8pt]
&&{}+O(n^{\lambda
-\varepsilon
}),\qquad
n\to+\infty,\nonumber
\end{eqnarray}
where the coefficients $\omega_0$ and $A$ are presented in Table \ref{tab1},
$\delta\in\{0,1\}$ depends on the
signs of $\omega_0$ and $A$, and
\[
x_0=\frac{1}{\pi} \arctan \biggl( \sin(\pi\lambda_2) \biggl(\frac
{c_1\beta
_2^{\lambda_2} \Gamma(1-\lambda_1)}{c_2\beta_1^{\lambda_1}\Gamma
(1-\lambda_2)}- \cos(\pi\lambda_2) \biggr)^{-1} \biggr).
\]
The corresponding results for $n \to-\infty$ can be obtained by
symmetry considerations.
\begin{table}[b]
\caption{Coefficients for asymptotic expansion of $\zeta_n$ when
$\sigma=0$}\label{tab1}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccc@{}}
\hline
&$\bolds{\omega_0}$ & $\bolds{A}$ & $\bolds{\lambda}$ \\
\hline
$\lambda_1<2$, $\lambda_2<2$ & 0 &
$ \frac{c_2}{\rho\beta_2 \Gamma(\lambda_2)}$ &
$\lambda
_2-2$ \\[4pt]
$\lambda_1<2$, $\lambda_2>2$ & $2-\lambda_2$ &
$ -\frac{\sin(\pi\lambda_2) \beta_2^3 \rho}{\pi c_2
\Gamma(1-\lambda_2)}$ & $2-\lambda_2$ \\[4pt]
$\lambda_1>2$, $\lambda_2<\lambda_1$ &0 &
$ \frac{c_2\beta_1^{\lambda_1}}{c_1 \beta_2^{\lambda
_1-1}\Gamma(1-\lambda_1)\Gamma(\lambda_2)}$ &
$\lambda_2-\lambda_1$ \\[4pt]
$\lambda_1>2$, $\lambda_2>\lambda_1$ &
$2-\lambda_2$ &
$ -\frac{\sin(\pi\lambda_2)}{\pi} \frac{c_1\beta
_2^{\lambda_1+1}
\Gamma(1-\lambda_1)}{c_2\beta_1^{\lambda_1}\Gamma(1-\lambda_2)}$
& $\lambda_1-\lambda_2$ \\[4pt]
$\lambda_1>2$, $\lambda_2=\lambda_1$ & $x_0$ &
$ -\rho\frac{\sin(\pi x_0)^2}{\pi^2}\frac{\beta_2^3
}{c_2}\Gamma(\lambda_2)$ & $2-\lambda_2$ \\
\hline
\end{tabular*}
\end{table}
\item Function $q(q+\Psi(z))^{-1}$ can be factorized as follows:
\begin{eqnarray}\label{big_factorization2}\quad
\frac{q}{q+\Psi(z)}&=&\frac{1}{ (1+{{i} z}/{\zeta_0^{+}}
) (1+{{i} z}/{\zeta_0^{-}} )}
\prod _{ n \ge1} \frac{1+{{i} z}/({\beta_2(n-1+\alpha
_2)})}{1+{{i} z}/{\zeta_n}}\nonumber\\[-8pt]\\[-8pt]
&&{}\times
\prod _{ n \le-1} \frac{1+{{i} z}/({\beta_1(n+1-\alpha
_1)})}{1+{{i} z}/{\zeta_n}},\nonumber
\end{eqnarray}
where the infinite products converge uniformly on the compact subsets
of the complex plane excluding zeros/poles of $q+\Psi(z)$.
\end{longlist}
\end{theorem}
\begin{remark}
When $\sigma=0$ the remaining cases $\lambda_1<2$, $\lambda_2=2$ and
$\lambda_1=2$, $0<\lambda_2<3$ are not covered
by Theorem \ref{thm_nu4_1}. The interested reader can derive these
asymptotic expansions by
using formulas (\ref{int_nu_lambda1}), (\ref{int_nu_lambda2}) and
the following results for the digamma function (see formulas 6.3.7 and
6.3.18 in \cite{AbramowitzStegun}):
\[
\psi(1-z)=\psi(z)+\pi\cot(\pi z),\qquad
\psi(z)=\ln(z)-\frac{1}{2z}+O(z^{-2}), \qquad z\to{i}nfty.
\]
\end{remark}
\begin{pf*}{Proof of Theorem \ref{thm_nu4_1}}
The proof of (i) is very similar to the corresponding part of the proof
of Theorem \ref{thm_nu3_1}. We separate equation
$q+\Psi({i}\zeta)=0$ into a jump part and a diffusion part, find points
where the jump part goes to infinity and by
analyzing the signs we conclude that on every interval between these
points there should exist a solution.
The proof of (ii) and (iii) is based on the following two asymptotic
formulas as $\zeta\to+\infty$:
\begin{eqnarray*}
\mathrm{B}(\alpha+\zeta;\gamma)&=&\Gamma(\gamma)\zeta^{-\gamma
}
\biggl[1-\frac
{\gamma(2\alpha+\gamma-1)}{2\zeta}+O(\zeta^{-2}) \biggr],\\
\mathrm{B}(\alpha-\zeta;\gamma)&=&\Gamma(\gamma)\frac{\sin(\pi
(\zeta-\alpha
-\gamma))}{\sin(\pi(\zeta-\alpha))}
\zeta^{-\gamma} \biggl[1+\frac{\gamma(2\alpha+\gamma-1)}{2\zeta
}+O(\zeta
^{-2}) \biggr].
\end{eqnarray*}
The first asymptotic expansion follows from the definition of beta
function (\ref{eq_def_beta_digamma}) and formula 6.1.47 in
\cite{AbramowitzStegun}, while the second formula
can be reduced to the first one by applying a reflection formula for
the gamma function.
If $\sigma\ne0$ and $\zeta\to+\infty$ we use (\ref{def_psi_nu4}) and
the above formulas and rewrite equation $q+\Psi({i}\zeta)=0$ as
\begin{eqnarray*}
\frac{\sin ( \pi ({\zeta}/{\beta_2}-\alpha
_2+\lambda_2
) )}
{\sin ( \pi ({\zeta}/{\beta_2}-\alpha_2
) )}&=&
\frac{\sigma^2\beta_2^{\lambda_2}}{2c_2\Gamma(1-\lambda_2)} \zeta
^{3-\lambda_2}+O(\zeta^{1-\lambda_2})\\
&&{}+O(\zeta^{\lambda_1-\lambda_2}),
\end{eqnarray*}
while if $\sigma=0$ and $\zeta\to+\infty$ we have
\begin{eqnarray*}
\frac{\sin ( \pi ({\zeta}/{\beta_2}-\alpha
_2+\lambda
_2 ) )}
{\sin ( \pi ({\zeta}/{\beta_2}-\alpha_2
) )} &=&
\frac{\beta_2^{\lambda_2}\rho}{c_2\Gamma(1-\lambda_2)}\zeta
^{2-\lambda_2} +
\frac{c_1\beta_2^{\lambda_2} \Gamma(1-\lambda_1)}{c_2\beta
_1^{\lambda
_1}\Gamma(1-\lambda_2)}\zeta^{\lambda_1-\lambda_2}\\
&&{} + O(\zeta
^{1-\lambda
_2})+O(\zeta^{\lambda_1-\lambda_2-1}).
\end{eqnarray*}
Asymptotic expansions (\ref{asympt_zetan_nu41}) and (\ref
{asympt_zetan_nu42}) can be derived from the above formulas using
the same method as in the proof of Theorem \ref{thm_nu3_1}.\vadjust{\goodbreak}
In order to prove factorization identity (\ref{big_factorization2}) and
the fact that there are no other roots, we use exactly the same approach
as in the proof of Theorem \ref{thm_nu3_1}. Again we choose an entire
function $P(z)$ which has
zeros at the poles of $\Psi(z)$ with the same multiplicity, and the
choice is obvious due to (\ref{def_psi_nu4}):
\begin{equation}\label{def_PQ2}
P(z)=
\biggl[\Gamma \biggl(\alpha_1-\frac{{i} z}{\beta_1} \biggr)\Gamma
\biggl(\alpha
_2+\frac{{i} z}{\beta_2} \biggr) \biggr]^{-1},
\end{equation}
and the function $Q(z)$ is defined as $q^{-1}(q+\Psi(z))P(z)$.
Function $P(z)$ can be expanded in infinite product using Euler's
formula (see formula 6.1.3 in \cite{AbramowitzStegun}).
Using (\ref{def_psi_nu4}) and (\ref{def_PQ2}) and asymptotics for gamma
function we find that function $Q(z)$ has order equal
to one, and thus again we can use Hadamard's theorem to expand it as
infinite product,
and finally we use asymptotics for infinite products supplied by Lemma
\ref{lemmma_asymp_for_product} and asymptotics for $\zeta_n$ given by
(\ref{asympt_zetan_nu41}) and (\ref{asympt_zetan_nu42}) to prove the factorization
identity (\ref{big_factorization2}).
\end{pf*}
We can also derive a result similar to Lemma \ref{Lemma_explicit_Sm}
using the entire function $Q(z)$ defined by
(\ref{def_PQ2}). While there is no closed form expression for
derivatives
of gamma function, they can be easily computed numerically. Our final
result in this section is the analogue of
Theorem \ref{WH_factors_nu3}, and the proof is identical.
\begin{theorem}\label{WH_factors_nu4} For $q>0$
\begin{eqnarray*}
\phi_q^{-}(z)&=&
\frac{1}{1+{{i} z}/{\zeta_0^{+}}}
\prod _{n \ge1} \frac{1+{{i} z}/({\beta_2(n-1+\alpha
_2)})}{1+{{i} z}/{\zeta_n}},\\
\phi_q^{+}(z)&=&
\frac{1}{1+{{i} z}/{\zeta_0^{-}}}
\prod _{n \le-1} \frac{1+{{i} z}/({\beta_1(n+1-\alpha
_1)})}{1+{{i} z}/{\zeta_n}}.
\end{eqnarray*}
Infinite products converge uniformly on compact subsets of ${\mathbb
C}\setminus{i}{\mathbb R}$.
The density of $S_{\tau}$ is given by
\[
\frac{{d}}{{d} x} {\mathbb P}(S_{\tau}\le x)=- c_0^{-}\zeta
_0^{-
}e^{\zeta_0^{-}x}-
\sum _{k \le-1} c_k^{-} \zeta_{k} e^{\zeta_{k} x},
\]
where
\begin{eqnarray*}
c_0^{-}&=&
\prod _{n \le-1} \frac{1-{\zeta_0^{-}}/({\beta
_1(n+1-\alpha_1)})}{1-{\zeta_0^{-}}/{\zeta_n}},\\
c_k^{-}&=&\frac{ 1-{\zeta_{k}}/({\beta_1(k+1-\alpha
_1)})}{1-
{\zeta_{k}}/{\zeta_0^{-}}}
\prod _{n \le-1, n\ne k} \frac{1-{\zeta_{k}}/({\beta
_1(n+1-\alpha_1)})}{1-{\zeta_{k}}/{\zeta_n}}.
\end{eqnarray*}
\end{theorem}
\begin{remark}
In the case $\sigma=0$, $\lambda_i<2$, and $\rho>0$ we have a process of
bounded variation and negative drift, and thus the distribution of
$S_{\tau}$ will have an atom at zero, which can be computed using the
following formula:
\begin{eqnarray*}
{\mathbb P}(S_{\tau}=0)&=&\lim _{z\to+\infty} {\mathbb E}
[ e^{-z
S_{\tau}}
]=\lim _{z\to+\infty} \phi_q^{+}({i} z)\\
&=&\frac
{-\zeta
_0^{-}}{\alpha_1 \beta_1} \prod _{n\le-1} \frac{\zeta
_n}{\beta_1(n-\alpha_1)}.
\end{eqnarray*}
Using asymptotic relation (\ref{asympt_zetan_nu42}) one can see that
the above infinite product converges to
a number between zero and one.
\end{remark}
\section{Implementation and numerical results}\label{section_implementation}
In this section we discuss implementation details for computing the
probability density function of $S_{\tau(q)}$ and $S_t$.
In order to illustrate the main ideas we will use the process $X_t$ defined
in Section~\ref{section_results_nu3}; however, the implementation for a
general $X_t$ from the $\beta$-family would be quite
similar.
Our main tools are Theorem \ref{WH_factors_nu3} and asymptotic
expansion for $\zeta_n$ given in Theorem \ref{thm_nu3_1}.
First let us discuss the computation of density of $S_{\tau}$. The
first step would be to compute solutions to
equation $q+\Psi({i}\zeta)=0$, and for $q$ real this is a simple task:
for $n$ large we use Newton's method which is started from the
point
given by asymptotic expansion (\ref{eq_asympt_roots1_1}) or (\ref
{eq_asympt_roots1_2}). To compute $\zeta_0^{\pm}$ or $\zeta
_n$ with
$n$ small we use localization result (\ref{eq_localization}) and the
secant (or bisection) method to get the starting point for Newton's
iteration. Overall this part of the algorithm is very computationally
efficient and can be made even faster if
we compute different $\zeta_n$ in parallel.
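A minimal Python sketch of this root-finding step is given below; it is ours
and only schematic, not the implementation used for the computations in this
paper. The callable \texttt{F} is assumed to implement
$\zeta\mapsto q+\Psi({i}\zeta)$, the Newton starting points are assumed to
come from the asymptotic expansions, and the bisection routine assumes an
interval supplied by the localization result.
\begin{verbatim}
def newton(F, z0, h=1e-6, tol=1e-12, maxit=50):
    # Refine a root of F starting from z0; F' is approximated by a
    # central difference, which suffices to polish a good initial guess.
    z = complex(z0)
    for _ in range(maxit):
        dF = (F(z + h) - F(z - h)) / (2 * h)
        step = F(z) / dF
        z -= step
        if abs(step) < tol:
            break
    return z

def bisection(F, a, b, tol=1e-8):
    # Bisection on a real interval (a, b) known to contain exactly one
    # simple real root; used to start Newton's method for small n.
    fa = F(a).real
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * F(m).real <= 0:
            b = m
        else:
            a, fa = m, F(m).real
    return 0.5 * (a + b)

# sanity check on a function with a known root
print(newton(lambda z: z * z - 2.0, 1.0))   # approx 1.41421356...
\end{verbatim}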
The second step is to compute the coefficients $c_k^{-}$, which are
given by (\ref{eq_ck_minus}). Each term in the infinite product is
$1+O(n^{-2})$; however, as we show in Proposition \ref{prop_conv_accel},
we can considerably improve
convergence by using our knowledge of the asymptotic expansion for
$\zeta_n$.
The final step is to compute the density of $S_{\tau}$ using
formula~(\ref{density_of_Mtauq}). Note that the series converges exponentially
for $x>0$. When $x$ is small the convergence is slow, and the
asymptotic behavior as $x\to0^+$
would depend on the decay rate of the coefficients $c_k^{-}$; however,
we were unable to prove any results in this direction.
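Before stating the acceleration result, we record a naive Python sketch (ours,
for illustration only) of these two steps: it simply truncates the products
(\ref{eq_ck_minus}) and the series (\ref{density_of_Mtauq}) after $N$ terms,
without the convergence acceleration of Proposition \ref{prop_conv_accel} below.
\begin{verbatim}
import numpy as np

def coefficients(zeta0m, zeta_neg, alpha):
    # zeta0m = zeta_0^-, zeta_neg[j] = zeta_{-(j+1)} for j = 0, ..., N-1
    zeta_neg = np.asarray(zeta_neg, dtype=float)
    N = len(zeta_neg)
    ns = -np.arange(1, N + 1)                  # n = -1, -2, ..., -N
    c0 = np.prod((1 - zeta0m / (ns + alpha)) / (1 - zeta0m / zeta_neg))
    cs = np.empty(N)
    for j in range(N):
        zk, k = zeta_neg[j], ns[j]
        mask = ns != k
        prod = np.prod((1 - zk / (ns[mask] + alpha))
                       / (1 - zk / zeta_neg[mask]))
        cs[j] = (1 - zk / (k + alpha)) / (1 - zk / zeta0m) * prod
    return c0, cs

def density_S_tau(x, zeta0m, zeta_neg, alpha):
    # truncated version of formula (density_of_Mtauq)
    c0, cs = coefficients(zeta0m, zeta_neg, alpha)
    zs = np.asarray(zeta_neg, dtype=float)
    return -c0 * zeta0m * np.exp(zeta0m * x) - np.sum(cs * zs * np.exp(zs * x))
\end{verbatim}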
\begin{proposition}\label{prop_conv_accel}
Assume that $\zeta_n=n+\beta+\frac{A_1}{(n+\beta)}+\frac
{A_2}{(n+\beta
)^2}+O(n^{-3})$ as $n\to+\infty$.
Then as $N\to+\infty$ we have
\begin{eqnarray}\label{eq_conv_imprv}
&&
\prod _{n\ge N} \frac{1+{z}/({n+\alpha})}{1+
{z}/{\zeta_n}}\nonumber\\
&&\qquad=
\frac{\Gamma(N+\alpha)\Gamma(N+\beta+z)}{\Gamma(N+\beta)\Gamma
(N+\alpha+z)}
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{}\times
\exp \bigl[A_1 \bigl(f_{1,1}(\beta,\beta;N)-f_{1,1}(z+\beta,\beta
;N) \bigr)
\nonumber\\
&&\qquad\quad\hspace*{30.3pt}{} + A_2 \bigl(f_{1,2}(\beta,\beta;N)-f_{1,2}(z+\beta
,\beta ;N) \bigr)+O(N^{-3})
\bigr],\nonumber
\end{eqnarray}
where $f_{\alpha_1,\alpha_2}(z_1,z_2;N)$ can be computed as
follows:
\begin{eqnarray}\label{asympt_faazzN}
&&
f_{\alpha_1,\alpha_2}(z_1,z_2;N)\nonumber\\
&&\qquad=\sum _{k\ge0}
\frac{{-\alpha_2\choose k}(z_2-z_1)^k}{\alpha_1+\alpha
_2+k-1}(z_1+N)^{1-\alpha_1-\alpha_2-k}\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{}+(z_1+N)^{-\alpha_1}
(z_2+N)^{-\alpha_2} \biggl[\frac12+\frac{\alpha_1}{12(z_1+N)}+\frac
{\alpha_2}{12(z_2+N)}\biggr]
\nonumber\\
&&\qquad\quad{}+O(N^{-\alpha_1-\alpha_2-3}).\nonumber
\end{eqnarray}
\end{proposition}
\begin{pf}
First we define for $\alpha_1+\alpha_2>1$
\[
f_{\alpha_1,\alpha_2}(z_1,z_2;N)=\sum _{ n \ge N}
(n+z_1)^{-\alpha
_1} (n+z_2)^{-\alpha_2}.
\]
The proof of the asymptotic expansion (\ref{asympt_faazzN}) is based on
the Euler--Maclaurin formula
\[
\sum _{n\ge N} f(n)=\int_N^{\infty} f(x)\,{d} x + \frac
{f(N)}2-\frac{f'(N)}{12}+O\bigl(f^{(3)}(N)\bigr),
\]
where we take $f(x)=(x+z_1)^{-\alpha_1} (x+z_2)^{-\alpha_2}$. To obtain
(\ref{asympt_faazzN}) we compute the integral by changing variables
$y=(x+z_1)^{-1}$, expanding the resulting integrand in a Taylor series
at $y=0$ and integrating term by term.
In order to obtain formula (\ref{eq_conv_imprv}) we follow the steps of
the proof of Lemma \ref{lemmma_asymp_for_product}
\begin{eqnarray*}
\prod _{n\ge N} \frac{1+{z}/({n+\alpha})}{1+
{z}/{\zeta_n}}&=&
\prod _{n\ge N} \frac{1+{z}/({n+\alpha})}{1+
{z}/({n+\beta})}
\prod _{n\ge N} \frac{1+{z}/({n+\beta})}
{1+{z}/{\zeta_n}}\\
&=&\frac{\Gamma(N+\alpha)\Gamma(N+\beta+z)}{\Gamma(N+\beta
)\Gamma
(N+\alpha+z)} \prod _{n\ge N} \frac{1+{z}/({n+\beta})}
{1+{z}/{\zeta_n}}.
\end{eqnarray*}
Next, we approximate $\ln(1+\omega)=\omega+O(\omega^2)$ as $\omega
\to
0$ and obtain
\begin{eqnarray*}
&&
\sum _{n\ge N} \ln \biggl(\frac{1+{z}/({n+\beta})}
{1+{z}/{\zeta_n}} \biggr)\\
&&\qquad=\sum _{n\ge N} \ln \biggl(\frac
{\zeta
_n}{n+\beta} \biggr)-
\sum _{n\ge N} \ln \biggl(\frac{z+\zeta_n}
{z+n+\beta} \biggr)\\
&&\qquad= \sum _{n\ge N} \biggl(\frac{A_1}{(n+\beta)^2}+\frac
{A_2}{(n+\beta)^3}+O(n^{-4}) \biggr) \\
&&\qquad\quad{}-
\sum _{n\ge N} \biggl(\frac{A_1}{(z+n+\beta)(n+\beta)}+\frac
{A_2}{(z+n+\beta)(n+\beta)^2}+O(n^{-4}) \biggr),
\end{eqnarray*}
which completes the proof.
\end{pf}
Computing the density $p_t(x)=\frac{{d}}{{d} x}{\mathbb P}(S_t\le x)$
of $S_t$ at
a deterministic time $t > 0$ requires more work.
Our starting point is the fact that the density
of $S_{\tau}$ [which we denote by $p^S(q,x)$] is the Laplace transform
of $q \times p_t(x)$:
\begin{eqnarray*}
p^S(q,x)&=&\frac{{d}}{{d} x} {\mathbb P}\bigl(S_{\tau(q)} \le x\bigr)= \frac{{d}
}{{d} x}
\int_0^{\infty} q e^{-qt} {\mathbb P}(S_t\le x)\,{d} t\\
&=& q\int_0^{\infty} e^{-qt} p_t(x) \,{d} t.
\end{eqnarray*}
Thus $p_t(x)$ can be recovered as the following cosine transform:
\[
p_t(x)=
\frac{2}{\pi}e^{q_0t}\int_0^{\infty} \operatorname
{Re} \biggl[ \frac
{p^S(q_0+{i}
u,x)}{q_0+{i} u} \biggr] \cos(tu)\,{d} u, \qquad q_0>0.
\]
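Once $p^S(q,x)$ is available along such a contour (the roots are tracked in
the complex $q$-plane as described below), the integral can be approximated
by truncating it at some $u_{\max}$ and applying standard quadrature. The
following Python sketch is ours and purely illustrative; \texttt{pS} is an
assumed callable returning $p^S(q,x)$ for complex $q$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p_t(x, t, pS, q0=1.0, u_max=200.0):
    # truncated cosine-transform inversion of the Laplace transform
    integrand = lambda u: (np.real(pS(q0 + 1j * u, x) / (q0 + 1j * u))
                           * np.cos(t * u))
    val, _ = quad(integrand, 0.0, u_max, limit=500)
    return 2.0 / np.pi * np.exp(q0 * t) * val
\end{verbatim}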
We see that to compute this Fourier integral numerically we
need to be able to compute $p^S(q,x)$ for $q$ lying in some interval
$q\in[q_0,q_0+{i} u_0]$ in the complex plane. The main problem is that
we need to solve many equations (\ref{eq_Psiq0}) with complex $q$.
While the asymptotic expansions for $\zeta_n$ presented in Theorem
\ref{thm_nu3_1}
are still true, we do not have any localization results in the complex plane.
It is certainly possible to compute the roots using the argument
principle, and originally all the computations were done by the author
using this method. However as we will see, there is a much more
efficient algorithm.
We need to compute the solutions of equation $q+\Psi({i}\zeta)=0$ for
all $q\in[q_0,q_0+{i} u_0]$.
First we compute the initial values: the roots $\zeta_0^{\pm}$,
$\zeta_n$ for real value of $q=q_0$ using the method discussed above.
Next we consider each root as an implicit function of $u$: $\zeta_n(u)$
is defined by
\[
\Psi({i}\zeta_n(u))+(q_0+{i} u)=0, \qquad \zeta_n(0)=\zeta_n.
\]
Using implicit differentiation we obtain the first-order differential equation
\[
\frac{{d}\zeta_n(u)}{{d} u}=-\frac{1}{\Psi'({i}\zeta_n(u))}
\]
with initial condition $\zeta_n(0)=\zeta_n$. We compute the solution to
this ODE using a numerical scheme, for example, an adaptive Runge--Kutta
method, and at each step we correct the solution by applying several
iterations of Newton's method. Again, for different $n$ we can compute
$\zeta_n(u)$ in parallel.
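The following Python sketch (ours, and only schematic) implements one version
of this predictor--corrector procedure: a fixed-step RK4 step for the ODE
followed by a few Newton iterations at each value of $u$. The callable
\texttt{psi} implementing $\Psi$ is assumed to be given, and its derivative is
approximated by central differences.
\begin{verbatim}
import numpy as np

def dpsi(psi, w, h=1e-6):
    # numerical derivative Psi'(w)
    return (psi(w + h) - psi(w - h)) / (2 * h)

def track_root(psi, zeta_start, q0, u_max, n_steps=400):
    # Follow zeta_n(u), the solution of Psi(i*zeta) + (q0 + i*u) = 0,
    # from the real root zeta_start at u = 0 up to u = u_max.
    us = np.linspace(0.0, u_max, n_steps + 1)
    du = us[1] - us[0]
    rhs = lambda z: -1.0 / dpsi(psi, 1j * z)     # d zeta / du
    z = complex(zeta_start)
    path = [z]
    for u in us[:-1]:
        # RK4 predictor (the right-hand side has no explicit u-dependence)
        k1 = rhs(z); k2 = rhs(z + 0.5 * du * k1)
        k3 = rhs(z + 0.5 * du * k2); k4 = rhs(z + du * k3)
        z = z + du / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        # Newton corrector on F(z) = (q0 + i(u+du)) + Psi(i z)
        q = q0 + 1j * (u + du)
        for _ in range(3):
            z -= (q + psi(1j * z)) / (1j * dpsi(psi, 1j * z))
        path.append(z)
    return us, np.array(path)
\end{verbatim}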
\begin{figure}
\caption{The values of $\zeta_0^{\pm}$ and $\zeta_n$ computed for $q\in[1,1+200{i}]$.}
\label{fig_plot_cmplx}
\end{figure}
Figure \ref{fig_plot_cmplx} shows the result of this procedure. We have
used the following values of parameters: $\sigma=1$, $\mu=-0.1$ and
$\alpha=0.25$
and computed zeros $\zeta_0^{\pm}$ and $\zeta_n$ for $q\in
[1,1+200{i}]$. The graph shows interesting
qualitative behavior: all zeros except $\zeta_0^{+}$ converge to
the closest pole of
$\Psi({i}\zeta)$ at $\alpha+n$, while $\zeta_0^{+}$ has no
pole nearby
[since $\Psi({i}\zeta)$ is regular at $\zeta=\alpha$] and it converges
to $\infty$ while always
staying in ${\mathbb C}^{+}$. If $\alpha<0$ the situation is exactly the
same, except that now $\zeta_0^{-}$ escapes to $\infty$
while always staying
in ${\mathbb C}^{-}$. We have repeated this procedure for many different
values of parameters, and from this numerical evidence
we can make some observations/conjectures.
It appears that the roots never collide, which means that the equation
$q+\Psi(z)=0$ has no roots of higher multiplicity for any $q\in{\mathbb C}$.
It also seems that the roots never cross the real line. All these
observations are based on numerical evidence, and we did not pursue
them any further to obtain rigorous proofs. However there is one fact
that we can prove rigorously: no new, unaccounted-for zeros appear. This can be proved
by an argument that we have used in the proof of Theorem
\ref{thm_nu3_1} to show that there are no extra zeros.
The results of our computations are presented in Figure \ref{fig_surf}.
The parameters are $\sigma=1$, $\mu=-0.1$ and $\alpha=0.25$;
the surface $p^S(q,x)$ is on the left and $p_t(x)$ on the right.
\section{Conclusion}\label{section_conclusion}
In this paper we have introduced a ten-parameter family of L\'evy
processes characterized by the fact that the
characteristic exponent is
a meromorphic function expressed in terms of beta and digamma functions.
This family is quite rich, and, in particular, it includes processes
with the complete range of
behavior of small positive/negative jumps. We have presented results of
the Wiener--Hopf factorization
for these processes, including semi-explicit formulas for Wiener--Hopf
factors and the density of the supremum process $S_{\tau}$.
\begin{figure}
\caption{Surface plot of $p^S(q,x)$ (left) and $p_t(x)$
(right).}
\label{fig_surf}
\end{figure}
These L\'evy processes might be used for modeling purposes whenever one
needs to compute distributions related to such functionals
as the first passage time, overshoot, extrema, last time before
achieving extrema, etc.
Some possible applications in Mathematical Finance and Insurance
Mathematics include pricing barrier, lookback and perpetual American options,
building structural models with jumps in Credit Risk, computing ruin
probabilities, etc.
Finally, we would like to mention that one can use the methods
presented in this paper, and, in particular, the technique to
solve $q+\Psi(z)=0$ for complex values of $q$ discussed in Section
\ref{section_implementation}, to compute
Wiener--Hopf factors arising from a more general factorization identity
(see Theorem 6.16 in \cite{Kyprianou})
\[
\frac{q}{q-{i} w+\Psi(z)}=\Psi_q^{+}(w,z)\Psi_q^{-}(w,z),
\qquad w,z \in{\mathbb R},
\]
where
\[
\Psi_q^{+}(w,z)=E [e^{{i} w \overline G_{\tau}+{i} zS_{\tau
}} ],\qquad
\Psi_q^{-}(w,z)=E [e^{{i} w \underline{G}_{\tau}+{i}
zI_{\tau
}} ],
\]
and $\overline G_t$ ($\underline G_t$) are defined as the last time
before $t$ when maximum (minimum) was achieved
\[
\overline G_t=\sup\{0\le u \le t : X_u=S_u \}, \qquad \underline
G_t=\sup\{0\le u \le t : X_u=I_u \}.
\]
\section*{Acknowledgment}
The author would like to thank Andreas Kyprianou and Geert Van Damme
for many detailed comments and constructive suggestions.
\printaddresses
\end{document}
\begin{document}
\begin{sffamily}
\maketitle
\begin{abstract} We give two variations on a result of Wilkie's (\cite{Wilkie}) on unary functions definable in $\R_{\text{an},\exp}$ that take integer values at positive integers. Provided that the function grows slower (in a suitable sense) than the function $2^x$, Wilkie showed that it must be eventually equal to a polynomial. We show the same conclusion under a stronger growth condition but only assuming that the function takes values sufficiently close to integers at positive integers. In a different variation we show that it suffices to assume that the function takes integer values on a sufficiently dense subset of the positive integers (for instance the primes), again under a stronger growth bound than that in Wilkie's result.
\end{abstract}
\section{Introduction}
In \cite{JTW}, Thomas, Wilkie and the first author studied functions $f:(0,\infty)\to\R$ definable in certain o-minimal structures under the assumption that $f$ is integer-valued, that is $f(n)\in \Z$ for all positive integer $n$. They showed that, under a rather strong growth bound, such a function must be a polynomial. This gives a weak real analogue of a classical theorem of Polya on integer-valued entire functions. Wilkie \cite{Wilkie} substantially improved the one-variable result of \cite{JTW}, proving that such an $f$ must be a polynomial provided that it satisfies a growth bound that is close to optimal. Wilkie's result also applies to a larger o-minimal structure than those considered in \cite{JTW}.
Here we consider a similar problem in which the function $f$ is no longer supposed integer-valued, but only assumed to be such that $f(n)$ is close to an integer for positive integers $n$. Throughout this paper, by definable, we mean definable in the structure $\R_{\text{an},\exp}$. This structure is o-minimal by work of van den Dries and Miller \cite{vdDMiller}. We prove the following.
\begin{thm}\label{close_to_integers}
There is a $\delta>0$ with the following property. Suppose that $f:[0,\infty)\to \R$ is definable and analytic, and that there exists $c_0>0$ such that for all positive integers $n$ there is an integer $m_n$ such that
\[
\abs{f(n)-m_n}<c_0 e^{-3n}.
\]
If there are $c_1>0$ and $\delta'<\delta$ such that $|f(x)|<c_1e^{\delta' x}$ then there is a polynomial $Q$ such that
\[
Q(n)=m_n
\]
for all sufficiently large $n$.
\end{thm}
We also prove a result in which our function $f$ is only assumed to be integer-valued on some sufficiently dense subset of the nonnegative integers. For this we fix $\A\subseteq \N$ such that there is a positive real $\lambda$ such that for all sufficiently large $T$ we have
\[
\frac{T}{(\log T)^\lambda} \ll \# \A \cap [0,T] \ll \frac{T}{(\log T)^\lambda}.
\]
With such an $\A$ fixed, we prove the following.
\begin{thm} \label{Primes} Suppose that $f:[0,\infty)\to \R$ is definable and analytic, and such that $f(n)$ is an integer for $n\in\A$. If there exist $\alpha>0$ and $c_1>0$ such that
\[
|f(x)|<c_1 \exp \left(\frac{x}{(\log x)^{2\lambda+2+\alpha}}\right)
\]
then $f$ is a polynomial.
\end{thm}
So for instance, if $|f(x)|<c \exp\left(\frac{x}{(\log x)^5}\right)$ and $f$ is integer-valued on the primes, then $f$ is a polynomial.
Before discussing the proofs of these results, we briefly discuss Wilkie's proof. Wilkie first shows that definable unary functions whose growth is at most exponential can be approximated by a function which admits an analytic continuation (as a complex function) to a half-plane. The diophantine part of the proof then follows Polya's method, as adapted by Langley \cite{Langley} to functions on a half-plane. This seems to need the function to take integer values at all positive integers, and doesn't seem to adapt to a set $\A$ as in the second theorem above. And it is difficult to see how this method could be used to prove Theorem \ref{close_to_integers}. The method of \cite{JTW}, which relies on a counting theorem in \cite{JT}, gives nothing in the context of Theorem \ref{close_to_integers}. Instead, we adapt Waldschmidt's proof, via transcendence methods, of a weak form of Polya's Theorem \cite{Waldschmidt}. See also Chapter 9 of Masser's recent book \cite{Masser}. Waldschmidt's proof was adapted by Hirata \cite{Hirata} to show that an entire $f$ that is exponentially close to integers at positive integers, and doesn't grow too quickly, must be a polynomial. And a more precise version of this result was given by Ito and Hirata-Kohno \cite{Ito_HirataKohno}. This fails in the o-minimal setting (consider say $e^{- a x}$ for large $a$, or even worse $\exp(- \exp x)$). But the method, applied not to the function itself but to an approximating function with a large continuation, provided by Wilkie's work, goes through and gives Theorem \ref{close_to_integers}. The second Theorem is proved by similar methods, and further exploits o-minimality, in that to show that an analytic function definable in an o-minimal structure is identically zero, it is enough to show that it has infinitely many zeros. In the setting of our second Theorem, the method of \cite{JTW}, applying the counting theorem for curves in \cite{JT}, would give that $f$ is a polynomial provided that it is definable in the structure $\R_{\exp}$ and satisfies the much stronger growth bound $|f(x)|<e^{x^\epsilon}$, eventually, for all positive $\epsilon$.
It will be clear from the proofs that a similar result can be obtained, for instance, assuming that $f$ is close to integers (in the sense of Theorem \ref{close_to_integers}) on a sequence $\A$ as in Theorem \ref{Primes}. Or we could suppose that the sequence $\A$ in Theorem \ref{Primes} had more points, and get a corresponding relaxation in the growth bound. Various results of this nature will appear in the thesis of the second author. Finally, the main point of \cite{JTW} was to consider functions of several variables that take integer values at tuples with integer coordinates. Again, considerations of this kind will appear in the thesis of the second author.
\section{Preliminaries}
We begin with various estimates that we shall use repeatedly. First, a well-known estimate for binomial coefficients (see for instance (9.9) on page 104 of \cite{Masser}).
\begin{lemma}\label{binomial}
Suppose that $z\in \mathbb{C}$ and that $i$ is a nonnegative integer with $i\le L$. Then
\[
\abs{\binom{z}{i} }\le e^L\left( \frac{\abs{z}+L}{L}\right)^L.
\]
\end{lemma}
The following lemma is easy.
\begin{lemma}\label{polydiff}
Suppose that $P=\sum_{i=0}^L\sum_{j=0}^M p_{i,j}\binom{X}{i}Y^j$ is a polynomial with complex coefficients. Then for $a>0 $ and $b,b'\ge 1$ we have
\[
\abs{P(a,b)-P(a,b')} \le (L+1)(M+1)^2\max \{ |p_{i,j}|\} \max \left\{ \abs{\binom ai }: i \le L \right\} b^Mb'^M|b-b'|.
\]
\end{lemma}
We will use the following estimate which is a special case of Lemma 3 of \cite{Tijdeman}.
\begin{lemma} Suppose that $a_1,\ldots,a_N$ are pairwise distinct integers. Then
\[
\prod_{i=2}^N\abs{a_1-a_i} \ge \frac{(N-1)!}{2^{N-1}}.
\]
\end{lemma}
In place of the usual formula for the number of zeros of a complex function, we use the following, a special case of Lemma 6 of \cite{Ito_HirataKohno}.
\begin{propn}\label{jensen} Suppose that $\phi$ is analytic on $|z|\le R$, and that $a_1,\ldots, a_N$ are nonzero complex numbers of modulus less than $R$. Then
\begin{equation}\label{jensenestimate}
|\phi(0)| \le |\phi|_R \prod_{i=1}^N \frac{ |a_i|}{R} + \sum_{i=1}^N \left( |\phi(a_i)| \prod_{j=1}^N \frac{ |R^2 - a_i\overline{a_j}|}{R^2}\prod_{k=1,k\ne i}^N \frac{|a_k|}{|a_k-a_i|}\right).
\end{equation}
\end{propn}
\section{Functions with values close to integers}
For our proofs we will use Wilkie's results on approximate continuations. The result we need in this section is as follows.
\begin{thm}[Wilkie] Suppose that $f:\R \to \R$ is definable and suppose that there exist $c_1>0$ and $\delta>0$ such that $|f(x)|<c_1 e^{\delta x}$ for all large $x$. Given $D>0$ and $\epsilon>0$ there is an $a \in \R$ and an analytic $g: \{ z : \text{Re}(z) >a\} \to \mathbb{C} $ such that
\begin{enumerate}
\item $|g(x) - f(x) | < e^{ -D x}$ for all $x >a$,
\item there exists $c_2>0$ such that $|g(z)| <c_2 e^{(\delta+\epsilon) |z|}$ for all $z$ such that $\text{Re}(z) >a$.
\end{enumerate}
\end{thm}
\begin{proof} Apply Corollary 4.8 in \cite{Wilkie} to get definable functions $f_1,\ldots, f_l :(a,\infty)\to \R$, pairwise distinct reals $s_1,\ldots,s_l$ and a positive $a$ such that
\begin{equation}\label{close1}
\abs{ f(x) - \sum_{i=1}^l f_i(x) \exp(s_i x)} < \exp (- D x),
\end{equation}
for all $x>a$. Moreover, in the notation of \cite{Wilkie}, the functions $f_i$ are all in $\mathcal{R}_{\text{subexp}}$. It then follows from Theorem 4.2 in \cite{Wilkie} that, perhaps after increasing $a$, these functions continue to the half-plane $\{ z: \text{Re}(z) >a \}$. And by Lemma 5.3 in \cite{Wilkie}, increasing $a$ if necessary, these continuations are such that
\[
|f_i(z)| \le \exp (\epsilon |z|)
\]
for any $z$ in the half-plane with large modulus, and $i=1,\ldots, l$. By the growth condition on $f$ and \eqref{close1}, we have that $s_i \le \delta$ for each $i$. So we can take $g$ to be the continuation of $\sum_{i=1}^l f_i(x) \exp(s_i x)$.
\end{proof}
\begin{thm} There is a $\delta>0$ with the following property. Suppose that $f:[0,\infty)\to \R$ is definable and analytic, and that there exists $c_0>0$ such that for all positive integers $n$ there is an integer $m_n$ such that
\[
\abs{f(n)-m_n}<c_0 e^{-3n}.
\]
If there are $c_1>0$ and $\delta'<\delta$ such that $|f(x)|<c_1e^{\delta' x}$ then there is a polynomial $Q$ such that
\[
Q(n)=m_n
\]
for all sufficiently large $n$.
\end{thm}
\begin{proof}
Rather than the assumption in the statement, we start by assuming that the integers $m_n$ are such that
\[
\abs{f(n)-m_n}<c_0 e^{-Dn}
\]
for some positive $D$, and show later that the choice $D=3$ is sufficient. Note that we can assume without loss of generality that $f$ is positive. Suppose that $ f(x) \le c_1e^{\delta' x}$, for some positive $\delta'<\delta<1$, where we fix $\delta$ later. By Wilkie's Theorem above, there is a function $g$ analytic in some right half-plane, and such that $|g(x)-f(x)|<e^{-Dx}$ for large $x$. Moreover, we have $|g(z)|\le c_2 e^{\delta |z|}$. We will assume that $c_2\ge 2 c_1$. Translating, we can assume that the half-plane is $\{ z: \text{Re} z \ge 0\}$, and that for all $x\ge 0$ we have
\begin{equation}\label{fgclose}
|g(x)-f(x)| < c_3 e^{-D x}.
\end{equation}
We let $M,L$ be large integers, to be determined later. We suppose that $M+1<L$ and that $L$ is odd. Set $T=(L+1)(M+1)$. We construct a nonzero polynomial
\[
P(X,Y)= \sum_{i=0}^L\sum_{j=0}^M p_{i,j} \binom{X}{i} Y^j
\]
such that
\begin{equation}\label{constraints}
P(n,m_n)=0
\end{equation}
for $n\in [T/2,T)\cap \Z$. So we have $T$ unknowns and $T/2$ equations. To apply Siegel's lemma, we first compute an upper bound on
\begin{displaymath}
\abs{\binom{n}{i}m_n^j}
\end{displaymath}
for $n\in [T/2, T]\cap \Z, i\le L$ and $j\le M$. By Lemma \ref{binomial} we have
\[
\abs{\binom{n}{i}} \le e^L(M+4)^L.
\]
And
\[
m_n^j \le (2f(n))^j \le c_2^M e^{\delta M T}.
\]
So Siegel's Lemma (see \cite[8.3]{Masser}, for example) gives solutions $p_{i,j}$ to \eqref{constraints}, not all zero, with
\begin{equation}\label{normP}
\abs{p_{i,j}} \le (L+1)(M+1) e^L(M+4)^L c_2^M e^{\delta MT}.
\end{equation}
We now show that $P(X,Y)$ is not too big at either $(n,f(n))$ or $(n,g(n))$. For $n\in [T/2,T]\cap \Z$ we have, by assumption, $|f(n)-m_n |\le c_0 e^{-DT/2}$. So by Lemma \ref{polydiff}, we have, for $n$ in the same range
\[
|P(n,f(n))-P(n,m_n)| \le T^3 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}.
\]
In particular, the right hand side here is an upper bound for $|P(n,f(n))|$ for $n\in [T/2,T)\cap \Z$. Similarly, for $n \in [T/2, T]\cap \Z$ we have
\[
|P(n,f(n))-P(n,g(n))| \le T^3 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}.
\]
Combining these, we have
\begin{equation}\label{tobeat}
|P(T,m_{T})| \le |P(T,g(T))|+ 2 T^3 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}.
\end{equation}
And for $n\in [T/2,T)\cap \Z$ we have
\begin{equation}\label{supgn}
|P(n,g(n))| \le 2 T^3 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}.
\end{equation}
With the aim of showing that $P(T ,m_{T })=0$, we now consider the function $\phi(z)=P(z+T ,g(z+T ))$, analytic on $|z|\le T $ and show, using Proposition \ref{jensen}, that it is small at the origin. For $|z|\le 2T $ and $i\le L$ we have
\[
\abs{\binom{z}{i}} \le e^{2L}(M+4)^L
\]
(using Lemma \ref{binomial}). So, from the definition of $\phi$, the bounds \eqref{normP} and the growth bound in Wilkie's Theorem we have
\begin{equation}\label{supphi}
|\phi|_T\le T^2 c_2^{2M}e^{3L}(M+4)^{2L} e^{3\delta M T}.
\end{equation}
Let $\A_T=[T/2,T)\cap \Z$, and for $n\in \A_T$ let $a_n=n-T $. Then $|a_n|\le T/2$ and
\begin{eqnarray*}
|\phi|_T \prod_{n \in \A_T} \frac{|a_n|}{T} & \le |\phi|_T \left(\frac{1}{2}\right)^{\frac{T}{2}}.\\
\end{eqnarray*}
So by \eqref{supphi} we have
\[
|\phi|_T \prod_{n \in \A_T} \frac{|a_n|}{T} \le T^2 c_2^{2M} \left( \frac{e^{3}(M+4)^2}{2^{\frac{M+1}{2}}}\right)^{L+1} e^{3\delta M T}
\]
We now consider the second summand in the estimate \eqref{jensenestimate} for $|\phi(0)|$. First, we have
\[
\prod_{k \in \A_T\setminus \{ n\}} \frac{1}{|a_k-a_n|} \le \frac{2^{\frac{T}{2}-1}}{\left( \frac{T}{2}-1\right)!}
\]
for any $n\in \A_T$. And so
\[
\prod_{k \in \A_T\setminus \{ n\}} \frac{|a_k|}{|a_k-a_n|} \le \frac{ \left( \frac{T}{2}\right)^{\frac{T}{2}-1} 2^{\frac{T}{2}-1}}{\left( \frac{T}{2}-1\right)!}
\]
Estimating the factorial and simplifying, this is at most
\[
c_4(2e)^{\frac{T}{2}-1}
\]
for some positive constant $c_4$. Since
\[
\abs{\frac{T^2-a_na_k}{T^2} }<1
\]
we have the following upper bound (using \eqref{supgn}) for the second summand in \eqref{jensenestimate}.
\[
2c_4 T^4 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}(2e)^{\frac{T}{2}-1}.
\]
So, by \eqref{tobeat}, and Proposition \ref{jensen} we have
\begin{align*}
|P(T ,m_{T })| \le& T^2 c_2^{2M} \left( \frac{e^{3}(M+4)^2}{2^{\frac{M+1}{2}}}\right)^{L+1}e^{3\delta M T} + c_4 T^4 c_2^{3M}e^{2L}(M+4)^{2L}e^{3\delta M T-DT/2}(2e)^{\frac{T}{2}} \\
\le & c_4T^4 c_2^{3M}\left( \frac{e^{3}(M+4)^2}{2^{\frac{M+1}{2}}}\right)^{L+1} e^{3\delta MT} \left( 1 + \exp \left( - \frac{DT}{2} + T\log 2 +\frac{T}{2}\right)\right)
\end{align*}
Now, if we fix $M$ sufficiently large then
\[
\left( \frac{e^{3}(M+4)^2}{2^{\frac{M+1}{2}}}\right)< \frac{1}{2}.
\]
We then take $\delta>0$ so small that
\[
e^{3\delta M(M+1)}<2.
\]
These choices ensure that
$$
\left( \frac{e^{3}(M+4)^2}{2^{\frac{M+1}{2}}}e^{3\delta M (M+1)}\right)^{L+1}
$$
decays exponentially as $L$ increases. Then taking
\[
D> 2\log 2 +1
\]
(e.g. $D=3$ as claimed in the statement of the Theorem, since $2\log 2+1\approx 2.39$) we will have
\[
|P(T ,m_{T }) |< 1
\]
provided that $L$ is large enough. As $P(T ,m_{T })$ is an integer, it must be zero.
Now inductively suppose that $P(n,m_n)=0$ for all $n \in [T/2,T')\cap \Z$, for some integer $T'>T$. We write $T'=(L+1)(M'+1)$ for some $M'\in \Q$. Using Lemma \ref{binomial}, if $|z|\le T' $ and $i\le L$ then
\begin{equation}\label{binomT'+1}
\abs{ \binom{z}{i} } \le e^L (2M'+2)^L.
\end{equation}
And if $|z| \le 2T' $ and $i \le L$ then
\begin{equation}\label{binom2T'+1}
\abs{ \binom{z}{i} } \le e^L (3M'+4)^L.
\end{equation}
Using \eqref{binomT'+1} and Lemma \ref{polydiff}, for $n \in [T'/2,T']$ we have
\begin{equation}
|P(n,f(n))-P(n,m_n)| \le T^3 c_2^{2M}e^{2L}(M+4)^L(2 M'+2)^L e^{3 \delta MT'-DT'/2}.
\end{equation}
Similarly, for $n$ in the same range,
\begin{equation}
|P(n,f(n)) - P(n,g(n))| \le T^3 c_2^{2M}e^{2L}(M+4)^L(2 M'+2)^L e^{3 \delta MT'-DT'/2}.
\end{equation}
These two inequalities hold in particular for $n \in [T'/2, T')\cap \Z$ and here we also have $P(n,m_n)=0$, so for $n$ in this range we get
\begin{equation}\label{modP(n(gn))}
|P(n,g(n))| \le 2 T^3 c_2^{2M}e^{2L}(M+4)^L(2 M'+2)^L e^{3 \delta MT'-DT'/2}.
\end{equation}
Finally, for $n=T' $ we have
\begin{equation}\label{stilltobeat}
|P(T',m_{T' })| \le |P(T' , g(T' ))| +2 T^3 c_2^{2M}e^{2L}(M+4)^L(2 M'+2)^L e^{3 \delta MT'-DT'/2}.
\end{equation}
As before, we now aim to show that $P(T',g(T'))$ is small, using Proposition \ref{jensen}, and thus show that $P(T' , m_{T' })=0$. To this end let
\[
\psi(z) = P(z+T', g(z+T' )),
\]
analytic on $|z|\le T' $. Let $\A_{T'}= [T'/2 , T')\cap \Z$ so that $ \frac{T'}{2}-1\le \# \A_{T'} \le \frac{T'}{2}$. And for $n\in \A_{T'} $ let $a_n= n-T'$, so that $|a_n| \le \frac{T'}{2}$. Using the definition of $\psi$, together with \eqref{normP} and \eqref{binom2T'+1} and the growth bounds on $g$, we have
\[
|\psi|_{T'} \le T^2 c_2^{2M}e^{2L}(M+4)^L(3M'+4)^L e^{3\delta MT'}.
\]
From this we see that
\begin{equation}\label{G1}
|\psi|_{T'} \prod_{n\in \A_{T'}} \frac{|a_n|}{T'}\le T^2 c_2^{2M} \left( \frac{e^2(M+4)(3M'+4)}{2^{\frac{M'+1}{2}}}\right)^{L+1} e^{3\delta MT'}.
\end{equation}
To estimate the second summand in \eqref{jensenestimate} we first note that as in the base case, for $n \in \A_{T'}$, we have
\[
\prod_{k \in \A_{T'} \setminus \{ n\}} \frac{|a_k|}{|a_k-a_n|} \le c_5 (2 e)^{\frac{T'}{2}-1}
\]
for some positive constant $c_5$. And as before, for $n,i \in \A_{T'}$ we have
\[
\abs{\frac{T'^2-a_ia_n}{T'^2}} <1.
\]
So, with \eqref{modP(n(gn))} we have
\begin{equation}\label{G2}
\begin{split}
\sum_{i\in \A_{T'}}&\left( |\psi(a_i)| \prod_{j\in \A_{T'}} \frac{ |T'^2 - a_ia_j|}{T'^2} \prod_{k\in \A_{T'},k\ne i} \frac{|a_k|}{|a_k-a_i|}\right) \le \\ & c_5T'T^3c_2^{2M}e^{2L}(M+4)^L(2M'+2)^Le^{3\delta MT'-D T'/2}(2e)^{T'/2 -1}.
\end{split}
\end{equation}
So by Proposition \ref{jensen} we get an upper bound for $|P(T',g(T' ))|$, given by the sum of the right hand sides of \eqref{G1} and \eqref{G2}, and then by \eqref{stilltobeat} we have
\[
\begin{split}
|P(T' , m_{T' })| \le c_5T' T^3c_2^{2M}&\left( \frac{e^2(M+4)(3M'+4)}{2^{\frac{M'+1}{2}}}\right)^{L+1} \cdot \\ & e^{3\delta M T' }\left(1+ e^{(-D +2 \log 2 +1)T'/2 }\right)
\end{split}
\]
With our earlier choices of $M, \delta$ and $D$, and $L$ fixed sufficiently large, this tends to $0$ as $M' \ge M$ tends to infinity. So $P(T' ,m_{T' })=0$.
Hence, by induction, we have $P(n,m_n)=0$ for all $n$.
Now, there are finitely many analytic algebraic functions $\theta_1,\ldots,\theta_k$ defined on some interval $(a,\infty)$ such that if $P(x,y)=0$ and $x>a$ then $y=\theta_i(x)$ for some $i$. We assume that the functions $\theta_i$ are distinct. For $n>a$ we have $m_n = \theta_i(n)$ for some $i$. We show that $i$ here is independent of $n$, perhaps after increasing $a$. Suppose on the contrary that $i,j\le k$ are not equal and such that there are infinitely many $n>a$ with $m_n=\theta_i(n)$ and infinitely many $n'>a$ with $m_{n'}=\theta_j(n')$. Since for all large $n$ we have
\[
|f(n)-m_n| < c_0 e^{-3 n}
\]
we have, by o-minimality,
\[
|\theta_i(x)-\theta_j(x)| < 2 c_0 e^{-3 x}
\]
for all large $x$. But $\theta_i$ and $\theta_j$ are algebraic, so this cannot happen unless they are equal. Hence there is some $i$ such that for all sufficiently large $n$ we have $m_n =\theta_i(n)$. Since $\theta_i$ is an algebraic function taking integer values at all large integers, $\theta_i$ must be a polynomial (for instance by the Theorem on page 131 of \cite{Serre}), and the proof is complete.
\end{proof}
\section{Functions taking integer values on a reasonably dense sequence.}
For this section we require another result on approximate continuations.
\begin{thm}[Wilkie] Suppose that $f:\R \to \R$ is definable and suppose that there exist positive $N,\alpha$ and $c_1$ such that $|f(x)|<c_1 \exp \left(\frac{x}{(\log x)^{N+\alpha}}\right)$ for all large $x$. Then there exist $\eta>0,a \in \R$ and an analytic $g: \{ z : \text{Re}(z) >a\} \to \mathbb{C} $ such that
\begin{enumerate}
\item $|g(x) - f(x) | < e^{ -\eta x}$ for all $x >a$,
\item there exists $c_2>0$ such that $|g(z)| <c_2 \exp \left(\frac{|z|}{(\log |z|)^N}\right)$ for all $z$ such that $\text{Re}(z) >a$.
\end{enumerate}
\end{thm}
\begin{proof} By Theorems 4.1 and 4.2 in \cite{Wilkie}, there exist $a\in \R$ and an analytic $g:\{ z : \text{Re}(z) >a\} \to \mathbb{C} $ such that $|g-f|$ is infinitesimal with respect to the valuation ring $\mathcal{F}_{\text{subexp}}$ (in the notation of \cite{Wilkie}). We may suppose that $|g-f|$ is positive. So $|g-f|$ is less than all elements of the decreasing sequence
\[
\exp\left(-\frac{x}{\log x}\right), \exp\left(-\frac{x}{\log \log x}\right),\ldots,\exp\left(-\frac{x}{\log\cdots \log x}\right),\ldots.
\]
It then follows from the fact that $\R_{\text{an},\exp}$ is exponentially bounded (see Proposition 9.2 of \cite{vdDMiller}) that there is a positive $\eta$ such that the first condition in the theorem holds.
It remains to check the growth condition. This follows using Theorem 4.2 of \cite{Wilkie}, following the argument of Lemma 5.3 of \cite{Wilkie} but with the $\phi$ there replaced by
$$
\frac{(\log x)^N \log g(x)}{x}.$$
The extra $\alpha$ in the growth bound ensures that this $\phi$ tends to $0$, so that Wilkie's argument works.
\end{proof}
With this in hand, we prove the second main result. Recall that we fix a set $\A$ of positive integers for which there exists a positive real $\lambda$ such that for sufficiently large $T$ we have
\[
\frac {T}{(\log T)^\lambda} \ll \# \A \cap [0,T] \ll \frac{T}{(\log T)^\lambda}.
\]
It follows that there exist positive reals $a,b$ and $\epsilon \in (0,1)$ such that
\[
a \frac {T}{(\log T)^\lambda} \le \# \A\cap [\epsilon T,T] \le b \frac {T}{(\log T)^\lambda}
\]
for large $T$.
The result is as follows.
\begin{thm} Suppose that $f:[0,\infty)\to \R$ is definable and analytic, and such that $f(n)$ is an integer for $n\in\A$. If there exist $\alpha>0$ and $c_1>0$ such that
\[
|f(x)|<c_1 \exp\left(\frac{x}{(\log x)^{2\lambda+2+\alpha}}\right)
\]
then $f$ is a polynomial.
\end{thm}
\begin{proof} To prove this we proceed as before and start by applying Wilkie's theorem above and translating to obtain a positive $\eta$ and a $g$, analytic on the right half-plane $\{ z : \text{Re}(z)\ge 0\}$, such that
\begin{equation}\label{primes:g_growth}
|g(z)| \le c_2 \exp \left(\frac{|z|}{(\log |z|)^{2\lambda+2}}\right)
\end{equation}
and
\begin{equation}\label{primes:g_close_to_f}
|f(x)-g(x)| \le c_3 e^{-\eta x}
\end{equation}
for $x\ge 0$, where $c_2$ and $c_3$ are positive and we assume $c_2\ge c_1$. Below we set $\delta(x)=\frac{x}{(\log x)^{2\lambda+2}}$.
We fix large integers $L$ and $M$ and aim to construct a nonzero polynomial
\[
P(X,Y)= \sum_{i=0}^L\sum_{j=0}^M p_{i,j} \binom{X}{i}Y^j
\]
with integer coefficients such that
\begin{equation}\label{primes:constraint}
P(n,f(n))=0
\end{equation}
for all $n\in \A\cap [\epsilon T,T]$, where $T=(L+1)(M+1)$. Aiming to use Siegel's Lemma to control the size of the coefficients, note that Lemma \ref{binomial} implies that if $k\ge 1$ and $|z|\le kT$ and $i\le L$ then
\[
\left| \binom{z}{i} \right| \le (ek)^L (M+4)^L.
\]
And for $j\le M, x\le kT$ we have
\[
|f(x)|^j\le c_1^M e^{M\delta(kT)}.
\]
Taking $k=1$ here, these estimates together with Siegel's Lemma show that there are integers $p_{i,j}$ not all zero such that \eqref{primes:constraint} holds for all $n\in\A \cap [\epsilon T,T]$ and such that
\begin{equation}\label{primes:coefficients_of_P}
|p_{i,j}| \le T e^L(M+4)^L c_1^M e^{M\delta (T)}.
\end{equation}
Combining this with Lemma \ref{polydiff}, and using \eqref{primes:g_close_to_f} we have
\[
|P(x,f(x))-P(x,g(x))| \le T^3 (ek)^{2L}(M+4)^{2L} c_2^{3M} e^{3M\delta(kT)} e^{-\eta\epsilon T}
\]
where $k\ge 1$ and $x\in [\epsilon T,kT]$. So for $n\in \A\cap [\epsilon T,T]$ we have
\begin{equation}\label{prime:P(n,g(n))}
|P(n,g(n))| \le T^3 e^{2L}(M+4)^{2L} c_2^{3M} e^{3M\delta(T)} e^{-\eta\epsilon T}
\end{equation}
and for $x\in (T,kT]$ we have
\begin{equation}\label{prime:to_beat}
|P(x,f(x))|\le |P(x,g(x))|+ T^3 (ek)^{2L}(M+4)^{2L} c_2^{3M} e^{3M\delta(kT)} e^{-\eta\epsilon T}.
\end{equation}
In order to apply Proposition \ref{jensen} we estimate $|P(z,g(z))|$ for $z$ in the closed right half-plane, with $|z|\le kT$. We have
\begin{equation}\label{prime:modphi}
|P(z,g(z))| \le T^2 (ek)^{2L} (M+4)^{2L} c_2^{2M} e^{2M\delta(kT)}.
\end{equation}
Now let $T_1=\min \{ n\in \A : n>T\}$, and fix $\ell$ such that $T<T_1\le \ell T$ (for instance we can take $\ell$ around $1/\epsilon$). Put $\phi(z)= P(z+T_1,g(z+T_1))$, analytic on $|z|\le T_1$. By \eqref{prime:modphi} with $k=2\ell$ we have
\begin{equation}\label{prime:modphiT_1}
|\phi|_{T_1} \le T^2 (2e\ell)^{2L} (M+4)^{2L} c_2^{2M} e^{2M\delta(2\ell T)}.
\end{equation}
And
\begin{eqnarray*}
\prod_{n\in\A \cap [\epsilon T,T]} \frac{ |n -T_1|}{T_1} &\le & \left( 1 -\epsilon \frac{T}{T_1}\right)^{ \left[ a\frac{T}{(\log T)^\lambda}\right]} \\
&\le & r^{ \frac{T}{(\log T)^\lambda}}
\end{eqnarray*}
where $r= (1-\epsilon^2)^{a/2}<1$.
Arguing as in the previous proof, for $n\in \A\cap [\epsilon T,T]$ we have
\[
\prod_{m\in\A\cap [\epsilon T,T]\setminus \{n\}} \left| \frac{m-T_1}{m-n}\right| \le \left(c \log T\right)^{ b \frac{T}{(\log T)^\lambda}},
\]
for some $c>1$ (depending on $\epsilon,\ell$ and $a$ but not $T$ or $n$).
And as in the previous proof, for $n\in \A\cap [\epsilon T,T]$ we have
\[
\prod_{m\in \A\cap [\epsilon T,T]} \abs{\frac{T_1^2-(m-T_1)(n-T_1)}{T_1^2}} < 1.
\]
Applying Proposition \ref{jensen} and using (\ref{prime:modphiT_1}),(\ref{prime:P(n,g(n))}), and (\ref{prime:to_beat}) and the previous three inequalities, we have
\[
\begin{split}
|P(T_1,f(T_1))| & \le T^4 (2e\ell)^{2L}(M+4)^{2L}c_2^{3M}e^{3M\delta(2\ell T)}r^{\frac{T}{(\log T)^\lambda}}\\
&\cdot\left( 1+ \exp\left( b \frac{T}{(\log T)^\lambda}\log( c\log T) - \left(\eta\epsilon T+\frac{T}{(\log T)^\lambda}\log r\right)\right)\right)
\end{split}
\]
We consider the two factors on the right separately. First, using the fact that $T=(L+1)(M+1)$, the definition of $\delta(x)$, and taking $M+1$ to be an integer around $(\log (L+1))^{\lambda+1}$, we have
\begin{eqnarray*}
T^4 (2e\ell)^{2L}(M+4)^{2L}c_2^{3M}e^{3M\delta(2\ell T)}r^{\frac{T}{(\log T)^\lambda}}\le T^4 (2e\ell)^{2L}(M+4)^{2L}c_2^{3M}e^{6\ell \frac{M(M+1)(L+1)}{(\log(L+1))^{2\lambda+2}}}r^{\frac{(M+1)(L+1)}{2^\lambda(\log (L+1))^\lambda}}\\
\le \exp\left( (L+1) \left( 6 \log (M+4)+\frac{1}{2^\lambda}(\log r)\log (L+1)\right)\right)
\end{eqnarray*}
for sufficiently large $L$. Since $r<1$, this is at most $1/4$ for sufficiently large $L$.
For the second factor we have
\[
1+ \exp\left( b \frac{T}{(\log T)^\lambda}\log (c\log T) - \left(\eta\epsilon T+\frac{T}{(\log T)^\lambda}\log r\right)\right)<\frac{3}{2}
\]
for large enough $L$. So $|P(T_1,f(T_1))|$ is an integer less than $1$, hence $P(T_1,f(T_1))=0$.
We now inductively assume that $T'\ge T_1$ and that $P(n,f(n))=0$ for all $n \in \A\cap [\epsilon T, T']$. We write $T'=(M'+1)(L+1)$. Suppose that $k\ge 1$. For $|z| \le k T'$ and $i\le L$ we have
\begin{equation}\label{prime:ind_binom}
\abs{\binom{z}{i}} \le (3 k e)^L (M'+1)^L.
\end{equation}
Using the fact that $\max \{ |f(x)|^j,|g(x)|^j \} \le c_2^M e^{M\delta (k T')}$ for $x\le k T' $ and $j\le M$, together with \eqref{primes:coefficients_of_P},\eqref{primes:g_close_to_f} and Lemma \ref{polydiff}, this implies that
\begin{equation}\label{prime:ind_Pdiff}
|P(x,f(x))-P(x,g(x))| \le T^3 (3 k e)^{2L} (M+4)^L (M'+1)^L c_2^{3M} e^{3M \delta (k T')} e^{-\eta\epsilon T'}
\end{equation}
for $x\in [\epsilon T',kT']$. In particular for $x \in (T',k T']$ we have
\begin{equation}\label{prime:ind_to_beat}
|P(x,f(x))| \le |P(x,g(x))| +T^3 (3 k e)^{2L} (M+4)^L (M'+1)^L c_2^{3M} e^{3M \delta (k T')} e^{-\eta\epsilon T'}
\end{equation}
As in the base case we now let $T_1'=\min \{ n\in \A : n>T'\}$ and put $\psi(z)= P(z+T_1',g(z+T_1'))$, analytic for $|z|\le T_1'$. Let $\ell$ be such that $T'_1\le \ell T'$. Using Proposition \ref{jensen}, estimating as in the base case, and using \eqref{prime:ind_to_beat} we have
\[
\begin{split}
|P(T_1',f(T_1'))|\le T^2 (6\ell k)^{2L}(M+4)^L(M'+1)^L c_2^{2M} e^{2M\delta(2 \ell T')}r^{\frac{T'}{(\log T')^\lambda}} \\+ 2 \frac{T'}{(\log T')^\lambda} T^3 (3\ell e)^{2L}(M+4)^L(M'+1)^L c_2^{3M}e^{3 M \delta (\ell T')}e^{-\eta\epsilon T'} \left( \log T'\right)^{c_6 \frac{T'}{(\log T')^\lambda}}
\end{split}
\]
for some $r<1$ and some positive $c_6$. With the choice of $M$ made earlier, we see that for $M'>M$ we have $|P(T_1',f(T_1'))|<1$ once $L$ is fixed sufficiently large. So $P(T_1',f(T_1'))=0$. And then inductively we have $P(n,f(n))=0$ for all sufficiently large $n\in\A$.
By o-minimality and analyticity it follows that $P(x,f(x))=0$ for all $x\ge 0$. So $f$ is algebraic. It then follows from the theorem on page 131 of \cite{Serre} that $f$ must in fact be a polynomial.
\end{proof}
\end{sffamily}
\end{document}
\begin{document}
\author{Hiu-Fai Law \and Colin McDiarmid}
\title{Independent sets in graphs with given minimum degree}
\begin{abstract}
We consider numbers and sizes of independent sets in graphs with minimum degree at least $d$,
when the number $n$ of vertices is large. In particular we investigate which of these
graphs yield the maximum numbers of independent sets of different sizes,
and which yield the largest random independent sets.
We establish a strengthened form of a conjecture of Galvin concerning the first of these topics.
\end{abstract}
Given a graph $G$, let $\mathcal I(G)$ be the set of independent sets and let $i(G)=|\mathcal I(G)|$; and
for $k\geq 0$ let $\mathcal I_k(G)$ be the set of independent sets of order $k$ and let $i_k(G)= |\mathcal I_k(G)|$.
Thus $i(G) = \sum_{k\geq 0} i_k(G)$.
There are many extremal results on $i(G)$ and $i_k(G)$, where $G$ ranges over a certain family of graphs, for example, trees or regular graphs
(see \cite{CGT09}-\!\cite{Gal11}, \cite{Kah01}-\!\cite{PT82},\cite{Zha10}).
Here we investigate graphs with a given lower bound on their vertex degrees.
For $d\geq 0$, let $\mathcal G_n(d)$ be the set of graphs of order $n$ with minimum degree at least $d$.
(Always $n,k$ and $d$ will be integers.)
We are interested in which of these graphs yield the maximum numbers of independent sets of
different sizes, and which yield the largest random independent sets.
Let us discuss numbers first.
Recall that the \emph{independence number} $\alpha(G)$ is the maximum size of an independent set.
Clearly $\alpha(G) \leq n-d$ for each $G \in \mathcal G_n(d)$.
Recently, Galvin~\cite{Gal11} proved that,
for $n$ suitably larger than $d$,
$i(G)< i(K_{d,n-d})$ for any $G\in\mathcal G_n(d)$ that is not (isomorphic to) $K_{d,n-d}$.
Moreover,
he conjectured essentially
that for any $d\geq 1$, there exist integers
$N(d)$ and $C(d)$ such that for each $n \geq N(d)$, $K_{d,n-d}$ maximizes
$i_k$ over all graphs in $\mathcal G_n(d)$ for each $k$ satisfying $C(d) \leq k \leq n-d$;
and he proved
such a result in the case when $d=1$.
We shall see that this conjecture holds even if $d$ is allowed to grow slowly, and further we can take $C(d)=3$.
Observe that we need $C(d) \geq 3$. For, each $n$-vertex graph has $i_0(G)=1$ and $i_1(G)=n$.
Also $i_2(G)= \binom{n}{2} - e(G)$, where $e(G)$ is the number of edges, and graphs $G \in {\cal G}_n(d)$
can have $i_2(G)>i_2(K_{d,n-d})$.
(For example, if $d$ is fixed and $n$ is large and even, $K_{d,n-d}$ has $d(n-d) \sim dn$ edges, whereas
a $d$-regular graph has $dn/2$ edges.)
We shall show:
\begin{theorem} \label{thm.indepksets}
Let $1\leq d=d(n)=o(n^{1/3})$.
Then for all sufficiently large~$n$,
for each graph $G \in {\cal G}_n(d)$ and each $k \geq 3$ we have
$i_k(G)\leq i_k(K_{d,n-d})$;
and if $G$ is not $K_{d,n-d}$
then $i_2(G)+i_{4}(G) < i_2(K_{d,n-d}) + i_{4}(K_{d,n-d})$,
and so $i(G)<i(K_{d,n-d})$.
\end{theorem}
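Although the theorem is asymptotic, a reader wishing to get a feel for the statement can check very small cases exhaustively. The following sketch (our own illustration in Python; it plays no role in the proofs) enumerates every graph in $\mathcal G_6(2)$ and compares the componentwise maxima of $i_k$ with $i_k(K_{2,4})$; since $n=6$ is far from ``sufficiently large'', agreement in such a tiny case is only suggestive.
\begin{verbatim}
from itertools import combinations

def indep_counts(n, edges):
    """Return [i_0(G), ..., i_n(G)] for the graph G on {0, ..., n-1}."""
    E = set(map(frozenset, edges))
    counts = [0] * (n + 1)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                counts[k] += 1
    return counts

n, d = 6, 2
pairs = list(combinations(range(n), 2))
# K_{d,n-d}: all edges between {0,...,d-1} and {d,...,n-1}
star = indep_counts(n, [(i, j) for i in range(d) for j in range(d, n)])
best = [0] * (n + 1)
for mask in range(1 << len(pairs)):          # every labelled graph on 6 vertices
    edges = [pairs[i] for i in range(len(pairs)) if (mask >> i) & 1]
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    if min(deg) < d:                         # keep only graphs with minimum degree >= d
        continue
    c = indep_counts(n, edges)
    best = [max(b, x) for b, x in zip(best, c)]
print("componentwise max of i_k over G_6(2), k>=3:", best[3:])
print("i_k(K_{2,4}) for k>=3:                     ", star[3:])
\end{verbatim}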
A graph $G \in {\cal G}_n(d)$ with $\alpha(G)=n-d$ has the form $G = H+I_{n-d}$
for a graph $H$ of order $d$ and the empty graph $I_{n-d}$ on $n-d$ vertices.
(Recall that for graphs $G, G'$ with disjoint vertex sets, the sum $G+G'$ denotes the graph obtained by adding
all edges between them.) Let $K^*_{a,b}$ denote the graph $K_a + I_b$.
Denote by $X(G)$ the size of an independent set chosen uniformly at random from $\mathcal I(G)$.
Recall that $X$ is \emph{stochastically dominated} by $Y$,
denoted by $X\leq_s Y$, if $\mathbb P(X\leq t)\geq \mathbb P(Y\leq t)$ for each $t$.
If $G \in {\cal G}_n(d)$ satisfies $\alpha(G)=n-d$ and $G$ is not $ K^*_{d,n-d} $,
then $G$
is (isomorphic to) a proper subgraph of $K^*_{d,n-d}$, and so
$i(G)> i(K^*_{d,n-d})$; and it follows that
$\mathbb P(X(G) \leq t) < \mathbb P(X( K^*_{d,n-d} ) \leq t)$ for $t=0$ and $t=1$.
Hence it is {\em not} the case that $X(G) \leq_s X( K^*_{d,n-d} )$.
Nevertheless, our second theorem shows that,
if we ignore independent sets of size at most 1, then of all graphs in ${\cal G}_n(d)$,
the graph $ K^*_{d,n-d} $ is the unique graph yielding the largest random independent sets.
\begin{theorem} \label{T:SDmax}
Let $1\leq d=d(n)=o(n^{1/3})$.
Then for all sufficiently large~$n$, for each graph $G \in\mathcal G_n(d)$ other than $ K^*_{d,n-d} $, we have
\[ \mathbb P(X(G)\geq t) < \mathbb P(X(K^*_{d,n-d})\geq t) \;\; \mbox{ for each } t=3,\ldots,n-d, \]
and if $\alpha(G) < n-d$ then this inequality holds also for $t=1$ and $2$.
\end{theorem}
\noindent
This yields directly:
\begin{corollary} \label{cor.1}
If $d$ is as above, then for all sufficiently large~$n$, for each graph $G \in\mathcal G_n(d)$
\begin{equation} \label{eqn.cor2}
X(G)\leq_s \max\{ 2, X(K^*_{d,n-d})\},
\end{equation}
and
\begin{equation} \label{eqn.cor1}
\mbox{ if } \alpha(G) < n-d \;\; \mbox{ then } \; X(G) \leq_s X( K^*_{d,n-d} ).
\end{equation}
\end{corollary}
\noindent
Also, since $\mathbb E(X) = \sum_{t\geq 1} \mathbb P(X\geq t)$, we may obtain almost directly:
\begin{corollary} \label{cor.2}
If $1\leq d=d(n)=o(n^{1/3})$, then for all sufficiently large~$n$,
for each graph $G \in\mathcal G_n(d)$ other than $ K^*_{d,n-d} $, we have
\[ \mathbb E(X(G)) < \mathbb E(X(K^*_{d,n-d})) < (n-d)/2. \]
\end{corollary}
\noindent
In order to prove these results, it turns out that the `growth rates' $\alpha_k$
of the numbers of independent sets are crucial quantities. For a graph $G$ and positive integer $k \leq \alpha(G)$,
let $\alpha_k(G) := \frac{i_k(G)}{i_{k-1}(G)}$.
Thus $\alpha_k(G)$ is $1/k$ times the average number of extensions of an independent $(k-1)$-set
to an independent $k$-set in $G$;
or (roughly) the `average number of extensions per vertex' at size~$k$.
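As a toy illustration (our own, and not used in what follows): for the $5$-cycle $C_5$ we have $i_1(C_5)=5$ and $i_2(C_5)=5$, so $\alpha_2(C_5)=1$; indeed each vertex has exactly two non-neighbours, so an independent $1$-set has on average $2$ extensions to an independent $2$-set, and $\tfrac12\cdot 2=1$.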
To prove Theorem~\ref{thm.indepksets} we use two lemmas, one on growth rates $\alpha_k(G)$ and
one on the `base case' $i_3(G)$. To prove Theorem~\ref{T:SDmax} we need one further lemma,
a general result on growth rates and stochastic domination.
We adopt the following notations. For a graph $G$ and integer $d$
let $A=A(G,d)=\{ v\in V(G): \deg(v)>d\}$ and $B=V(G)\setminus A$; and let $a=|A|$, $b=|B|$.
Also recall the standard notation that, if $U$ is a set of vertices in~$G$, then the neighbourhood $\Gamma(U)$
is the set of neighbours of vertices in $U$,
and the closed neighbourhood $\Gamma[U]$ is $\Gamma(U) \cup U$.
\begin{lemma}
\label{L:avgext}
(a) For each $1\leq d<n$
and $G \in {\cal G}_n(d)$, we have
$\alpha_k(G)\leq \alpha_k( K^*_{d,n-d} )$ for each $3\leq k\leq \alpha(G)$.\\
(b)
Let $1\leq d=d(n)=o(n^{1/3})$.
Then for all sufficiently large $n$, for each $G,K\in \mathcal G_n(d)$ with $\alpha(G)<n-d=\alpha(K)$,
we have $\alpha_k(G) < \alpha_k(K)$
for each $4\leq k\leq \alpha(G)$.
\end{lemma}
\begin{proof}
Let $3\leq k\leq \alpha(G)$.
Since each vertex degree in $G$ is at least $d$,
each $I\in \mathcal I_{k-1}(G)$ can be extended to at most $n-d-k+1$ independent $k$-sets.
Call $I$ \emph{good} if this upper bound is attained,
and otherwise call $I$ \emph{bad}.
Note that $I$ is good if and only if $|\Gamma(I)|=d$,
if and only if each vertex in $I$ has the same set of $d$ neighbours.
Also, each $I$ is good if $G$ is $ K^*_{d,n-d} $.
Since each independent $k$-set contains exactly $k$ independent $(k-1)$-sets,
we have $i_{k-1}(G)(n-d-k+1)\geq ki_k(G)$.
Hence, $\alpha_k(G) \leq \frac{n-d-k+1}{k}$.
But $\alpha_k( K^*_{d,n-d} )=\frac{n-d-k+1}{k}$ for $k= 3,\ldots,n-d$. This establishes part (a).
Now we prove part (b). Let $4 \leq k \leq \alpha(G)$. Suppose first that $k \geq d+2$.
Let $J$ be an independent set in $G$ of size $\alpha(G) \leq n-d-1$.
Let $W$ be a set of $d+1$ vertices outside $J$, and note that each vertex in $W$ has at least one neighbour in $J$.
Since $k-1 \geq d+1$ we may pick a $(k-1)$-subset $I$ of $J$ with $\Gamma(I) \supseteq W$,
and so $I$ is bad. Now, since there is a bad independent $(k-1)$-set, $\alpha_k(G)< \frac{n-d-k+1}{k}$.
Further, $\alpha_k(K)=\frac{n-d-k+1}{k}$ for each $k= d+2,\ldots,n-d$, so this case is done; and
so to prove part (b) we may assume that $4 \leq k\leq d+1$.
Assume also that $n >2d$ (as we may). Write $K=H+ I_{n-d}$ for some graph $H$ of order $d$.
Then $\alpha_k(K) = \frac{\binom{n-d}{k}+i_k(H)}{\binom{n-d}{k-1} + i_{k-1}(H)}$.
Since $i_{k-1}(H)\leq \binom{d}{k-1}$, for each $k \leq d+1$,
\begin{equation} \label{eqn.alphak}
\alpha_k(K) \geq \frac{\binom{n-d}{k}}{\binom{n-d}{k-1}+\binom{d}{k-1}}
> \frac{n-d-k+1}{k} \left( 1- \frac{\binom{d}{k-1}}{\binom{n-d}{k-1}}\right).
\end{equation}
Let $p$ and $q$ denote the numbers of good and bad sets in $\mathcal I_{k-1}(G)$ respectively,
so $p+q=i_{k-1}(G)$. Then
\[ k i_k(G) \leq p (n-d-k+1) + q(n-d-k) = (p+q) (n-d-k+1) -q, \]
so
\begin{equation} \label{eqn.alphak2}
\alpha_k(G) \leq \frac{n-d-k+1}{k} - \frac{q}{k(p+q)}.
\end{equation}
Assume for a contradiction that $\alpha_k(G) \geq \alpha_k(K)$.
Then it follows using~(\ref{eqn.alphak}) and~(\ref{eqn.alphak2}) that
\begin{equation}
\label{E:badprop}
\frac{q}{p+q} \leq (n-d-k+1) \binom{d}{k-1}/\binom{n-d}{k-1}< \frac{d^{k-1}}{(n-d-k+1)^{k-2}}.
\end{equation}
Observe that, since $k \geq 4$, the final bound above is $O(d^3 n^{-2})=o(n^{-1})$.
Thus certainly $p>0$.
\noindent
{\bf Claim:}
For each good independent $(k-1)$-set $I$ in $G$ there is a vertex $w \not\in I \cup \Gamma(I)$ such that
$\Gamma(w) \neq \Gamma(I)$.
We will prove the claim later: suppose for now that it holds.
Then from each good independent $(k-1)$-set $I$ we may construct a bad independent $(k-1)$-set $I'$
by deleting a vertex $u$ from $I$ and adding a vertex $w$ as in the claim.
This gives at least $p(k-1) \geq 3p$ constructions. Also, in each bad independent $(k-1)$-set $I'$ which has been
constructed, we can identify the vertex $w$ added
(since the other $k-2 \geq 2$ vertices all have the same neighbourhood).
Thus each bad independent $(k-1)$-set $I'$ is constructed at most $n-k+1 \leq n-3$ times. Hence
\[ q \geq 3p/(n-3) > p/(n-1) \]
and so $q/(p+q) > 1/n$, which contradicts \eqref{E:badprop} (for $n$ sufficiently large, since $k \geq 4$).
It remains to prove the claim.
Recall that $B=\{v \in V(G): \deg(v)=d\}$.
Let $I$ be a good independent $(k-1)$-set.
Note that $I \subseteq B$ and $|\Gamma(I)|=d$.
If $|A|=a \geq d+1$ then for $w$ we may pick any vertex in $A \setminus \Gamma(I)$. So we may assume that $a \leq d$.
Let $B_1=\{v\in B: \Gamma(v)\cap B\neq \emptyset\}$ and $B_2 = B \setminus B_1$.
Since $\alpha(G)<n-d \leq |B|$ we have $E(B) \neq \emptyset$ and so $B_1 \neq \emptyset$.
Either $I \subseteq B_1$ or $I \subseteq B_2$, since
each vertex in $I$ has the same set of $d$ neighbours.
If $I \subseteq B_1$ then $I \subseteq \Gamma(v)$ for some $v \in B_1$,
and so for $w$ we may pick any vertex not in $\Gamma(I) \cup \Gamma(v)$
(at least $n-2d \geq 1$ choices).
If $I \subseteq B_2$ then for $w$ we may pick any vertex in $B_1$.
This completes the proof of the claim, and we are done.
\end{proof}
The previous lemma concerns ratios; the next considers the base case.
Of graphs in ${\cal G}_n(d)$, clearly a $d$-regular graph has the most independent $2$-sets:
we look at the number $i_3$ of independent $3$-sets. We first give a formula for $i_3(G)$ for any graph $G$.
Let $t_i$ be the number of induced subgraphs of $G$ on three vertices with $i$ edges. Then
\begin{eqnarray*}
\binom{n}{3} & = &t_0 +t_1 +t_2 +t_3,\\
e(G) (n-2) &= &t_1 +2t_2 +3t_3, \\
\sum_{v_i\in V(G)} \binom{\deg(v_i)}{2} & = &t_2 + 3t_3.
\end{eqnarray*}
Hence,
\begin{equation} \label{E:i3}
i_3(G) = \binom{n}{3} - e(G)(n-2) + \sum_{v_i\in V(G)} \binom{\deg(v_i)}{2} - t(G),
\end{equation}
where $t(G)=t_3$ is the number of triangles. For example, if $G$ is a $d$-regular graph then
\begin{eqnarray*}
i_3(G) & = & \binom{n}{3} - \frac12 dn(n-2) +n \binom{d}{2} -t(G) \\
&=& \binom{n-d}{3} - \frac12 dn + \frac16 d(d^2+3d+2) - t(G).
\end{eqnarray*}
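Since~(\ref{E:i3}) is an exact identity, it is easy to verify numerically. The following short Python sketch (our own illustration; the two example graphs are just convenient test cases) compares both sides on the $5$-cycle and on the Petersen graph.
\begin{verbatim}
from itertools import combinations
from math import comb

def i3_bruteforce(n, edges):
    """Count independent 3-sets directly."""
    E = set(map(frozenset, edges))
    return sum(1 for S in combinations(range(n), 3)
               if all(frozenset(p) not in E for p in combinations(S, 2)))

def i3_formula(n, edges):
    """Right-hand side of (E:i3)."""
    E = set(map(frozenset, edges))
    deg = [0] * n
    for u, v in E:
        deg[u] += 1
        deg[v] += 1
    triangles = sum(1 for S in combinations(range(n), 3)
                    if all(frozenset(p) in E for p in combinations(S, 2)))
    return comb(n, 3) - len(E) * (n - 2) + sum(comb(x, 2) for x in deg) - triangles

cycle5 = [(i, (i + 1) % 5) for i in range(5)]               # C_5
petersen = ([(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
            + [(i, i + 5) for i in range(5)])               # spokes
for n, g in [(5, cycle5), (10, petersen)]:
    assert i3_bruteforce(n, g) == i3_formula(n, g)
\end{verbatim}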
\begin{lemma}
\label{L:triangle}
Let $1\leq d=d(n)=o(n^{1/3})$.
For all sufficiently large~$n$, if $G, K\in\mathcal G_n(d)$
are such that $\alpha(G)<n-d=\alpha(K)$, then $i_3(G) \leq i_3(K)- n/2 +1$.
\end{lemma}
\begin{proof}
Our proof relies on~(\ref{E:i3}). Consider $G\in\mathcal G_n(d)$ with $\alpha(G)<n-d$.
We first show that we may assume without loss of generality
that the set $A$ of vertices of degree $>d$ is a non-empty
independent set, and then that it suffices to prove~(\ref{claim.hd}) below;
then we prove~(\ref{claim.hd}) by considering four cases for $a=|A|$.
Suppose that $G$ is $d$-regular. Then by the above we have
\[ i_3(G) \leq \binom{n-d}{3} - \frac12 dn + \frac16 d(d^2 +3d+2).\]
But $i_3(K) \geq \binom{n-d}{3}$. Thus, if $d=1$ then
\[ i_3(G) \leq \binom{n-d}{3} -n/2+1 \leq i_3(K) -n/2 +1;\]
and if $d \geq 2$ then
\[ i_3(G) \leq \binom{n-d}{3} -n +O(d^3) \leq i_3(K) -n/2\]
for $n$ sufficiently large.
Hence we may assume that $G$ is not regular, and so $A$ is non-empty.
Now repeatedly delete edges between vertices of degree $>d$,
as long as $G$ keeps satisfying $\alpha(G)<n-d$.
We end up with some graph $G' \in\mathcal G_n(d)$ with $\alpha(G')<n-d$. Suppose that there is an edge $uv\in E'(A')$ after this step (we use $E'$ and $A'$ to refer to $G'$). Then there exists an $(n-d)$-set $I$ such that $E'(I)=\{uv\}$. Let $J=V(G')\setminus I$, so $|J|=d$. Since $\deg_{G'}(u), \deg_{G'}(v)>d$ and $\deg_{G'}(w) \geq d$ for each other vertex $w \in I$, every possible edge between $I$ and $J$ is present in $G'$.
Therefore, since there are $(n-d-2)$ 3-subsets of $I$ containing $u$ and $v$,
\begin{eqnarray*}
i_3(G) \leq i_3(G')
& \leq & \binom{n-d}{3} - (n-d-2) + \binom{d}{3} < \binom{n-d}{3} - \frac{n}{2}
\end{eqnarray*}
for large $n$, since $d=o(n^{1/3})$. Hence, we may assume that $A$ is independent.
For each $v_i\in A$, let $r_i=\deg(v_i)$.
Observe that $2e(G)= \sum_i r_i \ + (n-a)d$.
Thus, from~\eqref{E:i3},
\begin{eqnarray*}
&& 2 i_3(G) - 2\binom{n}{3}\\
&=& -[\sum_{i=1}^{a} r_i \ + (n-a)d ] (n-2) + \sum_{i=1}^{a} r_i(r_i -1) + (n-a)d(d-1)\!-\! 2 t(G)\\
&=& \sum_{i=1}^{a} r_i(r_i -n +1) -(n-a)d(n-d-1) - 2 t(G)\\
&=& - dn(n-d-1) + h_d(G),
\end{eqnarray*}
where
\[h_d(G) = \sum_{i=1}^a r_i (r_i-n+1) + ad(n-1-d) - 2t(G). \]
Thus
\begin{eqnarray*}
i_3(G) & = & \binom{n}{3} - \frac12 dn(n-d-1) + \frac12 h_d(G)\\
& =& \binom{n-d}{3} - \frac12 dn + \frac16 d (d^2+3d+2) + \frac12 h_d(G).
\end{eqnarray*}
Observe that here only $\frac12 h_d(G)$ varies with $G\in {\mathcal G}_n(d)$.
Since $i_3(K)\geq \binom{n-d}{3}$, by the last equality
\[ h_d(K) \geq dn - \frac13 d (d^2+3d+2) = (1+o(1))\ dn.\]
Thus it suffices to show that
\begin{equation} \label{claim.hd}
h_d(G) \leq (d-1)n +O(d^2)
\end{equation}
and the remainder of the proof is devoted to establishing this result.
Recall that we are assuming that in $G$ the set $A$ of vertices of degree $>d$ is independent.
Thus $d+1 \leq r_i \leq n-a$ for each $i=1,\ldots,a$.
Consider the function $g(x)=x(x-n+1)= -x(n-1-x)$ for real $x$. This is decreasing for $x<(n-1)/2$ and increasing for $x>(n-1)/2$.
We now break the proof of~(\ref{claim.hd}) into four cases: $a \geq d+2$, $a=d+1$, $a=d$, and $1 \leq a \leq d-1$.
Suppose that $a \geq d+2$. Then each $d+1 \leq r_i \leq n-d-2$, so $g(r_i) \leq (d+1)(d+2-n)$.
Hence,
\begin{eqnarray}
h_d(G) & \leq & a (d+1)(d+2-n) + ad(n-1-d) \nonumber\\
& = & a(-n+2d+2) \label{E:a_large}
\end{eqnarray}
and so~(\ref{claim.hd}) holds.
Suppose that $a=d+1$. Then $d+1 \leq r_i \leq n-d-1$ for each $i$, and $\sum_{i=1}^a r_i \leq d(n-a) = d(n-d-1)$.
Thus at most $d-1$ of the $r_i$ are equal to $n-d-1$, and so
\[\sum_{i=1}^a g(r_i) \leq -(d-1) d (n-d-1) - 2 (d+1)(n-d-2)= -n(d^2+d+2) +O(d^3).\]
Hence $h_d(G) \leq -2n + O(d^3)$, and so~(\ref{claim.hd}) holds.
Suppose that $a=d$. Since $\alpha(G)<n-d$, $e(B)>0$. It follows that $\sum_{i=1}^a r_i \leq d(n-d)-2 \leq d(n-d)-1$.
Hence not all $d$ of the $r_i$ are equal to $n-d$, and so
\begin{eqnarray*}
h_d(G) & \leq & (d-1)(n-d)(1-d) + (n-d-1)(-d) + d^2(n-1-d)\\
&=& (d-1)n +O(d^2)
\end{eqnarray*}
as required.
Finally, suppose that $1 \leq a \leq d-1$.
Consider $v_i \in A$. Suppose that $r_i\geq n-d-1$.
Then the edge-boundary of $\Gamma(v_i)$ has size at most
$r_i a +(n-a-r_i)d \leq r_ia + d(d+1-a)$, and so $2e(\Gamma(v_i)) \geq r_i(d-a) - d(d+1-a)$.
Hence, twice the number of triangles containing $v_i$ is at least $r_i(d-a)-d(d+1-a)$.
Also, using first that $r_i \leq n-a$ and then that $r_i \geq n-d-1$ we have
\[ g(r_i)-r_i(d-a) = r_i(r_i-n+1-d+a) \leq r_i(1-d) \leq (n-d-1) (1-d).\]
On the other hand, if $r_i\leq n-d-2$, then $g(r_i) \leq (d+1)(d-n+2)$. Let $l=|\{i: r_i\geq n-d-1\}|$.
Then $ h_d(G)$ is at most
\begin{eqnarray*}
& & \sum_{i: r_i\geq n\!-\!d\!-1}\!\! [g(r_i) - r_i(d\!-\!a) + d(d\!+\!1\!-\!a)]
+ \sum_{i: r_i\leq n\!-\!d\!-\!2} \!\!g(r_i) + ad(n\!-\!1\!-\!d)\\
& \leq & l [(n\!-\!d\!-\!1)(1\!-\!d) + d(d\!+\!1\!-\!a)]\! +\! (a\!-\!l) (d\!+\!1)(d\!-\!n\!+\!2)\! +\! ad(n\!-\!1\!-\!d)\\
& \leq & an + O(d^2) \; \leq \; (d-1)n + O(d^2)
\end{eqnarray*}
as required.
\end{proof}
With the last two lemmas, we may now prove Theorem~\ref{thm.indepksets}, establishing a stronger version of the
conjecture of Galvin \cite{Gal11} mentioned earlier.
\begin{proof}[Proof of Theorem~\ref{thm.indepksets}]
If $\alpha(G)= n-d$ then $G$ is (isomorphic to) a
supergraph of $K_{d,n-d}$ and the result is trivial:
so we may assume that $\alpha(G)<n-d$. Let us also assume that $n$ is large. Let $K \in {\cal G}_n(d)$
with $\alpha(K)=n-d$.
Since $i_3(G)\leq i_3(K)-n/2 +1$ by Lemma~\ref{L:triangle}, by Lemma \ref{L:avgext} we have
$i_k(G)< i_k(K)$ for all $k\geq 3$. In fact, $i_4(G)<i_4(K) - \Omega(n^2)$ since $\alpha_4(K)=\Omega(n)$.
On the other hand, $e(G)\geq dn/2$, so that $i_2(G)-i_2(K) \leq dn/2$.
Thus $i_2(G)+i_4(G) < i_2(K) + i_4(K)$, and we are done.
\end{proof}
To prove Theorem~\ref{T:SDmax}, as well as the two corollaries,
we need one further lemma, which is
a general result on growth rates and stochastic domination,
adapted from Lemma 2.4 of~\cite{msw05}.
Given a finite sequence of positive real numbers $x = (x_0,x_1,\ldots,x_s)$,
let $S(x) = \sum_{k\geq 0} x_k$.
Define a random variable $X=X(x)$ by $\mathbb P(X=k) = x_k/S(x)$.
\begin{lemma} \label{L:SD}
Let $x_0, y_0>0$, let $1 \leq a \leq b$ be integers, and let $\alpha_1, \ldots, \alpha_a > 0$ and
$\beta_1, \ldots, \beta_b > 0$.
For $i=1,\ldots, a$, let $x_i = x_0 \mathbb Pod_{0<j\leq i} \alpha_j$; and for $i=1,\ldots, b,$
let $y_i = y_0 \mathbb Pod_{0<j\leq i}\beta_j$.
Let $x = (x_0,x_1,\ldots,x_a)$ and $y = (y_0,y_1,\ldots,y_b)$, and
denote $X(x)$ by $X$ and $X(y)$ by $Y$.
If $\alpha_i\leq \beta_i$ for each $i=1,\ldots, a$, then $X\leq_s Y$.
Further, if these conditions hold, and $(\alpha_1, \ldots, \alpha_a) \neq (\beta_1, \ldots, \beta_b)$,
then
\[ \mathbb P(X \geq t) < \mathbb P(Y \geq t) \mbox{ for each } t=1,\ldots,b. \]
\end{lemma}
\begin{proof}
By replacing $y_a$ by $\sum_{j>a} y_j$, we may assume that $b=a$. It suffices to consider the case when
$\alpha_i=\beta_i$ for all $i$ except $j_0$ where $\alpha_{j_0}<\beta_{j_0}$.
Since $\mathbb P(X\leq a)=\mathbb P(Y\leq a)=1$, it suffices to prove
$\mathbb P(X\leq t) > \mathbb P(Y\leq t)$ for $t=0,\ldots, a-1$.
Note that we may rescale $x_i, y_i$'s without changing the distribution.
Suppose $t$ satisfies $0\leq t\leq j_0-1$. Rescale to $x_0=y_0=1$. Then $x_i=y_i$ for all
$i\leq t$ and $S(x)<S(y)$. So
$\mathbb P(X\leq t)=\frac{\sum_{i\leq t} x_i}{S(x)} > \frac{\sum_{i\leq t} y_i}{S(y)}=\mathbb P(Y\leq t)$.
For $t$ such that $j_0\leq t\leq a-1$, we rescale to $x_{j_0}=y_{j_0}$. Then
$x_i=y_i$ for all $i=j_0, j_0+1,\ldots, a$ and $S(x)>S(y)$.
Hence, $\mathbb P(X>t) < \mathbb P(Y>t)$
and so $\mathbb P(X\leq t) > \mathbb P(Y\leq t)$.
\end{proof}
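As a concrete illustration of the strict inequality (a toy example of our own, with numbers chosen only for illustration): take $x=(1,2,2)$ and $y=(1,3,6)$, so that $a=b=2$, $(\alpha_1,\alpha_2)=(2,1)$ and $(\beta_1,\beta_2)=(3,2)$. Then $\mathbb P(X\geq 1)=\tfrac45<\tfrac{9}{10}=\mathbb P(Y\geq 1)$ and $\mathbb P(X\geq 2)=\tfrac25<\tfrac35=\mathbb P(Y\geq 2)$, as the lemma asserts.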
\begin{proof}[Proof of Theorem \ref{T:SDmax}]
There are two cases, depending on whether $\alpha(G)<n-d$ or $\alpha(G)=n-d$.
(a) Let $G \in {\cal G}_n(d)$ with $\alpha(G)<n-d$.
For $k\geq 1$, let $\alpha^*_k$ denote $\alpha_k( K^*_{d,n-d} )$.
Then $\alpha_1(G)=\alpha^*_1=n$.
By Lemma \ref{L:avgext} (a),
$\alpha_k(G)\leq \alpha^*_k$ for $3 \leq k\leq \alpha(G)$.
If $\alpha_2(G) \leq \alpha^*_2$
then directly from
Lemma \ref{L:SD} we have $\mathbb P(X(G) \geq t)< \mathbb P(X( K^*_{d,n-d} )\geq t)$ for each $t=1,\ldots,n-d$, and we are done.
So we may suppose that $\alpha_2(G) > \alpha^*_2$; that is $i_2(G) > i^*_2$,
where $i_k^*$ denotes $i_k( K^*_{d,n-d} )$.
Let $x$ be the $i_k$-vector for $G$ (up to $x_{n-d}$), let $z$ be the $i_k$-vector for $ K^*_{d,n-d} $,
and let $y$ agree with $x$ in the first three places, and agree with $z$ in the remaining places; that is,
\[ x=(x_0,x_1,\ldots,x_{n-d}) = (1,n,i_2(G),i_3(G),i_4(G),\ldots,i_{n-d}(G) ),\]
\[ y= (y_0,y_1,\ldots,y_{n-d})=(1,n,i_2(G), i^*_3,i^*_4,\ldots,i^*_{n-d})\]
and
\[ z=(z_0,z_1,\ldots,z_{n-d})=(1,n,i^*_2, i^*_3,i^*_4,\ldots,i^*_{n-d}). \]
Let $3 \leq t \leq n-d$. By Lemma~\ref{L:avgext} (b) with $K= K^*_{d,n-d} $, for each $4 \leq k \leq \alpha(G)$ we have
$\frac{x_k}{x_{k-1}}\leq \frac{y_k}{y_{k-1}}$.
Moreover, by Lemma~\ref{L:triangle}, $i_3(G)<i^*_3$ so that $\frac{x_3}{x_2}<\frac{y_3}{y_2}$.
Then by Lemma~\ref{L:SD},
\[ \mathbb P(X(G) \geq t) = \mathbb P(X(x) \geq t) < \mathbb P(X(y) \geq t).\]
Also
\[ \mathbb P(X(y) \geq t) < \mathbb P(X(z) \geq t) = \mathbb P(X( K^*_{d,n-d} ) \geq t)\]
since $S(y)<S(z)$.
Hence
$\mathbb P(X(G) \geq t) < \mathbb P(X( K^*_{d,n-d} ) \geq t)$ as required.
To complete the proof for this case, note that by Theorem~\ref{thm.indepksets},
$i(G) < i( K^*_{d,n-d} )$, so that
\[ \mathbb P(X(G)\leq 0)=1/i(G) > 1/i( K^*_{d,n-d} )= \mathbb P(X( K^*_{d,n-d} )\leq 0),\]
and similarly
\[ \mathbb P(X(G)\leq 1)=(1+n)/i(G) > (1+n)/i( K^*_{d,n-d} )= \mathbb P(X( K^*_{d,n-d} )\leq 1). \]
(b) It remains to consider the case when $\alpha(G)=n-d$ and $G$ is not $ K^*_{d,n-d} $.
Then $G$ may be obtained from $ K^*_{d,n-d} $ by deleting at least one edge from the $K_d$ part. Thus $i(G)>i( K^*_{d,n-d} )$;
and the $i_k$-vector $x$ of $G$ may be obtained from the $i_k$-vector $z$ for $ K^*_{d,n-d} $
by adding positive integers to some entries amongst the first $d+1$ including adding at least 1 to $z_2$.
It is immediate that $\mathbb P(X(x) \geq t) < \mathbb P(X(z)\geq t)$ for each $t=d+1,\ldots,n-d$.
Let $2 \leq t \leq d-1$. Then
\[ \mathbb P(X(z) \leq t) = \frac{\sum_{i=0}^{t} z_i}{S(z)}.\]
To obtain $\mathbb P(X(x) \leq t)$ from the last ratio we add at least 1 to the numerator and at most $2^d$
to the denominator. Thus the numerator increases by a factor $(1+\Omega(n^{-d}))$ and
the denominator increases by a factor at most $(1+ 2^{-(n-2d)})$.
So overall the ratio increases (for large $n$), that is $\mathbb P(X(z) \leq t) < \mathbb P(X(x) \leq t)$,
as required.
\end{proof}
We noted earlier that Corollary~\ref{cor.1} follows directly from Theorem~\ref{T:SDmax}, so it remains only to prove
Corollary~\ref{cor.2}.
\begin{proof}[Proof of Corollary~\ref{cor.2}]
If $\alpha(G) < n-d$, the result follows directly from~\eqref{eqn.cor1}. Suppose then that
$\alpha(G) = n-d$, and let $n$ be sufficiently large that $\mathbb E[X( K^*_{d,n-d} )] \geq d$.
Then the average size of the sets which are independent in $G$ but not in $ K^*_{d,n-d} $ is at most $d \leq \mathbb E[X( K^*_{d,n-d} )]$,
and so $\mathbb E[X(G)] \leq \mathbb E[X( K^*_{d,n-d} )]$.
\end{proof}
We remark that with an analogous method, a weighted version of the statements can be proved.
Let $I(G, \lambda) = \sum_{k\geq 0} i_k(G)\lambda^k$ be the independent set polynomial of $G$ (\cite{GH83}, \cite{SS05}).
Instead of a uniform sampling of independent sets of $\mathcal I(G)$, we fix $\lambda>0$ and
pick a given independent $k$-set with probability $\lambda^k/ I(G,\lambda)$.
Then under this sampling, the analogous versions of Theorem~\ref{T:SDmax} and its corollaries hold.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
Regression models applied to network data in which node attributes are the dependent variables pose a methodological challenge. As has been well studied, naive regression neither properly accounts for community structure, nor does it account for the dependent variable acting as both model outcome and covariate. To address this methodological gap, we propose a network regression model motivated by the important observation that controlling for community structure can, when a network is modular, significantly account for meaningful correlation between observations induced by network connections. We propose a generalized estimating equation (GEE) approach to learn model parameters based on clusters defined through any single-membership community detection algorithm applied to the observed network. We provide a necessary condition on the network size and edge formation probabilities to establish the asymptotic normality of the model parameters under the assumption that the graph structure is a stochastic block model. We evaluate the performance of our approach through simulations and apply it to estimate the joint impact of baseline covariates and network effects on COVID-19 incidence rate among countries connected by a network of commercial airline traffic. We find that the network effect has some influence at the beginning of the pandemic, whereas after the travel ban was in effect the percentage of urban population has more influence on the incidence rate than the network effect.
\end{abstract}
\noindent
{\it Keywords:} network regression; transportation networks; generalized estimating equations; COVID-19
\section{Introduction}
Network data provide quantitative information to study and unveil the pattern of interactions among various objects, from individuals to countries. The inferential task in scientific applications requires learning about the influence that an individual has on others, which is further complicated as the individuals are often nested in communities. Two important methodological challenges of network regression are (1) accounting for correlation induced by network structure, and (2) allowing for node attributes to appear in the data both as outcome and as covariates in the design matrix. For example, when network regression was applied to study the nature and extent of the person-to-person spread of obesity, \cite{christakis2007spread} found that obesity appears to spread through social ties, a phenomenon that may in part be explained by the notion of homophily -- that birds of a feather flock together \citep{shrum1988friendship, igarashi2005gender, tifferet2019gender}. Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so \citep{vanderweele2011sensitivity}. To further investigate the causal relationship, \cite{shalizi2011homophily} identified three factors underlying such interaction: homophily, or the formation of social ties due to matching individual traits; social contagion; and the causal effect of an individual's covariates on his or her measurable response. This has led to the development of two types of models: (1) community detection models that aim to find distinct communities or clusters of similar individuals \citep{holland1983stochastic, newman2018network}, and (2) a more general framework of network regression that models such phenomena through a direct link between observed individual attributes or covariates and the network interactions \citep{holland1981exponential, hoff2002latent, hoff2021additive}. Some approaches that aim at merging the above two models exploit distributional assumptions to incorporate covariate information in detecting communities in the network \citep{binkiewicz2017covariate, mu2022spectral}. These examples highlight the need for regression techniques that are robust to non-trivial network structure: in all of these studies the community labels are known, which makes the evaluation of covariate effects straightforward. In more realistic situations, however, community labels are unknown (unobserved), and differences in covariates across the unobserved communities make the inference challenging.
In this article, we aim to develop a network regression model motivated by the important observation that the differences between the communities in a network can be attributed to the differences in the influences of covariates responsible for an edge formation. Our model is based on a flexible network effect assumption, and it allows us
to perform tests for the model parameters in addition to the estimation.
Beyond grouping nodes solely on the basis of degree, the degree-corrected stochastic block model was one of the first models to allow within-community degree heterogeneity \citep{qin2013regularized}. There has been a surge of interest among psychologists and social scientists in studying such behaviour: \cite{aukett1988gender} shows that gender difference plays an important role in friendship patterns; women show a preference for a few close, intimate same-sex friendships based on sharing emotions, whereas men build up friendships based on the activities they do together. \cite{staber1993friends} studies how women and men form entrepreneurial relationships, concluding that women's networks are wider, with more strangers and a higher proportion of cross-sex ties. In each of these network regression examples, community labels are known, which alleviates the inferential task of finding the effects of covariates and learning community structures. However, in many other scenarios community labels are unknown, which makes the inferential task more challenging. For example, here we consider the network regression problem of estimating the impact of air travel flows between countries on COVID-19 incidence rates, where the community structure of the network is unknown and must be estimated. The current literature lacks powerful methods that address this important issue. Our focus is to bridge this gap by leveraging covariate-assisted information to propose an effective tool which also accounts for the network structure through the adjacency matrix.
Generalized estimating equations (GEE) \citep{liang1986longitudinal, zeger1986analysis} provide a popular approach that is often used to analyze longitudinal and other types of correlated data \citep{burton1998extending, diggle2002analysis, mandel2021neural}. We propose first performing community detection in order to estimate community membership, and then using a GEE regression model to account for the resulting estimated community structure. This approach is general and agnostic to the community detection algorithm used, so long as each node can belong to at most one community. Given that network regression needs to account for correlation between attributes of nodes that are connected, a GEE allows for arbitrary correlation between nodes within the same community. Of note, this approach is best suited for highly modular networks, so that the GEE assumption of independence between different communities is more accurate, though we explore its performance in less modular networks with more between-community mixing.
The rest of this article is organized as follows. In \Cref{Methods} we introduce our network regression model along with our GEE extension, and we describe the transportation network we use to model country-specific COVID-19 incidence rates. In \Cref{Results} we present the theoretical results followed by extensive simulation results and real data analysis. Finally, \Cref{Discussion} concludes the article with a discussion.
\section{Methods}\label{Methods}
In this section, we introduce our network regression model and describe the air travel networks for each month at the beginning of the COVID-19 pandemic in 2020, together with country-specific information such as COVID-19 incidence rate, GDP, and population size. Next, we present a generalized estimating equation (GEE) approach to network regression that accounts for network-induced correlation between observations.
\subsection{Model and notation}\label{Notations and model}
We consider a directed network of $n$ nodes and an $n \times n$ adjacency matrix $\bm A=(A_{ij})$ with $0$'s on the diagonal (i.e. no self-loops).
We denote the feature variable of the $i$th node by $y_i$, the $l$-dimensional vector of covariates of node $i$ by $\bm{x}_i$, and the corresponding vector of coefficients by $\bm{\alpha}$. Denoting the network coefficient of interest by $\beta$, the network regression model for node attribute $y_i$ that we consider is:
\begin{align}\label{original_eqn1}
& y_i=\bm{\alpha}^{\top}\bm{x}_i + \beta \cdot \sum_{j \ne i}A_{ji}y_j/(n-1) + \epsilon_i,\\ \nonumber
\end{align}
where $\epsilon_i \overset{iid}{\sim}N(0,\sigma^2)$ and $\bm{\alpha} \in \mathbb{R}^l$. One can note that the above model is reminiscent of the first-order autoregressive spatial model of \cite{kelejian1998generalized} which frequently contains a spatial lag of the dependent variable as a covariate that is spatially autoregressive.
Using vector and matrix notation, the above model can also be written as
\begin{align*}
& \bm y = \bm X^{\top} \bm \alpha + \beta \cdot \bm A\bm y/(n-1) + \bm \epsilon,
\end{align*}
where $\bm y$ is the concatenation of the $y_i$s in a vector of length $n$, and $\bm X=[\bm x_1: \bm x_2: ...: \bm x_n]$ is the $l \times n$ matrix of covariates. Note that model \eqref{original_eqn1} can be trivially adapted to generalized linear models with non-linear link functions and non-Gaussian data within the scope of GEE models.
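To make the model concrete, the following minimal sketch (our own illustration; the dimensions, sparsity level and parameter values are arbitrary) draws $\bm y$ from the reduced form $\bm y=(\bm I_n-\beta\bm A/(n-1))^{-1}(\bm X^{\top}\bm\alpha+\bm\epsilon)$ implied by the display above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, l, beta, sigma = 50, 3, 0.5, 0.1            # illustrative values only
alpha = np.array([1.0, -0.5, 2.0])
A = (rng.random((n, n)) < 0.1).astype(float)   # some directed adjacency matrix
np.fill_diagonal(A, 0)                         # no self-loops
X = rng.normal(size=(l, n))                    # X is l x n, as in the text
eps = rng.normal(0.0, sigma, size=n)
# reduced form of  y = X^T alpha + beta * A y / (n-1) + eps
y = np.linalg.solve(np.eye(n) - beta * A / (n - 1), X.T @ alpha + eps)
\end{verbatim}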
\subsection{Data description}\label{Data description}
With 622 million confirmed cases and 6.5 million deaths globally as of October 01, 2022, the COVID-19 pandemic has had a tremendous impact on the world, shrinking the global economy by 5.2\%, the largest recession since World War II \citep{world2020covid}. The travel bans in place worldwide have severely affected the tourism industry, with estimated losses of 900 billion to 1.2 trillion USD and tourism down 58\%-78\% \citep{le2022framework}. The airline industry has also suffered heavily, with 43 airlines declaring bankruptcy and 193 of 740 European airlines at risk of closing. Here we focus on the start of the pandemic, covering the transition to travel bans across the world, to study the effectiveness of travel bans in controlling COVID-19 incidence rates and the contribution of air travel to those rates.
We use pandemic data from the Johns Hopkins University coronavirus data repository through April 30, 2020 \citep{covid19database}. Flight data are from the Official Airline Guide (OAG) \citep{strohmeier2021crowdsourced}. Because only data for January and February 2020 are available from OAG, we estimated flight data for other time periods using the OpenSky Network database \citep{schafer2014bringing, strohmeier2021crowdsourced}. This database tracks the number of flights from one country to another over time, which we use to estimate country-to-country flight data for other months. We include as covariates the GDP, total population, and percentage of the urban population for each country in our network. Constructing a network based on the flight data and incorporating the above country-specific attributes such as GDP and population as covariates (with coefficients $\bm \alpha$) in \eqref{original_eqn1}, we aim to estimate the effectiveness of travel bans through $\beta$ in model \eqref{original_eqn1}.
\subsection{Generalized estimation equation (GEE) approach}\label{GEE approach}
Our network contains $K$ communities, where community $k$ is defined by
$$E_k=\{i: g_i=k\},$$
where index $g_i$ represents the community membership of node $i$, $|E_k|=n_k$ (the number of nodes in community $k$) and $\sum_k n_k=n$. Let $\bm y_k$ and $\bm \epsilon_k$ denote the vectors obtained by concatenating the $y_{i}$s and $\epsilon_{i}$s for $i \in E_k$, and let $\bm X_k=[\bm x_1, \bm x_2, ...,\bm x_{n_k}]$ denote the $l\times n_k$ sub-matrix of covariates corresponding to cluster $k$.
To fit our network regression model in the GEE framework, we use communities as clusters and use the following equation to model the node attributes of members of the $k$th cluster:
\begin{align}\label{network_reg_gee}
& \bm y_k = \bm X_k^{\top}\bm \alpha + \beta \bm Z_k +\bm \epsilon_k, \\ \nonumber
\end{align}
where $\bm A_k$ is the $n_k \times n_k$ sub-matrix of $\bm A$ pertaining to the cluster $k$, and $\bm Z_k=\bm A_k \bm y_k /(n-1)$.
The marginal mean $\bm \mu_k$ of $\bm y_k$ has the form:
$$\bm \mu_k=E(\bm y_k|\bm X_k)=(\bm I_{n_k}-\beta \bm A_k /(n-1))^{-1}\bm X_k^{\top}\bm \alpha, \;\;\;\ k=1,2,...,K,$$
where $\bm I_{n_k}$ is the identity matrix of order $n_k$.
Adopting a GEE approach \citep{liang1986longitudinal}, the resulting estimating equation is given by
\begin{align}\label{gee_objective_fn}
& \sum_{k=1}^{K}\bm D_k^{\top} \bm V_k^{-1}(\bm y_k-\bm \mu_k)=\bm 0, \;\;\ k=1,2,...,K, \\ \nonumber
\end{align}
where $\bm D_k=\frac{\partial \bm \mu_k}{\partial (\beta,\bm \alpha)^{\top}}$ is of dimension $n_k \times (l+1)$ and $\bm V_k$ is the $n_k\times n_k$ working covariance matrix of $\bm y_k$. The explicit form for $\bm D_k$ is
$$\bm D_k=[\underbrace{\tfrac{1}{n-1}(\bm I_{n_k}-(\beta/(n-1)) \bm A_k)^{-1}\bm A_k(\bm I_{n_k}-(\beta/(n-1)) \bm A_k)^{-1}\bm X_k^{\top}\bm \alpha}_{n_k\times 1}\;\;\ : \;\;\ \underbrace{(\bm I_{n_k}-(\beta/(n-1)) \bm A_k)^{-1}\bm X_k^{\top}}_{n_k \times l}].$$
One can note that $\bm D_k$ consists of two partitioned matrices where the first one corresponds to the network parameter $\beta$ and the second one is due to the covariate $\bm \alpha$. We can solve for $\hat{\bm \alpha}$ and $\hat{ \beta}$ in equation \eqref{gee_objective_fn} through iterative reweighted least squares, and use the robust sandwich covariance estimator to perform inference on $\bm \alpha$ and $ \beta$.
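For concreteness, a small numpy sketch (our own, with hypothetical inputs) that evaluates $\bm\mu_k$ and the two blocks of $\bm D_k$ for a single cluster; it mirrors the formulas above and is not the estimation code used in the paper.
\begin{verbatim}
import numpy as np

def cluster_mean_and_derivative(A_k, X_k, alpha, beta, n):
    """mu_k and D_k = d mu_k / d(beta, alpha) for one cluster (sketch)."""
    n_k = A_k.shape[0]
    M = np.linalg.inv(np.eye(n_k) - (beta / (n - 1)) * A_k)  # (I - beta A_k/(n-1))^{-1}
    mu_k = M @ X_k.T @ alpha                                 # marginal mean of cluster k
    d_beta = (1.0 / (n - 1)) * M @ A_k @ mu_k                # n_k x 1 block (w.r.t. beta)
    d_alpha = M @ X_k.T                                      # n_k x l block (w.r.t. alpha)
    return mu_k, np.column_stack([d_beta, d_alpha])
\end{verbatim}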
\section{Results}\label{Results}
\subsection{Theoretical results}\label{Theoretical results}
In this section, we prove the asymptotic normality of the resulting GEE estimator for $\beta$ and $\bm \alpha$ jointly. Towards this, we assume constant probabilities of edge formation within and between communities, denoted by $p$ and $q$ respectively, as in a stochastic block model \citep{holland1983stochastic}. Our proof of asymptotic normality hinges on Theorem 2 of \cite{liang1986longitudinal}, which establishes the asymptotic normality of the regression parameter in the classical GEE approach under the assumption that the correlation parameter, appropriately scaled by the number of communities, is consistently estimated. Our primary distinction from this approach is that we must account for probabilities of edge formation instead of correlation between observations. Therefore, we first establish the consistency of $\hat p$ and $\hat q$ following Proposition 1 of \cite{chen2021analysis} and subsequently show asymptotic normality of $(\hat{\beta},
\hat{\bm \alpha})$ from the estimating equation \eqref{gee_objective_fn}.
\subsubsection{Consistency of $\hat{p}$ and $\hat{q}$}\label{Consistency results}
\begin{proposition}\label{const_p_q}
Consider a network generated from a stochastic block model of $K$ communities of size $m$ so that total number of nodes is $n=Km$. Assume that $m^{\gamma}p \rightarrow p^{*}$ and $m^{\gamma}q \rightarrow q^{*}$ as $n \rightarrow \infty$, where $p^{*}$ and $q^{*}$ are positive fixed constants, and $\gamma \in [0,2)$, then $m^{1+\gamma/2}(\hat{p}-p)$ and $m^{1+\gamma/2}(\hat{q}-q)$ are both $o_P(1)$.
\end{proposition}
\begin{proof}
See the appendix in \Cref{proof_const_p_q}.
\end{proof}
Guided by the above proposition, we establish the asymptotic normality of the model parameters in the following theorem. The key difference with the classical formula is the inclusion of the community size in the covariance coming from the consistency of $p$ and $q$.
\subsubsection{Asymptotic normality of $\hat{\beta}$}
\begin{theorem}\label{asym_norm}
Under the conditions of \Cref{const_p_q} $K^{1/2}m^{1+\gamma/2}\big((\hat{\beta},\hat{\bm \alpha})-(\beta,\bm \alpha)\big)^{\top}$ is asymptotically multivariate normal with zero mean and covariance given by
$$V=\lim_{K\rightarrow \infty}K m^{2+\gamma}\Big(\sum_{k=1}^{K}\bm D^{\top}_k\bm V_k^{-1}\bm D_k\Big)^{-1}\Big(\sum_{k=1}^{K}\bm D^{\top}_k\bm V_k^{-1}\mathrm{cov}(\bm y_k)\bm V_k^{-1}\bm D_k\Big)\Big(\sum_{k=1}^{K}\bm D^{\top}_k\bm V_k^{-1}\bm D_k\Big)^{-1}.$$
\end{theorem}
\begin{proof}
See the appendix in \Cref{asym_norm_pf}.
\end{proof}
\textit{Remark:} The variance formula involves the term $m^{2+\gamma}$, where $m$ is the community size. If $m$ is large, then the covariance will increase at a rate $m^{2+\gamma}$. This is reminiscent
of the well-known fact that the sandwich estimator $\hat{V}$ of the covariance of $(\beta,\bm \alpha)^{\top}$ is not stable if $m$ is large relative to $K$. In essence, if $m$ grows at a similar rate to $K$, the sandwich estimator becomes an unstable estimator of the covariance. In practice this implies that the GEE approach works best when the network contains many smaller communities rather than only a few larger communities.
\subsection{Simulations}\label{Simulations}
We simulate a network of $n$ nodes having balanced communities of size $m=10$ via a stochastic block model and vary $n$ in $\{200, 400\}$ with the number of communities ($K$) being $20$ and $40$ for both values of $n$, respectively. Let $p$ and $q$ denote the within community and between community probabilities of an edge formation as in a stochastic block model. In each setting, we vary $(p, q) \in \{(0.8,0), (0.7,0.1), (0.6,0.2), (0.5, 0.3)\}$.
\subsubsection{Estimation of $\beta$ and $\alpha$}
The true model parameters and the corresponding data-generating process are chosen to span networks with varying degrees of modularity. We set $\beta_0=0.5$ and $\bm \alpha_0=(1, 1, 1, 1, 0.5, 0.5, 0.5, -0.5, -0.5, 2)^{\top}$ ($l=10$ in equation \eqref{original_eqn1}). We simulate the $l \times n$ matrix $\bm X$ from a multivariate normal distribution in the following manner. The $j$th column of $\bm X$, if node $j$ belongs to community $k$ ($k=1,2,...,K$), follows MVN$((k/10) \bm 1_l, 0.0001 \bm I_l)$, where $\bm 1_l$ and $\bm I_l$ are the vector of 1s and the identity matrix of dimension $l$, respectively. In each setting, the adjacency matrix $\bm A$ of dimension $n$ is simulated from the stochastic block model with $K$ communities of the aforementioned size, such that the within- and between-community edge probabilities are $p$ and $q$, respectively. Finally, the response variable $y_i$ is generated according to equation \eqref{original_eqn1} with $\sigma = 0.01$.
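The data-generating process just described can be condensed into a few lines of code. The sketch below (our own; the seed and the code organization are not the authors') produces the adjacency matrix, the covariate matrix and the response for one replication.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_sbm_data(K, m, p, q, alpha, beta, sigma=0.01):
    """Directed SBM adjacency, community-shifted covariates, and y from model (1)."""
    n, l = K * m, len(alpha)
    labels = np.repeat(np.arange(1, K + 1), m)       # community of each node
    same = labels[:, None] == labels[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(float)
    np.fill_diagonal(A, 0)
    # column j of X ~ MVN((k/10) 1_l, 0.0001 I_l), where k is node j's community
    X = rng.normal(loc=np.tile(labels / 10.0, (l, 1)), scale=0.01)
    eps = rng.normal(0.0, sigma, size=n)
    y = np.linalg.solve(np.eye(n) - beta * A / (n - 1), X.T @ alpha + eps)
    return A, X, y, labels

alpha0 = np.array([1, 1, 1, 1, 0.5, 0.5, 0.5, -0.5, -0.5, 2.0])
A, X, y, labels = simulate_sbm_data(K=20, m=10, p=0.8, q=0.0, alpha=alpha0, beta=0.5)
\end{verbatim}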
To estimate $\beta$ and $\bm \alpha$, we first perform community detection on the directed graph obtained from the adjacency matrix $\bm A$ as in \citet{rosvall2007maps}. Next, with the resulting communities we fit a GEE using the \textit{geepack} R package \citep{geeglm} and report the estimates of bias and variance by averaging over $B=1000$ replications.
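The paper fits the working model with the \textit{geepack} package in R. A rough Python analogue (a substitution on our part, using the \texttt{statsmodels} GEE implementation) treats the network term $\bm Z=\bm A\bm y/(n-1)$ as an ordinary covariate and clusters observations by the detected communities; it is meant only to convey the workflow, not to reproduce the authors' exact fit.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

# A, X, y, labels as produced by the simulation sketch above; in practice
# `labels` would instead come from the community-detection step.
Z = A @ y / (len(y) - 1)                    # network term, here using the full adjacency
exog = np.column_stack([Z, X.T])            # network term first, then covariates
fit = sm.GEE(y, exog, groups=labels,
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.params[0])                        # coefficient of the network term
\end{verbatim}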
The squared bias and variance of the estimated $\beta$ increase as the networks become less modular (i.e.\ as $q$ increases) for both our GEE approach and naive least squares (see \Cref{method}). The naive least squares method does not assume any community structure and reports the parameter estimates obtained by assuming independence between observations, using equation \eqref{original_eqn1} directly.
From \Cref{method} we see that less modular networks are more difficult to fit in general, based on the increase in bias for both methods. Despite this, our GEE approach is uniformly less biased than naive least squares, which demonstrates that controlling for correlation within communities is effectively done by GEE. This also exposes the weakness of our GEE approach: if the network is not modular, with high degrees of mixing between communities (i.e. large $q$), the GEE framework cannot accommodate this due to its assumption of independence between communities. However, even when $q$ is large we still observe that GEE mitigates the impact of correlation induced by the network at least partially, which explains why its bias is smaller than that of naive least squares. Essentially, even when the network structure suggests the GEE assumptions are incorrect, one is still generally better off partially accounting for network structure using our GEE approach rather than ignoring community structure altogether.
The smaller standard errors demonstrated by least squares in \Cref{method} reflect the inaccuracy of the method. By ignoring community structure and assuming independence between observations in network regression settings, we expect to see anti-conservative hypothesis tests, too-narrow confidence intervals, and general overconfidence. This overconfidence is reflected in the too-small standard errors for naive least squares as well as in the anti-conservative Type I errors we demonstrate next.
\begin{figure}
\caption{\textbf{Bias squared and standard error of estimates of $\beta$ with varying degrees of network modularity.}}
\label{method}
\end{figure}
\subsubsection{Hypothesis testing of $\beta$}
We consider an approach to testing the hypothesis $H_0:\beta=0$ against the alternative $H_A:\beta\ne 0$. We perform a simulation study to obtain the Type I error in a variety of network structures. We consider two working correlation structures for our GEE model: independence and exchangeable.
First, we simulate our data as in \Cref{Simulations} with $\beta=0$. We obtain an empirical null distribution by replicating this procedure $B=1000$ times to obtain $\hat{\beta}^{(1)}, \hat{\beta}^{(2)},..., \hat{\beta}^{(B)}$. We estimate the P-value by $\sum_{b=1}^{B}I(\vert \hat{\beta}^{(b)}\vert \geq \vert \hat{\beta}\vert)/B$. We summarize our Type I error results at the $0.05$ significance level in \Cref{tab0}, which demonstrates that as network modularity decreases ($q$ gets larger), the hypothesis test for $H_0$ becomes anti-conservative. This is true for both least squares and GEE, although least squares is more adversely impacted. In addition, while GEE excels in the highly modular setting, as expected, LS is still anti-conservative there. This demonstrates the efficacy of GEE as a better inferential tool than naive least squares in network regression settings, even when the GEE assumption of between-community independence does not hold. It is also instructive to note that although the Type I errors for the independent correlation structure are generally slightly smaller than those for the exchangeable structure, the differences are not overwhelming.
\begin{table}[h]
\caption{ \textbf{ Type I error for the test of $H_0: \beta=0$.} Comparison of Type-I error at the $0.05$ significance level for a network of $n$ nodes with $K$ balanced communities for different choices of within community edge probability ($p$) and between community edge probability ($q$) among GEE with independent
and exchangeable correlation structure and naive least squares.}\label{tab0}
\begin{center}
\begin{tabular}{ c | c |c |c|c}
\hline
$(n,K)$ & $(p,q)$ & GEE (independent)& GEE (exchangeable) & LS \\
\hline
&(0.8, 0) & 0.049 & 0.055 &0.062 \\
$(200,20)$ &(0.7, 0.1) & 0.050 & 0.055 & 0.079 \\
&(0.6, 0.2) & 0.058 & 0.061& 0.091 \\
&(0.5, 0.3) & 0.072 & 0.075 & 0.095 \\
\hline
&(0.8, 0) & 0.050 &0.050 &0.060 \\
$(400,40)$ &(0.7, 0.1) & 0.051 & 0.056& 0.075 \\
&(0.6, 0.2) & 0.059 & 0.062 & 0.090 \\
&(0.5, 0.3) & 0.070 & 0.075 & 0.096 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Real data analysis}\label{Real data analysis}
Air travel networks were constructed from flight data retrieved from the Official Airline Guide (OAG), with node attribute outcomes being the country-specific COVID-19 incidence rates by month available from the Johns Hopkins University coronavirus data repository through April 30, 2020, \citep{covid19database}. In addition we included country-specific GDP \citep{capita2020availablegdp},
total population \citep{capita2020availabletotpop},
and percentage of the urban population \citep{capita2020availableurban}
of the countries from the website of the World Bank as covariates. We study the effectiveness of travel bans on the COVID-19 incidence rate using our network regression model by performing a month-by-month analysis from January to April 2020, which spans the period from before the pandemic until one month after the travel restrictions were in effect. In addition, we study the importance of baseline covariate effects such as GDP, percentage of urban population, and population size of the countries versus the network effect. The GDP and the population of the largest countries are on the order of $10^{12}$ and $10^6$ respectively, so we scale the GDP, population, and percentage of urban population by $10^{12}$, $10^{6}$, and $10^{2}$, respectively, in order to stabilize coefficient estimation.
With these data retrieved across different continents, to model the network effect through our proposed model \eqref{original_eqn1} we assume a directed stochastic block model framework, where nodes correspond to countries and each block contains countries having a large number of commercial flights traveling among them compared to the others. Edge formation is determined by thresholding the population-normalized count of flights arriving at the destination country.
\begin{figure}
\caption{\textbf{Changes in air travel networks and the impact of incidence rates over time.}}
\label{monthly_plot}
\end{figure}
We let the incidence rate $y_i$ of the $i$th country be the number of cases per 1000 population, and the covariate matrix be $\bm x_{i_{n \times 4}}=(\bm 1_{n \times 1}: \bm x_{{i2}_{n\times 1}} : \bm x_{{i3}_{n\times 1}} : \bm x_{{i4}_{n\times 1}})$, whose first column of 1s is the intercept and whose second, third and fourth columns contain the population size, GDP, and percentage of the urban population of each country, scaled by $10^6$, $10^{12}$, and $10^2$ respectively. For constructing the adjacency matrix $\bm A$, we consider two scenarios: unweighted and weighted. For unweighted adjacency matrices, from the directed graph dictated by the number of flights in a particular month, we first construct a count matrix $\bm C$ whose entries count the number of flights from one country to another. Then we construct an unweighted adjacency matrix $\bm A$ with 0s and 1s, where an entry of $\bm A$ is 1 if the corresponding entry of $\bm C$ exceeds the third quartile of the elements of $\bm C$, and 0 otherwise. For weighted adjacency matrices we divide the elements of $\bm C$ by the total population of the destination country per million. The main rationale behind this scaling is that one would expect more flights traveling to a populous country than to a less populous one; dividing by the population of the destination country therefore puts the elements of the weighted adjacency matrix on a common scale.
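A compact sketch of this construction (our own; the orientation of the count matrix and the variable names are assumptions made for illustration) is given below.
\begin{verbatim}
import numpy as np

def flight_adjacency(C, pop_millions, weighted=False):
    """Adjacency matrix from a flight count matrix, as described above.
    C[i, j] is taken to be the number of flights from country i to country j
    (an assumed orientation); pop_millions[j] is country j's population in millions."""
    C = np.asarray(C, dtype=float).copy()
    np.fill_diagonal(C, 0.0)
    if weighted:
        # scale each column by the destination country's population (per million)
        return C / np.asarray(pop_millions, dtype=float)[None, :]
    threshold = np.quantile(C, 0.75)         # third quartile of the entries of C
    return (C > threshold).astype(float)
\end{verbatim}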
With the two adjacency matrices constructed in this way, we fit model \eqref{gee_objective_fn} and summarize the results in \Cref{tab1}. The estimates of $\beta$ increase from February to April both for the weighted and unweighted cases, suggesting that there is an increasing association between travel and the spread of the pandemic. To further investigate this behavior, we plot the average number of flights per million population in \Cref{monthly_plot}, which shows that although the average number of flights is decreasing from January to April, the estimates of $\beta$ are increasing. This reflects the fact that while travel bans led to a decrease in the total number of flights in March and April, the increasing $\beta$ in these months implies that each flight had an increased likelihood of transmitting COVID-19 to the destination country, increasing the correlation between the incidence rates in the two countries.
\Cref{tab1} also demonstrates that while the overall population of a country has a negligible effect on the incidence rate, leading to small values of $\alpha_2$, the percentage of the urban population plays a crucial role, especially once the travel ban was in effect; for example, the value of $\alpha_4$ is about 62 (55 in the weighted case) in April compared to essentially 0 in February. Moreover, the corresponding values are consistent for the weighted and unweighted networks.
\Cref{tab1} also demonstrates the utility of including baseline covariates alongside the network effect to study the effectiveness of travel bans in mitigating the spread of COVID-19. By comparing the values of $\beta$ and $\alpha_4$, one can note that in the initial months of the pandemic the network effect is larger than the effect of the urban population. However, once the travel ban had taken place during March (for most of the countries), the effect of the urban population supersedes the network effect, as its value increases drastically from 9.88 to 62.33, suggesting that urbanity is the next important factor to consider for controlling the spread of the pandemic after travel bans.
We also compare our results with naive least squares in \Cref{tab20}, which shows coefficient estimates for both models. Of note, the April estimate of $\beta$ is dramatically larger in the naive least squares model, which may be a manifestation of the increased bias we expect to see for naive least squares. Standard errors for the coefficient estimates are uniformly smaller for least squares, as we expect, likely reflecting the overconfidence of the least squares model that comes from ignoring network-induced correlation. This case demonstrates that using least squares would incorrectly inflate both the magnitude and the apparent significance of the network effects on incidence rates, particularly in the months post-lockdown when travel bans were in place. In contrast, the GEE model provides a more realistic view.
\vskip .1cm
\begin{table}[h]
\caption{\textbf{Estimates of the parameters for unweighted and (weighted) networks under the GEE model.} $\beta, \alpha_1, \alpha_2, \alpha_3$ and $\alpha_4$ correspond to the coefficients of the adjacency matrix (network effect), intercept, population, GDP, and percentage of the urban population, respectively.}\label{tab1}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{ c | c | c | c |c |c}
\hline
Month of 2020 & $\beta$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_4$\\
\hline
Jan & 1.8759 (0.0038) & -0.0002 (-0.0001) & 0.0000 (0.0000) & 0.0013 (0.0013) & 0.0003 (0.0000 )\\
Feb & 2.1946 (0.0014) & -0.0048 (-0.0037) & 0.0000 (0.0000) & 0.0555 (0.0575) & -0.0094 (-0.0072 )\\
Mar & 2.7186 (0.0030) & -0.7846 (-0.7490) & -0.0005 (-0.0007) & -0.1446 (-0.0233) & 9.8844 (9.8862 )\\
Apr& 4.6574 (0.0328) & -4.7307 (-3.5655) & -0.0085 (-0.0125) & 0.0626 (0.4685) & 62.3300 (54.9767)\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[ht]
\caption{\textbf{Comparison of the naive linear regression with GEE for the weighted networks.} The numbers reported in the parenthesis correspond to our GEE method while the others correspond to least squares. $\beta, \alpha_1, \alpha_2, \alpha_3$ and $\alpha_4$ correspond to the coefficients of the adjacency matrix, intercept, population, GDP, and percentage of the urban population respectively.}\label{tab20}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{ c | c | c | c |c |c}
\hline
Month of 2020 & $\beta$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_4$\\
\hline
Jan & 0.0424 (0.0038) & -0.0001 (-0.0001) & 0.0000 (0.0000) & 0.0013 (0.0013) & 0.0000 (0.0000 )\\
Feb & 0.0105 (0.0014) & -0.0037 (-0.0037) & 0.0000 (0.0000) & 0.0576 (0.0575) & -0.0073 (-0.0072 )\\
Mar & 0.0458 (0.0030) & -0.7060 (-0.7490) & -0.0009 (-0.0007) & 0.0017 (-0.0233) & 9.5519 (9.8862 )\\
Apr& 1.2655 (0.0328) & -5.5415 (-3.5655) & -0.0073 (-0.0125) & -0.2041 (0.4685) & 67.5900 (54.9767)\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\section{Discussion}\label{Discussion}
We have proposed a generalized estimating equation (GEE) approach to the network regression model. Assuming independence between communities and estimating community memberships from the observed network, the GEE approach allows covariate coefficient estimation that accounts for community structure, and thereby provides a flexible and efficient solution to the network regression model. Moreover, this allows us to perform hypothesis tests on the network regression parameter $\beta$, which helps us decide whether such a term should be included in the analysis. We provided a relevant real data example of COVID-19 cases along with baseline covariates such as GDP, population size, and percentage of urban population of countries across different continents, and the number of commercial flights traveling between them, to study the importance of travel bans in mitigating the spread of the COVID-19 pandemic. We constructed the adjacency matrix from the count of flights, rendered as a network of countries via a stochastic block model where each block contains countries having a similar number of flights. Our proposed model has helped us to understand the importance of the baseline covariates versus the network effect. Since we have dealt with longitudinal data, it is also instructive to note that our proposed model offers the flexibility of clustering both in the network space and over time.
One limitation of our results is the balanced community assumption made in Proposition \ref{const_p_q}, which may not reflect some unbalanced networks. This assumption can be relaxed somewhat to allow small departures from the balanced design. Further, under the stochastic block model, extremely unbalanced networks still allow for consistent estimation of $p$ and $q$; however, for a more flexible network model that allows for community-specific edge probabilities, consistent estimation of the edge probabilities requires each community to grow asymptotically with $n$.
\section{Appendix}\label{Appendix}
\subsection{Proof of \Cref{const_p_q}}\label{proof_const_p_q}
From the construction of the adjacency matrices, one can note that the entries $A_{ij}$s are i.i.d. Bernoulli random variables with
$$
E(A_{ij})=
\begin{cases}
p, \;\;\ \text{if } i, j \text{ belong to the same community}\\
q, \;\;\ \text{otherwise} \\
\end{cases}
$$
and
$$
Var(A_{ij})=
\begin{cases}
p(1-p), \;\;\ \text{if } i, j \text{ belong to the same community}\\
q(1-q), \;\;\ \text{otherwise} \\
\end{cases}
$$
Denote by $S$ the set $\{(i,j): 1\leq i<j\leq n,\ i,j \text{ belong to the same community}\}$, and its complement by $S'$. Therefore, for a directed graph, one can write
\begin{align*}
& E\Big(\sum_{i,j\in S}A_{ij}\Big)=2K{m \choose 2}p, \\& Var\Big(\sum_{i,j\in S}A_{ij}\Big)=2K{m \choose 2}p(1-p)=s_{m_p}^2,\\
& E\Big(\sum_{i,j\in S'}A_{ij}\Big)=K
(K-1)m^2q, \text{ and }\\
&Var\Big(\sum_{i,j\in S'}A_{ij}\Big)=K
(K-1)m^2q(1-q)=s_{m_q}^2.\\
\end{align*}
Next, one can verify the Lindeberg condition to establish the central limit theorem:
\begin{align}\label{lindeberg_cond}
& \frac{\sum_{i,j\in S}A_{ij}-2K{m \choose 2}p}{\sqrt{2K{m \choose 2}p(1-p)}} \;\;\ \text{ and }\;\; \frac{\sum_{i,j\in S'}A_{ij}-K(K-1)m^2q}{\sqrt{K(K-1)m^2q(1-q)}}\overset{d}{\rightarrow} N(0,1)\\ \nonumber
\end{align}
as $Km^2p(1-p)$ and $Km^2q(1-q) \rightarrow \infty$.\\
Lindeberg's condition requires us to verify
$$\frac{1}{s_{m_p}^2}\sum_{i,j\in S}E\Big(B_{ij}^2I_{\{|B_{ij}|\geq \epsilon s_{m_p}\}}\Big) \rightarrow 0 \;\;\ \text{ and }\frac{1}{s_{m_q}^2}\sum_{i,j\in S'}E\Big(B_{ij}^2I_{\{|B_{ij}|\geq \epsilon s_{m_q}\}}\Big) \rightarrow 0,$$
where $B_{ij}=A_{ij}-E(A_{ij})$. Since $|B_{ij}|\leq 1$, the above condition is satisfied when $Km^2p(1-p)$ and $Km^2q(1-q)$ tend to $\infty$, in which case $\epsilon s_{m_p}$ and $\epsilon s_{m_q}$ eventually exceed 1 and the indicators vanish.\\
Dividing the numerator and denominator of \eqref{lindeberg_cond} by $2K{m \choose 2}$ and $K(K-1)m^2$ respectively, one obtains that
$m^{\gamma/2}\sqrt{2K{m \choose 2}}\frac{\sum_{i,j\in S}A_{ij}/\Big(2K{m \choose 2}\Big)-p}{\sqrt{m^{\gamma}p(1-p)}} \;\;\ \text{ and }\;\; m^{\gamma/2}\sqrt{K(K-1)m^2}\frac{\sum_{i,j\in S'}A_{ij}/\Big(K(K-1)m^2\Big)-q}{\sqrt{m^{\gamma}q(1-q)}}$ both converge to the $N(0,1)$ distribution.
Since $\hat{p}=\sum_{i,j\in S}A_{ij}/\Big(K{m \choose 2}\Big)$ and $\hat{q}=\sum_{i,j\in S'}A_{ij}/\Big(K(K-1)m^2\Big)$, both $\hat{p}$ and $\hat{q}$ are $m^{1+\gamma/2}$-consistent.
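As a purely numerical illustration of the proposition (not part of the proof; the simulation parameters are arbitrary), one can simulate a balanced directed stochastic block model and check that the empirical within- and between-community edge frequencies, which coincide with $\hat{p}$ and $\hat{q}$ up to the normalizing counts above, concentrate around the true values.
\begin{verbatim}
import numpy as np

def simulate_sbm(K, m, p, q, rng):
    """Directed adjacency matrix of a balanced SBM: K communities of size m."""
    labels = np.repeat(np.arange(K), m)
    same = labels[:, None] == labels[None, :]
    A = (rng.random((K * m, K * m)) < np.where(same, p, q)).astype(int)
    np.fill_diagonal(A, 0)
    return A, same

rng = np.random.default_rng(1)
K, m, p, q = 4, 200, 0.30, 0.05
A, same = simulate_sbm(K, m, p, q, rng)
off_diag = ~np.eye(K * m, dtype=bool)
p_hat = A[same & off_diag].mean()     # within-community edge frequency
q_hat = A[~same].mean()               # between-community edge frequency
print(p_hat, q_hat)                   # close to (0.30, 0.05); error shrinks with m
\end{verbatim}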
\subsection{Proof of \Cref{asym_norm}}\label{asym_norm_pf}
Here we present a sketch of the proof of \Cref{asym_norm}. One can write \eqref{gee_objective_fn} as
\begin{align}\label{obj_fun}
& \sum_{k=1}^{K}U_k(\alpha,\bm \beta,p,q)=\sum_{k=1}^{K}\bm D_k^{\top}\bm V_k^{-1}\bm S_k=0,\\ \nonumber
\end{align}
where $\bm S_k=\bm y_k-\bm \mu_k$, and $U$ is a function of the model parameters.
Let $\bm b=(\beta,\bm \alpha)^{\top}$ denote the vector of model parameters and $\bm \pi =(p,q)^{\top}$ the vector of within- and between-community edge probabilities. Holding $\bm b$ fixed, the Taylor expansion yields
\begin{align}\label{taylor}
& \frac{\sum_{k=1}^{K}U_k(\bm b, \bm \pi^*)}{K^{1/2}m^{1+\gamma/2}}=\frac{\sum_{k=1}^{K}U_k(\bm b, \bm \pi)}{K^{1/2}m^{1+\gamma/2}}+\frac{\sum_{k=1}^{K}\frac{\partial U_k(\bm b,\bm \pi)}{\partial \pi}}{K^{1/2}m^{1+\gamma/2}}m^{1+\gamma/2}(\bm \pi^*-\bm \pi)+o_P(1)\\ \nonumber
&\hspace{2.8cm} =\tilde{A}+\tilde{B}\tilde{C}+o_P(1),\\ \nonumber
\end{align}
One can note that $\tilde{B}=o_P(1)$, as $\partial U_k(\bm b,\bm \pi)/\partial \bm \pi$ are linear functions of the $\bm S_k$'s defined in \eqref{obj_fun}, whose means are zero, and $\tilde{C}=O_P(1)$ thanks to \Cref{const_p_q}. Therefore, $\frac{\sum_{k=1}^{K}U_k(\bm b, \bm \pi^*)}{K^{1/2}m^{1+\gamma/2}}$ is asymptotically equivalent to $\frac{\sum_{k=1}^{K}U_k(\bm b, \bm \pi)}{K^{1/2}m^{1+\gamma/2}}$, whose asymptotic distribution is multivariate normal with zero mean and covariance equal to $V$ as defined in \Cref{asym_norm}. The proof is thus complete following \cite{liang1986longitudinal}.
\end{document}
\begin{document}
\title{On Ergotropic Gap of Tripartite Separable Systems}
\author{Ya-Juan Wu$^1$}
\author{Shao-Ming Fei$^{2,3}$}
\author{Zhi-Xi Wang$^2$}
\thanks{Corresponding author: [email protected]}
\author{Ke Wu$^2$}
\thanks{Corresponding author: [email protected]}
\affiliation{
{\footnotesize $^1$School of Mathematics, Zhengzhou University of Aeronautics, Zhengzhou 450046, China}\\
{\footnotesize $^2$School of Mathematical Sciences, Capital Normal University, Beijing 100048, China}\\
{\footnotesize $^3$Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany}
}
\begin{abstract}
Extracting work from quantum systems is one of the important topics in quantum thermodynamics. As a significant thermodynamic quantity, the ergotropic gap characterizes the difference between the global and local maximum extractable works. We derive an analytical upper bound on the ergotropic gap for $d\times d\times d$ tripartite separable states. This bound also provides a necessary criterion for the separability of tripartite states. Detailed examples are presented to illustrate the efficiency of this separability criterion.
\end{abstract}
\maketitle
\section{Introduction}
Information plays important roles in thermodynamics \cite{RL, CH, KM}.
Work extraction is a significant aspect of thermodynamics, while quantum correlations are the basic resources in quantum information processing tasks. Their connections have been extensively studied \cite{RD, KF, JO, RA, WH, SJ, OCO, KMF, VV, HC}.
Extracting work from quantum systems has gained renewed interest in quantum thermodynamics \cite{LD, HB}. In \cite{GFJ}, the authors introduced the concept of ergotropy: the maximum work that can be gained from a quantum state with respect to some reference Hamiltonian under cyclic unitary transformations. The maximum extractable work can be naturally divided into two parts: the local contribution from each subsystem, named the local ergotropy, and the global contribution originating from correlations among the subsystems, named the global ergotropy \cite{MP}. Under global operations on the subsystems, the extractable work is proportional to the quantum mutual information \cite{JO, SJ}.
As the extractable work under cyclic local interactions is strictly less than that obtained under global interactions, the presence of quantum correlations among the subsystems may
result in a non-vanishing ergotropic gap in the case of non-degenerate energy subspaces \cite{AMA}. This fact brings new insight into the quantum-to-classical transition in thermodynamics.
As an essential kind of correlations, quantum entanglement plays an important role in quantum teleportation, quantum cryptography, quantum dense coding, quantum secret sharing and the development of quantum computer \cite{C.H1, C.H2, C.H3, C.B, R.H, Z.Z, DG, YA}.
Distinguishing quantum entangled states from separable ones is one of the significant but difficult problems in the theory of quantum entanglement. Numerous entanglement criteria have been proposed, such as the positive partial transpose (PPT) criterion \cite{AP, MHPH}, realignment criteria \cite{OR, KC, CJZ}, correlation matrix or tensor criteria \cite{JI1, JI2, ASM, JI3}, covariance matrix criteria \cite{OG1, OG2}, entanglement witnesses \cite{MHPH, BMT}, separability criteria via measurements \cite{CSM}, and so on. In \cite{MA}, Mir Alimuddin {\it et al.} presented a separability criterion (the operational thermodynamic criterion) based on the evaluation of the ergotropic gap of bipartite systems.
In this paper, we focus on the thermodynamic quantity called the ergotropic gap, given by the difference between the global ergotropy and the local ergotropy of a tripartite quantum system.
We present an upper bound on the ergotropic gap for arbitrary dimensional tripartite separable states. Violation of the bound provides a sufficient criterion for the entanglement of tripartite states.
\section{Bounds on the ergotropic gap of tripartite separable states}
Consider a $d\times d\times d$ tripartite state $\rho_{ABC}\in\mathcal{D}\{\mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_C\}$, where $\mathcal{H}_{X}(X=A, B, C)$ denotes the Hilbert space corresponding to the subsystem $X$ and $\mathcal{D}(\mathfrak{X})$ denotes the set of density operators acting on the Hilbert space $\mathfrak{X}$. The subsystem $X$ is governed by the local Hamiltonian $H_{X}=\sum\limits_{j=0}^{d-1}j\,E|j\rangle\langle j|$, where $j\,E$ and $|j\rangle$ are the $j$-th energy eigenvalue and eigenvector of $H_X$, respectively. The total non-interacting global Hamiltonian is $H_{ABC}=H_A\otimes I_B\otimes I_C+I_A\otimes H_B\otimes I_C+I_A\otimes I_B\otimes H_C$, where $I_{X}$ denotes the identity operator acting on the Hilbert space $\mathcal{H}_{X}$. Under a cyclic Hamiltonian process, a time-dependent unitary operation $U(\tau)=\overrightarrow{exp} \left( -\frac{i}{\hbar} \int^{\tau }_{0} dt\left[ H_{ABC}+V(t)\right] \right) $ can be applied, where $\overrightarrow{exp}$ denotes the time-ordered exponential and $V(t)$ denotes a time-dependent interaction among the subsystems. The work extracted from an isolated tripartite system under such a process is the change in the average energy of the system, $\mathcal{W}_e =\mathrm{Tr}((\rho_{ABC}-U\rho_{ABC}U^\dag)H_{ABC})$. Within this framework, the maximum work extracted from the isolated tripartite state $\rho_{ABC}$, by transforming it to the corresponding passive state $\rho_{ABC}^p$, is defined below in (\ref{1}).
The maximum extractable work, called global ergotropy, is defined by
\begin{equation}\label{1}
\begin{split}
\mathcal{W}_e^g &=\max\limits_{U\in\mathcal{L}(\mathcal{H}_A\otimes\mathcal{H}_B
\otimes\mathcal{H}_C)}\mathrm{Tr}((\rho_{ABC}-U\rho_{ABC}U^\dag)H_{ABC})\\
&=\mathrm{Tr}(\rho_{ABC}H_{ABC})-\min\limits_{U\in \mathcal{L}(\mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_C)}\mathrm{Tr}(U\rho_{ABC}U^\dag H_{ABC})\\
&=\mathrm{Tr}(\rho_{ABC}H_{ABC})-\mathrm{Tr}(\rho_{ABC}^pH_{ABC}),
\end{split}
\end{equation}
where $\mathcal{L}(X')$ denotes the set of all bounded linear operators on the Hilbert space $X'$ and $\rho_{ABC}^p$ is the passive state associated with the Hamiltonian $H_{ABC}$ of the system $ABC$, from which no work can be extracted. It has the form $\rho_{ABC}^p=\sum_{j} \rho_{j} |j\rangle \langle j|$ with $\rho_{j+1} \leqslant \rho_{j}$. $\rho_{ABC}$ and $\rho_{ABC}^p$ have the same spectrum, and therefore there exists a unitary operator $U$ transforming the former to the latter.
The total work achievable by local unitary operations, called the local ergotropy, is given by
\begin{equation}\label{W}
\mathcal{W}_e^l =\mathcal{W}_e^A+\mathcal{W}_e^B+\mathcal{W}_e^C,
\end{equation}
where $\mathcal{W}_e^A$, $\mathcal{W}_e^B$ and $\mathcal{W}_e^C$ are the maximum local extractable works from systems $A$, $B$ and $C$, respectively,
\begin{equation}
\begin{split}
\mathcal{W}_e^A &=\mathrm{Tr}(\rho_{ABC}H_A\otimes I_B\otimes I_C)-\min\limits_{U\in \mathcal{L}(\mathcal{H}_A)}\mathrm{Tr}((U_A\otimes I_B\otimes I_C)\rho_{ABC}(U_A\otimes I_B\otimes I_C)^\dag H_A\otimes I_B\otimes I_C)\\
&=\mathrm{Tr}(\rho_{ABC}H_A\otimes I_B\otimes I_C)-\mathrm{Tr}(\rho_A^pH_A),
\end{split}
\end{equation}
and, similarly, $\mathcal{W}_e^B =\mathrm{Tr}(\rho_{ABC}I_A\otimes H_B\otimes I_C)-\mathrm{Tr}(\rho_B^pH_B)$,
$\mathcal{W}_e^C=\mathrm{Tr}(\rho_{ABC}I_A\otimes I_B\otimes H_C)-\mathrm{Tr}(\rho_C^pH_C)$
with $\rho_A^p$, $\rho_B^p$ and $\rho_C^p$ the passive states associated with the subsystems $A$, $B$ and $C$, respectively.
Hence,
\begin{equation}
\mathcal{W}_e^l =\mathrm{Tr}(\rho_{ABC}H_{ABC})-\big\{\mathrm{Tr}(\rho_A^p H_A)+\mathrm{Tr}(\rho_B^p H_B)+\mathrm{Tr}(\rho_C^p H_C)\big\}.
\end{equation}
The difference between the global ergotropy and the local ergotropy is called the ergotropic gap $\Delta_{EG}$,
\begin{equation}\label{EG}
\Delta_{EG}=\mathcal{W}_e^g-\mathcal{W}_e^l=\big\{\mathrm{Tr}(\rho_A^p H_A)+\mathrm{Tr}(\rho_B^p H_B)+\mathrm{Tr}(\rho_C^p H_C)\big\}-\mathrm{Tr}(\rho_{ABC}^pH_{ABC}).
\end{equation}
Indeed, $\Delta_{EG}\geq0$, as global unitary operations are capable of extracting work from the subsystems as well as from the correlations among them. Clearly, the ergotropic gap depends on the various kinds of correlations present in a tripartite quantum system.
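The quantities in \eqref{1}--\eqref{EG} are straightforward to evaluate numerically. The following Python sketch is only an illustration of the definitions; it assumes plain NumPy, the computational-basis ordering $|abc\rangle \mapsto ad^2+bd+c$, and the equally spaced local Hamiltonian $H_X=E\,\mathrm{diag}(0,1,\dots,d-1)$ used throughout, and it is not part of the derivation below.
\begin{verbatim}
import numpy as np
from functools import reduce

def passive_energy(spectrum, energies):
    """Passive-state energy: largest populations on the lowest levels."""
    pops = np.sort(np.real(spectrum))[::-1]        # nonincreasing populations
    levels = np.sort(np.asarray(energies))         # nondecreasing energies
    return float(pops @ levels)

def ergotropic_gap(rho, d, E=1.0):
    """Delta_EG of a d x d x d state rho, with H_X = E*diag(0,...,d-1)."""
    h = E * np.arange(d)
    H_levels = reduce(np.add.outer, (h, h, h)).reshape(-1)   # spectrum of H_ABC
    rho_t = rho.reshape((d,) * 6)                  # indices (a,b,c,a',b',c')
    t = np.trace(rho_t, axis1=2, axis2=5)          # trace out C -> (a,b,a',b')
    rho_A = np.trace(t, axis1=1, axis2=3)          # trace out B
    t = np.trace(rho_t, axis1=0, axis2=3)          # trace out A -> (b,c,b',c')
    rho_B = np.trace(t, axis1=1, axis2=3)          # trace out C
    rho_C = np.trace(t, axis1=0, axis2=2)          # trace out B
    local = sum(passive_energy(np.linalg.eigvalsh(r), h)
                for r in (rho_A, rho_B, rho_C))
    global_passive = passive_energy(np.linalg.eigvalsh(rho), H_levels)
    return local - global_passive                  # the ergotropic gap Delta_EG

# Three-qubit GHZ state: the gap equals 3/2 in units of E
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(ergotropic_gap(np.outer(ghz, ghz), d=2))     # ~1.5
\end{verbatim}
For the three-qubit GHZ state the sketch returns $\Delta_{EG}=\tfrac{3}{2}E$, in agreement with Example 1 below at $p=1$.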
It is generally a challenging problem to compute $\Delta_{EG}$ analytically. In the following
we derive analytic upper bounds of the ergotropic gap.
\begin{theorem}
Consider a $d\times d\times d$ tripartite state $\rho_{ABC}$ with spectrum $\lambda(\rho_{ABC})=\{x_{0}, x_{1}, \cdots, x_{d^3-1}\}$ in nonincreasing order. Let the subsystems be governed by the same Hamiltonian $H_{A}=H_{B}=H_{C}=\sum\limits_{j=0}^{d-1}j\,E|j\rangle\langle j|$. If $\rho_{ABC}$ is separable, then the ergotropic gap is bounded by
\begin{equation}
\Delta_{EG}\leq \min\Big\{(Y-Z)E,M(d)E\Big\},
\end{equation}
where
$$Y=3\sum\limits_{i=0}^{d-1}ix_i+3(d-1)\sum\limits_{i=d}^{d^3-1}x_i,$$
$$Z=\sum\limits_{i=1}^{d-1}i\sum\limits_{j_i'=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+j_i'}+
\sum\limits_{i=1}^{d-1}(d-1+i)\sum\limits_{k_i'=0}^{\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1}x_{D_{d+i-1}-3D_{i-1}+{k_i'}}
+\sum\limits_{i=1}^{d-1}(2d-2+i)\sum\limits_{{l_i'}=0}^{\frac{(d-i)(d-i+1)}{2}-1}x_{D-D_{d-i}+{l_i'}},$$
and
$$D=\frac{(2d-1)2d(2d+1)}{6}-\frac{(d-1)d(d+1)}{3},$$$$D_i=\frac{i(i+1)(i+2)}{6},$$
$$M(d)=\frac{3(d-1)}{2}-\frac{l}{d}[\frac{l^3+2l^2-5l+2}{8}+m+1].$$
The integers $l$ and $m$ are uniquely determined by the constraint $\frac{l(l+1)(l+2)}{6}+m=d-1$, where $0\leq m\leq \frac{(l+1)(l+2)}{2}$.
\end{theorem}
\begin{proof}
Let the spectra of the reduced sub-states $\rho_A$, $\rho_B$ and $\rho_C$ be
$\lambda(\rho_A)=\{p_0, p_1, \cdots, p_{d-1}\}$, $\lambda(\rho_B)=\{q_0, q_1, \cdots, q_{d-1}\}$ and $\lambda(\rho_C)=\{r_0, r_1, \cdots, r_{d-1}\}$, respectively. Without loss of generality, we assume that the spectra are arranged in nonincreasing order.
i) {\it Proof of $\Delta_{EG}\leq (Y-Z)E$}
From \eqref{EG}, the ergotropic gap of the system can be written in the following form:
\begin{equation}\label{th1}
\Delta_{EG}=\sum\limits_{i=0}^{d-1}i(p_i+q_i+r_i)E-\mathrm{Tr}(\rho_{ABC}^pH_{ABC}),
\end{equation}
where the first term collects the local passive-state energies of the subsystems $A$, $B$ and $C$, and the last term is the global passive-state energy.
A state $\rho$ is said to be majorized by a state $\sigma$,
$\lambda(\rho)\prec\lambda(\sigma)$, if
$\sum\limits_{i=1}^kp_i^\downarrow \leq \sum\limits_{i=1}^kq_i^\downarrow$
for $1\leq k\leq n-1$ and $\sum\limits_{i=1}^np_i^\downarrow = \sum\limits_{i=1}^nq_i^\downarrow$,
where $\lambda(\rho)\equiv\{p_i^\downarrow\}$ and $\lambda(\sigma)\equiv \{q_i^\downarrow\}$
are the spectra of $\rho$ and $\sigma$, respectively, arranged in nonincreasing order.
By convention one appends zeros to make the two vectors $\lambda(\rho)$ and $\lambda(\sigma)$ have the same dimensions.
From the Nielsen-Kempe separability criterion \cite{MAN},
if $\rho_{ABC}$ is separable, one has
$\lambda(\rho_A) \succ \lambda(\rho_{ABC}) \quad\wedge \quad\lambda(\rho_B) \succ \lambda(\rho_{ABC}) \quad\wedge\quad \lambda(\rho_C)\succ\lambda(\rho_{ABC})$.
Namely, e.g. for $\rho_A$,
\begin{equation} \label{xiu1}
\begin{split}
p_0\geq x_0, ~~
p_0+p_1\geq x_0+x_1,~~
\cdots,~~
p_0+\cdots+p_i\geq x_0+\cdots+x_i ,~~
\cdots,~~\\
\sum\limits_{i=0}^{d-2}p_i\geq \sum\limits_{i=0}^{d-2}x_i,~~
\cdots, ~~
\sum\limits_{i=0}^{d^3-2}p_i\geq \sum\limits_{i=0}^{d^3-2}x_i,~~
\sum\limits_{i=0}^{d-1}p_i=\sum\limits_{i=0}^{d^3-1}p_i= \sum\limits_{i=0}^{d^3-1}x_i,~~
\end{split}
\end{equation}
Substituting $q_i$ and $r_i$ for $p_i$, similar results are obtained for $\rho_B$ and $\rho_C$, respectively.
Subtracting the 1st, 2nd, $\cdots$, $(d^3-1)$-th terms in \eqref{xiu1} from the last one, respectively, we get
\begin{equation}\label{xiu8}
\begin{split}
\sum\limits_{i=1}^{d-1}p_i \leq \sum\limits_{i=1}^{d^3-1}x_i,~~
\sum\limits_{i=2}^{d-1}p_i \leq \sum\limits_{i=2}^{d^3-1}x_i,~~
\cdots,~~\\
\sum\limits_{i=j}^{d-1}p_i\leq \sum\limits_{i=j}^{d^3-1}x_i ,~~
\cdots,~~
p_{d-1}\leq \sum\limits_{{i=}d-1}^{d^3-1}x_i.~~
\end{split}
\end{equation}
Similar inequalities can be obtained for $\rho_B$ and $\rho_C$, respectively.
Summing the above inequalities for $\rho_A$, $\rho_B$ and $\rho_C$, we obtain
\begin{equation}\label{th2}
\sum\limits_{i=0}^{d-1}i(p_i+q_i{+}r_i)\leq3\sum\limits_{i=0}^{d-1}ix_i+3(d-1)\sum\limits_{i=d}^{d^3-1}x_i.
\end{equation}
Substituting \eqref{th2} into \eqref{th1}, we get the bound on the ergotropic gap,
\begin{equation}\label{th3}
\Delta_{EG}\leq 3\sum\limits_{i=0}^{d-1}ix_iE+3(d-1)\sum\limits_{i=d}^{d^3-1}x_iE-\mathrm{Tr}(\rho_{ABC}^pH_{ABC})\triangleq (Y-Z)E,
\end{equation}
where
$Y\equiv 3\sum\limits_{i=0}^{d-1}ix_i+3(d-1)\sum\limits_{i=d}^{d^3-1}x_i$
and $ZE=\mathrm{Tr}(\rho_{ABC}^pH_{ABC})$.
To evaluate $ZE=\mathrm{Tr}(\rho_{ABC}^pH_{ABC})=\min\limits_{U\in \mathcal{L}(\mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_C)}\mathrm{Tr}(U\rho_{ABC}U^\dag H_{ABC})$, note that the total non-interacting global Hamiltonian
\begin{equation}
\begin{split}
H_{ABC} &=H_A\otimes I_B\otimes I_C+I_A\otimes H_B\otimes I_C+I_A\otimes I_B\otimes H_C\\
&= diag\{0,\cdots,d-1\}E\otimes diag\{1,\cdots,1\}\otimes diag\{1,\cdots,1\}+diag\{1,\cdots,1\}\otimes diag\{0,\cdots,d-1\}E\\
&\otimes diag\{1,\cdots,1\}+diag\{1,\cdots,1\}\otimes diag\{1,\cdots,1\}\otimes diag\{0,\cdots,d-1\}E \\
&= diag\{0,\cdots,d-1,\cdots,d-1,\cdots,2d-2,\cdots,d-1,\cdots,2d-2,\cdots,2d-2,\cdots,3d-3\}E.
\end{split}
\end{equation}
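As a quick sanity check of this diagonal form (our own illustration, not part of the proof), a few lines of Python reproduce the energy multiplicities $1,3,6,\dots$ used in the assignment below.
\begin{verbatim}
import numpy as np

d, E = 3, 1.0
H = E * np.diag(np.arange(d))                  # local H_X = E*diag(0,...,d-1)
I = np.eye(d)
H_ABC = (np.kron(np.kron(H, I), I)
         + np.kron(np.kron(I, H), I)
         + np.kron(np.kron(I, I), H))
levels, counts = np.unique(np.diag(H_ABC), return_counts=True)
print(levels)   # [0. 1. 2. 3. 4. 5. 6.]
print(counts)   # [1 3 6 7 6 3 1]  -- multiplicities of the energies 0,...,3d-3
\end{verbatim}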
To obtain the minimum value, we need to designate the corresponding spectrum of the passive state $\rho_{ABC}^p$:
Energy 0: $x_0\rightarrow|000\rangle.$
Energy 1: $x_1\rightarrow|100\rangle;~ x_2\rightarrow|010\rangle;~ x_3\rightarrow|001\rangle.$
Energy 2: $x_4\rightarrow|200\rangle;~ x_5\rightarrow|110\rangle;~ x_6\rightarrow|020\rangle;$~
$x_7\rightarrow|011\rangle;~ x_8\rightarrow|002\rangle;~ x_9\rightarrow|101\rangle.$
Energy 3: $x_{10}\rightarrow|300\rangle;~ x_{11}\rightarrow|210\rangle;~ x_{12}\rightarrow|120\rangle;$~
$x_{13}\rightarrow|030\rangle;~ x_{14}\rightarrow|021\rangle;~ x_{15}\rightarrow|012\rangle;$~
$x_{16}\rightarrow|003\rangle;~ x_{17}\rightarrow|102\rangle;~ x_{18}\rightarrow|201\rangle;~x_{19}\rightarrow|111\rangle.$
$\cdots$.
Energy $i$: $x_{\frac{i(i+1)(i+2)}{6}}\triangleq x_{D_i}\rightarrow|i00\rangle;~ x_{D_i+1}\rightarrow|(i-1)10\rangle;~\cdots;~ x_{D_i+i}\rightarrow|0i0\rangle,~$
$x_{D_i+i+1}\rightarrow|0(i-1)1\rangle;~ \cdots;~ $\\
$x_{D_i+2i}\rightarrow|00i\rangle;~\cdots;~ x_{D_i+j_i'}\rightarrow|j'_{i1}j'_{i2}j'_{i3}\rangle;~ \cdots;~x_{D_{i+1}-1},$~
where $0\leq j_i'\leq\frac{(i+1)(i+2)}{2}-1$ and $j'_{i1}+j'_{i2}+j'_{i3}=i$.
$\cdots$.
Energy $d-1$: $x_{D_{d-1}}\rightarrow|(d-1)00\rangle;~ x_{D_{d-1}+1}\rightarrow|(d-2)10\rangle;~\cdots; x_{D_{d-1}+j_{d-1}'}\rightarrow|j'_{(d-1)1}j'_{(d-1)2}j'_{(d-1)3}\rangle;~ \cdots;~x_{D_d-1},$\\where $0\leq j_{d-1}'\leq\frac{d(d+1)}{2}-1$ and $j'_{(d-1)1}+j'_{(d-1)2}+j'_{(d-1)3}=d-1$.
Energy $d$: $x_{D_d}\rightarrow|(d-1)10\rangle;~ x_{{D_d}+1}\rightarrow|(d-2)20\rangle;~ \cdots;~ x_{D_d+k_1'}\rightarrow|k'_{11}k'_{12}k'_{13}\rangle;~ \cdots;~x_{D_d+\frac{(d+1)(d+2)}{2}-\frac{3}{2}(1\times 2)-1}= x_{D_{d+1}-3D_1-1},$~
where $0\leq k_1'\leq \frac{(d+1)(d+2)}{2}-\frac{3}{2}(1\times 2)-1$ and $k'_{11}+k'_{12}+k'_{13}=d$ with $ k'_{11},k'_{12},k'_{13}\leq d-1.$
$\cdots$.
Energy $d+i-1$: $x_{D_{d+i-1}-3D_{i-1}} \rightarrow|(d-1)i0\rangle;~x_{D_{d+i-1}-3D_{i-1}+1} \rightarrow|(d-2)(i+1)0\rangle;~\cdots; x_{D_{d+i-1}-3D_{i-1}+k_i'}\rightarrow|k'_{i1}k'_{i2}k'_{i3}\rangle;~ \cdots;~x_{D_{d+i}-3D_i-1},$~
where $0\leq k_i'\leq\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1$ and $k'_{i1}+k'_{i2}+k'_{i3}=d+i-1$ with $ k'_{i1},k'_{i2},k'_{i3}\leq d-1.$
$\cdots$.
Energy $2d-2$: $x_{D_{2d-2}-3D_{d-2}}\rightarrow|(d-1)(d-1)0\rangle;~ x_{D_{2d-2}-3D_{d-2}+1}\rightarrow|(d-2)d0\rangle;~\cdots; x_{D_{2d-2}-3D_{d-2}+k_{d-1}'}\rightarrow|k'_{(d-1)1}k'_{(d-1)2}k'_{(d-1)3}\rangle;~ \cdots;~x_{D_{2d-1}-3D_{d-1}-1},$~
where $0\leq k_{d-1}'\leq\frac{(2d-1)2d}{2}-\frac{3}{2}(d-1)\times d -1$ and $k'_{(d-1)1}+k'_{(d-1)2}+k'_{(d-1)3}=2d-2$ with $k'_{(d-1)1},k'_{(d-1)2},k'_{(d-1)3}\leq d-1.$
Energy $2d-1$: $x_{D_{2d-1}-3D_{d-1}}\triangleq x_{D-D_{d-1}} \rightarrow|(d-1)(d-1)1\rangle;~ x_{D-D_{d-1}+1} \rightarrow|(d-2)(d-1)2\rangle;~\cdots; x_{D-D_{d-1}+l_1'}\rightarrow|l'_{11}l'_{12}l'_{13}\rangle;~ \cdots;~ x_{D-D_{d-2}-1},$~
where $D=\frac{(2d-1)2d(2d+1)}{6}-\frac{(d-1)d(d+1)}{3}$, $0\leq l_1'\leq\frac{(d-1)d}{2}-1$ and $l'_{11}+l'_{12}+l'_{13}=2d-1$ with $l'_{11},l'_{12},l'_{13}\leq d-1$.
$\cdots$.
Energy $2d+i-2$: $x_{D-D_{d-i}}\rightarrow|(d-1)(d-1)i\rangle ;~ x_{D-D_{d-i}+1}\rightarrow|(d-2)(d-1)(i+1)\rangle ;~ \cdots; x_{D-D_{d-i}+l_i'}\rightarrow|l'_{i1}l'_{i2}l'_{i3}\rangle;~ \cdots;~ x_{D-D_{d-i-1}-1},$~
where $0\leq l_i'\leq \frac{(d-i)(d-i+1)}{2}-1$ and $l'_{i1}+l'_{i2}+l'_{i3}=2d+i-2$ with $l'_{i1},l'_{i2},l'_{i3}\leq d-1$.
$\cdots$.
Energy $3d-3$: $x_{d^3-1}\rightarrow|(d-1)(d-1)(d-1)\rangle$.
For convenience, we give a diagram to represent the above specification for the corresponding spectrum of the passive state $\rho_{ABC}^p$. In this diagram, elements in each row have the same energy, as follows:
\begin{equation}
\bordermatrix{
0 & x_0 & & \cr
1 & x_1 &\cdots &x_3 \cr
\vdots &\vdots & && \cr
i &x_{D_i} &\cdots &x_{D_i+{j_i'}} &\cdots&x_{D_{i+1}-1} \cr
\vdots &\vdots & \cr
d-1 &x_{D_{d-1}} &\cdots &\cdots&\cdots&x_{D_{d}-1} \cr
d &x_{D_{d}} &\cdots &\cdots&\cdots&\cdots&x_{D_{d+1}-3D_1-1} \cr
\vdots &\vdots & \cr
d+i-1 &x_{D_{d+i-1}-3D_{i-1}} &\cdots &\cdots&x_{D_{d+i-1}-3D_{i-1}+{k_i'}}&\cdots&\cdots&x_{D_{d+i}-3D_i-1} \cr
\vdots &\vdots & \cr
2d-2 &x_{D_{2d-2}-3D_{d-2}} &\cdots &\cdots&\cdots&\cdots&x_{D_{2d-1}-3D_{d-1}-1} \cr
2d-1 &x_{D-D_{d-1}} &\cdots &\cdots&\cdots&x_{D-D_{d-2}-1} \cr
\vdots &\vdots & \cr
2d+i-2&x_{D-D_{d-i}} &\cdots &x_{D-D_{d-i}+{l_i'}} &\cdots&x_{D-D_{d-i-1}-1} \cr
\vdots &\vdots & \cr
3d-3 &x_{d^3-1} & & \cr
}.
\end{equation}
Therefore,
\begin{equation}\label{th4}
\begin{split}
ZE&=\mathrm{Tr}(\rho_{ABC}^pH_{ABC})\\&=\sum\limits_{i=1}^{d-1}iE\sum\limits_ {{j_i'}=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+{{j_i'}}}+
\sum\limits_{i=1}^{d-1}(d-1+i)E\sum\limits_{{k_i'}=0}^{\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1}x_{D_{d+i-1}-3D_{i-1}+{k_i'}}\\
&+\sum\limits_{i=1}^{d-1}(2d-2+i)E\sum\limits_{{l_i'}=0}^{\frac{(d-i)(d-i+1)}{2}-1}x_{D-D_{d-i}+{l_i'}},
\end{split}
\end{equation}
where the three sums in \eqref{th4} correspond to the spectra $\{x_0, x_1, \cdots, x_{D_{d}-1}\}$, $\{x_{D_{d}}, \cdots, x_{D_{2d-1}-3D_{d-1}-1}\}$ and $\{x_{D-D_{d-1}}=x_{D_{2d-1}-3D_{d-1}}, \cdots, x_{d^3-1}\}$, respectively.
Substituting \eqref{th4} in \eqref{th3}, we get
\begin{equation}
\begin{split}
\Delta_{EG}&\leq \bigg(3\sum\limits_{i=0}^{d-1}ix_i+3(d-1)\sum\limits_{i=d}^{d^3-1}x_i-\sum\limits_{i=1}^{d-1}i\sum\limits_{{{j_i'}=0}}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+{{j_i'}}}\\
&-\sum\limits_{i=1}^{d-1}(d-1+i)\sum\limits_{{k_i'}=0}^{\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1}x_{D_{d+i-1}-3D_{i-1}+{k_i'}}-\sum\limits_{i=1}^{d-1}(2d-2+i)\sum\limits_{{l_i'}=0}^{\frac{(d-i)(d-i+1)}{2}-1}x_{D-D_{d-i}+{l_i'}}\bigg)E,
\end{split}
\end{equation}
which proves $\Delta_{EG}\leq (Y-Z)E$.
ii) {\it Proof of $\Delta_{EG}\leq M({ d})E$}
Similar to the approach used in \cite{MA}, we rewrite \eqref{th1} as,
\begin{equation}\label{th5}
\begin{split}
\Delta_{EG}&=\sum\limits_{i=0}^{d-1}i(p_i+q_i+r_i)E
-\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}r_{D_i+k'}-lE\sum\limits_{k'=0}^m r_{D_l+k'}\\
&+\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}r_{D_i+k'}+lE\sum\limits_{k'=0}^m r_{D_l+k'}
-\mathrm{Tr}(\rho_{ABC}^pH_{ABC}),
\end{split}
\end{equation}
where $l$ and $m$ are integers determined uniquely by the constraint
\begin{equation}
D_l+m=\frac{l(l+1)(l+2)}{6}+m=d-1, \quad 0\leq m\leq\frac{(l+1)(l+2)}{2}.
\end{equation}
Replacing $p_i$ with $r_i$ in \eqref{xiu8}, we obviously have
\begin{equation}\label{2xiu}
\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}r_{D_i+k'}+lE\sum\limits_{k'=0}^m r_{D_l+k'}\leq\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+k'}+lE\sum\limits_{k'=0}^m x_{D_l+k'}.
\end{equation}
Putting the expressions of \eqref{th4} and \eqref{2xiu} into \eqref{th5}, we get
\begin{equation}\label{th6}
\begin{split}
\Delta_{EG}&\leq\sum\limits_{i=0}^{d-1}i(p_i+q_i+r_i)E
-\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}r_{D_i+k'}-lE\sum\limits_{k'=0}^m r_{D_l+k'}+\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+k'}\\
&+lE\sum\limits_{k'=0}^m x_{D_l+k'}
-\bigg[\sum\limits_{i=1}^{d-1}iE\sum\limits_{{j_i'}=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+{j_i'}}+
\sum\limits_{i=1}^{d-1}(d-1+i)E\sum\limits_{{k_i'}=0}^{\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1}x_{D_{d+i-1}-3D_{i-1}+{k_i'}}\\
&
+\sum\limits_{i=1}^{d-1}(2d-2+i)E\sum\limits_{{l_i'}=0}^{\frac{(d-i)(d-i+1)}{2}-1}x_{D-D_{d-i}+{l_i'}}\bigg].
\end{split}
\end{equation}
Note that
\begin{equation}\label{2xiuth7}
\sum\limits_{i=0}^{d-1}ir_iE
-\sum\limits_{i=1}^{l-1}iE\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}r_{D_i+k'}-lE\sum\limits_{k'=0}^m r_{D_l+k'}=\sum\limits_{i=1}^{\frac{l(l+1)(l+2)}{6}-1}(i-l')r_iE+\sum\limits_{i=\frac{l(l+1)(l+2)}{6}}^{d-1}(i-l)r_iE,
\end{equation}
where $(l', m')$ are determined by $i=\frac{l'(l'+1)(l'+2)}{6}+m'$, $0\leq m'\leq\frac{(l'+1)(l'+2)}{2}$.
Substituting \eqref{2xiuth7} in \eqref{th6}, we have
\begin{equation}\label{th7}
\Delta_{EG}\leq\sum\limits_{i=0}^{d-1}ip_iE+\sum\limits_{i=0}^{d-1}iq_iE
+\sum\limits_{i=1}^{\frac{l(l+1)(l+2)}{6}-1}(i-l')r_iE
+\sum\limits_{i=\frac{l(l+1)(l+2)}{6}}^{d-1}(i-l)r_iE-\delta E,
\end{equation}
where
\begin{equation}
\begin{split}
\delta &\triangleq\bigg[\sum\limits_{i=1}^{d-1}i\sum\limits_{{j_i'}=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+{j_i'}}+
\sum\limits_{i=1}^{d-1}(d-1+i)\sum\limits_{{k_i'}=0}^{\frac{(d+i)(d+i+1)}{2}-\frac{3}{2}i(i+1)-1}x_{D_{d+i-1}-3D_{i-1}+{k_i'}}\\
&+\sum\limits_{i=1}^{d-1}(2d-2+i)\sum\limits_{{l_i'}=0}^{\frac{(d-i)(d-i+1)}{2}-1}x_{D-D_{d-i}+{l_i'}}\bigg]-\bigg(\sum\limits_{i=1}^{l-1}i\sum\limits_{k'=0}^{\frac{(i+1)(i+2)}{2}-1}x_{D_i+k'}+l\sum\limits_{k'=0}^m x_{D_l+k'}\bigg)\geq0.
\end{split}
\end{equation}
The right-hand side of \eqref{th7} is maximized when $\min(\delta)=0$ and
$p_i=q_i=r_i={1}/{d}$ for $i=1,\ldots,d-1$, which gives
\begin{equation}
\begin{split}
\Delta_{EG}&\leq M(d)E=\bigg(\frac{d-1}{2}+\frac{d-1}{2}+\frac{d-1}{2}-\frac{1}{d}\sum\limits_{i=1}^{l-1}i\frac{(i+1)(i+2)}{2}-\frac{l(m+1)}{d}\bigg)E\\
&=\bigg(\frac{3(d-1)}{2}-\frac{l}{d}\big(\frac{l^3+2l^2-5l+2}{8}+m+1\big)\bigg)E.
\end{split}
\end{equation}
Altogether we have $\Delta_{EG}\leq \min\Big\{(Y-Z)E,M(d)E\Big\}$.
\end{proof}
From the Theorem, we have the following separability criterion
\begin{corollary}
A tripartite state $\rho_{ABC}$ is entangled if
\begin{equation}\label{en1}
\Delta_{EG}> \min\Big\{(Y-Z)E,M(d)E\Big\}.
\end{equation}
\end{corollary}
In particular, for the three-qubit case, the reduced states are just qubit states. The local qubit systems are governed by the same two-level Hamiltonian $H=E|1\rangle\langle1|$.
For separable three-qubit states $\rho_{ABC}$ with spectrum $\{x_0, x_1, \cdots, x_7\}$ in nonincreasing order, the ergotropic gap is bounded by
\begin{equation}
\Delta_{EG}\leq \min \Big\{[2(x_1+x_2+x_3)+x_4+x_5+x_6]E, E \Big\}.
\end{equation}
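For concreteness, this bound can be checked numerically. The sketch below is only an illustration (the state construction is ours, and the closed form for $\Delta_{EG}$ is quoted from Example 1 below); it confirms that the pure superposition of GHZ and W states violates the separability bound for every $p\in[0,1]$.
\begin{verbatim}
import numpy as np

def three_qubit_bound(spectrum, E=1.0):
    """Separability bound min{[2(x1+x2+x3)+x4+x5+x6]E, E} from the text."""
    x = np.sort(np.real(spectrum))[::-1]           # nonincreasing
    return min((2 * x[1:4].sum() + x[4:7].sum()) * E, E)

def psi(p):
    """|psi> = sqrt(p)|GHZ> + sqrt(1-p)|W>, as in Example 1 below."""
    ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
    w = np.zeros(8); w[4] = w[2] = w[1] = 1 / np.sqrt(3)   # |100>,|010>,|001>
    return np.sqrt(p) * ghz + np.sqrt(1 - p) * w

for p in (0.0, 0.4, 1.0):
    rho = np.outer(psi(p), psi(p))
    spec = np.linalg.eigvalsh(rho)                 # pure state: bound is 0
    gap = (3 - np.sqrt(1 + 4 * p - 5 * p ** 2)) / 2    # closed form, Example 1
    print(p, round(gap, 3), three_qubit_bound(spec), gap > three_qubit_bound(spec))
\end{verbatim}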
Let us consider several examples.
\begin{example}
The superposition of W and GHZ states \cite{RLA}:
$|\psi\rangle=\sqrt{p}|GHZ\rangle+\sqrt{1-p}|W\rangle$, $0\leq p\leq1$,
where
$|GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle)$
and
$|W\rangle=\frac{1}{\sqrt{3}}(|100\rangle+|010\rangle+|001\rangle)$.
For this state, we have
$\Delta_{EG}=\frac{3-\sqrt{1+4p-5p^2}}{2}E$, $Y-Z=0$ and $M(d)=1$. From (\ref{en1}) we have that $|\psi\rangle$ is entangled for $0\leq p\leq 1$.
The entanglement criterion for GHZ states \cite{YA} claims that any separable 3-qubit state $\rho$ satisfies the following inequalities
$$\mid \left< A_{1}\right>_{\rho } \pm \left< A_{2}\right>_{\rho } +\left< A_{3}\right>_{\rho } \mid \leq 1, ~~\mid \left< A_{1}\right>_{\rho } \pm \left< A_{2}\right>_{\rho } -\left< A_{3}\right>_{\rho } \mid \leq 1,$$
where $A_1=\sigma_{x} \otimes \sigma_{x} \otimes \sigma_{x} ,$ $A_2=I \otimes \sigma_{z} \otimes \sigma_{z} ,$ $A_3=\sigma_{y} \otimes \sigma_{y} \otimes \sigma_{x} , $ and $\left< A_{i}\right>_{\rho }= Tr(\rho A_i).$
The inequalities may be violated by entangled states.
It has been shown that $|\psi\rangle$ is entangled for $\frac{2}{5}< p\leq 1$.
And the entanglement criterion for W states \cite{YA} claims that any separable 3-qubit state $\rho$ satisfies the following inequalities
$$\mid \left< B_{1}\right>_{\rho } \pm \left< B_{2}\right>_{\rho } +\left< B_{3}\right>_{\rho} \mid \leq 1, ~~\mid \left< B_{1}\right>_{\rho } \pm \left< B_{2}\right>_{\rho } -\left<B_{3}\right>_{\rho} \mid \leq 1,$$
where $B_1=I \otimes \sigma_{x} \otimes \sigma_{x} ,$ $B_2=I \otimes \sigma_{y} \otimes \sigma_{y} ,$ $B_3=\sigma_{z} \otimes \sigma_{z} \otimes \sigma_{z} ,$ and $\left< B_{i}\right>_{\rho }= Tr(\rho B_i).$
The inequalities may be violated by entangled states.
It has been shown that $|\psi\rangle$ can be identified as an entangled state for $0 \leq p < \frac{4}{7}$. Our result is clearly an improvement.
\end{example}
\begin{example}
The GHZ state mixed with a colored noise \cite{ACA, MSP},
$$
\rho=\frac{p}{2}(|000\rangle\langle 000|+|111\rangle\langle111|)+(1-p)|GHZ\rangle\langle GHZ|, ~~~0\leq p\leq1.
$$
For this state, we have
$\Delta_{EG}=\big(\frac{3}{2}-\frac{p}{2}\big)E$, $Y-Z=p$ and $M(d)=1$.
Therefore $\rho_{ABC}$ is entangled for $p<1$, which is the same result obtained in \cite{YA}.
\end{example}
\begin{example}
The GHZ state mixed with the white noise,
$$
\rho=(1-p)\frac{I}{8}+p|GHZ\rangle\langle GHZ|,~~~ 0\leq p\leq1.
$$
For this state, we have
$\Delta_{EG}=\frac{3}{2}pE$, $Y-Z=\frac{9}{8}(1-p)$ and $M(d)=1$.
Hence $\rho_{ABC}$ is entangled for $p>\frac{3}{7}$, in agreement with the result obtained in \cite{OG, SS}.
\end{example}
\section{Conclusion}
We have investigated the ergotropic gap, the difference between the global ergotropy and local ergotropy for tripartite systems. The upper bounds of the ergotropic gap have been analytically
derived. The violation of the bound provides a sufficient criterion for entanglement.
The ergotropic gap is tightly related to the work extraction and correlations among the subsystems.
Our results may stimulate further studies on ergotropic gaps for multipartite systems, the detection of bi-separability, and genuine multipartite entanglement.
\end{document}
\begin{document}
\title{The normal closure of big Dehn twists, and plate
spinning with rotating
families }
\begin{abstract} We study the normal closure of a big power of one or several Dehn
twists in a Mapping Class Group. We prove that it has a
presentation whose relators consist only of commutators between twists of
disjoint support, thus answering a question of Ivanov.
Our method is to use the theory of projection
complexes of Bestvina Bromberg and Fujiwara, together with the theory
of rotating families, simultaneously on several spaces.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\section*{Introduction}
Consider a closed orientable surface $\Sigma$ of negative Euler
characteristic.
The Mapping Class Group of $\Sigma$, denoted by ${\rm MCG} (\Sigma)$, is the quotient of the group
of orientation-preserving homeomorphisms by the path-connected component
of the identity. A classical theorem of Dehn and Nielsen indicates a
natural isomorphism between this
group and a subgroup of index $2$ of the outer automorphism group of $\pi_1(\Sigma)$.
As the Riemann uniformisation theorem makes
$\pi_1(\Sigma)$ act as a lattice on the hyperbolic plane, one can argue that ${\rm MCG} (\Sigma)$ is (in a sense)
some hyperbolic
analogue of $SL_2(\bbZ)$, which is of index $2$ in
the automorphism group of $\bbZ^2$, a lattice in the Euclidean plane.
However, contrary to $SL_2(\bbZ)$, some nontrivial elements of ${\rm MCG} (\Sigma)$ have large centralisers. For instance, consider
a simple closed curve $\alpha$ on $\Sigma$, a tubular neighborhood of
it $\alpha^{(t)} \simeq [-\epsilon, \epsilon] \times \alpha
\hookrightarrow \Sigma$ and define a (simple) Dehn twist
$\tau$ as the identity in $\Sigma \setminus \alpha^{(t)}$, and as a
full twist on $\alpha^{(t)}$, namely, identifying $\alpha$ with $S^1$,
the map $[(\eta, e^{i\theta}) \mapsto (\eta, e^{i(\theta +
\frac{(\eta+\epsilon)\pi}{\epsilon } )} )]$. A Dehn twist will
obviously commute with any mapping class whose support is disjoint
from this tube, and therefore with a lot of other Dehn twists. By
a theorem of Dehn, ${\rm MCG} (\Sigma)$ is generated by Dehn twists around simple closed
curves, thus by an intricate set of generators linked by
commutation relations, but also braid relations and lantern
relations.
These differences can lead one to modify the expected analogy with
the Euclidean case in order to include $SL_n(\bbZ)$ for $n\geq
3$ (generated by elementary matrices).
Thurston and Nielsen (see the discussion and references in \cite{HT}) classified mapping classes into three
types: those of finite order, those that are reducible in the sense
that they have infinite order and that some nontrivial power preserves the homotopy class of a simple
closed curve, and finally the pseudo-Anosov ones. The pseudo-Anosov mapping
classes happen to be
the hyperbolic isometries of an action of ${\rm MCG} (\Sigma)$ on an important
graph, the curve graph of $\Sigma$, which is Gromov hyperbolic \cite{MM}. They
are,
in many ways, the witnesses that some phenomena of rank one
happen in ${\rm MCG} (\Sigma)$ that
are similar to the structure of $SL_2(\bbZ)$, and its action on the modular tree. On the
other hand, Dehn twists are as reducible as it is possible to be. They
are, or should be, the witnesses of some phenomena of higher rank,
similar to the structure of $SL_n(\bbZ)$ for $n\geq 3$.
Here is an illustration of the difference of behaviors. If one considers a finite collection of pseudo-Anosov
elements, one can show that, after taking suitable powers, the group
they generate is free \cite{Ifree, McC}. This is a ping-pong argument, for instance on
the boundary of Teichm\"uller space, or on the curve graph. If one considers a finite collection of Dehn
twists around simple closed curves, then Koberda \cite{K} proved the
beautiful ping-pong result that the group generated by some powers of these Dehn
twists is a right angled Artin group: a group whose presentation over
the given generating set is a collection of commutators, the obvious
ones (two Dehn twists commute if their curves are disjoint).
The case of normal subgroups is our interest.
If $n\geq 3$, by Margulis' normal subgroup
theorem, all normal subgroups of $SL_n(\bbZ)$ are finite or of finite
index. In $SL_2(\bbZ)$ it is not the case: this group is virtually
free, and has uncountably many non-isomorphic quotients.
It is a
natural question to ask whether (and how) these phenomena are seen in
${\rm MCG} (\Sigma)$. What can be the normal closure of a power of a pseudo-Anosov, the normal
closure of a power of a Dehn twist, and the group generated by all $k$-th powers
of all simple Dehn twists~? Farb and Ivanov
asked this question in the case of a pseudo-Anosov (respectively
\cite[\S 2.4]{F} and \cite[\S 3]{I}), attributing it
to Long, McCarthy, and Penner. Ivanov also asked what he calls the deep
relation question \cite[\S 12]{I}, that is
whether all relations among certain powers of Dehn twists must derive from
obvious commutation relations.
In \cite[\S 5]{DGO}, we answered the first question: there is an integer $N=N(\Sigma)$
such that for any pseudo-Anosov mapping class $\gamma$, the normal
closure $\langle\langle \gamma^N\rangle\rangle_{{\rm MCG} (\Sigma)}$ is free, and
consists only of pseudo-Anosov elements and the identity. This
is in line with what happens in $SL_2(\bbZ)$, for each infinite order element.
We are interested in the question of the closure of a power of a Dehn twist, and
in the group generated by certain powers of all (simple) Dehn twists, as
in Ivanov's deep relation problem. A
naive expectation along the lines of the analogy with $SL_n(\bbZ)$,
and the Margulis normal subgroup theorem, could be to expect such
normal subgroups to be of finite
index. While this is the case for squares of Dehn twists
\cite{H}, it is not the case for large powers (see \cite{H}, \cite{Fu},
\cite[6.17]{Cou}, see also \cite{S} and \cite{Mas} for the case of
powers of half-Dehn twists on punctured spheres).
Another
expectation could be, in light of the finite-type situation, and
ping pong arguments, to expect infinitely generated right angled
Artin groups. Again, this is not the case in general (see \cite{CLM}
and \cite{BM}; Brendle and Margalit proved restrictions on the
automorphism group of certain of these normal subgroups, that
forbid them to be right angled Artin groups). However, we indeed
prove that there is no need of relations other than the obvious ones.
\begin{theo}\label{theo;intro}
For every orientable closed surface $\Sigma$, there is an integer $N_0$ such that for any $N$ multiple of $N_0$:
\begin{itemize}\item for any Dehn twist $\tau$, the normal
closure of $\tau^N$ in the Mapping
Class Group of $\Sigma$ has a partially
commutative presentation, built on an infinite set of generators that are
conjugates of $\tau^N$, so that the relators are commutations
between pairs of conjugates of $\tau^N$ that have disjoint underlying
curves.
\item the group generated by all $N$-th powers of all simple Dehn
twists has a partially
commutative presentation, built on an infinite set of generators that
are $N$-th powers of Dehn twists, and whose relators are commutations
between pairs of conjugates of the generators that have disjoint underlying
curves.
\end{itemize}
\end{theo}
The difference with an infinitely generated right angled
Artin group is that some elements in the commutator relators are not
in the generating set, but merely conjugates of elements in the
generating set. We recover that the normal closure is far
from being of finite index in ${\rm MCG} (\Sigma)$, for instance because it has
abelianisation of infinite rank (the
relators being in the derived subgroup of the free
group over the set of generators).
From our point of view, the result above, and its departure
from the complexity of normal subgroups of $SL_n(\mathbb{Z})$ for
$n\geq 3$ (granted by the Margulis normal subgroup theorem), reinforce
\cite{Fu,Cou} in
witnessing a dent in the analogy between ${\rm MCG} (\Sigma)$ and $SL_n(\mathbb{Z})$. It
also answers Ivanov's question on deep relations.
Let us discuss the proof of this theorem.
In \cite{DGO} the structure of the normal closure of a big
pseudo-Anosov was studied with the help of rotating families. Consider
$G$ a group acting by isometries on a space $X$. A
rotating family in $G$ on $X$ is a collection of subgroups (the
rotation groups), that is
closed under conjugacy, such that each of them fixes a certain point in
$X$ (thus inducing some kind of rotation around this point). Take
$\rho$ in one of these subgroups,
fixing $c$. One may
measure an analogue of the angle of rotation of $\rho$
by taking $x$ at distance $1$ from $c$, and measuring the infimal length
between $x$ and $\rho x$ of paths outside the ball of radius $1$
around $c$. If $X$ is Gromov-hyperbolic (for a small hyperbolicity
constant), if the fixed points of the different rotation groups are
sufficiently far from each other, and if the angles of rotations are
sufficiently big, the group generated by all the rotation groups is a
free product of a selection of them. In \cite{DGO} we applied this theory to the
action of ${\rm MCG} (\Sigma)$ on a cone-off of the curve graph of $\Sigma$. The
rotation groups were the conjugates of the big pseudo-Anosov considered.
The rotating family argument can be explained as follows. One analyses
the structure of groups generated by more and more rotation groups, to
discover that they arrange as a sequence of free products. Starting
from a quasi-convex set $W$ (that will change over time) that is at
first a small ball around a single fixed point of
a single rotation group, one sets $G_W$ the group generated by the rotation groups whose centers are in $W$, and one makes $W$ grow until it (almost)
touches another center of rotation, for some other group.
Call $S$ a $G_W$-transversal of the newly approached centers of rotation.
Then one
\emph{unfolds} $W$ into $W'$ by taking its images by the group $G_{W'}$ (thus generated by the new
rotations, and the rotation already with center in $W$). Because of
hyperbolicity, and of largeness of angles of rotations involved, the resulting
space is still quasi-convex, with almost the same constant -- with
a little repair, it has the same quasi-convexity constant indeed.
Actually $W'$ has the
structure of a tree whose vertices are the images of $W$ by the group $G_{W'}$,
and the images of points in $S$ by $G_{W'}$, thus giving by Bass-Serre duality the structure of
free product of $G_W$ and the rotation groups around points of $S$ (edge stabilizers are trivial since no element can fix two
different centers of rotation). Then, one takes $W'$ as the new $W$ and starts over. In the direct
limit, the group generated by all rotations has been described as a
free product of a selection of rotation groups.
In \cite{BBF}, Bestvina Bromberg and Fujiwara, using a system of
subsurface projections, discovered that there is
a normal finite index subgroup $G_0$ of ${\rm MCG} (\Sigma)$ that acts on some spaces
quasi-isometric to trees, and on which Dehn twists behave like large
rotation subgroups. It has been observed by several people that this implies
that the normal closure of a certain power of a Dehn twist in $G_0$ is
free, using the argument of \cite{DGO}. However, it is far from obvious how
to promote this structural feature to the normal closure in ${\rm MCG} (\Sigma)$.
In this paper, we use several quasi-trees as above, one for each left
coset of $G_0$ in ${\rm MCG} (\Sigma)$. The group $G_0$ acts on each of them, but
its action is twisted by the automorphism of $G_0$ that is the
conjugation by elements $g_i, i=1, \dots, m$ realising a transversal of $G_0$
in ${\rm MCG} (\Sigma)$. If $\tau^N$ is a Dehn twist in $G_0$, the
normal closure of $\tau^N$ in ${\rm MCG} (\Sigma)$ equals the normal closure of the
collection $\{ g_i \tau^Ng_i^{-1},\, i=1,\dots m \}$ in $G_0$. Each $g_i
\tau^Ng_i^{-1}$ is a legitimate rotation on the quasi-tree associated
to $g_i$.
The argument of \cite{DGO} is then performed simultaneously on each of the
$m$ quasi-trees. Instead of one convex subset that grows, and gets
unfolded in a hyperbolic space, we have $m$ convex sets $\calW_1,
\dots, \calW_m$ in the $m$
quasi-trees. Each of them is invariant by the group generated by the
rotations around rotation points in all of them. One looks for a
rotation point $R$ that is nearby one of these sets, and in a certain
sense, nearby all of them (although they do not live in the same
quasi-trees, this still makes sense in the framework of projection
systems). Then, one unfolds our convex sets in all coordinates $i=1,
\dots, m$.
A funny phenomenon happens. The unfolding in the coordinate of $R$
provides a nice tree, as the argument of \cite{DGO}, and the convexity
of the result is quantitatively very good. This tree gives
the structure of the new group by Bass-Serre duality, and reveals
that only
commutation relations are involved. There is no reason that the unfolding in all other
coordinates produces something resembling a tree, and it could in
principle destroy the convexity of $\calW_j$. However, using
the properties of the projection system, we show that the result is
still somehow convex (less convex than before though).
The game is then to unfold in the different
quasitrees at regular intervals of time in the process, and to control the degradation
of the convexity so that the repair can wait until a new unfolding
occurs. It is a game of plate spinning.
The quasi-trees that we will use come from projection complexes
defined in \cite{BBF}. We wrote the argument in
this axiomatic language, to avoid dealing with
useless hyperbolicity constants. In the end, even if the spaces are
indeed quasi-trees, this fact does not appear in the argument. The
axioms of projection systems are extensively used though, and they
contain the information that the geometric space is a quasi-tree. We
will thus prove a similar statement as Theorem \ref{theo;intro},
namely Theorem \ref{theo;main}, that gives the structure of groups
generated by composite rotating families. There is actually more
information coming from this composite rotating family structure, as
for instance the Greendlinger property (see Definition \ref{def;CW}), that describes
how an element in the group can be shortened in some coordinate of the
composite projection system.
\numberwithin{thm}{section}
\section{Composite projection systems}
\subsection{Projection systems}
Let us recall a part of the axiomatic construction of
\cite{BBF}.
\begin{defi} (\cite{BBF})
A projection
system is a set $\bbY$, with a constant $\theta>0$,
and for each $Y\in \bbY$, a
function $ \left(d_Y^\pi: \bbY\setminus\{Y\} \times
\bbY\setminus\{Y\} \to \bbR_+\right)$ satisfying the following axioms:
\begin{itemize}
\item symmetry ($d_Y^\pi (X,Z) = d_Y^\pi (Z,X)$ for all $X,Y,Z$),
\item triangle inequality
($d^\pi_Y(X,Z) + d_Y^\pi(Z,W) \geq d_Y^\pi(X,W) $ for all $X,Y,Z, W$),
\item Behrstock
inequality ($ \min\{ d^\pi_Y(X,Z), d_Z^\pi (X,Y) \} \leq \theta
$ for all $X,Y,Z$),
\item properness ($\{Y, d_Y^\pi (X,Z) >\theta\}$ is
finite for all $X,Z$).
\item In
this work we also assume the separation axiom ($d_Y^\pi (Z,Z) \leq
\theta$ for all $Z,Y$).
\end{itemize}
\end{defi}
Observe that if the axioms are true for some
$\theta$ they hold for all larger $\theta$.
From this rudimentary axiomatic set, Bestvina Bromberg and
Fujiwara manage to extract meaningful geometry, by modifying
the functions $d_Y^\pi$ into some functions $d_Y$, that satisfy many more
properties, usually encapsulated in the statement that the
projection complex of $\bbY$, for a suitable parameter
$K$
is a quasi-tree.
One should think of $d_Y$ (or
$d^\pi_Y$) as an
angular measure between $X$ and $Z$ seen from $Y$. The axioms
fit in this viewpoint: the Behrstock inequality says that if
the angle at $Y$ between $X$ and $Z$ is large, then from the point of view of $Z$, the items $Y$ and $X$
look aligned.
Let us review very quickly the procedure of \cite{BBF} to
produce the functions $d_Y$.
Given $\theta$ for which the axioms hold, \cite{BBF} define $\calH(X,Z)$ to be
the set of pairs $(X',Z')$ such that both $d_X^\pi$ and $d_Z^\pi$
between them are strictly larger than $2\theta$; one also includes the
pairs $(X,Z')$ if $d_Z^\pi (X,Z' ) >2\theta$, symmetrically the
pairs $(X',Z)$ if $d_X^\pi (X',Z ) >2\theta$, and finally the
pair $(X,Z)$ itself.
Then $d_Y(X,Z) $ is defined to be the infimum of $d_Y^\pi$ over
$\calH(X,Z)$.
For all $K$, $\bbY_K(X,Z)$ denotes the set $\{Y, d_Y(X,Z) \geq K\}$.
\cite[Theorem 3.3]{BBF} states that there exist $\Theta$ and
$\kappa\geq \theta$
depending only on $\theta$, such that for all $X,Y,Z, W$:
\begin{itemize}
\item (Symmetry) $d_Y(X,Z) = d_Y(Z,X)$
\item (Coarse equality) $d_Y^\pi -\kappa \leq d_Y \leq d_Y^\pi$
\item (Coarse triangle inequality) $d_Y(X,Z) + d_Y(Z,W) \geq d_Y(X,W) -\kappa$
\item (Behrstock inequality) $\min\{ d_Y(X,Z) , d_X(Y,Z) \}\leq \kappa$
\item (Properness) $\{ V, d_V(X,Z) >\Theta\}$ is finite
\item (Monotonicity) If $d_Y(X,Z) \geq \Theta$ then both
$d_W(X,Y), d_W(Z,Y)$ are at most $d_W(X,Z)$.
\item (Order) $\bbY_\Theta (X,Z) \cup \{X, Z\}$ is totally
ordered by an order $ \dot{<}$ such that $X$ is lowest, $Z$
is greatest, and if $Y_0 \dot{<} Y_1 \dot{<}Y_2$, then
$$ d_{Y_1}(X,Z) -\kappa \leq d_{Y_1} (Y_0, Y_2) \leq d_{Y_1}(X,Z), $$
and
$$ d_{Y_0} (Y_1, Y_2) \leq \kappa, \quad d_{Y_2}(Y_1, Y_0) \leq \kappa. $$
\end{itemize}
Then choosing $K$ larger than $\Theta$, the projection complex
$\calP_K(\bbY)$ is defined as follows: it is a graph whose
vertices are the elements of $\bbY$ and where $X,Z$ span an
edge if and only if $\bbY_K(X,Z) =\emptyset$. Then
\cite[Thm. 3.16]{BBF} states that for sufficiently large $K$,
$\calP_K(\bbY)$ is connected and quasi-isometric to a tree for
its path metric.
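To fix ideas, the edge rule defining $\calP_K(\bbY)$ can be phrased in a few lines of code. The following Python sketch is entirely our own illustration (it is not from \cite{BBF}, it takes the modified functions $d_Y$ as an abstract input, and the toy distances below are not claimed to satisfy the axioms); it only demonstrates the rule ``join $X$ and $Z$ when $\bbY_K(X,Z)$ is empty''.
\begin{verbatim}
from itertools import combinations

def projection_complex(Y, d, K):
    """Edges of P_K(Y): X,Z joined iff no W other than X,Z has d[W](X,Z) >= K."""
    edges = set()
    for X, Z in combinations(Y, 2):
        if not any(d[W](X, Z) >= K for W in Y if W not in (X, Z)):
            edges.add((X, Z))
    return edges

# Toy input: points on a line; d_W(X,Z) = 1 if W lies strictly between X and Z.
# (Chosen only to exercise the edge rule, not to satisfy the axioms.)
Y = list(range(6))
d = {W: (lambda X, Z, W=W: 1.0 if min(X, Z) < W < max(X, Z) else 0.0) for W in Y}
print(sorted(projection_complex(Y, d, K=1)))
# [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  -- a path, hence a tree
\end{verbatim}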
\subsection{Composite projection systems}
In this work, we are concerned with a composite situation.
\subsubsection{Definitions, and projection complexes}
Let $\bbY_*$ be the disjoint union of finitely many countable
sets
$\bbY_1, \dots, \bbY_m$. Their indices $i=1, \dots, m$ are called the coordinates. Given $Y\in \bbY_*$, denote by $i(Y)$ its coordinate:
$Y\in \bbY_{i(Y)}$.
\begin{defi}\label{def;CPS}
A composite projection system on a countable set $\bbY_* =
\sqcup_{i=1}^m \bbY_i$
is the data of a
constant $\theta>0$, of a family of subsets ${\rm Act}(Y)\subset
\bbY_*, Y\in
\bbY_*$ (the active set for $Y$) such that $\bbY_{i(Y)} \subset{\rm Act}(Y)$,
and of a family of functions
$d^\pi_Y : ( {\rm Act}(Y)\setminus \{Y\} \times {\rm Act}(Y) \setminus \{Y\}) \to \bbR_+$, satisfying the
symmetry, the triangle inequality, the Behrstock inequality for
$\theta$ whenever both quantities are defined,
the properness for $\theta$ when restricted to each
$\bbY_i$,
the separation for
$\theta$, and also three other properties related to the
map ${\rm Act}$:
\begin{itemize}
\item (symmetry in action) $X\in {\rm Act}(Y)$ if and
only if $Y\in {\rm Act}(X)$,
\item (closeness in inaction) if $X\notin {\rm Act}(Z)$, for
all $Y \in {\rm Act}(X) \cap {\rm Act}(Z)$, $ d_Y^\pi( X,Z )\leq \theta
$
\item (finite filling) for all $\calZ\subset \bbY_*$, there
is a finite collection
of elements $X_j$ in $\calZ$ such that $\cup_j
{\rm Act}(X_j)$ covers $\cup_{ X\in \calZ} {\rm Act}(X)$.
\end{itemize}
\end{defi}
The closeness in inaction can be understood as a
complement to Behrstock inequality: ``if $ d_Y^\pi( X,Z )> \theta
$, then $ d_X^\pi( Y,Z )$ is {\it defined} and is less than $\theta$''.
Applying \cite{BBF} (as recalled in the previous subsection)
we get, for each coordinate
$i\leq m$, and for a suitable choice of $\theta$, a
modified function $d_Y : \bbY_{i} \times \bbY_i \to
\bbR_+$. This function is unfortunately not defined on
${\rm Act}(Y)\setminus \bbY_i$, but $d^\pi_Y$ is defined on it,
and thus we choose to define $d^\sphericalangle_Y (X,Z)$ to be $d_Y$ if
both $X, Z$ are in $\bbY_i$ and $d^\pi_Y$ otherwise.
We then define $\bbY^j_M (X,Z) = \{Y\in \bbY_j \cap {\rm Act}(X)
\cap {\rm Act}(Z), d^\sphericalangle_Y( X,Z) \geq M\}$. The elements
$X,Y,Z$ need not be in the same coordinate.
In the following we first choose $\theta$ such that the
construction of \cite{BBF} applies for all coordinates
$\bbY_i$, and this provides the constants $\Theta$ and
$\kappa$ (suitable for all coordinates).
Then we choose $c_* >1000 (\Theta +\kappa)$, and $\Theta_P = c_*
+21m\kappa $. One can choose $K > \Theta_P$
sufficiently large to get quasi-trees
in all coordinates, but this is not important for us.
Finally, choose
$\Theta_{Rot} > 2c_* + 2\Theta_P + 20 (\kappa +\Theta) $
for later purpose.
To keep track of the constants, it is worth keeping in mind
that
$$\Theta_{Rot} >> 2 \Theta_P >> 2c_* >> 20 (\Theta +\kappa) >>\theta.$$
\subsubsection{Group in the picture}
An \emph{automorphism} of a composite projection system is a map
$\psi: \bbY_*\to \bbY_*$
\begin{itemize}
\item that induces a bijection on each
$\bbY_i$,
\item that sends ${\rm Act}(Y)$ to ${\rm Act}(\psi(Y))$,
\item such that for all $Y$, and all $X,Z\in {\rm Act}(Y)$, $d_Y^\sphericalangle (X,Z) =
d_{\psi(Y)}^\sphericalangle(\psi(X), \psi(Z))$.
\end{itemize}
A \emph{rotation} around $X\in \bbY_*$ in a composite projection
system $\bbY_*$ is an automorphism $\psi$ such that
$\psi(X) = X$, and such that for all $Y\in \bbY_* \setminus
{\rm Act}(X)$, and for all $ W, Z \in {\rm Act}(Y)$, $\psi(Y)=Y$,
and $d_Y^\sphericalangle (W,Z) =d_Y^\sphericalangle (\psi(W),\psi(Z))$.
Let us now assume that a group $G$ acts on the composite
projection system by automorphisms.
Let us denote by $G_X$ the stabilizer of $X\in \bbY_*$.
We say that a subgroup $\Gamma_X<G_X$ has \emph{proper
isotropy}
if for all $N>0$ there is
a finite subset $F(N)$ of $\Gamma_X$ such that
if $\gamma \in \Gamma_X\setminus F(N)$, and if $Y\in {\rm Act}(X)$,
then $d_X^\pi(Y, \gamma Y)
>N$.
\subsubsection{Betweenness and orbit estimates}
\begin{lemma} (Betweenness is transitive)\label{lem;bet_trans}
If $d^\sphericalangle_Y (X,Z) >2\kappa$ and $d^\sphericalangle_Z (Y,T)
>2\kappa$, then $Z$ is in ${\rm Act}(X)$ and $d_Z^\sphericalangle(X,T )
\geq d^\sphericalangle_Z (Y,T) - 2 \kappa$.
If $d^\sphericalangle_Y (X,Z) >10\kappa$ and $d^\sphericalangle_Z (X,T)
>10\kappa$, then $d^\sphericalangle_Y (X,T) \geq d^\sphericalangle_Y (X,Z) -2\kappa$.
\end{lemma}
\begin{proof}
By Behrstock inequality,
one has $d^\sphericalangle_Z (X,Y) \leq \kappa$ in both cases. For
the first implication, by the triangle inequality,
$d_Z^\sphericalangle (X,T ) \geq d_Z^\sphericalangle(Y,T ) -
d_Z^\sphericalangle(X,Y ) - \kappa $.
For the second implication,
$d^\sphericalangle_Z( Y,T)$ is within $2\kappa$ from $d^\sphericalangle_Z
(X,T)$. Behrstock inequality gives that $d^\sphericalangle_Y(Z,T) \leq
\kappa$ and therefore $d^\sphericalangle_Y (X,T) \geq d^\sphericalangle_Y (X,Z) -2\kappa$.
\end{proof}
\begin{lemma}{\it (Orbit estimates, or transfer in a coordinate)} \label{lem;transfert}
Assume that $\Gamma_X$ has proper isotropy.
For the finite subset $F = F( 10\kappa)$ of $\Gamma_X$, for all
$Y\in {\rm Act}(X)$,
all $X'$ that is either in $ {\rm Act}(Y)$ or in ${\rm Act}(X)$, and all $\gamma\in
\Gamma_X\setminus F$, either $d_Y^\sphericalangle
(X',X)\leq \kappa$ or $d_Y^\sphericalangle
( \gamma X',X)\leq \kappa$.
\end{lemma}
\begin{proof}
Let us first treat the case of $ X'\in {\rm Act}(Y)$.
If $d_Y^\sphericalangle
(X',X)\leq \kappa $ we are done. Assume that $d_Y^\sphericalangle
(X',X)> \kappa $.
By closeness in inaction, $ X'\in {\rm Act}(X)$,
and by Behrstock inequality (and because $\kappa \geq \theta$),
one has $d^\sphericalangle_X(X',Y) \leq \kappa$. By
proper isotropy (and coarse triangle inequality),
$d^\sphericalangle_X(\gamma X',Y) >
5\kappa$. Thus, by Behrstock inequality again, $d_Y^\sphericalangle
( \gamma X',X)\leq \kappa$.
Now assume that $ X'\notin {\rm Act}(Y)$, but is in
${\rm Act}(X)$. Since $Y\in {\rm Act}(X)$ we can measure $d_X^\sphericalangle(X', Y)$
and (since $\Gamma_X$ preserves ${\rm Act}(X)$) also
$d_X^\sphericalangle(\gamma X', Y)$. By proper isotropy, $d_X^\sphericalangle(X',
\gamma X') \geq 10\kappa$ and therefore at least one of the
quantities $d_X^\sphericalangle(X', Y)$ and $d_X^\sphericalangle(\gamma X', Y)$ is
larger than $4\kappa$. Assume for instance that $d_X^\sphericalangle(X', Y) \geq
4\kappa$. Then by Behrstock inequality, $d_Y^\sphericalangle
(X',X)\leq \kappa$.
\end{proof}
To facilitate notations, we will say that a property is true for almost all elements of a group if the property holds for all elements outside a certain finite subset of the group.
Using this lemma four times, together with triangle inequality, one
gets:
\begin{lemma} (Orbit estimates for proper isotropy)\label{lem;orbit_estimates}
Let $X_1,X_2, X'_1, X'_2$ be such that $X_1, X_2 \in
{\rm Act}(Y)$. Assume that $X'_i$ is either in ${\rm Act}(Y)$ or in ${\rm Act}(X_i)$.
If the groups $\Gamma_{X_1}$ and $\Gamma_{X_2}$ have proper isotropy,
then for almost all elements $\gamma_1\in \Gamma_{X_1}$ and $\gamma_2 \in
\Gamma_{X_2}$, one has $$d^\sphericalangle_Y( \gamma_1 (X'_1),
\gamma_2 (X'_2) ) -4\kappa \leq d^\sphericalangle_Y(X_1,X_2) \leq
d^\sphericalangle_Y( \gamma_1 (X'_1), \gamma_2(X'_2) ) +4\kappa.$$
\end{lemma}
Recall that we chose $K > 2\Theta +\kappa$.
\begin{prop}(Ellipticity)
Given $X \in \bbY_*$, and any $j\leq m$, the group $G_X$
has an orbit in $\calP_K(\bbY_j)$ of diameter at most $1$.
\end{prop}
\begin{proof}
If $j=i(X)$, and more generally, if $G_X$ fixes an element $Y\in
\bbY_j$, it is obvious. Assume then that $\bbY_j \subset {\rm Act}(X)$.
The group $G_X$
preserves the set $\{Z \in \bbY_j, \bbY^j_{K_0} (X, Z) =
\emptyset\}$ for any
$K_0$, hence for $K_0 = (K-\kappa)/2\geq \Theta$.
Consider $Z_a, Z_b$ in this set, we claim that
$\bbY^j_{K} (Z_a,Z_b)$ is empty. Assume $Y\in \bbY^j_{K}
(Z_a,Z_b)$.
Since $Y\in
{\rm Act}(X)$ we can consider $d_Y^\sphericalangle (Z_a, X)$ and $d_Y^\sphericalangle
(Z_b, X)$. By triangle inequality, $d_Y^\sphericalangle (Z_a, X) +d_Y^\sphericalangle
(Z_b, X) \geq d_Y^\sphericalangle (Z_a, Z_b) -\kappa \geq
K-\kappa$. Thus, one of them needs to be larger than $(K -
\kappa)/2$ hence $Y$ is either in $ \bbY_{K_0} (X, Z_a)$ or
in
$\bbY_{K_0} (X, Z_b)$, and this is a contradiction to our assumption.
\end{proof}
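The choice of $K_0$ in this proof uses only the standing assumption on $K$; explicitly (a restatement):
$$ K_0 = \frac{K-\kappa}{2} \;>\; \frac{(2\Theta+\kappa)-\kappa}{2} = \Theta, \qquad K_0 + K_0 = K-\kappa, $$
so if $d_Y^\sphericalangle(Z_a,Z_b)\geq K$, then at least one of $d_Y^\sphericalangle(Z_a,X)$, $d_Y^\sphericalangle(Z_b,X)$ is at least $K_0$.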
\begin{prop} (Induced orders)\label{prop;induced_order}
Consider $X, Z \in \bbY_*$, with $Z\in {\rm Act}(X)$. Assume that
$\Gamma_X, \Gamma_Z$ are infinite subgroups of $G_X, G_Z$ with proper isotropy.
For all $i\leq m$, for all $M \geq \Theta + 12\kappa$, the set
$\bbY^i_{M} (X, Z)$ is finite, and carries a partial order $\dot{<}$ that is
given by the order of $ \bbY^i_{M -4\kappa} ( \gamma_X
(X^i), \gamma_Z Z^i)$, for arbitrary $X^i$, $Z^i$
in $\bbY_i$,
and almost all $\gamma_X\in \Gamma_X$ and $\gamma_Z\in \Gamma_Z$.
\end{prop}
\begin{proof}
Let us first check that the set is finite. We may assume
that there are $X^i\in
{\rm Act}(X)\cap \bbY_i$ and $Z^i\in {\rm Act}(Z)\cap \bbY_i$,
otherwise $\bbY^i_{M} (X, Z)$ is empty. By Lemma
\ref{lem;transfert}, there exist $\gamma_X\in \Gamma_X$ and
$\gamma_Z\in \Gamma_Z$ such that each $Y\in
\bbY^i_{M} (X, Z)$ is in one of the four sets
$\bbY^i_{M-3\kappa} (\eta_X X^i, \eta_Z Z^i)$ for $\eta_X
\in \{1, \gamma_X\}$ and $\eta_Z
\in \{1, \gamma_Z\}$. The union of these four sets is
finite by the properness axiom.
We now need to check that the order on $ \bbY^i_{M -4\kappa} (\gamma_X
(X^i), \gamma_Z Z^i)$ includes all $\bbY^i_{M} (X,
Z)$ and does not depend on the choice of
the points $X^i, Z^i$. By Lemma
\ref{lem;orbit_estimates}, for
arbitrary choice of
points, and for any $Y \in
\bbY^i _{M} (X, Z)$, there is a finite subset of $\Gamma_X$ and one of
$\Gamma_Z$ such that for all elements $\gamma_X$, $\gamma_Z$ outside
these finite subsets,
$Y \in \bbY^i_{M -4\kappa} (\gamma_X
X^i, \gamma_Z Z^i)$ (the finite subsets depend on the choice of
$X^i, Z^i$ though). Since $\bbY^i _{M} (X, Z)$
is finite, we may find finite subsets of $\Gamma_X$ and $\Gamma_Z$
suitable for all of them.
Thus, for almost all $\gamma_X, \gamma_Z$,
all $ \bbY^i_{M -4\kappa} (\gamma_X
(X^i), \gamma_Z( Z^i) )$ is ordered, and the order, once
the points $X^i, Z^i$ are chosen, does not depend on $\gamma_X, \gamma_Z$.
Assume that for two different choices of points $X^i,
Z^i$, namely
$(X_a^i, Z_a^i)$ and $(X_b^i, Z_b^i )$,
the orders are different, and take $Y_1, Y_2$ such that
$Y_1\dot{<}_a Y_2$ for the first order, and
$Y_2\dot{<}_b Y_1$ for the other.
$Y_1\dot{<}_a Y_2$ means that $d_{Y_1} (Y_2, \gamma_Z ( Z_a^i )
) \leq \kappa$. By the orbit estimate, $d_{Y_1}^\sphericalangle (Y_2, Z )
\leq 5\kappa$ for suitable $\gamma_Z$.
$Y_2\dot{<}_b Y_1$ means that $d_{Y_1} (Y_2, \gamma_X ( X_b^i )
) \leq \kappa$, and by the orbit estimate, $d_{Y_1}^\sphericalangle (Y_2, X )
\leq 5\kappa$. Finally, by coarse triangular inequality,
$d_{Y_1}^\sphericalangle (Z, X) \leq 11\kappa$,
contradicting the assumption that $Y_1$ is in $\bbY^i_{M} (X, Z)$.
\end{proof}
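The final constant comes from combining the two orbit estimates with one coarse triangle inequality (restating the computation above):
$$ d_{Y_1}^\sphericalangle(Z,X) \;\leq\; d_{Y_1}^\sphericalangle(Z, Y_2) + d_{Y_1}^\sphericalangle(Y_2, X) + \kappa \;\leq\; 5\kappa + 5\kappa + \kappa = 11\kappa, $$
while $Y_1\in \bbY^i_{M}(X,Z)$ requires $d^\sphericalangle_{Y_1}(X,Z)\geq M \geq \Theta + 12\kappa$, whence the contradiction.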
\subsection{Convexity}
\begin{defi}(Convexity)
Let $L>10\kappa$. We say that a subset $\calW \subset \bbY_*$ is
$L$-convex if:
for all $i$, for all $X, Z \in \calW\cap \bbY_i$, for
all $j$, the set $\bbY^j_{L} ( X , Z)$ is a subset
of $\calW$.
Let now $\calL=(L(1), \dots, L(m))$ be a $m$-tuple of positive numbers.
A subset $\calW$ of $\bbY_*$ is said to be $\calL$-convex if for all $X,Z
\in \calW$ of the same coordinate $i(X) = i(Z)$, and for all $j$, the
set $\bbY^j_{L(j)}(X,Z) $ is a subset of $ \calW$.
\end{defi}
Note that being $L$-convex, for $L>0$ is equivalent to being
$(L, \dots, L)$-convex.
\begin{defi}
Let $\calW\subset \bbY_*$ be non-empty, and let $R\in \bbY_*\setminus \calW$ be such that ${\rm Act}(R) \cap \calW$
is non-empty. Let $L\geq 10\kappa$.
Define $\bbY_L(\calW, R)$ as the set
of $Y\in \bbY_*$ satisfying the following.
\begin{itemize}
\item $Y\in {\rm Act}(R)$
\item $Y\notin \calW$
\item $\calW \cap {\rm Act}(R)\cap{\rm Act}(Y)$ is non-empty, and
for all $X\in\calW \cap {\rm Act}(R)\cap{\rm Act}(Y)$, one has $Y\in
\bbY^{i(Y)}_L(X, R)$.
\end{itemize}
\end{defi}
\begin{prop}\label{prop;intervals_are_finite}
Assume that for all $X\in \calW$, $\calW$ is invariant under
an infinite group $\Gamma_X$ of rotations around $X$
with proper isotropy.
If $L\geq \Theta + 12\kappa$,
then for all $R$ for which it is defined, the set $\bbY_L(\calW, R)$ is finite.
\end{prop}
\begin{proof}
From the definition, $\bbY_L(\calW, R) \subset
\bigcup_i \bigcap_{X\in{\rm Act}(R)\cap \calW} ( \bbY^{i}_L(X,
R) \cup (\bbY_i \setminus {\rm Act}(X)))
$. By the finite filling assumption on the projection system, there
is a finite collection of elements $X_j\in \calW\cap
{\rm Act}(R)$ such that $\cup_j
{\rm Act}(X_j)$ covers $\cup_{\calW\cap{\rm Act}(R)} {\rm Act}(X)$.
In particular, $\bbY_L(\calW, R)$ is inside a finite union of sets of
the form
$\bbY^{i}_L(X_j, R)$ which are finite by Proposition
\ref{prop;induced_order}.
\end{proof}
\begin{prop}\label{prop;included}
Assume that for all $X\in \calW$, $\calW$ is invariant under
an infinite group $\Gamma_X$ of rotations around $X$,
with proper isotropy. Let $L\geq \Theta +12\kappa$.
If $\calW$ is $(L-6\kappa)$-convex,
and if $S\in \bbY_L(\calW, R) $ then $\bbY_{L}(\calW, S)
\subset \bbY_{L-2\kappa} (\calW, R)$.
Moreover, if $\calW'$ contains $\calW$, then $\bbY_{L}(\calW', R)
\subset \bbY_{L} (\calW, R)$.
\end{prop}
\begin{proof}
Let $Y \in \bbY_L(\calW, S)$ be in coordinate $i$. There
exists $X\in \calW \cap {\rm Act}(Y)\cap {\rm Act}(S)$ such that $d^\sphericalangle_Y(X,S) \geq L$.
Assume
that $\tilde X \in {\rm Act}(R)\cap {\rm Act}(Y) \cap \calW$. If it
is not in ${\rm Act}(S)$, then $d^\sphericalangle_Y(\tilde X, S) <\kappa$
and $d_Y^\sphericalangle(\tilde X, X) > L-2\kappa$. Transferring
$\tilde X$ to the coordinate of $X$ (by invariance under
$\Gamma_{\tilde X}$), one has
$d_Y^\sphericalangle(\tilde X_{i(X)}, X) > L-6\kappa$. By convexity,
$Y\in \calW$, contradicting our assumption.
Therefore, $\tilde X \in {\rm Act}(S)$. Hence, by
definition of $\bbY_L(\calW,S)$, one has $d_Y^\sphericalangle(\tilde
X, S) \geq L$, but also $d_S^\sphericalangle(\tilde X, R) \geq
L$. It follows by transitivity of betweenness (Lemma
\ref{lem;bet_trans}) that
$d_Y^\sphericalangle(\tilde X, R) \geq L-2\kappa$.
The second assertion is a direct consequence of the definition.
\end{proof}
\begin{prop} \label{prop;firstguy}
Assume that for all $X\in \calW$, $\calW$ is invariant under
an infinite group $\Gamma_X$ of rotations around $X$,
with proper isotropy.
If ${\rm Act}(R)\cap\calW$ is not empty, then
for all $L\geq (2m+12)\kappa +\Theta$, there exists $Z\in
\bbY_L(\calW, R)$ such that $\bbY_{L-2m\kappa} (\calW, Z) = \emptyset$.
\end{prop}
\begin{proof}
Let us say that $R$ has $k$ $L$-links to $\calW$ if $\{ i,\
\bbY_{L}(\calW, R) \cap \bbY_i \neq \emptyset \}$ has $k$
elements.
For any such index $i$, take a minimal element $Z_i$ in
$\bbY_{L}(\calW, R) \cap \bbY_i $ for the order of
Proposition \ref{prop;induced_order}. Then, by
Proposition \ref{prop;included}, $\bbY_{L-2\kappa}
(\calW, Z_i)$ is included in $\bbY_{L}(\calW, R) $, thus
$Z_i$ has at most $(k-1)$ $(L-2\kappa)$-links to
$\calW$.
Iterating this choice at most $m$ times, we find an
element $Z$ that has no $(L-2m\kappa)$-links to
$\calW$. Therefore $\bbY_{L-2m\kappa} (\calW, Z) = \emptyset$.
\end{proof}
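For bookkeeping (this only restates the role of the hypothesis on $L$): each of the at most $m$ iterations lowers the constant by $2\kappa$, so at the end it is still
$$ L - 2m\kappa \;\geq\; \big( (2m+12)\kappa + \Theta \big) - 2m\kappa \;=\; \Theta + 12\kappa, $$
so the order of Proposition \ref{prop;induced_order} remains available at every step of the iteration.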
\begin{prop}\label{prop;osc_cvx}
Let $L\geq \Theta +12\kappa$. Consider $\calW$, and assume it is
$L$-convex, and that for all $X\in \calW$, there is
an infinite subgroup $\Gamma_X<G_X$ that leaves $\calW$ invariant
and that has proper isotropy.
If $\bbY_{L'}(\calW, R)$ is
well defined and empty, then $\calW\cup\{R\}$ is $(L+L'+5\kappa)$-convex.
\end{prop}
\begin{proof}
If $\calW\cap \bbY_{i(R)}$ is empty, there is nothing to
prove. We assume it is non-empty.
Consider $Y\in \bbY_{L+L'+5\kappa}(R, X)$ for some $X\in
\calW\cap \bbY_{i(R)}$, and assume that $Y\notin
\calW$. Notice that $Y\in {\rm Act}(R)$ though, and $X\in
{\rm Act}(R)$ since they have the same coordinate.
Hence, $X\in \calW\cap{\rm Act}(R)\cap
{\rm Act}(Y)$.
Let $X'$ be any other element of $\calW\cap{\rm Act}(R)\cap
{\rm Act}(Y)$. Transfer $X'$ to the coordinate $i=i(R)$, inside
$\calW$, by
$\Gamma_{X'}$.
There exists $X'_i \in \bbY_i \cap \calW$ such
that $d_Y^\sphericalangle(X', X'_i) \leq \kappa$. But, $\calW$ being
$L$-convex, one has $d_Y^\sphericalangle(X, X'_i) \leq L$. It
follows by the triangle inequality that
$d_Y^{\sphericalangle}(R,X')\geq L'+2\kappa$. Since this is true for
all $X'$ as above, it follows that $Y\in \bbY_{L'
+2\kappa}(\calW, R)$, contradicting our assumption.
\end{proof}
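The triangle inequality step in this proof can be spelled out as follows (a restatement):
$$ d_Y^{\sphericalangle}(R,X') \;\geq\; d_Y^{\sphericalangle}(R,X) - d_Y^{\sphericalangle}(X,X'_i) - d_Y^{\sphericalangle}(X'_i,X') - 2\kappa \;\geq\; (L+L'+5\kappa) - L - \kappa - 2\kappa \;=\; L'+2\kappa. $$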
\section{Composite rotating families and windmills}
We proceed to adapt the rotating families study of \cite{DGO} to
the context of composite projection systems.
\subsection{Definition}
\begin{defi}(Composite rotating family)\label{def;CRF}
A composite rotating family on a composite projection system,
endowed with an action of a group $G$ by isomorphisms,
is a family of subgroups $\Gamma_Y,
Y\in \bbY_*$ such that
\begin{itemize}
\item for all $X\in \bbY_*$, $\Gamma_X < G_X= {\rm
Stab}_G(X)$ is an infinite group of rotations around $X$, with proper isotropy,
\item for all $g\in G$, and all $X\in \bbY_*$, one
has $\Gamma_{gX}= g\Gamma_X g^{-1}$,
\item if $X\notin {\rm Act}(Z)$ then $\Gamma_X$ and $\Gamma_Z$ commute,
\item for all $i$, for all $X,Y, Z \in \bbY_i$, if $d_Y(X,Z)
\leq \Theta_P $
then for all
$g\in \Gamma_Y\setminus\{1\}$, $d_Y(X,gZ) \geq \Theta_{Rot}$.
\end{itemize}
\end{defi}
We will show the following.
\begin{theo} \label{theo;main}
Consider $\bbY_*$ a composite projection system.
If $\{\Gammaamma_Y, Y\in \bbY_*\}$ is a composite rotating family
for sufficiently large $\Theta_{Rot}$, then the group
$\Gammaamma_{Rot}$ generated
by $\bigcup_{Y\in \bbY_*} \Gammaamma_Y$,
has a partially commutative
presentation.
More precisely, two presentations of $\Gamma_{Rot}$ are
$$ \Gamma_{Rot} \simeq \langle \bigcup_{Y\in \bbY_*} \Gamma_Y \;|\; \begin{array}{cc} \forall Y, \forall Y'\notin {\rm Act}(Y) & [\Gamma_Y, \Gamma_{Y'}]=1 \\ \forall Y, \forall g\in \Gamma_{Rot} & \Gamma_{gY} = g\Gamma_Y g^{-1} \end{array} \rangle$$
and, for a certain $\calS\subset
\bbY_*$,
$$ \Gamma_{Rot} \simeq \langle \bigcup_{Y\in \calS} \Gamma_Y \;|\; \begin{array}{cc} \forall Y,Y' \in \calS, & \\ \forall w /\, Y\notin {\rm Act}( wY'), & [s,ws'w^{-1}]=1 \\ \forall s \in \Gamma_Y, s'\in \Gamma_{Y'} & \end{array} \rangle $$
\end{theo}
In these presentations, the relations of the groups
$\Gamma_{Y}$ that appear in the generating sets are considered implicit. Moreover the expression $\Gamma_{gY} = g\Gamma_Y g^{-1}$ refers to the following precise collection of formal relations: for all $\gamma$ in $\Gamma_Y$, for all $g \in \Gamma_{Rot} $, given the element $\gamma'\in \Gamma_{gY}$ equal to $ g\gamma g^{-1} $ (which exists by definition of composite rotating family), we add the relation $ (\gamma')^{-1}g\gamma g^{-1}=1 $. It is somewhat tautological, but necessary in a presentation over this generating set. The point of the second presentation is to avoid these tautological relations by reducing the generating set to a certain set of representatives of conjugacy classes of groups $\Gamma_Y$.
Unfortunately, it is not so easy to describe a priori the subset $\calS$. It is constructed recursively in a number of steps, by taking at each step orbit representatives of a certain subset of $\bbY_*$ under the action of the group generated by the $\Gamma_Y$ that have been collected so far in the process. In principle, it probably can be enumerated explicitly, but at the cost of noticeably complicating the exposition.
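As a toy illustration of the commutation relations (ours, not part of the statement): if $Y'\notin {\rm Act}(Y)$ and $\Gamma_Y=\langle a\rangle$, $\Gamma_{Y'}=\langle b\rangle$ are infinite cyclic, then the relations of the first presentation involving only $a$ and $b$ reduce to
$$ \langle a, b \;|\; [a,b]=1 \rangle \simeq \bbZ^2, $$
which is the sense in which the presentation is partially commutative; compare with the starting windmill $\calW(0)$ below, whose group is a direct product.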
The following result is, from our point of view, an incarnation of the
Greendlinger lemma from small cancellation theory. If one
considers a relation $\gamma$ of the quotient group, one can find
in it a large part of a defining relation $\gamma_s$. Compare to
\cite[\S 5.1.3]{DGO}.
Let us consider $\Gamma_{Rot} $ as in the previous theorem, and $\gamma \in \Gamma_{Rot}$. A principal coordinate for $\gamma$ is a coordinate $i\leq m$ for which, for all $X\in \bbY_{i}$, there exists $R\in \bbY_{i}$ with $d_R(X, \gamma X) >\Theta_{Rot} -2\Theta_P -\kappa$ (the constants are somewhat ad hoc, chosen for the counting arguments to flow properly). In that case, a shortening pair $(R, \gamma_s)$ for $\gamma$ in a principal coordinate $i$, at $X\in \bbY_{i}$, is a pair consisting of an element $R$ of $\bbY_{i}$, and of an element $\gamma_s \in \Gamma_{R}$ such that
$d_R(X, \gamma_s \gamma X) \leq 2\Theta_P +3\kappa$.
\begin{theo}\label{theo;mainGreendlingerLemma}
Consider $\bbY_*$ a composite projection system.
If $\{\Gammaamma_Y, Y\in \bbY_*\}$ is a composite rotating family
for sufficiently large $\Theta_{Rot}$, let
$\Gamma_{Rot}$ be the group generated
by $\bigcup_{Y\in \bbY_*} \Gamma_Y$.
Then
for all $\gamma\in \Gamma_{Rot}\setminus \{1\}$, there
is $i(\gamma)\leq m$ a principal coordinate for $\gamma$ and
$(R, \gamma_s)$ a shortening pair for $\gamma$ in that coordinate.
\end{theo}
A major tool for analysing rotating families was the concept of
windmills. We are going to use composite windmills.
Let us fix $\calL$ the $m$-tuple $$\calL =( c_* +20(m-1)\kappa ,
\, c_*
+ 20(m-2)\kappa, \, \dots, \, c_* +20\kappa, \, c_*
).$$ Let $\sigma$ be the cyclic shift on $\bbZ/m\bbZ$: $\sigma(i)
= (i-1)$, and define $\calL_{j} = \sigma^{j-1} (\calL)$ obtained by
shifting the coordinates of the $m$-tuple.
Thus
$\calL_i$ reaches its maximum $ c_*+20(m-1)\kappa$ on the coordinate
$i$, minimal value $c_*$ at $i-1$. Note that the maximum of
$\calL$ is less than $\Theta_P -\kappa$.
\begin{defi}(Composite windmills)\label{def;CW}
A composite windmill is a collection $(\calW_1, \dots, \calW_m,
G_W, j_0)$ in which
\begin{itemize}
\item $G_W$ is the subgroup of $G$ generated by a set of
subgroups $\{\Gamma_Y, Y\in
\bigcup_{i\in I_*} \calW_i\}$ for $I_*$ either $\{1, \dots,
m\}$ or $\{1, \dots,
m\} \setminus \{j_0\}$,
\item $\calW_i$ is a subset of $\bbY_i$ for all $i$, invariant
under $G_W$,
\item $j_0$ is called the principal coordinate, and $1\leq j_0\leq m$,
\item $ \bigcup_i \calW_i$ is $\calL_{j_0}$-convex.
\item The group $G_W$ has a partially commutative presentation,
that is, a presentation of the form $$ G_W\simeq \langle \calS \,
|\, \calR\rangle$$ where $\calS$ is the union over a subset
$\calW_*$ of
$\calW$ of generating sets for $\Gamma_X, X\in\calW_*$, and
$\calR$ consists of words over the alphabet
$\calS \cup \calS^{-1} $ of the form $[s, ws'w^{-1}]$ for $w$ a word over
$\calS \cup \calS^{-1}$. Moreover, if $X,X' \in \calW_*$ and $s\in
\Gamma_X, s'\in \Gamma_{X'} $, the word $[s, ws'w^{-1}] $ is in $\calR$ if and only if $wX' \notin {\rm Act}(X)$.
\item (Greendlinger property) for each $\gammaamma \in
G_W$ there is $i(\gammaamma)\leq m$, and for all $X\in
\calW_{i(\gammaamma)}$, either $\gammaamma \in \Gammaamma_X$, or there is an $R\in
\calW_{i(\gammaamma)}$ such that
$d_R(X, \gammaamma X) >\Theta_{Rot}
-2\Theta_P -\kappa$. Moreover, there is a $\gammaamma_s\in \Gammaamma_{R}$ such
that $d_R(X, \gammaamma_s \gammaamma X) \leq 2\Theta_P +3\kappa$ (the pair
$(R, \gammaamma_s)$ is called a shortening pair for
$\gammaamma$ at $X$).
\end{itemize}
We say that the composite windmill has full group if $G_W$ is the subgroup of $G$ generated by $\{\Gamma_Y, Y\in
\bigcup_{i=1}^m \calW_i\}$.
\end{defi}
If we do not mention it, our windmills will be full. Only in
specific circumstances do
we need non-full windmills. Indeed, we will use the case of a non-full group at most once per
coordinate, when
initiating the process in that coordinate.
\begin{prop}
In a composite windmill $\mathcal{W}$, for all $i$ such that $\mathcal{W}_i \neq \emptyset$, $\mathcal{W}_i$ is connected in $\calP_K(\bbY_i)$.
\end{prop}
\begin{proof} Consider two points $X, X'$ in it. By \cite[Thm. 3.7]{BBF} (more precisely the first claim in its proof), there exists a path between them, $X_1, \dots, X_n= X'$, such that for each $j$, $X_j \in \bbY^i_K(X,X')$. Since $K> \max (\calL) $, it follows that each $X_j$ is in $\mathcal{W}_i$.
\end{proof}
We say that a windmill $\calW'$ (with its representative set $\calW'_*$
used for the presentation of the definition) is \emph{constructed over} $\calW$ if
$\calW\subset \calW'$ and if the set of representatives $\calW'_*$
contains the set of representatives $\calW_*$. Note that it is transitive: if $\calW''$ is constructed over $\calW'$, and $\calW'$ is constructed over $\calW$, then $\calW''$ is constructed over $\calW$.
\subsection{Osculations of two kinds}
\begin{itemize}
\item An osculator of type {\it gap} of a composite windmill $(\calW_1, \dots, \calW_m,
G_W, j_0)$
is an element $R$ of $\bbY_{j_0} \setminus
\calW_{j_0}$ such that there
exists $i\leq m$, $X_i, Z_i \in \calW_i$, that are in ${\rm Act}(R)$
and such that $d_R^\sphericalangle(X_i,Z_i) > \frac{c_*}{2} -20\kappa$.
\item An osculator of type {\it neighbor} of a
composite windmill $(\calW_1, \dots, \calW_m,
G_W, j_0)$
is an element
$R$ of $\bbY_{j_0} \setminus \calW_{j_0}$ such that $\bbY_{\frac{c_*}{2}}(\calW, R) = \emptyset$.
\end{itemize}
\begin{lemma} \label{lem;approx_gap} Consider a composite windmill $\calW = (\calW_{1}, \dots, \calW_m,
G_W, j_0)$, assume that $\calW_{j_0} \neq \emptyset$, and let $R \in \bbY_{j_0}$ be an osculator of type gap.
Let $Y\in \bbY_i$ be in ${\rm Act}(R)$. Then there exists $X\in \calW_{j_0}$ such that
$d_Y^\sphericalangle(X,R) \leq \kappa$.
\end{lemma}
\begin{proof}
If $R$ is an osculator of type gap, there are
$X', Z' \in \calW_i$, for some $i$, such that $d^\sphericalangle_{R}(X',Z')
>c_*/2 -20\kappa$.
Let $X_0\in \calW_{j_0}$, and
consider its orbit under the groups $\Gamma_{X'}$ and
$\Gamma_{Z'}$, which
preserve $\calW_{j_0}$.
We may use Lemma \ref{lem;orbit_estimates} to find
$X'^{(j_0)}, Z'^{(j_0)}$ in these orbits, hence in $\calW_{j_0}$, such that $d^\sphericalangle_{R}
(X'^{(j_0)}, Z'^{(j_0)}) >c_*/2-24\kappa$.
By the coarse
triangle inequality, for at least one point among $X'^{(j_0)},
Z'^{(j_0)}$, say $X'^{(j_0)}$, we have $d^\sphericalangle_{R} (Y, X'^{(j_0)})
>c_*/4 - 13\kappa$. Behrstock inequality gives $d^\sphericalangle_{Y} (R,
X'^{(j_0)}) \leq \kappa$.
\end{proof}
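The constant $c_*/4-13\kappa$ above is simply obtained by halving after one coarse triangle inequality (a restatement):
$$ \frac{(c_*/2 - 24\kappa)-\kappa}{2} \;=\; \frac{c_*}{4} - 12.5\,\kappa \;\geq\; \frac{c_*}{4} - 13\kappa. $$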
\begin{lemma}\label{lem;i=1}
Let $\calW$ be a composite windmill, and $R_1, R_3$ be two osculators
of $\calW$. Assume $\calW_{j_0} \neq \emptyset$, and
let $X_2 \in \calW_{j_0}$.
If $R_3$ is of type neighbor and $\calW$ is $(\frac{c_*}{2}
-20\kappa)$-convex, then $d_{R_1}(X_2, R_3) \leq c_* $.
If $R_3$ is of type gap, then
$d_{R_1}(X_2, R_3) \leq \Theta_P $.
\end{lemma}
\begin{proof}
If $R_3$ is an osculator of neighbor type, then the result
follows from Proposition \ref{prop;osc_cvx}.
If now $R_3$ is an osculator of type gap, the proof is slightly
more involved. There is $i$, and there are $X,Z \in \calW_i$
such that $ d_{R_3}^\sphericalangle (X,Z) > c_*/2-20\kappa$.
Since $\calW_{j_0}$ is non-empty, and invariant for $\Gamma_{X}$ and
$\Gamma_Z$, we can apply Lemma \ref{lem;orbit_estimates} and find
$X^{(j_0)} , Z^{(j_0)} \in \calW_{j_0}$ such that $d_{R_3} (
X^{(j_0)} , Z^{(j_0)} ) \gammaeq d_{R_3}^\sphericalangle (X,Z) -4\kappa$
which is $\gammaeq c_*/2-24\kappa$. By coarse triangular
inequality, at least one of the quantities $d_{R_3} ( R_1,
X^{(j_0)} )$ and $ d_{R_3} (R_1, Z^{(j_0)})$ is greater than
$c_*/4 -13\kappa$. Say it is $d_{R_3} ( R_1,
X^{(j_0)} )$. Behrstock inequality then gives that $d_{R_1} ( R_3,
X^{(j_0)} ) \leq \kappa$, and again coarse triangular
inequality gives $d_{R_1} (
X^{(j_0)} , X_2 ) \gammaeq d_{R_1}(X_2, R_3) - \kappa$. Since
the first is bounded by the maximal convexity constant of
$\calW$, the result follows.
\end{proof}
\subsection{The unfolding in the different coordinates}
Given a composite windmill $\calW$, we will define its unfolding.
Observe first the following, which justifies the next definition of admissible set of osculators.
\begin{lemma} \label{lem;keepwalking}
If $\calW$ is a composite windmill, it has some gap osculator if and only if it is not $(\frac{c_*}{2}-20\kappa)$-convex.
Assume that for all $R\in \bbY_*$, ${\rm Act}(R) \cap \calW \neq \emptyset$. If $\calW$ is $(\frac{c_*}{2}-20\kappa)$-convex, and yet does not contain $\bbY_*$, then there exists a neighbor osculator.
\end{lemma}
\begin{proof}
The first assertion is direct from the definitions. To prove the second assertion, take $X \notin \calW$. By Proposition \ref{prop;firstguy} there is $Z$ in $\bbY_{\frac{c_*}{2} + 2m\kappa}(\calW, X) \cup\{X\}$ such that $\bbY_{\frac{c_*}{2} }(\calW, Z) = \emptyset$. It is therefore a neighbor osculator of $\calW$.
\end{proof}
We define now admissible sets of osculators of a composite windmill $\calW$ that does not cover the entire set $\bbY_*$.
If $\calW$ is not $(\frac{c_*}{2}-20\kappa)$-convex, then the (only) admissible set of osculators for $\calW$ is the
set $\calR_{gap}$ of osculators of type gap in $\bbY_{j_0}$. Note that it can be the empty set if the gap osculators are not in the coordinate $j_0$.
If $\calW$ is $(\frac{c_*}{2}-20\kappa)$-convex
(but does not cover the entire set $\bbY_*$), then an admissible set of osculators for $\calW$ is a set $\calR=\{ G_W R \} $ for a choice of an osculator $R$ (necessarily of type neighbor).
We define the unfolding of $\calW$ as follows.
\begin{defi}(Unfolding)\label{def;unfolding}
Let $\calW = (\calW_1, \dots,
\calW_m, G_W, j_0)$
be a composite windmill that does not contain the entire set $\bbY_*$,
and $\calR$ be an admissible set of osculators.
Define,
for all $i$, $\calW'_{i}$ to be the union of all the
images of $\calW_{i}$ by elements of the group $G_{W'}$ generated by
$G_W \cup \bigcup_{R \in \calR} \Gamma_R$.
The unfolding
of $\calW$ is then
$(\calW_1', \dots, \calW_m', G_{W'}, j_0+1)$,
where $j_0+1$ is taken modulo $m$.
If $\calW$ contains $\bbY_*$, its unfolding is $\calW' = \calW$.
\end{defi}
Here is an obvious lemma.
\begin{lemma} (Trivial unfolding) \label{lem;emptyR}
Let $\calR$ be a choice of an admissible set of osculators of
$\calW$. If $\calR$ is empty, then the unfolding
$\calW' = (\calW_1, \dots,
\calW_m, G_W, j_0+1)$ is a composite windmill.
\end{lemma}
We thus concentrate on the case where $\calR$ is non-empty.
In the case $\calW_{j_0}$ is empty, we include here a convexity result
for an intermediate step in the construction: adding an
admissible set of osculators $\calR$,
which produces a
non-full composite windmill.
\begin{lemma}\label{lem;calRisconvex}
Assume that $\calW$ is a full composite windmill of principal
coordinate $j_0$, with $\calW_{j_0} = \emptyset$.
Let $\calW_{j_0}^s$ be a set $\calR$ of admissible osculators
as defined above, assumed non-empty.
For all other
coordinates, let $\calW_i^s = \calW_i$.
Then $\calW^s= (\calW_1^s, \calW_2^s, \dots, \calW_m^s, G_W, j_0)$
is a non-full composite windmill of principal
coordinate $j_0$. If moreover $\calR$ is the orbit of a
neighbor osculator, and if $\calW$ is
$(\frac{c_*}{2}-20\kappa)$-convex, then $\calW^s$ is $B$-convex, for $B= \frac{c_*}{2}+10\kappa \leq \inf \calL$.
\end{lemma}
\begin{proof}
If $\calR=\emptyset$, there is nothing to prove.
Consider the case of the orbit of a neighbor osculator.
It suffices to check that $\calW_{j_0}^s (= G_W R)$ is convex in the
sense that for all $\gammaamma\in G_W$ and all $i$
the set
$\bbY^i_{B} (R, \gammaamma R)$ is in $\calW_i$.
By the Greendlinger Property,
given $\gammaamma$, there
exists $j$, and $Y_j\in
\calW_j$ such that $d_{Y_j}(R, \gammaamma R) >\Theta_{Rot} -
2\Theta_P -\kappa$, or $R=\gammaamma R$ (if $R$ is not active for all the shortening
pairs of $\gammaamma$).
Of course we consider only the first case of the alternative.
Assume that some $Y\in \bbY_i $ is in $\bbY_{B}^{(i)} (R,
\gammaamma R)$.
If $Y\notin {\rm Act}(Y_j)$, then one can use a shortening pair at $Y_j$ to
reduce the length of $\gamma$ in its principal coordinate, and this
shortening pair gives $\gamma'$ such that $d_Y^\sphericalangle(R, \gamma R) =
d_Y^\sphericalangle(R, \gamma' R)$. Thus, $Y\in \bbY^{(i)}_{B} (R, \gamma' R)$
as well, and by performing this reduction sufficiently many times, we
may assume that $Y\in {\rm Act}(Y_j)$.
By Lemma \ref{lem;transfert}, either $R$ or $\gammaamma R$ approximates by
$\kappa$ the projection of $Y_j$ on $Y$.
Say that $d^\sphericalangle _Y(\gammaamma R, Y_j) \leq \kappa$. By osculation
if $Y\notin \calW_i$, one has $d_Y^\sphericalangle(Y_j, R) \leq \frac{c_*}{2}
$. Therefore,
one has $d_Y^\sphericalangle(\gammaamma R, R)\leq d^\sphericalangle _Y(\gammaamma R, Y_j) +d^\sphericalangle
_Y( R, Y_j) +\kappa \leq \frac{c_*}{2} +2\kappa $ which is less than $B$.
If now $d^\sphericalangle _Y( R, Y_j) \leq \kappa$, one has $d_Y^\sphericalangle (R,\gammaamma
R) $ is within $2\kappa$ from $d_Y^\sphericalangle (Y_j ,\gammaamma
R) $, which equals $d_{\gammaamma^{-1}Y}^\sphericalangle (\gammaamma^{-1} Y_j ,
R) $. Of course, $Y\notin \calW_i$ if and only if $\gammaamma^{-1}Y \notin
\calW_i$, hence, if it is the case, by osculation of $R$,
$d_Y^\sphericalangle(Y_j, \gammaamma R) \leq \frac{c_*}{2}$,
and $d_Y^\sphericalangle(\gammaamma R, R)\leq \frac{c_*}{2} +
2\kappa \leq B$.
In the case where $\calR$ is the set of gap osculators, the proof is
similar. Indeed, if $R_1$ is a gap between $X_1$ and $Z_1$, and
$R_2$ is a gap between $X_2$ and $Z_2$, and if $Y$ is
between $R_1$ and $ R_2$, so that $d_Y^\sphericalangle(R_1,
R_2) \geq c_*+20(m-1)\kappa\ (=\calL_{j_0}(j_0))$, then $Y$ is also between $X_1$ (or $Z_1$) and
$X_2$ (or $Z_2$), so that, say, $d_Y^\sphericalangle(X_1,
X_2) \geq c_*+20(m-1)\kappa -3\kappa$. One can transfer $X_2$ to the
coordinate of $X_1$ by Lemma \ref{lem;transfert}, in $\calW$ (in the
$\Gammaamma_{X_2} $-orbit of $X_1$). The convexity of $\calW$
then shows that $Y\in \calW$.
\end{proof}
The aim of the next sections is to prove the following.
\begin{prop} \label{prop;unfold_main} If $\calW= (\calW_1, \dots, \calW_m, G_{W}, j_0)$ is a
(full) composite windmill, and $\calR$ is an admissible set of osculators, then the unfolding $\calW'= (\calW'_1, \dots, \calW'_m, G_{W'}, j_0+1)$ is a
(full) composite windmill, and $\calW'_*$ can be chosen to contain
$\calW_*$ (in other words, $\calW'$ is constructed over $\calW$).
\end{prop}
\subsubsection{Unfolding a tree}
\begin{prop} (Principal coordinate tree) \label{prop;principal_tree}
Consider a full composite windmill $\calW$, of
principal coordinate $j_0$.
Let $\calR\neq \emptyset$ be an admissible set of osculators as defined in
the previous section. If $\calW_{j_0} = \emptyset$, let $
\calW^s_{j_0}=\calR$, and otherwise let
$\calW^s_{j_0}=\calW_{j_0} $.
There exists a $G_{W'}$-tree $T$, bipartite, with black
and white vertices, with an equivariant
injective map $\psi : T\to \calP(\bbY_{j_0})$ (the set of
subsets of $\bbY_{j_0}$) that sends black vertices to images
of osculators by $G_{W'}$, and white vertices to
images of $\calW^s_{j_0}$ by $G_{W'}$, and that sends the
neighbors (in $T$) of the preimage of $ \calW^s_{j_0}$ to
$\calR$.
Moreover, for any pair of distinct white vertices $w_1, w_2$,
and any black vertex $v$ in the interval between them (in $T$), and
any $X_1\in \psi(w_1), X_2\in \psi(w_2)$, one has
$d_{\psi(v)}(X_1,X_2) \gammaeq \Theta_{Rot} - 2\Theta_P -\kappa$.
Finally, if $w_1, w_2$ are white vertices for which the paths
from a black vertex $v$ start with the same edge, then for any
$X_1\in \psi(w_1), X_2\in \psi(w_2)$, one has
$d_{\psi(v)}(X_1,X_2) \leq 2\Theta_P +3\kappa$.
\end{prop}
\begin{proof}
Take a transversal $\calR^t$ of $\calR$ under
the action of $G_W$. For each $R\in \calR^t$, let $(G_W)_R$ be
the subgroup of $G_W$ generated by $\bigcup_{X \in \calW \setminus {\rm Act}(R)}
\Gamma_X$.
Set $T$ to be the Bass-Serre tree of the
(abstract) graph of groups whose vertex groups are $G_W$ and
the groups $\Gamma_R \times (G_W)_R ,
R\in \calR^t$, and the edges are the pairs $(G_W, R), R\in
\calR^t$, and the edge groups are the groups $(G_W)_R$.
Let $\widetilde{G_{W'}}$ be the fundamental group of this graph of
groups. The group $G_{W'}$ is a quotient of this group, since
it is generated by $G_W$ and the stabilizers of elements $R$ of
$\calR^t$, which, by assumption (Definition \ref{def;CRF}), are
direct sums of their rotation group
with the groups $(G_W)_R$.
$T$ is
a tree, endowed with a $\widetilde{ G_{W'}}$-action, bipartite,
and with an equivariant (with respect to $\widetilde{G_{W'}}
\onto G_{W'}$)
map $\psi\,:\, T\to \calP(\bbY_{j_0})$ that sends black vertices to images
of elements of $\calR$ by $G_{W'}$, and white vertices to
images of $\calW^s_{j_0}$
by $G_{W'}$.
We need to show that it is injective, and at the same time,
we will show the estimate of the end of the statement.
Consider a path $p$
of
$T$, starting and ending at a white vertex. Up to cyclic permutation, and up to
the group action, we may assume that the path $p$ starts at
the vertex fixed by $G_W$, and its second vertex is fixed by
some $R_1\in \calR^t$, and that its length is even.
Let us denote by $p_0, p_1, \dots , p_N$ the consecutive
vertices of $p$, and let $X_{2i}$ be a choice of an element of
$\psi(p_{2i})$, and $R_{2i+1} = \psi(p_{2i+1})$.
The monotonicity property in the coordinate $j_0$ says that
if $d_Y(X,Z) \gammaeq \Theta$ then $d_W(X,Z) \gammaeq d_W(X,Y)$.
We
will use it in an induction
to establish that for all $k$ odd, and all $i$ in $1 \leq i\leq
\frac{N-k}{2}$ and all $j$ in $1\leq j\leq \frac{k-1}{2} $,
one has $$ \begin{array}{cccc} & d_{R_k}(R_{k-2j}, R_{k+2i}) &\gammaeq & \Theta_{rot}
-2\Theta_P -\kappa \\
\forall X_{ s}\in \psi(p_{s}), & d_{R_k}(X_{k-2j+1},
X_{k+2i-1}) & \gammaeq & \Theta_{rot}
-2\Theta_P -\kappa \end{array}$$
The case $i=j=1$ is treated as follows. Choose $k$.
We first show how a black vertex separates two adjacent white
vertices. Note that there is $X'_{k+1} \in \psi(p_{k+1})$ that equals
$g X_{k-1}$ for some $g \in
\Gamma_{R_{k}}\setminus \{1\}$. By convexity of
$\calW^s_{j_0}$ (ensured by assumption, or by
Lemma \ref{lem;calRisconvex} in case $\calW_{j_0}$ is empty),
$d_{R_{k}} (X_{k+1}, X'_{k+1}) \leq
\Theta_P$. And by assumption on the rotating groups,
$d_{R_{k}} ( X_{k-1}, X'_{k+1}) \gammaeq
\Theta_{Rot}$. Thus, $d_{R_{k}} ( X_{k-1}, X_{k+1}) \gammaeq
\Theta_{Rot}-\Theta_P-\kappa$, the second inequality.
By Lemma \ref{lem;i=1}, $d_{R_{k}}(X_{k+1}, R_{k+2}) \leq
\Theta_P$ and $d_{R_{k}}(X_{k-1}, R_{k-2}) \leq
\Theta_P$. By triangle inequality, we get $
d_{R_k}(R_{k-2}, R_{k+2}) \gammaeq \Theta_{rot}
-2\Theta_P-\kappa$. We have both inequalities.
Assume that the inequalities are proven for all $(i, j)$ such that
$i+j \leq i_0$ (and for all
$k$), and let us choose $k$ and $(i, j)$ with $i+j\leq i_0$, and prove the inequality
for $(i+1, j)$.
Set $Y= R_{k+2i}$ and
$W = R_{k}$. In the following we set either $Z=
R_{k+2i+2}$ or $Z = X_{k+2i+1}$, and either $X=R_{k-2j}$ or
$X = X_{k-2j+1}$.
By the inductive
assumption for $k'=k+2i$, $i'=1, j'= i $, one has $d_Y(W ,
Z)\gammaeq \Theta_{rot} -2\Theta_P-\kappa$.
Also for $k$, $i$ and $j$ the induction gives $d_{W}(Y,X)
\gammaeq \Theta_{rot} -2\Theta_P-\kappa$. Behrstock inequality
then provides $d_{Y}(W,X) \leq \kappa$ and therefore $d_Y(X ,
Z)\gammaeq \Theta_{rot} -2\Theta_P-3\kappa$. This is still far
above $\Theta$. One thus may apply the monotonicity property
and obtain $d_W(X,Z) \gammaeq d_W(X,Y)$. In other words, $$
d_{R_k}(R_{k-2j}, R_{k+2i+2}) \gammaeq \Theta_{rot}
-2\Theta_P-\kappa. $$
The inequality is also proven for $(i, j +1)$ in the same
manner, symmetrically. This finishes the induction.
In the end, we have obtained for $i=N/2-1$, and $k=1$,
$ d_{R_1} (X_{0}, R_{N-1}) \gammaeq
\Theta_{Rot} -\Theta_P$, and it follows that $ d_{R_1} (X_{0},
X_{N}) \gammaeq \Theta_{Rot} -2\times \Theta_P-\kappa$, which is the
estimate of the statement.
If we assume that $p$
is mapped to a loop, then $\calW_{j_0}$ contains both $X_0$ and
$X_N$, but not $R_1$ (it is an osculator), and the convexity of
$\calW_{j_0}$ imposes $\Theta_{Rot} -2\Theta_P-\kappa \leq
\Theta_P$, meaning $\Theta_{Rot} \leq 3\Theta_P+\kappa$,
and this contradicts our choice of $\Theta_{Rot}$.
It also follows from this analysis that if $w_1, w_2$ are
white vertices of $T$ and $v$ is a black vertex between them,
then $d_{\psi(v)} ( X_1, X_2 ) \geq \Theta_{Rot}- 2\Theta_P
-\kappa$ (in our induction above). A final use of
Behrstock inequality provides that whenever the path from
$v$ to a white vertex $w_1$ has more than three edges, then if $v'$ is the
first black vertex after $v$ on this path, and if $X_1\in
\psi(w_1)$, then
$d_{\psi(v)}(X_1, \psi(v')) \leq \kappa$. It follows from
that and Lemma \ref{lem;i=1} that if
$w_2$ is another white vertex whose path from $v$
starts with the same edge, then $d_{\psi(v)}(X_1, X_2) \leq
2\Theta_P + 3\kappa $.
\end{proof}
The previous proposition allows us to define, for each element $\gamma$ of
$G_{W'}$, its principal coordinate, and its principal
tree. Indeed, if $\gamma\in G_{W'}$ is not conjugate into $G_W$, the
proposition shows that it is either loxodromic or fixes
a black vertex of the tree $T$. Then
we define its principal coordinate as $j_0$ and its principal
tree as $T$. If it is in $G_W$, or conjugate into it, its principal coordinate
and its principal tree are defined inductively, according to
the process of unfoldings of composite windmills.
\subsubsection{Preservation of convexity}
\begin{prop}(Convexity of $\calW'$)\label{prop;unfolding_is_convex}
Let $\calW = (\calW_1, \dots, \calW_m, G_W, j_0)$ be a
composite windmill (possibly non-full).
Assume that $\calR$ is an admissible set of
osculators, and $\calW'$ the unfolding defined in
Definition \ref{def;unfolding}.
If $\calR$ consists of the orbit of a neighbor, then $\calW'$
is $c_*$-convex.
If $\calR$ consists of gap osculators, then $\calW'$ is
$\calL_{j_0+1}$-convex.
\end{prop}
The case of $\calR=\emptyset$ is trivial, so we assume
it is not empty.
\begin{proof}
If $\calR$ consists of the orbit of a neighbor, let $A(j)=
c_*$ for all $j$. If $\calR$
consists of gaps, let $A(j)= \calL_{j_0}(j) + 20\kappa$ (which is
less than $
\calL_{j_0}(j+1)$).
Let $X,Z \in \calW'_i$, and consider $Y\in \bbY_{A(j)}^{j} (X,Z)
$.
Here is our main claim.
We will show that $Y$ is a $G_{W'}$-translate of one of the
following types of elements:
\begin{itemize}
\item $Y'$ for which there exists $ X_f, Z_f \in \calW_{j_0}$ such
that $d_{Y'}^\sphericalangle(X_f, Z_f) \gammaeq A(j) - 10\kappa$;
\item $Y'$ for which there exists $ X_f \in \calW_{j_0}$, and $R$
an osculator of $\calW$ in $\calW'_{j_0}$ such that $d_{Y'}^\sphericalangle(X_f, R) \gammaeq A(j) - 10\kappa$;
\item $Y'$ for which there exists $R_1, R_2$ osculators of
$\calW$ in $\calW'_{j_0}$ such that $d_{Y'}^\sphericalangle(R_1, R_2) \gammaeq A(j) - 10\kappa$
\end{itemize}
We first prove this claim, and then use it to finish the proof.
{\it Transfer of $X$ and $Z$ to $\bbY_{j_0}$. } In $\calW'$,
the groups $\Gammaamma_X$ and $\Gammaamma_Z$ preserve
$\calW_{j_0}'$ which is not empty (it contains $\calR$).
Therefore, by Lemma \ref{lem;orbit_estimates} there are
$X^{(j_0)}, Z^{(j_0)}$ in $\calW'_{j_0}$ such that $d_Y^{\sphericalangle} (X^{(j_0)},
Z^{(j_0)}) \gammaeq A(j) - 4\kappa$.
{\it The interval in $T$.} Taking $\psi^{-1}$ of $X^{(j_0)}$ and of
$Z^{(j_0)}$ produces two vertices in the principal coordinate tree
$T$ of Proposition \ref{prop;principal_tree}. More precisely, each of $X^{(j_0)},
Z^{(j_0)}$ is either the image of a black vertex of $T$, or in the image of
a white vertex of $T$. This thus gives two vertices of $T$ that
we (slightly abusively) denote by $\psi^{-1}(X^{(j_0)}),
\psi^{-1}(Z^{(j_0)})$.
If these vertices are adjacent, we have achieved the second
point of the
claim. If these vertices are the same, we have achieved the
first point of the claim. If these vertices are different, both black with
only one white vertex in the interval, we have achieved the
third point of the claim.
Thus, we may assume that there is at least one black vertex of
$T$ in the open interval $(\psi^{-1}(X^{(j_0)}),
\psi^{-1}(Z^{(j_0)}))$. Let $R_1, \dots , R_N$ be the images by
$\psi$ of these black
vertices, in order starting from the side of $\psi^{-1}(X^{(j_0)})$.
By Proposition \ref{prop;principal_tree}, we have for all $i$, $d_{R_i}(X^{(j_0)}, Z^{(j_0)})
>\Theta_{Rot} - 2\Theta_P - \kappa$, which is
$>50\kappa$.
{\it Reduction to the case where $R_i\in {\rm Act}(Y)$}
If $Y$ is
equal to one of the $R_i$ then we fall in the first possibility
of the main claim. Thus, let us assume
that $Y$ is different from all the $R_i$.
We may assume that $Y$ is in ${\rm Act}(R_i)$ for all $i$. Indeed if it
was not, one could use an element of $\Gammaamma_{R_i}$ to reduce
the length of the path $p$, without changing the value of the
projection distance $d^\sphericalangle_Y(X^{(j_0)}, Z^{(j_0)})$ since $ \Gammaamma_{R_i}$ leaves
$d^\pi_Y$ invariant.
{\it Transfer of $Y$ in $\bbY_{j_0}$}. We may apply Lemma
\ref{lem;transfert} again, and find an element $Y^{(j_0)}$ in $\bbY_{j_0}$ (far in an
orbit of $\Gammaamma_Y$) such that, for all $i$, one has
$d_{R_i}^\sphericalangle ( Y, Y^{(j_0)})\leq 4\kappa$.
{\it Position of $Y^{(j_0)}$ in the order}.
Fix $0<i\leq N$. Since $d_{R_i}(X^{(j_0)}, Z^{(j_0)}) >50\kappa$, either
$d_{R_i} (X^{(j_0)}, Y^{(j_0)})$ or $d_{R_i} (Y^{(j_0)}, Z^{(j_0)})$ is larger
than $24\kappa$.
All $R_i$ are in $\bbY_{50\kappa}(X^{(j_0)}, Z^{(j_0)})$, therefore they
satisfy the order property in this set, which coincides with the
ordering of their indices.
By this order property and Behrstock
inequality, if for some $i$ one has $d_{R_i}(Y^{(j_0)}, X^{(j_0)})
>5\kappa$, then for all $i'<i$, one still has $d_{R_{i'}}(Y^{(j_0)}, X^{(j_0)})
>5\kappa$. Similarly if $d_{R_i}(Y^{(j_0)}, Z^{(j_0)})
>5\kappa$ then for all greater $i''$ the same holds.
Therefore we have three cases.
Either $d_{R_1}(Y^{(j_0)}, X^{(j_0)}) \leq 5\kappa$ or $d_{R_N}(Y^{(j_0)},
Z^{(j_0)}) \leq 5\kappa$, or there exists $i\geq 1$, largest such
that $ d_{R_i}(Y^{(j_0)}, X^{(j_0)}) > 5\kappa$, and $i<N$.
By symmetry, and translation by an element of $G_{W'}$, the first
and second cases have the same resolution. Let us treat the first
one. By triangle inequality, $d_{R_1}(Z^{(j_0)}, Y^{(j_0)})
>\Theta_{Rot} - 10\kappa - 2\Theta_P$ which is still greater than
$20\kappa$.
Going back to $Y$: $d_{R_1}^\sphericalangle (Z^{(j_0)}, Y) >16\kappa$. By
Behrstock inequality, $d_{Y}^\sphericalangle (Z^{(j_0)}, R_1) <\kappa$, and
finally by triangle inequality, $d_Y^\sphericalangle(X^{(j_0)}, R_1) \gammaeq A(j)
-2\kappa$. We are in the second point of the claim if $X^{(j_0)}$ is in a white
vertex, and in the third point if it is a black vertex.
We thus turn to the case where there exists $i\geq 1$, largest such
that $ d_{R_i}(Y^{(j_0)}, X^{(j_0)}) > 5\kappa$, and $i<N$.
One has
$$\begin{array}{ccl}
d_{R_{i+1}} (Y^{(j_0)}, Z^{(j_0)}) & > & \Theta_{Rot} - 2\Theta_P -10 \kappa \\
d_{R_{i+1}}^\sphericalangle (Y, Z^{(j_0)}) & > & \Theta_{Rot} - 2\Theta_P -14 \kappa \\
d_Y^\sphericalangle(R_{i+1}, Z^{(j_0)}) & \leq & \kappa \\
\end{array}
$$
and
$$\begin{array}{ccl}
d_{R_{i}} (Y^{(j_0)}, X^{(j_0)}) & \gammaeq & 5 \kappa \\
d_{R_{i}}^\sphericalangle (Y, X^{(j_0)}) & \gammaeq & \kappa \\
d_Y^\sphericalangle(R_{i}, X^{(j_0)}) & \leq & \kappa \\
\end{array}
$$
So, $d_Y^\sphericalangle (R_i, R_{i+1} ) \gammaeq A(j)-4\kappa $. We have the third
point of the claim, and the claim is established.
We need to finish the proof of the proposition. There are several cases to
treat. The easiest is when the first case of the claim occurs.
In that case, if $j=j_0$, $Y'$ is actually a gap osculator, hence in
$\calW'_{j_0}$. If $j\neq j_0$, by convexity of $\calW$, it is in
$\calW_j$.
Assume now that the second case occurs.
If $R$ is of type neighbor, it simply contradicts Proposition
\ref{prop;osc_cvx}.
If $R$ is an osculator of type gap between $X_0, X_1$, and $j=j_0$,
one easily gets that $Y'$ is an osculator of type gap between $X_f$
and either $X_0$ or $X_1$ (any one for which $ d_R(Y',X_\epsilon)$ is larger
than $\kappa$, and by the triangle inequality, there must be at least
one). If $j\neq j_0$, we may use the same argument. $Y'\in{\rm Act}(R)$
therefore $d^\sphericalangle_R(Y',X_\epsilon)$ is larger
than $\kappa$ for either $\epsilon=0$ or $1$. Then,
$d^\sphericalangle_{Y'}(R,X_\epsilon) <\kappa$ and by triangular inequality,
$d^\sphericalangle_{Y'}(X_f ,X_\epsilon) \gammaeq A(j) -12\kappa (>\calL_{j_0}(j)) $. It follows by
convexity of $\calW$ that $Y'\in \calW_j$.
Finally, assume that the third case occurs.
Assume that $R_2$ is an osculator of type gap, between $X_0,X_1$.
Then, again with the same
reasoning, $Y'\in {\rm Act}(R_2)$ and there is $\epsilon$ for which it is
in ${\rm Act}(X_\epsilon)$ and $d_{Y'}^\sphericalangle (R_2, X_\epsilon)$ is less
than $\kappa$. Thus $d_{Y'}^\sphericalangle (R_1, X_\epsilon) \gammaeq A(j)-12\kappa$,
and we are back to the case $2$ of the claim, with a slightly lower
constant. The proof goes
nevertheless through, and the desired conclusion holds.
Finally,
assume that $R_2$ is of type neighbor. Then both $R_1, R_2$ are of type neighbor, and $R_2=
\gamma R_1$ for some $\gamma \in G_W$. Let us rename $R_1=R$, call
$i=i(Y')$, and $j$ the principal coordinate of $\gammaamma$ (for the
Greendlinger property). Let $Z \in \calW_j$ be the vertex of a
shortening pair for $\gammaamma$ for which $Z\in {\rm Act}(Y') \cap
{\rm Act}(R)$ (there exists one, otherwise one can reduce the length of
$\gammaamma$ in its principal tree by a
shortening pair at $Z$). Thus, $d_Z^\sphericalangle(R, \gammaamma R) >\Theta_{Rot} -
2\Theta_P -2\kappa$.
Suppose $d_{Y'}^\sphericalangle(R, \gammaamma R) > c_*-10\kappa$. Then, there are two possible
cases. Either $d_{Y'}^\sphericalangle(R,Z) > \frac{c_*}{2}- 6\kappa $ or $d_{Y'}^\sphericalangle(\gammaamma R,Z)
> \frac{c_*}{2} - 6\kappa$ (or both).
In
the first case, $d_Z^\sphericalangle(R,Y') \leq \kappa$. Thus $d_Z^\sphericalangle(Y',\gammaamma
R) >\kappa$, and so $d_{Y'}^\sphericalangle(\gammaamma R, Z)< \kappa$.
Recall that $Z\in{\rm Act}(R)\cap {\rm Act}(Y')$. Thus $d_{Y'}^\sphericalangle(Z,R) > c_*
-2\kappa$, and $Y'\in \bbY_{c_*-2\kappa}( Z,R ) $. Now let $Z'$ any
other element of $\calW$ in ${\rm Act}(R)\cap {\rm Act}(Y')$. By $(\frac{c_*}{2}-20\kappa)$-convexity of
$\calW$, one has $d^\sphericalangle_{Y'} (Z,Z') \leq \frac{c_*}{2}-20\kappa$
and therefore
$Y'\in
\bbY_{c_*-2\kappa -\frac{c_*}{2}+21\kappa}( Z',R ) $. In other words, $Y'
\in \bbY_{\frac{c_*}{2}+19\kappa}(\calW, R)$ and this contradicts the fact that $R$ is a neighbor.
In the second case, the situation is similar after composing by the automorphism
$\gammaamma^{-1}$.
\end{proof}
\subsubsection{The unfolding is a windmill}
\begin{prop} If $\calW= (\calW_1, \dots, \calW_m, G_W, j_0)$ is
a composite windmill, and if $\calW'= (\calW'_1, \dots, \calW'_m,
G_{W'}, j_0+1)$ is an unfolding over an admissible set of osculators,
then $\calW'$ is a composite windmill.
Moreover, the set $\calW'_*$ of the fifth point of the
definition can be assumed to contain the set $\calW_*$ (in
other words, $\calW'$ is constructed over $\calW$).
\end{prop}
\begin{proof}
The first three points follow by construction. The fourth point (convexity) is
the result of Proposition \ref{prop;unfolding_is_convex}. The
sixth point is a consequence of Proposition
\ref{prop;principal_tree}. The same proposition introduces an
action of $G_{W'}$ on a tree $T$ which is Bass-Serre dual to a
presentation of $G_{W'}$ as the fundamental
group of a graph of groups, with one vertex $v_0$ carrying the group
$G_W$ and the other vertices $v_{[R]}, [R]\in \calR/G_W$,
adjacent to a single edge whose other end is
$v_0$, carrying the group $ \Gammaamma_R \times (G_W)_R $, if $R$
is a representative of the orbit $[R]$.
\end{proof}
\subsection{Towers of windmills, and accessibility}
\subsubsection{Starting point}
We start the process by selecting $\calW(0)$ to be a maximal collection of
mutually inactive elements in $\bbY_*$. Thus, whenever
$\calW(0)_j \neq \emptyset$, it is reduced to a single point.
We choose $j_0=1$. It is clear that $\calW(0)$ defines a composite windmill
where for all $i$, $\calW(0)_{i}$ is either empty or a singleton, and
where $G_{W}$ is the direct product of the groups
$G_X$, for $X\in \calW(0)$ (there are at most $m$ direct
factors).
$\calW(0)$ is $\kappa$-convex, and for all $R$, by maximality
of $\calW(0)$, ${\rm Act}(R)\cap \calW(0) \neq \emptyset$.
Recall that by choice, $c_*>25\kappa + 2\Theta$, hence by
Proposition \ref{prop;firstguy}, there exists a neighbor
osculator in $\bbY_{\frac{c_*}{2}+2m\kappa} (\calW(0), R)$.
\subsubsection{The process}
Recall that we assumed $\bbY_*$ to be
countable.
We will work with indices in the {\it set} of countable ordinals:
we will define $\calW(k)$ for $k$ any countable ordinal (not
necessarily a natural number).
We take the notation $$ \calW(k) = (\calW(k)_1, \dots, \calW(k)_m, G_{W(k)}, j_k). $$
Let us agree that $\calW(k)\subset \calW(k')$ means that for all $i\leq m, \calW(k)_i\subset \calW(k')_i$. This is not an order relation; however, note that, for full windmills, if $\calW(k)\subset \calW(k') \subset \calW(k)$, and if $\calW(k)$ is fixed, there are only $m$ possibilities for $\calW(k')$ (corresponding to the values of $j_{k'}$).
We will also write
$\calW(k)\subsetneqq \calW(k')$ if $\calW(k)\subset \calW(k')$ and one of the inclusions $\calW(k)_i\subset \calW(k')_i$ is strict.
We have chosen $\calW(0)$. In order to define $\calW(k)$ for $k$ any countable ordinal, we treat separately the case of $k$ a successor of some ordinal, and the case of $k$ a limit ordinal.
For any countable ordinal $k$, we define $\calW(k+1)$ to be the unfolding of $\calW(k)$ (as in Definition \ref{def;unfolding}) over an admissible set of osculators. Recall that if there is no gap osculator at all, one may need to choose a certain neighbor osculator to define a choice of admissible set of osculators. We could, but do not impose the choice.
Note that by maximality of $\calW(0)$, Lemma \ref{lem;keepwalking} can be applied to show that such a choice is always possible for all $\calW(k)$.
\begin{lemma}\label{lem;still_a_CWM}
If $\calW(k)$ is a composite windmill, then
$\calW(k+1)$
is still a composite
windmill, constructed over $\calW(k)$.
\end{lemma}
\begin{proof} This follows from Proposition \ref{prop;unfold_main}
if the set of
osculators is non-empty, and from Lemma \ref{lem;emptyR} otherwise.
\end{proof}
We now define $\calW(\alpha)$ assuming $\alpha$ is a limit ordinal, and that all $\calW(k)$, for $k<\alpha$ have been defined, and satisfy $ \calW(k) \subset \calW(k') $ for all $k<k'$.
We consider $\calW(\alpha)_i = \bigcup_{k<\alpha} \calW(k)_i$ for each $i\leq m$, and $ G_{W(\alpha)} = \bigcup_{k<\alpha} G_{W(k)} $, and we set $j_\alpha = 1$.
\begin{lemma}\label{lem;still_a_CWM_again}
Assume that $\alpha$ is a limit countable ordinal such that for all $k<\alpha$, $\calW(k)$ is a composite windmill, and such that for all $k<\alpha$, $\calW(k+1)$ is constructed over $\calW(k)$. Then $\calW(\alpha)$ is a composite windmill, constructed over $\calW(k)$ for all $k<\alpha$.
\end{lemma}
\begin{proof}
One easily checks that all the points of Definition \ref{def;CW} of a composite windmill, except possibly the fifth (on the partially commutative presentation), are satisfied after taking a direct union. Assume that the fifth point is not satisfied. Consider then $\alpha_0$ the smallest ordinal such that this point fails. $\alpha_0$ is a limit ordinal (otherwise
Lemma \ref{lem;still_a_CWM} says that $\calW(\alpha_0)$ is a composite windmill constructed over earlier $\calW(k)$). Fix $k_0<\alpha_0$. For all $k<k_0$, $\calW(k)$ is contained in $\calW(k_0)$.
Note that by definition, for each $i\leq m$, $$\calW(\alpha_0)_i = \bigcup_{k_0 <k<\alpha_0} \calW(k)_i, \hbox{ and } \, G_{W(\alpha_0)} = \bigcup_{k_0< k<\alpha_0} G_{W(k)}.$$
Since for all $k'>k$ less than $\alpha_0$, $\calW(k')$ is constructed over $\calW(k)$, we obtain a presentation of $ G_{W(\alpha_0)}$ by increasing union of the generating sets of $G_{W(k)}$ (each of which contains that of $G_{W(k_0)}$), and by increasing union of the relators of $G_{W(k)} $. The fifth point of Definition \ref{def;CW} is then satisfied by $\calW(\alpha_0)$, and it is a composite windmill constructed over $\calW(k_0)$. Since this is true for all $k_0<\alpha_0$, we obtain a contradiction with the definition of $\alpha_0$.
\end{proof}
\subsubsection{Accessibility}
\begin{lemma} \label{lem;end_in_countable_time}
Let $\calI$ be the set of countable ordinals $k$ such that $\forall k'<k, \calW(k')\subsetneqq \calW(k)$. Then $\calI$ is countable. Moreover, for each $k_1, k_2$ in $\calI$, consecutive in $\calI$, there are at most $m$ ordinals between $k_1$ and $k_2$.
\end{lemma}
\begin{proof} For each $k\in \calI$, unless it is its maximal element, one can associate to it its successor $s(k)$ in $\calI$, and therefore an element $X_{k}$ in $\bbY_*$ that is in $\calW(s(k))$ but not in $\calW(k)$. The assignment $k\mapsto X_k$ is obviously injective on $\calI$, and $\bbY_*$ is countable, thus $\calI$ is countable.
For the second assertion, assume that there are $m+1$ consecutive countable ordinals $k_1, \dots, k_{m+1}$ outside $\calI$, all less than some $k_{t}\in \calI$. Then by the pigeonhole principle, for two of them, $k, k'$, one has $\calW(k) = \calW(k')$. Thus, by the rules of construction of $\calW(k+1)$, one has $\calW(k) \subset \calW(k+r) \subset \calW(k)$ for all $r\in \mathbb{N}$, or equivalently, for all $r$, $\calW(k+r+1) \subset \calW(k+r) \subset \calW(k+r+1)$.
Since we take direct limits at limit ordinals, this also holds for all countable ordinals $r$. However $k_{t}$ is a countable ordinal, and therefore $\calW(k_{t}+1) \subset \calW(k_{t}) \subset \calW(k_{t}+1)$, contradicting that $k_{t} \in \calI$.
\end{proof}
\begin{lemma}\label{lem;the_top_k}
There is a countable ordinal $k_{top}$, such that $ \bbY_*\subset \calW(k_{top}) $.
\end{lemma}
\begin{proof} By Lemma \ref{lem;end_in_countable_time}, the supremum of $\calI$ is still a countable ordinal. Call this ordinal $k_{top}$; $\calW(k_{top}) $ is thus well defined. Assume that $ \bbY_* \not\subset \calW(k_{top}) $. Then it follows from Lemma \ref{lem;keepwalking}
that $\calW(k_{top}) $ is not $(\frac{c_*}{2}-20\kappa)$-convex. Therefore, there is a gap osculator in one of the coordinates, and this coordinate is reached as the principal coordinate within at most $m$ further unfoldings. This is a contradiction with the definition of $k_{top}$. Thus, $ \bbY_* \subset \calW(k_{top}) $.
\end{proof}
\subsection{End of the proof of Theorems \ref{theo;main} and \ref{theo;mainGreendlingerLemma}}
Consider $\calW(k_{top})$ from Lemma \ref{lem;the_top_k}.
Assume it is not a composite windmill. Then there is a smallest ordinal $k_1$ such that $\calW(k_{1})$ is not a composite windmill. If $k_1$ is not a limit ordinal, it is of the form $k_0 +1$ for $k_0$ such that $\calW(k_{0})$ is a composite windmill, and Lemma \ref{lem;still_a_CWM} yields a contradiction. If $k_1$ is a limit ordinal, then
Lemma \ref{lem;still_a_CWM_again} yields a contradiction.
Thus $\calW(k_{top})$ is a composite windmill.
Since it contains all elements of $\bbY_*$, the statements of Theorems \ref{theo;main} and \ref{theo;mainGreendlingerLemma} follow from the definition of composite windmill.
\section{Conclusion, application to Dehn twists, and Theorem \ref{theo;intro}}
Let $\Sigma$ be an orientable closed surface of genus greater than
$2$, and consider its mapping class group ${\rm MCG} (\Sigma)$.
Bestvina, Bromberg, and Fujiwara produced a finite coloring of the set
of simple closed curves of $\Sigma$ such that two curves of the same color
intersect, and a finite-index normal subgroup
$G_0$ of ${\rm MCG} (\Sigma)$ that preserves the coloring. $G_0$ is called the color
preserving group. After refinement of the colors, we actually may
assume that the colors are in correspondence with the cosets of
$G_0$. We denote the colors by $\{1, \dots, m\}$.
Let $c$ and $c'$ be simple closed curves. If they intersect, the
projection of $c'$ on $c$ is the family of elements in the arc complex
of the annulus around $c$ (that is the cover of $\Sigma$ associated
to $c$) that come from lifts of $c'$. They are all disjoint. If $c''$
is another simple closed curve intersecting $c$, then $d^\pi_c(c',c'')$ is
the diameter, in this arc complex, of the union of the projections of $c'$ and $c''$
on the annulus around $c$.
$d^\pi$ defines a composite projection system on the set of all
(homotopy classes of) simple
closed curves. Indeed, let ${\rm Act}(c)$ be the set of curves intersecting
$c$. Clearly $d^\pi_c$ is symmetric, and satisfies the
separation property. The symmetry in action, and the closeness in inaction, are
also direct consequences of the definitions. The finite filling property
is a consequence of the fact that all sequences of subsurfaces up to isotopy,
increasing under inclusion, are eventually stationary. $d^\pi_c$ satisfies the
triangle inequality, since it is a diameter of projections, and it satisfies the Behrstock inequality
\cite{Beh06} (see also \cite{Mang10, Mang13}). The properness is ensured by \cite[Lemma 5.3]{BBF}.
We can now define two composite projection systems with composite
rotating families. The first one is defined on
the set $\mathfrak{S}$ of {\it all} homotopy classes of simple closed curves of $\Sigma$.
Let us define $\bbY_i$ to be the subset of $\mathfrak{S}$ consisting of the simple closed
curves of color $i$ in the Bestvina-Bromberg-Fujiwara coloring, and
$\bbY_*$ their union. It is, as we just said, a composite projection
system on which $G_0$ acts by automorphisms.
Performing the construction of \cite{BBF} and the choices as after Definition \ref{def;CPS}, we have
constants $\Theta, \kappa, c_*, \Theta_P, \Theta_{Rot}$.
We select $N_1$ such that all $N_1$-th powers of Dehn
twists in ${\rm MCG} (\Sigma)$ are in $G_0$. This is possible since there are only
finitely many ${\rm MCG} (\Sigma)$-orbits of simple closed curves in $\Sigma$, and
$G_0$ has finite index. Then we select $N_2$, a multiple of $N_1$, such
that for every simple closed curve $c$, the Dehn twist $\tau_c^{N_2}$
around $c$ satisfies $d_c (c', \tau_c^{N_2} c') >\Theta_{Rot}+2\Theta_P$ whenever
$c'$ is a curve of the same color as $c$ (hence intersecting $c$).
Since $d_c$ is comparable with $d^\pi_c$, by definition of the latter,
there exists such an exponent $N_2$. It then follows that, for all
$k\in \bbN$, the
collection $\{\Gamma_c=\langle \tau_c^{kN_2}\rangle,\, c\in \mathfrak{S} \}$
is a composite
rotating family.
The second composite projection system is a sub-system, invariant under
$G_0$, provided by the ${\rm MCG} (\Sigma)$-orbit of a
simple closed curve $c_0 \in \mathfrak{S}$. Namely, the composite rotating
family is the collection
$\{\Gamma_c,\, c\in ({\rm MCG} (\Sigma) c_0) \subset \mathfrak{S} \}$.
It is straightforward that both families are composite rotating
families.
One can then apply Theorem \ref{theo;main}. In the first case, one
obtains that the group generated by the $kN_2$-th powers of all Dehn
twists has a partially commutative presentation, which is the second
point of Theorem \ref{theo;intro}. In the case of the second
composite rotating family,
one obtains that the group generated by all $kN_2$-th powers of all Dehn
twists that are ${\rm MCG} (\Sigma)$-conjugate to $\tau_{c_0}$ has a partially commutative
presentation. This latter group is the normal closure of
$\tau_{c_0}^{kN_2}$ in ${\rm MCG} (\Sigma)$. We have therefore obtained Theorem \ref{theo;intro}.
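Spelled out in this setting (this is merely the first presentation of Theorem \ref{theo;main} with $\Gamma_c=\langle \tau_c^{kN_2}\rangle$, not an additional claim), the normal closure of $\tau_{c_0}^{kN_2}$ in ${\rm MCG} (\Sigma)$ is presented as
$$ \Big\langle\ \bigcup_{c\,\in\, {\rm MCG} (\Sigma)\, c_0} \Gamma_c \ \Big|\ [\Gamma_c, \Gamma_{c'}]=1 \hbox{ whenever $c$ and $c'$ are disjoint}, \quad \Gamma_{gc}= g\Gamma_c g^{-1} \hbox{ for } g\in \Gamma_{Rot} \Big\rangle. $$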
\noindent
{\sc Institut Fourier, Univ. Grenoble Alpes, cnrs, 38000 Grenoble, France}\\
{\tt e-mail: [email protected]}
}
\end{document}
\begin{document}
\begin{abstract}
In this paper, we give a new generalization of positive sectional curvature called \emph{positive weighted sectional curvature}. It depends on a choice of Riemannian metric and a smooth vector field. We give several simple examples of Riemannian metrics which do not have positive sectional curvature but support a vector field that gives them positive weighted curvature. On the other hand, we generalize a number of the foundational results for compact manifolds with positive sectional curvature to positive weighted curvature. In particular, we prove generalizations of Weinstein's theorem, O'Neill's formula for submersions, Frankel's theorem, and Wilking's connectedness lemma. As applications of these results, we recover weighted versions of topological classification results of Grove--Searle and Wilking for manifolds of high symmetry rank and positive curvature.
\end{abstract}
\title{Positive Weighted Sectional Curvature}
Understanding Riemannian manifolds with positive sectional curvature is a deep and notoriously difficult problem in Riemannian geometry. A common approach in mathematics to such problems is to generalize the condition to a more flexible one and to study this generalization with the hope that it will shed light on the harder original problem. Indeed, there are a number of generalizations of positive sectional curvature that have been studied. The most obvious is non-negative sectional curvature, but other conditions such as quasi-positive or almost positive curvature have been studied in the literature (see \cite{Ziller07,KerrTapp14} and references therein).
In this paper we propose a different approach to generalizing positive curvature that depends on choosing a positive, smooth density function, denoted by $e^{-f}$, or a smooth vector field $X$. Our motivation for considering such a generalization is the corresponding theory of Ricci curvature for manifolds with density, which was studied by Lichnerowicz \cite{Lich1, Lich2} and was later generalized and popularized by Bakry--Emery and their collaborators \cite{BakryEmery}. There are too many recent results in this area to reference all of them here, but some that are more relevant to this article are \cite{Lott03, Morgan05, Morgan09, MunteanuWang12, WeiWylie09}. Also see Chapter 18 of \cite{Morganbook} and the references therein.
For a triple $(M^n,g,X)$, where $(M,g)$ is a Riemannian manifold and $X$ is a smooth vector field, the $m$--Bakry--Emery Ricci tensor is
\[ \Ric_X^m = \Ric +\frac{1}{2} L_X g - \frac{X^{\flat}\otimes X^{\flat}}{m}, \]
where $m$ is a constant that is also allowed to be infinite, in which case we write $\Ric_X^{\infty} = \Ric_X = \Ric +\frac{1}{2} L_X g$. For a manifold with density, we set $X = \nabla f$ and write $\Ric_f^m = \Ric +\mathrm{Hess} f - \frac{ df \otimes df}{m}$.
The Bakry--Emery Ricci tensors come up in many areas of geometry and analysis including optimal transport \cite{LottVillani09, SturmI, SturmII, SturmVon}, the isoperimetric inequality \cite{Morgan05}, and the Ricci flow \cite{Perelman}. Our definition of positive weighted sectional curvature, which looks similar to the Bakry--Emery Ricci tensors, is the following.
\begin{nonumberdefinition}\label{def:pwsc}
A Riemannian manifold $(M,g)$ equipped with a vector field $X$ has \textit{positive weighted sectional curvature} if for every point $p\in M$, every $2$-plane $\mathbb{S}igma \mathbb{S}ubseteq T_pM$, and every unit vector $V \in \mathbb{S}igma$,
\begin{itemize}
\item $\mathbb{S}ec(\mathbb{S}igma) + \frac{1}{2} (L_X g)(V,V) > 0$, or
\item $X = \nabla f$ and $\mathbb{S}ec(\mathbb{S}igma) + \mathrm{Hess} f(V,V) + df(V)^2 > 0$ for some function $f$.
\end{itemize}
\end{nonumberdefinition}
Note that a Riemannian manifold with positive sectional curvature admits positive weighted sectional curvature, where $X$ is chosen to be zero. The converse to this statement does not hold, as we show by example in Propositions \ref{pro:RotSym} and \ref{pro:CPnExample}. For additional examples that further illustrate the difference between these notions, we refer to Section \ref{sec:Examples}.
This definition is motivated by earlier work of the second author \cite{Wylie-pre} where generalizations of classical results such as the classification of constant curvature spaces, the theorems of Cartan--Hadamard, Synge, and Bonnet--Myers, and the (homeomorphic) quarter-pinched sphere theorem are proven for manifolds with density.
There are a number of reasons why positive weighted sectional curvature is a natural generalization of positive sectional curvature. We will discuss this in more detail in Section \ref{sec:Preliminaries}. For example, we observe in Section \ref{sec:Preliminaries} that the following low-dimensional result holds (see Theorem \ref{thm:pi1finite} and the following remarks). It follows from earlier work of the second author \cite{Wylie08, Wylie-pre}.
\begin{main}\label{thm:LowDimIntro} Suppose $M$ is a compact manifold of dimension two or three. If $M$ admits a metric and a vector field with positive weighted sectional curvature, then $M$ is diffeomorphic to a spherical space form. \end{main}
This raises the following motivating question in higher dimensions.
\begin{motivatingquestion}
If $(M^n, g, X)$ is compact with positive weighted curvature, does $M$ admit a metric of positive sectional curvature?
\end{motivatingquestion}
Theorem \ref{thm:LowDimIntro} shows the answer is ``yes" in dimension $2$ and $3$. On the other hand, we show there are complete metrics with density on $\mathbb{R} \times T^n$ with positive weighted sectional curvature. By a theorem of Gromoll--Meyer \cite{GromollMeyer69}, $\mathbb{R} \times T^n$ does not admit a metric of positive curvature, so the answer is ``no" in the complete case.
We approach this question by considering spaces with a high amount of symmetry. Since the 1990s, when Grove popularized the approach, quite a lot of powerful machinery has been developed for studying manifolds with positive curvature through symmetry. See the survey articles \cite{Wilking07, Grove09, Ziller14} for details as well as the many applications.
A first consideration is that a given vector field $X$ may not be invariant under the isometries of $g$. In Section \ref{sec:Averaging}, we deal with this issue by showing that, given a triple $(M,g,X)$ with positive weighted curvature and a compact group of isometries $G$ acting on $(M,g)$, it is always possible to change $X$ to $\widetilde{X}$ which is invariant under $G$ so that $(M,g, \widetilde{X})$ has positive weighted sectional curvature. The fact that we can always assume that the density is invariant under a fixed compact subgroup of isometries will be a key observation in most of our results. In fact, it immediately gives the following result in the homogeneous case (see Proposition \ref{pro:CompactHomogeneous}).
\begin{main}\label{thm:PWSChomogeneous} If a compact, homogeneous Riemannian manifold $(M,g)$ supports a gradient field $X = \nabla f$ such that $(M,g,X)$ has positive weighted curvature, then $(M,g)$ has positive sectional curvature. \end{main}
Simple examples show that this proposition is not true if the manifold is not compact (see Example \ref{Ex:Gaussian}). In Section \ref{sec:Examples}, we also give examples of cohomogeneity one metrics on spheres and projective spaces that have positive weighted sectional curvature but not positive sectional curvature, so the homogeneous assumption cannot be weakened.
Another way to quantify that a Riemannian manifold has a large amount of symmetry is the symmetry rank, which is the largest dimension of a torus which acts effectively on $M$ by isometries. Our main result regarding symmetry rank and positive weighted sectional curvature is an extension of the maximal symmetry rank theorem of Grove--Searle \cite{GroveSearle94} to positive weighted sectional curvature (see Theorem \ref{thm:GroveSearle}).
\begin{main}[Maximal symmetry rank theorem] \label{IntroGroveSearle}
Let $(M^n,g,X)$ be closed with positive weighted sectional curvature. If $T^r$ is a torus acting effectively by isometries on $M$, then $r \leq \floor{\frac{n+1}{2}}$. Moreover, if equality holds and $M$ is simply connected, then $M$ is homeomorphic to $\mathbb{S}^n$ or $\C\mathbb{P}^{n/2}$.
\end{main}
In higher dimensions, Wilking has shown one can assume less symmetry and still obtain a homotopy classification \cite[Theorem 2]{Wilking03}. We also give an extension of this result (see Theorem \ref{thm:WilkingHomotopy}).
\begin{main}[Half-maximal symmetry rank theorem] \label{IntroWilkingHomotopy}
Let $(M^n,g,X)$ be closed and simply connected with positive weighted sectional curvature. If $M$ admits an effective, isometric torus action of rank $r \geq \frac{n}{4} + \log_2 n$, then $M$ is homeomorphic to $\mathbb{S}^n$ or tangentially homotopy equivalent to $\C\mathbb{P}^{n/2}$.
\end{main}
Theorems \ref{IntroGroveSearle} and \ref{IntroWilkingHomotopy}
show that the answer to our motivating question is ``yes" (at least up to homeomorphism or homotopy) in the case of high enough symmetry rank. On the other hand, our results are slightly weaker than the results in the unweighted setting. We discuss this further in Sections \ref{sec:TorusActions} and \ref{sec:FutureDirections}.
There are two key tools used in the proofs of Theorems \ref{IntroGroveSearle} and \ref{IntroWilkingHomotopy}. The first is an extension of Berger's theorem (Corollary \ref{cor:Berger}) to the weighted case. The proof follows as in \cite{GroveSearle94} and makes use of the O'Neill formula in the weighted case (Theorem \ref{thm:submersions}). The second main tool is a generalization of Wilking's connectedness lemma \cite[Theorem 2.1]{Wilking03} to positive weighted sectional curvature (see Theorem \ref{thm:Connectedness}).
\begin{main}[Wilking's connectedness lemma]\label{thm:IntroConnectedness}
Let $(M^n,g,X)$ be closed with positive weighted sectional curvature.
\begin{enumerate}
\item If $X$ is tangent to $N^{n-k}$, a closed, totally geodesic, embedded submanifold of $M$, then the inclusion $N \to M$ is $(n-2k+1)$--connected.
\item If $X$ and $N^{n-k}$ are as above, and if $G$ acts isometrically on $M$, fixes $N$ pointwise, and has principal orbits of dimension $\delta$, then the inclusion $N \to M$ is $(n-2k+1+\delta)$--connected.
\item If $X$ is tangent to $N_1^{n-k_1}$ and $N_2^{n-k_2}$, a pair of closed, totally geodesic, embedded submanifolds with $k_1 \leq k_2$, then $N_1 \cap N_2 \to N_2$ is $(n-k_1-k_2)$--connected.
\end{enumerate}
\end{main}
The only assumption in
Theorem \ref{thm:IntroConnectedness}
not needed in the unweighted version is that $X$ be tangent to the submanifolds. This of course is true in the unweighted setting where $X=0$. In the applications, this extra assumption holds since the submanifolds we apply the result to will be fixed-point sets of isometries and $X$ will be invariant under these actions (see Corollary \ref{cor:FrankelGroupAction} and the following discussion). The proof of
Theorem \ref{thm:IntroConnectedness}
follows from Wilking's arguments in \cite{Wilking03} using the second variation formula for the weighted curvatures derived in \cite{Wylie-pre} in place of the classical one.
\smallskip
This paper is organized as follows. In Sections \ref{sec:Preliminaries} and \ref{sec:Examples}, we recall the notion of weighted sectional curvature from \cite{Wylie-pre}, define positive weighted sectional curvature, survey its basic properties (including Theorem \ref{thm:LowDimIntro}), and construct a number of examples. In Sections \ref{sec:Averaging}--\ref{sec:Frankel}, we establish these properties and use them to prove Theorem \ref{thm:PWSChomogeneous} as well as generalizations of the O'Neill formulas, Weinstein's theorem, and Wilking's connectedness lemma (Theorem \ref{thm:IntroConnectedness}). In Section \ref{sec:TorusActions}, we use these tools to prove
Theorems \ref{IntroGroveSearle} and \ref{IntroWilkingHomotopy}.
In Section \ref{sec:FutureDirections}, we discuss future directions.
\subsection*{Acknowledgements} We would like to thank Karsten Grove, Frank Morgan, Guofang Wei, Dmytro Yeroshkin, and Wolfgang Ziller for helpful suggestions and discussions. The first author is partially supported by NSF grants DMS-1045292 and DMS-1404670.
\section{Definitions and Motivation} \label{sec:Preliminaries}
In this section, we fix some notation and go into more detail about the motivation for the definition of positive weighted sectional curvature. At the end of this section (see Subsection \ref{sec:symsec}), we address the fact that weighted sectional curvature is not simply a function of $2$--planes in the way that sectional curvature is, and we discuss a symmetrized version of weighted sectional curvature which is.
\subsection{Definition of positive weighted sectional curvature} First we recall some notation from \cite{Wylie-pre}. For a Riemannian manifold $(M,g)$ and a vector field $V$ on $M$, we will call the symmetric $(1,1)$--tensor $R^V$, given by
\[R^V(U) = R(U, V)V = \nabla_U \nabla_V V - \nabla_V \nabla_U V - \nabla_{[U,V]} V,\]
the \textit{directional curvature operator in the direction of $V$}. Given a smooth vector field $X$, the \textit{weighted directional curvature operator in the direction of $V$} is another symmetric $(1,1)$--tensor,
\[R_X^V = R^V + \frac 1 2 (L_Xg)(V, V) \mathrm{id},\]
where $\mathrm{id}$ is the identity operator. The \textit{strongly weighted directional curvature operator in the direction of $V$} is defined as
\[\overline R_X^V = R^V_X + g(X, V)^2 \mathrm{id}.\]
Given an orthonormal pair $(U,V)$ of vectors in $T_pM$ for some $p \in M$, the sectional curvature $\mathbb{S}ec(U,V)$ of the plane spanned by $U$ and $V$ is, by definition, $\mathbb{S}ec(U,V) = g(R^V(U),U)$. In the weighted cases, we similarly define
\begin{eqnarray*}
\mathbb{S}ec^V_X(U) &=& g(R^V_X(U), U) = \mathbb{S}ec(V,U) + \frac{1}{2}(L_Xg)(V,V) ,\\
\overline\mathbb{S}ec^V_X(U) &=& g(\overline R^V_X(U), U) = \mathbb{S}ec^V_X(U) + g(X,V)^2.
\end{eqnarray*}
We say that $\mathbb{S}ec_X \geq \lambda$ if $\mathbb{S}ec_X^V(U) \geq \lambda$ for every orthonormal pair $(V,U)$, or equivalently if all of the eigenvalues of the restriction of $R_X^V$ to the orthogonal complement of $V$ are at least $\lambda$ for every unit vector $V$. We define the condition $\overline{\mathbb{S}ec}_X \geq \lambda$ in the analogous way. Note that $ \overline{\mathbb{S}ec}_X^V(U) \geq \mathbb{S}ec_X^V (U)$, so that $\mathbb{S}ec_X \geq \lambda$ implies $\overline{\mathbb{S}ec}_X \geq \lambda$.
In terms of this notation we can then rephrase the definition of positive weighted sectional curvature.
\begin{nonumberdefinition}
A Riemannian manifold $(M,g)$ equipped with a vector field $X$ has \textit{positive weighted sectional curvature} if
\begin{itemize}
\item $\mathbb{S}ec_X > 0$, or
\item $X = \nabla f$ and $\overline{\mathbb{S}ec}_f > 0$ for some function $f$.
\end{itemize}
\end{nonumberdefinition}
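Indeed, when $X = \nabla f$ one has $\frac 1 2 L_{X} g = \mathrm{Hess} f$ and $g(X,V) = df(V)$, so for an orthonormal pair $(V,U)$ spanning a plane $\mathbb{S}igma$,
\[ \overline{\mathbb{S}ec}^V_f(U) = \mathbb{S}ec(\mathbb{S}igma) + \mathrm{Hess} f(V,V) + df(V)^2, \]
which is exactly the quantity appearing in the definition given in the introduction.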
Note that, unlike $\mathbb{S}ec(U,V)$, the weighted sectional curvatures are not symmetric in $U$ and $V$. This may at first seem unnatural, but it is necessary if we want the weighted sectional curvatures to agree with the Bakry--Emery Ricci curvatures in dimension $2$ as the Bakry--Emery Ricci tensors of a surface with density will generally have two different eigenvalues. See Section \ref{sec:symsec} for a discussion of a symmetrized version.
Also note that $\mathbb{S}ec^V_X$ and $\overline\mathbb{S}ec^V_X$ average to Bakry--Emery Ricci curvatures in the following sense. If $\{E_i\}_{i=1}^{n-1}$ is an orthonormal basis of the orthogonal complement of $V$, then
\begin{eqnarray}
\Ric_{(n-1) X} (V,V) &=& \mathbb{S}um_{i=1}^{n-1} \mathbb{S}ec^V_X(E_i) \label{eqn:AvgSec1} \\
\Ric_{(n-1)X}^{-(n-1)}(V,V) &=& \mathbb{S}um_{i=1}^{n-1} \overline{\mathbb{S}ec}^V_X(E_i) \label{eqn:AvgSec2}
\end{eqnarray}
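Indeed, since $\Ric_{(n-1)X} = \Ric + \frac{n-1}{2} L_X g$ and $\Ric^{-(n-1)}_{(n-1)X}(V,V) = \Ric_{(n-1)X}(V,V) + (n-1)\, g(X,V)^2$, both identities follow by summing over the basis: for example,
\[ \mathbb{S}um_{i=1}^{n-1} \mathbb{S}ec^V_X(E_i) = \mathbb{S}um_{i=1}^{n-1} \mathbb{S}ec(V,E_i) + \frac{n-1}{2}(L_Xg)(V,V) = \Ric(V,V) + \frac{n-1}{2}(L_Xg)(V,V) = \Ric_{(n-1)X}(V,V), \]
and \eqref{eqn:AvgSec2} follows in the same way after adding the term $g(X,V)^2$ to each summand.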
In particular, for surfaces, $\mathbb{S}ec_X \geq \lambda$ is equivalent to $\Ric_X \geq \lambda $ and similarly for $\overline{\mathbb{S}ec}_X$ and $\Ric_X^{-1}$. The curvature $\Ric_{(n-1)X}^{-(n-1)}$ is an example of Bakry--Emery Ricci curvature with negative $m$ which has been studied recently in \cite{ KolesnikovMilman, Ohta}.
\subsection{Properties of positive weighted sectional curvature}
Now that we have introduced the main equations involving weighted sectional curvature, we summarize some of the properties that the condition of positive weighted sectional curvature shares with positive sectional curvature. We then give a basic outline of how these facts lead to the proof of
Theorems \ref{IntroGroveSearle} and \ref{IntroWilkingHomotopy}.
First, positive weighted sectional curvature is preserved under covering maps. Namely if $(M,g,X)$ has positive weighted sectional curvature and $\tilde{M}$ is a cover of $M$, then $(\tilde{M}, \tilde{g}, \tilde{X})$ has positive weighted sectional curvature where $\tilde g$ and $\tilde X$ are the pullbacks of $g$ and $X$ respectively under the covering map.
A second property of positive weighted sectional curvature is that the fundamental group is finite in the compact case. Indeed, this follows from \cite[Theorem 1.1]{Wylie08} and \cite[Theorem 1.14]{Wylie-pre} by using the fact that positive weighted sectional curvature lifts to covers:
\begin{theorem} \label{thm:pi1finite}
Let $(M,g)$ be a complete Riemannian manifold.
\begin{itemize}
\item If there exists a vector field $X$ such that $\mathrm{Ric}_X > \la > 0$, or
\item if $M$ is compact and there is a function $f$ such that $\mathrm{Ric}^{-(n-1)}_f > \la >0$,
\end{itemize}
then $\pi_1(M)$ is finite.
\end{theorem}
This theorem immediately implies the classification of compact $2$-- and $3$--dimensional manifolds with positive weighted sectional curvature stated in Theorem \ref{thm:LowDimIntro}. Indeed, this follows in dimension two from the classification of surfaces and in dimension three from the Ricci flow proof of the Poincar\'e conjecture.
We remark that, for positive Ricci curvature, the finiteness of the fundamental group follows from the Bonnet--Myers diameter estimate. There is no diameter estimate for the weighted curvatures as there are complete non-compact examples with $\mathbb{S}ec_f > \la > 0$ (see Example \ref{Ex:Gaussian}).
A third property of positive weighted sectional curvature is that the vector field $X$ can always be chosen so that it is invariant under a fixed compact group of isometries. We interpret this as a shared property with positive sectional curvature since the zero vector field is always invariant. Specifically we have:
\begin{corollary}\label{cor:PWSCaveraging}
If $(M,g,X)$ has positive weighted sectional curvature, and if $G$ is a compact subgroup of the isometry group of $(M,g)$, then $X$ can be replaced by a $G$--invariant vector field $\tilde X$ such that $(M,g,\tilde X)$ has positive weighted sectional curvature.
\end{corollary}
When $M$ is compact, the isometry group is compact, hence this corollary applies with $G$ equal to the whole isometry group. As we mentioned in the introduction, reducing to the invariant case will be key in most of our results. Corollary \ref{cor:PWSCaveraging} follows immediately from Lemmas \ref{lem:averaging} and \ref{lem:u-averaging} below.
A fourth property of positive weighted sectional curvature is that Riemannian submersions preserve it in the following sense:
\begin{corollary} \label{cor:PWSCSubmersion}
Let $\pi:(M,g) \to (B,h)$ be a Riemannian submersion. Let $X$ be a vector field on $M$ that descends to a well-defined vector field $\pi_*X$ on $B$. If $(M,g,X)$ has positive weighted sectional curvature, then so does $(B,h,\pi_*X)$.\end{corollary}
This follows immediately from a generalization of O'Neill's formulas proved below (Theorem \ref{thm:submersions}). We also obtain from O'Neill's formulas that Cheeger deformations preserve positive weighted sectional curvature (Lemma \ref{Lemma:CheegerDeformation}).
Corollary \ref{cor:PWSCSubmersion} implies the following: If $(M,g,X)$ is compact with positive weighted sectional curvature and $G$ is a closed subgroup of the isometry group that acts freely on $M$, then $M/G$ admits positive weighted sectional curvature. Indeed, by Corollary \ref{cor:PWSCaveraging} we can modify $X$ so that it is $G$--invariant and so descends to a vector field on $M/G$ via the quotient map $\pi:M \to M/G$. It follows that $M/G$ equipped with the vector field $\pi_*X$ has positive weighted sectional curvature by Corollary \ref{cor:PWSCSubmersion}. We implicitly use this fact in the proof of Berger's theorem (see Corollary \ref{cor:Berger}).
Finally, a crucial property of positive weighted sectional curvature is that Synge-type arguments for positive sectional curvature generalize to the weighted setting. This follows from studying a second variation formula for the energy of geodesics that was derived in \cite{Wylie-pre}. Given a variation $\overline \gamma:[a,b]\times(-\ep,\ep) \to M$ of a geodesic $\gamma = \overline\gamma(\cdot,0)$, let $V = \left.\frac{\partial\overline\gamma}{\partial s}\right|_{s=0}$ denote the variation vector field along $\gamma$. The second variation of energy is given by
\[\left.\frac{d^2}{ds^2}\right|_{s=0} E(\gamma_s)
= I(V,V) + \left.g\of{\left.\frac{D}{\partial s}\frac{\partial\overline\gamma}{\partial s}\right|_{s=0}, \gamma'}\right|_{t=a}^{t=b},\]
where $I(V,V)$ is the index form of $\gamma$. The usual formula for the index form is
\[ I(V,V) = \int_a^b \of{ |V'|^2 -R^{\gamma'}(V,V) } dt .\]
In terms of the weighted directional curvature operators, the index form can be re-written as follows (see \cite[Section 5]{Wylie-pre}):
\begin{eqnarray}
\hspace{.1in}I(V,V)\hspace{-.1in} &=&\hspace{-.1in} \int_a^b \of{ |V'|^2
- R_X^{\gamma'}(V,V)
- 2g(\gamma', X)g(V,V')}dt
+ \left.g\of{\gamma', X} |V|^2\right|_{t=a}^{t=b}\label{eqn:IndexForm1}\\
~&=&\hspace{-.1in} \int_a^b \of{ |V' - g(\gamma',X)V|^2
- \overline R_X^{\gamma'}(V, V)}dt
+ \left.g\of{\gamma', X} |V|^2\right|_{t=a}^{t=b}\label{eqn:IndexForm2}
\end{eqnarray}
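Both identities can be checked directly from the usual formula. Along the geodesic one has $\frac{1}{2}(L_Xg)(\gamma',\gamma') = \frac{d}{dt}\, g\of{X, \gamma'}$, so that
\[ \int_a^b \frac{1}{2}(L_Xg)(\gamma',\gamma')\,|V|^2\, dt = \left. g\of{\gamma', X} |V|^2 \right|_{t=a}^{t=b} - \int_a^b 2 g(\gamma',X)\, g(V,V')\, dt . \]
Writing $R^{\gamma'}(V,V) = R_X^{\gamma'}(V,V) - \frac{1}{2}(L_Xg)(\gamma',\gamma')|V|^2$ in the usual formula and substituting the identity above gives \eqref{eqn:IndexForm1}; expanding $|V' - g(\gamma',X)V|^2$ and using $\overline R_X^{\gamma'}(V,V) = R_X^{\gamma'}(V,V) + g(X,\gamma')^2|V|^2$ shows that the integrands of \eqref{eqn:IndexForm1} and \eqref{eqn:IndexForm2} agree.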
It may not be immediately apparent why these formulas are natural, but they do allow us to generalize Synge-type arguments using the following.
\begin{lemma}\label{lem:SecondVariation}
Fix a triple $(M,g,X)$. Let $\gamma:[a,b] \to M$ be a geodesic on $M$, and let $Y$ be a unit-length, parallel vector field along and orthogonal to $\gamma$.
\begin{enumerate}
\item If $\mathbb{S}ec_X > 0$, then the variation $\gamma_s(t) = \exp(sY)$ of $\gamma$ satisfies
\[\left.\frac{d^2}{ds^2}\right|_{s=0}E(\gamma_s)
< \left.g\of{\gamma'(t),X_{\gamma(t)}}\right|_{t=a}^{t=b}.\]
\item If $X = \nabla f$ and $\overline\mathbb{S}ec_f > 0$, then the variation $\gamma_s(t) = \exp(s e^f Y)$ of $\gamma$ satisfies
\[\left.\frac{d^2}{ds^2}\right|_{s=0}E(\gamma_s)
< \left. e^{f(\gamma(t))}g\of{\gamma'(t),X_{\gamma(t)}}\right|_{t=a}^{t=b}.\]
\end{enumerate}
\end{lemma}
This lemma is used in Sections \ref{sec:Weinstein} and \ref{sec:Frankel} to generalize theorems of Weinstein, Berger, Synge, and Frankel, as well as Wilking's connectedness lemma. Once we have these results, it is not hard to see the how to generalize the proofs of
Theorems \ref{IntroGroveSearle} and \ref{IntroWilkingHomotopy} to the weighted setting. We indicate briefly how the arguments go.
The proofs proceed by induction on the dimension $n$, the base cases $n \in \{2,3\}$ being handled by the classification of simply connected, compact manifolds in these dimensions. If a torus acts effectively on $M$, which has positive weighted sectional curvature, then we obtain a fixed point set $N$ of lower dimension by Berger's theorem. The fixed point set of a subgroup of isometries is always a totally geodesic submanifold, and since we can assume $X$ is invariant under the group, we also obtain that $X$ is tangent to $N$. It follows immediately that $N$ with restricted vector field $X$ also has positive weighted sectional curvature. Finally, the torus action restricts to $N$, so, when the induction hypothesis applies, it follows by induction on the dimension of the manifold that $N$ satisfies the conclusion of the theorem. If the induction hypothesis does not apply, the codimension of $N$ is small and other arguments are used to show directly that $N$ satisfies the conclusion of the theorem. By applying Wilking's connectedness lemma, the topology of $M$ is recovered from the topology of $N$.
\subsection{Symmetrized weighted sectional curvature}\label{sec:symsec}
Unlike sectional curvature, a weighted sectional curvature $\mathbb{S}ec_X$ on a triple $(M,g,X)$ is not a function of $2$--planes. In this subsection, we define a symmetrized version of this quantity that is. We also compare the notions of sectional curvature and symmetrized weighted sectional curvature.
Given a vector field $X$ on a Riemannian manifold $M$,
$\mathbb{S}ec_X$ can be regarded as a function $\mathbb{S}ec_X(\mathbb{S}igma,V)$ of $(\mathbb{S}igma, V)$, where $\mathbb{S}igma \mathbb{S}ubseteq T_p M$ is a $2$--plane and $V$ is a unit vector in $\mathbb{S}igma$. To evaluate $\mathbb{S}ec_X(\mathbb{S}igma,V)$, choose either of the two unit vectors in $\mathbb{S}igma$ orthogonal to $V$, call it $U$, and evaluate $\mathbb{S}ec_X^V(U)$.
Note that the unit circle $\mathbb{S}^1(\mathbb{S}igma)$ in $\mathbb{S}igma$ is defined by the metric, so it makes sense to average over unit vectors $e^{i\theta} \leftrightarrow V \in \mathbb{S}^1(\mathbb{S}igma)$. We denote this by
\[\mathbb{S}ym\mathbb{S}ec_X(\mathbb{S}igma) = \frac{1}{2\pi} \int_0^{2\pi} \mathbb{S}ec_X(\mathbb{S}igma,e^{i\theta}) d\theta.\]
One can similarly define $\mathbb{S}ym\overline\mathbb{S}ec_X$. One appealing aspect of this curvature quantity is that it is the same kind of object as $\mathbb{S}ec$, a function on two--planes.
This definition was motivated by a suggestion of Guofang Wei to look at the quantity
\[\mathbb{S}ec_X^V(U) + \mathbb{S}ec_X^U(V).\]
Note that $\frac 1 2 \of{\mathbb{S}ec_X^V(U) + \mathbb{S}ec_X^U(V)}$ equals $\mathbb{S}ym\mathbb{S}ec_X$ and likewise in the strongly weighted case.
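Indeed, if $(V,U)$ is an orthonormal basis of $\mathbb{S}igma$, then every unit vector of $\mathbb{S}igma$ is of the form $\cos(\theta)V + \mathbb{S}in(\theta)U$, and since $\cos^2(\theta)$ and $\mathbb{S}in^2(\theta)$ average to $\frac{1}{2}$ over the circle while the cross term averages to zero,
\[ \mathbb{S}ym\mathbb{S}ec_X(\mathbb{S}igma) = \mathbb{S}ec(\mathbb{S}igma) + \frac{1}{4}\of{(L_Xg)(V,V) + (L_Xg)(U,U)} = \frac 1 2 \of{\mathbb{S}ec_X^V(U) + \mathbb{S}ec_X^U(V)}. \]
The same computation applies in the strongly weighted case, where the extra term $g(X,\cos(\theta)V+\mathbb{S}in(\theta)U)^2$ averages to $\frac{1}{2}\of{g(X,V)^2 + g(X,U)^2}$.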
We analyze the conditions $\mathbb{S}ym\mathbb{S}ec_X > 0$ and $\mathbb{S}ym\overline\mathbb{S}ec_X > 0$ in dimension two. First, it is clear that in any dimension
\[
\begin{array}{ccc}
\mathbb{S}ec_X > 0 & \Rightarrow & \mathbb{S}ym\mathbb{S}ec_X > 0\\
\Downarrow & ~ & \Downarrow\\
\overline\mathbb{S}ec_X > 0 &\Rightarrow & \mathbb{S}ym\overline\mathbb{S}ec_X > 0
\end{array}
\]
Second, in dimension 2, $\mathbb{S}ym \mathbb{S}ec_f = \frac{\mathbb{S}cal}{2} + \Delta f$. This is the same as the weighted Gauss curvature studied in \cite{CorwinHoffmanHurderSesumXu06, CorwinMorgan11}, which contain proofs that the Gauss--Bonnet theorems hold for this weighted curvature. In particular, we have the following (compare \cite[Proposition 5.3]{CorwinHoffmanHurderSesumXu06}):
\begin{theorem}[Gauss--Bonnet]\label{thm:GBsymsec}
If $M^2$ is closed and orientable, then $\int_M \mathbb{S}ym\mathbb{S}ec_X = 2\pi\chi(M)$.
\end{theorem}
This gives the following generalization of one case of Theorem \ref{thm:LowDimIntro}, which implies that a $2$--dimensional, compact manifold $M$ that admits $\mathbb{S}ec_X > 0$ for some vector field $X$ is diffeomorphic to a spherical space form.
\begin{corollary} If $M^2$ is compact and admits a metric and vector field $X$ with $\mathbb{S}ym \mathbb{S}ec_X >0$, then $M^2$ is diffeomorphic to a spherical space form. \end{corollary}
On the other hand, the torus $T^2$, while it does not admit $\mathbb{S}ym \mathbb{S}ec_X > 0$, does admit a metric with $\mathbb{S}ym \overline \mathbb{S}ec_X > 0$. To see this, equip the torus with a flat metric and a unit-length Killing field $X$; then we have
\[\mathbb{S}ym\overline\mathbb{S}ec_X = 0 + 0 + \frac{1}{2\pi}\int_0^{2\pi}g\of{X,e^{i\theta}}^2d\theta = \frac{1}{2}.\]
In fact, this example immediately generalizes as follows:
\begin{proposition}
If $(N,g)$ is a Riemannian manifold with positive sectional curvature, then $\mathbb{S}^1 \times N$ admits a metric and a vector field $X$ such that $\mathbb{S}ym \overline\mathbb{S}ec_X > 0$.
\end{proposition}
\begin{proof}
Let $g$ be the product metric, and let $X$ denote the unit-length Killing field tangent to the circle factor. If $\mathbb{S}igma$ is a two-plane tangent to $N$, then
\[\mathbb{S}ym\overline\mathbb{S}ec_X(\mathbb{S}igma) \geq \mathbb{S}ec^{g_N}(\mathbb{S}igma) > 0.\]
If $\mathbb{S}igma$ is a two-plane not contained in the tangent space to $N$, then
\[\mathbb{S}ym\overline\mathbb{S}ec_X(\mathbb{S}igma) \geq \frac 1 2 |\mathrm{proj}_\mathbb{S}igma(X)|^2 > 0,\]
where $\mathrm{proj}_\mathbb{S}igma$ denotes the projection onto $\mathbb{S}igma$.
\end{proof}
This raises the following question.
\begin{question} Does the torus admit $\overline{\mathbb{S}ec}_X >0$ or $\mathbb{S}ym \overline\mathbb{S}ec_f > 0$? More generally are there compact manifolds with $\overline{\mathbb{S}ec}_X >0$ or $\mathbb{S}ym \overline\mathbb{S}ec_f > 0$ and infinite fundamental group?
\end{question}
We point out that Gauss--Bonnet type arguments do not seem to give a different proof that any compact surface with density with $\overline{\mathbb{S}ec}_f>0$ is a sphere. Indeed, if we trace $\overline{\mathbb{S}ec}_X$, we obtain
\[ \mathbb{S}cal + \mathrm{div}(X) + |X|^2. \]
The integral of this is $4 \pi \chi(M) + \int_M |X|^2 d \vol_g$, which is not a topological quantity.
We also note that the Gauss-Bonnet theorem gives interesting information about other inequalities involving curvature. First we consider a positive lower bound.
\begin{proposition} \label{pro:SA} Let $(M,g)$ be a compact surface with $\mathbb{S}ym \mathbb{S}ec_X \geq 1$. The area of $M$ is at most $4\pi$. Moreover, if $\mathbb{S}ec_X \geq 1$ and the area of $M$ is $4\pi$ then $(M,g)$ is the round sphere and $X$ is a Killing field.
\end{proposition}
\begin{proof} We apply the discussion above to the universal cover $\tilde M$ of $M$, endowed with the pulled back metric $\tilde g$ and vector field $\tilde X$. It follows that $\chi(\tilde M) > 0$, so that $\chi(\tilde M) = 2$ and $\area(M) \leq \area(\tilde M) \leq 4 \pi$. Moreover, if $\area(M) = 4\pi$, then both of these inequalities are equalities. In particular, $\pi_1(M)$ is trivial and $\mathbb{S}ec_{\tilde X} = \Ric_{\tilde X}= \tilde g$, so that $(M,g,X) = (\tilde M,\tilde g, \tilde X)$ is a compact, two-dimensional Ricci soliton. A result of Chen, Lu, and Tian \cite{ChenLuTian06} then shows that $M$ has constant curvature $1$ and that $X$ is a Killing field. Since $M$ is simply connected, this proves the proposition.
\end{proof}
We can also consider the case of negative curvature in dimension $2$. It was shown in \cite{Wylie-pre} that if a compact manifold has $\overline{\mathbb{S}ec}_X \leq 0$, then the universal cover is diffeomorphic to Euclidean space, showing that a compact surface admits $\overline{\mathbb{S}ec}_X \leq 0$ if and only if it is not the sphere or real projective space. In fact, the Gauss--Bonnet argument improves this result for surfaces as it shows that the conclusion holds if $\mathbb{S}ym \mathbb{S}ec_X \leq 0$. Moreover, it also shows that if a metric on the torus supports a vector field with $\mathbb{S}ym \mathbb{S}ec_X \leq 0$ then the metric is flat and $X$ is Killing. In particular, the $2$-torus admits no metric and vector field $X$ with $\mathbb{S}ym \mathbb{S}ec_X < 0$.
The discussion above, along with the work of Corwin and Morgan \cite{CorwinMorgan11} certainly shows that the study of the symmetrized weighted sectional curvature is warranted. In fact, the results in Section \ref{sec:Submersions} of this paper about Riemannian submersions and Cheeger deformation have analogues for the symmetrized curvatures with the same proofs. On the other hand, there does not seem to be a good second variation formula for the symmetrized curvatures which can give us a version of Lemma \ref{lem:SecondVariation}. Note that the unsymmetrized curvatures also appear in the second variation of the weighted distance, see \cite{Morgan06, Morgan09}. Without some kind of second variation formula for the symmetrized curvatures, it seems unlikely that the other results of this paper can be generalized to the symmetrized case or that many of the facts for surfaces mentioned above can be generalized to higher dimensions.
\section{Examples}\label{sec:Examples}
In this section, we discuss a number of examples of metrics with positive weighted curvature, including some which do not have positive sectional curvature. As a warm-up we first consider the case of products.
\begin{definition} Given $(M_1, g_1, X_1)$ and $(M_2, g_2, X_2)$ where $(M_i, g_i)$ are Riemannian manifolds and $X_i$ are smooth vector fields, the product of $(M_1, g_1, X_1)$ and $(M_2, g_2, X_2)$ is the triple $(M_1 \times M_2, g_1 + g_2, X_1 + X_2)$.
\end{definition}
A basic fact about positive sectional curvature is that it is not preserved by taking products, as the sectional curvature of a plane spanned by vectors in each factor is zero. Indeed, one of the most famous open problems in Riemannian geometry is the Hopf conjecture, which states that $\mathbb{S}^2 \times \mathbb{S}^2$ does not admit any metric of positive sectional curvature. In the weighted case, there are noncompact examples of products which have positive weighted sectional curvature.
\begin{example} \label{Ex:Gaussian} We define the $1$-dimensional Gaussian as the real line $\mathbb{R}$ with coordinate $x$, standard metric $g = dx^2$, and vector field $X = \frac{1}{2}\nabla( x^2) = x \frac{d}{dx}$. This triple has $\mathbb{S}ec_X = 1$. If we take the product of two $1$-dimensional Gaussians we obtain a $2$-dimensional Gaussian. That is, we obtain $\mathbb{R}^2$ with the Euclidean metric and vector field $X = \nabla f$ where $f(p) = \frac{1}{2} |p|^2$. This triple still has $\mathbb{S}ec_X = 1$. Moreover, taking further products we obtain the $n$-dimensional Gaussian as the product of $n$ one-dimensional Gaussians, all of which have $\mathbb{S}ec_X = 1$.
\end{example}
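To see this explicitly in the $n$--dimensional case, note that the Euclidean metric is flat and $\Hess f = g$, so for every unit vector $V$ the weighted directional curvature operator is
\[ R_f^V = R^V + \Hess f(V,V)\,\mathrm{id} = 0 + \mathrm{id}, \]
and hence $\mathbb{S}ec_f^V(U) = 1$ for every orthonormal pair $(V,U)$.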
On the other hand, it is easy to see that such examples cannot exist in the compact case.
\begin{proposition}
No product of the form $(M_1 \times M_2, g_1+g_2, X_1+X_2)$ with one of the $M_i$ compact has positive weighted sectional curvature.
\end{proposition}
\begin{proof}
Let $M_1$ be the compact factor and suppose first that $\mathbb{S}ec_X > 0$. Consider the ``vertizontal'' curvature given by $Y$ tangent to $M_1$ and $U$ tangent to $M_2$,
\[ \mathbb{S}ec_X (Y, U) = \mathbb{S}ec(Y,U) + \frac{1}{2} L_X g(Y,Y) = \frac{1}{2} L_{X_1} g_1(Y,Y). \]
This shows that if $\mathbb{S}ec_X > 0$, then $\frac{1}{2} L_{X_1} g_1 > 0$. This is impossible if $M_1$ is compact by the divergence theorem, as $\mathrm{tr}\left(L_{X_1} g_1\right) = 2\,\mathrm{div}(X_1)$.
The case where $\overline{\mathbb{S}ec}_f > 0$ is analogous. In that case we obtain that the function $u_1 = e^{f_1}$ has $\mathrm{Hess}_{g_1} u_1 > 0$ on $M_1$, which is again impossible on a compact manifold.
\end{proof}
This shows that the Hopf conjecture is also an interesting question for weighted sectional curvature.
\begin{question}[Weighted Hopf conjecture] Does $\mathbb{S}^2 \times \mathbb{S}^2$ admit a metric and vector field with positive weighted sectional curvature?
\end{question}
In the next few sections, we investigate examples with positive weighted sectional curvature using the simple construction of warped products over a one-dimensional base. As we can see even in the case of products, it is easier to construct non-compact examples than compact ones, so we will investigate the non-compact case first.
\subsection{Noncompact Examples}
A warped product metric over a $1$--dimensional base is a metric of the form $g = dr^2 + \phi^2(r) g_N$, where $N$ is an $(n-1)$--dimensional manifold. Up to rescaling $\phi$ and the fiber metric $g_N$ and re-parametrizing $r$, there are three possibilities for the topology of complete metrics of this form:
\begin{enumerate}
\item If $\phi(r) >0$ for $r \in \mathbb{R}$ and $(N,g_N)$ is complete, then $g$ gives a complete metric on $\mathbb{R} \times N$. If $\phi$ is also periodic, then we can take the quotient to get a metric on $\mathbb{S}^1 \times N$.
\item If $\phi(r) >0$ for $r\in (0, \infty)$ and $\phi$ is an odd function with $\phi'(0) = 1$, and if $(N,g_N)$ is a round sphere of constant curvature $1$, then $g$ defines a complete rotationally symmetric metric on $\mathbb{R}^n$.
\item If $\phi(r)>0$ for $r \in (0,R)$ and $\phi$ is an odd function at $0$ and $R$ with $\phi'(0) = 1$ and $\phi'(R) = -1$, and if $(N,g_N)$ is a round sphere of constant curvature $1$, then $g$ defines a complete rotationally symmetric metric on $\mathbb{S}^n$.
\end{enumerate}
For $Y,Z$ tangent to $N$, we have the following well-known formulas for the curvature operator of a one-dimensional warped product,
\begin{eqnarray*}
\mathcal{R}(\partial_r \wedge Y)
&=& -\frac{\phi''}{\phi} \partial_r \wedge Y \\
\mathcal R(Y \wedge Z)
&=& \mathcal R^N(Y \wedge Z) - \of{\frac{\phi'}{\phi}}^2 Y \wedge Z
\end{eqnarray*}
where $\mathcal R^N$ denotes the curvature operator of $N$. We will be interested in lower bounds on weighted curvature of the warped product. All of our examples will also have the property that $X = \nabla f$, so we focus only on this case. The following lemma simplifies the problem of proving such lower bounds for warped product metrics over a one-dimensional base.
\begin{lemma}\label{lem:SinglyWarpedTest}
Let $dr^2 + \phi^2(r) g_N$ be a warped product metric, and assume $f$ is a smooth function that only depends on $r$. The weighted curvature $\mathbb{S}ec_f \geq \lambda$ if and only if
\begin{eqnarray*}
\lambda &\leq& \mathbb{S}ec_f^{\partial_r}(Y) = -\frac{\phi''}{\phi} + f'',\\
\lambda &\leq& \mathbb{S}ec_f^{Y}(\partial_r) = -\frac{\phi''}{\phi} + f'\frac{\phi'}{\phi},~\mathrm{and}\\
\lambda &\leq& \mathbb{S}ec_f^{Y}(Z) = \frac{\mathbb{S}ec^{g_N}(Y,Z) - (\phi')^2}{\phi^2} + f'\frac{\phi'}{\phi},
\end{eqnarray*}
for all orthonormal pairs $(Y,Z)$, where $Y$ and $Z$ are tangent to $N$.
Similarly, $\overline\mathbb{S}ec_f \geq \lambda$ if and only if these three inequalities hold with $f'$ replaced by $u'/u$ and $f''$ replaced by $u''/u$, where $u = e^f$.
\end{lemma}
This lemma implies that one can show $\mathbb{S}ec_f \geq \la$ for these metrics by plugging in ``test pairs'' of the form $(\partial_r, Y)$, $(Y,\partial_r)$, and $(Y,Z)$, where $Y$ and $Z$ are tangent to $N$. In particular, if $\mathbb{S}ec^{g_N}$ is bounded from below, then proving $\mathbb{S}ec_f \geq \lambda$ reduces to showing three inequalities involving the functions $\phi$ and $f$.
\begin{proof}
As it is similar, we omit the proof in the strongly weighted case. Let $U = a\partial_r + Y$ and $V = b\partial_r + Z$ be an arbitrary orthonormal pair of vectors, where $Y$ and $Z$ are tangent to $N$. By orthonormality,
$|Y \wedge Z|^2 = 1 - a^2 - b^2$,
so $a^2 + b^2 \leq 1$. Writing $U \wedge V = \partial_r \wedge (aZ-bY) + Y\wedge Z$ and noting that the curvature operator preserves the orthogonal subspaces $\partial_r \wedge TN$ and $\Lambda^2 TN$, we have
\begin{eqnarray*}
\mathbb{S}ec(U,V) &=& \inner{\mathcal R(U\wedge V), U \wedge V}\\
&=& - \frac{\phi''}{\phi} |\partial_r \wedge (aZ - bY)|^2
+ \of{\frac{\mathbb{S}ec^N(Y,Z) - (\phi')^2}{\phi^2}} |Y\wedge Z|^2\\
&=& - \frac{\phi''}{\phi} \of{a^2 + b^2}
+ \of{\frac{\mathbb{S}ec^N(Y,Z) - (\phi')^2}{\phi^2}} \of{1 - a^2 - b^2}.
\end{eqnarray*}
Next, we calculate
\[\Hess f(U,U)|V|^2 = a^2 f'' + (1-a^2) f' \frac{\phi'}{\phi}.\]
Observe that $\mathbb{S}ec_f^U(V) = \mathbb{S}ec(U,V) + \Hess f(U,U) |V|^2$ is a linear function in the quantities $a^2$ and $a^2 + b^2$. Moreover, these quantities vary over a triangle since
\[0 \leq a^2 \leq a^2 + b^2 \leq 1,\]
so the minimal (and maximal) values of $\mathbb{S}ec_f^U(V)$ occur at one of the three corners. This proves the lemma since these corners correspond to orthonormal pairs of the form $(\partial_r, Y)$, $(Y,\partial_r)$, and $(Y,Z)$.
\end{proof}
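As a quick consistency check, take $\phi(r) = \mathbb{S}in(r)$ on $[0,\pi]$, $g_N = g_{\mathbb{S}^{n-1}}$, and $f$ constant. Then
\[ -\frac{\phi''}{\phi} = 1 \qquad \mathrm{and} \qquad \frac{\mathbb{S}ec^{g_N}(Y,Z) - (\phi')^2}{\phi^2} = \frac{1 - \cos^2(r)}{\mathbb{S}in^2(r)} = 1, \]
so all three quantities in Lemma \ref{lem:SinglyWarpedTest} equal $1$, recovering the fact that the round metric has constant curvature $1$.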
As a first application of Lemma \ref{lem:SinglyWarpedTest} we consider the problem of prescribing positive weighted sectional curvature locally on a subset of the round sphere.
\begin{proposition}
Let $M$ be a round sphere of constant curvature $1$ and $H^+$ an open round hemisphere in $M$. For any $\lambda \in \mathbb{R}$, there is a density on $H^+$ with $\mathbb{S}ec_f \geq \lambda$ and there is no density defined on an open set containing the closure of $H^+$ with $\overline{\mathbb{S}ec}_f > 1$.
\end{proposition}
\begin{proof}
First we prove the non-existence. It suffices to show that a geodesic ball $B$ of radius $\frac \pi 2 + \ep$ cannot admit a density $f$ such that $\overline\mathbb{S}ec_f > 1$. On $B$, we can write the round metric as the warped product $dr^2 + \mathbb{S}in^2(r)g_{\mathbb{S}^{n-1}}$, where $r \in \left(0, \frac{\pi}{2} + \varepsilon\right)$. By Lemma \ref{lem:u-averaging} proved in the next section, we can assume that $f =f(r)$. By Lemma \ref{lem:SinglyWarpedTest}, $\overline{\mathbb{S}ec}_f > 1$ only if $\frac{u'}{u} \cot(r) > 0$. However, $\cot\left(\frac{\pi}{2} \right) = 0$, so the second inequality is impossible to satisfy.
On the other hand, in order to find a density $f$ with $\mathbb{S}ec_f \geq \lambda$, we only need that
\begin{eqnarray*}
f'' &\geq& \lambda -1 \qquad \text{and} \qquad
f' \cot(r) \geq \lambda-1
\end{eqnarray*}
Such a density exists: for $\lambda \leq 1$ one may take $f = 0$, while for $\lambda > 1$ the function
\[ f(r) = (\lambda-1) \int_0^r \tan(x)\,dx = -(\lambda-1)\log(\cos(r)) \]
satisfies these properties. Note that in the latter examples, $f$ blows up at the equator $r = \frac{\pi}{2}$.
\end{proof}
On the other hand, we note the general fact that every point $p$ in a Riemannian manifold has a neighborhood $U$ supporting a density such that $\mathbb{S}ec_f \geq \lambda$.
\begin{proposition} Let $(M,g)$ be a Riemannian manifold, $p \in M$, and $\lambda \in \mathbb{R}$. There is an open set $U$ containing $p$ which supports a density $f$ such that $\mathbb{S}ec_f \geq \lambda$ on $U$. \end{proposition}
\begin{proof}
Let $r$ be the distance function to $p$. Since $\Hess r \mathbb{S}im 1/r$ as $r \rightarrow 0$, there exists $0<\varepsilon<1$ such that $dr \otimes dr + r \Hess r > \ep g$ on $B(p,\varepsilon)$. Let $\rho = \inf \mathbb{S}ec(B(p,\varepsilon), g)$. We may assume $\lambda > \rho$, since otherwise $f = 0$ already satisfies $\mathbb{S}ec_f \geq \lambda$. Define $f = \frac{\lambda-\rho}{2\varepsilon} r^2$. We have that
\[ \Hess f = \frac{\lambda-\rho}{\varepsilon}\of{ dr \otimes dr + r \Hess r } \geq (\lambda-\rho) g, \]
which implies that $\mathbb{S}ec_f \geq \lambda$ on $B(p, \varepsilon)$.
\end{proof}
Now we come to our first complete example.
\begin{proposition} \label{pro:WPPosCurv}
Let $(N,g_N)$ be a Riemannian manifold of non-negative sectional curvature. For any $\lambda$, the metric $g = dr^2 + e^{2r} g_N$ on $\mathbb{R} \times N$ admits a density of the form $f=f(r)$ such that $\overline{\mathbb{S}ec}_f \geq \lambda$. On the other hand, $g$ admits no density of the form $f=f(r)$ with $\mathbb{S}ec_f \geq -1 + \ep$ for any $\ep > 0$.
\end{proposition}
\begin{proof}
Set $\phi(r) = e^r$. Because $N$ has non-negative sectional curvature, Lemma \ref{lem:SinglyWarpedTest} implies $\overline\mathbb{S}ec_f \geq \lambda$ if and only if $-1 + \frac{u''}{u} \geq \lambda$ and $-1 + \frac{u'}{u} \geq \lambda$. This can be achieved by taking $u = e^{Ar}$ for some sufficiently large $A \in \R$.
On the other hand, for a general $f= f(r)$, if we have $\mathbb{S}ec_f \geq -1 + \ep$, then $f$ satisfies $f''(r)\geq \ep$ and $f'(r) \geq \ep$ for all $r \in \R$. This is impossible: since $f'' \geq \ep > 0$ on all of $\R$, $f'(r) \to -\infty$ as $r \to -\infty$, contradicting $f' \geq \ep$.
\end{proof}
\begin{remark} Gromoll and Meyer \cite{GromollMeyer69} proved that a non-compact, complete manifold with $\mathbb{S}ec>0$ is diffeomorphic to Euclidean space. These examples show this is not true for $\overline{\mathbb{S}ec}_f>0$. Moreover, Cheeger and Gromoll \cite{CheegerGromoll72} showed that a non-compact complete manifold with $\mathbb{S}ec \geq 0$ is diffeomorphic to the normal bundle of a compact totally geodesic submanifold called a soul. While our examples are topologically $\mathbb{R} \times N$, we note that the cross sections $\{r_0\} \times N$ are not geometrically a ``soul'' as they are not totally geodesic.
\end{remark}
\begin{remark}
If we take $g_N$ to be a flat metric, then the metric $g = dr^2 + e^{2r} g_N$ is a hyperbolic metric. If we also choose $f(r) = r$, then we get a density with constant $\overline{\mathbb{S}ec}_f = 0$.
\end{remark}
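This can be read off from Lemma \ref{lem:SinglyWarpedTest}: with $\phi(r) = e^r$, $\mathbb{S}ec^{g_N} = 0$, and $u = e^{f} = e^{r}$, the three test quantities are
\[ -\frac{\phi''}{\phi} + \frac{u''}{u} = -1 + 1 = 0, \qquad -\frac{\phi''}{\phi} + \frac{u'}{u}\frac{\phi'}{\phi} = 0, \qquad \frac{0 - (\phi')^2}{\phi^2} + \frac{u'}{u}\frac{\phi'}{\phi} = 0, \]
and the interpolation argument in the proof of the lemma then shows that every strongly weighted sectional curvature vanishes.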
\subsection{Compact Examples}
Now we give examples of rotationally symmetric metrics on the $n$--sphere which admit a density $f$ such that $\mathbb{S}ec_f>0$ but do not have $\mathbb{S}ec\geq 0$.
In general, a rotationally symmetric metric on the sphere will be of the form $g = dr^2 + \phi^2(r)g_{\mathbb{S}^{n-1}}$ for $r\in [0,2L]$. The smoothness conditions for the warping function $\phi$ and density function $f$ are that $\phi(0) = \phi(2L) = 0$, $\phi'(0) = 1$, $\phi'(2L) = -1$, $\phi^{(even)}(0)= \phi^{(even)}(2L) = 0$ and $f'(0) = f'(2L) = 0$. Our main construction is contained in the following proposition.
\begin{proposition}\label{pro:RotSym}
There are rotationally symmetric metrics on $\mathbb{S}^n$ which support a density $f$ such that $\mathbb{S}ec_f >0$, but which do not have $\mathbb{S}ec \geq 0$.
\end{proposition}
\begin{proof}
First we define $\phi(r) = r$ on $[0, \pi/6]$ and $\phi(r) = \mathbb{S}in(r)$ on $[\pi/3, \pi/2]$. On the interval $(\pi/6, \pi/3)$, extend $\phi$ smoothly so that $\phi'' \leq 0$ and $\phi' \geq 0$. Then we reflect $\phi$ across $\pi/2$ to obtain a warping function defined on $[0, \pi]$ that gives a smooth rotationally symmetric metric on the sphere. Geometrically, this metric consists of two flat discs connected by a region of positive curvature.
Now define $f(r) = \frac{1 }{2} r^2$ on $[0, \pi/3]$ and extend $f$ on $(\pi/3, \pi/2]$ so that $f'>0$ on $(\pi/3, \pi/2)$ and has $f^{(odd)}(\pi/2) = 0$ so that $f$ also defines a smooth function when reflected across $\pi/2$.
Now we consider the potential function $\lambda f$ for a positive constant $\lambda$. The table below shows the values of the eigenvalues of the curvature operator and of the Hessian of $\lambda f$ on the different regions:
\[\begin{array}{|c|c|c|c|c|}\hline
~ & \frac{-\phi''}{\phi} & \frac{1 - (\phi')^2}{\phi^2} & \lambda f'' & \lambda f' \frac{\phi'}{\phi} \\\hline
[0, \pi/6] & 0 & 0 & \lambda & \lambda \\\hline
(\pi/6, \pi/3] & > 0 & > 0 & \lambda & \geq\lambda \\\hline
(\pi/3, \pi/2] & 1 & 1 & \lambda f'' & \geq 0 \\\hline
\end{array}\]
By Lemma \ref{lem:SinglyWarpedTest}, $\mathbb{S}ec_f \geq \lambda$ on $[0, \pi/3]$. On $(\pi/3, \pi/2]$ note that $f'' < 0$ somewhere since $f'$ must decrease from $\pi/3$ to $0$. However, by choosing $\lambda$ small enough we can make $1+\la f'' \geq \la$ on $[\pi/3, \pi/2]$, and then we will have $\mathbb{S}ec_f \geq \la$ everywhere.
We have thus constructed examples with $\mathbb{S}ec_f > 0$ but which do not have $\mathbb{S}ec>0$. Of course, this example does have $\mathbb{S}ec \geq 0$. However, since having $\mathbb{S}ec_f>0$ is an open condition, we can perturb the metric in an arbitrarily small way (for instance, making the flat discs slightly negatively curved) and still have $\mathbb{S}ec_f >0$. This will give metrics with some negative sectional curvatures which still have $\mathbb{S}ec_f > 0$.
\end{proof}
On the other hand, we note that most rotationally symmetric metrics on the sphere do not have any density such that $\overline{\mathbb{S}ec}_f > 0$.
\begin{proposition} \label{pro:RotObstruct} Let $g = dr^2 + \phi^2(r) g_{\mathbb{S}^{n-1}}$, $r \in [0,2L]$, be a metric on $\mathbb{S}^n$.
\begin{enumerate}
\item If there is a density $f$ such that $\mathbb{S}ec_f > 0$ then $\int_0^{2L} \frac{-\phi''(r)}{\phi(r)} dr \geq 0 $.
\item If there is a density $f$ such that $\overline{\mathbb{S}ec}_f > 0$, then $\phi$ has a unique critical point $t_0$. Moreover, at $t_0$, the metric has positive sectional curvature.
\end{enumerate}
\end{proposition}
\begin{proof}
By Lemmas \ref{lem:averaging} and \ref{lem:u-averaging} we can assume in either case that $f$ is a function of $r$. Both results are simple consequences of the equations for curvature. For the first, we consider the equation
\[ \mathbb{S}ec_f^{\partial_r} (Y) = \frac{-\phi''}{\phi} + f'' > 0. \]
For $f$ to define a smooth function, we must have $f'(0) = f'(2L) = 0$, so integrating the equation gives (1). In dimension $2$, this is the Gauss--Bonnet theorem discussed in Subsection \ref{sec:symsec}, see Theorem \ref{thm:GBsymsec}.
For (2), consider a point where $\phi'(t) = 0$. Fix an orthonormal pair of vectors, $Y$ and $Z$, at this point that are tangent to $N$. Since $\Hess u(Y,Y) = u' \frac{\phi'}{\phi} g(Y,Y) = 0$, the only way $\overline{\mathbb{S}ec}_f^Y(\partial_r)$ and $\overline\mathbb{S}ec_f^Y(Z)$ can be positive is if $\mathbb{S}ec(\partial_r, Y)$ and $\mathbb{S}ec(Y,Z)$ are positive. It follows that all sectional curvatures are positive at this point. Moreover, it follows that $\phi''(t) < 0$ at each critical point $t$, so there can be at most one critical point of $\phi$.
\end{proof}
\begin{remark} Proposition \ref{pro:RotObstruct}, part (2), shows that a spherical ``dumbbell'' metric consisting of two spheres connected by a long neck of non-positive curvature does not have any density with $\overline{\mathbb{S}ec}_f > 0$.
\end{remark}
Now we consider doubly warped products of the form
\[ g = dr^2 + \phi^2(r) g_{\mathbb{S}^k}
+ \psi^2(r)g_{\mathbb{S}^m} \qquad r \in [0,L]. \]
These metrics are also cohomogeneity one with $G = \Or(k+1) \times \Or(m+1)$, so by Lemmas \ref{lem:averaging} and \ref{lem:u-averaging} we can assume that the density is of the form $f=f(r)$. We also have
\[ \Hess r = \phi' \phi g_{\mathbb{S}^{k}} + \psi' \psi g_{\mathbb{S}^m}. \]
So
\[ \Hess f = f'' dr^2 + f' \phi' \phi g_{\mathbb{S}^{k}}
+ f' \psi' \psi g_{\mathbb{S}^m}.\]
In order for $f$ to be $C^2$ we thus need $f'(0) = f'(L) = 0$.
We let $Y,Z$ denote vectors in the $\mathbb{S}^k$ factor and $U,V$ be vectors in the $\mathbb{S}^m$ factor. The curvature operator in this case is
\begin{eqnarray*}
\mathcal{R}(\partial_r \wedge Y) &=& -\frac{\phi''}{\phi} \partial_r \wedge Y \\
\mathcal{R}(\partial_r \wedge U) &=& -\frac{\psi''}{\psi} \partial_r \wedge U \\
\mathcal{R}(Y \wedge Z) &=& \frac{1 - (\phi')^2}{\phi^2} Y \wedge Z\\
\mathcal{R}(U \wedge V) &=& \frac{1 - (\psi')^2}{\psi^2} U \wedge V\\
\mathcal{R}(Y \wedge U) &=& -\frac{\phi' \psi'}{\phi \psi} Y \wedge U
\end{eqnarray*}
This shows that, at a point $(r, p, q)$, there exists a basis $\{E_i\}$ of the tangent space such that the following hold:
\begin{itemize}
\item The $E_i$ are eigenvectors of $\Hess f$, and
\item The $E_i \wedge E_j$ for $i<j$ are eigenvectors of $\mathcal R$.
\end{itemize}
In this setting, we will use the following result to show that certain doubly warped products on the sphere have positive weighted sectional curvature. Its proof is algebraic and is postponed until the next subsection.
\begin{corollary}\label{cor:Minimizingsecf}
Let $(M,g)$ be a closed Riemannian manifold with non-negative curvature operator $\mathcal R$. Let $X$ be a vector field on $M$. Assume that, for all $p\in M$, the tangent space at $p$ has a basis $\{E_i\}$ such that all of the following hold:
\begin{itemize}
\item $E_i$ is an eigenvector for $L_Xg$ with eigenvalue $\mu_i$ for all $i$,
\item $E_i \wedge E_j$ is an eigenvector for $\mathcal R$ with eigenvalue $\la_{ij}$ for all $i < j$, and
\item $\la_{ij} > 0$ or $\min(\mu_i, \mu_j) > 0$ for all $i < j$.
\end{itemize}
Then there exists $\la > 0$ such that $(M,g,\la X)$ has positive weighted sectional curvature.
\end{corollary}
More geometrically, this result allows us to conclude that $\mathbb{S}ec_{\la X} > 0$ by testing this condition on orthonormal pairs of the form $(E_i, E_j)$ or $(E_j, E_i)$ with $i < j$.
\begin{proposition}
For any positive integers $m$ and $k$, there is a doubly warped product metric on $\mathbb{S}^{k+m+1}$ of the form $ g = dr^2 + \phi^2(r) g_{\mathbb{S}^k} + \psi^2(r)g_{\mathbb{S}^m}$ with $\mathbb{S}ec_f >0$ but which does not have $\mathbb{S}ec \geq 0$.
\end{proposition}
\begin{proof}
Let $r$ vary over the interval $[0, \pi/2]$, choose $\phi$ and $f$ as in the proof of Proposition \ref{pro:RotSym}, and set $\psi(r) = \cos(r)$. The proof of Proposition \ref{pro:RotSym} shows that we can scale $f$ so that the weighted sectional curvatures of the pairs involving $\partial_r$ and $Y$ are positive. For this argument, we apply Corollary \ref{cor:Minimizingsecf}.
Choose an orthonormal basis $\{E_i\}_{i=0}^{k+m}$ for the tangent space with $E_0 = \partial_r$, with $E_1,\ldots,E_{k}$ tangent to $\mathbb{S}^k$, and with $E_{k+1},\ldots, E_{k+m}$ tangent to $\mathbb{S}^m$. This basis satisfies the first two conditions of Corollary \ref{cor:Minimizingsecf}. It suffices to check the third condition.
Using the expressions above for the curvature operator, all $\la_{ij} > 0$, except in the case where $r \in [0,\pi/6]$ and where each of $E_i$ and $E_j$ is either equal to $\partial_r$ or tangent to the $\mathbb{S}^k$ factor. For these indices, however, $\mu_i = \Hess f(E_i, E_i) > 0$ and $\mu_j = \Hess f(E_j,E_j) > 0$, since $f'' = 1$ and $f'\frac{\phi'}{\phi} = 1$ on this region. By Corollary \ref{cor:Minimizingsecf} we have $\mathbb{S}ec_{\la f} >0$ for some $\la > 0$. The fact that we can make $\mathbb{S}ec<0$ for some two-planes follows for the same reason it was true in the rotationally symmetric case.
\end{proof}
Applying O'Neill's formula from Section \ref{sec:Submersions}, this also gives us an example on $\C\mathbb{P}^n$.
\begin{proposition}\label{pro:CPnExample} There are cohomogeneity one metrics on $\C\mathbb{P}^n$ which admit a density such that $\mathbb{S}ec_f >0$ but which do not have $\mathbb{S}ec \geq 0$. \end{proposition}
\begin{proof}
Consider a double warped product metric on the sphere $\mathbb{S}^{2n+1}$ of the form
\[g = dr^2 + \phi^2(r) g_{\mathbb{S}^{2n-1}} + \psi^2(r) d\theta^2 \]
Consider the Hopf fibration on $\mathbb{S}^{2n-1}$ and write the metric $g_{\mathbb{S}^{2n-1}} = k+h$, where $h$ is the metric tangent to the Hopf fiber and $k$ is the metric on the orthogonal complement. Simultaneous complex multiplication on the $\mathbb{S}^{2n-1}$ and $\mathbb{S}^1$ factors induces a free isometric circle action on $(\mathbb{S}^{2n+1}, g)$, and the quotient is $\C\mathbb{P}^n$. The quotient map is a Riemannian submersion if we equip $\C\mathbb{P}^n$ with the metric
\[ dr^2 + \phi^2(r)k + \frac{(\phi(r) \psi(r))^2}{\phi^2(r) + \psi^2(r) } h \]
By O'Neill's formula (Theorem \ref{thm:submersions}), we know this metric also has $\mathbb{S}ec_f >0$. Note also that if $Y$ is a horizontal vector field in the $\mathbb{S}^{2n-1}$ factor then for $r>0$, $[\partial_r, Y] = 0$ which implies that the sectional curvature $\mathbb{S}ec_f^{\partial_r}(Y)$ does not change under the submersion. Since there are curvatures in the doubly warped product of this form which are negative, we also obtain that the metric on $\C\mathbb{P}^n$ has some negative sectional curvatures.
\end{proof}
\subsection{Proof of Corollary \ref{cor:Minimizingsecf}}\label{sec:Computations}
This section is devoted to the proof of Corollary \ref{cor:Minimizingsecf}, which is applied in the previous section. The proof is algebraic and not required for the rest of the paper, so the reader may choose to skip this subsection. The result is restated here for convenience:
\begin{nonumbercorollary}[Corollary \ref{cor:Minimizingsecf}]
Let $(M,g)$ be a closed Riemannian manifold with non-negative curvature operator $\mathcal R$. Let $X$ be a vector field on $M$. Assume that, for all $p\in M$, the tangent space at $p$ has a basis $\{E_i\}$ such that all of the following hold:
\begin{itemize}
\item $E_i$ is an eigenvector for $L_Xg$ with eigenvalue $\mu_i$ for all $i$,
\item $E_i \wedge E_j$ is an eigenvector for $\mathcal R$ with eigenvalue $\la_{ij}$ for all $i < j$, and
\item $\la_{ij} > 0$ or $\min(\mu_i, \mu_j) > 0$ for all $i < j$.
\end{itemize}
Then there exists $\la > 0$ such that $(M,g,\la X)$ has positive weighted sectional curvature.
\end{nonumbercorollary}
To prove this result, first note that it suffices to prove that a $\la > 0$ as in the conclusion exists at every point in $M$. It is then straightforward to conclude this pointwise claim from the following lemma together with the non-negativity of the curvature operator.
\begin{lemma}\label{lem:MinimizingsecfHACK}
Let $(V, \inner{\cdot,\cdot})$ be a finite-dimensional inner product space. Let $\mathcal L$ and $\mathcal R$ be symmetric, linear maps on $V$ and $\Lambda^2 V$, respectively. Assume there exists an orthonormal eigenbasis $\{E_i\}$ for $\mathcal L$ such that $\{E_i \wedge E_j\}_{i<j}$ is an eigenbasis for $\mathcal R$. Denote the corresponding eigenvalues by $\mu_i$ and $\la_{ij}$, respectively. Set $\la_{ji} = \la_{ij}$ for $i<j$. Considered as a function of orthonormal pairs $(Y,Z)$ in $V$, the minimum and maximum values of
\[\mathcal S(Y,Z) = \inner{\mathcal R(Y \wedge Z), Y \wedge Z} + \inner{\mathcal L(Y),Y}\]
lie in the set
\[\left\{\la_{ij}+\mu_i \mathbb{S}t i,j~\mathrm{distinct}\right\}
\cup \left\{\frac{1}{2}\of{\la_{ij}+\la_{kl}+\mu_i+\mu_j} \mathbb{S}t i,j,k,l~\mathrm{distinct}\right\}.\]
\end{lemma}
\begin{proof}
Let $n = \dim(V)$. Let $Y = \mathbb{S}um a_i E_i$ and $Z = \mathbb{S}um b_i E_i$ be an orthonormal pair in $V$. Observe that
\[\mathcal S(Y,Z)
= \mathbb{S}um_{i < j} \la_{ij} z_{ij} + \mathbb{S}um_i \mu_i x_i
= S(x_i, z_{ij}),\]
where $x_i = a_i^2$ for $1 \leq i \leq n$ and $z_{ij} = (a_ib_j - a_jb_i)^2$ for $1 \leq i < j \leq n$. To simplify notation later, set $z_{ii} = 0$ and $z_{ji} = z_{ij}$ for $1 \leq i < j \leq n$. By orthonormality of $(Y,Z)$, all of the following hold:
\begin{enumerate}
\item $x_i \geq 0$ and $\mathbb{S}um x_i = 1$, hence the vector $x = (x_i)$ lies on the standard simplex $\Delta^{n-1} \mathbb{S}ubseteq \R^n$.
\item Likewise, $z = (z_{ij})$ lies on the standard simplex $\Delta^{\binom{n}{2}-1} \mathbb{S}ubseteq \R^{n(n-1)/2}$.
\item For all $1 \leq i \leq n$, $x_i \leq \mathbb{S}um_{j=1}^n z_{ij}$.
\end{enumerate}
Hence $\mathcal S(Y,Z)$ equals $S(x,z)$ for some point $(x,z)$ in the convex polytope $C \subseteq \R^n \times \R^{n(n-1)/2}$ defined by
\[C = \left\{(x,z) \in \Delta^{n-1} \times \Delta^{\binom{n}{2}-1} \st x_i \leq \sum_j z_{ij} \mathrm{~for~all~} i\right\}.\]
To prove the lemma, it suffices to show that the function $S:C \to \R$ has extremal values in the set described in the conclusion of the lemma.
We prove this claim by induction over $n$. First, if $n = 2$, then $C = \Delta^1 \times \Delta^0$, so
\[S(x,z) = \la_{12} + \mu_1 x_1 + \mu_2 x_2\]
has extremal values $\la_{12} + \mu_1$ and $\la_{12} + \mu_2$, as claimed. Assume now that $n \geq 3$ and that the claim holds in dimension $n-1$.
Since $C$ is a convex polytope -- i.e., an intersection of half-spaces -- and since $S$ is linear, the extremal values are attained at the corners (or $0$--dimensional faces) of $C$. We now evaluate $S$ at these corners.
Let $(x,z) \in C$ be a corner. There exist $0 \leq k \leq n$ and distinct indices $i_1,\ldots,i_k$ such that all of the following hold:
\begin{enumerate}
\item $(x,z)$ lies in the interior of a $k$--dimensional face of $\Delta^{n-1} \times \Delta^{\binom{n}{2}-1}$,
\item $x_{i_h} = \sum_{j=1}^{n} z_{i_h j}$ for $1 \leq h \leq k$, and
\item $x_i \leq \sum_{j=1}^n z_{ij}$ for all $1 \leq i \leq n$.
\end{enumerate}
Indeed, each corner of $C$ is obtained by intersecting some $k$--dimensional face of $\Delta^{n-1} \times \Delta^{\binom{n}{2}-1}$ with some choice of $k$ hyperplanes $x_i = \sum_j z_{ij}$. Recall that a $k$--dimensional face of the product is a product of an $l$--dimensional face with a $(k-l)$--dimensional face for some $0 \leq l \leq k$. Also recall that a $k$--dimensional face of a standard simplex is given by a choice of $k+1$ indices $i_0,\ldots,i_k$ for which $x_{i_0} + \ldots + x_{i_k} = 1$ and all other $x_i = 0$. Moreover, the interior of this face is the set of such points where, in addition, each of the $x_{i_h} > 0$.
First, suppose that $k = 0$. In other words, suppose that $(x,z)$ lies on a corner of $\Delta^{n-1} \times \Delta^{\binom{n}{2}-1}$. There exist $i$ and $p<q$ such that $x_i = 1$, $z_{pq} = 1$, and all other entries of $x$ and $z$ are zero. By condition (3), $i \in \{p,q\}$, hence $S(x,z)$ equals $\la_{iq} + \mu_i$ or $\la_{pi} + \mu_i$, as required.
Second, suppose that $k \geq 1$ and that there exists $i_h$ with $x_{i_h} = 0$. By conditions (1) and (2), $z_{i_h j} = 0$ for all $j$. Hence $S(x,z)$ does not contain any terms with index $i_h$. The claim follows in this case by the induction hypothesis.
Finally, suppose that $k \geq 1$ and $x_{i_h} > 0$ for all $1 \leq h \leq k$. In particular, $x$ does not lie in a face of dimension less than $k-1$. Hence Condition (1) implies that $x$ lies in the interior of a $(k-1)$-- or $k$--dimensional face of $\Delta^{n-1}$, and that $z \in \Delta^{\binom n 2 - 1}$ lies in the interior of a $1$--dimensional face or a corner, respectively. We consider these cases separately:
\begin{enumerate}
\item[(a)] In the first case, there exists $i_0 \not\in\{i_1,\ldots,i_k\}$ such that $x_{i_0} > 0$ and $x_{i_0} + x_{i_1} + \ldots + x_{i_k} = 1$. Moreover, there exists $p<q$ such that $z_{pq} = 1$ and $z_{rs} = 0$ for all $(r,s) \neq (p,q)$. By condition (3), $i_0 \in \{p,q\}$ and likewise for all of the distinct indices $i_0,i_1,\ldots,i_k$. It follows that $k$ cannot be larger than one. Moreover, if $k = 1$, then $\{i_0,i_1\} = \{p,q\}$, so
\[S(x,z) = \la_{i_0 i_1} + \mu_{i_0} x_{i_0} + \mu_{i_1} x_{i_1}.\]
Since $x_{i_0}$ and $x_{i_1}$ are positive and sum to one, this quantity is bounded between $\la_{i_0 i_1} + \mu_{i_0}$ and $\la_{i_0 i_1} + \mu_{i_1}$, both of which are values of $S$ on $C$, so the claim follows in this case.
\item[(b)] In the second case, $x_{i_1} + \ldots + x_{i_k} = 1$ and there exist $p<q$ and $r<s$ such that $z_{pq} > 0$, $z_{rs} > 0$, and $z_{pq} + z_{rs} = 1$. By condition (3), $i_h \in \{p,q\} \cup \{r,s\}$ for all $h$, so clearly $k\leq 4$. In fact, if $k \geq 3$, then there exist $i_{h_1} \in \{p,q\}$ and $i_{h_2} \in \{r,s\}$, which implies
\[1 = \sum_{h=1}^k x_{i_h} > x_{i_{h_1}} + x_{i_{h_2}} = \sum_j z_{i_{h_1} j} + \sum_j z_{i_{h_2} j} \geq z_{pq} + z_{rs} = 1,\]
a contradiction.
This leaves the possibilities that $k=2$ and $k = 1$. First, suppose $k = 1$. It follows that $x_{i_1} = 1$ and that
\[S(x,z)=\la_{pq} z_{pq} + \la_{rs} z_{rs} + \mu_{i_1}.\]
Hence $S(x,z)$ is bounded between $\la_{pq}+\mu_{i_1}$ and $\la_{rs}+\mu_{i_1}$. Moreover,
\[1 = x_{i_1} = \sum_j z_{i_1 j},\]
so all $z_{ij}$ that do not appear in this sum are zero. In particular, $i_1 \in \{p,q\}$ and $i_1 \in \{r,s\}$, so the claim follows in this case.
This leaves the case with $k=2$. We start by showing that $i_1$ cannot be in both $\{p,q\}$ and $\{r,s\}$. Indeed, if it were, then Conditions (1) and (2) imply that
\[1 = x_{i_1} + x_{i_2} > x_{i_1} = \sum_j z_{i_1 j} \geq z_{pq} + z_{rs} = 1,\]
a contradiction. By a similar argument, $i_2$ cannot be in both sets. Condition (3) implies $i_1,i_2 \in \{p,q\} \cup \{r,s\}$. If $i_1$ and $i_2$ lie in different sets, say $i_1 \in \{p,q\}$ and $i_2 \in \{r,s\}$, then Condition (2) further implies that $x_{i_1} = z_{pq}$ and $x_{i_2} = z_{rs}$, hence
\[S(x,z) = \of{\la_{pq} + \mu_{i_1}} z_{pq}
+ \of{\la_{rs} + \mu_{i_2}} z_{rs},\]
so the claim follows in this case. Finally, if $i_1$ and $i_2$ lie in the same set, say $\{p,q\}$, then
\[S(x,z) = \la_{i_1 i_2} z_{i_1 i_2} + \la_{rs} z_{rs}
+ \mu_{i_1} x_{i_1} + \mu_{i_2} x_{i_2}.\]
Moreover, in this case, condition (2) implies $x_{i_1} = z_{pq} = x_{i_2}$, and condition (1) implies that $1 = x_{i_1} + x_{i_2} = 2z_{pq}$, hence all four variables are equal to $1/2$. This concludes the proof of the claim.
\end{enumerate}
This shows in all cases that the extremal values of $S : C \to \R$ are given as in the conclusion of the lemma. As established at the beginning of the proof, the same holds for $\mathcal S$.
\end{proof}
Regarding the proof of Lemma \ref{lem:MinimizingsecfHACK}, we note that the point $(x,z)$ with $x_1 = x_2 = z_{12} = z_{34} = \frac 1 2$ and all other entries zero lies in the set $C$. Moreover, since the $\la_{ij}$ and $\mu_i$ are arbitrary, we have provided the optimal solution to the optimization problem for the function $S : C \to \R$. On the other hand, $\mathcal S(Y,Z)$ actually equals $S(x,z)$ for some $(x,z) \in C_0$, where $C_0$ is a proper subset of $C$. Indeed, given the definitions of $x_i$ and $z_{ij}$ as in the proof, it is straightforward to check that
\begin{enumerate}
\item[(4)] $z_{ij} \leq x_i + x_j$ for all $i < j$.
\end{enumerate}
Note that the point $(x,z)$ with $x_1 = x_2 = z_{12} = z_{34} = \frac 1 2$ is not in the smaller set $C_0$. This suggests that Lemma \ref{lem:MinimizingsecfHACK} could be improved to state that the optimal values are of the form $\la_{ij} + \mu_i$ or $\la_{ij} + \mu_j$ with $i < j$. Since this is not needed for our applications, we do not pursue it here.
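Although the proof is purely algebraic, the conclusion of Lemma \ref{lem:MinimizingsecfHACK} is easy to test numerically. The following sketch (ours, not part of the argument; all names and the choice $n=4$ are for illustration only) samples random orthonormal pairs and checks that the sampled values of $\mathcal S$ never leave the interval spanned by the candidate extremal values.
\begin{verbatim}
import itertools
import numpy as np

# Sanity check of the lemma: sample orthonormal pairs (Y, Z), evaluate
# S(Y, Z), and verify the values stay between the smallest and largest
# candidate extremal values from the statement of the lemma.
rng = np.random.default_rng(0)
n = 4
mu = rng.normal(size=n)                                  # eigenvalues of L
lam = {(i, j): rng.normal() for i, j in itertools.combinations(range(n), 2)}
lam.update({(j, i): v for (i, j), v in list(lam.items())})

def S(Y, Z):
    wedge = sum(lam[(i, j)] * (Y[i] * Z[j] - Y[j] * Z[i]) ** 2
                for i, j in itertools.combinations(range(n), 2))
    return wedge + sum(mu[i] * Y[i] ** 2 for i in range(n))

cands = [lam[(i, j)] + mu[i] for i in range(n) for j in range(n) if i != j]
cands += [0.5 * (lam[(i, j)] + lam[(k, l)] + mu[i] + mu[j])
          for i, j, k, l in itertools.permutations(range(n), 4)]

vals = []
for _ in range(50000):
    Y = rng.normal(size=n); Y /= np.linalg.norm(Y)
    Z = rng.normal(size=n); Z -= Z.dot(Y) * Y; Z /= np.linalg.norm(Z)
    vals.append(S(Y, Z))

assert min(cands) <= min(vals) and max(vals) <= max(cands)
print(min(vals), max(vals))
\end{verbatim}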
\section{Averaging the density}\label{sec:Averaging}
In this section, we begin to establish the properties of positive weighted sectional curvature described in Section \ref{sec:Preliminaries}. Our first consideration is that, in studying manifolds with density and symmetry, a symmetry of the metric might not be a symmetry of the density. We prove in this section that this difficulty can be overcome in the compact case. At the end, we apply these ideas to study weighted curvature properties of homogeneous metrics.
\subsection{Preservation of weighted curvature bounds under averaging}
Fix a Riemannian manifold $(M,g)$ and a vector field $X$ on $M$. Let $G$ be a compact subgroup of the isometry group, and let $d\mu$ denote a unit-volume, bi-invariant measure on $G$. Define a new, $G$--invariant vector field $\bar{X}$ on $M$ as follows:
\[\bar{X}_p = \int_G \phi_*^{-1}(X_{\phi(p)}) d\mu,\]
where we identify the elements $\phi \in G$ with isometries $\phi\colon M\to M$. In the gradient case, where $X = \nabla f$, we similarly define $\bar f(p) = \int_G f(\phi(p)) d\mu$.
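To make the construction concrete, here is a toy computation (ours, not from the paper; the field $X$ and the group are chosen arbitrarily) that averages a vector field on $\R^2$ over the finite rotation group $C_4$, viewed as a subgroup of the isometry group of the flat plane, and then checks the defining invariance property $\phi_*^{-1}(\bar{X}_{\phi(p)}) = \bar{X}_p$.
\begin{verbatim}
import numpy as np

# G = C_4, the rotations of the plane by multiples of pi/2, with the
# uniform (unit-mass) measure; X is an arbitrary, non-invariant field.
def X(p):
    x, y = p
    return np.array([x * y + 1.0, x - y ** 2])

rots = [np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
        for t in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

def Xbar(p):
    # Xbar_p = (1/|G|) * sum_phi phi_*^{-1}(X_{phi(p)}); here phi_*^{-1} = R^T
    return sum(R.T @ X(R @ p) for R in rots) / len(rots)

p = np.array([0.3, -1.2])
for R in rots:
    assert np.allclose(R.T @ Xbar(R @ p), Xbar(p))  # G-invariance of Xbar
print("averaged field at p:", Xbar(p))
\end{verbatim}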
As a basic observation, note that, for a fixed vector field $V$ on $M$,
\begin{eqnarray*}
g\of{\bar{X}, V } &=& g\of{ \int_G \phi_*^{-1}(X)d\mu, V} = \int_G g\of{\phi_*^{-1}(X), V}d\mu, \\
D_V \of{\int_G g\of{\phi_*^{-1}(X), V}d\mu} &=& \int_G g\of{\nabla_V \phi_*^{-1}(X), V} d\mu + \int_G g\of{\phi_*^{-1}(X), \nabla_V V} d\mu.
\end{eqnarray*}
This follows from the fact that all of the functions involved are smooth, the linearity of the integral, and the fact that $G$, as a compact space, admits a finite partition of unity. Similar identities for passing integrals over $G$ past a derivative also hold for the same reasons. We will use these facts repeatedly below without further comment.
Now we claim the following:
\begin{lemma} \label{lem:LieDerivative}
With the notation above, for any vector field $X$ and any $V\in T_pM$,
\[ (L_{\bar{X}} g)(V,V) = \int_G (L_X g) ( \phi_*V, \phi_* V) d\mu \]
\end{lemma}
\begin{proof}
This follows from a straightforward calculation:
\begin{eqnarray*}
g\of{\nabla_V \bar X, V}
&=& D_Vg\of{\bar X, V} - g\of{\bar X, \nabla_V V}\\
&=& D_Vg\of{\int_G \phi_*^{-1}(X)d\mu, V} - g\of{\int_G \phi_*^{-1}(X)d\mu, \nabla_V V}\\
&=& \int_G D_Vg\of{\phi_*^{-1}(X), V}d\mu - \int_G g\of{\phi_*^{-1}(X), \nabla_V V}d\mu\\
&=& \int_G g\of{\nabla_V\of{\phi_*^{-1}(X)}, V} d\mu\\
&=& \int_G g\of{\nabla_{\phi_* V} X, \phi_*V} d\mu.
\end{eqnarray*}
\end{proof}
For a function we also have the following.
\begin{lemma} \label{lem:Hessian}
With the notation above, for any function $f$ and any $V \in T_pM$,
\begin{eqnarray*}
\nabla\bar f &=& \overline{\nabla f},\\
\Hess \bar f\,(V,V) &=& \int_G \Hess f ( \phi_*V, \phi_* V) d\mu.
\end{eqnarray*}
\end{lemma}
\begin{proof} First note that the second equation follows from the first combined with Lemma \ref{lem:LieDerivative} along with the fact that
\[ \Hess f = \frac 1 2 L_{\nabla f} g .\]
To prove the first equation, let $V$ be a vector field on $M$, and observe that
\[g\of{\nabla \bar f, V}
= D_V\of{\int_G f\circ \phi d\mu}
= \int_G df(\phi_* V) d\mu
= \int_G g\of{\phi_*^{-1}(\nabla f), V} d\mu
= g\of{\overline{\nabla f}, V}.\]
\end{proof}
Now we are ready to show that the weighted curvatures can be averaged over the compact group $G$. First we consider the $\infty$--cases.
\begin{lemma}\label{lem:averaging}
Given a triple $(M,g,X)$ and a compact subgroup $G$ of the isometry group, the weighted curvatures satisfy
\begin{eqnarray*}
\Ric_{\bar X}(U,V) &=& \int_G \Ric_X(\phi_*U, \phi_*V) d\mu, \\
\sec_{\bar X}^V(U) &=& \int_G \sec_X^{\phi_*V}(\phi_*U) d\mu,
\end{eqnarray*}
where $\bar X$ is the average of $X$. In particular, if $\sec_X \geq \la$, then $\sec_{\bar X} \geq \la$, where $\bar X$ is $G$--invariant.
\end{lemma}
\begin{remark} One can similarly draw conclusions about upper bounds and for the Bakry--Emery Ricci curvature. In addition, analogous statements hold for $\Ric_f$ and $\sec_f$. They follow immediately from Lemmas \ref{lem:Hessian} and \ref{lem:averaging}.\end{remark}
\begin{proof}
Using Lemma \ref{lem:LieDerivative}, we see that all we need to show is
\begin{eqnarray*}
\Ric(U,V) &=& \int_G \Ric(\phi_*U, \phi_*V) d\mu\\
\sec(U,V) &=& \int_G \sec(\phi_*U, \phi_*V) d\mu.
\end{eqnarray*}
But this just follows from the isometry invariance of the curvature as well as the fact that $d\mu$ has unit volume.
\end{proof}
For the strongly weighted curvatures, averaging the vector field $X$ causes some issues as the equation contains terms which are quadratic in $X$. In the gradient case we can overcome this by changing the form of the potential function. Given $m$, set $u = e^{-f/m}$; a simple calculation then shows that
\[ \Hess f - \frac{df \otimes df}{m} = -\frac{ m \Hess u }{u}. \]
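Indeed, differentiating $u = e^{-f/m}$ directly gives
\begin{eqnarray*}
\nabla u &=& -\frac{u}{m}\,\nabla f,\\
\Hess u &=& -\frac{u}{m}\,\Hess f + \frac{u}{m^2}\, df \otimes df,
\end{eqnarray*}
and multiplying the second identity by $-m/u$ recovers the displayed formula.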
So, we have
\begin{eqnarray*}
\Ric_f^m = \Ric -\frac{ m \Hess u }{u}
\end{eqnarray*}
and, choosing $m = -1$,
\begin{eqnarray*}
\overline{\sec}^V_f(U) = \sec(V,U)+ \frac{\Hess u}{u}(V,V).
\end{eqnarray*}
In these cases, it is natural to average the function $u$. Let $\widetilde{u}(p) = \int_G u(\phi(p)) d\mu$ and define $\widetilde{f} = -m \log(\widetilde{u})$. Then we have the following Lemma.
\begin{lemma} \label{lem:u-averaging}
Given a triple $(M,g,f)$ and a compact subgroup $G$ of the isometry group, the weighted curvatures satisfy
\begin{eqnarray*}
\widetilde{u} \Ric^m_{\widetilde f} (U,V)
&=& \int_G u \Ric^m_f(\phi_*U, \phi_*V) d\mu, \\
\widetilde{u} \overline{\sec}_{\widetilde f}^V(U)
&=& \int_G u \overline{\sec}_f^{\phi_*V}(\phi_*U) d\mu,
\end{eqnarray*}
where $\widetilde u$ is the average of $u = e^{-f/m}$ and $\widetilde f = -m \log(\widetilde u)$. In particular, if $\overline\sec_f \geq \la$, then $\overline\sec_{\widetilde f} \geq \la$, where $\widetilde f$ is $G$--invariant.
\end{lemma}
\begin{proof}
We will discuss the Ricci case and the sectional curvature case will follow from an analogous argument. We have
\begin{eqnarray*}
u \Ric_f^m(V,V) &=& u \Ric(V,V) - m \Hess u(V,V) \\
\int_G u \Ric_f^m(\phi_*V,\phi_*V) d\mu &=& \int_G \of{ u \Ric(\phi_*V,\phi_*V) - m \Hess u(\phi_*V,\phi_*V) } d\mu \\
&=& \widetilde{u} \Ric(V,V) - m \Hess \widetilde{u} (V,V) \\
&=& \widetilde{u} \Ric^m_{\widetilde f} (V,V)
\end{eqnarray*}
To see the final remark note that, if $\Ric_f^m \geq \lambda g$, then
\begin{eqnarray*}
\Ric^m_{\widetilde f} (U,V) &=& \frac { \int_G u \Ric^m_f(\phi_*U, \phi_*V) d\mu}{\widetilde u} \geq \frac{ \int_G \lambda u g(U,V) d\mu}{\widetilde{u}} = \lambda g(U,V),
\end{eqnarray*}
so $\Ric^m_{\widetilde f} \geq \la g$ as well. Similar arguments hold for upper bounds.
\end{proof}
We remind the reader that Lemmas \ref{lem:averaging} and \ref{lem:u-averaging} immediately imply Corollary \ref{cor:PWSCaveraging}: If $(M,g,X)$ has positive weighted sectional curvature and $G$ is a compact subgroup of the isometry group of $(M,g)$, then there exists a $G$--invariant vector field $\tilde X$ such that $(M,g,\tilde X)$ has positive weighted sectional curvature. Indeed, if $\sec_X > 0$, then one can replace $X$ by its average $\bar X$ over $G$. If $X = \nabla f$ and $\overline\sec_f > 0$, then one can replace $X$ by $\tilde X = \nabla \tilde f$, where $\tilde f = \log(\tilde u)$ and where $\tilde u$ is the average of $u=e^f$ over $G$.
\begin{remark} Note that Lemma \ref{lem:u-averaging} does not clearly extend to the non-gradient case, since there is no globally defined function $u$ to average. We can still average over $X$, but only one side of the curvature bound is preserved. To see this note that the strongly weighted curvatures satisfy
\[\overline{\sec}_{\bar X}^V(U) = \int_G \overline{\sec}_X^{\phi_*V}(\phi_*U) d\mu + \of{\int_G g\of{X, \phi_* V} d \mu}^2 - \int_G g\of{X, \phi_*V}^2 d \mu.\]
In particular, by the Cauchy--Schwarz inequality,
\[\overline\sec_{\bar X}^V(U) \leq
\int_G \overline\sec_X^{\phi_*V}(\phi_*U) d\mu,\]
so upper bounds on strongly weighted curvatures are preserved by averaging the density. Similar statements hold in the gradient case. For the $m$--Bakry--Emery curvature, we similarly have
\[\Ric^m_{\bar X}(V,V) = \int_G \Ric_X^m(\phi_*V, \phi_*V)d\mu - \frac{1}{m}\ofsq{\of{\int_Gg\of{X, \phi_*V}d\mu}^2 - \int_Gg\of{X,\phi_*V}^2d\mu}.\]
\end{remark}
\subsection{Homogeneous metrics}
Now we apply the averaging of the density to the special case of homogeneous metrics. Homogeneous Riemannian manifolds with positive sectional curvature were classified by Wallach \cite{Wallach72} and B\'erard-Bergery \cite{Berard-Bergery76}. By averaging the density, we show here that there are no additional examples in the weighted case when $X = \nabla f$.
\begin{proposition} \label{pro:CompactHomogeneous}
Let $(M,g)$ be a compact homogeneous manifold and let $f\in C^\infty(M)$.
\begin{enumerate}
\item If $\Ric_f \geq \lambda g$ or $\Ric_f^m \geq \lambda g$, then $\Ric \geq \lambda g$.
\item If $\sec_f \geq \lambda$ or $\overline{\sec}_f \geq \lambda$, then $\sec \geq \lambda$.
\end{enumerate}
Analogous results hold for upper bounds.
\end{proposition}
This proposition immediately implies Theorem \ref{thm:PWSChomogeneous} from the introduction. Indeed, if $(M,g)$ admits a gradient field $X = \nabla f$ with positive weighted sectional curvature, then $\overline\sec_f > 0$ and hence this proposition applies.
\begin{proof}
Let $G$ be the isometry group of $(M,g)$. In all cases, we can replace $f$ by a $G$--invariant function $\tilde f$ such that the $\tilde f$--weighted curvatures have the same lower bounds as the $f$--weighted curvatures. Since $G$ acts transitively, $\tilde f$ is constant, so the $\tilde f$--weighted curvatures are equal to the usual, unweighted curvatures.
\end{proof}
It is not clear whether this fact is also true when the field $X$ is not gradient. Averaging the field so that it is invariant under the isometries will not necessarily make the field Killing, but there is one important case where it does.
\begin{proposition}
If a compact Lie group with a bi-invariant metric admits a vector field $X$ such that $\sec_X \geq \la$ or $\Ric_X \geq \la g$, then $\sec\geq \la$ or $\Ric\geq \la g$, respectively.
\end{proposition}
\begin{proof}
We can replace $X$ by its average over the left and right actions of $G$. This preserves the lower bounds on curvature, and it makes $X$ bi-invariant and hence a Killing field. Hence $L_X g = 0$, so the weighted curvatures equal the unweighted curvatures.
\end{proof}
In particular, the previous two propositions have the following corollary.
\begin{corollary} A compact Lie group with a bi-invariant metric has positive weighted sectional curvature if and only if it has positive sectional curvature. \end{corollary}
In the simplest non-trivial case of a left-invariant metric that is not bi-invariant, a computation shows that we again do not get new examples.
\begin{proposition} If a left invariant metric on the Lie group $\SU(2)$ supports a vector field $X$ such that $\sec_X \geq \la$ or $\Ric_X \geq \la g$, then $\sec\geq \la$ or $\Ric\geq \la g$, respectively.
\end{proposition}
\begin{proof}
For a left invariant metric on $\SU(2)$, choose an orthonormal frame
\[ \lambda_1^{-1} X_1, \lambda_2^{-1} X_2, \lambda_3^{-1} X_3 \]
such that $[X_i, X_{i+1}] = 2 X_{i+2}$ with indices taken mod 3. It follows that
\begin{eqnarray*}
\nabla_{X_i} X_i
&=& 0 \\
\nabla_{X_i} X_{i+1}
&=& \left(\frac{ \lambda_{i+2}^2 + \lambda_{i+1}^2 - \lambda_i ^2}{\lambda_{i+2}^2} \right) X_{i+2}\\
\nabla_{X_{i+1}} X_i
&=& \left(\frac{- \lambda_{i+2}^2 + \lambda_{i+1}^2 - \lambda_i ^2}{\lambda_{i+2}^2} \right) X_{i+2}
\end{eqnarray*}
Now since $\SU(2)$ is compact, we can assume by averaging that $X$ is a left-invariant vector field, which we will write as
\[ X = a_1 X_1 + a_2 X_2 + a_3 X_3 \]
for constants $a_i$. We have
\begin{eqnarray*}
(L_Xg) (X_i, X_i) &=& 2 g(\nabla_{X_i} X, X_i) = 0\\
(L_Xg) (X_i, X_{i+1}) &=& g(\nabla_{X_i} X, X_{i+1}) + g(\nabla_{X_{i+1}} X, X_{i}) \\
&=& 2 a_{i+2} \left( \lambda_i^2 - \lambda_{i+1}^2 \right)
\end{eqnarray*}
This shows that $X$ is not a Killing field in general. However, $\sec_X(X_i, X_j) = \sec(X_i, X_j)$, so if $\sec_X \geq \lambda$ then $\sec(X_i, X_j) \geq \lambda$. Further computation also shows that the basis $X_1 \wedge X_2$, $X_2 \wedge X_3$, $X_3 \wedge X_1$ diagonalizes the curvature operator, and thus all of the sectional curvatures are bounded between the minimum and maximum of the sectional curvatures of the planes spanned by $X_1, X_2,$ and $X_3$. Thus we actually have $\sec \geq \lambda$. The basis $X_1, X_2, X_3$ also diagonalizes the Ricci tensor, so the statement about Ricci curvatures follows similarly.
\end{proof}
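The connection formulas used above can also be checked symbolically. The following sketch (ours, not part of the proof) recomputes $\nabla_{X_i}X_{i+1}$ from the Koszul formula and the structure constants $[X_i,X_{i+1}] = 2X_{i+2}$, and compares the result with the displayed expression.
\begin{verbatim}
import sympy as sp

# Left-invariant metric on SU(2): orthogonal frame X_1, X_2, X_3 with
# [X_i, X_{i+1}] = 2 X_{i+2} (indices mod 3), g = diag(l1^2, l2^2, l3^2).
l = sp.symbols('lambda1:4', positive=True)
g = sp.diag(l[0]**2, l[1]**2, l[2]**2)

def bracket(i, j):
    # [X_i, X_j] as a coefficient vector in the frame X_1, X_2, X_3
    v = [0, 0, 0]
    if (j - i) % 3 == 1:
        v[(i + 2) % 3] = 2
    elif (i - j) % 3 == 1:
        v[(j + 2) % 3] = -2
    return sp.Matrix(v)

def nabla(i, j):
    # nabla_{X_i} X_j from the Koszul formula; the inner products of the
    # frame fields are constant, so only the bracket terms survive
    coeffs = []
    for k in range(3):
        val = sp.Rational(1, 2) * ((g * bracket(i, j))[k]
                                   - (g.row(i) * bracket(j, k))[0]
                                   + (g.row(j) * bracket(k, i))[0])
        coeffs.append(sp.simplify(val / g[k, k]))
    return sp.Matrix(coeffs)

for i in range(3):
    expected = (l[(i+2) % 3]**2 + l[(i+1) % 3]**2 - l[i]**2) / l[(i+2) % 3]**2
    assert sp.simplify(nabla(i, (i+1) % 3)[(i+2) % 3] - expected) == 0
    assert nabla(i, i) == sp.zeros(3, 1)
print("connection formulas verified")
\end{verbatim}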
In general, Proposition \ref{pro:CompactHomogeneous} does not hold in the non-compact case, as we have already seen in Example \ref{Ex:Gaussian}. We can generalize the Gaussian example in the following simple way:
\begin{example} \label{Ex:GaussianGeneralization}
Suppose that $(M,g)$ is a simply connected space of non-positive sectional curvature. The distance function to a point squared, $d^2$, is a smooth function. Moreover,
$\Hess (d^2) \geq 2 g$. Therefore, if $(M,g)$ has sectional curvature bounded from below by $-K$, then, for $f = A d^2$, we have $\sec_f \geq 2A-K$, which we can make arbitrarily large.
\end{example}
Letting $(M,g)$ in the example be a hyperbolic space gives a noncompact homogeneous manifold with positive weighted sectional curvature and negative sectional curvature.
We also note that there are many examples of non-compact homogeneous Ricci soliton metrics (i.e., metrics with $\Ric_X = \lambda g$) which do not have $\Ric = \lambda g$. Examples of homogeneous metrics with $\Ric_f^m = \lambda g$ which do not have $\Ric = \lambda g$ are also constructed in \cite{HePetersenWylie-pre}.
\section{Riemannian submersions and Cheeger deformations}\label{sec:Submersions}
We analyze the behavior of the weighted and strongly weighted directional curvature operators under a Riemannian submersion $\pi:M \to B$. For this, we restrict to vector fields $X$ on $M$ for which the vector field $\pi_*(X)$ on $B$ is well defined. Following Besse \cite[Chapter 9]{Besse-Einstein}, let $R$, $\hat{R}$, and $\check{R}$ denote the curvature tensors of $M$, the fibers, and the base, respectively, and let $\mathcal V$ and $\mathcal H$ denote the projection maps onto the vertical and horizontal spaces, respectively.
\begin{theorem}[O'Neill formulas]\label{thm:submersions}
Let $(M,g)$ be a closed Riemannian manifold, let $\pi$ be a Riemannian submersion with domain $M$, and let $X$ be a smooth vector field on $M$ such that the map $p \mapsto \pi_*(X_p)$ is constant along the fibers of $\pi$. If $Y$ and $Z$ are horizontal vector fields and $U$ and $V$ are vertical vector fields on $M$, then
\begin{eqnarray*}
R_X^V(U,U) &=& \hat R_{\mathcal V X}^V(U,U) + g\of{T_U V,T_U V} - g\of{T_U U, T_V V} - g\of{T_V V, \mathcal H X}g(U,U),\\
R_X^Z(Y,Y) &=& \check R_{\pi_* X}^{\pi_*Z}(\pi_*Y, \pi_*Y) - 3g(A_Y Z, A_Y Z),
\end{eqnarray*}
and likewise with $R_X$, $\hat R_{\mathcal V X}$ and $\check R_{\pi_* X}$ replaced by the strongly weighted directional curvature operators on $M$, the fibers, and the base, respectively.
In particular, if $(Y, Z)$ is an orthonormal pair of horizontal vector fields, then
\[\sec_{\pi_*X}^{\pi_* Y}(\pi_* Z)
= \sec_X^Y(Z) + \frac{3}{4}\left|[Y,Z]^{\mathcal V}\right|^2\]
and likewise for $\overline\sec_X$.
\end{theorem}
We remark that analogous statements hold in the gradient case. There, one assumes that $f$ is a smooth function on $M$ that is constant along the fibers of $\pi$. The function $f$ replaces $X$ in the above formulas, and the induced map $\bar f$ on the base replaces $\pi_*X$. The gradient case follows from the general case since $d\bar f$ and $\Hess \bar f$ pull back via $\pi$ to $df$ and $\Hess f$, respectively.
We also remark that, as with sectional curvature, the base of a Riemannian submersion inherits lower bounds on weighted or strongly weighted sectional curvatures. In particular, if the total space admits a vector field $X$ with positive weighted sectional curvature such that $X$ descends to a well defined vector field on the base, then the base too has positive weighted sectional curvature (see Corollary \ref{cor:PWSCSubmersion}).
Finally, we remark that the vector field $X$ is arbitrary and hence need not be horizontal or vertical. For example, suppose $\pi$ is the quotient map by a free, isometric group action. The vector field $X$ might be an action field (hence vertical), basic (hence horizontal), or any smooth combination of the two (hence neither).
\begin{proof}
Let $\hat g$ and $\check g$ denote the metrics on the fibers and the base, respectively. First, the conclusions in the strongly weighted cases follow immediately from the weighted cases since
\begin{eqnarray*}
g\of{X, V}^2 g\of{U,U}
&=& \hat g(\mathcal V X, V)^2 \hat g(U,U),\\
g\of{X, Z}^2 g\of{Y,Y}
&=& \check g(\pi_* X, \pi_*Z)^2 \check g(\pi_*Y,\pi_*Y).
\end{eqnarray*}
Second, the weighted cases follow from the unweighted case once we establish that
\begin{eqnarray*}
\frac 1 2 (L_X g)(V,V)g(U,U)
&=& \frac 1 2 (L_{\mathcal V X} \hat g)(V,V) \hat g(U,U)
- g\of{T_V V, \mathcal H X}g\of{U,U},\\
\frac 1 2 (L_X g)(Z,Z)g(Y,Y)
&=& \frac 1 2 (L_{\pi_*X} \check g)(\pi_*Z,\pi_*Z)\check g(\pi_*Y,\pi_*Y).
\end{eqnarray*}
Indeed these follow from the fact that $U$ is vertical, the fact that $Y$ is horizontal, and the observations that
\begin{eqnarray*}
\frac 1 2 (L_X g)(V,V)
= g\of{\nabla_V X, V}
&=& g\of{\nabla_V(\mathcal V X), V}
+ g\of{\nabla_V(\mathcal H X), V}\\
&=& \hat g\of{\hat \nabla_V(\mathcal V X), V}
- g\of{\nabla_V V, \mathcal H X}\\
&=& \frac 1 2 (L_{\mathcal V X} \hat g)(V,V)
- g\of{T_V V, \mathcal H X}
\end{eqnarray*}
and
\[ \frac 1 2 (L_X g)(Z,Z)
= g\of{\nabla_Z X, Z}
= \check g(\nabla_{\pi_* Z} \pi_*X, \pi_*Z)\\
= \frac 1 2 (L_{\pi_*X} \check g)(\pi_*Z, \pi_*Z).\]
\end{proof}
Regarding the O'Neill formulas for mixed inputs (vertical and horizontal), we remark that one simply obtains weighted versions by adding the appropriate terms from the definition of $R_X$ and $\overline R_X$. The formulas do not simplify as in Theorem \ref{thm:submersions}, but one can still use them. To illustrate this with one easy example, we generalize here a result of Weinstein \cite[Theorem 6.1]{Weinstein80} to the case of positive weighted sectional curvature (cf. Florit and Ziller \cite{FloritZiller11} and Chen \cite{Chen14-thesis}).
\begin{theorem}[Weinstein]\label{thm:WeinsteinFloritZiller}
Let $\pi:M\to B$ be a Riemannian submersion of closed Riemannian manifolds with totally geodesic fibers. If there exists a function $f \in C^\infty(M)$ such that $\overline\sec_f > 0$ on all orthonormal pairs of vectors spanning ``vertizontal'' planes,
then the fiber dimension is at most $\rho(\dim B)$, where $\rho(n)$ denotes the maximum number of linearly independent vector fields on $\mathbb{S}^{n-1}$. \end{theorem}
Note that this reduces to Weinstein's result when $f = 0$. Recall that $f \in C^\infty(M)$ is basic if it is constant along the fibers of $\pi$.
\begin{proof}
Since the fibers are totally geodesic, the $T$ tensor vanishes. Hence, for any orthonormal pair $(V,Z)$, where $V$ is vertical and $Z$ is horizontal, the O'Neill formula
\[R(Z,V,V,Z) = |A_Z V|^2 - |T_V Z|^2 + g\of{(\nabla_Z T)_V V, Z}\]
implies
\[\overline\sec_f^V(Z) = |A_Z V|^2 + \Hess f(V,V) + df(V)^2.\]
At a point $p \in M$ where $f$ is maximized, $df(V) = 0$ and $\Hess f(V,V) \leq 0$ for all $V$. Hence, $A_Z V \neq 0$ for all vertizontal pairs $(V,Z)$ at $p$. The proof now proceeds as in \cite{Weinstein80} by constructing $\dim(\mathcal V_p)$ linearly independent vectors on the unit sphere in $\mathcal H_p$, where $\mathcal V_p$ and $\mathcal H_p$ are the vertical and horizontal spaces at $p$, respectively.
\end{proof}
Theorem \ref{thm:WeinsteinFloritZiller} relates to a conjecture of Fred Wilhelm, namely, that $\dim(F) < \dim(B)$ for any Riemannian submersion from a manifold $M$ with positive sectional curvature, where $\dim(F)$ and $\dim(B)$ denote the dimensions of the fibers and the base, respectively. If one only assumes $\sec > 0$ almost everywhere on $M$, then there are counterexamples due to Kerin \cite{Kerin11}. On the other hand, the above result suggests that the assumption of positive sectional curvature might be weakened to cover manifolds with density. For example, Frankel's theorem (Theorem \ref{thm:Frankel}) in the weighted case implies the following: if $M$ admits a vertical vector field $X$ such that $M$ has positive weighted sectional curvature, then the conclusion of Wilhelm's conjecture holds.
As a second application, we discuss Cheeger deformations. These have been used in multiple constructions of metrics with positive or non-negative sectional curvature (see Ziller \cite{Ziller07} for a survey). Here, we establish the weighted curvature formulas for the deformed metric in terms of the original. We will use the formulas from this section in the proof of Theorem \ref{thm:IntroConnectedness}.
The setup involves a Riemannian manifold $(M,g)$, a subgroup $G$ of the isometry group, a bi-invariant metric $Q$ on $G$, and a real parameter $\lambda > 0$. We are interested in understanding how the weighted curvatures behave under these perturbations. Hence we also fix a smooth vector field $X$ on $M$. We assume that $X$ is $G$--invariant, which can be arranged if the subgroup $G$ is compact, e.g., if $G$ is closed and $M$ is compact.
The new metric on $M$ is denoted by $g_\lambda$. It is the metric for which the map
\[\pi\colon(G \times M, \lambda Q + g) \to (M, g_\lambda)\]
given by $(h,p) \mapsto h^{-1}p$ is a Riemannian submersion.
There is a $(\lambda Q+g)$--orthogonal decomposition of $T_{(e,p)}(G\times M)$ as
\[ \{(Y, Y_p^*) \st Y \in \mathfrak g\}
\oplus \left\{\of{- |Y^*_p|^2 Y, \la|Y|^2 Y^*_p} \st Y \in \mathfrak g\right\}
\oplus \{(0, Z) \st Z \in T_p(G\cdot p)^\perp\}.\]
Here, and throughout, $\mathfrak g = T_eG$ denotes the Lie algebra of $G$, and $Y^*$ denotes the Killing field associated to $Y \in \mathfrak{g}$. The first of these summands is the vertical space $\mathcal V_{(p,e)} = \ker(D\pi_{(e,p)})$ of the projection $\pi$. The last two summands together form the horizontal space $\mathcal H_{(e,p)} = \mathcal V_{(e,p)}^\perp$.
The horizontal lift of $Y^*\in T_p(G\cdot p) \subseteq T_p M$ is
\[\frac{1}{|Y^*_p|^2 + \la |Y|^2}\of{- |Y^*_p|^2 Y, \la|Y|^2 Y^*_p},\]
and the horizontal lift of $Z \in T_p(G\cdot p)^\perp \subseteq T_p M$ is $(0,Z)$. Note that $|Z|_{g_\la} = |Z|_g$, while
\[|Y^*|_{g_\la}^2 = \frac{\la |Y|^2 |Y^*|^2}{|Y^*|^2 + \la |Y|^2}.\]
As $\la \to \infty$, $|Y^*|_{g_\la}$ increases and converges to $|Y^*|_g$, hence $|Y^*|_{g_\la} \leq |Y^*|_g$. We will use this in the proof of the connectedness lemma.
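To verify the last claim, write $a = |Y|^2$ and $b = |Y^*|^2$; then
\[\frac{d}{d\la}\of{\frac{\la a b}{b + \la a}} = \frac{a b^2}{\of{b + \la a}^2} \geq 0
\qquad\mathrm{and}\qquad
\lim_{\la \to \infty}\frac{\la a b}{b + \la a} = b,\]
which gives both the monotonicity in $\la$ and the bound $|Y^*|_{g_\la} \leq |Y^*|_g$.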
Our goal now is to compute the weighted and strongly weighted directional curvature operators of $(M,g_\lambda, X)$ in terms of those of $(M, g, X)$.
\begin{lemma}[Curvature tensors after Cheeger deformations] \label{Lemma:CheegerDeformation}
Let $R = R^g$ and $R^{g_\la}$ denote the curvature tensors of $(M, g)$ and $(M, g_\la)$, respectively. For vector fields $W_i$ on $M$, if $\tilde W_i = (\tilde W_i^G, \tilde W_i^M)$ denote the horizontal lifts in $G \times M$, then
\begin{eqnarray*}
g_\la\of{(R^{g_\la})_X^{W_1}(W_2), W_3}
&=& \la Q\of{(R^Q)^{\tilde W_1^G}(\tilde W_2^G), \tilde W_3^G}\\
&~& + g\of{(R^g)_X^{\tilde W_1^M}(\tilde W_2^M), \tilde W_3^M}\\
&~& + (\la Q + g)\of{A_{\tilde W_1} \tilde W_2,
A_{\tilde W_1} \tilde W_2}.
\end{eqnarray*}
In particular, if $Z_1$ and $Z_2$ are vector fields in $M$ that are everywhere orthogonal to the $G$--orbits, then
\[
g_\la\of{(R^{g_\la})_X^{Z_1}(Z_2), Z_2}
\geq g\of{ (R^g)_X^{Z_1}(Z_2), Z_2}.
\]
If, in addition, $(Z_1,Z_2)$ forms an orthonormal pair with respect to $g$ (equivalently with respect to $g_\la$), then
\[(\sec^{g_\la})_X^{Z_1}(Z_2) \geq (\sec^g)_X^{Z_1}(Z_2).\]
\end{lemma}
\begin{proof}
Consider the vector field $(0,X)$ on $G \times M$. It is $G$--invariant and $\pi_*(0,X) = X$, where $\pi:(G\times M, \la Q + g) \to (M,g_\la)$ is the Riemannian submersion defining $g_\la$. The first claim follows directly from the (first) O'Neill formula in the weighted case applied to $\pi$. The second and third claims follow from the fact that the horizontal lift of $Z \in T_p(G \cdot p)^\perp$ is $(0,Z) \in T(G\times M)$.
\end{proof}
\section{Weinstein's fixed point theorem and applications}\label{sec:Weinstein}
In the next two sections, we demonstrate how Synge-type arguments extend to the case of positive weighted sectional curvature. The only technical ingredient required is Lemma \ref{lem:SecondVariation}. We first prove Weinstein's fixed point theorem in the weighted case:
\begin{theorem}[Weinstein's fixed point theorem]
Let $(M^n,g)$ be a closed, orientable Riemannian manifold equipped with a vector field $X$ such that $(M,g,X)$ has positive weighted sectional curvature. If $F$ is an isometry of $M$ with no fixed point, then $F$ reverses orientation if $n$ is even and preserves it if $n$ is odd.
\end{theorem}
\begin{proof}
Corollary \ref{cor:PWSCaveraging} implies that we may assume without loss of generality that $X$ is invariant under isometries. In particular, $F_*(X) = X$.
The proof now proceeds as in Weinstein \cite{Weinstein68}. Using compactness, choose $p \in M$ such that $d(p,F(p))$ is minimal. Choose a unit-speed, minimizing geodesic $\gamma:[a,b] \to M$ from $p$ to $F(p)$. As in \cite{Weinstein68}, there exists a special unit-length, parallel vector field $V$ along $\gamma$,
and it suffices to show that the index form $I(V,V)$ of $\gamma$ is negative. One of the properties of $\gamma$ is that $F_*(\gamma'(a)) = \gamma'(b)$. By Lemma \ref{lem:SecondVariation}, it suffices to show that
\[\left.g\of{\gamma'(t), X_{\gamma(t)}}\right|_{t=a}^{t=b}
= \inner{\gamma'(b), X_{\gamma(b)}}
- \inner{\gamma'(a), X_{\gamma(a)}} = 0.\]
Indeed, this is the case since $F$ carries $\gamma'(a)$ to $\gamma'(b)$ and $X_{\gamma(a)}$ to $X_{F(\gamma(a))} = X_{\gamma(b)}$.
\end{proof}
We derive three corollaries of Weinstein's theorem, all of which are analogues of what happens in the unweighted case. The first is the textbook application of Weinstein's theorem to prove Synge's theorem.
\begin{corollary}[Synge's theorem]
If $(M^n, g, X)$ is closed and has positive weighted sectional curvature, then
\begin{itemize}
\item If $n$ is odd, then $M$ is orientable.
\item If $n$ is even and $M$ is orientable, then $\pi_1(M)$ is trivial.
\end{itemize}
\end{corollary}
This is proved in \cite{Wylie-pre}, but we indicate another proof based on Weinstein's theorem. Depending on whether $n$ is odd or even, one applies Weinstein's theorem in the weighted case to the free action of $\Z_2$ or $\pi_1(M)$, respectively, on the orientation cover or the universal cover of $M$, equipped with the pullback metric and vector field (or function). For this, it is important that $\pi_1(M)$ is finite (see Theorem \ref{thm:pi1finite}).
Weinstein's theorem, together with O'Neill's formula, also provides another proof of Berger's result (see \cite{Berger66,GroveSearle94}):
\begin{corollary}[Berger's theorem]\label{cor:Berger}
If $(M^n,g,X)$ is closed and has positive weighted sectional curvature, then the following hold:
\begin{itemize}
\item If $n$ is even, then any Killing field has a zero. Equivalently, any isometric torus action has a fixed point.
\item If $n$ is odd, any torus acting isometrically on $M$ has a circle orbit. In particular, there exists a codimension one subtorus that has a fixed point.
\end{itemize}
\end{corollary}
We remark that the even-dimensional case is also proved in \cite{Wylie-pre}.
\begin{proof}
The equivalence of the conclusions about Killing fields and torus actions is based on the fact that the isometry group of $M$ is a compact Lie group. Consider an isometric action on $M$ by a torus $T$. Without loss of generality, we assume that $X$ is invariant under the action of $T$. The conclusion follows by choosing $F \in T$ that generates a dense subgroup of $T$ and applying Weinstein's theorem to $F$.
The odd-dimensional case follows from the even-dimensional case and the O'Neill formula, as proved in Grove and Searle \cite{GroveSearle94}. Since the even-dimensional case and O'Neill's formula hold in the weighted case, the proof is complete.
\end{proof}
Finally, it was observed in \cite{Kennard3} that Weinstein's theorem pairs nicely with a result of Davis and Weinberger to provide an obstruction to free group actions on positively curved rational homology spheres of dimension $4k+1$:
\begin{theorem}[Davis--Weinberger factorization]
Let $(M^{4k+1},g,X)$ be closed with positive weighted sectional curvature. If the universal cover of $M$ is a rational homology sphere, then $\pi_1(M) \cong \Z_{2^e} \times \Gamma$ for some odd-order group $\Gamma$.
\end{theorem}
\begin{proof}
Since $\pi_1(M)$ is finite (see Theorem \ref{thm:pi1finite}), we may consider the free action of $\pi_1(M)$ on the universal cover of $M$, which is a compact, simply connected manifold with the same weighted curvature bound as $M$. By Weinstein's theorem in the weighted case, the action of $\pi_1(M)$ is (rationally) homologically trivial. Since $\dim(M) \equiv 1 \bmod{4}$ and the surgery semicharacteristic $\sum_{i \leq 2k} (-1)^i \dim H^i(M;\Q)$ is odd, the factorization of $\pi_1(M)$ follows from Theorem D in \cite{Davis83}.
\end{proof}
\section{Frankel's theorem and Wilking's connectedness lemma}\label{sec:Frankel}
In this section, we prove generalizations of Frankel's theorem and Wilking's connectedness lemma in the weighted case. Specifically, we assume throughout this section that $(M^n,g,X)$ is a Riemannian manifold equipped with a vector field $X$ such that $(M,g,X)$ has positive weighted sectional curvature.
\begin{theorem}[Frankel]\label{thm:Frankel}
Assume $(M^n,g,X)$ is closed with positive weighted sectional curvature. Assume $N_1$ and $N_2$ are closed, totally geodesic submanifolds of $M$ such that $X$ is tangent to $N_i$ for $i\in\{1,2\}$. If $\dim(N_1) + \dim(N_2) \geq n$, then $N_1$ and $N_2$ intersect.
\end{theorem}
Before proving this, we record an easy corollary that we will use in the next section.
\begin{corollary}\label{cor:FrankelGroupAction}
Let $(M^n,g,X)$ be closed with positive weighted sectional curvature. Suppose $G_1$ and $G_2$ are subgroups of the isometry group of $M$, and suppose that $N_1$ and $N_2$ are components of the fixed-point sets of $G_1$ and $G_2$, respectively. If $\dim(N_1) + \dim(N_2) \geq n$, then the submanifolds intersect.
\end{corollary}
To deduce the corollary, one replaces $X$ by $\tilde X$ such that $\tilde X$ is invariant under isometries of $(M,g)$ and $(M,g,\tilde X)$ has positive weighted sectional curvature (see Corollary \ref{cor:PWSCaveraging}). For $p \in N_1$, it follows that $X_p \in (T_p M)^{G_1} = T_p N_1$, hence $X$ is tangent to $N_1$ and likewise for $N_2$. The corollary follows since the $N_i$ are closed and totally geodesic.
\begin{remark}\label{FrankelFailsNoncompact}
Note that both Theorem \ref{thm:Frankel} and the corollary fail if we remove the assumption that $M$ is compact. Indeed, consider the flat metric on Euclidean space, and let $f = \frac 1 2 d^2$, where $d$ is the distance to a fixed point in $M$. Clearly $\sec_f^V(U) = \Hess f(V,V) = 1$ for all orthonormal pairs $(U,V)$, yet any two parallel hyperplanes are disjoint, closed, totally geodesic, and have dimensions adding up to at least $\dim M$.
In fact, $N_1$ and $N_2$ are fixed-point components of reflection subgroups $G_1 \cong \Z_2$ and $G_2 \cong \Z_2$ of the isometry group. However, the subgroup generated by $G_1$ and $G_2$ is infinite, so we cannot replace $X$ by a $G_1$-- and $G_2$--invariant vector field as in Corollary \ref{cor:PWSCaveraging} and proceed as in the proof of the corollary.
\end{remark}
\begin{proof}[Proof of Frankel's theorem]
Let $M$, $N_1$, $N_2$, and $X$ be as in the theorem. We proceed now as in Frankel \cite{Frankel61}. By compactness, there is a minimizing geodesic $\gamma:[a,b] \to M$ connecting $N_1$ to $N_2$. By the first variation formula, $\gamma$ is normal to $N_1$ and $N_2$ at its endpoints. Since $X$ is tangent to $N_1$ and $N_2$,
\[g\of{\gamma'(b),X_{\gamma(b)}}
=g\of{\gamma'(a),X_{\gamma(a)}} = 0.\]
Using Lemma \ref{lem:SecondVariation}, the rest of the proof is as in the unweighted case.
\end{proof}
Wilking proved a vast generalization of Frankel's result (see \cite[Theorem 2.1]{Wilking03}). The generalization to the weighted case is the following:
\begin{theorem}[Wilking's connectedness lemma]\label{thm:Connectedness}
Let $(M^n,g,X)$ be closed with positive weighted sectional curvature.
\begin{enumerate}
\item If $X$ is tangent to $N^{n-k}$, a closed, totally geodesic, embedded submanifold of $M$, then the inclusion $N \to M$ is $(n-2k+1)$--connected.
\item If $X$ and $N^{n-k}$ are as above, and if $G$ acts isometrically on $M$, fixes $N$ pointwise, and has principal orbits of dimension $\delta$, then the inclusion $N \to M$ is $(n-2k+1+\delta)$--connected.
\item If $X$ is tangent to $N_1^{n-k_1}$ and $N_2^{n-k_2}$, a pair of closed, totally geodesic, embedded submanifolds with $k_1 \leq k_2$, then $N_1 \cap N_2 \to N_2$ is $(n-k_1-k_2)$--connected.
\end{enumerate}
\end{theorem}
As in the corollary to Frankel's theorem, this result applies to inclusions of fixed-point components of isometric group actions.
\begin{proof}
The proof in each case proceeds as in Wilking \cite[Theorem 2.1]{Wilking03}, where the result is reduced to an index estimate. In the first and third cases, this estimate involves parallel vector fields and hence extends to the weighted case exactly as in the proof of Frankel's theorem above in the weighted case.
In the remaining case, the index estimate is a bit more involved, so we repeat it here, modifying it as necessary to cover the weighted case. The setup in \cite{Wilking03} is as follows: The metric $g_\la$ on $M$ is a Cheeger deformation of $g$, there is a geodesic $c:[a,b] \to M$ that starts and ends perpendicular to $N$, and there is a $(n-2k+1+\delta)$--dimensional vector space $W$ of vector fields $V$ along $c$ such that
\begin{itemize}
\item $V$ is tangent to $N$ at the endpoints of $c$,
\item $V$ is orthogonal to the $G$--orbits at all points along $c$, and
\item $V'=\nabla_{c'}V$ is tangent to the $G$--orbits at all points.
\end{itemize}
By the argument in \cite{Wilking03}, it suffices to show that, for all $V \in W$, there exists $\la > 0$ such that the index form with respect to $g_\la$ of $c$ evaluated on $V$ is negative. We show this first under the assumption that $\sec_X > 0$ on $M$.
By Equation \ref{eqn:IndexForm1}, the index form can be written as
\[\int_a^b \of{|V'|_{g_\la}^2 - (R^{g_\la})_X^{c'}(V,V) - 2g_\la(c',X) g_\la(V,V')} dt + \left.g_\la(c', X)|V|_{g_\la}^2\right|_{t=a}^{t=b}.\]
First, we show that the last term in this expression is zero. Without loss of generality, we may assume that $X$ is $G$--invariant and hence tangent to $N$. Since the $G$--orbits in $N$ are trivial, $X$ is orthogonal to the orbits. Hence the horizontal lift of $X_{c(t)}$ at $t\in\{a,b\}$ is $(0,X_{c(t)})$, and
\[\left.g_\la(c',X) \right|_{t=a}^{t=b}
= \left.g(c',X)\right|_{t=a}^{t=b} = 0.\]
Second, the O'Neill formula in the weighted case implies that $(R^{g_\la})_X^{c'}(V,V) \geq R_X^{c'}(V,V)$. Since this lower bound is independent of $\la > 0$, the proof will be complete once we show both of the following:
\begin{itemize}
\item $|V'|_{g_\la}^2 \to 0$ as $\la \to 0$, and
\item $g_\la(c',X)g_\la(V,V') \to 0$ as $\la \to 0$.
\end{itemize}
Indeed, since $V'$ is tangent to the $G$--orbits, $|V'|_{g_\la} \to 0$ as $\la \to 0$. This proves the first statement. The second statement follows from the first, together with the estimate
\[|g_\la(c',X)||g_\la(V,V')| \leq |c'|_{g_\la} |X|_{g_\la}|V|_{g_\la} |V'|_{g_\la} \leq |c'|_g |X|_g|V|_g |V'|_{g_\la}.\]
Here, the second inequality follows since Cheeger deformations (weakly) decrease lengths, i.e., $|\cdot|_{g_\la} \leq |\cdot|_g$ for all $\la > 0$.
This completes the proof if $\sec_X > 0$. Consider now the case where $X = \nabla f$ and $\overline\sec_f > 0$. Here, we consider the vector space
\[W_f = \{Y = e^f V \st V \in W\},\]
and show that, for all $Y \in W_f$, there exists $\la > 0$ such that the index $I_c(Y,Y)$ of $Y$ along $c$ is negative. Since $\dim(W_f) = \dim(W)$, this would complete the proof in this case. This is easily accomplished by proceeding as in the previous case and using the alternative formula for the index given in Equation \ref{eqn:IndexForm2}.
\end{proof}
\section{Torus actions and positive weighted sectional curvature}\label{sec:TorusActions}
Throughout this section, we consider closed Riemannian manifolds $(M,g)$ equipped with a vector field $X$ such that $(M,g,X)$ has positive weighted sectional curvature. In addition, we assume a torus $T$ acts isometrically on $M$. Applying Corollary \ref{cor:PWSCaveraging}, if necessary, we assume that $X$ is invariant under the torus action.
Our first result is the following generalization of a result of Grove--Searle \cite{GroveSearle94}:
\begin{theorem}[Maximal symmetry rank]\label{thm:GroveSearle}
Let $(M^n,g,X)$ be closed with positive weighted sectional curvature. If $T^r$ is a torus acting effectively by isometries on $M$, then $r \leq \floor{\frac{n+1}{2}}$. Moreover, if equality holds and $M$ is simply connected, then $M$ is homeomorphic to $\mathbb{S}^n$ or $\C\mathbb{P}^{n/2}$.
\end{theorem}
The upper bound on $r$ is sharp and agrees with Grove and Searle's result. However, in the unweighted case, Grove and Searle prove an equivariant diffeomorphism classification when the maximal symmetry rank is achieved. We obtain this weaker rigidity statement by a different argument that relies on Wilking's connectedness lemma and a lemma in Fang and Rong \cite{FangRong05}. For a more detailed argument along these lines, we refer to \cite[Section 7.1.3]{PetersenRGv2}.
\begin{proof}
By Berger's theorem (Corollary \ref{cor:Berger}) in the weighted case, there exists $x \in M$ fixed by either $T^r$ or a subtorus $T^{r-1}$, according to whether $n$ is even or odd. Since this subtorus embeds into $\SO(n)$ via the isotropy representation, it follows that $r \leq \floor{\frac{n+1}{2}}.$
We proceed to the equality case. First, if $n \in \{2,3\}$, then $M$ is homeomorphic to a sphere since it is simply connected by the resolution of the Poincar\'e conjecture. Suppose therefore that $n \geq 4$. By arguing inductively as in Grove--Searle, it follows that some circle in $T^r$ fixes a codimension-two submanifold $N$. By the connectedness lemma in the weighted case, we conclude that the inclusion $N \hookrightarrow M$ is $\dim(N)$--connected. It follows immediately from Poincar\'e duality that $M$ and $N$ are integral cohomology spheres or complex projective spaces (see, for example, \cite[Section 7]{Wilking03}). If $M$ is an integral sphere, then it is homeomorphic to the standard sphere by the resolution of the Poincar\'e conjecture. If $M$ is an integral complex projective space, then the fact that $N$ represents the generator of $H^2(M;\Z)$ implies that $M$ is homeomorphic to complex projective space, by Lemma 3.6 in Fang--Rong \cite{FangRong05}.
\end{proof}
We remark that there are a number of generalizations of Grove and Searle's result. These include results of Rong and Fang in the cases of ``almost maximal symmetry rank'' or non-negative curvature (see Fang and Rong \cite{Rong02, FangRong05}, Galaz-Garcia and Searle \cite{GalazGarciaSearle11,GalazGarciaSearle14}, and Wiemeler \cite{Wiemeler-pre}).
Returning to the case of positive curvature, there are additional results that assume less symmetry. We focus here on the following homotopy classification due to Wilking \cite[Theorem 2]{Wilking03}:
\begin{theorem}[Wilking's homotopy classification]
Let $M^n$ be a closed, simply connected, positively curved manifold, and let $T^r$ act effectively by isometries on $M$. If $n \geq 10$ and $r \geq \frac{n}{4} + 1$, then $M$ is either homeomorphic to $\mathbb{S}^n$ or $\HH\mathbb{P}^{n/4}$ or homotopy equivalent to $\C\mathbb{P}^{n/2}$.
\end{theorem}
By Grove and Searle \cite{GroveSearle94} and Fang and Rong \cite{FangRong05}, this result actually holds for all $n \neq 7$. Additionally the conclusion in this theorem has been improved to a classification up to tangential homotopy equivalence (see Dessai and Wilking \cite[Remark 1.4]{DessaiWilking04}). We prove the following analogue of Wilking's classification under a slightly stronger symmetry assumption:
\begin{theorem}\label{thm:WilkingHomotopy}
Let $(M^n,g,X)$ be closed and simply connected with positive weighted sectional curvature. If $M$ admits an effective, isometric torus action of rank $r \geq \frac{n}{4} + \log_2 n$, then $M$ is homeomorphic to $\mathbb{S}^n$ or tangentially homotopy equivalent to $\C\mathbb{P}^{n/2}$.
\end{theorem}
Note that $\HH\mathbb{P}^{n/4}$ does not appear in the conclusion. This is consistent with Theorem 3 in Wilking \cite{Wilking03}, which states that the maximal rank of a smooth torus action on an integral $\HH\mathbb{P}^{m}$ is $m + 1$.
One reason for the larger symmetry assumption is that Wilking's original proof invokes the full strength of Grove and Searle's equivariant diffeomorphism classification. Since we do not prove this here, we cannot use exactly the same proof. In addition, the larger symmetry assumption allows us to side-step some of the more delicate parts of Wilking's proof and thereby allows for a quick argument that captures the essence of his induction machinery, as described in the introduction of \cite{Wilking03}.
\begin{proof}[Proof of Theorem \ref{thm:WilkingHomotopy}]
We first note that it suffices to prove that $M$ has the integral cohomology of $\mathbb{S}^n$ or $\C\mathbb{P}^{n/2}$. Indeed, a simply connected integral sphere is homeomorphic to the standard sphere by the resolution of the Poincar\'e conjecture. Moreover, it is well known that a simply connected integral complex projective space is homotopy equivalent to the standard one, and the classification up to tangential homotopy follows directly from Dessai and Wilking \cite{DessaiWilking04}.
Second, note that the theorem holds in dimensions $n \leq 13$ by the extension of Grove and Searle's result (Theorem \ref{thm:GroveSearle}). We proceed by induction for dimensions $n \geq 14$. By examining the isotropy representation at a fixed point of $T^r$ (or $T^{r-1}$ in the odd-dimensional case), one sees that an involution $\iota \in T^r$ exists such that some component $N$ of its fixed-point set has codimension $\cod(N) \leq \frac{n+3}{4}$ (see, for example, \cite[Lemma 1.8.(1)]{Kennard2}). By replacing $\iota$ by another involution, if necessary, we may assume $\cod(N)$ is minimal. In particular, the induced action of the torus $T^r/\ker(T^r|_N)$ has rank at least $r - 1$.
If $\cod(N) = 2$ and $N$ is fixed by a circle, the claim follows as in the proof of the Grove--Searle result. Otherwise, $T^r/\ker(T^r|_N)$ is a torus that acts effectively and isometrically on $N$ with dimension at least $\frac 1 4 \dim N + \log_2(\dim N)$. Since $N$ is a fixed-point set of an involution in $T^r$, the vector field $X$ is tangent to $N$, and $N$ inherits positive weighted sectional curvature. By the connectedness lemma, $N$ is simply connected. By the induction hypothesis, $N$ is an integral sphere or complex projective space. By the connectedness lemma again, it follows that $M$ too is an integral sphere or projective space. This concludes the proof.
\end{proof}
The theorems of this section should be viewed as a representative, as opposed to exhaustive, list of the kinds of topological results we can now generalize to the weighted setting. Indeed, the tools discussed in this paper have been applied to similar, weaker topological classification problems for positively curved manifolds with torus symmetry. Invariants calculated or estimated include the fundamental group (see Wilking \cite[Theorem 4]{Wilking03}, Frank--Rong--Wang \cite{FrankRongWang13}, Sun--Wang \cite{SunWang09}, and \cite{Kennard3}), the Euler characteristic (see work of the first author and Amann \cite{Kennard1,AK1,AmannKennard3}), and the elliptic genus (see Dessai \cite{Dessai05,Dessai07} and Weisskopf \cite{Weisskopf}). Much of this work now can also be extended to the weighted case using the results in this article.
On the other hand, it is much less clear whether some other prominent classification theorems for manifolds with positive curvature and torus symmetry can be extended to the weighted setting. Principal among these is the situation in low dimensions. In Section \ref{sec:Examples}, we discussed why closed manifolds with positive weighted sectional curvature in dimension two and three are diffeomorphic to spherical space forms. In dimension 4, Hsiang and Kleiner \cite{HsiangKleiner89} proved that a closed, simply connected manifold $M$ in dimension four with positive curvature and an isometric circle action is homeomorphic to $\mathbb{S}^4$ or $\C\mathbb{P}^2$.
This result has been generalized in a number of ways. Recently, Grove and Wilking strengthened the conclusion to state that the circle action on $M$ is equivariantly diffeomorphic to a linear action on one of these two spaces (see \cite{GroveWilking-pre} and references therein for a survey of related work). A natural question is whether this result also holds for positive weighted sectional curvature.
\begin{question}\label{quesdim4}
Let $(M^4,g,X)$ be simply connected and closed with positive weighted sectional curvature. Is every effective, isometric circle action on $M$ equivariantly diffeomorphic to a linear action on $\mathbb{S}^4$ or $\C\mathbb{P}^2$?
\end{question}
In dimension five, Rong \cite{Rong02} proved that a positively curved $M^5$ with an isometric $2$--torus action is diffeomorphic to $\mathbb{S}^5$. This result has also been improved to an equivariant diffeomorphism classification (see Galaz-Garcia and Searle \cite{GalazGarciaSearle14}), giving the following question.
\begin{question}\label{quesdim5}
Let $(M^5,g,X)$ be simply connected and closed with positive weighted sectional curvature. Is every effective, isometric torus action of rank two on $M$ equivariantly diffeomorphic to a linear action on $\mathbb{S}^5$?
\end{question}
\mathbb{S}ection{Future directions}\label{sec:FutureDirections}
In addition to addressing Questions \ref{quesdim4} and \ref{quesdim5}, another avenue of research is to consider compact manifolds with density that admit positive weighted curvature and an isometric action by an arbitrary Lie group $G$. To make this problem tractable, one can assume that $G$ is large in some sense, e.g., that $G$ or its principal orbits have large dimension. Notable are classification results in this context due to Wallach \cite{Wallach72} and B\'erard-Bergery \cite{Berard-Bergery76} for transitive group actions, Wilking \cite{Wilking06} for more general group actions, Grove and Searle \cite{GroveSearle97} and Spindeler \cite{Spindeler14-thesis} for fixed-point homogeneous group actions, and Grove and Kim \cite{GroveKim04} for fixed-point cohomogeneity one group actions. In the non-negatively curved case, especially in small dimensions, there have been some extensions of these results due to DeVito \cite{DeVito14,DeVito-pre}, Galaz-Garcia and Spindeler \cite{GalazGarciaSpindeler12}, Simas \cite{Simas-pre}, and Gozzi \cite{Gozzi-pre}.
A particularly interesting case is where $G$ is so large that the principal orbits have codimension one. Manifolds that admit a cohomogeneity one metric with positive sectional curvature have been classified by Verdiani \cite{Verdiani04} in the even-dimensional case and by Grove, Wilking, and Ziller \cite{GroveWilkingZiller08} in the odd-dimensional case (see also \cite{VerdianiZiller-pre} and the recent generalization to the case of polar actions by Fang, Grove, and Thorbergsson \cite{FangGroveThorbergsson-pre}).
The classification is actually incomplete in dimension seven, as there are two infinite families of manifolds that are considered ``candidates'' to admit positive curvature. There are very few examples of manifolds that admit positive curvature, so it was remarkable that one of these candidates was recently shown to admit positive sectional curvature by Dearricott \cite{Dearricott11} and Grove--Verdiani--Ziller \cite{GroveVerdianiZiller11}. It remains to be seen whether the others admit positive curvature.
It would be interesting to examine these results in the case of manifolds with density. Doing this would hopefully lead to new insights into the question posed in the introduction: If $(M,g,X)$ is compact with positive weighted sectional curvature, does $M$ admit a metric with positive sectional curvature?
The most prominent missing ingredient when trying to generalize results to the weighted setting is a Toponogov-type triangle comparison theorem and the resulting convexity properties of distance functions. These crucial tools would be needed to address Questions \ref{quesdim4} and \ref{quesdim5}, the equivariant diffeomorphism rigidity in Grove and Searle's theorem (Theorem \ref{thm:GroveSearle}), and the results above for general group actions.
The examples in Section \ref{sec:Examples} show that the classical statement of the Toponogov theorem is false for positive weighted sectional curvature. On the other hand, we can make an analogy here with the situation of Ricci curvature and Bakry--Emery Ricci curvature. For positive Ricci curvature, instead of convexity of the distance function, one obtains Laplacian and volume comparisons. Strictly speaking, these comparisons do not hold for positive Bakry--Emery Ricci curvature, but modified, weaker versions do hold and are still enough to recover topological obstructions; see \cite{WeiWylie09}. We believe there should be some form of modified convexity of distance functions coming from positive weighted sectional curvature, which may lead to generalizations of all of the results mentioned above. This will be the topic of future research.
\end{document}
\begin{document}
\pagestyle{plain}
\def\thepage{\arabic{page}}
\thispagestyle{empty}
\title{Two-Phase Bicriterion Search for Finding \\
Fast and Efficient Electric Vehicle Routes}
\author{
Michael T.~Goodrich\\
Dept.~of Computer Science \\
University of California, Irvine \\
\url{http://www.ics.uci.edu/~goodrich}
\and
Pawe{\l} Pszona \\
Dept.~of Computer Science \\
University of California, Irvine \\
\url{http://www.ics.uci.edu/~ppszona}
}
\date{}
\maketitle
\begin{abstract}
The problem of finding an electric vehicle route that
optimizes both driving time and energy consumption can be modeled as
a bicriterion path problem.
Unfortunately, the problem of finding optimal bicriterion paths
is NP-complete.
This paper studies such problems
restricted to \emph{two-phase} paths, which correspond to
a common way people drive electric vehicles, where a driver
uses one driving style (say, minimizing driving time)
at the beginning of a route
and another driving style (say, minimizing energy consumption) at the end.
We provide efficient polynomial-time
algorithms for finding optimal two-phase paths in bicriterion networks,
and we empirically verify the effectiveness
of these algorithms for finding good electric vehicle driving routes
in the road networks of various U.S.~states.
In addition, we show how to incorporate charging stations
into these algorithms, in spite of the computational challenges
introduced by the negative energy consumption of such network vertices.
\noindent\textbf{Keywords:} road networks, electric vehicles,
shortest paths, bicriterion paths,
NP-complete.
\end{abstract}
\section{Introduction}
Finding an optimal path for an electric vehicle (EV)
in a road network, from a given origin to a given destination,
involves optimizing two criteria---driving time
and energy consumption.
Unfortunately, these two criteria are usually in conflict, since people
typically would like to minimize
driving time, but EVs are least efficient at high speeds.
(E.g., see Figures~\ref{fig-tesla-range} and~\ref{fig-tesla-mph}.)
Thus, planning good driving routes for EVs is
challenging~\cite{Franke201356,GrahamRowe2012140},
leading some to refer to the stress of dealing with the
restricted driving distances imposed by battery capacities
as ``range anxiety''~\cite{APPS474}.
To help electric vehicle owners deal with range anxiety, therefore,
it would be ideal if
GIS route-planning systems could quickly provide electric vehicle
owners with routes that optimize a set of preferred
trade-offs for time and energy,
based on the energy-usage characteristics and
the battery capacity of their vehicle.
\begin{figure}
\caption{Range versus speed for a Tesla Roadster and Tesla Model S
with 85 kWh battery~\cite{tesla}.}
\label{fig-tesla-range}
\caption{Battery consumption per mile
for a Tesla Roadster and Tesla Model S 85kWh~\cite{tesla}.}
\label{fig-tesla-mph}
\end{figure}
\subsection{Modeling EV Route Planning}
This electric-vehicle route-planning problem can be modeled as a
\emph{bicriterion path optimization} problem~\cite{h-mcdmt-80}
(which is also known as the \emph{resource constrained shortest path}
problem~\cite{mz-rcsp-00}), where
one is given a directed graph, $G=(V,E)$,
such that each edge, $e \in G$, has a weight, $w(e)$,
that is a pair of integers, $(x,y)$,
such that the cost of traversing $e$ uses $x$ units of one type and $y$ units
of a second type. For instance, in a road network calibrated for a certain
electric vehicle, a given edge, $e$, might have a
weight, $w(e)=(75,\,304)$, which
indicates that driving at a given speed (say, 60 mph) will require 75 seconds
and consume 304 Wh to traverse $e$.
The graph $G$ is allowed to contain
parallel edges, that is, multiple edges having
the same origin and destination, $v$ and $w$,
so as to represent different ways of going from $v$ to $w$. For example,
one edge, $e_1=(v,w)$, could represent a traversal
from $v$ to $w$
at 60 mph, another edge, $e_2=(v,w)$, could represent a traversal
from $v$ to $w$ at 55 mph,
and yet another edge, $e_3=(v,w)$, could represent a traversal at 65 mph.
For a path, $P=(e_1,e_2,\ldots,e_k)$, in $G$,
whose edges have respective weights,
$(x_1,y_1)$, $\ldots$, $(x_k,y_k)$, the weight, $w(P)$, of $P$, is defined as
\[
w(P) = \left( \sum_{i=1}^k x_i\,,\,\,\, \sum_{i=1}^k y_i \right).
\]
Given a starting vertex, $s$, and a target vertex, $t$, and two
integer parameters,
$X$ and $Y$, the
\emph{bicriterion path problem} is to find a path, $P$, in $G$, from $s$
to $t$, such that $w(P)=(x,y)$ with $x\le X$ and $y\le Y$.
(See Figure~\ref{fig-two-phase}.)
Unfortunately, as we review below, the bicriterion path problem
is NP-complete.
\begin{figure}
\caption{An instance of the bicriterion path problem. (a) A network with
(driving-time,~energy-consumption) edge weights;
(b) All the
paths in the graph and their respective weights. We highlight 3 interesting
path weights.}
\label{fig-two-phase}
\end{figure}
The bicriterion path problem has a rich history, and several heuristic
and approximation
algorithms have been proposed to solve it
(e.g., see~\cite{h-mcdmt-80,h-asrsp-92,Henig1986281,Modesti1998495,
mz-rcsp-00,Mote199181,NamoradoClimaco1982399,Skriver2000507}).
Rather than take a heuristic or approximate approach, however, we are interested
here in reformulating the problem so as to simultaneously achieve the following
goals:
\begin{itemize}
\item
The formulation should capture the way people drive electric vehicles
in the real world.
\item
This formulation should be solvable
in (strongly) polynomial time, ideally, with the same asymptotic
worst-case running time
needed to solve a single-criterion shortest path problem.
\end{itemize}
\subsection{Our Results}
In this paper, we show that one can, indeed, achieve both of the
above goals by using a formulation we call
the \emph{two-phase bicriterion path problem}.
In a \emph{two-phase path}, $P$, we
traverse the first part of $P$ according to one driving
style and we traverse the remainder of $P$ according to a second driving style.
For instance, we might begin an electric vehicle route optimizing
primarily for driving time but finish this route optimizing primarily
for energy consumption, which is a common way electric vehicles are driven
in the real world (e.g., see~\cite{Franke201356,GrahamRowe2012140}).
We provide a general mathematical framework for the two-phase bicriterion
path problem and
we show how to find such paths in a network of $n$ vertices and $m$
edges in $O(n\log n + m)$ time,
if edge weights are pairs of non-negative integers,
and in $O(nm)$ time otherwise.
In addition, we show how to extend our algorithms to incorporate charging
stations in the network, with similar running times.
We include an experimental validation of our algorithms using Tiger/Line
USA road network data, showing that our algorithms are effective both in terms of
their running times and in terms of the quality of the solutions that they
find.
\subsection{Additional Related Work}
In ACM SIGSPATIAL GIS '13,
Baum {\it et al.}~\cite{Baum:2013} describe an algorithm for finding
energy-optimal routes for electric vehicles, based on a variant of Dijkstra's
shortest path algorithm.
They contrast the paths
their algorithm finds with shortest travel time and shortest distance paths,
showing that the paths found by their algorithm are significantly more energy
efficient.
In addition to this work, the problem of finding
energy-optimal paths for electric vehicles
is also studied by
Artmeier {\it et al.}~\cite{artmeier-10},
Eisner {\it et al.}~\cite{eisner2011optimal},
and
Sachenbacher {\it et al.}~\cite{sachenbacher2011efficient}.
Unfortunately,
these energy-optimal paths are not that practically useful
for typical drivers of electric vehicles, who
care more about quickly reaching their
destinations (while not depleting their batteries)
than they do about minimizing overall energy consumption
(e.g., see~\cite{Franke201356,GrahamRowe2012140}).
For instance, as shown in
Figures~\ref{fig-tesla-range} and~\ref{fig-tesla-mph},
in a Tesla Roadster or Model S 85kWh,
a driver achieves optimal
energy efficiency on level ground by maintaining a constant speed of 15 to 20
mph, which is unrealistic for real-world road trips.
Thus, we feel it is more productive to provide algorithms that can find routes
with small travel times that also conserve sufficient energy to avoid
fully depleting a vehicle's battery (if possible),
which motivates studying electric vehicle
route planning as a bicriterion path problem.
We are not familiar with any prior work on finding optimal two-phase
bicriterion paths, but
there are well-known algorithms
for finding single-phase paths and
for enumerating all Pareto optimal bicriterion paths.
We review these classic results in the next section.
Bidirectional shortest-path algorithms have been used
as an approach to speedup
shortest path searching~\cite{gkw-alenex,Righini2006255},
but, to our knowledge, these have not been applied in the way we are doing
bidirectional search for finding optimal two-phase shortest paths.
In addition, Storandt~\cite{Storandt:2012} studies EV route planning
taking into account charging stations, but not in the same way
that we incorporate charging stations into two-phase routes.
\section{The Complexity of Bicriterion Path Finding}
\label{pseudo_polynomial_sec}
We begin by reviewing known results for
the bicriterion path problem, absent of the two-phase path formulation,
including
that finding bicriterion shortest paths is NP-complete, but there
is a pseudo-polynomial time algorithm for finding bicriterion paths, which
can be very slow in practice.
\subsection{Bicriterion Path Finding is NP-Complete}
The bicriterion path problem
is NP-complete, even if the values in the weight pairs are all positive
integers
(e.g., see~\cite{arkin1991bicriteria,gj-cigtn-79}).
For instance, there is a simple polynomial-time
reduction from the Partition problem,
where one is given a set, $A$, of $n$ positive numbers,
$A=\{a_1,a_2,\ldots,a_n\}$, and asked if there is a subset,
$B\subset A$, such that $\sum_{a_i\in B} a_i = \sum_{a_i\in A-B} a_i$.
To reduce this to the bicriterion path problem,
let the set of vertices be $V=\{v_1,v_2,\ldots,v_{n+1}\}$,
and, for each $v_i$, $i=1,\ldots,n$, create two edges,
$e_{i,1}=(v_i,v_{i+1})$ and $e_{i,2}=(v_i,v_{i+1})$,
such that $w(e_{i,1})=(1+a_i,\,1)$ and $w(e_{i,2})=(1,\,1+a_i)$.
Let $h=(\sum_{i=1}^n a_i)/2$,
and define this instance of the bicriterion path problem to ask if
there is a path, $P$, from $v_1$ to $v_{n+1}$, with weight
$w(P)=(x,y)$ such that $x\le n+h$ and $y\le n+h$.
This instance of the bicriterion path problem has a solution if and
only if there is a solution to the Partition problem.
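To make the reduction concrete, the following sketch (in Python; the function name and the representation of the network are ours, for illustration only) builds the gadget network and the bounds $(X,Y)$ from a Partition instance.
\begin{verbatim}
def partition_to_bicriterion(a):
    """Gadget for a Partition instance a = [a_1, ..., a_n].

    Returns the edge list and bounds (X, Y) such that the Partition
    instance has a solution iff some v_1-to-v_{n+1} path P satisfies
    w(P) = (x, y) with x <= X and y <= Y.
    """
    n = len(a)
    edges = []  # entries are (origin, destination, (x, y))
    for i, a_i in enumerate(a, start=1):
        edges.append((i, i + 1, (1 + a_i, 1)))  # e_{i,1}
        edges.append((i, i + 1, (1, 1 + a_i)))  # e_{i,2}
    h = sum(a) / 2
    return edges, (n + h, n + h)
\end{verbatim}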
\subsection{A Pseudo-Polynomial Time Algorithm}
As with the Partition problem, there is a pseudo-polynomial time
algorithm for the bicriterion path problem
(e.g., see~\cite{h-mcdmt-80,h-asrsp-92}).
Recall that the input to this problem is an $n$-vertex graph, $G$, with integer
weight pairs stored at its $m$ edges (and assume for now that
none of these values are negative), together with parameters $X$ and $Y$.
In this pseudo-polynomial time algorithm,
which we call the ``vertex-labeling'' algorithm,
we store at each vertex, $v$, a set of pairs,
$(x,y)$, such that there is a path, $P$, from $s$ to $v$ with weight $(x,y)$.
We store such a pair, $(x,y)$, at $v$, if we have discovered
a path with this weight and only if, at this point
in the algorithm, there is no
other discovered weight pair, $(x',y')$, with $x'<x$ and $y'<y$,
for a path from $s$ to $v$.
Initially, we store the set $\{(0,0)\}$ at $s$ and we store $\emptyset$ at
every other vertex in $G$.
Next, for a sequence of iterations, we perform a \emph{relaxation} for each
edge, $e=(v,w)$, in $G$, with $w(e)=(x,y)$,
such that, for each pair, $(x',y')$, stored at $v$, we add
$(x+x',\,y+y')$ to $w$, provided there is no pair, $(x'',y'')$,
already stored at $w$, such that $x''\le x+x'$ and $y''\le y+y'$.
Moreover, if we add such a pair $(x+x',\,y+y')$ to $w$, then we
remove each pair, $(x'',y'')$, from $w$ such that
$x''> x+x'$ and $y''> y+y'$.
The algorithm completes when an iteration causes
no label updates, at which point we then test if there is a pair,
$(x,y)$, stored at the target vertex, $t$, such that $x\le X$ and $y\le Y$.
If we let $N$ denote the maximum
value of a sum of $x$-values or $y$-values
along a path in $G$, then
the running time of this algorithm
is $O(nmN)$, because each iteration takes at most $O(mN)$ time and there
can be at most $O(n)$ iterations (since there can be no negative-weight cycles).
Because $N$ can be very large, this is only a pseudo-polynomial time
algorithm.
In practice, this algorithm can be quite inefficient; for instance,
in a road network, $G$, for an electric vehicle, $N$ could be
the number of seconds in
the maximum duration of a trip in $G$ or the capacity of the battery
measured in Wh.
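For concreteness, a minimal sketch of this vertex-labeling algorithm is given below (in Python; we assume the graph is an adjacency list mapping each vertex to a list of \texttt{(neighbor, (x, y))} entries, and that all weights are non-negative, as above). The Pareto filtering of the label sets matches the relaxation rule just described.
\begin{verbatim}
def dominates(p, q):
    """True if label p = (x, y) is at least as good as q in both criteria."""
    return p[0] <= q[0] and p[1] <= q[1]

def vertex_labeling(adj, s, t, X, Y):
    """Pseudo-polynomial bicriterion search by label correcting."""
    labels = {v: set() for v in adj}
    labels[s].add((0, 0))
    changed = True
    while changed:                      # stop when an iteration updates nothing
        changed = False
        for v in adj:
            for w, (x, y) in adj[v]:
                for (xp, yp) in list(labels[v]):
                    cand = (xp + x, yp + y)
                    if any(dominates(q, cand) for q in labels[w]):
                        continue        # candidate is dominated at w
                    labels[w] = {q for q in labels[w]
                                 if not dominates(cand, q)}
                    labels[w].add(cand)
                    changed = True
    return any(x <= X and y <= Y for (x, y) in labels[t])
\end{verbatim}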
\subsection{Battery Capacities and Charging Stations}
Although the above algorithm is not very efficient,
we can nevertheless modify it
to work for electric vehicle routes, taking into
consideration battery capacities and the existence of charging
stations.
Here, we assume that each edge weight $w(e)=(x,y)$, where $x$ is the
time to traverse the edge (at a speed associated with the edge $e$)
and $y$ is the energy consumed by this traversal.
We also assume that the vehicle starts its journey from the start
vertex, $s$, with a fully charged battery.
A charging station can be modeled as a vertex that has a
self-loop with a weight $(x,y)$ having a positive $x$ value and negative $y$.
There may be other edges in the graph with negative $y$-values, as
well, such as a stretch of road that goes sufficiently downhill to
allow net battery charging through regenerative braking.
We store at each vertex a collection of $(x,y)$ values
corresponding to the driving time, $x$, and net energy consumption,
$y$, along some path starting from the start vertex, $s$.
We modify the above vertex-labeling
algorithm, however, to disallow storing an $(x,y)$ pair with
a negative $y$ value,
since we assume our vehicle begins with a fully charged
battery, and it is not possible to store more energy
in a battery after it is fully charged.
Similarly, we assume we know the capacity, $C$, for the vehicle's
battery. If we ever consider a weight pair, $(x,y)$, for an $s$-to-$w$ path,
such that $y>C$, then we discard this pair and do not add it to
the label set for $w$.
Such a pair $(x,y)$ corresponds to a path that would fully discharge
the battery;
hence, attempting to traverse this path would cause the vehicle to
stop functioning and it would not reach its destination.
Making these modifications allows the vertex-labeling algorithm to be
adapted to an environment for planning the route of an electric
vehicle, including consideration of its battery capacity, the fact
that its battery cannot hold more than a full charge, and removal
of paths that would require too much energy to traverse.
These modifications do not improve its asymptotic running time,
however, which becomes $O(nmN^2)$, where $N$ is the largest route
duration or the battery capacity, since each iteration takes
$O(mN)$ time and there can be at most $O(nN)$ iterations (given
our restrictions based on the battery capacity).
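In code, these battery-aware rules reduce to a small filter applied to every candidate label before it is stored; the following is a sketch under the same conventions as before (the capacity $C$ is passed in explicitly).
\begin{verbatim}
def battery_filter(label, capacity):
    """Apply the battery rules to a candidate (time, energy) label.

    Returns the label to store (with negative energy clamped to zero,
    since a full battery cannot absorb more charge), or None if the
    label would require more energy than the battery holds.
    """
    x, y = label
    if y > capacity:
        return None     # path would over-drain the battery: discard it
    return (x, max(y, 0))
\end{verbatim}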
\subsection{Drawbacks}
In addition to its inefficiency, the
vertex-labeling algorithm
might find an
optimal path
that could be difficult to actually drive in practice.
For instance, it could involve many alternations between various styles
of driving, such as ``drive the speed limit''
and ``drive 10 mph below the speed limit.''
In addition, it could involve several detours, for instance, asking a driver
to systematically
get on and off a limited-access high-speed highway. Such detours
are distracting and difficult to follow, of course, but
they could also be expensive, if that limited-access highway were a
toll road.
Thus, implementing
the so-called ``optimal'' path that this algorithm produces might
require an onboard GPS system to constantly be barking out strange
orders to the driver, which, unless the driver enjoys
road rallies, could be difficult and annoying to follow.
Clearly, we prefer a formulation of the bicriterion path problem
that would better match the ways people drive in practice.
\section{Linear Utility Functions}
\label{sec:modes}
Fortunately,
there is a more natural and efficient algorithm for finding good bicriterion
paths, by using linear utility functions
(e.g., see~\cite{Henig1986281,mz-rcsp-00,Modesti1998495}).
Suppose we are given a directed
network, $G$, together with pairs, $(x,y)$, defined
for each edge in $G$.
Formally, we define a linear utility function in terms
of a \emph{preference pair}, $(\alpha,\beta)$, of non-negative real numbers.
A path $P$, from $s$ to $t$, in $G$, is \emph{optimal} for
a preference pair $(\alpha,\beta)$
if it minimizes the cost, $C_{\alpha,\beta}(P)$, of $P=(e_1,e_2,\ldots,e_k)$,
with $w(e_i)=(x_i,y_i)$,
\[
C_{\alpha,\beta}(P) = \sum_{i=1}^k (\alpha x_i + \beta y_i),
\]
taken over all possible paths from $s$ to $t$ in $G$ (that is, $k$ is
a free variable and we do not limit the number of edges in $P$).
For example, using the preference pair $(1,\, 0.01)$, for edge weights defined by
pairs of driving times in seconds and energy consumption in watt-hours,
would imply a driving style that tends
to emphasize driving time over energy consumption.
Note that we can also write this cost for a path, $P$, as two global sums,
\[
C_{\alpha,\beta}(P) = \sum_{i=1}^k \alpha x_i \,+\, \sum_{i=1}^k \beta y_i,
\]
which implies that we can visualize this optimization as
that of finding a vertex
on the convex hull of $(x,y)$ points for the weights of $s$-$t$ paths in $G$,
in a direction determined by $\alpha$ and $\beta$.
Moreover, this algorithm cannot find $(x,y)$ points that are not on the
convex hull.
(See Figure~\ref{fig:hull}.)
\begin{figure}
\caption{Sample $(x,y)$ points that correspond to the weights of paths
in a bicriterion network. The solid points could potentially be
found by a linear optimization algorithm
using an $(\alpha,\beta)$ preference pair, as they are on the convex
hull of the set of $(x,y)$ points, shown dashed.
The gray points are Pareto-optimal points (that is, not dominated
by any other point), but they would not be found
by an algorithm that searches for optimal paths based on linear utility
functions and preference pairs.
The empty points are not Pareto optimal; hence, they should not
be returned as options from a bicriterion optimization algorithm.}
\label{fig:hull}
\end{figure}
If the $\alpha x_i+\beta y_i$ values for the edges
in $G$ are all non-negative, then an optimal
$s$-to-$t$ path, for any preference pair, $(\alpha,\beta)$,
can be found using a standard single-source shortest
path algorithm~\cite{Henig1986281},
which runs in $O(n\log n+m)$ time,
where $n$ is the number of vertices in $G$ and $m$ is the number of edges,
by an implementation of Dijkstra's algorithm (e.g., see~\cite{clrs-ia-01}).
Otherwise, such a path can be found in $O(nm)$ time, by the
Bellman-Ford algorithm (e.g., see~\cite{clrs-ia-01}).
Indeed, for any vertex, $v$, and a given preference pair, $(\alpha,\beta)$,
we can use these algorithms to find the tree
defined by the union of all $(\alpha,\beta)$-optimal paths
in $G$ that emanate out from $v$, or are directed into $v$,
in these same time bounds.
(Note that we may allow such paths to include
self-loops at charging stations a finite number of times, so that the
topology of their union is still essentially a tree.)
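A minimal sketch of this single-phase computation is given below (in Python); it runs Dijkstra's algorithm on the scalarized costs $\alpha x+\beta y$, assumed non-negative, and records the bicriterion weight and predecessor of the chosen path to each vertex, i.e., the tree emanating from the source.
\begin{verbatim}
import heapq

def scalarized_dijkstra(adj, s, alpha, beta):
    """Optimal paths from s for the preference pair (alpha, beta).

    adj[v] = list of (w, (x, y)) edges.  Returns dictionaries mapping
    each reachable vertex to the (x, y) weight of its chosen path and
    to its predecessor on that path.
    """
    dist = {s: 0.0}
    pair = {s: (0, 0)}
    pred = {s: None}
    heap = [(0.0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                    # stale heap entry
        for w, (x, y) in adj[v]:
            nd = d + alpha * x + beta * y
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                pair[w] = (pair[v][0] + x, pair[v][1] + y)
                pred[w] = v
                heapq.heappush(heap, (nd, w))
    return pair, pred
\end{verbatim}
Running the same routine from $t$ on the reversed adjacency list yields, in the same way, the tree of optimal $v$-to-$t$ paths directed into $t$.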
\section{Two-Phase Bicriterion Paths}
\label{two_phase_sec}
Restricting the search to a route that optimizes a single linear utility function, as described above,
may be too constraining. Because such a search misses $(x,y)$ pairs that
are not on the convex hull,
when we plan a route from a source, $s$, to a target, $t$,
a fast and efficient $s$-to-$t$ path might be overlooked:
a path minimizing driving time might run out of energy before
reaching $t$, while a route minimizing energy consumption might be
needlessly slow.
(See, for example, Figure~\ref{fig-two-phase}.)
Thus, it would be desirable to consider routes that include a transition
from one linear utility function to another at some point,
such as a route that optimizes driving time in the beginning
of the route and switches to optimizing energy consumption at the end,
so as to reach the target vertex quickly without fully discharging
the battery.
Suppose we are given two preference pairs,
$(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$.
For example, we might have
$(\alpha_1,\beta_1)=(1,0.1)$, which emphasizes
driving time, and $(\alpha_2,\beta_2)=(0.1,1)$,
which emphasizes energy consumption.
A path, $P$, from $s$ to $t$ is a \emph{two-phase} path
for $(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$
if there is a vertex, $v$, in $P$, such that
we can divide $P$ into the path, $P_1$, from $s$ to $v$, and the
path, $P_2$, from $v$ to $t$, so that
$P_1$ is an optimal $s$-to-$v$ path
for the preference pair $(\alpha_1,\beta_1)$ and
$P_2$ is an optimal $v$-to-$t$ path
for the preference pair $(\alpha_2,\beta_2)$.
(For example, in Figure~\ref{fig-two-phase},
the path $abdef$ is a composition of a
time-optimal path from $a$ to $d$ and an energy-optimal path from
$d$ to $f$, and this would be a two-phase optimal path for a battery capacity
from 27 to 30, inclusive.)
As a boundary case,
we allow the vertex $v$ to be equal to $s$ or $t$,
so that a single-phase path is just a special case of
a two-phase path.
\subsection{Finding Two-Phase Paths}
In this section, we describe our polynomial-time algorithm for
finding an optimal two-phase path from a source, $s$, to a target, $t$,
in a graph, $G$, with bicriterion weights on its edges.
We describe an algorithm that can search for two-phase paths based on
optimizing two out of $c$ given preference pairs.
Suppose, then, that we are given $c$ preference pairs,
$(\alpha_1,\beta_1), (\alpha_2,\beta_2), \ldots, (\alpha_c,\beta_c)$.
\begin{enumerate}
\item
For each
preference pair, $(\alpha_i,\beta_i)$, use the algorithm of
Section~\ref{sec:modes} to find the tree, $T^{\rm out}_{s,i}$, that is the
union, for all $v$ in $G$,
of the optimal $s$-to-$v$ paths in $G$ for the pair
$(\alpha_i,\beta_i)$.
With each node, $v$, store the bicriterion weight, $(x,y)^{\rm out}_{i}$,
of the $s$-to-$v$ path in $T^{\rm out}_{s,i}$.
\item
For each
preference pair, $(\alpha_j,\beta_j)$, use the (reverse) algorithm of
Section~\ref{sec:modes} to find the tree, $T^{\rm in}_{t,j}$, that is the
union, for all $v$ in $G$,
of the optimal $v$-to-$t$ paths in $G$ for the pair
$(\alpha_j,\beta_j)$.
With each node, $v$, store the bicriterion weight, $(x,y)^{\rm in}_{j}$,
of the $v$-to-$t$ path in $T^{\rm in}_{t,j}$.
\item
For each node $v$ in $G$, and each pair of indices
$i,j=1,2,\ldots,c$,
compute the score
\[
(x,y)^v_{i,j} = (x,y)^{\rm out}_i + (x,y)^{\rm in}_j,
\]
for performing a transition from preference pair $(\alpha_i,\beta_i)$
to $(\alpha_j,\beta_j)$ at $v$,
where ``$+$'' is component-wise addition.
\item
Search all the $(x,y)^v_{i,j}$ values, including values $(x,y)^v_{i,i}$,
to find an optimal $(x,y)$
pair according
to the user's specified optimization
goals, such as $x\le X$ and $y\le Y$, for some $X$ and $Y$.
\end{enumerate}
We give a schematic illustration of this algorithm in Figure~\ref{fig:mixing}.
\begin{figure}
\caption{Schematic illustration of the two-phase polynomial-time algorithm.}
\label{fig:mixing}
\end{figure}
We note, in addition, that in combining weights in this two-phase manner,
we are able to find Pareto-optimal scores that could not be found in any
optimization using a single linear utility function. That is, we can
find Pareto-optimal scores for paths from $s$ to $t$ that are not on the
convex hull of $(x,y)$ scores.
(See Figure~\ref{fig:plot}.)
\begin{figure}
\caption{A plot of the different weight pairs for $a$-to-$f$ paths in the network
of Figure~\protect\ref{fig-two-phase}.}
\label{fig:plot}
\end{figure}
Let us analyze the running time of this algorithm. Suppose, first,
that there are no negative-weight edges. In this case, we can use
Dijkstra's algorithm to compute each $T^{\rm out}_{s,i}$ and $T^{\rm in}_{t,j}$;
hence, these steps run in $O(c(n\log n+m))$ time.
If, on the other hand, there are negative-weight edges, but no
negative cycles, in $G$, then
we use a Bellman-Ford
algorithm to compute each $T^{\rm out}_{s,i}$ and $T^{\rm in}_{t,j}$;
hence, these steps run in $O(cnm)$ time in this case.
Then, computing all the $(x,y)^v_{i,j}$ pairs and choosing an optimal
such pair takes $O(c^2n)$ time.
Thus, if $c$ is a fixed
constant independent of $n$ and $m$, then this algorithm runs
in $O(n\log n + m)$ time if there are no negative-weight edges and in
$O(nm)$ time otherwise.
Note that these running times are asymptotically the same as that of
computing an optimal path for a single traversal mode.
In the context of finding electric vehicle routes,
each preference pair, $(\alpha_i,\beta_i)$,
corresponds to a driving style, such as ``minimize driving time,''
``minimize energy consumption,'' or ``minimize a weighted combination of driving
time and energy consumption.''
In addition, the path that achieves the chosen optimal pair,
$(x,y)^v_{i,j}$, is simple to implement for the driver of an electric
vehicle.
He or she simply needs to drive according to driving style~$i$
from $s$ to $v$, that is, using the path in $T^{\rm out}_{s,i}$,
and then switch to drive according to driving style~$j$ from $v$ to $t$,
that is, using the path in $T^{\rm in}_{t,j}$.
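Steps 1--4 can be summarized by the following sketch, which reuses the scalarized Dijkstra routine sketched in Section~\ref{sec:modes} (run on the reversed adjacency list \texttt{radj} to obtain the in-trees); as an illustration we simply return the lexicographically smallest feasible $(x,y)$ pair, although any user-specified selection rule could be substituted here.
\begin{verbatim}
def two_phase(adj, radj, s, t, prefs, X, Y):
    """Best two-phase s-to-t weight over the preference pairs in prefs."""
    out_pairs = [scalarized_dijkstra(adj, s, a, b)[0] for (a, b) in prefs]
    in_pairs = [scalarized_dijkstra(radj, t, a, b)[0] for (a, b) in prefs]
    best = None
    for v in adj:                        # candidate transition vertices
        for i, op in enumerate(out_pairs):
            for j, ip in enumerate(in_pairs):
                if v not in op or v not in ip:
                    continue
                x, y = op[v][0] + ip[v][0], op[v][1] + ip[v][1]
                if x <= X and y <= Y and (best is None or (x, y) < best[0]):
                    best = ((x, y), v, (i, j))
    return best             # ((x, y), transition vertex, (style i, style j))
\end{verbatim}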
\section{Including Charging Stations}
The above two-phase path finding algorithm can be used in the context
of negative-weight edges (e.g., where regenerative braking charges
the battery), provided we add the capacity constraints
discussed in Section~\ref{pseudo_polynomial_sec}.
In this case, assuming there are no negative-weight
cycles, we could use the Bellman-Ford
algorithm to compute the optimal paths, requiring an $O(cnm+c^2n)$
running time.
If the charging stations themselves are the only places
in the network that provide negative energy consumption,
then we can achieve a potentially better algorithm for finding good paths.
In this case, we consider $s$ and $t$ themselves
to be charging stations, and
we let $d$ be the number of charging stations in the network.
Moreover, in this case, we assume the user is interested in the
shortest duration path
from $s$ to $t$ that can be achieved with a given battery capacity,
which starts out fully charged.
Also, we assume here that the user fully charges the battery at each
charging station at which he or she stops.
With the algorithm we discuss in this section,
we can design a long route for
an electric vehicle that starts at $s$,
includes several charging stations,
fully recharging the vehicle at each one along the way, and finally
reaches $t$, following a (possibly different) two-phase path between
consecutive stops along the way.
\begin{enumerate}
\item
For each charging station, $z$, and
each traversal mode, $(\alpha_i,\beta_i)$, use the Dijkstra-type algorithm of
Section~\ref{sec:modes} to find the tree, $T^{\rm out}_{z,i}$, that is the
union, for all $v$ in $G$,
of the optimal $z$-to-$v$ paths in $G$ for the traversal
$(\alpha_i,\beta_i)$.
With each node, $v$, store the bicriterion weight, $(x,y)^{z,{\rm out}}_{i}$,
of the $z$-to-$v$ path in $T^{\rm out}_{z,i}$.
\item
For each charging station, $z$, and each
traversal mode, $(\alpha_j,\beta_j)$, use the (reverse) Dijkstra-type
algorithm of
Section~\ref{sec:modes} to find the tree, $T^{\rm in}_{z,j}$, that is the
union, for all $v$ in $G$,
of the optimal $v$-to-$z$ paths in $G$ for the traversal
$(\alpha_j,\beta_j)$.
With each node, $v$, store the bicriterion weight, $(x,y)^{z,{\rm in}}_{j}$,
of the $v$-to-$z$ path in $T^{\rm in}_{z,j}$.
\item
For each pair of charging stations, $u$ and $w$, and,
for each node $v$ in $G$, and each pair of indices
$i,j=1,2,\ldots,c$,
compute the two-phase score,
\[
(x,y)^{u,v,w}_{i,j} = (x,y)^{u,{\rm out}}_i + (x,y)^{w,{\rm in}}_j,
\]
where ``$+$'' is component-wise addition.
\item
For each pair of charging stations, $u$ and $w$,
search all the $(x,y)^{u,v,w}_{i,j}$ values to find an optimal pair according
to the user's desired goals, to go from $u$ to $w$,
such as $x\le X$ and $y\le Y$ for given values of $X$ and $Y$.
Create a ``super edge,'' $e$, from $u$ to $w$, and label it with this
$(x,y)$ weight.
\item
Create a graph, $G'$, whose vertices are charging stations and whose
edges are the super edges created in the previous step.
For each such super edge, $e$, with weight, $(x,y)$, replace this
weight with the weight
\[
w(e)= x + {\rm charge}(C-y),
\]
where ${\rm charge}(E)$ is the time needed to charge the
battery to add $E$ units of energy capacity (and recall that $C$ is
the capacity of the battery).
\item
Use Dijkstra's algorithm to find a shortest duration path from $s$ to
$t$ in $G'$.
\end{enumerate}
We illustrate this algorithm in Figure~\ref{fig:stations}.
\begin{figure}
\caption{An illustration of the algorithm for incorporating charging stations.
We consider $s$ and $t$ to be stations, then run the two-phase optimization
algorithm between all the stations. This gives us the graph, $G'$, where
edge weights are now just driving time, since we know at this point which
stations can be driven between without depleting the battery (and we
always fully charge the battery at each charging station). Once we have the
graph, $G'$, we then do one more call to Dijkstra's algorithm to find
the shortest path from $s$ to $t$.
}
\label{fig:stations}
\end{figure}
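Assuming the per-station trees from steps 1 and 2 have already been computed, with \texttt{out\_pairs[z][i][v]} and \texttt{in\_pairs[z][j][v]} holding the bicriterion weights described above, and assuming a user-supplied \texttt{charge(E)} function giving the time needed to add $E$ units of energy, steps 3--6 can be sketched as follows (as an illustration, between each pair of stations we select the feasible two-phase combination with the smallest driving time).
\begin{verbatim}
import heapq

def charging_route(stations, vertices, out_pairs, in_pairs, C, charge, s, t):
    """Fastest s-to-t duration with a full recharge at every stop."""
    # Steps 3-4: best feasible two-phase weight for each ordered station pair.
    super_edges = {}                  # (u, w) -> duration x + charge(C - y)
    for u in stations:
        for w in stations:
            if u == w:
                continue
            best = None
            for v in vertices:
                for op in out_pairs[u].values():
                    for ip in in_pairs[w].values():
                        if v not in op or v not in ip:
                            continue
                        x = op[v][0] + ip[v][0]
                        y = op[v][1] + ip[v][1]
                        if y <= C and (best is None or x < best[0]):
                            best = (x, y)
            if best is not None:      # Step 5: add the recharging time at w
                super_edges[(u, w)] = best[0] + charge(C - best[1])
    # Step 6: Dijkstra on G' over the scalar super-edge durations.
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for (a, w), cost in super_edges.items():
            if a == u and d + cost < dist.get(w, float("inf")):
                dist[w] = d + cost
                heapq.heappush(heap, (d + cost, w))
    return dist.get(t)
\end{verbatim}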
Incidentally, if there are negative-weight edges in the graph, but no
negative-weight cycles (ignoring charging stations), then we would
replace the Dijkstra-type algorithms used in Steps~1 and~2 with
Bellman-Ford-type algorithms.
Let us analyze
the running time of this algorithm.
To compute all the trees of the form
$T^{z,{\rm out}}_i$
and
$T^{z,{\rm in}}_j$, using Dijkstra's algorithm, takes $O(cd(n\log n+m))$ time.
The time to compute the optimal $(x,y)$ value for each super edge is
$O(c^2d^2n)$, but in practice we only need to consider each pair of
charging stations, $u$ and $w$, such that $w$ is reachable from $u$
with a fully charged battery. So the $d^2$ term in this bound might
be overly pessimistic.
Finally, the concluding run of Dijkstra's algorithm on $G'$ takes at most $O(d^2)$
time, but this is dominated by the running times of the other steps.
So the total running time of this algorithm is at most
$O(c^2d^2n + cd(n\log n+m))$, assuming no negative-weight edges (other
than charging stations).
Note that
if $c$ and $d$ are fixed constants independent of $n$ and $m$, then
this running time is $O(n\log n + m)$, which is asymptotically the
same as doing a single Dijkstra-like computation with a single-phase
optimization criterion.
If there are negative-weight edges, but no negative-weight cycles, then
replacing the Dijkstra-type algorithms in Steps~1 and~2 with Bellman-Ford-type
algorithms increases the running time
to be $O(c^2d^2n + cdnm)$, which becomes asymptotically
equal to that of a single Bellman-Ford-type
computation, i.e., $O(nm)$, if $c$ and $d$ are fixed constants.
\section{Experiments}
\newcolumntype{R}{>{\centering\arraybackslash}X}
\renewcommand{\arraystretch}{1.2}
To empirically
measure the performance of our algorithms, we tested them
using road networks for several U.S. states from the TIGER/Line
data sets~\cite{tiger}, as prepared for the 9\textsuperscript{th} DIMACS
Implementation Challenge~\cite{dimacs}.
These road networks
are undirected, with each edge (road segment)
characterized as belonging to one of
four general classes: highway, primary major road, secondary major road, or
local road.
For each
road segment of a given class, we consider $c=3$ different driving styles
for traversing an edge of that class, allowing for three different speeds
at which it can be traveled, in order to capture both lower and upper
speed limits inherent to all roads of a certain class. We derived these speeds
based on the guidelines presented in the road design manual for the state of
Florida~\cite{greenbook} (the ``Florida greenbook''). For these speed values, see
Table~\ref{driving_params_t}.
\begin{table}[hbt]
\begin{tabularx}{0.48\textwidth}{|R|c|c|c|}
\hline \multicolumn{2}{|c|}{\multirow{2}{*}{Road type}} & Speed & Energy Consumption \\
\multicolumn{2}{|c|}{\multirow{2}{*}{}} & {[}mph{]} & {[}Wh / mile{]} \\
\hline \multirow{3}{*}{Highway} & fast & 70 & 378 \\
\cline{2-4} & moderate & 60 & 329 \\
\cline{2-4} & slow & 50 & 291 \\
\hline
\multirow{3}{*}{\parbox[t]{1cm}{Primary main \\ road}} & fast & 70 & 378 \\
\cline{2-4} & moderate & 55 & 308 \\
\cline{2-4} & slow & 40 & 258 \\
\hline \multirow{3}{*}{\parbox[t]{1cm}{Secondary main \\ road}} & fast & 60 & 329 \\
\cline{2-4} & moderate & 45 & 275 \\
\cline{2-4} & slow & 35 & 221 \\
\hline \multirow{3}{*}{\parbox[t]{1cm}{Local \\ road}} & fast & 30 & 202 \\
\cline{2-4} & moderate & 25 & 199 \\
\cline{2-4} & slow & 20 & 197 \\
\hline
\end{tabularx}
\caption{Driving parameters.}
\label{driving_params_t}
\end{table}
Although our algorithms can accommodate elevation changes and even the
negative energy consumption that comes from regenerative braking,
the data sets in the TIGER/Line collection
do not include elevation information; hence, for the sake of simplicity,
we assumed in our tests that all roads lie on a flat surface.
Extending our testing regime to include elevation data would change some
of the weight pairs on some edges in hilly terrains, and would allow for
including the second-order effect of elevation, but
it would not significantly change the results for reasonably flat terrains.
Moreover, the main goal of our tests was to determine the effectiveness of
the two-phase strategy, for which the TIGER/Line data sets were sufficient.
In particular,
in order to estimate energy consumption for each edge segment, we
used the provided edge length and estimated energy consumption based on the
data for the Tesla Model S with 85 kWh battery~\cite{tesla,tesla2}
and air conditioning / heating turned on (see also Figure~\ref{fig-tesla-mph}).
The speed/energy consumption combinations are shown in Table~\ref{driving_params_t}.
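As an illustration of how per-edge (time, energy) weights can be assembled from Table~\ref{driving_params_t} (the exact preprocessing in our implementation may differ in details, so this is only a sketch), each road segment of a given class and length yields one weight pair per driving style:
\begin{verbatim}
# Speeds in mph and consumption in Wh/mile, per road class and style
# (values copied from the "Driving parameters" table).
PARAMS = {
    "highway":   {"fast": (70, 378), "moderate": (60, 329), "slow": (50, 291)},
    "primary":   {"fast": (70, 378), "moderate": (55, 308), "slow": (40, 258)},
    "secondary": {"fast": (60, 329), "moderate": (45, 275), "slow": (35, 221)},
    "local":     {"fast": (30, 202), "moderate": (25, 199), "slow": (20, 197)},
}

def edge_weights(road_class, length_miles):
    """(time [s], energy [Wh]) weight pairs, one per driving style."""
    weights = {}
    for style, (mph, wh_per_mile) in PARAMS[road_class].items():
        weights[style] = (length_miles / mph * 3600.0,
                          length_miles * wh_per_mile)
    return weights
\end{verbatim}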
For the two-phase algorithm from Section~\ref{two_phase_sec}, we considered
three driving styles:
\begin{itemize}
\item emphasize smaller driving time
\item emphasize smaller energy consumption
\item balance energy consumption and driving time.
\end{itemize}
The preference pairs characterizing such paths are shown in Table~\ref{path_type_t}.
\begin{table}[h]
\begin{tabularx}{0.48\textwidth}{|R|c|c|}
\hline Path type & $\alpha$ (time coeff.) & $\beta$ (energy coeff.) \\
\hline Fast & 0.8 & 0.2 \\
\hline Balanced & 0.5 & 0.5 \\
\hline Energy-saving & 0.2 & 0.8 \\
\hline
\end{tabularx}
\caption{Path types.}
\label{path_type_t}
\end{table}
\begin{table}[h!]
Rhode Island ($n=53658$, $m=69213$):
\begin{tabularx}{0.48\textwidth}{|c|c|c|R|c|}
\hline Capacity & \multicolumn{2}{c|}{Reachable} & \multicolumn{2}{c|}{Two-phase algorithm} \\
\cline{2-5} {[}Wh{]} & Nodes & \% $n$ & Reachability & Longer \% \\
\hline 1000 & 2291 & 4.27 \% & 100 \% & 0.36 \% \\
\hline 2000 & 3580 & 6.67 \% & 100 \% & 0.37 \% \\
\hline 4000 & 9824 & 18.31 \% & 99.90 \% & 1.81 \% \\
\hline 6000 & 23482 & 43.76 \% & 99.40 \% & 2.33 \% \\
\hline 8000 & 44815 & 83.52 \% & 99.69 \% & 3.07 \% \\
\hline
\end{tabularx}
Alaska ($n=69082$, $m=78100$):
\begin{tabularx}{0.48\textwidth}{|c|c|c|R|c|}
\hline Capacity & \multicolumn{2}{c|}{Reachable} & \multicolumn{2}{c|}{Two-phase algorithm} \\
\cline{2-5} {[}Wh{]} & Nodes & \% $n$ & Reachability & Longer \% \\
\hline 1000 & 2824 & 4.09 \% & 100 \% & 0.29 \% \\
\hline 2000 & 7837 & 11.34 \% & 100 \% & 0.18 \% \\
\hline 4000 & 9497 & 13.75 \% & 99.99 \% & 0.02 \% \\
\hline 6000 & 11306 & 16.37 \% & 99.85 \% & 0.35 \% \\
\hline 8000 & 12129 & 17.56 \% & 99.99 \% & 0.25 \% \\
\hline 10000 & 13335 & 19.30 \% & 99.50 \% & 0.80 \% \\
\hline 12000 & 17658 & 25.56 \% & 99.56 \% & 2.36 \% \\
\hline
\end{tabularx}
Delaware ($n=49109$, $m=60512$):
\begin{tabularx}{0.48\textwidth}{|c|c|c|R|c|}
\hline Capacity & \multicolumn{2}{c|}{Reachable} & \multicolumn{2}{c|}{Two-phase algorithm} \\
\cline{2-5} {[}Wh{]} & Nodes & \% $n$ & Reachability & Longer \% \\
\hline 1000 & 3970 & 8.08 \% & 100 \% & 0.53 \% \\
\hline 2000 & 12249 & 24.94 \% & 100 \% & 1.43 \% \\
\hline 4000 & 18154 & 36.97 \% & 99.99 \% & 0.09 \% \\
\hline 6000 & 19875 & 40.47 \% & 99.98 \% & 0.18 \% \\
\hline 8000 & 21252 & 43.28 \% & 99.98 \% & 0.10 \% \\
\hline 10000 & 23113 & 47.06 \% & 99.84 \% & 0.13 \% \\
\hline 12000 & 26656 & 54.28 \% & 99.87 \% & 0.28 \% \\
\hline 14000 & 28783 & 58.61 \% & 99.87 \% & 0.37 \% \\
\hline 16000 & 31381 & 63.90 \% & 99.80 \% & 0.29 \% \\
\hline
\end{tabularx}
District of Columbia ($n=9559$, $m=14909$):
\begin{tabularx}{0.48\textwidth}{|c|c|c|R|c|}
\hline Capacity & \multicolumn{2}{c|}{Reachable} & \multicolumn{2}{c|}{Two-phase algorithm} \\
\cline{2-5} {[}Wh{]} & Nodes & \% $n$ & Reachability & Longer \% \\
\hline 1000 & 3370 & 35.25 \% & 99.97 \% & 3.20 \% \\
\hline 2000 & 8353 & 87.39 \% & 99.96 \% & 4.74 \% \\
\hline 4000 & 9522 & 99.61 \% & 100 \% & 0.76 \% \\
\hline
\end{tabularx}
\caption{Quality of the two-phase algorithm. Here, we use $n$ to denote
the number of vertices and $m$ to denote the number of edges in the underlying graph.}
\label{quality_t}
\end{table}
\begin{figure}
\caption{Optimal reachability for small capacities.}
\label{reachability_time_fig}
\end{figure}
\subsection{Quality of Paths}
As we argue above, the real-world goal of people driving electric
vehicles is to find a path that leads to the destination in the smallest amount
of time while ensuring that the battery stays at least partially charged at all
points along the way~\cite{Franke201356,APPS474,GrahamRowe2012140}.
To measure the quality of the two-phase
bicriterion algorithm of Section~\ref{two_phase_sec}, we compared the paths it
returns against the optimal paths (that arrive at reachable destinations in
shortest time) found by the pseudo-polynomial time algorithm
of Section~\ref{pseudo_polynomial_sec}, where we set $N$ to be the capacity
(in Wh)
of the battery.
Due to the time complexity needed for finding optimal paths
using the vertex-labeling algorithm, we were only able to
compare the two algorithms on smaller graphs (with $n\leq 100000$), representing
small states (like Rhode Island, Delaware or the District of Columbia) or
large states with sparse road network (Alaska).
In addition,
due to time constraints imposed by the slow running time of the vertex-labeling
algorithm, we also did not consider placing charging stations in the graphs
for these comparison tests.
The results are shown in
Table~\ref{quality_t} and Figure~\ref{reachability_time_fig},
comparing the paths found by our algorithm
with the optimal paths found by the vertex-labeling algorithm.
Due to high running times for the vertex-labeling algorithm (which depend
in a pseudo-polynomial fashion on battery capacity), we restricted
battery capacity to values much smaller than the actual 60 kWh (or 85 kWh) for
the Tesla Model S. These capacities are shown in the first column
of Table~\ref{quality_t}.
In the second and third column, we show the number of nodes reachable by
the vertex-labeling algorithm,
both in absolute numbers and as a percentage of all
nodes in the network. The next column shows the percentage of the (optimally)
reachable nodes that can be reached by the two-phase algorithm. The final
column depicts the average slowdown of the paths computed by the
two-phase algorithm relative to the optimal paths.
\begin{table}
California ($n=1613325$, $m=1989149$):
\begin{tabularx}{0.48\textwidth}{|c|R|c|R|}
\hline Capacity {[}Wh{]} & Chargers & Reachability & Time {[}s{]} \\
\hline 60000 & 0 & 55.2 \% & 12.80 \\
\hline 60000 & 1 & 56.2 \% & 24.63 \\
\hline 60000 & 2 & 56.2 \% & 33.37 \\
\hline 60000 & 3 & 95.3 \% & 53.98 \\
\hline 60000 & 4 & 96.2 \% & 73.04 \\
\hline 60000 & 5 & 97.6 \% & 91.39 \\
\hline 85000 & 0 & 70.7 \% & 21.38 \\
\hline 85000 & 1 & 77.0 \% & 28.26 \\
\hline 85000 & 2 & 98.3 \% & 43.01 \\
\hline
\end{tabularx}
Alaska ($n=69082$, $m=78100$):
\begin{tabularx}{0.48\textwidth}{|c|R|c|R|}
\hline Capacity {[}Wh{]} & Chargers & Reachability & Time {[}s{]} \\
\hline 60000 & 0 & 29.2 \% & 0.48 \\
\hline 60000 & 2 & 39.5 \% & 1.01 \\
\hline 60000 & 5 & 40.8 \% & 2.03 \\
\hline 60000 & 13 & 40.9 \% & 6.94 \\
\hline 60000 & 14 & 43.3 \% & 7.70 \\
\hline 60000 & 15 & 47.7 \% & 8.59 \\
\hline 85000 & 0 & 43.6 \% & 0.48 \\
\hline 85000 & 2 & 47.6 \% & 1.11 \\
\hline 85000 & 13 & 47.7 \% & 7.86 \\
\hline 85000 & 15 & 47.8 \% & 9.84 \\
\hline
\end{tabularx}
Montana ($n=547028$, $m=670443$):
\begin{tabularx}{0.48\textwidth}{|c|R|c|R|}
\hline Capacity {[}Wh{]} & Chargers & Reachability & Time {[}s{]} \\
\hline 60000 & 0 & 88.3 \% & 9.90 \\
\hline 60000 & 1 & 88.4 \% & 12.83 \\
\hline 60000 & 2 & 96.1 \% & 18.46 \\
\hline 60000 & 3 & 96.7 \% & 22.39 \\
\hline 60000 & 6 & 97.5 \% & 39.00 \\
\hline 60000 & 7 & 97.9 \% & 57.42 \\
\hline 85000 & 0 & 97.0 \% & 9.93 \\
\hline 85000 & 1 & 97.3 \% & 14.01 \\
\hline 85000 & 2 & 98.2 \% & 20.33 \\
\hline
\end{tabularx}
Texas ($n=2073870$, $m=2584159$):
\begin{tabularx}{0.48\textwidth}{|c|R|c|R|}
\hline Capacity {[}Wh{]} & Chargers & Reachability & Time {[}s{]} \\
\hline 60000 & 0 & 47.2 \% & 22.83 \\
\hline 60000 & 1 & 49.8 \% & 26.55 \\
\hline 60000 & 2 & 56.1 \% & 49.86 \\
\hline 60000 & 3 & 57.9 \% & 64.33 \\
\hline 60000 & 4 & 58.4 \% & 89.50 \\
\hline 60000 & 5 & 69.2 \% & 113.34 \\
\hline 60000 & 7 & 69.3 \% & 154.51 \\
\hline 60000 & 9 & 71.2 \% & 190.46 \\
\hline 85000 & 0 & 68.7 \% & 28.73 \\
\hline 85000 & 1 & 75.1 \% & 35.10 \\
\hline 85000 & 2 & 80.6 \% & 58.75 \\
\hline 85000 & 3 & 82.4 \% & 86.36 \\
\hline 85000 & 5 & 94.9 \% & 138.59 \\
\hline
\end{tabularx}
\caption{Performance of the two-phase algorithm.}
\label{performance_t}
\end{table}
\begin{table}
Nevada ($n=261155$, $m=311043$):
\begin{tabularx}{0.48\textwidth}{|c|R|c|R|}
\hline Capacity {[}Wh{]} & Chargers & Reachability & Time {[}s{]} \\
\hline 60000 & 0 & 55.7 \% & 2.87 \\
\hline 60000 & 1 & 63.2 \% & 4.81 \\
\hline 60000 & 2 & 67.9 \% & 6.43 \\
\hline 60000 & 4 & 81.9 \% & 11.62 \\
\hline 60000 & 10 & 92.6 \% & 34.06 \\
\hline 85000 & 0 & 80.6 \% & 3.44 \\
\hline 85000 & 1 & 92.6 \% & 5.99 \\
\hline
\end{tabularx}
\caption{Performance of the two-phase algorithm (continued).}
\label{performance_t2}
\end{table}
\subsection{Performance}
As mentioned above, due to the extremely large running time of the optimal
pseudo-polynomial algorithm (for the largest instances our runs
exceeded 24 hours), we
were forced to restrict our qualitative
testing to road networks of small states, and use
unrealistically small battery capacities. In this subsection, we focus on the
performance of the two-phase algorithm, which, thanks to its superior time
complexity, allows us to meet the following goals:
\begin{itemize}
\item Use actual capacities of Tesla Model S (60/85 kWh).
\item Include charging stations.
\item Test the algorithm on larger graphs.
\end{itemize}
Under the above assumptions, we measured the running time of the algorithm,
as well as the reachability percentage, measured as the ratio of the number of
feasible paths between randomly chosen origin-destination pairs to the total
number of pairs tested (in each case, we tested 1000 pairs). The results are
summarized in Table~\ref{performance_t}, Table~\ref{performance_t2},
Figure~\ref{time_fig} and Figure~\ref{reachability_charge_fig}.
The times shown in the last column
are the average durations of a single execution of the two-phase algorithm.
Charging stations were placed at randomly selected vertices. Only instances
that actually increased reachability are shown.
\begin{figure}
\caption{Dependence of running time on the number of charging stations.}
\label{time_fig}
\end{figure}
\begin{figure}
\caption{Dependence of reachability on the number of charging stations.}
\label{reachability_charge_fig}
\end{figure}
\subsection{Discussion}
Tests were implemented in \textsc{C++} and carried out on a PC with a
2.2 GHz CPU, 1066 MHz
bus, and 4 GB RAM running Linux.
It is evident that the two-phase algorithm finds paths to almost all reachable
destinations, with the paths being only slightly slower
(taking more time) than the optimal ones.
Our results were obtained using the following procedure:
for each state, we randomly chose a starting position and 1000 destinations.
This gave us 1000 origin-destination pairs, on which we then tested the
algorithms described above. The resolution of our algorithms was one second
(for time) and one Wh (for energy).
Our implementation of the two-phase algorithm is straightforward. We did not
optimize it for running time and we deliberately ran it
on a relatively old PC, and, admittedly, this shows in the results.
Even then, the algorithm was able to compute paths within several dozens of
seconds. Since the number of charging stations is the main factor in running
time, one optimization would be to precompute best paths between all pairs of
charging stations (which is feasible, as the number of charging stations is
small and they are fixed features of a road network). The running time of the
algorithm would then be reduced to the case of no charging stations.
As the main component of our procedure is Dijkstra's shortest path algorithm,
another straightforward improvement would be to incorporate
some of the existing approaches~\cite{DBLP:conf/dfg/DellingSSW09,DBLP:conf/wea/DellingGPW11}
aimed at speeding up Dijkstra's algorithm.
\section{Conclusion}
We have presented a two-phase approach for finding good paths in
bicriterion networks, and we have demonstrated that our algorithms are both
fast and effective for finding good routes for electric vehicles.
In particular, we have shown empirically that the two-phase
algorithm can reach over 99\% of the vertices that are reachable in a road network
by any energy-feasible route, while its paths are only
slightly slower on average than the optimal paths found by the inefficient
vertex-labeling algorithm.
Moreover, we believe that two-phase routes
are easier for people to follow, since, in addition
to the route they plan to take, they only need to remember two
different driving styles and the point in the route where they transition from
the first driving style to the second. Of course, if $k$ charging stations
are involved, the route may require $2k-1$ style transitions. This is usually not a
problem, since common trips tend to use a small number of charging stations
located far apart.
As possible future work, it would be interesting to test the two-phase
approach for finding good delivery routes for electric vehicles that have
multiple destinations.
\subsection*{Acknowledgments}
This work was supported in part by the NSF, under grants 1011840
and 1228639,
and by the Office
of Naval Research, under grant N00014-08-1-1015.
We would like to thank David Eppstein and Amelia Regan for several helpful
communications regarding the topics of this paper.
\end{document}
\begin{document}
\begin{center}
\textbf{CRITICALITY OF LAGRANGE MULTIPLIERS\\IN EXTENDED NONLINEAR OPTIMIZATION}
\end{center}
\begin{center}
HONG DO\footnote{Department of Mathematics, Wayne State University, Detroit, Michigan, 48202, USA ([email protected]). Research of this author was partly supported by the USA National Science Foundation under grants DMS-1512846 and DMS-1808978, and by the USA Air Force Office of Scientific Research grant \#15RT04.}, BORIS S. MORDUKHOVICH\footnote{Department of Mathematics, Wayne State University, Detroit, Michigan, 48202, USA ([email protected]). Research of this author was partly supported by the USA National Science Foundation under grants DMS-1512846 and DMS-1808978, and by the USA Air Force Office of Scientific Research grant \#15RT04.} and M. EBRAHIM SARABI\footnote{Department of Mathematics, Miami University, Oxford, Ohio, 45056, USA ([email protected]).}
\end{center}
\providecommand{\Abstract}[1]{\textbf{Abstract. }#1}
\Abstract{The paper is devoted to the study and applications of criticality of Lagrange multipliers in variational systems, which are associated with the class of problems in composite optimization known as extended nonlinear programming (ENLP). The importance of both ENLP and the concept of multiplier criticality in variational systems has been recognized in theoretical and numerical aspects of optimization and variational analysis, while the criticality notion has never been investigated in the ENLP framework. We present here a systematic study of critical and noncritical multipliers in a general variational setting that covers, in particular, KKT systems in ENLP with establishing their verifiable characterizations as well as relationships between noncriticality and other stability notions in variational analysis. Our approach is mainly based on advanced tools of second-order variational analysis and generalized differentiation.}\\[1ex]
{\textbf{Keywords} Variational analysis, composite optimization, extended nonlinear programming, critical and noncritical multipliers, generalized differentiation, stability of variational systems}\\[1ex]
{\bf Mathematical Subject Classification (2000)} 90C31, 49J52, 49J53
\section{Introduction}\label{intro}
One of the major goals of this paper is to study a remarkable class of optimization problems given in the following, formally unconstrained, {\em composite format}:
\begin{equation}\label{co}
\textrm{minimize }\;\varphi(x):=\varphi_0(x)+\theta\big(\Phi(x)\big),\quad x\in\mathbb{R}^n,
\end{equation}
where $\varphi_0\colon\mathbb{R}^n\to\mathbb{R}$ is an original cost function and $\Phi\colon\mathbb{R}^n\to\mathbb{R}^m$ is a constraint mapping, both of which are twice differentiable at the reference points unless otherwise stated, and where $\theta\colon\mathbb{R}^m\to\overline{\mathbb{R}}:=(-\infty,\infty]$ is an extended-real-valued function defined for all $u\in\mathbb{R}^m$ by the formula
\begin{equation}\label{theta}
\theta(u)=\thetay(u):=\underset{y\in Y}\sup{\Big\{\inp{y}{u}-\frac{1}{2}\inp{y}{By}\Big\}}
\end{equation}
via a convex polyhedral set $Y:=\{y\in\mathbb{R}^m\arrowvert\;\inp{b_i}{y}\le\alpha_i,\;i=1,\ldots,p\}$ as well as an $m\times m$ positive-semidefinite and symmetric matrix $B$.
Note that the unconstrained composite format \eqref{co} gives us a convenient representation of the constrained optimization problem to minimize the cost function $\varphi_0(x)$ subject to the inclusion constraint $\Phi(x)\in\Theta:=\{u\in\mathbb{R}^m|\;\theta(u)<\infty\}$. In particular, conventional nonlinear programs (NLPs) with $s$ inequality constraints and $m-s$ equality constraints described by ${\cal C}^2$-smooth functions can be written in the composite format \eqref{co}, where $\theta:=\delta_\Theta$ is the indicator function of the polyhedron $\Theta:=\mathbb{R}^s_-\times\{0\}^{m-s}$ that is equal to $0$ on $\Theta$ and to $\infty$ otherwise.
Problems of the ENLP type \eqref{co} with $\theta$ given by \eqref{theta} were introduced by Rockafellar \cite{r} under the name of {\em extended nonlinear programs} (ENLPs). It has been realized over the years that ENLPs in this form provide a suitable framework for developing both theoretical and computational aspects of optimization in broad classes of constrained problems that include stochastic programming, robust optimization, etc. The special expression (\ref{theta}) for the extended-real-valued function $\theta$, known as the {\em dualizing representation} or the {\em piecewise linear-quadratic penalty}, is significant for the theory and applications of Lagrange multipliers in the Karush-Kuhn-Tucker (KKT) systems associated with the ENLPs under consideration.
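To illustrate the flexibility of \eqref{theta}, we mention a few standard specializations, which can be verified directly from the definition (the particular choices of $Y$ and $B$ below are made only for illustration):
\begin{equation*}
\begin{aligned}
Y=\mathbb{R}^m_+,\;B=0:&\quad \thetay(u)=\underset{y\ge 0}\sup\,\inp{y}{u}=\delta_{\mathbb{R}^m_-}(u),\\
Y=[0,\rho]^m,\;B=0:&\quad \thetay(u)=\rho\sum_{i=1}^m\max\{u_i,0\},\\
Y=\mathbb{R}^m,\;B=I:&\quad \thetay(u)=\frac{1}{2}\|u\|^2,
\end{aligned}
\end{equation*}
which correspond, respectively, to hard inequality constraints $\Phi(x)\le 0$, to a piecewise linear penalty, and to a quadratic penalty.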
It is not hard to check (see more details in Section~\ref{comp-opt}) that KKT systems associated with local optimal solutions to ENLPs
are included in the following more general class of {\em variational systems} of the {\em subdifferential type}
\begin{equation}\label{VS}
\Psi(x,\lambda):=f(x)+\nabla\Phi(x)^*\lambda=0,\;\lambda\in\partial\theta\big(\Phi(x)\big)\;\mbox{ with }\;\theta=\thetay,
\end{equation}
where $f\colon\mathbb{R}^n\rightarrow\mathbb{R}^n$ is a differentiable mapping while $\Phi\colon\mathbb{R}^n\rightarrow\mathbb{R}^m$ is a twice differentiable mapping in the classical sense \cite[Definition~13.1(i)]{rw},
where $\thetay$ is taken from \eqref{theta}, where $^*$ indicates the matrix transposition/adjoint operator, and where $\partial$ stands for the subdifferential of convex analysis.
The main attention of this paper is paid to a systematic study of the {\em multiplier criticality} concept (i.e., the notions of critical and noncritical Lagrange multipliers) for variational systems of type \eqref{VS} with applications to KKT systems in ENLPs.
The notions of critical and noncritical multipliers were first introduced by Izmailov \cite{iz05} for the classical KKT systems corresponding to {\em NLPs with equality constraints} described by ${\cal C}^2$-smooth functions. It has been realized from the very beginning that the presence of critical multipliers plays a {\em negative} role in numerical optimization and is largely responsible for primal slow convergence in primal-dual algorithms of the Newtonian type. Further strong developments in this direction for NLPs and related variational inequalities have been done over the years, mainly by Izmailov, Solodov, and their collaborators; see, e.g., the book \cite{is14} and the survey paper \cite{is15}, which is entirely devoted to critical multipliers. The criticality definitions in the above publications are heavily based on the specific structures of NLPs and related variational inequalities.
In \cite{ms17}, Mordukhovich and Sarabi suggested new definitions of critical and noncritical multipliers for a general class of {\em subdifferential variational systems} of type \eqref{VS}, where $\theta$ may even be a nonconvex extended-real-valued function. The given definitions in \cite{ms17} are expressed via second-order generalized differential constructions of variational analysis while reducing to those from \cite{iz05,is14} for the classical KKT systems corresponding to NLPs. Furthermore, for extended-real-valued {\em convex piecewise linear} (CPWL) functions $\theta$ in \eqref{VS}, which include \eqref{theta} when $B=0$, the definitions of critical and noncritical multipliers are expressed in \cite{ms17} entirely in terms of the problem data with the subsequent characterizations of criticality and various applications to optimization and stability problems for such systems.
The quite recent paper of the same authors \cite{ms18} contains counterparts of some major results from \cite{ms17} with developing also novel issues on criticality for variational systems described by
\begin{equation}\label{VS1}
f(x)+\nabla\Phi(x)^*\lambda=0,\;\lambda\in N_\Theta\big(\Phi(x)\big),
\end{equation}
where $f$ and $\Phi$ are the same as in \eqref{VS}, and where $N_\Theta$ is the normal cone to a {\em ${\cal C}^2$-cone reducible} set $\Theta\subset\mathbb{R}^m$. This framework covers, in particular, KKT systems associated with general problems of (nonpolyhedral) {\em conic programming}; see, e.g., \cite{bs}.\vspace*{0.05in}
The main results of the current paper extend those from \cite{ms17}, obtained for CPWL functions $\theta$, to the case of functions $\thetay$ defined in \eqref{theta}, which form a major class of extended-real-valued convex {\em piecewise linear-quadratic} functions in variational analysis; see \cite{rw} and Section~\ref{prel} below. At the same time, the new results obtained here are completely independent of those derived for the variational system \eqref{VS1} in \cite{ms18} in the case of nonpolyhedral sets $\Theta$ therein.
The basic tools of first-order and second-order generalized differentiation employed in this paper are {\em tangentially generated}, except the classical subdifferential of convex analysis. We mostly rely on the generalized differential theory in primal spaces developed by Rockafellar; see \cite{rw} and the references therein. Using these tools allows us to establish verifiable characterizations of noncritical multipliers in the general setting of \eqref{VS}, to characterize the uniqueness of Lagrange multipliers in \eqref{VS}, to ensure noncriticality for ENLPs via a new second-order optimality condition, which is employed in turn to verify the important stability property of solutions to KKT systems that is known as robust isolated calmness and is related to noncriticality. We also reveal a relationship between the isolated calmness and Lipschitz-like properties of solution maps for canonically perturbed variational systems with the piecewise linear-quadratic term \eqref{theta}.
As mentioned above, the existence of critical multipliers is a negative factor in convergence analysis, since it seems to prevent primal superlinear convergence of major primal-dual algorithms. Thus it is crucial to find verifiable conditions, expressed entirely in terms of the problem data in question, which ensure that critical multipliers corresponding to a given local minimizer do not arise. It is conjectured in \cite{m15}, based on preliminary results for NLPs, that {\em full stability} of local minimizers in the sense of \cite{lpr} {\em rules out} the appearance of {\em critical multipliers}. This conjecture was verified in \cite{ms17} for polyhedral problems of type \eqref{co} with convex piecewise linear functions $\theta$. Now we justify this conjecture in the general case of ENLPs with piecewise linear-quadratic functions $\thetay$ in form \eqref{theta}. \vspace*{0.05in}
The rest of the paper is organized as follows. In Section~\ref{prel} we present some definitions and facts from variational analysis and generalized differentiation that are broadly employed throughout the whole paper. Other variational constructions and results are recalled in those places of the subsequent sections where they are actually used.
Section~\ref{crit-def} contains basic {\em definitions} of {\em critical} and {\em noncritical multipliers} for variational systems \eqref{VS} involving piecewise linear-quadratic functions of type \eqref{theta} together with equivalent descriptions, examples, and discussions. In Section~\ref{unique} we obtain new results on the relationship between the well-recognized {\em calmness} and {\em isolated calmness} properties of multiplier maps associated with the variational systems \eqref{VS} with the piecewise linear-quadratic term \eqref{theta} and the {\em uniqueness} of Lagrange multipliers in such systems. This is certainly of independent interest, while the developed approach and results can be viewed as a preparation for the subsequent characterizations of noncritical multipliers in the variational systems under consideration.
Section~\ref{noncrit-char} plays a central role in the paper. It establishes major {\em characterizations} of {\em noncritical multipliers} for systems \eqref{VS} with $\thetay$ taken from \eqref{theta} via a novel {\em semi-isolated calmness} property for solution maps to canonical perturbations of \eqref{VS} and also via two new {\em error bounds} that are specific for the variational systems \eqref{VS} with the piecewise linear-quadratic term
\eqref{theta}.
Section~\ref{comp-opt} is devoted to noncritical multipliers in {\em KKT systems} associated with {\em ENLPs}, for which the results of the previous sections apply automatically with the specification of $\Psi$ in \eqref{VS} as the $x$-partial gradient of the appropriate Lagrangian. The main new result here, which is characteristic of the optimization framework, is a novel {\em second-order sufficient condition} for strict local minimizers, which also ensures that all the corresponding multipliers are noncritical.
In Section~\ref{full-stab} we justify, for the case of ENLPs from \eqref{co} and \eqref{theta}, the aforementioned conjecture on {\em excluding critical multipliers} corresponding to a {\em fully stable} local minimizer for the given ENLP. The proof of this result is based on characterizations of noncriticality via semi-isolated calmness obtained in Section~\ref{noncrit-char}.
The last Section~\ref{lip-stab} provides applications of the developed characterizations of noncritical multipliers for the variational systems under consideration to the study of an important stability property of solution maps to KKT systems associated with ENLPs. This property of set-valued mappings has been recently recognized as {\em robust isolated calmness}. The results obtained above allow us to characterize robust isolated calmness via the noncriticality and uniqueness of Lagrange multipliers on one side and via the new second-order optimality condition for ENLPs on the other.
Finally, we characterize the Lipschitz-like/Aubin property of solution maps to perturbed variational systems and establish its relationship with isolated calmness.
\section{Preliminaries from Variational Analysis}\langlebel{prel}
In this section we review, based on the book \cite{rw}, some basic notions of generalized differentiation in variational analysis and then recall important facts broadly used in what follows. Throughout the paper we use the standard notation of variational analysis; see \cite{m18,rw}.
Given a nonempty subset $\Omega\subset\mathbb{R}^d$ and a point $\bar{z}\in\Omega$, the (Bouligand-Severi) {\em tangent/contingent cone} $T_\Omega(\bar{z})$ to $\Omega$ at $\bar{z}$ is defined by
\begin{equation}\langlebel{tan}
T_\Omega(\bar{z}):=\mathbb{B}ig\{w\in\mathbb{R}^d\mathbb{B}ig|\;\exists\,z_k\xrightarrow{\Omega}\bar{z},\;\exists\,\alphapha_k\ge 0\;\textrm{ with }\;\alphapha_k(z_k-\bar{z})\rightarrow w\;\textrm{ as }\;k\rightarrow\infty\mathbb{B}ig\},
\end{equation}
where the symbol $z\stackrel{\Omega}{\to}\bar{z}$ indicates that $z\to\bar{z}$ with $z\in\Omega$.
For a set-valued mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^p$, define its {\em domain} and {\em graph} by, respectively,
\begin{equation*}
{\rm dom}\, F:=\big\{x\in\mathbb{R}^n\big|\;F(x)\ne\emptyset\big\}\;\mbox{ and }\;\mathrm{gph}\, F:=\big\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^p\big|\;y\in F(x)\big\}.
\end{equation*}
The {\em graphical derivative} of $F$ at $(\bar{x},\bar{y})\in\mathrm{gph}\, F$ is given by
\begin{equation}\langlebel{gr-der}
DF(\bar{x},\bar{y})(u):=\big\{v\in\mathbb{R}^p\big|\;(u,v)\in T_{\mathrm{gph}\, F}(\bar{x},\bar{y})\big\},\quad u\in\mathbb{R}^n.
\end{equation}
Next we consider an extended-real-valued function $\partialh\colon\mathbb{R}^n\to\overline{\mathbb{R}}:=(-\infty,\infty]$ with $\bar{x}\in{\rm dom}\,\partialh:=\{x\in\mathbb{R}^n|\;\partialh(x)<\infty\}$.
Given $\bar{y}\in\mathbb{R}^n$, the {\em second subderivative} of $\partialh$ at $(\bar{x},\bar{y})$ in the direction $\bar{w}$ is defined by
\begin{equation}\langlebel{ssd1}
{\mathrm d}^2\partialh(\bar x,\bar{y})(\bar{w}):=\liminf_{\substack{t{\mathrm d}n 0\\w\to\bar{w}}}{\mathrm d}frac{\partialh(\bar{x}+tw)-\partialh(\bar{x})-t\langlengle\bar{y},\,w\ranglengle}{\frac{1}{2}t^2}.
\end{equation}
When $\partialh$ is convex and proper (i.e., ${\rm dom}\,\partialh\ne\emptyset$), we use its {\em subdifferential} (i.e., the collection of subgradients) at $\bar{x}\in{\rm dom}\,\partialh$ given by
\begin{equation}\langlebel{sub}
\partialartial\partialh(\bar{x}):=\big\{v\in\mathbb{R}^n\big|\;\langle v,x-\bar{x}\rangle\le\partialh(x)-\partialh(\bar{x})\;\mbox{ for all }\;x\in\mathbb{R}^n\big\}.
\end{equation}
If $\Omega\subset\mathbb{R}^n$ is a nonempty convex set, then the {\em normal cone} to $\Omega$ at $\bar{x}\in\Omega$ is the subdifferential \eqref{sub} of its indicator function and thus is defined by
\begin{equation}\langlebel{nc-conv}
N_\Omega(\bar{x}):=\big\{v\in\mathbb{R}^n\big|\;\langle v,x-\bar{x}\rangle\le 0\;\mbox{ for all }\;x\in\Omega\big\}.
\end{equation}
The {\em critical cone} to $\Omega$ at $\bar{x}$ for $\bar{v}\in N_\Omega(\bar{x})$ is expressed via the tangent cone \eqref{tan} as
\begin{equation}\langlebel{cri3}
K_\Omega(\bar{x},\bar{v}):=T_\Omega(\bar{x})\cap\{\bar{v}\}^\partialerp
\end{equation}
with the notation $\{\bar{v}\}^\partialerp:=\big\{w\in\mathbb{R}^n\big|\;\langle w,\bar{v}\rangle=0\big\}$.
Along with \eqref{ssd1}, we employ in this paper yet another second-order generalized derivative of an extended-real-valued convex function $\partialh\colon\mathbb{R}^n\to\overline{\mathbb{R}}$ at $\bar{x}\in{\rm dom}\,\partialh$ for $\bar{v}\in\partialartial\partialh(\bar{x})$ that is defined via the graphical derivative \eqref{gr-der} of the subgradient mapping $\partialartial\partialh\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ under the name of the {\em subgradient graphical derivative} by
\begin{equation}\langlebel{sgd}
D\partialartial\partialh(\bar{x},\bar{v})(u):=D\big(\partialartial\partialh\big)(\bar{x},\bar{v})(u),\quad u\in\mathbb{R}^n.
\end{equation}
Invoking the constructions above, we now formulate the basic facts about the functions $\thetay$ taken from \eqref{theta} that are systematically exploited in the paper. The proofs of these facts can be found in \cite[Examples~11.18, 13.23 and Theorem~13.40]{rw}. Recall that the {\em horizon cone} of a nonempty set $Y\subset\mathbb{R}^m$ used below is defined by
\begin{equation*}
Y^\infty:=\big\{y\in\mathbb{R}^m\big|\;\exists\,y_k\in Y,\;\exists\,\langlembda_k{\mathrm d}n 0\;\mbox{ with }\;\langlembda_k y_k\to y\big\}.
\end{equation*}
Recall also \cite[Definition~10.20]{rw} that a function $\partialh\colon\mathbb{R}^n\to\overline{\mathbb{R}}$ is {\em piecewise linear-quadratic} if its domain ${\rm dom}\,\partialh$ can be represented as the union of finitely many convex polyhedral sets, relative to each of which $\partialh(x)$ is given by an
expression of the form $\frac{1}{2}\langle x,Ax\rangle+\langle a,x\rangle+\alpha$ for some scalar $\alpha\in\mathbb{R}$, vector $a\in\mathbb{R}^n$, and $n\times n$ symmetric matrix $A$.
\begin{Th}{\bf(properties of piecewise linear-quadratic penalties).}\langlebel{gdt} Let $\thetay$ be defined by \eqref{theta}. Then the following properties hold:
\begin{itemize}[noitemsep]
\item[\bf(i)] The function $\thetay$ is a proper, convex, piecewise linear-quadratic function with the domain
\begin{equation*}
{\rm dom}\,\thetay=\big(Y^\infty\cap\ker B\big)^*.
\end{equation*}
\item[\bf(ii)] The subdifferential \eqref{sub} of $\thetay$ is calculated by
\begin{equation}\langlebel{fo}
\partialartial\thetay(u)=\operatornamewithlimits{arg\,max}_{y\in Y}\big\{\inp{y}{u}-\frac{1}{2}\inp{y}{By}\big\}=(N_Y+B)^{-1}(u),\quad u\in\mathbb{R}^m.
\end{equation}
\item[\bf(iii)] Given any $(\bar{z},\bar\langlembda)\in\mathrm{gph}\,\partialartial\thetay$, the second subderivative \eqref{ssd1} is calculated by
\begin{equation}\langlebel{ssd}
{\mathrm d}^2\thetay(\bar{z},\bar\langlembda)(u)=2\thetak(u):=\sup_{w\in{\cal K}}\big\{2\inp{w}{u}-\inp{w}{Bw}\big\},\quad u\in\mathbb{R}^m,
\end{equation}
in the same form $\thetak(u)$ as in \eqref{theta} with the replacement of $Y$ by the critical cone ${\cal K}:=K_Y(\bar\langlembda,\bar{z}-B\bar\langlembda)$ defined via \eqref{cri3}. Furthermore, the subgradient graphical derivative \eqref{sgd} of $\thetay$ at $\bar{z}$ for $\bar \langlembda$ is represented as
\begin{equation}\langlebel{gdr}
D\partialartial\thetay(\bar{z},\bar{\langlembda})(u)=\partialartial\thetak(u),\quad u\in\mathbb{R}^m.
\end{equation}
\end{itemize}
\end{Th}
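To give the reader an orientation on the constructions of Theorem~\ref{gdt}, we include the following simple one-dimensional computation, which is not used in the subsequent development. Let $m=1$, $Y=\mathbb{R}_+$, and $B=1$, so that $\thetay(u)=\sup_{y\ge 0}\{yu-\frac{1}{2}y^2\}$ by \eqref{theta}. Then
\begin{equation*}
\thetay(u)=\frac{1}{2}\big(\max\{u,0\}\big)^2,\quad{\rm dom}\,\thetay=\mathbb{R},\quad\partialartial\thetay(u)=\big\{\max\{u,0\}\big\},
\end{equation*}
which agrees with (i) and (ii), since $Y^\infty\cap\ker B=\{0\}$ and $(N_Y+B)^{-1}(u)=\{y\ge 0|\;u-y\in N_{\mathbb{R}_+}(y)\}=\{\max\{u,0\}\}$. Furthermore, for $(\bar{z},\bar\langlembda)=(0,0)$ we get ${\cal K}=K_Y(0,0)=\mathbb{R}_+=Y$, and hence (iii) yields ${\mathrm d}^2\thetay(0,0)(u)=\big(\max\{u,0\}\big)^2$ together with $D\partialartial\thetay(0,0)(u)=\partialartial\thetak(u)=\big\{\max\{u,0\}\big\}$.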
\section{Multiplier Criticality in Piecewise Linear-Quadratic Settings}\langlebel{crit-def}
In this section we formulate the definitions of critical and noncritical multipliers corresponding to stationary points of the variational system \eqref{VS} with the piecewise linear-quadratic term \eqref{theta}, establish
an equivalent description of criticality entirely via the given data of \eqref{VS}, and then present two examples illustrating the calculation of critical and noncritical multipliers for this setting.
Given a point $\bar{x}\in\mathbb{R}^n$, define the set of {\em Lagrange multipliers} associated with $\bar{x}$ by
\begin{equation}\langlebel{lag}
\Lambda(\bar{x}):=\big\{\langlembda\in\mathbb{R}^m\big|\;\Psi(\bar{x},\langlembda)= 0,\;\langlembda\in\partialartial\thetay\big(\Phi(\bar{x})\big)\big\}.
\end{equation}
If $(\bar{x},\bar\langlembda)$ is a solution to the variational system \eqref{VS}, we clearly get $\bar\langlembda\in\Lambda(\bar{x})$. Furthermore, it is not hard to check that the inclusion $\bar\langlembda\in\Lambda(\bar{x})$ ensures that $\bar{x}$ is a {\em stationary point} of \eqref{VS} in the sense that it satisfies
the condition
\begin{equation}\langlebel{stat}
0\in f(\bar{x})+\partialartial\big(\thetay\circ\Phi\big)(\bar{x}).
\end{equation}
Suppose from now on that $\Lambda(\bar{x})\ne\emptyset$, which is ensured, e.g., by any constraint qualification condition in problems of constrained optimization. The following definitions of critical and noncritical multipliers for \eqref{VS}
are just specifications of those from \cite{ms17}, given there for general variational systems with the subsequent implementation for the case of a convex piecewise linear function $\theta$.
It is worth noticing that the function $\theta$ from \eqref{theta} with $B=0$ is convex piecewise linear, i.e., its epigraph is a convex polyhedral set, and so this case is covered by the results already established in \cite{ms17}; however, when $B\neq 0$, it is a convex piecewise linear-quadratic function, and different techniques are required to achieve similar results.
\begin{Def}{\bf(critical and noncritical multipliers in variational systems).}\langlebel{crit} Let $(\bar{x},\bar\langlembda)$ be a solution to the variational system \eqref{VS}. We say that $\bar{\langlembda}\in\Lambda(\bar{x})$ is a {\sc critical Lagrange multiplier} for \eqref{VS} corresponding to $\bar{x}$ if there exists a nonzero vector $\xi\in\mathbb{R}^n$ such that
\begin{equation}\langlebel{cm}
0\in\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*D\partialartial\thetay\big(\Phi(\bar{x}),\bar{\langlembda}\big)\big(\nabla\Phi(\bar{x})\xi\big).
\end{equation}
A given multiplier $\bar{\langlembda}\in\Lambda(\bar{x})$ is {\sc noncritical} for \eqref{VS} corresponding to $\bar{x}$ if the generalized equation \eqref{cm} admits only the trivial solution $\xi=0$.
\end{Def}
Applying the representations of Theorem~\ref{gdt} for the graphical derivative in \eqref{cm} gives us an equivalent description of critical and noncritical multipliers from Definition~\ref{crit},
expressed entirely in terms of the initial data of \eqref{VS}.
\begin{Th}{\bf(equivalent description of criticality via piecewise linear-quadratic penalties).}\langlebel{desc} Let $(\bar{x},\bar{\langlembda})$ be a solution to the variational system \eqref{VS} with the term $\thetay$ taken from \eqref{theta}. Denoting $\bar{z}:=\Phi(\bar{x})$ and ${\cal K}:=K_Y(\bar\langlembda,\bar{z}-B\bar\langlembda)$ via the critical cone \eqref{cri3}, we have that the multiplier $\bar{\langlembda}$ corresponding to $\bar{x}$ is critical for \eqref{VS} if and only if the system of relationships
\begin{equation}\langlebel{cri}
\begin{cases}
\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\eta=0,\quad\inp{\nabla\Phi(\bar{x})\xi-B\eta}{\eta}=0,\\
\nabla\Phi(\bar{x})\xi-B\eta\in{\cal K}^*,\;\mbox{ and }\;\eta\in{\cal K}
\end{cases}
\end{equation}
admits a solution $(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m$ with $\xi\ne 0$. Accordingly, $\bar\langlembda$ is a noncritical multiplier in this setting if and only if we have $\xi=0$ for any solution $(\xi,\eta)$ to \eqref{cri}.
\end{Th}
\begin{proof} To achieve the claimed equivalences, we need to calculate the graphical derivative $D\partialartial\thetay$ in \eqref{cm} for the function $\thetay$ given in \eqref{theta}. First we use formula \eqref{gdr} from Theorem~\ref{gdt}(iii), which yields
\begin{equation*}
D\partialartial\thetay(\bar{z},\bar{\langlembda})\big(\nabla\Phi(\bar{x})\xi\big)=\partialartial\thetak\big(\nabla\Phi(\bar{x})\xi\big).
\end{equation*}
On the other hand, the second expression of $\partialartial\thetak$ in \eqref{fo} of Theorem~\ref{gdt}(ii) shows that
\begin{equation*}
\partialartial\thetak\big(\nabla\Phi(\bar{x})\xi\big)=\big(N_{{\cal K}}+B\big)^{-1}\big(\nabla\Phi(\bar{x})\xi\big).
\end{equation*}
Putting these representations together, we arrive at
\begin{equation}\langlebel{desc1}
D\partialartial\thetay(\bar{z},\bar{\langlembda})\big(\nabla\Phi(\bar{x})\xi\big)=\big(N_{{\cal K}}+B\big)^{-1}\big(\nabla\Phi(\bar{x})\xi\big).
\end{equation}
Picking further any vector $\eta$ from the set on the left-hand side of \eqref{desc1} gives us therefore that
$\eta\in\big(N_{{\cal K}}+B\big)^{-1}\big(\nabla\Phi(\bar{x})\xi\big)$, and so $\nabla\Phi(\bar{x})\xi-B\eta\in N_{{\cal K}}(\eta)$. Since ${\cal K}$ is a convex cone, the latter inclusion is equivalent to the conditions
\begin{equation*}
\inp{\nabla\Phi(\bar{x})\xi- B\eta}{\eta}=0,\quad\nabla\Phi(\bar{x})\xi-B\eta\in{\cal K}^*,\quad \eta\in{\cal K}.
\end{equation*}
Finally, we substitute the obtained descriptions of $\eta\in D\partialartial\thetay(\bar{z},\bar{\langlembda})(\nabla\Phi(\bar{x})\xi)$ into \eqref{cm} and thus clearly verify both assertions of the theorem.
\end{proof}\vspace*{0.05in}
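Note that in the particular case $B=0$ the last three relationships in \eqref{cri} amount to saying that $\nabla\Phi(\bar{x})\xi\in N_{\cal K}(\eta)$, since ${\cal K}$ is a convex cone; this is consistent with the convex piecewise linear case $B=0$ discussed before Definition~\ref{crit}.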
Next we present two examples, which demonstrate how to use the descriptions of Theorem~\ref{desc} to explicitly determine critical and noncritical multipliers and illustrate in this way some characteristic features of multiplier criticality.
\begin{Example}{\bf(calculating critical and noncritical multipliers).}\langlebel{ex1}
{\rm Consider the multidimensional case of \eqref{VS} with $\thetay$ from \eqref{theta}, where $B=I_m=:I$ is the $m\times m$ identity matrix, and where the convex polyhedral set $Y$ is the nonnegative orthant in $\mathbb{R}^m$, i.e.,
\begin{equation*}
Y=\mathbb{R}^m_+:=\big\{y=(y_1,\ldots,y_m)\in\mathbb{R}^m\big|\;y_i\ge 0\;\mbox{ for all }\;i=1,\ldots,m\big\}.
\end{equation*}
Thus the function $\thetaeta_{Y,B}$ from \eqref{theta} reduces in this case to
\begin{equation*}
{\mathrm d}isplaystyle\thetaeta_{\mathbb{R}^m_+,I}(u)=\sup_{y\in\mathbb{R}^m_+}\mathbb{B}ig\{\inp{y}{u}-\frac{1}{2}\inp{y}{y}\mathbb{B}ig\},\quad u\in\mathbb{R}^m.
\end{equation*}
For any $\bar{x}\in\mathbb{R}^n$ and $\bar{z}:=\Phi(\bar{x})$, by Theorem~\ref{gdt}(ii) we have that $\langlembda\in\partialartial\thetaeta_{\mathbb{R}^m_+,I}(\bar{z})$ if and only if $\bar{z}-B\langlembda\in N_{\mathbb{R}^m_+}(\langlembda)=\mathbb{R}^m_-\cap\{\langlembda\}^\partialerp$. Denoting $\bar{z}-\langlembda$ by $\widehat{\langlembda}$, the latter inclusion is equivalent to the following system
of equations and inclusions:
\begin{equation}\langlebel{sgr}
\begin{cases}
\langlembda+\widehat{\langlembda}=\bar{z}\\
\inp{\langlembda}{\widehat{\langlembda}}=0\\
\langlembda\in\mathbb{R}^m_+\\
\widehat{\langlembda}\in\mathbb{R}^m_-
\end{cases}
\end{equation}
It is not hard to see that for each fixed $\bar{x}$ and $\bar{z}=\Phi(\bar{x})$ this system has only one solution, namely $\langlembda_i=\max\{\bar{z}_i,0\}$ and $\widehat{\langlembda}_i=\min\{\bar{z}_i,0\}$ for $i=1,\ldots,m$, which implies that the set of Lagrange multipliers has at most one element.
We now give two specific examples of mappings $f$ and $\Phi$, where one has a noncritical multiplier and the other has a critical multiplier. First, let $f(x):=x$ and $\Phi(x):=(x_1,0,\ldots,0)\in\mathbb{R}^m$ for all $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$, and let $\bar{x}:=0\in\mathbb{R}^n$. Combining \eqref{sgr} with the fact that $\Psi(\bar{x},\langlembda)=(\langlembda_1,0,\ldots,0)\in\mathbb{R}^n$ implies that the unique Lagrange multiplier is $\bar\langlembda=0$. Then we calculate the critical cone ${\cal K}=K_Y(0,\bar{z})$ in Theorem~\ref{desc} with $\bar{z}=\Phi(\bar{x})=0$ and its dual cone ${\cal K}^*$ by, respectively,
\begin{equation*}
{\cal K}=T_{\mathbb{R}^m_+}(0)\cap\{\bar{z}\}^\partialerp=\mathbb{R}^m_+\;\mbox{ and }\;{\cal K}^*={\rm span}\{\bar{z}\}+N_{\mathbb{R}^m_+}(0)=\mathbb{R}^m_-.
\end{equation*}
It follows from Theorem~\ref{desc} that the unique Lagrange multiplier $\bar\langlembda=0$ is noncritical if and only if the system of equations and inclusions
\begin{equation*}
\begin{cases}
\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\eta=0\\
\inp{\nabla\Phi(\bar{x})\xi-\eta}{\eta}=0\\
\nabla\Phi(\bar{x})\xi-\eta\in\mathbb{R}^m_-\\
\eta\in\mathbb{R}^m_+
\end{cases}
\end{equation*}
admits only solution pairs $(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m$ with $\xi=0$. Denoting $\zeta:=\nabla\Phi(\bar{x})\xi-\eta$, the above system can be equivalently rewritten as
\begin{equation}\langlebel{crsys}
\begin{cases}
\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\eta=0\\
\nabla\Phi(\bar{x})\xi-\eta-\zeta=0\\
\inp{\zeta}{\eta}=0\\
\zeta\in\mathbb{R}^m_-\\
\eta\in\mathbb{R}^m_+.
\end{cases}
\end{equation}
Since $\nabla_x\Psi(\bar{x},\bar\langlembda)\xi=\xi$, $\nabla\Phi(\bar{x})\xi=(\xi_1,0,\ldots,0)\in\mathbb{R}^m$, and $\nabla\Phi(\bar{x})^*\eta=(\eta_1,0,\ldots,0)\in\mathbb{R}^n$ for any $\eta=(\eta_1,\ldots,\eta_m)\in\mathbb{R}^m$, it can be easily checked that the latter system has the unique solution pair $(\xi,\eta)=(0,0)$. This tells us that $\bar\langlembda=0$ is a noncritical multiplier.
Next we consider the case where $\Phi(x):=(x_1,0,\ldots,0)\in\mathbb{R}^m$ as before while $f(x):=(x_1,\ldots,x_{n-1},0)\in\mathbb{R}^n$ for all $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$. Proceeding similarly to the previous case shows that $\bar\langlembda=0$ is the unique Lagrange multiplier with the same critical cone $\mathcal{K}$. In this setting we have $\nabla_x\Psi(\bar{x},\bar\langlembda)\xi=(\xi_1,\ldots,\xi_{n-1},0)\in\mathbb{R}^n$, and therefore system \eqref{crsys} reduces to
\begin{equation*}
\begin{cases}
(\xi_1,\ldots,\xi_{n-1},0)+(\eta_1,0,\ldots,0)=0\\
\nabla\Phi(\bar{x})\xi-\eta-\zeta=0\\
\inp{\zeta}{\eta}=0\\
\zeta\in\mathbb{R}^m_-\\
\eta\in\mathbb{R}^m_+.
\end{cases}
\end{equation*}
This shows that all pairs $(\xi,\eta)$ with $\eta=0$ and $\xi=(0,\ldots,0,\xi_n)$ for $\xi_n\in\mathbb{R}$ are solutions to the above system. Thus the multiplier $\bar\langlembda=0$ is critical.
In Section~\ref{comp-opt} we revisit this example in the optimization framework; see Example~\ref{ex1a}.}
\end{Example}
The next two-dimensional example presents a simple linear-quadratic variational system of type \eqref{VS} with $\thetay$ from \eqref{theta} such that a stationary point therein is associated with both critical and noncritical Lagrange multipliers.
\begin{Example}{\bf(variational systems with both critical and noncritical multipliers corresponding to a given stationary point).}\langlebel{ex2} {\rm Specify the data of \eqref{theta} and \eqref{VS} as follows:
\begin{equation}\langlebel{kd01}
Y:=\mathbb{R}^2_+,\quad{\mathrm d}isplaystyle B:=\begin{pmatrix}1&0\\0&0\end{pmatrix},\quad f(x):=-x,\;\mbox{ and }\;\Phi(x):=(0,x^2)\;\mbox{ for }\;x\in\mathbb{R}.
\end{equation}
Thus we have in \eqref{VS} that $\Psi(x,\langlembda)=f(x)+\nabla\Phi(x)^*\langlembda=-x+2x\langlembda_2$ for any $x\in\mathbb{R}$ and $\langlembda=(\langlembda_1,\langlembda_2)\in\mathbb{R}^2$.
By Theorem~\ref{gdt}(i), we obtain $ {\rm dom}\,\thetay=\mathbb{R} \times \mathbb{R}_-$.
Since $\partial\thetay(u)=(N_Y+B)^{-1}(u)$ by Theorem~\ref{gdt}(ii), it is not hard to see that $\partial\thetay(0)=\{0\}\times\mathbb{R}_+$, and so $\Lambda(\bar{x})=\{0\}\times\mathbb{R}_+$ with $\bar{x}:=0$.
Then for any $\langlembda=(\langlembda_1,\langlembda_2)\in\Lambda(\bar{x})$ we get $\langlembda_1=0$ and $\langlembda_2\ge 0$. On the other hand, conditions \eqref{cri} from Theorem~\ref{desc} read now as
\begin{equation*}
(2\langlembda_2-1)\xi= 0,\;\;\inp{-B\eta}{\eta}=0,\;\;-B\eta\in{\cal K}^*,\;\;\eta\in{\cal K}.
\end{equation*}
This tells us that if $\langlembda_2\ne\frac{1}{2}$, the latter system admits only the solution $\xi=0$, and thus the obtained Lagrange multiplier $\langlembda$ is noncritical. In the case where $\langlembda_2=\frac{1}{2}$, this system admits nontrivial solutions $\xi$, and so the Lagrange multiplier $\langlembda=(0,\frac{1}{2})$ is critical.}
\end{Example}
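For completeness, we mention that in the setting of \eqref{kd01} the function $\thetay$ can be computed explicitly from the supremum form \eqref{theta}; this computation is given only for illustration and is not used below:
\begin{equation*}
\thetay(u_1,u_2)=\sup_{y\in\mathbb{R}^2_+}\big\{y_1u_1+y_2u_2-\frac{1}{2}y_1^2\big\}=\begin{cases}
\frac{1}{2}\big(\max\{u_1,0\}\big)^2&\mbox{if }\;u_2\le 0,\\
\infty&\mbox{if }\;u_2>0,
\end{cases}
\end{equation*}
which confirms both the domain formula ${\rm dom}\,\thetay=\mathbb{R}\times\mathbb{R}_-$ and the subdifferential $\partial\thetay(0,0)=\{0\}\times\mathbb{R}_+$ calculated in the above example.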
\section{Uniqueness of Lagrange Multipliers and Isolated Calmness}\langlebel{unique}
This section is devoted to the study of uniqueness of Lagrange multipliers corresponding to given stationary points of the variational systems \eqref{VS} with piecewise linear-quadratic penalties \eqref{theta}. This issue is definitely of its own interest and may seem to be independent of multiplier criticality. However, the methods we develop for the uniqueness study and the conditions obtained for it turn out to be closely related to the subsequent characterizations of noncritical multipliers as well as to their deeper understanding and specification.
First we recall some ``at-point" (vs.\ ``around/neighborhood") stability properties of set-valued mappings that have been recognized in variational analysis; see, e.g., \cite{dr,m18,rw} with the references and commentaries therein.
It is said that a mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ is {\em calm} at $(\bar{x},\bar{y})\in\mathrm{gph}\, F$ if there exist a constant $\ell\ge 0$ and neighborhoods $U$ of $\bar{x}$ and $V$ of $\bar{y}$ such that
\begin{equation}\langlebel{calm}
F(x)\cap V\subset F(\bar{x})+\ell\|x-\bar{x}\|\mathbb{B}\;\textrm{ for all }\;x\in U,
\end{equation}
where $\mathbb{B}$ stands for the closed unit ball of the space in question. If \eqref{calm} is replaced by
\begin{equation}\langlebel{iso-calm}
F(x)\cap V \subset\big\{\bar{y}\big\}+\ell\n{x-\bar{x}}\mathbb{B}\;\textrm{ for all }\;x\in U,
\end{equation}
then the corresponding property is known as {\em isolated calmness} of $F$ at $(\bar{x},\bar{y})$. If $\mathrm{gph}\, F$ is locally closed at $(\bar{x},\bar{y})$, the latter property admits the graphical derivative characterization
\begin{equation}\langlebel{grdcr}
DF(\bar{x},\bar{y})(0)=\{0\}
\end{equation}
known as the {\em Levy-Rockafellar criterion}; see the commentaries to \cite[Theorem~4E.1]{dr}.
Finally, $F$ enjoys the {\em robust isolated calmness} property at $(\bar{x},\bar{y})$ if in addition to \eqref{iso-calm} we have $F(x)\cap V\ne\emptyset$ for all $x\in U$. This name was coined quite recently in \cite{dsz}, while the property itself has actually been used in optimization over the years; see the discussions in \cite{dsz,ms17}.\vspace*{0.05in}
In this section we employ the calmness and isolated calmness properties for characterizations of uniqueness of Lagrange multipliers in \eqref{VS} with the piecewise linear-quadratic term \eqref{theta}. Robust isolated calmness is used in the last section of the paper.
Using the data of \eqref{VS}, consider the set-valued mapping $G\colon\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^n\times\mathbb{R}^m$ given by
\begin{equation}\langlebel{g}
G(x,\langlembda):=\begin{pmatrix}
\Psi(x,\langlembda)\\
-\Phi(x)\end{pmatrix}+\begin{pmatrix}
0\\
(\partialartial\thetay)^{-1}(\langlembda)
\end{pmatrix}\;\mbox{ for all }\;(x,\langlembda)\in\mathbb{R}^n\times\mathbb{R}^m.
\end{equation}
Then fix a point $\bar{x}\in \mathbb{R}^n$ and define the parameterized {\em multiplier map} $M_{\bar{x}}\colon\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^m$ associated with $\bar{x}$ by
\begin{equation} \langlebel{lmm}
M_{\bar{x}}(p_1,p_2):=\big\{\langlembda\in\mathbb{R}^m\big|\;(p_1,p_2)\in G(\bar{x},\langlembda)\big\},\quad(p_1,p_2)\in\mathbb{R}^n\times\mathbb{R}^m.
\end{equation}
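It follows directly from the definition of $G$ in \eqref{g} that the multiplier map \eqref{lmm} admits the explicit description
\begin{equation*}
M_{\bar{x}}(p_1,p_2)=\big\{\langlembda\in\mathbb{R}^m\big|\;\Psi(\bar{x},\langlembda)=p_1,\;\langlembda\in\partialartial\thetay\big(\Phi(\bar{x})+p_2\big)\big\},
\end{equation*}
so that $(p_1,p_2)$ plays the role of canonical perturbation parameters of the two relations in \eqref{VS} with the point $x=\bar{x}$ kept fixed.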
We have $M_{\bar{x}}(0,0)=\Lambda(\bar{x})$ for the Lagrange multiplier set \eqref{lag} of the unperturbed system \eqref{VS}.
The next theorem characterizes uniqueness of Lagrange multipliers in variational systems \eqref{VS} with the term $\thetay$ from \eqref{theta} via both calmness and isolated calmness properties of the multiplier map \eqref{lmm}, which are equivalent to each other in this case and are characterized in turn by a novel {\em dual qualification condition}.
\begin{Th}{\bf(characterizations of uniqueness of Lagrange multipliers in variational systems).}\langlebel{uniq}
Let $(\bar{x},\bar{\langlembda})$ be a solution to the variational system \eqref{VS} with $\thetay$ taken from \eqref{theta}.
Then the following properties are equivalent:
\begin{itemize}[noitemsep]
\item[\bf(i)] $\Lambda(\bar{x})=\{\bar\langlembda\}$.
\item[\bf(ii)]$M_{\bar{x}}$ is calm at $\big((0,0),\bar\langlembda\big)$ and $\Lambda(\bar{x})=\{\bar\langlembda\}$.
\item[\bf(iii)] $M_{\bar{x}}$ is isolatedly calm at $\big((0,0),\bar\langlembda\big)$.
\item[\bf(iv)]We have the dual qualification condition
\begin{equation}\langlebel{dqc}
D\partial\thetay(\bar{z},\bar\langlembda)(0)\cap \ker\nabla\Phi(\bar{x})^*=\{0\},
\end{equation}
where $D\partial\thetay(\bar{z},\bar\langlembda)$ is calculated by \eqref{desc1}.
\end{itemize}
\end{Th}
\begin{proof}
Denoting $\bar{z}:=\Phi(\bar{x})$ as above, we begin with proving the equivalence (iii)$\Longleftrightarrow$(iv). To proceed, observe that the graph of $M_{\bar{x}}$ is closed and deduce from \eqref{grdcr} that $M_{\bar{x}}$ is isolatedly calm at $((0,0),\bar\langlembda)$ if and only if
$DM_{\bar{x}}\big((0,0),\bar\langlembda\big )(0,0)=\{0\}$. It is not hard to check that $\eta\in DM_{\bar{x}}\big((0,0),\bar\langlembda\big )(0,0)$ amounts to saying that
$\eta$ is a solution to the system
\begin{equation*}
\left[\begin{array}{c}
0\\
0
\end{array}
\right]\in\left[\begin{array}{c}
\nabla\Phi(\bar{x})^*\eta\\
0
\end{array}
\right]+\left[\begin{array}{c}
0\\
D(\partialartial\thetay)^{-1}(\bar\langlembda,\bar{z})(\eta)
\end{array}
\right].
\end{equation*}
This tells us that $\eta$ is a solution to the above system if and only if
\begin{equation*}
\eta\in D\partial\thetay(\bar{z},\bar\langlembda)(0)\cap\ker\nabla\Phi(\bar{x})^*.
\end{equation*}
Combining these facts verifies the equivalence between conditions (iii) and (iv).
Next we show that (i)$\Longrightarrow$(iv). Assume on the contrary that the dual qualification condition \eqref{dqc} fails while (i) holds, and so find an element
\begin{equation*}
\eta\in D\partial\thetay(\bar{z},\bar\langlembda)(0)\cap\ker\nabla\Phi(\bar{x})^*\;\mbox{ such that }\;\eta\ne 0.
\end{equation*}
Since $\Psi(\bar{x},\bar\langlembda+t\eta)=0$ for any $t>0$, we get from $\eta\in D\partial\thetay(\bar{z},\bar\langlembda)(0)$ and \eqref{gdr} that $\eta\in\partial\thetak(0)$, and hence $-B\eta\in N_{{\cal K}}(\eta)$ by Theorem~\ref{gdt}(ii). Choosing $t$ to be sufficiently small and employing the Reduction Lemma from \cite[Lemma~2E.4]{dr} ensure the existence of a neighborhood $U$ of $(0,0)\in \mathbb{R}^m\times \mathbb{R}^m$ such that
$$
t(\eta, -B\eta)\in [\mathrm{gph}\, N_{\cal K}]\cap U=\big[\mathrm{gph}\, N_Y- (\bar\langlembda,\bar{z}-B\bar\langlembda)\big]\cap U.
$$
This in turn results in $\bar{z}-B\bar\langlembda-tB\eta\in N_Y(\bar\langlembda+t\eta)$, which yields by \eqref{fo} the inclusion $\bar\langlembda+t\eta\in\partial\thetay(\bar{z})$. Combining the latter with $\Psi(\bar{x},\bar\langlembda+t\eta)=0$ results in $\bar\langlembda+t\eta\in\Lambda(\bar{x})$. However, we have $\eta\ne 0$, and thus $\bar\langlembda+t\eta\ne\bar\langlembda$ for any $t>0$, which contradicts (i) and so verifies the claimed implication (i)$\Longrightarrow$(iv).
To show further that the isolated calmness of $M_{\bar{x}}$ at $\big((0,0),\bar\langlembda\big)$ imposed in (iii) yields (ii), it suffices to check that $\Lambda(\bar{x})=\{\bar\langlembda\}$. Indeed, the assumed isolated calmness allows us to find a neighborhood $O$ of $\bar \langlembda$ such that $M_{\bar{x}}(0,0)\cap O=\{\bar\langlembda\}$, which tells us by the convex-valuedness of $M_{\bar{x}}$ that $M_{\bar{x}}(0,0)=\{\bar\langlembda\}$. Combining the latter with $M_{\bar{x}}(0,0)=\Lambda(\bar{x})$ verifies (ii). Since (ii) obviously implies (i), we complete the proof of the theorem.
\end{proof}\vspace*{0.05in}
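Observe that the dual qualification condition \eqref{dqc} holds automatically whenever
\begin{equation*}
\ker\nabla\Phi(\bar{x})^*=\{0\},
\end{equation*}
i.e., whenever the Jacobian matrix $\nabla\Phi(\bar{x})$ has full row rank. In this classical setting Theorem~\ref{uniq} recovers the well-known uniqueness of the Lagrange multiplier, and hence \eqref{dqc} may fail only when $\nabla\Phi(\bar{x})$ is not surjective.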
The next example reveals that the dual qualification condition \eqref{dqc} is essential for the uniqueness of Lagrange multipliers in Theorem~\ref{uniq}.
\begin{Example}{\bf(nonuniqueness of Lagrange multipliers under failure of the dual qualification condition).}\langlebel{manymul} {\rm Consider the variational system \eqref{VS} with term \eqref{theta}, where $Y$ and $B$ are taken from \eqref{kd01}, while $\Phi\colon\mathbb{R}^2\to\mathbb{R}^2$ is defined by $\Phi(x_1,x_2):=(x_1,0)$ and $f\colon\mathbb{R}^2\to\mathbb{R}^2$ is defined by $f(x)=0$ for all $x\in\mathbb{R}^2$. It is shown in Example~\ref{ex2} that ${\rm dom}\,\thetay=\mathbb{R}\times \mathbb{R}_-$. Letting $\bar{x}:=(0,0)$, we get by the direct calculation that
\begin{equation*}
\partial\thetay\big(\Phi(\bar{x})\big)=\{0\}\times\mathbb{R}_+\;\mbox{ and }\;\Psi(\bar{x},\langlembda)=\nabla\Phi(\bar{x})^*\langlembda=(\langlembda_1,0),
\end{equation*}
and so $\Lambda(\bar{x})=\{0\}\times\mathbb{R}_+$, which is not a singleton.
Let us now show that the dual qualification condition fails in this setting. Having $\ker\nabla\Phi(\bar{x})^*=\{0\}\times\mathbb{R}$ and choosing $\bar\langlembda:=(0,0)$ give us the critical cone
\begin{equation*}
{\cal K}=T_Y(\bar\langlembda)\cap\big\{\Phi(\bar{x})-B\bar\langlembda\big\}^\partialerp=Y,
\end{equation*}
and so $\partial\thetak(0,0)=\{0\}\times\mathbb{R}_+$. Combining it with \eqref{gdr}, we arrive at
\begin{equation*}
\partial\thetak(0,0)\cap\ker\nabla\Phi(\bar{x})^*=D\partial\thetay(\bar{z},\bar\langlembda)(0,0)\cap\ker\nabla\Phi(\bar{x})^*=\{0\}\times\mathbb{R}_+\ne\{(0,0)\},
\end{equation*}
which demonstrates the failure of the dual qualification condition \eqref{dqc}.}
\end{Example}
\section{Characterizations of Noncritical Multipliers}\langlebel{noncrit-char}
In this section we derive major characterizations of noncritical multipliers for the piecewise linear-quadratic variational systems \eqref{VS} in terms of semi-isolated calmness and error bounds.
Using the mapping $G$ from \eqref{g}, define the {\em solution map} $S\colon\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^n\times\mathbb{R}^m$ for the {\em canonical perturbation} of system \eqref{VS} by
\begin{equation}\langlebel{s}
S(p_1,p_2):=\big\{(x,\langlembda)\in\mathbb{R}^n\times\mathbb{R}^m\big|\;(p_1,p_2)\in G(x,\langlembda)\big\}.
\end{equation}
The property of {\em semi-isolated calmness} used in \eqref{upper} below was introduced in \cite{ms17} for solution maps to general variational systems with a product structure of values as in \eqref{s}. The reader can see that for such mappings semi-isolated calmness occupies an intermediate position between the calmness and isolated calmness properties recalled above.
In what follows we use the notation ${\mathrm d}ist(x;\Omega)$ for the distance between a point $x\in\mathbb{R}^n$ and a set $\Omega\subset\mathbb{R}^n$, $\mathbb{B}_\varepsilon(x)$ for the closed ball centered at $x\in\mathbb{R}^n$ with radius $\varepsilon>0$, and
\begin{equation}\langlebel{pr}
P\partialh(x):={\rm argmin}\mathbb{B}ig\{\partialh(u)+\frac{1}{2}\|x-u\|^2\mathbb{B}ig|\;u\in\mathbb{R}^n\mathbb{B}ig\},\quad x\in\mathbb{R}^n,
\end{equation}
for the {\em proximal mapping} $P\partialh\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ associated with a function $\partialh\colon\mathbb{R}^n\to\overline{\mathbb{R}}$.
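To illustrate the proximal mapping \eqref{pr} in the simplest piecewise linear-quadratic setting (this computation serves only as an orientation and is not used in the proofs below), take $\partialh(u)=\frac{1}{2}(\max\{u,0\})^2$ on $\mathbb{R}$, i.e., $\partialh=\thetay$ with $Y=\mathbb{R}_+$ and $B=1$. A direct calculation gives us
\begin{equation*}
P\partialh(x)=x-\frac{1}{2}\max\{x,0\}=\begin{cases}
x&\mbox{if }\;x\le 0,\\
x/2&\mbox{if }\;x>0,
\end{cases}
\end{equation*}
which agrees with the relationship $P\partialh=(I+\partialartial\partialh)^{-1}$ valid for convex functions by \cite[Proposition~12.19]{rw}, since $\partialartial\partialh(u)=\{\max\{u,0\}\}$. The latter relationship is employed in the proof of Theorem~\ref{charc} below.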
\begin{Th}{\bf(major characterizations of noncritical multipliers in variational systems).}\langlebel{charc} Let $(\bar{x},\bar{\langlembda})$ be a solution to the variational system \eqref{VS} with the piecewise linear-quadratic term \eqref{theta}. Then the following conditions are equivalent:
\begin{itemize}[noitemsep]
\item[\bf(i)] The Lagrange multiplier $\bar{\langlembda}$ is noncritical for \eqref{VS} corresponding to $\bar{x}$.
\item[\bf(ii)]
There exist numbers $\varepsilon>0$, $\ell\ge 0$ and neighborhoods $U$ of $0\in\mathbb{R}^n$ and $W$ of $0\in\mathbb{R}^m$ such that for any $(p_1,p_2)\in U\times W$ the following inclusion holds:
\begin{equation}\langlebel{upper}
S(p_1,p_2)\cap\mathbb{B}_\varepsilon(\bar{x},\bar\langlembda)\subset\big[\{\bar{x}\}\times\Lambda(\bar{x})\big]+\ell\big(\|p_1\|+\|p_2\|\big)\mathbb{B}.
\end{equation}
\item[\bf(iii)] There exist numbers $\varepsilon>0$ and $\ell\ge 0$ such that the error bound estimate
\begin{equation*}\langlebel{subr}
\|x-\bar{x}\|+{\mathrm d}ist\big(\langlembda;\Lambda(\bar{x})\big)\le\ell\big(\|\Psi(x,\langlembda)\|+{\mathrm d}ist\big(\Phi(x);(\partialartial\thetay)^{-1}(\langlembda)\big)\big)
\end{equation*}
holds for any $(x,\langlembda)\in\mathbb{B}_\varepsilon(\bar{x},\bar\langlembda)$ in terms of the inverse subdifferential of $\thetay$.
\item[\bf(iv)] There are numbers $\varepsilon>0$ and $\ell\ge 0$ such that the error bound estimate
\begin{equation}\langlebel{subr3}
\|x-\bar{x}\|+{\mathrm d}ist\big(\langlembda;\Lambda(\bar{x})\big)\le\ell\big(\|\Psi(x,\langlembda)\|+\|\Phi(x)-(P\thetay)(\langlembda+\Phi(x))\|\big)
\end{equation}
holds for any $(x,\langlembda)\in\mathbb{B}_\varepsilon(\bar{x},\bar\langlembda)$ in terms of the proximal mapping $P\thetay$ from \eqref{pr}.
\end{itemize}
\end{Th}
\begin{proof} Let us first verify that (ii) implies (i). Theorem~\ref{desc} reduces it to proving that the semi-isolated calmness property in (ii) ensures that for any solution $(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m$ to the system \eqref{cri} we have $\xi=0$. Define $(x_t,\langlembda_t):=(\bar{x}+t\xi,\bar{\langlembda}+t\eta)$ for all $t>0$ and observe that
\begin{equation*}
\begin{array}{ll}
\Psi(x_t,\langlembda_t)-\Psi(\bar{x},\bar{\langlembda})&=\big(f(x_t)-f(\bar{x})\big)+\big(\nabla\Phi(x_t)-\nabla\Phi(\bar{x})\big)^*\bar{\langlembda}+t\nabla\Phi(x_t)^*\eta \\
&=t\nabla f(\bar{x})\xi+o(t)+t\big(\nabla^2\Phi(\bar{x})\xi\big)^*\bar{\langlembda}+t\nabla\Phi(\bar{x})^*\eta+o(t)\\
&=t\big(\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\eta\big)+o(t)=o(t)
\end{array}
\end{equation*}
whenever $t$ is sufficiently small. Letting $p_{1t}:=\Psi(x_t,\langlembda_t)$ and using $\Psi(\bar{x},\bar{\langlembda})=0$, we deduce from the last equality above that $p_{1t}=o(t)$. It follows in a similar way that
\begin{equation*}
\Phi(x_t)=\Phi(\bar{x})+t\nabla\Phi(\bar{x})\xi+o(t)\;\mbox{ for all small }\;t>0.
\end{equation*}
Denoting further $z_t:=\Phi(\bar{x})+t\nabla\Phi(\bar{x})\xi$, we obtain
\begin{equation*}
z_t-\Phi(x_t)=o(t)\;\mbox{ as }\;t{\mathrm d}n 0,
\end{equation*}
and therefore we get $p_{2t}=o(t)$ for $p_{2t}:=z_t-\Phi(x_t)$.
Let us now prove that $(x_t,\langlembda_t)\in S(p_{1t},p_{2t})$ for $t>0$ sufficiently small. Since $p_{1t}=\Psi(x_t,\langlembda_t)$, we only need to verify by Theorem~\ref{gdt}(ii) that
\begin{equation}\langlebel{lm-th}
\langlembda_t\in\partialartial\thetay(z_t)=(N_Y+B)^{-1}(z_t),\;\mbox{ or equivalently }\;z_t-B\langlembda_t\in N_Y(\langlembda_t).
\end{equation}
To proceed with checking \eqref{lm-th}, deduce from \eqref{cri} that
\begin{equation*}
\eta\in{\cal K}=K_Y(\bar\langlembda,\bar{z}-B\bar\langlembda)=T_Y(\bar\langlembda)\cap\{\bar{z}-B\bar{\langlembda}\}^\partialerp.
\end{equation*}
Since $\langlembda_t=\bar\langlembda+t\eta$ and $Y$ is a convex polyhedral set, we conclude that $\langlembda_t\in Y$ for all $t>0$ sufficiently small. Furthermore, it follows from \eqref{cri} that
\begin{equation*}
\nabla\Phi(\bar{x})\xi-B\eta\in{\cal K}^*=N_Y(\bar\langlembda)+\mathbb{R}(\bar{z}-B\bar{\langlembda}).
\end{equation*}
Thus there exist $\alphapha\in\mathbb{R}$ and $w\in N_Y(\bar{\langlembda})$ such that $\nabla\Phi(\bar{x})\xi-B\eta=\alphapha(\bar{z}-B\bar{\langlembda})+w$. Using this together with \eqref{cri} gives us the equalities
\begin{equation*}
0=\inp{\nabla\Phi(\bar{x})\xi-B\eta}{\eta}=\alphapha\inp{\bar{z}-B\bar{\langlembda}}{\eta}+\inp{w}{\eta}=\inp{w}{\eta}.
\end{equation*}
Recall that $ N_Y(\bar{\langlembda})=\{\sum_{i\in I(\bar{\langlembda})}\beta_i b_i|\;\beta_i\ge 0\}$, where $I(\bar\langlembda)$ stands for the set of active constraints in $Y$ at $\bar\langlembda$. It allows us to deduce from the inclusion $w\in N_Y(\bar{\langlembda})$ that there are numbers $\beta_i\ge 0$ as $i\in I(\bar{\langlembda})$ such that $w=\sum_{i\in I(\bar{\langlembda})}\beta_i b_i$, and therefore
\begin{equation*}
\sum_{i\in I(\bar{\langlembda})}\beta_i\inp{b_i}{\eta}=\inp{w}{\eta}=0.
\end{equation*}
Observe furthermore the relationships
\begin{equation*}
z_t-B\langlembda_t=\Phi(\bar{x})+t\nabla\Phi(\bar{x})\xi-B\bar{\langlembda}-tB\eta=\bar{z}-B\bar{\langlembda}+t(\nabla\Phi(\bar{x})\xi-B\eta)=(1+t\alphapha)(\bar{z}-B\bar{\langlembda})+tw,
\end{equation*}
where $1+t\alphapha>0$ for small $t>0$. Since both $\bar{z}-B\bar{\langlembda}$ and $w$ belong to $N_Y(\bar{\langlembda})$, it follows that $(1+t\alphapha)(\bar{z}-B\bar{\langlembda})+tw\in N_Y(\bar{\langlembda})$, and thus there are numbers $\tau_{it}\ge 0$ for $i\in I(\bar{\langlembda})$ such that $z_t-B\langlembda_t=\sum_{i\in I(\bar{\langlembda})}\tau_{it}b_i$. Noting that $\inp{z_t-B\langlembda_t}{\eta} = 0$ and $\inp{b_i}{\eta} \leq 0$ for all $i \in I(\bar\langlembda)$, we deduce that
\begin{equation}\langlebel{act}
\inp{b_i}{\eta} = 0 \textrm{ for all } i \in I(\bar\langlembda) \textrm{ with } \tau_{it}>0.
\end{equation}
Let us now show that
\begin{equation*}
\tau_{it}=0\;\mbox{ if }\;i\in I(\bar{\langlembda})\setminus I(\langlembda_t).
\end{equation*}
Suppose on the contrary that there is an index $i_0\in I(\bar{\langlembda})\setminus I(\langlembda_t)$ for which $\tau_{i_0t}>0$. This means that $\inp{b_{i_0}}{\bar{\langlembda}}=\alphapha_{i_0}$ and $\inp{b_{i_0}}{\langlembda_t}<\alphapha_{i_0}$. Therefore
\begin{equation*}
\inp{b_{i_0}}{\bar{\langlembda}}+t\inp{b_{i_0}}{\eta}=\inp{b_{i_0}}{\langlembda_t}<\alphapha_{i_0},
\end{equation*}
which in turn yields $\inp{b_{i_0}}{\eta}<0$, a contradiction with \eqref{act}. Thus for all $i\in I(\bar{\langlembda})\setminus I(\langlembda_t)$ we get $\tau_{it}=0$ and hence arrive at
\begin{equation*}
z_t-B\langlembda_t=\sum_{i\in I(\langlembda_t)}\tau_{it}b_i\in N_Y(\langlembda_t).
\end{equation*}
This verifies \eqref{lm-th} and thus implies that $(x_t,\langlembda_t)\in S(p_{1t},p_{2t})$. It now follows from the assumed semi-isolated calmness \eqref{upper} in (ii) that
\begin{equation*}
\n{\xi}=\frac{\n{x_t-\bar{x}}}{t}\le\frac{\ell\big(\n{p_{1t}}+\n{p_{2t}}\big)}{t},
\end{equation*}
which results in $\xi=0$ by letting $t{\mathrm d}n 0$. This tells us that $\bar{\langlembda}$ is noncritical and hence justifies implication (ii)$\Longrightarrow$(i) of the theorem.\vspace*{0.05in}
Next we prove the opposite implication (i)$\Longrightarrow$(ii). Assuming that the multiplier noncriticality in (i) holds, let us first verify the following statement.\\[1ex]
{\bf Claim:} {\em There exist numbers $\varepsilon>0$ and $\ell\ge 0$ and neighborhoods $U$ of $0\in\mathbb{R}^n$ and $W$ of $0\in \mathbb{R}^m$ such that
for any $(p_1,p_2)\in U\times W$ and $(x_{p_1p_2},\langlembda_{p_1p_2})\in S(p_1,p_2)\cap\mathbb{B}_\varepsilon(\bar{x},\bar{\langlembda})$ we have
\begin{equation} \langlebel{up1}
\n{x_{p_1p_2}-\bar{x}}\le\ell\big(\n{p_1}+\n{p_2}\big).
\end{equation}}\vspace*{-0.15in}
To justify this claim, suppose on the contrary that \eqref{up1} fails and thus for any $k\in\mathbb{N}$ find $(p_{1k},p_{2k})\in\mathbb{B}_{1/k}(0)\times\mathbb{B}_{1/k}(0)$ and $(x_k,\langlembda_k)\in S(p_{1k},p_{2k})\cap \mathbb{B}_{1/k}(\bar{x},\bar{\langlembda})$ such that
\begin{equation*}
\frac{\n{p_{1k}}+\n{p_{2k}}}{\n{x_k-\bar{x}}}\rightarrow 0\;\mbox{ as }\;k\to\infty.
\end{equation*}
Denote $t_k:=\|x_k-\bar{x}\|$ and deduce from the convergence above that $p_{1k}=o(t_k)$ and $p_{2k}=o(t_k)$. Since $\thetay$ is a convex piecewise linear-quadratic function, it follows from the proof of \cite[Theorem~11.14(b)]{rw} that $\mathrm{gph}\,\partialartial\thetay$ is a union of finitely many convex polyhedral sets. This together with \cite[Theorem~3D.1]{dr} and $\bar{z}:=\Phi(\bar{x})\in{\rm dom}\,\partialartial\thetay$ ensures the existence of a number $\ell'\ge 0$ and a neighborhood $O$ of $\bar{z}$ such that for all $z\in O\cap{\rm dom}\,\partialartial\thetay$ we have
\begin{equation}\langlebel{ulp}
\partialartial\thetay(z)\subset\partialartial\thetay(\bar{z})+\ell'\n{z-\bar{z}}\mathbb{B}.
\end{equation}
Suppose without loss of generality that $z_{k}:=p_{2k}+\Phi(x_{k})\in O$ for all $k\in\mathbb{N}$. Since $\langlembda_k\in\partialartial\thetay(z_k)$, there exist
$\langlembda\in\partialartial\thetay(\bar{z})$ and $b\in\mathbb{B}$ such that $\langlembda_k=\langlembda+\ell'\n{z_k-\bar{z}}b$. Using this along with the classical Hoffman lemma, we find a number $M\ge 0$ such that
\begin{eqnarray}\langlebel{err}
\begin{array}{ll}
\mathrm{dist}\big(\langlembda_{k};\Lambda(\bar{x})\big)&\le M\left(\n{\Psi(\bar{x},\langlembda_{k})}+{\mathrm d}ist\big(\langlembda_k;\partialartial\thetay(\bar{z})\big)\right)\\
&\le M\n{\Psi(\bar{x},\langlembda_{k})-\Psi(x_{k},\langlembda_{k})}+M \n{\Psi(x_{k},\langlembda_{k})}+\ell'\n{z_{k}-\bar{z}}\\
&\le M\rho(1+\|\langlembda_k\|)\n{x_{k}-\bar{x}}+M\n{p_{1k}}+\ell'\rho\n{x_{k}-\bar{x}}+\ell'\|p_{2k}\|,
\end{array}
\end{eqnarray}
where $\rho$ is a common calmness constant for the mappings $f$, $\Phi$, and $\nabla\Phi$ at $\bar{x}$. Since $\Lambda(\bar{x})$ is closed and convex, for each $k\in\mathbb{N}$ there exists a vector $\mu_k\in\Lambda(\bar{x})$ for which
\begin{equation*}
\frac{\|\langlembda_k-\mu_k\|}{t_k}\le M\rho(1+\|\langlembda_k\|)+M\frac{\n{p_{1k}}}{t_k}+\ell'\rho+\ell'\frac{\|p_{2k}\|}{t_k},\quad k\in\mathbb{N}.
\end{equation*}
Thus we can assume without loss of generality that
\begin{equation*}
\frac{\langlembda_k-\mu_k}{t_k}\rightarrow\widetilde{\eta}\;\mbox{ for some }\;\widetilde\eta\in\mathbb{R}^m.
\end{equation*}
By passing to a subsequence if necessary, it follows that
\begin{equation*}\langlebel{xi}
\frac{x_k-\bar{x}}{t_k}\to\xi\;\mbox{ as }\;k\to\infty\;\mbox{ with some }\;0\ne\xi\in\mathbb{R}^n.
\end{equation*}
Due to $\mu_k\in\Lambda(\bar{x})$ and the discussions above we get the equalities
\begin{equation*}
\begin{array}{ll}
o(t_k)=p_{1k}&=\Psi(x_k,\langlembda_k)=\Psi(x_k,\mu_k)-\Psi(\bar{x},\mu_k)+\nabla\Phi(x_k)^*(\langlembda_k-\mu_k)\\
&=\nabla_x\Psi(\bar{x},\mu_k)(x_k-\bar{x})+\nabla\Phi(x_k)^*(\langlembda_k-\mu_k)+o(t_k),
\end{array}
\end{equation*}
which lead us as $k\to\infty$ to the limiting condition
\begin{equation} \langlebel{pl03}
\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\widetilde{\eta}=0.
\end{equation}
It further follows from $(x_k,\langlembda_k)\in S(p_{1k},p_{2k})$ that $\langlembda_k\in\partialartial\thetay(z_k)$, which is equivalent to the inclusion $z_k-B\langlembda_k\in N_Y(\langlembda_k)$ for each $k\in\mathbb{N}$ by Theorem~\ref{gdt}(ii). Since $Y$ is a convex polyhedral set, the Reduction Lemma from \cite[Lemma~2E.4]{dr} tells us that
\begin{equation*}
z_k-B\langlembda_k-(\bar{z}-B\bar\langlembda)\in N_{\cal K}(\langlembda_k-\bar\langlembda)
\end{equation*}
for all $k\in\mathbb{N}$ sufficiently large, where ${\cal K}$ is the critical cone to $Y$ at $\bar\langlembda$ for $\bar{z}-B\bar\langlembda$ taken from Theorem~\ref{gdt}(iii).
This along with Theorem~\ref{gdt}(iii) brings us to the conclusions
\begin{equation*}
\langlembda_k-\bar\langlembda\in\partialartial\thetak(z_k-\bar{z})=D\partialartial\thetay(\bar{z},\bar{\langlembda})(z_k-\bar{z}),\;\mbox{ and so}
\end{equation*}
\begin{equation}\langlebel{pl01}
\frac{\langlembda_k-\bar\langlembda}{t_k}\in D\partialartial\thetay(\bar{z},\bar{\langlembda})\mathbb{B}ig(\frac{z_k-\bar{z}}{t_k}\mathbb{B}ig)=\partialartial\thetak\mathbb{B}ig(\frac{z_k-\bar{z}}{t_k}\mathbb{B}ig),
\end{equation}
which imply in turn that ${\mathrm d}isplaystyle\frac{z_k-\bar{z}}{t_k}\in{\rm dom}\,\partialartial\thetak$. Since ${\cal K}$ is a convex polyhedral set, it follows from Theorem~\ref{gdt}(i) that $\thetak$ is a convex piecewise linear-quadratic function. Thus \cite[Proposition~10.21]{rw} tells us that ${\rm dom}\,\partialartial\thetak={\rm dom}\,\thetak$. Employing Theorem~\ref{gdt}(i) ensures that ${\rm dom}\,\thetak$ is a closed set. Combining it with the convergence ${\mathrm d}isplaystyle \frac{z_k-\bar{z}}{t_k}\rightarrow\nabla\Phi(\bar{x})\xi$ as $k\rightarrow\infty$ yields
\begin{equation}\langlebel{pl02}
\nabla\Phi(\bar{x})\xi\in{\rm dom}\,\partialartial\thetak.
\end{equation}
Since $\mu_k\in\Lambda(\bar{x})$, we get $\mu_k\in\partialartial\thetay(\bar{z})$ and, proceeding similarly to the proof of \eqref{pl01}, arrive at
\begin{equation*}
\frac{\mu_k-\bar\langlembda}{t_k}\in\partialartial\thetak(0).
\end{equation*}
Furthermore, it follows from $\bar\langlembda\in\Lambda(\bar{x})$ and $\mu_k\in\Lambda(\bar{x})$ that $\bar\langlembda-\mu_k\in\ker\nabla\Phi(\bar{x})^*$. Using \eqref{pl02} and arguing as in the proof of
\eqref{ulp}, we find $\ell'\ge 0$ and a neighborhood $O$ of $\nabla\Phi(\bar{x})\xi$ such that
\begin{equation*}
\partialartial\thetak(u)\subset\partialartial\thetak\big(\nabla\Phi(\bar{x})\xi\big)+\ell'\n{u-\nabla\Phi(\bar{x})\xi}\mathbb{B}
\end{equation*}
for all $u\in O\cap{\rm dom}\,\partialartial\thetak$. Employing the latter together with \eqref{pl01} leads us to the relationships
\begin{eqnarray*}
\frac{\langlembda_k-\mu_k}{t_k}&=&\frac{\langlembda_k-\bar\langlembda}{t_k}+\frac{\bar\langlembda-\mu_k}{t_k}\\
&\in&\partialartial\thetak\mathbb{B}ig(\frac{z_k-\bar{z}}{t_k}\mathbb{B}ig)-\big[\mathrm{ker}\nabla\Phi(\bar{x})^*\cap\partialartial\thetak(0)\big]\\
&\subset&\partialartial\thetak(\nabla\Phi(\bar{x})\xi)+\ell'\big\|\frac{z_k-\bar{z}}{t_k}-\nabla\Phi(\bar{x})\xi\big\|\mathbb{B}-\big[\ker\nabla\Phi(\bar{x})^*\cap\partialartial\thetak(0)\big].
\end{eqnarray*}
This allows us to find, for all $k\in\mathbb{N}$ sufficiently large, a $b_k\in\mathbb{B}$ such that
\begin{equation}\langlebel{inc1}
\frac{\langlembda_k-\mu_k}{t_k}-\ell'\big\|\frac{z_k-\bar{z}}{t_k}-\nabla\Phi(\bar{x})\xi\big\|b_k\in\partialartial\thetak(\nabla\Phi(\bar{x})\xi)-\big[\ker\nabla\Phi(\bar{x})^*\cap \partialartial\thetak(0)\big].
\end{equation}
We can see that the left-hand side of inclusion \eqref{inc1} converges as $k\to\infty$ to the vector $\widetilde{\eta}$. On the other hand, the right-hand side of this inclusion is the sum of two convex polyhedral sets, and so is closed. This shows that $\widetilde\eta$ satisfies
\begin{equation}\langlebel{u67}
\widetilde{\eta}\in\partialartial\thetak(\nabla\Phi(\bar{x})\xi)-\big[\ker\nabla\Phi(\bar{x})^*\cap\partialartial\thetak(0)\big].
\end{equation}
Thus we get vectors $\eta\in\partialartial\thetak(\nabla\Phi(\bar{x})\xi)$ and $\eta'\in \ker\nabla\Phi(\bar{x})^*\cap\partialartial\thetak(0)$, which provide the representation $\widetilde{\eta}=\eta-\eta'$. It follows from the relationship \eqref{gdr} in Theorem~\ref{gdt}(iii) that $\eta\in D\partialartial\thetay(\bar{z},\bar\langlembda)(\nabla\Phi(\bar{x})\xi)$. Furthermore, employing \eqref{pl03} tells us that
\begin{equation*}
0=\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\widetilde\eta=\nabla_x\Psi(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\eta,
\end{equation*}
which contradicts the noncriticality of $\bar\langlembda$ due to $\xi\ne 0$ and thus completes the proof of the claim.\vspace*{0.05in}
To finish the verification of implication (i)$\Longrightarrow$(ii) in the theorem, take the neighborhoods $U$ and $W$ from the above claim and shrink them if necessary for the subsequent procedure. Using the claim and arguing similarly to the proof of the conditions in \eqref{err} gives us
a constant $\ell'\ge 0$ such that for any $(p_1,p_2)\in U\times W$ and any $(x_{p_1p_2},\langlembda_{p_1p_2})\in S(p_1,p_2)\cap\mathbb{B}_\varepsilon(\bar{x},\bar{\langlembda})$ we have
\begin{equation}\langlebel{pap}
{\mathrm d}ist\big(\langlembda_{p_1p_2};\Lambda(\bar{x})\big)\le\ell'\big(\n{x_{p_1p_2}-\bar{x}}+\n{p_1}+\n{p_2}\big).
\end{equation}
Combining it with \eqref{up1} allows us to find $\ell\ge 0$ such that for any $(p_1,p_2)\in U\times W$ we have
\begin{equation*}
\n{x_{p_1p_2}-\bar{x}}+{\mathrm d}ist\big(\langlembda_{p_1p_2};\Lambda(\bar{x})\big)\le\ell\big(\n{p_1}+\n{p_2}\big)
\end{equation*}
whenever $(x_{p_1p_2},\langlembda_{p_1p_2})\in S(p_1,p_2)\cap\mathbb{B}_\varepsilon(\bar{x},\bar{\langlembda})$. This clearly justifies the semi-isolated calmness property \eqref{upper} and thus finishes the proof of implication (i)$\Longrightarrow$(ii).
The equivalence between (ii) and (iii) can be verified similarly to the corresponding arguments in the proof of \cite[Theorem~4.1]{ms17}, and so we omit them here. Thus it remains to establish the equivalence between assertions (ii) and (iv) of the theorem to complete its proof.
Let us start with checking implication (iv)$\Longrightarrow$(ii). Picking $(p_1,p_2)\in\mathbb{B}_{\varepsilon}(0,0)$ and $(x,\langlembda)\in S(p_1,p_2)\cap\mathbb{B}_{\varepsilon}(\bar{x},\bar\langlembda)$ with $\varepsilon$ and $\ell$ taken from (iv), we get from the definition of $S$ that
\begin{equation}\langlebel{inc2}
\Psi(x,\langlembda)=p_1\;\mbox{ and }\;\langlembda\in\partial\thetay(\Phi(x)+p_2).
\end{equation}
It follows from \cite[Proposition~12.19]{rw} due to the convexity of $\thetay$ that $P\thetay=(I+\partial\thetay)^{-1}$, and hence the second inclusion in \eqref{inc2} is equivalent to the equality $P\thetay(\langlembda+\Phi(x)+p_2)=\Phi(x)+p_2$. Appealing now to \eqref{subr3} brings us to the estimates
\begin{eqnarray*}
\|x-\bar{x}\|+{\mathrm d}ist\big(\langlembda,\Lambda(\bar{x})\big)&\le&\ell\big(\|\Psi(x,\langlembda)\|+\|\Phi(x)-P\thetay(\langlembda+\Phi(x))\|\big)\\
&\le&\ell\big(\|p_1\|+\|P\thetay(\langlembda+\Phi(x)+p_2)-P\thetay(\langlembda +\Phi(x))\| +\|p_2\|\big)\\
&\le&\ell\big(\|p_1\|+\|p_2\|+\|p_2\|\big),
\end{eqnarray*}
which readily justify the assertion in (ii).
Finally, we verify the converse implication (ii)$\Longrightarrow$(iv). To proceed, pick $(x,\langlembda)\in\mathbb{B}_{\varepsilon/2}(\bar{x},\bar\langlembda)$, where $\varepsilon$ is taken from (ii). Define the vectors
\begin{equation}\langlebel{ee1}
p_2:= P\thetay\big(\langlembda+\Phi(x)\big)-\Phi(x)\;\mbox{ and }\;p_1:=\Psi(x,\langlembda-p_2).
\end{equation}
Since $\Phi$ and $\nabla \Phi$ are continuous at $\bar{x}$ and since $P\thetay$ is Lipschitz continuous,
we assume without loss of generality that $(p_1,p_2)\in\mathbb{B}_{\varepsilon/2}(0,0)$ and $\mathbb{B}_{\varepsilon/2}(0,0)\subset U\times W$, where $U$ and $W$ come from (ii).
It follows from \eqref{ee1} that $(x,\langlembda-p_2)\in S(p_1,p_2)\cap\mathbb{B}_{\varepsilon}(\bar{x},\bar\langlembda)$. Since $\nabla \Phi$ is continuous at $\bar{x}$, we can assume without loss of generality that for some $\rho>0$ we have $\|\nabla \Phi(x)\|\leq \rho$ for all $x\in\mathbb{B}_\varepsilon(\bar{x})$.
So we deduce from \eqref{upper} that
\begin{eqnarray*}
\|x-\bar{x}\|+{\mathrm d}ist\big(\langlembda-p_2,\Lambda(\bar{x})\big)&\le&\ell\big(\|p_1\|+\|p_2\|\big)\\
&\le&\ell\big(\|\Psi(x,\langlembda-p_2)\|+\|\Phi(x)-P\thetay(\langlembda +\Phi(x))\|\big)\\
&\le&\ell\big(\|\Psi(x,\langlembda)\|+\rho\|p_2\|+\|\Phi(x)-P\thetay(\langlembda +\Phi(x))\|\big)\\
&\le&\ell\big(\|\Psi(x,\langlembda)\|+(\rho+1)\|\Phi(x)-P\thetay(\langlembda+\Phi(x))\|\big).
\end{eqnarray*}
Recall that the distance function ${\mathrm d}ist\big(\cdot;\Lambda(\bar{x})\big)$ is Lipschitz continuous; so we have
\begin{equation}\langlebel{ee2}
{\mathrm d}ist\big(\langlembda;\Lambda(\bar{x})\big)-{\mathrm d}ist\big(\langlembda-p_2;\Lambda(\bar{x})\big)\le\|p_2\|=\|\Phi(x)-P\thetay(\langlembda+\Phi(x))\|,
\end{equation}
which in combination with the obtained inequalities leads us to
\begin{equation*}
\|x-\bar{x}\|+{\mathrm d}ist\big(\langlembda;\Lambda(\bar{x})\big)\le\ell\|\Psi(x,\langlembda)\|+\big(\ell(\rho+1)+1\big)\|\Phi(x)-P\thetay(\langlembda+\Phi(x))\|.
\end{equation*}
This verifies (iv) and completes the proof of the theorem.
\end{proof}\vspace*{0.05in}
To conclude this section, let us mention a connection of the obtained characterizations of noncritical multipliers for variational systems \eqref{VS} with the uniqueness of Lagrange multipliers therein, which is {\em not} assumed in Theorem~\ref{charc}. Indeed, looking more closely at the proof of the theorem reveals that the second term in \eqref{u67} is actually {\em undesired}, since it complicates the proof. But, as follows from Theorem~\ref{uniq}, this term disappears (reduces to $\{0\}$) if the set of Lagrange multipliers $\Lambda(\bar{x})$ is a singleton. This phenomenon has been recently observed in \cite{ms18} for the case of constrained optimization problems.
\section{Noncriticality in Extended Nonlinear Programming}\langlebel{comp-opt}
Here we concentrate on problems of composite optimization given by \eqref{co}, where $\theta=\thetay$ is taken from \eqref{theta}. It means that we are dealing with the class of ENLPs discussed in Section~\ref{intro}. Starting with this section we assume that $\partialh_0$ and $\Phi$ are not just twice differentiable but belong to the class of ${\cal C}^2$-smooth mappings around the points in question.
Define the {\em Lagrangian} of \eqref{co} by
\begin{equation}\langlebel{comlag}
L(x,\langlembda):=\partialh_0(x)+\inp{\Phi(x)}{\langlembda}-\frac{1}{2}\inp{\langlembda}{B\langlembda}\;\mbox{ for }\;(x,\langlembda)\in\mathbb{R}^n\times\mathbb{R}^m
\end{equation}
and observe that the KKT system for \eqref{co} is written as
\begin{equation}\langlebel{kkt-co}
\nabla_xL(x,\langlembda)=0,\;\langlembda\in\partial\thetay(\Phi(x)).
\end{equation}
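Since $\nabla_\langlembda L(x,\langlembda)=\Phi(x)-B\langlembda$, Theorem~\ref{gdt}(ii) allows us to rewrite the second relation in \eqref{kkt-co} equivalently as the inclusion
\begin{equation*}
\nabla_\langlembda L(x,\langlembda)\in N_Y(\langlembda),
\end{equation*}
which emphasizes the Lagrangian structure of this KKT system.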
Thus \eqref{kkt-co} is a particular case of \eqref{VS} with $\Psi:=\nabla_x L$. Denoting
\begin{equation}\langlebel{lcom}
\Lambda_{\mathrm{com}}(\bar{x}):=\big\{\langlembda\in\mathbb{R}^m\big|\;\nabla_xL(\bar{x},\langlembda)=0,\;\langlembda\in\partial\thetay(\Phi(\bar{x}))\big\},
\end{equation}
the corresponding set of Lagrange multipliers, we see that Definition~\ref{crit} of multiplier criticality, as well as all the results above, can be specified for the KKT system \eqref{kkt-co}.
On the other hand, there are some phenomena concerning critical and noncritical Lagrange multipliers that distinguish KKT systems in optimization from general variational systems of type \eqref{VS}. We consider them in this and two subsequent sections.\vspace*{0.05in}
The following theorem provides a certain {\em second-order sufficient condition} ensuring simultaneously the {\em strict minimality} of a feasible solution to ENLP \eqref{co} and the {\em noncriticality} of the corresponding Lagrange multiplier. In its formulation we use the critical cone ${\cal K}$ defined in Theorem~\ref{gdt}(iii) as well as the notation ${\rm rge\,} A$ for the range of a linear operator $A$. Note that the existence of Lagrange multipliers corresponding to $\bar{x}$ in \eqref{co}, which is assumed below, is ensured by the first-order qualification condition \eqref{bcq} from Lemma~\ref{esonc}.
\begin{Th}{\bf(second-order sufficient condition for strict local minimizers and noncritical multipliers in ENLPs).}\langlebel{esosc}
Let $(\bar{x},\bar\langlembda)$ be a solution to KKT system \eqref{kkt-co}. Assume further that the second-order sufficient condition
\begin{equation}\langlebel{sc}
\big\langle\nabla_{xx}^2L(\bar{x},\bar\langlembda)w,w\big\rangle+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})w\big)>0\;\mbox{ if }\;w\in\mathbb{R}^n\setminus\{0\}\;\mbox{with}\;\nabla\Phi(\bar{x})w\in{\cal K}^*+{\rm rge\,} B
\end{equation}
holds. Then there exist numbers $\varepsilon>0$ and $\ell\ge 0$ such that the quadratic lower estimate
\begin{equation}\langlebel{sogc}
\partialh(x)\ge\partialh(\bar{x})+\ell\,\|x-\bar{x}\|^2\;\mbox{ for all }\;x\in\mathbb{B}_\varepsilon(\bar{x})
\end{equation}
holds for the function $\partialh$ taken from \eqref{co}. Furthermore, the Lagrange multiplier $\bar\langlembda$ satisfying \eqref{sc} is noncritical for the KKT system \eqref{kkt-co} corresponding to $\bar{x}$.
\end{Th}
\begin{proof} Define the family of second-order difference quotients for $\partialh$ at $\bar{x}$ for $\bar{y}\in\mathbb{R}^n$ by
\begin{equation}\langlebel{lk01}
\Delta_t^2\partialh(\bar{x},\bar{y})(w):={\mathrm d}frac{\partialh(\bar{x}+tw)-\partialh(\bar{x})-t\langlengle\bar{y},\,w\ranglengle}{\frac{1}{2}t^2}\;\mbox{ with }\;w\in\mathbb{R}^{n},\;t>0.
\end{equation}
Set $\bar{y}:=0\in\mathbb{R}^n$ and deduce from $\bar\langlembda\in\Lambda_{\mathrm{com}}(\xb)$ that $\bar{y}=\nabla\partialh_0(\bar{x})+\nabla\Phi(\bar{x})^*\bar\langlembda$. Then for any $w\in\mathbb{R}^n$ we get the equalities
\begin{eqnarray*}
\Delta_t^2\partialh(\bar{x},0)(w)&=&\Delta_t^2\partialh_0(\bar{x},\nabla\partialh_0(\bar{x}))(w)+{\mathrm d}frac{\thetay\big(\Phi(\bar{x}+tw)\big)-\thetay\big(\Phi(\bar{x})\big)-t\langlengle\nabla\Phi(\bar{x})^*\bar\langlembda,w\ranglengle}{\frac{1}{2}t^2}\\
&=&\Delta_t^2\partialh_0(\bar{x},\nabla\partialh_0(\bar{x}))(w)+{\mathrm d}frac{t\langlengle\bar\langlembda,w_t\ranglengle-t\langlengle\bar\langlembda,\nabla\Phi(\bar{x})w\ranglengle}{\frac{1}{2}t^2}\\
&&+{\mathrm d}frac{\thetay\big(\Phi(\bar{x})+tw_t\big)-\thetay\big(\Phi(\bar{x})\big)-t\langlengle\bar\langlembda,w_t\ranglengle}{\frac{1}{2}t^2}\\
&=&\Delta_t^2\partialh_0(\bar{x},\nabla\partialh_0(\bar{x}))(w)+{\mathrm d}frac{t\langlengle\bar\langlembda,w_t\ranglengle-t\langlengle\bar\langlembda,\nabla\Phi(\bar{x})w\ranglengle}{\frac{1}{2}t^2}+\Delta_t^2 \thetay(\Phi(\bar{x}),\bar\langlembda)(w_t),
\end{eqnarray*}
where $w_t:=\nabla\Phi(\bar{x})w+\frac{t}{2}\langlengle\nabla^2\Phi(\bar{x})w,w\ranglengle+\frac{o(t^2)}{t}$. It implies together with \eqref{ssd1} and \eqref{ssd} that
\begin{eqnarray}\langlebel{cv01}
\begin{array}{ll}
{\mathrm d}^2\partialh(\bar{x},0)(w)&\ge\langlengle\nabla^2\partialh_0(\bar{x})w,w\ranglengle+\langlengle\nabla^2_{xx}\langlengle\bar\langlembda,\Phi(\bar{x})\ranglengle w,w\ranglengle+{\mathrm d}^2\thetay(\Phi(\bar{x}),\bar\langlembda)\big(\nabla\Phi(\bar{x})w\big)\\\\
&=\langlengle\nabla^2_{xx}L(\bar{x},\bar\langlembda)w,w\ranglengle+2\thetak\big(\nabla\Phi(\bar{x})w\big).
\end{array}
\end{eqnarray}
Theorem~\ref{gdt}(i) tells us that ${\rm dom}\,\thetak=({\cal K}\cap\ker B)^*={\cal K}^*+{\rm rge\,} B$. This means that the inclusion $\nabla\Phi(\bar{x})w\in{\cal K}^*+{\rm rge\,} B$ amounts to $\nabla\Phi(\bar{x})w\in{\rm dom}\,\thetak$. Employing the second-order sufficient condition \eqref{sc} together with \eqref{cv01} ensures that ${\mathrm d}^2\partialh(\bar{x},0)(w)>0$ for all such vectors $w\in\mathbb{R}^n\setminus\{0\}$. Otherwise, we have $\nabla\Phi(\bar{x})w\notin{\rm dom}\,\thetak$, and hence $\thetak(\nabla\Phi(\bar{x})w)=\infty$. This along with \eqref{cv01} results in
\begin{equation*}
{\mathrm d}^2\partialh(\bar{x},0)(w)>0\;\mbox{ for all }\;w\in\mathbb{R}^n\;\mbox{ with }\;\nabla\Phi(\bar{x})w\notin{\rm dom}\,\thetak.
\end{equation*}
Combining all the above brings us to
\begin{equation*}
{\mathrm d}^2\partialh(\bar{x},0)(w)>0\;\mbox{ whenever }\;w\in\mathbb{R}^n\setminus\{0\}.
\end{equation*}
Appealing now to \cite[Theorem~13.24]{rw} guarantees the existence of numbers $\varepsilon>0$ and $\ell\ge 0$ for which the quadratic estimate \eqref{sogc} holds and so ensures that $\bar{x}$ is a strict local minimizer for $\partialh$.
Finally, we verify that a multiplier $\bar\langlembda$ satisfying the second-order condition \eqref{sc} is noncritical for \eqref{kkt-co}. To see it, pick $(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m$ fulfilling \eqref{cri} with $\Psi=\nabla_x L$, i.e., so that
\begin{equation*}
\begin{cases}
\nabla^2_{xx}L(\bar{x},\bar{\langlembda})\xi+\nabla\Phi(\bar{x})^*\:\eta=0,\;\inp{\nabla\Phi(\bar{x})\xi-B\eta}{\eta}=0,\\
\nabla\Phi(\bar{x})\xi-B\eta\in{\cal K}^*,\;\mbox{ and }\;\eta\in{\cal K}.
\end{cases}
\end{equation*}
It follows from $\nabla\Phi(\bar{x})\xi-B\eta\in{\cal K}^*$ and the discussion above that $\nabla\Phi(\bar{x})\xi\in{\rm dom}\,\thetak$ and that
$\eta\in\partialartial\thetak(\nabla\Phi(\bar{x})\xi)$. Employing the subdifferential expression in \eqref{fo} gives us
\begin{equation*}
\thetak\big(\nabla\Phi(\bar{x})\xi\big)=\langlengle\eta,\nabla\Phi(\bar{x})\xi\ranglengle-\frac{1}{2}\langlengle B\eta,\eta\ranglengle=\frac{1}{2}\langlengle B\eta,\eta\ranglengle.
\end{equation*}
In this way we arrive at the equalities
\begin{eqnarray*}
0=\langlengle\nabla^2_{xx}L(\bar{x},\bar{\langlembda})\xi,\xi\ranglengle+\langlengle\eta,\nabla\Phi(\bar{x})\xi\ranglengle&=&\langlengle\nabla^2_{xx}L(\bar{x},\bar{\langlembda})\xi,
\xi\ranglengle+\langlengle B\eta,\eta\ranglengle\\
&=&\langlengle\nabla^2_{xx}L(\bar{x},\bar{\langlembda})\xi ,\xi\ranglengle+2\thetak(\nabla\Phi(\bar{x})\xi),
\end{eqnarray*}
which yield $\xi=0$ due to \eqref{sc} as well as to $\nabla\Phi(\bar{x})\xi\in{\rm dom}\,\thetak={\cal K}^*+{\rm rge\,} B$. This shows that $\bar\langlembda$ is a noncritical multiplier of \eqref{kkt-co} corresponding to $\bar{x}$ and thus completes the proof.
\end{proof}\vspace*{0.05in}
The next example, which revisits Example~\ref{ex1} in the ENLP framework, illustrates the possibility to use the second-order sufficient condition \eqref{sc} to justify the strict optimality of a feasible solution to \eqref{co} and the noncriticality of the corresponding Lagrange multiplier.
\begin{Example}{\bf(multiplier noncriticality via the second-order sufficient condition).}\langlebel{ex1a} {\rm Consider the ENLP from \eqref{co}, where $m=n$, $\partialh_0(x):=x_1^2+\ldots+x_n^2$ and $\Phi(x):=x$, and where $Y$ and $B$ are taken from Example~\ref{ex1}. Then we have
\begin{equation}\langlebel{theta-phi}
\begin{array}{c c c}
\thetay\big(\Phi(x)\big)&=&\displaystyle\sup\limits_{y\in\mathbb{R}^n_+}\Big\{\inp{y}{\Phi(x)}-\frac{1}{2}\inp{y}{y}\Big\}\\
&=&\displaystyle\sup\limits_{(y_1,\ldots,y_n)\in\mathbb{R}^n_+}\Big\{\sum\limits_{i=1}^{n}\big(x_iy_i-\frac{1}{2}y_i^2\big)\Big\}\\
&=&\displaystyle\frac{1}{2}\sum\limits_{i=1}^n\big(\max\{x_i,0\}\big)^2.
\end{array}
\end{equation}
Let us check that condition \eqref{sc} holds when $\bar{x}=0$ and $\bar\langlembda=0$, which confirms by Theorem~\ref{esosc} that $\bar{x}$ is a strict minimizer for this ENLP and $\bar\langlembda$ is the corresponding noncritical multiplier. Indeed, it follows from Example~\ref{ex1} that $\bar\langlembda\in\partialartial\thetaeta(\bar{z})$, where $\bar{z}:=\Phi(\bar{x})=0$. By the structure of $L(x,\langlembda)$ we have the expressions
\begin{equation*}
\nabla_xL(x,\langlembda)=(2x_1+\langlembda_1,\ldots,2x_n+\langlembda_n)\;\mbox{ and }\;\nabla^2_{xx}L(x,\langlembda)=2I.
\end{equation*}
Then $\nabla_xL(\bar{x},\bar\langlembda)=0$ and hence $\bar\langlembda\in\Lambda_{\mathrm{com}}(\xb)$. Since ${\rm rge\,} B=\mathbb{R}^n$, it follows that $\{w|\;\nabla\Phi(\bar{x})w\in\mathcal{K}^*+{\rm rge\,} B\}=\mathbb{R}^n$, and therefore the sufficient condition in Theorem~\ref{esosc} reads as
\begin{equation*}
\inp{\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi}{\xi}+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})\xi\big)>0\;\textrm{ for all }\;\xi\ne 0,
\end{equation*}
which is equivalently presented by
\begin{equation}
2\inp{\xi}{\xi}+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})\xi\big)> 0\;\textrm{ for all }\;\xi\ne 0.
\end{equation}
Furthermore, Example~\ref{ex1} tells us that $\mathcal{K}=\mathbb{R}^n_+\cap\{\bar{z}\}^\perp$ and so $\mathcal{K}=\mathbb{R}^n_+=Y$. Combining this with \eqref{theta-phi}, the sufficient condition \eqref{sc} now becomes
\begin{equation}\langlebel{sc1}
2\inp{\xi}{\xi}+2\thetay\big(\nabla\Phi(\bar{x})\xi\big)>0\;\textrm{ for all }\;\xi\ne 0.
\end{equation}
Since $\thetay$ from \eqref{theta-phi} is always nonnegative, condition \eqref{sc1} holds, and thus it confirms the strict minimality of $\bar{x}$ and the noncriticality of $\bar\langlembda$.}
\end{Example}
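For readers who wish to experiment with this example, the following minimal numerical sketch (ours, not part of the development above; function names are illustrative) evaluates the composite objective $\|x\|^2+\frac{1}{2}\sum_{i=1}^n(\max\{x_i,0\})^2$ of Example~\ref{ex1a} and checks the quadratic growth estimate \eqref{sogc} with $\ell=1$ on random points near $\bar{x}=0$.
\begin{verbatim}
# Minimal numerical sketch (illustrative only): check the quadratic growth
# estimate around xbar = 0 for the objective of Example ex1a,
#   phi(x) = ||x||^2 + (1/2) * sum_i max(x_i, 0)^2,
# which satisfies phi(x) >= phi(0) + ell * ||x||^2 with ell = 1.
import numpy as np

def phi(x):
    return np.dot(x, x) + 0.5 * np.sum(np.maximum(x, 0.0) ** 2)

rng = np.random.default_rng(0)
n, ell = 5, 1.0
for _ in range(1000):
    x = rng.uniform(-0.1, 0.1, size=n)   # random points in a small box around 0
    assert phi(x) >= phi(np.zeros(n)) + ell * np.dot(x, x)
print("quadratic growth verified on random samples")
\end{verbatim}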
\section{Critical Multipliers and Full Stability of Minimizers in ENLPs}\langlebel{full-stab}
This section also deals with constrained minimization problems of the ENLP type and delivers an important message for both theoretical and numerical aspects of optimization. As discussed in Section~\ref{intro}, critical multipliers are particularly responsible for slow convergence of major primal-dual algorithms of optimization, and it is desirable to exclude them for a given local minimizer. It is natural to expect that seeking not arbitrary but ``nice'' and stable in some sense local minimizers allows us to rule out the appearance of critical multipliers associated with such locally optimal solutions. It is conjectured in \cite{m15} that fully stable local minimizers in the sense of \cite{lpr} are appropriate candidates for excluding critical multipliers. This conjecture is affirmatively verified in \cite{brs13} for problems \eqref{co} with $\thetaeta=\thetay$ where $B=0$. Now we are able to extend this result to the general case of \eqref{theta} with an arbitrary symmetric positive-semidefinite matrix $B$.
To proceed, we first specify the definition of fully stable local minimizers from \cite{lpr} for problems \eqref{co} with term \eqref{theta}. Consider their {\em canonically perturbed} version described by
\begin{equation}\langlebel{pertco}
\textrm{minimize }\;\partialh_0(x)+\thetaeta\big(\Phi(x)+p_2\big)-\inp{p_1}{x}\;\textrm{ subject to }\;x\in\mathbb{R}^n
\end{equation}
with parameter pairs $(p_1,p_2)\in\mathbb{R}^n\times\mathbb{R}^m$. Fix $\gamma>0$ and $(\bar{x},\bar{p}_1,\bar{p}_2)$ with $\Phi(\bar{x})+\bar{p}_2\in{\rm dom}\,\thetaeta$ and then define the parameter-dependent optimal value function for \eqref{pertco} by
\begin{equation*}
m_\gamma (p_1,p_2):=\inf_{\n{x-\bar{x}}\le\gamma}\big\{\partialh_0(x)+\thetaeta\big(\Phi(x)+p_2\big)-\inp{p_1}{x}\big\}
\end{equation*}
together with the parameterized set of optimal solutions to \eqref{pertco} given by
\begin{equation}\langlebel{arg}
M_\gamma (p_1,p_2):=\operatornamewithlimits{arg\,min}_{\n{x-\bar{x}}\le\gamma}\big\{\partialh_0(x)+\thetaeta\big(\Phi(x)+p_2)-\inp{p_1}{x}\big\}
\end{equation}
with the convention that $\operatornamewithlimits{arg\,min}:=\emptyset$ when the expression under minimization in \eqref{arg} is $\infty$. We say that $\bar{x}$ is a {\em fully stable} local optimal solution to problem \eqref{co} if there exist a number $\gamma>0$ and neighborhoods $U$ of $\bar{p}_1$ and $W$ of $\bar{p}_2$ such that the mapping $(p_1,p_2)\mapsto M_\gamma(p_1,p_2)$ is single-valued and Lipschitz continuous with $M_\gamma(\bar{p}_1,\bar{p}_2)=\{\bar{x}\}$ and that the function $(p_1,p_2)\mapsto m_\gamma(p_1,p_2)$ is likewise Lipschitz continuous on $U\times W$.
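To give a concrete feeling for this notion, consider the data of Example~\ref{ex1a}, for which the canonically perturbed problem \eqref{pertco} decouples by coordinates and the minimizer can be written in closed form. The following sketch (ours, for illustration only and not part of the theory) computes this minimizer and estimates its Lipschitz modulus with respect to $(p_1,p_2)$ on random parameter pairs.
\begin{verbatim}
# Illustrative sketch (ours): for Example ex1a the perturbed problem (pertco)
# splits into coordinates t -> t^2 + 0.5*max(t+q,0)^2 - p*t, whose unique
# minimizer is t = (p-q)/3 if p + 2q >= 0 and t = p/2 otherwise.  We sample
# parameter pairs and estimate the Lipschitz modulus of the solution map.
import numpy as np

def argmin_coord(p, q):
    return (p - q) / 3.0 if p + 2.0 * q >= 0.0 else p / 2.0

rng = np.random.default_rng(1)
ratio = 0.0
for _ in range(2000):
    p, q, pp, qq = rng.uniform(-0.5, 0.5, size=4)
    den = abs(p - pp) + abs(q - qq)
    if den > 0:
        ratio = max(ratio, abs(argmin_coord(p, q) - argmin_coord(pp, qq)) / den)
print("empirical Lipschitz modulus of the solution map:", ratio)
\end{verbatim}
In line with the definition, the observed modulus stays bounded (by $1/2$ for this data), illustrating the single-valued Lipschitzian behavior of $M_\gamma$ in this example.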
Note that \cite[Proposition~3.5]{lpr} deduces the local Lipschitz continuity of $m_\gamma$ from the {\em basic constraint qualification} \eqref{bcq} formulated in the following lemma, which is obtained in \cite[Exercise~13.26]{rw}. The second-order necessary condition presented below can be viewed as a ``no-gap" version of the second-order sufficient one used in Theorem~\ref{esosc} with the notation therein.
\begin{Lemma}{\bf(second-order necessary optimality condition for composite optimization problems).}\langlebel{esonc}
Let $\bar{x}$ be a local optimal solution to problem \eqref{co} with $\thetaeta=\thetay$ taken from \eqref{theta}, and let the basic constraint qualification
\begin{equation}\langlebel{bcq}
N_{\scriptsize {{\rm dom}\,\thetay}}(\Phi(\bar{x}))\cap\ker\nabla\Phi(\bar{x})^*=\{0\}
\end{equation}
be satisfied, and so $\Lambda_{\mathrm{com}}(\bar{x})\ne\emptyset$. Then we have the second-order necessary optimality condition
\begin{equation}\langlebel{nc1}
\max_{\langlembda\in\Lambda_{\mathrm{com}}(\bar{x})}\big\langle\nabla_{xx}^2L(\bar{x},\langlembda)w,w\big\rangle+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})w\big)\ge 0
\end{equation}
valid for all $w\in\mathbb{R}^n$ with $\nabla\Phi(\bar{x})w\in{\cal K}^*+{\rm rge\,} B$.
\end{Lemma}
Now we are ready to establish the aforementioned result in the general ENLP setting.
\begin{Th}{\bf(excluding critical multipliers by full stability of local minimizers).}\langlebel{criful} Let $\bar{x}$ be a fully stable local optimal solution to problem \eqref{co}, and let $\thetaeta$ be taken from \eqref{theta}. Then the Lagrange multiplier set $\Lambda_{\mathrm{com}}(\xb)$ in \eqref{lcom} is nonempty and does not include critical multipliers.
\end{Th}
\begin{proof}
First we show that the full stability of $\bar{x}$ ensures the validity of the qualification condition \eqref{bcq}. Indeed, pick any $\eta\in N_{\scriptsize {{\rm dom}\,\thetay}}(\Phi(\bar{x}))\cap\ker\nabla\Phi(\bar{x})^*$. Select $p_1=\bar{p}_1:=0$ and $p_2:=t\eta$ as $t{\mathrm d}ownarrow 0$. It follows from the full stability of $\bar{x}$ that there exist a Lipschitz constant $\ell\ge 0$ and the unique solution $x_{p_1 p_2}$ to problem \eqref{pertco} such that
\begin{equation}\langlebel{xleta}
\n{x_{p_1 p_2}-\bar{x}}\le\ell t\n{\eta}.
\end{equation}
Since $\Phi(x_{p_1 p_2})+p_2\in{\rm dom}\,\thetay$ and $\eta\in N_{\scriptsize {{\rm dom}\,\thetay}}(\Phi(\bar{x}))$, we get $\inp{\eta}{\Phi(x_{p_1 p_2})+p_2-\Phi(\bar{x})}\le 0$.
This gives us the relationships
\begin{equation*}
\begin{array}{ll}
0&\ge\big\langle\eta,\nabla\Phi(\bar{x})(x_{p_1 p_2}-\bar{x})+o(\n{x_{p_1 p_2}-\bar{x}})+p_2\big\rangle\\
&=\big\langle\nabla\Phi(\bar{x})^*\eta,x_{p_1 p_2}-\bar{x}\big\rangle+\big\langle\eta,o(\n{x_{p_1 p_2}-\bar{x}})+p_2\big\rangle\\
&=\big\langle\eta,o(\n{x_{p_1 p_2}-\bar{x}})\big\rangle+t\|\eta\|^2.
\end{array}
\end{equation*}
Using estimate \eqref{xleta} and letting $t{\mathrm d}ownarrow 0$ lead to $\eta=0$. Thus the basic constraint qualification \eqref{bcq} is satisfied, which ensures that $\Lambda_{\mathrm{com}}(\bar{x})\ne\emptyset$.
Next we pick any $\bar\langlembda\in\Lambda_{\mathrm{com}}(\xb)$ and show that it is noncritical for the unperturbed KKT system \eqref{kkt-co} corresponding to $\bar{x}$. Consider the KKT system for the perturbed problem \eqref{pertco} that can be written as
\begin{equation}\langlebel{kkt}
\begin{pmatrix}
p_1\\partial_2
\end{pmatrix}\in\begin{pmatrix}
\nabla_x L(x,\langlembda)\\
-\Phi(x)\end{pmatrix}+\begin{pmatrix}
0\\
(\partialartial\thetay)^{-1}(\langlembda)
\end{pmatrix}.
\end{equation}
Let $S_{KKT}\colon\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^n\times\mathbb{R}^m$ be the solution map to \eqref{kkt} given by
\begin{equation}\langlebel{skkt}
S_{KKT}(p_1,p_2):=\big\{(x,\langlembda)\in\mathbb{R}^n\times\mathbb{R}^m\big|\;p_1=\nabla_x L(x,\langlembda),\;\langlembda\in\partial\thetay\big(p_2+\Phi(x)\big)\big\}.
\end{equation}
Employing Theorem~\ref{charc}, we only need to prove that there exist numbers $\ve>0$ and $\ell\ge 0$ as well as neighborhoods $U$ of $0\in\mathbb{R}^n$ and $W$ of $0\in\mathbb{R}^m$ such that for any $(p_1,p_2)\in U\times W$ and any $(x_{p_1 p_2},\langlembda_{p_1 p_2})\in S_{KKT}(p_1,p_2)\cap(\mathbb{B}_\ve(\bar{x})\times\mathbb{B}_\ve(\bar\langlembda))$, estimate \eqref{upper} holds with $\Lambda(\bar{x})$ replaced by the set of Lagrange multipliers $\Lambda_{\mathrm{com}}(\xb)$ taken from \eqref{lcom}.
To this end we deduce from the full stability of $\bar{x}$ in \eqref{pertco} with $(\bar{p}_1,\bar{p}_2)=(0,0)$ due to the result of \cite[Proposition~6.1]{brs13} that there exist neighborhoods $\widetilde{U}\times\widetilde{W}$ of $(0,0)$ and $\widetilde{V}$ of $\bar{x}$ for which the set-valued mapping
\begin{equation*}
(p_1,p_2)\mapsto Q(p_1,p_2):=\big\{x\in\mathbb{R}^n\big|\;p_1\in\nabla\partialh_0(x)+\nabla\Phi(x)^*\partial\thetay(\Phi(x)+p_2)\big\}
\end{equation*}
admits a Lipschitzian single-valued graphical localization on $\widetilde{U}\times\widetilde{W}\times\widetilde{V}$. This means that there exists a Lipschitzian single-valued mapping $g\colon\widetilde{U}\times\widetilde{W}\mapsto\widetilde{V}$ such that $(\mathrm{gph}\, Q)\cap(\widetilde{U}\times\widetilde{W}\times\widetilde{V})=\mathrm{gph}\, g$. Denote $U:=\widetilde{U}$, $W:=\widetilde{W}$ and take $\ve>0$ so small that $\mathbb{B}_\ve(\bar{x})\subset\widetilde{V}$. The Lipschitzian single-valued graphical localization property of $Q$ allows us to find a constant $\ell\ge 0$ such that for any $(p_1,p_2)\in U\times W$ and any $(x_{p_1 p_2},\langlembda_{p_1 p_2})\in S_{KKT}(p_1,p_2)\cap\big(\mathbb{B}_\ve(\bar{x})\times\mathbb{B}_\ve(\bar\langlembda)\big)$ we have the inclusion $x_{p_1 p_2}\in Q(p_1,p_2)$, and hence
\begin{equation*}
\n{x_{p_1 p_2}-\bar{x}}=\n{x_{p_1 p_2}-x_{\bar{p}_1\bar{p}_2}}\le\ell\big(\n{p_1}+\n{p_2}\big).
\end{equation*}
Using now the error bound estimate \eqref{pap} from the proof of Theorem~\ref{charc} with $\Lambda(\bar{x})$ replaced by $\Lambda_{\mathrm{com}}(\xb)$ and adjusting $\ve$ if necessary gives us the semi-isolated calmness property \eqref{upper}, which is equivalent to the noncriticality of $\bar\langlembda$ chosen arbitrarily from the Lagrange multiplier set $\Lambda_{\mathrm{com}}(\xb)$. This completes the proof of the theorem.
\end{proof}\vspace*{0.05in}
The result of Theorem~\ref{criful} calls for deriving verifiable conditions for full stability of local minimizers to \eqref{co} expressed entirely via the problem data and the given minimizer. Such conditions allow us to efficiently exclude slow convergence of primal-dual algorithms by seeking fully stable minimizers based on the initial data. Some characterizations of full stability of local minimizers for ENLPs of type \eqref{co} are obtained in \cite[Theorem~7.3]{brs13} under rather strong assumptions. Relaxing these assumptions is a challenging goal of our future research.
\section{Noncriticality and Lipschitzian Stability of Solutions to ENLPs}\langlebel{lip-stab}
In this section we use the machinery developed above to investigate other notions of Lipschitzian stability, which turn out to be related to noncriticality of multipliers for ENLPs. The following theorem provides characterizations of both isolated calmness and robust isolated calmness properties of the KKT solution map \eqref{skkt} associated with ENLP \eqref{co} in terms of the second-order sufficient condition \eqref{sc} as well as noncriticality and uniqueness of Lagrange multipliers.
\begin{Th}{\bf(characterizations of robust isolated calmness of solution maps).}\langlebel{calmness} Let $\bar{x}$ be a feasible solution to ENLP \eqref{co} with $\thetaeta$ taken from \eqref{theta}, and let $\bar\langlembda\in\Lambda_{\mathrm{com}}(\xb)$ be a corresponding Lagrange multiplier from \eqref{lcom}. The following assertions are equivalent:
\begin{itemize}[noitemsep]
\item[{\bf(i)}] The solution map $S_{KKT}$ from \eqref{skkt} is robustly isolatedly calm at the point $\big((0,0),(\bar{x},\bar\langlembda)\big)\in\mathbb{R}^{n+m}\times\mathbb{R}^{n+m}$,
and $\bar{x}$ is a local optimal solution to \eqref{co}.
\item[{\bf(ii)}] The second-order sufficient condition \eqref{sc} holds, and $\Lambda_{\mathrm{com}}(\bar{x})=\{\bar\langlembda\}$.
\item[{\bf(iii)}] $\Lambda_{\mathrm{com}}(\bar{x})=\{\bar\langlembda\}$, $\bar{x}$ is a local optimal solution to \eqref{co}, and $\bar\langlembda$ is a noncritical multiplier for
\eqref{VS} with $\Psi=\nabla_x L$ that is associated with the optimal solution $\bar{x}$.
\item[{\bf(iv)}] $S_{KKT}$ is isolatedly calm at $\big((0,0),(\bar{x},\bar\langlembda)\big)$, and $\bar{x}$ is a local optimal solution to \eqref{co}.
\end{itemize}
\end{Th}
\begin{proof} The outline of the proof is as follows. We sequentially verify implications (ii)$\Longrightarrow$(iii), (iii)$\Longrightarrow$(iv),
(iv)$\Longrightarrow$(iii), (iii)$\Longrightarrow$(ii), and (i)$\iff$(iv).
To prove (ii)$\Longrightarrow$(iii), assume the validity of \eqref{sc} and that $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$. Then Theorem~\ref{esosc} tells us that $\bar{x}$ is a strict local minimizer of \eqref{co} and that $\bar\langlembda$ is a noncritical multiplier of \eqref{VS} with $\Psi=\nabla_x L$ corresponding to $\bar{x}$, and thus (iii) is satisfied.
Suppose next that all the conditions in (iii) hold.
Since $\bar\langlembda$ is noncritical, we derive the semi-isolated calmness of $S_{KKT}$ at $\big((0,0),(\bar{x},\bar\langlembda)\big)$. This together with $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$
results in the existence of a number $\ell\ge 0$ as well as neighborhoods $U$ of $(0,0)$ and $V$ of $(\bar{x},\bar\langlembda)$ such that
\begin{equation}\langlebel{ilc}
S_{KKT}(p_1,p_2)\cap V\subset\big\{(\bar{x},\bar\langlembda)\big\}+\ell\n{(p_1,p_2)}\mathbb{B}\;\textrm{ for all }\;(p_1,p_2)\in U.
\end{equation}
Thus $S_{KKT}$ enjoys the isolated calmness property at $\big((0,0),(\bar{x},\bar\langlembda)\big)$, and we arrive at (iv).
To verify the opposite implication (iv)$\Longrightarrow$(iii), let us show that the isolated calmness of $S_{KKT}$ at $\big((0,0),(\bar{x},\bar\langlembda)\big)$ in (iv) yields $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$. Indeed, suppose on the contrary that $\Lambda_{\mathrm{com}}(\xb)$ is not a singleton. Then there exists $\widehat{\langlembda}\in\Lambda_{\mathrm{com}}(\xb)$ with $\widehat{\langlembda}\ne\bar\langlembda$. Since the set $\Lambda_{\mathrm{com}}(\xb)$ is convex, every point of the line segment connecting $\bar\langlembda$ and $\widehat{\langlembda}$ belongs to $\Lambda_{\mathrm{com}}(\xb)$. The isolated calmness of $S_{KKT}$ at $\big((0,0),(\bar{x},\bar\langlembda)\big)$ amounts to \eqref{ilc}, and hence we can find $\langlembda'\ne\bar\langlembda$ with $\langlembda'\in\Lambda_{\mathrm{com}}(\xb)$ and such that $\langlembda'$ is sufficiently close to $\bar\langlembda$, i.e., $(\bar{x},\langlembda')\in V$. Then it follows from \eqref{ilc} that
\begin{equation*}
\n{\langlembda'-\bar\langlembda}\le\ell\cdot 0=0,
\end{equation*}
which yields $\langlembda'=\bar\langlembda$, a contradiction ensuring that $\Lambda_{\mathrm{com}}(\xb)$ is a singleton. Theorem~\ref{charc} tells us that $\bar\langlembda$ is a noncritical multiplier of \eqref{VS} corresponding to $\bar{x}$, and thus (iii) holds.
Next we verify implication (iii)$\Longrightarrow$(ii). Let us first deduce from $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$ in (iii) that the qualification condition \eqref{bcq} in (ii) is satisfied. Supposing the contrary, find a normal $v\in N_{{\rm dom}\,\thetay}(\Phi(\bar{x}))$ with $v\ne 0$ such that $\nabla\Phi(\bar{x})^*v=0$. Letting $\langlembda':=\bar\langlembda+v$, we get $\langlembda'\ne\bar\langlembda$ and $\nabla_x L(\bar{x},\langlembda')=0$ for the Lagrangian function \eqref{comlag}. By the choice of $v$ and the normal cone definition \eqref{nc-conv} we get from the above that
\begin{equation*}
\langle\langlembda',z-\Phi(\bar{x})\rangle\le\thetay(z)-\thetay(\Phi(\bar{x}))\;\mbox{ for all }\;z\in{\rm dom}\,\thetay,
\end{equation*}
which shows that $\langlembda'\in\partial\thetay(\Phi(\bar{x}))$ and hence $\langlembda'\in\Lambda_{\mathrm{com}}(\xb)$ due to $\nabla_x L(\bar{x},\langlembda')=0$. Since $\langlembda'\ne\bar\langlembda$, this contradicts the assumption $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$ in (iii) and thus justifies the validity of the qualification condition \eqref{bcq}. Employing now Lemma~\ref{esonc} tells us that the second-order {\em necessary} optimality condition \eqref{nc1} is satisfied.
To finish the verification of (iii)$\Longrightarrow$(ii), we need to prove that the second-order {\em sufficient} optimality condition \eqref{sc} holds under the assumptions in (iii). Supposing the contrary gives us a nonzero element $\xi_0\in\{w|\;\nabla\Phi(\bar{x})w\in{\cal K}^*+{\rm rge\,} B\}$ such that
\begin{equation*}
\big\langle\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi_0,\xi_0\big\rangle+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})\xi_0\big)\le 0.
\end{equation*}
Since $\Lambda_{\mathrm{com}}(\xb)=\{\bar\langlembda\}$, it is easy to see that the second-order necessary condition \eqref{nc1} can be equivalently written as
\begin{equation*}
\big\langle\nabla_{xx}^2L(\bar{x},\bar\langlembda)w,w\big\rangle+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})w\big)\ge 0\quad\mbox{for all}\;w\in\mathbb{R}^n\;\;\mbox{with}\;\;\nabla\Phi(\bar{x})w\in {\rm dom}\,\thetak.
\end{equation*}
Furthermore, employing the equalities
\begin{equation*}
\nabla\Phi(\bar{x})\xi_0\in{\cal K}^*+{\rm rge\,} B=\big({\cal K}\cap\ker B\big)^*={\rm dom}\,\thetak
\end{equation*}
allows us to deduce from the equivalent form of the second-order necessary condition that
\begin{equation*}
\big\langle\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi_0,\xi_0\big\rangle+2\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})\xi_0\big)=0.
\end{equation*}
This in turn implies that the vector $\xi_0$ is an {\em optimal solution} to the problem
\begin{equation*}
\min_{\xi\in\mathbb{R}^n}\;\frac{1}{2}\big\langle\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi,\xi\big\rangle+\thetaeta_{{\cal K},B}\big(\nabla\Phi(\bar{x})\xi\big).
\end{equation*}
Applying the subdifferential Fermat rule to the latter problem and then using the elementary sum rule for convex subgradients together with the chain rule from
\cite[Exercise 10.22(b)]{rw} yield
\begin{eqnarray*}
0&\in&\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi_0+
\nabla\Phi(\bar{x})^*\partial\thetak(\nabla\Phi(\bar{x})\xi_0)\\
&=& \nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi_0+
\nabla\Phi(\bar{x})^*D\partial\thetay(\Phi(\bar{x}),\bar\langlembda)(\nabla\Phi(\bar{x})\xi_0),
\end{eqnarray*}
where the last equality comes from \eqref{gdr}. Since $\xi_0\ne 0$, this shows by Definition~\ref{crit} that $\bar\langlembda$ is a critical multiplier.
This contradicts the assumption in (iii) that $\bar\langlembda$ is a noncritical multiplier and therefore verifies the validity of \eqref{sc} and the entire implication (iii)$\Longrightarrow$(ii).
Our next step is to prove implication (i)$\Longrightarrow$(iv), which clearly holds. To complete the proof of the theorem, it remains to verify implication (iv)$\Longrightarrow$(i).
To achieve this implication, we only need to show that there are neighborhoods $U$ of $(0,0)$ and $V$ of $(\bar{x},\bar\langlembda)$
such that $S_{KKT}(p_1,p_2)\cap V\neq \emptyset$ for all $(p_1,p_2)\in U$.
To this end, define the set-valued mapping $Q\colon\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ by
\begin{equation*}
Q(p):=\big\{x\in\mathbb{R}^n\big|\;\Phi(x)+p\in{\rm dom}\,\thetay\big\},\quad p\in\mathbb{R}^m.
\end{equation*}
Having already proved that (iv) and (iii) are equivalent, we get
the qualification condition \eqref{bcq} from the assumptions in (iii).
As proved above, (iii) and (ii) are equivalent. Thus the second-order sufficient condition \eqref{sc} is satisfied and implies by Theorem~\ref{esosc} that $\bar{x}$ is a strict local minimizer for \eqref{co}.
This gives a neighborhood $O$ of $\bar{x}$ for which we have
\begin{equation}\langlebel{gd03}
\partialh_0(\bar{x})+\thetay\big(\Phi(\bar{x})\big) < \partialh_0(x)+\thetay\big(\Phi(x)\big)\quad \mbox{for all}\quad x\in O.
\end{equation}
Applying \cite[Theorem~4.37(ii)]{m06} to the mapping $Q$ with the initial point $(0,\bar{x})$ gives us numbers $r>0$ and $\ell\ge 0$ such that
\begin{equation}\langlebel{q}
Q(p)\cap\mathbb{B}_r(\bar{x})\subset Q(p')+\ell\n{p-p'}\mathbb{B}\;\textrm{ for all }\;p,p'\in \mathbb{B}_r(0),
\end{equation}
where $r$ can be chosen such that $\mathbb{B}_r(\bar{x})\subset O$. Consider now the optimization problem
\begin{equation}\langlebel{mf01}
\textrm{minimize }\;\partialh_0(x)+\thetay\big(\Phi(x)+p_2\big)-\inp{p_1}{x}\;\textrm{ subject to }\;x\in\mathbb{B}_r(\bar{x})\cap Q(p_2).
\end{equation}
It is clear that this problem admits an optimal solution $x_{p_1p_2}$ for any pair $(p_1,p_2)\in \mathbb{R}^n\times \mathbb{B}_r(0)$ since the cost function therein is lower semicontinuous while the constraint set is obviously compact.
Let us now show that there is a number $\ve>0$ such that
\begin{equation}\langlebel{bd}
x_{p_1p_2}\in{\rm int}\,\mathbb{B}_r(\bar{x})\;\textrm{ for any }\;(p_1,p_2)\in \mathbb{B}_\ve(0,0).
\end{equation}
Suppose the contrary and then find sequences $(p_{1k},p_{2k})\rightarrow(0,0)$ and $x_{p_{1k}p_{2k}}$ for which $\|{x_{p_{1k}p_{2k}}}-\bar{x}\|=r$.
We get without loss of generality that $x_{p_{1k}p_{2k}}\rightarrow x_0$ as $k\to\infty$ and so $\|{x_0}-\bar{x}\|=r$.
This yields $x_0\neq \bar{x}$.
Since $x_{p_{1k}p_{2k}}$ is an optimal solution to \eqref{mf01}, it follows that
\begin{equation}\langlebel{coarr}
\partialh_0(x_{p_{1k}p_{2k}})+\thetay\big(\Phi(x_{p_{1k}p_{2k}})+p_{2k}\big)-\inp{p_{1k}}{x_{p_{1k}p_{2k}}}\le\partialh_0(x)+\thetay\big(\Phi(x)+p_{2k}\big)-
\inp{p_{1k}}{x}
\end{equation}
for all $x\in\mathbb{B}_r(\bar{x})\cap Q(p_{2k})$. Pick any $x\in\mathbb{B}_{\frac{r}{2}}(\bar{x})\cap Q(0)$ and $k\in\mathbb{N}$ so large that $p_{2k}\in\alpha\mathbb{B}$ with $\alpha<\min\{\frac{r}{2\ell},r\}$. It follows from \eqref{q} that there exist $x'\in Q(p_{2k})$ and $b\in\mathbb{B}$ satisfying
\begin{equation*}
\n{x'-\bar{x}}\le\n{x-\bar{x}}+\ell\n{p_{2k}}\le\frac{r}{2}+\ell\frac{r}{2\ell}=r\;\textrm{ and }\;x=x'+\ell\n{p_{2k}}b.
\end{equation*}
Thus $x'\in\mathbb{B}_r(\bar{x})\cap Q(p_{2k})$, and it follows from \eqref{coarr} that
\begin{eqnarray*}
\partialh_0(x_{p_{1k}p_{2k}})&+\thetay\big(\Phi(x_{p_{1k}p_{2k}})+p_{2k}\big)-\inp{p_{1k}}{x_{p_{1k}p_{2k}}}\le\partialh_0\big(x-\ell\n{p_{2k}}b\big)\\
&+\thetay\big(\Phi(x-\ell\n{p_{2k}}b)+p_{2k}\big)-\inp{p_{1k}}{x-\ell\n{p_{2k}}b}.
\end{eqnarray*}
Passing to the limit in the latter inequality as $k\rightarrow\infty$ gives us the estimate
\begin{equation*}
\partialh_0(x_0)+\thetay\big(\Phi(x_0)\big)\le\partialh_0(x)+\thetay\big(\Phi(x)\big),
\end{equation*}
which holds for all $x\in\mathbb{B}_{\frac{r}{2}}(\bar{x})\cap Q(0)$. In particular, we have
\begin{equation}\langlebel{oppstos}
\partialh_0(x_0)+\thetay\big(\Phi(x_0)\big)\le\partialh_0(\bar{x})+\thetay\big(\Phi(\bar{x})\big),
\end{equation}
which contradicts \eqref{gd03} since $x_0\neq \bar{x}$ and $x_0\in \mathbb{B}_r(\bar{x})\subset O$, and thus we arrive at \eqref{bd}.
At the last step of the proof, denote by ${\Lambda}_\mathrm{com}(x_{p_1p_2})$ the set of Lagrange multipliers associated with the optimal solution $x_{p_1p_2}$ to problem \eqref{mf01}. It follows from the validity of the qualification condition \eqref{bcq} and its robustness with respect to perturbations of the initial point that this qualification condition is also satisfied for the perturbed problem \eqref{mf01}.
This implies in turn that ${\Lambda}_\mathrm{com}(x_{p_1p_2})\ne\emptyset$ for all $(p_1,p_2)$ sufficiently close to $(0,0)\in \mathbb{R}^n\times \mathbb{R}^m$.
Assume without loss of generality that ${\Lambda}_\mathrm{com}(x_{p_1p_2})\ne\emptyset$ for all $(p_1,p_2)\in \mathbb{B}_\ve(0,0)$, where $\ve$ is taken from
\eqref{bd}.
Using a similar argument as \eqref{err} and \eqref{pap} via the Hoffman lemma
gives us a constant $\ell'\ge 0$ such that for any $(p_1,p_2)\in \mathbb{B}_\ve(0,0)$ and any $\langlembda_{p_1p_2}\in {\Lambda}_{\mathrm{com}}(x_{p_1p_2})$ we have
\begin{equation*}
\|\langlembda_{p_1p_2}-\bar\langlembda\|={\mathrm d}ist\big(\langlembda_{p_1p_2};\Lambda_\mathrm{com}(\bar{x})\big)\le\ell'\big(\n{x_{p_1p_2}-\bar{x}}+\n{p_1}+\n{p_2}\big).
\end{equation*}
This clearly proves the existence of a neighborhood $V$ of $(\bar{x},\bar\langlembda)$
such that $S_{KKT}(p_1,p_2)\cap V\neq \emptyset$ for all $(p_1,p_2)\in \mathbb{B}_\ve(0,0)$
and so finishes the proof of implication (iv)$\Longrightarrow$(i).
\end{proof}\vspace*{0.05in}
The final piece of this paper concerns yet another well-recognized property of Lipschitzian type, which seems to be the most natural extension of {\em robust} Lipschitzian behavior to set-valued mappings. For this reason we label it as the Lipschitz-like property \cite{m06}, while it is also known as the pseudo-Lipschitz or Aubin property. It is said that a set-valued mapping/multifunction $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ is {\em Lipschitz-like} around $(\bar{x},\bar{y})\in\mathrm{gph}\, F$ if there exists a constant $\ell\ge 0$ together with neighborhoods $U$ of $\bar{x}$ and $V$ of $\bar{y}$ such that we have the inclusion
\begin{equation}\langlebel{lip-like}
F(x')\cap V\subset F(x)+\ell\|x-x'\|\mathbb{B}\;\mbox{ for all }\;x,x'\in U.
\end{equation}
To formulate a convenient characterization of property \eqref{lip-like}, we recall first the notion of the {\em normal cone} to a set $\Omega\subset\mathbb{R}^n$ at a point $\bar{x}\in\Omega$ defined by
\begin{equation*}
N_\Omega(\bar{x}):=\Big\{v\in\mathbb{R}^n\Big|\;\textrm{ there exist }\;x_k\xrightarrow{\Omega}\bar{x},\;v_k\to v\;\mbox{ with }\;\displaystyle\limsup_{x\xrightarrow{\Omega}x_k}\frac{\langle v_k,x-x_k\rangle}{\|x-x_k\|}\le 0\Big\}.
\end{equation*}
The {\em coderivative} of a set-valued mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ at $(\bar{x},\bar{y})\in\mathrm{gph}\, F$ is given by
\begin{equation*}
D^*F(\bar{x},\bar{y})(v):=\big\{u\in\mathbb{R}^n\big|\;(u,-v)\in N_{\mathrm{gph}\, F}(\bar{x},\bar{y})\big\},\quad v\in\mathbb{R}^m.
\end{equation*}
The following characterization of the Lipschitz-like property for any closed-graph mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^m$ around $(\bar{x},\bar{y})\in\mathrm{gph}\, F$ is known as the {\em Mordukhovich criterion} from \cite[Theorem~9.40]{rw}, where the proof is different from the original one; see \cite[Theorem~5.7]{m93} as well as its infinite-dimensional extension given in \cite[Theorem~4.10]{m06}:
\begin{equation}\langlebel{cod-cr}
D^*F(\bar{x},\bar{y})(0)=\{0\}.
\end{equation}
Note that the results obtained therein also provide a precise computation of the {\em exact bound}/infimum of Lipschitzian moduli $\{\ell\}$ in \eqref{lip-like} via the coderivative norm at $(\bar{x},\bar{y})$.
The full {\em coderivative calculus}, which is based on variational/extremal principles of variational analysis and can be found in \cite{m18,m06,rw}, allows us to apply the general characterization \eqref{cod-cr} to specific multifunctions given in some structural forms. The next theorem employs \eqref{cod-cr} and coderivative calculus to characterize the Lipschitz-like property of the solution map \eqref{skkt} to the canonically perturbed KKT system \eqref{kkt}.
\begin{Th}{\bf(Lipschitz-like property of solution maps).}\langlebel{Liplike} Let $(\bar{x},\bar\langlembda)\in S_{KKT}(0,0)$ for the solution map $S_{KKT}$ defined in \eqref{skkt} with $\thetaeta$ taken from \eqref{theta}. Then $S_{KKT}$ is Lipschitz-like around $\big((0,0),(\bar{x},\bar\langlembda)\big)$ if and only if we have the implication
\begin{equation}\langlebel{Lipdesc}
\begin{cases}
\nabla_{xx}^2 L(\bar{x},\bar\langlembda)\xi+\nabla\Phi(\bar{x})^*\eta=0\\
\eta\in\big(D^*\partialartial\thetay\big)(\Phi(\bar{x}),\bar\langlembda)\big(\nabla\Phi(\bar{x})\xi\big)
\end{cases}
\Longrightarrow(\xi,\eta)=(0,0).
\end{equation}
\end{Th}
\begin{proof} Consider the mapping $G$ from \eqref{g} with $\Psi=\nabla_x L$.
We easily deduce from the coderivative definition and the form of $S_{KKT}$ that
\begin{equation}\langlebel{S}
(\xi,\eta)\in D^*S_{KKT}\big((0,0),(\bar{x},\bar\langlembda)\big)(w_1,w_2)\Longleftrightarrow-(w_1,w_2)\in D^*G\big((\bar{x},\bar\langlembda),(0,0)\big)(-\xi,-\eta)
\end{equation}
for all $(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m$ and $(w_1,w_2)\in\mathbb{R}^n\times\mathbb{R}^m$.
Using the structure of $G$ and employing the coderivative sum rule in the equality form from \cite[Theorem~3.9]{m18} yield
\begin{equation}\langlebel{G}
\begin{array}{ll}
D^*G\big((\bar{x},\bar\langlembda),(0,0)\big)(\xi,\eta)&=\left[\begin{array}{c c}\nabla_{xx}^2 L(\bar{x},\bar\langlembda)&-\nabla\Phi(\bar{x})^*\\
\nabla\Phi(\bar{x})&0
\end{array}\right]
\left[\begin{array}{c c}\xi\\\eta\end{array}\right]+\left[\begin{array}{c c}0\\D^*(\partialartial\thetay)^{-1}(\bar\langlembda,\Phi(\bar{x}))(\eta) \end{array}\right]\\\\
&=\left[\begin{array}{c c}\nabla_{xx}^2 L(\bar{x},\bar\langlembda)\xi-\nabla\Phi(\bar{x})^*\eta\\\nabla\Phi(\bar{x})\xi+D^*(\partialartial\thetay)^{-1}(\bar\langlembda,\Phi(\bar{x}))(\eta)
\end{array}\right].
\end{array}
\end{equation}
It follows from \eqref{S} and the coderivative criterion \eqref{cod-cr} that $S_{KKT}$ is Lipschitz-like around $\big((0,0),(\bar{x},\bar\langlembda)\big)$ if and only if we have the implication
\begin{equation*}
(0,0)\in D^*G\big((\bar{x},\bar\langlembda),(0,0)\big)(\xi,\eta)\Longrightarrow(\xi,\eta)=(0,0),
\end{equation*}
which, together with the coderivative representation of $G$ in \eqref{G}, leads us to the characterization \eqref{Lipdesc} of the Lipschitz-like property of the solution map $S_{KKT}$.
\end{proof}\vspace*{0.05in}
Combining finally the obtained characterization of the Lipschitz-like property in Theorem~\ref{Liplike} with some known facts of variational analysis allows us to reveal a relationship between the latter property of the solution map $S_{KKT}$ and its isolated calmness at the same point.
\begin{Th}{\bf(Lipschitz-like property of solution maps implies their isolated calmness).}\langlebel{ll-calm} Let $S_{KKT}$ be the solution map \eqref{skkt} of the canonically perturbed KKT system \eqref{kkt} with the piecewise linear-quadratic term \eqref{theta}, and let $(\bar{x},\bar\langlembda)\in S_{KKT}(0,0)$. If $S_{KKT}$ is Lipschitz-like around $\big((0,0),(\bar{x},\bar\langlembda)\big)$, then it enjoys the isolated calmness property at this point.
\end{Th}
\begin{proof} Assuming that $S_{KKT}$ has the Lipschitz-like property around $\big((0,0),(\bar{x},\bar\langlembda)\big)$, we get implication \eqref{Lipdesc} by Theorem~\ref{Liplike}. On the other hand, we proceed similarly to the proof of Theorem~\ref{Liplike} and get counterparts of the equalities in \eqref{S} and \eqref{G} with replacing the coderivative by the graphical derivative therein. The latter one is due to the easily checkable sum rule for graphical derivatives of summations with one smooth term as in \eqref{g}. Having this, we apply the Levy-Rockafellar criterion of isolated calmness \eqref{grdcr} to the solution map \eqref{skkt} and thus conclude that the isolated calmness of $S_{KKT}$ at $\big((0,0),(\bar{x},\bar\langlembda)\big)$ is equivalent to
\begin{equation}\langlebel{calmS}
\begin{cases}
\nabla_{xx}^2L(\bar{x},\bar\langlembda)\xi+\nabla\Phi(\bar{x})^*\eta=0\\
\eta\in\big(D\partialartial\thetay\big)(\Phi(\bar{x}),\bar\langlembda)\big(\nabla\Phi(\bar{x})\xi\big)
\end{cases}
\Longrightarrow(\xi,\eta)=(0,0).
\end{equation}
Comparing \eqref{Lipdesc} and \eqref{calmS}, we see that the only difference is in the terms involving $(D^*\partialartial\thetay)(\Phi(\bar{x}),\bar\langlembda)$ and $(D\partialartial\thetay)(\Phi(\bar{x}),\bar\langlembda)$. To this end we use the derivative-coderivative relationship from \cite[Theorem~13.57]{rw}, which tells us that the inclusion
\begin{equation*}
(D\partialartial\thetay)(\Phi(\bar{x}),\bar\langlembda)(u)\subset(D^*\partialartial\thetay)(\Phi(\bar{x}),\bar\langlembda)(u)\;\mbox{ for all }\;u\in\mathbb{R}^m
\end{equation*}
holds under the assumptions that are automatically satisfied for the piecewise linear-quadratic function $\thetay$ from \eqref{theta}. This therefore completes the proof of the theorem.
\end{proof}
\end{document}
|
\begin{document}
\title{Efficient Implementation of Baker–Campbell–Hausdorff Formula}
\begin{abstract}
This short paper presents an efficient implementation of the Baker–Campbell–Hausdorff formula for calculating the logarithm of a product of two possibly non-commuting Lie group elements using only Lie algebra terms.
\end{abstract}
\section{Introduction}
Given a Lie group $G$ and its Lie algebra $\mathfrak{g}$, there is an exponential map
$$\exp:\mathfrak{g}\rightarrow G.$$
In a small neighborhood of the identity element $I\in G$, $\exp$ is a smooth bijection and has an inverse map $\log:G\rightarrow \mathfrak{g}$.
It is sometimes very useful to compute the logarithm of a product of two elements in the Lie group near the identity, i.e.\ $Z=\log(\exp{X}\exp{Y})$. In the case that $G$ is commutative, we can solve $Z$ exactly as $X+Y$; however, difficulty arises when $G$ is non-commutative. Our goal is to approximately compute $Z$ up to a given order $N$, which will be defined later.
\section{Dynkin's Explicit Expression for BCH formula}
Due to Eugene Dynkin, the explicit combinatorial expression for the BCH formula is~\cite{jacobson1979lie,dynkin2000calculation}
\begin{equation}
\log(\exp{X}\exp{Y})=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\sum_{\substack{r_1+s_1>0 ,\\ \cdots, \\r_n+s_n>0}}\frac{[X^{r_1}Y^{s_1} \cdots X^{r_n}Y^{s_n}]}{\sum_{i=1}^{n}(r_i+s_i)\cdot\prod_{i=1}^n r_i! s_i!}.
\label{dynkin}
\end{equation}
Here, the sum is performed over all positive integers $n$ and all nonnegative integers $r_i, s_i$ with $r_i+s_i>0$, and
\begin{equation}
{\displaystyle [X^{r_{1}}Y^{s_{1}}\dotsm X^{r_{n}}Y^{s_{n}}]=[\underbrace {X,[X,\dotsm [X} _{r_{1}},[\underbrace {Y,[Y,\dotsm [Y} _{s_{1}},\,\dotsm \,[\underbrace {X,[X,\dotsm [X} _{r_{n}},[\underbrace {Y,[Y,\dotsm Y} _{s_{n}}]]\dotsm ]].}
\end{equation}
For each commutator monomial $C=[X^{r_1}Y^{s_1} \cdots X^{r_n}Y^{s_n}]$, define the \emph{order} $N(C):=\sum_{i=1}^n (r_i+s_i)$. An $N$-th order approximation of $\log(\exp{X}\exp{Y})$ is the summation over all monomial terms with order at most $N$.
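Before turning to the coefficients, it may help to fix the meaning of these nested brackets computationally. The following minimal sketch (ours; purely illustrative) evaluates a right-nested commutator monomial, given as a string of letters $X$ and $Y$ read left to right, for explicit matrices.
\begin{verbatim}
# Illustrative sketch: evaluate the right-nested commutator encoded by a
# string over {'X','Y'}, e.g. "XXY" -> [X,[X,Y]].  A one-letter string is
# the matrix itself.
import numpy as np

def nested_commutator(word, X, Y):
    mats = [X if c == "X" else Y for c in word]
    result = mats[-1]
    for A in reversed(mats[:-1]):
        result = A @ result - result @ A   # [A, result]
    return result

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
print(nested_commutator("XY", X, Y))    # [X, Y], order 2
print(nested_commutator("XXY", X, Y))   # [X, [X, Y]], order 3
\end{verbatim}
Such a routine can later be used to assemble the truncated series once the coefficients of the monomials are available.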
Several difficulties lie ahead. First, each monomial appears multiple times in Dynkin's formula due to the different ways of separating one term into $(r,s)$ pairs, which makes the computation inefficient. It is therefore desirable to come up with a more efficient method for computing the coefficient associated to each term. Second, there are inherently exponentially many terms in $N$ that need to be taken into account. Although this cannot be accelerated to polynomial time, we can use several tricks to make the computation more time and space efficient. Here we focus on the first point. In the next section, we present a more efficient way of calculating the coefficient associated to each monomial term.
\section{Coefficients associated to each monomial}
In this section we focus on computing the coefficient $M(C)$ associated to a given monomial $C$. Note that some monomials in the BCH formula might be linearly dependent so that we can combine the coefficients together; we ignore this issue for now and just focus on the coefficient which arises in the formula itself, i.e.,
$$M(C)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\sum_{r_i, s_i}\frac{1}{\sum_{i=1}^n (r_i+s_i)\cdot\prod_{i=1}^n r_i!s_i!}.$$
where the second summation is over all $n$ pairs $(r_i,s_i)_{i=1}^n$ which give rise to the monomial $C$. Note that by the definition of $N(C)$, $\sum_{i=1}^n (r_i+s_i)=N$ is a fixed number, and
$$N(C)M(C)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}M(C,n),$$
where
$$M(C,n)=\sum_{(r_i,s_i)}\frac{1}{\prod_{i=1}^n r_i!s_i!}.$$
\subsection{Separation into blocks}
Given an $N$th-order monomial $C=[X,[\cdots,[Y,[\cdots,[\cdots,[X,Y]]]]]]$, we can encode it into an $N$-bit binary string $X\cdots Y\cdots\cdots XY$. Here we identify a monomial with its encoding as a string. $n$ pairs of numbers $(r_i,s_i)_{i=1}^n$ give rise to $C$ if and only if $C$ is exactly the concatenation $X^{r_1}Y^{s_1}\Vert X^{r_2}Y^{s_2}\Vert\cdots\Vert X^{r_n}Y^{s_n}$. We call such $(r_i,s_i)_{i=1}^n$ a \emph{partition} of the string $C$, and $X^{r_i}Y^{s_i}$ the $i$th substring with respect to the partition $(r_i,s_i)_{i=1}^{n}$. Since each substring takes the form $X^{r_i}Y^{s_i}$, whenever there is a descending edge $YX$ in the original string $C$, that $Y$ and that $X$ cannot lie in the same substring. This enables us to separate the string $C$ into blocks by descending edges, e.g.,
$$C=\underbrace{YY}_{\text{block }1}|\underbrace{XXY}_{\text{block }2}|\underbrace{XY}_{\text{block }3}|\underbrace{XXYY}_{\text{block }4}|\underbrace{X}_{\text{block } 5}.$$
Denote by $L(C)$ the number of blocks in $C$ separated by descending edges. Then $$C=X^{u_1}Y^{v_1}\cdots X^{u_{L(C)}}Y^{v_{L(C)}}$$ can be uniquely specified by $L(C)$ pairs of numbers $(u_i,v_i)_{i=1}^{L(C)}$, where $u_i,v_i>0$ except possibly for $u_1$ and $v_{L(C)}$. It is clear that each block contains at least one substring, yet no substring can go across blocks. Given that each substring must also be nonempty, we know that the number of substrings partitioning a monomial $C$ is bounded between $L(C)$ and $N(C)$, i.e.,
$$N(C)M(C)=\sum_{n=L(C)}^{N(C)}\frac{(-1)^{n-1}}{n}M(C,n).$$
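As a small illustrative sketch (ours; the helper name is arbitrary), the block decomposition can be computed by scanning the encoded string for descending edges $YX$:
\begin{verbatim}
# Illustrative sketch: split a monomial string at descending edges "YX" and
# return the pairs (u_i, v_i), block i being X^{u_i} Y^{v_i}.
def blocks(word):
    pieces, start = [], 0
    for j in range(1, len(word)):
        if word[j - 1] == "Y" and word[j] == "X":   # descending edge
            pieces.append(word[start:j])
            start = j
    pieces.append(word[start:])
    return [(p.count("X"), p.count("Y")) for p in pieces]

# the example string YY|XXY|XY|XXYY|X from the text:
print(blocks("YYXXYXYXXYYX"))
# [(0, 2), (2, 1), (1, 1), (2, 2), (1, 0)], so L(C) = 5
\end{verbatim}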
On the other hand, since no substrings can go across different blocks, it suffices to consider each block separately. Suppose that $n_i$ substrings are allocated to block $i$ with $1\leq i\leq L(C)$; then, for a fixed sequence $(n_i)_{i=1}^{L(C)}$, how these $n_i$ substrings are allocated inside block $i$ is independent of the allocation inside the other blocks, so we can simplify the expression of $M(C,n)$ to
$$M(C,n)=\sum_{n_1+\cdots +n_{L(C)}=n}\prod_{i=1}^{L(C)}\left(\sum_{(r,s)}\frac{1}{\prod r_j!s_j!}\right),$$
where the inner summation is only over all partitions of block $i$ into $n_i$ substrings. Note that each block $i$ takes the form $X^{u_i}Y^{v_i}$, thus it can be identified by a pair $(u_i,v_i)$. Furthermore, the inner summation only depends on the current block and the number of substrings, so we can denote it as $g(u_i,v_i,n_i)$ and the total summation then becomes
$$M(C,n)=\sum_{n_1+\cdots+n_{L(C)}=n}\prod_{i=1}^{L(C)}g(u_i,v_i,n_i).$$
\subsection{Contribution from individual blocks}
Now let us compute $$g(u_i,v_i, n_i)=\sum_{(r,s)}\frac{1}{\prod_j r_j!s_j!}.$$ Each term in the summation corresponds to one particular partition $(r_j,s_j)_{j=1}^{n_i}$ of $X^{u_i}Y^{v_i}$ into $n_i$ substrings. Since the block being separated takes the form $X^{u_i}Y^{v_i}$, we know that at most one substring contains both $X$ and $Y$, or equivalently, at most one pair $(r_j,s_j)$ inside this block has both entries nonzero. Furthermore, given such a partition $(r_j, s_j)$, there exists a partition of the block into $n_i+1$ pieces obtained by refining the substring containing both $X$ and $Y$ into two substrings, one consisting of only $X$s and the other of only $Y$s. One can observe that the contributions of coefficients from these two partitions are identical. In the case that both $u_i$ and $v_i$ are nonzero, such a correspondence is one-to-one, meaning that every partition into $n_i$ substrings with only $X^{r_j}$'s and $Y^{s_j}$'s can be mapped to a partition into $n_i-1$ substrings by merging the middle two substrings. We will deal with the case that either $u_i$ or $v_i$ is zero later. Let
$$h(u_i,v_i,n_i)=\sum_{(r,s)}\frac{1}{\prod r_j!s_j!}$$
be the summation over all partitions without substrings containing both $X$ and $Y$, then
$$g(u_i,v_i,n_i)=h(u_i,v_i,n_i)+h(u_i,v_i,n_i+1).$$
Since no string contains both $X$ and $Y$, we can enumerate over the number of substrings partitioning $X^{u_i}$ and $Y^{v_i}$, then
$$h(u_i,v_i,n_i)=\sum_{0<n_x<n_i}\sum_{\substack{r_1,\cdots, r_{n_x}>0\\\sum r_j=u_i}}\sum_{\substack{s_{n_x+1},\cdots, s_{n_i}>0\\\sum s_j=v_i}}\frac{1}{\prod_{j=1}^{n_x} r_j!\prod_{j=n_x+1}^{n_i}s_j!}.$$
Denote $f(u,n)=\sum_{\substack{r_1,\cdots, r_n>0,\\\sum_j{r_j}=u}}\frac{1}{\prod_{j=1}^n r_j!}$, then we have
$$h(u_i,v_i,n_i)=\sum_{0<n_x<n_i}f(u_i,n_x)\,f(v_i,n_i-n_x).$$
Rewrite $f(u,n)$ as
$$f(u,n)=\frac{1}{u!}\sum_{\substack{r_1,\cdots, r_n>0,\\\sum_j{r_j}=u}}\frac{u!}{\prod_{j=1}^n r_j!}=\frac{1}{u!}\sum_{\substack{r_1,\cdots, r_n>0,\\\sum_j{r_j}=u}}\binom{u}{r_1,\cdots, r_n}.$$
By the multinomial theorem, we know that $\sum_{\substack{r_1,\cdots, r_n\geq0,\\\sum_j{r_j}=u}}\binom{u}{r_1,\cdots, r_n}=n^u$. This summation is almost the term we want, except that it has extra terms where some of the $r_j$s are zero. By the inclusion-exclusion principle, we have
$$u!f(u,n)=\sum_{S\subseteq [n]}(-1)^{|S|}\sum_{\substack{r_j\geq 0,j\notin S,\\r_j=0,j\in S,\\ \sum_j{r_j}=u}}\binom{u}{r_1,\cdots, r_n}=\sum_{S\subseteq [n]}(-1)^{|S|}(n-|S|)^u=\sum_{z=0}^n(-1)^z\binom{n}{z}(n-z)^u.$$
We can express $f(u,n)$ more concisely in terms of finite difference as
$$f(u,n)=\frac{1}{u!}\Delta^n_x x^u|_{x=0}.$$
The case where either $u_i$ or $v_i$ is zero can be similarly calculated; we have
$$g(u_i,0,n_i)=g(0,u_i,n_i)=f(u_i,n_i).$$
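As a quick consistency check (our own sketch, not needed for the derivation), the finite-difference expression for $f(u,n)$ can be compared against the defining sum over positive compositions of $u$ into $n$ parts:
\begin{verbatim}
# Illustrative sketch: f(u,n) computed two ways -- via inclusion-exclusion
# (finite differences) and directly from positive compositions of u.
from math import comb, factorial
from itertools import combinations

def f_findiff(u, n):
    return sum((-1) ** z * comb(n, z) * (n - z) ** u
               for z in range(n + 1)) / factorial(u)

def f_direct(u, n):
    total = 0.0
    for cuts in combinations(range(1, u), n - 1):  # compositions of u into n parts
        parts = [b - a for a, b in zip((0,) + cuts, cuts + (u,))]
        denom = 1
        for r in parts:
            denom *= factorial(r)
        total += 1.0 / denom
    return total

for u in range(1, 7):
    for n in range(1, u + 1):
        assert abs(f_findiff(u, n) - f_direct(u, n)) < 1e-12
print("finite-difference and direct evaluations of f agree")
\end{verbatim}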
\subsection{Computing the overall coefficient}
Given all blocks, we are now ready to compute the coefficient given a monomial $C$. We first divide $C$ into blocks $(u_i,v_i)_{i=1}^{L(C)}$. Then
$$M(C)=\underbrace{\frac{1}{N(C)\cdot\prod_{i=1}^{L(C)}u_i!v_i!}}_{\text{overall constant}}\cdot \underbrace{\sum_{n=L(C)}^{N(C)}}_{S_1}\frac{(-1)^{n+1}}{n}\underbrace{\sum_{\substack{n_1,\cdots, n_{L(C)}>0\\ \sum_{i=1}^{L(C)}n_i=n}}}_{S_2}\underbrace{\prod_{i=1}^{L(C)}}_{P_1}\underbrace{g'(u_i,v_i,n_i)}_{T},$$
$$g'(u_i,v_i,n_i)=\begin{cases}\sum_{0<n_x<n_i}f'(u_i,n_x)\,f'(v_i,n_i-n_x)+\sum_{0<n_x<n_i+1}f'(u_i,n_x)\,f'(v_i,n_i-n_x+1), & u_i,v_i>0,\\ f'(u_i,n_i), & v_i=0,\\ f'(v_i,n_i), & u_i=0,\end{cases}$$
$$f'(u,n)=\Delta^n_x x^u|_{x=0}=\sum_{z=0}^n(-1)^z\binom{n}{z}(n-z)^u.$$
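Assembling the pieces above, a compact sketch (ours; names are illustrative, and exact rational arithmetic is used only for convenience) computes $M(C)$ directly from the string encoding of $C$. For instance it returns $M(XY)=1/4$ and $M(YX)=-1/4$, which combine (using $[Y,X]=-[X,Y]$) to the familiar coefficient $1/2$ of $[X,Y]$; similarly $M(XXY)-M(XYX)=1/36+1/18=1/12$ recovers the coefficient of $[X,[X,Y]]$ in the BCH series.
\begin{verbatim}
# Illustrative sketch: compute M(C) from the string encoding of C using the
# block decomposition, g', and f' defined above (exact rational arithmetic).
from math import comb, factorial
from fractions import Fraction
from itertools import product

def fprime(u, n):                      # f'(u,n) = Delta^n x^u at x = 0
    return sum((-1) ** z * comb(n, z) * (n - z) ** u for z in range(n + 1))

def gprime(u, v, n):
    if v == 0:
        return fprime(u, n)
    if u == 0:
        return fprime(v, n)
    return (sum(fprime(u, k) * fprime(v, n - k) for k in range(1, n))
            + sum(fprime(u, k) * fprime(v, n - k + 1) for k in range(1, n + 1)))

def blocks(word):                      # as in the earlier sketch
    pieces, start = [], 0
    for j in range(1, len(word)):
        if word[j - 1] == "Y" and word[j] == "X":
            pieces.append(word[start:j])
            start = j
    pieces.append(word[start:])
    return [(p.count("X"), p.count("Y")) for p in pieces]

def M(word):
    bl, N = blocks(word), len(word)
    const = Fraction(1, N)
    for u, v in bl:
        const /= factorial(u) * factorial(v)
    total = Fraction(0)
    # allocate n_i in {1,...,u_i+v_i} substrings to block i, independently
    for ns in product(*[range(1, u + v + 1) for (u, v) in bl]):
        n = sum(ns)
        term = 1
        for (u, v), ni in zip(bl, ns):
            term *= gprime(u, v, ni)
        total += Fraction((-1) ** (n + 1), n) * term
    return const * total

print(M("XY"), M("YX"), M("XXY"), M("XYX"))   # 1/4, -1/4, 1/36, -1/18
\end{verbatim}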
\subsection{*Complexity analysis}
The main complexity of computing such a coefficient comes from the nested function calls $S_1$, $S_2$, $P_1$ and the subroutine $T$ computing $g'(u_i,v_i,n_i)$, since the overall constant needs to be computed only once and thus does not contribute much to the complexity. $P_1$ has $L(C)$ factors; $T$ can either be computed from scratch in $O(N^3)$ time, or be computed in $O(N)$ time from a preprocessed table storing the $f(u,n)$'s with $O(N^2)$ extra memory, or in $O(1)$ time from a preprocessed table storing the $g(u,v,n)$'s with $O(N^3)$ extra memory. Putting $S_2$ and $S_1$ together, we are essentially summing over all possible numbers of substrings inside each block. A block of length $N_i$ can be partitioned into $1$ to $N_i$ substrings, and the number of substrings inside this block is independent of the numbers of substrings inside the other blocks. Altogether, there are $\prod_{i=1}^{L(C)}N_i\leq e^{N/e}$ summands to take into consideration. Putting everything together, the complexity of computing $M(C)$ for a monomial $C$ can be reduced to
$O(e^{N/e})$ with $O(N^3)$ extra memory. Since the complexity of computing the coefficient dominates the cost of computing the commutator itself, enumerating over all strings of length up to $N$ gives a total running time of $O(2^N\cdot e^{N/e})=O(2^{1.53N})$. A more careful analysis might give a tighter bound (numerical evidence suggests that the running time scales as $O(2^{1.47N})$), but for now it is not our main focus. One can see that this is a big improvement over naively enumerating all possible partitions for each monomial $C$, which would take $\Omega(2^{2N})$ time.
\end{document}
|
\begin{document}
\fancyhead{}
\renewcommand{0pt}{0pt}
\fancyfoot{}
\fancyfoot[LE,RO]{
\thepage}
\setcounter{page}{1}
\title[Fibonacci Numbers and Identities ]{Fibonacci Numbers and Identities}
\author{Cheng Lien Lang}
\address{Department of Applied Mathematics\\
I-Shou University\\
Kaohsiung, Taiwan\\
Republic of China}
\email{[email protected]}
\thanks{}
\author{Mong Lung Lang}
\address{ }
\email{[email protected]}
\begin{abstract}
By investigating a recurrence relation about functions, we first give
alternative proofs for various identities on Fibonacci numbers and Lucas numbers, and then,
make certain well known identities {\em visible} via certain trivalent graph
associated to the recurrence relation.
\end{abstract}
\maketitle
\section{Introduction}
A function $x(n)$ defined over $\Bbb N \cup \{0\}$ is called an $\mathcal F$-function
if $x(n)$ satisfies the following recurrence relation.
$$ x(n+3) = 2(x(n+2) + x(n+1)) - x(n).\eqno(1.1)$$
One sees easily that the following are ${\mathcal F}$-functions :
$$x(n) = (-1)^n, \,\,x(n) = F_{n }^2,\,\,
x(n) = F_{n+r}F_{n},\,\,x(n) = F_{2n}, \eqno(1.2)$$
$$x(n) = L_{n}^2,\,\,
x(n) = L_{n+r}L_{n},\,\,x(n) = L_{2n}, x(n) = F_nL_{n+r},\eqno(1.3)$$
where $r \in \Bbb Z$ and $F_n,\, L_n$ are the $n$-th Fibonacci and Lucas numbers
respectively.
Note that sum and difference
of $\mathcal F$-functions are $\mathcal F$-functions.
A search of the literature reveals that a great many identities involve $\mathcal F$-functions
only.
For instance, all the terms in the Cassini's identity $F_{n-1}F_{n+1}-F_n^2 = (-1)^n$ are
$\mathcal F$-functions. For our convenience, we shall call such an identity an $\mathcal F$-identity.
Identities with other recurrence relations are less frequent.
The main purpose of this article is to
avoid the intricate case-by-case analysis, thereby obtaining
a unified proof of the $\mathcal F$-identities.
Since these identities involve $\mathcal F$-functions only, our proof will make use of (1.1)
and (i), (ii) of the following only.
\begin{enumerate}
\item[(i)] (1.2) and (1.3) of the above are $\mathcal F$-functions,
\item[(ii)] $F_{n+2} = F_{n+1}+ F_n$, $F_{-m} = (-1)^{m+1}F_m,$
$L_{n+2} = L_{n+1}+ L_n$, $L_{-m}= (-1)^mL_m$.
\end{enumerate}
Note that our proof can be applied easily to all $\mathcal F$-identities.
Identities involving other recurrence relations, such as (iii) and (iv) of the
following, will be discussed in section 6.
\begin{enumerate}
\item[(iii)] $ A(n+2)= -A(n+1)+ A(n) $,
\item[(iv)] $A(n+3) = -A(n+2)+A(n+1) +A(n)$.
\end{enumerate}
The rest of the article is organised as follows. In section 2 we give some basic properties about
$\mathcal F$-function. Section 3 gives alternative proofs of the well known Catalan's identity
and Melham's identity.
Section 4 lists a few more identities (including d'Ocagne's, Tagiure's and
Gelin-Ces\`{a}ro
identities) that can be proved by applying our technique presented in
section 3. They are the $\mathcal F$-identities involving the functions we listed in (1.2) and (1.3). In other
words, they use functions in (1.2) and (1.3) as building blocks (see Lemma 2.1). Since products of
$\mathcal F$-functions are not necessarily $\mathcal F$-functions, our idea cannot be applied to all
identities (see Appendix B). Section 5 is devoted to the possible visualisation of identities via the
recurrence relation (1.1). After all, there is nothing to prove if one cannot {\em see} the
identities in the first place. The last section gives a very brief discussion about
identities that involve other recurrence relations.
\section{Basic Properties about $\mathcal F$-functions}
\noindent {\bf Lemma 2.1.} {\em Functions defined in $(1.2)$ and $(1.3)$ are $\mathcal F$-functions.
Let $A(n)$ and $B(n)$ be $\mathcal F$-functions and let $r_0\in \Bbb Z$ be fixed. Then
$X(n) = A(n +r_0), Y(n) = r_0 A(n)$ and $A(n)\pm B(n)$ are $\mathcal F$-functions. }
\noindent {\em Proof.} Let $x(n)$ be given as in (1.2) or (1.3). To show that $x(n)$ is an $\mathcal F$-function, it suffices
to show that $x(n+3) = 2(x(n+2)+x(n+1))-x(n)$, which can be verified easily. \qed
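The verification mentioned in the proof is also easy to carry out numerically. The following short sketch (ours, included only as an illustration) checks relation (1.1) for all the functions listed in (1.2) and (1.3) with a sample value of $r$.
\begin{verbatim}
# Illustrative check of (1.1) for the functions in (1.2) and (1.3).
def fib(n):                      # F_0 = 0, F_1 = 1, F_{-m} = (-1)^{m+1} F_m
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):                    # L_n = F_{n-1} + F_{n+1}
    return fib(n - 1) + fib(n + 1)

r = 4
candidates = [lambda n: (-1) ** n,
              lambda n: fib(n) ** 2,
              lambda n: fib(n + r) * fib(n),
              lambda n: fib(2 * n),
              lambda n: lucas(n) ** 2,
              lambda n: lucas(n + r) * lucas(n),
              lambda n: lucas(2 * n),
              lambda n: fib(n) * lucas(n + r)]
for x in candidates:
    assert all(x(n + 3) == 2 * (x(n + 2) + x(n + 1)) - x(n) for n in range(20))
print("all listed functions satisfy the recurrence (1.1)")
\end{verbatim}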
\noindent {\bf Lemma 2.2.} {\em
Let $A(n)$ and $B(n)$ be $\mathcal F$-functions. Then
$A(n)= B(n)$ if and only if
$A(k)= B(k)$ for
$k = 0,1$ and $2$. }
Note that the fact that the following functions are $\mathcal F$-functions has been treated
as a set of identities in the literature ([HB], identities (31)-(34) of [L]).
\noindent {\bf Example 2.3.} By Lemma 2.1,
$F_{n+3}^2$, $F_{n+3}F_{n+4}$, $L_{n+3}^2$ and $ F_{n+3}L_{n+3}$
are $\mathcal F$-functions.
The following lemma is straightforward and will be used in sections 3 and 6.
\noindent {\bf Lemma 2.4.} {\em Let $A(n)$ and $ B(n)$ be functions defined over $\Bbb N\cup \{0\}$.
Suppose that both $A(n)$ and $B(n)$ satisfy either
\begin{enumerate}
\item[(i)] $x(n+2) = -x(n+1)+ x(n)$, or
\item[(ii)] $x(n+3) = -2x(n+2) +2x(n+1) +x(n)$.
\end{enumerate}
Then $A(n) = B(n)$ if and only if $A(k)=B(k)$ for $k =0, 1,2$.}
\subsection{Discussion} Let $\{x_n\}$ be a sequence satisfying the recurrence relation $x_{n+2} = x_{n+1} + x_{n}$. Then $A(n) = x_{2n}$, $B(n) = x_nx_{n+r}$ and $C(n) = x_n^2$ are $\mathcal F$-functions.
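As a purely numerical illustration (not part of the proofs), the following short Python sketch checks that the functions listed in (1.2) and (1.3) satisfy the recurrence (1.1). The helper names \texttt{fib}, \texttt{lucas} and \texttt{is\_F\_function} are ad hoc names introduced only for this sketch, and the same \texttt{fib} and \texttt{lucas} helpers are reused in the later sketches.
\begin{verbatim}
# Numerical sanity check for Lemma 2.1 (illustrative only).
def fib(n):
    # Fibonacci numbers, extended to negative indices by F_{-m} = (-1)^{m+1} F_m.
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    # Lucas numbers via L_n = F_{n-1} + F_{n+1}.
    return fib(n - 1) + fib(n + 1)

def is_F_function(x, N=20):
    # Check x(n+3) = 2(x(n+2) + x(n+1)) - x(n) for 0 <= n < N.
    return all(x(n + 3) == 2 * (x(n + 2) + x(n + 1)) - x(n) for n in range(N))

r = 5
examples = [lambda n: (-1) ** n,
            lambda n: fib(n) ** 2,
            lambda n: fib(n + r) * fib(n),
            lambda n: fib(2 * n),
            lambda n: lucas(n) ** 2,
            lambda n: fib(n) * lucas(n + r)]
assert all(is_F_function(x) for x in examples)
\end{verbatim}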
\section{An alternative proof of Catalan's identity}
In [H], Howard studied generalised Fibonacci sequences and proved that Catalan's identity is equivalent to an identity discovered and proved by
Melham ([M]) (see section 3.1). In the following we give our alternative proof, which uses Lemmas 2.1 and
2.2 only.
\noindent {\bf Lemma 3.1.} $ 4(-1)^{3-r} + F_{r+3}F_{r-3} - F_{r}^2 =0.$
\noindent {\em Proof.}
We assume that $r \ge 0$; the case $r \le 0$ can be dealt with similarly. Let $A(r) = 4(-1)^{3-r} + F_{r+3}F_{r-3} - F_{r}^2 $. By Lemma 2.1, $A(r)$
is an $\mathcal F$-function (in $r$).
A direct check gives $A(0)=A(1)=A(2)=0$, so by Lemma 2.2, $A(r) = 0$ for all $r$. This completes the proof of the lemma. \qed
\noindent {\bf Remark.} Any $\mathcal F$-identity with one variable $n$
can be proved by applying the proof of
Lemma 3.1. Cassini's identity is one instance.
\noindent {\bf Theorem 3.2 (Catalan's Identity).} {\em
$F_n^2 -F_{n+r}F_{n-r} = (-1)^{n-r} F_r^2$.}
\noindent {\em Proof.} Recall first that
$F_{-m} = (-1)^{m+1}F_m$. As a consequence, we may assume without loss of generality that
$ n , r \ge 0$. As a matter of fact, we may assume that $n \ge 1$ as the case $n =0$ is trivial.
Let $A(n) =F_n^2 -F_{n+r}F_{n-r}$, $B(n)= (-1)^{n-r} F_{r}^2$.
By Lemma 2.1, both $A(n)$ and $B(n)$
are $\mathcal F$-functions (in $n$).
Note that $$A(1) = 1-F_{1+r}F_{1-r}, \,\,A(2) = 1-F_{2+r}F_{2-r},\,\, A(3) = 4-F_{3+r}F_{3-r},
\eqno (3.1)$$
and $$B(1) = (-1)^{1-r}F_{r}^2, \,\,B(2) = (-1)^{2-r}F_{r}^2,\,\,
B(3) = (-1)^{3-r}F_{r}^2.\eqno(3.2)$$
\noindent One sees easily that if $r \le 3$, then $A(n) = B(n)$ for $n=1,2,3$.
By Lemma 2.2, we have $A(n) = B(n)$. Hence the theorem is proved.
We shall therefore assume that $r\ge 4$.
Recall that
$F_{-m} = (-1)^{m+1}F_m$. This allows us to rewrite $A(3)$ into
$A(3) = 4 +(-1)^{3-r} F_{r+3}F_{r-3}.$
Hence $$A(3)-B(3) = 4 + (-1)^{3-r}F_{r+3}F_{r-3} - (-1)^{3-r}F_{r}^2.\eqno(3.3)$$
\noindent By Lemma 3.1 (multiplied through by $(-1)^{3-r}$), $A(3) = B(3)$.
One can show similarly that $A(1) = B(1)$ and $A(2) = B(2)$.
Applying Lemma 2.2, we have
$A(n) = B(n)$
for all $n $. This completes the proof of the theorem. \qed
\noindent {\bf Remark.} Any $\mathcal F$-identity with two variables $m$ and $n$
can be proved by applying the proof of Theorem 3.2.
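As a numerical illustration of Theorem 3.2 (reusing the \texttt{fib} helper from the sketch in section 2; the function names below are again ad hoc), one may check the identity directly on a range of values of $n$ and $r$. By Lemma 2.2, agreement at three consecutive values of $n$ already forces agreement everywhere.
\begin{verbatim}
# Numerical check of Catalan's identity (Theorem 3.2); assumes fib() from the
# sketch in section 2.  The sign (-1)^(n-r) depends only on the parity of n-r.
def catalan_lhs(n, r):
    return fib(n) ** 2 - fib(n + r) * fib(n - r)

def catalan_rhs(n, r):
    return (-1) ** ((n - r) % 2) * fib(r) ** 2

assert all(catalan_lhs(n, r) == catalan_rhs(n, r)
           for n in range(15) for r in range(15))
\end{verbatim}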
\subsection{Melham's Identity.} In [M], Melham proved among some very
general results the identity $F_{n+r+1}^2 +F_{n-r}^2
= F_{2r+1}F_{2n+1}$. We shall give our alternative proof as follows.
Denote by $A(n)$ and $B(n)$ the left and right hand sides of the identity.
By Lemma 2.1,
$A(n)$ and $B(n)$ are $\mathcal F$-functions (in $n$).
The cases $n = 0,1$ and 2 of the above identity are given by
$$F_{r+1}^2 + F_{-r}^2= F_{2r+1} F_1,\,\,
F_{r+2}^2 + F_{1-r}^2= F_{2r+1} F_3,\,\,
F_{r+3}^2 + F_{2-r}^2= F_{2r+1} F_5.\eqno(3.4)$$
\noindent By Lemma 2.1, the functions in (3.4) are $\mathcal F$-functions
(in $r$),
and the identities can be verified by applying Lemma 2.2. Consequently,
we have $A(0)= B(0), A(1)=B(1)$ and $A(2)= B(2)$. By Lemma 2.2, we have
$A(n) = B(n)$ for all $n$.
\subsection{Discussion.} Our method can be generalised to functions such as $x(n)
= F_n^3, y(n) = F_{3n}$ which satisfy the recurrence relation (see Appendix B)
$$x(n+4) = 3x(n+3)+6x(n+2) -3x(n+1) -x(n).\eqno(3.5)$$
\section{ More Identities}
The purpose of this section is to list a few identities we found in the literature
that can be proved by applying Lemmas 2.1 and 2.2.
\subsection{ d'Ocagne's Identity} The proof we presented in section 3 can be applied
to all $\mathcal F$-identities. A search
of the literature reveals that there are many such identities. However, as
the identities may be described in different manners, it is important to obtain
equivalent forms of the identities. Take d'Ocagne's identity
for instance. It is, on many occasions, given as follows (see [W]).
$$F_mF_{n+1} - F_nF_{m+1} = (-1)^n F_{m-n}.\eqno(4.1)$$
\noindent A first look at the left and right hand sides does not reveal the fact
that they are $\mathcal F$-functions. However, one has the following.
Let $r= m-n$. Then (4.1) can be rewritten as
$$F_{n+1}F_{n+r}-F_nF_{n+r+1} = (-1)^nF_{r},\eqno(4.2)$$
\noindent
where both the left and right hand side in (4.2) are $\mathcal F$-functions in terms of $n$.
One may now apply the proof of Theorem 3.2 to give a proof of (4.2). As the argument is
essentially identical, we do not include it here.
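A numerical check of (4.2), again reusing the \texttt{fib} helper from the sketch in section 2, is equally straightforward; it is only a sanity check of the identity, not a proof.
\begin{verbatim}
# Numerical check of the rewritten d'Ocagne identity (4.2).
assert all(fib(n + 1) * fib(n + r) - fib(n) * fib(n + r + 1)
           == (-1) ** n * fib(r)
           for n in range(15) for r in range(-10, 11))
\end{verbatim}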
\subsection{Some more identities} A search of the literature turns up many
identities that can be verified by Lemmas 2.1 and 2.2
(for instance, out of the 44 identities given by Long [L], 35 of them
involve $\mathcal F$-functions). We shall list a few which we pick mainly from [W]
($(c1)$-$(c8)$, $(c11)$, $(c12)$, $(d1)$-$(d6)$).
{\small
$\begin{array} {lllrrr}
\\
(c1) &F_{n+a}F_{n+b}-F_nF_{n+a+b}= (-1)^nF_aF_b & :& F_{2n}= F_{n+1}^2-F_{n-1}^2 & (d1)\\
\\
(c2) & F_{n+1}^2 = 4F_n F_{n-1}+F_{n-2}^2 & : & F_{2n+1}= F_{n+1}^2+F_{n}^2 & (d2) \\
\\
(c3) & L_n^2 -5F_n^2 = 4(-1)^n & :& F_{n+2}F_{n-1}= F_{n+1}^2 -F_{n}^2& (d3)\\
\\
(c4) & F_{n-1}F_{n+1}-F_n^2 = (-1)^{n-1} & :& F_{n}^2 - F_{n-2}F_{n+2} = (-1)^n& (d4)\\
\\
(c5) &F_mF_n = \frac{1}{5}(L_{m+n} -(-1)^nL_{m-n}) & :& \sum_{k=1}^n F_k^2 = F_n F_{n+1}& (d5)\\
\\
(c6) & F_n^2 = \frac{1}{5}( L_{2n}-2(-1)^n) & :& F_n^2 -F_{n-1}F_{n+1} = (-1)^{n-1}& (d6)\\
\\
(c7) & F_{n+m} = F_{n-1}F_m +F_nF_{m+1} & :& F_n F_{n+3} = F_{n+1} F_{n+2} +(-1)^{n-1}& (d7)\\
\\
(c8)&F_{m+n} = \frac{1}{2}(F_mL_n+L_mF_n) & :& F_n^2 -F_{n-1}^2 = F_n F_{n-1} +(-1)^{n-1}& (d8)\\
\\
(c9) &F_{n+k+1}^2+F_{n-k}^2 =F_{2k+1}F_{2n+1}& :& F_{2n+1}+(-1)^n =F_{n-1}F_{n+1} +F_{n+1}^2& (d9)\\
\\
(c10) &L_{n+k+1}^2+L_{n-k}^2 =5F_{2k+1}F_{2n+1} &:& L_{2n+1}-F_{n+1}^2- A=(-1)^{n-1}& (d10)\\
\\
(c11) &F_n^2 +(-1)^{n+r-1} F_r^2 = F_{n-r}F_{n+r} &:& L_{n-1}^2-F_{n-4}F_n-F_nF_{n+1}=F_{n-2}^2& (d11)\\
\\
(c12) & F_n^4 -F_{n-2}F_{n-1}F_{n+1}F_{n+2} = 1 &:& F_{2n+1} = F_{n+3}F_n-F_{n+1}F_{n-1}& (d12)\\
\end{array}
$}
\noindent where $A = (L_n^2-F_{n-3}F_{n+1})+F_{2n-2}$ and $L_n$ is the $n$-th Lucas number.
(d10) and (d11) are less standard. We have decided to include them in the table as they are
{\em visible} via a certain trivalent graph (see section 5).
\noindent {\em Proof.} We note first that functions in $(c12)$ are not $\mathcal F$-functions but
the identity can be proved by applying $(c4)$ and $(d4)$.
In $(c5), \,(c7)$ and $(c8)$, one needs to rewrite the identities to see that
the functions involved are $\mathcal F$-functions (let $m = n+r$).
One may now apply Lemmas 2.1 and 2.2 and our proof presented in Theorem
3.2 to verify these identities. \qed
\noindent {\bf Remark.}
As identities may be described differently, the technique of rewriting identities into
equivalent forms is crucial (see (4.1) and (4.2)).
$(c9)$ and $(c10)$
were first found and proved by Melham ([M1]).
$(c12)$ is the Gelin-Ces\`{a}ro identity.
\subsection{Discussion.} Note that in our proof, we do not use any existing identities such as
Binet's formula or any identities listed in [W] except (1.1), (1.2) and (1.3) of this
article, which is what we promised in the introduction. Note also that one has to apply Lemma
2.2 three times to prove identity $(c1)$, known as Tagiuri's identity.
\section{How Far can (1.1) go?}
We have demonstrated that the recurrence relation (1.1) can be used to verify various
identities. In this section, we will present a trivalent graph (see the
graph given in Appendix A) which is closely related
to (1.1) that enables us to {\em visualise} identities in the following.
Let $e_3,e_2,e_1$ be vectors
placed in the following
trivalent graph and let $e_4$ be the vector given by
$ e_4 = 2(e_3+e_2)-e_1.$ Such a vector $e_4$ is said to be
$\mathcal F$-generated by $e_3$, $e_2$, and $e_1$ (in this order).
For our convenience, we use the following notation for the vector
$e_4$ :
$$ e_4 = \left < e_3,e_2 :e_1\right > = 2(e_3+e_2)-e_1.\eqno(5.1)$$
\noindent
Note that $ \left < e_3,e_2 :e_1\right > \ne \left < e_2,e_1 :e_3\right > \ne
\left < e_1,e_3:e_2\right > $.
Note also that (5.1) can be viewed as a generalisation (in the form of vectors) of
the recurrence relation (1.1). We may construct an infinite sequence of vectors
given as follows.
$$e_1,e_2,e_3, e_4 = 2(e_3+e_2)-e_1, \cdots, e_{n+1} = 2(e_n +e_{n-1})-e_{n-2}, \cdots . \eqno(5.2)$$
\begin{center}
\begin{picture}(60,20)
\multiput(2,0)(30,0){1}{\line(1,0){60}}
\multiput(2,0)(60,0){1}{\line(-1,1){20}}
\multiput(2,0)(60,0){1}{\line(-1,-1){20}}
\multiput(82,-20)(60,0){1}{\line(-1,1){20}}
\multiput(82,20)(30,30){1}{\line(-1,-1){20}}
\put(-30, -3){\small $e_1$}
\put(85, -3){\small $x = 2(e_3+e_2)-e_1$}
\put(30, 15){\small $ e_2$}
\put(30, -18){\small $e_3$}
\end{picture}
\end{center}
\noindent
Denote by $F(e_1,e_2,e_3)$ the above sequence. In the
case where
$\{e_1, e_2, e_3\}$ is the canonical basis of $\Bbb R^3$,
the first few vectors are given as follows:
$e_1= (1,0,0),\, e_2=(0,1,0),\, e_3= (0,0,1),\, e_4=( -1,2,2),\, e_5 = (-2,3,6),\cdots$.
Note that $e_2, e_4, e_6, \cdots , e_{2n} $ take the top half of the graph and
$e_1, e_3, e_5, \cdots , e_{2n+1}$ take the bottom half of the graph.
\begin{center}
\begin{picture}(200,20)
\multiput(-30,0)(0,0){1}{\line(0,1){20}}
\multiput(-30,0)(0,0){1}{\line(-3,-1){30}}
\multiput(-30,0)(0,0){1}{\line(3,-1){30}}
\multiput(0,-10)(0,0){1}{\line(0,-1){20}}
\multiput(0,-10)(0,0){1}{\line(3,1){30}}
\multiput(30,0)(0,0){1}{\line(3,-1){30}}
\multiput(30,0)(0,0){1}{\line(0,1){20}}
\multiput(60,-10)(0,0){1}{\line(0,-1){20}}
\multiput(60,-10)(0,0){1}{\line(3,1){30}}
\multiput(90,0)(0,0){1}{\line(3,-1){30}}
\multiput(90,0)(0,0){1}{\line(0,1){20}}
\multiput(120,-10)(0,0){1}{\line(3,1){30}}
\multiput(120,-10)(0,0){1}{\line(0,-1){20}}
\multiput(150,0)(0,0){1}{\line(0,1){20}}
\multiput(150,0)(0,0){1}{\line(3,-1){30}}
\multiput(180,-10)(0,0){1}{\line(0,-1){20}}
\multiput(180,-10)(0,0){1}{\line(3,1){30}}
\put( -2, 10){\tiny$e_2$}
\put( -32, -20){\tiny$e_1$}
\put( 28, -20){\tiny$e_3$}
\put( 47, 10){\tiny$
\left [\begin{array} {r}
-1\\2\\2\\
\end{array}\right ]
$}
\put( 75, -25){\tiny$
\left [\begin{array} {r}
-2\\3\\6\\
\end{array}\right ]
$}
\put( 105, 10){\tiny$
\left [\begin{array} {r}
-6\\10\\15\\
\end{array}\right ]
$}
\put( 135, -25){\tiny$
\left [\begin{array} {r}
-15\\24\\40\\
\end{array}\right ]
$}
\put( 165, 10){\tiny$
\left [\begin{array} {r}
-40\\65\\104\\
\end{array}\right ]
$}
\put( 195, -25){\tiny$
\left [\begin{array} {r}
-104\\168\\273\\
\end{array}\right ]
$}
\put (240, -2) {$\cdots$}
\end{picture}
\end{center}
\noindent One sees immediately the following :
\begin{enumerate}
\item[(a)] The entries of the vectors $e_n =(a,b,c)$ are products of two Fibonacci numbers (see the sketch following this list).
To be more accurate, for the first nine terms, the vectors take the form
$$(-F_nF_{n+1}, F_{n}F_{n+2}, F_{n+1}F_{n+2}).\eqno(5.3)$$
\item[(b)] The norm of $e_n= (a,b,c)$ is just the sum $a+b+c$,
where the norm $N((x_1,x_2,x_3))$ is defined to be $(x_1^2+x_2^2+x_3^2)^{1/2}$
(see Appendix A).
\item[(c)] The absolute values of the entries of $e_{2n}-e_{2n-2}$ (the top half of the
trivalent graph) and
$e_{2n+1}-e_{2n-1}$ (the bottom half of the trivalent graph) are Fibonacci numbers.
For instance,
$(-40, 65,104)-(-6,10,15) = (-F_9, F_{10}, F_{11})$. This allows one to write
each entry of the vectors as a sum of Fibonacci numbers.
\end{enumerate}
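Observations (a) and (b) are easy to confirm by machine; the following Python sketch (reusing \texttt{fib} from the sketch in section 2, and purely illustrative) generates the first nine vectors of $F(e_1,e_2,e_3)$ via (5.2) and checks (a) and (b) for $e_4,\ldots,e_9$.
\begin{verbatim}
# Generate e_1,...,e_9 of F(e_1,e_2,e_3) and check observations (a) and (b).
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for _ in range(6):
    u, v, w = e[-3], e[-2], e[-1]                 # e_{n-2}, e_{n-1}, e_n
    e.append(tuple(2 * (wi + vi) - ui for ui, vi, wi in zip(u, v, w)))

for k, (a, b, c) in enumerate(e[3:], start=1):    # e_4, e_5, ...
    # (a): the entries are products of two Fibonacci numbers, as in (5.3)
    assert (a, b, c) == (-fib(k) * fib(k + 1),
                         fib(k) * fib(k + 2),
                         fib(k + 1) * fib(k + 2))
    # (b): the Euclidean norm equals the sum of the entries
    assert a * a + b * b + c * c == (a + b + c) ** 2
\end{verbatim}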
\noindent
(a) and (b) of the above imply that for $n\le 6$
{\small $$(F_nF_{n+1})^2 + (F_nF_{n+2})^2 +(F_{n+1}F_{n+2})^2 = (-F_nF_{n+1} +F_nF_{n+2}+F_{n+1}F_{n+2})^2,
\eqno(5.4)$$}
\noindent which leads us to (i) of the following lemma.
Note that $-F_nF_{n+1} +F_nF_{n+2}+F_{n+1}F_{n+2} = F_n^2 + F_{n+1}F_{n+2} $.
A careful study of (a) and (c) implies that each entry of the nine vectors can be written as
a sum as well as a product of Fibonacci numbers, which leads us to (ii)-(v) of the following lemma.
\noindent
{\bf Lemma 5.1.} {\em Let $F_n$ be the $n$-th Fibonacci number. Then the following hold.}
\begin{enumerate}
\item[(i)]
$(F_nF_{n+1})^2 + (F_nF_{n+2})^2 +(F_{n+1}F_{n+2})^2 = ( F_n^2+ F_{n+1}F_{n+2} )^2.$
\item[(ii)] $F_{2n-3}F_{2n-2} = F_1 +F_5 + \cdots + F_{4n-7}$.
\item[(iii)]$F_{2n-3}F_{2n-1}= 1+ F_2 +F_6 + \cdots +F_{4n-6}.$
\item[(iv)]$F_{2n-2}F_{2n-1} = F_3 +F_7 + \cdots +F_{4n-5}.$
\item[(v)] $F_{2n-2}F_{2n}= F_4 +F_8 + \cdots+ F_{4n-4}.$
\end{enumerate}
\subsection{Discussion.} (i) of the above lemma can be viewed as a generalisation of Raine's results
on Pythagorean triples. (i)-(v) must be well known. As they are not included in
[L] or [W], we record them here for the reader's reference. Proofs of (i)-(v) are not
included here as they can be obtained easily.
As the identities in Lemma 5.1 come from our observation of the trivalent graph,
we consider that the recurrence relation (1.1), (5.1) and the
trivalent graph $F(e_1,e_2,e_3)$ make those
identities {\em visible}.
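Although the proofs are omitted, the sum formulas in Lemma 5.1 can be confirmed numerically in the same spirit; the following sketch (reusing \texttt{fib} from the sketch in section 2) checks (ii) and (iv), and (iii) and (v) can be checked analogously.
\begin{verbatim}
# Numerical check of Lemma 5.1 (ii) and (iv); (iii) and (v) are analogous.
assert all(fib(2*n - 3) * fib(2*n - 2) == sum(fib(4*k + 1) for k in range(n - 1))
           for n in range(2, 15))
assert all(fib(2*n - 2) * fib(2*n - 1) == sum(fib(4*k + 3) for k in range(n - 1))
           for n in range(2, 15))
\end{verbatim}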
\noindent To one's surprise, the trivalent graph actually tells us more.
\begin{enumerate}
\item[(i)] The sum of the first entries (starting from $e_4$) of the first $2k-1$ consecutive
vectors is the negative of the square of a Fibonacci number.
\item[(ii)] The sum of the second entries (starting from $e_4$) of the first $k$ vectors is
a product of two Fibonacci numbers.
\item[(iii)] The entries of every vector are products of two
Fibonacci numbers. Let
$(-a,b,c)$ be such a vector. Then $c-b-a = \pm 1$.
\item[(iv)] Take {\em any} two consecutive vectors of the top half of the trivalent graph (such as
$(e_2, e_4), $ $(e_4, e_6),\cdots )$. Label them as $(-a,b,c)$ and $(-A,B, C)$. Then
$C-c = (B-b) + (A-a)$.
\item[(v)] Take two consecutive vectors of the top half (likewise
the bottom half) and label them as $(-a,b,c)$ and $(-C,B,A)$. One {\em sees} that
all the entries are products
of two Fibonacci numbers and the product of $a$ and $A$ is one less than a fourth power of a Fibonacci
number! ($1\cdot 15 = 2^4-1$, $6\cdot 104 = 5^4-1$, $2\cdot 40= 3^4 -1$, $15\cdot 273= 8^4-1$).
\end{enumerate}
\noindent (i)-(v) of the above actually give us five well known identities. Take (v) for
instance, our observation shows that
{\em
a fourth power of a Fibonacci number $-1 = $ product of four Fibonacci numbers.}
To be more accurate, one has the
remarkable Gelin-Ces\`{a}ro identity {\em visible}.
$$F_n^4 -F_{n-2}F_{n-1}F_{n+1}F_{n+2} =1.\eqno(5.5)$$
We are currently investigating the trivalent graph
$F(u, v,w)$ for arbitrary triples $(u,v,w)$. It turns out that such a study makes a lot of identities {\em visible}. For example, every identity appearing on the right hand side of our table
((d1)-(d11)) in section 4 can be
seen from some trivalent graph $F(u,v, w)$.
See [LL]
for more details.
\section {Discussion}
We have demonstrated in this article that a simple
study of the recurrence relation (1.1) ends up
with a unified proof for many known identities
in the literature. This suggests that one should probably group the
identities together based on the recurrence relations (if they exist) and
study them as a whole. Note that a given function may satisfy more than one recurrence
relation ($(-1)^nF_n$ satisfies (i) of the following and (3.5)).
The next recurrence relation in line, we believe,
should be
\begin{enumerate}
\item[(i)] $x(n+2) = -x(n+1)+ x(n)$,
\item[(ii)] $x(n+3) = -2x(n+2) +2x(n+1) +x(n)$.
\end{enumerate}
Identities (in Fibonacci numbers) with such recurrence relations are rare but of great importance. To see our point,
recall that the right hand side of the very elegant identity of Melham
($F_{n+1}F_{n+2}F_{n+6} -F_{n+3}^3=(-1)^{n}F_{n}$)
satisfies
(i) of the above
and that the following attractive identities of
Fairgrieve and Gould ([FG])
also satisfy (i) and (ii) of the above.
$$F_{n-2}F_{n+1}^2 -F_n^3 = (-1)^{n-1} F_{n-1},\eqno(6.1)$$
$$F_{n-3}F_{n+1}^3-F_n^4 = (-1)^n (F_{n-1}F_{n+3}+2F_n^2).\eqno(6.2)$$
To end our discussion, we give the following example which suggests how a new identity
can be obtained by the study of recurrence relation (i):
Since the right hand side of (6.1) satisfies (i) of the
above, $x(n) = F_{n-2}F_{n+1}^2 -F_n^3$ satisfies the same recurrence relation.
Namely, $x(n+2) = -x(n+1) +x(n)$. With the help of the famous identity
$F_{3n} = F_{n+1}^3 + F_n^3 - F_{n-1}^3$, one has
$$F_nF_{n+3}^2 +F_{n-1}F_{n+2}^2 -F_{n-2}F_{n+1}^2
=F_{3n+3}.\eqno(6.3)$$
\section{Appendix A}
Let
$u = (u_i) ,v=(v_i) ,w=(w_i) \in \Bbb Z^3 $ be vectors.
We say $\{u,v, w\}$ is an $\mathcal F$-triple if
$N(u)=
u_1+u_2+u_3, N(v)
= v_1+v_2+v_3
$ and $N(w)=w_1+w_2+w_3$ are squares in $\Bbb N$ and
\begin{enumerate}
\item[(i)]
$2u\cdot v - v\cdot w - w\cdot u
= 2N(u)N(v) - N(v)N(w) - N(w)N(u)$,
\item[(ii)]
$2u\cdot w - v\cdot w - v\cdot u
= 2N(u)N(w) - N(v)N(w) - N(v)N(u) $,
\item[(iii)]
$2v\cdot w - v\cdot u - w\cdot u
= 2N(v)N(w) - N(v)N(u) - N(w)N(u)$,
\end{enumerate}
where $u\cdot v$ is the usual dot product.
One sees easily that $\{e_1, e_2, e_3\}$ is an $\mathcal F$-triple,
where $e_1=(1,0,0), e_2= (0,1,0), e_3=(0,0,1)$. The following lemma shows that
if $\{u, v, w\}$ is an $\mathcal F$-triple, then any vector $(a,b,c)$ in $F(u,v, w)$
has the property $N((a,b,c)) = a+b+c$, which proves (b) of section 5.
\noindent {\bf Lemma A.} {\em Let
$u = (u_i) ,v=(v_i) ,w=(w_i) \in \Bbb R^3 $ be an $\mathcal F$-triple
and let $x = (x_i) =2(u+v)-w$,
$y =(y_i) =2(w+v)-u$,
$z =(z_i) =2(w+u)-v$.
Then the following hold.}
\begin{enumerate}
\item[(i)]
$\{u,v, x \}$, $ \{u,w, z\}$ and
$\{v,w, y\}$ are $\mathcal F$-triples,
\item[(ii)]
$N(x)^2 = (2N(u) + 2N(v)-N(w))^2$,
$N(y)^2 = (2N(w) + 2N(v)-N(u))^2$,
$N(z)^2 = (2N(w) + 2N(u)-N(v))^2$.
\end{enumerate}
\noindent
{\em Proof.} The lemma is straightforward and can be proved by direct calculation.
\qed
\noindent Note that $x$, $y$, and $z$ in the above lemma are defined as in $(5.1)$ and can be
described as follows :
\begin{center}
\begin{picture}(60,75)
\multiput(2,0)(30,0){1}{\line(1,0){45}}
\multiput(2,0)(60,0){1}{\line(-1,1){30}}
\multiput(2,0)(60,0){1}{\line(-1,-1){30}}
\multiput(77,-30)(60,0){1}{\line(-1,1){30}}
\multiput(77,30)(30,30){1}{\line(-1,-1){30}}
\multiput(-28,-30)(50,0){1}{\line(-4,0) {30}}
\multiput(-28,-60)(50,0){1}{\line(0,4){30}}
\multiput(-58,30)(50,0){1}{\line(4,0){30}}
\multiput (-28,30)(50,0){1}{\line(0,4){30}}
\put(80 , -3){\small $x$}
\put(25, 20){\small $u$}
\put(25, -30){\small $v$}
\put(-45 , -3){\small $w$}
\put(-45 , 50){\small $z$}
\put(-45 , -50){\small $y$}
\end{picture}
\end{center}
\noindent Following our lemma, one may extend the above graph to an infinite trivalent
graph that fills the whole plane such that
each triple $\{r,s,t\}$ associated to a vertex is an $\mathcal F$-triple. In particular,
the entries of every vector of this trivalent graph give a solution to
$x^2+y^2 +z^2 = (x+y+z)^2$. Note that a complete set of integral solutions of the above mentioned equation
is given by $\{ (-mn, m(m+n), n(m+n)) \,:\, n, m \in \Bbb Z\}$.
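The following small Python sketch (purely illustrative; the function name \texttt{children} and the depth of the search are ad hoc choices) extends the trivalent graph from the canonical basis for a few levels using the three moves of Lemma A and verifies that every generated vector $(a,b,c)$ satisfies $a^2+b^2+c^2=(a+b+c)^2$.
\begin{verbatim}
# Extend the trivalent graph from {e_1,e_2,e_3} and check the equation
# x^2 + y^2 + z^2 = (x+y+z)^2 for every generated vector.
def children(u, v, w):
    move = lambda p, q, r: tuple(2 * (pi + qi) - ri
                                 for pi, qi, ri in zip(p, q, r))
    x, y, z = move(u, v, w), move(w, v, u), move(w, u, v)
    return [(u, v, x), (v, w, y), (u, w, z)]

triples = [((1, 0, 0), (0, 1, 0), (0, 0, 1))]
for _ in range(4):                              # four levels of the graph
    triples = [t for tr in triples for t in children(*tr)]
for tr in triples:
    for a, b, c in tr:
        assert a * a + b * b + c * c == (a + b + c) ** 2
\end{verbatim}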
\section {Appendix B : More Recurrence relations}
Let $x(n)$ be a function defined on $\Bbb Z$. Consider the equation
$$ x(n+k) = a_{k-1} x(n+k-1) +\cdots + a_1x(n+1) + a_0x(n).\eqno(B1)$$
One sees easily that whether $x(n)$ satisfies some recurrence relation depends on whether
there exist some $k$ and $a_i$'s such that $(B1)$ holds for all $n$. In the case where $x(n)$ indeed
admits some recurrence relation, such a relation can be obtained by solving a system of
linear equations.
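As an illustration of this remark, the following Python sketch (reusing \texttt{fib} from the sketch in section 2; the function name \texttt{find\_recurrence} is ad hoc) recovers the coefficients of the recurrence (3.5) for $x(n)=F_n^3$ by solving the linear system coming from $(B1)$ with $k=4$. The Gaussian elimination is naive and assumes the system is non-singular.
\begin{verbatim}
from fractions import Fraction

def find_recurrence(x, k):
    # Rows: [x(n+k-1), ..., x(n)] . (a_{k-1}, ..., a_0)^T = x(n+k), n = 0..k-1.
    rows = [[Fraction(x(n + j)) for j in range(k - 1, -1, -1)]
            + [Fraction(x(n + k))] for n in range(k)]
    for i in range(k):                       # naive Gaussian elimination
        p = next(r for r in range(i, k) if rows[r][i] != 0)
        rows[i], rows[p] = rows[p], rows[i]
        rows[i] = [v / rows[i][i] for v in rows[i]]
        for r in range(k):
            if r != i:
                rows[r] = [a - rows[r][i] * b for a, b in zip(rows[r], rows[i])]
    return [rows[i][-1] for i in range(k)]   # a_{k-1}, ..., a_0

print(find_recurrence(lambda n: fib(n) ** 3, 4))   # coefficients 3, 6, -3, -1
\end{verbatim}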
\subsection{The Recurrence relation
$x(n+4) = 3x(n+3)+6x(n+2)-3x(n+1) -x(n)$}
In [M2], Melham proved that
$$F_{n+1}F_{n+2}F_{n+6} -F_{n+3}^3=(-1)^{n}F_{n}.\eqno(B2)$$
\noindent We shall give our alternative proof as follows.
Let $A(n) = F_{n+1}F_{n+2}F_{n+6} -F_{n+3}^3$, $B(n) = (-1)^{n}F_{n}$.
One sees easily that both $A(n)$ and $B(n)$ satisfy the above recurrence relation.
Since
\begin{enumerate}
\item[(i)] $A(n)$ and $B(n)$ satisfy the above recurrence relation and $A(n) = B(n) $ for $n = 0,1,2,3$,
\item[(ii)] a function $x(n)$ satisfying the above recurrence relation is completely
determined by $x(0)$, $x(1),$
$ x(2)$ and $x(3)$,
\end{enumerate}
we conclude that $A(n) = B(n)$. This completes the proof of $(B2)$.
The identity $F_{3n} = F_{n+1}^3 +F_n^3-F_{n-1}^3$ and
Fairgrieve and Gould's identities
((11), (12) of [FG]) can be proved by the same method.
\subsection{Recurrence relation for $F_n^4$} $F_n^4$ satisfies the following recurrence
relation,
$$x(n+5)
= 5x(n+4)+15x(n+3)-15x(n+2)-5x(n+1)+x(n).\eqno(B3)$$
One sees easily that both the left and right hand side of $(6.2)$ satisfy $(B3)$.
As a consequence, identity (6.2) can be verified by applying our technique given in
the above subsection.
\subsection{Construction of Identities} Recurrence relations can be used to construct
identities. Taking (6.3) as an example, one can actually construct it as follows.
$$\begin{array}{lrrrr}
x(n) & x(0) &x(1)& x(2)& x(3)\\
\\
F_{3n+3} & 2 &8 & 34& 144\\
\\
F_nF_{n+3}^2 & 0 & 9 &25 &128 \\
\\
F_{n-1}F_{n+2}^2& 1 & 0& 9 & 25\\
\\
F_{n-2}F_{n+1}^2 & -1 & 1 & 0 & 9 \\
\\
\end{array}
$$
Since $F_{3n+3},F_nF_{n+3}^2,F_{n-1}F_{n+2}^2$ and $ F_{n-2}F_{n+1}^2 $ satisfy
the recurrence relation
$x(n+4) = 3x(n+3)+6x(n+2)-3x(n+1) -x(n)$, applying (ii) of the
above, one sees from the above table that
$$F_{3n+3}=F_nF_{n+3}^2+F_{n-1}F_{n+2}^2 - F_{n-2}F_{n+1}^2.\eqno(B4)$$
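The table and the identity $(B4)$ can be confirmed numerically as follows (reusing \texttt{fib} from the sketch in section 2); the point, of course, is that the agreement of the four initial values already proves $(B4)$.
\begin{verbatim}
# Check the initial values from the table and then (B4) on a range of n.
lhs = lambda n: fib(3 * n + 3)
rhs = lambda n: (fib(n) * fib(n + 3) ** 2
                 + fib(n - 1) * fib(n + 2) ** 2
                 - fib(n - 2) * fib(n + 1) ** 2)
assert [lhs(n) for n in range(4)] == [2, 8, 34, 144]
assert all(lhs(n) == rhs(n) for n in range(20))
\end{verbatim}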
\noindent MSC2010: 11B39, 11B83
\end{document}
\begin{document}
\title{Weak $n$-categories: opetopic and multitopic foundations}
\author{Eugenia Cheng\\ \\Department of Pure Mathematics, University
of Cambridge\\E-mail: [email protected]}
\date{October 2002}
\maketitle
\begin{abstract}
We generalise the concepts introduced by Baez and Dolan to define
opetopes constructed from symmetric operads with a category,
rather than a set, of objects. We describe the category of
1-level generalised multicategories, a special case of the concept
introduced by Hermida, Makkai and Power, and exhibit a full
embedding of this category in the category of symmetric operads
with a category of objects. As an analogy to the Baez-Dolan slice
construction, we exhibit a certain multicategory of function
replacement as a slice construction in the multitopic setting, and
use it to construct multitopes. We give an explicit description
of the relationship between opetopes and multitopes.
\end{abstract}
\setcounter{tocdepth}{3}
\tableofcontents
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
The problem of defining a weak $n$-category has been approached in
various different ways (\cite{bd1}, \cite{hmp1}, \cite{lei1},
\cite{pen1}, \cite{bat1}, \cite{tam1}, \cite{str2}, \cite{may1},
\cite{lei7}), but so far the relationship between
these approaches has not been fully understood. The subject of
the present paper is the relationship between the approaches given
in \cite{bd1} and \cite{hmp1}.
In \cite{bd1}, John Baez and James Dolan give a definition
of weak $n$-categories based on opetopes and opetopic
sets. In \cite{hmp1}, Claudio Hermida, Michael Makkai and
John Power begin a related definition, based on multitopes
and multitopic sets. In each case the definition has two
components. First, the language for describing $k$-cells
is set up. Then, a concept of universality is introduced,
to deal with composition and coherence. Any comparison of
the two approaches must therefore begin at the construction
of $k$-cells, and in this paper we restrict our attention
to this process. This, in the terminology of Baez and
Dolan, is the theory of opetopes.
In \cite{bd1}, the underlying shapes of $k$-cells are shapes called `opetopes'
by Baez and Dolan. The starting point is the theory of (symmetric)
operads. A `slicing' process on operads is defined, which is the
means of `climbing up' through dimensions; it is eventually used
to construct $(k+1)$-cells from $k$-cells. Opetopes are
constructed from the slicing process iterated, and presheaves on
the category of opetopes are called opetopic sets. A weak
$n$-category is defined as an opetopic set with certain
properties.
In \cite{hmp1}, an analogous process is presented, with shapes called
`multitopes'. The construction is based on multicategories in a generalised
form defined in the paper. Instead of a slicing process, the construction of a
`multicategory of function replacement' is given. This is a more general
concept, and multitopic sets are defined directly from the iteration of this
process. Multitopes are then defined to arise from the terminal multitopic
set, and multitopic sets are shown to arise as presheaves on the category of
multitopes.
Although the multitopic approach was developed explicitly as an analogy to the
opetopic approach, the exact relationship between the notions has not
previously been clear. The conspicuous difference between the two approaches
is the presence in the opetopic version, and absence in the multitopic, of
symmetric actions. In this paper we make explicit the relationship between
opetopes and multitopes, showing that they are `the same up to isomorphism'.
In fact, we do not use the definition of opetopes precisely as given in
\cite{bd1}, but rather, we develop a generalisation of the notion along lines
which Baez and Dolan began but chose to abandon, for reasons unknown to the
present author. Baez and Dolan work with operads having an arbitrary {\em set}
of types (objects), but at the beginning of the paper they use operads having
an arbitrary {\em category} of types, before restricting to the case where the
category of types is small and discrete. However, the construction gives many
copies of each opetope, and we need to regard these as isomorphic. So we need a
category of objects in order to preserve this vital information. Without it, the isomorphisms are lost and such
objects are considered to be different, and in this manner the relationship
between the two approaches is destroyed. We discuss this in more detail in
Section~\ref{over}.
Thus motivated, we study the approach presented by Baez and Dolan,
but using operads with a category of objects; we refer to these as
symmetric multicategories (with a category of objects), in
accordance with the terminology of \cite{hmp1} and \cite{lei1}.
The approach presented by Hermida, Makkai and Power uses
generalised 2-level multicategories, which have `upper level' and
`lower level' objects. As far as the construction of multitopes,
however, we have found only 1-level versions to be involved, so we
consider only these, which we refer to simply as generalised
multicategories.
The constructions of multitopes and opetopes are explicitly analogous,
so we compare them step by step as follows.
We begin, in Section~\ref{over}, with an informal overview of the whole theory.
We include for completeness the theory proposed by Leinster although the formal
treatment is given in a further work.
In Section \ref{tacmcats} we define the categories {\bfseries
SymMulticat} and {\bfseries GenMulticat}, of symmetric and
generalised multicategories respectively. These are the underlying
theories of the two approaches.
In Section \ref{tacxi} we construct a functor
\[\xi:\mbox{{\bfseries GenMulticat}} \longrightarrow
\mbox{{\bfseries SymMulticat}}\]
and show that it is full and faithful. Given a generalised
multicategory $M$, $\xi$ acts by leaving the objects unchanged,
but adding a symmetric action freely on the arrows. (By `free'
here we mean that the orbit of an arrow with $n$ source elements
has the size of the whole permutation group ${\mathbf S}_n$.)
Clearly not all symmetric multicategories are in the image of
$\xi$. To be in the image, a symmetric multicategory certainly
must have a discrete category of objects (we call this {\em
object-discrete}) and a free symmetric action (we call this {\em
freely symmetric}). We show that these conditions are in fact
necessary and sufficient. Eventually we will see that every symmetric
multicategory used in the construction is equivalent to one with these
properties.
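The freeness of the symmetric action can be illustrated very concretely. The following rough Python sketch is not taken from \cite{bd1} or \cite{hmp1}; it is only a hypothetical model in which an arrow of $\xi(M)$ is represented as a pair consisting of an arrow of $M$ and a permutation of its source positions, so that the action is free and each orbit has size $n!$.
\begin{verbatim}
# Illustrative model of the free symmetric action added by xi: an arrow of
# xi(M) is a pair (f, sigma), with f an arrow of M having n source elements
# and sigma a permutation of {0,...,n-1}.
from itertools import permutations

def act(arrow, tau):
    # (f, sigma) acted on by tau is (f, sigma o tau), composing permutations:
    # (sigma o tau)(i) = sigma(tau(i)).
    f, sigma = arrow
    return (f, tuple(sigma[tau[i]] for i in range(len(tau))))

f = 'f'                                    # an arrow of M with 3 source elements
orbit = {act((f, (0, 1, 2)), tau) for tau in permutations(range(3))}
print(len(orbit))                          # 6 = |S_3|: the action is free
\end{verbatim}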
In Section \ref{tacslice} we examine the construction of
opetopes. We first define and compare the slicing
processes. Our method is as follows. Given a morphism of
symmetric multicategories \[\phi:Q \longrightarrow \xi(M)\]
we construct a morphism \[\phi^+:Q^+ \longrightarrow
\xi(M_+)\] from the action of $\phi$. We show that if
$\phi$ is an equivalence, then $\phi^+$ is also an
equivalence. In particular we deduce that the functor
$\xi$ and the slicing process `commute' up to equivalence,
that is, for any generalised multicategory $M$ \[\xi(M)^+
\simeq \xi(M_+).\]
In Section \ref{tacopetopes} we apply the above constructions to
opetopes and multitopes. Writing $I$ for the symmetric
multicategory with one object and one arrow, a $k$-dimensional
opetope is defined to be an object of $I^{k+}$, the $k$th iterated
slice of $I$. Similarly, writing $J$ for the generalised
multicategory with one object and one arrow, a $k$-dimensional
multitope is defined to be an object of $J_{k+}$, the $k$th
iterated slice of $J$. By the above constructions, we have for
each $k$
\[\xi(J_{k+}) \simeq I^{k+}\]
giving a correspondence between opetopes and multitopes.
Hermida, Makkai and Power suggest that where their concept is
``concrete and geometric'' the Baez-Dolan concept is ``abstract
and conceptual''. In uniting the two approaches the reward is a
concept which enjoys the elegance of being abstract and
conceptual while at the same time providing a concrete, geometric
description of the objects in question.
\subsubsection*{Terminology}
\renewcommand{\labelenumi}{\roman{enumi})}
\begin{enumerate}
\item Since we are concerned chiefly with {\em weak}
$n$-categories, we follow Baez and Dolan (\cite{bd1}) and omit the
word `weak' unless emphasis is required; we refer to {\em strict}
$n$-categories as `strict $n$-categories'.
\item We use the term `weak $n$-functor' for an $n$-functor where
functoriality holds up to coherent isomorphisms, and `lax' functor
when the constraints are not necessarily invertible.
\item In \cite{bd1} Baez and Dolan use the terms `operad' and
`types' where we use `multicategory' and `objects'; the latter
terminology is more consistent with Leinster's use of `operad' to
describe a multicategory whose `objects-object' is 1.
\item In \cite{hmp1} Hermida, Makkai and Power use the term
`multitope' for the objects constructed in analogy with the
`opetopes' of \cite{bd1}. This is intended to reflect the fact
that opetopes are constructed using operads but multitopes using
multicategories, a distinction that we have removed by using the
term `multicategory' in both cases. However, we continue to use
the term `opetope' and furthermore, use it in general to refer to
the analogous objects constructed in each of the theories.
\item We regard sets as sets or discrete categories with no
notational distinction.
\end{enumerate}
{\bfseries Acknowledgements}
This work was supported by a PhD grant from EPSRC. I would
like to thank Martin Hyland and Tom Leinster for their support and
guidance.
\section{Overview}
\label{over}
In this Section we give an informal overview of the opetopic foundations for
the theory of $n$-categories. This is not intended to be a rigorous treatment, but
rather, to give the reader an idea of the `spirit' of the definition, the
issues involved in making it, and the reason (as opposed to the proof) that the
different approaches in question turn out to be equivalent. For completeness we
include here discussion of Leinster's construction (\cite{lei1}) although the
formal account is given in a further work (\cite{che8}).
\subsection{What are opetopes?}
The defining feature of the opetopic theory of $n$-categories is, superficially,
that the underlying shapes of cells are {\em opetopes}. Below are some examples
of opetopes at low dimensions.
$\bullet$ \hspace{1ex} 0-opetope
\begin{picture}(20,10) \put(10,0){.} \end{picture}
$\bullet$ \hspace{1ex} 1-opetope
\begin{picture}(20,10) \onecell{0}{0}{1mm}{}{}{} \end{picture}
$\bullet$ \hspace{1ex} 2-opetopes
\begin{picture}(16,20)(2,0) \diagbn{0}{0}{1mm} \end{picture}
\begin{picture}(20,15)(-2,0) \onetwo{0}{0}{0.8mm}{}{}{}{}{} \end{picture}
\begin{picture}(25,22)(-5,0) \twotwo{}{}{}{}{}{}{} \end{picture}
\begin{picture}(25,20)(-2,0) \threetwo{}{}{}{}{}{}{}{}{} \end{picture}
\begin{picture}(20,30) $\cdots$ \end{picture}
$\bullet$ \hspace{1ex} a 3-opetope
\begin{picture}(50,30)(15,0)
\diagbo{18}{0}{1mm} \end{picture}
$\bullet$ \hspace{1ex} a 4-opetope
\begin{center}
\begin{picture}(100,60)
\put(26,25){\diagbo{0}{0}{1.2mm}}
\put(3,42){\assleft{}{}{}{}{}}
\put(28,47){\three{0}{0}{1mm}{}}
\put(33,42){\threetwob{}{}{}{}{}{}{}{}{}}
\put(41,42.5){\diagad{0}{0}{2mm}}
\put(-8,8){\assright{}{}{}{}{}}
\put(17,13){\three{0}{0}{1mm}{}}
\put(22,8){\threetwob{}{}{}{}{}{}{}{}{}}
\put(30,17){\diagq{0}{0}{2mm}}
\put(58,-15){\diagboo{0}{0}{1.2mm}}
\put(38,-10){\diagc{0}{0}{1.2mm}}
\end{picture}
\end{center}
\subsubsection*{Remarks}
\numarabic
\begin{enumerate}
\item Note that all edges and faces are directed, but we tend to omit the arrows as
at low dimensions directions are understood.
\item The number of bars on an arrow indicates its dimension.
\item The curved arrows indicate `pasting', which is otherwise difficult to
represent in higher dimensions on a 2-dimensional sheet of paper.
\end{enumerate}
Compared with ordinary `globular' cell shapes such as
\begin{center}
\begin{picture}(35,15)(5,-3)
\onetwo{10}{0}{1mm}{$$}{$$}{$$}{$$}{$$}
\end{picture}
\end{center}
opetopes have the following important feature: the domain of a $k$-opetope is
not a single $(k-1)$-opetope but a `pasting diagram' of $(k-1)$-opetopes. Note
that a pasting diagram can be degenerate, giving `nullary' opetopes whose
domain consists of an `empty' pasting diagram. For example, the following is a
nullary 2-opetope:
\begin{center}
\begin{picture}(20,15) \diagbn{0}{0}{1mm} \end{picture}
\end{center}
Cells in an opetopic $n$-category may thus be thought of as `labelled opetopes'
where the sources and targets of the constituent cells must match up where they
coincide in the opetope. For example the following is a 2-cell
\begin{center}
\begin{picture}(25,20)(-2,-4)
\threetwo{$a_1$}{$a_2$}{$a_3$}{$a_4$}{$f_1$}{$f_2$}{$f_3$}{$g$}{$\alpha$}
\end{picture}
\end{center}
and the following is a 3-cell in which some of the lower-dimensional labels have
been omitted
\begin{center}
\begin{picture}(35,20)
\threethree{$\alpha_1$}{$\alpha_2$}{$\alpha_3$}{$\beta$}{$\theta$}
\end{picture}
\end{center}
There now arise a philosophical question and a technical question, namely: why
and how do we do this?
\subsection{Why opetopes?}
Opetopes arise from the need, in a {\em weak} $n$-category, to record the
precise way in which a composition has been performed. For example, consider
the following chain of composable 1-cells:
\[a \map{f} b \map{g} c \map{h} d.\]
This gives a unique composite in an ordinary category (or any {\em strict}
$n$-category). However, in a bicategory (or any {\em weak} $n$-category) we
should be wary of drawing such a diagram at all, as there is more than one
composite that could be produced, for example $(hg)f$ or $h(gf)$, which may in
general be distinct.
We might record the way in which the composition has occurred by a diagram such
as
\begin{center}
\begin{picture}(25,20)(-2,-4)
\assright{$f$}{$g$}{$h$}{$gf$}{$h(gf)$}
\end{picture}
\end{center}
indicating that first $f$ is composed with $g$, and then the result is composed
with $h$. So this diagram represents the forming of the composite $h(gf)$.
Here the 2-cells are seen to indicate composition of their domain 1-cells. This
is one of the fundamental ideas of the opetopic theory, that {\em composition is
not given by an operation, but by certain higher-dimensional cells}. The cells
giving composites are those with a certain universal property, and there may be
many such cells for any composable configuration of cells. For, as we have seen
above, there may be many distinct ways of composing a given diagram of cells.
This is the motivation behind taking opetopes as the underlying shapes of cells.
\subsection{How are opetopes constructed formally?}
We have seen that the source of a cell is to be a pasting diagram of cells
rather than just a single cell. This is expressed using the language of {\em
multicategories}. A multicategory is like a category whose morphisms have as
their domain a list of objects rather than just a single object. Thus arrows
may be drawn as
\begin{center}
\setleng{1mm}
\begin{picture}(18,30)(8,8)
\scalecq
\end{picture}
\end{center}
and composition then looks like
\begin{center}
\setleng{1mm}
\begin{picture}(18,55)(14,3)
\scalecs{}{}{}{}{}{}{}{}{}
\end{picture}
\end{center}
So a $k$-cell is considered as a morphism from its constituent $(k-1)$-cells to
its codomain $(k-1)$-cell. For example
\begin{picture}(20,15)(0,3) \diagbl{0}{0}{1mm} \end{picture} : \
(\begin{picture}(16,10)(-2,-1) \diagcz{0.6mm} \end{picture},
\begin{picture}(16,10)(-2,-1) \diagcz{0.6mm} \end{picture},
\begin{picture}(16,10)(-2,-1) \diagcz{0.6mm} \end{picture}) \hspace{1.5em} \lra
\begin{picture}(20,10)(-8,-1) \diagcz{0.6mm} \end{picture}
\begin{picture}(25,15)(0,2) \diagan{0}{0}{1mm} \end{picture} : \
(\begin{picture}(13,15)(-2,2) \diagf{0}{0}{2mm} \end{picture},
\begin{picture}(13,15)(-2,2) \diagj{0}{0}{2mm} \end{picture}) \hspace{1.5em}
$\lra$ \hspace{1em} \begin{picture}(20,15)(0,2) \diagf{0}{0}{2mm}
\end{picture}
This raises the immediate question: in what order should we list the constituent
cells? Tom Leinster points out (\cite{lei1}) that there is no way of ordering
the cells that is stable under composition as required for
a multicategory as above.
The three different approaches to this construction (\cite{bd1}, \cite{hmp1},
\cite{lei1}) arise from three different ways of dealing with this problem.
\bpoint{Baez and Dolan}
Baez and Dolan say: include all possible orderings. For example
\begin{picture}(25,20)(-2,0)
\threetwob{$$}{$$}{$$}{$$}{$1$}{$2$}{$3$}{$$}{$$}
\end{picture} \ , \
\begin{picture}(25,20)(-2,0)
\threetwob{$$}{$$}{$$}{$$}{$1$}{$3$}{$2$}{$$}{$$}
\end{picture} \ , \
\begin{picture}(25,20)(-2,0)
\threetwob{$$}{$$}{$$}{$$}{$2$}{$1$}{$3$}{$$}{$$}
\end{picture} \ $\cdots$
where the numbers indicate the order in which the source cells are
listed.
So a symmetric action arises, giving the different orderings, and
Baez and Dolan use {\em symmetric} multicategories for the construction.
However, a peculiar situation arises in which arrows such as
\begin{center}
\begin{picture}(25,22)(-5,-4)
\twotwob{$$}{$$}{$$}{$1$}{$2$}{$$}{$$}
\end{picture}
\begin{picture}(10,20)
\three{5}{8}{1mm}{$\theta$}
\end{picture}
\begin{picture}(25,22)(-5,-4)
\twotwob{$$}{$$}{$$}{$1$}{$2$}{$$}{$$}
\end{picture}
\end{center}
and
\begin{center}
\begin{picture}(25,22)(-5,-4)
\twotwob{$$}{$$}{$$}{$2$}{$1$}{$$}{$$}
\end{picture}
\begin{picture}(10,20)
\three{5}{8}{1mm}{$\phi$}
\end{picture}
\begin{picture}(25,22)(-5,-4)
\twotwob{$$}{$$}{$$}{$2$}{$1$}{$$}{$$}
\end{picture}
\end{center}
cannot be composed, as the ordering on the target of one does not match the
ordering of the source of the other. The situation quickly escalates with more and more different possible
manifestations of the same opetope arising from not only the orderings on the
source cells, but also the orderings on {\em their} source cells, and so on.
For example the following innocuous looking opetope
\begin{center} \length{0.4mm}
\begin{picture}(100,40)
\diagp{60}{12}{6mm}
\put(0,0){
\begin{picture}(50,30)
\put(0,0){\line(1,2){10}}
\put(10,20){\line(1,1){10}}
\put(20,30){\line(1,-1){10}}
\put(0,0){\line(1,0){40}}
\put(10,20){\line(1,0){20}}
\put(30,20){\line(1,-2){10}}
\end{picture}}
\put(80,0){
\begin{picture}(50,30)
\put(0,0){\line(1,2){10}}
\put(10,20){\line(1,1){10}}
\put(20,30){\line(1,-1){10}}
\put(0,0){\line(1,0){40}}
\put(30,20){\line(1,-2){10}}
\end{picture}}
\end{picture}
\end{center}
has 576 possible manifestations, and the following one
\begin{center}
\begin{picture}(40,20) \diagbo{0}{0}{1mm} \end{picture}
\end{center}
has 311040.
We need a way of saying that these objects `look the same' and this is where the
use of a {\em category} of objects comes in. The isomorphisms in this category
tell us precisely this.
\bpoint{Hermida, Makkai, Power}
Hermida, Makkai and Power say: pick one ordering. We know that this cannot be
stable under composition; instead, the notion of multicategory is
generalised so that this stability is not required. Rather, for each composite
there is a specified re-ordering of the source elements, satisfying some
coherence laws. This is a notion we refer to as {\em generalised}
multicategory.
\bpoint{Leinster}
Leinster says: pick no ordering at all. The idea is that, fundamentally,
squashing the constituent cells into a straight line is an unnatural (and indeed
rather violent) thing to try and do. So instead, the source of an arrow such as
\begin{center}
\begin{picture}(40,20) \diagbo{0}{0}{1mm} \end{picture}
\end{center}
is literally the diagram
\begin{center} \length{0.4mm}
\begin{picture}(50,30)
\put(0,0){\line(1,2){10}}
\put(10,20){\line(1,1){10}}
\put(20,30){\line(1,-1){10}}
\put(30,20){\line(1,0){15}}
\put(45,20){\line(1,-2){5}}
\put(40,0){\line(1,1){10}}
\put(0,0){\line(1,0){40}}
\put(10,20){\line(1,0){20}}
\put(30,20){\line(1,-2){10}}
\end{picture}
\end{center}
expressed as a structure given by a cartesian monad $T$. This is the notion of
$T$-multicategory.
These differences notwithstanding, the constructions proceed in a similar
manner: a process of `slicing' is used to construct $k$-cells from $(k-1)$-cells
in each of the respective frameworks.
\subsection{Why are the different approaches equivalent?}
At first sight, it might seem implausible that a construction with so much
symmetry should give anything like a construction without any symmetry. In
fact, the symmetric actions in the Baez-Dolan approach are a sort of {\em
trompe d'{\oe}il} created by our attempt to view constituents of an opetope in
a straight line when they simply are not in one. It is not the opetope itself
that is symmetric, but only our presentations of it.
So, with the Baez-Dolan version, we end up with many isomorphic presentations
of the same opetope, given by all the different orders in which we could list
its components. In effect, with the Hermida-Makkai-Power version we pick one
representative of each isomorphism class, and with the Leinster version, we take
the whole isomorphism class as one opetope.
In the end there is a trade-off between naturality (in the informal sense of
the word) and practicality. Consider the following analogy. If I tidy the
papers on my desk into a neat pile, I have forced them into a straight line
when they had natural positions as they were. However, they are thus easier to
carry around. Likewise, the Leinster construction may seem less brutal in
this sense, but the Hermida-Makkai-Power construction yields a framework that
is more practical for calculating with cells.
This means that if we are to write down a set of domain cells on a piece of
paper in a calculation, we can write them in some order. The
Baez-Dolan construction mediates for us, giving us peace of mind that the order
we chose is irrelevant, as the symmetric actions are quietly working in the
shadows dealing with all the other possibilities.
\subsection{How is the Baez-Dolan definition modified here?}
The present author began studying the relationship between opetopes and
multitopes as given, but began to encounter difficulties when examining the
process of slicing. Essentially, slicing yields a multicategory whose objects
are the morphisms of the original multicategory, and whose morphisms are its
composition laws. Given a multicategory $Q$, Baez and Dolan define the slice
multicategory $Q^+$ to be a multicategory whose {\em set} of objects is the
{\em set} of arrows of $Q$. The effect is that some information has been
abandoned, or at least, concealed. That is, we have discarded the symmetries
relating the arrows of $Q$ to one another. As the slicing process is iterated,
progressively more information is abandoned in this manner, essentially a layer
of symmetry at each stage of slicing. For the construction of opetopes, the
crucial fact is that the symmetries arise {\em precisely and exclusively} from the
different possible orderings of source elements. So it is precisely these
symmetries which give the vital information about which opetopes are merely
different presentations of the same thing, and therefore should be isomorphic.
Without it, the isomorphisms are lost and such objects are considered to be
different. In this manner the relationship between the two approaches would be
destroyed.
However, pursuing Baez and Dolan's original approach, using
multicategories with an arbitrary {\em category} of objects, it is no
longer necessary to force the category of objects of $Q^+$ to be
discrete. This theory yields a different slice multicategory, in which the
symmetric action in $Q$ is recorded in the morphisms of the category of objects
of $Q^+$.
This modification can then be pursued throughout the definition of $n$-category
(see \cite{che9}, \cite{che10}). The relationship between this definition and
the original one is not currently clear. For low dimensions it appears that
the existence of certain universal cells may eventually iron out the
differences, but such explicit arguments are unfeasible for arbitrary higher
dimensions. Moreover, such arguments cannot be applied to the structures
underlying $n$-categories where the existence of such universals has not yet
been asserted.
So what does seem clear is that the equivalences between theories as described
above facilitate much further work in this area, for example, the study of the
categories of opetopes and opetopic sets (\cite{che9}, \cite{che10}, \cite{che11},
\cite{che12}, \cite{che13}). Using the original definition and
therefore without the help of these equivalences, this work would not have been
possible.
\section{The theory of multicategories}
\label{newopes}
Opetopes are described using the language of multicategories. In
each of the two theories of opetopes in question, a different
underlying theory of multicategories is used. In this section we
examine the two underlying theories, and we construct a way of
relating these theories to one another; this relationship provides
subsequent equivalences between the definitions. We adopt a concrete
approach here; certain aspects of the definitions suggest a more
abstract approach but this will require further work beyond the scope
of the present paper.
\label{tacmcats}
\subsection{Symmetric multicategories} \label{tacsym}
\numarabic
In \cite{bd1} opetopes are constructed using symmetric
multicategories. In this section we define {\bfseries
SymMulticat}, the category of symmetric multicategories with a
category of objects. The definition we give here includes one
axiom which appears to have been omitted from \cite{bd1}.
We write $\cl{F}$ for the `free symmetric strict monoidal
category' monad on \cat{Cat}, and ${\mathbf S}_k$ for the group of
permutations on $k$ objects; we also write $\iota$ for the
identity permutation.
\begin{definition} A {\em symmetric multicategory} $Q$ is given by
the following data
\begin{enumerate}
\item A category $o(Q)=\bb{C}$ of objects. We refer to \bb{C} as the
{\em object-category}, the morphisms of \/ ${\mathbb C}$ as {\em
object-morphisms}, and if \/ ${\mathbb C}$ is discrete, we say
that $Q$ is {\em object-discrete}.
\item For each $p\in \cl{F}{\mathbb C}^{\mbox{\scriptsize op}} \times
{\mathbb C}$, a set $Q(p)$ of arrows. Writing
\[p=(x_1, \ldots ,x_k;x),\]
an element $f \in Q(p)$ is considered as an arrow with source and
target given by
\begin{eqnarray*}
s(f) &=& (x_1,\ldots ,x_k)\\
t(f) &=& x
\end{eqnarray*}
and we say $f$ has {\em arity} $k$. We may also write $a(Q)$ for
the set of all arrows of $Q$.
\item For each object-morphism $f:x \longrightarrow y$,
an arrow $\iota(f) \in Q(x;y)$. In particular we write
$1_x=\iota(1_x)\in Q(x;x)$.
\item Composition: for any $f \in Q(x_1, \ldots ,x_k;x)$ and
$g_i \in Q(x_{i1}, \ldots ,x_{im_i};x_i)$ for $1 \le i \le k$, a
composite
\[f \circ (g_1, \ldots ,g_k) \in Q(x_{11}, \ldots ,x_{1m_1},
\ldots ,x_{k1}, \ldots ,x_{km_k};x)\]
\item Symmetric action: for each permutation $\sigma \in {\mathbf S}_k$, a
map
\[\begin{array}{rccc} \sigma : & Q(x_{1}, \ldots ,x_{k};x)
& \lra & Q(x_{\sigma (1)}, \ldots ,x_{\sigma (k)};x) \\
& f & \longmapsto & f \sigma
\end{array}\]
\end{enumerate}
\noindent satisfying the following axioms:
\begin{enumerate}
\item Unit laws: for any $f \in Q(x_1, \ldots ,x_m;x)$, we have
\[1_x \circ f = f = f \circ (1_{x_1}, \ldots, 1_{x_m})\]
\item Associativity: whenever both sides are defined,
\[ \begin{array}{c} f \circ(g_1 \circ (h_{11}, \ldots , h_{1m_1}),
\ldots , g_k \circ (h_{k1}, \ldots , h_{km_k})) = \mbox{\hspace{5em}}\\
\mbox{\hspace*{5em}} (f \circ (g_1, \ldots , g_k)) \circ (h_{11},
\dots ,h_{1m_1}, \ldots ,h_{k1}, \ldots ,h_{km_k}) \end{array}\]
\item For any $f \in Q(x_1, \ldots ,x_k;x)$ and
$\sigma , \sigma' \in {\mathbf S}_k$,
\[(f \sigma)\sigma' = f(\sigma\sigma')\]
\item For any $f \in Q(x_1, \ldots ,x_k;x)$, $g_i \in Q(x_{i1},
\ldots ,x_{im_i};x_i)$ for $1 \le i \le k$, and $\sigma \in
{\mathbf S}_k$, we have
\[(f \sigma) \circ (g_{\sigma (1)}, \ldots
,g_{\sigma(k)}) =
f \circ (g_1, \ldots , g_k) \cdot \rho(\sigma)\]
where $\rho :{\mathbf S}_k \longrightarrow {\mathbf S}_{m_1 +
\ldots + m_k}$ is the obvious homomorphism.
\item For any $f \in Q(x_1, \ldots ,x_k;x)$, $g_i \in Q(x_{i1},
\ldots ,x_{im_i};x_i)$, and $\sigma_i \in {\mathbf S}_{m_i}$ for
$1 \le i \le k$, we have
\[f \circ (g_1 \sigma_1, \ldots, g_k \sigma_k) =
(f \circ(g_1, \ldots, g_k))\sigma\]
where $\sigma \in {\mathbf S}_{m_1 + \dots + m_k}$ is the
permutation obtained by juxtaposing the $\sigma_i$.
\item $\iota(f\circ g) = \iota(f) \circ \iota(g)$
\end{enumerate}
\end{definition}
We may draw an arrow $f \in Q(x_1, \ldots, x_k; x)$ as
\begin{center}
\setleng{1mm}
\begin{picture}(18,30)(8,8)
\scalecr{$x_1$}{$x_2$}{$x_k$}{$\ \ x$}{$f$}
\end{picture}
\end{center} and a composite $f \circ (g_1, \ldots ,g_k)$ as
\begin{center}
\setlength{\unitlength}{0.8mm}
\begin{picture}(70,100)(28,50)
\put(10,10){
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(45,90){\line(0,1){20}}
\put(63,110){\makebox(0,0){$\ldots$}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(50,76){\makebox(0,0){$f$}}
\end{picture}}
\put(16,115){ \setlength{\unitlength}{0.2mm}
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(40,90){\line(0,1){20}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(20,114){\makebox[0pt]{$x_{11}$}}
\put(48,114){\makebox[0pt]{$\cdots$}}
\put(80,114){\makebox[0pt]{$x_{1m_1}$}}
\put(50,76){\makebox(0,0){$g_1$}}
\put(50,36){\makebox(0,0)[t]{$$}}
\end{picture}}
\put(41,115){ \setlength{\unitlength}{0.2mm}
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(40,90){\line(0,1){20}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(20,114){\makebox[0pt]{$x_{21}$}}
\put(48,114){\makebox[0pt]{$\cdots$}}
\put(80,114){\makebox[0pt]{$x_{2m_2}$}}
\put(50,76){\makebox(0,0){$g_2$}}
\put(50,36){\makebox(0,0)[t]{$$}}
\end{picture}}
\put(76,115){ \setlength{\unitlength}{0.2mm}
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(40,90){\line(0,1){20}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(20,114){\makebox[0pt]{$x_{k1}$}}
\put(48,114){\makebox[0pt]{$\cdots$}}
\put(80,114){\makebox[0pt]{$x_{km_k}$}}
\put(50,76){\makebox(0,0){$g_k$}}
\put(50,36){\makebox(0,0)[t]{$$}}
\end{picture}}
\end{picture}.
\end{center}
\label{tacspan} A symmetric multicategory $Q$ may be thought of as
a functor
\[Q:\cl{F}\bb{C}^{\op} \times {\mathbb C}
\longrightarrow \cat{Set}\]
with some extra structure.
In a more abstract view, we would expect \cl{F} to be a 2-monad on
the 2-category \cat{Cat}, which lifts via a generalised form of
distributivity to a bimonad on \cat{Prof}, the bicategory of
profunctors. Then the Kleisli bicategory for this bimonad should
have as objects small categories, and its 1-cells should be
essentially profunctors of the form $\cl{F}\bb{C}
\makebox[0pt][l]{\hspace{1em}$\mid$}\longrightarrow \bb{D}$ in the
opposite category. However, the calculations involved in this
description are intricate and require further work.
In this abstract view, a symmetric multicategory $Q$ would then be
a monad in this bicategory. Arrows and symmetric action (Data 2,
5) are given by the action of $Q$, identities (Data 3) by the unit
of the monad and composition (Data 4) by the multiplication for
the monad.
\begin{definition} Let $Q$ and $R$ be symmetric multicategories
with object-categories \bb{C} and \bb{D} respectively. A {\em
morphism of symmetric multicategories} $F:Q \longrightarrow R$ is
given by
\begin{itemize}
\item A functor $F=F_0:{\mathbb C} \longrightarrow {\mathbb D}$
\item For each arrow $f \in Q(x_1, \ldots ,x_k;x)$ an
arrow $Ff \in R(Fx_1, \ldots ,Fx_k; Fx)$
\end{itemize}
\noindent satisfying
\begin{itemize}
\item $F$ preserves identities: $F(\iota(f)) = \iota(Ff)$ so in
particular $F(1_x) = 1_{Fx}$
\item $F$ preserves composition: whenever it is defined
\[F(f \circ (g_1,\ldots,g_k)) = (Ff \circ (Fg_1, \ldots ,
Fg_k))\]
\item $F$ preserves symmetric action: for each $f \in Q(x_1,
\ldots , x_k;x)$ and $\sigma \in {\mathbf S}_k$
\[F(f\sigma) = (Ff) \sigma \]
\end{itemize}
\end{definition}
\noindent Composition of such morphisms is defined in the obvious
way, and there is an obvious identity morphism $1_Q:Q
\longrightarrow Q$. Thus symmetric multicategories and their
morphisms form a category {\bf SymMulticat}.
\begin{definition} A morphism $F:Q \longrightarrow R$ is an {\em equivalence} if and
only if the functor $F_0:{\mathbb C} \longrightarrow {\mathbb D}$
is an equivalence, and $F$ is full and faithful. That is, given
objects $x_1, \ldots, x_m, x$ the induced function
\[F: Q(x_1, \ldots, x_m; x) \lra R(Fx_1, \ldots, Fx_m; Fx)\]
is an isomorphism.
\end{definition}
Note that, given morphisms of symmetric multicategories
\[Q \map{F} R \map{G} P\]
we have a result of the form `any 2 gives 3', that is, if any two
of $F, G$ and $GF$ are equivalences, then all three are
equivalences.
Furthermore, we expect that \cat{SymMulticat} may be given the
structure of a 2-category, and that the equivalences in this
2-category would be the equivalences as above. However, we do not
pursue this matter here.
\subsection{Generalised multicategories}
\label{tacgen}
In \cite{hmp1} multitopes are constructed using `generalised
multicategories'; in fact we need only a special case of the
generalised multicategory defined in \cite{hmp1}, that is, the
`1-level' case.
\begin{definition} A {\em generalised multicategory} $M$ is given by
\begin{itemize}
\item A set $o(M)$ of objects
\item A set $a(M)$ of arrows, with source and target
functions
\[\begin{array}{rcccl}
s &:& a(M) & \lra & o(M)^\star \\
t &:& a(M) & \lra & o(M)\\
\end{array}\]
\noindent where $A^\star$ denotes the set of lists of elements of
a set $A$. If
\[s(f)=( x_1, \ldots ,x_k)\]
we write $s(f)_p = x_p$ and $|s(f)| = \{1,\ldots,k\}$.
\item Composition: for any $f,g \in a(M)$ with $t(g) = s(f)_p$, a
composite $f \circ_p g \in a(M)$ with
\begin{eqnarray*}
t(f \circ_p g) & = & t(f) \\
|s(f \circ_p g)| & \cong & (|s(f)| \setminus \{p\}) \amalg |s(g)|
\end{eqnarray*} and {\em amalgamating maps}
\[\begin{array}{rcccc}
\psi[f,g,p] &:& |s(f)| \setminus \{p\} & \lra & |s(f\circ_p g)|\\
\phi[f,g,p] &:& |s(g)| & \lra & |s(f \circ_p g)|.
\end{array}\] such that $\psi \amalg \phi$ gives a bijection as above.
Equivalently, writing
\begin{eqnarray*}
s(f) &=& ( x_1, \ldots x_k),\\
s(g) &=& ( y_1, \ldots, y_j)
\end{eqnarray*}and
\[(z_1, \ldots , z_{k+j-1}) = ( x_1, \ldots,
x_{p-1}, y_1, \ldots, y_j, x_{p+1}, \ldots, x_{k})\]
we have a permutation $\chi = \chi[f,g,p] \in {\mathbf S}_{k+j-1}$
such that
\[s(f \circ_p g) = ( z_{\chi(1)}, \ldots, z_{\chi(k+j-1)}).\]
\item Identities: for each $x \in o(M)$ an arrow $1_x : x \lra x \in a(M)$
\end{itemize}
\noindent satisfying the following laws
\begin{itemize}
\item Unit laws: for any $f \in a(M)$ with $s(f)_p=x$ and
$t(f)=y$, we have
\[\begin{array}{c} 1_y \circ_1 f = f = f \circ_p 1_x \\
\chi[1_y,f,1]= \iota = \chi[f,1_x,p]. \end{array}\]
\item Associativity: for any $f,g,h \in a(M)$ with $s(f)_p=t(g)$ and
$s(g)_q=t(h)$ we have
\[(f \circ_p g) \circ_{\bar{q}} h = f \circ_p(g \circ_q h)\]
where $\bar{q}=\phi[f,g,p](q)$. Furthermore, the composite
amalgamation maps must also be equal; that is, the following
coherence conditions must be satisfied:
\[\begin{array}{c} \psi[f\circ_p g, h, \bar{q}] \circ
\psi[f,g,p] = \psi[f, g\circ_q h, p]\\
\psi[f\circ_p g, h, \bar{q}] \circ \bar{\phi}[f,g,p] = \phi[f,
g\circ_q h, p] \circ \psi[g,h,q]\\
\phi[f\circ_p g, h, \bar{q}] = \phi[f, g\circ_q h, p] \circ
\phi[g,h,q]\end{array}\]
where $\bar{\phi}$ indicates restriction to the appropriate
domain. Note that the conditions concern the source elements of
$f$, $g$ and $h$ respectively.
\item Commutativity: for any $f,g,h \in a(M)$ with $s(f)_p=t(g)$,\
$s(f)_q = t(h)$,\ $p \neq q$ we have
\[(f \circ_p g)\circ_{\bar{q}} h = (f \circ_q h) \circ_{\bar{p}} g\]
where $\bar{q}=\psi[f,g,p](q)$ and $\bar{p} = \psi[f,h,q](p)$. As above, the
composite amalgamation maps must also be equal; that is, the
following coherence conditions must be satisfied:
\[\begin{array}{c} \psi[f\circ_p g, h, \bar{q}] \circ
\bar{\psi}[f,g,p] = \psi[f\circ_q h, g, \bar{p}] \circ
\bar{\psi}[f,h,q]\\
\psi[f\circ_p g, h, \bar{q}] \circ \phi[f,g,p] =
\phi[f\circ_q h, g, \bar{p}]\\
\phi[f\circ_p g, h, \bar{q}] = \psi[f\circ_q h, g, \bar{p}]
\circ \phi[f,h,q]. \end{array}\]
The conditions concern the source elements of $f$, $g$ and $h$
respectively.
\end{itemize}
\end{definition}
Note that the coherence conditions are necessary in case of
repeated source elements.
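For illustration (an example we add here; it is not used in the
sequel), suppose $f,g \in a(M)$ with $s(f)=(x_1,x_2)$,
$s(g)=(y_1,y_2)$ and $t(g)=s(f)_2$. Then $t(f\circ_2 g)=t(f)$ and,
if the amalgamating maps are chosen order-preservingly,
$\chi[f,g,2]=\iota$ and
\[s(f\circ_2 g)=(x_1,y_1,y_2), \qquad \psi[f,g,2](1)=1, \qquad
\phi[f,g,2](1)=2,\ \phi[f,g,2](2)=3.\]
A different choice of amalgamating maps permutes this source list,
and $\chi[f,g,2]$ records the resulting permutation.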
\begin{definition} A {\em morphism of generalised multicategories}
\[F=(F,\theta):M \longrightarrow N\] is given by:
\begin{itemize}
\item for each object $x \in o(M)$ an object $Fx \in
o(N)$
\item for each arrow \[f:(x_1, \ldots ,x_k)\lra x \in a(M)\] a
{\em transition map} $\theta_f= \theta_f^F \in {\mathbf S}_k$ and
an arrow \[Ff:(Fx_{\theta_f^{-1}(1)}, \ldots, Fx_{\theta_f^{-1}(k)})
\lra Fx \in a(N)\]
\end{itemize}
\noindent satisfying
\begin{itemize}
\item $F$ preserves identities: $F(1_x)=1_{Fx}$
\item $F$ preserves composition: if \(f,g \in a(M)\) and \(t(g) = s(f)_p\)
then \[Ff \circ_{\theta_f(p)}Fg = F(f \circ_p g).\]
Furthermore, the following coherence conditions must be satisfied:
\[\begin{array}{c} \theta_{f\circ_p g} \circ \phi[f,g,p] =
\phi[Ff, Fg, \theta_f(p)] \circ \theta_g \\
\theta_{f\circ_p g} \circ \psi[f,g,p] =
\psi[Ff, Fg, \theta_f(p)] \circ \bar{\theta}_f \end{array}\]
on the source elements of $g$ and $f$ respectively, where
$\bar{\theta}$ indicates the restriction of $\theta$ as
appropriate.
\end{itemize}
\end{definition}
Given morphisms of generalised multicategories $M
\stackrel{F}{\longrightarrow} N \stackrel{G}{\longrightarrow} L$
we have a composite morphism $H = G \circ F : M \longrightarrow L$
where $H$ is the usual composite on objects and arrows, and we put
$\theta_f^H =\theta_{Ff}^G \circ \theta_f^F$. There is an identity
morphism $1_M:M \longrightarrow M$ which is the usual identity on
objects and arrows, with $\theta_f = \iota$ for all $f \in a(M)$.
Thus generalised multicategories and their morphisms form a category
{\bfseries GenMulticat}. We now compare the two theories of
multicategories.
\subsection{Relationship between symmetric and generalised
multicategories} \label{tacxi}
We compare symmetric and generalised multicategories by means of a
functor
\[\xi: \cat{GenMulticat} \lra \cat{SymMulticat}.\]
Given a generalised multicategory $M$, the idea is to generate a symmetric
action freely by adding in symmetric copies of each morphism. The arrows of $M$
are then representatives of symmetry classes of arrows of $\xi(M)$.
We begin by constructing this functor, and then show that it is
full and faithful.
We construct the functor $\xi$ as follows. Given a generalised
multicategory $M$, we define an object-discrete symmetric
multicategory $\xi(M) = Q$ by
\begin{itemize}
\item Objects: $o(Q)={\mathbb C}$ is the discrete category with objects
$o(M)$.
\item Arrows: for each
\[p =(x_1, \ldots ,x_k;x) \in \cl{F}({\mathbb C})^
{\mbox{\scriptsize op}} \times {\mathbb C}\]
an element of $Q(p)$ is given by $(f,\sigma)$ where $\sigma \in
{\mathbf S}_k$ and
\[f : (x_{\sigma(1)}, \ldots , x_{\sigma(k)})\lra x \in
a(M).\]
\item Composition: by commutativity, it is sufficient to define
\[\alpha \circ_p \beta = \alpha \circ (1_{x_1}, \ldots ,
1_{x_{p-1}}, \beta, 1_{x_{p+1}}, \ldots , 1_{x_k})\]
where
\begin{eqnarray*}
\alpha &=& (f,\sigma) \in Q(x_1, \ldots , x_k;x)\\
\mbox{and \ \ } \beta &=& (g,\tau) \in Q(y_1, \ldots
,y_j;x_p).
\end{eqnarray*} Now given such $\alpha$ and $\beta$, we have in $M$ arrows
\[\begin{array}{rcccc}
f &:& ( x_{\sigma(1)}, \ldots , x_{\sigma(k)}) &\lra& x\\
\mbox{and \ \ }g &:& ( y_{\tau(1)}, \ldots ,y_{\tau(j)}) &\lra& x_p
\end{array}\]giving a composite in $M$
\[f \circ_{\bar{p}} g: ( z_{\chi(1)}, \ldots , z_{\chi(k+j-1)})\longrightarrow x\]
where $\bar{p} = \sigma^{-1}(p)$, $\chi = \chi(f,g,\bar{p})$ and
\[(z_1 , \ldots , z_{k+j-1}) = (x_{\sigma(1)}, \ldots ,
x_{\sigma(\bar{p}-1)}, y_{\tau(1)}, \ldots , y_{\tau(j)} ,
x_{\sigma(\bar{p}+1)}, \ldots, x_{\sigma(k)}).\]
We seek a composite in $Q$ with source
\[( a_1, \ldots , a_{k+j-1}) = ( x_1, \ldots , x_{p-1},
y_1, \ldots, y_j, x_{p+1}, \ldots , x_k)\] so the composite should
be of the form $(f \circ_{\bar{p}} g, \gamma)$, where $f
\circ_{\bar{p}} g$ has source
\[(a_{\gamma(1)}, \ldots, a_{\gamma(k+j-1)})\]
in $M$. So we define a permutation $\gamma \in {\mathbf
S}_{j+k-1}$ by $a_{\gamma(i)} = z_{\chi(i)}$ and we define the
composite to be
\[(f,\sigma) \circ_p (g,\tau) = (f \circ_{\bar{p}} g
,\gamma).\]
Note that $\gamma$ is determined by $\sigma$, $\tau$ and $\chi$.
\item For each $x \in {\mathbb C} = o(M)$, $1_x \in
Q(x;x)$ is given by $(1_x, \iota)$.
\item For each permutation $\sigma \in {\mathbf S}_k$, we have a
map
\[\begin{array}{rccc}
\sigma : & Q(x_1, \ldots , x_k;x) & \lra & Q(x_{\sigma(1)},
\ldots , x_{\sigma(k)};x)
\\
& (f,\tau) & \longmapsto & (f,\sigma^{-1}\tau) \\
\end{array}.\]
Note that $f$ has source $(x_{\tau(1)} \ldots, x_{\tau(k)})$ in
$M$, and $(f, \sigma^{-1}\tau)$ on the right hand side exhibits
the $i$th source of $f$ to be $x_{\sigma(\sigma^{-1}\tau)(i)} =
x_{\tau(i)}$ as required.
\end{itemize}
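As an added illustrative check (using only the data above), take
$k=2$, an arrow $f:(x_1,x_2)\lra x$ in $M$ and the nontrivial
permutation $\sigma \in {\mathbf S}_2$. Then $(f,\iota)\in
Q(x_1,x_2;x)$ and
\[(f,\iota)\sigma=(f,\sigma^{-1})\in
Q(x_{\sigma(1)},x_{\sigma(2)};x)=Q(x_2,x_1;x),\]
and indeed the pair $(f,\sigma^{-1})$ exhibits the source of $f$ in
$M$ as $(x_1,x_2)$, as it must.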
\numarabic
We check that this definition satisfies the conditions for a
symmetric multicategory:
\begin{enumerate}
\item Unit laws follow from unit laws of {\bf GenMulticat}
\item Associativity follows from associativity in {\bf
GenMulticat} and the coherence conditions for amalgamating maps
\item $((f,\tau)\sigma)\sigma' = (f,\sigma^{-1}\tau)\sigma' = (f,
{\sigma'}^{-1}\sigma^{-1}\tau) = (f,\tau)(\sigma\sigma')$
\item Given
\begin{eqnarray*}
(f,\tau) &\in& Q(x_1, \ldots , x_k;x),\\
(g,\mu) &\in& Q(y_1, \ldots , y_j; x_p)
\end{eqnarray*}
and $\sigma \in {\mathbf S}_k$ we check that
\[(f,\tau)\sigma\circ_{\bar{p}} (g,\mu) =
((f,\tau)\circ_p(g,\mu)) \cdot \rho(\sigma)\]
where $\bar{p}=\sigma^{-1}(p)$ and $\rho$ is the homomorphism
indicated in Section~\ref{tacsym}. The required result then
follows by simultaneous composition. Note that it is sufficient to
check that both expressions in question have the same first
component and source (in $Q$), so we write $\gamma,\gamma'$ for
the permutations in the second component, without specifying what
they are. Now
\[(f,\tau)\sigma \circ_{\bar{p}} (g,\mu) =
(f,\sigma^{-1}\tau)\circ_{\bar{p}}(g,\mu) =
(f\circ_{\tau^{-1}(p)}g,\gamma)\]
with source
\[(x_{\sigma(1)}, \ldots, x_{\sigma(\bar{p}-1)}, y_1,
\ldots, y_j, x_{\sigma(\bar{p}+1)}, \ldots, x_{\sigma(k)})\]
and
\[((f,\tau)\circ_p(g,\mu))\cdot \rho(\sigma) =
(f\circ_{\tau^{-1}(p)}g,\gamma')\]
with source
\[(z_{\rho\sigma(1)}, \ldots, z_{\rho\sigma(k+j-1)})\]
where
\[( z_1, \ldots, z_{k+j-1})=( x_1, \ldots, x_{p-1}, y_1, \ldots,
y_j, x_{p+1}, \ldots, x_k ).\]
The action of $\rho(\sigma)$ is that of $\sigma$ on the $x_i$ but
with $( y_1, \ldots, y_j )$ substituted for $x_p$. So
\[\begin{array}{c}
( z_{\rho\sigma(1)}, \ldots, z_{\rho\sigma(k+j-1)} ) =
\mbox{\hspace{45mm}}\\
\mbox{\hspace*{20mm}}(x_{\sigma(1)}, \ldots, x_{\sigma
(\bar{p}-1)}, y_1, \ldots, y_j, x_{\sigma(\bar{p}+1)},
\ldots, x_{\sigma(k)} )\end{array}\] as required.
\item Given $(f,\tau)$ and $(g,\mu)$ as above, and $\sigma \in
\cat{S}_j$ we check that
\[(f,\tau) \circ_p (g,\mu)\sigma = ((f,\tau) \circ_p
(g,\mu))\sigma'\]
where $\sigma' \in \cat{S}_{k+j-1}$ is given by inserting $\sigma$
at the $p$th place.
Now, on the left hand side we have
\[\begin{array}{rcl} (f,\tau) \circ_p (g,\mu)\sigma & = &
(f, \tau) \circ_p (g, \sigma^{-1} \mu) \\
& = & (f \circ_{\tau^{-1}(p)} g, \gamma), \end{array}\]
say, with source
\[(x_1, \ldots, x_{p-1}, y_{\sigma(1)}, \ldots y_{\sigma(j)},
x_{p+1}, \ldots, x_k).\]
This agrees with the right hand side.
\item Since all object-morphisms are identities, this axiom is
trivially satisfied.
\end{enumerate}
So $\xi(M)$ is a symmetric multicategory.
Next we define $\xi$ on morphisms of generalised multicategories.
Given a morphism $F:M \longrightarrow N$ in \cat{GenMulticat} we
define a morphism \[\xi F: \xi M \longrightarrow \xi N\] in
\cat{SymMulticat} as follows.
\begin{itemize}
\item On objects: given $x \in o(\xi M) = o(M)$, put \[(\xi F)(x) =
Fx \in o(N) = o(\xi N)\]
\item On arrows: given $(f,\sigma) \in \xi M(x_1, \ldots, x_k;x)$,
put
\[\xi F(f,\sigma) = (Ff,\sigma {\theta_f}^{-1})\]
and check that
\[(Ff,\sigma {\theta_f}^{-1}) \in \xi N (Fx_1, \ldots, Fx_k; Fx).\]
First note that
\[t(Ff,\sigma {\theta_f}^{-1}) = t(Ff) = F(t(f))=Fx.\]
Now
\[s(f) = ( x_{\sigma(1)}, \ldots, x_{\sigma(k)}) \]
in $M$, so by the action of $(F,\theta)$ we have
\[s(Ff)=( Fx_{\sigma{\theta_f}^{-1}(1)}, \ldots,
Fx_{\sigma{\theta_f}^{-1}(k)})\]
in $N$, and so
\[(Ff,\sigma {\theta_f}^{-1}) \in \xi N (Fx_1, \ldots, Fx_k; Fx)\]
as required.
\end{itemize}
We check that this definition satisfies the laws for a morphism of
symmetric multicategories:
\begin{itemize}
\item $\xi F$ preserves identities: since $\theta_{1_x} \in {\mathbf S}_1 =
\{\iota\}$, we have \[\xi F(1_x, \iota) = (F(1_x), \iota) =
(1_{Fx}, \iota).\]
\item $\xi F$ preserves composition: we check that
$\xi F(\alpha \circ_p \beta) = \xi F \alpha \circ_p \xi F \beta$,
and the result then follows by simultaneous composition. Put
\begin{eqnarray*}
\alpha &=& (f,\sigma) \in Q(x_1,\ldots,x_k ; x)\\
\mbox{and \ \ } \beta &=& (g,\tau) \in Q(y_1, \ldots , y_j ;
y).
\end{eqnarray*} Then
\begin{eqnarray*}
\xi F(\alpha \circ_p \beta) & = & \xi F(f \circ_{\sigma^{-1}(p)}g\
,\
\gamma) \\
& = & (\ F(f \circ_{\sigma^{-1}(p)}g)\ ,\ \gamma
{\theta_f}^{-1}\ ) \\
& = & (\ Ff \circ_{\theta_f \sigma^{-1}(p)} Fg\ ,\ \gamma
{\theta_f}^{-1}\ )
\end{eqnarray*}
and this has source
\[s(F \alpha \circ_p F \beta)=(Fx_1, \ldots , Fx_{p-1}, Fy_1, \ldots, Fy_j,
Fx_{p+1}, \ldots , Fx_k) .\]
For the right hand side, we have
\[\xi F \alpha = (Ff, \sigma \theta_f^{-1})\]
\[\xi F \beta = (Fg, \tau \theta_g^{-1})\]
and so the first component of $\xi F \alpha \circ_p \xi F \beta$
is also $Ff \circ_{\theta_f \sigma^{-1}(p)} Fg$. So since $\xi
F(\alpha \circ_p \beta)$ and $ \xi F \alpha \circ_p \xi F \beta$
agree in the first component and source, we have the result
required.
\item $\xi F$ preserves symmetric action:
\begin{eqnarray*}
\xi F (\ (f,\tau)\sigma\ ) & = & \xi F(f,
\sigma^{-1}\tau) \\
& = & (Ff\ ,\ \sigma^{-1}\tau{\theta_f}^{-1}) \\
& = & (Ff\ ,\ \tau{\theta_f}^{-1})\sigma \\
& = & (\xi F(f,\tau))\sigma
\end{eqnarray*}
\end{itemize}
\noindent So $\xi F$ is a morphism of symmetric multicategories.
We check that $\xi$ is functorial. Clearly $\xi 1_M = 1_{\xi M}$.
Now consider morphisms of generalised multicategories
\[M \map{F} N \map{G} L\]
so we need to show \[\xi(G \circ F) = \xi G \circ \xi F.\]
\begin{itemize}
\item On objects
\begin{eqnarray*}
\xi(G \circ F)(x) & = & (G \circ F)(x) \\
& = & (\xi G \circ \xi F)(x)
\end{eqnarray*}
\item On arrows
\begin{eqnarray*}
\xi(G \circ F)(f,\sigma) & = & (\ (G \circ F)(f)\ ,\
\sigma({\theta^{GF}}_f)^{-1}\ ) \\
& = & (\ GFf\ ,\ \sigma ({\theta^G}_{Ff} \circ {\theta^F}_f)^{-1}\ ) \\
& = & (\ GFf\ ,\ \sigma({\theta^F}_f)^{-1}({\theta^G}_{Ff})^{-1}\ ) \\
& = & \xi G(\ Ff\ ,\ \sigma({\theta^F}_f)^{-1}\ ) \\
& = & (\xi G \circ \xi F)(f, \sigma)
\end{eqnarray*}
\end{itemize}
So $\xi$ is a functor as required.
\begin{proposition}
The functor $\xi:\cat{GenMulticat} \longrightarrow
\cat{SymMulticat}$ is full and faithful.
\end{proposition}
\begin{prf} Given any morphism
\[G: \xi M \lra \xi N\]
of symmetric multicategories, we show that there is a unique
morphism
\[H=(H,\theta):M\lra N\]
of generalised multicategories such that
\[\xi H = G.\]
Suppose first that such an $H$ exists.
\begin{itemize}
\item On objects: for each object $x \in o(M) = o(\xi M)$ we must
have
\[Hx = (\xi H)x = Gx.\]
\item On arrows: given an arrow $f \in M(x_1, \ldots , x_k; x)$, we certainly have
\[\begin{array}{c}
(f,\iota) \in \xi M(x_1, \ldots, x_k; x)\\
\mbox{and \ \ } G(f,\iota) = (\bar{f}, \sigma)
\in \xi N(Gx_1, \ldots Gx_k; Gx),
\end{array}\] say, where $\bar{f}$ is a morphism in $N$ with
source
\[s(\bar{f}) = ( Gx_{\sigma(1)}, \ldots, Gx_{\sigma(k)} ).\]
Now $(\xi H)(f, \iota) = (Hf, \theta_f^{-1})$ but we must have
\[\begin{array}{rcl} (\xi H)(f,\iota) & = & G(f, \iota) \\
& = & (\bar{f}, \sigma) \end{array}\]
so we must have $Hf=\bar{f}$ and $\theta_f = \sigma^{-1}$.
\end{itemize}
So we define $H$ as above and check that this satisfies the axioms
for a morphism of generalised multicategories.
\begin{itemize}
\item $H$ preserves identities
\end{itemize}
We have
\[G(1_x, \iota) = (1_{Gx}, \iota)\]
so
\[H(1_x)=1_{Gx}=1_{Hx}.\]
\begin{itemize}
\item $H$ preserves composition
\end{itemize}
We need to show
\[Hf \circ_{\theta_f(p)} Hg = H(f\circ_p g)\]
and that the coherence conditions are satisfied. Now, $G$
preserves the composition of $\xi M$ so
\[G\alpha \circ_p G\beta = G(\alpha \circ_p \beta).\]
Now we have
\[\begin{array}{rcl} G\alpha \circ_p G\beta & = & (\bar{f},
\theta_f^{-1}) \circ_p (\bar{g}, \theta_g^{-1})\\
& = & (\bar{f} \circ_{\theta_f(p)} \bar{g}, \gamma), \mbox{\
say}
\end{array}\]
and
\[\begin{array}{rcl} G(\alpha \circ_p \beta) & = & G(f\circ_p
g, \gamma') \\
& = & (\overline{f \circ_p g}, \gamma''), \mbox{\ say.}\end{array}\]
So these must be equal on both components. Comparing first
components, we have
\[\overline{f\circ_p g} = \bar{f} \circ_{\theta_f(p)} \bar{g}\]
but by definition we have
\[\begin{array}{rcl} \overline{f \circ_p g} & = & H(f \circ_p g)\\
\mbox{and\ \ } \bar{f}\circ_{\theta_f(p)} \bar{g} & = & Hf
\circ_{\theta_f(p)} Hg \end{array}\]
so
\[Hf \circ_{\theta_f(p)} Hg = H(f\circ_p g)\]
as required. Furthermore, equality of the second components gives
precisely the coherence condition we require, since $\gamma$ is
formed from $\theta_f, \theta_g$ and the amalgamation map
$\chi(\bar{f}, \bar{g}, \theta_f(p))$, and $\gamma''$ is formed
from $\chi(f, g, p)$ and $\theta_{f \circ_p g}$.
So $H$ is a morphism of generalised multicategories; by
construction it is unique such that $\xi H = G$, so $\xi$ is
indeed full and faithful. \end{prf}
We now give necessary and sufficient conditions for a symmetric
multicategory to be in the image of $\xi$.
\begin{definition} We say that a symmetric multicategory $ Q$ is {\em freely
symmetric} if and only if for every arrow $\alpha \in Q$ and
permutation $\sigma$ \[\alpha \sigma = \alpha \Rightarrow \sigma =
\iota.\] \end{definition}
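For illustration (a remark we add), any symmetric multicategory of
the form $\xi(M)$ is freely symmetric, as the next proposition
shows; by contrast, a symmetric multicategory containing a binary
arrow $\alpha$ fixed by the nontrivial permutation in ${\mathbf
S}_2$ (a `commutative' binary operation) is not freely symmetric.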
\begin{proposition} \label{prop119} Let $ Q$ be a symmetric multicategory. Then $ Q \cong
\xi(M)$ for some generalised multicategory $M$ if and only if $ Q$
is object-discrete and freely symmetric.
\end{proposition}
\begin{prf} Suppose $ Q \cong \xi(M)$. Then by the definition of $\xi$,
$ Q$ is object-discrete, with object-category ${\mathbb C}\cong
o(M)$. To show that $Q$ is freely symmetric, write $p=( x_1,
\ldots, x_k;x )$, so
\[\begin{array}{rcl} Q(p) = \{(f,\tau) & | & f \in a(M), \ \tau
\in {\mathbf S}_k\\
&& f: x_{\tau(1)}, \ldots, x_{\tau(k)} \lra x \in M \}\end{array}\]
and consider $\alpha = (f,\tau) \in Q(p)$. Now $(f,\tau)\sigma =
(f,\sigma^{-1}\tau)$ so
\begin{eqnarray*}
\alpha\sigma=\alpha & \Rightarrow & \sigma^{-1}\tau = \tau \\
& \Rightarrow & \sigma = \iota
\end{eqnarray*} as required.
Conversely, suppose that $Q$ is object-discrete and freely
symmetric. So, given an arrow $\alpha$ of arity $k$, we have
distinct arrows $\alpha\sigma$ for each $\sigma\in{\mathbf S}_k$.
We define an equivalence relation $\sim$ on $a(Q)$, by
\[\alpha \sim \beta \iff \beta = \alpha\sigma \mbox{ for some
permutation } \sigma\]
and we specify a representative of each equivalence class.
Now let $M$ be a generalised multicategory whose objects are those
of $Q$, and whose arrows are the chosen representatives of the
equivalence classes of $\sim$. Composition is inherited, with
amalgamation maps re-ordering the sources as necessary. So
associativity and commutativity are inherited; the coherence
conditions for amalgamation maps are satisfied since $Q$ is freely
symmetric. Observe that for each $x \in {\mathbb C}$, the
equivalence class of $1_x$ is $\{1_x\}$, so $M$ inherits
identities.
So $M$ is a generalised multicategory, and $\xi(M) \cong Q$. Note
that a different choice of representatives would give an
equivalent generalised multicategory. \end{prf}
\begin{definition} We call a symmetric multicategory {\em tidy} if it is
freely symmetric with a category of objects equivalent to a
discrete one. We write \cat{TidySymMulticat} for the full
subcategory of \cat{SymMulticat} whose objects are tidy symmetric
multicategories. \end{definition}
\begin{lemma} \label{tidylemma} A \sm\ is tidy if and only if it is equivalent to
one in the image of $\xi$. \end{lemma}
\begin{prf} We show that $Q$ is tidy if and only if $Q \simeq R$
where $R$ is freely symmetric and object-discrete. The result
then follows by Proposition~\ref{prop119}.
Suppose $Q$ is tidy. We construct $R$ as follows. Let ${\mathbb
C}$ be the category of objects of $Q$, with ${\mathbb C}$
equivalent to a discrete category $S$, say, by
\[{\mathbb C} {{F \atop \longrightarrow} \atop {\longleftarrow
\atop G}} S.\]
Then $R$ is given by
\begin{itemize}
\item $o(R) = S$.
\item $R(d_1,\ldots,d_n;d) = Q(Gd_1,\ldots,Gd_n;Gd)$.
\item identities, composition and symmetric action induced from
$Q$.
\end{itemize}
\noindent Then certainly $Q \simeq R$ and $R$ is freely symmetric
and object-discrete; the converse is clear. \end{prf}
We will later see (Section~\ref{tacopetopes}) that only tidy
symmetric multicategories are needed for the construction of
opetopes. We now include another result that will be useful in
the next section.
\begin{lemma} If $Q$ is a tidy \sm\ then $\elt{Q}$ is equivalent
to a discrete category. \end{lemma}
\begin{prf} This may be proved by direct calculation; it is also
seen in Proposition~\ref{mcatpropf}. \end{prf}
Note that we write $\elt{Q}$ for the category of elements of $Q$,
where $Q$ is here considered as a functor $Q:\cl{F}{\mathbb
C}^{\mbox{\scriptsize op}} \times {\mathbb C} \longrightarrow
\mbox{\cat{Set}}$ with certain extra structure.
So $\elt{Q}$ has as objects pairs $(p,g)$ with $p \in
\cl{F}{\mathbb C}^{\mbox{\scriptsize op}} \times {\mathbb C}$ and
$g \in Q(p)$; a morphism $\alpha:(p,g) \longrightarrow (p',g')$ is
an arrow
$\alpha:p \longrightarrow p' \in \cl{F}{\mathbb C}
^{\mbox{\scriptsize op}} \times {\mathbb C}$
such that
\[\begin{array}{rccc}
Q(\alpha) : & Q(p) & \lra & Q(p')
\\
& g & \longmapsto & g' \ . \\
\end{array}\]
For example, an arrow
\[(\sigma,f_1,f_2,f_3,f_4;f):(x_1,x_2,x_3,x_4;x) \longrightarrow
(y_1,y_2,y_3,y_4;y) \in \cl{F}{\mathbb
C}^{\mbox {\scriptsize op}} \times {\mathbb C}\]
may be represented by the following diagram
\setlength{\unitlength}{0.5mm}
\begin{center}
\begin{picture}(70,140)(20,15)
\put(0,20){
\begin{picture}(70,50)
\put(50,10){\line(0,1){30}}
\put(50,40){\vector(0,-1){15}}
\put(50,44){\makebox(0,0)[b]{$x$}}
\put(50,6){\makebox(0,0)[t]{$y$}}
\put(53,26){\makebox(0,0)[l]{$f$}}
\end{picture}}
\put(0,70){
\begin{picture}(90,80)
\put(20,70){\line(2,-1){60}}
\put(20,70){\vector(2,-1){55}}
\put(40,70){\line(0,-1){30}}
\put(40,70){\vector(0,-1){27}}
\put(60,70){\line(-4,-3){40}}
\put(60,70){\vector(-4,-3){36}}
\put(80,70){\line(-2,-3){20}}
\put(80,70){\vector(-2,-3){18}}
\put(20,74){\makebox[0pt]{$y_{\sigma(4)}$}}
\put(40,74){\makebox[0pt]{$y_{\sigma(2)}$}}
\put(60,74){\makebox[0pt]{$y_{\sigma(1)}$}}
\put(80,74){\makebox[0pt]{$y_{\sigma(3)}$}}
\put(20,36){\makebox(0,0)[t]{$x_1$}}
\put(40,36){\makebox(0,0)[t]{$x_2$}}
\put(60,36){\makebox(0,0)[t]{$x_3$}}
\put(80,36){\makebox(0,0)[t]{$x_4$}}
\put(22,43){\makebox(0,0)[br]{$f_1$}}
\put(42,43){\makebox(0,0)[bl]{$f_2$}}
\put(60,43){\makebox(0,0)[br]{$f_3$}}
\put(78,43){\makebox(0,0)[bl]{$f_4$}}
\end{picture}}
\end{picture}.
\end{center}Then, given any arrow $g \in Q(x_1, \ldots x_m; x)$,
we have an arrow
\[\alpha(g) = g' \in Q(y_1, \ldots, y_m; y)\]
given by
\[g' = (\iota(f) \circ g \circ (\iota(f_1), \ldots,
\iota(f_m))\sigma).\]
So continuing the above example we may have:
\begin{center}
\setlength{\unitlength}{0.4mm}
\begin{picture}(200,150)(40,0)
\put(150,0){
\begin{picture}(70,50)
\put(50,10){\line(0,1){30}}
\put(50,40){\vector(0,-1){15}}
\put(46,40){\makebox(0,0)[r]{$x$}}
\put(50,6){\makebox(0,0)[t]{$y$}}
\put(53,26){\makebox(0,0)[l]{$f$}}
\put(50,40){\circle*{2}}
\end{picture}}
\put(150,0){
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(40,90){\line(0,1){20}}
\put(60,90){\line(0,1){20}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(50,75){\makebox(0,0){$g$}}
\end{picture}}
\put(150,70){
\begin{picture}(90,80)
\put(20,70){\line(2,-1){60}}
\put(20,70){\vector(2,-1){55}}
\put(40,70){\line(0,-1){30}}
\put(40,70){\vector(0,-1){27}}
\put(60,70){\line(-4,-3){40}}
\put(60,70){\vector(-4,-3){36}}
\put(80,70){\line(-2,-3){20}}
\put(80,70){\vector(-2,-3){18}}
\put(20,74){\makebox[0pt]{$y_{\sigma(4)}$}}
\put(40,74){\makebox[0pt]{$y_{\sigma(2)}$}}
\put(60,74){\makebox[0pt]{$y_{\sigma(1)}$}}
\put(80,74){\makebox[0pt]{$y_{\sigma(3)}$}}
\put(16,37){\makebox(0,0)[t]{$x_1$}}
\put(36,37){\makebox(0,0)[t]{$x_2$}}
\put(56,37){\makebox(0,0)[t]{$x_3$}}
\put(76,37){\makebox(0,0)[t]{$x_4$}}
\put(20,40){\circle*{2}}
\put(40,40){\circle*{2}}
\put(60,40){\circle*{2}}
\put(80,40){\circle*{2}}
\put(22,43){\makebox(0,0)[br]{$f_1$}}
\put(42,43){\makebox(0,0)[bl]{$f_2$}}
\put(60,43){\makebox(0,0)[br]{$f_3$}}
\put(78,43){\makebox(0,0)[bl]{$f_4$}}
\end{picture}}
\put(20,0){
\begin{picture}(90,140)
\put(20,90){\line(0,1){20}}
\put(40,90){\line(0,1){20}}
\put(60,90){\line(0,1){20}}
\put(80,90){\line(0,1){20}}
\put(20,90){\line(1,0){60}}
\put(20,90){\line(1,-1){30}}
\put(80,90){\line(-1,-1){30}}
\put(50,40){\line(0,1){20}}
\put(20,114){\makebox[0pt]{$y_1$}}
\put(40,114){\makebox[0pt]{$y_2$}}
\put(60,114){\makebox[0pt]{$y_3$}}
\put(80,114){\makebox[0pt]{$y_4$}}
\put(50,75){\makebox(0,0){$g'$}}
\put(50,36){\makebox(0,0)[t]{$y$}}
\end{picture}}
\put(135,90){\makebox(0,0){$=$}}
\end{picture}.
\end{center} Note that we may write an object $(p,g)\in \mbox{elt}
(Q)$ simply as $g$, since $p$ is uniquely determined by $g$.
\section{The theory of opetopes} \label{opeope}
In this section we give the analogous constructions of opetopes in
each theory, and show in what sense they are equivalent. That is,
we show that the respective categories of $k$-opetopes are
equivalent.
\label{tacslice}
We first discuss the process by which $(k+1)$-cells are
constructed from $k$-cells. In \cite{bd1}, the `slice'
construction is used, giving for any \sm\ $Q$ the slice \mcat\
$Q^+$. In \cite{hmp1} the `multicategory of function replacement'
is used but this has a more far-reaching role than that of the
Baez-Dolan slice. For comparison with the Baez-Dolan theory, we
construct a `slice' which is analogous to the Baez-Dolan slice and
is a special case of a multicategory of function replacement.
Opetopes and multitopes are then constructed by iterating the slicing
process. We finally apply the results already established to show
that the category of multitopes is equivalent to the category of
opetopes.
\subsection{Slicing a symmetric multicategory}
\label{tacsymslice}
Let $Q$ be a symmetric multicategory with a category ${\mathbb C}$
of objects, so $Q$ may be considered as a functor
$Q:\cl{F}{\mathbb C}^{\mbox{\scriptsize op}} \times {\mathbb C}
\longrightarrow \mbox{\cat{Set}}$ with certain extra structure.
The slice multicategory $Q^+$ is given by:
\begin{itemize}
\item Objects: put $o(Q^+) = \mbox{elt}(Q)$
\end{itemize}
So the category $o(Q^+)$ has as objects pairs $(p,g)$ with $p \in
\cl{F}{\mathbb C}^{\mbox{\scriptsize op}} \times {\mathbb C}$ and
$g \in Q(p)$; a morphism $\alpha:(p,g) \longrightarrow (p',g')$ is
an arrow
$\alpha:p \longrightarrow p' \in \cl{F}{\mathbb C}
^{\mbox{\scriptsize op}} \times {\mathbb C}$
such that
\[\begin{array}{rccc}
Q(\alpha) : & Q(p) & \lra & Q(p')
\\
& g & \longmapsto & g' \ \\
\end{array}\]
Then, given any arrow \[g \in Q(x_1, \ldots x_m; x)\] we have an
arrow $\alpha(g) = g' \in Q(y_1, \ldots, y_m; y)$ given by
\[g' = (\iota(f) \circ g \circ (\iota(f_1), \ldots,
\iota(f_m))\sigma)\]
(see Section~\ref{tacxi}).
\begin{itemize}
\item Arrows: $Q^+(f_1, \ldots, f_n;f)$ is given by the set of
`configurations' for composing $f_1, \ldots, f_n$ as arrows of
$Q$, to yield $f$.
\end{itemize}
Writing $f_i \in Q(x_{i1}, \ldots x_{im_i}; x_i)$ for $1 \leq i
\leq n$, such a configuration is given by $(T,\rho, \tau)$ where
\begin{enumerate}
\item $T$ is a planar tree with $n$ nodes. Each node is labelled
by one of the $f_i$, and each edge is labelled by an
object-morphism of $Q$ in such a way that the (unique) node
labelled by $f_i$ has precisely $m_i$ edges going in from above,
labelled by $a_{i1}, \ldots, a_{im_i} \in \mbox{arr}({\mathbb
C})$, and the edge coming out is labelled $a_i \in a({\mathbb
C})$, where $\mbox{cod}(a_{ij}) = x_{ij}$ and $\mbox{dom}(a_i) =
x_i$.
\item $\rho \in {\mathbf S}_k$ where $k$ is the number of leaves
of $T$.
\item $\tau:\{\mbox{nodes of } T\} \longrightarrow [n]=\{1, \ldots, n\}$ is a
bijection such that the node $N$ is labelled by $f_{\tau(N)}$.
(This specification is necessary to allow for the possibility $f_i
= f_j,\ i \neq j$.)
\end{enumerate} Note that $(T,\rho)$ may be considered as a `combed tree',
that is, a planar tree with a `twisting' of branches at the top
given by $\rho$.
The arrow resulting from this composition is given by composing
the $f_i$ according to their positions in $T$, with the $a_{ij}$
acting as arrows $\iota(a_{ij})$ of $Q$, and then applying $\rho$
according to the symmetric action on $Q$. This construction
uniquely determines an arrow $(T,\rho,\tau) \in Q^+(f_1, \ldots,
f_n;f)$.
\begin{itemize}
\item Composition
\end{itemize}
When it can be defined, $(T_1,\rho_1,\tau_1)
\circ_m (T_2,\rho_2,\tau_2) = (T,\rho,\tau)$ is given by
\begin{enumerate}
\item $(T,\rho)$ is the combed tree obtained by replacing the node
${\tau_1}^{-1}(m)$ by the tree $(T_2,\rho_2)$, composing the edge
labels as morphisms of ${\mathbb C}$, and then `combing' the tree
so that all twists are at the top.
\item $\tau$ is the bijection which inserts the source of $T_2$
into that of $T_1$ at the $m$th place.
\end{enumerate}
\begin{itemize}
\item Identities: given an object-morphism
\[\alpha=(\sigma, f_1, \ldots, f_m;f) : g \longrightarrow g',\]
$\iota(\alpha) \in Q^+(g;g')$ is given by a tree with one node,
labelled by $g$, twist $\sigma$, and edges labelled by the $f_i$
and $f$ as in the example above.
\item Symmetric action: $(T,\rho,\tau)\sigma = (T,\rho,\sigma^{-1}\tau)$
\end{itemize}
\noindent This is easily seen to satisfy the axioms for a
symmetric multicategory.
Note that, given a labelled tree $T$ with $n$ nodes and $k$
leaves, there is an arrow $(T,\rho,\tau) \in a(Q^+)$ for every
permutation $\rho \in {\mathbf S}_k$ and every bijection
$\tau:\{\mbox{nodes of } T\} \longrightarrow [n]$. Suppose
\begin{eqnarray*}
s(T,\rho,\tau) &=& ( f_1, \ldots, f_n ) \\
\mbox{and \ \ }t(T,\rho,\tau) &=& f.
\end{eqnarray*}
Then, given any $\rho_1 \in {\mathbf S}_k, \ \tau:\{\mbox{nodes of
} T\} \longrightarrow [n]$, we have
\begin{eqnarray*}s(T,\rho_1\rho,\tau) &=& ( f_1, \ldots, f_n ) \\
\mbox{and \ \ }t(T,\rho_1\rho,\tau) &=& f\rho_1
\end{eqnarray*}
whereas
\begin{eqnarray*} s(T,\rho,\tau_1\tau) &=& ( f_{{\tau_1}^{-1}(1)},
\ldots f_{{\tau_1}^{-1}(n)} ) \\
\mbox{and \ \ }t(T,\rho,\tau_1\tau) &=& f.
\end{eqnarray*} We observe immediately that $Q^+$ is freely symmetric,
since
\begin{eqnarray*} (T,\rho,\tau)\sigma = (T,\rho,\tau) &
\Rightarrow & \sigma^{-1}\tau = \tau \\
& \Rightarrow & \sigma = \iota.
\end{eqnarray*}
However $Q^+$ is not in general object-discrete; we will later see
(Proposition~\ref{mcatpropf}) that $Q^+$ is tidy if $Q$ is tidy.
\subsection {Slicing a generalised multicategory}
\label{tacgenslice}
Given a generalised multicategory $M$, we define a slice
multicategory $M_+$. We use the `multicategory of function
replacement' as defined in \cite{hmp1}, which plays a role similar
to (but more far-reaching than) that of the Baez-Dolan slice. The
slice defined in this section is only a special case of a
multicategory of function replacement, but it is sufficient for
the construction of multitopes. Moreover, for the purpose of
comparison it is later helpful to be able to use this closer
analogy of the Baez-Dolan slice.
We first explain how this slice arises from the multicategory of
function replacement as defined in \cite{hmp1}, and then give an
explicit construction of the slice multicategory that is analogous
to the symmetric case. This latter construction is the one we
continue to use in the rest of the work.
Using the terminology of \cite{hmp1}, the slice is defined as
follows. Let ${\mathcal L}$ be the language with objects $o(M)$
and arrows $a(M)$, and let ${\mathbb F}$ be the free generalised
multicategory on ${\mathcal L}$. So the objects of ${\mathbb F}$
are the objects of $M$, and the arrows of ${\mathbb F}$ are formal
composites of arrows of $M$. We define a morphism of generalised
multicategories $h:{\mathbb F} \longrightarrow M$ as the identity
on objects, and on arrows the action of composing the formal
composite to yield an arrow of M. Then we define $M_+$ to be the
multicategory of function replacement on $({\mathcal L}, {\mathbb
F}, h)$.
Explicitly, the slice multicategory $M_+$ is a generalised
multicategory given by:
\begin{itemize}
\item Objects: $o(M_+) = a(M)$.
\item Arrows: $a(M_+)$ is given by configurations for composing arrows of
$M$.
\end{itemize}
\noindent Such a configuration is given by $T=(T,\rho_T,\tau_T)$,
where: \numroman
\begin{enumerate}
\item $T$ is a planar tree with $n$ nodes labelled by $f_1, \ldots
f_n \in a(M)$, and edges labelled by objects of $M$ in such a way
that, writing
\[s(f_i) = ( x_{i1}, \ldots, x_{im_i} ),\]
the node labelled by $f_i$ has $m_i$ edges coming in, labelled by $
x_{i1}, \ldots, x_{im_i}$ from left to right, and one edge going
out, labelled by $t(f_i)$.
\item $\rho_T \in {\mathbf S}_k$, where $k$ is the number of
leaves of $T$. The composition in $M$ given by $T$ has specified
amalgamation maps giving information about the ordering of the
source; $\rho_T$ is the permutation induced on the source.
\item $\tau_T:\{\mbox{nodes of }T\} \longrightarrow [n]$ is a
bijection so that the node $N$ is labelled by $f_{\tau_T(N)}$. In
fact, specifying $\tau_T$ corresponds to specifying amalgamation
maps in the free multicategory ${\mathbb F}$, and this defines the
amalgamation maps of $M_+$.
\end{enumerate}
Note that whereas in the symmetric case $\rho$ and $\tau$ may be
chosen freely for any given $T$, in this case precisely one
$\rho_T$ and $\tau_T$ is specified for each $T$. The source and
target of such an arrow $T$ are given by $s(T) = ( f_1, \ldots f_n
)$ and $t(T) = f \in a(M)$, the result of composing the $f_i$
according to their positions in $T$. Here, the tree $T$ may be
thought of as a combed tree as in the symmetric case, but with all
edges labelled by identities.
\begin{itemize}
\item Composition
\end{itemize}
When it can be defined, we have $T_1 \circ_m T_2 = T$ as follows:
\begin{enumerate}
\item $T$ is the combed labelled tree obtained from $(T_1,
\tau_{T_1})$ by replacing the node ${\tau_{T_1}}^{-1}(m)$ by the
combed tree $(T_2,\tau_{T_2})$, combing the tree and then
forgetting the twist at the top.
\item The amalgamation maps are defined to reorder the source
as necessary according to $\tau_{T_1}$, $\tau_{T_2}$ and $\tau_T$.
\end{enumerate}
\begin{itemize}
\item Identities: $1_f$ is the tree with one node, labelled by $f$.
\end{itemize}
This definition is easily seen to satisfy the axioms for a
generalised multicategory. Note that a different choice of
amalgamation maps for ${\mathbb F}$ gives rise to different
bijections $\tau_T$ and hence different amalgamation maps in
$M_+$, resulting in an isomorphic slice multicategory.
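For instance (an added illustration), if $f,g\in a(M)$ with
$t(g)=s(f)_1$, then the two-node tree obtained by grafting the node
labelled $g$ onto the first input edge of the node labelled $f$ is
an arrow of $M_+$ with target $f\circ_1 g$, and with source listing
$f$ and $g$ in the order prescribed by the specified bijection
$\tau_T$.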
\subsection{Comparison of slice}
\label{slicesymgen}
In this section we compare the slice constructions and make
precise the sense in which they correspond to one another. Recall
(section~\ref{tacxi}) that we have defined a functor
\[\cat{GenMulticat} \map{\xi} \cat{TidySymMulticat}.\]
We now show that this functor `commutes' with slicing, up to
equivalence.
We will eventually prove (Corollary~\ref{mcatcord}) that for any
generalised multicategory $M$
\[\xi(M_+) \simeq \xi(M)^+.\]
We prove this by constructing, for any morphism of symmetric
multicategories $\phi:Q \longrightarrow \xi(M)$ a morphism
$\phi^+:Q^+ \longrightarrow \xi(M_+)$ such that
\[\phi \mbox{ is an equivalence} \Rightarrow \phi^+ \mbox{ is an
equivalence}.\]
The result then follows by considering the case $\phi = 1$.
We begin by constructing $\phi^+$. Recall
\[\begin{array}[t]{rcll}
o(Q^+) & = & a(Q) & \\
a(Q^+) & = & \{(T,\rho,\tau): &T \mbox { a labelled tree with
$n$ nodes, $k$ leaves}\\
&&& \rho \in {\mathbf S}_k,\\
&&& \tau:\{\mbox{nodes of T}\} \map{\sim} [n]\\
&&& \mbox{edges labelled by morphisms of }{\mathbb C}\}\\
o(\xi(M_+)) & = & a(M) \\
a(\xi(M_+)) & = & \{(T,\sigma)\ : &T \mbox{ a labelled tree
with $n$ nodes} \\
&&& \sigma\in {\mathbf S}_n \\
&&& \mbox{edges labelled by identities}\}.
\end{array}\]
The idea is that given a way of composing arrows $f_1, \ldots,
f_n$ of $Q$ to an arrow $f$, we have a way of composing arrows
$g_1, \ldots, g_n$ of $M$ to an arrow $g$, where
\begin{eqnarray*} \phi(f_i) & = & (g_i, \sigma_i) \\
\mbox{and\ } \phi(f) & = & (g, \sigma).\end{eqnarray*}
Observe that since $\xi M$ is object-discrete, we have $\phi a =
1$ for all object-morphisms $a\in \bb{C}$.
So we define $\phi^+$ as follows:
\begin{itemize}
\item On objects: if $\phi(f) = (g,\sigma),\ g \in a(M)$ then put
$\phi^+(f) = g$.
\item On object-morphisms: since $\xi(M_+)$ is object-discrete,
we must have $\phi^+(\alpha) = 1$ for all object-morphisms
$\alpha$.
\item On arrows: put $\phi^+:(T,\rho,\tau)\longmapsto(\bar{T},\tau\circ
{\tau_{\bar{T}}}^{-1})$, where $\bar{T}$ is the labelled planar
tree obtained as follows. Given a node with label $f$ say, and
$\phi(f) = (g,\sigma)$:
\begin{enumerate}
\item replace the label with $g$
\item `twist' the inputs of the node according to $\sigma$
\item proceed similarly with all nodes, make all edge labels
identities, then comb and ignore the twist at the top of the
resulting tree (since the twist in $M_+$ is determined by the
tree).
\end{enumerate}
\end{itemize}
For example, suppose $T$ is given by
\setlength{\unitlength}{0.7mm}
\begin{center}
\begin{picture}(80,55)
\put(40,0){\line(0,1){20}}
\put(40,20){\line(-3,2){30}}
\put(40,20){\line(-1,2){10}}
\put(40,20){\line(3,2){30}}
\put(40,20){\circle*{1}}
\put(10,43){\makebox[0pt]{$T_1$}}
\put(30,43){\makebox[0pt]{$T_2$}}
\put(70,43){\makebox[0pt]{$T_n$}}
\put(44,20){\makebox(0,0)[tl]{$f$}}
\put(40,35){$\ldots$}
\end{picture}
\end{center}where the $T_i$ are subtrees of $T$, and $\phi(f)=(g,\sigma)$.
Then steps (i) and (ii) above give
\setlength{\unitlength}{0.7mm}
\begin{center}
\begin{picture}(80,55)
\put(40,0){\line(0,1){20}}
\put(40,20){\line(-3,2){30}}
\put(40,20){\line(-1,2){10}}
\put(40,20){\line(3,2){30}}
\put(40,20){\circle*{1}}
\put(10,43){\makebox[0pt]{$T_{\sigma(1)}$}}
\put(30,43){\makebox[0pt]{$T_{\sigma(2)}$}}
\put(70,43){\makebox[0pt]{$T_{\sigma(n)}$}}
\put(44,20){\makebox(0,0)[tl]{$g$}}
\put(40,35){$\ldots$}
\end{picture}
\end{center} and $\bar{T}$ is then defined inductively on the subtrees. Node
$N$ in $\bar{T}$ is considered to be the image of node $N$ in $T$
under the operation $T \longrightarrow \bar{T}$.
Writing
\begin{eqnarray*}
s(T,\rho,\tau) &=& (f_1,\ldots,f_n)\\
\makebox(0,0)[br]{and \ \ }t(T,\rho,\tau) &=& f
\end{eqnarray*} we check that
\begin{eqnarray*}
s(\phi^+(T,\rho,\tau)) &=& (\phi^+(f_1),\ldots,\phi^+(f_n))\\
\makebox(0,0)[br]{and \ \ } t(\phi^+(T,\rho,\tau))
&=&\phi^+(f).
\end{eqnarray*}
Writing $s(\bar{T},\tau\circ\tau_{\bar{T}}^{-1}) =
(g_1,\ldots,g_n)$ in $\xi(M_+)$, we have, in $M_+$
\[s(\bar{T}) =
(g_{\tau\circ\tau_{\bar{T}}^{-1}(1)},\ldots,
g_{\tau\circ\tau_{\bar{T}}^{-1}(n)})\]
so node $N$ is labelled in $\bar{T}$ by
\(g_{\tau\circ\tau_{\bar{T}}^{-1}(\tau_{\bar{T}}(N))} =
g_{\tau(N)}\)
and in $T$ by
\(f_{\tau(N)}.\)
So by definition of $\bar{T}$ we have
\[\phi^+(f_{\tau(N)}) = g_{\tau(N)}\]
so $\phi^+(f_i) = g_i$ for each $i$ and
\[s(\bar{T},\tau\circ\tau_{\bar{T}}^{-1}) =
(\phi^+(f_1),\ldots,\phi^+(f_n))\]
as required. Also, $t(\bar{T},\tau\circ\tau_{\bar{T}}^{-1}) =
\phi^+(f)$ by functoriality of $\phi$ and definition of
composition in $\xi(M)$.
We have shown that $\phi^+$ is functorial on the object-category
$o(Q^+)$; we need to check the remaining conditions for $\phi^+$
to be a morphism of symmetric multicategories. We may now assume
that all edge labels are identities since they all become
identities under the action of $\phi^+$.
\begin{itemize}
\item$\phi^+$ preserves identities:
\end{itemize}
$1_f \in a(Q^+)$ is $(T,\iota,\iota)$ where $T$ has one node,
labelled by $f$. So $\phi^+(1_f)$ is given by the one-node tree
labelled by $\phi^+(f)$, that is, $\phi^+(1_f)=1_{\phi^+(f)}$.
\begin{itemize}
\item $\phi^+$ preserves composition: We need to show
\[\phi^+(\alpha\circ_m\beta)=\phi^+(\alpha)\circ_m\phi^+(\beta).\]
\end{itemize}
Now, the underlying trees are the same by functoriality of $\phi$,
the permutation of leaves is the same by coherence for
amalgamation maps of $M$, and the node ordering is the same by
definition of $\phi^+$.
\begin{itemize}
\item $\phi^+$ preserves symmetric action:
\begin{eqnarray*}
\phi^+((T,\rho,\tau)\sigma) & = &
\phi^+(T,\rho,\sigma^{-1}\tau)\\
& = &
(\bar{T},\sigma^{-1}\tau\circ{\tau_{\bar{T}}}^{-1})\\
& = & (\bar{T},\tau\circ{\tau_{\bar{T}}}^{-1})\sigma\\
& = & (\phi^+(T,\rho,\tau))\sigma.\\
\end{eqnarray*}
\end{itemize}
So $\phi^+$ is a morphism of symmetric multicategories.
\begin{proposition} \label{mcatpropb} Let $Q$ be a symmetric
multicategory, $M$ a generalised multicategory and
$\phi:Q\longrightarrow\xi(M)$ a morphism of symmetric
multicategories. If $\phi$ is an equivalence then $\phi^+$ is an
equivalence.
\end{proposition}
This enables us to prove the following proposition:
\begin{proposition} \label{mcatpropf} If $Q$ is tidy then $Q^+$ is tidy.
\end{proposition}
\noindent {\bfseries Proof of Proposition \ref{mcatpropb}. }
First we observe that given any such morphism $\phi$, $Q$ is
freely symmetric:
\begin{eqnarray*}
\alpha\sigma = \alpha & \Rightarrow & \phi(\alpha\sigma) =
\phi(\alpha)\sigma = \phi(\alpha) \ \in \xi(M) \\
& \Rightarrow & \sigma = \iota,
\end{eqnarray*}
the second implication following from $\xi(M)$ being freely
symmetric.
Now, since $\phi$ is an equivalence, the functor $\phi_0$ on the
category of objects is full, faithful and essentially surjective,
and $\phi$ is full and faithful on arrows. We prove the
proposition in the following steps:
\renewcommand{\labelenumi}{\roman{enumi})}
\begin{enumerate}
\item $\phi^+$ is surjective on objects
\item $\phi^+$ is full on the category of objects
\item $\phi^+$ is faithful on the category of objects
\item $\phi^+$ is full
\item $\phi^+$ is faithful
\end{enumerate}
\begin{prfof}{(i)} Recall the action of $\phi^+$ on objects: let $f\in o(Q^+)
= a(Q)$ with $\phi(f)=(g,\sigma)$ then $\phi^+:f\longmapsto g$.
Now, given any $g\in o(\xi(M_+)) = a(M)$, we have $(g,\iota)\in
a(\xi( M))$. Since $\phi$ is full and essentially surjective on objects, there exists $f\in
a(Q)$ such that $\phi(f)=(g,\sigma)$ for some $\sigma$, and hence $\phi^+(f)=g$.
\end{prfof}
\begin{prfof}{(ii)}$\xi(M_+)$ is object-discrete so we only need to show that
if $\phi^+(f_1)=\phi^+(f_2)$ then there is a morphism $f_1
\longrightarrow f_2$ in $o(Q^+)$. Now
\begin{eqnarray*}
\phi^+(f_1)=\phi^+(f_2) \ \Rightarrow \ \phi(f_1) & = &
\phi(f_2)\sigma \mbox{ \ for some permutation } \sigma\\
& = & \phi(f_2 \sigma).
\end{eqnarray*}
Suppose
\[\begin{array}{rcccc}
f_1 & : & a_1, \ldots, a_n & \lra & a
\\
\mbox{and \ } f_2 \sigma & : & b_1, \ldots, b_n & \lra & b. \\
\end{array}\]
Then we must have $\phi(a_i)=\phi(b_i)$ for all $i$, and
$\phi(a)=\phi(b)$. So there exist morphisms
\[\begin{array}{rcccc}
g_i & : & b_i & \lra & a_i
\\
\mbox{and \ } g & : & a & \lra & b \\
\end{array}\]
and we have
\[f_2\sigma = g \circ f_1 \circ
(g_1,\ldots,g_n)\]
giving a morphism $f_1 \lra f_2$ as required. \end{prfof}
\begin{prfof}{(iii)} An arrow $\alpha:f_1 \longrightarrow f_2$ is uniquely of the
form $(\sigma,g_1,\ldots,g_n;g)$ with
\[\begin{array}{rcccc}
g_i & : & s(f_2)_{\sigma(i)} & \lra & s(f_1)_i
\\
\mbox{and \ } g & : & t(f_1) & \lra & t(f_2) \\
\end{array}\]
as arrows of ${\mathbb C}$. Since $\phi$ is faithful on the
category of objects and $\xi(M)$ is object-discrete, there can
only be one such map. \end{prfof}
\begin{prfof}{(iv)} Given $f_1,\ldots,f_n,f \in o(Q^+)$ and
\[(T,\sigma):(\phi^+(f_1),\ldots,\phi^+(f_n))
\longrightarrow\phi^+(f) \in \xi(M_+)\]
we seek
\[(T',\rho,\tau):(f_1,\ldots,f_n) \longrightarrow f \in
Q^+\]
such that
\[\phi^+(T',\rho,\tau)=(T,\sigma)\]
i.e. such that $\bar{T'}=T$ and
$\tau\circ{\tau_{\bar{T}}}^{-1}=\sigma$.
Write $\phi(f)=(g,\alpha)$ and for each $i$, $\phi(f_i) =
(g_i,\alpha_i)$. Then $\phi^+(f_i)=g_i$ and $\phi^+(f)=g$.
$(T,\sigma)$ is a configuration for composing the $g_i$ to yield
$g$, so we certainly have a configuration for composing the
$(g_i,\alpha_i)$ to yield $g_i$ as follows: replace node label
$g_i$ by $(g_i,\alpha_i)$ and insert a twist ${\alpha_i}^{-1}$
above the node, then comb and add the necessary twist at the top.
This gives a configuration for composing the $f_i$ as follows. We
have
\[t(g_i,\alpha_i) = s(g_k,\alpha_k)_m \Rightarrow \phi(t(f_i))
= \phi(s(f_k)_m).\]
Now $\phi$ is faithful on the category of objects, so there exists
a morphism
\[t(f_i) \longrightarrow s(f_k)_m\]
and we label the edge joining $t(f_i)$ and $s(f_k)_m$ with this
object-morphism. So this gives a configuration for composing the
$f_i$, to yield $h$, say, with $\phi(h) = \phi(f)$. That is, we
have a morphism
\[(f_1,\ldots,f_n) \stackrel{\theta}{\longrightarrow} h\]
such that $\phi^+(\theta) = (T,\sigma)$.
Now $\phi$ is full on the category of objects, so if $\phi(h) =
\phi(f)$ then there is a morphism $\alpha:h \longrightarrow f$ in
$o(Q^+)$. So we have
\[(f_1,\ldots,f_n)\stackrel{\theta}{\longrightarrow} h
\stackrel{\iota(\alpha)}{\longrightarrow}f\]
and $\phi^+(\iota(\alpha))$ is the identity since $\xi(M_+)$ is
object-discrete. So
\[\phi^+(\iota(\alpha)\circ\theta) = \phi^+(\theta) =
(T,\sigma)\]
as required. \end{prfof}
\begin{prfof}{(v)} Suppose $\phi^+(\alpha)=\phi^+(\beta)$. Then, writing
\[\begin{array}{rcccc}
\alpha = (T_1,\rho_1,\tau_1) & : & (f_1,\ldots,f_n) & \lra & f
\\
\beta = (T_2,\rho_2,\tau_2) & : & (f_1,\ldots,f_n) & \lra & f
\end{array}\]
we have $\bar{T_1}=\bar{T_2}=\bar{T}$, say, and
$\tau_1\circ{\tau_{\bar{T_1}}}^{-1} =
\tau_2\circ{\tau_{\bar{T_2}}}^{-1}$ so $\tau_1=\tau_2$. So given
any node $N$ in $\bar{T}$, its pre-image in $T_1$ has the same
label $f_i$ as its pre-image in $T_2$. The same is true of edge
labels, since $\phi$ is faithful on the category of objects.
Then the tree $T_1$ may be obtained from $\bar{T}$ as follows.
Suppose $\phi(f_i) = (g_i,\sigma_i)$ for each $i$. Then for the
node labelled by $g_i$, apply the twist ${\sigma_i}^{-1}$ to the edges
above it, and then relabel the node with $f_i$. This process may
also be applied to obtain the tree $T_2$. Since the process is
the same in both cases, we have $T_1=T_2=T$, say.
Finally, suppose $f'$ is the arrow obtained from composing
according to $T$. Then by the action of $\alpha$, $f=f'\rho_1$,
and by the action of $\beta$, $f=f'\rho_2$. Then, since $Q$ is
freely symmetric, $\rho_1=\rho_2$, so $\alpha=\beta$ as required.
\end{prfof}
\begin{prfof}{Proposition \ref{mcatpropf}} Given a tidy symmetric
multicategory $Q$ we need to show that $Q^+$ is also tidy.
Recall (Lemma~\ref{tidylemma}) that a \sm\ $Q$ is tidy if and only
if it is equivalent to one in the image of $\xi$, $\xi M$ say,
with equivalence given by
\[\phi: Q \lra \xi (M).\]
Then by Proposition~\ref{mcatpropb} $\phi^+$ is an equivalence
\[\phi^+: Q^+ \lra \xi(M_+)\]
so $Q^+$ is tidy as required.
\end{prfof}
\begin{corollary} \label{mcatcord} Let $M$ be a generalised
multicategory. Then \[\xi(M)^+ \simeq \xi(M_+)\] as symmetric
multicategories with a category of objects.
\end{corollary}
\begin{prf} Put $Q=\xi(M)$, $\phi=1$ in Proposition~\ref{mcatpropb}.
\end{prf}
We are now ready to give the construction of opetopes.
\label{tacopetopes}
\subsection{Opetopes}
\label{tacopes}
For any symmetric multicategory $Q$ we write
\[Q^{k+} = \left\{\begin{array} {l@{\extracolsep{2em}}l}
Q & k=0 \\
{(Q^{(k-1)+})}^+ & k \ge 1\end{array} \right. \]
Let $I$ be the symmetric multicategory with precisely one object,
precisely one (identity) object-morphism, and precisely one
(identity) arrow. A {\em $k$-dimensional opetope}, or simply {\em
$k$-opetope}, is defined in \cite{bd1} to be an object of
$I^{k+}$. We write $\bb{C}_k = o(I^{k+})$, the category of
$k$-opetopes.
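For orientation only (a standard observation which we do not use
later): since $I$ has a single object and a single, unary, arrow,
there is precisely one $0$-opetope and one $1$-opetope, and a
$2$-opetope is, up to isomorphism in $\bb{C}_2$, determined by a
natural number, namely the number of nodes of the corresponding
linear tree of copies of the unique arrow of $I$.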
\subsection{Multitopes}
\label{mtopes}
Multitopes are defined in \cite{hmp1} using the multicategory of
function replacement. We give the same construction here, but
state it in the language of slicing; this makes the analogy with
Section~\ref{tacopes} clear.
For any generalised multicategory $M$ we write
\[M_{k+} = \left\{\begin{array} {l@{\extracolsep{2em}}l}
M & k=0 \\
{(M_{(k-1)+})}_+ & k \ge 1 \end{array} \right.\]
Let $J$ be the generalised multicategory with precisely one object
and precisely one (identity) morphism. Then a {\em$k$-multitope}
is defined to be an object of $J_{k+}$. We write $P_k =
o(J_{k+})$, the set of $k$-multitopes; we will also regard this as
a discrete category.
\subsection{Comparison of opetopes and multitopes} \label{opecomp}
In this section we compare the construction of opetopes and
multitopes, applying the results we have already established.
\begin{proposition} For each $k \ge 0$
\[\xi(J_{k+}) \simeq I^{k+}.\]
\end{proposition}
\begin{prf} By induction. First observe that $\xi(J) \cong I$ and write $\phi$ for this
isomorphism. So for each $k \ge 0$ we have
\[\phi^{k+}:I^{k+} \longrightarrow \xi(J_{k+}),\]
where
\[\phi^{k+} = \left\{\begin{array} {l@{\extracolsep{1.5cm}}l}
\phi & k=0 \\
(\phi^{(k-1)+})^+ & k \ge 1
\end{array} \right. \]
Now $I$ is (trivially) tidy, so by Proposition~\ref{mcatpropf},
$I^{k+}$ is tidy for each $k \ge 0$. So by
Proposition~\ref{mcatpropb}, $\phi^{k+}$ is an equivalence for all
$k \ge 0$. \end{prf}
Then on objects, the above equivalence gives the following result.
\begin{corollary} For each $k \geq 0$
\[P_k \simeq {\mathbb C}_k.\]
\end{corollary}
This result shows that `opetopes and multitopes are the same up to
isomorphism'.
\addcontentsline{toc}{section}{References}
\nocite{bd2}
\nocite{hmp2} \nocite{hmp3} \nocite{hmp4}
\nocite{bae1}
\nocite{ks1}
\nocite{sim1}
\nocite{ben1}
\end{document}
\begin{document}
\title{Artinian Gorenstein algebras of embedding dimension four:
Components of $\PGOR(H)$ for $ H=(1,4,7,\ldots ,1)$}
\centerline {{\it Dedicated to Wolmer Vasconcelos on the occasion of his
65th birthday}}
\begin{abstract} A Gorenstein sequence $H$ is a sequence of nonnegative
integers
$H=(1,h_1,\ldots ,h_j=1)$ symmetric about $j/2$ that occurs as the Hilbert
function in
degrees less or equal
$j$ of a standard graded Artinian Gorenstein algebra
$A=R/I$, where $R $ is a polynomial ring in $r$ variables and $
I$ is a graded ideal. The scheme $\mathbb{P}\mathrm{Gor} (H)$ parametrizes all such
Gorenstein algebra quotients of $R$
having Hilbert function $H$ and it is known to be smooth when the embedding
dimension
satisfies $h_1 \le 3$.
The authors give a structure theorem for such
Gorenstein algebras of
Hilbert function
$H=(1,4,7,\ldots )$ when $R=K[w,x,y,z]$ and
$I_2\cong \langle wx,wy,wz\rangle $ ({Theorem~\ref{V,W}, \ref{ResI,J}}).
They also show that any Gorenstein sequence
$H=(1,4,a,\ldots ), a\le 7$ satisfies the condition
$\Delta H_{\le j/2}$ is an O-sequence ({Theorem~\ref{nonempty2},
\ref{aless7}}).
Using these results, they show that
if $H=(1,4,7,h,b,\ldots ,1)$ is a Gorenstein sequence
satisfying $3h-b-17\ge 0$, then the Zariski closure
$\overline{\mathfrak{C}(H)}$ of the
subscheme
$\mathfrak{C}(H)\subset
\mathbb{P}\mathrm{Gor}(H)$ parametrizing Artinian Gorenstein quotients
$A=R/I$ with $I_2\cong \langle wx,wy,wz\rangle$ is a generically smooth
component of
$\mathbb{P}\mathrm{Gor}(H)$ ({Theorem \ref{sevcompt}}).
They show that if in addition $8\le h\le 10$, then such $\mathbb{P}\mathrm{Gor}(H)$ have
several irreducible components (Theorem \ref{sevcomp2}). M. Boij
and others had given previous examples of certain $\mathbb{P}\mathrm{Gor}(H)$ having
several components in embedding
dimension four or more
\cite{Bj2},\cite[Example C.38]{IK}.\par
The proofs use properties
of minimal resolutions, the smoothness of
$\mathbb{P}\mathrm{Gor}(H')$ for embedding dimension three \cite{Klj2}, and the Gotzmann
Hilbert scheme theorems
\cite{Gotz,IKl}.
\end{abstract}
\section{Introduction}\label{j1}\par
Let $R$ be the polynomial ring $R=K[x_1,\ldots x_r]$ over an algebraically
closed field $K$, and denote by
$M=(x_1,x_2,\ldots ,x_r)$ its maximal ideal. When
$r=4$, we let
$R=K[w,x,y,z]$ and regard it as
the coordinate ring of the projective space $\mathbb
P^3$. Let $A=R/I$ be a standard graded
Artinian Gorenstein (GA) algebra, quotient of $R$. We will denote
by
$\mathrm{Soc}(A)=(0:M)$ the socle of $A$,
the one-dimensional subvector space of $A$ annihilated by multiplication by
$M$. It is the minimal
non-zero ideal of $A$. Its degree is the \emph{socle degree} $j(A): \,
j(A)=\mathrm{max}\{i\mid
A_i\not=0\}$. A sequence
$H=(h_0,\ldots ,h_j)=(1,r,\ldots , r,1)$ of positive integers symmetric about
$j/2$ is called a {\it Gorenstein sequence} of socle degree $j$, if it
occurs as the Hilbert
function of some graded Artinian Gorenstein (GA) algebra $A = R/I$. We let
$\Delta
H_i=h_i-h_{i-1}$, and denote by $H_{\le d}$ the subsequence $(1,h_1,\ldots
,h_d)$. The graded Betti
numbers of an algebra are the dimensions of the various graded pieces that
occur in the minimal graded R-resolution of
$A$.
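For orientation we add a standard example: the complete
intersection $A=R/(w^2,x^2,y^2,z^2)$ is a graded Artinian
Gorenstein algebra of socle degree $j=4$, with $\mathrm{Soc}(A)$
spanned by the class of $wxyz$, Hilbert function
$H(A)=(1,4,6,4,1)$, and first difference $\Delta H_{\le
2}=(1,3,2)$, an $O$-sequence.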
When $r=2$, F.~H.~S. Macaulay had shown \cite{Mac1} that an Artinian
Gorenstein quotient of
$R$ is a complete intersection quotient $A=R/(f,g)$; thus, for $A$ graded,
the Gorenstein sequence must have the
form $H(A)=H(s)=(1,2,\ldots ,s-1,s,s,\ldots ,2,1)$. Also, when $r=2$ the
family $\mathbb{P}\mathrm{Gor}(H(s))$ parametrizing such
Artinian quotients is smooth; its closure
$\overline{\mathbb{P}\mathrm{Gor}(H(s))}=\bigcup_{t\le s}\mathbb{P}\mathrm{Gor}(H(t))$ is
naturally isomorphic to the secant variety of a rational normal curve, so
is well understood (see, for example,
\cite[\S 1.3]{IK}). \par For Artinian Gorenstein algebras
$A$ of embedding dimension three ($r=3$), the Gorenstein sequences
$H(A)$, and the possible sequences $\beta$ of graded Betti numbers for $A$
given the Hilbert function $H(A)$ had
been known for some time
\cite{BE,St,D,HeTV}, see also \cite[Chapter 4]{IK}. More recently, the
irreducibility and smoothness of the
family
$\mathbb{P}\mathrm{Gor}(H)$ parametrizing such GA quotients having Hilbert function $H$ was
shown by S.~J.~Diesel and
J.-O.~Kleppe, respectively
(\cite{D,Klj2}). When $r=3$, there are also several dimension formulas for
the family $\mathbb{P}\mathrm{Gor}(H)$,
due to A.~Conca and G.~Valla, J.-O.~Kleppe, Y.~Cho and B.~Jung
\cite{CV,Klj2,CJ}
(see also \cite[Section 4.4]{IK} for a survey); also, M. Boij has found the
dimension of the
subfamily $\mathbb{P}\mathrm{Gor}(H,\beta)$ parametrizing $A$ with a given sequence $\beta$
of graded Betti numbers
\cite{Bj3}. The closure $\overline{\mathbb{P}\mathrm{Gor}(H)}$ is in general less well
understood when $r=3$, but see \cite[Theorem
5.71,\S 7.1-7.2]{IK}.\par
For embedding dimensions five or greater, it is known that a Gorenstein
sequence may be
non-unimodal: that is, it may have several maxima separated by a smaller
local minimum (\cite{BeI,BjL}).
\par
When the embedding dimension is four, it is not known whether
Gorenstein
sequences must satisfy the
condition that the first difference
$\Delta H_{\le j/2}$ is an $O$-sequence --- a sequence admissible for the
Hilbert function of some ideal of
embedding dimension three (see Definition \ref{Macexp}). Nor do we
know whether height four
Gorenstein sequences are unimodal, a weaker restriction. Little was
known about the parameter
scheme
$\mathbb{P}\mathrm{Gor}(H)$ when $r=4$, except that for suitable Gorenstein sequences $H$,
it may have several irreducible
components
\cite{Bj2},\cite[Example C.38]{IK}. We had the following questions, that
guided this portion of
our study.
\par
$\bullet$ Can we find insight into the open problem of whether height four
Gorenstein sequences $H$ must
satisfy the condition, $\Delta H_{\le j/2}$ is an $O$-sequence?
\par
$\bullet$ Do most schemes $\mathbb{P}\mathrm{Gor}(H)$ when $r=4$ have
several irreducible components, or is this a rare phenomenon?
\par
We now outline our main results.
We consider Hilbert sequences $H = (1, 4, 7, \cdots, 1)$. Thus, $I$ is always
a graded height four Gorenstein ideal in $K[w,x,y,z]$ whose minimal sets of
generators
include exactly three quadrics.
First, in Theorem
\ref{V,W}, we obtain a structure theorem for Artinian Gorenstein
quotients $A=R/I$ with Hilbert function $H(A)=H$ and
with $I_2\cong \langle
wx,wy,wz\rangle$. The proof relies on the connection between $I$ and
the intersection $J=I\cap K[x,y,z]$, which is a height three Gorenstein ideal.
We also construct the minimal resolution of $A$ in Theorem \ref{ResI,J}.
This allows us to determine the tangent space ${\mathrm{H}om}_0(I,R/I)$ to
$A$ on $\mathbb{P}\mathrm{Gor} (H)$, and to
show that under a simple condition on $H$, if such an algebra $A$ is
general enough, then $A$ is parametrized by a
smooth point of
$\mathbb{P}\mathrm{Gor}(H)$ (Theorem \ref{sevcomp}).\par
We then study the intriguing case $A=R/I$ where
$I_2\cong \langle
w^2,wx,wy\rangle$ and exhibit a subtle connection between $A$ and a height
three
Gorenstein algebra. We determine in Theorem~\ref{hfw2} that
the possible Hilbert functions $H=H(A)$ for such Artinian algebras $A$ satisfy
\begin{equation}\label{simpleH}
H=H'+(0,1,\ldots ,1,0)
\end{equation}
where $H'$ is a height three Gorenstein sequence. \par
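For instance (an added numerical illustration), the height three
Gorenstein sequence $H'=(1,3,3,1)$, realized for example by
$K[x,y,z]/(x^2,y^2,z^2)$, would give
$H=(1,3,3,1)+(0,1,1,0)=(1,4,4,1)$ in \eqref{simpleH}. \par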
Our result pertaining to the first question is
\begin{uthm} (Theorem \ref{nonempty2}, Corollary
\ref{nonempty3}, Proposition \ref{aless7})
All Gorenstein sequences of the form $H=(1,4,a,\ldots ),\, a\le 7$ must
satisfy the condition
that $\Delta H_{\le j/2}$ is an
$O$-sequence.
\end{uthm}
To show this we eliminate
potential sequences not satisfying
the condition by frequently using the symmetry of the minimal resolution of
a graded Artinian Gorenstein algebra $A$, the Macaulay bounds on the
Hilbert function,
and the Gotzmann Persistence and Hilbert scheme theorems (Theorem
\ref{MacGo}). However, these methods
do not
extend to all height four Gorenstein sequences, and we conjecture that not
all will satisfy the
condition that $\Delta H_{\le j/2}$ is an
$O$-sequence (see Remark~\ref{SIconj}).
\par
We then combine these results with a well known construction of
Gorenstein ideals from sets of
points to obtain our theorem concerning irreducible components of
$\mathbb{P}\mathrm{Gor} (H)$.
\begin{uthm}(Theorem
\ref{sevcomp2}\ref{sevcomp2A}) Let $H=(1,4,7,h,b,\ldots ,1)$ be a
Gorenstein sequence
satisfying $8\le h\le 10$ and
$3h-b-17\mathfrak{g}e 0$. Then $\mathbb{P}\mathrm{Gor} (H)$ has at least two components. The first
is the Zariski
closure of the subscheme $\mathfrak C(H)$
of $
\mathbb{P}\mathrm{Gor}(H)$ parametrizing Artinian Gorenstein quotients
$A=R/I$ for which $I_2$ is $\mathrm{Pgl}(3)$-isomorphic to $ \langle
wx,wy,wz\rangle$. The second
component parametrizes quotients of the coordinate rings of certain
punctual schemes in $\mathbb
P^3$.
\end{uthm}
\section {Notation and basic results}
In this Section we give definitions and some basic
results that we will need. Recall that $R=K[w,x,y,z]$ is the
polynomial ring with the standard grading over an algebraically closed
field, and that we consider only graded ideals $I$.
\varphiar
Let $V\subset R_v$ be a vector subspace. For
$u\le v$ we let
$V:R_u=\lambdangle f\in R_{v-u}\mid R_u\cdot f\subset V\rangle$.
We state as a lemma a result of Macaulay \cite[Section
60ff]{Mac1} that we will use frequently.
\begin{lemma}\label{MacD}(F.H.S. Macaulay). Let $\mathrm{char}\ K=0$ or $\mathrm{char}\ K>j$.
There is a one-to-one
correspondence between graded
Artinian Gorenstein algebra quotients
$A=R/I$ of
$R$ having socle degree
$j$, on the one hand, and on the other hand, elements
$F\in \mathcal R_j$ modulo $K^{\ast}$-action, where $\mathcal
R=K[W,X,Y,Z]$ is the dual polynomial
ring.
The correspondence is given by
\begin{align}\label{eMac1}
I&=\mathrm{Ann}\ F = \{h\in R\mid h\circ F=h(\partial /\partial W,\ldots ,\partial
/\partial
Z)\circ F=0\};\notag\\
F&=(I_j)^{\perp}\in \mathcal R_j \mod K^{\ast} .
\end{align}
Here $F$ is also the generator of the $R$-submodule $I^\perp\subset
\mathcal R, I^\perp=\{ G\in
\mathcal R\mid h\circ G=0 \text { for all } h\in I\}$. The Hilbert function
$H(R/I)$ satisfies
\begin{equation}
H(R/I)_i=\dim_K (R\circ F)_i=H(R/I)_{j-i}.
\end{equation}
Furthermore, for $i\le j$, $I_i$ is determined by $I_j$ or by $F$ as follows:
\begin{equation}\label{ancclosure}
I_i=I_j:R_{j-i}=\{h\in R_i\mid h\cdot R_{j-i}\subset I_j\}=\{ h\in R_i\mid
h\circ(R_{j-i}\circ F)=0\}.
\end{equation}
When $\mathrm{char}\ K=p\le j$ the statements are analogous, but we must replace
$K[W,X,Y,Z]$ by the ring of
divided powers
$\mathcal D$, and the action of $R$ on $\mathcal D$ by the contraction action
(see below).
\end{lemma}
\begin{proof} For a modern proof see \cite[Lemmas 2.15, 2.17]{IK}. For a
discussion of the use of
the divided power ring when $\mathrm{char}\ K = p$ see also \cite[Appendix A]{IK}.
\end{proof}
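To illustrate the correspondence of Lemma \ref{MacD} on a small example (say in
characteristic zero): for $F=XYZ\in \mathcal R_3$ one finds
\begin{equation*}
I=\mathrm{Ann}\ F=(w,x^2,y^2,z^2),\qquad H(R/I)=(1,3,3,1),
\end{equation*}
since $R\circ F$ has bases $\{F\}$, $\{YZ,XZ,XY\}$, $\{Z,Y,X\}$, $\{1\}$ in
degrees $3,2,1,0$ respectively.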
\mathfrak{b}egin{corollary}\lambdabel{include} Let $A=R/I$ be a graded Artinian
Gorenstein algebra
of socle degree $j$. Let $J=I_\mathfrak{Z}$ be a saturated ideal defining a
scheme $\mathfrak{Z}\subset \mathbb P^{3}$,
such that for some $i, 2\le i\le j$, $\mathfrak{Z}=\mathrm{Proj}\ (R/(J_i))$, with
$J_i\subset I_i$. Then for $0\le u\le i$ we have $J_u\subset I_u$. If also
$J_i=I_i$, then for such $u$,
$J_u=I_u$.
\end{corollary}
\begin{proof} Let $0\le u\le i$. Since
$J$ is its own saturation, we have $J_u=J_k:R_{k-u}$ for large $k$, so we have
\begin{equation*}
J_u=J_k:R_{k-u}=\{ J_k:R_{k-i}\} : R_{i-u}=J_i:R_{i-u}.
\end{equation*}
Now \eqref{ancclosure} implies that for $0\le u\le i$
\begin{equation*}
I_u=I_j:R_{j-u}=\{ I_j:R_{j-i}\} :R_{i-u}=I_i:R_{i-u}.
\end{equation*}
This completes the proof of the relation between $I_\mathfrak{Z}$ and $I$.
\end{proof}\par
Note that \cite[Example 3.8]{I}, due to D. Berman, shows that one cannot
conclude that $J\subset I$
in
Corollary~\ref{include}. For let $I=(x^3,y^3,z^3)$, and
let $J$ be the saturated ideal
$J=(x^2y^3,y^2z^3,x^3z^2,x^2y^2z^2)$, a local complete intersection of
degree 18 defining a
punctual scheme concentrated at the points $(1,0,0),(0,1,0)$ and $(0,0,1)$.
Then we have
$J_5\subset I_5$, but $x^2y^2z^2\in J$ and $x^2y^2z^2\notin I$, so $J\not\subseteq I$.
\varphiar
We suppose $R=K[w,x,y,z]$. Let ${ \mathcal D}=K_{DP}[W,X,Y,Z]$ denote the
divided power
algebra associated to $R$: the basis of $\mathcal D_j$ is $\{ W^{[j_1]}\cdot
X^{[j_2]}\cdot Y^{[j_3]}\cdot Z^{[j_4]}, \sum j_i=j\}$. We let $x^i\circ
X^{[j]}
= X^{[j-i]}$ when
$j\ge i$ and zero otherwise; this action extends in a natural way to the
contraction action of
$R$ on
$\mathcal D$. Multiplication in $\mathcal D$ is determined by $X^{[u]}\cdot
X^{[v]}=\binom{u+v}{v}X^{[u+v]}$. By $(\alpha X+ Y)^{[u]}, \alpha\in K$ we
mean $\sum_{0\le
i\le u}\alpha^iX^{[i]}\cdot Y^{[u-i]}$: this is $(\alpha X+Y)^u/u!$ when
the latter makes sense.
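As a brief illustration of these conventions: $x^2\circ X^{[5]}=X^{[3]}$,
$X^{[2]}\cdot X^{[3]}=\binom{5}{3}X^{[5]}=10\,X^{[5]}$, and
$(\alpha X+Y)^{[2]}=Y^{[2]}+\alpha\, X\cdot Y+\alpha^2X^{[2]}$.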
When
$\mathrm{char}\ K=0$, or $\mathrm{char}\ K>j$ we may replace
$\mathcal D$ by the polynomial ring
$\mathcal R=K[W,X,Y,Z]$ with $R$ acting on $\mathcal R$ as partial
differential operators
\eqref{eMac1}, and we replace all $X^{[u]}$ by $X^u$, and $(\alpha
X+Y)^{[u]}$ by $(\alpha
X+Y)^u$. \par The inverse system
$I^\perp\subset
\mathcal D$ of the ideal
$I\subset R$ satisfies
\mathfrak{b}egin{equation}
I^{\varphierp} = \{G\in K_{DP}[W,X,Y,Z], h\circ G = 0 \text { for all } h\in I\},
\end{equation}
and it is an $R$-submodule of $\mathcal D$ isomorphic to the dual module
of $A=R/I$.
When $A=R/I$ is graded Gorenstein of socle degree $j$, then by Macaulay's
Lemma \ref{MacD} the
inverse system is principal, generated by $F\in \mathcal D_j$: we call $F$
the \emph{dual
generator} of $A$, or of $I$. Thus, we may parametrize the algebra $A$ by
the class of
$F$ modulo nonzero $K^\ast$-multiple, an element of the projective space
$\mathbb
P^{N-1}, N={\binom{j+3}{j}}$. Given a Gorenstein sequence
$H$ of socle degree $j$ (so $H_j\not= 0, H_{j+1}=0$) we let $\mathbb{P}\mathrm{Gor}(H)\subset
\mathbb P^{N-1}$ denote the scheme parametrizing the
family of all GA quotients
$A=R/I$ having Hilbert function $H$. Here, we use the scheme structure given by
the catalecticants, and described in
\cite[Definition 1.10]{IK}. A
``geometric point'' $p_A$ of $\mathbb{P}\mathrm{Gor}(H)$
parametrizes a Artinian Gorenstein quotient $A=R/I$ of $R$ having Hilbert
function $H$.
We now state Macaulay's theorem characterizing Hilbert functions or
$O$-sequences, and the
version of the Persistence
and Hilbert Scheme theorems of G. Gotzmann that we will use \cite{Gotz}.
Let $d$ be a positive integer. The $d$-th
Macaulay coefficients of a positive
integer
$c$ are the unique decreasing sequence of
non-negative integers $k(d),\ldots ,k(1)$ satisfying
\begin{equation*}
c=\binom{k(d)}{d}+\binom{k(d-1)}{d-1}+\cdots
+\binom{k(1)}{1};
\end{equation*}
We denote by $c^{(d)}$ the integer
\begin{equation}\label{eMacExp}
c^{(d)}=\binom{k(d)+1}{d+1}+\binom{k(d-1)+1}{d}+\cdots +\binom{k(1)+1}{2}.
\end{equation}
Then,
the Hilbert polynomial
$p_{c,d}(t)$ for quotients $B$ of the polynomial ring $R$, such that $B$ is
regular in degree
$d$ and $H(B)_d=c$
satisfies
\begin{equation}\label{eMacExp2}
p_{c,d}(t)=\binom{k(d)+t-d}{t}+\binom{k(d-1)+t-d}{t-1}+\cdots
+\binom{k(1)+t-d}{t-d}.
\end{equation}
The length of the
$d$-th Macaulay expansion of $c$, or of the Macaulay expansion of the
polynomial $p_{c,d}$, is the number of
$\{ k(i)\mid k(i)\mathfrak{g}e i\} $, equivalently, the number of nonzero binomial
coefficients in the Macaulay
expansion, and this is well known to be the Gotzmann regularity degree of
$p_{c,d}$ (\cite[Theorem
4.3.2]{BH}).
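As a worked illustration of these notions (used again in Corollary
\ref{nonempty1} below): for $c=7$ and $d=2$ we have $7=\binom{4}{2}+\binom{1}{1}$,
so
\begin{equation*}
7^{(2)}=\binom{5}{3}+\binom{2}{2}=11, \qquad
p_{7,2}(t)=\binom{t+2}{t}+\binom{t-1}{t-1}=\binom{t+2}{2}+1,
\end{equation*}
and the Macaulay expansion has length two, so the Gotzmann regularity degree of
$p_{7,2}$ is $2$.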
\begin{theorem}\label{MacGo} Suppose that $1\le c\le \dim_KR_d$, and $I$ is
a graded ideal of
$R=K[x_1,\ldots ,x_r]$.
\begin{enumerate}[i.]
\item\label{Macgrowth}\cite{Mac2} If $H(R/I)_d=c$, then $H(R/I)_{d+1}\le
c^{(d)}$ (Macaulay's inequality).
\item\label{Gotzper}\cite{Gotz} If $H(R/I)_d=c$ and $H(R/I)_{d+1}=c^{(d)}$,
then
$\mathrm{Proj}\ (R/(I_d))$ is a projective scheme in $\mathbb P^{r-1}$ of Hilbert
polynomial $p_{c,d}(t).$
In particular
$H(R/(I_d))_{k}=p_{c,d}(k)$ for $k\mathfrak{g}e d$, and $H'=H(R/(I_d))$ has extremal
growth ($h'_{k+1}={{h'}_k}^{(k)}$) in each degree
$k$ to
$k+1, k\mathfrak{g}e d$.
\end{enumerate}
\end{theorem}
\mathfrak{b}egin{proof} For a proof of Theorem \ref{MacGo}\eqref{Macgrowth} see
\cite[Theorem 4.2.10]{BH}.
For a proof of the persistence (second) part of Theorem \ref{MacGo}
\eqref{Gotzper} see
\cite[Theorem 4.3.3]{BH}; for the Gotzmann Hilbert scheme theorem see
\cite[Satz1]{Gotz}, or the
discussion of
\cite[Theorem C.29]{IKl}.
\end{proof}
\begin{definition}\label{Macexp}
A sequence of nonnegative integers $H=(1,h_1,\ldots ,h_d,\ldots )$ is said
to be an
$O$-{\emph{sequence}}, or to be \emph{admissible}, if it
satisfies Macaulay's inequality of Theorem \ref{MacGo}\eqref{Macgrowth}
for each integer $d\ge 1$.
\end{definition}
Recall that the regularity degree $\sigma(p)$ of a Hilbert polynomial
$p=p(t)$ is the smallest degree for
which all projective schemes $\mathfrak{Z}$ of Hilbert polynomial $p$ are
Castelnuovo-Mumford regular in degree
less or equal
$\sigma (p)$. G. Gotzmann and D. Bayer showed that this bound is the
length $\sigma(p)$ of the
Macaulay expression for $p$ \cite{Gotz,Ba}: for an exposition
and proof see \cite[Theorem 4.3.2]{BH}; also see
\cite[Definition C.12, Proposition C.24]{IKl}, which includes some
historical remarks. As an easy consequence we
have
\begin{corollary}\label{regdeg} The regularity degree of the polynomial
$p(t)=at+1-\binom{a-1}{2}+b$ where $a >0,b\ge 0$ satisfies $
\sigma (p)=a+b$. These
Hilbert polynomials cannot occur with
$b<0$. In particular, the regularity degree of the polynomial
$p(t)=3t+b, b\ge 0$ is $3+b$,
of $p(t)=2t+1+b, b\ge 0$ is $b+2$, and of $p(t)=t+1+b,b\ge 0$ is $b+1$. The
regularity of
the constant polynomial $p(t)=b$ is $b$.
\end{corollary}
\begin{proof} One has, for $p(t)=at+1-{a-1\choose 2}+b$, the following sum,
equivalent to a
Macaulay expansion as in \eqref{eMacExp2} of length $a+b$,
\begin{equation*}
\begin{split}
p(t)={t+1\choose
1}&+{t+1-1\choose 1}+{t+1-2\choose 1}+\dots + {t+1-(a-1)\choose
1}+\\
&{t-a\choose 0}+{t-(a+1)\choose 0}+\dots + {t-(a+b-1)\choose 0}.
\end{split}
\end{equation*}
\end{proof}\par
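For instance, for the Hilbert polynomial $p(t)=2t+2$ (so $a=2$, $b=1$) the
corollary gives $\sigma(p)=3$, and for $p(t)=2t+1$ (so $b=0$) it gives
$\sigma(p)=2$; both values are used in the proof of Lemma \ref{Hrest} below.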
\begin{corollary}\label{oseq}
Let $H$ be a Gorenstein sequence of socle degree $j$, and suppose that for
some $d< j$,
$h_{d+1}=(h_{d})^{(d)}$ is extremal in the sense of Theorem
\ref{MacGo}\eqref{Macgrowth}. Then
$\Delta H_{\le d+1}$ is an $O$-sequence.
\end{corollary}
\mathfrak{b}egin{proof} Theorem \ref{MacGo}\eqref{Gotzper} and Corollary
\ref{include} show the existence of a scheme
$\mathfrak{Z}\subset
\mathbb P^{r-1}$ satisfying $h_u=H(R/I_\mathfrak{Z})_u$ for $u\le d+1$. Since
$I_\mathfrak{Z}$ is saturated and thus $R/I_\mathfrak{Z}$ has depth at least one, there is a
homogeneous degree one
nonzero divisor, implying that the first difference $\Delta (H(R/I_\mathfrak{Z}))$ is
an $O$-sequence.
\end{proof}\par
\mathfrak{b}egin{remark}
The assertion of Corollary \ref{oseq} as well as those of Corollary
\ref{include} are valid more
generally for graded Artinian algebras having socle only in degree $j$
(level algebras), or those
having socle only in degrees greater or equal $j$.
\end{remark}
As an example of the application of Theorem \ref{MacGo}, we determine below
the Gorenstein
sequences
\linebreak $H=(1,4,7,h,7,4,1)$ that
occur, having socle degree 6.
\begin{corollary}\label{nonempty1} The sequence $H=(1,4,7,h,7,4,1)$ is a
Gorenstein sequence
if and only if
$7\le h\le 11$.
\end{corollary}
\begin{proof} From Macaulay's extremality Theorem
\ref{MacGo}\eqref{Macgrowth} we have
$H(3)=h\le H(2)^{(2)}=7^{(2)}=11,$ and
$H(4)=7\le h^{(3)}$, which implies $h\ge 6.$
Now $H=(1,4,7,6,7,4,1)$
implies that
the growth of $H_3=6$ to $H_4=7$ is maximum, since
$6=\binom{4}{3}+\binom{2}{2}+\binom{1}{1}$, while
$7=6^{(3)}=\binom{5}{4}+\binom{3}{3}+\binom{2}{2}$. Corollary~\ref{oseq}
shows this is impossible, since $\Delta H_{\le 4}=(1,3,3,-1,1)$ is not an
$O$-sequence. Thus $h\ge 7$.
\end{proof}\par
For a subscheme $\mathfrak{Z}\subset \mathbb P^3$ we will denote by
$H_\mathfrak{Z}=H(R/I_\mathfrak{Z})$ its Hilbert function, sometimes
called its postulation; here $I_\mathfrak{Z}\subset R$ is the saturated ideal
defining $\mathfrak{Z}$. Inequalities among Hilbert
functions are termwise. The following result is well-known and easy to
show, since a degree-$d$ punctual scheme can
impose at most $d$ conditions in a given degree.
\mathfrak{b}egin{lemma}\lambdabel{upbd} Let $\mathfrak{Z}=W\cup \mathfrak{Z}_1\subset \mathbb P^3$, be a
subscheme of $\mathbb P^3$, where $W$
is a degree
$d$ punctual scheme. Then for all $i$, $(H_\mathfrak{Z})_i \le (H_{\mathfrak{Z}_1})_i+d$.
\end{lemma}
\mathfrak{b}egin{proof} We have (the first inequality is from Maroscia's result
\cite{Mar}, see
\cite[Theorem 5.1A]{IK})
\mathfrak{b}egin{equation}
\mathfrak{b}egin{split}
d\mathfrak{g}e (H_W)_i=&\dim R_i-\dim (I_W)_i\mathfrak{g}e \dim (I_{\mathfrak{Z}_1})_i-\dim (I_W\cap
I_{\mathfrak{Z}_1})_i\\=&H(R/(I_W\cap I_{\mathfrak{Z}_1}))_i-H(R/I_{\mathfrak{Z}_1})_i\mathfrak{g}e
(H_\mathfrak{Z})_i-(H_{\mathfrak{Z}_1})_i.
\end{split}
\end{equation}
\end{proof}
\section{Nets of quadrics in $\mathbb P^3$, and Gorenstein
ideals}\lambdabel{netsec}
In Section \ref{Netsquad} we give preparatory material on nets of quadrics,
and on the Hilbert schemes
of low degree curves in $\mathbb P^3$. In Section \ref{struc} we prove a
structure theorem for Artinian
Gorenstein algebras
$A=R/I$ of Hilbert function
$H(A)=(1,4,7,\ldots )$ for which the net of quadrics $I_2$ has a common factor
and is isomorphic
after a change of variables to $\lambdangle wx,wy,wz\rangle$ (Theorem
\ref{V,W}). We then determine the dimension of the tangent space to $\mathbb{P}\mathrm{Gor}
(H)$ at a point
parametrizing such an ideal; we also show that when $H$ has socle degree 6,
the subfamily parametrizing such Gorenstein algebras is an irreducible
component of $\mathbb{P}\mathrm{Gor}(H)$
(Theorem \ref{sevcomp}), a result which we will later generalize to
arbitrary socle degree
(Theorem \ref{sevcompt}). In Section
\ref{w2sec} we determine the possible Hilbert functions $H(A),A=R/I$ when
$I_2=\lambdangle w^2,wx,wy
\rangle$ (Theorem
\ref{hfw2}).
\subsection{Nets of quadrics}\label{Netsquad}
Three homogeneous quadratic polynomials $f,g,h$ in $R=K[w,x,y,z]$ form a
family
$\alpha_1f+\alpha_2g+\alpha_3 h,
\alpha_i\in K$, comprising a net of quadrics in $\mathbb P^3$. Here we will
use the term net also
for the vector space span $V=\langle f,g,h\rangle$. We divide these
families according to the
number of linear relations among the three quadrics. We now show that
they can have at most 3 linear
relations. Let
$(I_2) = (f,g,h)$ be the ideal generated by a net of quadrics $I_2=\langle
f,g,h\rangle$. Then
$H(R/(I_2)) = ( 1,4,7, h,
\cdots)$, where
$h\leq 11=7^{(2)}$ by Macaulay's growth condition. When there are no
linear relations we have
$H(R/(I_2))_3=20-12=8$, and each independent linear relation increases
$H(R/(I_2))_3$ by one; so the number of linear relations on the net of
quadrics
$\langle f,g,h\rangle$ is no greater than $ 11-8 =3$, as claimed. \par
Nets of quadrics in $\mathbb P^3$ have been extensively
studied geometrically, earlier by
W.~L.~Edge and others, more recently by C.T.C. Wall and others for their
connections with mapping
germs, and instantons. I. Vainsencher and also G. Ellingsrud, R. Piene, and
S.A.
Str\o{m}me have shown that the Hilbert scheme of twisted cubics in $\mathbb
P^3$ is a blow-up of the
family $\mathfrak {F}_{\mathrm{RNC}}$ of nets of quadrics arising as minors
of a
$2\times 3$ matrix (Definition \ref{defF}) along the
sublocus of those nets having a common factor. Nets of quadrics are
parametrized by the
Grassmanian $\mathbb{G}=\mathrm{Grass} (3,R_2)\cong \mathrm{Grass}(3,10)$, of dimension 21. It
is easy to see that up to
isomorphism under the natural $\mathrm{Pgl}(3)$ action, the vector spaces
$V=\lambdangle f,g,h\rangle\subset R_2$ have a 6 dimensional family of orbits,
as $\dim \mathrm{Grass}(3,10)-\dim
\mathrm{Pgl}(3)=21-15=6$, and the stabilizer of a general enough net is
finite. In this section, we determine the irreducible components of the
subfamily $\mathfrak{F}$ of
nets having at least one linear relation (Lemma \ref{netsq0}), and also the
possible graded Betti
numbers for the algebras $R/(V)$, for nets $V\in \mathfrak{F}$ (Lemma
\ref{netsq}).
\mathfrak{b}egin{definition}\lambdabel{defF} We denote by $\mathfrak{F}\subset\mathbb{G}=
\mathrm{Grass}(3,R_2)$ the subfamily of nets of
quadrics, vector spaces
$V=\lambdangle f,g,h\rangle\subset R_2$, for which $f,g,h$ have at least one
linear relation
\begin{equation}\label{linrel}
\alpha_1f+\alpha_2g+\alpha_3h=0,
\quad \alpha_i\in R_1=\langle w,x,y,z\rangle,\ \text{not all zero}.
\end{equation} \par
We denote by $\mathfrak{F}_i\subset\mathbb{G}=\mathrm{Grass}(3,R_2)$ the subfamily of
$\mathfrak{F}$ consisting of
those nets that have exactly $i$ linear relations, $i=1,2,3$. We denote
by
$\mathfrak{F}_{\mathrm{RNC}}\subset
\mathfrak{F}_2$ the subset of nets defining twisted cubic curves, and by
$\mathfrak{F}_{sp}$ the
subset of nets $\mathrm{Pgl}(3)$ isomorphic to $\lambdangle w^2,wx,wy\rangle$.
\end{definition}
\begin{lemma}\label{F1} The family $\mathfrak{F}_1$ comprises those nets
that can be written
$V=\langle \ell\cdot U,h\rangle$, where $\ell\in R_1$ is a linear form,
$U\subset R_1$ is a two
dimensional subspace of linear forms, and $h$ is not divisible by either
$\ell$ or by any element
of $U$.\par
Up to isomorphism
$V\in
\mathfrak{F}_1$ may be
written either
$V=\langle xw,yw,h\rangle$ for some quadric
$h$ divisible neither by $w$ nor by any element of $\langle x,y\rangle$,
or $V=\langle
w^2,wx,h\rangle$ with
$h$ divisible by no element of $\langle w,x\rangle$.
\end{lemma}
\begin{proof} First consider nets $V=\langle f,g,h\rangle $ having no
two dimensional subspace with a common factor: we show that $V$ cannot be
in $\mathfrak {F}_1$. When the coefficients of a relation as in
\eqref{linrel} form an $m$-sequence, a simple argument given in the proof
of Lemma \ref{netsq0} shows that $V\in
\mathfrak F_2$, and is determinantal (see equation \eqref{rnc}).\par
Now assume that $V$ has a relation as in
\eqref{linrel} such that $\dim_K\langle
\alpha_1,\alpha_2,\alpha_3\rangle =2$; after a change of basis in $R$ we
may suppose that
$xf+yg+(x+y)h=0$, where $h$ may be zero.
Replacing $f$ by $f+h$, and $g$ by $g+h$, we obtain
$xf=-yg$. Thus $V$ may be written $V=\langle U\ell,h\rangle$ with
$\ell=f/y$ and
$U=\langle x,y\rangle$, and, evidently, if $V\in \mathfrak {F}_1$ then
$h$ is not divisible by $\ell$ nor by any element of $U$. We have shown
the first claim of the Lemma. The second follows.
\end{proof}\par
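For instance, $V=\langle wx,wy,z^2\rangle$ lies in $\mathfrak{F}_1$: its unique
linear relation is $y(wx)-x(wy)=0$, and $z^2$ is divisible neither by $w$ nor by
any element of $\langle x,y\rangle$. This is the model used for the graded Betti
numbers in Lemma \ref{netsq}\eqref{netsqii} below.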
As we shall see
below,
$\mathfrak {F}_2$ has
$\mathfrak{F}_{\mathrm{RNC}}$ as open dense subset.
Evidently, the family ${\mathfrak{F}}_3$ of nets $V$ having a common
factor, contains as open
dense subset the ${\mathrm{Pgl}}(3)$ orbit of
$V=\lambdangle wx,wy,wz\rangle$; the family also contains $\mathfrak
{F}_{sp}$, the orbit of $\lambdangle
w^2,wx,wy\rangle$. \varphiar
The dimension calculations of the following lemmas are elementary;
recall that $\dim\mathbb{G}=21$. The results about closures also involve standard
methods but are more subtle: for example to identify
$\mathfrak{F}_{sp}$ with $\overline{\mathfrak
{F}_2}\cap\mathfrak{F}_3$ we rely on previous work on the
closure of the family of rational normal curves, such as \cite{No,PS,Va,Lee}.
\begin{lemma}\label{netsq0}{\sc Components of $\mathfrak{F}$}: The
subfamily
$\mathfrak {F}\subset\mathbb{G}=\mathrm{Grass}(3,R_2)$
parametrizing quadrics having at least one linear relation, has two
irreducible components,
$\overline{\mathfrak {F}_1}$ and $\overline{\mathfrak
{F}_2}=\overline{\mathfrak{F}_{RNC}}$, of codimensions 7 and 9,
respectively in
$G$. They satisfy
\mathfrak{b}egin{enumerate}[i.]
\item\label{netsq0i} The intersection
$\overline{\mathfrak {F}_1}\cap
\mathfrak {F}_2$ has an open dense subset parametrizing nets
isomorphic to $\langle
wx,wy,xz\rangle$; this intersection has codimension 11 in
$G$.
\item\label{netsq0ii} We have
$\overline{\mathfrak{F}_1}-\mathfrak{F}_1=(\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2)\cup
\mathfrak{F}_3$. Each element of $\mathfrak{F}_2$ has a basis consisting of
minors of a $2\times 3$ matrix of linear forms.
\item\label{netsq0iii} The
locus
$\mathfrak{F}_3\subset
\overline{\mathfrak{F}_1}$ has codimension 15 in $G$;
$\mathfrak{F}_3-\mathfrak{F}_{sp}$ consists
of nets isomorphic to $\langle wx, wy, wz\rangle$. The locus
$\mathfrak{F}_{sp}$ equals $\overline{\mathfrak{F}_2}\cap\mathfrak{F}_3$, and is a
subfamily of
codimension 16 in $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first calculate $\dim \mathfrak{F}_1$. By Lemma \ref{F1}, $V\in
\mathfrak{F}_1$ may
be written as $\langle\ell\cdot U,h\rangle$, where $\ell\in R_1$,
$U\subset R_1$ is a two dimensional subspace, and $h$ is not divisible
by $\ell$ nor by any element of $U$. Since there is a single
linear relation, $V$ determines both
$\ell$ and $U$ uniquely. Thus,
there is a surjective morphism
\begin{equation*}
\pi_1: \mathfrak{F}_1\to \mathbb P^3\times \mathrm{Grass}(2,R_1):
\pi_1(V)=(\ell,U).
\end{equation*}
The fibre of $\pi_1$ over the pair $(\ell, U )$ corresponds to the
choice of $h$; given $V$, $h$ is
unique up to constant multiple, modulo $\ell\cdot U$. Thus, the
fibre of $\pi_1$ is
parametrized by an open dense subset of the projective space $\mathbb
P(R_2/\langle \ell\cdot
U\rangle )$, of dimension 7. Thus,
$
\mathfrak{F}_1$ has dimension 14, and codimension 7 in $G$.\par
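In summary, the dimension count here is
\begin{equation*}
\dim \mathfrak{F}_1=\dim \mathbb P^3+\dim \mathrm{Grass}(2,R_1)+\dim \mathbb
P(R_2/\langle \ell\cdot U\rangle )=3+4+7=14.
\end{equation*}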
We next show that $\mathfrak {F}_2$ contains $\mathfrak{F}_{\mathrm{RNC}}$ as
dense open subset. When there is a linear relation for $V$ as in
\eqref{linrel} whose coefficients
$\alpha_i$ are a length 3 regular sequence we may suppose after a
coordinate change that
$xf+yg+zh=0$; letting $f=uz+f_1, g=vz+g_1$, with $f_1,g_1$ relatively prime
to $z$, we obtain $h=-(ux+vy)$,
and
$xf_1=-yg_1$, whence there is a linear form $\beta\in R_1$ with
$f=uz+y\beta, g=vz-x\beta$, and $(f,g,h)$ is
the ideal of $2\times 2$ minors of
\begin{equation}\label{rnc}
\begin{pmatrix}
u&v&\beta\\
-y&x&z
\end{pmatrix}.
\end{equation}
When also $(f,g,h)$ has height two,
then
$V$ is an element of
$\mathfrak{F}_2$ determining a twisted cubic in $\mathbb P^3$; for a dense
open subset of
such elements of $\mathfrak{F}_2$ one may up to isomorphism choose in
\eqref{rnc} the triple
$(u,v,\beta)=(x,z,w)$. Otherwise, if $(f,g,h)$ is not Cohen-Macaulay of height two,
$V$ has a common linear factor, and it is well known that then $V\in
\mathfrak{F}_{sp}=
\overline{\mathfrak{F}_2}\cap\mathfrak{F}_3$ \cite{Lee,PS,Va}.\par
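For instance, with the choice $(u,v,\beta)=(x,z,w)$ the matrix \eqref{rnc}
becomes $\left(\begin{smallmatrix} x&z&w\\ -y&x&z\end{smallmatrix}\right)$,
whose $2\times 2$ minors span the net $\langle x^2+yz,\, xz+wy,\,
z^2-wx\rangle$.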
We now consider those nets $V\in \mathfrak {F}_2$ for which there is
no linear relation as in \eqref{linrel} whose coefficients form a
length three $m$-sequence. By the proof of Lemma \ref{F1} such a net has
the form $V=\langle U\cdot w, h\rangle$, with $U\subset R_1$, and it
thus lies in the closure of $\mathfrak {F}_1$. It is easy to see that the
most general element of
$\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2$ is a net isomorphic to $\langle wx,wy,xz\rangle$: for
when
$V=\langle wx,wy,h\rangle$
has a second linear relation, either $w$ divides $h$ and $V\in
\mathfrak{F}_3$, or some $ax+by$
divides
$h$, and after a change in basis for $R_1$, $V\cong \langle
wx,wy,xz\rangle$. A similar
discussion for $\langle w^2,wx,h\rangle$ completes the proof that any
element of
$\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2$ is in the closure of the orbit of $V=\langle
wx,wy,xz\rangle$, which is also the
determinantal ideal of $\begin{pmatrix}
x+y&y&0\\
z&z&w
\end{pmatrix}$. This shows
also that
$\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2\subset \overline{\mathfrak{F}_{\mathrm{RNC}}}$, and
completes the proof that $\mathfrak {F}_2$ contains
$\mathfrak{F}_{\mathrm{RNC}}$ as dense open subset.\varphiar We recall that
$\dim
\mathfrak{F}_{\mathrm{RNC}} =12$. A twisted cubic -- a rational normal
curve of degree three -- is determined by the choice of four
degree three forms in the polynomial ring $K[x,y]$, up to common
$K^\ast$-multiple, mod the action
of $\mathrm{{\mathrm{Pgl}}} (1)$, yielding dimension $4\cdot 4-4=12$
\cite{PS}.\varphiar We have that
$\overline{\mathfrak{F}_1}$ and
$\overline{\mathfrak{F}_2}$ define two distinct
irreducible components of $\mathfrak{F}$, since the subfamily
$\mathfrak{F}_2$ parametrizing nets for which there are two linear
relations, cannot specialize
to any net
$V=\lambdangle f,g,h\rangle$ for which $f,g,h$ have a single linear relation;
and $\mathfrak{F}_1$, parametrizing
nets $V$ each containing a subspace of the form $\ell \cdot U$, cannot specialize
to a vector space $V$ for which
the ideal
$(V)$ is the prime ideal of a twisted cubic. This completes the proof of
the initial claims of the lemma.
\varphiar We now complete the proof of \eqref{netsq0i}, by determining the
dimension of
$\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2$, which is by the above argument equal to the dimension of
the
$\mathrm{{\mathrm{Pgl}}(3)}$- orbit $\mathcal B$ of
$\lambdangle wx,wy,xz\rangle$. For $W=\lambdangle w'x',w'y',x'z'\rangle\in \mathcal
B$, the unordered pair
of linear forms
$(w',x')$, each mod $K^\ast$-multiple is uniquely determined by $W$ (as
each divides a two
dimensional subspace of $W$): thus there is a morphism $\varphii :\mathcal B\to
\mathrm{Sym}^2(\mathbb P^3)$, from
$\mathcal B$ to the symmetric product, whose image is the non-diagonal
pairs. Spaces $W$ in the
fibre of $\varphii$ over
$(w',x')$ are determined by the choice of the two 2-dimensional subspaces,
the first $\lambdangle
x',y'\rangle$ containing $x'$, the second $\lambdangle w',z'\rangle$ containing
$w'$. Thus, a space $W$
in the fibre is determined by the choice of
$y'\in R_1/\lambdangle x'\rangle$ and
$z'\in R_1/\lambdangle w'\rangle$, each up to
$K^\ast$-multiple, and these choices are each made in an open dense subset
of $\mathbb P^2$ (as
$z'$ must not equal $x'\mod w'$ for $W\in \mathcal B$). Thus, the fibre
$\varphii^{-1}(w',x')\subset\mathcal B$ is isomorphic to an open dense subset of
$\mathbb P^2\times
\mathbb P^2$. It follows that $\mathcal B$ and $\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2$ have dimension 10, and codimension
11 in
$G$.\varphiar We now show the claim in \eqref{netsq0ii} that
$\overline{\mathfrak{F}_1}-\mathfrak{F}_1=(\overline{\mathfrak{F}_1}\cap
\mathfrak{F}_2)\cup
\mathfrak{F}_3$. Suppose that $V\in
\overline{\mathfrak{F}_1}-\mathfrak{F}_1$; then evidently there is a
two-dimensional subspace $V_1\subset V$ having a common factor: $V_1=\ell\cdot U$.
Letting
$V=\langle V_1,h\rangle$, then $V\in \mathfrak{F}_2$ implies that
$h$ must have a common divisor with an element of $V_1$. Thus, up to
$\mathrm{Pgl}(3)$ isomorphism we have
$V=\langle wx,wy,xz\rangle$ or $V=\langle w^2,wx,xz\rangle$, both
in $\mathfrak{F}_2$ (we may ignore the case where $w$ is a common factor of $V$, since then $V\in
\mathfrak {F}_3$). Each of these spaces has a basis consisting of the minors of a $2\times 3$
matrix of linear forms. This with \eqref{rnc} above completes the proof of
\eqref{netsq0ii}.\varphiar The family
$\mathfrak{F}_3$ has as open dense subset the orbit $\mathcal B'$ of
$V=\langle wx,wy,wz\rangle $.
An element $W'=w'V', V'\subset R_1$ of $\mathcal B'$ is determined by a
choice of $w'\in R_1$
and a codimension one vector space $V'\subset R_1$; thus $\mathcal B'$ is
an open subset of $\mathbb
P^3\times \mathbb P^3$, so has dimension six, codimension 15 in $G$. \par
The claim in \eqref{netsq0iii} that the locus
$\mathfrak{F_{sp}}=\overline{\mathfrak{F}_2}\cap
\mathfrak{F_3} $ follows from the well known
classification of the
specializations of rational normal curves \cite{PS,Lee}; the dimension
count for this locus is five, 3 for the
choice of $w$, and 2 for the choice of $\langle x,y\rangle\subset
R_1/\langle w\rangle$. This
completes the proof of Lemma \ref{netsq0}.
\end{proof}\varphiar
\begin{lemma}\label{netsq}{\sc Minimal resolutions for nets of quadrics in
$\mathfrak{F}$}. There are exactly
three possible sets of graded Betti numbers for the ideal generated by a
net of quadrics in
$\mathfrak{F}$ (those having at least one linear relation):\varphiar
\begin{enumerate}[i.]
\item\label{netsqii} Those
$V$ in the family $\mathfrak{F}_1$ have graded Betti numbers that of
$(wx,wy,z^2)$, with a single linear and
two quadratic relations, and Hilbert function
$H=H(R/(V))=(1,4,7,9,11,13,\ldots)$ where $H_i=2i+3$ for $i\mathfrak{g}e 2$. Such $V$
define a curve of
degree 2, genus -2. (See Lemma \ref{deg2}).
\item\lambdabel{netsqi}
For $V\in\mathfrak{F}_2$, the ideal $(V)$ is Cohen-Macaulay
of height two, the
Hilbert function
$H=H(R/(V))=(1,4,7,10,13,\ldots )$ where $H_i=3i+1$ for $i\mathfrak{g}e 0$, and $V$
has the standard
determinantal minimal resolution with two linear
relations.
\item\lambdabel{netsqiii} Those $V$ in the family $\mathfrak{F}_3$ have
graded Betti numbers that of $(wx,wy,wz)$.
\end{enumerate}
\end{lemma}
\begin{proof}
For \eqref{netsqii}, Lemma \ref{F1} implies that the ideal determined by
an element $V=(wx,wy,h)$ of $\mathfrak{F}_1$ is cut out from $R/(wx,wy)$ or
$R/(w^2,wx)$ by the nonzero-divisor $h$, hence the minimal resolution of $R/(V)$
is that of
$R/(wx,wy,z ^2)$.
For \eqref{netsqi} let $V\in \mathfrak{F}_2$. Then by Lemma
\ref{netsq0}\eqref{netsq0ii}, $V$ has a basis consisting of the minors of a
$2\times 3$ matrix of linear forms; an examination of cases shows that $V$ is
Cohen-Macaulay of height two, so is determinantal. Thus $V$ has the standard
determinantal minimal resolution.
The last part \eqref{netsqiii} follows immediately from Lemma
\ref{netsq0}\eqref{netsq0iii}, and a computation in {\sc Macaulay}.
\end{proof}\par
\mathfrak{b}egin{lemma}\lambdabel{deg2}\cite[Section 3.4-3.6]{Lee} The Hilbert scheme
$\mathrm{H}ilb ^{2,-2}(\mathbb P^3)$
parametrizing curves
$C\subset
\mathbb P^3$ of degree 2, genus -2 (Hilbert polynomial $2t+3$) has two
irreducible components. A general
point of the first parametrizes a scheme
consisting of two skew lines union a point off the line; this component has
dimension 11. A general point
of the second component parametrizes a planar conic union two points; this
component has dimension 14. \varphiar
Likewise, \cite[Theorem 3.5.1]{Lee} $\mathrm{H}ilb^{2,-1}(\mathbb P^3)$ (Hilbert
polynomial $2t+2$) has the
analogous components parametrizing
two skew lines, or a planar conic union a point. The scheme
$\mathrm{H}ilb^{2,0}(\mathbb P^3)$ (Hilbert polynomial $2t+1$)
has a single component, whose generic points parametrize plane conics.\varphiar
\end{lemma}
The following result mostly concerns certain ideals $I$ for which the growth
of the Hilbert function from degree 3 to degree 4, or from degree 4 to degree 5, is extremal
in the sense of F.H.S. Macaulay. We thank a referee for
the simple argument
for \eqref{Hrestii}. Note that nets $V$ with no linear relation need not
define complete
intersections, and the ideal $(V)$ need not be saturated: thus
\eqref{Hrestiii} below does not
follow from \eqref{Hrestii}.
\begin{lemma}\label{Hrest} Assume for \eqref{Hresti},\eqref{Hrestii} below
that $I$ is a saturated
ideal of
$R=K[w,x,y,z]$.
\begin{enumerate}[i.]
\item\label{Hresti} If
$H(R/I)=(1,4,7,10,13,16,\ldots )$, then $ I_{\le 3}$ defines a twisted cubic
(or a specialization not in
the closure of the plane cubics) or a
plane cubic union a point (possibly embedded). In the former case, $I_2$
lies in $\mathfrak{F}_2$, and generates
$I$; in the latter case
$I_2\in
\mathfrak{F}_3$.\par
\item\label{Hrestii} $H(R/I)$ cannot be any of $(1,4,7,8,10,\ldots
), (1,4,7,b,9,11,\ldots )$, or $ (1,4,7,9,12,\ldots )$.
\item\label{Hrestiii}
If $R/I$ is Artinian Gorenstein of socle degree at least 5, then $R/(I_2)
$ cannot have a
Hilbert function of the form $H(R/(I_2))= (1,4,7,8,10,\ldots
)$, $H(R/(I_2)) = (1,4,7,b,9,11,\ldots )$, or
$H(R/(I_2))=(1,4,7,9,12,\ldots )$.
\end{enumerate}
\end{lemma}
\mathfrak{b}egin{proof}Suppose that a saturated ideal $I$ has the Hilbert function
given in case \eqref{Hresti}. Then 13 to 16 is an extremal growth. So, by
the Gotzmann theorem
$I$ defines a scheme
$\mathfrak{Z}\subset
\mathbb P^3$, of Hilbert polynomial $3t+1$ so $\mathfrak{Z}$
is a degree three curve of genus zero.
The Piene-Schlessinger
Theorem characterizing the components of $\mathrm{H}ilb^{3,0}(\mathbb{P}^3)$
\cite{PS} implies that if
$\mathfrak{Z}$ is non-degenerate (not contained in a plane), then $\mathfrak{Z}$ is either a
twisted cubic or a specialization, so
$I_2$ is in
$\overline{\mathfrak{F}_2}$, or $\mathfrak{Z}$ is the union of a planar cubic and a
(possibly
embedded) spatial point, and then
$I_2$ is in $\mathfrak{F}_3$. If $\mathfrak{Z}$ is degenerate, then also $I_2\in
\mathfrak{F}_3$. This completes the proof of \eqref{Hresti}.\varphiar
The three sequences of \eqref{Hrestii} cannot occur for a saturated ideal
$I$: a saturated
ideal has depth at least one, so $A=R/I$ has a (linear) non-zero divisor,
and the first difference
$\Delta H(R/I)$ must be admissible. But $(1,3,3,1,2,\ldots),
(1,3,3,b-7,9-b,2,\ldots)$ and
$(1,3,3,2,3,\ldots)$ are not $O$-sequences: in each case either some entry is
negative or Macaulay's inequality of Theorem \ref{MacGo}\eqref{Macgrowth} fails. \par
In the first case of \eqref{Hrestiii} we have that $10=8^{(3)}$, so by
Theorem
\ref{MacGo}\eqref{Gotzper} $\mathfrak{Z}=\mathrm{Proj}\ (R/(I_3))$ is a scheme of Hilbert
polynomial $2t+2$ (degree
two and genus -1) and regularity degree no more than 3, the Gotzmann
regularity degree of $2t+2$.
By a classical degree inequality, such a scheme is either reducible, or
degenerate --- contained
in a hyperplane \cite[p. 173]{GH}. Furthermore, by Lemma \ref{deg2} the
Hilbert scheme
$\mathrm{H}ilb^{2,-1}(\mathbb P^3)$ of degree two genus -1 curves has two
irreducible components, one
whose generic point parametrizes two skew lines, the second, whose generic
point parametrizes a planar conic
union a point. For either component, the Hilbert function satisfies
$H(R/I_\mathfrak{Z})_2\le 6$, which by Corollary \ref{include} implies $H(R/I)_2\le
6$, contradicting the
assumption. A similar argument handles the second case of \eqref{Hrestiii}:
since $9^{(4)}=11$,
$H_{4,5}=(9,11)$ is maximal growth; by Theorem \ref{MacGo} \eqref{Gotzper}
the scheme
$\mathfrak{Z}=\mathrm{Proj}\ (R/(I_4))$ has Hilbert polynomial
$2t+1$, of Gotzmann regularity two implying
$H(R/I_\mathfrak{Z})_2=5$, and by Corollary \ref{include}, $H(R/I)_2\le 5$, a
contradiction.
For the last case it suffices by Corollary \ref{include} and the Gotzmann
Theorem to know that
any scheme of Hilbert polynomial $3t$ (degree three and genus one) is a planar
cubic or degenerate,
a result of the classification of curves \cite{PS, Lee}.
\end{proof}\varphiar
\subsection{Ideals with $I_2=\lambdangle wx,wy,wz\rangle$.}\lambdabel{struc}
Let $\mathfrak{V}$ denote the vector space $\lambdangle
wx,wy,wz\rangle$. In this section we assume $H=(1,4,7,\ldots ,1)$ and we
consider the subfamily
${\mathfrak C}(H)\subset \mathbb{P}\mathrm{Gor}(H)$ parametrizing those algebras $A=R/I$ of
Hilbert function $H$ for
which
$I_2$ is $\mathrm{Pgl} (3)$ isomorphic to
$\mathfrak V$. We first determine when $\mathfrak C(H)$ is nonempty and
give a structure theorem
for such $A$ (Theorem
\ref{V,W}). We then determine the minimal resolution of
$A$ (Theorem
\ref{ResI,J}). We also
determine the tangent
space to the family $\mathfrak{C}(H)$ (Theorem \ref{sevcomp}). To prove our
results we connect these Artinian algebras with height three Artinian
Gorenstein quotients $R'/J_I$
of
$R'=K[x,y,z]$, where $J_I=I\cap R'$, which are well understood
\cite{BE,D,Klj2,IK}.
\par We recall from
Lemma~\ref{MacD}ff. that, given an ideal $I$ of
$R$, we denote by $I^\perp$ its inverse system, the perpendicular
$R$-submodule to $I$ in the divided power ring $\mathcal
D=K_{DP}[W,X,Y,Z]$, where $R$
acts by contraction.
\begin{theorem}\label{V,W} Let $H=(1,4,7, \ldots )$ of socle degree $j\ge
4$ be a Gorenstein
sequence, and assume that
$I\in
\mathfrak C(H)$ satisfies $I_2=\mathfrak V=
\langle wx,wy,wz\rangle
$. Let $F\in
\mathcal D_j$ satisfy
$I=\mathrm{Ann}\ (F)$.
Let $R'=K[x,y,z]$.
Then,
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}[i.]
\item\label{V,Wi} The inverse system $(\mathfrak{V})^{\perp}$ of
the ideal
$(\mathfrak{V}),\mathfrak{V}=\langle wx,wy,wz\rangle \subset R$, satisfies
\begin{equation}\label{Vdual}{(\mathfrak{V})^{\perp}}_j=\langle
K_{DP}[X,Y,Z]_j,W^{[j]}\rangle.
\end{equation}
\item\label{V,Wii} $F \in K_{DP}[W,X,Y,Z]_j$ and satisfies
\begin{equation}\label{eqvspduali}
F=G+\alpha\cdot W^{[j]}, \quad G\in K_{DP}[X,Y,Z]_j, \alpha\in K,
\end{equation}
where $G\ne 0, \alpha\ne 0$.
Furthermore,
$I=(J_I,\mathfrak{V},f)$ where
$J_I=I\cap R'$ is the height three Gorenstein ideal $\mathrm{Ann}_{R'}(G)$
and $f=w^j-g$, $g\in K[x,y,z]_j, g\not= 0$.
The Hilbert function satisfies $H(R/I)_i=H(R'/J_I)_i+1$ for $1\le i \le j-1$, so we have
\begin{equation}\label{VIHilb}
H(R/I)=H(R'/J_I)+(0,1,1,\ldots ,1,0)=(1,4,\ldots ,4,1).
\end{equation}
The inverse system
$I^{\perp}$ satisfies $I^{\perp}_j=\langle F\rangle$, $I^{\perp}_{i}=0$ for
$i\ge j+1$, and
\begin{equation}\label{VIdual}
I^{\perp}_i=(R\circ F)_i=\langle (R'\circ G)_i,W^{[i]}\rangle \text{ for }
1\le i\le
j-1.
\end{equation}
\item\label{V,Wiiia} For the Gorenstein sequence $H=(1,4,7,\ldots )$, the family
$\mathfrak{C}(H)$ is nonempty if and only if $H'=H-(0,1,1,\ldots ,1,0)$ is
a Gorenstein sequence of
height three. (See Corollary \ref{nonempty3}.)
\end{enumerate}
\end{theorem}
\begin{proof} We first prove \eqref{V,Wi}. Since
$\mathfrak{V}=(wx,wy,wz)=(w)\cap (x,y,z)$ we have from the
properties of the Macaulay duality,
\begin{equation*}
(wx,wy,wz)^{\perp}=(w)^{\perp}+(x,y,z)^{\perp}=K_{DP}[X,Y,Z]+K_{DP}[W],
\end{equation*}
which is \eqref{Vdual}.
We now show \eqref{V,Wii}. Since $F$ generates
$(I_j)^\perp$, $F\in
{(\mathfrak{V})^{\perp}}_j$ can be written
$F=G+\alpha W^{[j]}$ as in \eqref{eqvspduali}. Since $H(R/I)=(1,4,\ldots
)$, we have
$G\ne 0$ and $\alpha \ne 0$. The inverse system relation \eqref{VIdual} is
immediate, and gives
\begin{equation*}
R\circ F={R'}_{\ge 1}\circ
G+\langle W,W^{[2]},\ldots ,W^{[j-1]},F\rangle,
\end{equation*}
as well as the
Hilbert function equality
\eqref{VIHilb}. Let $J_I=\mathrm{Ann}\ (G)\cap K[x,y,z]$: evidently,
$\mathrm{Ann}\ (G)=(w,J_I)$. Let $h\in I\cap K[x,y,z]$. Then
we have $h\circ F=0$ and $h\circ W^{[j]}=0$, implying $h\circ G=0$, so $h\in
J_I$; conversely, if $h\in J_I=
\mathrm{Ann}\ (G)\cap K[x,y,z]$ then $h\circ G=0, h\circ W^{[j]}=0$, implying $h\circ
F=0$, so $h\in I\cap K[x,y,z]$. Thus
$J_I=I\cap K[x,y,z]$, as claimed. Moreover, any form $h$ of degree less than $j$
satisfying $h\circ F=0$ lies, modulo
$(wx,wy,wz)$, in $K[x,y,z]$, and hence in $J_I$.
If $f=w^j-g$ with
$g\circ G=\alpha$ then we have $f\circ F=0$ and hence $f\in I$. If
$g= 0$ we would have
$R_1\cdot w^{j-1}\subset I$, implying that $w^{j-1}\mod I$ is a socle element
of $A=R/I$, contradicting the
assumption that
$A$ is Artinian Gorenstein of socle degree $j$. Thus, we have $f=w^j-g$
with $g\ne 0$. Since the lowest-degree
third syzygies of
$I$ are those in degree four arising from
$\mathfrak{V}$, the symmetry of the minimal resolution implies that $I$ has
no generators (first syzygies) in
degrees greater than $j$. Thus
the ideal
$I$ is minimally generated as
$I=(J_I,\mathfrak{V},f)$, as claimed; this completes
the proof of \eqref{V,Wii}.
To show \eqref{V,Wiiia}, note that if $I\in
\mathfrak {C}(H)$ then $H'$ from
\eqref{V,Wiiia} satisfies
$H'=H(R'/J_I)=H(R'/(I\cap R'))$ with $I\cap R'$ a Gorenstein ideal in $R'$, so
$H'$ is a Gorenstein sequence. Conversely, if $H'=H-(0,1,1,\ldots ,1,0)$ is
a Gorenstein sequence,
then take $J$ to be any Gorenstein ideal in $R'$ of Hilbert function $H'$, with
dual generator $G$, so $J = \mathrm{Ann}_{R'}(G)$. Let $F = G+W^{[j]}$. Then
$\mathrm{Ann}\ (F) = I= (J,w^j-g,wx,wy,wz)$,
where $g\in R'_j$ satisfies $g\circ G=1$ (so $g\notin J$): the ideal $I$ is a Gorenstein ideal of
height four.
Then we have $I\in
\mathfrak {C}(H)$.
Thus, $\mathfrak {C}(H)$ is nonempty if and only if
$H'=H-(0,1,1,\ldots , 1,0)=(1,3,\ldots ,3,1)$ is a
Gorenstein sequence of height three. This completes the proof.
\end{proof}
\varphiar
The minimal resolution of $R/I$ can be constructed from the minimal
resolution of $J_I$. We construct a putative complex in Definition
\ref{resolveI};
we prove that it is an exact complex in Theorem~\ref{ResI,J}. The
construction relies on Theorem \ref{V,W}\eqref{V,Wii}.\varphiar
Suppose that $I\subset R$ defines an Artinian Gorenstein quotient $A=R/I$,
that $I_1=0$ and
$I_2=\mathfrak{V}$, and that
$I=(\mathfrak{V},J_I,g-w^j)$ with $g\in R'_j$ satisfying $g\ne 0$, and with
$J=J_I=I\cap R'$, where $R'=K[x,y,z]$, defining an
Artinian Gorenstein quotient $A'=R'/J$ of $R'$. Let the minimal resolution of
$R/J$ be (here $m=2n+1$ is odd)
\begin{equation}\label{Jres}\mathbb{J} :\quad 0\to
R\xrightarrow{\alpha^t}R^{m}\xrightarrow{\phi}R^{m}\xrightarrow{\alpha} R
\to R/J\to 0,
\end{equation}
where $\phi$ is an $m\times m$ alternating matrix with homogeneous
entries, and $\alpha=\left[ J\right]$
denotes the $1\times m$ row vector with entries the homogeneous
generators of $J$ that are the
Pfaffians of
$\varphihi$, according to the Buchsbaum-Eisenbud structure theorem for height
three Gorenstein ideals (since $J$ is
homogeneous, $\mathbb{J}$ may be chosen homogeneous: see
\cite{BE,D}). Denote by
$\mathbb{K}$ the Koszul complex resolving $R/(x,y,z)$ (so $\mathbb K_0=
\mathbb K_3=R$):
\mathfrak{b}egin{equation}
\mathbb{K} :\quad 0\to R \xrightarrow{\delta_3} R^3 \xrightarrow{\delta_2}
R^3 \xrightarrow{\delta_1}R\to
R/(x,y,z) \to 0,
\end{equation}
where $\delta_1=[x,y,z], \delta_2=\left( \mathfrak{b}egin{smallmatrix}
y&z&0\\
-x&0&z\\
0&-x&-y
\end{smallmatrix}\right)$, and $\delta_3=\delta_1^t$. We will let
$\mathbb{T}:\mathbb{K}\to \mathbb{J}$ be a
map of complexes induced by multiplication by $g$ on $R$. By degree
considerations, we see that
$\deg T_3=0$, so $T_3$ is multiplication by $\mathfrak{g}amma\in K$. \varphiar
So we have $T_1\circ \delta_2=\phi\circ T_2$, also $T_2\circ
\delta_3=\gamma\, [J]^t$; that is,
\begin{equation*}T_2\circ \left[ \begin{smallmatrix}z\\ y\\ x
\end{smallmatrix} \right] = \gamma \,
[J]^t.
\end{equation*}
\begin{definition}\label{resolveI} Given $I,J,\mathbb J, \mathbb K$ as above,
we define the following complex,
\begin{equation}\label{complexF}\mathbb{F}:\quad 0\to
R\xrightarrow{F_4}R^{m+4}\xrightarrow{F_3}R^{2m+6}\xrightarrow{F_2}R^{m+4}
\xrightarrow{F_1}R\to R/I\to 0,
\end{equation}
where $F_1=(wx,wy,wz,\alpha,w^j-g)$, and $F_2$ satisfies
\mathfrak{b}egin{equation}
F_2\quad=\qquad \left(
\mathfrak{b}egin{array}{c|cccc}
&3&m&m&3\\ \mathfrak{h}line \\
3&\delta_2&0&\frac{-1}{\mathfrak{g}amma}E\circ T_2^t&w^{j-1}I_{3\times 3}\\
m&0&\varphihi &wI_{m\times m}& T_1\\
1& 0& 0&0&-x\ -y\ -z
\end{array}\right),
\end{equation}
where $E=\left[
\mathfrak{b}egin{smallmatrix}
0&0&1\\
0&1&0\\
1&0&0
\end{smallmatrix}\right]$. The map $F_3$ satisfies
\mathfrak{b}egin{equation}
F_3\quad=\qquad \left(
\mathfrak{b}egin{array}{c|ccc}
&3&m&3\\ \mathfrak{h}line \\
3&w^{j-1}I_{3\times 3}&ET_1^t&-z\ -y\ -x\\
m&T_2&-{\mathfrak{g}amma}wI_{m\times m}&0\\
m&0&\mathfrak{g}amma\varphihi &0\\
3& -\delta_2& 0&0
\end{array}\right),
\end{equation}
and $F_4=(wz,wy,wx,\alphapha,w^j-g)^t$.
\end{definition}
\begin{theorem}\label{ResI,J}
Let $I$ be a homogeneous height four Gorenstein ideal in $R = K[w,x,y,z]$ with
socle degree $j$ and with $I_2 = (wx,wy,wz)$. Then the complex $\mathbb{F}$ of
\eqref{complexF} in Definition \ref{resolveI} is exact and is the minimal
resolution of $R/I$.
\end{theorem}
\begin{proof}
We first show that
$\mathbb{F}$ is a complex. By \eqref{V,Wii} of the structure theorem, we
see that
$I$ is minimally generated by $J = I\cap K[x,y,z]$, $wx,wy,wz$, $g-w^j$, where
$g\in K[x,y,z]$. So, $g\notin J$. Suppose that $\gamma=0$. Then
$T_2\circ \delta_3=0$, hence we would have
$T_2=T'\circ\delta_2$ for some
$T'$. Then
\begin{align*}
T_1\circ\delta_2&=\phi\circ T_2=\phi\circ T'\circ\delta_2;\\
\mbox{so }\qquad (T_1-\phi\circ T')\circ\delta_2&=0;
\mbox{and}\\
T_1-\phi\circ T'&=\beta [x,y,z], \beta\in K,\mbox{ and}\\
\alpha\circ T_1&=\alpha\circ \beta [x,y,z], \mbox{ and}\\
-g[x,y,z]&=\alpha\beta [x,y,z].
\end{align*}
This implies $g\in J$ contradicting $g\notin J$. So, we get
$\gamma\ne 0$.
\par We get $F_1\circ F_2=0$ and $F_3\circ F_4=0$ from the following three
identities. First, from the exact
sequence $\mathbb J$ of \eqref{Jres} we have
\begin{equation} \phi\alpha^t=\alpha\phi=0.
\end{equation}
Second, from
\begin{align} [x,y,z]\left( \left(\frac{-1}{\gamma}\right)ET_2^t\right)
&=\frac{-1}{\gamma}[z\ y\ x]T_2^t\notag\\
&=\frac{-1}{\gamma}\left[ T_2\left[
\begin{smallmatrix}
z\\
y\\
x
\end{smallmatrix}
\right]
\right]^t=\frac{-1}{\gamma}\gamma (\alpha^t)^t\notag\\
&=-\alpha\notag\\
\mbox{ we have }\qquad [x,y,z]\left[ \frac{-1}{\gamma}ET_2^t\right] &
=-\alpha .
\end{align}
Third, we have
\begin{equation} T_1J=-g[x,y,z] .
\end{equation}
To see that $F_2\circ F_3=0$ we just need to check that
\mathfrak{b}egin{align*}
\varphihi T_2-T_1\delta_2&=0\\
\mbox{and}\qquad \delta_2ET_1^t-\frac{1}{\mathfrak{g}amma}ET_2^t(\mathfrak{g}amma \varphihi)&=0.
\end{align*}
The first of these follows from the map of complexes
$\mathbb{T}:\mathbb{K}\to\mathbb{F}$. For the second\
we have
\mathfrak{b}egin{align*}\delta_2ET_1^t-ET_2^t\varphihi&=\delta_2ET_1^t+ET_2^t\varphihi^t\\
&=\delta_2ET_1^t+E(\varphihi T_2)^t\\
&=\delta_2ET_1^t+E(T_1\delta_2)^t\\
&=\delta_2ET_1^t+E(\delta_2^t)T_1^t\\
&=\left( \delta_2E+E\delta_2^t\right) T_1^t = 0,\\
\mbox{since}\qquad &\delta_2E+E\delta_2^t=0.
\end{align*}
So we get $F_2F_3=0$. Thus, $\mathbb{F}$ is a complex.\varphiar To see that
the complex $\mathbb{F}$ is exact, we
use the exactness criterion \cite{BE1}\cite[Theorem 20.9]{Ei}. It suffices
to show that $\sqrt{I_{m+3}(F_2)}$
and
$\sqrt{I_{m+3}(F_3)}$ have depth at least three, where $I_{m+3}(F_2)$
denotes the Fitting ideal generated by
the
$(m+3)\times (m+3)$ minors of $F_2$.
We write $F_2$ as
\mathfrak{b}egin{equation}
F_2\quad=\qquad \left(
\mathfrak{b}egin{array}{c|cccc}
&3&m&m&3\\
\mathfrak{h}line \\
3&\mathfrak{b}egin{smallmatrix}y&z&0\\-x&0&z\\0&-x&-y\end{smallmatrix}&0&
\mathfrak{b}egin{smallmatrix}\tau _{11}&\ldots
&\tau _{1m}\\&\ldots&\\
\tau_{31}&\ldots&\tau_{3m}\end{smallmatrix}&
\mathfrak{b}egin{smallmatrix}w^{j-1}&0&0\\0&w^{j-1}&0\\0&0&w^{j-1}
\end{smallmatrix}\\
m&0&\varphihi
&\mathfrak{b}egin{smallmatrix}w\ &0\ &0\ \\0\ &w\ &0\ \\0\ &0\ &w\ \end{smallmatrix}
&T_1\\ 1& 0& 0&0&-x\ -y\ -z
\end{array}
\right),
\end{equation}
where $x\tau_{1i}+y\tau_{2i}+z\tau_{3i}=-\alpha_i$, and $J=(\alpha_1,\ldots
,\alpha_m)$. Consider the
minor
$M_i$ of $F_2$ having all rows except the $(3+i)$-th row, and having the
columns $1,2,4,\ldots
,3+i-1,3+i+1,m+3,m+3+i,2m+4$. This is the minor
\begin{equation}
M_i\quad=\quad\qquad
\begin{array}{|cccc|}
\begin{smallmatrix}y&z\\-x&0\\0&-x\end{smallmatrix}&0&
\begin{smallmatrix}t_{1i}\\t_{2i}\\t_{3i}\end{smallmatrix}&\begin{smallmatrix}w^
{j-1}\\0\\0\end{smallmatrix}\\
0&\phi_i &0&\begin{smallmatrix}\ast \\\ast \\\ast \end{smallmatrix}\\
0& 0&0&-x
\end{array},
\end{equation}
and it equals
\begin{align*}&\pm
xa_i^2\begin{vmatrix}y&-z&t_{1i}\\-x&0&t_{2i}\\0&-x&t_{3i}\end{vmatrix}\\
&=\pm xa_i^2x(x\tau_{1i}+y\tau_{2i}+z\tau_{3i})\\
&=\pm x^3a_i^2.
\end{align*}
Thus $xa_i\in \sqrt{I(F_2)}$. Similarly, $ya_i,za_i\in \sqrt{I(F_2)}$. Thus
$mJ\subset\sqrt{I(F_2)}$.
Finally, looking at the last $m+3$ rows and the columns $1,2,m+4,\ldots
,2m+4$, we get $\varphim x^3w^m$ in
$I(F_2)$. So $wx\in\sqrt{I(F_2)}$, as well as $wy,wz$, by similar
computations. Thus $\sqrt{I(F_2)}\supset
(J,wx,wy,wz)$. Similarly $\sqrt{I(F_3)}\supset (J,wx,wy,wz)$. So these
Fitting ideals have depth at least
three, and the complex $\mathbb F$ is exact.
This completes the proof.
\end{proof}
\mathfrak{b}egin{remark} The above resolution in Theorem \ref{ResI,J} is
similar to but different from the
minimal resolution obtained by A. Kustin and M. Miller in \cite{KuMi2}.
They consider ideals of the form
$ (f, g, h, wJ)$ where $(f,g,h)$ is a regular sequence and $J$ is height
three Gorenstein. It
turns out that it is not a specialization of their resolution. One
reason for the resemblance is
that $(wx,wy,wz)$ has three Koszul type relations even though they are not
a regular sequence.
\end{remark}\varphiar
If $H(R/I)=(1,4,7,h,7,4,1)$, recall that
$\mathfrak {C}(H)\subset
\mathbb{P}\mathrm{Gor}(H)$ denotes the subfamily parametrizing ideals $I$ such that
$I_2\cong \mathfrak{V}=\lambdangle
wx,wy,wz\rangle$, up to a coordinate change. We denote by
$\nu_i(J)$ the number of degree-$i$ generators of $J$. We will later show
that any
Gorenstein sequence $H=(1,4,7,\ldots )$ satisfies $\mathfrak {C}(H)$
nonempty (Theorem \ref{nonempty2}).
For $I\in \mathbb{P}\mathrm{Gor}(H)$ we denote by $\mathcal T_I$ the tangent space
to the
affine cone over $\mathbb{P}\mathrm{Gor}(H)$ at the point corresponding to $A=R/I$. Recall that
$H'=H-(0,1,1,\ldots ,1,0)$. We denote by
$\mathcal T_{J_I}$ the tangent space to the affine cone over $
\mathbb{P}\mathrm{Gor}(H'), H'=H(R/J_I) $ from \eqref{VIHilb}, at the point corresponding to
$A'=R'/J_I,$ where $ J_I=I\cap
K[x,y,z]$.
\begin{theorem}\label{sevcomp} Let $H=(1,4,7,\ldots)$ be of socle degree $j\ge
5$. In
\eqref{sevcompi},\eqref{sevcompii}, \eqref{sevcompiii} we let
$A=R/I\in
\mathfrak {C}(H)$, and we let
$J_I=I\cap K[x,y,z]$.
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{sevcompi}
The dimension of $\mathfrak {C}(H)\subset \mathbb{P}\mathrm{Gor}(H)$ satisfies
\mathfrak{b}egin{equation}\lambdabel{sevcompieq}
\dim ({\mathfrak {C}(H)})=7+\dim\mathbb{P}\mathrm{Gor}(H').
\end{equation}
\item\lambdabel{sevcompii} The dimension of the
tangent space
$\mathcal T_I$ to the affine cone over
$\mathbb{P}\mathrm{Gor}(H)$ at the point determined by $A=R/I\in\mathbb{P}\mathrm{Gor}(H)$ satisfies,
\mathfrak{b}egin{equation}\lambdabel{tangsp0}
\dim_K \mathcal{T}_I=7+\dim_K \mathcal{T}_{J_I}+\nu_{j-1}(J_I).
\end{equation}
\item\label{sevcompiii} The GA algebra $A\in \mathfrak{C}(H)$ is a smooth
point of $\mathbb{P}\mathrm{Gor}(H)$ if and only if
$\nu_{j-1}(J_I)=0$.
\item\lambdabel{sevcompiv} The subscheme $
\mathfrak {C}(H)$ of $\mathbb{P}\mathrm{Gor}(H)$ is irreducible.
\item\lambdabel{sevcompv} When $j=6$ and $H=H_h=(1,4,7,h,7,4,1), 7\le h\le 11$
we have
\begin{equation}\label{dimE}
\dim ({\mathfrak {C}(H)})= 34-\binom{h^\vee +1}{2}, \quad h^\vee = 11-h.
\end{equation}
When also,
$8\le h\le 11$, $\mathfrak {C}(H)$ is generically smooth.
\end{enumerate}
\end{theorem}
\mathfrak{b}egin{proof} The proof of \eqref{sevcompi} is immediate from the structure
Theorem \ref{V,W}\eqref{V,Wii}: the
choice of $\mathfrak{V}$ involves that of $w$ and the vector space $\langle
x,y,z\rangle$, so 6 dimensions, and
that of $F=W^{[j]}+G$ involves one parameter, given $\langle G\rangle$,
which determines $J_I$. \par
We now show \eqref{sevcompii}. Let $A=R/I\in \mathfrak {C}(H)$. We recall
from \cite[Theorem 3.9]{IK} that for a
GA quotient $A=R/I$, we have
$\dim_K \mathcal T_I=\dim_K R_j/(I^2)_j=H(R/I^2)_j$.
We have
\mathfrak{b}egin{align*}
(I^2)_j&= I_2\cdot I_{j-2}\oplus (J^2)_j\\
&= \left( wR'_1\cdot
\left((w^{j-3}R'_1\oplus w^{j-4}R'_2\oplus\cdots \oplus wR'_{j-3})\oplus
J_{j-2}\right)\right)\oplus (J^2)_j\\
&=\left( w^{j-2}R'_2\oplus w^{j-3}R'_3\oplus \cdots \oplus
w^2R'_{j-2}\right)\oplus wR'_1J_{j-2}\oplus (J^2)_j.
\end{align*}
Hence we have
\mathfrak{b}egin{align*}
R_j/(I^2)_j&\cong
w^j\oplus w^{j-1}R_1'\oplus w\left( R'_{j-1}/R'_1J_{j-2}\right)\oplus
R'_j/(J_j)^2, \mbox{ and }
\\ \dim_K R_j/(I^2)_j&=1+3+H'_{j-1}+\nu_{j-1}(J)+\dim_K R'_j/(J_j)^2\\
&=7+\dim_K \mathcal{T}_{J_I}+\nu_{j-1}(J_I).
\end{align*}
We now show \eqref{sevcompiii}. We
use J.-O. Kleppe's result that in codimension 3,
$\mathbb{P}\mathrm{Gor}(H')$ is smooth \cite{Klj2}. It follows that for the Gorenstein ideal
$J_I\subset R'=K[x,y,z]$, of socle
degree
$j$, of
Hilbert function $H(R'/J)=H'$
the dimension of the tangent space $\mathcal T_{J_I}$ to the affine cone
over $\mathbb{P}\mathrm{Gor}(H')$ at $J_I$ satisfies
\mathfrak{b}egin{equation*}
\dim_K T_{J_I}=\dim (\mathbb{P}\mathrm{Gor}(H'))+1.
\end{equation*}
This, together with \eqref{sevcompi},\eqref{sevcompii} shows that
$\nu_{j-1}(J_I)=0$ implies $\dim_K \mathcal
T_I=\dim \mathfrak{C}(H)+1$, hence that $\mathfrak{C}(H)$ and $\mathbb{P}\mathrm{Gor}(H)$
are smooth at such points, which is
\eqref{sevcompiii}.\varphiar
We now show \eqref{sevcompiv}. We first show that $\mathfrak {C}(H)$ is
irreducible. The scheme $\mathbb{P}\mathrm{Gor}(H')$ is
irreducible by
\cite{D} (or by its smoothness
\cite{Klj2}, discovered later). The scheme $\mathfrak {C}(H)$, is fibred
over the family of nets isomorphic
to
$\mathfrak{V}$ by $\mathbb{P}\mathrm{Gor}(H')$, then by an open in $\mathbb P^1$ (to choose
$F$ given $G$), so it is
irreducible.
\varphiar
We now show \eqref{sevcompv}. The dimension formula \eqref{dimE} results
immediately from \eqref{sevcompi}
and the known dimension of $\mathbb{P}\mathrm{Gor}(H')$ (see \cite[Theorem
4.1B]{IK},\cite{Klj2}). From the latter source, we have
that the codimension of $\mathbb{P}\mathrm{Gor}(H')\subset \mathbb P^{27},
H'=(1,3,6,h-1,6,3,1)$ is $\mathfrak{b}inom{h^{\vee}+1}{2}$ where
$h^\vee=10-(h-1)$. When also $8\le h\le 11$, we have $\mathbf{D}elta^3(H')_5=0$; it
follows simply from \cite{D} (or see
\cite[Theorem 5.25]{IK}) that the generic GA quotient $R'/J$ having
Hilbert function $H'$ satisfies $\nu_5(J)=0$.
This completes the proof of
\eqref{sevcompv} and of the Theorem.
\end{proof}
\varphiar
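For instance, \eqref{dimE} gives $\dim \mathfrak{C}(H)=34$ when $h=11$ (so
$h^\vee=0$), and $\dim \mathfrak{C}(H)=34-\binom{5}{2}=24$ when $h=7$ (so
$h^\vee=4$).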
\subsection{Mysterious Gorenstein algebras with $I_2=\lambdangle
w^2,wx,wy\rangle$}\lambdabel{w2sec}
Let $\mathfrak{W}$ denote the vector space $\lambdangle
w^2,wx,wy\rangle$. In this section we assume $H=(1,4,7,\ldots ,1)$ and
study graded Artinian
Gorenstein algebras
$A=R/I, R=K[w,x,y,z]$, such that
\begin{equation}\label{ebase}
A\in \mathfrak{E}_{sp}(H):\
I_2=\mathfrak{W}.
\end{equation}
We will show that their Hilbert functions are closely related
to those of a
Gorenstein ideal in three variables (Lemmas \ref{HFw1},\ref{HFw2}). From
these results we can characterize the Hilbert functions $H$ for which
$\mathfrak E_{sp}(H)$ is
nonempty (Theorem
\ref{hfw2}): these are the same as found in the previous
section for Gorenstein algebras
$A\in \mathfrak C(H)$: those with $I_2\cong \lambdangle wx,wy,wz\rangle$. However
it is
an open question whether the Zariski closure $\overline{\mathfrak C (H)}$
contains
$\mathfrak E_{sp}(H)$, and it is this uncertainty that requires us to
consider $\mathfrak E_{sp}$
in detail.
\varphiar
The ideal $(\mathfrak{W})$ generated by $\mathfrak{W}$ satisfies $(\mathfrak{W})=(w^2,x,y)\cap
(w)$. The inverse system $\mathfrak{W}^\perp\subset \mathcal D$ satisfies
\begin{align}\label{eW}\mathfrak{W}^{\perp}=\left( (w^2,x,y)\cap
(w)\right) ^{\perp}=&(w^2,x,y)^{\perp}+(w)^{\perp}\\ =&\notag
K_{DP}[Z]+W\cdot K_{DP}[Z] +K_{DP}[X,Y,Z].
\end{align}
Thus we have for the degree-$j$ component
\begin{equation*}
\{\mathfrak{W}^\perp\}_j=K_{DP}[X,Y,Z]_j+\langle
WZ^{[j-1]}, Z^{[j]}\rangle .
\end{equation*}
\begin{lemma} Let $I$ satisfy \eqref{ebase}, and let
$F\in \mathcal{D}_j =K_{DP}[W,X,Y,Z]_j$ be a generator
of its inverse system. Then $F$ may be written uniquely
\begin{equation}\label{esum}
F=G+WZ^{[j-1]}, \, G\in K_{DP}[X,Y,Z],
\end{equation}
in the sense that the decomposition depends only on $I$, and the choice of
generators $w,x,y,z$ of $R$. Further, after a linear change of basis in $R$, we
may suppose that
$G$ in
\eqref{esum} has no monomial term in $Z^{[j]}$.
\end{lemma}
\mathfrak{b}egin{proof} Since $w^2, wx, wy$ are all in $I$, by \eqref{eW} the
generator $F$ of $I^{\varphierp}$
can be written in the form $F=G+\lambdambda WZ^{[j-1]}, G\in K_{DP}[X,Y,Z]$.
Evidently, $\lambdambda \ne 0$, since otherwise $H(A)=(1,3,\ldots
)$; so we may choose $\lambdambda =1$. The decomposition of \eqref{esum} is
certainly unique, given $I$, and the choice of $x,y,z,w$.
A linear change of basis $w\to w, x \to x, y\to
y, z\to z+\mathfrak{b}eta w$ in
$R$, and the contragradient change of basis $W\to W-\mathfrak{b}eta Z,
X\to X, Y\to Y, Z\to Z$ in
$\mathcal R$ eliminates any
monomial term in $Z^{[j]}$ from $G$.
\end{proof}\varphiar
We denote by $R'$ the polynomial ring $R'=K[x,y,z]$.
\mathfrak{b}egin{lemma}\lambdabel{alpha} Let $I$ be an ideal satisfying
\eqref{ebase}, let
$F=G+WZ^{[j-1]}$ be a generator of its inverse system as in
\eqref{esum}, and let $J=\mathrm{Ann}\ (G), J'=\mathrm{Ann}\ (G)\cap R'$; then $J=(w,J')=wR+J'R$.
Let
$\alphapha (J)$ be the integer
\mathfrak{b}egin{equation}\lambdabel{ealpha}
\alphapha (J)=\min\{ \alphapha\mathfrak{g}e 1 \mid J'_\alphapha \not\subseteq (x,y)\}=\min\{ \alphapha\mathfrak{g}e
1 \mid J_\alphapha \not\subseteq (x,y,w)\}.
\end{equation}
Then $\alphapha(J)=\min\{ i\mid Z^{[i]}\notin R'_{j-i}\circ G\}$, and
we have $2\le \alphapha(J)\le j$.
\end{lemma}
\mathfrak{b}egin{proof} The first statement follows from $(x,y)^\varphierp\cap
\mathcal{R}'=K_{DP}[Z]$. The lower bound on $\alphapha$ follows
from the assumption of \eqref{ebase}, which implies that
$H(R/J')=(1,3,\ldots )$, so $2\le \alphapha$. The upper bound on
$\alphapha$ follows from the fact that $z^j \in J' = \mathrm{Ann}\ (G)$.
\end{proof}
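For instance, for $G=X^{[4]}Z^{[2]}-X^{[4]}YZ$ of Example \ref{anoninv} below we have
$J'=(yz+z^2,y^2,x^5)\subset R'$; here $J'_1=0$, while $yz+z^2\in J'_2$ and
$yz+z^2\notin (x,y)$, so $\alphapha (J)=2$.\varphiar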
\mathfrak{b}egin{definition}\lambdabel{halpha}Let $I$ satisfy \eqref{ebase},
let
$F=G+WZ^{[j-1]}$ be a generator of its inverse system, as in
\eqref{esum}, and let $\alphapha=\alphapha(J)$ as in \eqref{ealpha}.
We define a sequence
\mathfrak{b}egin{equation}\lambdabel{eH}
H_\alphapha=
\mathfrak{b}egin{cases}(0,1,1,\ldots ,1,
2=h_\alphapha,2,\ldots ,2=h_{j-\alphapha},1,\ldots ,1,0=h_j)
\text{ if $\alphapha\le
j/2$,}\\
(0,1,1,\ldots ,1=h_{j-\alphapha},0,\ldots ,0,1=h_\alphapha,1,\ldots
1,0=h_j)
\text{ if $\alphapha >j/2$ and $j\ne 2\alphapha-1$}\\
(0,1,1,\ldots ,1,0=h_j) \text{ if $j=2\alphapha -1$.}
\end{cases}
\end{equation}
We let $H_0=(0,1,1,\ldots , 1,0=h_j)$.
\end{definition}
Note that $H_\alphapha$ takes only the values 0, 1, and 2. When
$\alphapha\le j/2$, there are
$j+1-2\alphapha
$ 2's in the middle of the sequence $H_\alphapha$; when
$\alphapha>j/2$ there are $2\alphapha-1-j$ 0's
in the middle of
$H_\alphapha$. When $\alphapha \le j/2$ the middle run of 2's is bordered on the left
by 0 in degree zero, followed by $\alphapha-1$ 1's. When $\alphapha >j/2$ the
middle run of 0's is bordered on the left by 0 in degree zero followed by
$j-\alphapha$ 1's.\varphiar
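For instance, when $j=6$ formula \eqref{eH} gives
\mathfrak{b}egin{equation*}
H_2=(0,1,2,2,2,1,0),\quad H_3=(0,1,1,2,1,1,0),\quad
H_4=(0,1,1,0,1,1,0),
\end{equation*}
while $H_0=(0,1,1,1,1,1,0)$; these shapes reappear in the Examples later in this
section.\varphiar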
\mathfrak{b}egin{definition}
We denote by $M$ the $R$-submodule of $\mathcal D$ generated by
$WZ^{[j-1]}$, whose degree-$i$ component satisfies $M_i=\lambdangle
Z^{[i]},W\cdot Z^{[i-1]}\rangle$ for $1\le i<j$. Given $F,G$ as in
\eqref{esum} we define two $R$-modules
\mathfrak{b}egin{align}
B=&R\circ
\lambdangle F,WZ^{[j-1]}\rangle /R\circ G \notag\\ C=&R\circ
\lambdangle F,WZ^{[j-1]}\rangle /R\circ F.\lambdabel{eBC}
\end{align}
We denote by $H^\vee (B)$ the
\emph{dual} sequence $H^\vee (B)_i=H(B)_{j-i}$,
and likewise $H^\vee (C)_i=H(C)_{j-i}$.
\end{definition}\noindent
Evidently we have for $F,G$ as in \eqref{esum}
\mathfrak{b}egin{equation}\lambdabel{eFrel} I\cap J=\mathrm{Ann}\ \lambdangle F,G\rangle =\mathrm{Ann}\
\lambdangle F,WZ^{[j-1]}\rangle =\mathrm{Ann}\ \lambdangle G,WZ^{[j-1]}\rangle .
\end{equation}
Our convention will be to specify Hilbert functions of $R$-submodules of
$\mathcal D$ (or
of $\mathcal R$) as subobjects: thus $H(R\circ \{Z^{[2]},WZ\})=H(\lambdangle
1;Z,W;Z^{[2]},WZ\rangle)=(1,2,2)$. However, the Hilbert functions $H(B)$,
and $H(C)$ are as
$R$-modules: thus, when $F=X^{[2]}\cdot Z^{[2]}+WZ^{[3]}$, the module $B$ from
\eqref{eBC} satisfies, after taking representatives for the quotient,
$
B\cong\lambdangle WZ^{[3]};Z^{[3]},W\cdot Z^{[2]}; WZ;W\rangle$
so $H(B)=(1,2,1,1),
$
and the dual sequence $H^\vee (B)=(0,1,1,2,1)$.
\mathfrak{b}egin{lemma} We have
\mathfrak{b}egin{align}
H(R/(I\cap J))=&H(R'/J')+H^\vee (B)\notag\\
=&H(R/I)+H^\vee (C).\lambdabel{ehilbrel}
\end{align}
The $R$-modules $B$ and $C$
each have a single generator, the class of $WZ^{[j-1]}$.
\end{lemma}
\mathfrak{b}egin{proof} \eqref{ehilbrel} is immediate from \eqref{eFrel}, and the
definition of $H(B),H(C)$. The last statement is immediate from the
definition of $B,C$.
\end{proof}
\mathfrak{b}egin{lemma}\lambdabel{HFw1} Let $I$ be an ideal satisfying
\eqref{ebase}, and let
$F=G+WZ^{[j-1]}$ be a decomposition as in \eqref{esum} of
the generator $F$ of the inverse system $I^{\varphierp}$. Let $J = \mathrm{Ann}\ (G)$
and $\alphapha =\alphapha (J)$ as in
\eqref{ealpha}. Then we have
\mathfrak{b}egin{enumerate}[i.]
\item \lambdabel{HFw1i} $I\cap J=\mathrm{Ann}\ \lambdangle G,WZ^{[j-1]}\rangle$, and
$(I\cap J)^{\varphierp}=\lambdangle R'\circ G,M\rangle
=(J')^{\varphierp}+M=I^{\varphierp}+M$.
\item\lambdabel{HFw1ii} $H(B)=(1,2,\ldots ,2_{j-\alphapha},1,\ldots
,1,0)$,
$H(C)=(1,1,\dots 1_c)$, with $c=\alphapha$ or $c=j-\alphapha$. The
case $c=j-\alphapha$ can occur only if $\alphapha \mathfrak{g}e j/2$.
\item\lambdabel{HFw1iii} When $\ c=\alphapha$, we have
$H(R/I)-H(R'/J')=H_\alphapha$; when $c=j-\alphapha$ we have
$H(R/I)-H(R'/J')=H_0=(0,1,1,\ldots ,1,0)$.
\end{enumerate}
\end{lemma}
\mathfrak{b}egin{proof} Since $I=\mathrm{Ann}\ (F)=\mathrm{Ann}\ (G+WZ^{[j-1]})$ and
$J=\mathrm{Ann}\ (G)$, we have
\mathfrak{b}egin{equation*}
I\cap J=\mathrm{Ann}\ \lambdangle F,G\rangle =\mathrm{Ann}\ \lambdangle
G, WZ^{[j-1]}\rangle =\mathrm{Ann}\ \lambdangle F, WZ^{[j-1]}\rangle.
\end{equation*}
This proves \eqref{HFw1i}. To show \eqref{HFw1ii} we consider
the two $R$-modules $B,C$ defined above. Evidently we have
$H(B)_i\le 2$, whence by the Macaulay inequalities
$H(B)=(1,2,\ldots ,2_a,1,\ldots ,1_b,0)$, with invariants the
length $a-1$ of the sequence of $2's$, and the length $b-a$ of
the sequence of $1's$. Since $Z^{[i]}\in R\circ F$ for $1\le i\le
j-1$, we have that the Hilbert function $H(C)$ satisfies
$H(C)_i
\le 1$, hence,
$H(C)=(1,1,\ldots 1_c,0)$, with sole invariant the length
$c+1$ of the sequence of $1's$. Now
\mathfrak{b}egin{align*} H(R/(I\cap
J))_i-H(R/J)_i=2 & \mathfrak{L}eftrightarrow M_i\oplus (R'\circ
G)_i=R_{j-i}\circ \lambdangle G,WZ^{[j-1]}\rangle \\
&\mathfrak{L}eftrightarrow Z^{[i]}\notin (R'\circ G)_i\\
&\mathfrak{L}eftrightarrow i\mathfrak{g}e \alphapha (J).
\end{align*}
Otherwise, for $1\le i<\alphapha(J), H(R/(I\cap
J))_i-H(R/J)_i=1$, since for such $i$ we have
\mathfrak{b}egin{equation*}
WZ^{[i-1]}\in
R_{j-i}\circ
\lambdangle G,WZ^{[j-1]}\rangle \text{ but }
WZ^{[j-1]}\notin R'_{j-i}\circ G,
\end{equation*}
and for $i=0$ the difference
is
$0$. Hence, taking into account that $H^\vee
(B)=H(R/(I\cap J))-H(R'/J')$, we have $a=j-\alphapha (J)$ and
$b=j-1$. Since both
$H(R/I)$ and
$H(R'/J')$ are symmetric about $j/2$, so is their difference
\mathfrak{b}egin{equation}\lambdabel{ediff}
H(R/I)-H(R'/J')=H^\vee(B)-H^\vee (C).
\end{equation}
This difference can be symmetric only if $c=\alphapha$ or
$c=j-\alphapha$. \varphiar
Suppose now that $c=j-\alphapha$, and
$\alphapha < j/2$. We will show that $H(R/I)_\alphapha
=H(R/J)_\alphapha +2$. By definition of $\alphapha$,
$J'_\alphapha
$ has a generator of the form
$z^\alphapha-g,g\in (x,y)R'$; it follows that $z^{j-\alphapha}-g'\in
J', g'=z^{j-2\alphapha}g\in (x,y)R'$. Consider the subset
\mathfrak{b}egin{equation*} \left( (x,y)\cdot R'\right)\circ G = \left(
(x,y)\cdot R'\right)\circ F.
\end{equation*}
Note that $z^{j-\alphapha}\circ G\in (x,y)R'\circ G$. However,
$z^{j-\alphapha}\circ F$ has a term $WZ^{[\alphapha -1]}$, and
$wz^{j-\alphapha -1}\circ F=Z^{[\alphapha ]}$. By Lemma \ref{alpha},
$Z^{[\alphapha ]} \notin R'\circ G$; it follows that $\dim
R_{j-\alphapha}\circ F=\dim R'_{j-\alphapha}\circ G +2$, as claimed.
This implies that $H(R/I)=H(R/J)+H_\alphapha$ is the only
possibility when $\alphapha < j/2$.
\varphiar The statement \eqref{HFw1iii} is immediate from
\eqref{HFw1ii} and \eqref{ediff}.
\end{proof}
\mathfrak{b}egin{remark} Note that, {\it given the Hilbert function}
$H'=H(R/J)$ the condition $\alphapha (J)\mathfrak{g}e \alphapha_0$ is a
closed condition on the family $\mathbb{P}\mathrm{Gor} (H')$. That is, it is
rarer to have higher values of $\alphapha(J)$. However, the situation
is quite different if the Hilbert function is allowed to change, for
example if a
term $\lambdambda Z^{[j]}$ is added to the dual generator $G$ of $J$: see Lemma
\ref{lambda}, where the effect of such a change is described.
\end{remark}
\mathfrak{b}egin{lemma}\lambdabel{HFw2}Let $I$ be an ideal satisfying
\eqref{ebase}, and let
$F=G+WZ^{[j-1]}$ be a decomposition as in \eqref{esum} of
a generator $F$ of the inverse system $I^\varphierp$. Let $\alphapha
=\alphapha (J), J=\mathrm{Ann}\ (G)$ be the integer of
\eqref{ealpha}. Then we have
\mathfrak{b}egin{enumerate}[i.]
\item \lambdabel{HFw2i} $H(R/I)$ satisfies either
$H(R/I)=H(R'/J')+H_\alphapha$ or
$H(R/I)=H(R'/J')+H_0$; the second possibility may occur only if
$\alphapha\mathfrak{g}e j/2$.
\item \lambdabel{HFw2ii} If $H(R/I)=H(R'/J')+H_\alphapha$,
then \mathfrak{b}egin{align*}
H(R/(I\cap J))=&H(R/I)+(0,0,\ldots
,0,1_{j+1-\alphapha},1,\ldots ,1_j)
\text{ and }\\(I\cap J)^{\varphierp}=&I^{\varphierp}\oplus \lambdangle WZ^{[j-\alphapha
]},\ldots
,WZ^{[j-1]}\rangle \\
\text { also } H(R/(I\cap
J))=&H(R'/J')+(0,1,\ldots ,1,2_\alphapha,2,\ldots 2_{j-1},1_j), \text {
and } \\(I\cap J)^{\varphierp}=&(J')^{\varphierp}\oplus
\lambdangle W, WZ,\ldots ,WZ^{[j-1]};Z^{[\alphapha ]},Z^{[\alphapha+1]},\ldots
,Z^{[j-1]}\rangle .
\end{align*}
\item \lambdabel{HFw2iii} If $H(R/I)=H(R'/J')+H_0$, then $H(R/(I\cap
J))$ and $H(R'/J')$ are related as above, but
\mathfrak{b}egin{equation*}
H(R/(I\cap
J))=H(R/I)+(0,0,\ldots ,0,1_\alphapha,1,\ldots ,1_j).
\end{equation*}
\end{enumerate}
\end{lemma}
\mathfrak{b}egin{proof} The Lemma is an immediate consequence of Lemma
\ref{HFw1} and \eqref{ediff}.
\end{proof}
Recall that a Gorenstein sequence $H$ of height $3$ is a
non-negative sequence of integers $H=(1,3,\ldots ,
1=h_j,0,\ldots )$, symmetric about $j/2$, that occurs as the
Hilbert function of a graded Artinian Gorenstein algebra
$A\cong K[x_1,\ldots ,x_r]/I$. Recall that
then $
(\mathbf{D}elta H)_i=H_i-H_{i-1}$.\varphiar\noindent
\mathfrak{b}egin{theorem}\lambdabel{hfw2} Let $I$ be an ideal satisfying
\eqref{ebase}. Then $H=H(R/I)$ satisfies
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{hfw2i} $\mathbf{D}elta
H_{\le j/2}$ is an $O$-sequence.
\item\lambdabel{hfw2ii} $H=H'+H_0=H'+(0,1,1,\ldots ,1,0)$ for some
Gorenstein sequence $H'$ of height three.
\end{enumerate}
\end{theorem}
Warning:
the $H'$ of \eqref{hfw2ii} above is \emph{not} in general equal to
$H(R'/J')$, except when $c=j-\alphapha$.
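For instance, in Example \ref{anoninv} below we have $H(R/I)=(1,4,6,6,6,4,1)$ and
$H(R'/J')=(1,3,4,4,4,3,1)$, while $H'=H(R/I)-H_0=(1,3,5,5,5,3,1)$: thus
$H'\ne H(R'/J')$, although $H'$ is a height three Gorenstein sequence (it occurs
in that example as $H(R/J(1))$).\varphiar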
\mathfrak{b}egin{proof} By Lemma \ref{HFw1}\eqref{HFw1ii} we have $c=\alphapha $ or
$c=j-\alphapha$. The result of the Theorem is obvious in the case
$c=j-\alphapha$, since then by Lemma \ref{HFw1}\eqref{HFw1iii}
$H(R/I)=H(R'/J')+H_0$.
So we assume $c=\alphapha$. By Lemma \ref{HFw2} we have
$H(R/I)=H(R'/J')+H_\alphapha$. Here $J'=(\mathrm{Ann}\ G)\cap K[x,y,z]$ from
Lemma~\ref{alpha} has a
generator in degree $\alphapha$, since by its definition \eqref{ealpha}
$\alphapha $ is
the lowest degree for which $J'_i\not\subseteq \lambdangle x,y\rangle
\cdot R'_{i-1}$. \varphiar
First, assume $\alphapha < j/2$, when $H_\alphapha=(0,1,\ldots
,1,2_\alphapha,\ldots 2_{j-\alphapha},1,\ldots
,1,0)$, from Definition \ref{halpha}.
We let
$H'=H(R/I)-H_0$, and we have
\mathfrak{b}egin{equation}\lambdabel{eH'}
H'=H(R'/J')+(0,\ldots ,0,1_\alphapha ,1,\ldots
1_{j-\alphapha},0,\ldots ).
\end{equation}
Thus, to show \eqref{hfw2ii} here it would
suffice to show that
$H'$ of \eqref{eH'} is a height three Gorenstein sequence.
Assuming that the order of $J'$ is $\nu$, we have
\mathfrak{b}egin{align}\lambdabel{delta1}\mathbf{D}elta H'&=\mathbf{D}elta
H(R'/J')+(0,0,\ldots ,1_\alphapha,0,\ldots
,0,-1_{j+1-\alphapha},0,\ldots ),
\text{ and }\notag\\
\mathbf{D}elta H(R'/J') &=(1,2,3,\ldots ,\nu,t_\nu,\ldots ,-2,-1),
\end{align}
with $\nu\mathfrak{g}e t_{\nu}\mathfrak{g}e \ldots \mathfrak{g}e t_{\lfloor j/2 \rfloor}$.
Furthermore, a result of A. Conca and G. Valla is that the maximum number
of degree-$i$
generators possible for any Gorenstein ideal
$J'$ of Hilbert function $H(R'/J')$ is
\mathfrak{b}egin{equation*}
\mathrm{max} \{\nu_i\} =
\mathfrak{b}egin{cases} -(\mathbf{D}elta^2 H(R'/J'))_i=t_{i-1}-t_i, & \text{ when $ i\le j/2$
and $ i\not=\nu$}\\
1-(\mathbf{D}elta^2 H(R'/J'))_\nu=1+\nu-t_\nu &\text { when
$i=\nu$.}
\end{cases}
\end{equation*}
(see \cite{CV} or \cite[Theorem B.13]{IK}). Since $J'$ has a generator in
degree $\alphapha$
it follows when $\alphapha>\nu$ that $t_{\alphapha -1}\mathfrak{g}e t_\alphapha +1$. Thus, for
$\alphapha \mathfrak{g}e \nu$ adding one in degree
$\alphapha$ to the first difference
$(\mathbf{D}elta H(R'/J'))_{\le j/2}$ yields a sequence $\mathbf{D}elta H'$ as in
\eqref{delta1} that is still an
$O$-sequence: for height two this
condition is simply that the sequence $\mathbf{D}elta H'$ must rise to a maximum
value $\nu'$, then be nonincreasing. This implies that $H'$ is indeed
a height three Gorenstein sequence, and completes the proof
when
$\alphapha
\le j/2$.\varphiar
Now assume that $c=\alphapha$ and $\alphapha > j/2$. Let
\mathfrak{b}egin{equation*}
H''=H(R'/J')+(0,\ldots ,0,-1_{j+1-\alphapha} ,-1,\ldots ,
-1_{\alphapha -1},0,\ldots ).
\end{equation*}
Then we have in this case
$H(R/I)=H''+H_0$. Thus, to show \eqref{hfw2ii} here it would
suffice to show that
$H''$ also is a height three Gorenstein sequence. We have
\mathfrak{b}egin{equation}\lambdabel{delta2}\mathbf{D}elta H''=\mathbf{D}elta
H(R'/J')+(0,0,\ldots ,-1_{j+1-\alphapha},0,\ldots
,0,1_{\alphapha},0,\ldots ).
\end{equation}
That $J'$ has a generator in degree $\alphapha> j/2$ implies that
$(\mathbf{D}elta^2 H(R'/J'))_\alphapha \le -1$, which is equivalent by the
symmetry of $\mathbf{D}elta ^2 H(R'/J')$ to $(\mathbf{D}elta ^2
H(R'/J'))_{j+2-\alphapha} \le -1$. This in turn implies
$(\mathbf{D}elta H(R'/J'))_{j+2-\alphapha}< ( \mathbf{D}elta
H(R'/J'))_{j+1-\alphapha}$. Thus, lowering $(\mathbf{D}elta
H(R'/J'))_{j+1-\alphapha}
$ by 1 in degree $j+1-\alphapha$ to obtain $\mathbf{D}elta
H''_{\le j/2}$ as in
\eqref{delta2} preserves the condition that
$(\mathbf{D}elta H'')_{\le j/2}$ is the Hilbert function of some height
two Artinian algebra. This completes the proof of the Theorem.
\end{proof}\varphiar
The following examples illustrate Lemma \ref{HFw2}. In particular we
explore how the Hilbert functions $H(R/I), H(R/J)$ change (recall
that
$I=\mathrm{Ann}\ (F), J=\mathrm{Ann}\ (G)$) as we alter the coefficient of $Z^{[j]}$
in $F,G$. Here there is a marked difference for the cases $\alphapha (J)\le
j/2$, and $\alphapha (J)> j/2$. The subsequent Lemma
\ref{lambda} explains some of the observations.
\mathfrak{b}egin{example}\lambdabel{anoninv}
Letting $G=X^{[4]}Z^{[2]}-X^{[4]}YZ, F=G+WZ^{[5]}$,
we have $J=\mathrm{Ann}\ (G)=(w,yz+z^2, y^2, x^5)$, so $\alphapha (J)=2$, and
$I=\mathrm{Ann}\ (F)=(
w^2,wx, wy, y^2 ,yz^2, xyz+xz^2, x^4y+wz^4, x^5, z^6)$. Also
$H(R/J)=(1,3,4,4,4,3,1)
$, and
\mathfrak{b}egin{equation*} H(R/I)=(1,4,6,6,6,4,1)=H(R/J)+H_2.
\end{equation*}
Changing $G$ by adding a $Z^{[6]}$ term, we have
$G_1=X^{[4]}Z^{[2]}-X^{[4]}YZ+Z^{[6]},
F_1=G_1+WZ^{[5]}$, $J(1)=\mathrm{Ann}\ (G_1)=(w,y^2, yz^2,xyz+xz^2,x^4y+z^5,
x^5)$,
so $\alphapha (J(1))=5$, and $I(1)=\mathrm{Ann}\ (F_1)=(w^2,wx, wy, y^2 ,yz^2,
xyz+xz^2, x^4y+wz^4, x^5, wz^5-z^6)$. Also
$H(R/J(1))=(1,3,5,5,5,3,1)
$, and
\mathfrak{b}egin{equation*} H(R/I(1))=(1,4,6,6,6,4,1)=H(R/J(1))+H_0.
\end{equation*}
\end{example}
\mathfrak{b}egin{example} In this example, we chose
$G=(Z+X)^{[6]}+(Z+2X)^{[6]}+(Z+Y)^{[6]}+(Z+2Y)^{[6]}+
(Z+X+Y)^{[6]}+(Z+2X+2Y)^{[6]}$,
the sum of 6 divided powers, and let $J=\mathrm{Ann}\ (G)$. Then $H(R/J)$
has the expected value $H(R/J)=(1,3,6,6,6,3,1)$ (see \cite{IK}), and $\alphapha
(J)=3$. From
Lemma \ref{HFw2}, letting $
I=\mathrm{Ann}\ (F), F=G+WZ^{[5]}$ we have
\mathfrak{b}egin{equation*}
H(R/I)=H(R/J)+H_3=(1,4,7,8,7,4,1).
\end{equation*}
Here
\mathfrak{b}egin{align*}
I=&(w^2,wx,wy, y^3-3y^2z+2yz^2, x^2y-xy^2 ,x^3-3x^2z+2xz^2 ,\\
&51xy^2z-18x^2z^2-99xyz^2-18y^2z^2-12wz^3+34xz^3+34yz^3, 5y^2z^3+4wz^4-9yz^4,
yz^5-z^6).
\end{align*}
\varphiar
Omitting the pure $Z^{[6]}$ term from $G$ and $F$, to obtain $G_1,F_1$ we have
$H(R/\mathrm{Ann}\ (G_1))=(1,3,6,7,6,3,1)$, $\alphapha(\mathrm{Ann}\ (G_1))=4$ and
\mathfrak{b}egin{equation*}
H(R/\mathrm{Ann}\ F_1)=H(R/I)=H(R/\mathrm{Ann}\ (G_1))+H_0.
\end{equation*}
This example
shows that it is not the inclusion of a $Z^{[6]}$ term in $G$ that keys
the simpler case
$H(R/I)=H(R/J)+H_0$.
The Hilbert function $H(R/I)$ is always invariant under a change
in the $Z^{[j]}$ term of
$F$: this follows from $z^i\circ F=
WZ^{[j-1-i]}+z^i\circ G$, linearly disjoint from $\lambdangle R_i\mod z^i\rangle
\circ F$.
\end{example}
\mathfrak{b}egin{example} When $j=8$,
$G=X^{[3]}Y^{[5]}+X^{[2]}Y^{[4]}Z^{[2]}+Y^{[5]}Z^{[3]}$, then
\mathfrak{b}egin{equation*}
J=\mathrm{Ann}\ G=
(w,x^3-z^3, z^4 ,xz^3, x^2z^2-yz^3, y^6, xy^5+x^2y^3z-y^4z^2,
x^2y^4-y^5z-xy^3z^2).
\end{equation*}
We have $\alphapha (J)=3$, $H(R/\mathrm{Ann}\ (G))=(1,3,6,9,9,9,6,3,1)$, and $I
=\mathrm{Ann}\ (F), F=G+WZ^{[7]}$,
satisfies
\mathfrak{b}egin{equation*}
H(R/I)=(1,4,7,11,11,11,7,4,1)=H(R/\mathrm{Ann}\ (G))+H_3.
\end{equation*}
Here
\mathfrak{b}egin{align*}I=(w^2,wx,wy,&
xz^3, x^2z^2-yz^3, x^3z, x^3y-yz^3, x^4, y^5z-wz^5, y^6,\\&
xy^5+x^2y^3z-y^4z^2,
x^2y^4-xy^3z^2-wz^5 ,z^8).
\end{align*}
Adding a $Z^{[8]}$ term to $G$ to form $G_1$ leads to $ J(1)=\mathrm{Ann}\ (G_1)$ with
$\ \alphapha (J(1)) =6$ and $F_1, I(1)=\mathrm{Ann}\ (F_1)$ satisfying
\mathfrak{b}egin{equation*}
H(R/I(1))=H(R/I)=H(R/J(1))+H_0.
\end{equation*}
\end{example}
It might be thought from the previous examples that adding $\lambdambda
Z^{[j]}$ with
$\lambdambda$ generically chosen will ``improve'' $G$ to a $G_\lambdambda$ such that
$J(\lambdambda)=\mathrm{Ann}\ (G_\lambdambda)$ and $I(\lambdambda)=\mathrm{Ann}\ (F_\lambdambda), F_\lambdambda
=G_\lambdambda+WZ^{[j-1]}$, will satisfy
$H(R/I(\lambdambda))=H(R/J(\lambdambda)) +H_0$. This change would indeed be an
improvement, since when $H(R/I)=H(R/J)+H_0$ the minimal resolutions of
the ideals $I,J$ appear to be closer than they are when
$H(R/I)=H(R/J)+H_\alphapha$.
In the next Lemma we show that this ``improvement'' must occur when $\alphapha
(J)\le j/2$, but can occur either never, or for a single value of
$\lambdambda$ when
$\alphapha (J) > j/2$. We suppose that
$\lambdambda
\in K$.
\mathfrak{b}egin{lemma}\lambdabel{lambda} Let $J=\mathrm{Ann}\ (G), I=\mathrm{Ann}\ (F), F=G+WZ^{[j-1]}$ be
such that
$I$ satisfies
\eqref{ebase}, and define $G_\lambdambda=G+\lambdambda Z^{[j]}$, $F_\lambdambda = F+\lambdambda
Z^{[j]}$,
$J(\lambdambda)=\mathrm{Ann}\ (G_\lambdambda), I(\lambdambda)=\mathrm{Ann}\ (F_\lambdambda)$. Then we have
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{lambdai} $\left( I\cap J\right) +m^j=\left(
I(\lambdambda)
\cap J(\lambdambda)\right) +m^j$ and $(I\cap J)_j$ differs
from $(I(\lambdambda )
\cap J(\lambdambda ))_j$ by replacing $z^j-u,u\in J\cap ((x,y)\cap
K[x,y,z])$ by $z^j-u', u'\in J(\lambdambda )\cap ((x,y)\cap
K[x,y,z])$.
\item\lambdabel{lambdaii} $H(R/I)=H(R/I(\lambdambda)), $ and $H(R/(I\cap
J))=H\left( R/(I(\lambdambda)\cap J(\lambdambda))\right)$;
\item\lambdabel{lambdaiii} If $\alphapha (J)\le j/2$ and $\lambdambda \ne 0$
then $\alphapha (J(\lambdambda))=j+1-\alphapha (J)$, and \mathfrak{b}egin{equation*}
H(R/J(\lambdambda ))=
H(R/J)+(0,\ldots,0_{\alphapha -1},1_\alphapha ,1,\ldots
,1_{j-\alphapha},0_{j+1-\alphapha } ,\ldots ,0_j).
\end{equation*}
In this case
$H(R/I(\lambdambda ))=H(R/J(\lambdambda ))+H_0$.
\item\lambdabel{lambdaiv} Let $\alphapha (J)>j/2$
then $\ \alphapha (J(\lambdambda))=\alphapha(J)$ or $\alphapha (J(\lambdambda))=j+1-\alphapha
(J)$. In the former case
$H(R/J(\lambdambda ))=H(R/J)$. The latter case may occur for at most a single
value $\lambdambda _0$; if it occurs, then for $\lambdambda =\lambdambda
_0,\alphapha=\alphapha(J)$,
\mathfrak{b}egin{equation*}
H(R/J(\lambdambda _0))=H(R/J)-(0,\ldots,0_{j-\alphapha },1_{j+1-\alphapha}
,1,\ldots ,1_{\alphapha -1},0_{\alphapha } ,\ldots ,0_j).
\end{equation*}
\mathfrak{b}egin{enumerate}[a.]
\item\lambdabel{lambdaiva} If
$H(R/I)=H(R/J)+H_\alphapha$ then $\alphapha
(J(\lambdambda ))=\alphapha (J)$ and $H(R/J(\lambdambda))=H(R/J)$.
\item\lambdabel{lambdaivb} If
$H(R/I)=H(R/J)+H_0$, then for all values of $\lambdambda$ except
possibly a single value $\lambdambda _0\not= 0$ we have
$\alphapha(J(\lambdambda))=\alphapha (J)$ and $H(R/J(\lambdambda ))=H(R/J).$
\end{enumerate}
\end{enumerate}
\end{lemma}
\mathfrak{b}egin{proof}Since for $i\le j-1, Z^{[i]}=wz^{j-1-i}\circ
WZ^{[j-1]}$, $(I\cap J)_i=(I(\lambdambda ) \cap J(\lambdambda ))_i$ for
$i\le j-1$. The second statement in \eqref{lambdai} is evident.
The first claim in \eqref{lambdaii} follows since the two ideals
$I, I(\lambdambda)$ are isomorphic, under a change of variables. The
second claim in \eqref{lambdaii} follows from \eqref{lambdai}. \varphiar
Suppose that $\alphapha \le j/2$ and $\lambdambda\ne 0$, and that $h=z^\alphapha-g\in J$, with $g\in
(x,y)\cdot K[x,y,z]$. Then for $0\le u \le j-2\alphapha $ we have
\mathfrak{b}egin{equation*}
(z^u
h) \circ (G+\lambdambda Z^{[j]})= z^uh\circ (\lambdambda Z^{[j]})=\lambdambda
Z^{[j-\alphapha-u]}.
\end{equation*}
It follows that for $i\le j-\alphapha$, $Z^{[i]}\in R\circ G_\lambdambda$. This
implies
that for $\alphapha\le i \le j-\alphapha$, we have $ H(R/J(\lambdambda ))_i=H(R/J)_i+1$,
since by Lemma \ref{HFw2} $Z^{[i]}\notin R\circ G $ for $i\mathfrak{g}e \alphapha (J)$. The
claims
in
\eqref{lambdaiii} now follow from the
symmetry of
$H(R/J(\lambdambda)), H(R/J)$ and hence of $H(R/J(\lambdambda))-H(R/J)$.\varphiar
Suppose that $\alphapha (J)> j/2$. The symmetry of $H(R/J(\lambdambda))-H(R/J)$
and Lemma \ref{HFw1} \eqref{HFw1ii} show the first claim concerning $\alphapha
(J(\lambdambda))$ in
\eqref{lambdaiv}. This and \eqref{lambdaii} show \eqref{lambdaiva}.
The same symmetry and \eqref{lambdaiii} also prove \eqref{lambdaivb}, and
complete the proof of \eqref{lambdaiv}: the exceptional case may occur for
at most a single value $\lambdambda _0$.
\end{proof}
\mathfrak{b}egin{example} Letting
$G=X^{[3]}Z^{[3]}-Y^{[4]}X^{[2]}+Y^{[2]}Z^{[4]}+XY^{[2]}Z^{[3]}+Z^{[6]},
F=G+WZ^{[5]}$ we have $J=\mathrm{Ann}\ (G)=(x^3+x^2z-y^2z, y^3z,
y^4-x^2z^2+y^2z^2+xz^3-z^4, xy^2z+x^2z^2-y^2z^2, xy^3-xyz^2+yz^3, x^2yz,
x^2y^2+x^2z^2-y^2z^2+z^4)$, $\alphapha (J)=4$, and $H(R/J)=(1,3,6,9,6,3,1)$.
Then
\mathfrak{b}egin{align*} I&=(wy, wx, w^2, x^2z-y^2z+wz^2, x^3-wz^2 ,y^3z,
xy^3-xyz^2+yz^3, x^2y^2+y^4+xz^3, wz^5-z^6), \text{ and }\\
&\qquad \qquad H(R/I)=(1,4,7,9,7,4,1)=H(R/J)+(0,1,1,0,1,1,0)=H(R/J)+H_4.
\end{align*}
This is an example of Lemma \ref{lambda}\eqref{lambdaiva} where
$H(R/J(\lambdambda))=H(R/J)$ for every $\lambdambda$.
\end{example}
\section{Hilbert functions $H=(1,4,7,h,\ldots ,4,1)$}\lambdabel{jarb}
We now consider Gorenstein sequences --- Hilbert functions of Artinian
Gorenstein algebras, so
symmetric about $j/2$ --- having the form
\mathfrak{b}egin{equation}\lambdabel{genhfe}
H=(1,4,7,h,b,\ldots ,4,1),
\end{equation}
of
any socle degree
$j\mathfrak{g}e 6$ for any possible $b$. We show in Theorem \ref{nonempty2} that each such
Gorenstein sequence must satisfy the \emph{SI
condition} that
$\mathbf{D}elta H_{\le j/2}$ is an O-sequence. This
condition was shown by R. Stanley, and
by D.~Buchsbaum and D. Eisenbud to characterize Gorenstein sequences of height
three (see \cite{BE,St,Hari2}).
When a Gorenstein sequence $H$ satisfies this condition
we can construct Artinian Gorenstein algebras, elements of $\mathbb{P}\mathrm{Gor}(H)$, as
quotients of the
coordinate ring of suitable punctual schemes, and we have good control
over their Betti numbers (Lemma \ref{sevcomp2AB}, Corollary \ref{relGor}). In
particular, when
$H=(1,4,7,h,\ldots )$ satisfies the {SI} condition and $7\le h\le 10$ we
may choose $A\in
\mathbb{P}\mathrm{Gor} (H)$ such that
$I_2$ has only two linear relations: thus
$A\notin \overline{{\mathfrak{C}}(H)}$, the locus where $I_2\cong
\lambdangle wx,wy,wz\rangle$, implying for most such Hilbert functions $H$ that
$\mathbb{P}\mathrm{Gor}(H)$ has at least two irreducible components (Theorems
\ref{sevcompt}, \ref{sevcomp2}). \varphiar Our first result is relevant also to the
open question of whether all height four Gorenstein sequences satisfy the
SI condition. Despite our positive result we doubt that this is true in
general (see Remark
\ref{SIconj}). We now set some notation. When $H$ is clear we usually write $h_i$
for
$H_i$ below. We set $\mathbf{D}elta
H_i=h_i-h_{i-1}$. By
$H_{i,i+1}$ we mean $(h_i,h_{i+1})$.
Given a Hilbert function $H_\mathfrak{Z}$, we define $\mathrm{Sym}(H_\mathfrak{Z},j)$ as the
symmetrization of
$(H_\mathfrak{Z})_{\le j/2}$ about $j/2$:
\mathfrak{b}egin{equation} \mathrm{Sym}(H_\mathfrak{Z},j)_i=
\mathfrak{b}egin{cases}
(H_\mathfrak{Z})_i \text{ if $i\le j/2$}\\
(H_\mathfrak{Z})_{j-i} \text{ if $i> j/2$}
\end{cases}.
\end{equation}
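For instance, for $H_\mathfrak{Z}=(1,4,7,h,b,b,\ldots )$ and $j=8$, as in the proof of
Lemma \ref{nonempty2a} below, we have $\mathrm{Sym}(H_\mathfrak{Z},8)=(1,4,7,h,b,h,7,4,1)$.\varphiar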
\mathfrak{b}egin{lemma}\lambdabel{nonempty2a}
Let $j\mathfrak{g}e 6$ and suppose that the Gorenstein sequence $H$ of socle degree
$j$ satisfies
\eqref{genhfe}. Then
$7\le h\le 11$. If $j\mathfrak{g}e 7$, then the minimum value of $b=H_4$ that can
occur is
$b=h$, and the maximum values of
$b$ that can occur in \eqref{genhfe} are
\mathfrak{b}egin{equation}\lambdabel{nonemptyeq}
\mathfrak{b}egin{array}{r|ccccc}
h & 7 & 8 & 9 & 10 & 11\\
b_{\mathrm{max}} & 7&9&11&13&16
\end{array}
\end{equation}
Equivalently, a Gorenstein sequence $H$ satisfying \eqref{genhfe} must
satisfy $\mathbf{D}elta H_{\le 4}$ is an
$O$-sequence.
Also, each initial sequence $(1,4,7,h,b)$ satisfying $7\le h\le 11$ and
$h\le b\le b_\mathrm{max} $ occurs for
$j=8$. \varphiar
Finally, if $H$ satisfies \eqref{genhfe} and $j\mathfrak{g}e 6,h\le 10$ then $\mathbf{D}elta
H_{\le j/2} $ is an $O$-sequence if and only if its subsequence $\mathbf{D}elta
H_{1\le i\le
j/2}=(3,3,h-7,b-h,\ldots )$ is both nonnegative and nonincreasing.
\end{lemma}
\mathfrak{b}egin{proof}
We showed $7\le h\le 11$ in Corollary \ref{nonempty1}. We
now show the upper bounds
$b\le b_\mathrm{max}$ of \eqref{nonemptyeq}. When
$h=11$, the upper bound of \eqref{nonemptyeq} is just the Macaulay upper
bound. When $h=10$, the
impossibility of
$(h,b)=(10,15)$ follows from Corollary \ref{oseq}. The impossibility of
$(h,b)=(10,14)$
follows from two considerations.
First, by Theorem~\ref{V,W}
\eqref{V,Wiiia} and Theorem \ref{hfw2}\eqref{hfw2ii} $I_2$ cannot be
$\mathrm{Pgl}(3)$-isomorphic
to $\lambdangle wx,wy,wz\rangle
$ or
$\lambdangle w^2,wx,wz\rangle$, as $H'=H-(0,1,1,\ldots ,1,0)=(1,3,6,9,13,\ldots
)$ is not a
height three Gorenstein sequence, since $\mathbf{D}elta H'_{\le
j/2}=(1,2,3,3,4,\ldots )$ is not an O-sequence in
two variables \cite{BE,D}. Thus $I_2$ cannot have a common factor, so has
two linear relations. By
Lemma
\ref{netsq}
\eqref{netsqi} $I_2$ has a basis given by the $2\times 2$ minors of a
$2\times 3$ matrix; since $I_2$
has no common factor, the quotient $R/(I_2)$ has height two, $I_2$ is
determinantal
and has the usual determinantal minimal resolution. In
particular we have
$H(R/(I_2))_i=3i+1$, for all $i\mathfrak{g}e 0$, so as before
$H(R/I)_4\le H(R/(I_2))_4=13$. \varphiar
When $h=8$ or $9$ the upper
bound of
\eqref{nonemptyeq} is one less than the Macaulay upper bound. The
impossibility of the Macaulay upper bound
for $H(R/I)_{3,4}$ in the cases $h=8,9$ follow from
Lemma~\ref{Hrest}\eqref{Hrestiii}. When $h=7$,
the upper bound $b\le 7$ is shown in the $h=7$ case of the proof of
Theorem \ref{nonempty2} below. This completes the proof of the upper
bounds $b\le b_\mathrm{max}$ of \eqref{nonemptyeq}.
\varphiar We next show the lower bound on $b$: when $j\mathfrak{g}e 7$, then $b\mathfrak{g}e h$.
Evidently, when $j=7$, the symmetry
of
$H$ implies
$b=h$, so we may assume
$j\mathfrak{g}e 8$. The symmetry of $H$ implies $(H_{j-4,j-3})=(b,h)$. The Macaulay
Theorem
\ref{MacGo}
\eqref{Macgrowth} applied to $(H_{j-4,j-3})$ eliminates all
triples
$(j,h,b)$ where
$b\le h-2$ except the triple $(j,h,b)=(8,11,9)$. For this triple
$H_{4,5}=(b,h)=(9,11)$ is extremal
growth as
$9^{(4)}=11$; then we have a contradiction by Corollary \ref{oseq}.\varphiar
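Here and later, Macaulay bounds such as $9^{(4)}=11$ follow from the usual
binomial expansion (cf.\ Theorem \ref{MacGo}): $9=\mathfrak{b}inom{5}{4}+\mathfrak{b}inom{4}{3}$,
so $9^{(4)}=\mathfrak{b}inom{6}{5}+\mathfrak{b}inom{5}{4}=11$.\varphiar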
We now assume $j\mathfrak{g}e 8$ and $b=h-1$. We have $h\neq 11$ by Theorem \ref{V,W}
\eqref{V,Wiiia} and Theorem \ref{hfw2}. Since in Macaulay's inequality of
Theorem
\ref{MacGo}\eqref{Macgrowth} $b^{(d)}=b$ when $b\le d$, and $h_{j-4}=b,
h_{j-3}=b+1$ we must have
$b>j-4$, so $h\mathfrak{g}e j-2$. Except for the triples $(j,h,b)=(8,10,9)$ or
$(8,11,10)$, then
$H_{j-4,j-3}$ has extremal Macaulay growth, a contradiction by Corollary
\ref{oseq}. The second
triple has
$h=11$, already ruled out. The first triple occurs only for
$H=(1,4,7,10,9,10,7,4,1)$
where $\mathbf{D}elta^4 H_6=-12$; by symmetry of the minimal resolution of $R/I$,
the number of degree six
generators of $I$ satisfies
$\nu_6(I)\mathfrak{g}e 6$, implying that
$H(R/(I_5))_{5,6}=(10,13)$, contradicting the Macaulay bound which requires
$H(R/(I_5))_6\le
10^{(6)}=11$. This completes the proof of the lower bound on $b$, that
$h\le b$ in \eqref{genhfe}.\varphiar
It is easy to see that these bounds are just the condition that $\mathbf{D}elta
H_{\le 4}$ be an $O$-sequence,
as claimed.
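For instance, for the extremal pairs of \eqref{nonemptyeq} one has
$\mathbf{D}elta H_{\le 4}=(1,3,3,2,2)$ when $(h,b)=(9,11)$, and
$\mathbf{D}elta H_{\le 4}=(1,3,3,4,5)$ when $(h,b)=(11,16)$; each is an $O$-sequence.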
\varphiar That
each extremal pair
$(h,b)$ satisfying $h\le b\le b_\mathrm{max}$ from \eqref{nonemptyeq} occurs in
socle degree 8 can be shown by
choosing the ring $A$ to be a general enough socle-degree 8 Artinian
Gorenstein quotient of the
coordinate ring of any smooth punctual scheme of degree
$b$, having Hilbert function $H_\mathfrak{Z}=(1,4,7,h,b,b,\ldots )$. Since $b\mathfrak{g}e h$,
$\mathbf{D}elta H_\mathfrak{Z}$ is an
$O$-sequence and there are Artinian algebras of Hilbert function $\mathbf{D}elta
H_\mathfrak{Z}$; then there is a smooth punctual
scheme of Hilbert function $H_\mathfrak{Z}$, by the result of P. Maroscia
\cite{Mar,GMR,MiN}. That the general socle-degree $j$ GA quotient of
$\mathrm{B}amma (\mathfrak{Z},{\mathcal{O}_\mathfrak{Z}})$ has
the expected symmetrized Hilbert function $H=\mathrm{Sym}(H_\mathfrak{Z},j)$ satisfying
$(\mathrm{Sym}(H_\mathfrak{Z},j))_i=(H_\mathfrak{Z})_i$ for
$i\le j/2$, is well known: see
\cite{Bj1,MiN}\cite[Lemma 6.1]{IK}.\varphiar
The last statement of Lemma \ref{nonempty2a}, that for $j\mathfrak{g}e 6$ and $h\le 10$ the condition
that $\mathbf{D}elta H_{\le j/2}$ is an
$O$-sequence is equivalent to
$\mathbf{D}elta H_{2\le i\le j/2}$ being non-negative and non-increasing, follows
from $\mathbf{D}elta H=(1,3,3,h-7,\ldots )$, with $h-7\le 3$: by Macaulay's
inequality Theorem \ref{MacGo}\eqref{Macgrowth}, we have for any
$O$-sequence $T$ that $t_i\le i$ implies $t_{i+1}\le t_i$.
\end{proof}
\mathfrak{b}egin{theorem}\lambdabel{nonempty2}
Every Gorenstein sequence $H$ beginning $H=(1,4,7,\ldots )$
satisfies the condition,
$\mathbf{D}elta H_{\le j/2}$ is an O-sequence. \varphiar
\end{theorem}
\mathfrak{b}egin{proof} We assume $H=H(R/I)$ for an Artinian Gorenstein quotient
$R/I$ satisfies
\eqref{genhfe} that $H=(1,4,7,h,b,\ldots )$ and consider each value of
$h$ in turn. We show that each occurring sequence $H$ satisfies the
criterion from Lemma
\ref{nonempty2a} for
$\mathbf{D}elta H$ to be an
$O$-sequence.\varphiar {\mathfrak{b}f Case}
$h=7$. We have $H(R/I)_{j-3,j-2}=
H(R/I)_{2,3}=(7,7)$; if $j\mathfrak{g}e 10$ then $H$ is extremal in
degrees $j-3$ to $j-2$, and we have that
$\mathfrak{Z}=\mathrm{Proj}\ (R/(I_{j-3}))$ is a degree-7 punctual scheme satisfying, by Lemma
\ref{upbd},
$(H_\mathfrak{Z})_i=7$ for all $i\mathfrak{g}e 3$: by Corollary \ref{include}, we have
$H(R/I)_i=7$ for $3\le i\le
j-2$. So we may assume that $j=8$ or $9$. We have
$b\le 7^{(3)}= 9$. Should
$b=9$ then $\mathrm{Proj}\ (R/(I_3))$ would
define a degree-2 curve of genus zero and regularity two, so its Hilbert
function would satisfy
$H(R/(I_\mathfrak{Z}))_2\le 5$, by Corollary \ref{include} contradicting
$H(R/I)_2=7$. We now suppose that $h=7,b=8$, and suppose the socle degree
$j=8$ or $9$. When $j=8$, $H=(1,4,7,7,8,7,7,4,1)$, since $\mathbf{D}elta^4
H_5=-7$, the ideal
$I$ has
$\nu_5$ generators (first syzygies) and $\mu_5$ third syzygies in degree
5, with $7\le\nu_5+\mu_5$; by symmetry of the minimal resolution
$\nu_7=\mu_5$ and
$\mu_7=\nu_5$; Thus we have either $\nu_5\mathfrak{g}e 3$ or $\nu_7\mathfrak{g}e
4$; but $\nu_5\le 2$ and $\nu_7\le
4$ by Macaulay's Theorem \ref{MacGo}. If $\nu_7=4$ then the ideal $(I_{\le
6})$ would satisfy $H(R/(I_{\le
6}))_{6,7}=(7,8)$ of extremal growth, a contradiction with $\mathbf{D}elta H_3=0$, by
Corollary \ref{oseq} and
Lemma
\ref{nonempty2a}. For $j=9$ we would have similarly $\mathbf{D}elta^4 H_5=-6$, so
$\nu_5+\nu_8\mathfrak{g}e 6$, but
$\nu_5\le 2$, and when $\nu_8=4$ we'd have $H(R/(I_{\le
7}))_{7,8}=(7,8)$, and a similar contradiction.
We have shown that a Gorenstein sequence beginning
$(1,4,7,7)$ continues with a subsequence of $7's$ followed by $(4,1)$.
\varphiar {\mathfrak{b}f Case} $h=8$. Macaulay extremality shows $h_i\le i+5$ and $\mathbf{D}elta
H_{i+1}\le 1$ for $i\mathfrak{g}e
3$. Suppose by way of contradiction that
$\mathbf{D}elta H_i<0$, for some $i\le j/2$ (this is equivalent to $H$ being
non-unimodal). Letting
$i'=j-i$, we have by the symmetry of
$H$ that
$h_{i'+1}=h_{i'}+1=h_{i-1}\le i-1+5\le i'+4$; it follows from Theorem
\ref{MacGo} that either this
is impossible (when $h_{i'}\le i'$) or
$H$ is extremal in degrees $i'$ to $i'+1$, a contradiction by Corollary
\ref{oseq} and Lemma
\ref{nonempty2a}. Now suppose
$4\le i<k,\mathbf{D}elta H_{i}=0$ but
$\mathbf{D}elta H_{k}=1$. Then $H_k=(H_{k-1})^{(k-1)}$, and we have a contradiction
by Corollary
\ref{oseq} and Lemma \ref{nonempty2a}. It follows that
$H$ satisfies,
$\mathbf{D}elta H_i, 2\le i\le j/2$ is nonnegative
and nonincreasing, thus $\mathbf{D}elta H_{\le j/2}$ is an $O$-sequence.\varphiar
{\mathfrak{b}f Case $h=9$}. Lemma \ref{Hrest}\eqref{Hrestiii} implies that $h_4\le
11$; applying Macaulay extremality
inductively we have for $i\mathfrak{g}e 4$ that
$h_i\le 2i+3$ and $\mathbf{D}elta H_i\le 2$. Suppose by way of contradiction that
$\mathbf{D}elta H_i<0$, for some $i\le j/2$; then $h_i=i+a$ with $a\le i$. We now
use
the symmetry of $H$ about $j/2$. Letting
$i'=j-i$, we have
$h_{i'}=i+a=i'+a',a'=a-(i'-i)$; since $a'<a<i'$ we must have $h_{i'}\le
2i'$ whence $h_{i'+1}\le
h_{i'}+1$ by the Macaulay Theorem
\ref{MacGo}\eqref{Macgrowth}, so $\mathbf{D}elta H_{i'+1}=-\mathbf{D}elta H_i=1$, and
$h_{i'+1}=h_{i'}+1$ is extremal, a contradiction by Corollary \ref{oseq}
and Lemma
\ref{nonempty2a}.\varphiar Now suppose that for some
$i\le j/2$ we have $\mathbf{D}elta H_{i-1}=1$, but
$\mathbf{D}elta H_i=2$: then by Theorem~\ref{MacGo} we would have $\mathrm{Proj}\ (R/(I_i))$
defines a degree 2
curve union some points, of Hilbert polynomial $2t+a,a\le 2$, of
regularity degree at most 3 by Corollary \ref{regdeg}, hence by Lemma
\ref{upbd} and
Corollary~\ref{include} we would have $h_3\le 8$, a contradiction. \varphiar
Finally, suppose that for
some $i\le j/2$ we have
$\mathbf{D}elta H_{i-1}=0$ but
$\mathbf{D}elta H_i>0$. By Corollary \ref{oseq} we have $\mathbf{D}elta H_i\not= 2$, so
$\mathbf{D}elta H_i=1$. If
also there is a previous $u, 4\le u\le i-2$ with $\mathbf{D}elta H_u<2$ then
$h_i\le 2i$, implying
that $H_i=(H_{i-1})^{(i-1)}$, a contradiction by Corollary \ref{oseq}. Thus
to complete
the case $h=9$, we need only
consider sequences
\mathfrak{b}egin{equation}\lambdabel{h=9lastcase}
H=(1,4,7,9,\ldots ,h_u=2u+3,\ldots ,h_{i-2}=h_{i-1}=2i-1, h_i=2i,\ldots ,7,4,1)
\end{equation}
with possible consecutive
repetition of the maximum value $2i$. We have $\mathbf{D}elta^4
H_{i+1}=-5$ if $h_{i+1}=h_i$, and
$-6$ if $j=2i$ so $h_{i+1}=h_i-1$. In either case, we obtain
$\nu_{i+1}+\nu_{j+3-i}\mathfrak{g}e 5$. This is
impossible since on the one hand $\nu_{j+3-i}\mathfrak{g}e 3$ would
imply that
$H(R/(I_{j+2-i}))_i=h_{i-2}=2i-1, H(R/(I_i))_{j+3-i}=h_{i-3}+3=2i-3+3=2i$,
which is extremal growth of $H$, a contradiction by Corollary \ref{oseq}.
On the other hand if
$\nu_{i+1}\mathfrak{g}e 1$ when
$h_{i+1}=h_i$, or if
$\nu_{i+1}\mathfrak{g}e 2$ when
$h_{i+1}=h_i-1$ we would have
$H(R/I)_i=2i,H(R/(I_i))_{i+1}=2i+1 $ implying
extremal growth, a contradiction with \eqref{h=9lastcase} by Corollary
\ref{oseq}.
This completes the proof that $\mathbf{D}elta H$ is an
$O$-sequence when $h=9$.\varphiar {\mathfrak{b}f Case }$h=10$. By Lemma \ref{nonempty2a}
$h_4\le 13$; also
when $I_2$ has a common factor Theorems \ref{V,W}\eqref{V,Wiiia} and
Theorem
\ref{hfw2} show that $\mathbf{D}elta H_{\le j/2}$ is an $O$-sequence.
We suppose henceforth in our analysis of $h=10$ that
$I_2$ does not have a common factor. Then by Lemma
\ref{netsq}\eqref{netsqi}
$I_2$ defines a rational normal curve, satisfying
$H(R/(I_2))_t=3t+1$ for all $t\mathfrak{g}e 0$.
Notice also that if $H(R/I)_t\le
3t-1$, and $t\mathfrak{g}e 4$, then the Macaulay inequality Theorem
\ref{MacGo}\eqref{Macgrowth} implies $\mathbf{D}elta H(R/I)_{t+1}\le
2$. We next rule out various perturbations in the Hilbert
function sequence.
\varphiar First, $\mathbf{D}elta H_{i+1}\le -2$ for some $i<j/2$ is impossible from the
Macaulay bound and the symmetry of
$H$. We would have $\mathbf{D}elta H_{i'+1}\mathfrak{g}e 2$ for
$i'=j-i-1\mathfrak{g}e i+1$;
then letting
$h_i=3i+1-e, e\mathfrak{g}e 0$ we have
$h_{i'}=h_{i+1}\le h_i-2= 3i-(e+1)=2i+(i-e-1)=2i'+ b, b\le i-e-3$; thus, the
Macaulay bound here implies $\mathbf{D}elta H_{i'+1}\le 2$, so there is equality
$\mathbf{D}elta H_{i'+1}=2$,
a contradiction by Corollary \ref{oseq}. Also $\mathbf{D}elta H_{i+1}=-1 $ for some
$i<j/2$,
and
$j> 5i+e$, is impossible by a similar calculation that $\mathbf{D}elta H_{i'+1}=1$
the maximum possible, again a contradiction by Corollary
\ref{oseq}. \varphiar
Suppose $\mathbf{D}elta H_{i+1}=-1 $ with $i\le j/2-1$ and no restriction on $j$;
suppose that $i$ is the
maximum such integer. Letting
$ c=h_{i+1}$ we write the consecutive subsequence
$(h_{i-1},\ldots , h_{i+3})$ as
\mathfrak{b}egin{equation}\lambdabel{h=10,dec1}
(a+c,1+c,c,1-\alphapha+c, b+c).
\end{equation}
Then $\nu_{i+3}(I)+\nu_{j+5-i}(I)\mathfrak{g}e -\mathbf{D}elta^4 H_{i+3}=-\mathbf{D}elta^4
H_{j+5-i}=8-a-b-4\alphapha$. We have
\mathfrak{b}egin{align*}H(R/(I_{i+2}))_{i+2,i+3}&=(1-\alphapha+c,b+c+\nu_{i+3})\mathfrak{h}box{
and }\\
H(R/(I_{j+4-i}))_{j+4-i,j+5-i}&=(1+c,a+c+\nu_{j+5-i}).
\end{align*}
Thus the sum $\delta +\delta', \delta =\mathbf{D}elta H(R/(I_{i+2}))_{i+3},\delta
'=\mathbf{D}elta H(R/(I_{j+4-i}))_{j+5-i}$ satisfies
\mathfrak{b}egin{equation*}
\delta +\delta '=(b+\nu_{i+3}+\alphapha-1)+(a+\nu_{j+5-i}-1)\mathfrak{g}e 6-3\alphapha .
\end{equation*}
So
if $\alphapha\le 1$ at least one of $\delta ,\delta '$ is two, and the
corresponding
Hilbert function has extremal growth of two, a contradiction by Corollary
\ref{oseq}. If $\alphapha
=2$, then $i+1\le j/2-1$ (by the symmetry of $H$), and $\mathbf{D}elta H_{i+2}=-1$,
contradicting the
assumption on
$i$; and
$\alphapha\mathfrak{g}e 3$ has already been ruled out. We have shown $\mathbf{D}elta H_{i+1}=-1$
for $i\le j/2-1$
is impossible.\varphiar We cannot have both $\mathbf{D}elta H_u\le 2$ and $\mathbf{D}elta
H_{i+1}=3$ for a pair $u,i$
satisfying
$u<i<j/2$, since then
$h_i\le 3i$. This is possible only if $h_i=3i$ and $h_{i+1}=h_i^{(i)}$,
a contradiction by
Corollary~\ref{oseq}. We cannot have both $\mathbf{D}elta H_u\le 1$ and $\mathbf{D}elta
H_{i+1}=2$ for $u<i<j/2$,
since then
$h_i= 3i-1-e,e\mathfrak{g}e 0$, and $H_{i,i+1}$ is extremal, again a contradiction by
Corollary \ref{oseq}.
\varphiar
Suppose that for some $i, 2\le i\le j/2-1$, we have $\mathbf{D}elta H_i=0$, but
$\mathbf{D}elta H_{i+1}=1$.
Then, letting $ c=h_i$ the consecutive subsequence
$(h_{i-2},\ldots , h_{i+2})$ is
\mathfrak{b}egin{equation}\lambdabel{h=10,delta0,1}
(a+c,c,c,1+c, b+c).
\end{equation}
Then $\nu_{i+2}(I)+\nu_{j+6-i}(I)\mathfrak{g}e -\mathbf{D}elta^4 h_{i+2}=-\mathbf{D}elta^4
h_{j+6-i}=4-(b+a)$. It follows that the sum
$\mathbf{D}elta H(R/(I_{i+2}))_{i+3}+\mathbf{D}elta
H(R/(I_{j+5-i}))_{j+6-i}=a+b-1+4-(a+b)=3$, hence one of the two
differences is at least two, which is here extremal growth, since
$H_{i+2}\le 3(i+2)$
and similarly $H_{j+5-i}\le 3(j+5-i)$. Then Corollary
\ref{oseq} implies a contradiction with
\eqref{h=10,delta0,1}.\varphiar
This completes the proof in the case $h=10$.\varphiar
{\mathfrak{b}f Case }$h=11$. In this case $I_2$ must have a common linear factor.
Theorem \ref{V,W}
\eqref{V,Wiiia} for $I_2\cong \lambdangle wx,wy,wz\rangle$ and Theorem
\ref{hfw2} for $I_2\cong \lambdangle w^2,wy,wz\rangle$ show that
$H=H'+(0,1,1,\ldots ,1,0)$, which
implies that $\mathbf{D}elta H_{\le j/2}$ is an $O$-sequence. \varphiar
This completes the proof of the Theorem.
\end{proof}\varphiar
For $H$ satisfying \eqref{genhfe}, recall that we denote by $\mathfrak
{C}(H)\subset
\mathbb{P}\mathrm{Gor}(H)$ the subfamily parametrizing ideals $I$ such that $I_2\cong
\mathfrak{V}=\lambdangle
wx,wy,wz\rangle$, up to a coordinate change. By Theorem
\ref{V,W}\eqref{V,Wii} we have that
$\mathfrak {C}(H)$ is nonempty if and only if $\mathbb{P}\mathrm{Gor}(H')$ is nonempty, where
$H'=(1,3,6,h-1,b-1,\ldots , 3,1)$.
\mathfrak{b}egin{cor}\lambdabel{nonempty3} Let $H=(1,4,7,\ldots )$. The following are
equivalent.
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{nonempty3i} The sequence $H$ is a Gorenstein
sequence.
\item\lambdabel{nonempty3ii} The sequence $\mathbf{D}elta H_{\le j/2}$ is an O-sequence.
\item\lambdabel{nonempty3iii} The sequence $H'=H-(0,1,1,\ldots ,1,0)$ is a
height three Gorenstein sequence.
\item\lambdabel{nonempty3iv}
$\mathbf{D}elta H'_{\le j/2}$ is an O-sequence.
\item\lambdabel{nonempty3v} $\mathbf{D}elta H'_{\le j/2}=(1,2,3,\ldots
,i+1,h_v,h_{v+1},\ldots )$ with $i+1\mathfrak{g}e h_v\mathfrak{g}e
h_{v+1}\mathfrak{g}e \ldots $.
\end{enumerate}
Under this assumption, the subfamily
$\mathfrak {C}(H)\subset
\mathbb{P}\mathrm{Gor}(H)$ is always nonempty.
\end{cor}
\mathfrak{b}egin{proof}
That \eqref{nonempty3i} is equivalent to \eqref{nonempty3ii} is Theorem
\ref{nonempty2}. That
\eqref{nonempty3ii} is equivalent to
\eqref{nonempty3iv} is immediate from the last statement of Lemma
\ref{nonempty2a}, and an easy verification when $H=(1,4,7,11,\ldots )$. That
\eqref{nonempty3iii} is equivalent
to \eqref{nonempty3iv} follows from the Buchsbaum-Eisenbud structure theorem
\cite{BE,St}. That specific criterion
\eqref{nonempty3iv} is equivalent to \eqref{nonempty3v} is well known ---
see for example
\cite[Theorem 5.25,Corollary C6]{IKl}. That
$\mathfrak {C}(H)$ is always nonempty when $H$ satisfies these
conditions follows from Theorem
\ref{V,W} and
\eqref{nonempty3iii}.
\end{proof}\varphiar
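For instance, $H=(1,4,7,8,8,7,4,1)$ satisfies \eqref{nonempty3ii}, as
$\mathbf{D}elta H_{\le 3}=(1,3,3,1)$; here $H'=(1,3,6,7,7,6,3,1)$ and
$\mathbf{D}elta H'_{\le 3}=(1,2,3,1)$, of the form required in \eqref{nonempty3v}.\varphiar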
The following result handles height four Gorenstein sequences below those
considered in Theorem
\ref{nonempty2}.
\mathfrak{b}egin{proposition}\lambdabel{aless7} A symmetric sequence $H=(1,4,a,\ldots
,4,1), a\le 6$ of socle degree
$j$ is a Gorenstein sequence if and only if
$\mathbf{D}elta H_{\le j/2}$ is an O-sequence, or, equivalently, if
$\mathbf{D}elta H_{\le j/2}$ is nonincreasing from the first degree in which it fails to increase. The values
$a=2,3$ cannot occur.
\end{proposition}
\mathfrak{b}egin{proof} When $a=6$, the value $h_3=10$, the maximum under Macaulay's
theorem, would imply $h_1=3$ by Corollary \ref{oseq}. Assume $H=H(A)$ for
an Artinian
Gorenstein $A=R/I$ and let $\alphapha_i$ denote the
number of relations (first syzygies) in degree $i$. When
$a=6$ and
$h_3=9,h_4=b$, then the fourth differences of $H$ satisfy
$\mathbf{D}elta^4(H)_4=\mathbf{D}elta^4(H)_j=b-15$, so by the symmetry of the
minimal resolution of
$A$ we have $\alphapha_4+\alphapha_j\mathfrak{g}e 15-b$. Since $\mathbf{D}elta
H(R/(I_{j-1}))_j=\alphapha_j-3$ and $j-1\mathfrak{g}e 5$,
the Macaulay bound implies that growth from $h_{j-1}=4$
to $H(R/(I_{j-1}))_j=1+\alphapha_j$ would be maximal when $\alphapha_j=3$. But
$\alphapha_j=3$ is impossible by Corollary \ref{oseq}.
Thus $\alphapha_j\le 2$, which implies $\alphapha_4\mathfrak{g}e
13-b$; thus $H(R/(I_3))_4\mathfrak{g}e
b+13-b=13$, contradicting the Macaulay bound of
$9^{(3)}=12$. We have shown
$H=(1,4,6,9,\ldots )$ to be impossible. Establishing the result for
$H=(1,4,6,b,\ldots )$ with $b\le 8$ is
relatively simple, requiring only Theorem \ref{MacGo} and Corollary
\ref{include} without using the
symmetry of the minimal resolution: we leave this to the reader. \varphiar
When $a=5$, then the Macaulay bound gives $h_3\le 7$; and
$H=(1,4,5,7,b,\ldots )$ is not possible by Corollary \ref{oseq}.
\varphiar
The remaining cases are simpler, and we leave them as an exercise. Note
that $a=2,3$ are impossible,
since by the symmetry of $H$, we would have $h_{j-2}=a$ and $h_{j-1}=4$:
however, the
Macaulay bound gives $a^{(j-2)}\le a$ when $a\le j-2$, and here $j-2\mathfrak{g}e 4$.
\end{proof}
\mathfrak{b}egin{remark}\lambdabel{SIconj} {\sc Do height four Gorenstein sequences
satisfy $\mathbf{D}elta H_{\le j/2}$ is
an $O$-sequence?} The height four Gorenstein sequences of the form
$H=(1,4,7,\ldots )$ are probably close to an upper
bound of those which may be shown to satisfy the condition $\mathbf{D}elta
H_{\le j/2}$ is an
$O$-sequence, by the kind of arguments we have used for Theorem
\ref{nonempty2}. Notice that we were not able to rule out the nonoccurring
sequence
$H=(1,4,7,10,14,10,7,4,1)$ by a simple application of Macaulay bounds
and the Gotzmann
method of Theorem \ref{MacGo}, together with calculation of $\mathbf{D}elta^4 H$.
Rather, we needed to use
Lemma \ref{netsq}, which involves the twisted cubic. Likewise, in
proving other parts of Theorem~\ref{nonempty2}, we use at times detailed
information about low degree curves in
$\mathbb P^3$.
\varphiar Thus we are inclined to conjecture that there are height four
Gorenstein sequences that do not
satisfy the condition that $\mathbf{D}elta H_{\le j/2}$ is an
$O$-sequence.
\end{remark}
Recall that we denote by
$\nu_i(J)$ the number of degree-$i$ generators of the ideal $J$. The next
result
follows from Theorems \ref{sevcomp} and \ref{nonempty2}. Recall that the
socle degree of $H$ is the highest $j$
such that
$h_j\not=0$.
\mathfrak{b}egin{theorem}\lambdabel{sevcompt} Assume that the Gorenstein sequence $H$
satisfies $H=(1,4,7,h,b,\ldots ,4,1)$, of socle degree $j\mathfrak{g}e 6$, where
$h,b$ are arbitrary integers satisfying the necessary restrictions of
Lemma~\ref{nonempty2a}.
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{sevcompti} The dimension of the tangent space
$\mathcal T_I$ to $\mathbb{P}\mathrm{Gor}(H)$ at a general element $I$ of\linebreak
$\mathfrak{ C}(H)\subset
\mathbb{P}\mathrm{Gor}(H)$ satisfies,
\mathfrak{b}egin{equation}\lambdabel{tangspe2}
\dim_K \mathcal{T}_I=\dim \mathfrak{C}(H) +1+\nu_{j-1}(J)
\end{equation}
where $J$ is a generic element of $\mathbb{P}\mathrm{Gor}(H'), H'=(1,3,6,h-1,b-1,
\ldots, h-1,6,3,1)$.
\item\lambdabel{sevcomptii} When $j\mathfrak{g}e 6$, the Zariski closure
$\overline{\mathfrak{C}(H)}$ is a
generically smooth irreducible component of $\mathbb{P}\mathrm{Gor}(H)$ when the following equivalent conditions hold:
\mathfrak{b}egin{enumerate}
\item\lambdabel{sevcomptiia} $\nu_{j-1}(J)=0$ for $J$ generic in $\mathbb{P}\mathrm{Gor}(H')$;
\item\lambdabel{sevcomptiib} a generic $J\in\mathbb{P}\mathrm{Gor}(H')$ has no degree-4
relations;
\item\lambdabel{sevcomptiic} $3h-b-17\mathfrak{g}e 0$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\mathfrak{b}egin{proof} Here \eqref{sevcompti} follows immediately from Theorem
\ref{sevcomp}
\eqref{sevcompi},\eqref{sevcompii}. This shows
\eqref{sevcomptiia}; by the symmetry of the minimal resolution of $J$,
\eqref{sevcomptiia} is equivalent to \eqref{sevcomptiib}. The third difference
satisfies $(\mathbf{D}elta^3 H')_4=17+b-3h$, and under the assumption $j\mathfrak{g}e 6$, it
gives, when
positive, the number of degree-4 relations --- the linear relations among those
generators of $J$ having degree 3; when 0 or negative there are no such
relations. This
completes the proof of the equivalence of \eqref{sevcomptiib} and
\eqref{sevcomptiic}.
\end{proof}\varphiar
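For instance, when $(h,b)=(10,13)$ the criterion \eqref{sevcomptiic} reads
$3h-b-17=30-13-17=0\mathfrak{g}e 0$, while for $(h,b)=(8,9)$ it fails, since
$3\cdot 8-9-17=-2<0$.\varphiar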
We now show that there are monomial ideals in $R'=K[x,y,z]$, having certain
Hilbert functions $T'$ and
having a small number of generators. This prepares a key step for
Theorem
\ref{sevcomp2}. We consider Hilbert functions of the form
$T'=(1,3,3,\ldots , 2_a,\ldots ,1_c,\ldots ,0,\ldots )$ where degree $a$ is
the first degree in which
$T'_a<3$, and $c$ is the first degree $c\mathfrak{g}e 3$ in which $T'_c\le 1$, and
$d$ is the first positive degree in
which $T'_d=0$: we allow equalities among $a,c,d$, so if $a=c=4,d=5$,
$T'=(1,3,3,3,1,0,\ldots )$. The
following result is easy to verify.
\mathfrak{b}egin{lemma}\lambdabel{sevcomp2AB} \mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{sevcomp2A} The Artinian algebra $A=R'/J_{a,c,d},
J_{a,c,d}=(xy,xz,yz,x^a,y^c,z^d), 3\le a\le c\le
d$ has Hilbert function
$T'(a,c,d)=(1,3,3,\ldots ,2_a,\ldots ,1_c,\ldots ,0_d,\ldots )$ in the
sense above. \varphiar
\item\lambdabel{sevcomp2B} The Artinian algebra $A=R'/K_{a,c},
K_{a,c}=(x^2,xy,z^2,y^{a-1}z,y^c),\ 3\le a\le c$ has
Hilbert function $T'(a,c)=(1,3,3,2,\ldots ,1_a,\ldots ,0_c,\ldots )$.
\end{enumerate}
\end{lemma}
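For instance, $J_{3,4,5}=(xy,xz,yz,x^3,y^4,z^5)$ has quotient spanned by $1$
together with the pure powers $x^i\ (i<3)$, $y^i\ (i<4)$, $z^i\ (i<5)$, so that
$T'(3,4,5)=(1,3,3,2,1,0,\ldots )$.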
\mathfrak{b}egin{corollary}{\sc{Artinian Gorenstein algebras with related minimal
resolution.}}\lambdabel{relGor}
\mathfrak{b}egin{enumerate}[i.]
\item (P. Maroscia)
\cite{Mar,GMR,IK,MiN} Let
$s=\sum_{i\mathfrak{g}e 0} T'(a,c,d)_i$, or
$\sum _{i\mathfrak{g}e0}T'(a,c)_i$, respectively. Then there are smooth degree-$s$
punctual schemes $\mathfrak{Z}=\mathfrak{Z} (a,c,d)\subset
\mathbb P^3$ or $\mathfrak{Z}=\mathfrak{Z} (a,c)\subset \mathbb P^3$, respectively, whose
coordinate rings have the same minimal
resolutions as the Artinian algebras defined by $J_{a,c,d}$ or $K_{a,c}$,
respectively.
\varphiar
\item (M. Boij)
\cite{Bj1}\lambdabel{goodcomp}
Furthermore, let $j\mathfrak{g}e 2d$, or $j\mathfrak{g}e 2c$, respectively, and let
$A=A(a,c,d,j,F)$ or $A=A(a,c,j,F)$,
respectively, denote a general enough GA quotient of $\mathcal O_\mathfrak{Z},
\mathfrak{Z}=\mathfrak{Z}(a,c,d)$ or $\mathfrak{Z}=\mathfrak{Z}(a,c)$
having socle degree $j$, defined by
$A=R/\mathrm{Ann}\ (F), F\in (I_\mathfrak{Z})_j^\varphierp $. The minimal resolution of
$A$ agrees with that of the corresponding coordinate ring $\mathcal O_\mathfrak{Z}$
in degrees up to $j/2$.
\end{enumerate}
\end{corollary}
\mathfrak{b}egin{proof} P. Maroscia's well known result deforms a given monomial
ideal defining an Artinian
algebra to a graded ideal defining a smooth punctual scheme $\mathfrak{Z}$, and
having the same minimal
resolution. M.~Boij showed that a general enough GA quotient of $\mathfrak{Z}$ has a
related minimal
resolution.
\end{proof}
\mathfrak{b}egin{theorem}\lambdabel{sevcomp2}{\sc Families $\mathbb{P}\mathrm{Gor}(H)$ with several
components.}
\mathfrak{b}egin{enumerate}[i.]
\item\lambdabel{sevcomp2A} Assume that $H$ is a Gorenstein sequence of socle
degree $j\mathfrak{g}e 6$ satisfying
\eqref{genhfe}, namely $H=(1,4,7,h,b,\ldots )$ and that $h\le 10$. Then
there is a GA quotient of
the coordinate ring of a smooth punctual scheme $\mathfrak{Z}$ having Hilbert
function $H$, and
$H=\mathrm{Sym}(H_\mathfrak{Z},j)$.
\item\lambdabel{sevcomp2B} Assume further that
$3h-b-17\mathfrak{g}e 0$ and $8\le h\le 10$. Then $\mathbb{P}\mathrm{Gor}(H)$ has at least two
irreducible components, the
component
$\overline{\mathfrak {C}}$, and a component containing
suitable GA algebras
$A=A(a,c,d,j,F)$ or $A=A(a,c,j,F)$,
respectively that are quotients of the coordinate ring of smooth punctual
schemes.
\end{enumerate}
\end{theorem}
\mathfrak{b}egin{proof} Assume that $H=(1,4,7,h,b,\ldots ,1)$ has socle
degree $j\mathfrak{g}e 6$ and let $T'=\mathbf{D}elta H_{\le
j/2}$. By Theorem
\ref{nonempty2},
$T'$ is an $O$-sequence; since $h \le 10$ Lemma \ref{nonempty2a} implies
$T'$ satisfies
$T'=(1,3,3,h-7,b-h,\ldots )$, with
$h-7\le 3$, with $T'_{\le j/2}$ nonnegative, and nonincreasing in degrees
$\mathfrak{g}e 1$. Thus
$T'=
T'(a,c,d)$ or
$T'=T'(a',c')$ for suitable $(a,c,d)$ or $(a',c')$. Lemma \ref{sevcomp2AB}
and Corollary
\ref{relGor}\eqref{goodcomp}
imply that there is an Artinian Gorenstein algebra $A=R/I$ of Hilbert
function $H$, such that the
beginning of its minimal resolution is that of $R'/J_{a,c,d}$ or
$R'/K_{a',c'}$. In particular $I_2$
has at most two linear relations. Since one cannot specialize from a GA
algebra
$A=R/I\in
\overline{\mathfrak {C}(H)}$ where
$I_2$ has three linear relations, to a GA algebra $A=A(a,c,d,j,F)$ or
$A(a,c,j,F)$ where $I_2$ has at most
two linear relations, the claim of the theorem follows.
\end{proof}
\varphiar \noindent
{\mathfrak{b}f Acknowledgements}\varphiar
The authors thank Carol Chang, Dale
Cutkosky, Juan Migliore, Hal
Schenck, and Jerzy Weyman for helpful discussions; we are also grateful
to Joe Harris, who loaned us a
copy of Y.~A.~Lee's unpublished senior thesis, completed under his
direction. The authors thank
the referees for helpful comments, in particular one leading to our use of
Corollary \ref{oseq}.
\mathfrak{b}ibliographystyle{amsalpha}
\mathfrak{b}egin{thebibliography}{ACGHM}
\mathfrak{b}ibitem[Ba]{Ba}
Bayer D.: \emph{The Division algorithm and the Hilbert scheme},
thesis, (1982) Harvard U., Cambridge.
\mathfrak{b}ibitem[BeI]{BeI}
Bernstein D., Iarrobino A.: \emph{A nonunimodal graded Gorenstein
Artin algebra in codimension five}, Comm. in Algebra {\mathfrak{b}f 20} \# 8
(1992), 2323--2336.
\mathfrak{b}ibitem[Bo1]{Bj1}
Boij M.: \emph{{G}orenstein {A}rtin algebras and points in projective
space}, Bull. London Math. Soc. {\mathfrak{b}f 31} (1999), no. 1, 11--16.
\mathfrak{b}ibitem[Bo2]{Bj2}
Boij M. : \emph{Components of the space parametrizing graded
{G}orenstein {A}rtin algebras with a given {H}ilbert function},
Pacific J. Math. {\mathfrak{b}f 187} (1999), no. 1, 1--11.
\mathfrak{b}ibitem[Bo3]{Bj3}
Boij M. : \emph{Betti number strata of the space of codimension
three Gorenstein Artin algebras}, preprint, 2000.
\mathfrak{b}ibitem[BoL]{BjL}
Boij M. , Laksov, D.: \emph{Nonunimodality of graded Gorenstein
Artin algebras},
Proc. A.M.S. {\mathfrak{b}f 120} (1994), 1083--1092.
\mathfrak{b}ibitem[BrH]{BH}
Bruns W. , Herzog J.: \emph{Cohen-Macaulay Rings}, Cambridge Studies in
Advanced Mathematics \# 39,
Cambridge University Press, Cambridge, U.K., 1993; revised paperback
edition, 1998.
\mathfrak{b}ibitem[BuE1]{BE1}
Buchsbaum D., Eisenbud D.: \emph{What makes a complex exact},
J. Algebra {\mathfrak{b}f 25} (1973), 259--268.
\mathfrak{b}ibitem[BuE2]{BE}
Buchsbaum D. , Eisenbud D. : \emph{Algebra structures for finite free
resolutions, and some structure theorems for codimension three},
Amer. J. Math. {\mathfrak{b}f 99} (1977), 447--485.
\mathfrak{b}ibitem[ChoJ]{CJ}
Cho Y.~H., Jung B.: \emph{Dimension of the
tangent space of $\rm{Gor}(T)$}, The Curves Seminar at Queen's.
Vol. XII (Kingston, ON, 1998), Queen's Papers in Pure and Appl. Math.,
{\mathfrak{b}f 114}, Queen's Univ., Kingston, ON, (1998), 29--41.
\mathfrak{b}ibitem[CoV]{CV}
Conca A., Valla G.: \emph{Hilbert functions of powers of ideals of low
codimension}, Math. Z. 230 (1999), no. 4, 753--784.
\mathfrak{b}ibitem[Di]{D}
Diesel S. J.: \emph{Some irreducibility and dimension theorems for
families
of height 3 Gorenstein algebras}, Pacific J. Math. {\mathfrak{b}f 172} (1996),
365--397.
\mathfrak{b}ibitem[Ei]{Ei}
Eisenbud D.: \emph{Commutative algebra with a view toward algebraic
geometry}. Graduate Texts in Math. \# 150, Springer-Verlag, Berlin and New
York (1994).
\mathfrak{b}ibitem[GMR]{GMR}
Geramita A.V., Maroscia P., Roberts G.: \emph{The Hilbert function of a
reduced $k$-algebra}, J. London Math. Soc. (2), {\mathfrak{b}f 28} (1983), 443--452.
\mathfrak{b}ibitem[Go1]{Gotz}
Gotzmann G.: \emph{Eine Bedingung f\"{u}r die Flachheit und das
Hilbertpolynom eines graduierten Ringes}, Math. Z. {\mathfrak{b}f 158} (1978),
no. 1, 61--70.
\mathfrak{b}ibitem[GH]{GH}
Griffiths Ph., Harris J.: \emph{Principles of algebraic
geometry}, John Wiley and Sons, New York (1978).
\mathfrak{b}ibitem[Har]{Hari2}
Harima T. : \emph{A note on Artinian Gorenstein algebras of codimension
three}, J. Pure and Applied Algebra, {\mathfrak{b}f 135} (1999),
45--56.
\mathfrak{b}ibitem[HeTV]{HeTV}
Herzog J., Trung N.~V., Valla G.: \emph{On hyperplane sections of
reduced irreducible varieties of low codimension}, J. Math. Kyoto Univ.
{\mathfrak{b}f 34} (1994), no. 1, 47--72.
\mathfrak{b}ibitem[I]{I}
Iarrobino A.: \emph{Ancestor ideals of vector spaces of forms, and level
algebras}, J. Algebra 272 (2004), 530-580.
\mathfrak{b}ibitem[IK]{IK}
Iarrobino A. , Kanev V.: \emph{Power Sums, Gorenstein Algebras, and
Determinantal Loci}, 345+xxvii p. (1999) Springer Lecture Notes in
Mathematics \# 1721, Springer, Heidelberg.
\mathfrak{b}ibitem[IKl]{IKl}
Iarrobino A., Kleiman S.: \emph{The Gotzmann
theorems and the Hilbert scheme}, Appendix C., p. 289--312, in A. Iarrobino
and V. Kanev,
Power Sums, Gorenstein Algebras, and Determinantal Loci, (1999),
Springer Lecture Notes in Mathematics \# 1721, Springer, Heidelberg.
\mathfrak{b}ibitem [Klp]{Klj2}
Kleppe, J.~O.: \emph{The smoothness and the dimension
of PGOR(H) and of other strata of the punctual Hilbert scheme}, J.
Algebra 200 (1998), 606--628.
\mathfrak{b}ibitem[KuMi]{KuMi2}
Kustin A. , Miller M : \emph{Structure theory for a class of grade four
Gorenstein ideals}, Trans. A.M.S. 270 (1982),
287--307.
\mathfrak{b}ibitem[Lee]{Lee}
Lee, Y. A.: \emph{The Hilbert schemes of curves in $\mathbb P^3$}, honors
B.A. thesis, 54p.
Harvard University, 2000.
\mathfrak{b}ibitem[Mac1]{Mac1}
Macaulay F. H. S.: \emph{The Algebraic Theory of Modular Systems},
Cambridge Univ. Press, Cambridge, U. K. (1916);
reprinted with a foreword by P. Roberts, Cambridge Univ. Press,
London and New York (1994).
\mathfrak{b}ibitem[Mac2]{Mac2}
Macaulay F. H. S.: \emph{Some properties of enumeration in the theory of
modular systems}, Proc. London Math. Soc. {26} (1927), 531--555.
\mathfrak{b}ibitem[Mar]{Mar}
Maroscia P.: \emph{Some problems and results on finite sets of points
in $\mathbb{P}^{n}$}, Open Problems in Algebraic Geometry, VIII,
Proc. Conf. at Ravello, (C.
Cilberto, F. Ghione, and F. Orecchia, eds.) Lecture Notes in Math. \# 997,
Springer-Verlag, Berlin and
New York, (1983), pp. 290--314.
\mathfrak{b}ibitem[MiN]{MiN}
Migliore J. , Nagel U.: \emph{Reduced arithmetically Gorenstein schemes and
simplicial polytopes with
maximal Betti numbers}, Advances in Math. 180 (2003), 1--63.
\mathfrak{b}ibitem[No]{No}
Nollet S.: \emph{The Hilbert schemes of degree three curves}, Ann. Scient.
Ecole Norm Sup. $4^e$ s\'{e}rie,
t. 30, (1997) 367--384.
\mathfrak{b}ibitem[PS]{PS}
Piene R. , Schlessinger M.: \emph{On the Hilbert scheme compactification of
the space of twisted
cubics}, Amer. J. of Math, 107 (1985), 761--774.
\mathfrak{b}ibitem[St]{St}
Stanley R.: \emph{Hilbert functions of graded algebras} Advances in
Math. {\mathfrak{b}f 28} (1978), 57--83.
\mathfrak{b}ibitem[Va]{Va}
Vainsencher I: \emph{A note on the Hilbert scheme of twisted cubics},
Bol. Soc. Brasil. Mat. 18 (1987), no. 1, 81--89.
\end{thebibliography}
\end{document}
|
\begin{document}
\title{Power substitution in quasianalytic Carleman classes}
\author{Lev Buhovsky\textsuperscript{1}, Avner Kiro\textsuperscript{2} and Sasha Sodin\textsuperscript{3}}
\maketitle
\footnotetext[1]{School of Mathematical Sciences, Tel Aviv
University, Tel Aviv, 69978, Israel. Email: [email protected]. Supported in part by ERC Starting Grant 757585 and ISF Grant 2026/17.}
\footnotetext[2]{School of Mathematical Sciences, Tel Aviv
University, Tel Aviv, 69978, Israel. Email: [email protected]. Supported in part by ERC Advanced Grant 692616, ISF Grant 382/15 and BSF Grant 2012037.}
\footnotetext[3]{School of Mathematical Sciences, Queen Mary University of London, London E1 4NS, United Kingdom \& School of Mathematical Sciences, Tel Aviv
University, Tel Aviv, 69978, Israel. Email:
[email protected]. Supported in part by the European
Research Council starting grant 639305 (SPECTRUM) and by a Royal Society Wolfson Research Merit Award.}
\begin{abstract}
Consider an equation of the form $f(x)=g(x^k)$, where $k>1$ is an integer and $f(x)$ is a function in a given Carleman class of smooth functions. For each $k$, we construct a non-homogeneous Carleman-type class which contains all the smooth solutions $g(x)$ to such equations. We prove that if the original Carleman class is quasianalytic, then so is the new class. The results admit an extension to multivariate functions.
\end{abstract}
\section{Introduction}
In this text, we consider power substitutions in Carleman classes, i.e.\ equations of the form $g(x^k)=f(x)$, where $k>1$ is an integer and $f$ is a given function in a quasianalytic Carleman class $C^M$ (see Definition~\ref{def:Cclass}). Our motivation to study power substitutions in Carleman classes mainly comes from \cite{Bierstone}. There, it was shown, under certain conditions, that if $F(x,y)$ belongs to a quasianalytic Carleman class $ C^M(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})$ (see Definition \ref{def: multCarlemanClass}) and the equation $F(x,y)=0$ admits a $C^\infty $ solution $y=h(x)$, then $h$ is the image of a $C^M(\mathbb{R}^{d_1})$ function under finitely many power substitutions and blow-ups. Another source of motivation comes from \cite{BierstoneMilman,RolinSpeisseggerWilkie}, where normalization algorithms for power series in Carleman classes also require finitely many power substitutions and blow-ups.
The results of \cite{Bierstone} imply that smooth solutions $g$ of $f(x)=g(x^k)$ inherit a certain quasianalytic property from the original Carleman class: they are definable in an appropriate $o$-minimal structure. The combination of our main results (Theorems \ref{thm: thm1} and \ref{thm:thm2} below) implies the following more explicit quasianalytic property: $g$ belongs to a quasianalytic class $C_{1-1/k}^M$ (Definition~\ref{def:3}) completely characterized in terms of bounds on the derivatives of $g$.
\begin{definition}\label{def:Cclass}
Let $M=(M_n)_{n\geq 0}$ be a positive sequence and let $I$ be an interval. The \textit{Carleman class} $C^M(I)$ consists of all functions $f\in C^\infty(I)$ such that, for any compact set $K\subset I$, there exist constants $A,\;B>0$ such that
\[\left|f^{(n)}(x)\right|\leq A B^n M_n,\quad x\in K,\; n\geq 0. \]
A Carleman class $C^M(I)$ is said to be \textit{quasianalytic} if any $f\in C^M(I)$ that has a zero formal Taylor expansion at some $x\in I$ is identically zero.
\end{definition}
According to the Denjoy--Carleman theorem (see \cite{carleman} or \cite[\S12]{Mandelbrojt} for this exact formulation) the class $C^M(I)$ is quasianalytic if and only if
\begin{equation}\label{eq:dc} \sum_{n\geq 0} \frac{M_n^C}{M_{n+1}^C}=\infty~, \end{equation}
where $M^C$ is the largest log-convex minorant of $M$, i.e.
$$M_n^C =\min\left\{M_n, \inf_{j<n<\ell} M_j^{(\ell-n)/(\ell-j)}M_\ell^{(n-j)/(\ell-j)}\right\}.$$
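For instance, the sequence $M_n=n!$ is already log-convex, so that $M^C=M$, and condition \eqref{eq:dc} reads
\[ \sum_{n\geq 0}\frac{n!}{(n+1)!}=\sum_{n\geq 0}\frac{1}{n+1}=\infty~, \]
recovering the classical fact that the class $C^{n!}(I)$, which coincides with the class of functions real analytic on $I$, is quasianalytic.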
A necessary and sufficient condition for the equality of two Carleman classes was given in \cite{cartan1940solution}. In particular, if the sequence $M$ satisfies $M_n\geq n!$ for any $n\geq0$, then $C^M(I)=C^{M^C}(I)$.
Given $f\in C^M(I)$ where $I$ is an interval such that $0\in I$ (possibly as an endpoint), we consider a function $g$ defined on the interval $I^k=\{x^k:x\in I\}$ and satisfying $g(x^k)=f(x)$. It is well known that if the class $C^M(I)$ contains all functions real analytic on $I$ (i.e.\ there exists $\delta>0$ such that $M_n^C\geq\delta^{n+1} n!$ for every $n\geq0$), then $g\in C^M\left(I^k\setminus(-\varepsilon,\varepsilon)\right)$ for any $\varepsilon>0$ (see Lemma \ref{lem: add smooth lem} below), but $g$ may be singular at zero. If $g$ happens to be $ C^\infty$ near zero, then
\begin{equation}\label{eq:deriv0}
g^{(n)}(0)/n!=f^{(kn)}(0)/(kn)!~,
\end{equation} as follows (for polynomials) from the Cauchy theorem, and thus there exist constants $A,\;B>0$ such that
\begin{equation}
\left|g^{(n)}(0)\right|\leq A B^n \frac{M_{kn}}{(kn)!^{1-1/k}},\quad n\geq 0. \label{eq:triv g bound}
\end{equation}
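As a simple illustration of \eqref{eq:deriv0}, take $k=2$, $f(x)=\cos x$ and $g(y)=\cos\sqrt{y}$, so that $f(x)=g(x^2)$ with $g$ smooth (indeed entire). Comparing the Taylor expansions
\[ g(y)=\sum_{n\geq 0}\frac{(-1)^n}{(2n)!}\,y^{n} \qquad\text{and}\qquad f(x)=\sum_{n\geq 0}\frac{(-1)^n}{(2n)!}\,x^{2n} \]
gives $g^{(n)}(0)/n!=(-1)^n/(2n)!=f^{(2n)}(0)/(2n)!$, in accordance with \eqref{eq:deriv0}.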
It was shown in \cite{Bierstone} and \cite{Nowak} that under some regularity conditions on the sequence $M$, the estimate \eqref{eq:triv g bound} on the derivatives of $g$ at zero can be extended to the interval $I^k$. A similar fact, without regularity assumptions, follows from a combination of Theorem~\ref{thm: thm1} and Proposition~\ref{prop: extra smooth}. Namely, it follows that $g\in C^{M^{(k)}}(I^k)$, where $M^{(k)}_n=n!\sup_{j\leq nk+1} \frac{M_{j}}{j!} $. Note that by the formula (\ref{eq:deriv0}) for $g^{(n)}(0)$ there is no smaller Carleman class that contains $g$. By the Denjoy--Carleman theorem, the classes $C^{M^{(k)}}$ may fail to be quasianalytic even if the original class $C^{M}$ is quasianalytic. We will show that in the above case, the function $g$ belongs to a new non-homogeneous class $C^M_{1-1/k}(I^k)$ of smooth functions (defined in Definition~\ref{def:3} below) and that the latter class is quasianalytic.
Results similar to these were first obtained by the second author as a byproduct of the work \cite{Avner}. In the first version of this paper, available on arXiv under the same address, we applied the elementary method of Bang \cite{Bang2} to relax the regularity assumptions at the expense of relinquishing the precise asymptotics. Here, instead of adapting the arguments from classical quasianalyticity, we employ a reduction to the classical setting, and in this way relax the regularity assumption even further.
\section{Results}
\begin{definition}\label{def:3}
Let $M$ be a positive sequence, let $I$ be an interval and let $a\geq 0$. The class $C^M_a(I)$ consists of all functions $g\in C^\infty(I)$ such that for any compact set $K\subset I$, there exist constants $A,\;B>0$ such that
\[ \left|g^{(n)}(x)\right|\leq A B^n \frac{M_n}{|x|^{an}},\quad x\in K\setminus\{0\},\; n\geq 0.\]
\end{definition}
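Let us record two elementary remarks about this definition. First, $C^M_0(I)=C^M(I)$. Second, the weight $|x|^{-an}$ is only felt near the origin: if $K\subset I$ is a compact set with $\operatorname{dist}(K,0)=\delta>0$, then
\[ \delta^{a n}\leq |x|^{a n}\leq \big(\max_{K}|x|\big)^{a n},\qquad x\in K,\; n\geq 0, \]
so the factor $|x|^{-an}$ may be absorbed into the geometric constant $B^n$; in particular, every $g\in C^M_a(I)$ belongs to $C^{M}\left(I\setminus(-\varepsilon,\varepsilon)\right)$ for each $\varepsilon>0$.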
\begin{theorem}\label{thm: thm1}
Let $C^M(I)$ be a Carleman class, and let $k>1$ be an integer. Let $g\in C^\infty(I^k)$, and let $f(x)=g(x^k)$. If $f\in C^M(I)$, then $g\in C^M_a(I^k)$, where $a=1-\tfrac{1}{k}$.
\end{theorem}
The next proposition demonstrates that functions in $C^M_a(I)$ with $a<1$ carry additional, implicit control of their successive derivatives.
\begin{prop}{\label{prop: extra smooth}}
Let $M$ be a positive sequence, and let $k>1$ be an integer. If $g\in C^M_a(I)$ with $a=1-\frac{1}{k}$, then $g\in C^{M^{(k)}}(I) $, where
$$M^{(k)}_n=n!\sup_{j\leq nk+1} \frac{M_{j}}{j!}. $$
\end{prop}
In the case that $g$ is the smooth solution to a power substitution $g(x^k)=f(x)$ with $f\in C^M(I)$, this additional smoothness was already shown in \cite{Bierstone,Nowak}.
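For instance, if $M_n=n!$ then $M_j/j!\equiv1$ and $M^{(k)}_n=n!$; combined with Theorem~\ref{thm: thm1}, this recovers the fact that a smooth solution $g$ of $g(x^k)=f(x)$ with $f$ real analytic is itself real analytic. More generally, for a Gevrey sequence $M_n=(n!)^{s}$ with $s\geq 1$ the sequence $(M_j/j!)_j$ is non-decreasing, so
\[ M^{(k)}_n=n!\,\big((nk+1)!\big)^{s-1}\leq A\, B^{n}\,(n!)^{1+k(s-1)} \]
for suitable constants $A,B>0$, by the elementary estimate $(nk)!\leq k^{nk}\,(n!)^{k}$; since also $(nk+1)!\geq (n!)^k$, the sequence $M^{(k)}$ is, up to a factor $AB^n$ (which does not change the Carleman class), the Gevrey sequence of the larger order $1+k(s-1)$.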
Our next result is about quasianalyticity of $C^M_a(I)$ with $a<1$.
\begin{theorem}\label{thm:thm2} Let $M$ be a positive sequence, and let $0\leq a<1$. If $M$ is log--convex or $(M_n/n!)_{n\geq0}$ is non-decreasing, then the class $C^M_a(I)$ is quasianalytic if and only if \eqref{eq:dc} holds.
\end{theorem}
There are multivariate analogues of Theorems \ref{thm: thm1} and \ref{thm:thm2}. We postpone the discussion of such analogues to the last section.
Finally, the next two examples show that when $a\geq 1$, there are no analogous statements to Proposition \ref{prop: extra smooth} and Theorem \ref{thm:thm2}, even in the simple analytic case, when $M_n=n!$.
\begin{Ex}\label{ex: Ex1}
Consider the $C^\infty[0,1]$ function $g$, defined by $g(x)=\exp(-1/x)$ for $0<x\leq 1$ (and $g(0)=0$). By Cauchy's estimates for the derivatives of analytic functions, we have
$$|g^{(n)}(x)|\leq n!\frac{2^n}{x^n}\cdot\max_{|z-x|=\frac{x}{2}}|g(z)|\leq n!\frac{2^n}{x^n}. $$
So $g\in C^{M}_1\left([0,1]\right)$ with $M_n=n!$, and $g^{(n)}(0)=0$ for any $n\geq 0$. In particular, there is no analogue to Theorem~\ref{thm:thm2} for $a\geq1$.
\end{Ex}
\begin{Ex}\label{ex: Ex2}
Let $(N_n)_{n\geq0}$ be an arbitrary positive sequence. We argue that there exists a function $g\in C^{n!}_1\left([0,1]\right)$ such that
\begin{equation}
{\limsup_{n\to\infty}}\frac{|g^{(n)}(0)|}{N_n}>0. \label{eq: func with large der}
\end{equation}
In particular, the existence of such a function shows that the analogue to Proposition~\ref{prop: extra smooth} in the case $a\geq 1$ does not hold.
The construction of $g$ is done in two steps. First, by Borel's Lemma (see \cite[p. 44]{Borel} or \cite[p.16]{hormanderBook}) there is a $2\pi$ periodic and $C^\infty(\mathbb{R})$ function, $h$, such that $h^{(n)}(0)=N_n$, for any $n\geq 0$. Expanding the function $h$ in a Fourier series, we have
\[h(x)=\sum_{j\in \mathbb{Z}} a_j e^{ijx}, \]
where $|a_j|=o(|j|^{-m})$ as $|j|\to\infty$, for any $m>0$.
Next, put
\[ h_+(x):= \sum_{j\geq 0} a_j e^{-jx},\quad h_-(x) =\sum_{j> 0} a_{-j} e^{-jx}.\]
Since $|a_j|=o(|j|^{-m})$ as $|j|\to\infty$, for any $m>0$, the functions $h_\pm$ belong to $C^\infty\left(\overline{\mathbb{C}_+}\right)\cap \text{Hol}\left(\mathbb{C}_+\right)$, where $\mathbb{C}_+:=\{z\in\mathbb{C}:\re(z)>0\}$. So, as in the previous example, by Cauchy's estimates for the derivatives of analytic functions, we have $h_\pm\in C^{M}_1\left([0,1]\right)$ with $M_n=n!$. Moreover, since {$h(x)=h_+(-ix)+h_-(ix)$} for any $x\in\mathbb{R}$, either $h_+$ or $h_-$ satisfies \eqref{eq: func with large der}.
\end{Ex}
\section{Theorem \ref{thm: thm1} and Proposition \ref{prop: extra smooth}}
Theorem \ref{thm: thm1} immediately follows from the next lemma.
\begin{lemma}\label{lem: power-sub estimate}
Let $M$ be a positive sequence, $k>1$ be an integer, and $f\in C^\infty[0,1]$ be a function such that $ \max_{[0,1]} |f^{(n)}(x)|\leq M_n$ for any $n\geq 0$. Put $g(x)=f(x^{1/k})$. If $g\in C^\infty[0,1]$, then
\[|g^{(n)}(x)|\leq \frac{2^n M_n}{x^{(1-1/k)n}},\quad n\geq 0. \]
\end{lemma}
\begin{proof}
Fix $n\geq 1$. First, we write the Taylor expansion of $f$ around the origin with integral remainder:
$$f(x)=\sum_{j=0}^{n-1} \frac{f^{(j)}(0)}{j!}x^j+\frac{1}{(n-1)!}\int_0^x f^{(n)}(t)(x-t)^{n-1}dt. $$
Since $g$ is a $C^\infty[0,1]$ function, we have $f^{(j)}(0)=0$ for every $j$ which is not divisible by $k$. Therefore
\begin{equation}
g(x)=P(x)+\frac{1}{(n-1)!}\int_0^{x^{1/k}} f^{(n)}(t)(x^{1/k}-t)^{n-1}dt, \label{eq: g expan}
\end{equation}
where $P$ is a polynomial of degree at most $\tfrac{n-1}{k}$. Put
$$F(x,t) =\frac{1}{(n-1)!} (x^{1/k}-t)^{n-1}=\frac{1}{(n-1)!}\sum_{j=0}^{n-1}(-1)^{n-1-j}\binom{n-1}{j}x^{j/k}t^{n-1-j}. $$
Differentiating $F$ $n$ times with respect to the variable $x$ yields
$$ \frac{\partial^n}{\partial x^n}F(x,t) =\frac{1}{(n-1)!}\sum_{j=0}^{n-1}\left[\left(\prod_{\ell=0}^{n-1}\left(\frac{j}{k}-\ell\right)\right) (-1)^{n-1-j}\binom{n-1}{j}x^{j/k-n}t^{n-1-j}\right]. $$
In particular, for $0<t<x^{1/k}$,
\begin{equation}
\left|\frac{\partial^n}{\partial x^n}F(x,t)\right|\leq \sum_{j=0}^{n-1} \binom{n-1}{j} x^{\frac{n-1}{k}-n}= 2^{n-1} x^{\frac{n-1}{k}-n}. \label{eq: F der first}
\end{equation}
In addition, the chain rule yields
\begin{equation*}
\frac{\partial^{\ell}}{\partial x^{\ell}}F(x,x^{1/k})=\begin{cases}
\left(\frac{x^{\tfrac{1}{k}-1}}{k}\right)^{n-1}, & \ell=n-1\\
0,\quad & \ell<n-1.
\end{cases}
\end{equation*}
Thus, by differentiating \eqref{eq: g expan} $n$ times, we get
\begin{align*}
g^{(n)}(x)&=\int_0^{x^{1/k}} \frac{\partial^n}{\partial x^n}F(x,t) f^{(n)}(t)dt+\frac{\partial^{n-1}}{\partial x^{n-1}}F(x,x^{1/k})f^{(n)}(x^{\frac{1}{k}})\frac{x^{\tfrac{1}{k}-1}}{k}\\
&=\int_0^{x^{1/k}} \frac{\partial^n}{\partial x^n}F(x,t) f^{(n)}(t)dt+\left(\frac{x^{1/k-1}}{k}\right)^n f^{(n)}(x^{\frac{1}{k}}).
\end{align*}
Finally, using \eqref{eq: F der first} and the bound {$|f^{(n)}|\le M_n$} we obtain
$$ |g^{(n)}(x)|\leq M_n\left(2^{n-1}x^{n/k-n}+\frac{1}{k^n}x^{n/k-n}\right)\leq\frac{2^n M_n}{x^{(1-1/k)n}}. $$
\end{proof}
The next lemma yields Proposition \ref{prop: extra smooth} (by taking $n=k\ell+1$) and it is also used in the proof of Theorem \ref{thm:thm2}.
\begin{lemma}\label{lem: add smooth lem}
Let $M$ be a positive sequence such that $(M_n/n!)_n$ is non-decreasing, $0<\sigma<1$, and $g\in C^\infty[0,1]$ be a function such that $ |g^{(n)}(x)|\leq \frac{M_n}{x^{(1-\sigma)n}}$ for any $n\geq 0$ and $x\in(0,1]$. Then for any $0\leq\ell\leq n$ and $0\leq x\leq 1$ we have
\[ \left|g^{(\ell)}(x)\right|\leq 2^n \frac{\ell!}{n!} M_n \cdot \begin{cases}
{n\cdot}\frac{(\ell-\sigma n)^{-1}}{x^{\ell-\sigma n}}, &\ell>\sigma n\\
1+n\cdot \log\frac{1}{x},& \ell=\sigma n\\
{n\cdot} (\sigma n-\ell)^{-1},& \ell<\sigma n.
\end{cases} \]
\end{lemma}
\begin{proof}
The case $\ell=n$ is trivial. Assume that $\ell<n$.
Writing the Taylor expansion of degree $n-\ell-1$ of the function $g^{(\ell)}$ around $1$ with integral remainder, we get
$$ g^{(\ell)}(x)=\sum_{j=0}^{n-\ell-1}\frac{g^{(\ell+j)}(1)}{j!}(x-1)^j+\frac{1}{(n-\ell-1)!}\int_1^x g^{(n)}(t)(t-x)^{n-\ell-1} dt. $$
For $x\in(0,1]$,
\begin{align*}
\left|\int_1^x g^{(n)}(t)(t-x)^{n-\ell-1} dt\right|\leq \int_x^1\left|g^{(n)}(t)\right| (t-x)^{n-\ell-1} dt&\leq M_{n}\int_x^1 (t-x)^{n-\ell-1} t^{-(1-\sigma)n}dt\\&\leq M_{n}\int_x^1 t^{\sigma n-\ell-1}dt.
\end{align*}
Hence
\begin{align*}
\left|g^{(\ell)}(x)\right|&\leq \sum_{j=0}^{n-\ell-1}\frac{M_{\ell+j}}{j!}+\frac{M_{n}}{(n-\ell-1)!}\int_x^1 t^{\sigma n-\ell-1}dt \\ &\leq \sum_{j=0}^{n-\ell-1}\frac{(\ell+j)!}{j!}\frac{M_{n}}{n!}+n 2^n \ell!\frac{M_{n}}{n!}\int_x^1 t^{\sigma n-\ell-1}dt \\
&\leq\ell!\frac{M_n}{n!}\left(\sum_{j=0}^{n-\ell-1} \binom{n}{j}+n2^n\int_x^1 t^{\sigma n-\ell-1}dt\right) \\
&\leq 2^n\ell!\frac{M_n}{n!} \left(1+n\int_x^1 t^{\sigma n-\ell-1}dt\right).
\end{align*}
Since
\[\int_x^1 t^{\sigma n-\ell-1}dt \leq
\begin{cases}
(\ell-\sigma n)^{-1}x^{\sigma n-\ell}, &\ell>\sigma n\\
\log\frac{1}{x},&\ell=\sigma n\\
(\sigma n-\ell)^{-1},&\ell<\sigma n,
\end{cases}
\]
we obtain the desired bound.
\end{proof}
\section{Theorem~\ref{thm:thm2}}
\begin{lemma}\label{lem: powersub lem}
Let $M$ be a positive sequence such that $(M_n/n!)_n$ is non-decreasing, let $k>1$ be an integer, and let $g\in C^\infty(0,1]$ be a function such that $ |g^{(n)}(x)|\leq \frac{M_n}{x^{(1-1/k)n}}$ for any $n\geq 0$ and $x\in (0,1]$. Put $f(x)=g(x^k)$. Then there exist constants $A,\; C>0$ such that
\[ \left|f^{(n)}(x)\right|\leq A C^n M_n \begin{cases}
1, &k\nmid n \text{ or } n=0;\\
1+\log\frac{1}{x},& k\mid n.
\end{cases} \]
\end{lemma}
\begin{proof}
First we argue by induction on $n$ that
\begin{equation}
f^{(n)}(x)= \sum_{ \substack{i+j=n\\ 1\leq i\leq n,\; i(k-1)\geq j }}B_{n}(i,j)g^{(i)}(x^k)x^{i(k-1)-j} \label{eq: g der via f}
\end{equation}
where \[ |B_{n}(i,j)|\leq C^n n^{n-i}.\]
Indeed,
$$ \frac{d}{dx}\left(g^{(i)}(x^k)x^{i(k-1)-j}\right)=kg^{(i+1)}(x^k)x^{(i+1)(k-1)-j}+ (i(k-1)-j)x^{i(k-1)-(j+1)}g^{(i)}(x^k)$$
implies that
$$B_{n+1}(i,j)=kB_n(i-1,j)+\left(i(k-1)-j\right)B_n(i,j-1).$$
So making use of the induction hypothesis, we find that
$$\left|B_{n+1}(i,j)\right|\leq k C^n n^{n+1-i}+(ik-n) C^n n^{n-i}\leq C^{n+1}(n+1)^{n+1-i} $$
as claimed.
Using \eqref{eq: g der via f}, we find that
$$\left|f^{(n)}(x)\right|\leq C^n \sum_{n/k\leq i\leq n}n^{n-i}\left|g^{(i)}(x^k)x^{ik-n}\right| $$
By Lemma \ref{lem: add smooth lem},
$$ \left|g^{(i)}(x^k)x^{ik-n}\right| \leq 2^n \frac{i!}{n!}M_n \begin{cases}
1+n \log\frac{1}{{x^k}},& i=\frac{n}{k}\\
{n},& i>\frac{n}{k}.
\end{cases} $$
Thus in the case $k\mid n$, we find that
\begin{align*}
\left|f^{(n)}(x)\right|&\leq {n\cdot}(2C)^n M_n{\bigg[} \sum_{n/k< i\leq n}\frac{n^{n-i} i!}{n!}+\frac{n^{n(1-1/k)} \left(\tfrac{n}{k}\right)!}{n!}\left(1+\log\frac{1}{{x^k}}\right){\bigg]}\\ &\leq {nk\cdot}(2eC)^n M_n \left(1+n\log\frac{1}{x}\right),
\end{align*}
while in the case $k\nmid n$, we have
$$ \left|f^{(n)}(x)\right|\leq {n}(2C)^n M_n \sum_{n/k< i\leq n}\frac{n^{n-i} i!}{n!}\leq {nk}(2eC)^n M_n. $$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:thm2}]
Without loss of generality we assume that $I\subseteq [0,\infty)$.
Assume first that $\sum_{n\geq 0} \frac{M^C_n}{M^C_{n+1}}=\infty$. Let $g\in C^M_a(I)$ with $g^{(n)}(x_0)\equiv 0$. We need to show that $g\equiv0$. If $x_0\neq 0$, then $g\in C^M\left(I\setminus\{0\}\right)$, and in particular, by the Denjoy--Carleman theorem, $g\equiv 0$. On the other hand, if $x_0=0$, then by replacing $g(x)$ with $g(x/C)$ for a sufficiently large $C>0$, there is no loss of generality in assuming that $[0,1]\subseteq I$. Let $k\in\mathbb{N}$ be such that $1-\tfrac{1}{k}>a$.
If $M$ is log-convex, denote by $\widehat{M}$ the log-convex sequence defined by
$$\widehat{M}_0=M_0,\quad \frac{\widehat{M}_{n-1}}{\widehat{M}_n}=\min \left\{\frac{M_{n-1}}{M_n}\;,\; \frac{1}{n} \right\}. $$
Note that $\widehat{M_n}\geq M_n$ for any $n\geq0$. Moreover,
$$ \frac{\widehat{M}_n}{n!}=M_0 \prod_{j=1}^n \max\left\{\frac{ M_j}{{j}M_{j-1}}\;,\; 1\right\}, $$ so the sequence $({\widehat{M}_n}/{n!})_{n\geq 0}$ is non-decreasing. By the condensation test for convergence,
$$\sum_{n\geq 0} 2^n\frac{M_{2^n-1}}{M_{2^n}}=\infty\quad \Rightarrow \quad \sum_{n\geq 0} \min\left\{2^n\frac{M_{2^n-1}}{M_{2^n}}\;,\;1\right\}=\infty \quad \Rightarrow \quad \sum_{n\geq 0}\frac{\widehat{M}_{n}}{\widehat{M}_{n+1}}=\infty.$$
On the other hand, if the sequence $(M_n/{n!})_{n\geq 0}$ is non decreasing, then we put $\widehat{M}=M$.
In both cases, the function $g$ belongs to $ C^{\widehat{M}}_{1-1/k}\left([0,1]\right)$, and $\widehat{M}$ satisfies the assumptions of Lemma~\ref{lem: powersub lem} and $\sum_{n\geq 0} \frac{\widehat{M}^C_n}{\widehat{M}^C_{n+1}}=\infty$. Consider the function $h(x)=\int_0^x g(y^k)dy$. By Lemma \ref{lem: powersub lem}, $h\in C^{\widehat{M}}([0,1])$, and since $g^{(n)}(0)\equiv 0$, also $h^{(n)}(0)\equiv 0$. By the Denjoy--Carleman theorem, $h\equiv 0$ and therefore also $g\equiv 0$, as claimed.
Next, if $\sum_{n\geq 0} \frac{M^C_n}{M^C_{n+1}}<\infty$, then by the Denjoy--Carleman theorem, the class $C^M(I)$ is not quasianalytic. Thus there exists a non-zero function $g\in C^{M}(I)$ which is compactly supported in $I\setminus\{0\}$. Such a function $g$ belongs to the class $C_a^M(I)$, therefore the latter is not quasianalytic.
\end{proof}
\section{The multivariate case}
\subsection{Carleman classes.}
Here we will use standard multiindex notation: If $\alpha=(\alpha_1,\cdots,\alpha_d)\in \mathbb{Z}_+^d$ and $x=(x_1,\cdots,x_d)\in\mathbb{R}^d$, we write $|\alpha|:=\alpha_1+\cdots+\alpha_d $, and $\displaystyle \frac{\partial^{|\alpha|}}{\partial x^{\alpha}}:=\frac{\partial^{\alpha_1+\cdots+\alpha_d}}{\partial x_1^{\alpha_1}\cdots \partial x_d^{\alpha_d}}$.
\begin{definition}\label{def: multCarlemanClass}
Let $M$ be a sequence of positive numbers which is logarithmically convex, and let $\Omega$ be a connected set in $\mathbb{R}^d$. \textit{The Carleman class} $C^M(\Omega)$ consists of all functions $f\in C^\infty(\Omega)$ such that for any compact $K\subset\Omega$, there exist $A,\;B>0$ such that
\[\left|\frac{\partial^{|\alpha|}}{\partial x^\alpha}f(x)\right|\leq A B^{|\alpha|} M_{|\alpha|} ,\quad x\in K, \]
for any multiindex $\alpha=(\alpha_1,\cdots,\alpha_d)\in \mathbb{Z}_+^d$.
\end{definition}
\begin{definition}
Let $M$ be a sequence of positive numbers, $\Omega$ be a connected set in $\mathbb{R}^d$, and let $a=(a_1,\cdots,a_d)\in \mathbb{R}_+^d$. The class $C_a^M(\Omega)$ consists of all functions $g\in C^\infty(\Omega)$ such that for any compact $K\subset\Omega$, there exist $A,\;B>0$ such that
\[\left|\frac{\partial^{|\alpha|}}{\partial x^\alpha}g(x)\right|\leq A B^{|\alpha|}\frac{M_{|\alpha|}}{|x_1|^{a_1\cdot\alpha_1}|x_2|^{a_2\cdot\alpha_2}\cdots|x_d|^{a_d\cdot\alpha_d}}\]
for any {$x\in K\backslash\{x:x_1\cdots x_d=0\}$} and any multiindex $\alpha=(\alpha_1,\cdots,\alpha_d)\in \mathbb{Z}_+^d$.
\end{definition}
\begin{definition}
A set $\Omega\subset\mathbb{R}^d$ is called star-shaped (with respect to the origin) if for any $x\in \Omega$, $\{tx:t\in[0,1]\}\subset\Omega$.
\end{definition}
\subsection{Results.}
The next result is a multivariate version of Theorem \ref{thm: thm1}.
\begin{theorem} \label{thm: thm3}
Let $C^M([0,1]^d)$ be a quasianalytic Carleman class. For $k\in\mathbb{N}^d$,
denote by $y_k:\mathbb{R}^d\to\mathbb{R}^d$ the map defined by $y_k(x)=(x_1^{k_1},\cdots,x_d^{k_d})$. If $g\in C^\infty([0,1]^d)$ is such that $f = g \circ y_k\in C^M([0,1]^d)$, then $g\in C^M_a([0,1]^d)$ with $a=(\tfrac{k_1-1}{k_1},\cdots,\tfrac{k_d-1}{k_d})$.
\end{theorem}
Theorem \ref{thm: thm3} is a special case of the following claim: for any $1\leq \ell\leq d$ and multiindex $\alpha=\left(\alpha_1,\cdots,\alpha_d\right)$, we have
\[\left|\frac{\partial^{|\alpha|}}{\partial x^\alpha}g(x_1,\cdots,x_\ell,x_{\ell+1}^{k_{\ell+1}},\cdots,x_d^{k_d})\right|\leq A B^{|\alpha|}\frac{M_{\alpha_1}\cdots M_{\alpha_d}}{|x_1|^{a_1\cdot\alpha_1}|x_2|^{a_2\cdot\alpha_2}\cdots|x_\ell|^{a_\ell\cdot\alpha_\ell}},\]
where $a=(\tfrac{k_1-1}{k_1},\cdots,\tfrac{k_d-1}{k_d})$.
This claim is an immediate consequence of Theorem \ref{thm: thm1}, by induction on $\ell$ and $d$.
The next result is the multivariate version of Theorem \ref{thm:thm2}, and it follows from it by restricting functions from $C^M_a(\Omega)$ to lines.
\begin{theorem}\label{thm:thm4}
Let $M$ be a positive sequence, $a\in [0,1)^d$, and $\Omega\subset \mathbb{R}^d$ be star-shaped such that $\Omega\cap \{x_1\cdots
x_d\neq 0\}$ is dense in $\Omega$. If $M$ is log--convex or $(M_n/n!)_{n\geq0}$ is non-decreasing, then the class $C^M_a(\Omega)$ is quasianalytic if and only if
\eqref{eq:dc} holds.
\end{theorem}
\paragraph*{Acknowledgement} This work was started during the visit of the authors to IMPAN, Warsaw, in the framework of the workshop ``Contemporary quasianalyticity problems''. The authors thank Misha Sodin for encouragement and useful remarks regarding the presentation of this paper. We are grateful to Gerhard Schindl for spotting several lapses in the published paper, which are corrected in the current arXiv version.
\today
\end{document}
|
\begin{document}
\theoremstyle{plain}
\renewenvironment{proof}{ \noindent {\bfseries Proof.}}{qed}
\newtheorem{defin}{Definition}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{remark}{Remark}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}{Lemma}
\newtheorem{definition}{Definition}
\newtheorem{prop}{Proposition}[section]
\newtheorem{cor}{Corollary}
\begin{center}
{\LARGE Fractional diffusion limit for a fractional\\[8pt]
Vlasov-Fokker-Planck equation}
{\large P. Aceves-S\'anchez}\footnote{Fakult\"at f\"ur Mathematik, Universit\"at Wien.} and
{\large L. Cesbron}\footnote{DPMMS, Center for Mathematical Sciences, University of Cambridge.}
\end{center}
\vskip 1cm
\noindent{\bf Abstract.} This paper is devoted to the rigorous derivation of the macroscopic limit of a Vlasov-Fokker-Planck
equation in which the Laplacian is replaced by a fractional Laplacian. The evolution of the density is governed by a fractional
heat equation with the addition of a convective term coming from the external force. The analysis is performed by a modified test function
method and by obtaining a priori estimates from quadratic entropy bounds. In addition, we give the proof of existence and uniqueness of
solutions to the Vlasov-fractional-Fokker-Planck equation.
\vskip 1cm
\noindent{\bf Key words:} Kinetic equations, fractional-Fokker-Planck operator, fractional Laplacian, anomalous diffusion limit, superdiffusion.
\noindent{\bf AMS subject classification:} 82C31, 82D10, 82B40, 26A33.
\vskip 1cm
\noindent{\bf Acknowledgment:} P.A.S. acknowledges support from Consejo Nacional
de Ciencia y Tecnologia of Mexico, the PhD program {\em Dissipation and Dispersion
in Nonlinear PDEs} funded by the Austrian Science Fund, grant no. W1245, and
the Vienna Science and Technology Fund, grant no. LS13-029. L.C. acknowledges support from the ERC Grant Mathematical Topics of Kinetic Theory.
\tableofcontents
\section{Introduction}
\subsection{The Vlasov-L\'evy-Fokker-Planck equation}
In this paper we investigate the long-time/small mean-free-path asymptotic behavior in the low-field case of the solution of the Vlasov-L\'evy-Fokker-Planck (VLFP) equation
\begin{subequations}
\begin{align}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f &= \nabla_v \cdot ( v f) - \big(-\Delta_v\big)^{\alpha/2} f & \text{ in } ( 0, \infty)\times \mathbb{R}^d \times \mathbb{R}^d, \label{eq:vlfpE} \\
f( 0, x, v) &= f^{in} ( x, v ) & \text{ in } \mathbb{R}^d \times \mathbb{R}^d, \label{eq:vlfpEit}
\end{align}
\end{subequations}
where $\alpha \in [ 1, 2]$. This equation describes the evolution of the density of an ensemble of particles denoted as $f( t, x, v)$ in phase space, where
$t \geq 0$, $x \in \mathbb{R}^d$ and $v \in \mathbb{R}^d$ stand for, respectively, time, position and velocity. The operator $\big(-\Delta\big)^{\alpha/2}$ denotes the fractional
Laplacian and is defined by \eqref{def:fracLapInt}. Let us recall that, at a microscopic level, equation \eqref{eq:vlfpE}-\eqref{eq:vlfpEit} is related
to the Langevin equation
\begin{align}
\, {\rm d} x ( t) &= v( t) \, {\rm d} t, \nonumber \\
\, {\rm d} v ( t) &= - v( t) \, {\rm d} t + E \, {\rm d} t + \, {\rm d} L^\alpha_t, \label{eq:lang}
\end{align}
where $L^\alpha_t$ is a Markov process with generator $-\big(-\Delta\big)^{\alpha/2}$ and $( x( t), v( t))$ describe the position and velocity of a
single particle (see \cite{Jourdain+2011} and \cite{Risken+1996}). Therefore, this model describes the position and velocity of a particle
that is affected by three mechanisms: a dragging force, an acceleration and a pure jump process.
In the particular case when $\alpha = 2$ the fractional operator $\big(-\Delta\big)^{\alpha/2}$ takes the form of a Laplace operator $\Delta$
and \eqref{eq:vlfpE}-\eqref{eq:vlfpEit} reduces to the usual Vlasov-Fokker-Planck equation. In this case the Fokker-Planck operator is known to have an equilibrium
distribution function given by a Maxwellian $M( v) = C \exp\left( -| v|^2/2 \right)$ where $C > 0$ is a normalization constant. The
Vlasov-Fokker-Planck equation
has been used in the modeling of many physical phenomena, in particular, for the description of the evolution of plasmas \cite{Risken+1996}.
However, there
are some settings in which particles may have long jumps and an $\alpha$-stable distribution process is more suitable to describe
the phenomenon, see for instance \cite{Schertzer2001}.
The case in which $\alpha = 2$ reduces to the classical Vlasov-Fokker-Planck equation for a given external field. This equation is related to the
Vlasov-Poisson-Fokker-Planck system (VPFP) in the case in which the electric field is self-consistent. Questions such as existence
of solutions, hydrodynamic limits and long time behaviour for the VPFP system have been extensively studied by many authors, see for instance \cite{Bouchut+1995}, \cite{Pfaffelmoser1992}, and \cite{Goudon+2005}. In particular, in \cite{ElGhani+2010} the low
field limit is studied for the VPFP system and a Drift-Diffusion-Poisson system is obtained in a rigorous manner.
Let us note that, although it is classical in the framework of kinetic theory to consider a self-consistent electric field that expresses how particles repel one another, one can also, in the VPFP system, consider the case in which particles are
attracted to each other; this model is used in the description of galactic dynamics.
In the rest of the paper we shall use the following notation: the fractional (or L\'evy) Fokker-Planck operator is denoted by $\mathcal{L}^{\alpha/2}$ and defined as
\begin{equation} \label{def:LFP}
\mathcal{L}^{\alpha/2} f = \nabla_v \cdot \big( v f \big) - \big(-\Delta_v\big)^{\alpha/2} f.
\end{equation}
In order to investigate the asymptotic behaviour of the system, we introduce the Knudsen number $\varepsilon$ which represents the ratio between the mean-free-path and the observation length scale. In the case when $E = 0$ it was observed in \cite{Cesbron2012} that the time rescaling $t' \rightarrow \varepsilon^{\alpha-1} t$ and
introducing a factor $ 1 / \varepsilon$ in front of $\mathcal L^{ \alpha / 2}$ is the appropriate scaling at which diffusion will be observed in the limit as $\varepsilon$ goes to zero. Moreover, we
introduce
the factor $1 / \varepsilon^{ 2 - \alpha}$ in front of the force field term $E$ corresponding to a low-field limit scaling since we shall consider the case
$1\leq \alpha \leq 2$ and thus the scaling of the collision operator $1/\varepsilon$ is much greater than the scaling of the electric field $1 / \varepsilon^{ 2 - \alpha}$. Thus we shall study in this paper the asymptotic behaviour as $\varepsilon$ tends to zero of the solutions of the following rescaled VLFP equation
\begin{equation} \label{eq:vlfpeps}
\varepsilon^{\alpha-1} \partial_t f^\varepsilon + v\cdot \nabla_x f^\varepsilon + \varepsilon^{\alpha-2} E(t,x)\cdot\nabla_v f^\varepsilon = \frac{1}{\varepsilon} \Big(\nabla_v\cdot \left( v f^\varepsilon \right) - \big(-\Delta_v\big)^{\alpha/2} f^\varepsilon \Big).
\end{equation}
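Let us make the two endpoint cases of this scaling explicit. For $\alpha=2$ the time and field factors become $\varepsilon$ and $1$ respectively, so \eqref{eq:vlfpeps} is the classical low-field (parabolic) scaling of the Vlasov-Fokker-Planck equation,
\begin{equation*}
\varepsilon \, \partial_t f^\varepsilon + v\cdot \nabla_x f^\varepsilon + E(t,x)\cdot\nabla_v f^\varepsilon = \frac{1}{\varepsilon} \Big(\nabla_v\cdot \left( v f^\varepsilon \right) + \Delta_v f^\varepsilon \Big),
\end{equation*}
whereas for $\alpha=1$ the time derivative is not rescaled and the field term is of order $1/\varepsilon$, i.e.\ of the same order as the collision operator; this is consistent with the fact that in this critical case the limiting velocity distribution is the $E$-shifted equilibrium \eqref{eq:GalphaE} below.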
\subsection{Preliminaries on the Fractional Fokker-Planck operator}
In this paper we denote by $\widehat f$ or $\mathcal{F} ( f)$ the Fourier transform of $f$ and define it as
\begin{eqnarray*}
\widehat f ( k) = \int_{ \mathbb{R}^d} e^{ - i k \cdot x} f ( x) \, {\rm d} x.
\end{eqnarray*}
There are several equivalent definitions of the fractional Laplacian in the whole domain (see \cite{Kwasnicki2015} or \cite{DiNezza+}). It can be defined via a
Fourier multiplier as
\begin{eqnarray*}
\mathcal{F} \Big( \big(-\Delta\big)^{\alpha/2} ( f) \Big) ( k) = | k|^\alpha \mathcal{F} ( f) ( k).
\end{eqnarray*}
On the other hand, assuming that $f$ is a rapidly decaying function we can define the
fractional Laplacian in terms of a hypersingular integral as
\begin{equation}\label{def:fracLapInt}
\big(-\Delta_v\big)^{\alpha/2} ( f) ( v) = c_{ d, \alpha} \, \text{P.V.} \int_{ \mathbb{R}^d} \frac{ f( v) - f ( w) }{ | v - w|^{ d + \alpha}} \, {\rm d} w
\end{equation}
where P.V. denotes the Cauchy principal value and the constant $c_{ d, \alpha}$ is given by
\begin{equation}\label{def:c}
c_{ d, \alpha} = \frac{ 2^\alpha \Gamma \left( \frac{ d+\alpha}{ 2} \right)}{ 2 \pi^{ d/2} | \Gamma \left( - \frac{ \alpha}{ 2} \right)|},
\end{equation}
and $\Gamma ( \cdot)$ denotes the Gamma function. In \cite{DiNezza+} it is proven that for any $d > 1$, $c_{ d, \alpha} \to 0$ as $\alpha \to 2$.
Thus \eqref{def:fracLapInt} does not make sense if we take $\alpha = 2$. However, we have the following result.
\begin{proposition}
Let $d > 1$. Then for any $f \in C^\infty_0 ( \mathbb{R}^d)$ we have
\begin{eqnarray*}
\lim_{ \alpha \to 2} \big(-\Delta\big)^{\alpha/2} f = - \Delta f.
\end{eqnarray*}
\end{proposition}
\noindent For an account of the properties of the fractional Laplacian consult \cite{DiNezza+}, \cite{Vazquez2014}, \cite{Stein1970} or \cite{Landkof1972}. Let us note
that due to its dependence on the whole domain, the fractional Laplacian is a nonlocal operator
and it has the scaling property $\big(-\Delta_v\big)^{\alpha/2} ( f_\lambda) ( v) = \lambda^\alpha \big(-\Delta_v\big)^{\alpha/2} f ( \lambda v)$, for any $\lambda > 0$, where
$f_\lambda ( v) = f ( \lambda v)$. Since it will be useful later on in our analysis, we also mention that, being an integro-differential operator, the fractional Laplacian satisfies:
\begin{eqnarray*}
\int \big(-\Delta\big)^{\alpha/2} f \, \text{d}v = 0.
\end{eqnarray*}
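For rapidly decaying $f$ this identity can be seen directly from the Fourier characterization of the fractional Laplacian:
\begin{equation*}
\int_{\mathbb{R}^d} \big(-\Delta\big)^{\alpha/2} f \, \text{d}v = \mathcal{F}\Big( \big(-\Delta\big)^{\alpha/2} f \Big)(0) = \Big( | k|^\alpha \, \widehat{f}(k) \Big)\Big|_{k=0} = 0.
\end{equation*}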
In \cite{Biler+2003} it is proved that the L\'evy-Fokker-Planck operator $\mathcal L^{ \alpha / 2}$ defined by \eqref{def:LFP} has a unique normalized
equilibrium distribution that we shall denote by $G_\alpha$. Moreover, the Fourier transform of $G_\alpha$, denoted by $\widehat{G_\alpha}$ and
defined as
\begin{eqnarray*}
\widehat{ G_\alpha} ( \xi) := \int_{ \mathbb{R}^d} e^{ - i \xi \cdot v} G_\alpha ( v) \, {\rm d} v,
\end{eqnarray*}
satisfies
\begin{eqnarray*}
\xi \cdot \nabla_\xi \widehat{ G_\alpha} + | \xi|^\alpha \widehat{ G_\alpha} = 0.
\end{eqnarray*}
Solving this equation along rays $\xi = r\omega$, with $r\geq 0$, $|\omega|=1$ and the normalization $\widehat{G_\alpha}(0)=1$, yields
\begin{equation}\label{def:eqdisLFP}
\widehat{ G_\alpha} ( \xi) = e^{ - | \xi|^\alpha / \alpha}.
\end{equation}
In the jargon of stochastic analysis, random variables having a characteristic function of the form \eqref{def:eqdisLFP} are called symmetric $\alpha$-stable
random variables, consult \cite{Applebaum2009}. Using the notation of \cite{Bogdan+2007} let
us note that setting $t = 1 / \alpha$, $x = v$, and $y=0$, we obtain the identity $G_\alpha ( v) = p ( 1/ \alpha, v, 0)$. Thus Lemma 3 of \cite{Bogdan+2007} states that there exists $C_1 = C_1 ( d, \alpha) > 0$ such that
\begin{equation}\label{eq:Gbounds}
C_1^{ -1} \bigg( \frac{ 1}{ \alpha | v|^{ d + \alpha}} \wedge \frac{ 1}{ \alpha^{ d / \alpha}} \bigg) \leq G_\alpha ( v) \leq C_1 \bigg( \frac{ 1}{ \alpha | v|^{ d + \alpha}} \wedge \frac{ 1}{ \alpha^{ d / \alpha}} \bigg),
\end{equation}
for all $v \in \mathbb{R}^d$, where $a \wedge b$ denotes the minimum between $a$ and $b$. On the other hand, Lemma 5 of \cite{Bogdan+2007} states the existence
of a positive constant $C_2 = C_2 ( d, \alpha)$ such that
\begin{equation}\label{eq:gradGbounds}
\frac{ | v|}{ C_2} \bigg( \frac{ 1}{ \alpha | v|^{ d + 2 + \alpha}} \wedge \alpha^{ ( d + 2) / 2} \bigg) \leq \big| \nabla_v \, G_\alpha ( v) \big| \leq C_2 | v| \bigg( \frac{ 1}{ \alpha | v|^{ d + 2 + \alpha}} \wedge \alpha^{ ( d + 2) / 2} \bigg).
\end{equation}
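Let us also record an elementary consequence of the two-sided bound \eqref{eq:Gbounds}: the equilibrium $G_\alpha$ has heavy tails of order $|v|^{-d-\alpha}$, so that
\begin{equation*}
\int_{\mathbb{R}^d} | v|^{ m} \, G_\alpha ( v) \, \text{d}v < \infty \quad \text{ if and only if } \quad m < \alpha .
\end{equation*}
In particular, for $\alpha < 2$ the equilibrium distribution has infinite second moment, consistently with the anomalous (fractional) diffusion obtained in the macroscopic limit.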
\subsection{Main results}
As usual in the framework of fractional Vlasov-Fokker-Planck equations, we use the following definition of weak solutions:
\begin{definition} \label{def:weaksol}
Consider $f^{in}$ in $L^2(\mathbb{R}^d\times\mathbb{R}^d)$ and $E \in \big( W^{1,\infty}([0,T)\times\mathbb{R}^d) \big)^d$. We say that $f$ is a weak solution of \eqref{eq:vlfpE}-\eqref{eq:vlfpEit} if, for any $\varphi\in\mathcal{C}^\infty_c([0,T)\times\mathbb{R}^d\times\mathbb{R}^d)$
\begin{equation} \label{eq:weakFracVFP}
\begin{aligned}
&\underset{Q_T}{\iiint} f \Big( \partial_t \varphi + v\cdot \nabla_x \varphi + \big(E(t,x) - v\big)\cdot\nabla_v \varphi - \big(-\Delta\big)^{\alpha/2} \varphi \Big) \, \text{d}t \text{d}x \text{d}v \\
&\quad + \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} f^{in}(x,v) \varphi(0,x,v) \, \text{d}x \text{d}v = 0.
\end{aligned}
\end{equation}
\end{definition}
Section 2 of this paper is devoted to a well-posedness result for the fractional Vlasov-Fokker-Planck with an external electric field $E$ in the following sense.
\begin{theorem} \label{thm:existence}
For $f^{in}$ in $L^2(\mathbb{R}^d\times\mathbb{R}^d)$ and $E \in \big( W^{1,\infty}([0,T) \times \mathbb{R}^d) \big)^d$ there exists a unique weak solution $f$ of \eqref{eq:vlfpE}-\eqref{eq:vlfpEit} in the sense of Definition \ref{def:weaksol} and it satisfies
\begin{subequations}
\begin{align}
&f(t,x,v) \geq 0 \mbox{ on } Q_T , \label{eq:solposi} \\
&f \in \mathcal{X} := \bigg\{ f\in L^2(Q_T) : \frac{|f(t,x,v)-f(t,x,w)|}{|v-w|^{\frac{d+\alpha}{2}}} \in L^2(Q_T\times\mathbb{R}^{d})\bigg\}. \label{eq:L2txHv}
\end{align}
\end{subequations}
\end{theorem}
\begin{remark}
The assumption $E \in \big( W^{1,\infty}([0,T) \times \mathbb{R}^d) \big)^d$
in Theorem \ref{thm:existence} is not optimal, in the sense that it could be replaced by $E \in \big( L^\infty ( [ 0, T) \times \mathbb{R}^d ) \big)^d$ or possibly by even weaker assumptions on $E$; however, finding the optimal regularity of $E$ is beyond the scope of this paper.
\end{remark}
The proof of this existence result relies on using the Lax-Milgram theorem for a well chosen associated problem, in the spirit of the proof in \cite{Degond1986} and in \cite{Carrillo1998} for the existence of weak solutions of the Vlasov-Fokker-Planck equation. The proof of positivity \eqref{eq:solposi} is given in detail as it involves the non-local nature of the fractional operator and, as such, differs from the classical proof. \\
In Section 3, we consider the electric field as a perturbation of the fractional Fokker-Planck operator and as such we introduce $\mathcal{T}_\varepsilon$:
\begin{equation*}
\mathcal{T}_\varepsilon ( f) := \nabla_v\cdot \Big[ \big( v- \varepsilon^{\alpha-1} E(t,x) \big) f \Big] - \big(-\Delta_v\big)^{\alpha/2} f.
\end{equation*}
We prove existence and uniqueness of a normalized equilibrium $F_\varepsilon$ for this perturbed operator in Proposition \ref{prop:equi}. Then, we follow the strategy introduced in \cite{AceSch}; we investigate the decay properties of this equilibrium and its convergence to the equilibrium of the unperturbed operator, $G_\alpha$, as $\varepsilon$ goes to $0$ in Proposition \ref{prop:equi2}. Finally, we prove that $\mathcal{T}_\varepsilon$ is dissipative with regard to the quadratic entropy, Proposition \ref{prop:posisemi}, which allows us to establish uniform boundedness results for $f_\varepsilon$, the solution of the rescaled equation \eqref{eq:vlfpeps}-\eqref{eq:vlfpEit}, as well as for its macroscopic density $\rho_\varepsilon = \int f_\varepsilon \, \text{d}v$ and for its distance to the kernel of $\mathcal{T}_\varepsilon$, which we write $r_\varepsilon$, defined by the expansion $f_\varepsilon= \rho_\varepsilon F_\varepsilon + \varepsilon^{\alpha/2} r_\varepsilon$.
In the last section, we turn to the proof of our main result, which is the anomalous advection-diffusion limit of our kinetic model. We follow the method introduced in \cite{Cesbron2012} which consists in choosing a test function $\psi_\varepsilon(t,x,v)$ which is a solution, for some $\varphi \in \mathcal{C}^\infty_c ([0,T)\times\mathbb{R}^d)$, of the auxiliary problem:
\begin{equation*}
\begin{array}{llr}
&\varepsilon v \cdot \nabla_x \psi_\varepsilon - v \cdot \nabla_v \psi_\varepsilon = 0 \hspace{2cm} & \text{ in } [ 0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d,\\
&\psi_\varepsilon ( t, x, 0) = \varphi ( t, x) & \text{ in } [ 0, \infty) \times \mathbb{R}^d,
\end{array}
\end{equation*}
and show that the weak formulation of our problem, \eqref{eq:weakFracVFP}, with such test functions converges to the weak formulation of the advection fractional diffusion equation. We first prove this convergence in the non-critical case, i.e. when $1<\alpha<2$, and then we turn to the critical cases $\alpha=1$ and $\alpha=2$. The outline of the proof remains the same in both critical cases but a few differences appear: for $\alpha=2$ the only difference is a technical one in the study of the dissipative property of the perturbed operator whereas, in the case $\alpha=1$, we show that the equilibrium of the perturbed operator is independent of $\varepsilon$ and as such it stays perturbed by the electric field $E(t,x)$ even in the macroscopic limit. In all cases, our main result reads:
\begin{theorem}\label{theo:main}
Let $\alpha$ be in $(1,2]$ and $f_\varepsilon$ be the weak solution of \eqref{eq:vlfpeps}-\eqref{eq:vlfpEit} in the sense of Definition \ref{def:weaksol} on $[0,T)\times\mathbb{R}^d\times\mathbb{R}^d$ for some $T>0$ and with $f^{ in} \in L^2_{G^{-1}_{\alpha}(v)}( \mathbb{R}^d \times \mathbb{R}^d )\cap L^1_+ ( \mathbb{R}^d \times \mathbb{R}^d)$. Then, $f_\varepsilon$ converges weak-$*$ to $\rho(t,x) \, G_\alpha (v) $ in $L^\infty ( 0, T; L^2_{G_\alpha^{ -1} ( v)} ( \mathbb{R}^d \times \mathbb{R}^d ))$, where $\rho$ is the solution in the distributional sense of
\begin{equation} \label{macro}
\begin{array}{llr}
& \partial_t \rho + \mathrm{div}\, ( E \rho) + ( - \Delta)^{\alpha/2} \rho = 0 \hspace{2cm} & \text{ in } [ 0, T) \times \mathbb{R}^d, \\
& \rho( 0, x) = \rho^{in} ( x) & \text{ in } \mathbb{R}^d,
\end{array}
\end{equation}
where $\rho^{ in} = \int f^{ in} \, {\rm d} v$. In the case $\alpha=1$ the same anomalous diffusion limit holds but instead of $G_\alpha(v)$ the equilibrium distribution of velocity becomes
\begin{equation} \label{eq:GalphaE}
G_{\alpha,E} (t,x,v) = G_\alpha \big( v-E(t,x) \big).
\end{equation}
\end{theorem}
The advection fractional-diffusion equation \eqref{macro} describes the evolution of the macroscopic density $\rho$ under the effect of a drift, consequence of the kinetic electric field, and a fractional diffusion phenomenon. The regularity of the solutions of this type of equation has been studied for instance in \cite{silvestre2011holder}, \cite{silvestre2012differentiability}, and \cite{Droniou+2006}. We refer the interested reader to those articles and references therein for more details on this macroscopic model.
\section{Existence of solution}
Throughout this paper, for any $T>0$ we write $Q_T = [0,T)\times\mathbb{R}^d\times\mathbb{R}^d$ and denote by $\mathcal{C}_c^{\infty}(Q_T)$ the set of smooth functions compactly supported in $Q_T$. This section is devoted to the proof of the following result of existence and regularity of weak solutions:
\begin{theorem} \label{thm:exist}
Consider $f^{in}$ in $L^2 (\mathbb{R}^d\times\mathbb{R}^d)$. There exists a unique weak solution $f$ of \eqref{eq:vlfpE} on $Q_T$ in the sense that for any $\varphi\in\mathcal{C}_c^{\infty}(Q_T)$:
\begin{equation} \label{eq:weakFracVFP}
\begin{aligned}
&\underset{Q_T}{\iiint} f \Big( \partial_t \varphi + v\cdot \nabla_x \varphi + \big(E(t,x) - v\big)\cdot\nabla_v \varphi - \big(-\Delta\big)^{\alpha/2} \varphi \Big) \, \text{d}t \text{d}x \text{d}v \\
&\quad + \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} f^{in}(x,v) \varphi(0,x,v) \, \text{d}x \text{d}v = 0
\end{aligned}
\end{equation}
and this solution satisfies:
\begin{align}
&f(t,x,v) \geq 0 \mbox{ on } Q_T , \nonumber \\
&f \in \mathcal{X} := \bigg\{ f\in L^2(Q_T) : \frac{|f(t,x,v)-f(t,x,w)|}{|v-w|^{\frac{d+\alpha}{2}}} \in L^2(Q_T\times\mathbb{R}^{d})\bigg\}. \label{eq:L2txHv}
\end{align}
\end{theorem}
\begin{remark}
Note that this definition of $\mathcal{X}$ is equivalent to saying that it is the set of functions which are in $L^2([0,T)\times\mathbb{R}^d)$ with respect to time and position and in $H^{\alpha/2}(\mathbb{R}^d)$ with respect to velocity.
\end{remark}
\begin{proof}
We follow the method in \cite{Degond1986} and in \cite{Carrillo1998} for the proof of existence and uniqueness of solutions to the linear Vlasov-Fokker-Planck equation. The first part of the proof consists in solving our linear problem in a variational setting, applying the well-known Lax-Milgram theorem of functional analysis. We consider the Hilbert space $\mathcal{X}$ provided with the norm
\begin{equation} \label{def:normHs}
||f||_\mathcal{X} = \Bigg( ||f||^2_{L^2(Q_T)} + 2c_{d,\alpha}^{-1} ||(-\Delta)^{\frac{\alpha}{4}}f||^2_{L^2(Q_T)} \Bigg)^{\frac{1}{2}}
\end{equation}
where
$c_{d,\alpha}$ is defined in \eqref{def:c}. We refer the reader to \cite{DiNezza+} for properties of this functional space. Let us denote by $\mathcal{T}$ the transport operator, given by
\begin{equation*}
\mathcal{T}f = \partial_t f + v\cdot \nabla_x f - \big( v-E(t,x)\big) \cdot\nabla_v f.
\end{equation*}
We define the Hilbert space $\mathcal{Y}$ as:
\begin{equation} \label{def:hilY}
\mathcal{Y} = \bigg\{ f\in \mathcal{X} : \mathcal{T}f \in \mathcal{X}' \bigg\}
\end{equation}
where $\mathcal{X}'$ is the dual of $\mathcal{X}$. $(\cdot,\cdot)_{\mathcal{X},\mathcal{X}'}$ stands for the dual relation between $\mathcal{X}$ and its dual. $\mathcal{Y}$ is provided with the norm:
\begin{equation} \label{def:normY}
||f||^2_\mathcal{Y} = ||f||^2_{\mathcal{X}} + ||\mathcal{T}f||^2_{\mathcal{X}'}.
\end{equation}
In order to apply the Lax-Milgram theorem we consider the associated problem
\begin{equation} \label{eq:vlfpexp}
\begin{aligned}
&\partial_t \overline{f} + e^{-t} v\cdot \nabla_x \overline{f} + e^{t} E(t,x)\cdot\nabla_v \overline{f} + e^{\alpha t} \big(-\Delta\big)^{\alpha/2} \overline{f} + \lambda \overline{f} = 0 & (t,x,v) \in Q_T\\
& \overline{f} (0,x,v) = \overline{f}^{in} (x,v ) & (x,v) \in \mathbb{R}^d\times\mathbb{R}^d\\
\end{aligned}
\end{equation}
which is obtained formally from \eqref{eq:vlfpE} by setting $\overline{f} = e^{-(\lambda+d)t} f\big(t,x,e^{-t}v\big)$ and $\overline{f}^{in} (x,v) = f^{in} (x, v)$, for some $\lambda\geq 0$. A weak solution of \eqref{eq:vlfpexp} is a function $\overline{f} \in \mathcal{X}$ such that for any $\varphi$ in $\mathcal{C}_c^{\infty}(Q_T)$:
\begin{equation} \label{eq:wfvlfpexp}
\begin{aligned}
&\underset{Q_T}{\iiint} \Big( -\overline{f} \partial_t \varphi - e^{-t} \overline{f} v\cdot \nabla_x \varphi - e^t \overline{f} E(t,x)\cdot \nabla_v \varphi + e^{\alpha t} \overline{f} \big(-\Delta\big)^{\alpha/2} \varphi + \lambda \overline{f} \varphi \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v \\
&\hspace{1cm} - \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \overline{f}^{in} \varphi(0,x,v) \, {\rm d} x \, {\rm d} v =0.
\end{aligned}
\end{equation}
We first prove existence of a solution in $\mathcal{X}$ of equation \eqref{eq:vlfpexp} and we will prove afterwards how this implies existence of a solution of the fractional Vlasov-Fokker-Planck equation with the electric field $E$. \\
We know that $\mathcal{C}_c^{\infty}(Q_T)$ is a subspace of $\mathcal{X}$ with a continuous injection (see, e.g. \cite{DiNezza+}) and we define the prehilbertian norm:
$$ |\varphi|^2_{\mathcal{C}_c^{\infty}(Q_T)} = ||\varphi||^2_{\mathcal{X}} + \frac{1}{2} ||\varphi(0,\cdot,\cdot)||^2_{L^2(\mathbb{R}^d\times\mathbb{R}^d)}.$$
Now, we can introduce the bilinear form $a: \mathcal{X} \times \mathcal{C}_c^{\infty}(Q_T) \rightarrow \mathbb{R}$ as:
$$ a(\overline{f},\varphi) = \underset{Q_T}{\iiint} \Big( -\overline{f} \partial_t \varphi - e^{-t} \overline{f} v\cdot \nabla_x \varphi - e^t \overline{f} E(t,x)\cdot\nabla_v \varphi+ e^{\alpha t} \overline{f} \big(-\Delta\big)^{\alpha/2} \varphi + \lambda \overline{f} \varphi \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v $$
and the continuous bounded linear operator $L$ on $\mathcal{C}_c^{\infty}(Q_T)$ given by:
$$ L(\varphi) = -\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} f^{in} ( x, v) \varphi(0,x,v)\, {\rm d} x \, {\rm d} v .$$
To find a solution $\overline{f}$ in $\mathcal{X}$ of equation \eqref{eq:wfvlfpexp} is equivalent to finding a solution $\overline{f}$ in $\mathcal{X}$ of $a(\overline{f},\varphi) = L(\varphi)$ for any $\varphi \in \mathcal{C}_c^{\infty}(Q_T)$. Since $\overline{f}$ belongs to $\mathcal{X}$ it is easy to check that $a(\cdot,\varphi)$ is continuous. To verify the coercivity of $a$ we write:
\begin{align*}
&-\underset{Q_T}{\iiint} \Big( \varphi\partial_t \varphi + e^{-t}\varphi v\cdot \nabla_x \varphi - e^t \varphi E(t,x)\cdot\nabla_v \varphi \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v = \frac{1}{2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} |\varphi(0,x,v)|^2 \, {\rm d} x \, {\rm d} v
\end{align*}
and also:
\begin{align*}
\underset{Q_T}{\iiint} e^{\alpha t} \varphi \big(-\Delta\big)^{\alpha/2} \varphi \, {\rm d} t \, {\rm d} x \, {\rm d} v = \underset{Q_T}{\iiint} e^{\alpha t} |(-\Delta)^{\frac{\alpha}{4}}\varphi |^2 \, {\rm d} t \, {\rm d} x \, {\rm d} v .
\end{align*}
Hence, we see that
\begin{align*}
a(\varphi,\varphi) &= \underset{Q_T}{\iiint} \bigg( \lambda \varphi^2 + e^{\alpha t} |(-\Delta)^{\frac{ \alpha}{ 4}}\varphi |^2 \bigg) \, {\rm d} t \, {\rm d} x \, {\rm d} v + \frac{1}{2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} |\varphi(0,x,v)|^2 \, {\rm d} x \, {\rm d} v
\end{align*}
which can be bounded from below as $a(\varphi,\varphi) \geq \min(1,\lambda) |\varphi|_{\mathcal{C}_c^{\infty}(Q_T)}^2$. Thus, the Lax-Milgram theorem implies the existence of $\overline{f}$ in $\mathcal{X}$ satisfying \eqref{eq:wfvlfpexp}. Now, we want to show that this yields existence of a solution of \eqref{eq:weakFracVFP}. To that end, we first consider $\tilde{\varphi}$ in $\mathcal{C}_c^{\infty}(Q_T)$ such that $\varphi(t,x,v)= e^{\lambda t}\tilde{\varphi}(t,x,e^{-t}v)$. Equation \eqref{eq:wfvlfpexp} becomes (writing $\tilde{\varphi}(e^{-t} v)$ instead of $\tilde{\varphi}(t,x,e^{-t}v)$)
\begin{align*}
&\underset{Q_T}{\iiint} e^{\langlembda t} \Big( -\overline{f} \partial_t \tilde{\varphi}(e^{-t}v) - \overline{f} e^{-t} v\cdot \nabla_x \tilde{\varphi}(e^{-t}v) + \overline{f} e^{-t} v \cdot\nabla_v \tilde{\varphi}(e^{-t}v) - \overline{f} E(t,x)\cdot\nabla_v \tilde{\varphi} (e^{-t} v) \\
&\hspace{1.5cm} + \overline{f} \big(-\mathcal{D}elta\big)^{\alpha/2} \tilde{\varphi}(e^{-t}v) \Big) \, \text{d}t \text{d}x \text{d}v - \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} f^{in} \tilde{\varphi}(0,x,v) \, \text{d}x \text{d}v = 0.
\text{e}nd{align*}
Hence, if we define $f(t,x,v) = e^{(\langlembda+d)t} \overline{f}(t,x,e^t v)$ and change the variable $v\rightarrow e^{-t} v$, we recover equation \text{e}qref{eq:weakFracVFP}. It is straightforward to check that $f$ is in $\mathcal{X}$ and it satisfies \text{e}qref{eq:weakFracVFP} for any $\tilde{\varphi}$ in $\mathcal{C}_c^{\infty}(Q_T)$. Moreover, since $ f\mapsto df - \big(-\mathcal{D}elta\big)^{\alpha/2} f $ is a linear bounded operator from $\mathcal{X}$ to $\mathcal{X}'$, the transport term $\mathcal{T}f$ is in $\mathcal{X}'$, hence $f\in \mathcal{Y}$ and \text{e}qref{eq:weakFracVFP} is verified in $\mathcal{X}'$.
Since the VLFP equation is linear, to show uniqueness it is enough to show that the only solution with zero initial data is the null function $f \text{e}quiv 0$. Let $f$ be a solution of this problem in $\mathcal{Y}$. As before, we define $\overline{f} = e^{-(\langlembda + d)t} f(t,x,e^{-t}v)$, which satisfies equation \text{e}qref{eq:vlfpexp} with $\overline{f}^{in}$ null. Since $f\in \mathcal{Y}$, we know that $\overline{f}$ belongs to $\mathcal{X}$ and, moreover, that if we define $\widetilde{\mathcal{T}}$ as
\begin{equation} \langlebel{eq:defTtilde}
\widetilde{\mathcal{T}}\overline{f}= \partial_t \overline{f} + e^{-t} v\cdot \nabla_x \overline{f} +e^t E(t,x)\cdot\nabla_v \overline{f}
\text{e}nd{equation}
then $\widetilde{\mathcal{T}}\overline{f}$ belongs to $\mathcal{X}'$. Through integration by parts we have
$$ 2 \big( \widetilde{\mathcal{T}} \overline{f} , \overline{f} \big)_{\mathcal{X}',\mathcal{X}} = \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \big( \overline{f} \big)^2(T,x,v) \, {\rm d} x \, {\rm d} v \geq 0.$$
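For completeness, here is a formal sketch of this identity (rigorously one argues by density); we only use that $\overline{f}(0,\cdot,\cdot)=0$ and that $E$ does not depend on $v$:
\begin{align*}
2\,\overline{f}\;\widetilde{\mathcal{T}}\overline{f}
&= \partial_t\big(\overline{f}^{\,2}\big) + e^{-t}\,\nabla_x\cdot\big(v\,\overline{f}^{\,2}\big) + e^{t}\,\nabla_v\cdot\big(E(t,x)\,\overline{f}^{\,2}\big).
\text{e}nd{align*}
Integrating over $Q_T$, the two divergence terms vanish and only the boundary term in time at $t=T$ survives, which is the identity above.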
On the other hand, since $\overline{f}$ satisfies \text{e}qref{eq:vlfpexp}, $\widetilde{\mathcal{T}} \overline{f} = - \langlembda \overline{f} - \big(-\mathcal{D}elta\big)^{\alpha/2} \overline{f}$ in the sense of distributions which yields
\begin{equation} \langlebel{eq:ttildeof}
\big( \widetilde{\mathcal{T}} \overline{f} , \overline{f} \big)_{\mathcal{X}',\mathcal{X}} = - \underset{Q_T}{\iiint} \Big(\langlembda \overline{f}^2 + e^{\alpha t} \big| (-\mathcal{D}elta)^{\frac{\alpha}{4}} \overline{f} \big|^2 \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v \leq 0.
\text{e}nd{equation}
Hence both expressions vanish; in particular the integral of $\langlembda \overline{f}^2$ is zero, hence $f = \overline{f} \text{e}quiv 0$ a.e. on $Q_T$: the solution is unique. In order to prove the positivity of the solution, consider once again the associated problem \text{e}qref{eq:vlfpexp} and its solution $\overline{f}$ for some $\overline{f}^{in}\in L^2(\mathbb{R}^d\times\mathbb{R}^d)$ with $\overline{f}^{in} \geq 0$. Next, we define $\overline{f}_+$ and $\overline{f}_-$, the positive and negative parts of $\overline{f}$, given by:
\begin{align*}
&\overline{f}_+(t,x,v) = \max (\overline{f}(t,x,v),0); & \overline{f}_-(t,x,v) = \max (-\overline{f}(t,x,v), 0)
\text{e}nd{align*}
so that $\overline{f}= \overline{f}_+ - \overline{f}_-$ and we denote by $A_+$ and $A_-$ the respective supports of $\overline{f}_+$ and $\overline{f}_-$. Using $\widetilde{\mathcal{T}}$ defined in \text{e}qref{eq:defTtilde} we have through integration by parts
\begin{align*}
\big( \widetilde{\mathcal{T}}\overline{f} , \overline{f}_- \big) &= \underset{Q_T}{\iiint} \Big( \overline{f}_- \partial_t \big( \overline{f}_+ -\overline{f}_-\big)+ e^{-t} \overline{f}_- v\cdot \nabla_x \big( \overline{f}_+ -\overline{f}_-\big)\\
&\hspace{1.5cm}+ e^t \overline{f}_- E(t,x)\cdot\nabla_v\big( \overline{f}_+ -\overline{f}_-\big) \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v \\
&= - \frac{1}{2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \Big( \overline{f}_-^2(T,x,v) - \overline{f}_-^2(0,x,v) \Big) \, {\rm d} x \, {\rm d} v \\
&\hspace{1.5cm} + \underset{Q_T}{\iiint} \Big( \overline{f}_- \partial_t \overline{f}_+ + e^{-t} \overline{f}_- v\cdot \nabla_x \overline{f}_+ + e^t \overline{f}_- E(t,x)\cdot\nabla_v \overline{f}_+ \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v.
\text{e}nd{align*}
By definition of $\overline{f}_+$ and $\overline{f}_-$ we know that $A_+ \cap A_- = \text{e}mptyset$, hence wherever $\overline{f}_-$ is not zero, the derivatives $\partial_t \overline{f}_+$, $\nabla_x\overline{f}_+$ and $\nabla_v \overline{f}_+$ all vanish, and vice-versa. Moreover, we assume $\overline{f}^{in} \geq 0$, which means $\overline{f}_-(0,x,v) = 0$, so that
\begin{equation*}
\big( \widetilde{\mathcal{T}}\overline{f} , \overline{f}_- \big) = - \frac{1}{2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \overline{f}_-^2(T,x,v) \, {\rm d} x \, {\rm d} v \leq 0.
\text{e}nd{equation*}
Since $\overline{f}$ is solution of \text{e}qref{eq:vlfpexp} we know that $\widetilde{\mathcal{T}} \overline{f} = - \langlembda \overline{f} - \big(-\mathcal{D}elta\big)^{\alpha/2} \overline{f}$ in the sense of distributions which yields
\begin{align*}
\big( \widetilde{\mathcal{T}}\overline{f} , \overline{f}_- \big) &= \underset{Q_T}{\iiint} \Big( -\langlembda \overline{f}_- \big( \overline{f}_+ -\overline{f}_-\big) - \overline{f}_- \big(-\mathcal{D}elta\big)^{\alpha/2} \big( \overline{f}_+ -\overline{f}_-\big) \Big) \, {\rm d} t \, {\rm d} x \, {\rm d} v
\text{e}nd{align*}
where
\begin{align*}
\underset{\mathbb{R}^d}{\int} \overline{f}_- \big(-\mathcal{D}elta\big)^{\alpha/2} (\overline{f}_+) \, \text{d}v
&= \underset{\mathbb{R}^d}{\int} \overline{f}_- (v) \, c_{ d, \alpha} \mbox{ P.V. } \underset{\mathbb{R}^d}{\int} \frac{\overline{f}_+(v)-\overline{f}_+(w)}{|v-w|^{d+\alpha}} \, {\rm d} w \, {\rm d} v \\
& = \underset{ A_-}{\int} \overline{f}_- (v) \, c_{ d, \alpha} \mbox{ P.V. } \underset{ A_+}{\int} \frac{\overline{f}_+(v)-\overline{f}_+(w)}{|v-w|^{d+\alpha}} \, {\rm d} w \, {\rm d} v \\
&= - c_{ d, \alpha} \underset{ A_-}{\int} \mbox{ P.V. } \underset{ A_+}{\int} \frac{\overline{f}_-(v) \overline{f}_+(w) }{|v-w|^{d+\alpha}} \, {\rm d} w \, {\rm d} v \leq 0.
\text{e}nd{align*}
Note that this integral is well defined because $\overline{f} \in \mathcal{X}$. Hence, we have:
\begin{align*}
\big( \widetilde{\mathcal{T}}\overline{f} , \overline{f}_- \big) &= \underset{Q_T}{\iiint }\Big( \langlembda \overline{f}_-^2 - \overline{f}_- \big(-\mathcal{D}elta\big)^{\alpha/2} \overline{f}_+ + \big| (-\mathcal{D}elta)^{\alpha/4} \overline{f}_- \big|^2 \Big) \, \text{d}t \text{d}x \text{d}v \geq 0 .
\text{e}nd{align*}
This proves that $\big( \widetilde{\mathcal{T}}\overline{f} , \overline{f}_- \big) = 0$ which, in particular, means $\langlembda \overline{f}_-^2 = 0$ and concludes the proof of positivity, and consequently the proof of Theorem \ref{thm:exist}.
\text{e}nd{proof}
\section{A priori estimates}
Let us consider the operator $\, \mathcal{T}_\varepsilon$, a perturbation of the fractional Fokker-Planck operator with an electric field $E(t,x)\in \big( W^{1,\infty}( [0,T)\times\mathbb{R}^d ) \big)^d$, defined as
\begin{equation} \langlebel{def:Teps}
\, \mathcal{T}_\varepsilon ( f_\varepsilon) = \nabla_v\cdot \Big[ \big( v- \varepsilon^{\alpha-1} E(t,x) \big) f_\varepsilon \Big] - \big(-\mathcal{D}elta_v\big)^{\alpha/2} f_\varepsilon.
\text{e}nd{equation}
We will prove the following:
\begin{prop} \langlebel{prop:equi}
For any $\varepsilon > 0$ fixed, there exists a unique positive equilibrium distribution $F_\varepsilon$ solution of:
\begin{equation}\langlebel{eq:equi}
\, \mathcal{T}_\varepsilon (F_\varepsilon) = \nabla_v\cdot \Big[ \big( v- \varepsilon^{\alpha-1} E(t,x) \big) F_\varepsilon \Big] - \big(-\mathcal{D}elta_v\big)^{\alpha/2} F_\varepsilon = 0, \hspace{1cm} \int_{ \mathbb{R}^d} F_\varepsilon \, {\rm d} v = 1.
\text{e}nd{equation}
\text{e}nd{prop}
\begin{proof}
The Fourier transform in velocity of the equilibrium equation \text{e}qref{eq:equi} reads
\begin{equation*}
\xi \cdot \nabla_\xi \widehat{F_\varepsilon} = - \Big( i \xi \cdot \varepsilon^{\alpha-1} E(t,x) + |\xi |^{\alpha} \Big) \widehat{F_\varepsilon},
\text{e}nd{equation*}
for which we can compute the explicit solution:
\begin{equation} \langlebel{eq:Fouriersol}
\widehat{F_\varepsilon} (t,x,\xi) = \kappa e^{ - i \varepsilon^{\alpha-1} \xi \cdot E(t,x) - |\xi |^{\alpha}/ \alpha },
\text{e}nd{equation}
where $\kappa$ is a positive constant which ensures the normalisation of the equilibrium. Now, although the inverse Fourier transform
$\mathcal{F}^{-1} \big( \widehat{F_\varepsilon} \big) (t,x,v)$
is not explicit let us note that $F_\varepsilon$ can be expressed as a translation of the equilibrium distribution $G_\alpha$ of the fractional Fokker-Planck operator:
\begin{equation}\langlebel{eq:Feps}
F_\varepsilon (t,x,v) = G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big).
\text{e}nd{equation}
Hence, the positivity and normalization of $F_\varepsilon$ follows from the properties of $G_\alpha$.
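A quick way to see \text{e}qref{eq:Feps} from \text{e}qref{eq:Fouriersol} is the translation property of the Fourier transform. With the convention $\widehat{u}(\xi)=\int_{\mathbb{R}^d} e^{-i v\cdot\xi}\, u(v) \, {\rm d} v$ (other standard conventions only modify signs and constant factors), and writing $a := \varepsilon^{\alpha-1} E(t,x)$, one has
\begin{equation*}
\widehat{G_\alpha(\cdot - a)}\,(\xi) \;=\; e^{-i\, a\cdot\xi}\;\widehat{G_\alpha}(\xi) \;=\; e^{-i\, a\cdot\xi \,-\, |\xi|^{\alpha}/\alpha},
\text{e}nd{equation*}
which is precisely \text{e}qref{eq:Fouriersol}; the normalisation $\int G_\alpha \, {\rm d} v = 1$ corresponds to $\widehat{G_\alpha}(0)=1$.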
\text{e}nd{proof}
\begin{prop}\langlebel{prop:equi2}
Let $F_\varepsilon$ be the unique normalized equilibrium distribution of \text{e}qref{def:Teps}. Then there exist positive constants
$\mu$, $c_1$, $c_2$ and $c_3$ such that:
\begin{itemize}
\item[ \text{e}mph{(i)}] $\, {\rm d}isplaystyle c_1 G_\alpha \leq F_\varepsilon \leq c_2 G_\alpha $,
\item[ \text{e}mph{(ii)}] $\, {\rm d}isplaystyle \bigg\lVert \frac{\partial_t F_\varepsilon}{F_\varepsilon} \bigg\lVert_{ L^\infty ( \, {\rm d} v \, {\rm d} x \, {\rm d} t)}, \bigg\lVert \frac{v\cdot \nabla_x F_\varepsilon}{F_\varepsilon} \bigg\lVert_{ L^\infty ( \, {\rm d} v \, {\rm d} x \, {\rm d} t)} \leq \varepsilon^{\alpha-1} \mu$,
\item[ \text{e}mph{(iii)}] $\, {\rm d}isplaystyle | F_\varepsilon - G_\alpha | \leq \varepsilon^{ \alpha - 1} c_3 G_\alpha $.
\text{e}nd{itemize}
for $\varepsilon > 0$ small enough.
\text{e}nd{prop}
\begin{proof} We shall start by proving part (i). Let us assume that $L$ is an arbitrary vector in $ \mathbb{R}^d$
such that $ | L| \leq 1$; then it is easy to see that there exists $R_1 > 0$ big enough such that
\begin{eqnarray*}
\frac{ 1}{ 2^{ \frac{ 1}{ d + \alpha}}} \leq \bigg| 1 - \frac{ | L|}{ | v|} \bigg| \leq \bigg| \frac{ v}{ | v|} - \frac{ L}{ | v|} \bigg|,
\text{e}nd{eqnarray*}
for all $| v| > R_1$. Hence, it follows that
\begin{eqnarray*}
\frac{ 1}{ | v - L|^{ d + \alpha}} \leq \frac{ 2}{ | v|^{ d + \alpha}},
\text{e}nd{eqnarray*}
for all $| v| > R_1$. Thus, using \text{e}qref{eq:Gbounds} we obtain that there exists $\widetilde{ C} > 0$ and $R > 0$ big enough such that
\begin{eqnarray*}
G_\alpha ( v - L) \leq \widetilde{ C} G_\alpha ( v),
\text{e}nd{eqnarray*}
for all $ | v| > R$ and all $L \in \mathbb{R}^d$ with $|L| \leq 1$. Now, let $C_2 > 0$ be such that
\begin{eqnarray*}
C_2 \bigg( \min_{ v \in B ( 0, R)} G_\alpha ( v) \bigg) \geq \lVert G_\alpha \lVert_\infty,
\text{e}nd{eqnarray*}
where $B ( 0, R) \subset \mathbb{R}^d$, is the ball of radius $R$ centered at the origin. Let us note that the minimum exists since $G_\alpha$ is continuous.
Thus choosing $\mu_2 = \widetilde{ C} \vee C_2$, where $a \vee b$ denotes the maximum between $a$ and $b$, we obtain
\begin{eqnarray*}
G_\alpha ( v - L) \leq \mu_2 G_\alpha ( v).
\text{e}nd{eqnarray*}
Next, applying this bound at the point $w - L$ and with $-L$ in place of $L$ (note that $|-L| = |L| \leq 1$), we obtain
\begin{eqnarray*}
G_\alpha ( w) \leq \mu_2 G_\alpha ( w - L),
\text{e}nd{eqnarray*}
Thus, taking $\mu_1 = 1 / \mu_2$ we obtain
\begin{eqnarray*}
\mu_1 G_\alpha ( v) \leq G_\alpha ( v - L),
\text{e}nd{eqnarray*}
for all $v \in \mathbb{R}^d$ and $| L| \leq 1$.
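Part (i) of the proposition then follows from these two translation bounds: once $\varepsilon^{\alpha-1}\lVert E \lVert_{L^\infty} \leq 1$ (which holds for $\varepsilon$ small enough when $\alpha > 1$), we may take $L = \varepsilon^{\alpha-1}E(t,x)$ above and recall \text{e}qref{eq:Feps} to get
\begin{equation*}
\mu_1\, G_\alpha(v) \;\leq\; G_\alpha\big( v - \varepsilon^{\alpha-1} E(t,x) \big) \;=\; F_\varepsilon(t,x,v) \;\leq\; \mu_2\, G_\alpha(v),
\text{e}nd{equation*}
that is, (i) holds with $c_1 = \mu_1$ and $c_2 = \mu_2$.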
On the other hand, for part (ii), let us start by noting that thanks to \text{e}qref{eq:Feps}, $F_\varepsilon$ satisfies the following identities:
\begin{eqnarray*}
\frac{\partial_t F_\varepsilon}{F_\varepsilon} = - \varepsilon^{\alpha-1} \partial_t E(t,x) \cdot \frac{\nabla_v \, G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big)}{ G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big)},
\text{e}nd{eqnarray*}
and
\begin{eqnarray*}
\frac{v\cdot\nabla_x F_\varepsilon}{F_\varepsilon} = - \varepsilon^{\alpha-1} \nabla_x E(t,x) \frac{ v \cdot \nabla_v \, G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big)}{G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big)}.
\text{e}nd{eqnarray*}
Hence, thanks to the assumption $E \in W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)^d$ we only need to prove that there exists a $C > 0$ such that
\begin{equation}\langlebel{eq:Gbound}
| v \cdot \nabla_v \, G_\alpha ( v - L) | \leq C G_\alpha ( v - L),
\text{e}nd{equation}
for all $v \in \mathbb{R}^d$, and all $L \in \mathbb{R}^d$ with $| L| \leq 1$. This follows via a similar line of reasoning as in the proof of part (i) around the control \text{e}qref{eq:gradGbounds}.
Finally we prove part (iii). Since $G_\alpha$ is smooth, by the mean value theorem we obtain
\begin{align}
| F_\varepsilon ( v) - G_\alpha ( v)| &= | G_\alpha ( v - \varepsilon^{ \alpha - 1} E) - G_\alpha ( v) | \nonumber \\
&\leq \varepsilon^{ \alpha - 1} | E | \, | \nablabla_v \, G_\alpha ( v - \vartheta \, \varepsilon^{ \alpha - 1} E )|, \nonumber
\text{e}nd{align}
where $\vartheta \in ( 0, 1)$. Thus, the result follows thanks to \text{e}qref{eq:Gbound} and since $E \in W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)^d$.
\text{e}nd{proof}
The key ingredient in order to obtain the a priori estimates needed to pass to the limit in \text{e}qref{eq:vlfpeps} is the positivity of the dissipation which we state in the following result.
\begin{prop}\langlebel{prop:posisemi}
Let us consider the operator $\, \mathcal{T}_\varepsilon$ defined by \text{e}qref{def:Teps}. The associated dissipation, defined below, satisfies
\begin{equation}\langlebel{eq:dissipation}
\mathcal{D}_\varepsilon (f) := - \iint \, \mathcal{T}_\varepsilon ( f) \frac{ f}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x = \iiint \bigg( \frac{f(v)}{F_\varepsilon (v)} -\frac{f(w)}{F_\varepsilon (w)} \bigg)^2 \frac{F_\varepsilon(v)}{|v-w|^{d+\alpha}} \, {\rm d} w \, {\rm d} v \, {\rm d} x,
\text{e}nd{equation}
and if we write $\rho(t,x) = \int f(t,x,v) \, {\rm d} v$, then for all $f \in L^2_{F_\varepsilon^{ -1}} ( \mathbb{R}^d\times\mathbb{R}^d)$ we have
\begin{equation}\langlebel{eq:disscont}
\mathcal{D}_\varepsilon (f) \geq \int ( f - \rho F_\varepsilon )^2 \frac{\, {\rm d} x \, {\rm d} v}{F_\varepsilon (v)}.
\text{e}nd{equation}
\text{e}nd{prop}
\begin{proof}
The Poincar\'e type inequality \text{e}qref{eq:disscont} is a particular case of the so-called $\hbox{\rlap{I}\kern.16em P}hi$-entropy inequalities introduced in \cite{Gentil+2008}. For the sake of completeness
we shall give a sketch of the proof adapted to the case that we need.
We shall first start proving \text{e}qref{eq:dissipation}. Writing $\hbox{\rlap{I}\kern.16em P}hi_\varepsilon = v-\varepsilon^{\alpha-1} E(t,x)$ and $g=f/F_\varepsilon$, and since $F_\varepsilon$ satisfies \text{e}qref{eq:equi} we have:
\begin{align*}
\mathcal{D}_\varepsilon (f) &= - \iint \Big( \nabla_v\cdot \left( \hbox{\rlap{I}\kern.16em P}hi_\varepsilon g F_\varepsilon\right) g - \big(-\mathcal{D}elta_v\big)^{\alpha/2} \left( gF_\varepsilon \right)g\Big) \, {\rm d} v \, {\rm d} x \\
&=- \iint \Big( \hbox{\rlap{I}\kern.16em P}hi_\varepsilon F_\varepsilon \frac{1}{2}\nabla_v (g^2) + \nabla_v\cdot (\hbox{\rlap{I}\kern.16em P}hi_\varepsilon F_\varepsilon) g^2 - \big(-\mathcal{D}elta_v\big)^{\alpha/2}(g) gF_\varepsilon \Big) \, {\rm d} v \, {\rm d} x\\
&= \iint \Big( \frac{1}{2} g^2 \big(-\mathcal{D}elta_v\big)^{\alpha/2} (F_\varepsilon) - g^2 \big(-\mathcal{D}elta_v\big)^{\alpha/2} (F_\varepsilon) + g\big(-\mathcal{D}elta_v\big)^{\alpha/2} (g) F_\varepsilon \Big)\, {\rm d} v \, {\rm d} x\\
&= \iint \Big( g\big(-\mathcal{D}elta_v\big)^{\alpha/2} (g) -\frac{1}{2} \big(-\mathcal{D}elta_v\big)^{\alpha/2}(g^2) \Big) F_\varepsilon \, {\rm d} v \, {\rm d} x .
\text{e}nd{align*}
Hence, using \text{e}qref{def:fracLapInt} we see that:
\begin{align*}
&\iint \Big( g\big(-\mathcal{D}elta_v\big)^{\alpha/2} (g) - \frac{1}{2} \big(-\mathcal{D}elta_v\big)^{\alpha/2} (g^2) \Big) F_\varepsilon \, {\rm d} v \, {\rm d} x \\
& \hspace{1cm}= \iiint \bigg(\frac{g(v) \big( g(v) - g(w) \big)}{|v-w|^{d+\alpha}} - \frac{1}{2} \frac{g^2(v) - g^2(w)}{|v-w|^{d+\alpha}} \bigg) F_\varepsilon(t,x,v) \, {\rm d} w \, {\rm d} v \, {\rm d} x\\
& \hspace{1cm}=\frac{1}{2} \iiint \frac{\big( g(v)-g(w)\big)^2}{|v-w|^{d+\alpha}} F_\varepsilon(t,x,v) \, {\rm d} w \, {\rm d} v \, {\rm d} x .
\text{e}nd{align*}
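The last equality is a pointwise algebraic identity, applied under the integral with $a=g(v)$ and $b=g(w)$:
\begin{equation*}
a\,(a-b) \;-\; \tfrac{1}{2}\,\big(a^2-b^2\big) \;=\; \tfrac{1}{2}\,a^2 - a\,b + \tfrac{1}{2}\,b^2 \;=\; \tfrac{1}{2}\,(a-b)^2 .
\text{e}nd{equation*}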
Recall that $F_\varepsilon(t,x,v) = G_\alpha \big( v - \varepsilon^{\alpha-1} E(t,x) \big)$, therefore through a simple change of variable, if we call $h(t,x,v) = g\big( t,x, v + \varepsilon^{\alpha -1} E(t,x) \big)$ we have:
\begin{align*}
\mathcal{D}_\varepsilon (f) &= \frac{1}{2} \iiint \frac{\big( h(t,x,v)-h(t,x,w)\big)^2}{|v-w|^{d+\alpha}} G_\alpha(v) \, {\rm d} w \, {\rm d} v \, {\rm d} x .
\text{e}nd{align*}
In order to prove the control \text{e}qref{eq:disscont} we consider the semigroup associated with $\big(-\mathcal{D}elta\big)^{\alpha/2}$
\begin{equation}
\frac{\, {\rm d} }{\, {\rm d} t} P_t (h)(v) = - \big(-\mathcal{D}elta\big)^{\alpha/2} \Big( P_t (h) \Big)(v)
\text{e}nd{equation}
with $P_0 (h)(v) = h(v)$ and we see, using \text{e}qref{eq:Fouriersol}, that if we introduce the kernel
$$ K_t (v)= \mathcal{F}^{-1} \Big( \kappa e^{ - t |\xi |^{\alpha}/ \alpha }\Big) (v)$$
where $\kappa$ is a constant normalizing $K_1$, then we have explicitly $P_t(h) = K_t \ast h$. For $s\in [0,t]$ we consider
\begin{equation}
\psi(s) = P_s ( H^2)(v)
\text{e}nd{equation}
with $H = P_{t-s} (h)$. We then have for $s\in[0,t]$:
\begin{align*}
\psi'(s) &= \frac{\, {\rm d} }{\, {\rm d} s} \bigg[ K_s \ast \Big( K_{t-s} \ast h \Big)^2 \bigg] \\
&=\Big(\frac{\, {\rm d}}{\, {\rm d} s} K_s \Big) \ast \Big( K_{t-s} \ast h \Big)^2 + K_s \ast \frac{\, {\rm d}}{\, {\rm d} s} \Big[ \big( K_{t-s} \ast h \big)^2 \Big]\\
&= P_s \Big(- \big(-\mathcal{D}elta\big)^{\alpha/2} H^2 \Big) + 2 P_s \Big( H \big(-\mathcal{D}elta\big)^{\alpha/2} H \Big) \\
&= P_s \bigg( \int \frac{\big(H(v)-H(w) \big)^2 }{|v-w|^{d+\alpha}} \, \text{d}w \bigg)
\text{e}nd{align*}
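In the last equality we used the fractional carr\'e du champ identity, which follows from the integral representation \text{e}qref{def:fracLapInt} exactly as in the previous computation (the normalising constant $c_{ d, \alpha}$ is again omitted):
\begin{equation*}
2\, H(v)\,\big(-\mathcal{D}elta\big)^{\alpha/2} H(v) \;-\; \big(-\mathcal{D}elta\big)^{\alpha/2}\big(H^2\big)(v)
\;=\; \mbox{ P.V. } \int_{\mathbb{R}^d} \frac{\big(H(v)-H(w)\big)^2}{|v-w|^{d+\alpha}} \, {\rm d} w \;\geq\; 0 .
\text{e}nd{equation*}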
Using the integral expression of the convolution and Jensen's inequality it is straightforward to see that $\big( P_{t-s}(h)(v) - P_{t-s}(h)(w) \big)^2 \leq P_{t-s} \big(h(v)-h(w)\big)^2$. Therefore, using Fubini's theorem, we have:
\begin{align*}
\psi'(s)(v) &\leq P_s \bigg( P_{t-s} \bigg( \int \frac{\big(h(v)-h(w)\big)^2}{|v-w|^{d+\alpha}} \, {\rm d} w \bigg) \bigg) = P_t \bigg( \int \frac{\big(h(v)-h(w)\big)^2}{|v-w|^{d+\alpha}} \, {\rm d} w \bigg).
\text{e}nd{align*}
Integrating over $s\in[0,t]$ one gets
\begin{align*}
P_t \big( h^2\big)(v) - \Big( P_t (h)(v) \Big)^2 \leq t P_t \bigg( \int \frac{\big(h(v)-h(w)\big)^2}{|v-w|^{d+\alpha}} \, {\rm d} w \bigg).
\text{e}nd{align*}
Finally, taking $t=1$ and evaluating at $v=0$ we get:
\begin{equation}
\int h^2(w) G_\alpha (w) \, {\rm d} w - \bigg( \int h(w) G_\alpha (w) \, {\rm d} w\bigg)^2 \leq \iint \frac{\big(h(v)-h(w)\big)^2}{|v-w|^{d+\alpha}} G_\alpha (v) \, {\rm d} v\, {\rm d} w.
\text{e}nd{equation}
Through a simple change of variables, inverse of the one we did earlier, we obtain
\begin{equation} \langlebel{eq:fracSob}
\int g^2(w) F_\varepsilon(w) \, {\rm d} w - \bigg( \int g(w) F_\varepsilon (w) \, {\rm d} w\bigg)^2 \leq \iint \frac{\big(g(v)-g(w)\big)^2}{|v-w|^{d+\alpha}} F_\varepsilon (v) \, {\rm d} v\, {\rm d} w.
\text{e}nd{equation}
Finally, replacing $g$ by $f/F_\varepsilon$, since $F_\varepsilon$ is normalized, we recover \text{e}qref{eq:disscont}.
\text{e}nd{proof}
Since the operator $\, \mathcal{T}_\varepsilon$ is negative semidefinite in $L^2_{F_{\varepsilon}^{-1}} ( \mathbb{R}^d )$ it is natural to look for bounds of the quadratic entropy associated to
solutions $f_\varepsilon$ of \text{e}qref{eq:vlfpeps}. We gather the appropriate a priori estimates that we shall need to pass to the limit in \text{e}qref{eq:vlfpeps} in
the following result.
\begin{prop}\langlebel{prop:apriori}
Let the assumptions of Theorem \ref{theo:main} be satisfied and let $f_\varepsilon$ be the solution of \text{e}qref{eq:vlfpeps}. We introduce the residue $r_\varepsilon$ through the macro-micro decomposition $f_\varepsilon = \rho_\varepsilon F_\varepsilon + \varepsilon^{ \alpha / 2} r_\varepsilon$. Then, uniformly in $\varepsilon \in (0,1)$, we have:
\begin{itemize}
\item[ \text{e}mph{(i)}] $( f_\varepsilon)$ is bounded in $L^\infty ([0,T) ; L^2_{G_{\alpha}^{ -1} ( v)} ( \mathbb{R}^d \times \mathbb{R}^d))$ and
in $L^\infty ( [0,T); L^1 ( \mathbb{R}^d\times\mathbb{R}^d))$,
\item[ \text{e}mph{(ii)}] $( \rho_\varepsilon)$ is bounded in $L^\infty ( [0,T); L^2 ( \mathbb{R}^d))$,
\item[ \text{e}mph{(iii)}] $( r_\varepsilon)$ is bounded in $L^2 ([0,T) ; L^2_{G_{\alpha}^{-1}(v)} (\mathbb{R}^d \times\mathbb{R}^d ))$.
\text{e}nd{itemize}
\text{e}nd{prop}
\begin{proof}
Multiplying \text{e}qref{eq:vlfpeps} by $f_\varepsilon / F_\varepsilon$, integrations by parts yield
\begin{eqnarray*}
\frac{ \varepsilon^{ \alpha-1}}{ 2} \frac{ \, {\rm d} }{ \, {\rm d} t} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint}\frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x + \frac{ \varepsilon^{\alpha-1}}{ 2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint}\frac{ f_\varepsilon^2}{ F_\varepsilon} \frac{ \, \partial_t F_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x + \frac{ 1}{ 2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ F_\varepsilon} \frac{ v \cdot \nablabla_x F_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x + \frac{1}{\varepsilon }\mathcal{D}_\varepsilon (f_\varepsilon) =0.
\text{e}nd{eqnarray*}
Thus, thanks to Proposition \ref{prop:equi2}, part (i) and (ii), and \text{e}qref{eq:disscont} we obtain
\begin{equation}\langlebel{eq:auxi}
\frac{ \varepsilon^\alpha}{ 2} \frac{ \, {\rm d} }{ \, {\rm d} t} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x + \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ (f_\varepsilon - \rho_\varepsilon F_\varepsilon)^2}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x \leq \varepsilon^\alpha \mu\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint}\frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x.
\text{e}nd{equation}
Whence, part (i) follows by Gronwall's lemma and the fact that the weights $1/G_{\alpha}$ and
$1/F_\varepsilon$ are equivalent uniformly in $\varepsilon$ which follows from Proposition \ref{prop:equi2}, part (i). On the other hand, part (ii) follows thanks to
the inequality
\begin{eqnarray*}
\rho_\varepsilon \leq \bigg( \int \frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \bigg)^{ 1/2},
\text{e}nd{eqnarray*}
which is an immediate consequence of Cauchy-Schwarz and the fact $\int F_\varepsilon \, {\rm d} v = 1$. Finally, part (iii) follows from \text{e}qref{eq:auxi} after integrating with respect to $t$ over $(0,T)$ and thanks to Proposition \ref{prop:equi2} part (ii).
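Let us make the Gronwall argument for part (i) and the bound on the residue for part (iii) slightly more explicit (a sketch). Dropping the nonnegative second term of \text{e}qref{eq:auxi} and dividing by $\varepsilon^{\alpha}/2$ gives
\begin{equation*}
\frac{ \, {\rm d} }{ \, {\rm d} t} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x \;\leq\; 2\mu \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x ,
\text{e}nd{equation*}
hence $\iint f^2_\varepsilon(t)/F_\varepsilon \, {\rm d} v \, {\rm d} x \leq e^{2\mu T} \iint (f^{in})^2/F_\varepsilon(0) \, {\rm d} v \, {\rm d} x$ uniformly in $t\in[0,T)$ and $\varepsilon\in(0,1)$. On the other hand, integrating \text{e}qref{eq:auxi} in time and using $f_\varepsilon - \rho_\varepsilon F_\varepsilon = \varepsilon^{\alpha/2} r_\varepsilon$ yields
\begin{equation*}
\varepsilon^{\alpha} \int_0^T \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ r_\varepsilon^2}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x \, {\rm d} t
\;\leq\; \frac{\varepsilon^{\alpha}}{2} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ (f^{in})^2}{ F_\varepsilon(0)} \, {\rm d} v \, {\rm d} x
\;+\; \varepsilon^{\alpha} \mu \int_0^T \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ F_\varepsilon} \, {\rm d} v \, {\rm d} x \, {\rm d} t ,
\text{e}nd{equation*}
which, after dividing by $\varepsilon^{\alpha}$ and using the uniform equivalence of the weights $F_\varepsilon^{-1}$ and $G_\alpha^{-1}$, is the bound of part (iii).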
\text{e}nd{proof}
\section{Proof of Theorem \ref{theo:main}}
We shall follow the method introduced in \cite{Cesbron2012}. Let us start by introducing the following auxiliary problem: for $\varphi \in \mathcal{C}_c^{\infty} ([0,T)\times\mathbb{R}^d)$, define $\psi_\varepsilon$ the unique solution of
\begin{equation} \langlebel{eq:auxeq}
\begin{array}{llr}
&\varepsilon v \cdot \nablabla_x \psi_\varepsilon - v \cdot \nablabla_v \psi_\varepsilon = 0 \hspace{2cm} & \text{ in } [ 0, \infty) \times \mathbb{R}^d \times \mathbb{R}^d,\\
&\psi_\varepsilon ( t, x, 0) = \varphi ( t, x) & \text{ in } [ 0, \infty) \times \mathbb{R}^d
\text{e}nd{array}
\text{e}nd{equation}
The function $\psi_\varepsilon$ can be obtained readily via the method of
characteristics and can be expressed in an explicit manner as follows:
\begin{equation}\langlebel{aux:function}
\psi_\varepsilon ( t, x, v) = \varphi ( t, x + \varepsilon v).
\text{e}nd{equation}
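Indeed, one checks directly that \text{e}qref{aux:function} solves \text{e}qref{eq:auxeq}: since $\psi_\varepsilon$ depends on $v$ only through the combination $x+\varepsilon v$,
\begin{equation*}
\varepsilon\, v\cdot\nabla_x \psi_\varepsilon(t,x,v) \;=\; \varepsilon\, v\cdot\big(\nabla_x\varphi\big)(t,x+\varepsilon v) \;=\; v\cdot\nabla_v \psi_\varepsilon(t,x,v),
\qquad \psi_\varepsilon(t,x,0)=\varphi(t,x).
\text{e}nd{equation*}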
Next, multiplying \text{e}qref{eq:vlfpeps} by $\psi_\varepsilon$ and through integrations by parts we obtain
\begin{align}
& \underset{Q_T}{\iiint} f_\varepsilon \Big( \varepsilon^{ \alpha-1} \, \partial_t \psi_\varepsilon + v \cdot \nablabla_x \psi_\varepsilon - \frac{1}{\varepsilon}( v - \varepsilon^{ \alpha - 1} E ) \cdot \nablabla_v \psi_\varepsilon - \frac{1}{\varepsilon}( - \mathcal{D}elta)^{ \alpha / 2} \psi_\varepsilon \Big) \, {\rm d} v \, {\rm d} x \, {\rm d} t \nonumber \\
& \hspace{5cm} + \varepsilon^{\alpha-1} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} f^{ in} ( x, v) \psi_\varepsilon ( 0, x, v) \, {\rm d} v \, {\rm d} x =0 \, . \langlebel{eq:weakform}
\text{e}nd{align}
Let us note the following
\begin{align}
( - \mathcal{D}elta_v)^{ \alpha / 2} \psi_\varepsilon ( t, x, v) = \varepsilon^{ \alpha} \big( ( - \mathcal{D}elta _x)^{ \alpha / 2} \varphi \big) ( t, x + \varepsilon v),\langlebel{eq:auxiden} \\
\nabla_v \psi_\varepsilon (t,x,v) = \varepsilon \nabla_x \varphi(t, x + \varepsilon v),
\text{e}nd{align}
which follows after a simple computation using the definition \text{e}qref{def:fracLapInt} of the fractional Laplacian. Thus using the auxiliary equation \text{e}qref{eq:auxeq} and plugging \text{e}qref{eq:auxiden} into \text{e}qref{eq:weakform} yields
\begin{align}
& \int_0^\infty \iint f_\varepsilon \Big( \, \partial_t \varphi ( t, x + \varepsilon v) + E \cdot \nablabla_x \varphi ( t, x + \varepsilon v) - ( - \mathcal{D}elta_x)^{ \alpha / 2} \varphi ( t, x + \varepsilon v) \Big) \, {\rm d} v \, {\rm d} x \, {\rm d} t \nonumber \\
& \hspace{7cm} + \iint f^{ in} ( x, v) \varphi ( 0, x + \varepsilon v) \, {\rm d} v \, {\rm d} x =0 \, . \langlebel{eq:weakform1}
\text{e}nd{align}
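The key point in this step is that the stiff transport terms cancel exactly. Indeed, using $\nabla_v\psi_\varepsilon(t,x,v)=\varepsilon\,\big(\nabla_x\varphi\big)(t,x+\varepsilon v)$ and \text{e}qref{eq:auxiden},
\begin{align*}
-\frac{1}{\varepsilon}\big(v-\varepsilon^{\alpha-1}E\big)\cdot\nabla_v\psi_\varepsilon
&= -\,v\cdot\big(\nabla_x\varphi\big)(t,x+\varepsilon v) \;+\; \varepsilon^{\alpha-1}\, E\cdot\big(\nabla_x\varphi\big)(t,x+\varepsilon v),\\
-\frac{1}{\varepsilon}\big(-\mathcal{D}elta_v\big)^{\alpha/2}\psi_\varepsilon
&= -\,\varepsilon^{\alpha-1}\,\big(\big(-\mathcal{D}elta_x\big)^{\alpha/2}\varphi\big)(t,x+\varepsilon v),
\text{e}nd{align*}
so the $O(1)$ term $v\cdot\nabla_x\psi_\varepsilon = v\cdot\big(\nabla_x\varphi\big)(t,x+\varepsilon v)$ in \text{e}qref{eq:weakform} is cancelled, the whole bracket is of order $\varepsilon^{\alpha-1}$, and dividing \text{e}qref{eq:weakform} by $\varepsilon^{\alpha-1}$ yields \text{e}qref{eq:weakform1}.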
\subsection{The non-critical case: $1< \alpha < 2$}
In order to pass to the limit in this weak formulation, we introduce the following two results.
\begin{lemma}\langlebel{lem:cv2}
Let $( f_\varepsilon)$ be the sequence of solutions of \text{e}qref{eq:vlfpeps}, and $\rho$ be the limit of $(\rho_\varepsilon)$ which exists thanks to Proposition \ref{prop:apriori} part (ii), then
\begin{eqnarray*}
f_\varepsilon(t,x,v) \rightharpoonup \rho(t,x) G_\alpha (v) \quad \text{ weakly in } L^\infty ([0,T); L^2_{G^{-1}_{\alpha}(v)} ( \mathbb{R}^d \times \mathbb{R}^d))
\text{e}nd{eqnarray*}
\text{e}nd{lemma}
\begin{proof}
This lemma follows directly from Proposition \ref{prop:apriori}. Since $f_\varepsilon$ is uniformly bounded, it converges weakly, up to the extraction of a subsequence (not relabelled), in $L^\infty([0,T);L^2_{G^{-1}_{\alpha}(v)}(\mathbb{R}^d\times\mathbb{R}^d))$. From the bounds on $F_\varepsilon$ established in Proposition \ref{prop:equi2} and the boundedness of $\rho_\varepsilon$ in $L^\infty( [0,T); L^2(\mathbb{R}^d))$ we see that $\rho_\varepsilon(t,x) F_\varepsilon (v)$ converges to $\rho(t,x) G_\alpha(v)$ weakly in $L^\infty([0,T);L^2_{G^{-1}_{\alpha}(v)}(\mathbb{R}^d\times\mathbb{R}^d))$ where $\rho$ is the weak limit of $\rho_\varepsilon$. Finally, since the residue $r_\varepsilon$ is bounded, it follows from the micro-macro decomposition $f_\varepsilon = \rho_\varepsilon F_\varepsilon + \varepsilon^{\alpha/2} r_\varepsilon$ that the limit of $f_\varepsilon$ is the same as the limit of $\rho_\varepsilon F_\varepsilon$.
\text{e}nd{proof}
\begin{lemma} \langlebel{lem:cv}
For all test functions $\psi$ in $C^\infty_c ( [ 0, \infty) \times \mathbb{R}^d)$ we have:
\begin{equation} \langlebel{lemeq:cvpsi}
\underset{\varepsilon \rightarrow 0}{\lim} \underset{Q_T}{\iiint} f_\varepsilon(t,x,v) \psi(t,x+\varepsilon v) \, {\rm d} t \, {\rm d} x \, {\rm d} v = \underset{[0,T)\times\mathbb{R}^d}{\iint} \rho(t,x) \psi(t,x) \, {\rm d} x \, {\rm d} t.
\text{e}nd{equation}
Moreover, if $E(t,x) \in W^{1,\infty}([0,T)\times\mathbb{R}^d)^d$ then for all $\hbox{\rlap{I}\kern.16em P}si \in C^\infty_c ( [ 0, \infty) \times \mathbb{R}^d ; \mathbb{R}^d)$ the following convergence holds:
\begin{equation} \langlebel{lemeq:cvEpsi}
\underset{\varepsilon \rightarrow 0}{\lim} \underset{Q_T}{\iiint} f_\varepsilon(t,x,v) E(t,x)\cdot \hbox{\rlap{I}\kern.16em P}si(t,x+\varepsilon v) \, {\rm d} t \, {\rm d} x \, {\rm d} v = \underset{[0,T)\times\mathbb{R}^d}{\iint} \rho(t,x) E(t,x) \cdot \hbox{\rlap{I}\kern.16em P}si(t,x) \, {\rm d} x \, {\rm d} t.
\text{e}nd{equation}
\text{e}nd{lemma}
\begin{proof}
We will give a detailed proof of the convergence in \text{e}qref{lemeq:cvEpsi}, the convergence in \text{e}qref{lemeq:cvpsi} follows as a consequence of \text{e}qref{lemeq:cvEpsi}
by taking $\psi(t,x+\varepsilon v) = E(t,x)\cdot \hbox{\rlap{I}\kern.16em P}si(t,x+\varepsilon v)$ with a smooth $E$ and Lemma \ref{lem:cv2}. For \text{e}qref{lemeq:cvEpsi}, we write:
\begin{align}
\underset{Q_T}{\iiint} f_\varepsilon E(t,x) \cdot \hbox{\rlap{I}\kern.16em P}si( t, x + \varepsilon v) \, {\rm d} v \, {\rm d} x \, {\rm d} t &= \underset{[0,T)\times\mathbb{R}^d}{\iint} \rho(t,x) E(t,x) \cdot \hbox{\rlap{I}\kern.16em P}si ( t, x) \, {\rm d} x \, {\rm d} t \nonumber \\
& \quad + \underset{Q_T}{\iiint} \Big( f_\varepsilon - \rho(t,x) G_{\alpha}(v)\Big) E(t,x) \cdot \hbox{\rlap{I}\kern.16em P}si(t,x) \, {\rm d} v \, {\rm d} x \, {\rm d} t \nonumber \\
& \quad + \underset{Q_T}{\iiint} f_\varepsilon E(t,x)\cdot \Big(\hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x) \Big) \, {\rm d} v \, {\rm d} x \, {\rm d} t. \langlebel{eq:auxpassage}
\text{e}nd{align}
The second term in the right hand side of \text{e}qref{eq:auxpassage} converges to zero since $f_\varepsilon$ converges to $\rho G_\alpha$ weakly in $L^\infty ([0,T); L^2_{G^{-1}_{\alpha}(v)} ( \mathbb{R}^d \times \mathbb{R}^d))$ thanks to Lemma \ref{lem:cv2}. For the
third term on the right hand side of \text{e}qref{eq:auxpassage} thanks to Cauchy-Schwarz and H\"older we obtain
\begin{align}
& \bigg| \underset{Q_T}{\iiint} f_\varepsilon E(t,x) \cdot (\hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x) ) \, {\rm d} v \, {\rm d} x \, {\rm d} t \bigg| \nonumber \\
& \leq \int_0^T \bigg(\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint}\frac{ f_\varepsilon^2}{ G_{\alpha}} \, {\rm d} v \, {\rm d} x \bigg)^{ 1/2} \bigg( \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \Big[ E(t,x) \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} \, {\rm d} v \, {\rm d} x \bigg)^{ 1/2} \, {\rm d} t \nonumber \\
& \leq \lVert f_\varepsilon \lVert_{ L^\infty ( [0,T); L^2_{G_{\alpha}^{ -1} ( v)} ( \mathbb{R}^d \times \mathbb{R}^d))} \nonumber \\
& \hspace{1cm} \times \int_0^T \bigg(\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} [ E(t,x) \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))]^2 G_{\alpha} \, {\rm d} v \, {\rm d} x \bigg)^{ 1/2} \, {\rm d} t. \langlebel{eq:auxpassage1}
\text{e}nd{align}
Next, let $R$ be an arbitrary positive real number and let us consider the following splitting
\begin{align}
&\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \Big[ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} ( v) \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{4cm} = \underset{\mathbb{R}^d}{\int} \, \underset{|v| \leq R}{\int} \Big[ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} ( v) \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{4cm} \quad + \underset{\mathbb{R}^d}{\int} \, \underset{|v| > R}{\int} \Big[ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} ( v) \, {\rm d} v \, {\rm d} x.
\text{e}nd{align}
We will use the regularity of $\hbox{\rlap{I}\kern.16em P}si$ to bound the integral on $| v| < R$. To that end, let us consider the $\varepsilon R$ neighborhood of the support of $\hbox{\rlap{I}\kern.16em P}si$ denoted as $\Omega ( \varepsilon R)$ which consists of the union of all the balls
of radius $\varepsilon R$ having as center a point in $\mathrm {supp}\, \hbox{\rlap{I}\kern.16em P}si$. Next, let $\varLambda$ denote the diameter of $\mathrm {supp}\, \hbox{\rlap{I}\kern.16em P}si$ defined as the maximum over all the distances
between two points in $\mathrm {supp}\, \hbox{\rlap{I}\kern.16em P}si$. Then it is clear that $\Omega ( \varepsilon R) \subseteq B ( x_0; \varLambda + \varepsilon R)$ where $B ( x_0; \varLambda + \varepsilon R)$ denotes the ball with
center at $x_0$ and radius $\varLambda + \varepsilon R$ and $x_0$ is any arbitrary fix point in
$\mathrm {supp}\, \hbox{\rlap{I}\kern.16em P}si$. Then for the integral over $| v| < R$ we have the following
\begin{align}
& \underset{\mathbb{R}^d}{\int} \, \underset{|v| \leq R}{\int} [ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))]^2 G_{\alpha}( v) \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{2cm} \leq \lVert G_{\alpha}\lVert_{L^\infty(\mathbb{R}^d)} \underset{\mathbb{R}^d}{\int} \, \underset{|v| \leq R}{\int}\bigg( \sum_{ j=1}^d | E_j |\big|\varepsilon v \cdot \nablabla_x \hbox{\rlap{I}\kern.16em P}si_j (t, x + \theta_j \varepsilon v)\big| \bigg)^2 \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{2cm} \leq 2 \varepsilon^2 \lVert G_{\alpha}\lVert_{L^\infty(\mathbb{R}^d)} \underset{\mathbb{R}^d}{\int} \, \underset{|v| \leq R}{\int}| v|^2 \bigg( \sum_{ j=1}^d | E_j |^2 \big| \nablabla_x \hbox{\rlap{I}\kern.16em P}si_j (t, x + \theta_j \varepsilon v)\big|^2 \bigg) \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{2cm} \leq 2 \varepsilon^2 \lVert G_{\alpha}\lVert_{L^\infty(\mathbb{R}^d)} \lVert E \lVert^2_{ W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)} \lVert\nabla_x \hbox{\rlap{I}\kern.16em P}si \lVert^2_{L^\infty(\mathbb{R}^d)} \int_{ | v| \leq R} \int_{ B ( x_0, \varLambda + \varepsilon R)} | v|^2 \, {\rm d} x \, {\rm d} v \nonumber \\
& \hspace{2cm} \leq \varepsilon^2 C_1 ( \varLambda + \varepsilon R)^d R^{ d + 2}, \langlebel{eq:vsmall}
\text{e}nd{align}
where $ C_1$ is a constant depending on $\lVert E \lVert^2_{W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)}$, $\lVert G_{\alpha} \lVert_{L^\infty(\mathbb{R}^d)}$ and $\lVert \nabla_x \hbox{\rlap{I}\kern.16em P}si \lVert_{L^\infty(\mathbb{R}^d)}$ but not on $\varepsilon$, and
$\theta_j \in ( 0, 1)$ for $j = 1, \ldots, d$ is such that $\hbox{\rlap{I}\kern.16em P}si_j (t,x+\varepsilon v) - \hbox{\rlap{I}\kern.16em P}si_j (t,x) = \varepsilon v \cdot \nabla_x \hbox{\rlap{I}\kern.16em P}si_j (t,x+ \theta_j \varepsilon v)$.
For the integral on $| v| > R$ we use the decay of the equilibrium $G_{\alpha}(v)$ to derive the following upper bound:
\begin{align}
& \underset{\mathbb{R}^d}{\int} \, \underset{|v| > R}{\int} \Big[ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} ( v) \, {\rm d} v \, {\rm d} x \nonumber \\
& \hspace{2.5cm} \leq \lVert E \lVert^2_{ W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)} \underset{ | v| > R}{\int} \Bigg( \underset{\mathbb{R}^d}{\int} \Big( 2 | \hbox{\rlap{I}\kern.16em P}si (t, x + \varepsilon v) |^2 + 2 | \hbox{\rlap{I}\kern.16em P}si (t, x) |^2 \Big) \, {\rm d} x \Bigg) G_{\alpha} ( v) \, {\rm d} v \nonumber \\
& \hspace{2.5cm} \leq 4 \lVert E \lVert^2_{ W^{ 1, \infty} ([0,T)\times\mathbb{R}^d)} \underset{ \mathbb{R}^d}{\int} | \hbox{\rlap{I}\kern.16em P}si ( t,x)|^2 \, {\rm d} x \underset{ | v| > R}{\int} G_{\alpha}( v) \, {\rm d} v \nonumber \\
& \hspace{2.5cm} \leq C \underset{ | v| > R}{\int} G_{\alpha}( v) \, {\rm d} v. \nonumber
\text{e}nd{align}
Thanks to Proposition \ref{prop:equi}, for any $ \text{e}ta > 0$ we can choose $R > 0$ big enough
such that
\begin{eqnarray*}
\bigg| G_{\alpha}( v) - \frac{ \vartheta}{ | v|^{ d + \alpha}} \bigg| \leq \frac{ \text{e}ta}{ | v|^{ d + \alpha} }, \qquad \text{ for all } | v| \geq R.
\text{e}nd{eqnarray*}
Thus choosing $ \text{e}ta = \vartheta$ we have the following estimate:
\begin{align}
\underset{ | v| > R}{\int} G_{\alpha} ( v) \, {\rm d} v & \leq \underset{ | v| > R}{\int} \bigg| G_{\alpha} ( v) - \frac{ \vartheta}{ | v|^{ d + \alpha} } \bigg| \, {\rm d} v + \underset{ | v| > R}{\int} \frac{ \vartheta}{ | v|^{ d + \alpha}} \, {\rm d} v \nonumber \\
& \leq 2 \underset{ | v| > R}{\int} \frac{ \vartheta}{ | v|^{ d + \alpha}} \, {\rm d} v \nonumber \\
& \leq \frac{ C}{ R^{ \alpha}}. \nonumber
\text{e}nd{align}
From which we conclude
\begin{equation}\langlebel{eq:vbig}
\underset{\mathbb{R}^d}{\int} \, \underset{|v| > R}{\int} \Big[ E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x))\Big]^2 G_{\alpha} ( v) \, {\rm d} v \, {\rm d} x \leq \frac{ C_2}{ R^\alpha}.
\text{e}nd{equation}
Next let us note that for any $\, {\rm d}elta > 0$ we can choose $\widetilde{R} > 0$ such that $C_2 / R^\alpha < \, {\rm d}elta / 2$ for all $R > \widetilde{R}$ and
then choose $\varepsilon > 0$ so that $\varepsilon^2 C_1 ( \varLambda + \varepsilon R)^d R^{ d + 2} < \, {\rm d}elta / 2$. And thus deduce that for $\varepsilon$ small enough we have
\begin{eqnarray*}
\varepsilon^2 C_1 ( \varLambda + \varepsilon R)^d R^{ d + 2} + \frac{ C_2}{ R^\alpha} < \, {\rm d}elta.
\text{e}nd{eqnarray*}
Therefore, plugging \text{e}qref{eq:vsmall} and
\text{e}qref{eq:vbig} into \text{e}qref{eq:auxpassage1} and using Proposition \ref{prop:apriori}, part (i), we obtain that
there exists a fixed $C > 0$ such that
\begin{align}
& \bigg| \underset{Q_T}{\iiint} f_\varepsilon E \cdot ( \hbox{\rlap{I}\kern.16em P}si ( t, x + \varepsilon v) - \hbox{\rlap{I}\kern.16em P}si ( t, x)) \, {\rm d} v \, {\rm d} x \, {\rm d} t \bigg| \nonumber \\
& \hspace{3cm} \leq C \bigg( \varepsilon^2 C_1 ( \varLambda + \varepsilon R)^d R^{ d + 2} + \frac{ C_2}{ R^\alpha} \bigg) \nonumber \\
& \hspace{3cm} \leq C \, {\rm d}elta, \nonumber
\text{e}nd{align}
for any $\, {\rm d}elta > 0$, hence concluding that the third term on the right hand side of \text{e}qref{eq:auxpassage} goes to zero as $\varepsilon \rightarrow 0$.
\text{e}nd{proof}
Using Lemma \ref{lem:cv} we can now take the limit in \text{e}qref{eq:weakform1} and conclude that $\rho$ satisfies
\begin{eqnarray*}
\underset{[0,T)\times\mathbb{R}^d}{\iint} \rho \Big( \partial_t \varphi + E \cdot \nabla_x \varphi - \big(-\mathcal{D}elta_x\big)^{\alpha/2} \varphi \Big) \, {\rm d} x \, {\rm d} t + \underset{\mathbb{R}^d}{\int} \rho_{in}(x) \varphi(0,x) \, {\rm d} x =0,
\text{e}nd{eqnarray*}
for all $\varphi \in C^{\infty}_c ( [ 0, T) \times \mathbb{R}^d )$. Thus concluding the proof of Theorem \ref{theo:main}.
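Written in strong form, and assuming enough regularity on $\rho$, this is the fractional advection-diffusion problem
\begin{equation*}
\partial_t \rho \;+\; \nabla_x\cdot\big( E(t,x)\, \rho \big) \;+\; \big(-\mathcal{D}elta_x\big)^{\alpha/2}\rho \;=\; 0 \quad \text{ in } (0,T)\times\mathbb{R}^d,
\qquad \rho(0,\cdot)=\rho_{in}.
\text{e}nd{equation*}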
\subsection{The critical cases $\alpha=1$ and $\alpha=2$}
\underline{In the critical case $\alpha=2$} we recover the classical Fokker-Planck operator which means, in particular, as mentioned in the Introduction, that its equilibrium is a Maxwellian $M( v) = C \text{e}xp{ \left( -| v|^2/2 \right)}$ instead of the heavy-tail distribution $G_\alpha$. We can still consider the perturbed operator $\mathcal{T}_\varepsilon$ of Proposition \ref{prop:equi} and its equilibrium will also be a translation of the unperturbed one:
\begin{align*}
F_\varepsilon (t,x,v) = C e^{- | v-\varepsilon E(t,x) |^2/2 }
\text{e}nd{align*}
and since the decay of the Maxwellian is much faster than the decay of the heavy-tail distributions, Proposition \ref{prop:equi2} holds. The dissipative properties of the Fokker-Planck operator are well known, see e.g. \cite{CesHut} \cite{Goudon+2005} or \cite{BostanGoudon2008}, and it is straightforward to check the boundedness results of Proposition \ref{prop:apriori}. Hence, Lemma \ref{lem:cv2} holds and we can take the limit in the weak formulation \text{e}qref{eq:weakform1} to prove that Theorem \ref{theo:main} holds in the case $\alpha = 2$.
\underline{In the critical case $\alpha=1$}, the perturbed operator $\, \mathcal{T}_\varepsilon$ of \text{e}qref{def:Teps} and its equilibrium $F_\varepsilon$ \text{e}qref{eq:Feps} lose their dependence with respect to $\varepsilon$:
\begin{align*}
&\, \mathcal{T}_\varepsilon (f_\varepsilon) = \, \mathcal{T}_E (f_\varepsilon) = \nabla_v\cdot \Big[ \big( v- E(t,x) \big) f_\varepsilon \Big] - \big(-\mathcal{D}elta_v\big)^{\alpha/2} f_\varepsilon, \\
&F_\varepsilon(t,x,v) = G_{1,E} (t,x,v) = G_{1} \big(v - E(t,x) \big).
\text{e}nd{align*}
In particular, the equilibrium $G_{1,E}$ will remain unchanged in the limit as $\varepsilon$ goes to $0$ and Proposition \ref{prop:equi2} will hold with $\alpha=1$ which, in particular, means that the bounds in $(ii)$ and $(iii)$ do not go to zero. The operator is still dissipative since the dependence on $\varepsilon$ does not matter in the proof of Proposition \ref{prop:posisemi}, hence we still have \text{e}qref{eq:fracSob} and multiplying \text{e}qref{eq:vlfpeps} by $f_\varepsilon /G_{1,E}$ and integrating by parts yields:
\begin{equation}\langlebel{eq:auxibis}
\frac{ \varepsilon }{ 2} \frac{ \, {\rm d} }{ \, {\rm d} t} \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ f^2_\varepsilon}{ G_{1,E}} \, {\rm d} v \, {\rm d} x + \underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint} \frac{ (f_\varepsilon - \rho_\varepsilon G_{1,E})^2}{ G_{1,E}} \, {\rm d} v \, {\rm d} x \leq \varepsilon \mu\underset{\mathbb{R}^d\times\mathbb{R}^d}{\iint}\frac{ f^2_\varepsilon}{ G_{1,E}} \, {\rm d} v \, {\rm d} x.
\text{e}nd{equation}
Since $E$ is in $\big( W^{1,\infty}([0,T)\times\mathbb{R}^d)\big)^d$, if $f_\varepsilon(t,\cdot,\cdot)$ is in $L^2_{G_{1,E}^{-1}(t,x,v)}(\mathbb{R}^d\times\mathbb{R}^d)$ and bounded independently of time, then it is also in $L^2_{G_1^{-1}(v)}(\mathbb{R}^d\times\mathbb{R}^d)$. As a consequence, from \text{e}qref{eq:auxibis} we still have the uniform in $\varepsilon$ boundedness of $f_\varepsilon$, of $\rho_\varepsilon = \int f_\varepsilon \, \text{d}v$ and of the residue $r_\varepsilon$, in the spaces stated in Proposition \ref{prop:apriori} with $G_1$ in place of $G_\alpha$. This yields the following modified version of Lemma \ref{lem:cv2}:
\begin{lemma}\langlebel{lem:cv2bis}
Let $\alpha=1$, $ (f_\varepsilon)$ be the sequence of solutions of \text{e}qref{eq:vlfpeps}, and $\rho$ be the limit of $(\rho_\varepsilon)$ which exists thanks to Proposition \ref{prop:apriori} part (ii), then
\begin{eqnarray*}
f_\varepsilon(t,x,v) \rightharpoonup^\star \rho(t,x) G_{1,E} (t,x,v) \quad \text{ in } L^\infty ([0,T); L^2_{G_{1}^{-1}(v)} ( \mathbb{R}^d \times \mathbb{R}^d)).
\text{e}nd{eqnarray*}
\text{e}nd{lemma}
Finally, for the proof of convergence of the weak formulation \text{e}qref{eq:weakform1}, i.e. the proof of Lemma \ref{lem:cv}, we proceed in essentially the same way. The only slight difference is that in order to control the third term of \text{e}qref{eq:auxpassage} we will use Cauchy-Schwarz as in \text{e}qref{eq:auxpassage1} but we multiply and divide by $G_1(v)^{1/2}$ instead of the natural equilibrium $G_{1,E}$. The rest of the proof remains the same and we can then take the limit in the weak formulation, which concludes the proof of Theorem \ref{theo:main} with $\alpha = 1$.
\text{e}nd{document}
\begin{document}
\begin{abstract}
The main purpose of this paper is to introduce a method to ``stabilize'' certain spaces of homomorphisms from finitely generated free abelian groups to a Lie group $G$, namely ${\rm Hom}({\mathfrak{m}}athbb Z^n,G)$.
We show that this stabilized space of homomorphisms decomposes after suspending once with ``summands'' which can be reassembled, in a sense to be made precise below, into the individual spaces ${\rm Hom}({\mathfrak{m}}athbb Z^n,G)$ after suspending once. To prove this decomposition, a stable decomposition of an equivariant function space is also developed.
One main result is that the topological space of all commuting elements in a compact
Lie group is homotopy equivalent to an equivariant function space after inverting the order
of the Weyl group.
In addition, the homology of the stabilized space admits a very simple description in terms of the tensor algebra generated by the reduced homology of a maximal torus in favorable cases. The stabilized space also allows the description of the additive reduced homology of the individual spaces ${\rm Hom}({\mathfrak{m}}athbb Z^n,G)$, with the order of the Weyl group inverted.
\end{abstract}
{\mathfrak{m}}aketitle
\tableofcontents
\section{Introduction}\label{section:Introduction}
In this paper we introduce a new method of ``stabilizing'' spaces of homomorphisms ${\rm Hom}(\pi,G)$, where $\pi$ is a certain choice of finitely generated discrete group and $G$ is a compact and connected Lie group. The main results apply to ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$, the space of all ordered $n$-tuples of pairwise commuting elements in a compact Lie group $G$, by assembling these spaces into a single space for all $n \geq 0$.
The resulting space, denoted $Comm(G)$, is an infinite dimensional analogue of a Stiefel manifold which can be regarded as the space, suitably topologized, of all finite ordered sets of generators for all finitely generated abelian subgroups of $G$.
The methods are to develop the geometry and topology of the free associative monoid generated by a maximal torus of $G$, and to ``twist'' this free monoid into a single space which approximates the spaces of ``all commuting $n$-tuples'' for all $n$ at once.
Topological properties of $Comm(G)$ are developed while the singular homology of this space is computed with coefficients in the ring of integers with the order of the Weyl group of $G$ inverted. One application is that the cohomology of ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ follows from that of $Comm(G)$ for any cohomology theory.
The space of all commuting elements is usually homotopy equivalent to the equivariant function space $G \times_{NT} J(T)$ after inverting the order of the Weyl group, where $NT$ is the normalizer of a maximal torus $T$, and $J(T)$ is the free associative monoid generated by the maximal torus.
The results for singular homology of $Comm(G)$ are given in terms of the tensor algebra generated by the reduced homology of a maximal torus. Applications to classical Lie groups as well as exceptional Lie groups are given. A stable decomposition of $Comm(G)$ is also given here with a significantly finer stable decomposition to be given in the sequel to this paper along with extensions of these constructions to additional representation varieties.
An appendix by V.~Reiner is included which uses the results here concerning $Comm(G)$ together with Molien's theorem to give the Hilbert-Poincar\'e series of $Comm(G)$.
The next section provides a detailed list of the results in this paper.
\section{Summary of results }\label{section: Outline of results}
Let $G$ be a Lie group and let $\pi$ be a finitely generated discrete group. The set of homomorphisms ${\rm Hom}(\pi,G)$ can be topologized with the subspace topology of $G^m$ where $m$ is the number of generators of $\pi$. The topology of the spaces ${\rm Hom}(\pi,G)$ has seen considerable recent development. For example, the case where $\pi$ is a finitely generated abelian group is closely connected to
work of E. Witten \cite{witten1,witten2} which uses commuting pairs to construct quantum vacuum states in supersymmetric Yang-Mills theory. Further work was done by V.~Kac and A.~Smilga \cite{kac.smilga}.
Work of A.~Borel, R.~Friedman and J.~Morgan \cite{borel2002almost} addressed the special cases of commuting pairs and triples in compact Lie groups. Spaces of representations were studied by W.~Goldman \cite{goldman}, who investigated their connected components, for $\pi$ the fundamental group of a closed oriented surface and $G$ a finite cover of a projective special linear group.
For non-negative integers $n$, let ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ denote the set of homomorphisms from a direct sum of $n$ copies of ${\mathfrak{m}}athbb{Z}$ to $G$. This set can be identified as the subset of pairwise commuting $n$-tuples in $G^n$, so similarly it can be naturally topologized with the subspace topology in $G^n$. Spaces given by ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ have been the subject of substantial recent work.
In particular, A. \'Adem and F.~Cohen \cite{adem2007commuting} first studied these spaces in general, obtaining results for closed subgroups of $GL_n({{\mathfrak{m}}athbb{C}})$. Their work was followed by work of T.~Baird \cite{bairdcohomology}, who studied ordinary and equivariant cohomology, Baird--Jeffrey--Selick \cite{bairdsu2}, \'Adem--G\'omez \cite{adem.gomez,adem.gomez2,adem.gomez3}, \'Adem--Cohen--G\'omez \cite{adem.cohen.gomez,adem.cohen.gomez2}, Sjerve--Torres-Giese \cite{giese.sjerve}, \'Adem--Cohen--Torres-Giese \cite{fredb2g}, Pettet--Souto \cite{pettet.souto}, G\'omez--Pettet--Souto \cite{gomez.pettet.souto}, Okay \cite{okay}. Most of this work has been focused on the study of invariants such as cohomology, $K$-theory, connected components, homotopy type and stable decompositions. Recently, D.~Ramras and S. Lawton \cite{ramras} used some of the above work to study character varieties.
Let $G$ be a reductive algebraic group and let $K \subseteq G$ be a maximal compact subgroup. A.~Pettet and J.~Souto \cite{pettet.souto} have shown that the inclusion ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,K) \hookrightarrow {\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ is a strong deformation retract, i.e. in particular the two spaces are homotopy equivalent. Thus this article will restrict to compact and connected Lie groups. Note that the free abelian group can also be replaced by any finitely generated discrete group. Assume $\Gamma$ is a finitely generated nilpotent group. Then M.~Bergeron \cite{bergeron} showed that the natural map
$${\rm Hom}(\Gamma,K) \to {\rm Hom}(\Gamma,G)$$ is a homotopy equivalence. However, most of the results in this paper apply only to the cases of free abelian groups of finite rank.
The varieties of pairwise commuting $n$-tuples can be assembled into a useful single space which is roughly analogous to a Stiefel variety described as follows. Recall that the Stiefel variety over a field ${{\mathfrak{m}}athbf{k}}$ $$V_{{{\mathfrak{m}}athbf{k}}}(n,m)$$ may be regarded as a topological space of ordered $m$-tuples of generators for every $m$ dimensional vector subspace of ${{\mathfrak{m}}athbf{k}}^n$, see \cite{hatcher2002algebraic}.
The purpose of this paper is to study an analogue of a Stiefel
variety where ${{\mathfrak{m}}athbf{k}}^n$ is replaced by a
compact Lie group $G$, and the $m$-dimensional subspaces are replaced by a particular family of subgroups such as
\begin{enumerate}
\item all finitely generated subgroups with at most $m$ generators which gives rise to ${\rm Hom}(F_m,G)$, where $F_m$ denotes the free group with a basis of $m$ elements, or
\item all finitely generated abelian subgroups with at most $m$ generators, which gives rise to ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)$.
\end{enumerate}
The analogue of $(1)$ where $\pi$ runs over all finitely generated subgroups of nilpotence class at most $q$ with at most $k$ generators, or the analogue of $(2)$ where $\pi$ runs over all finitely generated elementary abelian $p$-groups with at most $k$ generators, will be addressed elsewhere. This article will focus mainly on properties of examples (1) and (2) as well as how these spaces make contact with classical representation theory.
\begin{defn}\label{definition: spaces of all commuting n-tuples}
Given a Lie group $G$ together with a free group $F_k$ on $k$ generators, the space of all homomorphisms ${\rm Hom}(F_k,G)$ is naturally homeomorphic to $G^k$ with the subspace ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)$ topologized by the
subspace topology. Define $Assoc(G)$, $Comm(G)$, and $Comm(q,G)$ by the following constructions.
\begin{enumerate}
\item Let $$Assoc(G) = \Big(\bigsqcup_{0 \leq k < \infty}{\rm Hom}(F_k,G)\Big)\Big/\sim$$ where $\sim$
denotes the equivalence relation obtained by deleting the identity element of $G$
in any coordinate of ${\rm Hom}(F_k,G)$ for $ k \geq 1$ with ${\rm Hom}(F_0,G) = \{1_G\}$ a single point by convention. In addition, $Assoc(G)$ is given the
usual quotient topology obtained from the product topology on ${\rm Hom}(F_k,G)$
where the relation is generated by requiring that
$$(g_1,\dots, g_n) \sim (g_1,\dots, \widehat{g_i}, \dots, g_n)$$
if $g_i = 1_G \in G$.
\item Let $$Comm(G) = \Big(\bigsqcup_{0 \leq k < \infty}{\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)\Big)\Big/\sim$$
topologized as a subspace of $Assoc(G)$, where $\sim$ is the same relation.
\item Let $$Comm(q,G) = \Big(\bigsqcup_{0 \leq k < \infty}{\rm Hom}(F_k/\Gamma^q(F_k),G)\Big)\Big/\sim$$
topologized as a subspace of $Assoc(G)$, where $\sim$ is the same relation, and $\Gamma^q(F_k)$
denotes the $q$-th stage of the descending central series for $F_k$.
\end{enumerate}
\end{defn}
This article is a study of the properties of the space ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$, for certain Lie groups and for all positive integers $n$ assembled into a single space, and is an exploration of properties of $Comm(G)$ regarded as a subspace of $Assoc(G)$. The main reason for introducing the spaces $Comm(G)$ is that (1) they have tractable, natural properties which (2) descend to give tractable properties of the varieties ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$ as addressed in this article.
Consider the free associative monoid generated by $G$, denoted $J(G)$, with the identity element in $G$ identified with the identity element in the free monoid $J(G)$. This last construction is also known as the \textit{James construction} or \textit{James reduced product} developed in \cite{james1955reduced} (see also Definition \ref{jamesdefn}). The first theorem is an observation which provides an identification of $Assoc(G)$ as well as a framework for $Comm(G)$; the proof, a direct inspection, is omitted.
\begin{thm}\label{theorem: Assoc(G)}
Let $G$ be a Lie group. The natural map $$J(G) \to Assoc(G)$$ is a homeomorphism.
\end{thm}
One application of the current paper is to give explicit features of the singular cohomology or homology of
$Comm(G)$ for any compact and connected Lie group $G$ with the order of the Weyl group inverted.
For example, one of the results below gives that the cohomology of $Comm(G)$ with the order of the Weyl group inverted is the fixed points of the Weyl group $W$ acting naturally on $H^*(G/T) \otimes {\mathcal{T}}^*[V]$, where ${\mathcal{T}}^*[V]$ denotes the dual of the tensor algebra ${\mathcal{T}}[V]$ generated by the reduced homology of a maximal torus.
Furthermore, if ungraded singular homology is considered, the total answer in singular homology is given explicitly in terms of classical tensor algebras generated by the reduced homology of a maximal torus.
The singular homology groups of each individual space ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$ are then formally given in terms of those of $Comm(G)$ as described below. To give the explicit answers, a tighter hold on fixed points of the action of $W$ on
$H^*(G/T) \otimes {\mathcal{T}}^*[V]$
is required. In particular, Molien's theorem can be applied to work out the fixed points in $H^*(G/T;{{\mathfrak{m}}athbb{C}}) \otimes {\mathcal{T}}^*[V \otimes_{{{\mathfrak{m}}athbb{Z}}} {{\mathfrak{m}}athbb{C}}]$, the resulting cohomology groups with complex coefficients ${{\mathfrak{m}}athbb{C}}$ as given in the appendix.
In the ungraded setting, the homology groups obtained from classical representation theory simplify greatly, and the ungraded answers follow directly. This simplification is illustrated by the explicit answers for the cases of $U(n)$, $SU(n)$, $Sp(n)$, $Spin(n)$, $G_2, F_4, E_6, E_7, E_8$, reflecting the fact that these global computations for $Comm(G)$ are both accessible and exhibit natural combinatorics. These specific answers are given in sections
\ref{section:Un.SUn}, and \ref{section:exceptional Lie groups}.
This information depends on the construction given next.
\begin{defn}\label{definition: twist J(T)}
Given a Lie group $G$ together with a maximal torus $T\subset G$, form the free associative monoid generated by $T$, denoted $J(T) = Assoc(T)$, where the identity of $T$ is identified with the identity of $Assoc(T)$. The normalizer of $T$ in $G$, denoted $NT$, acts on $T$, and by natural diagonal extension there is an action of $NT$ on $Assoc(T)$. The normalizer $NT$ thus acts diagonally on the product $G \times Assoc(T)$. The orbit space
$$G \times_{NT}Assoc(T)$$ is useful in what follows next.
\end{defn}
\begin{defn}\label{definition: G to Comm(G)}
Given a Lie group $G$, define $$E:G \to Comm(G)$$ by
$$E(g)=[g],$$
the natural equivalence class given by the image of $g \in G = {\rm Hom}({{\mathfrak{m}}athbb{Z}},G)$ in $Comm(G)$. Let
$$Comm(G)_{1_G}$$
denote the path-component of $E(1_G) \in Comm(G)$. Let
$${\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)_{1_G} $$
denote the path-component of $({1_G}, \dots, 1_G) \in {\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G) $.
\end{defn}
The next Proposition follows at once from \cite{adem2007commuting} and Definition \ref{definition: spaces of all commuting n-tuples}.
\begin{prop}\label{prop: connected Comm(G)}
Let $G$ be a Lie group. If every abelian subgroup of $G$ is contained in a maximal torus, then
both ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$, and $Comm(G)$ are path-connected.
\end{prop}
Standard examples of Proposition \ref{prop: connected Comm(G)} are $U(n)$, $SU(n)$, and $Sp(n)$.
On the other hand if $G = SO(n)$ for $n >2$, then $Comm(G)$ fails to be path-connected.
In general, it follows from the work below that translates of $Assoc(T)$ in $Assoc(G)$ under the natural action of the Weyl group give a ``good'' approximation for the space of all commuting $n$-tuples, $Comm(G)$. This process identifies commuting $n$-tuples in terms of subspaces of $Assoc(G)$. Furthermore, the space $Comm(G)$ can be regarded as the space of all ordered finite sets of generators for all finitely generated abelian subgroups of $G$, modulo the single relation that the identity element $1_G$ of $G$ can be omitted from any set of generators.
The next step is to see how $Comm(G)$ relates to earlier classical constructions. In particular, $G \times_{NT}Assoc(T)$ maps naturally to $Comm(G)$ and is an infinite dimensional analogue
of the natural map $$\theta: G \times_{NT}T \to G$$ given by $$\theta(g,t) = gtg^{-1}= t^g,$$
with more details given in Section \ref{section:x2g}.
\begin{defn}\label{definition: Borel to Comm(G)}
Define $${\mathcal{T}}heta: G \times_{NT}Assoc(T) \to Comm(G)$$ by the formula
$${\mathcal{T}}heta(g, (t_1,\dots, t_n)) = (t_1^g,\dots, t_n^g)$$ for all $n$.
\end{defn}
The space $Comm(G)$ is ``roughly" the union of orbits of $Assoc(T)$ in $Assoc(G)$ as will be seen below. The next step is a direct observation with proof omitted.
\begin{prop}\label{prop: continuous map to Comm(G)}
Let $G$ be a compact connected Lie group with maximal torus $T$ and Weyl group $W$. The map
$${\mathcal{T}}heta: G \times_{NT}Assoc(T) \to Comm(G)$$
is well-defined and continuous. Furthermore, there is a commutative diagram
\[
\begin{CD}
G \times_{NT}T @>{\theta}>> G \\
@V{1 \times E}VV @VV{E}V \\
G \times_{NT}Assoc(T) @>{{\mathcal{T}}heta}>> Comm(G)\\
@V{1}VV @VV{\subseteq}V \\
G \times_{NT}Assoc(T) @>{{\mathcal{T}}heta}>> Assoc(G).\\
\end{CD}
\]
In addition, the map $${\mathcal{T}}heta: G \times_{NT} Assoc(T) \to Comm(G)$$
factors through one path-component as
${\mathcal{T}}heta: G \times_{NT} Assoc(T) \to Comm(G)_{1_G}$, and
there is a commutative diagram for all $m \geq 0$ given as follows:
\[
\begin{CD}
G \times T^m @>{\theta_m} >> {\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)_{1_G} \\
@VV{}V @VV{}V \\
G \times_{NT}Assoc(T) @>{{\mathcal{T}}heta}>> Comm(G)_{1_G}.\\
\end{CD}
\]
\end{prop}
A second useful naturality result concerning $Comm(G)$ is satisfied for certain morphisms of Lie groups which preserve a choice of maximal torus. The next proposition is again an inspection of the definitions.
\begin{prop}\label{prop: naturality}
Let $T_H$ and $T_G$ be maximal tori of compact Lie groups $H$ and $G$, respectively. If a continuous homomorphism $$f: H \to G$$ is a morphism of Lie groups such that $f(T_H) \subset T_G$, then there is an induced commutative diagram
\[
\begin{CD}
H \times_{N_{T_H}}Assoc(T_H) @>{{\mathcal{T}}heta}>> Comm(H)\\
@V{f \times Assoc(f)}VV @VV{Comm(f)}V \\
G \times_{N_{T_G}}Assoc(T_G) @>{{\mathcal{T}}heta}>> Comm(G).\\
\end{CD}
\]
\end{prop}
The next feature is that the spaces $G \times_{NT}Assoc(T)$, and $Comm(G)$ admit stable decompositions which are
compatible as well as provide simple, direct computations.
\begin{defn}\label{definition: singular subspaces of commuting n-tuples}
Given a Lie group $G$, define
$$S_k(G) =S({\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)) \subset {\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)$$
as those $k$-tuples for which at least one coordinate is equal to the identity element $1_G \in G$. Define $$\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^k,G) = {\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)/S_k(G).$$
\end{defn}
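For example (an elementary illustration), if $G = T$ is a torus, then ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,T) = T^k$, the subspace $S_k(T)$ consists of the $k$-tuples with some coordinate equal to the identity, and $\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^k,T)$ is the $k$-fold smash power $T \wedge \cdots \wedge T$; in particular, for $T = S^1$ this smash power is homeomorphic to $S^k$.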
\begin{rmk}
Note that some of the theorems in this paper require that the inclusions $S_k(G)\hookrightarrow {\rm Hom}({{\mathfrak{m}}athbb{Z}}^k,G)$ are cofibrations. For this to be true it suffices to assume that $G$ is a closed subgroup of $GL_n({{\mathfrak{m}}athbb{C}})$ (see \cite[Theorem 1.5]{adem2007commuting}), and we assume this implicitly whenever necessary.
\end{rmk}
In what follows, $\Sigma(X)$ denotes the suspension of the pointed space $X$.
The following theorem was proven in \cite{adem2007commuting}:
\begin{thm}\label{theorem: stable splitting of Hom(Zn,G)}
Let $G$ be a Lie group. Then there are homotopy equivalences
$$\Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)) \vee \Sigma(S_n(G)) \to \Sigma({\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)),$$
and
$$\bigvee_{1 \leq j \leq n}\bigvee_{\binom {n}{j}}\Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^j,G)) \to \Sigma({\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)).$$
\end{thm}
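For orientation, the case $n = 2$ of the second equivalence reads
$$\Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}},G)) \vee \Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}},G)) \vee \Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^2,G)) \to \Sigma({\rm Hom}({{\mathfrak{m}}athbb{Z}}^2,G)),$$
where $\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}},G) = G$, regarded as a pointed space, since $S_1(G) = \{1_G\}$.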
The first result concerning the suspension of $Comm(G)$ is as follows.
\begin{thm}\label{thm:stable decomp f x2g}
Let $G$ be a Lie group. Then there is a homotopy equivalence
$$\Sigma(Comm(G)) \to \bigvee_{n \geq 1}\Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)).$$
\end{thm}
The spaces $\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^q,G)$ for $ 1 \leq q \leq n$ give all of the stable summands of ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$, and so these spaces determine the homotopy type of the suspension of ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$. Thus the space $Comm(G)$ captures all of the stable structure of the spaces ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ in a ``minimal'' way: after one suspension it splits into a wedge containing exactly one summand
$\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^q,G)$ for each integer $q$. Namely, $Comm(G)$ can be regarded as ``the smallest space'' which is stably equivalent to a wedge of all of the ``fundamental summands'' $\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^q,G)$.
On the other hand, it is natural to ask how the ``fundamental summands'' are connected with the previous construction
$G\times_{NT} Assoc(T)$. Here recall the definition of the $q$-fold smash product $$\widehat{T}^q=
T^q/S({\rm Hom}({{\mathfrak{m}}athbb{Z}}^q,T)).$$
\begin{thm}\label{thm:first approximation INTRO}
Let $G$ be a Lie group. Then there are homotopy equivalences
$$\Sigma(G\times_{NT} Assoc(T)) \to \Sigma \bigg( G/T \vee \Big(\bigvee_{q \geq 1}
G\times_{NT} \widehat{T}^q/(G/NT)\Big) \bigg).$$
\end{thm}
There is a natural extension of $Comm(G)$ to all finitely generated nilpotent subgroups of nilpotence class at most $q$. There is a space that assembles all the spaces ${\rm Hom}(F_n/\Gamma^q,G)$ into a single space, denoted $X(q,G)$ (with formal definition stated in Definition \ref{definition: XqG}), where $\Gamma^q$ denotes the $q$-th stage of the descending central series of $F_j$ for all $j$.
Notice that the spaces $X(q,G)$ give a filtration of $J(G)$ with
$$Comm(G) = X(2,G) \subset X(3,G) \subset \cdots \subset X(q,G) \subset \cdots \subset J(G).$$
The spaces $X(q,G)$ in the filtration also admit stable decompositions as in the case of $X(2,G)$; see Section \ref{section: f x2g}.
\begin{thm} \label{thm: stable decompositions of X(q G)}
Let $G$ be a compact, connected Lie group.
Then there are homotopy equivalences
$$\Sigma({\rm Hom}(F_n/\Gamma^q,G)) \to \bigvee_{1\leq j \leq n}\bigvee_{\binom{n}{j}}\Sigma(\widehat{{\rm Hom}}(F_j/\Gamma^q,G)),$$ and
$$\Sigma(X(q,G)) \to \bigvee_{1\leq j < \infty}\Sigma(\widehat{{\rm Hom}}(F_j/\Gamma^q,G)).$$
\end{thm}
\begin{rmk}\label{rmk: stable splittings }
The above raises the question of identifying the stable wedge summands of other spaces of homomorphisms, or spaces of representations. For example, these methods also inform on the spaces of homomorphisms from a free group on $n$ letters modulo the stages of either the descending central series or mod-$p$ descending central series. It is currently unclear
whether these spaces inform on representation varieties associated to fundamental groups of Riemann surfaces, but it seems likely that these methods will inform on representation varieties for braid groups. Furthermore, the space $Comm(G)$ maps naturally by evaluation onto the space of closed, finitely generated abelian subgroups of $G$ topologized by the Chabauty topology as in \cite{bridson.h.k}. It is natural to ask whether this map is a fibration with fiber $J(T)$.
\end{rmk}
Theorem \ref{thm:first approximation INTRO} is used to obtain the next result.
\begin{thm} \label{thm: further.stable decompositions}
Let $G$ be a compact, simply-connected Lie group, and assume that all spaces have been localized such that the order of the Weyl group has been inverted (localized away from $|W|$). Then there are homotopy equivalences
$$ \Sigma (G\times_{NT} \widehat{T}^q/(G/NT)) \to \Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^q,G))$$ for all $q \geq 1$ as long as the order of the Weyl group has been inverted.
\end{thm}
These results can be used to directly work out the homology of
$\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)$ for all $n$, and thus ${{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)$ as long as the order of the Weyl group has been inverted.
The following theorem reduces a computation of the homology of $Comm(G)$ to standard methods, and is proven below.
\begin{thm}\label{theorem: the surjection map to x(2,g)}
Let $G$ be a simply-connected compact Lie group. Then the map
$${\mathcal{T}}heta: G \times_{NT}Assoc(T) \to Comm(G)$$ has homotopy theoretic
fibre which is simply-connected, and has reduced homology which is
entirely torsion with orders dividing the order of the Weyl group $W$ of $G$.
\end{thm}
Finer stable decompositions of the space $Comm(G)$ and ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$, as well as further analysis of the combinatorics arising here in homology will appear in a sequel to this paper.
Homological applications of the above decompositions are given next.
Let $\Gamma$ be a discrete group and $R[ \Gamma]$ be the group ring of $\Gamma$ over the commutative ring $R$. Let $M$ be a left $R[\Gamma]$-module. Recall that the module of coinvariants $M_{\Gamma}$ is the largest quotient of $M$ on which $\Gamma$ acts trivially. That is, $M_{\Gamma} = M/I$, where $I$ is the submodule generated by the elements $\gamma \cdot m-m$ for all $\gamma \in \Gamma$ and $m \in M$. Furthermore,
$${\mathcal{T}} [V]$$
denotes the tensor algebra generated by the free $R$-module $V$. Throughout this section $R = {{\mathfrak{m}}athbb{Z}}[1/|W|]$.
\begin{thm}\label{thm: homology of x2g}
Let $G$ be a compact, connected Lie group with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in homology $$H_{\ast}( Comm(G)_{1_G}; R) \to H_{\ast}(G/T;R) \otimes_{R[W]} {\mathcal{T}}[V]$$
where $V$ is the reduced homology of the maximal torus $T$.
\end{thm}
Let $H^U_{\ast}$ and ${\mathcal{T}}_U$ denote ungraded homology and the ungraded tensor algebra, respectively. Then the following theorem holds.
\begin{thm}\label{theorem:ungraded homology of X2G INTRO}
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in ungraded homology
\[H_{\ast}^U( Comm(G)_1; R) \to {\mathcal{T}}_U [V] \] where $V$ is the reduced homology of the maximal torus $T$.
\end{thm}
If $G$ is a compact and connected Lie group such that every abelian subgroup of $G$ is in a maximal torus, then the following corollary holds.
\begin{cor}\label{cor: homology of x2g}
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$ such that every abelian subgroup of $G$ is in a maximal torus. Then there is an isomorphism in homology
\[H_{\ast}( Comm(G); R) \to H_{\ast}(G/T;R) \otimes_{R[W]} {\mathcal{T}}[V] \]
where $V$ is the reduced homology of the maximal torus $T$.
\end{cor}
Examples of this last corollary are given below for the cases of $U(n)$, $SU(n)$, $Sp(n)$, $Spin(n)$, and the $5$ exceptional compact simply-connected simple Lie groups $G_2, F_4, E_6, E_7,$ and $E_8$ in Sections
\ref{section:Un.SUn} and \ref{section:exceptional Lie groups}. The Poincar\'e series for the ungraded homology groups are also recorded in Sections \ref{section:Un.SUn} and \ref{section:exceptional Lie groups}.
Let $G$ be a compact, connected Lie group with maximal torus $T$ and Weyl group $W$. Let $C$ denote the homology of $G/T$ with real coefficients where $C_i = H_i(G/T;{{\mathfrak{m}}athbb{R}})$,
and $C^*$ is defined by $C^i = H^i(G/T;{{\mathfrak{m}}athbb{R}})$.
The Hilbert-Poincar\'e series for the rational cohomology of $Comm(G)$ is computed as follows.
Consider the $W$-action on the exterior algebra $E=\wedge {\mathfrak{m}}athbb R^n= H_*(T; {{\mathfrak{m}}athbb{R}})$ as well as the reduced exterior algebra ${\widetilde{E}}$, namely, the reduced homology of $T$, given by $\bigoplus_{k=1}^n \wedge^k {{\mathfrak{m}}athbb{R}}^n$. Then $W$ acts diagonally
on the ${\mathfrak{m}}athbb R$-dual ${\mathcal{T}}^*[{\widetilde{E}}]$ of the tensor algebra generated by ${\widetilde{E}}$,
the cohomology of $J(T)$. The group $W$ acts diagonally on the tensor product $C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}]$, giving the action on the cohomology of $G/T \times J(T)$.
These modules are naturally graded by homological degree as well as by the tensor degree arising from the tensor algebra, and so carry a natural ${\mathfrak{m}}athbb N^3$-trigrading described as follows.
First, the homology of $J(T)$ is given by the direct sum
$${{\mathfrak{m}}athbb{R}} \oplus \sum_{\substack{j = k_1+\cdots+k_m\\ \ k_q > 0\\}} ( \wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n).$$ Then
the sub-module of elements of tri-degree $(i,j,m)$ in the homology
of $G/T \times J(T)$ for fixed $m > 0$ is given by the direct sum
$$\sum_{\substack{j = k_1+\cdots+k_m\\ \ k_q > 0\\}} (C_i \otimes \wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n),$$
where $k_q > 0$ for $1 \leq q \leq m $, and $\wedge^{k} {\mathfrak{m}}athbb R^n$ lies in homological degree $k$.
To compute in cohomology, the ${\mathfrak{m}}athbb R$-dual of $\wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n$ lies in homological degree $j = k_1+\cdots+k_m$ of ${\mathcal{T}}^*[{\widetilde{E}}]$ as well as tensor degree $m>0$. The special case $m = 0$ is by convention $C_i \otimes {\mathfrak{m}}athbb R \cong C_i $ in tri-degree $(i,0,0)$.
\begin{defn} \label{defn: trigraded Hilbert-Poincare series}
Consider the module
$$ M_{(i,j,m)} = \sum_{\substack{j = k_1+\cdots+k_m\\ \ k_q > 0 \\}} (C_i \otimes \wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n) \ {\mathfrak{m}}box{for} \ m >0$$
together with $$M_{(i,0,0)} = C_i \otimes {\mathfrak{m}}athbb R.$$
The diagonal action of $W$
on the tensor product $C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}]$ over ${{\mathfrak{m}}athbb{R}}$
respects this {\it ${\mathbb {N}}^3$-trigrading}.
The tri-graded Hilbert-Poincar\'e series for the module of invariants is defined by
$$
{{\rm Hilb}}\left( \left(C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W, q, s, t \right)
= \sum_{\substack{ i,m\geq 0 \\ j = k_1+\cdots+k_m \\}} \dim_{{{\mathfrak{m}}athbb{R}}} (M_{(i,j,m)}^W)q^is^jt^m.
$$
\end{defn}
Let $d_1,\dots,d_n$ be \textit{the fundamental degrees of $W$} (which are defined either in the Appendix or \cite[\S 4.1]{broue.reflextion.gps}).
Note that the degrees for algebra generators of polynomial rings such as the cohomology ring of $BT$ and their images in $H^*(G/T;{\mathfrak{m}}athbb R)$ are given by doubling ``the fundamental degrees of $W$'' $(d_1,...., d_n)$ in the cited work by Shephard and Todd et al. The ``fundamental degrees of $W$'' are doubled here in order to correspond to the usual conventions for ``topological gradings'' associated to the cohomology of the topological space $BT$.
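For example, for $G = U(n)$ the Weyl group is the symmetric group on $n$ letters with fundamental degrees $(d_1,\dots,d_n)=(1,2,\dots,n)$, while the corresponding invariant polynomial generators of $H^*(BT;{{\mathfrak{m}}athbb{Z}})^W$, the elementary symmetric polynomials mapping to the Chern classes, sit in topological degrees $2,4,\dots,2n$.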
The following result is proven in the Appendix.
\begin{thm}\label{thm:souped-up-Molien INTRO}
If $G$ is a compact, connected Lie group with maximal torus $T$, and Weyl group $W$, then
$$
\begin{aligned}
&{{\rm Hilb}}\left( \left(C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W , q, s, t \right) \\
&\quad =\frac{ \prod_{i=1}^n (1-q^{2d_i}) }{ |W| }\sum_{w \in W} \frac{1}{\det(1-q^2w) \left( 1-t(\det(1+sw)-1) \right)}.
\end{aligned}
$$
\end{thm}
\begin{ex}\label{ex: example Hilbert-Poincare series for U(2)}
The Hilbert-Poincar\'e series for $Comm(G)$ where $G=U(2)$ is worked out in two ways.
One way is the formula given in Theorem \ref{thm:souped-up-Molien INTRO}. The second way is by enumerating the representations of $W$ which occur in
$C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] $.
\begin{enumerate}
\item
The Weyl group $W$ is $\Sigma_2$ with elements $1$, and $w\neq 1$.
\item The homology of the space $G/T = U(2)/T = S^2$ is ${\mathfrak{m}}athbb R$ in degrees zero and two, and is $\{0\}$ otherwise.
\item The degrees $(d_1,d_2)$ in Theorem \ref{thm:souped-up-Molien INTRO} are given by $(d_1,d_2) = (1,2)$.
\item The sum in Theorem \ref{thm:souped-up-Molien INTRO} runs over $w$ and $1\in W$.
\end{enumerate}
Then the formula given in Theorem \ref{thm:souped-up-Molien INTRO} reads
$$
{{\rm Hilb}}\left( \left(C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W , q, s, t \right) = \frac{ (1-q^2)(1-q^4)}{2}(A_1 + A_w), $$
where $$\displaystyle A_1= \frac{1}{(1-q^2)^2(1-t[(1+s)^2-1])}$$ and $$\displaystyle A_w = \frac{1}{(1-q^2)(1+q^2)(1-t[(1+s)(1-s)-1])}. $$
Thus
$$\frac{ (1-q^2)(1-q^4)}{2}(A_1) = \frac{1+q^2}{2(1-t(s^2+2s))} {\mathfrak{m}}box{ and } \ \frac{ (1-q^2)(1-q^4)}{2}(A_w) = \frac{1-q^2}{2(1+s^2t)}.$$
The Hilbert-Poincar\'e series is then given by
\begin{align*}
& {{\rm Hilb}}\left( \left(C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W , q, s, t \right) =
\frac{1+q^2}{2(1-t(s^2+2s))} + \frac{1-q^2}{2(1+s^2t)}. \\
\end{align*}
From this information, it follows that the coefficient of $t^m, \ m > 0$, is
$$
\begin{aligned}
&\frac{1}{2}\left[ (1+q^2)(s^2+2s)^m+ (1-q^2)(-s^2)^m \right ] \\
&\quad = (1+q^2)\sum_{1 \leq j \leq m}2^{j -1}\binom{m}{j}s^{2m-j} \ + \left\lbrace\begin{array}{lr}
s^{2m} & {\mathfrak{m}}box{if $m$ is even, and } \\
q^2s^{2m} & {\mathfrak{m}}box{if $m$ is odd. }\\
\end{array}\right.
\end{aligned}
$$
Keeping track of the representations of $W$, an exercise left to the reader, gives an independent verification of the formula in Theorem \ref{thm:souped-up-Molien INTRO} for this special case.
\end{ex}
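As a quick independent check of the displayed identity (purely illustrative; the function names below are ad hoc and not part of the paper), the equality can be verified symbolically for small values of $m$, for instance with SymPy:
\begin{verbatim}
import sympy as sp

q, s = sp.symbols('q s')

def coefficient_of_tm(m):
    # Coefficient of t^m read off from the two geometric series above.
    return sp.Rational(1, 2) * ((1 + q**2) * (s**2 + 2*s)**m
                                + (1 - q**2) * (-s**2)**m)

def closed_form(m):
    # Right-hand side of the displayed identity.
    total = (1 + q**2) * sum(2**(j - 1) * sp.binomial(m, j) * s**(2*m - j)
                             for j in range(1, m + 1))
    total += s**(2*m) if m % 2 == 0 else q**2 * s**(2*m)
    return total

for m in range(1, 6):
    assert sp.simplify(coefficient_of_tm(m) - closed_form(m)) == 0
print("closed form verified for m = 1,...,5")
\end{verbatim}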
Theorem \ref{thm: further.stable decompositions}
states that if $G$ is a compact, simply-connected Lie group, then there are homotopy equivalences
$$ \Sigma (G\times_{NT} \widehat{T}^m/(G/NT)) \to \Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^m,G))$$ for all $m \geq 1$
as long as the order of the Weyl group has been inverted (spaces are localized away from $|W|$). The reduced, real cohomology of
$G\times_{NT} \widehat{T}^m/(G/T)$ as well as
$\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^m,G)$ is given by $$\sum_{\substack{j = k_1+\cdots+k_m\\ \ i \geq 0 \\}} (C_i \otimes \wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n)^W=
\sum_{\substack{j = k_1+\cdots+k_m\\ \ i \geq 0 \\}} (M_{(i,j,m)})^W.$$ By Theorem \ref{theorem: stable splitting of Hom(Zn,G)}, there are homotopy equivalences $$\bigvee_{1 \leq j \leq m}\bigvee_{\binom {m}{j}}\Sigma(\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^j,G)) \to \Sigma({\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)).$$
\begin{cor}\label{cor: Hilbert-Poincare series for stable splitting of Hom(Zn,G)}
Let $G$ be a compact, connected Lie group as in Definition
\ref{defn: trigraded Hilbert-Poincare series}. Then there are
additive isomorphisms
$$ \widetilde{ H}^d({\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G);{\mathfrak{m}}athbb R) \to \sum_{1 \leq s \leq m}
\sum_{i+j = d}\bigg( \sum_{\substack{j = k_1+\cdots+k_s\\ \ i \geq 0 \\}} \oplus_{\binom {m}{s}} (M_{(i,j,s)})^W\bigg). $$
\end{cor}
\begin{rmk}\label{rmk: Poincare series for Hom(Z^n,G)}
The analogue of the Hilbert-Poincar\'e series for
$\widetilde{ H}^*{\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)$ can be given in terms of
that for $Comm(G)$.
\end{rmk}
\textbf{Acknowledgment:} The authors thank V. Reiner for including the appendix in this paper along with his formula giving the Hilbert-Poincar\'e series for $Comm(G)$.
\section{The James construction}\label{section: James construction}
{ In this section we give a quick exposition of some properties of the James construction $J(X)$, the free monoid generated by the space $X$ with the base-point in $X$ acting as the identity as given in the next definition. The properties listed here are well-known, and are stated for the convenience of the reader.
\begin{defn}{\label{jamesdefn}}
The \textit{James construction} or \textit{James reduced product} on $X$ is the space
\[J(X):= \bigsqcup_{n\geq 0} X^n /\sim ,\]
where $\sim$ is the equivalence relation generated by
$(x_1,\dots,x_n) \sim (x_1,\dots,\widehat{x_i},\dots,x_n)$ if $x_i=\ast,$
with the convention that $X^0$ is the base-point $\ast$.
\end{defn}
}
One of the most important properties of $J(X)$ is the existence of a map
$$\theta: J(X) \to \Omega \Sigma(X)$$ which is a homotopy equivalence in case $X$ has the homotopy type of a path-connected CW-complex \cite{james1955reduced}. The singular homology of $J(X)$ is described next.
Recall the homology of $J(X)$ for any path-connected space $X$ of the homotopy type of a CW-complex as follows. Let $R$ be a commutative ring with $1$. Assume that homology is taken with coefficients in the ring $R$ where the homology of $X$ is assumed to be $R$-free. Finally, let
$$W = \widetilde{H}_*(X;R)$$
denote the reduced homology of $X$, and let
$${\mathcal{T}}[W]$$
be the tensor algebra generated by $W$.
Then the Bott-Samelson theorem gives an isomorphism of algebras
$${\mathcal{T}}[W] \to H_*(J(X);R).$$ Although their results give a more precise description of this isomorphism as Hopf algebras,
that additional information is not used here.
Furthermore, if $X$ is of finite type, the Hilbert-Poincar\'e series for $H_*(X;R)$ is defined in the usual way as
$${{\rm Hilb}}(X,t) = \sum_{0 \leq n} d_nt^n,$$
where $d_n$ is the rank of $H_n(X;R)$ as an $R$-module (with the freeness assumptions made above).
Then the following formula holds:
$${{\rm Hilb}}(J(X),t) = 1/(1-({{\rm Hilb}}(X,t)-1))= 1/(2-{{\rm Hilb}}(X,t)).$$
An example is described next where $X$ is a finite product of circles.
\begin{ex}\label{ex:ungraded homology of X2G}
If $$X = (S^1)^m, $$ then $${{\rm Hilb}}(X,t) = (1+t)^m= \sum_{0 \leq j \leq m} \binom{{m}}j t^j,$$ thus
$$ {{\rm Hilb}}(J(X),t) = 1/(2- (1+t)^m) = \frac{1}{1- \sum_{1 \leq j \leq m} \binom{{m}}j t^j}.$$
\end{ex}
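The series above can also be expanded mechanically; the following short SymPy sketch (illustrative only, with ad hoc names) prints the first coefficients of ${{\rm Hilb}}(J(X),t)$ for $X = (S^1)^m$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

def james_series(m, order=8):
    # Expand Hilb(J((S^1)^m), t) = 1/(2 - (1+t)^m) to the given order.
    return sp.series(1 / (2 - (1 + t)**m), t, 0, order)

# For m = 2 the coefficients are 1, 2, 5, 12, 29, 70, ...
# (they satisfy a_k = 2*a_{k-1} + a_{k-2}, from the denominator 1 - 2t - t^2).
print(james_series(2))
\end{verbatim}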
This procedure will be used to describe the ungraded homology groups of the spaces $Comm(G)_{1}$ for various choices of the group $G$.
\section{Proof of Theorem \ref{thm:first approximation INTRO}}\label{section:stable.decomp.Borel.constr}
Begin by proving the following theorem where it will be tacitly assumed that all spaces here are of the homotopy type of a connected CW-complex.
\begin{thm}{\label{decomp.thm}}
Let $Y$ be a $G$-space such that the projection $Y \longrightarrow Y/G$ is a locally trivial fibre bundle, and $X$ a $G$-space with fixed base-point $\ast$. Then there is a homotopy equivalence
\[\Sigma (Y\times_G J(X)) \simeq \Sigma \big(Y/G \vee (\bigvee_{n \geq 1} (Y\times_G \widehat{X}^n) /(Y\times_G \ast ))\big).\]
\end{thm}
Theorem \ref{thm:first approximation INTRO} will be an immediate corollary of Theorem \ref{decomp.thm}. The proof of the theorem is given below by a list of lemmas, see also \cite{may.generalized.splitting.thm}. The current proof is an equivariant splitting.
Assume $X$ and $Y$ are $G$-CW complexes satisfying the conditions of Theorem \ref{decomp.thm}. Consider the map
$$H:J(X) \to J(\bigvee_{1 \leq q< \infty}\widehat{X}^q)$$
given by
\[(x_1,\dots,x_n) {\mathfrak{m}}apsto \prod_{I \subset [n]}x_I,\]
where $x_I=x_{i_1}\wedge \cdots \wedge x_{i_q}$ for $I=(i_1,\dots,i_q)$ running over all admissible sequences in $[n]$, i.e.\ all sequences of the form $(i_1<\cdots<i_q)$, with the factors $x_I$ arranged in left lexicographic order.
Next filter by defining
$$F_N J(\bigvee_{1 \leq q< \infty}\widehat{X}^q)$$
to be the image of
$$J(\bigvee_{1 \leq q \leq N}\widehat{X}^q).$$
There is also a filtration of $J(X)$ given by
$$ {\ast}=J_0(X)\subset J_1(X)\subset J_2(X) \subset \cdots \subset J_q(X)\subset \cdots \subset J(X),$$
where $J_q(X)$ is the image of $\bigsqcup_{1 \leq i \leq q} X^i/\sim $ in $J(X)$.
The next lemmas follow by inspection.
\begin{lemma}[]{\label{filtration}}
If $X$ has a base-point, then the map $$H:J(X) \to J(\bigvee_{1 \leq q< \infty}\widehat{X}^q)$$
restricts to a map $$H:J_N(X) \to F_NJ(\bigvee_{1 \leq q< \infty}\widehat{X}^q)= J(\bigvee_{1 \leq q \leq N}\widehat{X}^q),$$ and preserves filtrations. Furthermore, the map $H:J_N(X) \to J(\bigvee_{1 \leq q \leq N}\widehat{X}^q)$ is $G$-equivariant.
\end{lemma}
\begin{lemma}[]{\label{filtration.and.maps}}
Let $X$ be a pointed space where the group $G$ acts on the space $Y$ as well as on $X$ fixing the base-point of $X$. Then there is a map $$\widehat{H}: Y \times J(X) \to J(\bigvee_{1 \leq q< \infty}(Y \times \widehat{X}^q))$$ defined by
$$\widehat{H}(y;(x_1, \cdots, x_q)) = \prod _{1 \leq i_1< \cdots< i_t \leq q}(y,(x_{i_1},x_{i_2}, \cdots, x_{i_t})).$$
Furthermore, there is an induced map
\[
\begin{CD}
Y \times_GJ_N(X)/ Y\times_{G}\ast @>{}>> J(\bigvee_{1 \leq q \leq N}(Y \times_G \widehat{X}^q/ Y\times_{G}\ast)) \\
\end{CD}
\]
together with a strictly commutative diagram
\[
\begin{CD}
Y \times_GJ_{N-1}(X)/ Y\times_{G}\ast @>{}>> J(\bigvee_{1 \leq q \leq N-1}(Y \times_G \widehat{X}^q/ Y\times_{G}\ast))\\
@VV{}V @VV{J({\mathfrak{m}}box{inclusion})}V \\
Y \times_GJ_N(X)/ Y\times_{G}\ast @>{}>> J(\bigvee_{1 \leq q \leq N}(Y \times_G \widehat{X}^q/ Y\times_{G}\ast))\\
@VV{}V @VV{J({\mathfrak{m}}box{projection})}V \\
Y \times_G (\widehat{X}^N)/ Y\times_{G}\ast @>{1}>> (Y \times_G \widehat{X}^N/ Y\times_{G}\ast).\\
\end{CD}
\]
\end{lemma}
Observe that passage to adjoints gives a commutative diagram as follows:
\[
\begin{CD}
\Sigma(Y \times_GJ_{N-1}(X)/ Y\times_{G}\ast ) @>{}>> \Sigma(\bigvee_{1 \leq q \leq N-1}(Y \times_G \widehat{X}^q/ Y\times_{G}\ast))\\
@VV{}V @VV{\Sigma({\mathfrak{m}}box{inclusion})}V \\
\Sigma (Y \times_GJ_N(X)/ Y\times_{G}\ast )@>{}>> \Sigma(\bigvee_{1 \leq q \leq N}(Y \times_G \widehat{X}^q/ Y\times_{G}\ast))\\
@VV{}V @VV{\Sigma({\mathfrak{m}}box{projection})}V \\
\Sigma (Y \times_G (\widehat{X}^N)/ Y\times_{G}\ast) @>{1}>>\Sigma (Y \times_G \widehat{X}^N/ Y\times_{G}\ast).\\
\end{CD}
\]
The top horizontal arrow is an equivalence for $N-1= 0$ by hypothesis, and in general by the evident inductive hypothesis. The bottom arrow is an equivalence by inspection. Furthermore, the columns give a morphism of cofibration sequences by hypothesis. Thus, the middle arrow is an equivalence assuming that all spaces are of the homotopy type of a CW-complex.
Let $G$ be a compact and connected Lie group with maximal torus $T$. Consider the quotient space $G\times_{NT}J(T)$, where $NT$ acts diagonally on the product $G\times J(T)$. Then the following are immediate corollaries of Theorem \ref{decomp.thm}. Note that Theorem \ref{thm:first approximation INTRO} is the following corollary.
\begin{cor}{\label{app1}}
Let $G$ be a compact and connected Lie group with maximal torus $T$, and $NT$ acting on $T$ by conjugation and on $G$ by group multiplication. There is a homotopy equivalence
\[\Sigma (G\times_{NT} J(T)) \simeq \Sigma (G/NT \vee (\bigvee_{n \geq 1} G\times_{NT} \widehat{T}^n /G\times_{NT} \{1\} )).\]
\end{cor}
\begin{cor}{\label{app1/G}}
With the same assumptions of Corollary \ref{app1}, there is a homotopy equivalence
\[\Sigma (G\times_{NT} J(T)/(G\times \{1\})) \simeq \Sigma (G/NT \vee (\bigvee_{n \geq 1} G\times_{NT} \widehat{T}^n /G\times_{NT} \{1\} ))/(G\times \{1\}).\]
\end{cor}
\section{Proof of Theorem \ref{thm: stable decompositions of X(q G)}}\label{section: f x2g}
The space $Comm(G)$ is a special case of a more general construction on $G$. Let $F_n$ be a free group on $n$ letters. Consider the descending central series of $F_n$ denoted
$$ \cdots \subseteq \Gamma^q(F_n) \subseteq \cdots \subseteq \Gamma^3(F_n) \subseteq \Gamma^2(F_n) \subseteq F_n.$$
Then ${{\mathfrak{m}}athbb{Z}}^n=F_n/\Gamma^2(F_n)=F_n/[F_n,F_n]$. Consider the quotients $F_n/\Gamma^q(F_n)$. There are spaces of homomorphisms ${\rm Hom}(F_n/\Gamma^q(F_n),G)$ which can be realized as subspaces of $G^n$ by identifying $f \in {\rm Hom}(F_n/\Gamma^q(F_n),G)$ with $(g_1,\dots,g_n)\in G^n$, where $f(x_i)=g_i$ and $x_1,\dots,x_n$ are the generators of $F_n/\Gamma^q(F_n)$. The following definition extends the definition of $Comm(G)$ to include spaces of homomorphisms $
{\rm Hom} (F_n/\Gamma^q(F_n),G)$.
\begin{defn}\label{definition: XqG}
Define spaces $X(q,G) \subset Assoc(G)$ for $q\geq 2$ as follows
\[X(q,G):=\bigsqcup_{n\geq 1} {\rm Hom} (F_n/\Gamma^q(F_n),G)/{\sim}\]
where $\sim$ is the equivalence relation defined by
\[(x_1,\dots,x_n) \sim (x_1,\dots,x_{i-1},\widehat{x_i} ,x_{i+1},\dots,x_n)\text{ if } x_i=\ast, \]
as in Definition \ref{definition: spaces of all commuting n-tuples}. Note the space $X(q,G)$ coincides with $Comm(q,G)$ in Definition \ref{definition: spaces of all commuting n-tuples}.
\end{defn}
The inclusions ${\rm Hom}(F_n/\Gamma^q,G) \hookrightarrow {\rm Hom}(F_n/\Gamma^{q+1},G)$ obtained by precomposition of maps, induce inclusions $X(q,G) \subset X(q+1,G)$ for all $q \geq 2$. Hence, there is a filtration of $Assoc(G)$ by the spaces $X(q,G)$
$$Comm(G) = X(2,G) \subset X(3,G) \subset \cdots \subset X(q,G) \subset \cdots \subset J(G)=Assoc(G). $$
The space $X(q,G)$ stably splits if suspended once.
\begin{thm}\label{stable.x2g.prop}
Let $G$ be a compact and connected Lie group. There are homotopy equivalences
\[\Sigma Comm(G) \simeq \Sigma \bigvee_{n\geq 1} \widehat{{\rm Hom}}({\mathfrak{m}}athbb{Z}^n,G),\]
and the spaces $X(q,G)$
\[\Sigma X(q,G) \simeq \Sigma \bigvee_{n\geq 1} \widehat{{\rm Hom}}(F_n/\Gamma^q(F_n),G).\]
\end{thm}
\begin{proof}
Let $ q \geq 2$. Define a filtration of $X(q,G)$ as follows
\[F_1X(q,G) \subseteq F_2X(q,G) \subseteq \cdots \subseteq F_nX(q,G) \subseteq F_{n+1}X(q,G) \subseteq \cdots,\]
where by definition $F_nX(q,G)$ is the image of
$$\bigcup_{1 \leq t \leq n} {\rm Hom}(F_t/\Gamma^q(F_t),G)$$
in $X(q,G).$
Notice that in the special case of $q = 2$,
${{\mathfrak{m}}athbb{Z}}^n = F_n/\Gamma^2(F_n)$.
Hence, $F_nX(q,G)$ is the space of words of length at most $n$, such that the letters of the words
are in ${\rm Hom}(F_n/\Gamma^q(F_n), G)$ (or ${{\mathfrak{m}}athbb{Z}}^n$ in case $q = 2$).
In case $ q = 2$, it follows that the quotient of two consecutive stages of the filtration is
$$F_{n+1}X(2,G)/F_{n}X(2,G) \simeq {\rm Hom}({\mathfrak{m}}athbb{Z}^{n+1},G)/S({\rm Hom}({\mathfrak{m}}athbb{Z}^{n+1},G))=: \widehat{{\rm Hom}}({\mathfrak{m}}athbb{Z}^{n+1},G).$$ In case $ q > 2$, it was verified in \cite{adem2007commuting} that the inclusions
$${\rm Hom}(F_{n-1}/\Gamma^q(F_{n-1}),G) \to {\rm Hom}(F_n/\Gamma^q(F_n),G)$$ are closed cofibrations. By the above remarks,
these inclusions are split. Thus there are cofibration sequences
$${\rm Hom}(F_{n-1}/\Gamma^q(F_{n-1}),G) \hookrightarrow {\rm Hom}(F_n/\Gamma^q(F_n),G) \to C,$$
where $C$ is the cofiber ${\rm Hom}(F_{n}/\Gamma^q(F_{n}),G)/{\rm Hom}(F_{n-1}/\Gamma^q(F_{n-1}),G)$, and the sequences are split via the natural projection map
$$p_n: {\rm Hom}(F_{n}/\Gamma^q(F_{n}),G) \to {\rm Hom}(F_{n-1}/\Gamma^q(F_{n-1}),G)$$
which deletes the $n$-th coordinate.
Notice that $Assoc(G) = J(G)$ splits after one suspension as
$$\Sigma \bigvee_{1 \leq n < \infty} \widehat{G}^{n}.$$
The splitting of $X(q,G)$ is induced by inspection. It follows that $X(q,G)$ splits as stated.
\end{proof}
If $G$ is a closed subgroup of $GL_n({{\mathfrak{m}}athbb{C}})$, then A. Adem and F. Cohen \cite{adem2007commuting} show that there is a homotopy equivalence
\[\Sigma ({\rm Hom} ({\mathfrak{m}}athbb{Z}^n,G)) \simeq \bigvee_{1\leq k \leq n}\Sigma\big(\bigvee^{n \choose k} {\rm Hom} ({\mathfrak{m}}athbb{Z}^k,G)/S({\rm Hom}({\mathfrak{m}}athbb{Z}^{k},G)) \big).\]
Therefore, for these Lie groups, it is possible to obtain all the stable summands of ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$ from the stable summands of $ Comm(G)$.
\section{Proof of Theorems \ref{thm: further.stable decompositions} \& \ref{theorem: the surjection map to x(2,g)}}\label{section:x2g}
Let $G$ be a compact and connected Lie group. Let ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G}$ denote the connected component of the trivial representation ${\mathfrak{m}}athbf{1}=(1,\dots,1)$ in ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)$. Since $T^n$ consists of commuting $n$-tuples, is path-connected, and contains ${\mathfrak{m}}athbf{1}$, it is a subspace of ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_1 \subseteq G^n$. In addition, $G$ acts on the space $T^n$ by coordinatewise conjugation
\begin{align*}
\theta_n: G\times T^n &\to {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G} \\
g\times (t_1,\dots,t_n)& {\mathfrak{m}}apsto (t_1^g,\dots,t_n^g),
\end{align*}
where $t^g=gtg^{-1}$. An $n$-tuple $(h_1,\dots,h_n)$ of elements of $G$ is in ${\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G}$ if and only if there is a maximal torus such that all $h_i$ are in that torus.
Since all maximal tori in $G$ are conjugate (see \cite{adams.lectures.on.lie.groups}), it follows that the map $\theta_n$ is surjective.
The maximal torus $T$ acts diagonally on the product $G \times T^n$
\[t\cdot (g,t_1,\dots,t_n) = (gt,t^{-1}t_1t,\dots,t^{-1}t_nt)=(gt,t_1,\dots,t_n).\]
Hence, $T$ acts trivially on the factor $T^n$. Thus the map $\theta_n$
factors through $(G\times T^n)/T=G\times_T T^n$. Thus there is a surjection
$$\bar{\theta}_n: G/T\times T^n \to {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_1.$$
Moreover, the Weyl group $W$ of $G$ acts diagonally on $G/T \times T^n$ by
\[w\cdot (gT,t_1,\dots,t_n) = (gwT,w^{-1}t_1w,\dots,w^{-1}t_nw),\]
where $gT$ is a coset in $G/T$. It follows that the map $\bar{\theta}_n$ is $W$-invariant since
\begin{align*}
\bar{\theta}_n(gwT,w^{-1}t_1w,\dots,w^{-1}t_nw) & =((gw)w^{-1}t_1w(gw)^{-1},\dots,(gw)w^{-1}t_nw(gw)^{-1})\\
& =(gt_1 g^{-1},\dots,gt_n g^{-1}) = \bar{\theta}_n(gT,t_1,\dots,t_n).
\end{align*}
Therefore, the map $\bar{\theta}_n$ factors through $G\times_{NT} T^n$ and so there are surjections
$$\hat{\theta}_n: G/T\times_W T^n \to {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G}$$
for all $n$.
In what follows let $R={{\mathfrak{m}}athbb{Z}}\left[ 1/|W|\right]$ denote the ring of integers with the order of the Weyl group of $G$ inverted.
\begin{lemma}\label{fibre2}
If $G$ is a compact and connected Lie group with maximal torus $T$ and Weyl group $W$, then $H_{\ast}((\hat{\theta}_n)^{-1}(g_1,\dots,g_n);R)$ is isomorphic to the homology of a point $H_{\ast}(pt,R)$.
\end{lemma}
\begin{proof}
See \cite[Lemma 3.2]{bairdcohomology} for a proof. Note that the cited proof is stated for coefficients in ${{\mathfrak{m}}athbb{Q}}$, but it applies equally well with coefficients in $R$.
\end{proof}
The following theorem is recorded for use in the proof of the next result. See \cite{bairdcohomology} for a proof.
\begin{thm}[Vietoris \& Begle]\label{viet-begle}
Let $h:X \longrightarrow Y$ be a closed surjection, where $X$ is a paracompact Hausdorff space. Suppose that for all $y \in Y$, $H_{\ast}(h^{-1}(y), R)= H_{\ast}(pt, R)$. Then the induced maps in homology
$h_{\ast}:H_{\ast}(X, R) \longrightarrow H_{\ast}(Y, R)$
are isomorphisms.
\end{thm}
Let $Comm(G)_{1_G}$ be the connected component of the trivial representation, see Definition \ref{definition: G to Comm(G)}.
\begin{thm}\label{thm: first approximation}
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$. Then there
is an induced map $${\mathcal{T}}heta: G \times_{NT} J(T) \to Comm(G)_{1_G}$$ together with a commutative diagram for all $m \geq 0$ given as follows:
\[
\begin{CD}
G \times T^m @>{\theta_m}>> {\rm Hom}({{\mathfrak{m}}athbb{Z}}^m,G)_{1_G} \\
@VV{}V @VV{}V \\
G \times_{N_{T}}J(T) @>{{\mathcal{T}}heta}>> Comm(G)_{1_G}.\\
\end{CD}
\]
Furthermore, ${\mathcal{T}}heta$ is a surjection, and the homotopy theoretic fibre of the map ${\mathcal{T}}heta: G \times_{N_{T}}J(T) \to Comm(G)$
has reduced singular homology which is entirely torsion, with orders dividing the order of the Weyl group.
\end{thm}
\begin{proof}
There is an induced map
$${\mathcal{T}}heta: G/T \times_{W} J(T) \to Comm(G)_{1_G},$$
which is a surjection.
Define $J_n(T)$ to be the $n$-th stage of the James construction defined by
the image of
$$\bigsqcup_{0 \leq q \leq n}T^q$$
as a subspace of $J(T)$ with
$$J(T) = {{{\rm colim}\hspace{.2em}}} J_n(T)$$
for path-connected CW-complexes $T$.
Next define $Comm_n(G)$ to be the image of
$$\bigsqcup_{0 \leq q \leq n}{\rm Hom}({\mathfrak{m}}athbb Z^q,G)$$
as a subspace of $Comm(G)$ with
$$Comm(G) = {{{\rm colim}\hspace{.2em}}}Comm_n(G).$$
Then ${\mathcal{T}}heta$ restricts to maps
$${\mathcal{T}}heta: G/T \times_{W} J_n(T) \to Comm_n(G)$$
for all integers $n\geq 1$.
Now consider the surjections
$$\hat{\theta}_n: G/T\times_W T^n \to {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G}$$
which induce homology isomorphisms if $|W|$ is inverted. These maps restrict to surjections
$$\hat{\theta}_n: G/T\times_W S_n(T) \to S_n(G)$$
which similarly induce homology isomorphisms, where $S_n(T)$ is the set of all $n$-tuples in $T^n$ with at least one element the identity $1_G$. Note that $S({\rm Hom}({\mathfrak{m}}athbb Z^n,T)) = S_n(T)$.
Moreover, the inclusions
$$ G/T\times_W S_n(T) \subset G/T\times_W T^n$$
and
$$S_n(G) \subset {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G}$$
are cofibrations with cofibers $G/T\times_W \widehat{T}^n/(G/NT)$ and $\widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)_{1_G}$, respectively. Equivalently, there is a commutative diagram of cofibrations
\[
\begin{CD}
G/T\times_W S_n(T) @>{i}>> G/T\times_W T^n @>>> G/T\times_W \widehat{T}^n/(G/NT)\\
@VV{\hat{\theta}_n}V @VV{\hat{\theta}_n}V @VVV \\
S_n(G) @>{i}>> {\rm Hom}({\mathfrak{m}}athbb{Z}^n,G)_{1_G} @>>> \widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)_{1_G}.\\
\end{CD}
\]
The five lemma applied to the long exact sequences in homology for the cofibrations shows that the maps
$$G/T\times_W \widehat{T}^n/(G/NT) \to \widehat{{\rm Hom}}({{\mathfrak{m}}athbb{Z}}^n,G)_{1_G} $$
induce isomorphisms in homology with the same coefficients. Note that from the stable decompositions in Theorems \ref{thm:first approximation INTRO} and \ref{thm: stable decompositions of X(q G)}, it follows that the cofibers above are the stable summands of the spaces $G/T \times_W J(T)$ and $Comm(G)_{1_G}$, respectively. The map
$${\mathcal{T}}heta: G/T \times_{W} J(T) \to Comm(G)_{1_G}$$
induces a map on the level of stable decompositions which is a homology isomorphism in each summand. Therefore, ${\mathcal{T}}heta$ induces a homology isomorphism with coefficients in ${{\mathfrak{m}}athbb{Z}}[1/|W|]$.
\end{proof}
Note that this also proves Theorem \ref{thm: further.stable decompositions}.
\section{Proof of Theorem \ref{thm: homology of x2g}}\label{section: homology of x2g}
Let $G$ be a compact and connected Lie group and $R$ be the ring ${{\mathfrak{m}}athbb{Z}} \left[{1}/{|W|}\right]$. Using Lemma \ref{fibre2} and Theorems \ref{viet-begle} and \ref{thm: first approximation}, the following theorem holds.
\begin{thm}\label{theorem: homology of GxJ(T)/NT}
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in homology
\[H_{\ast}(G \times_{NT} J(T); R ) \cong H_{\ast} ( Comm(G)_1; R ).\]
\end{thm}
\begin{thm}\label{theorem: homology of x2g1}
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in homology
\[H_{\ast}( Comm(G)_1; R) \cong \big( H_{\ast}(G/T;R) \otimes_{R} {\mathcal{T}}[V]\big)_W .\]
\end{thm}
\begin{proof}
There is a short exact sequence of groups
$$1 \to T \to NT \to W \to 1$$
and associated to it, there is a fibration sequence
\[\big(G \times J(T)\big)/T \to \big(G \times J(T)\big)/NT \to BW,\]
which is equivalent to the fibration
\[G \times_T J(T) \longrightarrow G \times_{NT} J(T) \longrightarrow BW.\]
The Leray spectral sequence has second page given by the groups
\[E^2_{p,q}=H_p(BW; H_q(G\times_T J(T); R))\]
which converges to $H_{p+q}(G\times_{NT} J(T); R)$. Since $|W|^{-1} \in R$, it follows that $E^2_{p,q}=0$ for $p>0$ and the groups on the vertical axis are given by
\[E^2_{0,t}=H_0\big( BW; H_t(G \times_{T} J(T);R)\big).\]
Recall that homology in degree 0 is given by the coinvariants
\[ H_0\big( BW; H_t(G \times_{T} J(T);R)\big) = \big(H_t(G \times_T J(T);R)\big)_W.\]
Also $T$ acts by conjugation and thus trivially on $T^n$, so it acts trivially on $J(T)$. Hence, $G \times_T J(T)=G/T \times J(T)$.
The \textit{flag variety} $G/T$ has torsion-free integral homology, see \cite{bott}, and so does $J(T)$. So the homology of $G/T \times J(T)$ with coefficients in $R$ is given by the following tensor product
\[ H_t(G \times_{T} J(T);R) =\bigoplus_{i+j=t} \big[ H_i(G/T;R) \otimes_{R} H_j (J(T);R) \big].\]
The spectral sequence collapses at the $E^2$ term as stated above. Hence,
\begin{align*}
H_0(BW;H_t(G \times_{T} J(T);R))\cong \bigg( \bigoplus_{i+j=t} \big[ H_i(G/T;R) \otimes_{R} H_j (J(T);R) \big]\bigg)_W.
\end{align*}
Using Theorem \ref{theorem: homology of GxJ(T)/NT}, it follows that
\[H_{\ast} ( Comm(G)_1; R ) \cong H_{\ast}(G \times_{NT} J(T);R) = \big( H_{\ast}(G/T;R) \otimes_{R} H_{\ast} (J(T);R)\big)_W.\]
Recall that the homology of $J(T)$ is the tensor algebra on the reduced homology of $T$. Let ${\mathcal{T}}[V]$ denote the tensor algebra on the reduced homology of $T$, denoted by $V$. Then there is an isomorphism
\[ H_{\ast} ( Comm(G)_1; R ) = \big( H_{\ast}(G/T;R) \otimes_{R} {\mathcal{T}}[V]\big)_W.\]
\end{proof}
If $G$ has the property that every abelian subgroup is contained in a path-connected abelian subgroup, then ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,G)$ is path-connected, see \cite{adem2007commuting}. Some of these groups include $U(n)$, $SU(n)$ and ${{\rm Sp}}(n)$. In this case $Comm(G)$ is also path-connected and it follows that $Comm(G)_1=Comm(G)$.
\begin{cor}
Let $G$ be a compact, connected Lie group with maximal torus $T$ and Weyl group $W$, such that every abelian subgroup is contained in a path-connected abelian subgroup. Then there is an isomorphism in homology
\[H_{\ast}( Comm(G); R) \cong \big( H_{\ast}(G/T;R) \otimes_{R} {\mathcal{T}}[V]\big)_W .\]
\end{cor}
To find the homology groups of $Comm(G)_1$ explicitly, it is necessary to find the coinvariants
\[\big( H_{\ast}(G/T;R) \otimes_{R} {\mathcal{T}}[V]\big)_W .\]
This leads the subject to representation theory. Note that Theorem \ref{theorem: homology of x2g1} can also be used to study the cases of the compact and connected simple exceptional Lie groups $G_2,F_4,E_6,E_7$ and $E_8$.
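To illustrate the representation theory involved, consider the rank-one case $G = SU(2)$ with $R = {{\mathfrak{m}}athbb{Z}}[1/2]$; this example is included only for orientation. Here $G/T \cong S^2$, $W = {{\mathfrak{m}}athbb{Z}}/2$, and $V = \widetilde{H}_*(S^1;R)$ is free of rank one on a class $v$ in degree $1$ on which the nontrivial element of $W$ acts by $-1$. Since $H_{\ast}(G/T;R)$ is ungraded-isomorphic to $R[W]$ (see Section \ref{section: ungraded homology of X2G}), the class in $H_2(S^2;R)$ carries the sign representation. The invariant (equivalently coinvariant, as $2$ is invertible in $R$) summands of $H_{\ast}(S^2;R) \otimes_R {\mathcal{T}}[V]$ under the diagonal action are therefore $H_0 \otimes v^{\otimes m}$ for $m$ even and $H_2 \otimes v^{\otimes m}$ for $m$ odd, so that
$$H_d( Comm(SU(2)); {{\mathfrak{m}}athbb{Z}}[1/2]) \cong {{\mathfrak{m}}athbb{Z}}[1/2] \ {\mathfrak{m}}box{for} \ d = 0 \ {\mathfrak{m}}box{and all} \ d \geq 2, \quad {\mathfrak{m}}box{and} \ H_1 = 0,$$
with Poincar\'e series $1 + t^2/(1-t)$.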
\section{Proof of Theorem \ref{theorem:ungraded homology of X2G INTRO}}\label{section: ungraded homology of X2G}
Let $H_{\ast}^U$ denote ungraded homology and ${\mathcal{T}}_U [V]$ denote the ungraded tensor algebra over $V$, where $V$ is the reduced homology of $T$ with coefficients in $R$.
The homology $H_{\ast}(G/T;R)$, if considered ungraded, is isomorphic as an $R[W]$-module to the group ring of $W$, namely $R[W]$. This fact was proven in \cite[Proposition B.1]{bairdcohomology}; thus, ignoring the grading of the homology $H_{\ast}(G/T;R)$ in the proof of Theorem \ref{theorem: homology of x2g1}, the homology is isomorphic to $R[W]$ as a $W$-module. Furthermore, as an ungraded module, there is an isomorphism
$$H_*(G/T) \otimes_{R[W]} {\mathcal{T}} [V] \to {\mathcal{T}}[V].$$
The next theorem follows.
\begin{thm}\label{theorem:ungraded homology of X2G}
Let $G$ be a compact, connected Lie group with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in ungraded homology
\[H_{\ast}^U( Comm(G)_1; R) \cong {\mathcal{T}}_U [V] .\]
\end{thm}
\begin{proof}
From Theorem \ref{theorem: homology of x2g1} there is an isomorphism in homology given by
\[H_{\ast}( Comm(G)_1; R) \cong \big( H_{\ast}(G/T;R) \otimes_{R} {\mathcal{T}}[V]\big)_W .\]
If all homology is ungraded, then there are isomorphisms in ungraded homology given by
\[
H_{\ast}^U( Comm(G)_1; R) \cong \big( R[W] \otimes_{R} {\mathcal{T}}_U[V]\big)_W \cong R[W] \otimes_{R[W]} {\mathcal{T}}_U[V] \cong {\mathcal{T}}_U[V].
\]
\end{proof}
This shows that as an abelian group, without the grading, the homology of $Comm(G)_1$ with coefficients in $R$ is the ungraded tensor algebra ${\mathcal{T}}_U[V]$. The following is an immediate corollary of Theorem \ref{theorem:ungraded homology of X2G}.
\begin{cor}
Let $G$ be a compact, connected Lie group with maximal torus $T$ and Weyl group $W$, such that every abelian subgroup is contained in a path-connected abelian subgroup. Then there is an isomorphism in ungraded homology
\[H_{\ast}^U( Comm(G); R) \cong {\mathcal{T}}_U [V] .\]
\end{cor}
\section{An example given by $SO(3)$}\label{section: example SO(3)}
Consider the special orthogonal group $SO(3)$. The connected components of the space $Comm(SO(3))$
are given next. D. Sjerve and E. Torres-Giese \cite{giese.sjerve} showed that the space ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3))$ has the following decomposition into path components
$$
{\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3)) \approx {\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3))_1 \bigsqcup \bigg( \bigsqcup_{N_n} S^3/Q_8 \bigg),
$$
where $Q_8$ is the quaternion group of order $8$ acting on the $3$-sphere, and $N_n$ is a non-negative integer depending on $n$, equal to
$\frac{1}{6}\left(4^n+3\cdot 2^n +2 \right)$ if $n$ is even, and
$\frac{2}{3}(4^{n-1}-1)-2^{n-1}+1$ otherwise. By definition it follows that
$$
Comm(SO(3)) = \bigg(\bigsqcup_{n \geq 1} \big[ {\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3))_1 \bigsqcup \big( \bigsqcup_{N_n} S^3/Q_8 \big) \big] \bigg)/\sim,
$$
where $\sim$ is the relation in Definition \ref{definition: spaces of all commuting n-tuples}.
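For instance, the formula above gives $N_1 = 0$ (consistent with ${\rm Hom}({{\mathfrak{m}}athbb{Z}},SO(3)) = SO(3)$ being path-connected), $N_2 = \tfrac{1}{6}(16+12+2) = 5$, and $N_3 = \tfrac{2}{3}(4^2-1)-2^2+1 = 7$.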
An $n$-tuple $(M_1,\dots , M_n)$ is in the connected component ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3))_1$ if and only if all the matrices $M_1,\dots , M_n$ are rotations about the same axis. The $n$-tuple is in one of the components homeomorphic to $S^3/Q_8$ if and only if there are two matrices $M_i$ and $M_j$ that are rotations about orthogonal axes such that the other coordinates are equal to one of $M_i$, $M_j$, $M_i M_j$ or $1_G$, see \cite{giese.sjerve}. For any positive integer $n$ there is at least one $n$-tuple not in ${\rm Hom}({{\mathfrak{m}}athbb{Z}}^n,SO(3))_1$ with no coordinate the identity. Therefore, in the identifications above the number of copies of $S^3/Q_8$ increases with $n$ and there are infinitely many copies of $S^3/Q_8$ as connected components of $Comm(SO(3))$. So it follows that $Comm(SO(3))$ has the following path components
$$Comm(SO(3)) = Comm(SO(3))_1 \bigsqcup \big( \bigsqcup_{\infty} S^3/Q_8 \big).$$
See \cite{stafa.thesis} for more details. Note that Theorem \ref{thm:souped-up-Molien INTRO} gives information about the rational cohomology as well as the integral cohomology with $2$ inverted of $Comm(SO(3))_1$.
\section{The cases of $U(n)$, $SU(n)$, $Sp(n)$, and $Spin(n)$ }\label{section:Un.SUn}
Recall the map of spaces
$${\mathcal{T}}heta: G\times_{NT}J(T) \to Comm(G)_{1},$$
which induces a map in singular homology which is an isomorphism under the conditions that $G$ is compact, simply-connected, and the order of the Weyl group $W$ for a maximal torus $T$ has been inverted in the coefficient ring (Theorem \ref{theorem: homology of x2g1}). The purpose of this section is to describe the ungraded homology of $Comm(G)$ for $G$ one of the groups $U(n)$, $SU(n)$, $Sp(n)$, and $Spin(n)$.
A maximal torus for $U(n)$ is of rank $n$ given by $$T = (S^1)^n.$$ Thus the ungraded homology of $Comm(U(n))$ has Poincar\'e series
$$ {{\rm Hilb}}(Comm(U(n)),t)= 1/(2-(1+t)^n).$$ The analogous result for $SU(n)$ follows, that is, the Poincar\'e series is given by
$${{\rm Hilb}}(Comm(SU(n)),t)=1/(2-(1+t)^{n-1}).$$
These series give the ungraded homology of $Comm(G)$ for $G = U(n)$ and $SU(n)$, respectively.
The case of $Sp(n)$ is analogous as a maximal torus is of rank $n$, so the Poincar\'e series for the ungraded
homology of $Comm(Sp(n))$ with the order of the Weyl group $2^n\, n!$ inverted is
$${{\rm Hilb}}(Comm(Sp(n)),t)=1/(2-(1+t)^n).$$
The cases of $SO(n)$ and $Spin(n)$ break down classically into the two cases $n = 2a$ and $n = 2b+1$.
First, the case of $$n = 2b+1.$$ The rank of a maximal torus (finite product of circles) is $b$. Thus the Poincar\'e series for the ungraded homology of $Comm(Spin(2b+1))$ with the order of the Weyl group $2^b\, b!$ inverted is
$${{\rm Hilb}}(Comm(Spin(2b+1)),t)=1/(2-(1+t)^b).$$
Second, the case of $$n = 2a.$$ The rank of a maximal torus (finite product of circles) is $a$. Thus the Poincar\'e series for the ungraded homology of $Comm(Spin(2a))$ with the order of the Weyl group $2^{a-1}\, a!$ inverted is
$${{\rm Hilb}}(Comm(Spin(2a)),t)=1/(2-(1+t)^a).$$
\section{Results for $G_2$, $F_4$, $E_6$, $E_7$ and $E_8$}\label{section:exceptional Lie groups}
In this section applications of earlier results concerning $Comm(G)$ are applied to the classical exceptional, simply-connected, simple compact Lie groups $G_2$, $F_4$, $E_6$, $E_7$ and $E_8$. In particular, the ungraded homology of $Comm(G)$ is given in these cases.
Recall the map
$${\mathcal{T}}heta: G\times_{NT}J(T) \to Comm(G).$$
In the special cases of this section, if the rank of the maximal torus is $r$, then a choice of maximal torus will be denoted $$T_r.$$ The following facts are classical; see \cite{chevalley1999theory}.
The first explicit determination of the Poincar\'e polynomials of the exceptional simple Lie groups was accomplished by Yen Chih-Ta \cite{yen.chih.ta}.
\begin{enumerate}
\item If $G = G_2$, then $r=2$, the Weyl group $W$ is the dihedral group of order $12$, and
the cohomology with $6$ inverted is an exterior algebra on classes in degrees $3$ and $11$.
\item If $G = F_4$, then $r=4$, the Weyl group $W$ has order $2^7\times 3^2=1,152$,
and the cohomology with $6$ inverted is an exterior algebra on classes in degrees $3,11,15,23$.
\item If $G = E_6$, then $r=6$, the Weyl group $W$ is $O(6, {\mathfrak{m}}athbb F_2)$ of order $51,840$,
and the cohomology with $6$ inverted is an exterior algebra on classes in degrees $3,9,11,15,17,23$.
\item If $G = E_7$, then $r=7$, the Weyl group $W$ is $O(7,{\mathfrak{m}}athbb F_2) \times {\mathfrak{m}}athbb Z/2$ of order $2,903,040$,
and the cohomology with $2\cdot3\cdot 5 \cdot 7$ inverted is an exterior algebra on classes in degrees $3, 11, 15, 19, 23, 27, 35$.
\item If $G = E_8$, then $r=8$, the Weyl group $W$ is a double cover of $O(8,{\mathfrak{m}}athbb F_2)$ of order $(2 ^{14})(3^5)(5^2)(7)$, and the cohomology with $2 \cdot 3 \cdot 5\cdot 7$ inverted is an exterior algebra on classes in degrees $3,15,23,27,35,39, 47, 59$.
\end{enumerate}
Finally, recall Theorem \ref{theorem:ungraded homology of X2G} restated as follows where
$H_{\ast}^U(X; R)$ denotes ungraded homology.
\begin{thm}
Let $G$ be one of the exceptional Lie groups $G_2, F_4, E_6, E_7, E_8$ with maximal torus $T$ and Weyl group $W$. Then there is an isomorphism in ungraded homology
\[H_{\ast}^U( Comm(G); R) \cong {\mathcal{T}}_U [V] .\]
\end{thm}
One consequence is that if homology groups are regarded as ungraded, then the copies of the group ring can be canceled, and this gives the ungraded homology of $Comm(G)$ in the cases of these exceptional Lie groups, which are recorded as follows.
\begin{cor}\label{cor: rational.for.exceptional simple Lie groups}
Let $G$ be one of $G_2, F_4, E_6, E_7, E_8$. The ungraded homology of the space $Comm(G)_{1}$
in these cases with coefficients in ${\mathfrak{m}}athbb Z[1/|W|]$, where $|W|$ is the order of the Weyl group, is given as follows: The ungraded homology is isomorphic to $${\mathcal{T}}[\widetilde{H}_*(T)]$$
where $T$ is a maximal torus. For each of the groups, the Hilbert-Poincar\'e series is given as follows
\begin{enumerate}
\item If $G = G_2$, then the ungraded homology of $Comm(G)_1$
is the tensor algebra ${\mathcal{T}}[\widetilde{H}_*((S^1)^2)]$, and has Hilbert-Poincar\'e series $$1/(1-2t-t^2).$$
\item If $G = F_4$, then the ungraded homology of $Comm(G)_1$
is the tensor algebra ${\mathcal{T}}[\widetilde{H}_*((S^1)^4)]$, and has Hilbert-Poincar\'e series $$1/(2-(1+t)^4).$$
\item If $G = E_6$, then the ungraded homology of $Comm(G)_1$
is the tensor algebra ${\mathcal{T}}[\widetilde{H}_*((S^1)^6)]$, and has Hilbert-Poincar\'e series $$1/(2-(1+t)^6).$$
\item If $G = E_7$, then the ungraded homology of $Comm(G)_1$
is the tensor algebra ${\mathcal{T}}[\widetilde{H}_*((S^1)^7)]$, and has Hilbert-Poincar\'e series $$1/(2-(1+t)^7).$$
\item If $G = E_8$, then the ungraded homology of $Comm(G)_1$
is the tensor algebra ${\mathcal{T}}[\widetilde{H}_*((S^1)^8)]$, and has Hilbert-Poincar\'e series $$1/(2-(1+t)^8).$$
\end{enumerate}
\end{cor}
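For orientation, in the case $G = G_2$ the series $1/(1-2t-t^2)$ expands as $1 + 2t + 5t^2 + 12t^3 + 29t^4 + \cdots$, with coefficients satisfying the recursion $a_k = 2a_{k-1} + a_{k-2}$ coming from the denominator.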
\appendix
\section{Proof of Theorem \ref{thm:souped-up-Molien INTRO}}\label{appendix: proof of proposition}
\begin{center}
{V. Reiner
}\end{center}
Let $W$ be a finite subgroup of $GL_n({{\mathfrak{m}}athbb{R}})$ generated by reflections acting on ${{\mathfrak{m}}athbb{R}}^n$. Then $W$ also acts in a grade-preserving fashion
on the polynomial algebra
$$
R={{\mathfrak{m}}athbb{R}}[x_1,\ldots,x_n],
$$
where $x_1,\ldots,x_n$ are a basis for the dual space $({{\mathfrak{m}}athbb{R}}^n)^*$.
The theorem of Shephard-Todd and Chevalley \cite[\S 4.1]{broue.reflextion.gps}
asserts that the subalgebra of $W$-invariant polynomials is again a polynomial algebra
$$
R^W={{\mathfrak{m}}athbb{R}}[f_1,\ldots,f_n].
$$
One can choose the $f_1,\ldots,f_n$ homogeneous, say with degrees
$d_1,\ldots,d_n$. For example, when $W$ is the symmetric group $S_n$ permuting coordinates in ${{\mathfrak{m}}athbb{R}}^n$,
one can choose $f_i=e_i(x_1,\ldots,x_n)$
the {\it elementary symmetric functions},
and one has $(d_1,\ldots,d_n)=(1,2,\ldots,n)$.
The usual grading conventions for the cohomology of a topological space require that all degrees $d_j$ be doubled in the formulas here. For example, in the case of $G= U(n)$ with $W$ given by the symmetric group on $n$ letters, the value $d_j = j$, but the homological degree of the $j$-th Chern class is $2j$.
Then $R^W$ has the following ${\mathbb {N}}$-graded Hilbert series in the variable $q$:
\begin{equation}
\label{polynomial-invariants-hilb}
{{\rm Hilb}}(R^W,q):=\sum_{i=0}^\infty \dim_{{{\mathfrak{m}}athbb{R}}}(R^W_i)\,\, q^{i}
=\prod_{j=1}^n \frac{1}{1-q^{2d_j}}.
\end{equation}
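For example, when $W = S_2$ permutes the coordinates of ${{\mathfrak{m}}athbb{R}}^2$, the degrees are $(d_1,d_2)=(1,2)$ and
$${{\rm Hilb}}(R^W,q) = \frac{1}{(1-q^2)(1-q^4)},$$
the Hilbert series of the subalgebra generated by the two elementary symmetric functions in topological degrees $2$ and $4$.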
The {\it coinvariant algebra} is the quotient ring
$$
C:=R/(f_1,\ldots,f_n)=R/(R^W_+),
$$
which also carries a grade-preserving $W$-action.
In addition, we consider the $W$-action on the
exterior algebra $E=\wedge {{\mathfrak{m}}athbb{R}}^n$,
the reduced exterior algebra ${\widetilde{E}}:=\bigoplus_{k=1}^n \wedge^k {{\mathfrak{m}}athbb{R}}^n$,
and on the ${{\mathfrak{m}}athbb{R}}$-dual ${\mathcal{T}}^*[{\widetilde{E}}]$ of the tensor
algebra over ${{\mathfrak{m}}athbb{R}}$ on ${\widetilde{E}}$.
These will have their own separate ${\mathbb {N}}^2$-bi-grading
so that the ${{\mathfrak{m}}athbb{R}}$-dual of $\wedge^k {{\mathfrak{m}}athbb{R}}^n$ lies in bidegree $(k,1)$,
and the ${{\mathfrak{m}}athbb{R}}$-dual of
$\wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n$ lies in
the bidegree $(k_1+\cdots+k_m,m)$ of ${\mathcal{T}}^*[{\widetilde{E}}]$.
Thus the diagonal action of $W$
on the tensor product $C \otimes {\mathcal{T}}^*[{\widetilde{E}}]$ over ${{\mathfrak{m}}athbb{R}}$
respects an {\it ${\mathbb {N}}^3$-tri-grading}. Its $W$-fixed space
has trigraded Hilbert series defined by
$$
\begin{aligned}
&{{\rm Hilb}}\left( \left(C \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W , q, s, t \right) \\
&\quad := \sum_{i,m=0}^\infty \sum_{\substack{(k_1,\ldots,k_m) \\ \text{in }\{1,2,\ldots\}^m} }
\dim_{{{\mathfrak{m}}athbb{R}}} \left( C^i \otimes
\left(\wedge^{k_1} {{\mathfrak{m}}athbb{R}}^n \otimes \cdots \otimes \wedge^{k_m} {{\mathfrak{m}}athbb{R}}^n\right)^*
\right)^W
q^{i} s^{k_1+\cdots+k_m} t^m.
\end{aligned}
$$
In the next theorem, the exponent in $q^{2i}$ is doubled as these degrees correspond to the topological degrees in the invariant algebra.
\begin{thm}\label{thm:souped-up-Molien}
If $G$ is a compact, connected Lie group with maximal torus $T$, and Weyl group $W$, then
$$
\begin{aligned}
&{{\rm Hilb}}\left( \left(C^* \otimes {\mathcal{T}}^*[{\widetilde{E}}] \right)^W , q, s, t \right) \\
&\quad =\frac{ \prod_{i=1}^n (1-q^{2d_i}) }{ |W| }\sum_{w \in W} \frac{1}{\det(1-q^2w) \left( 1-t(\det(1+sw)-1) \right)}.
\end{aligned}
$$
\end{thm}
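As a quick consistency check, setting $t=0$ retains only the $m=0$ component of ${\mathcal{T}}^*[{\widetilde{E}}]$, and the right side becomes $\frac{\prod_{i=1}^n(1-q^{2d_i})}{|W|}\sum_{w\in W}\frac{1}{\det(1-q^2w)}$, which equals $1$ by Molien's theorem together with \eqref{polynomial-invariants-hilb}; this agrees with the left side, since the $W$-fixed subspace of the coinvariant algebra is spanned by the constants.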
\begin{proof}
For any (ungraded) finite-dimensional $W$-representation $X$ over ${\mathbb{R}}$,
one has an isomorphism \cite[(4.5)]{broue.reflextion.gps} of ${\mathbb {N}}$-graded ${\mathbb{R}}$-vector spaces
\begin{equation}
\label{Broue-isomorphism}
\left( R \otimes X \right)^W
\cong R^W \otimes \left( C^* \otimes X \right)^W.
\end{equation}
If the $W$-action respects an additional ${\mathbb {N}}^2$-grading on $X=\oplus_{(j,m)} X_{j,m}$,
separate from the ${\mathbb {N}}$-grading on $R$, then
\eqref{Broue-isomorphism} becomes an
isomorphism of ${\mathbb {N}}^3$-trigraded ${\mathbb{R}}$-vector spaces. Consequently, one has
\begin{equation}
\label{tensor-consequence}
\begin{aligned}
{{\rm Hilb}}\left( \left( C^* \otimes X \right)^W, q,s,t\right)
&=
\frac{
{{\rm Hilb}}\left( \left( R \otimes X \right)^W, q,s,t\right)
}
{
{{\rm Hilb}}( R^W, q)
}\\
&=
\prod_{i=1}^n (1-q^{2d_i}) \cdot
{{\rm Hilb}}\left( \left( R \otimes X \right)^W, q,s,t\right)
\end{aligned}
\end{equation}
using \eqref{polynomial-invariants-hilb} for the last equality.
Next note that
\begin{equation}
\label{bigraded-trace-consequence}
\begin{aligned}
{{\rm Hilb}}\left( \left( R \otimes X \right)^W, q,s,t\right)
&=\sum_{i,j,m = 0}^{\infty}
\dim_{\mathbb{R}} \left( R_i \otimes X_{j,m} \right)^W q^{i} s^j t^m \\
&=\frac{1}{|W|} \sum_{w \in W}
\sum_{i,j,m = 0}^{\infty}
\mathrm{Trace}\left(w|_{R_i \otimes X_{j,m}} \right) q^{i} s^j t^m
\end{aligned}
\end{equation}
as $\pi: v \longmapsto 1/|W| \sum_{w \in W} w(v)$ is
an idempotent projection onto the $W$-fixed subspace $V^W$ of any
$W$-representation $V$, so
the trace of $\pi$ is $\dim_{\mathbb{R}}(V^W)$.
Taking $X={\mathcal{T}}^*[{\widetilde{E}}]$ in \eqref{tensor-consequence} and \eqref{bigraded-trace-consequence}, the theorem follows
from this claim: for any $w$ in $W$,
\begin{equation}
\label{trace-claim}
\sum_{i,j,m = 0}^{\infty}
\mathrm{Trace}\left(w|_{R_i \otimes {\mathcal{T}}^*[{\widetilde{E}}]_{j,m}} \right) q^{i} s^j t^m =
\frac{1}{\det(1-q^2w) \left( 1-t(\det(1+sw)-1) \right)}
\end{equation}
To prove the claim \eqref{trace-claim}, start with the facts \cite[Example 3.25]{broue.reflextion.gps} that
\begin{eqnarray}
\label{polynomials-graded-trace}
\sum_{i=0}^\infty
\mathrm{Trace}\left(w|_{R_i} \right) q^{i}
\displaystyle
&=& \frac{1}{\det(1-q^2w^{-1})}
= \frac{1}{\det(1-q^2w)}, \\
\label{exterior-graded-trace}
\sum_{j=0}^n
\mathrm{Trace}\left(w|_{\wedge^j {\mathbb{R}}^n} \right) s^j
&=&\det(1+sw)=\det(1+sw^{-1}).
\end{eqnarray}
The rightmost equalities in
\eqref{polynomials-graded-trace},\eqref{exterior-graded-trace}
arise because $w$ acts orthogonally on ${\mathbb{R}}^n$,
forcing $w$ and $w^{-1}$ to have the same eigenvalues with multiplicities.
From \eqref{exterior-graded-trace} one deduces that
$$
\sum_{j=1}^n
\mathrm{Trace}\left(w|_{\wedge^j {\mathbb{R}}^n} \right) s^j
= \det(1+sw)-1.
$$
Since (the dual of) $\wedge^j {\mathbb{R}}^n$
lies in the bidegree ${\mathcal{T}}^*[{\widetilde{E}}]_{j,1}$,
this implies
\begin{equation}
\label{tensors-graded-trace}
\begin{aligned}
\sum_{j,m=0}^\infty
\mathrm{Trace}\left( w|_{{\mathcal{T}}^*[{\widetilde{E}}]_{j,m}} \right) s^j t^m
&= \frac{1}{1-t\left(\det(1+sw^{-1})-1\right)} \\
&= \frac{1}{1-t\left(\det(1+sw)-1\right)}.
\end{aligned}
\end{equation}
Consequently the claim \eqref{trace-claim} follows from
\eqref{polynomials-graded-trace}
and \eqref{tensors-graded-trace}, since
graded traces are multiplicative on
graded tensor products. This completes the proof.
\end{proof}
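As a small illustration of \eqref{tensors-graded-trace}, take $n=1$ and $w$ the identity: then $\det(1+sw)-1=s$, so the right side is $1/(1-ts)$, while the left side is $\sum_{m\geq 0}s^mt^m$, since ${\mathcal{T}}^*[{\widetilde{E}}]_{m,m}$ is one-dimensional for each $m$ and all other bidegrees vanish; the two agree.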
\section{Acknowledgments}
The authors thank V. Reiner for contributing the appendix to this paper, along with his formula giving the Hilbert-Poincar\'e series for $Comm(G)$.
\end{document}
|
\begin{document}
\title[nD Kuramoto-Sivashinsky]
{ Global Regular Solutions for the multi-dimensional Kuramoto-Sivashinsky equation posed on smooth domains
}
\author{N. A. Larkin}
\address
{
Departamento de Matem\'atica, Universidade Estadual
de Maring\'a, Av. Colombo 5790: Ag\^encia UEM, 87020-900, Maring\'a, PR, Brazil
}
\thanks
{
MSC 2010:35Q35; 35Q53.\\
Keywords: Kuramoto-Sivashinsky equation, Global solutions; Decay in Bounded domains
}
\email{ [email protected];[email protected] }
\date{}
\begin{abstract}Initial-boundary value problems for the $n$-dimensional Kuramoto-Sivashinsky equation ($n$ a natural number from the interval $[2,7]$) posed on smooth bounded domains in ${\mathbb R}^n$ are considered. The existence and uniqueness of global regular solutions as well as their exponential decay have been established.
\end{abstract}
\maketitle
\section{Introduction}\label{introduction}
This work concerns the existence, uniqueness, regularity and exponential decay rates of solutions to initial-boundary value problems for the $n$-dimensional Kuramoto-Sivashinsky equation (KS)
\begin{align}
& \phi_t+\Delta^2 \phi+ \Delta \phi +\frac{1}{2}|\nabla \phi|^2=0.
\end{align}
Here $n$ is a natural number from the interval [2,7], $\Delta$ and $\nabla$ are the Laplacian and the gradient in ${\mathbb R}^n.$
In \cite{kuramoto}, Kuramoto studied the turbulent phase waves and Sivashinsky in \cite{sivash} obtained an asymptotic equation which simulated the evolution of a disturbed plane flame front. See also \cite{Cross}.
Mathematical results on initial and initial-boundary value problems for the one-dimensional (1.1) are presented in \cite{Iorio, Guo,cousin,Larkin2,temam1,otto,temam2,zhang}, see references cited there for more information. In
\cite{gramchev,Guo}, the initial value problem for the multi-dimensional (1.1) type equations has been considered.
Two-dimensional periodic problems for the K-S equation and its modifications posed on rectangles were studied in \cite{kukavica,molinet,temam1,sell,temam2}, where some results on the existence of weak solutions and nonlinear stability have been established. In \cite{Larkin}, initial-boundary value problems for the 3D Kuramoto-Sivashinsky equation have been studied; the existence, uniqueness and exponential decay of global regular solutions have been proved.\\
For $n$ dimensions, $ x=(x_1,...,x_n)$, $n=2,3,4,5,6,7$, (1.1) can be rewritten in the form of the following system:
\begin{align}
(u_j)_t+\Delta^2 u_j+\Delta u_j +\frac{1}{2}\sum_{i=1}^n(u_i)^2_{x_j}=0,\; j=1,...,n,& \\
(u_i)_{x_j}=(u_j)_{x_i},\; j\ne i,\;\; i,j=1,...,n,
\end{align}
where $u_j=(\phi)_{x_j},\;j=1,...,n.$ Let $\Omega_n= \prod_{i=1}^n(0,L_i)$ be the minimal nD parallelepiped containing a given smooth domain $\Bar{D_n}.$
The first essential problem that arises while one studies either (1.1) or (1.2)-(1.3) is the destabilizing effect of the terms
$\Delta u_j;$ they may be damped by the dissipative terms $\Delta^2 u_j$ provided $D_n$ has some specific properties. In order to understand this, we use Steklov's inequalities to estimate
$$a\|u_j\|^2\leq \|\nabla u_j\|^2,\;a\|\nabla u_j\|^2\leq \|\Delta u_j\|^2;\;a=\sum_{i=1}^n\frac{\pi^2}{L_i^2},\; \;j=1,...,n.$$
A simple analysis shows that if
\begin{equation}
1-\frac{1}{a}>0,
\end{equation} then $\Delta^2 u_j$ damps $\Delta u_j$. Naturally, here appear admissible domains where (1.4) is fulfilled, the so-called ``thin domains'',
where some $L_i$ are sufficiently small while the other $L_j$ may be large, $ i,j=1,...,7; \;i\ne j.$
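For instance, in the two-dimensional case with $L_1=L_2=L$ one has $a=2\pi^2/L^2$, so (1.4) holds exactly when $L<\pi\sqrt{2}$; alternatively, fixing $L_1=1/2$ already gives $a\geq 4\pi^2>1$ regardless of how large $L_2,...,L_n$ are.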
The second essential problem is the presence of the interconnected semi-linear terms in (1.2). This does not allow one to obtain the first estimate independent of $u_j$ and leads to a connection between $L_i$ and $u_j(0), \;i,j=1,...,7.$ \\
Our aim in this work is to study $n$-dimensional initial-boundary value problems for (1.2)-(1.3) posed on smooth domains; we establish the existence and uniqueness of global regular solutions as well as the exponential decay of their $H^2(D_n)$-norm. Moreover, we obtain a ``smoothing effect'' for solutions with respect to the initial data. Although the cases $n=2,3$ are not new, we include them for the sake of generality.
This work has the following structure: Section 1 is the Introduction. Section 2 contains notations and auxiliary facts. In Section 3, the formulation of an initial-boundary value problem in a smooth bounded domain $D_n$ is given. The existence and uniqueness of global regular solutions, exponential decay of the $H^2(D_n)$-norm and a ``smoothing effect'' have been established. Section 4 contains conclusions.
\section{Notations and Auxiliary Facts}
Let $D_n$ be a sufficiently smooth domain in ${\mathbb R}^n,$ where $n$ is a fixed natural number, $n\in [2,7],$ satisfying
the Cone condition, \cite{Adams}, and $x=(x_1,...,x_n) \in D_n.$ We use the standard notations of Sobolev spaces $W^{k,p}$, $L^p$ and $H^k$ for functions and the following notations for the norms \cite{Adams}
for scalar functions $f(x,t):$
$$\| f \|^2 = \int_{D_n} | f |^2dx, \hspace{1cm} \| f \|_{L^p(D_n)}^p = \int_{D_n} | f |^p\, dx,$$
$$\| f \|_{W^{k,p}(D_n)}^p = \sum_{0 \leq \alpha \leq k} \|D^\alpha f \|_{L^p(D_n)}^p, \hspace{1cm} \| f \|_{H^k(D_n)} = \| f \|_{W^{k,2}(D_n)}.$$
When $p = 2$, $W^{k,p}(D_n) = H^k(D_n)$ is a Hilbert space with the scalar product
$$((u,v))_{H^k(D_n)}=\sum_{|j|\leq k}(D^ju,D^jv),\;
\|u\|_{L^{\infty}(D_n)}=ess\; sup_{D_n}|u(x)|.$$
We use a notation $H_0^k(D_n)$ to represent the closure of $C_0^\infty(D_n)$, the set of all $C^\infty$ functions with compact support in $D_n$, with respect to the norm of $H^k(D_n)$.
\begin{lemma}[Steklov's Inequality \cite{steklov}] Let $v \in H^1_0(0,L).$ Then
\begin{equation}\label{Estek}
\frac {\pi^2}{L^2}\|v\|^2(t) \leq \|v_x\|^2(t).
\end{equation}
\end{lemma}
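The constant in \eqref{Estek} is sharp: for $v(x)=\sin(\pi x/L)$ one has $\|v\|^2=L/2$ and $\|v_x\|^2=\pi^2/(2L)$, so \eqref{Estek} holds with equality.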
\begin{lemma}
[Differential form of the Gronwall Inequality]\label{gronwall} Let $I = [t_0,t_1]$. Suppose that functions $a,b:I\to {\mathbb R}$ are integrable and a function $a(t)$ may be of any sign. Let $u:I\to {\mathbb R}$ be a differentiable function satisfying
\begin{equation}
u_t (t) \leq a(t) u(t) + b(t),\text{ for }t \in I\text{ and } \,\, u(t_0) = u_0,
\end{equation}
then
\begin{equation}u(t) \leq u_0 e^{ \int_{t_0}^t a(s)\, ds } + \int_{t_0}^t e^{\int_{s}^t a(r) \, dr} b(s)\, ds.\end{equation}
\end{lemma}
\begin{proof}
Multiply (2.2) by the integrating factor $e^{-\int_{t_0}^{t} a(r)\, dr}$ and integrate from $t_0$ to $t$.
\end{proof}
The next Lemmas will be used in estimates:
\begin{lemma}[See: \cite{friedman}, Theorem 9.1] Let $n$ be a natural number from the interval $[2,7]$; let $D_n$ be a sufficiently smooth bounded domain in ${\mathbb R}^n$ satisfying the cone condition and $v \in H^4(D_n)\cap H^1_0(D_n)$. Then,
for all $n$ defined above,
\begin{equation}
\sup_{D_n}|v(x)|\leq C_n\| v\|_{H^4(D_n)}.
\end{equation}
The constant $C_n$ depends on $n, D_n.$
\end{lemma}
\begin{lemma}\label{lemma3}
Let $f(t)$ be a continuous positive function such that
\begin{align} & f'(t) + (\alpha - k f^n(t)) f(t) \leq 0,\;t>0,\;n\in{\mathbb N},\label{lemao1}\\
& \alpha - k f^n(0)> 0,\;k>0.\label{lemao2}\end{align}
\noindent Then
\begin{equation}f(t) < f(0)\end{equation}
\noindent for all $t > 0$.
\end{lemma}
\proof{ Obviously, $f'(0) + (\alpha - k f^n(0)) f(0) \leq 0$. Since $f$ is continuous, there exists $T>0$ such that $f(t)<f(0)$ for every $t \in (0,T)$. Suppose that $f(0) = f(T)$. Integrating \eqref{lemao1}, we find
$$f(T)+ \int_0^T(\alpha - k f^n(t)) f(t) \, dt \leq f(0).$$
\noindent Since $$ \int_0^T(\alpha - k f^n(t)) f(t) \, dt >0,$$ then $f(T)<f(0).$ This contradicts that $f(T)=f(0).$ Therefore, $f(t) < f(0)$ for all $t > 0.$ \\
The proof of Lemma 2.4 is complete. $\Box$
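To illustrate Lemma \ref{lemma3}, take $n=1,\;\alpha=2,\;k=1$ and $f(t)=e^{-2t}$: then $\alpha-kf(0)=1>0$ and
$$f'(t)+(\alpha-kf(t))f(t)=-2e^{-2t}+(2-e^{-2t})e^{-2t}=-e^{-4t}\leq 0,$$
so \eqref{lemao1}-\eqref{lemao2} hold, and indeed $f(t)<f(0)=1$ for all $t>0.$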
\section{K-S equation posed on smooth domains}
Let $\Omega_n$ be the minimal nD-parallelepiped containing a given bounded smooth domain $\Bar{D_n}\subset{\mathbb R}^n,\;n=2,...,7$:
$$\Omega_n =\{ x\in {\mathbb R}^n; x_i\in (0,L_i)\},\; u_i=(\phi)_{x_i},\;i=1,...,n.$$ Fixing any natural $n=2,...,7,$ consider in $Q_n=D_n\times (0,t)$ the following initial-boundary value problem:
\begin{align}
(u_j)_t+\Delta^2 u_j+\Delta u_j +\frac{1}{2}\sum_{i=1}^n(u^2_i)_{x_j}=0,\; j=1,...,n,&\\
(u_i)_{x_j}=(u_j)_{x_i},\; j\ne i,\;\; i,j=1,...,n;&\\
u_j|_{\partial D_n}=\Delta u_j|_{\partial D_n}=0,\; t>0,&\\
u_j(x,0)=u_{j0}(x),\;j=1,...,n,\;\;x \in D_n.
\end{align}
\begin{lemma}
Let $f\in H^4(D_n)\cap H^1_0(D_n)$ and $ \Delta f|_{\partial D_n}=0.$ Then
$$
a\|f\|^2 \leq \|\nabla f\|^2,\;\;a^2\|f\|^2\leq \|\Delta f\|^2,\;\;a\|\nabla f\|^2\leq \|\Delta f\|^2,$$
$$ a^2\|\Delta f\|^2\leq \|\Delta^2 f\|^2,\;\; \|\Delta \nabla f\|^2\leq \|\Delta^2 f\|\|\Delta f\|\leq \frac{1}{a}\|\Delta^2 f\|^2.$$
where $ a=\sum_{i=1}^n\frac{\pi^2 }{L^2_i} ,\;\;\|f\|^2=\int_{D_n}f^2(x)dx.$
\end{lemma}
\begin{proof} We have
$$\|\nabla f\|^2=\sum_{i=1}^n\|f_{x_i}\|^2.$$
Define
\begin{equation}\tilde{f}(x,t)= \begin{cases}f(x,t) \text{ if } x \in D_n;\\
0 \text{ if } x \in \Omega_n\setminus D_n.\end{cases} \end{equation}
Making use of Steklov's inequalities for $\tilde{f}(x,t)$ and taking into account that $\|\nabla f\|=\|\nabla \tilde{f}\|,$\;we get
\begin{align*}
\|\nabla f\|^2\geq a\|f\|^2,\;
\text{where} \; a=\sum_{i=1}^n\frac{\pi^2 }{L_i^2} .
\end{align*}
On the other hand,
$$a\|f\|^2\leq \|\nabla f\|^2 =-\int_{D_n}f\Delta fdx\leq \|\Delta f\|\|f\|.$$
This implies
$$a\|f\|\leq \|\Delta f\|\;\;\text{and} \;\;a^2\|f\|^2\leq \|\Delta f\|^2.$$
Consequently, \;$a\|\nabla f\|^2\leq \|\Delta f\|^2.$ Similarly,
$$ \|\Delta f\|^2=\int_{D_n} f\Delta^2 fdx\leq \|\Delta^2 f\|\|f\|\leq \frac{1}{a}\|\Delta^2 f\|\|\Delta f\|.$$
Hence, \;\;$a\|\Delta f\|\leq \|\Delta^2 f\|.$ Moreover, $$ \|\Delta \nabla f\|^2=-\int_{D_n} \Delta^2 f \Delta fdx\leq \|\Delta^2 f\|\|\Delta f\|\leq \frac{1}{a}\|\Delta^2 f\|^2.$$
Proof of Lemma 3.1 is complete.
\end{proof}
\begin{rem} Assertions of Lemma 3.1 remain true if the function $f$ is replaced by $u_j,\;j=1,...,n$.
\end{rem}
\begin{lemma} Under the conditions of Lemma 3.1,
\begin{align}\|f\|^2_{H^2(D_n)}(t)\leq 3\|\Delta f\|^2(t), &\\
\|f\|^2_{H^4(D_n)}(t)\leq 5\|\Delta^2f\|^2(t), &\\
\sup_{D_n}|f(x)|\leq C_s\| \Delta^2 f\|,\; \text{where}\;C_s=5C_n.
\end{align}
\end{lemma}
\begin{proof} To prove (3.7), making use of Lemma 3.1, we find
$$\|f\|^2_{H^4(D_n)}=\|f\|^2+\|\nabla f\|^2+\|\Delta f\|^2+\|\nabla\Delta f\|^2+\|\Delta^2 f\|^2$$$$
\leq\Big(\frac{1}{a^4}+ \frac{1}{a^3}+\frac{1}{a^2}+\frac{1}{a}+1\Big)\|\Delta^2 f\|^2.$$
Since $a>1$, then (3.7) follows. Similarly, (3.6) can be proved. Moreover, taking into account Lemma 2.3, we get (3.8).
\end{proof}
\begin{thm} [Special basis]\label{existencethm}
Let $n$ be a natural number from the interval [2,7]; let $D_n\subset{\mathbb R}^n$ be a bounded smooth domain satisfying the Cone condition and $\Omega_n$ be a minimal $nD$-parallelepiped containing $\Bar{D_n}.$ Let
\begin{equation}
\theta= 1-\frac{1}{a}=1-\frac{1}{\sum_{i=1}^n\frac{\pi^2 }{L^2_i} }>0.
\end{equation}
Given $$u_{j0}\in H^2(D_n)\cap H^1_0(D_n),\;j=1,...,n$$ such that
\begin{align}
\theta-\frac{2 C_s^2 7^3}{a\theta}\Big(\sum_{j=1}^n\|\Delta u_j\|^2(0)\Big)>0,
\end{align}
then there exists a unique global regular solution to (3.1)- (3.4):
$$ u_j\in L^{\infty}({\mathbb R}^+; H^2(D_n))\cap L^2({\mathbb R}^+;H^4(D_n)\cap H^1_0(D_n));$$$$u_{jt}\in L^2({\mathbb R}^+;L^2(D_n)), \;j=1,...,n.$$ Moreover,
\begin{align}\sum_{j=1}^n \|\Delta u_j\|^2(t)
\leq \Big(\sum_{j=1}^n\|\Delta u_{j0}\|^2\Big)\exp\{-a^2t\theta/2\}
\end{align}
and
\begin{align}
\sum_{i=1}^n\|\Delta u_i\|^2(t)+\int_0^t\sum_{i=1}^n\|\Delta^2 u_i\|^2(\tau)d\tau \leq C\sum_{i=1}^n\|\Delta u_{i0}\|^2, \;t>0.\notag
\end{align}
\end{thm}
\begin{rem} In Theorem 3.1, there are two types of restrictions: the first one is purely geometric,
$$ 1-\frac{1}{a}>0$$
which is needed to eliminate destabilizing effects of the terms $\Delta u_j$ in (3.1):
$$ \|\Delta u_j\|^2-\|\nabla u_j\|^2.$$
It is clear that $$\lim_{L_i\to 0}a=\sum_{i=1}^n\frac{\pi^2 }{L^2_i}=+\infty,$$ hence to achieve (3.9), it is possible to decrease $L_i,\;i=1,...,n$ allowing other $L_j,\;j\ne i$ to grow. \\
The situation with condition (3.10)
is more complicated: if the initial data are not small, then one can either decrease $L_i,\;i=1,...,n,$ to fulfill this condition or, for fixed $L_i,\;i=1,...,n,$ decrease the initial data $\|u_{j0}\|.$\\
\end{rem}
\begin{proof} It is possible to construct Galerkin's approximations to (3.1)-(3.4) in the following way:
let $w_j(x)$ be eigenfunctions of the problem:
$$\Delta^2 w_j-\lambda_jw_j=0\;\text{in}\; D_n;\;\;w_j|_{\partial D_n}=\Delta w_j|_{\partial D_n}=0, j=1,2,....$$
Define
\begin{align*}
u^N_j(x,t)=\sum_{k=1}^N g_k^j(t)w_k(x).
\end{align*}
Unknown functions $g_i^j(t)$\;satisfy the following initial problems:
\begin{align*}
(\frac{d}{dt}u^N_j,w_j)(t)+(\Delta^2 u^N_j,w_j)(t)+(\Delta u^N_j,w_j)(t)&\\
+\frac{1}{2}(\sum_{i=1}^n(u^N_i)^2_{x_j},w_j)(t)=0,\\
g_k^j(0)=g_{0k}^j,\;\;j=1,...,n,\;\;k=1,2,... .
\end{align*}
The estimates that follow may be established on Galerkin's approximations (see: \cite{Guo,cousin}), but it is more transparent to prove them on smooth solutions of (3.1)-(3.4).
\noindent {\bf Estimate I:} $u \in L^\infty({\mathbb R}^+; H^2(D_n)\cap H^1_0(D_n))\cap L^2({\mathbb R}^+;H^4(D_n)\cap H^1_0(D_n))$.
For any natural $n\in [2,7],$ multiply (3.1) by $2\Delta^2 u_j$ to obtain
\begin{align}
\frac{d}{dt} \|\Delta u_j\|^2(t)+2\|\Delta^2 u_j\|^2(t)+2(\Delta u_j,\Delta^2 u_j)(t)&\notag\\
+2\mathcal Sum_{i=1}^n (u_i(u_i)_{x_j},\Delta^2 u_j)(t)=0.
\end{align}
Making use of (3.7) and Lemmas 2.3, 3.1, 3.2, we can write
\begin{align}
\frac{d}{dt} \|\Delta u_j\|^2(t)+2\theta\|\Delta^2 u_j\|^2(t)&\notag\\
\leq 2\Big[\sum_{i=1}^n \sup_{D_n}| u_i(x,t)|\|\nabla u_i\|(t)\Big]\|\Delta^2 u_j\|(t)&\notag\\
\leq
2\Big[C_s\sum_{i=1}^n\|\Delta^2 u_i\|(t)\|\nabla u_i\|(t)\Big]\|\Delta^2 u_j\|(t); j=1,...,n.
\end{align}
Summing over $j=1,...,n$ and making use of Lemma 3.1, we rewrite (3.12) in the form:
\begin{align*}
\frac {d}{dt}\sum_{j=1}^n\|\Delta u_j\|^2(t)+2\theta \sum_{j=1}^n \|\Delta^2 u_j\|^2(t)&\notag\\
\leq 2C_s n\Big(\sum_{j=1}^n\|\nabla u_j\|(t)\Big)\Big[\sum_{j=1}^n \|\Delta^2 u_j\|^2(t)\Big]&\notag\\
\leq \Big[\frac{\theta}{2}+\frac{2C_s^2 n^2}{\theta}\Big(\sum_{j=1}^n\|\nabla u_j\|(t)\Big)^2\Big]\sum_{j=1}^n\|\Delta^2 u_j\|^2(t)&\notag\\
\leq \Big[\frac{\theta}{2}+\frac{2C_s^2 n^3}{\theta}\Big(\sum_{j=1}^n\|\nabla u_j\|^2(t)\Big)\Big]\sum_{j=1}^n\|\Delta^2 u_j\|^2(t)&\notag\\
\leq \Big[\frac{\theta}{2}+\frac{2C_s^2 n^3}{a\theta}\Big(\sum_{j=1}^n\|\Delta u_j\|^2(t)\Big)\Big]\sum_{j=1}^n\|\Delta^2 u_j\|^2(t).
\end{align*}
Taking this into account, we transform (3.12) in the form
\begin{align}
\frac {d}{dt}\sum_{j=1}^n\|\Delta u_j\|^2(t)+\frac{\theta}{2} \sum_{j=1}^n \|\Delta^2 u_j\|^2(t)&\notag\\
+ \Big[\theta-\frac{2C_s^2 n^3}{a\theta}\Big(\sum_{j=1}^n\|\Delta u_j\|^2(t)\Big)\Big]\sum_{j=1}^n\|\Delta^2 u_j\|^2(t)\leq 0.
\end{align}
Condition (3.10) and Lemma 2.4 guarantee that
$$\theta-\frac{2C_s^2 n^3}{a\theta}\Big(\sum_{j=1}^n\|\Delta u_j\|^2(t)\Big)>0,\; \;t>0.$$
Hence (3.13) can be rewritten as
\begin{align}
\frac {d}{dt}\sum_{j=1}^n\|\Delta u_j\|^2(t)+\frac{a^2\theta}{2}\sum_{j=1}^n \|\Delta u_j\|^2(t)\leq 0.
\end{align}
Integrating, we get
\begin{align}\sum_{j=1}^n \|\Delta u_j\|^2(t) \leq\sum_{j=1}^n\|\Delta u_{j0}\|^2\exp\{-a^2\theta t/2\}.
\end{align}
Returning to the differential inequality containing $\frac{\theta}{2}\sum_{j=1}^n\|\Delta^2 u_j\|^2(t)$ and integrating, we find
\begin{align}
\sum_{i=1}^n\|\Delta u_i\|^2(t)+\int_0^t\sum_{i=1}^n\|\Delta^2 u_i\|^2(\tau)d\tau \leq C\sum_{i=1}^n\|\Delta u_{i0}\|^2.
\end{align}
Finally, directly from (3.1), we obtain $$(u_j)_t \in L^2({\mathbb R}^+;L^2(D_n)),\;j=1,...,n.$$
This completes the proof of the existence part of Theorem 3.1.
\begin{lemma} A regular solution of Theorem 3.1 is uniquely defined.
\end{lemma}
\begin{proof} Let $u_j$ and $v_j$,\;$j=1,...,n$ be two distinct solutions to (3.1)-(3.4). Denoting $w_j=u_j-v_j$, we come to the following system:
\begin{align}
(w_j)_t+\Delta^2 w_j+\Delta w_j +\frac{1}{2}\sum_{i=1}^n\Big(u^2_i-v^2_i\Big)_{x_j} =0,&\\
(w_j)_{x_i}=(w_i)_{x_j},\;i\ne j,&\\
w_j|_{\partial D_n}=\Delta w_j|_{\partial D_n}
=0,\; t>0;&\\
w_j(x,0)=0 \;\text{in} \;D,\;\;j=1,...,n.
\end{align}
Multiply (3.18) by $2w_j$ to obtain
\begin{align} \frac{d}{dt}\|w_j\|^2(t)+2\|\Delta w_j\|^2(t)-2\|\nabla w_j\|^2(t)&\notag\\
-\sum_{i=1}^n\Big(\{u_i+v_i\}w_i,(w_j)_{x_j}\Big)(t)=0,\;j=1,...,n.
\end{align}
Making use of Lemmas 2.3 and 3.1, 3.2, we estimate
$$
I=\sum_{i=1}^n(\{u_i+v_i\}w_i,(w_j)_{x_j})\leq\frac{\epsilon}{2}\|\nabla w_j\|^2+\frac{1}{2\epsilon}\Big(\sum_{i=1}^n\|\{u_i+v_i\}w_i\|\Big)^2$$
$$\leq \frac{\epsilon}{2a}\|\Delta w_j\|^2 +\frac{2}{\epsilon}\sum_{i=1}^n \sup_{D_n}\{u^2_i(x,t)+v^2_i(x,t)\}\|w_i\|^2(t)$$
$$\leq \frac{\epsilon}{2a}\|\Delta w_j\|^2 +\frac{4C_s^2}{\epsilon}\Big[\sum_{i=1}^n\{\|\Delta^2 u_i\|^2+\|\Delta^2 v_i\|^2\}\Big]\sum_{j=1}^n\|w_j\|^2.$$ Here $\epsilon$ is an arbitrary positive number. Substituting $I$ into (3.22), we get
\begin{align}
\frac{d}{dt}\|w_j\|^2(t)+(2-\frac{2}{a}-\frac{\epsilon}{2a})\|\Delta w_j\|^2(t)&\notag\\
\leq \frac{4C_s^2}{\epsilon}\Big[\sum_{i=1}^n\{\|\Delta^2 u_i\|^2+\|\Delta^2 v_i\|^2\}\Big]\sum_{j=1}^n\|w_j\|^2.
\end{align}
Taking $\epsilon=\frac{\theta}{2}$ and summing up over $j=1,...,n,$ we transform (3.23) as follows:
\begin{align*}\frac{d}{dt}\sum_{j=1}^n\|w_j\|^2(t)\leq C \Big[\sum_{i=1}^n\{\|\Delta^2 u_i\|^2&\notag\\+\|\Delta^2 v_i\|^2+ \|u_i\|^2(t)+\|v_i\|^2(t)\}\Big]\sum_{j=1}^n\|w_j\|^2,\;i=1,..., n.
\end{align*}
Since by (3.17) and Lemma 3.1, $$\|\Delta^2 u_i\|^2(t), \|\Delta^2 v_i \|^2(t)\in L^1({\mathbb R}^+)$$ and
$$\|u_i\|^2(t),\|v_i\|^2(t)\in L^1({\mathbb R}^+),\;i=1,...,n,$$ then by Lemma 2.2,
$$ \|w_j\|^2(t)\equiv 0\;\;j=1,...,n,\; \text{for all}\;\; t>0.$$
Hence
\begin{align*}
u_j(x,t)\equiv v_j(x,t);j=1,...,n.
\end{align*}
The proofs of Lemma 3.3 and, consequently, of Theorem 3.1 are complete.
\end{proof}
\end{proof}
\section{Conclusions}
This work concerns the formulation and solvability of initial-boundary value problems for the $n$-dimensional ($n$ a natural number from the interval [2,7]) Kuramoto-Sivashinsky system (3.1)-(3.2) posed on smooth bounded domains. Theorem 3.1 contains results on existence and uniqueness of global regular solutions as well as exponential decay of the $H^2(D_n)$-norm, where $D_n$ is a smooth bounded domain in ${\mathbb R}^n$. We define a set of admissible domains, where destabilizing effects of the terms $\Delta u_j$ are damped by the dissipativity of $\Delta^2 u_j$ due to condition (3.9). This set contains ``thin domains'', see \cite{Iftimie,molinet,sell}, where some dimensions of $D_n$ are small while others may be large. Since the initial-boundary value problems studied in this work do not admit the first a priori estimate independent of $t,u_j,$ in order to prove the existence of global regular solutions, we impose conditions (3.10) connecting geometrical properties of $D_n$ with the initial data $u_{j0}$.
Moreover, Theorem 3.1 provides a ``smoothing effect'': initial data $u_{j0}\in H^2(D_n)\cap H^1_0(D_n)$ imply that
$ u_j\in L^2({\mathbb R}^+;H^4(D_n)).$\\
\section*{Conflict of Interests}
The author declares that there is no conflict of interest regarding the publication of this paper.
\end{document}
|
\begin{document}
\maketitle
\begin{abstract}
Lorenz knots are an important and well studied family of knots. Birman--Kofman showed that Lorenz knots are in correspondence with $T$-links, which are positive braid closures. They determined the braid index of any $T$-link, based on their defining parameters. In this work, we leverage Birman--Kofman's correspondence to show that 1-bridge braids (and, more generally, $n$-bridge braids) are Lorenz knots. Then, we determine the braid index of 1-bridge braids and $n$-bridge braids; our proof is independent of Birman--Kofman's work, and is explicit, elementary, and effective in nature.
\end{abstract}
\section{Introduction}
\subsection{Motivation and Summary}
Knots and links play an important role in low-dimensional topology. One simple way to measure the complexity of a knot $K$ is the \textit{braid index}.
\begin{defn}
Let $K$ be a knot in $S^3$. The braid index, $i(K)$, is the minimum number of strands on which $K$ can be represented as the closure of a braid.
\end{defn}
As every knot is realized as the closure of some braid \cite{Alexander}, the braid index is a well defined knot invariant. In general, the braid index of knots is difficult to compute. The simplest infinite family for which the braid index is known is the family of $T(p,q)$ torus knots: it is well known that $i(T(p,q))=\min\{p,q\}$.
One generalization of torus knots is the family of \textit{1-bridge braids}, which are well studied from the Dehn surgery perspective. Dehn surgery is a powerful operation within 3--manifold topology: every 3--manifold is obtained by Dehn surgery along some link in $S^3$ \cite{Lickorish:DehnSurgery, Wallace:DehnSurgery}. Despite the ubiquity of this technique, some of the most basic questions about Dehn surgery remain open. For example, the infamous \textit{Berge Conjecture} predicts exactly which knots in $S^3$ admit a Dehn surgery to \textit{lens spaces}, the rational homology 3--spheres admitting genus--1 Heegaard splittings \cite{Berge}. Moser \cite{Moser:TorusKnots} showed that torus knots always admit Dehn surgeries to lens spaces. In fact, there is an infinite sub-family of 1-bridge braids that also admit lens space surgeries; these are the \textit{Berge-Gabai knots} \cite{Berge:SolidTorus, Gabai:SolidTori, Gabai:1BridgeBraids}. While only some 1-bridge braids admit lens space surgeries, all 1-bridge braids admit \textit{L-space surgeries} \cite{GLV:11LSpace}. In particular, they are some of the few explicit examples of hyperbolic L-space knots, making them a valuable probe into the \textit{L-space conjecture} \cite{Juhasz:Survey, BoyerGordonWatson}.
\begin{prop} \label{lemma:1BBsAreLorenzKnots}
1-bridge braids are Lorenz knots.
\end{prop}
Lorenz knots exhibit interesting dynamical and geometric properties \cite{BirmanWilliams, BirmanKofman, ChampanerkarFuterKofmanNeumannPurcell, Dehornoy:Lorenz, dePaivaPurcell, Birman:Lorenz}. For example, Birman--Kofman show that over half of the ``simplest" hyperbolic knots are Lorenz knots \cite{BirmanKofman}.
We also generalize 1-bridge braids to $n$-bridge braids, an infinite family of positive braid closures defined using four parameters (in contrast to the two parameters used to define torus knots). We show that $n$-bridge braids are also Lorenz knots.
\begin{defn}
An \textbf{$n$-bridge braid}, denoted $\mathcal{K}(w,b,t,n)$, is the knot realized as the closure of the positive braid
$(\sigma_b \sigma_{b-1} \ldots \sigma_{1})^n(\sigma_{w-1} \ldots \sigma_2 \sigma_1)^t$.
\end{defn}
\begin{figure}\label{fig:Example}
\end{figure}
Our definition of an $n$-bridge braid also generalizes the notion of \textit{twisted torus knots}, which appear in investigations of the L-space conjecture \cite{Vafaee:TwistedTorusKnots, LeeVafaee, Nie:11LspaceKnots}. However, they have been studied in their own right as well \cite{DePaiva}. We note: Birman-Kofman \cite{BirmanKofman} remark that twisted torus knots are Lorenz knots -- however, there are two distinct definitions in the literature of twisted torus knots and the braids which close to them. Our work unifies these two perspectives.
In this paper, we explicitly compute the braid index of an $n$-bridge braid, in terms of their defining parameters:
\begin{thm} \label{thm:NBridge}
The braid index of an $n$-bridge braid is determined by the defining parameters; namely,
\begin{center}
$ i(\mathcal{K}(w,b,t,n)) = \begin{cases}
w & t \geq w, \ n\geq 1 \\
t & w > t > b, \ n \geq 1 \\
t+1 & w > b \geq t, n=1\\
b+1 & w > b\geq t, n+t \geq b+1, n>1\\
n+t & w > b \geq t, n+t < b+1, n>1
\end{cases} $
\end{center}
\end{thm}
As an immediate consequence, we determine the braid index of a 1-bridge braid:
\begin{cor} \label{thm:BraidIndex}
The braid index of a 1--bridge braid $K(w,b,t)$ is:
\begin{center}
$ i(\mathcal{K}(w,b,t)) = \begin{cases}
w & t \geq w \\
t & w > t > b \\
t+1 & b \geq t
\end{cases} $
\end{center}
\end{cor}
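For example, the 1-bridge braid $\mathcal{K}(5,3,2)$ satisfies $w=5>b=3\geq t=2$, so $i(\mathcal{K}(5,3,2))=t+1=3$, while $\mathcal{K}(5,3,4)$ satisfies $w=5>t=4>b=3$, so $i(\mathcal{K}(5,3,4))=t=4$.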
The main proof strategy for \Cref{lemma:1BBsAreLorenzKnots}, \Cref{thm:NBridge}, and \Cref{thm:BraidIndex} is elementary: we use the well-known \textit{Markov moves} to manipulate the presentation of the braid, and then apply the Morton--Franks--Williams Theorem, which states that if a positive braid $\beta$ on $k$ strands contains a \textit{full twist}, then in fact, $i(\beta) = k$ \cite{Morton:KnotPolys, FranksWilliams}. Our proof is completely effective, in the sense that it produces a positive braid with a full twist which is Markov equivalent to an $n$-bridge braid. The algorithm we produce could be extended to all $T$-links. However, the computations are significantly more tedious, so we do not include them here. An interesting future direction would be to write a computer implementation of this algorithm for all $T$-links.
\subsection{Outline of the paper} In \Cref{section:background} we outline the definitions and foundational results that we will use throughout the paper. In \Cref{section:LorenzKnots}, we prove that 1-bridge braids are Lorenz knots. In \Cref{section:preliminaries}, we establish a series of lemmas and propositions to be used in the proof of \Cref{thm:NBridge}. The proof of \Cref{thm:NBridge} is contained in \Cref{section:n-bridge}.
\subsection{Conventions} Throughout the paper, we will indicate how the braid word changes by underlining the letters of the braid word as they are changed by braid relations, conjugations, or de-stabilizations. When we draw our braids vertically, we read them top-to-bottom. When we draw our braids horizontally, we read them from left-to-right. For us, $\sigma_i$ corresponds to strand $(i+1)$ crossing over strand $(i)$.
\section{Background} \label{section:background}
We begin with some preliminaries.
\begin{defn}
A \textbf{T-link} is a knot $K$ which is realized as the closure of a positive braid $\tau$, where
\begin{align} \label{TLink}
\tau = (\sigma_1 \sigma_2 \ldots \sigma_{p_1-1})^{q_1} (\sigma_1 \sigma_2 \ldots \sigma_{p_2-1})^{q_2} \ldots (\sigma_1 \sigma_2 \ldots \sigma_{p_s-1})^{q_s}
\end{align}
Here, $2 \leq p_1 \leq p_2 \leq \ldots \leq p_s$, $0 < q_i$ for all $i$, and $\tau$ is a braid in $B_{p_s}$.
\end{defn}
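For example, taking $s=1$ recovers the torus links: the closure of $(\sigma_1 \sigma_2 \ldots \sigma_{p-1})^{q}$ is the torus link $T(p,q)$.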
\begin{defn}[c.f. \cite{Vafaee:TwistedTorusKnots}]
A \textbf{twisted torus knot} is realized as the closure of a positive braid $\tau$ on $n$ strands, where
\begin{align}
\tau = (\sigma_{n-1} \sigma_{n-2} \ldots \sigma_2 \sigma_1)^p (\sigma_{n-1} \sigma_{n-2} \ldots \sigma_{n-k})^{qk}
\end{align}
That is, adding $q$ many full twists into $k$ adjacent strands of a torus knot yields a twisted torus knot.
\end{defn}
\begin{defn}
An \textbf{$n$-bridge braid}, denoted $\mathcal{K}(w,b,t,n)$, is the knot realized as the closure of the positive braid
$$K(w,b,t,n) = (\sigma_b \sigma_{b-1} \ldots \sigma_{1})^n(\sigma_{w-1} \ldots \sigma_2 \sigma_1)^t$$
Here, $w$ is the number of strands on which the knot is presented, $b$ is the bridge length, $t$ is the number of twists, and $n$ is the number of bridges (each of length $b$).
\end{defn}
Note that we often conflate the braid word, $K(w,b,t,n)$ and the braid closure, $\mathcal{K}(w,b,t,n)$.
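For example, $K(5,3,2,1)=(\sigma_3\sigma_2\sigma_1)(\sigma_4\sigma_3\sigma_2\sigma_1)^2$ is a 1-bridge braid on $w=5$ strands with bridge length $b=3$ and $t=2$ twists, while $K(5,3,2,3)=(\sigma_3\sigma_2\sigma_1)^3(\sigma_4\sigma_3\sigma_2\sigma_1)^2$ has the same parameters but $n=3$ bridges.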
\begin{defn} The following are the braid relations on the braid group $B_n$:
\begin{enumerate}
\item $\sigma_i \sigma_j = \sigma_j \sigma_i$ if $|i - j| > 1$
\item $\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}$ where $1 \leq i \leq n-2$.
\end{enumerate}
\end{defn}
To compute the braid index of a knot $K$ we need a method for decreasing the number of strands in the braided presentation of $K$. This method is called \emph{destabilization}.
\begin{defn}
Let $\omega$ be a braid word on $n$ strands. A \textbf{stabilization} replaces $\omega$ with $\omega \sigma_n$ or $\omega \sigma_n ^{-1}$, a braid word on $n+1$ strands. The reverse operation (replacing $\omega \sigma_n$ or $\omega \sigma_n^{-1}$ by $\omega$, where $\omega$ has no $\sigma_n^{\pm 1}$ letters) is called \textbf{destabilization}.
\end{defn}
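For example, the closure of $\sigma_1 \in B_2$ is the unknot; destabilizing removes the letter $\sigma_1$ and leaves the trivial braid on one strand, whose closure is again the unknot.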
The following result guarantees that the process of applying braid relations, destabilizing, and conjugating a braid word will not affect the closure of the braid.
\begin{thm} [Markov \cite{Markov}] \label{thm:Markov}
Let $\beta_1$ and $\beta_2$ be two braid words. Then, their braid closures are isotopic if and only if $\beta_1$ and $\beta_2$ are related by any combination of: (1) braid relations, (2) conjugations, and (3) (de)stabilizations.
\end{thm}
Finally, to determine the braid index we will use this result independently obtained by Morton and Franks--Williams.
\begin{thm} [Morton \cite{Morton:PolysFromBraids, Morton:KnotPolys}; Franks--Williams \cite{FranksWilliams}] \label{thm:FMW}
Suppose $\beta \in B_n$ is a positive braid, and $\beta = \omega (\sigma_{n-1} \ldots \sigma_1)^n$, where $\omega$ is a positive braid word. Then the braid index of $\beta$ is $n$, i.e. $i(\beta) = n$.
\end{thm}
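For example, $\sigma_1^3 \in B_2$ can be written as $\sigma_1 \cdot (\sigma_1)^2$, a positive word times the full twist on two strands, so its closure, the trefoil, has braid index $2$.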
\section{$n$-bridge braids are Lorenz knots} \label{section:LorenzKnots}
Birman--Kofman \cite{BirmanKofman} showed that Lorenz knots are in correspondence with $T$-links. They also remarked that twisted torus knots are $T$-links. Since there are many different braid-word theoretic ways to define twisted torus links, we explicitly use Markov moves to put twisted torus knots and $n$-bridge braids into T-link form.
\begin{lemma} \label{lemma:LeftMove}
Let $\alpha_1 = (\sigma_a \sigma_{a+1}\ldots \sigma_{a+c})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t$ and let $\alpha_2 = (\sigma_{a-1} \sigma_{a}\ldots \sigma_{a+c-1})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t$, where $\alpha_1$ and $\alpha_2$ are both elements of $B_w$. Then $\alpha_1$ and $\alpha_2$ are conjugate braids. In particular, $\widehat{\alpha_1}$ and $\widehat{\alpha_2}$ are isotopic knots in $S^3$.
\end{lemma}
\begin{proof} We do some explicit braid moves to verify the claim. For clarity, we underline the portions of the braid that are being transformed from one line to the next. We set $\gamma_w :=(\sigma_1 \sigma_2 \ldots \sigma_{w-1})$, a braid word in $B_w$. We begin by pushing some terms to the right:
\begin{align*}
\alpha_1 &= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-1} \sigma_{a+c})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-1} \sigma_{a+c})\ \gamma_w^t \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-1}\ \sigma_{a+c})((\sigma_1 \sigma_2 \ldots \sigma_{a+c-2}\sigma_{a+c-1}\sigma_{a+c})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1}))\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-1}\ \underline{\sigma_{a+c}})(\sigma_1 \sigma_2 \ldots \sigma_{a+c-2}\sigma_{a+c-1}\sigma_{a+c})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-1})(\sigma_1 \sigma_2 \ldots \sigma_{a+c-2}\ \underline{\sigma_{a+c}\sigma_{a+c-1}\sigma_{a+c}})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \underline{\sigma_{a+c-1}})(\sigma_1 \sigma_2 \ldots \sigma_{a+c-2}\ \sigma_{a+c-1}\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-2})(\sigma_1 \sigma_2 \ldots \underline{\sigma_{a+c-1}\sigma_{a+c-2}\sigma_{a+c-1}}) (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-2})(\sigma_1 \sigma_2 \ldots \sigma_{a+c-2}\sigma_{a+c-1}\sigma_{a+c-2}) (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \underline{\sigma_{a+c-2}})(\sigma_1 \sigma_2 \ldots \sigma_{a+c-2})(\sigma_{a+c-1}\sigma_{a+c-2}) (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-3})(\sigma_1 \ldots \underline{\sigma_{a+c-2} \sigma_{a+c-3} \sigma_{a+c-2}})(\sigma_{a+c-1}\sigma_{a+c-2}) (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_{a}\sigma_{a+1}\ldots \sigma_{a+c-3})(\sigma_1 \ldots \sigma_{a+c-3})(\sigma_{a+c-2} \sigma_{a+c-3})(\sigma_{a+c-1}\sigma_{a+c-2}) (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
\intertext{We repeat this process -- of peeling the last term of the left-most parenthetical as far into the braid as possible, and then applying a braid relation -- until we reach $\sigma_a$. Notice that at the end of each iteration of this process, we produce a pair of adjacent terms of the form $(\sigma_{a+c-k} \sigma_{a+c-(k+1)})$. At the penultimate stage, we have:}
&= (\underline{\sigma_{a}}) (\sigma_1 \sigma_2 \ldots \sigma_{a-2} \sigma_{a-1}) (\sigma_a \sigma_{a+1}\sigma_{a}) (\sigma_{a+2}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}) (\underline{\sigma_a \sigma_{a-1}\sigma_a})( \sigma_{a+1}\sigma_{a}) (\sigma_{a+2}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}) (\sigma_{a-1} \sigma_{a}\sigma_{a-1})( \sigma_{a+1}\sigma_{a}) (\sigma_{a+2}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
\intertext{Reassigning some parenthesis, we obtain:}
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}\sigma_{a-1} \sigma_{a})(\sigma_{a-1})( \sigma_{a+1}\sigma_{a}) (\sigma_{a+2}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
\intertext{We observe that in each parenthetical of the form $(\sigma_{a+c-k} \sigma_{a+c-(k+1)})$, the left term has a larger index than the right term. Moreover, as we read the parentheticals from left to right, the index of the first term uniformly increases until we hit $(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1}$. Therefore, we can rewrite our braid by collecting terms towards the front of the braid. In the following set of moves, we push the underlined terms to the left:}
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}\sigma_{a-1} \sigma_{a})(\sigma_{a-1})(\underline{\sigma_{a+1}}\sigma_{a}) (\sigma_{a+2}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}\sigma_{a-1} \sigma_{a} \sigma_{a+1})(\sigma_{a-1}\sigma_{a}) (\underline{\sigma_{a+2}}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
&= (\sigma_1 \sigma_2 \ldots \sigma_{a-2}\sigma_{a-1} \sigma_{a} \sigma_{a+1}\sigma_{a+2})(\sigma_{a-1}\sigma_{a}\sigma_{a+1}) \ldots (\sigma_{a+c}\sigma_{a+c-1})(\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1})\ \gamma_w^{t-1} \\
\intertext{Repeating this leftwards operation eventually yields:}
&= (\sigma_1 \sigma_2 \ldots \sigma_{a+1}\sigma_{a+2}\ldots \sigma_{a+c})(\sigma_{a-1}\sigma_{a}\sigma_{a+1} \ldots \sigma_{a+c-2}\sigma_{a+c-1})(\underline{\sigma_{a+c+1}\ldots \sigma_{w-2}\sigma_{w-1}})\ \gamma_w^{t-1} \\
&= (\sigma_1 \ldots \sigma_{w-1})(\sigma_{a-1}\sigma_{a}\sigma_{a+1} \ldots \sigma_{a+c-2}\sigma_{a+c-1})\ \gamma_w^{t-1} \\
&\approx (\sigma_{a-1}\sigma_{a}\sigma_{a+1} \ldots \sigma_{a+c-2}\sigma_{a+c-1})\ \gamma_w^{t} \\
&= \alpha_2
\end{align*}
In the last step, we conjugated by $(\sigma_1 \ldots \sigma_{w-1})$. We conclude that $\alpha_1$ and $\alpha_2$ are conjugate.
\end{proof}
\noindent \textbf{\Cref{lemma:1BBsAreLorenzKnots}} \textit{1-bridge braids are Lorenz knots.}
\begin{proof}
To prove that 1-bridge braids are Lorenz knots, it suffices to show that some sequence of Markov moves transforms $\beta$ to a braid $\tau$, as in \Cref{TLink}.
Let $\beta = (\sigma_b \sigma_{b-1}\ldots \sigma_2 \sigma_1) (\sigma_{w-1}\sigma_{w-2}\ldots \sigma_2 \sigma_1)^t$ denote the standard braid presentation of a 1-bridge braid. Let $\beta' = (\sigma_{w-b}\sigma_{w-b+1}\ldots \sigma_{w-1})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t$. We claim that $\widehat{\beta}$ and $\widehat{\beta'}$ are isotopic knots in $S^3$: as seen in \Cref{fig:Reflect}, reflecting $\beta$ across the dotted line yields $\beta'$. Alternatively, one can use some standard results in braid theory: if we conjugate $\beta$ by the Garside element $\Delta \in B_w$, we produce $\beta'$; since conjugation preserves the link type of the closure, $\widehat{\beta}$ and $\widehat{\beta'}$ present the same knot.
\begin{figure}\label{fig:Reflect}
\end{figure}
Next, we perform $(w-b-1)$ many applications of \Cref{lemma:LeftMove}:
\begin{align*}
\beta &= (\sigma_b \sigma_{b-1}\ldots \sigma_2 \sigma_1) (\sigma_{w-1}\sigma_{w-2}\ldots \sigma_2 \sigma_1)^t \\
& \approx (\sigma_{w-b}\sigma_{w-b+1}\ldots \sigma_{w-1})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t \\
& \approx (\sigma_{w-b-1}\sigma_{w-b}\ldots \sigma_{w-2})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t \\
& \approx (\sigma_{1}\sigma_{2}\ldots \sigma_{b})(\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t
\end{align*}
Thus, the 1-bridge braid $\beta$ admits a T-link presentation.
\end{proof}
\begin{lemma} \label{lemma:ExamplesOfLorenzKnots}
Twisted torus knots and $n$-bridge braids are Lorenz knots.
\end{lemma}
\begin{proof}
Twisted torus knots are the closures of positive braids on $w$ strands with the following form: $$\rho = (\sigma_{w-1} \sigma_{w-2} \ldots \sigma_1)^t (\sigma_{w-1} \sigma_{w-2} \ldots \sigma_{w-k})^{sk}$$ Reflecting $\rho$ as in \Cref{fig:Reflect} yields $\rho' = (\sigma_1 \sigma_2 \ldots \sigma_{w-1})^t (\sigma_1 \sigma_2 \ldots \sigma_k)^{sk}$. We know that $\widehat{\rho}$ and $\widehat{\rho'}$ present isotopic knots; since $\rho'$ is presented as a T-link braid, we deduce that twisted torus knots are T-links.
Indeed, the braided presentation of an $n$-bridge braid appears very similar to that of a twisted torus knot (however, it is not required that $b$ divide $n$). We quickly show that these, too, are T-links:
\begin{align*}
\eta &= (\sigma_{b} \sigma_{b-1} \ldots \sigma_{1})^{n}(\sigma_{w-1} \sigma_{w-2} \ldots \sigma_1)^t \\
&\approx (\sigma_{w-b} \sigma_{w-b+1} \ldots \sigma_{w-1})^{n}(\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^t \\
&=(\sigma_{w-b} \sigma_{w-b+1} \ldots \sigma_{w-1})^{n-1}(\sigma_{w-b} \sigma_{w-b+1} \ldots \sigma_{w-1})(\underline{\sigma_{1} \sigma_{2} \ldots \sigma_{w-1}})(\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^{t-1} \\
\intertext{In the proof of \Cref{lemma:LeftMove}, we only performed braid relationships -- the only place we conjugated our braid is in the last step. Thus, applying the proof of \Cref{lemma:LeftMove}, we see:}
&=(\sigma_{w-b} \sigma_{w-b+1} \ldots \sigma_{w-1})^{n-1} (\underline{\sigma_{1} \sigma_{2} \ldots \sigma_{w-1}})(\sigma_{w-b-1} \sigma_{w-b} \ldots \sigma_{w-2}) (\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^{t-1} \\
&=(\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})(\sigma_{w-b-1} \sigma_{w-b} \ldots \sigma_{w-2})^n (\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^{t-1} \\
&\approx (\sigma_{w-b-1} \sigma_{w-b} \ldots \sigma_{w-2})^n (\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^{t} \\
\intertext{We repeat this process an additional $w-b-2$ times, yielding:}
&\approx (\sigma_{1} \sigma_{2} \ldots \sigma_{b})^n (\sigma_{1} \sigma_{2} \ldots \sigma_{w-1})^{t}
\end{align*}
Thus, $n$-bridge braids are more general than twisted torus knots, and they are T-links.
\end{proof}
\section{Preliminaries for the proof of the main theorem}
\label{section:preliminaries}
\begin{rmk*}
After posting this article to the arXiv, the author of \cite{Nie:1BBsSatellites} informed us that the results in this section are sketched in the body of the proof of Theorem 1.1 in \cite{Nie:1BBsSatellites}. This work contains the full proofs.
\end{rmk*}
\begin{defn} \label{def2}
We define $\delta_n$ and $\gamma_n$ to be the positive braid words $\delta_n := (\sigma_n \ldots \sigma_2 \sigma_1)$ and $\gamma_n := (\sigma_1 \sigma_2 \ldots \sigma_n)$ in $B_{r}$, the braid group on $r$ strands, where $r \geq n+1$.
\end{defn}
\begin{rmk}
Note that $K(w,b,t) \approx \widehat{\delta_b \delta_{w-1}^t}$.
\end{rmk}
\begin{lemma} \label{lem3}
Let $\delta_k \in B_{k+1}$. Then $\delta_k \ \delta_k = \delta_{k-1}\ \delta_k \ \sigma_1$ as braid words in $B_{k+1}$.
\end{lemma}
\begin{proof}
We begin by expanding the left hand side.
\begin{align}
\delta_k \ \delta_k &= (\sigma_k \ \sigma_{k-1} \ \sigma_{k-2} \ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\underline{\sigma_k} \dots \sigma_1) \\
&= (\underline{\sigma_k \ \sigma_{k-1} \ \sigma_k} \ \sigma_{k-2} \ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-1} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3}\ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\underline{\sigma_{k-1}} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k} \ \underline{\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-1}} \ \sigma_{k-3}\ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-2} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k} \ \underline{\sigma_{k-2}} \ \sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3}\ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-2} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3}\ \dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\underline{\sigma_{k-2}} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k} \ \sigma_{k-1} \ \underline{\sigma_{k-2} \ \sigma_{k-3}\ \sigma_{k-2}}\dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-3} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k} \ \sigma_{k-1} \ \underline{\sigma_{k-3}} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-3} \dots \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3} \ \sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \sigma_2 \ \sigma_1)(\sigma_{k-3} \dots \sigma_1)
\end{align}
We describe the operations at play: in (1), we identify the $\sigma_k$ letter that is furthest to the right, and apply $k-2$ commuting relations to push it as much to the left as possible. This creates the underlined subword $\sigma_{k}\sigma_{k-1} \sigma_{k}$ in (2); applying the braid relation yields (3). We repeat this procedure (of finding the largest letter in the right parenthetical subword, applying commuting relations to push it as far to the left as possible, and then applying a braid relation) in lines (4), (5), and (6). From (6) to (7), we identify and execute another commuting relation. We call this 4-step procedure a \emph{left push}, and say that we perform a \emph{left push on $\sigma_{t}$} when $\sigma_t$ is the largest letter in the second parenthetical braid word. Notice that after executing the \emph{left push} operation on $\sigma_{r+1}$, the braid word decomposes into the three subwords $((\sigma_{k-1} \ldots \sigma_{r})(\sigma_{k} \ldots \sigma_{1}))(\sigma_r \ldots \sigma_1)$; this is seen explicitly in lines (7) and (11).
After repeating the \emph{left push} operation another $k-5$ times from (11) onwards, we get:
\begin{align}
&= ((\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3} \ldots \sigma_{3} \ \sigma_{2}) \ (\sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \sigma_2 \ \sigma_1))(\sigma_{2} \ \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3} \ldots \sigma_{3} \ \sigma_{2} \ \ \sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \underline{\sigma_2 \ \sigma_1 \ \sigma_{2}} \ \sigma_1) \\
&= (\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3} \ldots \sigma_{3} \ \sigma_{2} \ \ \sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \underline{\sigma_1} \ \sigma_2 \ \sigma_{1} \ \sigma_1) \\
&= ((\sigma_{k-1} \ \sigma_{k-2} \ \sigma_{k-3} \ldots \sigma_{3} \ \sigma_{2} \ \sigma_1) \ (\sigma_{k} \ \sigma_{k-1} \ \sigma_{k-2}\ \sigma_{k-3}\dots \ \sigma_3 \ \sigma_2 \ \sigma_{1})) \ \sigma_1 \\
&= \delta_{k-1} \ \delta_{k} \ \sigma_1
\end{align}
This is exactly what we wanted to show.
\end{proof}
\begin{lemma} \label{lem4}
Let $\delta_k \in B_{k+1}$. Then, $\sigma_1 \ \delta_k = \delta_k \ \sigma_2.$
\end{lemma}
\begin{proof}
We begin by expanding the left hand side.
\begin{align*}
\sigma_1 \ \delta_k &= \sigma_1 \ \sigma_k \ \sigma_{k-1} \ \sigma_{k-2} \dots \sigma_3 \ \sigma_2 \ \sigma_1 \\
&= \underline{\sigma_1} \ \sigma_k \ \sigma_{k-1} \ \sigma_{k-2} \ \dots \sigma_3 \ \sigma_2 \ \sigma_1 \\
&= \sigma_k \ \sigma_{k-1} \ \sigma_{k-2} \dots \sigma_3 \ \underline{\sigma_1 \ \sigma_2 \ \sigma_1} \\
&= (\sigma_k \ \sigma_{k-1} \ \sigma_{k-2} \dots \sigma_3 \ \sigma_2 \ \sigma_1) \ \sigma_2 \\
&= \delta_k \sigma_2
\end{align*}
This is what we wanted to prove.
\end{proof}
\begin{lemma} \label{lem5}
Let $\delta_k \in B_{k+1}$. Then $\sigma_j \ \delta_k = \delta_k \ \sigma_{j+1}$ when $1 < j < k$.
\end{lemma}
\begin{proof}
Suppose $1<j<k$. We begin by expanding $\sigma_j \delta_k$:
\begin{align*}
\sigma_j \delta_k
&= \underline{\sigma_j}(\sigma_k \sigma_{k-1} \dots \sigma_{j+2} \ \sigma_{j+1} \sigma_j {\sigma_{j-1}} \ \dots \sigma_1) \\
&= \sigma_k \sigma_{k-1} \dots \sigma_{j+2} \ \underline{\sigma_j \sigma_{j+1} \sigma_j} {\sigma_{j-1}} \ \dots \sigma_1 \\
&= \sigma_k \sigma_{k-1} \dots \sigma_{j+2} \ \sigma_{j+1} \sigma_{j} \underline{\sigma_{j+1}} {\sigma_{j-1}} \ \dots \sigma_1 \\
&= (\sigma_k \sigma_{k-1} \dots \sigma_{j+2} \ \sigma_{j+1} \sigma_{j} {\sigma_{j-1}} \ \dots \sigma_1) \sigma_{j+1}
\end{align*}
which is $\delta_k \sigma_{j+1}$, as desired.
\end{proof}
\begin{lemma} \label{lem6}
Let $\delta_k \in B_{k+1}$. Then
$\sigma_1 \ \delta_k^s = \delta_k^s \ \sigma_{s+1}$,
where $s < k$.
\end{lemma}
\begin{proof}
We see that
\begin{align*}
\sigma_1 {\delta_k}^s &= \sigma_1 \delta_k {\delta_k}^{s-1}\\
&= \delta_k \sigma_2 \delta_k {\delta_k}^{s-2} \text{ by \Cref{lem4}, } \\
&= {\delta_k}^2 \sigma_3 \delta_k {\delta_k}^{s-3} \text{ by \Cref{lem5}.}
\end{align*}
Applying \Cref{lem5} a total of $s-1$ times, we get:
\begin{align*}
\delta_k^s \ \sigma_{s+1}.
\end{align*}
Thus, $\sigma_1 \ \delta_k^s = \delta_k^s \ \sigma_{s+1}$.
\end{proof}
Note that \Cref{lem5} generalizes \Cref{lem4}, and \Cref{lem6} combines Lemmas~\ref{lem4} and \ref{lem5}.
\begin{prop} \label{prop8}
Let $\delta_j \in B_{j+1}$. Then,
$\delta_j^t = \delta_{j-1} \ \delta_j^{t-1} \ \sigma_{t-1}$
where $t<j$.
\begin{proof}
We see that
\begin{align*}
\delta_j^t &= (\delta_j \delta_j) \delta_j^{t-2}\\
&= (\delta_{j-1}\delta_j\sigma_1)\delta_j^{t-2} \text{ by \Cref{lem3} }\\
&=\delta_{j-1}\delta_j(\sigma_1 \delta_j^{t-2})\\
&= \delta_{j-1}\delta_j(\delta_j^{t-2}\sigma_{(t-2)+1}) \text{ by \Cref{lem6}}\\
&= \delta_{j-1}\delta_j\delta_j^{t-2}\sigma_{t-1}\\
&= \delta_{j-1}\delta_j^{t-1}\sigma_{t-1}.
\end{align*}
Thus, $\delta_j^t = \delta_{j-1} \ \delta_j^{t-1} \ \sigma_{t-1}$.
\end{proof}
\end{prop}
\begin{prop} \label{prop9}
Let $\delta_k, \gamma_{t-1} \in B_{k+1}$. Then,
$\delta_k^t = \delta_{k-1}^{t-1} \ \delta_k \ \gamma_{t-1}$
where $t<k$.
\end{prop}
\begin{proof}
Note that
\begin{align*}
\delta_k^t &= \delta_{k-1} ({\delta_k}^{t-1}) \sigma_{t-1} \text{ by \Cref{prop8}} \\
&= \delta_{k-1} (\delta_{k-1} {\delta_k}^{t-2} \sigma_{t-2}) \sigma_{t-1} \text{ by \Cref{prop8}} \\
&= \delta_{k-1}^2 \ ({\delta_k}^{t-2}) \ \sigma_{t-2} \sigma_{t-1}.
\intertext{Iteratively apply \Cref{prop8} an additional $t-3$ times to the rightmost $\delta_{k}^{t - \star}$ to obtain}
{\delta_k}^t &= \delta_{k-1}^{t-1} \ \delta_k^1 \ (\sigma_1 \sigma_2 \ldots \sigma_{t-1}) \\
&= \delta_{k-1}^{t-1} \ \delta_k \ \gamma_{t-1}.
\end{align*}
This yields the desired conclusion.
\end{proof}
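As a quick check, for $t=2$ \Cref{prop9} reads $\delta_k^2 = \delta_{k-1}\ \delta_k\ \gamma_1 = \delta_{k-1}\ \delta_k\ \sigma_1$, which is exactly \Cref{lem3}.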
\begin{prop} \label{prop:SameSuperSub}
Let $\delta_k \in$ $B_{k+1}$. Then,
$\delta_k^k = \delta_{k-1} (\delta_k^{k-1})\sigma_{k-1}$.
\end{prop}
\begin{proof}
\begin{align*}
\delta_k^k
&= (\delta_k^{k-1}) \delta_k \\
&= (\delta_{k-1} \delta_k ^{k-2} \sigma_{k-2}) \delta_k \text{\qquad by \Cref{prop8}}\\
&= \delta_{k-1} \delta_k ^{k-2} (\sigma_{k-2} \delta_k) \\
&= \delta_{k-1} \delta_k^{k-2}(\delta_k \sigma_{k-1}) \text{\qquad by \Cref{lem5}} \\
&= \delta_{k-1} (\delta_k^{k-1})\sigma_{k-1}
\end{align*}
This is what we wanted to show.
\end{proof}
\section{Proof of the main theorem}
\label{section:n-bridge}
\noindent \textbf{\Cref{thm:NBridge}.} \textit{The braid index of an n--bridge braid $K(w,b,t,n)$ is:}
\begin{center}
$ i(K(w,b,t,n)) = \begin{cases}
w & t \geq w, \ n\geq 1 \\
t & w > t > b, \ n \geq 1 \\
t+1 & w > b \geq t, n=1\\
b+1 & w > b\geq t, n+t \geq b+1, n>1\\
n+t & w > b \geq t, n+t < b+1, n>1
\end{cases} $
\end{center}
\begin{proof}
We use \Cref{prop8} and \Cref{prop9} and destabilizations to find a presentation of the knot which allows us to apply \Cref{thm:FMW}. \\
\noindent \fbox{\textbf{Case 1: $t \geq w, n \geq 1$}} \\
\noindent Let $t \geq w$ and $n \geq 1$. Then, we know:
\begin{align*}
K(w,b,t,n)
&= \delta_b \delta_{w-1}^t \\
&= \delta_b \delta_{w-1}^w \delta_{w-1}^{t-w}
\end{align*}
Since $(\sigma_{w-1} \sigma_{w-2} \ldots \sigma_1)^w$ is a full twist on $w$ strands, by applying \Cref{thm:FMW}, $i(K(w,b,t,n)) = w$.
\bigbreak \noindent
\noindent \fbox{\textbf{Case 2: $w > t > b, n \geq 1$}} \\
\noindent Suppose $w > t > b$ and $n \geq 1$. We know:
\begin{align*}
K(w,b,t,n) &= \delta_b^n (\delta_{w-1}^t)\\
&= \delta_b^n (\delta_{w-2}^{t-1} \delta_{w-1} \gamma_{t-1}) \text{ by \Cref{prop9}} \\
&= \delta_b^n \delta_{w-2}^{t-1} (\sigma_{w-1} \delta_{w-2}) \gamma_{t-1}
\end{align*}
Since $w > t > b$, then $w-1 \geq t > b$, hence there is a single $\sigma_{w-1}$ in the braid word, which is currently in $B_{w-1}$. Thus, we can destabilize the braid to produce a new braid in $B_{w-2}$:
\begin{align*}
K(w,b,t,n) &= \delta_b^n \delta_{w-2}^{t-1} (\sigma_{w-1} \delta_{w-2}) \gamma_{t-1} \\
&= \delta_b^n \delta_{w-2}^{t-1} ( \delta_{w-2}) \gamma_{t-1} \\
&= \delta_b^n \delta_{w-2}^{t} \gamma_{t-1}
\end{align*}
\noindent We iteratively: (1) apply \Cref{prop9} to the rightmost $\delta_{w-\star}^t$ term, and (2) destabilize the largest remaining Artin generator. Since $w > t > b$, we can repeat this process a total of $w-t$ times, after which we have:
\begin{align*}
K(w,b,t) &= \delta_b^n \delta_{t-1}^t \gamma_{t-1}^{w-t}
\end{align*}
As $t > b$, we know that $t-1 \geq b$, hence $\delta_{b}^n$ contains no $\sigma_s$ letters, where $s \geq t-1$. Moreover, this is a braid word in $B_t$, and it contains a full twist on $t$ strands. By \Cref{thm:FMW}, $i(K(w,b,t,n)) = t$ in this case.
\bigbreak \noindent
\noindent \fbox{\textbf{Case 3: $w > b \geq t, n=1$}} \\
\noindent Note that
\begin{align*}
K(w,b,t) &= \delta_b (\delta_{w-1}^t) \\
&= \delta_b \delta_{w-2}^{t-1} (\delta_{w-1}) \gamma_{t-1} \text{ by \Cref{prop9}} \\
&= \delta_b \delta_{w-2}^{t-1} (\sigma_{w-1} \delta_{w-2}) \gamma_{t-1}
\end{align*}
Since $ w-2 \geq b$, there is a single $\sigma_{w-1}$; it is the largest Artin generator in the braid word. Thus, we can destabilize:
\begin{align*}
K(w,b,t) &= \delta_b \delta_{w-2}^{t-1} (\sigma_{w-1}) \delta_{w-2} \gamma_{t-1} \\
&\approx \delta_b \delta_{w-2}^{t-1} \delta_{w-2} \gamma_{t-1}\\
&= \delta_b \delta_{w-2}^{t} \gamma_{t-1}.
\end{align*}
We iteratively: (1) apply \Cref{prop9} to the rightmost $\delta_{w-\star}^t$ term, and (2) destabilize the largest remaining Artin generator. Since $w > b \geq t$, we can repeat this process a total of $w-b-1$ times, after which we have:
\begin{align*}
K(w,b,t) &= \delta_b \delta_{w-2}^{t} \gamma_{t-1} \\
&= \delta_b (\delta_{b}^t)\gamma_{t-1}^{w-b-1}
\end{align*}
This braid word is in $B_{b+1}$, the braid group on $b+1$ strands.
If $b=t$, then: $$K(w,b,t) = \delta_b^{t+1}\gamma_{t-1}^{w-b-1} = \delta_t^{t+1}\gamma_{t-1}^{w-b-1}$$
Applying \Cref{thm:FMW} allows us to conclude that $i(K(w,b,t))=t+1$. \\
Otherwise, $b>t$ and we iteratively: (1) apply \Cref{prop9} to the $\delta_{b - \star}^{t+1}$ term (note that $\star = 0$ to start), and (2) destabilize the largest remaining Artin generator. Since $b > t$, we can repeat this process a total of $b-t$ times to obtain:
\begin{align*}
K(w,b,t) &= \delta_b^{t+1} \gamma_{t-1}^{w-b-1} \\
& = \delta_{t}^{t+1} \gamma_{t}^{b-t} \gamma_{t-1}^{w-b-1}
\end{align*}
Once again, by \Cref{thm:FMW}, we deduce $i(K(w,b,t))=t+1$. \\
\noindent \fbox{\textbf{Case 4: $w > b\geq t, n+t \geq b+1, n>1$}} \\
\noindent Suppose $w > b\geq t$, $n+t \geq b+1$, and $n>1$. Then we know
\begin{align*}
K(w,b,t,n) &= \delta_b^n (\delta_{w-1}^t)\\
&= \delta_b^n \delta_{w-2}^{t-1} (\delta_{w-1}) \gamma_{t-1} \text{ by \Cref{prop9}} \\
&= \delta_b^n \delta_{w-2}^{t-1}(\sigma_{w-1} \delta_{w-2}) \gamma_{t-1}
\end{align*}
Since $w-2 \geq b$, there is a single $\sigma_{w-1}$ in the braid word, and we can destabilize to yield:
\begin{align*}
K(w,b,t,n) &= \delta_b^n \delta_{w-2}^{t-1} (\sigma_{w-1}) \delta_{w-2} \gamma_{t-1} \\
&\approx \delta_b^n \delta_{w-2}^{t-1} \delta_{w-2} \gamma_{t-1}\\
&\approx \delta_b^n \delta_{w-2}^{t} \gamma_{t-1}
\end{align*}
\noindent We iteratively: (1) apply \Cref{prop9} to the rightmost $\delta_{w - \star}^t$ term, and (2) destabilize the largest remaining Artin generator. We repeat this process a total of $w-b-1$ times to arrive at:
\begin{align*}
K(w,b,t,n) &= \delta_b^n \delta_{w-2}^t \gamma_{t-1} \\
&= \delta_b^n \delta_{b}^{t} \gamma_{t-1}^{w-b-1}
\end{align*}
This is a braid word on $b+1$ strands.
As $n+t \geq b+1$, we get
\begin{align*}
K(w,b,t,n) &= \delta_b^n \delta_b^t \gamma_{t-1}^{w-b-1} \\
&= \delta_b^{n+t} \gamma_{t-1}^{w-b-1}
\end{align*}
Applying \Cref{thm:FMW}, we deduce $i(K(w,b,t,n))=b+1$. \\
\noindent \fbox{\textbf{Case 5: $w > b \geq t, n+t < b+1, n>1$}} \\
\noindent We begin by applying \Cref{prop9} to the standard braided presentation of $K(w,b,t,n)$:
\begin{align*}
K(w,b,t,n) &= \delta_b^n (\delta_{w-1}^t)\\
&= \delta_b^n (\delta_{w-2}^{t-1} (\delta_{w-1}) \gamma_{t-1}) \text{ by \Cref{prop9}} \\
&= \delta_b^n \delta_{w-2}^{t-1}(\sigma_{w-1} \delta_{w-2}) \gamma_{t-1}
\end{align*}
We know that $w-2 \geq b$, hence there is a single $\sigma_{w-1}$ in the braid word $\beta \in B_{w}$. Thus, we can destabilize to produce:
\begin{align*}
K(w,b,t,n) &= \delta_b^n \delta_{w-2}^{t-1}(\sigma_{w-1}) \delta_{w-2} \gamma_{t-1} \\
&\approx \delta_b^n \delta_{w-2}^{t-1} \delta_{w-2} \gamma_{t-1}\\
&\approx \delta_b^n \delta_{w-2}^{t} \gamma_{t-1}
\end{align*}
\noindent We iteratively: (1) apply \Cref{prop9} to the rightmost $\delta_{w - \star}^t$ term, and (2) destabilize the largest remaining Artin generator. We repeat this process a total of $w-b-1$ times to arrive at:
\begin{align}
K(w,b,t,n) &= \delta_b^n \delta_{w-2}^{t} \gamma_{t-1} \nonumber\\
&\approx \delta_b ^{n} \delta_b^t \gamma_{t-1}^{w-b-1} \label{eqn:remember}
\end{align}
This braid word is on $b+1$ strands. We assumed that $n+t < b+1$, so namely, $n+t \leq b$. Suppose $n+t = b$. In this case,
\begin{align*}
K(w,b,t,n) &= \delta_b ^{n} \delta_b^t \gamma_{t-1}^{w-b-1} \\
&= \delta_b^b \gamma_{t-1}^{w-b-1} \\
&= \delta_{b-1} (\delta_b^{b-1})\sigma_{b-1} \gamma_{t-1}^{w-b-1} \text{\qquad by \Cref{prop:SameSuperSub}} \\
&= \delta_{b-1}(\delta_{b-1} ^{b-2} \delta_b \gamma_{b-2}) \sigma_{b-1} \gamma_{t-1}^{w-b-1} \text{\qquad by \Cref{prop9}} \\
&= \delta_{b-1}\delta_{b-1} ^{b-2} (\delta_b) \gamma_{b-2} \sigma_{b-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{b-1}\delta_{b-1} ^{b-2} (\sigma_b \delta_{b-1}) \gamma_{b-2} \sigma_{b-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{b-1}\delta_{b-1} ^{b-1} \gamma_{b-2} \sigma_{b-1} \gamma_{t-1}^{w-b-1} \text{\qquad by destabilizing the unique $\sigma_b$ letter} \\
&= \delta_{b-1} ^{b} \gamma_{b-1} \gamma_{t-1}^{w-b-1}
\end{align*}
\noindent This braid word on $b$ strands contains a full twist; thus, by \Cref{thm:FMW}, $i(K(w,b,t,n))= b = n + t$. Now suppose $n+t < b $. In particular, $n+t \leq b-1$. In this case, as in \Cref{eqn:remember},
\begin{align*}
K(w, b, t, n) &= \delta_b ^{n} \delta_b^t \gamma_{t-1}^{w-b-1}\\
&=(\delta_b ^{n+t}) \gamma_{t-1}^{w-b-1}\\
&= (\delta_{b-1}^{n+t-1} \delta_b \gamma_{n + t - 1}) \gamma_{t-1}^{w-b-1} \text{\qquad by \Cref{prop9}} \\
&= \delta_{b-1}^{n+t-1} (\delta_b) \gamma_{n + t - 1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{b-1}^{n+t-1} (\sigma_b \delta_{b-1}) \gamma_{n + t - 1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{b-1}^{n+t-1} \delta_{b-1} \gamma_{n + t - 1} \gamma_{t-1}^{w-b-1} \text{\qquad by destabilizing the unique $\sigma_b$ letter} \\
&= \delta_{b-1}^{n+t} \gamma_{n + t - 1} \gamma_{t-1}^{w-b-1}
\end{align*}
\noindent We iteratively: (1) apply \Cref{prop9} to the $\delta_{b - \star}^{n+t}$ term, and (2) destabilize the largest remaining Artin generator. We repeat this process $b-1 - n - t$ times to obtain:
\begin{align*}
K(w, b, t, n) &= \delta_{b-1}^{n+t} \gamma_{n + t - 1} \gamma_{t-1}^{w-b-1} \\
&= (\delta_{n+t} ^{n+t}) \gamma_{n+t-1} \gamma_{t-1}^{w-b-1}\\
&= (\delta_{n+t-1} \delta_{n+t}^{n+t-1} \sigma_{n+t-1}) \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \text{\qquad by \Cref{prop:SameSuperSub}} \\
&= \delta_{n+t-1} (\delta_{n+t}^{n+t-1}) \sigma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{n+t-1} (\delta_{n+t-1}^{n+t-2} \delta_{n+t} \gamma_{n+t-2}) \sigma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \text{\qquad by \Cref{prop9}} \\
&= \delta_{n+t-1} \delta_{n+t-1}^{n+t-2} (\delta_{n+t}) \gamma_{n+t-2} \sigma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{n+t-1} \delta_{n+t-1}^{n+t-2} (\sigma_{n+t}\delta_{n+t-1}) \gamma_{n+t-2} \sigma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{n+t-1} \delta_{n+t-1}^{n+t-2} (\delta_{n+t-1}) \gamma_{n+t-2} \sigma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \text{\qquad by destabilizing the $\sigma_{n+t}$ term} \\
&= \delta_{n+t-1}^{n+t} (\gamma_{n+t-2} \sigma_{n+t-1}) \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{n+t-1}^{n+t} \gamma_{n+t-1} \gamma_{n+t-1} \gamma_{t-1}^{w-b-1} \\
&= \delta_{n+t-1}^{n+t} \gamma_{n+t-1}^2 \gamma_{t-1}^{w-b-1}
\end{align*}
This braid word in $B_{n+t}$ contains a full twist; applying \Cref{thm:FMW}, we deduce the braid index is $n + t$. \end{proof}
\end{document}
|
\begin{document}
\title{On complexity of propositional Linear-time Temporal Logic with
finitely many variables\thanks{Pre-final version of the paper
published in: In van Niekerk J., Haskins
B. (eds). \textit{Proceedings of SAICSIT'18}. ACM,
2018. pp. 313-316. DOI: 10.1145/3278681.3278718}}
\author[1]{Mikhail Rybakov} \author[2]{Dmitry Shkatov} \affil[1]{Tver
State University and University of the Witwatersrand, Johannesburg}
\affil[2]{University of the Witwatersrand, Johannesburg}
\maketitle
\begin{abstract}
It is known~\cite{DS02} that both satisfiability and model-checking
problems for propositional Linear-time Temporal Logic, {\bf LTL},
with only a single propositional variable in the language are
PSPACE-complete, which coincides with the complexity of these
problems for {\bf LTL} with an arbitrary number of propositional
variables~\cite{SislaClarke85}. In the present paper, we show that
the same result can be obtained by modifying the original proof of
PSPACE-hardness for {\bf LTL} from~\cite{SislaClarke85}; i.e., we
show how to modify the construction from~\cite{SislaClarke85} to
model the computations of polynomial-space bounded Turing machines
using only formulas of one variable. We believe that our
alternative proof of the results from~\cite{DS02} gives additional
insight into the semantic and computational properties of {\bf LTL}.
\end{abstract}
\section{Introduction}
\label{sec:intro}
The propositional Linear-time Temporal Logic {\bf LTL}, proposed
in~\cite{Pnueli77}, is historically the first temporal logic to have
been used in formal specification and verification of (parallel)
non-terminating computer programs~\cite{HR04}, such as (components of)
operating systems. It has stood the test of time, despite a dizzying
variety of temporal logics that have since been introduced for the
purpose (see, e.g.,~\cite{DGL16}).
The task of verifying that a program conforms to a specification can
be carried out by checking whether an {\bf LTL} formula expressing the
specification is satisfied in the structure modelling the execution
paths of the program. This corresponds to the model checking problem
for {\bf LTL}: given a formula, a model, and a state, check if the
formula is satisfied by all paths of the model beginning with the
given state. The related task of verifying that a specification of a
program is consistent---and, thus, can be satisfied by some
program---corresponds to the satisfiability problem for {\bf LTL}:
given a formula, check whether there is a model and a path satisfying
the formula.
Therefore, the complexity of both satisfiability and model checking
is of crucial interest when it comes to applications of {\bf LTL} to
formal specification and verification. It has been shown
in~\cite{SislaClarke85} that both satisfiability and model checking
for {\bf LTL} are PSPACE-complete. It might have been hoped that the
complexity of satisfiability, as well as of model checking, may be
reduced if we consider a language with only a finite number of
propositional variables, which is sufficient for most
applications---as has been observed in~\cite{DS02}, most properties of
interest can be specified using a very small number of variables;
typically, not more than three. Indeed, examples are known of logics
whose satisfiability problem goes down from ``intractable'' to
``tractable'' once we place a limit on the number of propositional
variables allowed in the language: thus, satisfiability for the
classical propositional logic and all the normal extensions of the
modal logic {\bf K5}~\cite{NTh75}, including logics such as {\bf K45},
{\bf KD45}, and {\bf S5} (see also~\cite{Halpern95}), used in formal
specification and verification of distributed and multi-agent
systems~\cite{FHMV95}, goes down from NP-complete to polynomial-time
decidable once we limit the number of propositional variables by an
(arbitrary) finite number. Similarly, as follows
from~\cite{Nishimura60}, satisfiability for the intuitionistic
propositional logic goes down from PSPACE-complete to polynomial-time
if we allow only a single propositional variable in the language.
It has been shown in~\cite{DS02}, however, that even a single variable
in the language of {\bf LTL} is sufficient to produce a fragment whose
model-checking and satisfiability problems are as hard as
corresponding problems for the entire logic. Thus, the complexity of
these tasks for {\bf LTL} cannot be lowered by placing restrictions on
the number of variables allowed in the construction of formulas.
It is often instructive to have various proofs of important formal
results, to which we believe the results on complexity of
model-checking and satisfiability for {\bf LTL} undoubtedly
belong. Thus, in the present paper, we present an alternative proof,
which is, in fact, a modification of the original proof
from~\cite{SislaClarke85} establishing PSPACE-hardness of {\bf LTL}
with an unlimited number of variables. We show that, with some
ingenuity, one can modify the construction used
in~\cite{SislaClarke85} of an {\bf LTL}-model based on a computation
of a polynomial-space bounded Turing machine so that we obtain a model
for a single-variable fragment of {\bf LTL}. The interesting feature
of a modified construction is that---even though the size of the
model, and the {\bf LTL}-formula describing it, blows up---the blow-up
is proportionate to the size of the Turing machine, which is
independent of the size of the input, and thus the reduction remains
polynomial.
It is worth noticing that most well-known general methods
\cite{Halpern95, ChRyb03, RSh18a, RSh18b, RSh18c} of establishing
similar results for modal and temporal logics are not applicable to
{\bf LTL} due to the restriction on the branching factor in its models.
The paper is structured as follows. In
section~\ref{sec:syntax-semantics}, we briefly recall the syntax and
semantics of {\bf LTL}. In section~\ref{sec:single-variable-fragment},
we present our proof of PSPACE-hardness of model-checking and
satisfiability problems for the single-variable fragment of {\bf LTL},
which is a modification of the construction from the original proof
from~\cite{SislaClarke85}. We conclude in
section~\ref{sec:conclusion} by drawing attention to some features of
{\bf LTL} that make it stand apart from other modal and temporal
logics used in formal specification and verification.
\section{Syntax and semantics}
\label{sec:syntax-semantics}
The language of {\bf LTL} contains an infinite set of propositional
variables $\Var = \linebreak \{p_1, p_2, \ldots \}$, the Boolean
constant $\bottom$ (``falsehood''), the Boolean connective $\imp$
\linebreak (``if \ldots, then \ldots''), and the temporal operators
$\next$ (``next'') and $\until$ (``until''). The formulas are defined
by the following Backus-Naur form expression:
\[\vp ::= p \mid\ \bottom\ \mid (\vp
\imp \vp) \mid \next \vp \mid (\vp \until \vp), \]
where $p$ ranges over \Var. We also define
$\top := ({\bottom} \imp {\bottom})$,
$\neg \vp := (\vp \imp {\bottom})$,
$(\vp \con \psi) := \neg (\vp \imp \neg \psi)$,
$\Diamond \vp := (\truth \until \vp)$, and
$\Box \vp := \neg \Diamond \neg \vp$. We adopt the usual conventions
about omitting parentheses. For every formula $\vp$ and every number
$n$ such that $n \geqslant 0$, we inductively define the formula
$\next^n \vp$ as follows: $\next^0 \vp := \vp$, and
$\next^{n+1} \vp := \next \next^n \vp$.
Formulas are evaluated in Kripke models (often referred to as
``transition systems''). A Kripke model is a tuple
$\mmodel{M} = (\states{S}, \ar, V)$, where \states{S} is
a non-empty set \linebreak (of states), $\ar$ is a binary
(transition) relation on \states{S} that is serial (i.e., for every
$s \in \states{S}$, there exists $s' \in \states{S}$ such that
$s \ar s'$), and $V$ is a (valuation) function
$V: \Var \rightarrow 2^{\states{S}}$.
An infinite sequence $s_0, s_1, \ldots$ of states of \mmodel{M} such
that $s_i \ar s_{i+1}$, for every $i \geqslant 0$, is called a
\textit{path}. Given a path $\pi$ and some $i \geqslant 0$, we denote
by $\pi[i]$ the $i$th element of $\pi$ and by $\pi[i, \infty]$ the
suffix of $\pi$ beginning with its $i$th element.
Formulas are evaluated with respect to paths. The satisfaction
relation between models $\frak{M}$, paths $\pi$, and formulas $\vp$ is
defined inductively, as follows:
\begin{itemize}
\item \sat{M}{\pi}{p_i} \sameas\ $\pi[0] \in V(p_i)$; \nopagebreak[3]
\nopagebreak[3]
\nopagebreak[3]
\item \sat{M}{\pi}{\falsehood} never holds;
\nopagebreak[3]
\item \sat{M}{\pi}{(\vp_1 \imp \vp_2)} \sameas\ \sat{M}{\pi}{\vp_1}
implies \sat{M}{\pi}{\vp_2}; \nopagebreak[3]
\item \sat{M}{\pi}{\next \vp_1} \sameas\ \sat{M}{\pi[1, \infty]}{\vp_1};
\nopagebreak[3]
\item \sat{M}{\pi}{\vp_1 \until \vp_2} \sameas\ \sat{M}{\pi[i,
\infty]}{\vp_2}, for some $i \geqslant 0$, and \sat{M}{\pi[j,
\infty]}{\vp_1} for every $j$ such that $0 \leqslant j < i$.
\end{itemize}
A formula is satisfiable if it is satisfied by some path of some
model. A formula is valid if it is satisfied by every path of every
model.
We now state the two computational problems considered in the
following section. The {\it satisfiability problem for {\bf LTL}}:
given a formula $\vp$, determine whether there exists a model
$\frak{M}$ and a path $\pi$ in $\frak{M}$ such that \sat{M}{\pi}{\vp}.
The {\it model-checking problem for {\bf LTL}}: given a formula $\vp$,
a model $\frak{M}$, and a state $s$ in $\frak{M}$, determine whether
\sat{M}{\pi}{\vp} for every path $\pi$ such that $\pi[0] =
s$.
Clearly, formula $\vp$ is valid if, and only if, $\neg \vp$ is not
satisfiable; thus any deterministic algorithm that solves the
satisfiability problem also solves the validity problem, and vice
versa.
\section{Complexity of satisfiability and model-checking for
finite-variable fragments}
\label{sec:single-variable-fragment}
In this section, we show how the original construction used in
~\cite{SislaClarke85} to establish PSPACE-hardness of model-checking
and satisfiability for {\bf LTL} with an arbitrary number of
propositional variables can be modified to prove that model-checking
and satisfiability for the single-variable fragments of {\bf LTL} are
PSPACE-hard, too. Before doing so, we briefly note that, for the
variable-free fragment, both problems are polynomially decidable.
Indeed, it is easy to check that every variable-free {\bf LTL} formula
is equivalent to either $\bottom$ or $\top$ (for example,
${\top} \until {\top}$ is equivalent to $\top$ and
${\top} \until {\bottom}$ is equivalent to $\bottom$); thus, to check
for satisfiability of a variable-free formula $\vp$, all we need to do
is to recursively replace each subformula of $\vp$ by either $\bottom$
or $\top$, which is linear in the size of $\vp$; likewise for
model-checking.
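To make the recursive replacement explicit, here is a small sketch (an illustration of the remark above, with the syntax-tree encoding chosen here): for a variable-free formula, $\next \vp$ has the same constant value as $\vp$, and ${\vp \until \psi}$ has the same constant value as $\psi$, so the evaluation is a single linear-time traversal.
\begin{verbatim}
# Variable-free LTL formulas encoded as nested tuples:
#   'bot'              -- falsehood
#   ('imp', a, b)      -- a -> b
#   ('next', a)        -- X a
#   ('until', a, b)    -- a U b
def value(f):
    """Constant truth value of a variable-free formula (linear time)."""
    if f == 'bot':
        return False
    op = f[0]
    if op == 'imp':
        return (not value(f[1])) or value(f[2])
    if op == 'next':      # every suffix satisfies the same constant value
        return value(f[1])
    if op == 'until':     # a U b reduces to the constant value of b
        return value(f[2])
    raise ValueError("unexpected connective")

top = ('imp', 'bot', 'bot')                   # derived constant "true"
assert value(('until', top, top)) is True     # T U T   is equivalent to T
assert value(('until', top, 'bot')) is False  # T U bot is equivalent to bot
\end{verbatim}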
We recall that in~\cite{SislaClarke85} an arbitrary problem
``$x \in A?$'' solvable by polynomial-space bounded (deterministic)
Turing machines is reduced to model-checking for {\bf LTL}. (The
authors of~\cite{SislaClarke85} then reduce the model-checking problem
for {\bf LTL} to the satisfiability problem for {\bf LTL}.) We show
how one can modify the construction from~\cite{SislaClarke85} to
simultaneously reduce the problem ``$x \in A?$'' to both model
checking and satisfiability for {\bf LTL} using formulas containing
only one variable. Since we are describing a modification of a
well-known construction, we will be rather brief. As we go along, we
point out the main differences of our construction from that
in~\cite{SislaClarke85}.
Let $M = (Q, \Sigma, q_0, q_1, a_0, a_1, \delta)$ be a (deterministic)
Turing machine, where $Q$ is the set of states, $\Sigma$ is the
alphabet, $q_0$ is the starting state, $q_1$ is the final state, $a_0$
is the blank symbol, $a_1$ is the symbol marking the leftmost cell,
and $\delta$ is the machine's program. We adopt the convention that
$M$ gives a positive answer if, at the end of the computation, the
tape is blank save for $a_1$ written in the leftmost cell. We assume,
for technical reasons, that $\delta$ contains an instruction to the
effect that the ``yes'' configuration yields itself (thus, we assume
that all computations with a positive answer are infinite). Given an
input on length $n$, we assume that the amount of space $M$ uses is
$S(n)$, for some polynomial $S$.
We now construct, in time polynomial in the size of $x$, a model
$\frak{M}$, a path $\pi$ in $\frak{M}$, and a formula $\psi$---of a
single variable, $p$---such that $x \in A$ if, and only if,
\sat{M}{\pi}{\vp}. It will also be the case that $x \in A$ if, and
only if, $\psi$ is {\bf LTL}-valid. The model $\frak{M}$ intuitively
corresponds, in the way described below, to the computation of $M$ on
input $x$.
First, we need the ability to model natural numbers within a certain
range, say $1$ through $k$. To that end, we use models based on the
frame $\frak{F}_k$, depicted in Figure~\ref{fig:frame-k}, which is a
line made up of $k$ states. By making $p$ true exactly at the $i$th
state of $\frak{F}_k$, where $1 \leqslant i \leqslant k$, we obtain a
model representing the natural number $i$. We denote the model
representing the number $m$ by $\frak{N}_m$.
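As a small illustration (the data representation below is chosen here and is not part of the construction), a component $\frak{N}_m$ is fully described by the chain length $k$ and the unique position at which $p$ holds:
\begin{verbatim}
def chain_component(k, m):
    """Sketch of N_m: states 1..k in a line, with p true exactly at state m."""
    assert 1 <= m <= k
    states = list(range(1, k + 1))
    next_state = {s: s + 1 for s in states if s < k}
    # Within the full model M the last state is wired to the next component;
    # here it is simply left without a successor.
    p_holds = {m}
    return states, next_state, p_holds

def decode(p_holds):
    """Recover the number represented by a component."""
    (m,) = p_holds
    return m
\end{verbatim}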
\begin{figure}
\caption{Frame $\frak{F}_k$}
\label{fig:frame-k}
\end{figure}
We next use models $\frak{N}_m$ to build a model representing all
possible contents of a single cell of $M$. Let $|Q| = n_1$ and
$|\Sigma| = n_2$. As each cell of $M$ may contain either a symbol
from $\Sigma$ or a sequence $qa$, where $q \in Q$ and $a \in \Sigma$,
indicating that $M$ is scanning the present cell, where $a$ is
written, there are $n_2 \times (n_1 + 1)$ possibilities for the
contents of a single cell of $M$. Let $k = n_2 \times (n_1 + 1)$;
clearly, $k$ is independent of the size of the input $x$. To model
the contents of a single cell, we use models $\frak{N}_1$ through
$\frak{N}_k$ to build a model $\frak{C}$, depicted in
Figure~\ref{fig:model-c}, where small boxes represent models
$\frak{N}_1$ through $\frak{N}_k$. In Figure~\ref{fig:model-c}, an
arrow from $s_0$ to a box corresponding to the model $\frak{N}_m$
represents a transition from $s_0$ to the first state of $\frak{N}_m$,
and an arrow from a box corresponding to the model $\frak{N}_m$ to
$s_1$ represents a transition from the last state of $\frak{N}_m$ to
$s_1$. On the states in $\frak{N}_m$ ($1 \leqslant m \leqslant k$),
the evaluation of $p$ in $\frak{C}$ agrees with the evaluation of $p$
in $\frak{N}_m$; in addition, $p$ is false both at $s_0$ and $s_1$.
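For instance (an illustrative count, not tied to any particular machine), a Turing machine with $n_1=5$ states and $n_2=4$ tape symbols gives $k = 4\times(5+1) = 24$ components $\frak{N}_1,\ldots,\frak{N}_{24}$ in each copy of $\frak{C}$, a number that depends only on $M$ and not on the input $x$.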
\begin{figure}
\caption{Model $\frak{C}$}
\label{fig:model-c}
\end{figure}
Let the length of $x$ be $n$. We use $S(n)$ copies of $\frak{C}$ to
represent a single configuration of $M$. This is done with the model
$\frak{M}$, depicted in Figure~\ref{fig:model-T}. In $\frak{M}$, a
chain made up of $S(n)$ copies of $\frak{C}$ is preceded by a model
$\frak{B}$ marking the beginning of a configuration; the use of
$\frak{B}$ allows us to separate configurations from each other. All
that is required of the shape of $\frak{B}$ is for it to contain a
pattern of states (with an evaluation) that does not occur elsewhere
in $\frak{M}$; thus, we may use the frame $\frak{F}_3$ and define the
evaluation to make $p$ true at its every state.
This completes the construction of the model $\frak{M}$. One might
think of $\frak{M}$ as consisting of ``cycles,'' each cycle
representing a single configuration of $M$ in the following way: to
obtain a particular configuration of $M$, pick a path from the first
state of $\frak{B}$ to the last state of the last copy of $\frak{C}$
that traverses the model $\frak{N}_i$ within the $j$th copy of
$\frak{C}$ exactly when the $j$th cell of the tape of $M$ contains the
$i$th ``symbol'' from the alphabet $\Sigma \union Q \times \Sigma$.
The main, and crucial, difference between the model $\frak{M}$
described above and the model used in~\cite{SislaClarke85} is that we
use ``components'' $\frak{N}_k$ where~\cite{SislaClarke85} use an
anti-chain of $k$ states distinguished by the evaluation of $k$
distinct propositional variables. This allows us---in contrast
to~\cite{SislaClarke85}---to use a single propositional variable in
describing our model.
\begin{figure}
\caption{Model $\frak{M}$}
\label{fig:model-T}
\end{figure}
We now describe how to build a formula $\psi$ whose satisfaction we
want to check with respect to an infinite path beginning with the
first state of $\frak{B}$. It is rather straightforward to write out
the following formulas (all one needs to say is what symbols are
written in each of the cells of $M$'s tape):
\begin{itemize}
\item A formula $\psi_{start}$ describing the initial configuration
of $M$ on $x$;
\item A formula $\psi_{positive}$ describing the configuration of $M$
corresponding to the positive answer.
\end{itemize}
The length of both $\psi_{start}$ and $\psi_{positive}$ is clearly
proportionate to $k \times S(n)$, as we have $S(n)$ cells to describe
and use formulas of length proportionate to $k$ to describe each of
them. Next, we can write out a formula $\psi_{\delta}$ describing the
program $\delta$ of $M$. This can be done by starting with formulas
of the form $\next^j \sigma$, where $j$ is the number of states in a
path leading from the first state of $\frak{B}$ to the last state of
the last copy of $\frak{C}$ in a single ``cycle'' in $\frak{M}$, to
describe the change in the contents of the cells from one
configuration to the next, and then, for each instruction $I$ from
$\delta$, writing a formula $\alpha(I)$ of the form
$\bigwedge_{i=0}^{S(n)} \Box \chi$, where $\chi$ describes changes
occurring in each cell of $M$. Clearly, the length of each
$\alpha(I)$ is proportionate to $k \times S(n)$. Then,
$\psi_{\delta} = \bigwedge_{I \in \delta} \alpha(I)$. As the number
of instructions in $\delta$ is independent from the length of the
input, the length of $\psi_{\delta}$ is proportionate to
$c \times S(n)$, for some constant $c$.
Lastly, we define
$$
\psi = \psi_{start} \con \Box \psi_{\delta} \imp \Diamond
\psi_{positive}.
$$
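Putting the pieces together (a rough count; the constant $C_M$ below depends only on the machine $M$), we obtain
$$
|\psi| \;\le\; |\psi_{start}| + |\psi_{\delta}| + |\psi_{positive}| + O(1) \;\le\; C_M\, S(n),
$$
so $\psi$, like the model $\frak{M}$, can be produced in time polynomial in the length $n$ of the input $x$.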
One can then show, by induction on the length of the computation of
$M$ on $x$, that $M(x) = yes$ if, and only if, $\psi$ is satisfied in
$\frak{M}$ by an infinite path corresponding, in the way described
above, to the computation of $M$ on $x$. This gives us the following:
\begin{theorem}
The model-checking problem for {\bf LTL} formulas with at most one
variable is \rm{PSPACE}-complete.
\end{theorem}
Likewise, we can show that $M(x) = yes$ if, and only if, $\psi$ is
satisfiable, which gives us the following:
\begin{theorem}
The satisfiability problem for {\bf LTL} formulas with at most one
variable is \rm{PSPACE}-complete.
\end{theorem}
\section{Conclusion}
\label{sec:conclusion}
We have shown how the construction from~\cite{SislaClarke85} can be
modified to prove the PSPACE-hardness of both model-checking and
satisfiability for the single-variable fragment of the propositional
Linear-time Temporal Logic, {\bf LTL}. The essential difference
between the original construction and the modified construction
presented above is that we use chains of states of length
$n_2 \times (n_1 + 1)$, where $n_1$ and $n_2$ are the number of states
and symbols, respectively, of the Turing machine whose computation we
model, rather than single states used in~\cite{SislaClarke85} to
evaluate $n_2 \times (n_1 + 1)$ variables. Since numbers $n_1$ and
$n_2$ are not known in advance, the modelling in~\cite{SislaClarke85}
requires an unlimited number of variables. In our modification of the
proof from~\cite{SislaClarke85}, the number $n_2 \times (n_1 + 1)$ is
reflected in the model that can be described by formulas with a single
variable, thus producing a reduction to a single-variable formula.
Even though the length of the formula is clearly dependent on
$n_2 \times (n_1 + 1)$, this number is independent of the input $x$ to
the problem ``$x \in A?$'' which we are reducing to the model-checking
and satisfiability for {\bf LTL}; thus, the reduction remains
polynomial. This is a rather curious property of {\bf LTL}, which
makes it stand apart from most ``natural'' modal and temporal logics
(by a ``natural'' logic, we mean a logic that was not purposefully
constructed to exhibit a certain property).
We conclude by drawing attention to another peculiarity of {\bf LTL}
that makes it stand apart from other ``natural'' modal and temporal
logics. While the complexity function (see~\cite{ChZ97}, Section 18.1)
for {\bf LTL} is polynomial, both in the language with infinitely many
variables and, as follows from the proof presented above, in the
language with a single variable, the complexity of the corresponding
satisfiability problem is PSPACE-complete. By contrast, for most
``natural'' modal and temporal logics, polynomiality of the complexity
function implies a polynomial-time decidable satisfiability problem,
and PSPACE-completeness of the satisfiability problem implies an
exponential complexity function.
\end{document}
|
\begin{document}
\title{An accurate finite element method for elliptic interface problems}
\maketitle
\begin{abstract}
A finite element method for elliptic problems with discontinuous
coefficients is presented. The discontinuity is assumed to take place along a closed smooth
curve. The proposed method makes it possible to deal with meshes that are not adapted to
the discontinuity line. The (nonconforming) finite element space is enriched with local basis functions.
We prove an optimal convergence rate in the $H^1$--norm. Numerical tests confirm the theoretical results.
\end{abstract}
\section{Introduction}
Boundary value problems with discontinuous coefficients constitute a prototype of various problems
in heat transfer and continuum mechanics where heterogeneous media are involved. The numerical solution
of such problems requires much care since their solution does not generally enjoy the smoothness
required to obtain optimal convergence rates. Although fitted or adapted meshes can handle such difficulties, these
solution strategies become expensive if the discontinuity front evolves with time or within an iterative process.
Such a (weak) singularity appears also in the numerical solution of other types of problems
like free boundary problems when they are formulated for a fixed mesh or for fictitious domain methods.
We address, in this paper, a new finite element approximation of a model elliptic transmission problem that
allows nonfitted meshes. It is well known that the standard finite element approximation of such a problem does not
converge with a first order rate in the $H^1$-norm in the general case. We propose a method that converges optimally
provided the interface is a sufficiently smooth curve. Our method is based on a local enrichment of the finite
element space in the elements intersected by the interface. The local feature is ensured by the use of a hybrid approximation.
A Lagrange multiplier makes it possible to recover the conformity of the approximation. The derived method then appears
as a local modification of the equations of the interface elements rather than as a modification of the linear system of equations.
This property ensures that the structure of the matrix of the linear system is not affected by the enrichment.
Let us mention other authors who addressed this topic in the finite element context. We point out the
so-called XFEM (eXtended Finite Element Methods) developed in Belytschko \emph{et al.} \cite{BMUP}
where the finite element space is modified in interface elements by using the level set function
associated to the interface. Such methods, which are also used for crack propagation, have, from our
point of view, the drawback of resulting in a variable matrix structure.
Moreover, although no theoretical analysis is available, numerical experiments
show that they are not optimal in terms of accuracy. Other authors like Hansbo {\it et al.} \cite{HH,HLPS},
have approaches similar to ours, but here also the proposed method seems to
modify the matrix structure by enriching the finite element space.
In Lamichhane--Wohlmuth \cite{LW} and Braess--Dahmen \cite{BD}, a similar Lagrange multiplier approach is used
for a mortar finite element
formulation of a domain decomposition method. Finally, in a work by Li \emph{et al}
\cite{LLW}, an immersed interface technique, inspired from finite difference schemes, is adapted to
the finite element context. Note also that, in the references where Lagrange multipliers are employed,
the supports of these multipliers are the edges defining the interface. In our method, the interface supports the
added degrees of freedom, but the Lagrange multipliers are defined on the edges intersected by the interface
and thus serve to compensate for the nonconformity of the finite element space rather than to enforce
interface conditions, which are naturally ensured by the variational formulation.
\noindent
In the following, we use the space $L^2(\Omega)$ equipped with the norm $\|\cdot\|_{0,\Omega}$
and the Sobolev spaces $H^m(\Omega)$ and $W^{m,p}(\Omega)$ endowed with the norms $\|\cdot\|_{m,\Omega}$ and
$\|\cdot\|_{m,p,\Omega}$ respectively. We shall also use the semi-norm $|\cdot|_{1,\Omega}$ of $H^1(\Omega)$.
Moreover, if $\Omega_1$ and $\Omega_2$ form a partition of $\Omega$, {\em i.e.}, $\overline\Omega=
\overline\Omega_1\cup\overline\Omega_2$, $\Omega_1\cap\Omega_2=\emptyset$ and if $v$ is
a function in $W^{m-1,p}(\Omega)$ with $v_{|\Omega_i}\in W^{m,p}(\Omega_i)$,
then we shall adopt the convention $v\in W^{m,p}(\Omega_1\cup\Omega_2)$ and denote
by $\|v\|_{m,p,\Omega_1\cup\Omega_2}$ the broken Sobolev norm
$$
\|v\|_{m,p,\Omega_1\cup\Omega_2} = \|v\|_{m-1,p,\Omega} +
\|v\|_{m,p,\Omega_1} + \|v\|_{m,p,\Omega_2}.
$$
Similarly, we denote by $\|\cdot\|_{m,\Omega_1\cup\Omega_2}$ and $|\cdot|_{m,\Omega_1\cup\Omega_2}$,
the broken Sobolev norm and semi-norm respectively for the $H^m$--space.
Finally, we shall denote by $C$, $C_1, C_2, \ldots$ various generic constants that
do not depend on mesh parameters and by $|A|$ the Lebesgue measure of a set $A$
and by $A^\circ$ the interior of a set $A$.
\noindent
Let $\Omega$ denote a domain in $\mathbb R^2$ with smooth boundary $\Gamma$ and let $\gamma$ stand for
a closed $C^2$-curve in $\Omega$ which separates $\Omega$ into two disjoint subdomains $\Omega^+$,
$\Omega^-$ such that $\Omega= \Omega^+\cup\gamma\cup\Omega^-$ and $\partial \Omega^+ =\gamma$.
For given $f\in L^2(\Omega)$ and $a\in L^\infty(\Omega)$ we consider the transmission problem:
$$
\left\{
\begin{aligned}{}
-\nabla\cdot(a\nabla u) &= f&&\qquad\text{in }\Omega^+\cup\Omega^-,\\
u &= 0&&\qquad\text{on }\Gamma,\\
[u] = \big[a\frac{\partial u}{\partial n}\big] &= 0&&\qquad\text{on }\gamma,
\end{aligned}
\right.
$$
where $[v]$ denotes the jump of a quantity $v$ across the interface $\gamma$ and $n$ is the normal
unit vector to $\gamma$ pointing into $\Omega^-$. For definiteness we let $[v] = v^- -v^+$ with
$v^\pm = v_{|\Omega^\pm}$. In addition to boundedness of the diffusion coefficient we assume
\begin{align}\label{eq:ass-a}
\begin{split}
a^\pm &\in W^{1,\infty}(\Omega^\pm),\\
a(x) &\ge \alpha>0,\qquad\text{for } x\in\Omega,
\end{split}
\end{align}
i.e. $a$ is uniformly continuous on $\Omega\setminus \gamma$, but discontinuous across $\gamma$.
The standard variational formulation of this problem consists in seeking $u\in H^1_0(\Omega)$
such that
\begin{equation}
\int_\Omega a\,\nabla u\cdot\nabla v\,dx = \int_\Omega fv\,dx\qquad\forall\ v\in H^1_0(\Omega).
\label{Pb}
\end{equation}
In view of the ellipticity condition \eqref{eq:ass-a}, Problem \eqref{Pb} has a unique solution
$u$ in $H^1_0(\Omega)$ but clearly $u\notin H^2(\Omega)$. We shall assume throughout this
paper the regularity properties:
\begin{align}
&u_{|\Omega^-}\in H^2(\Omega^-),\quad u_{|\Omega^+}\in H^2(\Omega^+),\nonumber\\
&\|u\|_{2,\Omega^-\cup\Omega^+} \le C\,\|f\|_{0,\Omega}.\label{H2-regularity}
\end{align}
Note that these assumptions are satisfied in the case where $a_{|\Omega^-}$ and
$a_{|\Omega^+}$ are constants (see \cite{Lemrabet,Petzoldt} for instance).
In the following, we describe a {\em fitted finite element method}, defined by adding extra unknowns on the interface $\gamma$. It turns out that this method
leads to an optimal convergence rate. Although it is well suited for the model problem
it seems to be inefficient in more elaborate problems which, for example, involve moving interfaces.
To circumvent this difficulty, we define a new method where the added degrees of freedom have local
supports and then yield a nonconforming finite element method. We show that the use of a Lagrange
multiplier removes this nonconformity and ensures an optimal convergence rate.
\section{A fitted finite element method}
Assume that the domain $\Omega$ is a convex polygon and consider a regular triangulation ${\mathscr T}_h$ of
$\overline\Omega$ with closed triangles whose edges have lengths $\le h$. We assume that $h$ is small enough so that for each triangle $T\in{\mathscr T}_h $ only the following cases have to be considered:
\begin{enumerate}
\item[1)] $T\cap \gamma = \emptyset$.
\item[2)] $T\cap \gamma$ is an edge or a vertex of $T$.
\item [3)]$\gamma$ intersects two different edges of $T$ in two distinct points different from the vertices.
\item[4)] $\gamma$ intersects one edge and its opposite vertex.
\end{enumerate}
Let $V_h$ denote the lowest degree finite element
space
$$
V_h = \{\,v\in C^0(\overline\Omega);\ v_{|T}\in P_1(T)\ \forall\ T\in {\mathscr T}_h,\ v=0\text{ on }\Gamma\},
$$
where $P_1(T)$ is the space of affine functions on $T$. A finite element approximation of \eqref{Pb}
consists in computing $u_h\in V_h$ such that
\begin{equation}
\int_\Omega a\,\nabla u_h\cdot\nabla v\,dx = \int_\Omega fv\,dx\qquad\forall\ v\in V_h.
\label{SFEM}
\end{equation}
It is well known that, since $u\notin H^2(\Omega)$, the classical error estimates
(see \cite{Ciarlet}) do not hold any more even though we still have the convergence result,
$$
\lim_{h\to 0}\|u-u_h\|_{1,\Omega} = 0.
$$
A fitted treatment of the interface $\gamma$ can however improve this result. Let for this purpose $\mathscr T_h^\gamma$
denote the set of triangles that intersect the interface $\gamma$ corresponding to cases 3) and 4) above,
$$
\mathscr T_h^\gamma := \{T\in\mathscr T_h;\ \gamma\cap T^\circ\neq\emptyset\},
$$
and consider a continuous piecewise linear interpolation of $\gamma$, denoted by $\gamma_h$, as shown in Figure
\ref{Fig-1}. Clearly, in each triangle that it crosses, $\gamma_h$ is the straight segment joining the two points where $\gamma$ intersects the edges of that triangle.
Unless the intersection of $\gamma$ with the boundary of a triangle $T$ coincides
with an edge, $T$ is split into two sets $T^+$ and $T^-$ separated
by the curve $\gamma$. In case 3), the straight line $\gamma_h\cap T$ splits $T$ into a triangle $K_1$
and a quadrilateral that we split into two subtriangles $K_2$ and $K_3$, where we choose $K_2$ such that
$K_1\cap K_2=\gamma_h$. In case 4), $\gamma_h\cap T$ splits $T$ into two triangles $K_1$ and $K_2$.
In this case we set $K_3=\emptyset$. This construction
defines the new fitted finite element mesh of the domain $\Omega$ (see Figure \ref{Fig-1}).
The splitting $T=K_1\cup K_2\cup K_3$ is not unique but
the convergence analysis does not depend on it. Let us denote by $\mathscr T_T^\gamma$ the set of the
three subtriangles of $T$. Below
$\mathscr E_h$ will stand for the set of all edges of elements and $\mathscr E_h^\gamma$ is the set of all edges
that are intersected by $\gamma$ (or $\gamma_h$), {\em i.e.}
$$
\mathscr E_h^\gamma := \{e\in\mathscr E_h;\ \gamma\cap e^\circ\neq\emptyset\}.
$$
For each $T\in\mathscr T_h$, $\mathscr E_T$ is the set of the three edges of $T$.
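In case 3), the splitting can be written down explicitly. The following sketch is purely illustrative: the vertex labelling and the choice of the diagonal of the quadrilateral are conventions adopted here, and any choice with $K_1\cap K_2=\gamma_h\cap T$ is admissible.
\begin{verbatim}
def split_cut_triangle(A, B, C, P, Q):
    """Case 3): T = ABC, gamma_h meets edge AB at P and edge AC at Q.

    Returns subtriangles (K1, K2, K3) with K1 and K2 sharing the segment PQ
    (the local piece of gamma_h); K3 is the remaining part of the
    quadrilateral P-B-C-Q."""
    K1 = (A, P, Q)   # triangle cut off by the segment PQ
    K2 = (P, C, Q)   # shares the edge PQ with K1
    K3 = (P, B, C)   # obtained by cutting the quadrilateral along PC
    return K1, K2, K3

# Example on the reference triangle, with gamma_h cutting through the
# midpoints of the two edges adjacent to A:
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
P, Q = (0.5, 0.0), (0.0, 0.5)
K1, K2, K3 = split_cut_triangle(A, B, C, P, Q)
\end{verbatim}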
\begin{figure}
\caption{Piecewise linear interpolation $\gamma_h$ of the interface $\gamma$ and the resulting fitted mesh.}
\label{Fig-1}
\end{figure}
The fitted mesh is denoted by $\mathscr T_h^F$, {\em i.e.}
$$
\mathscr T_h^F := \mathscr T_h \cup \bigcup_{T\in\mathscr T_h^\gamma}\Big(\cup_{K\in\mathscr T_T^\gamma}K\Big),
$$
and we set $S_h^\gamma := \bigcup\{T;\ T\in\mathscr T_h^\gamma\}$. Let us finally note that the curve $\gamma_h$
defines a new splitting of $\Omega$ into two subdomains $\Omega^-_h$ and $\Omega^+_h$
where $\Omega^\pm_h$ is defined analogously to $\Omega^\pm$ with $\gamma$ replaced by $\gamma_h$.
Next we construct an approximation of the function $a$ on the elements of $\mathscr T_h^F$:
For this purpose, let $\tilde a^{\pm}$ be extensions of $a^{\pm}$ to $\Omega$ such that
$\tilde a^{\pm}\in W^{1,\infty}(\Omega)$. Such extensions exist due to the regularity of $\gamma$
(see \cite{Adams}). Define $\tilde a_h\in W^{1,\infty}(\Omega)$ by
\[
\tilde a_h = \begin{cases}
\tilde a^+ & \text{in } \Omega_h^+,\\
\tilde a^- & \text{in } \Omega_h^-,
\end{cases}
\]
and denote by $a_h$ the piecewise linear interpolant of $\tilde a_h$ on $\mathscr T_h^F$. Hence $a_h$ is continuous on $\Omega_h^+\cup \Omega_h^-$ and coincides with $a$ at the nodes of $\mathscr T_h^F$.
In addition, the function $a_h$ is discontinuous
across the line $\gamma_h$ and satisfies the properties,
\begin{align}
&a_{h|\Omega^+_h}\in W^{1,\infty}(\Omega^+_h),\ a_{h|\Omega^-_h}\in W^{1,\infty}(\Omega^-_h),\label{Prop-ah-1}\\
&\|a_h\|_{0,\infty,\Omega} \le C\,\|a\|_{0,\infty,\Omega},\label{Prop-ah-2}\\
&a_h \ge\alpha > 0\qquad\text{a.e. in }\Omega.\label{Prop-ah-3}
\end{align}
We now define the finite element space
\begin{align*}
&W_h = V_h + X_h,\\
&X_h:=\{v\in C^0(\overline\Omega);\ v_{|\Omega\setminus S_h^\gamma}=0,\ v_{|K}\in
P_1(K)\ \forall\ K\in\mathscr T_T^\gamma,\ \forall\ T\in\mathscr T_h^\gamma\}.
\end{align*}
Note that we have $W_h\subset H^1_0(\Omega)$.
A fitted finite element approximation is defined as follows:
\begin{equation}
\left\{
\begin{aligned}{}
&\text{Find }u_h^F\in W_h\text{ such that}\\
&\int_\Omega a_h\nabla u_h^F\cdot\nabla v\,dx = \int_\Omega fv\,dx\qquad\forall\ v\in W_h.
\end{aligned}
\right.
\label{AFEM}
\end{equation}
In order to study the convergence of Problem \eqref{AFEM}, we consider the auxiliary problem:
\begin{equation}
\left\{
\begin{aligned}{}
&\text{Find }\widehat u_h\in H^1_0(\Omega)\text{ such that}\\
&\int_\Omega a_h\nabla\widehat u_h\cdot\nabla v\,dx = \int_\Omega fv\,dx\qquad\forall\ v\in H^1_0(\Omega).
\end{aligned}
\right.
\label{Ah}
\end{equation}
We note that both problems \eqref{AFEM} as well as \eqref{Ah} have a unique solution.
The regularity properties \eqref{H2-regularity} imply $u^+\in C^0(\bar\Omega^+)$, $u^-\in C^0(\bar\Omega^-)$
and that $u^+$ and $u^-$ have a common trace on $\gamma$. Therefore $u$ is continuous on $\Omega$
and the piecewise $P_1$
interpolant $I_h u\in W_h$ is well defined. In the following let $\tilde u^\pm \in H^2(\Omega)$ stand for the
extensions of $u^\pm$ from $\Omega^\pm$ to $\Omega$.
In the sequel, we assume that the fitted family of meshes $(\mathscr T_h\cup\mathscr T_h^\gamma)_h$
satisfies the condition
\begin{equation}
\frac h\varrho\le C\,h^{-\theta}\label{MeshRegularity}
\end{equation}
for some $\theta\in [0,1)$ and a constant $C$ independent of $h$, where $\varrho$ denotes the radius
of the largest ball contained in any triangle $T\in\mathscr T_h^F$.
\begin{lem}\label{eq:interpolation-error}
Let $u\in H^2(\Omega^+\cup\Omega^-)$.
\begin{enumerate}
\item We have the local interpolation error
\begin{equation}
\label{InterpolationErrorLocal}
|u-I_h u|_{1,T} \le
\begin{cases}
Ch \,|u|_{2,T} & \text{for }T\in \mathscr T_h\setminus \mathscr T_h^\gamma\\
C \frac{h^2}{\varrho_K}(|\tilde u^+|_{2,K} + |\tilde u^-|_{2,K}) &\text{for }K\in \mathscr T_T^\gamma, \,T\in \mathscr T_h^\gamma,
\end{cases}
\end{equation}
where $\varrho_K$ is the radius of the inscribed circle of $K$.
\item The global interpolation error is given by
\begin{equation}
|u-I_hu|_{1,\Omega} \le C\,h^{1-\theta}\,|u|_{2,\Omega^+\cup\Omega^-}.\label{InterpolationErrorGlobal-1}
\end{equation}
Moreover, if $u\in W^{2,\infty}(\Omega^+\cup\Omega^-)$ then
\begin{equation}
|u-I_hu|_{1,\Omega} \le C\,h\,|u|_{2,\infty,\Omega^+\cup\Omega^-}.\label{InterpolationErrorGlobal-2}
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
Since the local interpolation error estimate for $T\in\mathscr T_h\setminus\mathscr T_h^\gamma$ is
classic in finite element theory (see \cite{BS} or \cite{Ciarlet} for instance), we only need to prove
the second estimate on triangles where $u$ is only piecewise smooth. Consider an element $T\in \mathscr T_h^\gamma$ and any subtriangle $K\in \mathscr T_T^\gamma$. Without loss of generality we assume $K\subset \Omega_h^+$, then
\begin{align*}
K &= (K\cap \Omega^+) \,\cup\, (K\cap \Omega^-).
\end{align*}
Since $K\cap \Omega^-\subset T\cap\Omega^-\cap \Omega_h^+$ and $\gamma_h$ interpolates the interface $\gamma$ we obtain for the measure of $K\cap \Omega^-$
\begin{equation}\label{eq:measure-k-omega}
|K\cap \Omega^-| \le |T\cap\Omega^-\cap \Omega_h^+|\le C h^3,
\end{equation}
with a constant $C>0$ which depends on $\gamma$ only. In view of $I_h u = I_h \tilde u^+$, the standard
interpolation theory (see \cite{Ciarlet} or \cite{BS}) implies
\begin{align}\label{eq:interp1}
\begin{split}
|u-I_h u|_{1,K} &\le |u-\tilde u^+|_{1,K} + |\tilde u^+-I_h \tilde u^+|_{1,K}\\
&\le
|u-\tilde u^+|_{1,K} + C\,\frac{h^2}{\varrho_K}\,|\tilde u^+|_{2,K}.
\end{split}
\end{align}
Since $\tilde u^+ = u$ holds on $K\cap \Omega^+$ we obtain
$$
|u-\tilde u^+|_{1,K} = |u-\tilde u^+|_{1,K\cap\, \Omega^-} \le |u^-|_{1,K\cap \, \Omega^-} +
|\tilde u^+|_{1,K\cap\, \Omega^-}.
$$
Applying H{\"o}lder's inequality with $p=\frac32$ and $q=3$, the imbedding of $H^1(K)$ into $L^6(K)$ (Note that
the imbedding constant can be bounded independently of $h$)
and \eqref{eq:measure-k-omega} one can bound $|u^-|_{1,K\cap \,\Omega^-}$ (and analogously
$|\tilde u^+|_{1,K\cap\, \Omega^-}$) by
\begin{align*}
|u^-|_{1,K\cap \,\Omega^-} &\le
|K\cap \Omega^-|^{\frac13} \|\nabla u^-\|_{0,6,K\cap\,\Omega^-}\\
&\le C\,h\,\|\nabla \tilde u^-\|_{0,6,K} \le C\,h\,|\tilde u^-|_{2,K}.
\intertext{Hence}
|u-\tilde u^+|_{1,K} &\le C\,h\, (|\tilde u^-|_{2,K} + |\tilde u^+|_{2,K}).
\end{align*}
Inserting this estimate into \eqref{eq:interp1} leads to
\[
|u-I_h u|_{1,K} \le C \frac{h^2}{\varrho_K}\, (|\tilde u^-|_{2,K} + |\tilde u^+|_{2,K}).
\]
To prove the global interpolation error bound, we write
\begin{align*}
|u - I_h u|_{1,\Omega}^2 &= \sum_{T\in \mathscr{T}_h\setminus \mathscr{T}_h^\gamma} |u - I_h u|_{1,T}^2 + \sum_{T\in \mathscr{T}_h^\gamma} \sum_{K\in \mathscr{T}_T^\gamma} |u - I_h u|_{1,K}^2\\
&\le
Ch^2 \sum_{T\in \mathscr{T}_h\setminus \mathscr{T}_h^\gamma} |u|^2_{2,T} + C \sum_{T\in \mathscr{T}_h^\gamma} \sum_{K\in \mathscr{T}_T^\gamma} \frac{h^2}{\varrho_K} (|\tilde u^-|^2_{2,K} + |\tilde u^+|^2_{2,K})\\
&\le
C\frac{h^2}{\varrho}\, (|\tilde u^-|^2_{2,\Omega} + |\tilde u^+|^2_{2,\Omega})\\
&\le C\frac{h^2}{\varrho}\, |u|^2_{2,\Omega^+\cup\,\Omega^-},
\intertext{where}
\varrho &=\min\{\varrho_K\colon K\in \mathscr{T}_T^\gamma, T\in \mathscr{T}_h^\gamma\}.\\[-2em]
\end{align*}
The calculation above indicates how the convergence rate can be improved in the case $u\in W^{2,\infty}(\Omega^+\cup \Omega^-)$, observing that $|S_h^\gamma|\le Ch$ holds.
\end{proof}
\begin{rem}
It is classic in finite element theory to assume that the meshes are regular in the sense that
Condition \eqref{MeshRegularity} is satisfied for $\theta=0$. For the fitted meshes
$\mathscr T_h^\gamma$ one cannot guarantee that such a condition is satisfied.
To relax this constraint, we
assume here \eqref{MeshRegularity} for a $\theta\in [0,1)$ thus allowing a larger class of fitted
meshes than permitted by $\theta=0$.
\end{rem}
The following result gives the convergence rate for Problem \eqref{AFEM}.
\begin{thm}
\label{FirstErrorEstimate}
Assume that the family of fitted meshes $(\mathscr{T}_h^F)_h$ satisfies the regularity property
\eqref{MeshRegularity}. Then we have the error estimate
\begin{equation}
|u-u_h^F|_{1,\Omega} \le
\begin{cases}
Ch^{1-\theta}\,\|u\|_{2,\Omega^+\cup\,\Omega^-} &\text{if } u\in H^2(\Omega^+\cup\Omega^-),\\
C h\,\|u\|_{2,\infty,\Omega^+\cup\,\Omega^-}&\text{if } u\in W^{2,\infty}(\Omega^+\cup\Omega^-).
\end{cases}
\label{error-uhf}
\end{equation}
\end{thm}
\begin{proof}
We have from the triangle inequality
\begin{equation}
|u-u_h^F|_{1,\Omega} \le |u-\widehat u_h|_{1,\Omega} + |\widehat u_h-u_h^F|_{1,\Omega}.\label{TrIneq}
\end{equation}
To bound the first term on the right-hand side of \eqref{TrIneq}, we proceed as follows: Let us subtract
\eqref{Ah} from \eqref{Pb} and choose $v=u-\widehat u_h$. We have
$$
\int_\Omega (a\,\nabla u-a_h\nabla\widehat u_h)\cdot\nabla(u-\widehat u_h)\,dx = 0.
$$
Then
\begin{align*}
\int_\Omega &a_h |\nabla(u-\widehat u_h)|^2\,dx = -\int_\Omega (a-a_h)\,\nabla u\cdot
\nabla(u-\widehat u_h)\,dx\\
&=-\int_{\Omega\setminus S_h^\gamma}(a-a_h)\,\nabla u\cdot\nabla(u-\widehat u_h)\,dx -
\sum_{T\in \mathscr{T}_h^\gamma} \int_T (a-a_h)\,\nabla u\cdot\nabla(u-\widehat u_h)\,dx.
\end{align*}
The usual estimate for the interpolation error gives
\begin{align*}
\|a-a_h\|_{0,\infty,\Omega} &\le Ch\,(\|\tilde a\|_{1,\infty,\Omega^+_h}+\|\tilde a\|_{1,\infty,\Omega^-_h})\\
&\le Ch\,\|a\|_{1,\infty,\Omega^+\cup \Omega^-}.
\end{align*}
with a constant $C$ which only depends on a reference triangle (see \cite{Ciarlet}, p.~124). Thus we obtain
\begin{equation}\label{eq:est-outside-band}
\bigg|\int_{\Omega\setminus S_h^\gamma}(a-a_h)\,\nabla u\cdot\nabla(u-\widehat u_h)\,dx\bigg| \le C\,h\,
\|a\|_{1,\infty,\Omega^+\cup \Omega^-}\, |u|_{1,\Omega\setminus S_h^\gamma}\,
|u-\widehat u_h|_{1,\Omega\setminus S_h^\gamma}.
\end{equation}
Next we consider a triangle $T\in \mathscr{T}_h^\gamma$ which we split as
\[
T = (T\cap \Omega^+\cap\Omega_h^+)\cup (T\cap \Omega^-\cap\Omega_h^-)
\cup (T\cap \Omega^+\cap\Omega_h^-)\cup (T\cap \Omega^-\cap\Omega_h^+).
\]
As before, we obtain
\[
\bigg|\int_{T\cap \Omega^+\cap\Omega_h^+}(a-a_h)\,\nabla u\cdot \nabla(u-\widehat u_h)\,dx\bigg|
\le Ch\, \|a\|_{1,\infty,\Omega^+\cup \Omega^-}|u|_{1,T\cap \Omega^+\cap\Omega_h^+}\,
|u-\widehat u_h|_{1,T\cap \Omega^+\cap\Omega_h^+}.
\]
Arguing as in the proof of Lemma~\ref{eq:interpolation-error}, the generalized H{\"o}lder inequality together with
\eqref{eq:measure-k-omega} yields the estimate
\begin{align*}
\bigg|\int_{T\cap \Omega^+\cap\Omega_h^-}& (a-a_h)\,\nabla u\cdot \nabla(u-\widehat u_h)\,dx\bigg|\\
&= \bigg|\int_{T\cap \Omega^+\cap\Omega_h^-} (a^+-a_h^-)\nabla u^+\cdot \nabla(u^+-\widehat u_h)\,dx\bigg|\\
&\le
C\, \|a\|_{0,\infty,\Omega}\, |T\cap \Omega^+\cap\Omega_h^-|^{1/3}\,\|\nabla u^+\|_{0,6,T\cap \Omega^+\cap\Omega_h^-}
\,\|\nabla(u^+-\widehat u_h)\|_{0,T\cap \Omega^+\cap\Omega_h^-}\\
&\le
C\, h\,\|a\|_{0,\infty,\Omega}\, |\tilde u^+|_{2,T}\,\|\nabla(u^+-\widehat u_h)\|_{0,T}.
\end{align*}
Analogous estimates hold with $+$ and $-$ interchanged. Collecting the four contributions to the
triangle $T$ one obtains
\begin{align*}
\bigg|\int_T&(a-a_h)\,\nabla u\cdot \nabla(u-\widehat u_h)\,dx\bigg|\\
&\le C h\, (\|a\|_{0,\infty,\Omega} +\|a\|_{1,\infty,\Omega^+\cup\,\Omega^-})\\
&\quad\times \big(|\tilde u^+|_{2,T}\,\|\nabla(u^+-\widehat u_h)\,
\|_{0,T\cap\Omega^+} + |\tilde u^-|_{2,T}\,\|\nabla(u^--\widehat u_h)\|_{0,T\cap\Omega^-}\big).
\end{align*}
Combining this estimate with \eqref{eq:est-outside-band} leads to
\begin{align*}
\int_\Omega &a_h\,|\nabla(u-\widehat u_h)|^2\,dx \le Ch\, \|a\|_{1,\infty,\Omega^+\cup\,\Omega^-}
|u|_{1,\Omega\setminus S_h^\gamma}|u- \widehat u_h|_{1,\Omega\setminus S_h^\gamma}\\
&+C\,h\, (\|a\|_{0,\infty,\Omega} + \|a\|_{1,\infty,\Omega^+\cup\,\Omega^-})\\
&\quad\times\sum_{T\in \mathscr{T}_h^\gamma}
\Big(|\tilde u^+|_{2,T}\|\nabla(u^+-\widehat u_h)\|_{0,T\cap\Omega^+}+
|\tilde u^-|_{2,T}\|\nabla(u^--\widehat u_h)\|_{0,T\cap\Omega^-}\Big)\\
&\le C\,h\,\|a\|_{1,\infty,\Omega^+\cup\,\Omega^-}\,|u|_{1,\Omega\setminus S_h^\gamma}\,
|u- \widehat u_h|_{1,\Omega\setminus S_h^\gamma} \\
&\qquad+C\,h\,(\|a\|_{0,\infty,\Omega} + \|a\|_{1,\infty,\Omega^+\cup\,\Omega^-})(|\tilde u^+|_{2,S_h^\gamma}
+ |\tilde u^-|_{2,S_h^\gamma})\,\|\nabla(u-\widehat u_h)\|_{0,S_h^\gamma}\\
&\le C\,h\,(\|a\|_{0,\infty,\Omega} + \|a\|_{1,\infty,\Omega^+\cup\,\Omega^-})\,
|u|_{2,\Omega^+\cup\,\Omega^-} \|\nabla(u-\widehat u_h)\|_{0,\Omega},
\end{align*}
which by \eqref{Prop-ah-3} implies
\begin{equation}\label{eq:est-u-hat-u}
|u-\widehat u_h|_{1,\Omega}\le C\,h\,(\|a\|_{0,\infty,\Omega} + \|a\|_{1,\infty,\Omega^+\cup\,\Omega^-})\,|u|_{2,\Omega^+\cup\,\Omega^-}.
\end{equation}
To bound the norm $|\widehat u_h-u_h^F|_{1,\Omega}$, we have from problems \eqref{Ah} and \eqref{AFEM},
$$
\int_\Omega a_h\nabla(\widehat u_h-u_h^F)\cdot\nabla v\,dx = 0\qquad\forall\ v\in W_h.
$$
Standard finite element approximation theory combined with \eqref{Prop-ah-1}--\eqref{Prop-ah-2} gives
\begin{equation}
|\widehat u_h-u_h^F|_{1,\Omega} \le C\,\inf_{v\in W_h}|\widehat u_h-v|_{1,\Omega},
\label{AbstractBound}
\end{equation}
which together with \eqref{eq:est-u-hat-u} implies
\begin{align*}
|\widehat u_h-u_h^F|_{1,\Omega} &\le C\,|\widehat u_h - I_h u|_{1,\Omega}\\
&\le C\,|\widehat u_h - u|_{1,\Omega} + C\,|u - I_h u|_{1,\Omega}\\
&\le
C\,h\,|u|_{2,\Omega^+\cup\,\Omega^-} + C\,|u - I_h u|_{1,\Omega}.
\end{align*}
The interpolation error is bounded using \eqref{InterpolationErrorGlobal-1} or \eqref{InterpolationErrorGlobal-2}.
\end{proof}
\section{A hybrid approximation}
The method presented in the previous section has proven its efficiency as numerical tests will show
in the last section. In more elaborate problems like time dependent
or nonlinear problems where the interface $\gamma$ is a moving front, the subtriangulation $\mathscr T_h^\gamma$
moves within iterations and then the matrix structure has to be frequently modified. To remedy this difficulty, we resort to
a hybridization of the added unknowns. More specifically, the added discrete space $X_h$ is replaced by a nonconforming
approximation space. In addition, a Lagrange multiplier is used to compensate for this nonconformity.
The hybridization makes it possible to locally eliminate the added unknowns in each triangle
$T\in\mathscr T_h^\gamma$. In the sequel we fix an orientation for the interface $\gamma$.
This induces an orientation of the normals to the edges $e\in \mathscr{E}_h^\gamma$ by
following the interface in the positive direction. The jump of a function $v$ across an
edge $e\in \mathscr{E}_h^\gamma$ can then be defined as
$$
[v]_e(x) := \lim_{s\to 0, s>0}v(x+s n(x)) - \lim_{s\to 0, s<0}v(x+s n(x))\equiv v^+(x)-v^-(x),\qquad x\in e,
$$
where $n$ is the unit normal to $e$.
To develop this method, we start by defining an ad-hoc formulation for the solution $\widehat u_h$ of \eqref{Ah}.
Let us define the spaces
\begin{align*}
\widehat Z_h &:= H^1_0(\Omega) + \widehat Y_h,\\
\widehat Y_h &:=\{v\in L^2(\Omega);\ v_{|\Omega\setminus S_h^\gamma}=0,
\ v_{|T}\in H^1(T)\ \forall\ T\in\mathscr T_h^\gamma,\\
&\qquad [v]=0\text{ on }e,\ \forall\ e\in\mathscr E_h\setminus\mathscr E_h^\gamma\},\\
\widehat Q_h &:= \prod_{e\in\mathscr E_h^\gamma}H^{-\frac 12}_{00}(e),
\end{align*}
where $H^{-\frac 12}_{00}(e)$ is the dual space of the trace space
$$
H^{\frac 12}_{00}(e) := \{v_{|e};\ v\in H^1(T),\ e\in\mathscr E_T,\ v=0\text{ on }d\quad \forall\ d\in\mathscr E_T, d\ne e\}.
$$
We remark that the jumps $[v]$ for $v\in\widehat Z_h$ can be interpreted in $H^{\frac 12}_{00}(e)$ for $e\in\mathscr E_h^\gamma$. This is due to the fact that $v\in H^1(T)$ for all $T\in\mathscr T_h$, that for every $e\in\mathscr E_h^\gamma$, the jump of $v$ lies in $H^{\frac 12}(e)$ and vanishes at the endpoints of $e$ as well as on at least two adjacent edges. This motivates the choice of $\widehat Q_h$.
The elements of $\widehat Q_h$ will be referred to by $\mu = (\mu_e)_{e\in \mathscr E_h^\gamma}$.
We endow $\widehat Z_h$ with the broken norm
$$
\|u\|_{\widehat Z_h} = (\sum_{T\in \mathscr T_h} |u|^2_{1,T})^{1/2}.
$$
On $\widehat Q_h$ we use the norm
$$
\|\mu\|_{\widehat Q_h} = \Big(\sum_{e\in \mathscr E_h^\gamma} \|\mu_e\|^2_{H^{-\frac 12}_{00}(e)}\Big)^{\frac 12}
:= \Bigg(\sum_{e\in \mathscr E_h^\gamma}
\bigg(\sup_{v\in H^{\frac 12}_{00}(e)\setminus\{0\}} \frac{\int_e \mu_e v\,ds}{\|v\|_{H^{\frac 12}_{00}(e)}}\bigg)^2\Bigg)^{\frac 12}.
$$
Above, the integrals over edges $e$ are to be interpreted as duality pairings between $H^{-\frac 12}_{00}(e)$ and $H^{\frac 12}_{00}(e)$.
We mention that the broken norm in $\widehat Z_h$ reflects the fact that $\widehat Z_h$ is not a subspace of $H_0^1(\Omega)$.
Next we define the variational problem,
\begin{alignat}{2}
&\text{Find }(\widehat u_h^H,\widehat\lambda_h)\in \widehat Z_h\times \widehat Q_h\text{ such that:}\nonumber\\
&\sum_{T\in\mathscr T_h}\int_T a_h\,\nabla \widehat u_h^H\cdot\nabla v\,dx - \sum_{e\in\mathscr E_h^\gamma}\int_e\widehat\lambda_h\, [v]\,ds = \int_\Omega fv\,dx
&&\qquad\forall\ v\in\widehat Z_h,\label{PbLambda-1}\\
&\sum_{e\in\mathscr E_h^\gamma}\int_e\mu\, [\widehat u_h^H]\,ds = 0&&\qquad\forall\ \mu\in \widehat Q_h.\label{PbLambda-2}
\end{alignat}
The saddle point problem \eqref{PbLambda-1}--\eqref{PbLambda-2} indicates that the continuity of $\widehat u_h$ across the edges of $\mathscr E_h^\gamma$ is enforced by a Lagrange multiplier technique.
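Once the spaces are replaced by finite dimensional ones (as is done below with $Z_h$ and $Q_h$), such a formulation leads, at the algebraic level, to a linear system with the familiar saddle point structure (a schematic description, with block names introduced here only for illustration)
$$
\begin{pmatrix} \mathbb{A} & \mathbb{B}^{T}\\ \mathbb{B} & 0\end{pmatrix}
\begin{pmatrix} U\\ \Lambda\end{pmatrix}
=
\begin{pmatrix} F\\ 0\end{pmatrix},
$$
where $\mathbb{A}$ collects the element contributions of the broken diffusion form and $\mathbb{B}$ the jump terms on the edges of $\mathscr E_h^\gamma$. Since the functions of $V_h$ are continuous, only the added unknowns supported in $S_h^\gamma$ enter the blocks $\mathbb{B}$ and $\mathbb{B}^{T}$, which is what makes the local elimination of the added unknowns in each interface triangle possible.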
\begin{thm}
\label{Existence-uH}
Problem \eqref{PbLambda-1}--\eqref{PbLambda-2} has a unique solution
$(\widehat u_h^H,\widehat\lambda_h)\in \widehat Z_h\times \widehat Q_h$. Moreover, we have
$\widehat u_h^H=\widehat u_h$ and the following estimate holds
\begin{equation}
\|\widehat u_h^H\|_{\widehat Z_h} + \|\widehat\lambda_h\|_{\widehat Q_h} \le C\,\|f\|_{0,\Omega},\label{estim-ul}
\end{equation}
with a constant $C$ which is independent of $h$.
\end{thm}
\begin{proof}
Problem \eqref{PbLambda-1}--\eqref{PbLambda-2} can be put in the standard variational form
$$
\left\{
\begin{aligned}{}
&\mathscr A(\widehat u^H_h,v) + \mathscr B(v,\widehat\lambda_h) = (f,v)&&\qquad\forall\ v\in \widehat Z_h,\\
&\mathscr B(\widehat u^H_h,\mu) = 0&&\qquad\forall\ \mu\in \widehat Q_h,
\end{aligned}
\right.
$$
where
\begin{align*}
&\mathscr A(u,v) = \sum_{T\in\mathscr T_h}\int_T a_h\,\nabla u\cdot\nabla v\,dx,\\
&\mathscr B(v,\mu) = - \sum_{e\in\mathscr E_h^\gamma}\int_e \mu\,[v]\,ds,\\
&(f,v) = \int_\Omega fv\,dx.
\end{align*}
The bilinear form $\mathscr A$ is clearly continuous and coercive on the space
$\widehat Z_h\times\widehat Z_h$.
The bilinear form $\mathscr B$ is also continuous on $\widehat Z_h\times\widehat Q_h$.
Next we verify that $\mathscr B$ satisfies the inf-sup condition, i.e. there exists $\delta >0$
such that for every $\mu\in \widehat Q_h$ there exists $v_\mu\in \widehat Z_h$ such that
$$
\mathscr B(v_\mu,\mu) \ge \delta\, \|v_\mu\|_{\widehat Z_h}\|\mu\|_{\widehat Q_h}
$$
{\em i.e.}
\begin{equation}\label{eq:inf-sup-semidiscr}
\sum_{e\in \mathscr{E}_h^\gamma} \int_e \mu_e[v_\mu]\,ds \ge \delta\,\|v_\mu\|_{\widehat Z_h}\|\mu\|_{\widehat Q_h}
\end{equation}
holds.
Given $\mu =(\mu_e)_{e\in\mathscr E^\gamma_h}\in \widehat Q_h$ and an edge $e\in\mathscr{E}_h^\gamma$
choose a triangle $T\in \mathscr{T}_h^\gamma$ which has $e$ as one of its edges.
Define $v_T\in H^1(T)$ as the solution of
\begin{equation}
\left\{
\begin{aligned}{}
&\Delta v = 0&&\qquad\text{in } T,\\
&\frac{\partial v}{\partial n} = \mu_e &&\qquad\text{on } e,\\
&v = 0 &&\qquad\text{on } \partial T\setminus e,
\end{aligned}
\right.
\label{Pb-mu-v}
\end{equation}
which is equivalent to
$$
\int_T \nabla v\cdot\nabla \varphi\,dx = \int_e\mu_e \varphi\,ds\quad\text{for }\varphi\in H^1_e(T)
$$
where
$$
H^1_e(T) = \{\varphi\in H^1(T);\ \varphi=0 \text{ on }\partial T\setminus e \}.
$$
By Green's theorem we obtain
\begin{align*}
\|\mu_e\|_{-1/2,e} &= \Big\|\frac{\partial v_T}{\partial n}\Big\|_{-1/2,e}\le \|\nabla v_T\|_{0,T},\\
\int_e\mu_e v_T\, ds &= \int_T |\nabla v_T|^2\,dx,
\end{align*}
which implies
\[
\|\mu_e\|_{-1/2,e}^2 \le \int_T |\nabla v_T|^2\,dx = \int_e\mu_e v_T\, ds.
\]
Let $\chi_T$ denote the characteristic function of $T$ and define
\[
v_\mu = \sum_{T\in \mathscr{T}_h^\gamma} \chi_T v_T.
\]
Since there are as many edges in $\mathscr{E}_h^\gamma$ as triangles in $\mathscr{T}_h^\gamma$, the identity
$[v_\mu] = v_T$ holds on every edge $e\in \mathscr{E}_h^\gamma$. Hence we obtain
\[
\|\mu\|^2_{\widehat Q_h} = \sum_{e\in \mathscr{E}_h^\gamma} \|\mu_e\|_{-1/2,e}^2 \le
\sum_{T\in\mathscr{T}_h^\gamma} \|\nabla v_T\|^2_{0,T} = \sum_{e\in \mathscr{E}_h^\gamma}\int_e
\mu_e [v_\mu]\,ds.
\]
Furthermore,
\[
\|v_\mu\|^2_{\widehat Z_h} = \sum_{T\in\mathscr{T}_h}\|\nabla v_T\|^2_{0,T} =
\sum_{T\in\mathscr{T}_h^\gamma}\|\nabla v_T\|^2_{0,T}
\]
holds. This implies
\[
\|\mu\|^2_{\widehat Q_h}\|v_\mu\|^2_{\widehat Z_h} \le \Big(\sum_{T\in\mathscr{T}_h^\gamma}
\|\nabla v_T\|^2_{0,T}\Big)^2 = \mathscr B(v_\mu,\mu)^2.
\]
Adjusting the sign of $v_\mu$ this is equivalent to \eqref{eq:inf-sup-semidiscr} with $\delta = 1$. The estimate \eqref{estim-ul} is a direct
consequence of \eqref{eq:inf-sup-semidiscr}.
Now, it is clear from \eqref{PbLambda-2} that
$$
[\widehat u_h^H]=0\quad\text{on }e,\quad\forall\ e\in\mathscr E_h^\gamma.
$$
This implies that $\widehat u_h^H\in H^1_0(\Omega)$. Choosing a test function $v\in H^1_0(\Omega)$ in \eqref{PbLambda-1},
we find that $\widehat u_h^H$ is a solution to Problem \eqref{Ah}, and then $\widehat u^H_h=\widehat u_h$.
The interpretation of $\widehat\lambda_h$ is obtained simply by Green's formula.
\end{proof}
We are now able to present a numerical method to solve the interface problem. It is simply derived
as a finite element method for the saddle point problem \eqref{PbLambda-1}--\eqref{PbLambda-2}. To this end, we consider
a piecewise constant approximation of the Lagrange multiplier. Let us define the finite dimensional spaces,
\begin{align*}
Z_h &:= V_h + Y_h,\\
Y_h &:=\{v\in L^2(\Omega);\ v_{|\Omega\setminus S_h^\gamma}=0,\ v_{|K}\in
P_1(K)\ \forall\ K\in\mathscr T_T^\gamma,\ \forall\ T\in\mathscr T_h^\gamma,\\
&\qquad [v]=0\text{ on }e,\ \forall\ e\in\mathscr E_h\setminus\mathscr E_h^\gamma\},\\
Q_h &:= \big\{\mu\in \prod_{e\in\mathscr E_h^\gamma}L^2(e);\ \mu_{|e}=\text{const.}\quad \forall\ e\in\mathscr E_h^\gamma\big\}.
\end{align*}
The hybrid finite element approximation is given by the following problem:
\begin{alignat}{2}
&\text{Find }(u^H_h,\lambda_h)\in Z_h\times Q_h\text{ such that:}\nonumber\\
&\sum_{T\in\mathscr T_h}\int_T a_h\,\nabla u^H_h\cdot\nabla v\,dx - \sum_{e\in\mathscr E_h^\gamma}\int_e\lambda_h\, [v]\,ds = \int_\Omega fv\,dx
&&\qquad\forall\ v\in Z_h,\label{PbLambda-h-1}\\
&\sum_{e\in\mathscr E_h^\gamma}\int_e\mu\, [u^H_h]\,ds = 0&&\qquad\forall\ \mu\in Q_h.\label{PbLambda-h-2}
\end{alignat}
Let us give some additional remarks before proving convergence properties of this method.
1. The matrix formulation of the method has the following form
\begin{equation}
\begin{pmatrix}
A & C & 0 \\ C^T & D & B \\ 0 & B^T & 0
\end{pmatrix}
\begin{pmatrix}
\widetilde u \\ \widetilde v \\ \widetilde\lambda
\end{pmatrix}
=
\begin{pmatrix}
b \\ c \\ 0
\end{pmatrix},
\label{LS}
\end{equation}
where the vector $\widetilde u$ contains the values of $u_h^H$ at nodes of the mesh $\mathscr T_h$, {\em i.e.} components
of $u_h^H$ in the Lagrange basis of $V_h$, $\widetilde v$ contains the components of $u_h^F$ in the basis of
$Y_h$, and $\widetilde \lambda$ has as components the values of $\lambda_h$ on the edges of $\mathscr E_h^\gamma$. There is clearly no simple way to
eliminate the off-diagonal blocks of the system \eqref{LS} so as to decouple the variables; more specifically,
our aim is to eliminate the unknowns $\widetilde v$.
2. The method must be viewed in the context of an iterative process like the Uzawa method, where
the Lagrange multiplier $\lambda_h$ is decoupled from the primal variable $u_h^H$. In such a setting, each iteration step
consists in solving an elliptic problem with a given $\lambda_h$. Let us recall that, owing to the local support
of the basis functions associated with the nodes on edges of $\mathscr E_h^\gamma$, the unknowns attached to these nodes
can be eliminated at the element level. This is a key feature of our method.
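To make this concrete, here is a minimal, illustrative sketch (in Python with SciPy, not the implementation used for the experiments below) of an Uzawa-type loop for the block system \eqref{LS}: the blocks $A$, $C$, $D$, $B$ and the right-hand sides $b$, $c$ are assumed to be available from the assembly of \eqref{LS}, and \texttt{rho} is a user-chosen relaxation parameter.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uzawa(A, C, D, B, b, c, rho=1.0, tol=1e-10, maxit=500):
    # Primal block K = [[A, C], [C^T, D]], factorized once and reused.
    n_u = A.shape[0]
    K = sp.bmat([[A, C], [C.T, D]], format="csc")
    solve_K = spla.factorized(K)
    lam = np.zeros(B.shape[1])
    for _ in range(maxit):
        # Elliptic solve for a given multiplier lam.
        uv = solve_K(np.concatenate([b, c - B @ lam]))
        u, v = uv[:n_u], uv[n_u:]
        # The residual of the constraint B^T v = 0 drives the dual update.
        g = B.T @ v
        lam = lam + rho * g
        if np.linalg.norm(g) <= tol:
            break
    return u, v, lam
\end{verbatim}
In an actual implementation the unknowns $\widetilde v$ would, as noted above, be eliminated at the element level before the solve.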
3. We point out that equation \eqref{PbLambda-h-2} entails
\begin{equation}
[u_h^H]=0\quad\text{on }e,\quad\forall\ e\in\mathscr E_h^\gamma.\label{Cont-uhH}
\end{equation}
This follows from the fact that $u_h^H$ is an affine function on each edge $e\in\mathscr E_h^\gamma$. This implies that actually $u_h^H\in W_h$. Choosing $v\in W_h$ in \eqref{PbLambda-h-1} we find
$$
\int_\Omega a_h\nabla u_h^H\cdot\nabla v\,dx = \int_\Omega fv\,dx.
$$
This yields $u_h^H= u_h^F$.
\section{Convergence analysis}
This section is devoted to the proof of existence, uniqueness and stability of the solution
of \eqref{PbLambda-h-1}--\eqref{PbLambda-h-2} as well as its convergence to Problem \eqref{PbLambda-1}--\eqref{PbLambda-2}.
For this result we need a localized quasi-uniformity of the mesh. More precisely, we assume that
\begin{equation}
|e| \ge Ch\qquad\forall\ e\in\mathscr E_h^\gamma.\label{QU}
\end{equation}
In addition, we make the following assumption:
\begin{equation}
\begin{aligned}{}
&\text{The distance of the intersection point of $\gamma$ with any edge $e\in\mathscr E_h^\gamma$}\\
&\text{to the endpoints of $e$ can be bounded from below by $\delta h$, where $\delta$ is}\\
&\text{independent of $h$.}
\label{Hyp}
\end{aligned}
\end{equation}
Although this assumption appears to be quite restrictive, numerical tests have shown that it can actually be ignored in applications.
\begin{thm}
\label{EU:Discrete}
Assume that the family of meshes $(\mathscr T_h)_h$ satisfies Property \eqref{QU}.
Then Problem \eqref{PbLambda-h-1}--\eqref{PbLambda-h-2} has a unique solution. Moreover, we have the bound
\begin{equation}
\|u^H_h\|_{\widehat Z_h} + \|\lambda_h\|_{\widehat Q_h} \le C\,\|f\|_{0,\Omega},
\label{Stability}
\end{equation}
where the constant $C$ is independent of $h$.
\end{thm}
\begin{proof}
It is clearly sufficient to prove the inf-sup condition (see for instance Brezzi-Fortin \cite{BF}):
\begin{equation}
\sup_{v_h\in Z_h\setminus\{0\}}\dfrac{\sum_{e\in\mathscr E_h^\gamma}\int_e\mu_h\,[v_h]\,ds}
{\|v_h\|_{\widehat Z_h}\,\|\mu_h\|_{\widehat Q_h}}\ge \beta > 0
\qquad\forall\ \mu_h\in Q_h.
\label{LBB}
\end{equation}
In the following, for each triangle $T\in\mathscr T_h^\gamma$, we shall denote
by $e^+_T$ ({\sl resp.} $e^-_T$) the edge where $\gamma$ enters $T$ ({\sl resp.} leaves $T$), and by $\tilde e_T$
the remaining edge of $T$ (see Figure \ref{Fig-2}). Recall that we fixed an orientation for $\gamma$.
\begin{figure}
\caption{A triangle $T\in\mathscr T_h^\gamma$ cut by $\gamma$, with the edges $e^+_T$, $e^-_T$ and $\tilde e_T$.}
\label{Fig-2}
\end{figure}
Let $\mu_h\in Q_h$, and let $v=v_{\mu_h}\in\widehat Z_h$ be the function constructed above from Problem \eqref{Pb-mu-v}.
We define a function $v_h\in Z_h$ by
\begin{equation}
\left\{
\begin{aligned}{}
&v_{h|T} = 0 &&\qquad\forall\ T\in\mathscr T_h\setminus\mathscr T_h^\gamma,\\
&\int_e v_h\,ds = \int_e v\,ds &&\qquad\forall\ e\in\mathscr E_T,\ \forall\ T\in\mathscr T_h^\gamma.
\end{aligned}
\right.
\label{def-vh}
\end{equation}
The gradient of $v_h$ can be expressed in $T\in\mathscr T_h^\gamma$ by
$$
\nabla v_{h|T} = \frac 2{|e^-_T|}\bigg(\int_{e^-_T}v\,ds\bigg)\,\nabla\varphi_{e^-_T}
+ \frac 2{|e^+_T|}\bigg(\int_{e^+_T}v\,ds\bigg)\,\nabla\varphi_{e^+_T},
$$
where $\varphi_{e^+_T}$ ({\sl resp.} $\varphi_{e^-_T}$) is the basis function of $Z_h$ associated to the
added node on $e^+_T$ ({\sl resp.} $e^-_T$).
Then by using \eqref{QU} and the Cauchy-Schwarz inequality, we get for each $T\in\mathscr T_h^\gamma$,
\begin{align}
\|\nabla v_h\|_{0,T} &\le C_1\,h^{-1}\bigg|\int_{e^-_T}v\,ds\bigg|\,\|\nabla\varphi_{e^-_T}\|_{0,T}
+ C_2\,h^{-1}\bigg|\int_{e^+_T}v\,ds\bigg|\,\|\nabla\varphi_{e^+_T}\|_{0,T}\nonumber\\
&\le C_3\,h^{-\frac 12}\,\big(\|v\|_{0,e_T^-}\|\nabla\varphi_{e^-_T}\|_{0,T}
+ \|v\|_{0,e_T^+}\,\|\nabla\varphi_{e^+_T}\|_{0,T}\big).\label{ident-1}
\end{align}
The trace inequality (see \cite{Arnold}, eq. (2.5)) and the Poincar\'e inequality
(which applies since $v=0$ on $\tilde e_T$) yield, for $T\in\mathscr T_h^\gamma$,
\begin{equation}
\|v\|_{0,e^\pm_T}
\le C_4\,\big(h^{-\frac 12}\|v\|_{0,T}+h^{\frac 12}\|\nabla v\|_{0,T}\big)
\le C_5\,h^{\frac 12}\|\nabla v\|_{0,T}.\label{ident-2}
\end{equation}
On the other hand, Assumption \eqref{Hyp} implies the uniform boundedness of
$\|\nabla\varphi_{e^\pm_T}\|_{0,T}$. From \eqref{ident-1} and \eqref{ident-2} we obtain then
$$
\|\nabla v_h\|_{0,T} \le C_6\,\|\nabla v\|_{0,T}.
$$
Using the inf-sup condition \eqref{eq:inf-sup-semidiscr} and \eqref{def-vh}, we finally obtain
\begin{align*}
\|\mu_h\|_{\widehat Q_h}\|v_h\|_{\widehat Z_h}
&\le C_6\,\|\mu_h\|_{\widehat Q_h}\|v\|_{\widehat Z_h}\\
&\le C_7 \sum_{e\in\mathscr E_h^\gamma}\int_e \mu_h\,[v]\,ds\\
&= C_7\,\sum_{e\in\mathscr E_h^\gamma}\int_e \mu_h\,[v_h]\,ds.
\end{align*}
Finally, obtaining the estimate \eqref{Stability} is a classical task that we skip here.
\end{proof}
We now prove the main convergence result.
\begin{thm}
\label{Convergence-u}
Assume hypotheses \eqref{MeshRegularity} and \eqref{QU} are satisfied, then
there exists a constant $C$, independent of $h$, such that
$$
\|u-u_h^H\|_{\widehat Z_h} \le
\begin{cases}
Ch^{1-\theta}\,|u|_{2,\Omega^+\cup\Omega^-}&\text{if }u\in H^2(\Omega^+\cup\Omega^-),\\
Ch\,\|u\|_{2,\infty,\Omega^+\cup\Omega^-}&\text{if }u\in W^{2,\infty}(\Omega^+\cup\Omega^-).
\end{cases}
$$
\end{thm}
\begin{proof}
From classical theory of saddle point problems (see \cite{GiraultRaviart}, p. 114),
we obtain from Theorem \ref{EU:Discrete},
\begin{equation}
\|\widehat u_h^H-u_h^H\|_{\widehat Z_h} + \|\widehat\lambda_h-\lambda_h\|_{\widehat Q_h}\le C\,
\Big(\inf_{v\in Z_h}\|\widehat u_h^H-v\|_{\widehat Z_h}+\inf_{\mu\in Q_h}\|\widehat\lambda_h-\mu\|_{\widehat Q_h}\Big).
\label{AbstractErrorEstimate-1}
\end{equation}
Furthermore, using (Braess \cite{Braess}, Theorem 4.8), Property \eqref{Cont-uhH}
implies that Estimate \eqref{AbstractErrorEstimate-1} can be improved, for the error on $\widehat u_h^H$ by
\begin{equation}
\|\widehat u_h^H-u_h^H\|_{\widehat Z_h}\le C\,\inf_{v\in Z_h}\|\widehat u_h^H-v\|_{\widehat Z_h}.
\label{AbstractErrorEstimate-2}
\end{equation}
To bound the right-hand side, we choose $v=I_hu$, where $I_h$ is the previously defined
Lagrange interpolant in $Z_h$. Since $\widehat u_h^H=\widehat u_h$ (see Theorem \ref{Existence-uH}),
then by using \eqref{InterpolationErrorGlobal-1} and \eqref{eq:est-u-hat-u},
\begin{align*}
\|\widehat u_h^H-I_hu\|_{\widehat Z_h} &= \|\widehat u_h-I_hu\|_{\widehat Z_h}\\
&\le \|u-I_h u\|_{\widehat Z_h} + \|u-\widehat u_h\|_{\widehat Z_h}\\
&\le C_1\,h^{1-\theta}\,|u|_{2,\Omega^+\cup\Omega^-} + C_2\,h\,|u|_{2,\Omega^+\cup\Omega^-}.
\end{align*}
If $u\in W^{2,\infty}(\Omega^+\cup\Omega^-)$, then Estimate \eqref{InterpolationErrorGlobal-2} yields
\begin{equation}
\|\widehat u_h^H-I_hu\|_{\widehat Z_h} \le C\,h\,\|u\|_{2,\infty,\Omega^+\cup\Omega^-}.
\label{ErrorInterp}
\end{equation}
\end{proof}
\begin{rem}
As it was previously mentioned, we know that if Problem \eqref{PbLambda-1}--\eqref{PbLambda-2} has a unique
solution $(u_h^H,\lambda_h)$ then $u_h^H=u_h^F$ where $u_h^F$ is the solution of Problem
\eqref{AFEM} and therefore the error estimate \eqref{error-uhf} holds. Consequently, Theorem \ref{Convergence-u}
can be proven simply by obtaining a nonuniform inf-sup condition ({\em i.e.} \eqref{LBB} with $\beta=\beta(h)$).
This can be achieved without assuming \eqref{Hyp}. In this case, no error estimate is to be expected for
the Lagrange multiplier.
\end{rem}
Finally, since the Lagrange multiplier $\widehat\lambda_h$ can be interpreted in terms of $\widehat u_h$
(see Theorem \ref{Existence-uH}), it is interesting to see how good is its approximation $\lambda_h$.
Let, for this, $E_h$ denote the set
$$
E_h := \prod_{e\in\mathscr E_h^\gamma}e.
$$
\begin{thm}
Under the same hypotheses as in Theorem \ref{Convergence-u}, we have the following error bounds
$$
\|\widehat\lambda_h-\lambda_h\|_{\widehat Q_h}\le
\begin{cases}
C(h^{1-\theta} + h^{\frac 12})\,\Big(|u|_{2,\Omega^+\cup\Omega^-}+\|\widehat\lambda_h\|_{0,E_h}\Big)
&\text{if }\widehat\lambda_h\in L^2(E_h),\\
Ch^{1-\theta}\,\Big(|u|_{2,\Omega^+\cup\Omega^-}+\|\widehat\lambda_h\|_{\frac 12,E_h}\Big)
&\text{if }\widehat\lambda_h\in H^{\frac 12}(E_h).
\end{cases}
$$
\end{thm}
\begin{proof}
We use the abstract error bound \eqref{AbstractErrorEstimate-1}. Let, for $e\in\mathscr E_h^\gamma$,
$$
\lambda_e := \frac 1{|e|}\int_e\widehat\lambda_h\,ds.
$$
Using Lemma 7 in Girault--Glowinski \cite{GG}, we obtain the bound
$$
\|\widehat\lambda_h-\lambda_e\|_{H^{-\frac 12}_{00}(e)}\le Ch^{\frac 12}\|\widehat\lambda_h\|_{0,e}
\qquad\text{if }\widehat\lambda_h\in L^2(e),
$$
and
$$
\|\widehat\lambda_h-\lambda_e\|_{H^{-\frac 12}_{00}(e)}\le Ch\,\|\widehat\lambda_h\|_{\frac 12,e}
\qquad\text{if }\widehat\lambda_h\in H^{\frac 12}(e).
$$
Combining these bounds with \eqref{AbstractErrorEstimate-1}, \eqref{AbstractErrorEstimate-2}
and \eqref{ErrorInterp} achieves the proof.
\end{proof}
\section{A numerical test}
To test the efficiency and accuracy of our method, we present in this section a numerical test.
We consider an exact radial solution and test convergence rates in various norms.
Let $\Omega$ denote the square $\Omega=(-1,1)^2$ and let the function $a$ be given by
$$
a(x) = \begin{cases}
\alpha &\text{if }|x|<R_1,\\
\beta &\text{if }|x|\ge R_1,
\end{cases}
$$
where $\alpha,\beta>0$. We test the exact solution
$$
u(x) =
\begin{cases}
\dfrac 1{4\alpha}\,(R_1^2-|x|^2) + \dfrac 1{4\beta}(R^2_2-R^2_1) &\text{if }|x|<R_1,\\
\dfrac 1{4\beta}\,(R^2_2-|x|^2) &\text{if }|x|\ge R_1.
\end{cases}
$$
We choose $R_1=0.5$ and $R_2=\sqrt{2}$.
The function $f$ and the Dirichlet boundary conditions are determined according to this choice. Note that,
unlike the model problem presented above, we deal here with nonhomogeneous boundary conditions, but this
does not affect the obtained results.
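As a quick consistency check for the model operator $-\nabla\cdot(a\nabla u)=f$ underlying the variational formulation: since $\Delta(|x|^2)=4$ in two dimensions, each branch of $u$ satisfies $-a\,\Delta u=1$, so that $f\equiv 1$; moreover both $u$ and the flux $a\,\partial_r u=-r/2$ are continuous across the circle $|x|=R_1$, so the interface conditions are indeed satisfied by this exact solution.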
The finite element mesh is made of $2N^2$ equal triangles. According to the definition of $a$,
the interface $\gamma$ is given by the circle of center $0$ and radius $R_1$. The error is measured
in the following discrete norms:
\begin{align*}
&\|e\|_{0,h} := \bigg(\frac 1M\,\sum_{i=1}^M (u(x_i)-u_h(x_i))^2\bigg)^{\frac 12},\\
&\|e\|_{0,\infty} := \max_{1\le i\le M}|u(x_i)-u_h(x_i)|,\\
&\|e\|_{1,h} := \bigg(\sum_{T\in\mathscr T_h} \int_T|I_h(\nabla u)(x)-\nabla u_h|^2\,dx\bigg)^{\frac 12},
\end{align*}
where $x_i$ are the mesh nodes, $M$ is the total number of nodes, and $I_h$ is the piecewise linear interpolant.
We denote in the sequel by $p$ the ratio
$\alpha/\beta$.
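The ``Rate'' columns in the tables below are consistent with the usual computation from the errors at two successive refinements (the mesh size being halved), namely $\log_2(e_h/e_{h/2})$; a small illustrative Python helper, under that assumption:
\begin{verbatim}
import math

def observed_rates(errors):
    # rate_i = log2(e_{i-1} / e_i), mesh size halved at each step
    return [math.log(errors[i - 1] / errors[i], 2)
            for i in range(1, len(errors))]

# ||e||_{0,h} column of Table 1 (standard unfitted P1 method):
print(observed_rates([1.40e-2, 6.78e-3, 3.61e-3, 1.83e-3, 9.44e-4]))
# -> approximately [1.05, 0.91, 0.98, 0.95]
\end{verbatim}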
Table 1 presents convergence rates for the standard $P_1$ finite element method using the unfitted mesh
\eqref{SFEM} with the choice $p=1/10$.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$h^{-1}$ & $\|e\|_{0,h}$ & Rate & $\|e\|_{0,\infty}$ & Rate & $\|e\|_{1,h}$ & Rate \\
\hline
$\phantom{1}10$ & $1.40\times 10^{-2}$ & & $2.02\times 10^{-2}$ & & $6.28\times 10^{-2}$ & \\
\hline
$\phantom{1}20$ & $6.78\times 10^{-3}$ & $1.05$ & $1.09\times 10^{-2}$ & $0.89$ & $5.23\times 10^{-2}$ & $0.26$ \\
\hline
$\phantom{1}40$ & $3.61\times 10^{-3}$ & $0.91$ & $5.81\times 10^{-3}$ & $0.91$ & $3.68\times 10^{-2}$ & $0.51$ \\
\hline
$\phantom{1}80$ & $1.83\times 10^{-3}$ & $0.98$ & $3.06\times 10^{-3}$ & $0.92$ & $2.56\times 10^{-2}$ & $0.52$ \\
\hline
$160$ & $9.44\times 10^{-4}$ & $0.95$ & $1.55\times 10^{-3}$ & $0.98$ & $1.82\times 10^{-2}$ & $0.49$ \\
\hline
\end{tabular}\\
Table 1. Convergence rates for a standard (unfitted) finite element method.
\end{center}
As expected, numerical experiments show poor convergence behavior. Let us consider now the results
obtained by the present method, \emph{i.e.} \eqref{PbLambda-h-1}--\eqref{PbLambda-h-2} or equivalently
\eqref{AFEM}. We obtain for $p=1/10$ and $p=1/100$ the convergence rates illustrated in Tables 2 and 3,
respectively.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$h^{-1}$ & $\|e\|_{0,h}$ & Rate & $\|e\|_{0,\infty}$ & Rate & $\|e\|_{1,h}$ & Rate \\
\hline
$\phantom{1}10$ & $3.45\times 10^{-3}$ & & $4.25\times 10^{-3}$ & & $1.75\times 10^{-2}$ & \\
\hline
$\phantom{1}20$ & $8.18\times 10^{-4}$ & $2.1$ & $1.72\times 10^{-3}$ & $1.3$ & $6.87\times 10^{-3}$ & $1.3$ \\
\hline
$\phantom{1}40$ & $1.70\times 10^{-4}$ & $2.3$ & $5.22\times 10^{-4}$ & $1.7$ & $2.81\times 10^{-3}$ & $1.3$ \\
\hline
$\phantom{1}80$ & $3.94\times 10^{-5}$ & $2.1$ & $1.64\times 10^{-4}$ & $1.7$ & $1.02\times 10^{-3}$ & $1.5$ \\
\hline
$160$ & $8.57\times 10^{-6}$ & $2.2$ & $4.89\times 10^{-5}$ & $1.7$ & $3.59\times 10^{-4}$ & $1.5$ \\
\hline
\end{tabular}\\
Table 2. Convergence rates for the hybrid finite element method with $p=1/10$.
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$h^{-1}$ & $\|e\|_{0,h}$ & Rate & $\|e\|_{0,\infty}$ & Rate & $\|e\|_{1,h}$ & Rate \\
\hline
$\phantom{1}10$ & $3.26\times 10^{-3}$ & & $4.07\times 10^{-3}$ & & $1.69\times 10^{-2}$ & \\
\hline
$\phantom{1}20$ & $7.91\times 10^{-4}$ & $2.0$ & $1.74\times 10^{-3}$ & $1.2$ & $6.65\times 10^{-3}$ & $1.3$ \\
\hline
$\phantom{1}40$ & $1.72\times 10^{-4}$ & $2.2$ & $5.47\times 10^{-4}$ & $1.7$ & $2.72\times 10^{-3}$ & $1.3$ \\
\hline
$\phantom{1}80$ & $4.01\times 10^{-5}$ & $2.1$ & $1.74\times 10^{-4}$ & $1.6$ & $9.88\times 10^{-4}$ & $1.5$ \\
\hline
$160$ & $8.82\times 10^{-6}$ & $2.2$ & $5.22\times 10^{-5}$ & $1.7$ & $3.50\times 10^{-4}$ & $1.5$ \\
\hline
\end{tabular}\\
Table 3. Convergence rates for the hybrid finite element method with $p=1/100$.
\end{center}
Tables 2 and 3 show convergence rates that are even better than the theoretical results.
This is probably due to the choice of a discrete norm but may also be due to a superconvergence
phenomenon.
Rates in the $L^2$--norm also show good behavior. For the $L^\infty$--convergence rate,
we note that we do not retrieve the second order obtained for a problem with continuous coefficients.
However, these rates are better ($1.5$ rather than $1$) than the ones obtained for a standard finite element
method and, moreover, the error values are significantly lower in our case. It is in addition
remarkable that the error values depend very weakly on $p$ and the convergence rates are
independent of this value.
\section{Concluding remarks}
We have presented an optimal rate finite element method to solve interface problems with unfitted meshes.
The main advantage of the method is that the added unknowns that deal with the interface singularity
do not modify the matrix structure. This feature enables using the method in more complex situations
like problems with moving interfaces. The price to pay for this is the use of a Lagrange multiplier
that adds an unknown on each edge cut by the interface. This drawback can be easily
removed by using an iterative method such as the classical Uzawa method or more elaborate methods
like the conjugate gradient method. The good properties of the obtained saddle point problem enable
choosing among a wide variety of dedicated methods. This topic will be addressed in a future work.
Let us also mention that the present finite element method does not specifically address problems
with large jumps in the coefficients. Such problems are in addition ill conditioned, and this difficulty
is not removed by the present technique.
\end{document}
\begin{document}
\begin{abstract}
We adapt the method of Simon \cite{simon1} to prove a $C^{1,\alpha}$-regularity theorem for minimal varifolds which resemble a cone $\mathbf{C}_0^2$ over an equiangular geodesic net. For varifold classes admitting a ``no-hole'' condition on the singular set, we additionally establish $C^{1,\alpha}$-regularity near the cone $\mathbf{C}_0^2 \times \mathbb{R}^m$. Combined with work of Allard \cite{All}, Simon \cite{simon1}, Taylor \cite{taylor}, and Naber-Valtorta \cite{naber-valtorta}, our result implies a $C^{1,\alpha}$-structure for the top three strata of minimizing clusters and size-minimizing currents, and a Lipschitz structure on the $(n-3)$-stratum.
\end{abstract}
\title{The singular set of minimal surfaces near polyhedral cones}
\tableofcontents
\section{Introduction}
In this paper we are interested in the regularity and fine-scale structure of stationary integral varifolds (and varifolds with bounded mean curvature) which resemble \emph{polyhedral-type} cones. That is, we address the following question:
\begin{question}\label{question:main}
Suppose $\mathbf{C}_0^2 \subset \mathbb{R}^{2+k}$ is the cone over an equiangular geodesic net in $\mathbb{S}^{1+k}$ (each junction meeting precisely three arcs), and $M^{2+m}$ is a stationary integral varifold weakly close to $\mathbf{C} = \mathbf{C}_0\times\mathbb{R}^m$. Then what can be said about the regular and singular structure of $M$?
\end{question}
Understanding the relationship between the local (singular) structure of a minimal surface $M$ and its tangent cone $\mathbf{C}$ has been a central question in geometric analysis, even when multiplicity is not a factor. There are many profound and optimal results concerning the \emph{dimension} of the singular set, in various circumstances (e.g. \cite{All}, \cite{Alm}, \cite{DG}, \cite{Fed3}, \cite{ScSi}, \cite{BDG}), but relatively few works have addressed the \emph{structure} of $M$ near singularities (except notably when the singular set has dimension $0$, in which case it is often known to be locally finite \cite{Fed3, Chang}).
Generally the best results are known when $\mathbf{C}$ has smooth cross-section (and multiplicity-one), or when $M$ belongs to a class with very rigid tangent cone structure, with topological obstructions to ``perturbing'' them away.
For example, when $\mathbf{C}$ is smooth and multiplicity-one, the picture is largely complete: using ideas dating back to De Giorgi, Allard \cite{All} and Allard-Almgren \cite{AllAlm} proved that if $\mathbf{C}$ satisfies an \emph{integrability} hypothesis, then $M$ is locally a $C^{1,\alpha}$-perturbation of $\mathbf{C}$; later, in huge generality, Simon \cite{simon:loja} proved that for \emph{any} such $\mathbf{C}$ (not necessarily integrable), $M$ is locally a $C^{1,\log}$-perturbation; and in Adams-Simon \cite{adams-simon} this decay rate was shown to be sharp.
For $2$-dimensional $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets in $\mathbb{R}^3$, Taylor \cite{taylor} (see also \cite{david1}, \cite{david2}) has shown the following beautiful structure theorem: $M^2$ decomposes into a union of $C^{1,\alpha}$ manifolds, meeting along various $C^{1,\alpha}$ curves at $120^\circ$, which in turn meet at isolated tetrahedral junctions. (We remark that, though they may coincide in certain minimization problems, the notions of being $(\mathbf{M},\varepsilon,\delta)$-minimizing and having bounded mean curvature are essentially independent.\footnote{For example, a union of $\geq 2$ intersecting lines is stationary but not $(\mathbf{M}, \varepsilon, \delta)$-minimizing. Conversely, the $1$-d graph of the curve $f(x) = |x|^3 \sin(1/|x|)$ is $(\mathbf{M}, c r^2, 1)$-minimizing in $B_1^2(0)$ but its mean curvature does not lie in any $L^p$ for $p > n = 1$. See Section \ref{sec:clusters} for more background.}) Crucial to Taylor's work is a classification of tangent cones for this class of sets. For certain $(\mathbf{M}, 0, \delta)$-minimizing sets, our work generalizes Taylor's theorem to higher dimensions.
Simon \cite{simon1} was the first to consider the singular structure for general \emph{stationary} integral varifolds, and for varifold classes \emph{without} such rigid tangent cone structure. He considered cones of the form $\mathbf{C} = \mathbf{C}_0^\ell\times \mathbb{R}^m$, where $\mathbf{C}_0$ is smooth and integrable, and proved an ``excess decay dichotomy,'' which loosely says that \emph{either} $M$ has a significant gap in the singular set, \emph{or} the scale-invariant $L^2$-distance $\rho^{-n-2}\int_{M \cap B_\rho} d_{\mathbf{C}}^2$ decreases when the radius is decreased by a fixed factor.
For certain tangent cones which cannot be ``perturbed away,'' like the union of three half-planes, and in general for classes of $M$ admitting some kind of ``no-hole'' condition on the singular set, Simon's result implies that $\mathrm{sing}(M)$ is locally a $C^{1,\alpha}$ manifold (see Theorem \ref{thm:Y-reg}). More generally, he used his decay dichotomy to show countable-rectifiability of the singular set for particular ``multiplicity-one'' classes, e.g. for mod-$2$ minimizing flat chains.
Later, in \cite{simon:rect} Simon used the Lojasiewicz inequality to show countable-rectifiability of each stratum\footnote{There is a subtle difference between the strata of Almgren used in Naber-Valtorta \cite{naber-valtorta}, and the strata used in Simon \cite{simon:rect}: Simon defines $S^m(M)$ to be the set of points for which every tangent cone $\mathbf{C}$ satisfies $\dim(\mathrm{sing} \mathbf{C}) \leq m$, rather than asking for every tangent cone to have $\leq m$ dimensions of symmetry. In particular, if $M^2 = \mathbf{T}$ is the tetrahedral cone (defined in Section \ref{sec:polyhedral}), then $0$ lies in the $0$-stratum for Naber-Valtorta, but only the $1$-stratum for Simon.} of $M$ in \emph{any} ``multiplicity-one class'' (e.g. codimension-$1$ mass-minimizing currents), and almost-everywhere uniqueness of the tangent cones in the singular set. Just recently Naber-Valtorta \cite{naber-valtorta} proved rectifiability of each stratum for general stationary integral varifolds, and rectifiability \emph{with mass bounds} of the singular set for $M$ in any multiplicity-one class.
We generalize the seminal results of \cite{simon1} to prove that whenever $M$ admits a certain ``no-holes'' property on the singular set, and $\mathbf{C}_0$ is integrable, then $M$ as in Question \ref{question:main} must be a $C^{1,\alpha}$-perturbation of $\mathbf{C}$. Integrability loosely means that every infinitesimal motion through polyhedral cones can be generated by a family of rotations, see Section \ref{sec:compatible}. Both of these conditions are satisfied in several natural circumstances, and for a wide class of cones.
Our main Theorem \ref{thm:main-decay} is an excess decay dichotomy in the spirit of Simon, and is given in Section \ref{sec:main-thm}. Here we list two consequences, which correspond to two classes of varifold admitting a no-hole condition.
\begin{theorem}[$\varepsilon$-regularity for polyhedral cones]\label{thm:no-spine-reg}
Let ${\mathbf{C}^0}^2 \subset \mathbb{R}^3 \subset \mathbb{R}^{2+k}$ be a polyhedral cone. There are $\delta(\mathbf{C}^0), \mu(\mathbf{C}^0) \in (0, 1)$ so that if $M$ is an integral varifold with bounded (generalized) mean curvature $H_M$ and no boundary in $B_1$, satisfying
\begin{align}
\theta_M(0) \geq \theta_{\mathbf{C}^0}(0), \quad \mu_M(B_1) \leq \frac{3}{2} \theta_{\mathbf{C}^0}(0), \quad \int_{B_1} \mathrm{dist}(z, \mathbf{C}^0)^2 d\mu_M + ||H_M||_{L^\infty(B_1)} \leq \delta^2,
\end{align}
then $\mathrm{spt} M \cap B_{1/2}$ is a $C^{1,\mu}$-perturbation of $\mathbf{C}^0$.
\end{theorem}
We remark that the restriction $\mathbf{C}_0 \subset \mathbb{R}^3$ is due to integrability: Theorem \ref{thm:no-spine-reg} holds for \emph{any} integrable polyhedral $\mathbf{C}_0$, but we can only verify integrability for those nets in $\mathbb{S}^2$ (indeed, we feel integrability may be generally false in higher codimension).
A second class admitting the no-hole condition consists of varifolds with an associated orientation (i.e. current) structure. These arise naturally as size- and cluster-minimizers, and we correspondingly have the following interior regularity theorem.
\begin{theorem}[Regularity of size-/cluster-minimizers]\label{thm:clusters}
Let $M^n$ be the support of the integral varifold associated to either a minimizing cluster in $U = \mathbb{R}^{n+1}$, or a homologically size-minimizing current in an open set $U$ (e.g. as constructed by Morgan \cite{morgan-size}). Then we can decompose $M \cap U = M_n \cup M_{n-1} \cup M_{n-2} \cup M_{n-3}$ (disjoint union), where:
\begin{enumerate}
\item $M_{n}$ is a locally-finite union of embedded $C^{1,\alpha}$ $n$-manifolds;
\item $M_{n-1}$ is a locally-finite union of embedded, $C^{1,\alpha}$ $(n-1)$-manifolds, near which $M$ is locally diffeomorphic to $\mathbf{Y}^1 \times \mathbb{R}^{n-1}$;
\item $M_{n-2}$ is a locally-finite union of embedded, $C^{1,\alpha}$ $(n-2)$-manifolds, near which $M$ is locally diffeomorphic to $\mathbf{T}^2 \times \mathbb{R}^{n-2}$;
\item $M_{n-3}$ is relatively closed, $(n-3)$-rectifiable, with locally-finite $\mathcal{H}^{n-3}$-measure.
\end{enumerate}
Here $\mathbf{Y}^1$ is the stationary $1$-dimensional cone consisting of three rays, and $\mathbf{T}^2$ is the stationary $2$-dimensional cone over the tetrahedral net in $\mathbb{S}^2$ (see Section \ref{sec:polyhedral} for precise definitions).
\end{theorem}
\begin{remark}
In either of the above cases, standard interior estimates and work of \cite{kns}, \cite{krummel} imply that $M_n$ and $M_{n-1}$ are analytic. By contrast, we suspect $C^{1,\alpha}$ may be sharp for $M_{n-2}$, as there exist Jacobi fields on $\mathbf{T}^2$ which near $0$ are bounded in $C^{1,\alpha}$ but not $C^{2,\alpha}$.
\end{remark}
When $n = 2$, Theorem \ref{thm:clusters} has been established (for general $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets) by Taylor \cite{taylor}. David \cite{david1}, \cite{david2} has given an entirely different proof of Taylor's theorem, and has proven partial generalizations to higher codimension.
For general $n$, conclusions 1) and 2) are respectively consequences of Allard's \cite{All} and Simon's \cite{simon1} work. Conclusion 4) follows from parts 1), 2), 3), and the work of Naber-Valtorta \cite{naber-valtorta}. White \cite{white-announce} has announced a result analogous to Theorem \ref{thm:clusters} parts 2), 3) for general $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets. We mention that, in light of the essentially independent nature of general $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets and varifolds with bounded mean curvature, our main theorem is likewise independent from the result asserted by White.
The very broad strategy of proof is to ``linearize'' the minimal surface operator over $\mathbf{C}$, and use good decay properties of solutions to the linearized problem (called \emph{Jacobi fields}) to prove decay of the minimal surface. In general the linear problem may not adequately capture the non-linear problem, and for this reason we must (as in \cite{simon1}) make two running assumptions: first, we require the polyhedral cone $\mathbf{C}_0$ to be integrable (Definition \ref{def:integrable}), to ensure that every $1$-homogeneous Jacobi field can be realized ``geometrically'' through a family of rotations; second, we require the singular set of $M$ to satisfy a no-holes condition (Definition \ref{def:no-holes}), which prevents the tangent cone from ``gaining'' symmetries not seen in $M$.
Our proof follows Simon \cite{simon1}, but there are several complications when dealing with polyhedral cylindrical cones. Our main contributions are making sense of inhomogeneous blow-ups on cylindrical cones with \emph{singular cross section $\mathbf{C}_0$}, correspondingly defining a good notion of Jacobi field on polyhedral cones, and extending the various non-concentration and growth estimates of Simon to the singular setting. We additionally remove the ``multiplicity-one'' hypothesis from the excess decay theorem (in both our result and Simon's), but we caution the reader that the structural results of Simon still require this hypothesis.
\textbf{Acknowledgments} We are grateful to Guido De Philippis for many interesting discussions and for introducing us to the problem. We thank Spencer Becker-Kahn, Leon Simon, and Neshan Wickramasekera for several helpful conversations. We wish to acknowledge the support of Gigliola Staffilani, whose grant allowed M.C. to visit MIT. N.E. was supported by NSF grant DMS-1606492.
\section{Notation and preliminaries}\label{sec:prelim}
Let us fix some notation. We work in $\mathbb{R}^{n+k} = \mathbb{R}^{\ell + k} \times \mathbb{R}^m \ni (x, y)$. We denote points of $\mathbb{R}^{n+k}$ by capital letters $X$. Write $r = |x|$, and $R = |X| = \sqrt{|x|^2 + |y|^2}$. We shall always write $d_A(x)$ for the Euclidean distance function to a set $A$. We write $B_r(A) = \{ x : d_A(x) < r \}$ for the open $r$-tubular neighborhood of $A$. More generally, given a radius function $r_x : A \to \mathbb{R}$, we write $B_{r_x}(A) = A \cup \bigcup_{x \in A} B_{r_x}(x)$.
Given a linear subspace $V \subset \mathbb{R}^{n+k}$, we write $V^\perp$ to denote its orthogonal complement, and $\pi_V$ for the linear projection operator. Given another linear space $W$, write $\langle V, W\rangle^2 = \sum_{i, j} (e_i \cdot f_j)^2$ for the distance between $V$, $W$, where $\{e_i\}_i$, $\{f_j\}_j$ are choices of orthonormal basis on $V$, $W$. The $\cdot$ always denotes Euclidean inner product.
We will be working with $n = (\ell + m)$-dimensional integral varifolds in $\mathbb{R}^{n+k} = \mathbb{R}^{(\ell+k)+m}$ with bounded mean curvature, and the reader should always think of them as having (almost-)symmetry in the $\{0\}\times \mathbb{R}^m$ factor. Any cone $\mathbf{C}$ will be a rotation of $\mathbf{C}_0^\ell \times \mathbb{R}^m$ where $\mathbf{C}_0$ is $\ell$-dimensional, stationary and either smooth or polyhedral (see Definition \ref{def:polyhedral-cone}).
Typically $M$ will denote a general integral varifold, and $\mu_M$ will be its mass measure. Our integral varifolds will always have bounded mean curvature and no boundary in $B_1$. This means there is a $\mu_M$-a.e. bounded vector field $H_M$, so that
\begin{gather}\label{eqn:first-variation}
\int_M div_M(Y) = - \int_M H_M \cdot Y \quad \forall Y \in C^1_c(B_1, \mathbb{R}^{n+k}).
\end{gather}
Here $div_M(Y)$ is the tangential divergence, defined at $\mu_M$-a.e. point by $div_M(Y) = \sum_i e_i \cdot (D_{e_i} Y)$, for any orthonormal basis $\{e_i\}_i$ of $T_X M$. Of course $M$ is \emph{stationary} if $H_M \equiv 0$.
We shall always write $\theta_M(X, R)$ for the Euclidean density ratio in $B_R(X)$:
\begin{gather}
\theta_M(X, R) := R^{-n} \mu_M(B_R(X)).
\end{gather}
When $|H_M| \leq \Lambda_M$ and $M$ has no boundary in $B_1$, then $\theta_M(X, R)$ is almost-monotone in the sense that
\begin{gather}\label{eqn:monotonicity}
e^{\Lambda_M R} \theta_M(X, R) \text{ is increasing for all $X \in B_1$ and $R < 1-|X|$.}
\end{gather}
In this case the density at $X$ is well-defined
\begin{gather}
\theta_M(X) := \lim_{R \to 0} \theta_M(X, R).
\end{gather}
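Note, for instance, that if $\mathbf{C}$ is a cone with vertex at $0$, then by homogeneity $\theta_{\mathbf{C}}(0, R)$ is independent of $R$ and coincides with the density $\theta_{\mathbf{C}}(0)$; this is the quantity appearing in the density hypotheses of our theorems.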
By the monotonicity \eqref{eqn:monotonicity} any integral varifold having bounded mean curvature and no boundary can be identified with its support plus multiplicity, in the sense that
\begin{gather}
\int f d\mu_M = \int_{\mathrm{spt} M} f \theta d\mathcal{H}^n
\end{gather}
for some $\mu_M$-measurable, integer-valued function $\theta$. We shall make this identification. In particular, we shall use the following shorthand:
\begin{gather}
\int_{M \cap A} f \equiv \int_A f d\mu_M, \quad d_M(x) \equiv d_{\mathrm{spt} M}(x), \quad \phi(M) \equiv \phi_\sharp M.
\end{gather}
Here $\phi : \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$ is some $C^1$ mapping, and $\phi_\sharp$ is the pushforward.
We write $\mathrm{reg} M$ for the set of points in $M$ for which $M$ locally coincides with a $C^{1,\alpha}$ graph, and $\mathrm{sing} M = M \setminus \mathrm{reg} M$.
We may further stratify $M$ using the quantitative strata of Cheeger-Naber \cite{cheeger-naber}. We say a varifold cone $\mathbf{C}$ is $m$-symmetric if it takes the form $q(\mathbf{C}_0\times \mathbb{R}^m)$ for some $q \in SO(n+k)$. The $m$-stratum of $M$ consists of points
\begin{gather}
S^m(M) = \{ X \in M : \text{ no tangent cone at $X$ is $(m+1)$-symmetric} \}.
\end{gather}
Fix a metric $d_{\mathcal{V}}$ on the space of $n$-varifolds which induces varifold convergence. We say $M$ is $(m,\varepsilon)$-symmetric in some ball $B_r(X)$ if $d_{\mathcal{V}}(r^{-1} (M - X) \llcorner B_1, \mathbf{C} \llcorner B_1) < \varepsilon$ for some $m$-symmetric cone $\mathbf{C}$. The $(m, \varepsilon)$-stratum then consists of points
\begin{gather}
S^m_\varepsilon(M) = \{ X \in M : \text{ $M$ is \emph{not} $(m+1, \varepsilon)$-symmetric in $B_r(X)$ for all $r < 1$} \}.
\end{gather}
We will often use the following local H\"older semi-norm. Suppose $\mathbf{C}$ is a cone, $f : \Omega \subset \mathbf{C} \to \mathbf{C}^\perp$, and $(x, y) \in \mathbf{C} \cap B_{1/2}$. Then we define
\begin{gather}
[f]_{\alpha, \mathbf{C}}(x, y) = \sup \left\{ \frac{|f(Z) - f(W)|}{|Z - W|^\alpha} : Z, W \in \Omega \cap B_{|x|/4}(x, y) \right\}.
\end{gather}
One can easily verify the following compactness: if $f_i : \mathbf{C}_i \cap B_1 \to \mathbf{C}_i^\perp$ is a sequence of functions, and $\mathbf{C}_i \to \mathbf{C}$ in $C^{1,\alpha}_{loc}$, and $[f_i]_{\alpha,\mathbf{C}_i}$ is uniformly bounded on compact subsets, then after passing to a subsequence we have $f_i \to f : \mathbf{C} \cap B_1 \to \mathbf{C}^\perp$ in $C^{0,\alpha'}_{loc}$ for any $\alpha' < \alpha$.
\subsection{One-sided excess, holes}
We shall prove decay of the following \emph{one-sided} excess.
\begin{definition}
Let $M$ be an integral varifold with bounded mean curvature, $\mathbf{C}$ a stationary integral varifold cone, and $\delta \in (0, 1]$. Then we define
\begin{gather}
E_\delta(M, \mathbf{C}, X, R) = R^{-n-2} \int_{M \cap B_R(X)} d_{\mathbf{C}}^2 + \delta^{-1} R ||H_M||_{L^\infty(M \cap B_R)}.
\end{gather}
Notice $E$ is scale-invariant, in the sense that $E_\delta(M, \mathbf{C}, X, R) = E_\delta(R^{-1}(M - X), R^{-1}(\mathbf{C} - X), 0, 1)$. When $\delta = 1$ we may write $E$ instead of $E_\delta$, and when $X = 0$ we may simply write $E_\delta(M, \mathbf{C}, R)$.
\end{definition}
\begin{remark}
The factor of $\delta^{-1}$ effectively ``captures'' the region where $R ||H_M||_{L^\infty(B_R)}$ is much smaller than the $L^2$-distance. Our ultimate blow-up argument must work in this regime -- it cannot hope to see regions where $L^2$-excess is controlled by mean curvature, since in this case excess decay is dominated by the scaling of $H_M$.
\end{remark}
\begin{definition}
Take a fixed cone $\mathbf{C} = \mathbf{C}_0^\ell\times \mathbb{R}^m$, and $\varepsilon > 0$. We let $\mathcal{C}_{\varepsilon}(\mathbf{C})$ be the collection of cones
\begin{gather}
\mathcal{C}_{\varepsilon}(\mathbf{C}) = \{ q(\mathbf{C}) : q \in SO(n+k) \text{ with } |q - Id| \leq \varepsilon \}.
\end{gather}
Define $\mathcal{N}_\varepsilon(\mathbf{C}^0)$ to be the set of integral $n$-varifolds $M^n \subset \mathbb{R}^{n+k}$, having bounded mean curvature and no boundary in $B_1$, satisfying
\begin{gather}
0 \in M, \quad \mu_M(B_1) \leq \frac{3}{2} \theta_{\mathbf{C}^0}(0), \quad E(M, \mathbf{C}^0, 0, 1) \leq \varepsilon^2.
\end{gather}
\end{definition}
In general, even an isolated singularity could potentially have a tangent cone with lots of symmetry. To prove decay of $M$ towards a cone $\mathbf{C}^0 = {\mathbf{C}^0_0}^\ell \times \mathbb{R}^m$ with a spine of singularities, we must, like in \cite{simon1}, impose a ``no-holes'' condition on $M$. In certain circumstances the no-hole condition can be deduced for topological reasons (note that no lower density assumptions are made in the class $\mathcal{N}_\varepsilon$).
\begin{definition}\label{def:no-holes}
Take the cone ${\mathbf{C}^0}^n = {\mathbf{C}^0_0}^\ell\times \mathbb{R}^m$. We say $M^{n}$ satisfies the \emph{$\delta$-no-holes condition} in $B_r$ w.r.t. $\mathbf{C}^0$ if the following holds: for any $y \in B_r^m$, there is some $X \in B_\delta(0, y)$ with $\theta_M(X) \geq \theta_{\mathbf{C}^0}(0)$.
\end{definition}
\subsection{Polyhedral cones}\label{sec:polyhedral}
We are concerned with the following types of cones.
\begin{definition}\label{def:polyhedral-cone}
A $2$-dimensional cone $\mathbf{C}_0^2 \subset \mathbb{R}^{2+k}$ is \emph{polyhedral} if:
\begin{enumerate}
\item[A)] $\mathbf{C}_0$ is the cone over some multiplicity-1 geodesic net in $\mathbb{S}^{1+k}$, having the property that every junction has precisely three edges meeting at $120^\circ$ (nets with this property are sometimes called \emph{equiangular}), and
\item[B)] the cone $\mathbf{C}_0$ has no additional symmetries, i.e. we cannot write $\mathbf{C}_0 = q(\mathbf{C}_0' \times \mathbb{R})$ for some $q \in SO(2 + k)$, and some $1$-dimensional cone $\mathbf{C}_0'$.
\end{enumerate}
We shall often say a cone $\mathbf{C}_0^2 \times \mathbb{R}^m$ is polyhedral if $\mathbf{C}_0$ is polyhedral.
\end{definition}
The equiangular geodesic nets in $\mathbb{S}^2$ are completely classified, and there are $10$ of them. However, by our Definition \ref{def:polyhedral-cone} only $8$ of these nets give rise to polyhedral cones. For a comprehensive list see Section \ref{sec:int}. Let us remark that, from the work of \cite{AllAlm2}, any integer-multiplicity geodesic net in $\mathbb{S}^{1+k}$ (with finite mass) consists of only finitely many geodesic arcs.
We bring the reader's attention to two important (non-)examples. Define $\mathbf{Y}^1 \subset \mathbb{R}^2$ to be the cone consisting of three rays meeting at $120^\circ$:
\begin{gather}
\mathbf{Y}^1 = \{ (x, 0) : x \geq 0 \} \cup \{ (x, -\sqrt{3}\, x) : x \leq 0 \} \cup \{ (x, \sqrt{3}\, x) : x \leq 0 \} .
\end{gather}
The cone $\mathbf{Y}^1 \times \mathbb{R}$ arises from the geodesic net consisting of three half-great-circles meeting at $120^\circ$. Though of fundamental importance in this paper, cones $\mathbf{Y}^1 \times \mathbb{R}$ and $\mathbb{R}^2$ are \emph{not} considered polyhedral cones.
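As an elementary check, the unit directions of the three rays of $\mathbf{Y}^1$ are $e_1 = (1,0)$, $e_2 = (-\tfrac{1}{2}, \tfrac{\sqrt 3}{2})$, $e_3 = (-\tfrac{1}{2}, -\tfrac{\sqrt 3}{2})$; they satisfy $e_i \cdot e_j = -\tfrac{1}{2} = \cos 120^\circ$ for $i \neq j$, and $e_1 + e_2 + e_3 = 0$, the latter being precisely the balancing (stationarity) condition at the junction.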
Define $\mathbf{T}^2 \subset \mathbb{R}^3$ to be the cone over the tetrahedral net: $\mathbf{T}^2 \cap \mathbb{S}^2$ is the equiangular net having vertices
\begin{gather}
(1, 0, 0), \quad (-1/3, 2\sqrt{2}/3, 0), \quad (-1/3, -\sqrt{2}/3, \sqrt{6}/3), \quad (-1/3, -\sqrt{2}/3, -\sqrt{6}/3).
\end{gather}
The tetrahedral cone $\mathbf{T}^2$ is the archetype of a polyhedral cone, and is the polyhedral cone of least density in $\mathbb{R}^3$. It would be interesting to know whether $\mathbf{T}^2$ is the least density polyhedral cone in any codimension.
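One can check directly that these four points are unit vectors with pairwise inner products equal to $-\tfrac{1}{3}$, i.e. they are the vertices of a regular tetrahedron inscribed in $\mathbb{S}^2$, any two of them subtending the angle $\arccos(-\tfrac{1}{3}) \approx 109.47^\circ$.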
\begin{remark}\label{rem:looks-like-Y}
A very important fact is that if $\mathbf{C} = \mathbf{C}_0^2 \times \mathbb{R}^m$ is polyhedral, then away from the axis $\{0\} \times \mathbb{R}^m$ a small neighborhood of $\mathbf{C}$ is (up to rigid motion) either flat $\mathbb{R}^{2+m}$ or the cone $\mathbf{Y}^1 \times \mathbb{R}^{1+m}$.
\end{remark}
The cone $\mathbf{Y}^1\times \mathbb{R}^{m}$ we shall decompose into three half-planes $H(1) \cup H(2) \cup H(3)$, and write $Q(i)$ for the $m$-plane containing $H(i)$, $n(i)$ for the outer conormal of $\partial H(i) \subset Q(i)$.
To adequately parameterize polyhedral surfaces and cones we require some further notation. We call a subset of the form
\begin{equation}\label{eqn:wedge-example}
W = \{ re^{i\theta} \in \mathbb{R}^2 : \theta \in [\theta_0, \theta_1] \text{ and } r \in [0, \infty) \} \subset \mathbb{R}^2
\end{equation}
a wedge. Given a plane $P^2 = q(\mathbb{R}^2) \subset \mathbb{R}^{n+k}$, for $q \in O(n+k)$, a subset $W \subset P^2$ is a wedge if $q^{-1}(W) \subset \mathbb{R}^2$ is a wedge. We shall write $\mathrm{int} W$ for the ``interior'' points
\begin{gather}
\mathrm{int} W = \{ r e^{i\theta} : \theta \in (\theta_0, \theta_1) \text{ and } r \in (0, \infty) \},
\end{gather}
and $\partial W$ for the ``boundary'' points
\begin{gather}
\partial W = \{ r e^{i\theta} : \theta \in \{\theta_0, \theta_1\} \text{ and } r \in [0, \infty)\}.
\end{gather}
We shall decompose our polyhedral cone $\mathbf{C}_0$ into a union of wedges $W(1), \ldots, W(d)$, meeting along a collection of lines $L(1), \ldots, L(2d/3)$. Let us write $P(i)$ for the $2$-plane containing $W(i)$, and $n(i)$ for the outer conormal of $\partial W(i)$ in $P(i)$. A function $v : \mathbf{C}_0 \to \mathbf{C}_0^\perp$ is interpreted as a collection of functions $v(i) : W(i) \to P(i)^\perp$.
When dealing with polyhedral cones, it will be convenient to have a notion of annulus which is flat near the junctions. Given a wedge $W \subset \mathbb{R}^2$ as in \eqref{eqn:wedge-example}, define the star-shaped curve $S_W$ by letting
\begin{gather}
r(\theta) = \left\{ \begin{array}{ l l}
\frac{1}{\cos(\theta - \theta_0)} & \theta \in [\theta_0 - (\theta_1-\theta_0)/4, \theta_0 + (\theta_1 - \theta_0)/4] \\
\frac{1}{\cos(\theta - \theta_1)} & \theta \in [\theta_1 - (\theta_1 - \theta_0)/4, \theta_1 + (\theta_1 - \theta_0)/4] \\
\frac{1}{\cos((\theta_1 - \theta_0)/4)} & \text{else.} \end{array} \right.
\end{gather}
For all intents and purposes $S_W$ is a circle, but because $S_W$ is linear in a neighborhood of $\partial W$, our lives are simplified when dealing with domains whose boundaries are graphs over $\partial W$. We have
\begin{equation}\label{eqn:circle-inclusions}
S_W \subset B_2 \setminus B_1
\end{equation}
Let us correspondingly define the annular domain
\begin{gather}
A_W(r_1, r_2) = \bigcup_{r_1 < r < r_2} r S_W.
\end{gather}
As before, $A_W(r_1, r_2)$ is essentially just a round annulus, but is slightly adjusted to fit $W$ better. If $W \subset P^2 = q(\mathbb{R}^2)$ is a wedge, then we define $A_W := q(A_{q^{-1}W})$ in the obvious way. As in \eqref{eqn:circle-inclusions}, we have
\begin{equation}\label{eqn:wedge-inclusions}
(B_{r_2} \setminus B_{2r_1}) \cap W \subset A_W(r_1, r_2) \subset (B_{2r_2} \setminus B_{r_1}) \cap W.
\end{equation}
\subsection{Compatible Jacobi fields}\label{sec:compatible}
Let us consider the following model scenario: fix a polyhedral cone $\mathbf{C}^n = \mathbf{C}_0^2\times \mathbb{R}^m$, and take $M^n_t$ to be a $1$-parameter family of minimal surfaces, continuous in the varifold distance, satisfying $M_0 = \mathbf{C}$. For any $\tau > 0$ and $t$ sufficiently small (depending on $\tau$), the $M_t \setminus B_\tau(\{0\} \times \mathbb{R}^m)$ are graphical over $\mathbf{C}$ by a function $u_t$, in some suitable sense (see Lemma \ref{lem:poly-graph}).
We can define the initial velocity $v = \partial_t u_t$ as a function $v : \mathbf{C} \to \mathbf{C}^\perp$. The resulting $v$ will satisfy $\Delta v = 0$ on each wedge, and certain compatibility conditions on the junction lines. This PDE system is a notion of \emph{linearization} of the mean curvature operator over $\mathbf{C}$ (which itself we do not explicitly define). We call such a $v$ a compatible Jacobi field.
We shall see in Section \ref{sec:blow-up} how general inhomogeneous blow-up sequences give rise to compatible Jacobi fields.
\begin{definition}\label{def:compatible}
Let $\mathbf{C} = \mathbf{C}_0^2\times\mathbb{R}^m$ be a polyhedral cone. We say $v : \mathbf{C} \cap B_1 \to \mathbf{C}^\perp$ is a \emph{compatible Jacobi field} on $\mathbf{C} \cap B_1$ if it satisfies the following conditions:
\begin{enumerate}
\item[A)] For each $i$, $v(i)$ is smooth on $((W(i)\setminus\{0\})\times \mathbb{R}^m) \cap B_1$, and satisfies $\Delta v(i) = 0$.
\item[B)] (``$C^0$ compatibility'') For every $z \in ((\partial W(i) \setminus \{0\}) \times \mathbb{R}^m) \cap B_1$, there is a vector $V(z) \in \mathbb{R}^{2+k}$ (\emph{independent} of $i$) so that
\begin{gather}
v(i)(z) = \pi_{P(i)^\perp}(V(z)).
\end{gather}
\item[C)] (``$C^1$ compatibility'') If $W(i_1)$, $W(i_2)$, $W(i_3)$ share a common edge $\partial W(i_1)$, then
\begin{gather}
\sum_{j=1}^3 \partial_n v(i_j)(z) = 0 \quad \forall z \in \left( (\partial W(i_1) \setminus \{0\}) \times \mathbb{R}^m\right) \cap B_1.
\end{gather}
\end{enumerate}
We say a compatible Jacobi field $v$ is \emph{linear} if there are skew-symmetric matrices $A(i) : \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$ so that $v(i) = \pi_{P(i)^\perp} \circ A(i)$. Notice we \emph{do not} require the $A(i)$ to coincide.
\end{definition}
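For instance, as a simple check of Definition \ref{def:compatible}, any fixed vector $V \in \mathbb{R}^{2+k}$ gives rise to the compatible Jacobi field defined by $v(i) \equiv \pi_{P(i)^\perp}(V)$ on each face: each $v(i)$ is trivially harmonic, condition B) holds with $V(z) \equiv V$, and condition C) holds since the normal derivatives of constant functions vanish. Geometrically, this is the field generated by translating $\mathbf{C}$ in the direction $V$.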
As outlined in the Introduction, we wish to use the decay properties of compatible Jacobi fields to prove excess decay on $M$. There is a catch however, which is illustrated in the following example: let $M_t$, $\mathbf{C}$, and $v$ be as in the previous example. Let $q_t = \exp(tA) \in SO(n+k)$ be a $1$-parameter family of rotations generated by the skew-symmetric matrix $A$.
Then one can easily verify the initial velocity of the family $M_t$ as graphs over $q_t(\mathbf{C})$ is now $v - \pi_{\mathbf{C}^\perp} \circ A$, which decays at most linearly, and in particular is insufficient to give any kind of excess decay. We need to know that, by choosing ``good'' reference cones in our blow-up sequence, we can always eliminate first-order growth in the limiting Jacobi field.
As in \cite{AllAlm} and \cite{simon1}, we require an \emph{integrability} condition on our cross sectional cones $\mathbf{C}_0$, which for us simply asks that every $1$-homogeneous Jacobi field arises from a family of rotations, like the above example.
\begin{definition}\label{def:integrable}
We say a polyhedral cone ${\mathbf{C}^0_0}^2 \subset \mathbb{R}^{2+k}$ is \emph{integrable} if every linear, compatible Jacobi field $v : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$ takes the form
\begin{gather}
v = \pi_{{\mathbf{C}^0}^\perp} \circ A,
\end{gather}
for some skew-symmetric matrix $A : \mathbb{R}^{2+k} \to \mathbb{R}^{2+k}$.
\end{definition}
\begin{remark}\label{rem:locally-int}
By Proposition \ref{prop:baby-linear}, any $1$-homogeneous compatible Jacobi field on $\mathbf{C}^0_0$ is linear in the sense that each component $v(i)$ is the restriction of some skew-symmetric matrix $A(i)$. Integrability in this definition means that all the $A(i)$'s match up to generate a \emph{global} rotation of $\mathbf{C}^0_0$.
\end{remark}
\begin{remark}\label{rem:maybe-non-int}
The polyhedral cones we are most interested in (those arising from equiangular geodesic nets in $\mathbb{S}^2$) are integrable in the sense of Definition \ref{def:integrable}. We prove this in Section \ref{sec:nets}.
We note however that our notion of integrability is \emph{stronger} than the ``usual'' definition (of \cite{AllAlm}, \cite{simon1}), which simply requires every $1$-homogeneous Jacobi field to arise from a $1$-parameter family of stationary cones.
Indeed, although any $1$-homogeneous compatible Jacobi field on a polyhedral cone is \emph{locally} generated by a rotation (Proposition \ref{prop:baby-linear}), it seems plausible to us that there may exist $1$-parameter families of connected, equiangular geodesic nets in some $\mathbb{S}^{2+k}$ which are not \emph{global} rotations. Of course disconnected equiangular nets are trivially not integrable by our definition.
We have chosen to write this paper using rotations, but (like in \cite{simon1}), the methods carry over directly to the more general notion of integrability. See, for example, Remark \ref{rem:blow-up-rot}.
\end{remark}
\begin{remark}\label{rem:general-non-int}
It is also not clear to us that every linear Jacobi field need arise from a $1$-parameter family of nets, being global rotations or otherwise. That is, there may be non-integrable polyhedral cones even in the more general sense of integrable.
\end{remark}
It will be convenient to define a general notion of inhomogeneous blow-up sequence. The following defines sufficient conditions to inhomogeneously blow-up $M_i$ over $\mathbf{C}_i$ at scale $\beta_i$, so as to obtain a compatible Jacobi field with ``good'' properties (see Proposition \ref{prop:blow-up}).
\begin{definition}
Let $\varepsilon_i, \beta_i$ be two sequences of numbers $\to 0$. We say $(M_i, \mathbf{C}_i, \varepsilon_i, \beta_i)$ is a \emph{blow-up sequence w.r.t. $\mathbf{C}^0$} if the following holds:
\begin{enumerate}
\item[A)] Each $M_i \in \mathcal{N}_{\varepsilon_i}(\mathbf{C}^0)$, and $\mathbf{C}_i \in \mathcal{C}_{\varepsilon_i}(\mathbf{C}^0)$;
\item[B)] Each $M_i$ satisfies the $\varepsilon_i$-no-holes condition w.r.t.\ $\mathbf{C}^0$ in $B_1$;
\item[C)] We have $\limsup_i \beta_i^{-2} E_{\varepsilon_i}(M_i, \mathbf{C}_i, 0, 1) < \infty$.
\end{enumerate}
\end{definition}
\begin{remark}
This last condition ensures that we can inhomogeneously scale the graph by size $\beta_i^{-1}$, and still have uniform $C^{1,\alpha}$ and $L^2$ bounds.
\end{remark}
\begin{remark}
In many cases we will simply take $\mathbf{C}_i \equiv \mathbf{C}^0$, but allowing for slightly tilted $\mathbf{C}_i$ is what enables us to kill $1$-homogeneous terms in the resulting Jacobi field. In general it may not hold that $\limsup_i \beta_i^{-1} \varepsilon_i < \infty$.
\end{remark}
\section{Main Theorems}\label{sec:main-thm}
Our main decay Theorem is the following. Recall the Definition \ref{def:integrable} of integrability.
\begin{theorem}[Excess decay]\label{thm:main-decay}
Take $\mathbf{C}^0 = {\mathbf{C}^0_0}^2 \times \mathbb{R}^m$, where ${\mathbf{C}^0_0}^2 \subset \mathbb{R}^{2+k}$ is an integrable polyhedral cone, and take $\theta > 0$. There are numbers $\delta(\mathbf{C}^0, \theta)$, $c(\mathbf{C}^0)$, $\gamma(\mathbf{C}^0, \theta)$, $\mu(\mathbf{C}^0)$ so that the following holds: Suppose $M^{2+m}$ is an integral varifold, with bounded mean curvature and no boundary in $B_1$, satisfying
\begin{gather}
0 \in M, \quad \mu_M(B_1) \leq \frac{3}{2}\theta_{\mathbf{C}^0}(0), \quad E_\delta(M, \mathbf{C}^0, 1) \leq \delta^2, \\
\text{and the $\delta$-no-holes condition in $B_{1/2}$ w.r.t. $\mathbf{C}^0$.}
\end{gather}
Then we can find a rotation $q \in SO(n+k)$ with $|q - Id| \leq \gamma E_\delta(M, \mathbf{C}^0, 1)^{1/2}$, so that
\begin{gather}
E_\delta(M, q(\mathbf{C}^0), \theta) \leq c \, \theta^\mu E_\delta(M, \mathbf{C}^0, 1).
\end{gather}
Note that $\mu$ and $c$ are \emph{independent} of $\theta$. In particular, we deduce that for $\delta(\mathbf{C}^0)$ sufficiently small, we have
\begin{gather}\label{eqn:main-decay-half}
E_\delta(M, q(\mathbf{C}^0), \theta) \leq \frac{1}{2} E_\delta(M, \mathbf{C}^0, 1).
\end{gather}
\end{theorem}
\begin{remark}
Though we state and prove all our results for bounded mean curvature, they continue to hold with minor modifications for integral varifolds with mean curvature in $L^p$, provided $p > n$.
\end{remark}
An important special case of Theorem \ref{thm:main-decay} is when $m = 0$, where the no-holes condition becomes simply the requirement that \emph{some} point of the correct density exists. Note we do not assume any kind of minimizing property of $M$.
\begin{corollary}\label{cor:no-spine-reg}
Let ${\mathbf{C}^0}^2 \subset \mathbb{R}^{2+k}$ be an integrable polyhedral cone; for example, suppose ${\mathbf{C}^0}^2 \subset \mathbb{R}^3 \subset \mathbb{R}^{2+k}$. There are $\delta(\mathbf{C}^0), \mu(\mathbf{C}^0) \in (0, 1)$ so that if $M^2 \in \mathcal{N}_\delta(\mathbf{C}^0)$ satisfies $\theta_M(0) \geq \theta_{\mathbf{C}^0}(0)$, then $M \cap B_{1/2}$ is a $C^{1,\mu}$-perturbation of $\mathbf{C}^0$.
\end{corollary}
For certain classes of varifolds we can deduce the no-holes condition whenever $M$ is sufficiently close to $\mathbf{C}$ in excess. One important way the no-holes condition arises is by imposing a boundary/orientability structure. If $T$ is an integral $n$-current, we can write its action on $n$-forms $\omega$ as
\begin{gather}\label{eqn:current-action}
T(\omega) = \int_{M_T} \langle \omega, \tau_T\rangle\, \theta_T\, d\mathcal{H}^n,
\end{gather}
where $M_T$ is some $n$-rectifiable set, $\theta_T$ is a positive, integer-valued, $\mathcal{H}^n \llcorner M_T$-integrable function, and $\tau_T$ is a $\mathcal{H}^n \llcorner M_T$-measurable choice of $n$-orientation.
\begin{definition}
Given an open set $U$, we say an integral varifold $V$ has an \emph{associated cycle structure in $U$} if there is a countable collection of integral $n$-currents $T_1, T_2,\ldots$, each without boundary in $U$, so that
\begin{gather}
\mu_V \llcorner U = (\theta\, \mathcal{H}^n \llcorner \bigcup_{i=1}^\infty M_{T_i}) \llcorner U
\end{gather}
where $\theta$ is some \emph{positive}, integer-valued, $\mathcal{H}^n \llcorner \bigcup_{i=1}^\infty M_{T_i}$-measurable function.
\end{definition}
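To fix ideas, here is a simple instance of this definition (included only as an illustration). Take $V = \mathbf{Y}^1 \times \mathbb{R}^m \subset \mathbb{R}^{2+m}$ with multiplicity one, and let $E_1, E_2, E_3$ denote the three closed sectors of $\mathbb{R}^2$ bounded by $\mathbf{Y}$. Each $T_i := \partial [E_i \times \mathbb{R}^m]$ is an integral $(1+m)$-current with $\partial T_i = 0$, the union of the sets $M_{T_i}$ is $\mathbf{Y}\times\mathbb{R}^m$, and $\mu_V = \mathcal{H}^{1+m} \llcorner (\mathbf{Y}\times\mathbb{R}^m)$; so taking $\theta \equiv 1$ exhibits an associated cycle structure for $V$ in any open set.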
Varifolds with cycle structure arise naturally when constructing size-minimizers, clusters, and more generally $(\mathbf{M}, \varepsilon, \delta)$-minimizers. See the following Section \ref{sec:clusters} for details. For codimension-one varifolds having a cycle structure, we can prove the following (again we note that no minimizing property of $M$ is required).
\begin{theorem}\label{thm:spine-reg}
There are constants $\delta(m), \mu(m) \in (0, 1)$ so that the following holds. Let $M^{2+m} \subset \mathbb{R}^{3+m}$ be a varifold with an associated cycle structure in $B_1$, and suppose $M \in \mathcal{N}_{\delta}(\mathbf{T}^2 \times \mathbb{R}^m)$. Then $M \cap B_{1/2}$ is a $C^{1,\mu}$ perturbation of $\mathbf{T}\times \mathbb{R}^m$.
\end{theorem}
\subsection{Clusters and size-minimizers}\label{sec:clusters}
The most dramatic application of our regularity Theorem is seen in (certain classes of) $(\mathbf{M},\varepsilon, \delta)$-minimizing sets, as in this case we have a very good classification of tangent cones due to Taylor \cite{taylor}. For cluster minimizers and size-minimizing currents we can establish $C^{1,\alpha}$-structure of the $n$, $(n-1)$, and $(n-2)$-strata, and thereby (using results of Naber-Valtorta \cite{naber-valtorta}) give finite Lipschitz structure on the $(n-3)$-stratum.
We first give some background definitions and theorems. To begin, we define precisely the notion of $(\mathbf{M}, \varepsilon, \delta)$-minimizing set, in the sense of Almgren.
\begin{definition}
Let $U$ be an open set, and $\varepsilon(r) = C r^\alpha$ for some constants $C$, $\alpha > 0$. A set $S$ is an $n$-dimensional \emph{$(\mathbf{M}, \varepsilon, \delta)$-minimizer in $U$} if the following hold:
\begin{enumerate}
\item[A)] $S = (\mathrm{spt} \mathcal{H}^n \llcorner S) \cap U$;
\item[B)] given any ball $B_r(x) \subset U$ with $r < \delta$, and any Lipschitz map $\phi : B_r(x) \to B_r(x)$ satisfying $\mathrm{spt} (\phi - Id) \subset B_r(x) \cap U$, we have
\begin{gather}
\mathcal{H}^n(S \cap B_r(x)) \leq (1 + \varepsilon(r)) \mathcal{H}^n(\phi(S) \cap B_r(x)).
\end{gather}
\end{enumerate}
\end{definition}
In this paper we shall only deal with $(\mathbf{M}, \varepsilon, \delta)$-minimizers having an associated cycle structure. There are two classes in particular we shall consider.
\subsubsection{Size-minimizers}
If $T$ is a rectifiable $n$-current, then in the notation of \eqref{eqn:current-action} the \emph{size} of $T$ is given by $\mathbf{S}(T) = \mathcal{H}^n(M_T)$. Given an open set $U$, we say $T$ is \emph{homologically size-minimizing in $U$} if $\mathbf{S}(T)\leq \mathbf{S}(T + S)$ for any rectifiable $n$-current $S$ supported in $U$, with $\partial S = 0$.
If $T$ is homologically size-minimizing in $U$, then in $U$ its underlying multiplicity-$1$ varifold is stationary, and its support is $(\mathbf{M}, 0, \infty)$-minimizing. Morgan has demonstrated the following existence Theorem for size-minimizing currents.
\begin{theorem}[\cite{morgan-size}]
Let $B$ be an $(n-1)$-dimensional compact oriented submanifold of the unit sphere in $\mathbb{R}^{n+1}$. Then there exists an integral $n$-current $T$ with $\partial T = B$, which is size-minimizing in $\mathbb{R}^{n+1}\setminus B$.
\end{theorem}
\subsubsection{Clusters}
Given a natural number $N$, an \emph{$N$-cluster} $\mathcal{E}$ in $\mathbb{R}^{n+1}$ is a partition of $\mathbb{R}^{n+1}$ into disjoint sets $\mathcal{E}(0), \mathcal{E}(1), \ldots, \mathcal{E}(N)$ of finite perimeter, satisfying $\mathcal{E}(0) = \mathbb{R}^{n+1} \setminus (\mathcal{E}(1) \cup \ldots \cup \mathcal{E}(N))$. Typically the sets $\mathcal{E}(1), \ldots, \mathcal{E}(N)$ are understood to be bounded. We define the volume vector and perimeter scalar as (resp.)
\begin{gather}
\mathbf{M}(\mathcal{E}) = ( |\mathcal{E}(1)|, \ldots, |\mathcal{E}(N)| ) \in \mathbb{R}^N, \quad P(\mathcal{E}) = \frac{1}{2} \sum_{i=0}^N \mathcal{H}^n(\partial [\mathcal{E}(i)]).
\end{gather}
Here $|\mathcal{E}(i)| = \mathcal{L}^{n+1}(\mathcal{E}(i))$ is the $(n+1)$-volume, and $\mathcal{H}^n(\partial [\mathcal{E}(i)]) \equiv \mathbf{M}(\partial [\mathcal{E}(i)])$ is the mass of the reduced boundary. Of course $|\mathcal{E}(0)| = \infty$.
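As a trivial check of the normalization (included only for illustration): when $N = 1$ and $\mathcal{E}(1) = B_1 \subset \mathbb{R}^{n+1}$ is the unit ball, the reduced boundaries of $\mathcal{E}(0)$ and $\mathcal{E}(1)$ both equal $\mathbb{S}^n$, so the factor $\frac{1}{2}$ ensures each interface is counted exactly once:
\begin{gather}
P(\mathcal{E}) = \frac{1}{2}\big( \mathcal{H}^n(\mathbb{S}^n) + \mathcal{H}^n(\mathbb{S}^n) \big) = \mathcal{H}^n(\mathbb{S}^n).
\end{gather}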
Given a volume vector $\mathbf{m} \in \mathbb{R}^N$, a \emph{minimizing cluster for $\mathbf{m}$} is an $N$-cluster which realizes the infimum
\begin{gather}
\inf \{ P(\mathcal{E}) : \text{$\mathcal{E}$ is an $N$-cluster in $\mathbb{R}^{n+1}$ with $\mathbf{M}(\mathcal{E}) = \mathbf{m}$} \}.
\end{gather}
In other words, a minimizing cluster is a solution to the isoperimetric problem of $N$ regions of prescribed volume. Almgren proved the following existence Theorem for minimizing clusters (see also the modern presentation \cite{maggi}).
\begin{theorem}[\cite{Alm-eps-delta}]\label{thm:background-clusters}
Given any positive volume vector $\mathbf{m} \in \mathbb{R}^N$ (so, each $m_h > 0$), there is a minimizing $N$-cluster for $\mathbf{m}$ enjoying the following properties:
\begin{enumerate}
\item Each set $\mathcal{E}(1), \ldots, \mathcal{E}(N)$ is bounded;
\item The associated set $\partial \mathcal{E}(1) \cup \ldots \cup \partial \mathcal{E}(N)$ is $(\mathbf{M}, Kr, \delta)$-minimizing, for some constants $K$, $\delta$;
\item The associated varifold $\mathcal{H}^n \llcorner (\partial \mathcal{E}(1) \cup \ldots \cup \partial\mathcal{E}(N))$ has bounded mean curvature, and no boundary.
\end{enumerate}
Here $\partial \mathcal{E}(i)$ denotes the \emph{topological} boundary of $\mathcal{E}(i)$.
\end{theorem}
\begin{remark}
Conclusion 3) is not explicitly stated in \cite{Alm-eps-delta}, \cite{maggi}, but follows directly from \cite[Theorem VI.2.3]{Alm-eps-delta} or \cite[Theorem IV.1.14]{maggi}. For the reader's convenience we include a proof of part 3) in Section \ref{sec:corollaries}.
\end{remark}
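For orientation we recall two classical special cases (not used elsewhere in this paper). When $N = 1$, the minimizing cluster for any prescribed volume is a round ball, by the isoperimetric inequality. When $N = 2$ and $n + 1 = 3$, the minimizer is the standard double bubble (Hutchings-Morgan-Ritor\'e-Ros): two outer spherical caps separated by a third spherical (or, for equal volumes, flat) interface, the three sheets meeting at $120^\circ$ angles along a circle; blow-ups at points of that circle are of the form $\mathbf{Y}\times\mathbb{R}$.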
\subsubsection{Interior regularity}
We prove the following general interior regularity theorem, from which Theorem \ref{thm:clusters} is an immediate consequence.
\begin{theorem}\label{thm:main-reg}
Let $M^n = \mathcal{H}^n \llcorner \mathrm{spt} M$ be a varifold in an open set $U \subset \mathbb{R}^{n+1}$. Suppose that, in $U$: $M$ has an associated cycle structure, no boundary, bounded mean curvature, and $\mathrm{spt} M$ is $(\mathbf{M}, \varepsilon, \delta)$-minimizing.
Then \emph{in $U$} we have the following structure:
\begin{enumerate}
\item $S^n(M) \setminus S^{n-1}(M)$ is a locally-finite union of embedded $C^{1,\alpha}$ $n$-manifolds;
\item $S^{n-1}(M) \setminus S^{n-2}(M)$ is a locally-finite union of embedded, $C^{1,\alpha}$ $(n-1)$-manifolds, near which $M$ is locally diffeomorphic to $\mathbf{Y} \times \mathbb{R}^{n-1}$;
\item $S^{n-2}(M) \setminus S^{n-3}(M)$ is a locally-finite union of embedded, $C^{1,\alpha}$ $(n-2)$-manifolds, near which $M$ is locally diffeomorphic to $\mathbf{T} \times \mathbb{R}^{n-2}$;
\item $S^{n-3}(M)$ is relatively closed, $(n-3)$-rectifiable, with locally-finite $\mathcal{H}^{n-3}$-measure.
\end{enumerate}
\end{theorem}
\begin{remark}
Theorem \ref{thm:main-reg} holds for any class of $n$-dimensional $(\mathbf{M}, \varepsilon, \delta)$-minimizers in $\mathbb{R}^{n+1}$, whose associated (multiplicity-one) varifolds have bounded mean curvature, and satisfy a no-holes condition like Proposition \ref{prop:no-holes}. We could only verify this condition for minimizers having a cycle structure, but it seems plausible one could prove this for more general classes. See Remark \ref{rem:general-no-holes}.
An obstacle to extending Theorem \ref{thm:main-reg} to higher \emph{co}dimension $\mathbb{R}^{n+k}$ is whether the tetrahedron $\mathbf{T}^2$ continues to have the least density of any polyhedral cone in $\mathbb{R}^{n+k}$.
\end{remark}
\subsection{Outline of Proof}
The basic idea, which harks back to methods pioneered by De Giorgi, and implemented first more-or-less in this form by Allard-Almgren \cite{AllAlm}, is to use good decay properties of solutions to the \emph{linearized} minimal surface operator over $\mathbf{C}$ (i.e. Jacobi fields), to prove decay of minimal surfaces close to $\mathbf{C}$: if we write $M$ as a ``graph'' over $\mathbf{C}$ by a function $u$, then as $u$ becomes very small it starts to act like a Jacobi field on $\mathbf{C}$.
We will argue by contradiction. Let us outline the proof. For simplicity assume $H_M \equiv 0$. If, towards a contradiction, the excess decay \eqref{eqn:main-decay-half} failed, we would have a sequence of numbers $\varepsilon_i \to 0$, and minimal surfaces $M_i \in \mathcal{N}_{\varepsilon_i}(\mathbf{C})$, each satisfying the $\varepsilon_i$-no-holes condition w.r.t. $\mathbf{C}$ in $B_{1/2}$, so that
\begin{gather}\label{eqn:intro-hyp}
\theta^{-n-2} \int_{M_i \cap B_\theta} d_{q(\mathbf{C})}^2 \geq \frac{1}{2} \int_{M_i \cap B_1} d_{\mathbf{C}}^2 =: \beta_i^2 \quad \forall q \in SO(n+k).
\end{gather}
For any $\tau > 0$, and $i \gg 1$, we can write $M_i \cap B_1 \setminus B_\tau(\{0\} \times \mathbb{R}^m)$ as a graph over $\mathbf{C}$ by the function $u_i$ (in a suitable sense, see Section \ref{sec:graph}), where $u_i \to 0$ and
\begin{gather}
\int_{\mathrm{dom}(u_i) \subset \mathbf{C}} |u_i|^2 \leq (1+o(1))\int_{M_i \cap B_1} d_{\mathbf{C}}^2
\end{gather}
The rescaled graphs $\beta_i^{-1} u_i$ have uniform $L^2$ and $C^{1,\alpha}$ bounds, and we can make sense of the limit $\beta^{-1}_i u_i \to v$ as a compatible Jacobi field $v : \mathbf{C} \cap B_{1/2} \to \mathbf{C}^\perp$ (see Section \ref{sec:blow-up}, and recall Definition \ref{def:compatible}).
We would then like to make two assertions:
\begin{enumerate}
\item For any ball $B_\rho$, with $\rho \leq 1/4$, we have strong $L^2$ convergence
\begin{gather}\label{eqn:intro-fake-1}
\beta_i^{-2} \int_{M_i \cap B_\rho} d_{\mathbf{C}}^2 \to \int_{\mathbf{C} \cap B_\rho} |v|^2.
\end{gather}
\item For any $\theta \leq 1/4$, we have decay
\begin{gather}\label{eqn:intro-fake-2}
\theta^{-n-2} \int_{\mathbf{C} \cap B_\theta} |v|^2 \leq c(\mathbf{C})\theta^\mu \int_{\mathbf{C} \cap B_{1/2}} |v|^2.
\end{gather}
\end{enumerate}
If both these claims were true, then for $i \gg 1$ and $\theta(\mathbf{C})$ sufficiently small we could contradict \eqref{eqn:intro-hyp} with $q = id$.
The first assertion is true but highly non-trivial. The issue is the non-graphicality of $M_i$ near the spine $\{0\} \times \mathbb{R}^m$, where (rescaled) $L^2$ distance may accumulate in the limit. To rule this out we prove a non-concentration estimate like in \cite{simon1} (equation \eqref{eqn:intro-non-conc}), which uses very strongly the no-holes condition.
The second assertion is in general false, even for toy examples like when $\mathbf{C}$ is a plane. While it is true that $v$ grows \emph{at least} $1$-homogeneously (loosely a consequence of scaling), $v$ may have a non-zero $1$-homogeneous component, which would preclude an estimate like \eqref{eqn:intro-fake-2}. The problem is partly that we may have chosen the wrong cone $\mathbf{C}$ (e.g. if $M$ were smooth, we would want to pick $\mathbf{C}$ to be the tangent space at $0$; see also the example of Section \ref{sec:compatible}), but a deeper issue is that, for general $\mathbf{C}$ there may (and do) exist $1$-homogeneous Jacobi fields on $\mathbf{C}$ that do not arise geometrically as initial velocities.
Here we use the \emph{integrability} condition on $\mathbf{C}_0$, which allows us to always select ``good'' cones $q_i(\mathbf{C})$, so that if we repeat the above blow-up procedure with $q_i(\mathbf{C})$ in place of $\mathbf{C}$, we can kill the $1$-homogeneous component of the limiting field (Proposition \ref{prop:kill-linear}). We end up with a decay of the ``non-linear component'' of $v$ (Theorem \ref{thm:linear-decay}).
The ``corrected'' assertions, which still contradict \eqref{eqn:intro-hyp} for $i \gg 1$ (see the schematic computation following the list), are:
\begin{enumerate}
\item If we write $v_\theta$ for the component of $v$ that is $L^2(\mathbf{C} \cap B_\theta)$-orthogonal to the linear fields on $\mathbf{C}$, then there is a sequence of rotations $q_i$ so that
\begin{gather}\label{eqn:intro-real-1}
\beta_i^{-2} \int_{M_i \cap B_\theta} d_{q_i(\mathbf{C})}^2 \to \int_{\mathbf{C} \cap B_\theta} |v_\theta|^2,
\end{gather}
\item We have the decay
\begin{gather}\label{eqn:intro-real-2}
\theta^{-n-2} \int_{\mathbf{C} \cap B_\theta} |v_\theta|^2 \leq c(\mathbf{C}) \theta^\mu \int_{\mathbf{C} \cap B_{1/2}} |v|^2.
\end{gather}
\end{enumerate}
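Let us spell out, schematically, how these corrected assertions produce the contradiction (this is only a sketch; the precise argument occupies the following sections). By the $L^2$ bound on $u_i$ above we have $\int_{\mathbf{C}\cap B_{1/2}} |v|^2 \leq 2$, so combining \eqref{eqn:intro-real-1} and \eqref{eqn:intro-real-2}, for $i \gg 1$,
\begin{gather}
\theta^{-n-2} \int_{M_i \cap B_\theta} d_{q_i(\mathbf{C})}^2 = \beta_i^2 \Big( \theta^{-n-2} \int_{\mathbf{C}\cap B_\theta} |v_\theta|^2 + o(1) \Big) \leq \beta_i^2 \big( 2\, c(\mathbf{C})\, \theta^\mu + o(1) \big) < \beta_i^2 ,
\end{gather}
provided $\theta(\mathbf{C})$ is chosen so that $2 c(\mathbf{C}) \theta^\mu < 1/2$; this contradicts \eqref{eqn:intro-hyp} with $q = q_i$.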
Let us outline the structure of the paper, and provide some insight into each section.
\subsubsection{Graphicality}
We demonstrate in this section that when $M \in \mathcal{N}_\varepsilon(\mathbf{C})$, with $\varepsilon(\tau, \beta, \mathbf{C})$ sufficiently small, then $M \cap B_{3/4} \setminus B_\tau(\{0\} \times \mathbb{R}^m)$ decomposes as graphical pieces over $\mathbf{C}$, with scale-invariant $C^{1,\alpha}$ norm controlled by $\beta$. Most importantly, we show effective estimates on both the graphical and non-graphical parts.
Precisely, we show in Lemma \ref{lem:poly-graph} the following kind of decomposition: there are domains $\Omega(i) \subset P(i) \times \mathbb{R}^m$, each a perturbation of the wedge $W(i)\times \mathbb{R}^m$, and functions $u(i) : \Omega(i) \to (P(i)\times \mathbb{R}^m)^\perp$, so that
\begin{gather}\label{eqn:intro-graph-1}
M(0) := M \cap B_{3/4} \setminus \bigcup_{i=1}^d u(i)(\Omega(i)) \subset B_\tau(\{0\} \times \mathbb{R}^m).
\end{gather}
This by itself follows from a straightforward contradiction argument, using Simon's and Allard's regularity Theorems and the ``irreducibility'' of integrable polyhedral cones (see Section \ref{sec:mult-one}).
The more involved part is establishing effective \emph{global} estimates, for example
\begin{gather}\label{eqn:intro-graph-2}
\int_{M(0)} r^2 + \sum_{i=1}^d \int_{\Omega(i)} r^2 |Du(i)|^2 \leq c(\mathbf{C}) \int_{M \cap B_1} d_{\mathbf{C}}^2,
\end{gather}
and, if we write $f(i)$ for the functions defining $\partial\Omega(i)$ as graphs over $\partial W(i)\times \mathbb{R}^m$, then
\begin{gather}\label{eqn:intro-graph-3}
\sum_{i=1}^d \int_{\partial W(i)\times \mathbb{R}^m} r |f(i)|^2 \leq c(\mathbf{C}) \int_{M\cap B_1} d_{\mathbf{C}}^2.
\end{gather}
Estimates \eqref{eqn:intro-graph-2}, \eqref{eqn:intro-graph-3} are crucial in controlling density excess by $L^2$ excess (Proposition \ref{prop:density-est}). Note the RHS is independent of $\tau$: this is because both sides scale the same way, which allows us to sum up local estimates from Allard's or Simon's regularity Theorems.
The strategy to prove these is to start with a non-effective graphical decomposition, of the form \eqref{eqn:intro-graph-1}, and then by a further contradiction argument ``push'' the region of graphicality towards the spine until either: we hit the spine (!), or a localized $L^2$ excess passes some threshold. This is the content of Lemma \ref{lem:poly-tiny-graph}.
The scheme is similar to \cite{simon1}, but more involved, and we draw the reader's attention to two particular differences: first, the singular nature of the cross-section of the cone requires additional structure and estimates (e.g. \eqref{eqn:intro-graph-3}); second, we can remove Simon's requirement of $M$ lying in some multiplicity-one class (both in our case and his original setting when $\mathbf{C}_0$ is smooth).
\subsubsection{$L^2$ estimates}
Here we prove key $L^2$ estimates on $M$ and the $u(i)$ of decomposition \eqref{eqn:intro-graph-1}, which guarantee strong $L^2$ convergence and decay of the Jacobi field (minus its linear part). Various intermediate steps are involved, but the crucial estimates at the end of the day are the following (Theorem \ref{thm:l2-est}): provided $M \in \mathcal{N}_\varepsilon(\mathbf{C})$ satisfies the $\tau/10$-no-holes condition w.r.t. $\mathbf{C}$ in $B_{1/4}$, and $\varepsilon(\tau, \mathbf{C})$ is small, and $\alpha \in (0, 1)$, then
\begin{align}
&\sum_{i=1}^d \int_{\Omega(i) \cap B_{1/10}\setminus B_\tau(\{0\} \times \mathbb{R}^m)} R^{2-n} |\partial_R (u(i)/R)|^2 \leq c(\mathbf{C})\int_{M \cap B_1} d_{\mathbf{C}}^2 , \label{eqn:intro-hardt-simon}\\
&\text{and} \quad \int_{M \cap B_{1/4}} \frac{d_{\mathbf{C}}^2}{\max(r, \tau)^{2-\alpha}} + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/4} \setminus B_\tau(L\times \mathbb{R}^m)} \frac{|u(i) - \kappa^\perp|^2}{\max(r, \tau)^{2+2-\alpha}} \label{eqn:intro-non-conc}\\
&\quad\quad\quad \leq c(\mathbf{C}, \alpha) \int_{M \cap B_1} d_{\mathbf{C}}^2 .
\end{align}
where $L = \cup_{i=1}^{2d/3} L(i)$ are the singular lines of $\mathbf{C}_0$, and $\kappa$ is a piecewise-constant function which forms a discrete approximate-parameterization of the singular set of $M$ over the spine $\{0\} \times \mathbb{R}^m$.
Estimate \eqref{eqn:intro-hardt-simon} says that the blow-up limit field $v$ must grow at least $1$-homogeneously in $R$, and is a key component in proving super-linear growth of $v$ minus-its-linear-part. Estimate \eqref{eqn:intro-non-conc} is a non-concentration estimate for $L^2$ excess, and gives a growth bound on $v$ which is crucial for characterizing $1$-homogeneous fields. Notice the RHS of both equations is independent of $\tau$.
To prove \eqref{eqn:intro-hardt-simon}, \eqref{eqn:intro-non-conc} we follow Simon's computations, but the singular nature of $\mathbf{C}_0$ adds significant complications. The most delicate estimate controls the density excess of $M$ by its $L^2$-distance to $\mathbf{C}$ (Proposition \ref{prop:density-est}), which requires heavily the no-holes condition and effective graphical estimates \eqref{eqn:intro-graph-2}, \eqref{eqn:intro-graph-3}. We additionally exploit heavily the $120^\circ$ angle condition on the geodesic net, and this highlights a technical difference between our paper and \cite{simon1}: we require stationarity of $M$ and $\mathbf{C}$ \emph{through} the singular set, while Simon only requires stationarity on the regular parts.
\subsubsection{Jacobi fields}
The aim of this section is to prove an $L^2$ decay for Jacobi fields satisfying certain orthogonality and growth conditions (Theorem \ref{thm:linear-decay}). If $\mathbf{C}$ had a smooth cross-section, this would follow easily from the Fourier expansion: the discrete powers of decay would show that any $v$ growing $> 1$-homogeneously, must grow at least $(1+\varepsilon)$-homogeneously. In our case, we adapt the ingenious method devised in \cite{simon1} to handle cylindrical cones.
The basic idea is the following. On the one hand, we have an upper bound \eqref{eqn:intro-hardt-simon} at any scale $\rho$, which says that $v$ grows at least $1$-homogeneously. On the other hand, in Theorem \ref{thm:1-homo-linear} we can characterize $1$-homogeneous Jacobi fields satisfying a growth bound like \eqref{eqn:intro-non-conc} as linear (or more precisely, as lying in a subspace of the linear fields). By a simple contradiction argument, this allows us to say that whenever $v$ is $L^2(B_\rho)$-orthogonal to the linear fields, then $v$ must grow \emph{quantitatively} more than $1$-homogeneously at scale $\rho$. That is,
\begin{gather}\label{eqn:intro-hardt-simon-lower}
\int_{\mathbf{C} \cap B_1 \setminus B_{1/10}} R^{2-n} |\partial_R(v/R)|^2 \geq \frac{1}{c(\mathbf{C})} \int_{\mathbf{C} \cap B_1} |v|^2.
\end{gather}
Chaining \eqref{eqn:intro-hardt-simon} and \eqref{eqn:intro-hardt-simon-lower} with a hole-filling gives the required decay.
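(For the reader's convenience we recall, purely schematically and without tying it to the precise quantities used here, the elementary iteration behind hole-filling: if $Q : (0,1] \to [0,\infty)$ is nondecreasing and satisfies $Q(\rho/2) \leq c\,( Q(\rho) - Q(\rho/2) )$ for all $\rho \leq 1$, then ``filling the hole'' gives $Q(\rho/2) \leq \frac{c}{1+c}\, Q(\rho)$, and iterating over dyadic scales yields $Q(\rho) \leq \gamma^{-1} \rho^\mu Q(1)$, where $\gamma = \frac{c}{1+c} < 1$ and $\mu = \log_2(1/\gamma)$.)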
Most of this section is analogous to \cite{simon1}, except care must be taken to ensure the argument works with compatible Jacobi fields. In particular, we demonstrate in Theorem \ref{thm:eigenfunctions} a spectral decomposition for the Jacobi operator system on equiangular geodesic nets.
\subsubsection{Inhomogeneous blow-ups and conclusion of proof}
Here we make sense of inhomogeneous blow-up limits $\beta_i^{-1} u_i \to v$, and prove that the resulting $v$ is a compatible Jacobi field in the sense of Definition \ref{def:compatible}. The $C^0$ compatibility condition arises from the sheets of $M$ meeting along a common single edge, and is essentially a direct consequence of Simon's $\varepsilon$-regularity for the $\mathbf{Y}\times \mathbb{R}^m$. The $C^1$ condition requires the stationarity of both $M$ and $\mathbf{C}$ (\emph{through} the singular set), and depends strongly on the Remark \ref{rem:looks-like-Y} that away from $\{0\} \times \mathbb{R}^m$, $M$ is locally either $\mathbb{R}^{m+2}$ or $\mathbf{Y}\times \mathbb{R}^{m+1}$.
We then show in Proposition \ref{prop:kill-linear} how integrability allows us to choose new cones $q_i(\mathbf{C})$ (for $q_i \in SO(n+k)$) nearby $\mathbf{C}$ in such a way that the limiting $v$ has no linear component at a given scale. This allows us to prove the required estimates on $v$ to apply the linear decay Theorem \ref{thm:linear-decay}.
Finally, we can implement the blow-up argument sketched in the initial Proof Outline, to finish proving decay Theorem \ref{thm:main-decay}.
\subsubsection{Equiangular nets in $\mathbb{S}^2$}
In this section we establish some background results on nets in $\mathbb{S}^2$. We reprove for the reader's convenience the general no-holes principle for $\mathbf{Y}\times \mathbb{R}^m$, and additionally demonstrate a no-holes principle for the tetrahedral cone $\mathbf{T}\times \mathbb{R}^m$, under natural structure assumptions on $M$.
We prove integrability (in the sense of Definition \ref{def:integrable}) for all equiangular nets in $\mathbb{S}^2$, which allows us to apply Theorem \ref{thm:main-decay} to any polyhedral cone $\mathbf{C}_0 \subset \mathbb{R}^3 \subset \mathbb{R}^{2+k}$. Unfortunately we are unable to give a general abstract proof, but must appeal to the classification of these nets due to \cite{lamarle}, \cite{heppes}. It is possible that in general codimension there exist non-integrable equiangular nets (see also Remarks \ref{rem:locally-int}, \ref{rem:maybe-non-int}, \ref{rem:general-non-int}).
\begin{comment}
\begin{definition}
We say a cone $C_0$ is \emph{atomic} if it cannot be written as the union (i.e. varifold sum) of two non-zero stationary cones.
Given a stationary cone $C_0$, consider the connected components $C_i'$ of $C_0 \setminus \{0\}$, and let $C_0'$ be the component of least density. We define the \emph{critical density} of $C_0$ to be
\begin{gather}
\theta_{crit}(C_0) := \theta_{C_0}(0) + \theta_{C_0'}(0)/2.
\end{gather}
\end{definition}
\begin{example}
When $C_0^\ell \subset \mathbb{R}^{\ell+k}$ is smooth, $k = 1$ and $\ell \geq 2$, then $C_0$ is atomic by Frankel's theorem, and has crtical density $\frac{3}{2} \theta_{C_0}(0)$.
When $\ell = 1$, any half-line extending from the origin is a connected component of $C_0 \setminus \{0\}$. So if $C_0$ consists of $q$ lines, then $\theta_{crit}(C_0) = q/2 + 1/4$. The $\mathbf{Y}$ cone is atomic, but the $\mathbf{X}$ cone is non-atomic: it can be written as the union of two lines.
\end{example}
\begin{definition}
Define $\mathcal{N}_\varepsilon(\mathbf{C}^0)$ to be the set of stationary integral varifolds $n$-varifolds $M$, satisfying
\begin{gather}
0 \in M, \quad \theta_M(0, 1) \leq \theta_{crit}(\mathbf{C}^0), \quad E(M, \mathbf{C}^0, 1) \leq \varepsilon^2.
\end{gather}
\end{definition}\end{comment}
\begin{comment}
\begin{lemma}\label{lem:mult-one}
Let $\mathbf{C} = \mathbf{C}_0^\ell\times \mathbb{R}^m$ be a stationary cone, where $\mathbf{C}_0$ is either smooth or polyhedral (in which case $\ell = 2$). Let $M_i$ be a sequence stationary integral varifolds in $B_1$, satisfying $0 \in M_i$, $\theta_{M_i}(0, 1) \leq \theta_{crit}(\mathbf{C}^0)$, and
\begin{gather}\label{eqn:mult-one-hyp}
E(M_i, \mathbf{C}^0, 1) \to 0.
\end{gather}
Then, after passing to a subsequence, the $M_i \to \mathbf{C}^0$ as varifolds with multiplicity $1$ on compact subsets of $B_1$. If $\mathbf{C}^0_0$ is atomic, then instead of \eqref{eqn:mult-one-hyp} it suffices to assume
\begin{gather}
\int_{M_i \cap B_1} d_{\mathbf{C}^0}^2 \to 0.
\end{gather}
\end{lemma}
\begin{proof}
After taking a subseuqence we have convergence on compact subsets of $B_1$ to some $M$, and by our hypothesis we know $M$ is supported in $C$. We claim that $M$ has constant multiplicity on each subcone of $\mathbf{C}^0$. If $\mathbf{C}^0_0$ is smooth this is immediate from the constancy theorem. If $C_0$ is polyhedral, then the constancy theorem implies $M$ has constant density on each wedge, and by sationarity of the $Y$ junction any three wedges which meet must have the same multiplicity. This proves the claim.
Moreover, since FIXME $\int_{\mathbf{C}^0 \cap B_1\setminus B_{1/4}} d_{M_i}^2 \to 0$, necessarily $M$ must have (constant) multiplicity $\geq 1$ on each connected component of $\mathbf{C}^0 \setminus (axis)$. It will therefore suffice to show that each multiplicity is $\leq 1$ also. But this follows from our restriction $\theta_{M_i}(0, 1) \leq \theta_{crit}(\mathbf{C}^0_0)$: if $M$ had multiplicity $\geq 2$ on some component $\mathbf{C}'$ of $\mathbf{C}^0 \setminus (axis)$, then we would have
\begin{gather}
\theta_M(0, 1) \geq \theta_{\mathbf{C}^0}(0) + \theta_{\mathbf{C}'}(0) > \theta_{crit}(\mathbf{C}^0) \geq \theta_M(0,1),
\end{gather}
a contradiction.
Suppose now $\mathbf{C}^0_0$ is atomic. Since $M$ is stationary and supported in $\mathbf{C}^0$, then necessarily $M = p \mathbf{C}^0$, for some integer $p$. But then by our mass restriction $M$ we must have $p = 1$.
\end{proof}
\end{comment}
\begin{comment}
\paragraph{Two-sided excess}
Our fundamental decay quantity is the following ``two-sided'' excess. Wickramasekera CITE:NESHAN defines and uses a similar quantity which he calls this the \emph{fine} excess.
\begin{definition}
Let $\mathbf{C}^0_0^2$ be a polyhedral cone, and let $L = \cup_{i=1}^{2d/3} L(i)$ be the associated lines. Define the quantity $\partiallta_{len} = \partiallta_{len}(\mathbf{C}^0_0)$ to be the least length of any geodesic segment in the corresponding net $\mathbf{C}^0_0 \cap \mathbb{S}^{2+k}$.
Given any rotation $q \in SO(n+k)$, we define the \emph{two-sided excess} of $M$ over $\mathbf{C}^0 = q(\mathbf{C}^0_0^2\times \mathbb{R}^m)$ at scale $r$ to be
\begin{gather}
E(M, \mathbf{C}^0, r) = r^{-n-2} \int_{M \cap B_r} d_{\mathbf{C}}^2 + r^{-n-2} \int_{\mathbf{C}^0 \cap B_r \setminus B_{r \partiallta_{len}/100}(q(L \times \mathbb{R}^m))} d_M^2.
\end{gather}
\end{definition}
\begin{remark}
The extra $d_M^2$ term is carried along to ensure multiplicity-one convergence. The actual decay of this term comes ``for free.'' See Section REF.
\end{remark}
\end{comment}
\section{Graphical estimates}\label{sec:graph}
In this section, and in fact for the duration of the paper, we take $\mathbf{C}_0^2 \subset \mathbb{R}^{2+k}$ to be a fixed polyhedral cone, composed of wedges $\{W(i)\}_{i=1}^d$. We set $\mathbf{C}^n = \mathbf{C}_0^2 \times \mathbb{R}^m$. Using the $\varepsilon$-regularity theorems for the plane and $\mathbf{Y}\times \mathbb{R}^{1+m}$, we prove that any $M^n$ sufficiently near $\mathbf{C}$ must decompose away from the axis as $C^{1,\alpha}$ graphs with effective estimates (though of course the estimates degenerate as $r \to 0$). Note that $c$ is independent of $\tau$.
\begin{lemma}[Effective graphicality over polyhedral cones]\label{lem:poly-graph}
For any $\beta, \tau > 0$, there is an $\varepsilon(\mathbf{C}^0, \beta, \tau)$ and $c(\mathbf{C}^0, \beta)$, $\alpha(\mathbf{C})$ so that the following holds. Take $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$. Then there is a radius function $r_y : B_1 \cap (\{0\}\times \mathbb{R}^m) \to \mathbb{R}$ with $r_y < \tau$, so that we can decompose
\begin{gather}\label{eqn:poly-graph-1}
M \cap B_{3/4} \setminus B_{r_y}(\{0\}\times \mathbb{R}^m) = M(1) \cup \cdots \cup M(d),
\end{gather}
where for each $i$ there is some domain $\Omega(i) \subset P(i) \times \mathbb{R}^m$, and $C^{1,\alpha}$ function $u(i) : \Omega(i) \to (P(i) \times \mathbb{R}^m)^\perp$, such that
\begin{gather}\label{eqn:poly-graph-2}
M(i) = \{ x + u(i)(x) : x \in \Omega(i) \}.
\end{gather}
Each $\Omega(i)$ is graphical over $W(i)\times \mathbb{R}^m$ in the sense that there exist $C^{1,\alpha}$ functions $f(i) : \partial W(i) \times \mathbb{R}^m \to (\partial W(i)^\perp \cap P(i)) \times \mathbb{R}^m$,
so that
\begin{gather}\label{eqn:poly-graph-3}
\partial\Omega(i) \cap B_{1/2} \setminus B_{r_y}(\{0\} \times \mathbb{R}^m) \subset \{ x' + f(i)(x') : x' \in \partial W(i) \times \mathbb{R}^m \} .
\end{gather}
Moreover the functions $u(i)$, $f(i)$ have the following pointwise estimates
\begin{align}
&\sup_{(\partial W(i)\times \mathbb{R}^m)\cap (B_{1/2} \setminus B_{r_y}(\{0\} \times \mathbb{R}^m))}r^{-1} |f(i)| +|Df(i)| + r^\alpha[Df(i)]_{\alpha, C} \leq \beta,\label{eqn:point-est-poly-1}\\
&\sup_{\Omega(i)\cap (B_{1/2} \setminus B_{r_y}(\{0\} \times \mathbb{R}^m))}r^{-1} |u(i)| +|Du(i)|+ r^\alpha [Du(i)]_{\alpha, C} \leq \beta,\label{eqn:point-est-poly-11}\\
&\sup_{(\partial W(i)\times \mathbb{R}^m)\cap (B_{1/2} \setminus B_{5r_y}(\{0\} \times \mathbb{R}^m))} r^{n+2}\bigl( r^{-1} |f(i)| + |Df(i)| + r^\alpha [Df(i)]_{\alpha, C} \bigr)^2 \leq c\, E(M, \mathbf{C}, 1), \label{eqn:point-est-poly-2}\\
&\sup_{\Omega(i)\cap (B_{1/2} \setminus B_{5r_y}(\{0\} \times \mathbb{R}^m))} r^{n+2}\bigl(r^{-1} |u(i)| + |Du(i)| + r^\alpha [Du(i)]_{\alpha, C}\bigr)^2 \leq c \, E(M, \mathbf{C}, 1)\, .\label{eqn:point-est-poly-22}
\end{align}
and integral estimates
\begin{align}\label{eqn:integral-est-poly-1}
&\sum_{i=1}^d \int_{(\partial W(i) \times \mathbb{R}^m) \cap (B_{1/2}\setminus B_{5r_y}(\{0\}\times \mathbb{R}^m))} r |f(i)|^2 + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/2} \setminus B_{5r_y}(\{0\}\times \mathbb{R}^m)} r^2 |D u(i)|^2 \\ \label{eqn:integral-est-poly-2}
& \quad+ \sum_{i=1}^d \int_{M(i) \cap B_{10r_y}(\{0\}\times \mathbb{R}^m)} r^2 \leq c \, E(M, \mathbf{C}, 1)\, .
\end{align}
\begin{comment}
In $B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m })$ the functions $u(i)$, $f(i)$ have the pointwise estimates
\begin{align}\label{eqn:point-est-poly-1}
r^{-1} |u(i)| + |Du(i)| + r^\alpha [Du(i)]_{\alpha, C} \leq \beta, \quad r^{-1} |f(i)| + |Df(i)| + r^\alpha [Df(i)]_{\alpha, C} \leq \beta,
\end{align}
and in $B_{1/2} \setminus B_{5r_y}({ \{0\} \times \R^m })$ we have
\begin{align}\label{eqn:point-est-poly-2}
&r^{n+2}(r^{-1} |u(i)| + |Du(i)| + r^\alpha [Du(i)]_{\alpha, C})^2 \leq c \int_{M \cap B_1} d_{\mathbf{C}^0}^2, \\
&r^{n+2}(r^{-1} |f(i)| + |Df(i)| + r^\alpha [Df(i)]_{\alpha, C})^2 \leq c \int_{M \cap B_1} d_{\mathbf{C}^0}^2 .
\end{align}
And we have the following integral estimates
\begin{align}\label{eqn:integral-est-poly-1}
&\sum_{i=1}^d \int_{(\partial W(i) \times \mathbb{R}^m) \cap (B_{1/2}\setminus B_{5r_y}(\{0\}\times \mathbb{R}^m))} r |f(i)|^2 \\ \label{eqn:integral-est-poly-2} +
& + \sum_{i=1}^d \int_{M(i) \cap B_{10r_y}(\{0\}\times \mathbb{R}^m)} r^2 + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/2} \setminus B_{5r_y}(\{0\}\times \mathbb{R}^m)} r^2 |D u(i)|^2 \\
&\quad \leq c \int_{M \cap B_1} d_{\mathbf{C}^0}^2 .
\end{align}
\end{comment}
\end{lemma}
\subsection{Multiplicity-one convergence}\label{sec:mult-one}
We will be working with a one-sided excess, and therefore must restrict our admissible class of cones to those for which one-sided closeness (so, smallness of $E$) gives regularity. We call these ``atomic.'' This restriction can be easily avoided by considering a \emph{two-sided} excess, similar to that of \cite{Neshan}. However, we shall see that any \emph{integrable} polyhedral cone (as per our Definition \ref{def:integrable}) is atomic, so for our purposes this is no restriction at all.
\begin{definition}
We say a cone $C_0$ is \emph{atomic} if it cannot be written as the union (i.e. varifold sum) of two non-zero stationary cones.
\end{definition}
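A simple non-example, for illustration: the one-dimensional cone $\mathbf{X} \subset \mathbb{R}^2$ consisting of two distinct lines through the origin is \emph{not} atomic, since it is the varifold sum of the two lines, each of which is a non-zero stationary cone.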
\begin{lemma}
Any polyhedral integrable cone $\mathbf{C}_0^2 \subset \mathbb{R}^{2+k}$ is atomic. The cone $\mathbf{Y}^1 \times \mathbb{R}$ is atomic.
\end{lemma}
\begin{proof}
Suppose on the contrary we can write $\mathbf{C}_0 = \mathbf{C}_0^{(1)} + \mathbf{C}_0^{(2)}$, where each $\mathbf{C}^{(i)}_0$ is a non-zero stationary polyhedral cone. The geodesic nets $\mathbf{C}^{(i)}_0 \cap \mathbb{S}^{1+k}$ are disjoint, and so we can construct a $1$-parameter family of stationary polyhedral cones $\mathbf{C}_t$ obtained by rotating $\mathbf{C}_0^{(1)}$ but keeping $\mathbf{C}_0^{(2)}$ fixed. Since the deformation $\mathbf{C}_t$ is not given by a \emph{global} rotation, $\mathbf{C}_0$ is non-integrable, contradicting our assumption. Atomicity of $\mathbf{Y}\times \mathbb{R}$ is obvious.
\end{proof}
The following Lemma is the reason we introduce the notion of atomicity.
\begin{lemma}\label{lem:mult-one}
Let $\mathbf{C} = \mathbf{C}_0^\ell\times \mathbb{R}^m$ be a stationary atomic cone, where $\mathbf{C}_0$ is either smooth or polyhedral (in which case $\ell = 2$). Let $M_i$ be a sequence of integral varifolds, so that $M_i \in \mathcal{N}_{\varepsilon_i}(\mathbf{C}^0)$ with $\varepsilon_i \to 0$. Then $M_i \to \mathbf{C}$ as varifolds with multiplicity $1$.
\end{lemma}
\begin{proof}
After taking a subsequence we have convergence on compact subsets of $B_1$ to some stationary $M$, and by our hypothesis we know $M$ is supported in $\mathbf{C}$. We claim that $M$ has constant multiplicity on each subcone of $\mathbf{C}^0$. If $\mathbf{C}^0_0$ is smooth this is immediate from the constancy theorem. If $C_0$ is polyhedral, then the constancy theorem implies $M$ has constant density on each wedge, and by stationarity of the $Y$ junction any three wedges which meet must have the same multiplicity (compare Lemma \ref{lem:vect-vs-scalar-cond}). This proves the claim.
Therefore $M$ is stationary and supported inside $\mathbf{C}^0$, and since $\mathbf{C}^0$ is atomic we must have $M = p \mathbf{C}^0$ for some integer $p$. Since $0 \in M$ we have $p \geq 1$. On the other hand, by our restriction $\theta_M(0, 1) \leq \frac{3}{2} \theta_{\mathbf{C}^0}(0)$, we must have $p \leq 1$. This proves the Lemma.
\end{proof}
\subsection{$\varepsilon$-regularity of Allard and Simon}
Let us recall the $\varepsilon$-regularity results for the plane and $\mathbf{Y}\times \mathbb{R}^m$.
\begin{theorem}[Allard's $\varepsilon$-regularity for the plane \cite{All}]\label{thm:allard}
There are $\varepsilon(n, k)$ and $\mu(n, k)$ so that the following holds. Suppose $M^n \in \mathcal{N}_{\varepsilon}(\mathbb{R}^n)$. Then there is a $C^{1,\mu}$ function $u : \Omega \subset \mathbb{R}^n \to \mathbb{R}^k$, so that
\begin{gather}
M \cap B_{3/4} = \mathrm{graph}_{\mathbb{R}^n}(u), \quad |u|_{C^{1,\mu}} \leq c(n, k) E(M, \mathbb{R}^n, 1)^{1/2}.
\end{gather}
\end{theorem}
In \cite{simon1} Simon proved an $\varepsilon$-regularity theorem for cones of the form $\mathbf{Y}\times \mathbb{R}^m$. In his original paper, Simon worked in a so-called multiplicity-one class of varifolds, but by using our Lemma \ref{lem:graph-smooth} in place of his Lemma 2.6 one can remove this hypothesis (see Appendix \ref{sec:graphical-smooth}). A caveat: our Lemma \ref{lem:graph-smooth} is \emph{not} sufficient to remove the multiplicity-one class assumption from Simon's various structure theorems for the singular set.
\begin{theorem}[Simon's $\varepsilon$-regularity for $\mathbf{Y}\times \mathbb{R}^m$ \cite{simon1}]\label{thm:Y-reg}
There are $\varepsilon(m, k)$, $\mu(n, k)$ so that the following holds. Suppose $M^{1+m} \in \mathcal{N}_{\varepsilon}(\mathbf{Y}^1\times \mathbb{R}^m)$. Then $M\cap B_{3/4}$ is $C^{1,\mu}$-close to $\mathbf{Y}\times \mathbb{R}^m$ in the following sense: We can decompose $M\cap B_{3/4} = M(1) \cup M(2) \cup M(3)$, so that for each $i = 1, 2, 3$ there is a domain $\Omega(i) \subset Q(i)$, and a $C^{1,\mu}$ function $u(i) : \Omega(i) \to Q(i)^\perp$, so that
\begin{gather}
M(i) = \{ x + u(i)(x) : x \in \Omega(i) \}, \quad Q(i) \cap B_{1/2} \subset \Omega(i), \quad |u(i)|_{C^{1,\mu}} \leq c(m,k) E(M, \mathbf{Y}\times \mathbb{R}^m, 1)^{1/2} .
\end{gather}
Each $\Omega(i)$ is graphical over $H(i)$, in the sense that there are $C^{1,\mu}$ functions
\begin{gather}
f(i) : \partial H(i) \cap B_{3/4} \to (\partial H(i))^\perp \cap Q(i), \quad |f(i)|_{C^{1,\mu}} \leq c(m,k) E(M, \mathbf{Y}\times \mathbb{R}^m, 1)^{1/2},
\end{gather}
so that
\begin{gather}
\partial \Omega(i) \cap B_{1/2} \subset \{ x' + f(i)(x') : x' \in \partial H(i) \cap B_{3/4} \} .
\end{gather}
\end{theorem}
\begin{proof}[Proof (see \cite{simon1})]
Ensuring $\varepsilon(k, m)$ is sufficiently small, by Lemma \ref{lem:global-graph} we have $\mathrm{sing} M \cap B_{3/4} \subset B_{3/4} \cap B_{1/10}(\{0\} \times \mathbb{R}^m)$. Write $\varepsilon^2 = E(M, \mathbf{Y}\times \mathbb{R}^m, 1)$. Now given $Z \in \mathrm{sing} M \cap B_{3/4}$, we have
\begin{gather}\label{eqn:Y-small-norm}
E(M, \mathbf{Y} \times \mathbb{R}^m, Z, 1/4) \leq c \,\varepsilon^2 .
\end{gather}
For topological reasons (see Proposition \ref{prop:no-holes-Y}), \eqref{eqn:Y-small-norm} implies $M$ must satisfy the $\delta$-no-holes condition w.r.t. $\mathbf{Y} \times \mathbb{R}^m$ for $\delta(\varepsilon) \to 0$ as $\varepsilon \to 0$.
Therefore, by \cite{simon1} (and our Lemma \ref{lem:graph-smooth}) we have for every $Z \in \mathrm{sing} M \cap B_{3/4}$ a rotation $q_Z \in SO(n+k)$, so that
\begin{gather}
\rho^{-n-2} \int_{M \cap B_\rho(Z)} d_{Z + q_Z(\mathbf{Y}\times \mathbb{R}^m)}^2 \leq c(k, m) \rho^{2\mu} \varepsilon^2,
\end{gather}
for some fixed $\mu = \mu(k, m) \in (0, 1)$. In particular, for any other $W \in \mathrm{sing}(M) \cap B_{3/4}$, we have
\begin{gather}\label{eqn:holder-sing-Y}
|q_Z - q_W| \leq c(k, m) \varepsilon |Z - W|^{\mu}, \quad |q_Z - Id| \leq c(k, m) \varepsilon, \quad d(Z, \{0\} \times \mathbb{R}^m) \leq c(k, m)\varepsilon.
\end{gather}
From \eqref{eqn:holder-sing-Y} we deduce that we can parameterize $\mathrm{sing} M \cap B_{3/4}$ by a map $F : \{0\} \times \mathbb{R}^m \to \mathbb{R}^{1+k}\times\{0\}$ having $C^{1,\mu}$ norm bounded by $c(k,m)\varepsilon$. We define $f(i) := \pi_{Q(i)^T}(F)$.
On the other hand, take now an $X \in \mathrm{reg} M \cap B_{3/4}\cap B_{1/10}(\{0\} \times \mathbb{R}^m)$, and set $4\rho = d(X, \mathrm{sing} M) = d(X, Z)$, where $Z \in \mathrm{sing} M$. Then up to renumbering we have
\begin{gather}\label{eqn:decay-away-Y}
\rho^{-n-2} \int_{M \cap B_\rho(X)} d_{Z + q_Z(Q(1))}^2 \leq \rho^{-n-2} \int_{M \cap B_{4\rho}(Z)} d_{Z + q_Z(\mathbf{Y}\times \mathbb{R}^m)}^2 \leq c \,\varepsilon^2 \rho^{2\mu}.
\end{gather}
Therefore, by Allard we can write $M \cap B_{\rho/2}(X)$ as a graph of $u$ over $Z + q_Z(Q(1))$ with estimates
\begin{gather}
\rho^{-1} |u| + |Du| + \rho^{\mu} [Du]_\mu \leq c(k,m) \varepsilon \rho^\mu.
\end{gather}
Using estimates \eqref{eqn:holder-sing-Y}, we can therefore write $M \cap B_{\rho/2}(X)$ as a graph over $Q(1)$ with uniform $C^{1,\mu}$ norm bounded by $c(k,m)\varepsilon$. Moreover, if $\tilde q_{X} \in SO(n+k)$ is the rotation taking $Q(1)$ to the tangent space $T_X M$, then \eqref{eqn:decay-away-Y} shows that
\begin{gather}\label{eqn:holder-reg}
|q_Z - \tilde q_X| \leq c(k,m)\varepsilon |Z - X|^\mu.
\end{gather}
Since $X$ is arbitrary, estimates \eqref{eqn:holder-sing-Y} and \eqref{eqn:holder-reg} show that $u$ extends as a $C^{1,\mu}$ function up to and including the boundary $\pi_{Q(1)}(\mathrm{sing} M)$.
\end{proof}
\begin{definition}\label{def:Y-graph}
For ease of notation, we will write the following to indicate $M$ decomposes as in Theorem \ref{thm:Y-reg}.
\begin{gather}
M \cap B_{3/4} = \mathrm{graph}_{\mathbf{Y}\times \mathbb{R}^m}(u, f, \Omega), \quad B_{1/2} \subset\Omega, \quad |u|_{C^{1,\alpha}} + |f|_{C^{1,\alpha}} \leq c(k,m) E(M, \mathbf{Y}\times \mathbb{R}^m, 1)^{1/2}.
\end{gather}
\end{definition}
\subsection{Graphicality for polyhedra}
We first prove a ``crude'' graphicality for polyhedral cones, from which we push towards the spine as far as possible.
\begin{lemma}\label{lem:poly-global-graph}
Given any $\beta, \tau > 0$, there is an $\varepsilon_1(\mathbf{C}^0, \beta, \tau)$ so that the following holds. Given $M \in \mathcal{N}_{\varepsilon_1}(\mathbf{C}^0)$, then we can decompose
\begin{gather}
M \cap B_{3/4} \setminus B_{\tau}(\{0\} \times \mathbb{R}^m) = M(1) \cup \cdots \cup M(d),
\end{gather}
where for each $i$ there is some domain $\Omega(i) \subset P(i) \times \mathbb{R}^m$, and $C^{1,\alpha}$ function $u(i) : \Omega(i) \to (P(i) \times \mathbb{R}^m)^\perp$, so that
\begin{gather}
M(i) = \{ x + u(i)(x) : x \in \Omega(i) \} .
\end{gather}
Each $\Omega(i)$ is graphical over $W(i)\times \mathbb{R}^m$ in the sense that there are $C^{1,\alpha}$ functions $f(i) : \partial W(i) \times \mathbb{R}^m \to (\partial W(i)^\perp \cap P(i)) \times \mathbb{R}^m$,
so that
\begin{gather}
\partial\Omega(i) \cap B_{1/2}\setminus B_{2\tau}(\{0\} \times \mathbb{R}^m) \subset \{ x' + f(i)(x') : x' \in \partial W(i) \times \mathbb{R}^m \} .
\end{gather}
Moreover, we have the pointwise estimates
\begin{align}\label{eqn:polyhedral-graph-est-1}
&\sup_{\Omega(i) \cap B_{1/2} \setminus B_{2\tau}(\{0\} \times \mathbb{R}^m)} r^{-1} |u(i)| + |Du(i)| + r^\alpha [Du(i)]_{\alpha, C} \leq \beta, \\ \label{eqn:polyhedral-graph-est-2}
&\sup_{(\partial W(i) \times \mathbb{R}^m) \cap B_{1/2} \setminus B_{2\tau}(\{0\} \times \mathbb{R}^m)} \quad r^{-1} |f(i)| + |Df(i)| + r^\alpha [Df(i)]_{\alpha, C} \leq \beta.
\end{align}
\end{lemma}
\begin{proof}
This is essentially a direct Corollary of Lemma \ref{lem:mult-one}, and the $\varepsilon$-regularity of the plane and $\mathbf{Y}$-type cones. If the Lemma failed, we would have a counter-example sequence $M_i$. Passing to a subsequence, we have by Lemma \ref{lem:mult-one} multiplicity-$1$ varifold convergence $M_i \to \mathbf{C}^0$ on compact subsets of $B_1$.
In any ball avoiding the lines $\partial W(i) \times \mathbb{R}^m$ we can eventually write $M_i$ as a $C^{1,\alpha}$ graph over $\mathbf{C}$ by Allard's theorem, satisfying the (local, scale-invariant) estimates \eqref{eqn:polyhedral-graph-est-1}. Similarly, in any ball centered on a line $\partial W(i) \times \mathbb{R}^m$, but disjoint from the axis $\{0\} \times \mathbb{R}^m$, we can eventually decompose $M_i$ into graphs over $\mathbf{C}$ as in Theorem \ref{thm:Y-reg}, and having estimates \eqref{eqn:polyhedral-graph-est-2}.
\end{proof}
\begin{definition}\label{def:poly-graph}
For ease of notation, we write
\begin{gather}
M \cap B_{3/4} \setminus B_{\tau}(\{0\} \times \mathbb{R}^m) = \mathrm{graph}_{\mathbf{C}}(u, f, \Omega), \quad B_{1/2} \setminus B_{2\tau}(\{0\} \times \mathbb{R}^m) \subset \Omega, \quad |u|_{C^{1,\alpha}} + |f|_{C^{1,\alpha}} \leq \beta
\end{gather}
to indicate the decomposition as in Lemma \ref{lem:poly-global-graph}.
\end{definition}
\begin{remark}\label{rem:wedge-cont}
One consequence of Lemma \ref{lem:poly-global-graph} is that the number and size of wedges for polyhedral cones is continuous under varifold convergence.
\end{remark}
For $y \in \mathbb{R}^m$, let us define the torus
\begin{gather}
U(\rho, y, \gamma) = \{ (\xi, \eta) \in \mathbb{R}^{2+k}\times \mathbb{R}^m: (|\xi| - \rho)^2 + |\eta - y|^2 \leq \gamma \rho^2 \},
\end{gather}
and the ``halved-torus''
\begin{gather}
U_+(\rho, y, \gamma) = U(\rho, y, \gamma) \cap \{ (\xi, \eta) : |\xi| \geq \rho \}.
\end{gather}
The following Lemma gives us a criterion to decide how close to the spine we can push graphicality, and is the key to integral estimates \eqref{eqn:integral-est-poly-1}. The graphicality assumption in the half-torus allows us to avoid working in a multiplicity-1 class.
\begin{lemma}\label{lem:poly-tiny-graph}
For any $\beta > 0$ there is an $\varepsilon_2(\mathbf{C}^0, \beta)$ so that the following holds. Take $M^n \in \mathcal{N}_{1/10}(\mathbf{C})$. Pick $\rho \leq 1/16$, and $y \in B_{3/4}^m$. Suppose we know
\begin{equation}\label{eqn:poly-tiny-u}
M \cap U_+(\rho, y, 1/16) \subset \mathrm{graph}_{\mathbf{C}}(u,\Omega, f), \quad |u|_{C^{1,\alpha}} + |f|_{C^{1,\alpha}} \leq 1/10,
\end{equation}
where $\mathrm{graph}_\mathbf{C}(u, \Omega, f)$ is a decomposition as in Lemma \ref{lem:poly-global-graph}, and
\begin{equation}\label{eqn:poly-tiny-dist}
\rho^{-n-2} \int_{M \cap U(\rho, y, 1/4)} d_{\mathbf{C}}^2 + \rho||H_M||_{L^\infty(U(\rho, y, 1/4))} \leq \varepsilon_2 .
\end{equation}
Then we have
\begin{gather}\label{eqn:poly-tiny-concl}
M \cap U(\rho, y, 1/8) \subset \mathrm{graph}_{\mathbf{C}}(u,\Omega, f), \quad |u|_{C^{1,\alpha}} + |f|_{C^{1,\alpha}} \leq \beta.
\end{gather}
\end{lemma}
\begin{proof}
By dilation invariance and monotonicity, we see there is no loss in supposing $\rho = 1/2$. Suppose the Lemma is false, and consider a counterexample sequence $M_i$, $y_i$, $\varepsilon_i \to 0$, which satisfy the hypothesis of the Lemma and $M_i \cap U(\rho, y_i, 1/8) \neq \emptyset$, but fail \eqref{eqn:poly-tiny-concl}. Passing to a subsequence, the $y_i \to y \in B_{3/4}^m$, and in $U(\rho, y, 1/5)$ the $M_i$'s converge to a stationary varifold supported in $\mathbf{C}^0$. The multiplicity of the limit in each component of $\mathbf{C}^0 \cap U(\rho, y, 1/5)$ is constant, but by the graphicality assumption we converge with multiplicity one inside $U_+(\rho, y, 1/16)$.
Therefore the convergence is with multiplicity $1$, and by Theorems \ref{thm:allard}, \ref{thm:Y-reg} we deduce that for $i \gg 1$ the conclusion \eqref{eqn:poly-tiny-concl} holds for $M_i$, a contradiction.
\end{proof}
Using Lemma \ref{lem:poly-tiny-graph}, we can obtain the finer graphical estimates of Lemma \ref{lem:poly-graph}.
\begin{proof}[Proof of Lemma \ref{lem:poly-graph}]
We can assume $\beta \leq 1/10$. Ensure $\varepsilon \leq \min\{ \varepsilon_1(\mathbf{C}^0, \beta, \tau), \varepsilon_2(\mathbf{C}^0, \beta)\}$, the constants from Lemmas \ref{lem:poly-global-graph}, \ref{lem:poly-tiny-graph}. Recalling Definition \ref{def:poly-graph}, we have the crude decomposition
\begin{gather}
M \cap B_{3/4} \setminus B_{\tau}(\{0\} \times \mathbb{R}^m) = \mathrm{graph}_{\mathbf{C}^0}(u, f, \Omega), \quad B_{1/2} \setminus B_{2\tau}(\{0\} \times \mathbb{R}^m) \subset \Omega, \quad |u|_{C^{1,\alpha}} + |f|_{C^{1,\alpha}} \leq \beta .
\end{gather}
Given $y \in B_{3/4}^m$, define
\begin{gather}
r_y = \inf \{ r' : \text{\eqref{eqn:poly-tiny-u} holds for all $r' < \rho < 3/4$} \}.
\end{gather}
According to this definition, we can extend $\Omega(i)$, $u(i)$, and $f(i)$ to obtain the decomposition of \eqref{eqn:poly-graph-1}, \eqref{eqn:poly-graph-2}, \eqref{eqn:poly-graph-3}, with estimates \eqref{eqn:point-est-poly-1}, \eqref{eqn:point-est-poly-11}. Moreover, by Lemma \ref{lem:poly-global-graph} $r_y \leq \tau$. If $r_y > 0$, then necessarily by Lemma \ref{lem:poly-tiny-graph}, \eqref{eqn:poly-tiny-dist} must fail at $r_y$, and hence
\begin{gather}
r_y^{n+2} \varepsilon_2 \leq \int_{M \cap U(r_y, y, 1/4)} d_{\mathbf{C}^0}^2 + r_y^{n+3} ||H_M||_{L^\infty(U(r_y, y, 1/4))}.
\end{gather}
In particular, by monotonicity we have
\begin{gather}
\int_{M \cap B_{20 r_y}(0, y)} r^2 \leq c(\mathbf{C}^0, \beta) \int_{M \cap U(r_y, y, 1/4)} d_{\mathbf{C}^0}^2 + c(\mathbf{C}, \beta) r_y^{n+3} ||H||_{L^\infty(B_1)}.
\end{gather}
Take a Vitali subcover $\{B_{2\rho_j}(0, y_j)\}_j$ of
\begin{gather}
\{B_{10r_y}(0, y) : y \in B_{3/4}^m \text{ and } r_y > 0 \},
\end{gather}
and then by construction $\{B_{10\rho_j}(0, y_j)\}_j$ covers $\mu_M$-a.e. $B_{3/4} \cap B_{10r_y}(\{0\} \times \mathbb{R}^m)$, and the $U(\rho_j, y_j, 1/4) \subset B_{2\rho_j}(0, y_j)$ are disjoint. Note that, by disjointness, $\sum_j \rho_j^m \leq c(m)$. We deduce that
\begin{align}
\int_{M \cap B_{3/4} \cap B_{10r_y}(\{0\} \times \mathbb{R}^m)} r^2
&\leq \sum_j \int_{M \cap B_{20\rho_j}(0, y_j)} r^2 \\
&\leq \sum_j c\int_{M \cap U(\rho_j, y_j, 1/4)} d_{\mathbf{C}^0}^2 + \sum_j c \rho_j^{n+3} ||H||_{L^\infty(B_1)}\label{eqn:poly-global-graph-vitali} \\
& \leq c(\mathbf{C}^0, \beta) \int_{M \cap B_1} d_{\mathbf{C}^0}^2 + c(\mathbf{C}^0, \beta) ||H||_{L^\infty(B_1)}
\end{align}
We prove the additional pointwise and $L^2$ estimates. We claim that
\begin{gather}
(x, y) \not\in B_{2r_y}(\{0\} \times \mathbb{R}^m) \implies B_{|x|/2}(x, y) \cap B_{r_y}(\{0\} \times \mathbb{R}^m) = \emptyset.
\end{gather}
Otherwise, suppose the latter intersection is non-empty, and contains some $(x', y')$. Then we have
\begin{gather}
|x|/2 \leq |x'| \leq 3|x|/2, \quad \text{ and } (x', y') \in B_{r_{y''}}(0, y'') \text{ for some $y''$},
\end{gather}
which implies that
\begin{gather}
|(x, y) - (0, y'')| < |x|/2 + r_{y''} \leq |x'| + r_{y''} < 2r_{y''} ,
\end{gather}
a contradiction. This proves the claim.
Define $\delta(\mathbf{C}^0)$ by
\begin{gather}
\delta = 1/100 \cdot \text{(smallest geodesic length in $\mathbf{C}_0\cap \mathbb{S}^{1+k}$)} \leq 1/20,
\end{gather}
so that $B_{10\delta|x|}(x, y) \cap B_{10\delta |x'|}(x', y') = \emptyset$ whenever $(x, y)$ and $(x', y')$ belong to different triple junctions in $\mathbf{C}$.
Now provided $\beta(m, k, \delta)$ is sufficiently small, given any $(x, y) = (x' + u(i)(x', y), y) \in \mathrm{sing} M \setminus B_{2r_y}(\{0\} \times \mathbb{R}^m)$, we can use the above claim and Simon's regularity at scale $B_{10\delta |x|}(x, y)$ to deduce
\begin{align}\label{eqn:poly-graph-6}
\sup_{B_{5\delta |x|}(x, y)} |x|^{n+2}|Du(i)|^2 + |x|^{n}|f(i)|^2
&\leq c \int_{B_{10\partiallta |x|}(x, y) \cap M} d_{\mathbf{C}^0}^2 + c|x|^{n+3} ||H_M||_{L^\infty(B_1)} \\
&\leq c(\mathbf{C}^0, \beta) E(M, \mathbf{C}^0, 1).
\end{align}
Of course, on the LHS we could put any of the $C^{1,\alpha}$ estimates for $u$ or $f$ from Theorem \ref{thm:Y-reg}, normalized to scale like $|x|^{n+2}$.
Let $\{(x_j, y_j)\}_j$ be the centers of a Vitali cover of
\begin{gather}\label{eqn:poly-graph-5}
\{ B_{\delta |x|}(x, y) : (x, y) \in \mathrm{sing} M \cap B_{1/2} \setminus B_{2r_y}(\{0\} \times \mathbb{R}^m) \}.
\end{gather}
Then the balls $\{B_{5\delta|x_j|}(x_j, y_j)\}_j$ cover \eqref{eqn:poly-graph-5}, and have overlap bounded by a universal constant $c(n)$. In particular, since $\beta \leq 1/10$ the number of $5\delta|x_j|$-balls meeting $M \cap \{ |x| = r \} \cap B_1$ is bounded by $c(\mathbf{C}^0) r^{-m}$.
We can sum up the estimates \eqref{eqn:poly-graph-6} to obtain
\begin{align}
\sum_{i=1}^d \int_{(\partial W(i) \times \mathbb{R}^m) \cap (B_{1/2} \setminus B_{5r_y}(\{0\} \times \mathbb{R}^m))} r |f(i)|^2
&\leq c \sum_j \left( \int_{M \cap B_{5\delta|x_j|}(x_j, y_j)} d_{\mathbf{C}}^2 + |x_j|^{n+3} ||H_M||_{L^\infty(B_1)} \right) \\
&\leq c \int_{M \cap B_1} d_{\mathbf{C}}^2 + c \int_0^1 r^{5} \frac{dr}{r} ||H||_{L^\infty(B_1)} \\
&\leq c(\mathbf{C}, \beta) E(M, \mathbf{C}, 1) .
\end{align}
If instead $(x, y) \in M \setminus B_{2r_y}(\{0\} \times \mathbb{R}^m)$ and $d((x, y), \mathrm{sing} M) \geq \delta|x|/5$, then we can apply Allard to deduce
\begin{gather}\label{eqn:poly-graph-4}
\sup_{B_{\delta |x|/10}(x, y)} |x|^{n+2} |Du(i)|^2 \leq c( \mathbf{C}^0, \beta) \int_{B_{\delta |x|/5}(x, y) \cap M} d_{\mathbf{C}^0}^2 + c(\mathbf{C}) |x|^{n+3} ||H_M||_{L^\infty(B_1)}.
\end{gather}
By taking a Vitali cover of
\begin{gather}
\{ B_{\delta |x|/10}(x, y) : (x, y) \in M \setminus B_{2r_y}(\{0\} \times \mathbb{R}^m) \text{ and } d((x, y), \mathrm{sing} M) \geq \delta |x|/5 \},
\end{gather}
we can use both Vitali covers to sum up estimates \eqref{eqn:poly-graph-6} and \eqref{eqn:poly-graph-4} as before to obtain
\begin{gather}
\sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{5r_y}(\{0\} \times \mathbb{R}^m))} r^2 |Du(i)|^2 \leq c(\mathbf{C}^0, \beta) E(M, \mathbf{C}, 1).
\end{gather}
This proves the $L^2$ estimates \eqref{eqn:integral-est-poly-1}, \eqref{eqn:integral-est-poly-2}. The pointwise estimates \eqref{eqn:point-est-poly-2}, \eqref{eqn:point-est-poly-22} follow directly from Simon's and Allard's regularity theorems as in \eqref{eqn:poly-graph-6}, \eqref{eqn:poly-graph-4}.
\end{proof}
\section{$L^2$ estimates}\label{sec:estimates}
We demonstrate that various decay and growth quantities are controlled at the scale of excess. We require first a
\begin{definition}\label{def:chunky}
Set $r_\nu = 2^{-\nu}$, where $\nu \in \{0, 1, 2, \ldots \}$. For each $\nu$, let us choose and fix (for the duration of this paper) a covering of $\mathbb{R}^m$ by disjoint, half-open cubes $\{Q_{\nu \mu}\}_\mu$, each having side length $r_\nu$.
We say $f(r, y) : \mathbb{R}_+ \times \mathbb{R}^m \to \mathbb{R}$ is \emph{chunky} if $f$ is constant on each annular cylinder $[r_{\nu+1}, r_\nu) \times Q_{\nu\mu}$.
\end{definition}
We introduce this class of functions because of the following compactness result.
\begin{lemma}
Let $\{\kappa_i\}$ be a sequence of chunky functions with $||\kappa_i||_{L^\infty(U)} \leq c(U)$ for all $U \subset\subset \mathbb{R}_+\times \mathbb{R}^m$. Then we can find a chunky function $\kappa$, admitting the same bounds $||\kappa||_{L^\infty(U)} \leq c(U)$, and a subsequence $i'$, so that $\kappa_{i'} \to \kappa$ pointwise, and uniformly on compact sets.
\end{lemma}
\begin{proof}
This follows from a diagonal argument: each compact $U \subset\subset \mathbb{R}_+\times \mathbb{R}^m$ meets only finitely many of the cylinders $[r_{\nu+1}, r_\nu) \times Q_{\nu\mu}$, so $\kappa_i|_U$ is determined by finitely many values, each bounded by $c(U)$. Extracting a subsequence along which all of these values converge (over a compact exhaustion) yields a chunky limit $\kappa$ with the same bounds, and the convergence is uniform on compact sets since the $\kappa_{i'}$ and $\kappa$ are constant on the same pieces.
\end{proof}
We work towards the following Theorem. As before we fix a polyhedral cone ${\mathbf{C}^0_0}^2 \subset \mathbb{R}^{2+k}$, composed of wedges $\{W(i)\}_{i=1}^d$ and lines $\{L(i)\}_{i=1}^{2d/3}$. We take $\mathbf{C}^0 = \mathbf{C}^0_0 \times \mathbb{R}^m$. Recall Definition \ref{def:no-holes} of the ``no-holes'' condition.
\begin{theorem}\label{thm:l2-est}
For any $\tau > 0$ and $\alpha \in (0, 1)$, there is an $\varepsilon(\mathbf{C}^0, \tau)$ so that the following holds. Let $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$ and decompose
\begin{gather}
M \cap B_{3/4} \setminus B_{\tau}({ \{0\} \times \R^m }) = \mathrm{graph}_{\mathbf{C}}(u, f, \Omega)
\end{gather}
as in Definition \ref{def:poly-graph}/Lemma \ref{lem:poly-global-graph}.
Then provided $\theta_M(0) \geq \theta_{\mathbf{C}}(0)$, the following decay/growth estimates hold:
\begin{align}\label{eqn:thm-point-est}
&\int_{M \cap B_1} \frac{d_{\mathbf{C}}^2}{R^{n+2-\alpha}} + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/10} \setminus B_\tau({ \{0\} \times \R^m })} R^{2-n} |\partial_R (u(i)/R)|^2 + \int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} \\
&\quad \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1),
\end{align}
where $X^\perp = \pi_{N_XM}(X)$ is the projection of the position vector to the normal bundle of $M$.
Provided $M$ satisfies the $\tau/10$-no-holes condition w.r.t. $\mathbf{C}^0$ in $B_{1/4}$, then we have decay along the spine:
\begin{align}\label{eqn:thm-spine-est}
&\int_{M \cap B_{1/4}} \frac{d_{\mathbf{C}}^2}{\max(r, \tau)^{2-\alpha}} + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/4} \setminus B_\tau(L\times \mathbb{R}^m)} \frac{|u(i) - \kappa^\perp|^2}{\max(r, \tau)^{2+2-\alpha}} \\
&\quad \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1) ,
\end{align}
where we write $L = \cup_{i=1}^{2d/3} L(i)$ for the lines of $\mathbf{C}_0$, and $\kappa : (0, 1] \times B_1^m \to \mathbb{R}^{2+k} \times \{0\}$ is a chunky function admitting the bound
\begin{gather}\label{eqn:thm-k-est}
\sup_{(0, 1]\times B_1^m} |\kappa|^2 \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1).
\end{gather}
\end{theorem}
Let us give a brief
\begin{proof}[Outline of proof]
We will first show that testing the stationarity of $M$ with a radial vector field proportional to $d_{\mathbf{C}}^2$, the following estimate holds
\begin{gather}\label{eqn:outline-decay}
\int_{M \cap B_1} \frac{d_{\mathbf{C}}^2}{R^{n+2-\alpha}} \leq c(\mathbf{C}, \alpha) E(M,\mathbf{C}, 1) +c(\mathbf{C}, \alpha) \int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} .
\end{gather}
Due to the fact that $\partial_R X$, where $X \equiv (x', y) + u(i)(x', y)$, is tangent to $M$, we also have a pointwise inequality
\begin{gather}\label{eqn:outline-hardt-simon}
|\partial_R(u(i)/R)|^2 \leq 2 |X^\perp|^2/R^4
\end{gather}
on each $\Omega(i) \cap B_{1/10}$.
Next, using a cylindrical vector field of the form $\phi^2(R) (x, 0)$, and the effective graphical estimates of Lemma \ref{lem:poly-graph}, we will show that whenever $\theta_M(0) \geq \theta_{\mathbf{C}^0}(0)$, we have
\begin{gather}\label{eqn:outline-density}
\int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} \leq c(\mathbf{C}^0) E(M, \mathbf{C}, 1).
\end{gather}
This estimate controlling density excess by $L^2$ excess is very important, and is by far the most involved. Combining \eqref{eqn:outline-density} with \eqref{eqn:outline-decay}, \eqref{eqn:outline-hardt-simon} gives estimate \eqref{eqn:thm-point-est}.
To obtain the estimates \eqref{eqn:thm-spine-est}, we can apply \eqref{eqn:thm-point-est} at each singular point $Z$ satisfying $\theta_M(Z) \geq \theta_\mathbf{C}(0)$, which by assumption form a $\tau/10$-dense set in a $\tau$-neighborhood of the spine. Now sum all these estimates up over cubes centered in ${ \{0\} \times \R^m }$.
\end{proof}
\subsection{Decay estimate} We bound the decay and growth rates in terms of $L^2$ distance and density drop. We first need a helper Lemma, which says we can find a good $C^1$ approximation to the distance function to our polyhedral cone.
\begin{lemma}\label{lem:smoothing-d}
We can find a $1$-homogeneous function $\tilde d$, which is $C^1$ on $\{\tilde d > 0\}$, and satisfies
\begin{equation}\label{eqn:tilde-d}
\frac{1}{c(\mathbf{C}^0)} d_{\mathbf{C}} \leq \tilde d \leq c(\mathbf{C}^0) d_{\mathbf{C}}, \quad |D\tilde d| \leq c(\mathbf{C}^0).
\end{equation}
\end{lemma}
\begin{proof}
We first consider a $1$-dimensional stationary cone $\tilde \mathbf{C}_0^1 \subset \mathbb{R}^{1+k}$, so that $\partial \tilde \mathbf{C}_0 \cap \mathbb{S}^{k}$ is a finite collection of points. By smoothing $d_{\tilde \mathbf{C}_0}/|x|$ at the appropriate cut-loci, we can easily obtain a $\phi : \mathbb{S}^{k} \to \mathbb{R}$ so that $\phi$ is $C^1$ on $\{ \phi > 0 \}$, and
\begin{gather}
\frac{1}{2} \frac{d_{\tilde \mathbf{C}_0}}{|x|} \leq \phi(x/|x|) \leq 2 \frac{d_{\tilde \mathbf{C}_0}}{|x|} , \quad \text{ and } \quad |D\phi| \leq 10.
\end{gather}
Now we consider the polyhedral cone $\mathbf{C}_0^2 \subset \mathbb{R}^{2+k}$. Recall that by \cite{AllAlm2}, $\mathbf{C}_0 \cap \mathbb{S}^{1+k}$ consists of finitely many geodesic arcs. By applying the previous paragraph to a small neighborhood of every vertex, we can construct a function $\psi : \mathbb{S}^{1+k} \to \mathbb{R}$ which satisfies:
\begin{gather}
\frac{1}{c(\mathbf{C}^0)} \frac{d_{\mathbf{C}_0}}{|x|} \leq \psi(x/|x|) \leq c(\mathbf{C}^0) \frac{d_{\mathbf{C}_0}}{|x|}, \quad \psi \text{ is $C^1$ on } \{ \psi > 0 \}, \quad |D\psi| \leq c(\mathbf{C}^0).
\end{gather}
Now set $\tilde d(x, y) = |x|\psi(x/|x|)$.
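For completeness, the claimed bounds can be verified directly: writing $\hat x := x/|x|$ (and extending $\psi$ and $D\psi$ $0$-homogeneously; note $\tilde d$ does not depend on $y$), on $\{\tilde d > 0\}$ we have
\begin{gather}
D \tilde d(x, y) = \psi(\hat x)\, \hat x + \left( \mathrm{Id} - \hat x \otimes \hat x \right) D\psi(\hat x),
\end{gather}
so $|D\tilde d| \leq c(\mathbf{C}^0)$, while the two-sided comparison with $d_{\mathbf{C}}$, the $1$-homogeneity, and the $C^1$ regularity on $\{\tilde d > 0\}$ follow immediately from the corresponding properties of $\psi$.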
\end{proof}
\begin{prop}\label{prop:decay-growth-est}
For any $\tau > 0$ and $\alpha \in (0, 1)$, there is an $\varepsilon(\mathbf{C}^0, \tau)$ so that the following holds. Let $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$, and let us decompose
\begin{gather}
M \cap B_{3/4} \setminus B_\tau({ \{0\} \times \R^m }) = \cup_{i=1}^d M(i) = \mathrm{graph}_{\mathbf{C}}(u, f, \Omega)
\end{gather}
as in Lemma \ref{lem:poly-global-graph}. Then we have
\begin{align}
&\int_{M \cap B_1} \frac{d_{\mathbf{C}}^2}{R^{n+2-\alpha}} + \sum_{i=1}^d \int_{\Omega(i) \cap B_{1/10} \setminus B_\tau({ \{0\} \times \R^m })} R^{2-n} |\partial_R (u(i)/R)|^2 \\
&\quad \leq c(\mathbf{C}^0, \alpha) \int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} + c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1).
\end{align}
Here $X^\perp = \pi_{N_XM}(X)$ is the projection to the normal bundle of $M$.
\end{prop}
\begin{proof}
Let $\zeta$ be a smoothing of the function which is $\equiv 1$ on $[\delta, 1/20]$, $0$ at $0$ and $\equiv 0$ on $B_1 \setminus B_{1/10}$, and linearly interpolates in between. Consider the vector field
\begin{gather}
V(X) = \zeta^2 (\tilde d/R)^2 R^{-n+\alpha} X,
\end{gather}
where $\tilde d$ as in Lemma \ref{lem:smoothing-d}.
Since $(\tilde d/R)^2$ is $C^1$ and homogeneous degree-$0$, we have
\begin{gather}
X \cdot D (\tilde d/R)^2 = 0.
\end{gather}
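This is just the Euler relation for $0$-homogeneous functions: differentiating $(\tilde d/R)^2(\lambda X) = (\tilde d/R)^2(X)$ in $\lambda$ at $\lambda = 1$ gives
\begin{gather}
0 = \frac{d}{d\lambda}\Big|_{\lambda = 1} (\tilde d/R)^2(\lambda X) = X \cdot D (\tilde d/R)^2 (X).
\end{gather}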
We also have $|D(\tilde d/R)| \leq c(\mathbf{C}^0)/R$. We therefore calculate
\begin{align}
\mathrm{div}_M(V) &\geq -2\zeta |\nabla^T\zeta| (\tilde d/R)^2 R^{-n+1+\alpha} -2\zeta^2 (\tilde d/R) |\nabla^\perp (\tilde d / R)| |(x, y)^\perp| R^{-n+\alpha} \\
&\quad + \zeta^2(\tilde d/R)^2 (n-\alpha) R^{-n-2+\alpha} |(x, y)^\perp|^2 + \zeta^2 (\tilde d/R)^2 R^{-n+\alpha} \alpha .
\end{align}
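For the reader's convenience, the last two terms come from the exact first-variation identity for the position vector field on the $n$-dimensional $M$ (with $\mathrm{div}_M$ the tangential divergence, using $\mathrm{div}_M X = n$ and $|X^T|^2 = R^2 - |X^\perp|^2$):
\begin{gather}
\mathrm{div}_M\left( R^{-n+\alpha} X \right) = R^{-n+\alpha}\, \mathrm{div}_M X + (\alpha - n) R^{-n-2+\alpha} |X^T|^2 = \alpha R^{-n+\alpha} + (n-\alpha) R^{-n-2+\alpha} |X^\perp|^2 .
\end{gather}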
And so, using \eqref{eqn:tilde-d}, we have for any $\eta$
\begin{align}
\frac{\alpha}{c(\mathbf{C}^0)} \int_M \zeta^2 (d_{\mathbf{C}}/R)^2 R^{-n+\alpha}
&\leq \int_M |V| |H_M| + \int_M \eta \zeta^2 (d_{\mathbf{C}}/R)^2 R^{-n+\alpha} + c(\eta, \mathbf{C}^0) \zeta^2 |(x, y)^\perp|^2 R^{-n-2+\alpha} \\
&\quad + \int_M \eta \zeta^2 (d_{\mathbf{C}}/R)^2 R^{-n+\alpha} + c(\eta, \mathbf{C}^0) d_{\mathbf{C}}^2 R^{-n+\alpha} |\nabla^T \zeta|^2 .
\end{align}
Take $\eta(\mathbf{C}^0, \alpha)$ sufficiently small, and use that $|V| \leq R^{-n+1}$ in a computation similar to \eqref{eqn:integrable-on-M}. We obtain
\begin{gather}
\int_{M \cap B_1} d^2_{\mathbf{C}} R^{-n-2+\alpha} \leq c ||H_M||_{L^\infty(B_1)} + c \int_{M \cap B_{1/10}} |(x, y)^\perp|^2 R^{-n-2+\alpha} + c \int_M d_{\mathbf{C}}^2 R^{-n+\alpha} |\nabla^T \zeta|^2,
\end{gather}
for $c = c(\alpha, \mathbf{C})$.
We analyze the last term:
\begin{align}
\int_M d_{\mathbf{C}}^2 R^{-n+\alpha} (\zeta')^2 |(x, y)^T|^2/R^2
&\leq c\int_{M \cap B_\delta} d_{\mathbf{C}}^2 R^{-n+\alpha} \delta^{-2} + c \int_{M \cap (B_{1/10}\setminus B_{1/20})} d_{\mathbf{C}}^2 R^{-n+\alpha} \\
&\leq \int_{M \cap B_\delta} R^{-n+\alpha} + c\, 100^{n-\alpha} \int_{M \cap (B_1 \setminus B_{1/20})} d_{\mathbf{C}}^2 ,
\end{align}
where $c$ is an absolute constant. Using the standard layer cake formula, and the mass bound $\mu_M(B_r) \leq c(\mathbf{C}^0) r^n$ (a consequence of monotonicity), we have
\begin{align}\label{eqn:integrable-on-M}
\int_{M \cap B_\delta} R^{-n+\alpha}
&\leq \int_0^\infty \mathcal{H}^n( M \cap B_\delta \cap \{R^{-n+\alpha} > t\} ) dt \\
&\leq c(\mathbf{C}^0) \delta^\alpha + c(\mathbf{C}^0) \int_{\delta^{\alpha-n}}^\infty t^{-n/(n-\alpha)} dt \\
&= c(\alpha, \mathbf{C}^0) \delta^\alpha \\
& \to 0 \quad \text{ as } \delta \to 0.
\end{align}
This proves the first inequality.
We now prove the second. It will suffice to prove the pointwise bound
\begin{gather}
|\partial_R(u(i)/R)|^2 \leq 2 |X^\perp|^2/R^4.
\end{gather}
Let us write $(x, y) \in M(i)$ as $(x, y) = (x', y) + u(i)(x', y)$, for $(x', y) \in \Omega(i) \cap B_{1/10} \setminus B_\tau({ \{0\} \times \R^m })$. For ease of notation let us drop the $i$ index from now on.
We compute
\begin{gather}
\partial_R (u(x', y)/R) = \partial_R \left( \frac{ (x', y) + u(x', y)}{R} \right) = \frac{(x', y) + (x', y) \cdot Du(x', y)}{R^2} - \frac{(x, y)}{R^2},
\end{gather}
where in the first equality we used that $(x', y)/R$ is constant along radial rays. Observe that $\partial_R ( (x', y) + u(i)(x', y)) \in T_{(x, y)}M$. Therefore, we deduce
\begin{gather}
\pi_{N_{(x, y)}M}\left( \partial_R( u(x', y)/R ) \right) = - \frac{\pi_{N_{(x, y)}M}(x, y)}{R^2}.
\end{gather}
This is the required expression, but only for the normal component. Since $\partial_R (u(x', y)/R) \in N_{(x', y)} \mathbf{C}$, and $M$ is $C^1$-close to $\mathbf{C}$, we can show the tangential component is negligible:
\begin{align}
|\partial_R(u(x', y)/R)|^2
&= |\pi_{N_{(x, y)}M}(\partial_R(u(x', y)/R))|^2 + |(\pi_{T_{(x, y)}M} - \pi_{T_{(x', y)}\mathbf{C}})(\partial_R(u(x', y)/R))|^2 \\
&\leq |\pi_{N_{(x, y)}M}(x, y)|^2/R^4 + c(n, k) |Du(x', y)|^2 |\partial_R(u(x', y)/R)|^2 \\
&\leq |\pi_{N_{(x, y)}M}(x, y)|^2/R^4 + \frac{1}{2} |\partial_R(u(x', y)/R)|^2,
\end{align}
provided $\varepsilon(n, k)$ is sufficiently small.
\end{proof}
\subsection{Density drop} By far the trickiest and most important estimate is estimating the density drop in terms of $L^2$ distance. We are in effect bounding $W^{1,2}$ by $L^2$.
\begin{proposition}\label{prop:density-est}
There is an $\varepsilon(\mathbf{C}^0)$ so that the following holds. Let $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$, and suppose $\theta_M(0) \geq \theta_{\mathbf{C}^0}(0)$. Then we have
\begin{gather}
\int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} \leq c(\mathbf{C}^0) E(M, \mathbf{C}, 1) .
\end{gather}
\end{proposition}
\begin{proof}
Let $\phi$ be any smooth function, with $\phi' \leq 0$, $\phi = 1$ on $[0, 1/10]$, and $\phi = 0$ on $[2/10, \infty)$. By the first variation formula, the structure $\mathbf{C} = \mathbf{C}_0 \times \mathbb{R}^m$, and our assumption that $\theta_M(0) \geq \theta_{\mathbf{C}^0}(0)$, we have the following inequalities:
\begin{gather}\label{eqn:lem-density-1}
\frac{1}{2} n (1/10)^n \int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} \leq \int_M \phi^2(R) - \int_{\mathbf{C}} \phi^2(R) + c(\mathbf{C}, \phi) ||H_M||_{L^\infty(B_1)},
\end{gather}
and
\begin{align}\label{eqn:lem-density-2}
\ell \left( \int_M \phi^2(R) - \int_{\mathbf{C}} \phi^2(R) \right) &\leq \left( \int_M 2\phi |\phi'| r^2/R - \int_{\mathbf{C}} 2\phi |\phi'| r^2/R \right) \\
&\quad + \int_M 2\phi (\phi')^2 |(x, 0)^\perp|^2 + c(\mathbf{C}, \phi) ||H_M||_{L^\infty(B_1)}.
\end{align}
See \cite[pages 613-615]{simon1} for a derivation; relations \eqref{eqn:lem-density-1}, \eqref{eqn:lem-density-2} require no special structure on $\mathbf{C}_0$. Note that \cite{simon1} proves \eqref{eqn:lem-density-1}, \eqref{eqn:lem-density-2} for stationary surfaces, but the modification for bounded mean curvature is straightforward -- for completeness we provide a brief sketch in Section \ref{sec:variation}.
The Proposition will now follow by Lemma \ref{lem:density-workhouse}, because if we write
\begin{gather}
F(x, y) \equiv F(R) = \phi(R) |\phi'(R)| /R,
\end{gather}
then $F$ is a non-negative $C^1$ function of $(x, y)$ supported in $\{ R \in [1/10, 2/10] \}$, as required by that Lemma.
\end{proof}
\begin{lemma}\label{lem:density-workhouse}
There is an $\varepsilon(\mathbf{C}^0)$ so that the following holds. Let $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$, and let $F(x, y) \equiv F(R)$ be a non-negative $C^1$ function supported on $R \in [1/10, 2/10]$.
Then we have
\begin{equation}\label{eqn:F-diff-est}
\int_M r^2 F - \int_{\mathbf{C}} r^2 F \leq c(\mathbf{C}^0, |F|_{C^1}) E(M, \mathbf{C}, 1).
\end{equation}
And relatedly, we have
\begin{equation}\label{eqn:F-single-est}
\int_{M \cap B_{1/10}} |(x, 0)^\perp|^2 \leq c(\mathbf{C}^0) E(M, \mathbf{C}, 1).
\end{equation}
\end{lemma}
\begin{proof}
Let us prove \eqref{eqn:F-diff-est}. Choosing $\varepsilon$ sufficiently small, we have that $M \cap B_{3/4} \setminus B_{r_y}({ \{0\} \times \R^m })$ decomposes as graphs over $\mathbf{C}^0$ as in Lemma \ref{lem:poly-graph}, with $r_x \leq 1/100$ and $\beta \leq 1/10$. So we have
\begin{align}
&\int_M F r^2 - \int_{\mathbf{C}} F r^2 \nonumber\\
&\leq \sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{2r_y}({ \{0\} \times \R^m }))} F(x' + u(i), y) |x' + u(i)|^2 Ju(i) - \sum_{i=1}^d \int_{W(i) \times \mathbb{R}^m} F(x', y) |x'|^2 \nonumber \\
&\quad + \sum_{i=1}^d \int_{M(i) \cap B_{10r_y}({ \{0\} \times \R^m })} |F|_{C^0} r^2 \label{eqn:diff-F-1} .
\end{align}
Since each domain $\Omega(i)$ is flat, each Jacobian is bounded by
\begin{gather}
Ju(i) \leq 1 + c |Du(i)|^2 \leq c.
\end{gather}
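Here one can use the standard expansion of the area element of a graph, valid whenever $|Du(i)| \leq 1$ (which the effective graphical estimates of Lemma \ref{lem:poly-graph} ensure for $\varepsilon$ small):
\begin{gather}
Ju(i) = \sqrt{ \det\left( \mathrm{Id} + Du(i)^T Du(i) \right) } \leq \left( 1 + |Du(i)|^2 \right)^{n/2} \leq 1 + c(n) |Du(i)|^2 .
\end{gather}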
Further, since $u(i)(x', y) \in N_{(x', y)} \mathbf{C}$, we have
\begin{align}
&F(x' + u(i), y) |x' + u(i)|^2 - F(x', y) |x'|^2 \\
&\leq |F|_{C^0} |u(i)|^2 + |F|_{C^1} \left( \sqrt{|x'|^2 + |u(i)|^2 + |y|^2} - \sqrt{|x'|^2 + |y|^2}\right) |x'|^2 \\
&\leq c|F|_{C^1} |u(i)|^2.
\end{align}
Using the above calculations, and $|u(i)(x', y)| \leq |x'|/10$, we have
\begin{align}
&\sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{2r_y}({ \{0\} \times \R^m }))} F(x' + u(i), y) |x' + u(i)|^2 Ju(i) \nonumber \\
&\leq \sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{2r_y}({ \{0\} \times \R^m }))} F(x', y)\, r^2 + c |F|_{C^1} \sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{2r_y}({ \{0\} \times \R^m }))} r^2|Du(i)|^2 + |u(i)|^2. \label{eqn:diff-F-2}
\end{align}
Now by construction, if $x' \in \partial W(i)$, then
\begin{equation}\label{eqn:n-f-relation}
n(i)(x') \cdot f(i)(x') = n(i)(x') \cdot (z - x'),
\end{equation}
where $z \in \partial M(i)$. In particular, if $W(1)$, $W(2)$, $W(3)$ all share a common edge, and $x$ lies in this edge, then
\begin{gather}
\sum_{i=1}^3 n(i)(x') \cdot f(i)(x') = 0.
\end{gather}
This follows simply because the $M(1)$, $M(2)$, $M(3)$ all share a common edge (and in particular $z$ in \eqref{eqn:n-f-relation} is independent of $i = 1, 2, 3$), and $\sum_{i=1}^3 n(i)(x') = 0$.
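Explicitly, since the point $z$ in \eqref{eqn:n-f-relation} is the same for $i = 1, 2, 3$,
\begin{gather}
\sum_{i=1}^3 n(i)(x') \cdot f(i)(x') = \sum_{i=1}^3 n(i)(x') \cdot (z - x') = \Big( \sum_{i=1}^3 n(i)(x') \Big) \cdot (z - x') = 0.
\end{gather}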
Let us fix a $y$, and recall that the annular region $A_W(r_y, 1/4)$ satisfies
\begin{equation}\label{eqn:A-inclusion}
(B_{1/4} \setminus B_{2r_y}) \cap W \subset A_W(r_y, 1/4) \subset (B_{1/2} \setminus B_{r_y}) \cap W.
\end{equation}
Let us write $\Omega_y(i) \equiv \Omega(i) \cap (\mathbb{R}^{\ell+k}\times \{y\})$, so that $\Omega_y(i)$ is a $2$-dimensional approximate-wedge.
By a similar argument as above, if $x' = s + f(i)(s) \in \partial\Omega_y(i)$ for $s \in \partial W(i)$, then
\begin{align}
&F(s + tf(i), y) |s + tf(i)|^2 - F(s, y) s^2 \\
&\leq |F|_{C^1} \left( \sqrt{ s^2 + t^2|f(i)|^2 + |y|^2} - \sqrt{s^2 + |y|^2} \right) s^2 + |F|_{C^0} t^2 |f(i)|^2 \\
&\leq c |F|_{C^1} t^2 |f(i)|^2.
\end{align}
For this fixed $y$, recalling that $\mathrm{spt} F \subset \{ R \in [1/10, 2/10]\}$, we have
\begin{align}
&\left| \sum_{i=1}^d \int_{\Omega_y(i) \cap A_{W(i)}(r_y, 1/4)} F(x', y)r^2 - \sum_{i=1}^d \int_{W(i) \cap A_{W(i)}(r_y, 1/4)} F(x', y)r^2 \right| \\
&= \left| \sum_{i=1}^d \int_{\partial W(i) \cap A_{W(i)}(r_y, 1/4)} (f(i)(s)\cdot n(i)(s)) \int_0^{1} F(s + tf(i)(s), y)|s + tf(i)(s)|^2 dt ds \right| \\
&= \left| \sum_{i=1}^d \int_{\partial W(i) \cap A_{W(i)}(r_y, 1/4)} (f(i)(s) \cdot n(i)(s)) \left( \int_0^{1} F(s + tf(i), y) |s + tf(i)|^2 - F(s, y) s^2 dt \right) ds \right| \\
&\leq c \sum_{i=1}^d \int_{\partial W(i) \cap A_{W(i)}(r_y, 1/4)} r |f(i)|^2 .
\end{align}
Integrating over $y$ (remember both $\Omega(i)$ and $W(i)$ are flat), and using \eqref{eqn:A-inclusion}, gives
\begin{align}
&\sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2}\setminus B_{2r_y}({ \{0\} \times \R^m }))} F(x', y) r^2 \label{eqn:diff-F-3} \\
&\leq \sum_{i=1}^d \int_{ (W(i) \times \mathbb{R}^m) \cap (B_{1/2}\setminus B_{r_y}({ \{0\} \times \R^m }))} F(x', y) r^2 + c\sum_{i=1}^d \int_{(\partial W(i) \times \mathbb{R}^m) \cap (B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m }))} r |f(i)|^2 . \nonumber
\end{align}
Combining the calculations \eqref{eqn:diff-F-1}, \eqref{eqn:diff-F-2}, \eqref{eqn:diff-F-3} with the effective estimates of Lemma \ref{lem:poly-graph}, we have
\begin{align}
&\int_M F r^2 - \int_{\mathbf{C}} F r^2 \\
&\leq c \sum_{i=1}^d \int_{\Omega(i) \cap (B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m }))} |u(i)|^2 + r^2 |Du(i)|^2 \\
&\quad+ c \sum_{i=1}^d \int_{(\partial W(i) \times \mathbb{R}^m) \cap (B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m }))} r |f(i)|^2 \\
&\quad + \sum_{i=1}^d \int_{M(i) \cap B_{10r_y}({ \{0\} \times \R^m })} F(x, y)r^2 - \sum_{i=1}^d \int_{(W(i)\times \mathbb{R}^m) \cap B_{r_y}({ \{0\} \times \R^m })} F(x',y) r^2 \\
&\leq c E(M, \mathbf{C}, 1) + c \sum_{i=1}^d \int_{M(i) \cap B_{10r_y}({ \{0\} \times \R^m })} r^2 \\
&\leq c E(M, \mathbf{C}, 1).
\end{align}
This establishes \eqref{eqn:F-diff-est}.
We prove \eqref{eqn:F-single-est}. Take $\varepsilon$ as before. We make an initial computation. Suppose $(x', y) \in \Omega(i)$, and $(x, y) = (x', y) + u(i)(x', y) \in M(i)$. Write $\pi_{M(i)^\perp}$ for the orthogonal projection onto $N_{(x, y)}M(i)$, and $\pi_{P(i)^\perp}$ for the orthogonal projection to $P(i)^\perp$. Then we have
\begin{align}
|\pi_{M(i)^\perp}(x, 0)|
&\leq |\pi_{M(i)^\perp}(x, 0) - \pi_{P(i)^\perp}(x, 0)| + |\pi_{P(i)^\perp}(x - x', 0)| \\
&\leq c|x| |Du(i)(x', y)| + |u(i)(x', y)|.
\end{align}
We deduce that
\begin{align}
&\int_{M \cap B_{1/10}} |(x, 0)^\perp|^2 \\
&\leq \sum_i \int_{\Omega(i) \cap (B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m }))} |\pi_{M(i)^\perp}(x' + u(i), 0)|^2 J u(i) + \sum_i \int_{M(i) \cap B_{10r_y}({ \{0\} \times \R^m })} r^2 \\
&\leq \sum_i \int_{\Omega(i) \cap (B_{1/2} \setminus B_{r_y}({ \{0\} \times \R^m }))} c r^2 |Du(i)|^2 + c|u(i)|^2 + \sum_i \int_{M(i) \cap B_{10 r_y}({ \{0\} \times \R^m })} r^2 \\
&\leq c E(M, \mathbf{C}, 1) .
\end{align}
This completes the proof of Lemma \ref{lem:density-workhouse}.
\end{proof}
\subsection{Moving the point} We localize the $L^2$-decay estimate to a given singular point, and demonstrate that the singular set must lie close to ${ \{0\} \times \R^m }$ at the scale of the excess.
\begin{prop}\label{prop:Z-est}
For any $\tau > 0$ and $\alpha \in (0, 1)$, there is an $\varepsilon(\mathbf{C}^0, \tau)$ so that the following holds. Take $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$. Then for any $Z = (\zeta, \eta) \in \mathrm{sing}(M) \cap B_{1/4}$ with $\theta_M(Z) \geq \theta_{\mathbf{C}^0}(0)$, we have
\begin{gather}\label{eqn:est-Z-M}
|\zeta|^2 + \int_{M \cap B_{1/4}} \frac{d_{Z + \mathbf{C}}^2}{|X - Z|^{n+2-\alpha}} \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1) ,
\end{gather}
and if we write $L = \cup_i L(i)$ for the lines of $\mathbf{C}_0$, then
\begin{gather}\label{eqn:est-Z-u}
\int_{\Omega(i) \cap B_{1/2} \setminus B_{\tau}(L \times \mathbb{R}^m)} \frac{|u(i) - \zeta^\perp|^2}{|X - Z|^{n+2-\alpha}} \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1) .
\end{gather}
Here $M(i)$, $\Omega(i)$ denote the decomposition as in Lemma \ref{lem:poly-global-graph}.
\end{prop}
\begin{proof}
We first show the estimate
\begin{gather}
|\zeta|^2 \leq c(\mathbf{C}^0) E(M, \mathbf{C}, 1).
\end{gather}
Take $\varepsilon_2(\mathbf{C}^0, \tau, \beta)$ as in Lemma \ref{lem:poly-global-graph}, with $\tau$ and $\beta \leq \tau/10$ to be specified. So in particular, if we write $L = \cup_i L(i)$ for the union of lines of $\mathbf{C}_0$, then we decompose
\begin{gather}\label{eqn:est-Z-graph}
M \cap B_{1/2} \setminus B_{10 \tau}(L \times \mathbb{R}^m) = \cup_i M(i),
\end{gather}
where each $M(i)$ is a graph of $u(i)$ over $W(i)$, with $|u(i)| \leq \beta |x|$. An important but obvious consequence of Lemma \ref{lem:poly-global-graph} is that $|\zeta| \leq \tau$.
For simplicity let us take $\beta = \tau/10$. Since $\mathbf{C}$ is flat away from $B_{10\tau}(L \times \mathbb{R}^m)$, and $|\zeta| \leq \tau$, and each $M(i)$ has small $C^0$ norm, we have
\begin{gather}
|d_{\mathbf{C}}(x, y) - d_{Z + \mathbf{C}}(x, y)| \leq |\pi_{\mathbf{C}^\perp}(\zeta)|
\end{gather}
for any $(x, y) \in M(i)$. Here $\pi_{\mathbf{C}^\perp}$ denotes the projection onto $N_{(x', y)} \mathbf{C}$, where $(x, y) = (x', y) + u(i)(x', y)$.
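Indeed, for such $(x, y)$ both distances are realized by the affine planes $P(i) \times \mathbb{R}^m$ and $Z + (P(i) \times \mathbb{R}^m)$ containing the nearby wedge, so
\begin{gather}
|d_{\mathbf{C}}(x, y) - d_{Z + \mathbf{C}}(x, y)| = \Big| \, |\pi_{(P(i)\times\mathbb{R}^m)^\perp}(x, y)| - |\pi_{(P(i)\times\mathbb{R}^m)^\perp}((x, y) - Z)| \, \Big| \leq |\pi_{(P(i)\times\mathbb{R}^m)^\perp}(Z)| = |\pi_{\mathbf{C}^\perp}(\zeta)|.
\end{gather}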
Because we assume $\mathbf{C}_0$ to have no additional symmetries, a contradiction argument proves the existence of some $\delta_0(\mathbf{C}^0) > 0$ so that, provided $\varepsilon \leq \delta_0$, we have
\begin{gather}\label{eqn:no-sym-bound}
\int_{\mathbf{C} \cap B_{1/4} \setminus B_{\delta_0}(L \times \mathbb{R}^m)} |a^\perp|^2 \geq 10 \delta_0 |a|^2 \quad \forall a \in \mathbb{R}^{2+k}\times \{0\},
\end{gather}
where $a^\perp$ at $(x', y) \in \mathbf{C}$ is simply the projection to $N_{(x', y)}\mathbf{C}$.
Pick $\rho$ small, but arbitrary. Ensure $\tau \leq 2\delta_0(\mathbf{C}^0) \rho \leq \rho/10$ and we obtain
\begin{align}
\int_{\cup_i M(i) \cap B_\rho(Z)} |\pi_{\mathbf{C}^\perp}(\zeta)|^2
&\geq \frac{1}{10} \int_{\mathbf{C} \cap B_{\rho/2}(Z) \setminus B_{\delta_0 \rho}(L \times \mathbb{R}^m)} |\zeta^\perp|^2 \\
&\geq \frac{\rho^n}{10} \int_{\mathbf{C} \cap B_{1/4} \setminus B_{\delta_0}(L \times \mathbb{R}^m)} |\zeta^\perp|^2\\
&\geq \delta_0(\mathbf{C}^0) |\zeta|^2 \rho^n,
\end{align}
where $\delta_0$ is \emph{independent} of $\rho$. The first inequality follows from the graphical decomposition \eqref{eqn:est-Z-graph}. The second inequality holds since $|\zeta| \leq \tau \leq \rho/10$. The third inequality is \eqref{eqn:no-sym-bound}.
We can apply Propositions \ref{prop:decay-growth-est}, \ref{prop:density-est} to the point $Z$, and the cone $\mathbf{C} + Z$, to deduce
\begin{gather}
\int_{M \cap B_{1/10}(Z)} \frac{d_{Z + \mathbf{C}}^2}{|X - Z|^{n+2-\alpha}} \leq c \int_{M \cap B_1} d_{Z + \mathbf{C}}^2 + c ||H_M||_{L^\infty(B_1)} \leq c E(M, \mathbf{C}, 1) + c |\zeta|^2 .
\end{gather}
Combine the above two relations, to deduce
\begin{align}
\delta_0 \rho^n |\zeta|^2
&\leq \int_{\cup_i M(i) \cap B_\rho(Z)} |\pi_{\mathbf{C}^\perp}(\zeta)|^2 \\
&\leq \int_{M \cap B_\rho(Z)} d_{Z + \mathbf{C}}^2 + \int_{M \cap B_\rho(Z)} d_{\mathbf{C}}^2 \\
&\leq c E(M, \mathbf{C}, 1) + c \rho^{n+2-\alpha} |\zeta|^2 .
\end{align}
where $c$ depends on $(\mathbf{C}^0, \alpha)$ only (so, is independent of $\rho$ and $\tau$). Choose $\rho = \rho(\mathbf{C}^0, \alpha)$ small enough to absorb the last term into the left-hand side, and correspondingly ensure $\tau$ (and hence $\beta = \tau/10$) is sufficiently small; we obtain the first part of \eqref{eqn:est-Z-M}.
To obtain the second estimate, apply Propositions \ref{prop:decay-growth-est}, \ref{prop:density-est} at $Z$, and then use the first part of \eqref{eqn:est-Z-M}:
\begin{align}
\int_{M \cap B_{1/4}} \frac{d_{Z + \mathbf{C}}^2}{|X - Z|^{n+2-\alpha}} \leq c E(M, \mathbf{C}, 1) + c |\zeta|^2 \leq c E(M, \mathbf{C}, 1).
\end{align}
We prove the last estimate \eqref{eqn:est-Z-u}. Take $(x, y) = (x', y) + u(i)(x', y) \in M(i) \cap B_{1/2} \setminus B_{\tau}(L\times \mathbb{R}^m)$. From the bounds $|f(i)| \leq \beta |x|$ and $|u(i)| \leq \beta|x|$, we know that $(x', y) \in W(i)$ and
\begin{gather}
|u(i)(x', y) - \pi_{\mathbf{C}^\perp}(\zeta)(x', y)| = d_{Z + \mathbf{C}}(x, y).
\end{gather}
Now use the second part of \eqref{eqn:est-Z-M}, and the fact that the Jacobian has bound $1/2 \leq Ju(i) \leq 2$.
\end{proof}
\subsection{Estimates on the spine} Using the $\tau/10$-no-holes condition we can sum the estimates of Proposition \ref{prop:Z-est} along the spine ${ \{0\} \times \R^m }$.
\begin{prop}\label{prop:spine-est}
Given $\tau > 0$ and $\alpha \in (0, 1)$, there is an $\varepsilon(\mathbf{C}^0, \tau)$ so that the following holds. Let $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$, and suppose that $M$ satisfies the $\tau/10$-no-holes condition w.r.t. $\mathbf{C}^0$ in $B_{1/4}$. Take $\alpha \in (0, 1)$.
Then we have
\begin{gather}\label{eqn:spine-d-est}
\int_{M \cap B_{1/4}} \frac{d_{\mathbf{C}}^2}{\max(r, \tau)^{2-\alpha}} \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1),
\end{gather}
and if we write $L = \cup_{i=1}^{2d/3} L(i)$ for the lines of $\mathbf{C}_0$, then
\begin{gather}\label{eqn:spine-u-est}
\sum_{i=1}^d \int_{\Omega(i) \cap B_{1/4} \setminus B_\tau(L\times \mathbb{R}^m)} \frac{|u(i) - \kappa^\perp|^2}{\max(r, \tau)^{2+2-\alpha}} \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1).
\end{gather}
Here $\kappa : (0,1]\times B_1^m \to \mathbb{R}^{2+k}\times\{0\}$ is a chunky function satisfying the bound $|\kappa|^2 \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1)$.
\end{prop}
\begin{proof}
Let $r_\nu$, $Q_{\nu\mu}$ be as in Definition \ref{def:chunky}. Whenever $r_\nu > \tau/2$, by the no-holes condition there is $Z_{\nu\mu} = (\zeta_{\nu\mu}, \eta_{\nu\mu}) \in \mathrm{sing} M \cap (B_{\tau/10}^{2+k}(0)\times Q_{\nu\mu})$ with $\theta_M(Z_{\nu\mu}) \geq \theta_{\mathbf{C}^0}(0)$. By Proposition \ref{prop:Z-est}, we have
\begin{align}
\int_{\Omega(i) \cap (B_{r_\nu}^{2+k}(0) \times Q_{\nu\mu}) \setminus B_\tau(L\times \mathbb{R}^m)} |u(i) - \zeta_{\nu\mu}^\perp|^2
&\leq \int_{ \Omega(i) \cap B_{c(n,k) r_\nu}(Z_{\nu\mu}) \setminus B_\tau(L\times \mathbb{R}^m)} |u(i) - \zeta_{\nu\mu}^\perp|^2 \\
&\leq c(\mathbf{C}^0,\alpha) r_\nu^{n+2-\alpha} E(M, \mathbf{C}, 1) .
\end{align}
Define $\kappa$ by
\begin{gather}
\kappa(r, y) = \zeta_{\nu \mu} \text{ for } (r, y) \in [r_{\nu+1}, r_\nu) \times Q_{\nu\mu}.
\end{gather}
Note that $\kappa$ is chunky by construction, and the asserted bound $|\kappa|^2 \leq c(\mathbf{C}^0, \alpha) E(M, \mathbf{C}, 1)$ follows from the first part of \eqref{eqn:est-Z-M} applied at each $Z_{\nu\mu}$. Then since the number of cubes $\{Q_{\nu\mu}\}_\mu$ intersecting $B_1^m$ is bounded by $c(m) r_\nu^{-m}$, we have for any $r_\nu > \tau/2$:
\begin{align}
\int_{\Omega(i) \cap B_{1/4} \cap \{ r_{\nu + 1} \leq r < r_\nu \} \setminus B_\tau(L \times \mathbb{R}^m)} |u(i) - \kappa^\perp|^2
&\leq \sum_{\mu : Q_{\nu\mu} \cap B^m_1 \neq \emptyset} \int_{\Omega(i) \cap (B_{r_\nu}^{2+k}(0) \times Q_{\nu\mu}) \setminus B_{\tau}(L\times \mathbb{R}^m)} |u(i) - \zeta_{\nu\mu}^\perp|^2 \\
&\leq c(\mathbf{C}^0,\alpha) r_\nu^{2+2-\alpha} E(M, \mathbf{C}, 1).
\end{align}
Now given any $\rho > \tau$, choose $\nu$ so that $r_{\nu+1} \leq \rho < r_\nu$. We have
\begin{align}
\int_{\Omega(i) \cap B_{1/4} \cap \{ \tau \leq r < \rho\} \setminus B_\tau(L \times \mathbb{R}^m)} |u(i) - \kappa^\perp|^2
&\leq c r_\nu^{2+2-\alpha} \left( \sum_{j = 0}^\infty 2^{-j (\ell+2-\alpha)} \right) E(M, \mathbf{C}, 1) \\
&\leq c \rho^{2+2-\alpha} E(M, \mathbf{C}, 1).
\end{align}
Multiply by $\rho^{-2-3+2\alpha}$ and integrate in $\rho \in [\tau, 1/4]$, to obtain \eqref{eqn:spine-u-est} with $2\alpha$ in place of $\alpha$.
Let us prove \eqref{eqn:spine-d-est}. Take $Q_{\nu\mu}$, $Z_{\nu\mu}$ as before. Then for each $\nu,\mu$, we have by the same reasoning as above
\begin{align}
\int_{M \cap (B_{r_\nu}^{2+k}(0) \times Q_{\nu\mu})} \frac{d_{\mathbf{C}}^2}{r_\nu^{n-\alpha}}
&\leq 2\int_{M \cap B_{c(n,k)r_\nu}(Z_{\nu\mu})} \frac{d_{Z_{\nu\mu}+\mathbf{C}}^2}{|X - Z_{\nu\mu}|^{n+2-\alpha}} + 2|\zeta_{\nu\mu}|^2 \int_{M \cap B_{c(n,k)r_\nu}(Z_{\nu\mu})} |X - Z_{\nu\mu}|^{-n+\alpha} \\
&\leq c(\mathbf{C}_0, \alpha) E(M, \mathbf{C}, 1).
\end{align}
In this last inequality we used Proposition \ref{prop:Z-est} centered at $Z_{\nu\mu}$, and the mass bound $\mu_M(B_R) \leq c(\mathbf{C}^0) R^n$.
Therefore, given any $\rho$, we can choose an appropriate $\nu$ and sum over $\mu$ as before to deduce
\begin{gather}
\int_{M \cap B_{1/4} \cap B_\rho(\{0\}\times \mathbb{R}^m)} d_{\mathbf{C}}^2 \leq c \rho^{\ell-\alpha} E(M, \mathbf{C}, 1).
\end{gather}
Now multiply by $\rho^{-\ell-1+2\alpha}$ and integrate in $\rho \in [\tau, 1/4]$, to obtain \eqref{eqn:spine-d-est} with $2\alpha$ in place of $\alpha$.
\end{proof}
\section{Jacobi fields}\label{sec:jacobi}
\begin{comment}
We first define an appropriate notion of Jacobi field on a polyhedral cone. We shall demonstrate in Proposition \ref{prop:blow-up} that inhomogenous blow-up sequences give rise to these Jacobi fields. Recall ${\mathbf{C}^0_0}^2 \subset \mathbb{R}^{2+k}$ is a fixed polyhedral cone composed of wedges $\{W(i)\}_{i=1}^d$, each contained in a $2$-plane $P(i)$. We take $\mathbf{C}^0 = \mathbf{C}^0_0 \times \mathbb{R}^m$.
\begin{definition}\label{def:compatible}
We say $v : \mathbf{C}^0 \cap B_1 \to {\mathbf{C}^0}^\perp$ is a \emph{compatible Jacobi field} on $C \cap B_1$ if it satisfies the following conditions:
\begin{enumerate}
\item[A)] For each $i$, $v(i)$ is $C^{1,\alpha}$ on $\left( (\overline{W(i)} \setminus \{0\}) \times \mathbb{R}^m \right) \cap B_1$, and inside $( \mathrm{int} W(i)\times \mathbb{R}^m) \cap B_1$ is smooth and satisfies $\Delta v(i) = 0$.
\item[B)] (``$C^0$ compatability'') For every $z \in (\partial W(i) \times \mathbb{R}^m) \cap B_1$, there is a vector $V(z)$ (\emph{independent} of $i$) so that
\begin{gather}
v(i)(z) = \pi_{P(i)^\perp}(V(z)).
\end{gather}
\item[C)] (``$C^1$ compatability'') If $W(i_1)$, $W(i_2)$, $W(i_3)$ share a common edge $\partial W(i_1)$, then
\begin{gather}
\sum_{j=1}^3 \partial_n v(i_j)(z) = 0 \quad \forall z \in \left( (\partial W(i_1) \setminus \{0\}) \times \mathbb{R}^m\right) \cap B_1.
\end{gather}
\end{enumerate}
\end{definition}
\begin{definition}\label{def:linear-field}
We say a compatible Jacobi field $v$ is \emph{linear} if there are skew-symmetric matrices $A(i) : \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$ so that $v(i) = \pi_{P(i)^\perp} \circ A(i)$.
\end{definition}
\begin{remark}
Notice we \emph{do not} require the $A(i)$ to coincide, and in fact whether the $A(i)$s all arise from a single map is essentially the question of integrability.
\end{remark}
Our aim of this section is to prove an $L^2$ decay estimate for compatible Jacobi fields arising from our blow-up procedure. The basic idea is to show that any field that grows at least $1$-homogenously, and is not itself $1$-homogenous (by integrability), must grow at least $(1+\varepsilon)$-homogenously. If $\mathbf{C}^0$ had no spine (so, if $m = 0$) this would be a direct consequence of the Fourier expansion.
In the presense of a spine we can prove decay using the Hardt-Simon estimate \eqref{eqn:linear-decay-hyp2}. On the one hand, we have an upper bound \eqref{eqn:linear-decay-hyp2} at any scale $\rho$, which says $v$ grows at least $1$-homogenously. On the other, whenever $v$ is $L^2$-orthogonal to the linear fields at a scale $\rho$, then we have a lower bound \eqref{eqn:ortho-growth}: $v$ must grow \emph{quantiatively} more than $1$-homogenously at scale $\rho$.
Integrability and Theorem \ref{thm:1-homo-linear} ensure we can always kill the linear component at any scale. So we can use a hole-filling to chain the above two inequalities together and obtain decay. To make this precise we require some further notation.
\end{comment}
The aim of this section is to prove, under suitable assumptions, a superlinear decay estimate for Jacobi fields whose linear part has been removed. To state this we require some additional notation.
\begin{definition}
Let $\mathcal{L}$ be the subspace of linear compatible fields $v : \mathbf{C}^0 \to {\mathbf{C}^0}^\perp$ of the form
\begin{gather}
\mathcal{L} = \left\{ v(x, y) = \pi_{{\mathbf{C}^0}^\perp}(Ay) + v_0(x) : \begin{array}{c}\text{$A$ is a linear map ${ \{0\} \times \R^m } \to \mathbb{R}^{2+k}\times \{0\}$,} \\ \text{and $v_0 : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$ is linear compatible} \end{array} \right\}.
\end{gather}
We will find these are precisely the $1$-homogeneous Jacobi fields arising from our blow-up procedure.
Now given an arbitrary compatible Jacobi field $v : \mathbf{C}^0 \to {\mathbf{C}^0}^\perp$, and a scale $\rho > 0$, let us define $\psi_\rho \in \mathcal{L}$ to be the element of $\mathcal{L}$ minimizing
\begin{gather}
\min \left\{ \int_{\mathbf{C}^0 \cap B_\rho} |v - \psi|^2 : \psi \in\mathcal{L} \right\} ,
\end{gather}
and then define
\begin{gather}
v_\rho = v - \psi_\rho,
\end{gather}
so that $v_\rho$ is $L^2(\mathbf{C}^0 \cap B_\rho)$-orthogonal to every field in $\mathcal{L}$ (the minimizer $\psi_\rho$ exists and is unique, since $\mathcal{L}$ is a finite-dimensional subspace).
\end{definition}
Our main Theorem of this section is the following.
\begin{theorem}[Linear decay]\label{thm:linear-decay}
Let $v : \mathbf{C}^0 \cap B_{1} \to {\mathbf{C}^0}^\perp$ be a compatible Jacobi field, and fix $\theta \in (0, 1/4]$. Suppose that for every $\rho \in [\theta, 1/4]$, there is a chunky function $\kappa_\rho : (0,\rho] \times B_\rho^m \to \mathbb{R}^{2+k}\times\{0\}$ so that we have the following two estimates:
A) Non-concentration estimate:
\begin{gather}\label{eqn:linear-decay-hyp1}
\rho^{2+2-\alpha} \int_{\mathbf{C}^0 \cap B_{\rho/4}} \frac{|v_\rho - \kappa_\rho^\perp|^2}{r^{2+2-\alpha}} \leq \beta \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2,
\end{gather}
with the pointwise bound $|\kappa_\rho| \leq \beta \rho^{-n} \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2$;
B) Hardt-Simon growth estimate:
\begin{gather}\label{eqn:linear-decay-hyp2}
\int_{\mathbf{C}^0 \cap B_{\rho/10}} R^{2-n} |\partial_R (v/R)|^2 \leq \beta \rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2.
\end{gather}
Then there are constants $c_2$, $\mu$, depending only on $(\mathbf{C}^0, \beta, \alpha)$, so that
\begin{gather}
\theta^{-n-2} \int_{\mathbf{C}^0 \cap B_\theta} |v_\theta|^2 \leq c_2 \theta^\mu \int_{\mathbf{C}^0 \cap B_1} |v_{1}|^2.
\end{gather}
\end{theorem}
Since the argument is somewhat involved, we provide a short outline.
\begin{proof}[Outline of Proof]
The biggest hurdle is to show that any $1$-homogeneous compatible Jacobi field $v$ satisfying \eqref{eqn:linear-decay-hyp1} must lie in $\mathcal{L}$. This is proven in Theorem \ref{thm:1-homo-linear} as follows.
First, we decompose $v(r\theta,y)=\sum_{i = 0}^\infty v_i(r, y)\, \Psi_i(\theta)$, where each $v_i$ is the coefficient of the projection of $v$ onto the $i$-th eigenfunction $\Psi_i$ of the Jacobi operator on the geodesic net $\Gamma=\mathbf{C}_0\cap \sphere^{1+k}$. Thanks to the compatibility conditions, this operation is well defined on the net (see Section \ref{sec:eigenfunctions}) and each $v_i$ is smooth (Lemma \ref{lem:jacobi-apriori-est}).
Next we observe that by $1$-homogeneity of $v$, we can write $v_i(r, y) = r \phi_i(y/r)$ and each $\phi_i$ satisfies the equation
\begin{gather}
\sum_{j,k=1}^m (\delta_{jk} + z^j z^k) D_j D_k \phi_i - \sum_{j=1}^m z^j D_j \phi_i + (1-\lambda_i)\phi_i = 0,
\end{gather}
with $\lambda_i$ the eigenvalue associated to $\phi_i$. Moreover, \eqref{eqn:linear-decay-hyp1} becomes
\begin{align*}
&\int_{1}^\infty \int_{S^{m-1}} t^{-1-\alpha} |\phi_i(t\omega)|^2 d\omega dt < \infty \quad \text{when }\lambda_i \neq 0, \\
&\int_1^\infty \int_{S^{m-1}} t^{1-\alpha} | t^{-1} \phi_i(t\omega) - \tilde \kappa_i(t\omega)|^2 d\omega dt < \infty \quad \text{when }\lambda_i = 0 .
\end{align*}
Now one exploits the fact that, in polar coordinates, the PDE for $\phi_i$ has a divergence structure, so that we can test it with a logarithmic cut-off function and use the above inequalities to estimate the right-hand side, proving in Lemma \ref{lem:pde-analysis} that:
\begin{enumerate}
\item when $\lambda_i = 0$, then $\phi_i(z) = a\cdot z$ for some $a \in \mathbb{R}^m$ (corresponding to a rotation of the spine),
\item when $\lambda_i = 1$, then $\phi_i(z) \equiv const$ (corresponding to an action on $\mathbf{C}$ that fixes the spine),
\item otherwise, $\phi_i(z) = 0$ ($v$ cannot act on the spine in any other fashion).
\end{enumerate}
However we cannot do this directly, since a reverse Poincar\'e inequality might not be true for $\phi_i$; indeed, we will need to study the equation for the radial part of each Fourier mode of $\phi_i$ separately (see Lemma \ref{lem:ode-est}).
The underlying reason this works is the no-holes condition: in assuming the existence of singular points of good density arbitrarily near ${ \{0\} \times \R^m }$, we force the infinitesimal motion to act on ${ \{0\} \times \R^m }$ by rotation.
At this point a simple contradiction argument allows us to prove that whenever $v_\rho$ ($ = $ component of $v$ orthogonal at scale $B_\rho$ to $\mathcal{L}$) satisfies the non-concentration estimate \eqref{eqn:linear-decay-hyp1}, then the following quantitative growth estimate holds
\begin{gather}
\int_{\mathbf{C}^0 \cap B_\rho\setminus B_{\rho/10}} R^{2-n}|\partial_R(v/R)|^2 \equiv \int_{\mathbf{C} \cap B_\rho \setminus B_{\rho/10}} R^{2-n}|\partial_R(v_\rho/R)|^2 \geq \frac{1}{c(\mathbf{C}^0)} \rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2.
\end{gather}
This can be combined with the Hardt-Simon inequality \eqref{eqn:linear-decay-hyp2} to prove a decay of $\int_{\mathbf{C}\cap B_\rho} R^{2-n} |\partial_R(v/R)|^2$, and hence a decay of $\rho^{-n-2} \int_{\mathbf{C} \cap B_\rho} |v_\rho|^2$ also.
\end{proof}
\subsection{Elementary facts}
Let us prove some elementary properties of compatible Jacobi fields. First, we demonstrate smoothness and a priori estimates up to and including the wedge boundaries.
\begin{lemma}\label{lem:jacobi-apriori-est}
Suppose $v : \mathbf{C}^0 \cap B_1 \to {\mathbf{C}^0}^\perp$ is $C^{1,\alpha}$, satisfies the $C^0$- and $C^1$-compatibility conditions of Definition \ref{def:compatible}, and each $v(i)$ is harmonic on $\mathrm{int} W(i)\times \mathbb{R}^m$.
Then $v$ is a compatible Jacobi field in $B_1$ (so, is smooth up to and including the wedge boundaries), and for every non-negative integer $k$ and $\rho < 1$ we have the pointwise bound
\begin{gather}\label{eqn:jacobi-apriori-est}
\sup_{B_{\rho} \cap \left(W(i) \times \mathbb{R}^m\right)} |x|^{2k+n} |D^k v(i)|^2 \leq c(\mathbf{C}^0, \rho, k) \int_{\mathbf{C}^0 \cap B_1} |v|^2.
\end{gather}
\end{lemma}
\begin{proof}
Away from $\partial W(i) \times \mathbb{R}^m$, smoothness follows from harmonicity. Let us assume $W(1)$, $W(2)$, $W(3)$ share a common boundary line $L$. By the $C^1$ compatibility condition we can perform an even extension of $v(1) + v(2) + v(3)$ across $L \times \mathbb{R}^m$ near any given point, and deduce that $v(1) + v(2) + v(3)$ is smooth up to $L \times \mathbb{R}^m$.
Let $P$ be the plane spanned by the conormals $n(1), n(2), n(3)$, and denote by $v(i)^T$ and $v(i)^\perp$ the orthogonal projections to $P$, $P^\perp$ respectively. We can identify $P$ with $\mathbb{R}^2$, and the $n(i)$ with $1, e^{2\pi i/3}, e^{4\pi i/3}$. By the $C^1$ compatibility condition one can easily verify that
\begin{gather}
\partial_{n(i)} v(i)^T = \alpha e^{i\pi/2} n(i),
\end{gather}
for some $\alpha \in\mathbb{R}$. In other words, up to a fixed scaling factor, each $\partial_{n(i)} v(i)^T$ is a $90^\circ$ rotation of $n(i)$.
We deduce that $\partial_{n(i)} v(i) \cdot n(i) = \partial_{n(j)} v(j)\cdot n(j)$ along $L$, and so using an even extension we deduce $v(i)^T - v(j)^T$ is smooth up to $L$. Similarly, by the $C^0$ compatibility condition, we have that $v(i)^\perp = v(j)^\perp$ along $L$, and so using an odd extension we deduce $v(i)^\perp - v(j)^\perp$ is smooth up to $L$.
Combining the above relations gives that each $v(i)$ extends smoothly to $L$.
We now prove \eqref{eqn:jacobi-apriori-est}. Observe that near $L$, each $v(i)$ can be written as the sum of harmonic functions which extend smoothly across $L$ (by either an even or odd reflection). Therefore, at \emph{any} point $(x, y) \in \mathbf{C}^0\cap B_\rho$, we can scale up $|x| \to 1$ and use standard interior estimates to bound
\begin{gather}\label{eqn:jacobi-apriori-1}
\sup_{B_{\delta |x|}(x, y) \cap (W(i)\times \mathbb{R}^m)} |x|^{2k+n} |D^k v(i)|^2 \leq c(\mathbf{C}^0, \delta, k) \int_{B_{2\delta |x|}(x, y) \cap (W(i)\times \mathbb{R}^m)} |v|^2 \leq c \int_{\mathbf{C}^0 \cap B_1} |v|^2.
\end{gather}
Here $\delta = \delta(\mathbf{C}^0, \rho)$ is chosen to be
\begin{gather}
\delta = \min \{ 1/100 \cdot \text{ (smallest geodesic length in $\mathbf{C}^0_0 \cap \mathbb{S}^{1+k}$)}, (1-\rho)/2 \}.
\end{gather}
The Lemma follows directly.
\end{proof}
The following Proposition demonstrates that a compatible, $1$-homogeneous field on a polyhedral cone generates a rotation locally. Unfortunately, it is not always clear whether these local rotations can be patched together to form a global motion of the net. Note this Proposition concerns the \emph{cross-section} $\mathbf{C}^0_0$, not the full cone $\mathbf{C}^0 = \mathbf{C}^0_0 \times \mathbb{R}^m$.
\begin{prop}\label{prop:baby-linear}
Let $v : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$ be a compatible $1$-homogeneous Jacobi field. Then $v$ is linear: there are skew-symmetric matrices $A(i) : \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$ so that $v(i) = \pi_{P(i)^\perp} \circ A(i)$.
Moreover, the $A(i)$ are locally compatible in the following sense: if $W(i_1), W(i_2), W(i_3)$ share a common boundary line $L$, then there is a skew-symmetric matrix $A_L$ so that
\begin{gather}
\pi_{P(i_j)^\perp} \circ A(i_j) = \pi_{P(i_j)^\perp} \circ A_L \quad \text{ for each } j = 1, 2, 3.
\end{gather}
\end{prop}
\begin{proof}
On each wedge $W(i)$, $v(i)$ is harmonic and $1$-homogeneous. Since $W(i)$ is $2$-dimensional, it follows immediately that $v(i)$ is a linear map $W(i) \to W(i)^\perp$. Since the domain and range of $v(i)$ are orthogonal, we can extend it to a skew-symmetric linear mapping on $\mathbb{R}^{n+k}$.
Let us prove local compatibility. Fix a line $L \equiv L(1)$ of $\mathbf{C}^0_0$, and suppose without loss of generality that the wedges $W(1), W(2), W(3)$ all meet at $L$. For each such wedge, write $n(i)$ for the unit outward conormal of $L \subset W(i)$, and $\ell$ for the unit vector defining $L$.
On each piece $W(i)$, by assumption we can write the field $v(i)$ as
\begin{gather}
v(i)(x) = a(i) (x \cdot n(i)) + b(i) (x \cdot \ell),
\end{gather}
where $a(i), b(i) \in P(i)^\perp \subset \mathbb{R}^{2+k}$. Here $\cdot$ denotes the standard Euclidean inner product.
By the $C^0$ compatibility condition, we have that
\begin{gather}
b(i) = \pi_{P(i)^\perp}(b),
\end{gather}
where $b$ is a fixed vector in $L^\perp \subset \mathbb{R}^{\ell+k}$.
From the $C^1$ compatibility condition, we have $\sum_{i=1}^3 a(i) = 0$. Therefore, by Lemma \ref{lem:sum-is-zero} we can choose an anti-symmetric $A'$ so that $A'(n(i)) = a(i)$. Define the linear mapping
\begin{gather}
A(x) = A' x + (b - A'(\ell))(x \cdot \ell).
\end{gather}
Then $A$ is anti-symmetric, since $x^T A x = 0$ for every $x$, and by construction we have $v(i) = \pi_{P(i)^\perp} \circ A$.
\end{proof}
\subsection{Eigenfunctions on a net}\label{sec:eigenfunctions}
We require some additional notation. Write $\Gamma = \mathbf{C}^0_0 \cap \mathbb{S}^{1+k}$ to be the corresponding equiangular geodesic net of $\mathbf{C}^0_0$ composed of geodesic segments $\cup_{i=1}^d \ell(i)$. Here each $\ell(i) = W(i) \cap \mathbb{S}^{1+k}$. We write a function $u : \Gamma \to {\mathbf{C}^0_0}^\perp$ as a collection of functions $u(i) : \ell(i) \to W(i)^\perp$.
Define the norms
\begin{gather}
||u||_0^2 = \int_\Gamma |u|^2,\quad ||u||^2_1 = \int_\Gamma |u|^2 + |u'|^2, \quad ||u||^2_2 = \int_\Gamma |u|^2 + |u'|^2 + |u''|^2
\end{gather}
and let $L^2(\Gamma)$, $W^{1,2}(\Gamma)$, $W^{2,2}(\Gamma)$ be the completion of $C^\infty(\Gamma, {\mathbf{C}^0_0}^\perp)$ with respect to these norms. By Sobolev embedding, we have $W^{1,2} \subset C^0(\Gamma, {\mathbf{C}^0_0}^\perp)$ and $W^{2,2} \subset C^1(\Gamma, {\mathbf{C}^0_0}^\perp)$.
We say $u \in C^0(\Gamma)$ is \emph{$C^0$-compatible} if for every $p \in \partial \ell(i)$, there is a vector $V$ \emph{independent of $i$} so that $u(i)(p) = \pi_{\ell(i)^\perp}(V)$. We say $u \in C^1(\Gamma)$ is \emph{$C^1$-compatible} if:
\begin{gather}
\partial_n u(i_1)(p) + \partial_n u(i_2)(p) + \partial_n u(i_3)(p) = 0
\end{gather}
whenever $\ell(i_1), \ell(i_2), \ell(i_3)$ share a common vertex $p$ ($n$ being the outward conormal). Clearly, a Jacobi field $v : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$ is compatible if and only if each slice $v(r \equiv r_0)$ is compatible on the net $r_0 \Gamma$.
We aim to show the following:
\begin{theorem}\label{thm:eigenfunctions}
There is a sequence $0 = \lambda_1 \leq \lambda_2 \leq \ldots \to \infty$, and a collection $u_i \in C^\infty(\Gamma, {\mathbf{C}^0_0}^\perp)$, so that
\begin{gather}
u_i'' + \lambda_i u_i = 0, \quad u_i \text{ is $C^0$- and $C^1$-compatible},
\end{gather}
and the $\{u_i\}_i$ form an orthonormal basis in $L^2(\Gamma)$.
\end{theorem}
\begin{remark}
If $v : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$ is a $1$-homogeneous compatible Jacobi field, then $v(r = 1)$ is an eigenfunction of $u \mapsto -u''$ with eigenvalue $1$.
If $V \in \mathbb{R}^{2+k}$ is a fixed vector, then $u(i)(x) = \pi_{W(i)^\perp}(V)$ is an eigenfunction of $-u''$ with eigenvalue $0$. If $A : \mathbb{R}^{2+k} \to \mathbb{R}^{2+k}$ is a fixed linear map, then $u(i)(x) = \pi_{W(i)^\perp}(Ax)$ is an eigenfunction with eigenvalue $1$.
\end{remark}
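The last two claims can be checked directly: parametrize each arc $\ell(i) = W(i) \cap \mathbb{S}^{1+k}$ by arclength as $\gamma_i(\theta)$ (a unit-circle arc in $P(i)$, so $\gamma_i'' = -\gamma_i$), and note that $\pi_{W(i)^\perp}$ is a fixed projection on each wedge. Then
\begin{gather}
\frac{d^2}{d\theta^2}\, \pi_{W(i)^\perp}(A \gamma_i(\theta)) = \pi_{W(i)^\perp}(A \gamma_i''(\theta)) = - \pi_{W(i)^\perp}(A \gamma_i(\theta)),
\end{gather}
giving eigenvalue $1$, while the constant field $\pi_{W(i)^\perp}(V)$ trivially satisfies $u'' = 0$, giving eigenvalue $0$.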
Let us define the spaces
\begin{align}
H_1 &= \{ u \in W^{1,2}(\Gamma) \subset C^0(\Gamma, {\mathbf{C}^0_0}^\perp) : \text{ $u$ is $C^0$-compatible} \}, \\
H_2 &= \{ u \in H_1 \cap W^{2,2}(\Gamma) \subset C^1(\Gamma, {\mathbf{C}^0_0}^\perp) : \text{ $u$ is $C^1$-compatible } \}.
\end{align}
By Sobolev embedding and linearity of the compatibility conditions, each $H_i$ is a well-defined closed (Hilbert) subspace of $W^{i,2}(\Gamma)$. Our key Lemma is the following.
\begin{lemma}\label{lem:compact-mapping}
The mapping $H_2 \to L^2(\Gamma)$ sending $u \mapsto -u'' + u$ has a bounded inverse map $S : L^2(\Gamma) \to H_2$, which is self-adjoint as a map $L^2 \to L^2$.
\end{lemma}
\begin{proof}
The bilinear form $A : H_1 \times H_1 \to \mathbb{R}$ defined by
\begin{gather}
A(u, \phi) = \int_\Gamma u' \cdot \phi' + u\cdot \phi
\end{gather}
coincides with the inner product on $H_1$. So by the Riesz representation theorem, there is a bounded solution operator $S : L^2(\Gamma) \to H_1$, which solves
\begin{gather}\label{eqn:basis-1}
A(S(f), \phi) = \int f \cdot \phi \quad \forall \phi \in H_1.
\end{gather}
We show that $S$ maps into $H_2$. Let us fix some $u = S(f)$. By standard arguments, and since $\Gamma$ is $1$-dimensional, we can take various $\phi$ supported in a fixed segment to deduce $u \in W^{2,2}$. In particular, $u$ solves
\begin{gather}\label{eqn:basis-2}
-u'' + u = f \quad \mathcal{H}^1-a.e. \text{ in } \Gamma.
\end{gather}
We just need to verify $u$ is $C^1$-compatible.
Fix a vertex $p$, and WLOG we can assume the segments $\ell(1), \ell(2), \ell(3)$ meet at $p$. Choose $\phi$ to be supported in a neighborhood of $p$, then integrate \eqref{eqn:basis-1} by parts and use \eqref{eqn:basis-2} to obtain, for some fixed vector $V$,
\begin{gather}
0 = \sum_{i=1}^3 \phi(i)(p) \cdot \partial_n u(i)(p) = \sum_{i=1}^3 \pi_{<n(i)>^\perp}(V) \cdot \partial_n u(i)(p),
\end{gather}
where in the equality we used the $C^0$-compatibility of $\phi$. Here we write explicitly $n(i)$ for the outward conormal of $\ell(i)$. Since $V$ and $p$ are arbitrary, by Lemma \ref{lem:sum-is-zero} we deduce that $u$ is $C^1$-compatible. This proves the claim.
Using that $u$ solves $-u'' + u = f$ at $\mathcal{H}^1$-a.e. point, we can test against $u$ and $u''$, and use the $C^0$- and $C^1$-compatibility conditions to integrate by parts, to obtain
\begin{gather}
||u||_2^2 \leq 2 \int_\Gamma |u|^2 + 2 \int_\Gamma |f|^2 + 2\int_\Gamma |f|^2 \leq 10 ||f||_0^2.
\end{gather}
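For instance, one such computation: testing $-u'' + u = f$ against $u$ and integrating by parts (the boundary terms vanish by the compatibility conditions, exactly as in the self-adjointness computation below) gives
\begin{gather}
\int_\Gamma |u'|^2 + |u|^2 = \int_\Gamma f \cdot u \leq \frac{1}{2} ||f||_0^2 + \frac{1}{2} ||u||_0^2 ,
\end{gather}
so $||u||_1^2 \leq ||f||_0^2$; and since $u'' = u - f$ at $\mathcal{H}^1$-a.e. point, $\int_\Gamma |u''|^2 \leq 2||u||_0^2 + 2||f||_0^2 \leq 4 ||f||_0^2$.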
So $S : L^2 \to H_2$ is bounded.
Let us demonstrate self-adjointness. Take $v = S(g)$, and then by Lemma \ref{lem:sum-is-zero} the $C^0$- and $C^1$-compatibility conditions ensure we can integrate by parts without picking up any boundary terms:
\begin{align}
\int_\Gamma f \cdot S(g) = \int_\Gamma (-u'' + u) \cdot v = \int_\Gamma u' \cdot v' + u\cdot v = \int_\Gamma u \cdot (-v'' + v) = \int_\Gamma S(f) \cdot g.
\end{align}
This completes the proof of Lemma \ref{lem:compact-mapping}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:eigenfunctions}]
From Lemma \ref{lem:compact-mapping}, the solution operator $S : L^2(\Gamma) \to L^2(\Gamma)$ is compact and self-adjoint, and therefore has a countable eigenbasis $u_i$ with eigenvalues $\mu_i \to 0$. For each $u_i$, we have $u_i \in H_2$, and
\begin{gather}
u_i'' + (\mu_i^{-1} - 1) u_i = 0
\end{gather}
weakly in $H_1$, and strongly in $H_2$. It is now straightforward to check that each $u_i$ is smooth, and non-negativity of the eigenvalues $\lambda_i := \mu_i^{-1} - 1$ follows from integration by parts.
\end{proof}
\subsection{$1$-homogeneous implies linear}
For a polyhedral cone without any spine, we easily have that any $1$-homogeneous Jacobi field is linear in the sense of Definition \ref{def:compatible}. However, this argument fails in the presence of a spine. Following Simon \cite{simon1}, we show that any compatible Jacobi field on $\mathbf{C}^0 = \mathbf{C}^0_0\times \mathbb{R}^m$ with appropriate decay splits into a rotation of the spine plus a linear component on $\mathbf{C}^0_0$.
\begin{theorem}\label{thm:1-homo-linear}
Let $v : \mathbf{C}^0 \cap B_1 \to {\mathbf{C}^0}^\perp$ be a $1$-homogeneous compatible Jacobi field, satisfying
\begin{gather}\label{eqn:1-homo-decay}
\int_{\mathbf{C}^0 \cap B_1} \frac{|v - \kappa^\perp|^2}{r^{2+2-\alpha}} < \infty,
\end{gather}
for some $\alpha \in (0, 1)$, and some bounded chunky function $\kappa : (0, 1] \times B_1^m \to \mathbb{R}^{2+k}\times \{0\}$.
Then there is a linear map $A : \{0\}\times \mathbb{R}^m \to \mathbb{R}^{2+k}\times \{0\}$, and a linear compatible Jacobi field $v_0 : \mathbf{C}^0_0 \to {\mathbf{C}^0_0}^\perp$, so that
\begin{gather}
v(x, y) = \pi_{{\mathbf{C}^0}^\perp}(Ay) + v_0(x).
\end{gather}
In other words, $v \in \mathcal{L}$.
\end{theorem}
\begin{proof}
Write $\Gamma = \mathbf{C}^0_0 \cap \mathbb{S}^{1+k}$, and let $\Psi_i(\theta)$ be the eigenfunction expansion of $L^2(\Gamma)$ from Theorem \ref{thm:eigenfunctions}, with associated eigenvalues $\lambda_i$. Write
\begin{gather}\label{eqn:1-homo-1}
v_i(r, y) = \int_\Gamma v(r\theta, y)\cdot \Psi_i(\theta) d\theta, \quad \kappa_i(r, y) = \int_\Gamma \kappa(r, y)^\perp \cdot \Psi_i(\theta) d\theta,
\end{gather}
so that $v(r\theta, y) = \sum_i v_i(r, y) \Psi_i(\theta)$ and $\kappa^\perp(r\theta, y) = \sum_i \kappa_i(r, y) \Psi_i(\theta)$. Notice that, since
\begin{gather}
\kappa_i(r, y) \equiv \kappa(r, y) \cdot \int_\Gamma \Psi_i(\theta) d\theta,
\end{gather}
we have $\kappa_i \equiv 0$ unless $\lambda_i = 0$.
Since both $v$ and $\Psi_i$ are smooth and compatible, and $v$ is harmonic on each wedge, we can integrate \eqref{eqn:1-homo-1} by parts to deduce each $v_i(r, y)$ is smooth and solves:
\begin{gather}
\partial_r^2 v_i + \frac{1}{r} \partial_r v_i + \Delta_y v_i - \frac{\lambda_i}{r^2} v_i = 0.
\end{gather}
Let us define $\phi_i : \mathbb{R}^m \to \mathbb{R}$ by
\begin{gather}
\phi_i(z) = v_i(1, z),
\end{gather}
so that by $1$-homogeneity we have $v_i(r, y) = r \phi_i(y/r)$. By direct calculation, we see that $\phi_i$ satisfies the equation
\begin{gather}\label{eqn:spine-pde}
\sum_{j,k=1}^m (\delta_{jk} + z^j z^k) D_j D_k \phi_i - \sum_{j=1}^m z^j D_j \phi_i + (1-\lambda_i)\phi_i = 0 .
\end{gather}
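One way to carry out this calculation: with $z = y/r$, the $1$-homogeneous ansatz $v_i(r, y) = r\phi_i(y/r)$ gives
\begin{gather}
\partial_r v_i = \phi_i(z) - z \cdot D\phi_i(z), \quad \partial_r^2 v_i = \frac{1}{r} \sum_{j,k=1}^m z^j z^k D_j D_k \phi_i(z), \quad \Delta_y v_i = \frac{1}{r} \Delta \phi_i(z),
\end{gather}
and substituting these into the equation for $v_i$ and multiplying by $r$ yields \eqref{eqn:spine-pde}.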
We aim to show that any $\phi_i$ satisfying \eqref{eqn:spine-pde} and a decay condition guaranteed by \eqref{eqn:1-homo-decay}, must be either linear or constant, depending on the value of $\lambda_i$. Let us first find the correct decay condition on each $\phi_i$.
Using the orthonormality of the $\Psi_i$, we can write
\begin{align}
\int_{\mathbf{C}^0 \cap B_1} \frac{|v - \kappa^\perp|^2}{r^{2+2-\alpha}}
&= \sum_i \int_{B_1^m} \int_0^{\sqrt{1-|y|^2}} r^{\alpha-3} |v_i(r, y) - \kappa_i(r, y)|^2 dr dy \label{eqn:1-homo-2} \\
&= \sum_i \int_{s=0}^1 s^{m-1} \int_{r=0}^{\sqrt{1-s^2}} \int_{\mathbb{S}^{m-1}} r^{\alpha-3} |v_i(r, s\omega) - \kappa_i(r, s\omega)|^2 d\omega dr ds \\
&= \sum_i \int_{s=0}^1 s^{m-1} \int_{t=1/\sqrt{1-s^2}}^\infty \int_{\mathbb{S}^{m-1}} t^{1-\alpha} |v_i(t^{-1}, s\omega) - \kappa_i(t^{-1}, s\omega)|^2 d\omega dt ds.
\end{align}
Therefore, by choosing an appropriate $s_0 \in (1/3, 1/2)$, we have
\begin{align}
&\int_{1}^\infty \int_{\mathbb{S}^{m-1}} t^{-1-\alpha} |\phi_i(t\omega)|^2 d\omega dt < \infty \quad \text{when }\lambda_i \neq 0, \\
&\int_1^\infty \int_{\mathbb{S}^{m-1}} t^{1-\alpha} | t^{-1} \phi_i(t\omega) - \tilde \kappa_i(t\omega)|^2 d\omega dt < \infty \quad \text{when }\lambda_i = 0 ,
\end{align}
where $\tilde \kappa_i(t\omega) := \kappa_i(t^{-1}, s_0\omega)$ is uniformly bounded.
We can now apply Lemma \ref{lem:pde-analysis} (proved just below) to deduce that $\phi_i(z) = a_i \cdot z$ when $\lambda_i = 0$, $\phi_i(z) \equiv b_i$ when $\lambda_i = 1$, and $\phi_i \equiv 0$ otherwise. So we can write
\begin{align}
v(r\theta, y)
&= \sum_{ \{ i : \lambda_i = 0 \} } r \phi_i(y/r) \Psi_i(\theta) + \sum_{ \{ i : \lambda_i = 1 \} } r \phi_i(y/r) \Psi_i(\theta) \\
&= \sum_{j=1}^{m} y^j w_j(\theta) + r v_0(\theta),
\end{align}
where each $w_j(\theta)$ lies in the $\lambda = 0$ eigenspace of $L^2(\Gamma)$, and $v_0(r\theta) \equiv rv_0(\theta)$ is a $1$-homogeneous compatible Jacobi field on $\mathbf{C}^0_0$.
By Proposition \ref{prop:baby-linear}, we know $v_0$ is linear. We must show each $w_j(\theta)$ lies in the space
\begin{gather}
\mathcal{V} = \{ \pi_{{\mathbf{C}^0_0}^\perp}(v) : v \in \mathbb{R}^{2+k}\times\{0\} \}.
\end{gather}
Let $P$ be the $L^2(\Gamma)$ orthogonal projection to $\mathcal{V}^\perp \subset L^2(\Gamma)$.
Since $\kappa^\perp \in \mathcal{V}$ for each $(r, y)$, we have from \eqref{eqn:1-homo-decay} and $L^2(\Gamma)$-orthogonality of $w_j(\theta), v_0(\theta)$ that
\begin{align}
\int_0^1 \int_{B^m_{\sqrt{1-r^2}}} \int_\Gamma r^{\alpha-3} |\sum_{j=1}^{m} y^j w_j(\theta) - \kappa^\perp(r, y)|^2 d\theta dy dr
&\geq \int_0^1 \int_{B^m_{\sqrt{1-r^2}}} \int_\Gamma r^{\alpha-3} |P(\sum_{j=1}^{m} y^j w_j(\theta))|^2 d\theta dy dr
\end{align}
is finite, which necessitates that $P(\sum_{j=1}^{m} y^j w_j(\theta)) \equiv 0$ on $B^m_1 \times \Gamma$. Hence, every $w_j \in \mathcal{V}$ as required.
\end{proof}
To prove Lemma \ref{lem:pde-analysis} we shall need the following $W^{1,2}$ estimate. We note that \eqref{eqn:pde-ineq} fails for general solutions of \eqref{eqn:spine-pde}, so in our analysis of Lemma \ref{lem:pde-analysis} we must consider each term of the Fourier expansion separately.
\begin{lemma}\label{lem:ode-est}
Suppose $\gamma : \mathbb{R}_+ \to \mathbb{R}$ satisfies the ODE
\begin{gather}\label{eqn:main-ode}
(1+r^2)\gamma'' + \left( (m-1)/r - r \right) \gamma' + (-\mu/r^2 + 1-\lambda)\gamma = 0,
\end{gather}
where $m \geq 1$. Then for any $\rho \geq 4$ we have
\begin{gather}\label{eqn:ode-ineq}
\int_{\rho}^{2\rho} (\gamma')^2 dr \leq c(\mu, \lambda) \int_{\rho/2}^{4\rho} \gamma^2/r^2 dr.
\end{gather}
In particular, if $m \geq 2$, and $\phi = \gamma(r) \tilde\phi(\omega)$ solves \eqref{eqn:spine-pde} in $\mathbb{R}^m$, where $\tilde\phi(\omega)$ is an eigenfunction of $-\Delta_{\mathbb{S}^{m-1}}$ with eigenvalue $\mu$, then
\begin{gather}\label{eqn:pde-ineq}
\int_\rho^{2\rho} \int_{\mathbb{S}^{m-1}} |D\phi(r\omega)|^2 d\omega dr \leq c(\mu, \lambda) \int_{\rho/2}^{4\rho} \int_{\mathbb{S}^{m-1}} \phi(r\omega)^2/r^2 d\omega dr .
\end{gather}
\end{lemma}
\begin{proof}
The ODE \eqref{eqn:main-ode} can be written in the divergence form:
\begin{gather}\label{eqn:main-ode-div}
\partial_r (h(r) \partial_r \gamma(r)) + \frac{h(r)}{1+r^2} (-r^{-2} \mu + 1 - \lambda)\gamma(r) = 0,
\end{gather}
where $h(r) = r^{m-1} (1+r^2)^{1-(2+m)/2}$. Take $\eta(r)$ a cutoff which is $\equiv 0$ outside $[\rho/2, 4\rho]$, $\equiv 1$ on $[\rho, 2\rho]$, and linearly interpolates in between. If we multiply \eqref{eqn:main-ode-div} by $\gamma \eta^2$, then we obtain
\begin{align}
\int (\gamma')^2 \eta^2 h dr \leq 5\int \frac{h}{1+r^2} \eta^2 \gamma^2 (|\mu| + 1 + |\lambda|) + (\eta')^2 \gamma^2 h dr.
\end{align}
where we used that $r^{-2} |\mu| \leq |\mu|$ on $\mathrm{spt} \eta$.
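For the reader's convenience we spell this step out: multiplying \eqref{eqn:main-ode-div} by $\gamma \eta^2$ and integrating by parts in $r$ gives
\begin{gather}
\int h (\gamma')^2 \eta^2 \, dr = -2\int h \gamma' \gamma \eta \eta' \, dr + \int \frac{h}{1+r^2} \left( -r^{-2}\mu + 1 - \lambda \right) \gamma^2 \eta^2 \, dr,
\end{gather}
and the preceding bound follows after estimating $|2 h \gamma' \gamma \eta \eta'| \leq \tfrac{1}{2} h (\gamma')^2 \eta^2 + 2 h \gamma^2 (\eta')^2$ by Young's inequality, absorbing the first term into the left-hand side, and bounding the zeroth-order coefficient in absolute value by $|\mu| + 1 + |\lambda|$ on $\mathrm{spt}\, \eta$.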
Since $\rho \geq 4$, on $\mathrm{spt}\, \eta$ we have
\begin{gather}
\frac{1}{2} r^{-1} \leq h(r) \leq 2 r^{-1},
\end{gather}
and therefore
\begin{align}
\rho^{-1} \int_\rho^{2\rho} (\gamma')^2
&\leq 50 (1 + |\mu| + |\lambda|) \rho^{-3} \int_{\rho/2}^{4\rho} \gamma^2 ,
\end{align}
which proves the required relation \eqref{eqn:ode-ineq}.
Let us now take $\phi(r\omega) = \gamma(r) \tilde\phi(\omega)$ solving \eqref{eqn:spine-pde}, with $\Delta_{\mathbb{S}^{m-1}} \tilde\phi + \mu \tilde\phi = 0$. By direct computation, we see that $\gamma$ solves the ODE \eqref{eqn:main-ode}. Therefore, we can use \eqref{eqn:ode-ineq} to compute that
\begin{align}
\int_\rho^{2\rho} \int_{\mathbb{S}^{m-1}} |D\phi|^2 d\omega dr = \left( \int_{\mathbb{S}^{m-1}} \tilde\phi^2 d\omega \right) \int_\rho^{2\rho} (\gamma')^2 + \mu \gamma^2/r^2 dr \leq c(\mu, \lambda) \int_{\rho/2}^{4\rho} \int_{\mathbb{S}^{m-1}} \phi^2/r^2 d\omega dr .
\end{align}
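In the first equality we used the polar decomposition of the gradient and the eigenvalue equation for $\tilde\phi$, namely
\begin{gather}
|D\phi(r\omega)|^2 = \gamma'(r)^2\, \tilde\phi(\omega)^2 + r^{-2} \gamma(r)^2\, |\nabla \tilde\phi(\omega)|^2, \qquad \int_{\mathbb{S}^{m-1}} |\nabla \tilde\phi|^2 \, d\omega = \mu \int_{\mathbb{S}^{m-1}} \tilde\phi^2 \, d\omega,
\end{gather}
where $\nabla$ denotes the gradient on $\mathbb{S}^{m-1}$.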
\end{proof}
\begin{lemma}\label{lem:pde-analysis}
Let $\phi: \mathbb{R}^m \to \mathbb{R}$ be a smooth function satisfying \eqref{eqn:spine-pde}, and take a fixed $\lambda \geq 0$. Assume $\phi$ satisfies the decay bound
\begin{align}
&\int_1^\infty \int_{\mathbb{S}^{m-1}} r^{1-\alpha} | r^{-1} \phi(r\omega) - k(r\omega)|^2 d\omega dr < \infty \quad \text{ if $\lambda = 0$}, \label{eqn:pde-bound-1} \\
&\int_1^\infty \int_{\mathbb{S}^{m-1}} r^{-1-\alpha} |\phi(r\omega)|^2 < \infty \quad \text{ if $\lambda > 0$}, \label{eqn:pde-bound-2}
\end{align}
where $k : \mathbb{R}^m \to \mathbb{R}$ is some bounded measurable function.
Then:
A) when $\lambda = 0$, then $\phi(z) = a\cdot z$ for some $a \in \mathbb{R}^m$,
B) when $\lambda = 1$, then $\phi(z) \equiv const$,
C) otherwise, $\phi(z) = 0$.
\end{lemma}
\begin{proof}
Consider the case $m \geq 2$, and let us first suppose $\phi$ takes the special form $\phi(r\omega) = \gamma(r) \psi(\omega)$, where $\psi$ is an eigenfunction of $-\Delta_{\mathbb{S}^{m-1}}$ with eigenvalue $\mu$. Let $u = D_k \phi$ for any fixed $k$. Then by direct computation $u$ solves
\begin{gather}\label{eqn:pde-analysis-1}
(\delta_{ij} + z^i z^j) D_i D_j u + z^i D_i u - \lambda u = 0.
\end{gather}
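Indeed, differentiating \eqref{eqn:spine-pde} in the $z^k$ direction, the derivative falling on the coefficient $z^i z^j$ produces the term $2 z^j D_j u$, so that
\begin{gather}
(\delta_{ij} + z^i z^j) D_i D_j u + 2 z^j D_j u - z^j D_j u - D_k \phi + (1-\lambda) D_k \phi = 0,
\end{gather}
which simplifies to \eqref{eqn:pde-analysis-1}.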
In polar coordinates \eqref{eqn:pde-analysis-1} becomes
\begin{gather}
(1+r^2) \partial_r^2 u + ((m-1)/r + r ) \partial_r u + \Delta_{\mathbb{S}^{m-1}} u/r^2 - \lambda u = 0,
\end{gather}
which can be written in the divergence form
\begin{gather}\label{eqn:pde-analysis-2}
\partial_r (g(r) \partial_r u) + \frac{g(r)}{1+r^2} (\Delta_S u / r^2 - \lambda u) = 0.
\end{gather}
where $g(r) = r^{m-1} (1 + r^2)^{-(m-2)/2}$ (this should not be surprising, since the original Jacobi equation is in divergence form).
If we multiply \eqref{eqn:pde-analysis-2} by $\zeta(r)^2 u$, where $\zeta \in C^\infty_0(\mathbb{R}_+)$, then we obtain
\begin{gather}
\int_0^\infty \int_{\mathbb{S}^{m-1}} \frac{g}{r^2(1+r^2)} |\nabla u|^2 \zeta^2 + g \zeta^2 (\partial_r u)^2 + \frac{g\zeta^2 u^2 \lambda}{1+r^2} d\omega dr \leq 10 \int_0^\infty \int_{\mathbb{S}^{m-1}} g(r) (\zeta')^2 u^2 d\omega dr.
\end{gather}
Here $\nabla$ indicates the covariant derivative on $\mathbb{S}^{m-1}$. Since $r^{-2} |\nabla u|^2 \leq |Du|^2$ is bounded as $r \to 0$, we can in fact plug in any $\zeta \in C^\infty_0([0, \infty))$. In particular, let us take $\zeta$ to be the usual log cutoff
\begin{gather}\label{eqn:log-cutoff}
\zeta(r) = \max\left\{ 2 - \frac{\log (\max\{r, \rho\})}{\log \rho}, 0 \right\}, \quad \rho \geq 4.
\end{gather}
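Explicitly, with this choice $\zeta \equiv 1$ on $[0, \rho]$, $\zeta \equiv 0$ on $[\rho^2, \infty)$, and
\begin{gather}
|\zeta'(r)| = \frac{1}{r \log \rho} \quad \text{for } r \in (\rho, \rho^2),
\end{gather}
which is the source of the $(\log \rho)^{-2}$ factors in the next display.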
Since $g(r) \leq 2r$ on $\mathrm{spt} \zeta'$, we can use Lemma \ref{lem:ode-est} to obtain
\begin{align}
\int_0^\rho \int_{\mathbb{S}^{m-1}} g(r) \left( \frac{r^{-2}|\nabla u|^2 + \lambda u^2}{1+r^2} + (\partial_r u)^2 \right) d\omega dr
&\leq \frac{c}{(\log \rho)^2} \int_\rho^{\rho^2} \int_{\mathbb{S}^{m-1}} r^{-1} |D\phi|^2 d\omega dr \\
&\leq \frac{c(\lambda, \mu)}{(\log \rho)^2} \int_{\rho/2}^{2\rho^2} \int_{\mathbb{S}^{m-1}} r^{-3} \phi^2 d\omega dr. \label{eqn:pde-analysis-3}
\end{align}
If $\lambda > 0$, then since $r^{-3} \leq r^{-1-\alpha}$ the integral in \eqref{eqn:pde-analysis-3} is bounded as $\rho \to \infty$. This shows that $u = D_k\phi \equiv 0$ for any $k$, and hence $\phi$ is constant. Using \eqref{eqn:spine-pde}, we see that the only constant solution when $\lambda \neq 1$ is $\phi \equiv 0$.
If $\lambda = 0$ then we can instead estimate \eqref{eqn:pde-analysis-3} as
\begin{gather}
\eqref{eqn:pde-analysis-3} \leq \frac{c}{(\log \rho)^2} \int_{\rho/2}^{2\rho^2} r^{-1} \int_{\mathbb{S}^{m-1}} |r^{-1} \phi - k|^2 d\omega dr + \frac{c}{(\log \rho)^2} \int_{\rho/2}^{2\rho^2} r^{-1} dr \leq \frac{c}{\log \rho},
\end{gather}
for some constant $c$ independent of $\rho$. Taking $\rho \to \infty$ gives that $\phi = a\cdot z + b$, but from \eqref{eqn:spine-pde} we see that necessarily $b = 0$.
Now for a general $\phi$, we can decompose $\phi = \sum_i \gamma_i(r) \phi_i(\omega)$ where each $\gamma_i(r) \phi_i(\omega)$ extends to a $C^\infty$ solution of \eqref{eqn:spine-pde} on $\mathbb{R}^m$, and continues to satisfy bounds \eqref{eqn:pde-bound-1}, \eqref{eqn:pde-bound-2}. Therefore we can apply the previous logic to each $\gamma_i \phi_i$ to deduce the required result.
Now consider $m = 1$. This is essentially the same, but easier. We observe that $u = \phi'$ satisfies the ODE
\begin{gather}
(1 + z^2) u'' + z u' - \lambda u = 0,
\end{gather}
which can be written in divergence form as
\begin{gather}\label{eqn:pde-analysis-4}
(g(z) u')' - \frac{\lambda g(z)}{1 + z^2} u = 0,
\end{gather}
where $g(z) = (1+z^2)^{1/2}$.
Multiply \eqref{eqn:pde-analysis-4} by $u(z) \zeta^2(|z|)$, where $\zeta$ is the log cutoff \eqref{eqn:log-cutoff}, and observe that $\phi(|z|)$ solves \eqref{eqn:main-ode} on $\mathbb{R} \setminus \{0\}$. Using Lemma \ref{lem:ode-est}, we obtain as before that
\begin{align}
\int_{-\rho}^\rho (u')^2 g + \frac{\lambda u^2 g}{1+z^2} dz
\leq \frac{10}{(\log \rho)^2} \int_{|z| \in [\rho, \rho^2]} |z|^{-1} (\phi')^2 dz
\leq \frac{c(\lambda)}{(\log \rho)^2} \int_{ |z| \in [\rho/2, 2\rho^2]} |z|^{-3} \phi^2 dz,
\end{align}
and the proof proceeds as in the case $m \geq 2$.
\end{proof}
\subsection{Linear decay}
We first demonstrate the lower bound: if $v$ is $L^2$-orthogonal to the linear fields on $\mathbf{C}^0 \cap B_1$, then $v$ must grow quantitatively more than $1$-homogeneously on that ball.
\begin{lemma}\label{lem:ortho-implies-growth}
Suppose $v : \mathbf{C}^0 \cap \overline{B_1} \to {\mathbf{C}^0}^\perp$ is a smooth compatible Jacobi field, which is $L^2(\mathbf{C}^0 \cap B_1)$ orthogonal to every element in $\mathcal{L}$, and satisfies the decay estimate
\begin{gather}\label{eqn:ortho-hyp}
\int_{\mathbf{C}^0 \cap B_{1/4}} \frac{|v - \kappa^\perp|^2}{r^{2+2-\alpha}} \leq \beta \int_{\mathbf{C}^0 \cap B_1} |v|^2,
\end{gather}
where $\kappa : (0, 1] \times B_1^m \to \mathbb{R}^{2+k}\times \{0\}$ is a chunky function with bound $|\kappa|^2 \leq \beta \int_{\mathbf{C}^0 \cap B_1} |v|^2$.
Then we have
\begin{gather}\label{eqn:ortho-growth}
\int_{\mathbf{C}^0 \cap B_1\setminus B_{1/10}} |\partial_R(v/R)|^2 \geq \frac{1}{c(\mathbf{C}^0, \beta, \alpha)} \int_{\mathbf{C}^0 \cap B_1} |v|^2.
\end{gather}
\end{lemma}
\begin{proof}
Suppose, towards a contradiction, the Lemma fails: we have a sequence of smooth, compatible Jacobi fields $v_i$ on $\mathbf{C}^0 \cap \overline{B_1}$, and associated chunky functions $\kappa_i$, which both satisfy the hypotheses of Lemma \ref{lem:ortho-implies-growth}, but each $v_i$ admits the bound
\begin{gather}\label{eqn:ortho-1}
\int_{\mathbf{C}^0 \cap B_1 \setminus B_{1/10}} R^{2-n} |\partial_R (v_i/R)|^2 \leq \varepsilon_i \int_{\mathbf{C}^0 \cap B_1} |v_i|^2,
\end{gather}
with $\varepsilon_i \to 0$.
Define the rescaled $\tilde v_i := ||v_i||_{L^2(\mathbf{C}^0 \cap B_1)}^{-1} v_i$. Then $||\tilde v_i||_{L^2(\mathbf{C}^0\cap B_1)} = 1$ for all $i$, and using Lemma \ref{lem:jacobi-apriori-est} we can pass to a subsequence, and deduce the $\tilde v_i$ converge smoothly on compact subsets of $\mathbf{C}^0 \cap B_1 \setminus { \{0\} \times \R^m }$ to some limit $\tilde v$. We have strong convergence in $L^2(\mathbf{C}^0 \cap B_{1/4})$, since \eqref{eqn:ortho-hyp} implies
\begin{gather}\label{eqn:ortho-2}
\int_{\mathbf{C}^0 \cap B_{1/4} \cap B_\delta({ \{0\} \times \R^m })} |\tilde v_i|^2 \leq c(\mathbf{C}^0, \beta) \delta^{2-\alpha} \quad \forall \delta > 0 .
\end{gather}
By compactness of chunky functions, we can assume $||v_i||_{L^2(\mathbf{C}^0 \cap B_1)}^{-1} \kappa_i \to \tilde\kappa$ uniformly on compact subsets of $\mathbf{C}^0 \cap B_1 \setminus { \{0\} \times \R^m }$.
The resulting $\tilde v$ is a compatible Jacobi field, which is $L^2(\mathbf{C}^0 \cap B_1)$-orthogonal to the linear fields, and satisfies the bound
\begin{gather}\label{eqn:ortho-3}
\int_{\mathbf{C}^0 \cap B_{1/4}} \frac{|\tilde v - \tilde\kappa^\perp|^2}{r^{2+2-\alpha}} < \infty,
\end{gather}
where $\tilde\kappa : (0, 1] \times B_1^m \to \mathbb{R}^{2+k}\times \{0\}$ is bounded and chunky.
Moreover, by our hypothesis \eqref{eqn:ortho-1}, $\tilde v$ extends to a $1$-homogeneous field on $\mathbf{C}^0$. By Theorem \ref{thm:1-homo-linear} and our bound \eqref{eqn:ortho-3} we deduce $\tilde v$ is linear, but this contradicts our orthogonality assumption unless $\tilde v \equiv 0$.
So $\tilde v_i \to 0$ uniformly on compact subsets of $B_1 \cap \mathbf{C}^0 \setminus { \{0\} \times \R^m }$. But, by radial integration and \eqref{eqn:ortho-3}, one can show that
\begin{gather}
\int_{\mathbf{C}^0 \cap B_1 \setminus B_{1/10}} |\partial_R(\tilde v_i/R)|^2 \geq \frac{1}{c(n)} - c(\mathbf{C}^0)(\varepsilon^2 + \beta \delta^{2-\alpha}),
\end{gather}
whenever $\sup_{\mathbf{C}^0 \cap B_{1/10} \setminus B_{\delta}({ \{0\} \times \R^m })} |\tilde v_i| \leq \varepsilon$. For $i >> 1$, this is a contradiction.
\end{proof}
We now prove Theorem \ref{thm:linear-decay}.
\begin{proof}[Proof of Theorem \ref{thm:linear-decay}]
From Lemma \ref{lem:ortho-implies-growth} and \eqref{eqn:linear-decay-hyp2} there is a constant $\beta_2 = \beta_2(\mathbf{C}^0, \beta, \alpha)$ so that, for every $\rho \in [\theta, 1/10]$,
\begin{gather}
\int_{\mathbf{C}^0 \cap B_{\rho/10}} R^{2-n} |\partial_R (v/R)|^2 \leq \beta\int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2 \leq \beta \beta_2 \int_{\mathbf{C}^0 \cap B_\rho \setminus B_{\rho/10}} R^{2-n} |\partial_R(v/R)|^2.
\end{gather}
Therefore by hole-filling we obtain
\begin{gather}
\int_{\mathbf{C}^0 \cap B_{\rho/10}} R^{2-n} |\partial_R(v/R)|^2 \leq \frac{\beta\beta_2}{1+\beta\beta_2} \int_{\mathbf{C}^0 \cap B_\rho} R^{2-n} |\partial_R (v/R)|^2.
\end{gather}
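Indeed, writing $I(s) := \int_{\mathbf{C}^0 \cap B_s} R^{2-n} |\partial_R(v/R)|^2$, the first of the two displays above reads $I(\rho/10) \leq \beta\beta_2\, (I(\rho) - I(\rho/10))$; adding $\beta\beta_2\, I(\rho/10)$ to both sides gives
\begin{gather}
(1 + \beta\beta_2)\, I(\rho/10) \leq \beta\beta_2\, I(\rho),
\end{gather}
which is precisely the hole-filled inequality.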
Writing $\gamma = \frac{\beta\beta_2}{1+\beta\beta_2} < 1$, we can iterate the above inequality to obtain
\begin{gather}
\int_{\mathbf{C}^0 \cap B_{\theta}} R^{2-n} |\partial_R(v/R)|^2 \leq c(\gamma) \theta^\mu \int_{\mathbf{C}^0 \cap B_{1/40}} R^{2-n} |\partial_R(v/R)|^2,
\end{gather}
where $\mu = -\log(\gamma)/\log(10) > 0$. Using Lemma \ref{lem:ortho-implies-growth} at scale $\theta$ and \eqref{eqn:linear-decay-hyp2} at scale $1/4$ completes the proof of Theorem \ref{thm:linear-decay}.
\end{proof}
\section{Inhomogeneous blow-ups}\label{sec:blow-up}
We finish proving the excess decay Theorem \ref{thm:main-decay}. We shall demonstrate how blow-up sequences generate compatible Jacobi fields, and how integrability allows us to remove the linear part of the limiting field at any fixed scale. This allows us to apply the linear decay of Theorem \ref{thm:linear-decay} to prove non-linear excess decay.
As before we continue to work with a fixed $\mathbf{C}^0 = {\mathbf{C}^0_0}^2 \times \mathbb{R}^m$, with ${\mathbf{C}^0_0}^2 \subset \mathbb{R}^{2+k}$ polyhedral.
\subsection{Blowing-up}
We need a notion of convergence under varying domains. Consider the sequence of domains
\begin{gather}
\Omega_i = \{ (x', x_{m+1}) \in B_1^m \times \mathbb{R} : 0 \leq x_{m+1} \leq 1 + f_i(x') \} \subset \mathbb{R}^{m+1},
\end{gather}
where $f_i : B_1^m \to \mathbb{R}$ is $C^{1,\alpha}$, and $|f_i|_{C^{1,\alpha}} \to 0$.
Suppose we have $u_i : \Omega_i \to \mathbb{R}$, with uniformly bounded $|u_i|_{C^{1,\alpha}(\Omega_i)} \leq \Lambda$. Define the functions $\phi_i : B_1^m \times [0, 1] \to \Omega_i$ by setting
\begin{gather}
\phi_i(x', x_{m+1}) = (x', (1 + f_i(x')) x_{m+1} ).
\end{gather}
Then $\phi_i$ is a diffeomorphism for large $i$, and we can consider the functions $\hat u_i : B_1^m \times [0,1] \to \mathbb{R}$ defined by $\hat u_i = u_i \circ \phi_i$.
Now by Arzel\`a--Ascoli and convergence of $f_i$, we can find a $C^{1,\alpha}$ function $u : B_1^m \times [0,1] \to \mathbb{R}$, with $|u|_{C^{1,\alpha}} \leq \Lambda$, so that:
\begin{gather}\label{eqn:varying-domain-conv}
\hat u_i \to u \text{ in $C^{1,\alpha'}(B_1^m \times [0,1])$}, \quad \text{ and } \quad u_i \to u \text{ in $C^{1,\alpha'}_{loc}(B_1^m \times (0, 1))$},
\end{gather}
for any $\alpha' < \alpha$.
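To justify \eqref{eqn:varying-domain-conv}: the differential of $\phi_i$ is
\begin{gather}
D\phi_i(x', x_{m+1}) = \mathrm{Id} + E_i(x', x_{m+1}), \qquad |E_i| \leq c\, |f_i|_{C^1} \to 0,
\end{gather}
where the only nonzero row of $E_i$ is the last one, equal to $(x_{m+1} Df_i(x'),\ f_i(x'))$. Hence $\phi_i$ is a diffeomorphism onto $\Omega_i$ for $i$ large, the norms $|\hat u_i|_{C^{1,\alpha}}$ are uniformly bounded, and Arzel\`a--Ascoli gives (after passing to a subsequence) a $C^{1,\alpha'}$-convergent limit $u$; convergence of $u_i$ itself on compact subsets of $B_1^m \times (0,1)$ then follows since $\phi_i \to \mathrm{id}$ in $C^{1,\alpha}$.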
Let us now take $(M_i, \mathbf{C}_i, \varepsilon_i, \beta_i)$ a blow-up sequence w.r.t. $\mathbf{C}^0 = \mathbf{C}^0_0 \times \mathbb{R}^m$. By Lemma \ref{lem:poly-graph}, there are numbers $\tau_i \to 0$ so that (for $i >> 1$) we can decompose
\begin{gather}
M_i \cap B_{3/4} = \mathrm{graph}_{\mathbf{C}_i}(u_i, f_i, \Omega_i), \quad B_{1/2} \setminus B_{\tau_i}({ \{0\} \times \R^m }) \subset \Omega_i, \quad |u_i|_{C^{1,\mu}} + |f_i|_{C^{1,\mu}} \leq \tau_i,
\end{gather}
as per Definition \ref{def:poly-graph}, where the $u_i$, $f_i$ satisfy estimates \eqref{eqn:point-est-poly-1}, \eqref{eqn:point-est-poly-2}, \eqref{eqn:integral-est-poly-1}, \eqref{eqn:integral-est-poly-2}.
Similarly, we can decompose
\begin{gather}
\mathbf{C}_i = \mathrm{graph}_{\mathbf{C}^0}(\phi_i, g_i, U_i), \quad B_{3/4} \subset U_i, \quad |\phi_i|_{C^{1,\mu}} + |g_i|_{C^{1,\mu}} \leq \tau_i,
\end{gather}
where we use the fact $\mathbf{C}_i$ is also conical to extend $U_i$. Here $\phi_i$, $g_i$ also satisfy estimates \eqref{eqn:point-est-poly-1}, \eqref{eqn:point-est-poly-2}, \eqref{eqn:integral-est-poly-1}, \eqref{eqn:integral-est-poly-2} of Lemma \ref{lem:poly-graph}.
Since each $\mathbf{C}_i$ is also polyhedral, we have that both $\phi_i$ and $g_i$ are \emph{linear} functions on the domains $U_i$ in $\mathbf{C}^0$. In particular, we can extend $\phi_i$ to be defined on each plane $P(j)\times \mathbb{R}^m$ associated to the wedges $W(j)$, and note that we can say (trivially) that
\begin{gather}\label{eqn:linear-smooth}
|\phi_i|_{C^\infty} + |g_i|_{C^\infty} \to 0.
\end{gather}
Let us define $\tilde \Omega_i(j) \subset P(j)$ to be the domains where
\begin{gather}
\Omega_i(j) = \{x' + \phi_i(x') : x' \in \tilde\Omega_i(j) \}.
\end{gather}
Since every $f_i, \phi_i, g_i \to 0$ in $C^{1,\mu}_{loc}$, each domain $\tilde\Omega_i(j)$ is converging locally in $C^{1,\mu}(B_{1/2} \setminus { \{0\} \times \R^m })$ to $W(j) \times\mathbb{R}^m$.
Now consider the rescaled graphs $v_i(j) : \tilde\Omega_i(j) \to {\mathbf{C}_i}^\perp$ defined by
\begin{gather}
v_i(j)(x') = \beta_i^{-1} u_i(j)(x' + \phi_i(x')).
\end{gather}
From Lemma \ref{lem:poly-graph} and the definition of blow-up sequence, the $v_i$ satisfy:
\begin{gather}\label{eqn:holder-bounds-v}
\limsup_i \sum_{j=1}^d \int_{\Omega_i(j)} |v_i|^2 < \infty, \quad \sup_{\tilde\Omega_i(j)} r^{n+2}(r^{-1}|v_i| + |Dv_i| + r^{\alpha}[Dv_i]_{\alpha,C} )^2 \leq c(\mathbf{C}^0, \alpha).
\end{gather}
Therefore, using \eqref{eqn:linear-smooth}, after passing to a subsequence (which we will also denote by $i$) we can find a function $v : \mathbf{C}^0 \cap B_{1/2} \to {\mathbf{C}^0}^\perp$ so that for each $j = 1, \ldots, d$, we have $C^{1,\mu'}$ convergence $v_i(j) \to v(j)$ locally in the sense of \eqref{eqn:varying-domain-conv}. In particular, we have
\begin{gather}
v(j) \in C^{1,\mu}_{loc}( ( (W(j) \setminus \{0\}) \times \mathbb{R}^m) \cap B_{1/2}).
\end{gather}
We can then make the following
\begin{definition}
Let $(M_i, \mathbf{C}_i, \varepsilon_i, \beta_i)$ be the subsequence which gives convergence to $v$ as outlined above. We then say that $v$ is the \emph{Jacobi field generated by $(M_i, \mathbf{C}_i, \varepsilon_i, \beta_i)$}.
\end{definition}
We shall demonstrate in the following Proposition that $v$ is a compatible Jacobi field on $\mathbf{C}^0$ with good estimates.
\begin{prop}\label{prop:blow-up}
Let $(M_i^{2+m}, \mathbf{C}_i, \varepsilon_i, \beta_i)$ be a blow-up sequence w.r.t $\mathbf{C}^0$, generating Jacobi field $v : \mathbf{C}^0 \cap B_{1/2} \to {\mathbf{C}^0}^\perp$. Then $v$ is compatible (in the sense of Definition \ref{def:compatible}), and moreover satisfies the following estimates: for every $\rho \leq 1/4$, we have
A) Strong $L^2$ convergence:
\begin{gather}
\int_{\mathbf{C}^0 \cap B_\rho} |v|^2 = \lim_i \beta^{-2}_i \int_{M_i \cap B_\rho} d_{\mathbf{C}_i}^2 ;
\end{gather}
B) Non-concentration:
\begin{gather}
\rho^{2+2-1/2} \int_{\mathbf{C}^0 \cap B_{\rho/2}} \frac{|v - \kappa_\rho^\perp|^2}{r^{2+2-1/2}} \leq c(\mathbf{C}^0) \rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |v|^2,
\end{gather}
where $\kappa_\rho : (0,\rho]\times B^m_\rho \to \mathbb{R}^{\ell+k}\times \{0\}$ is a chunky function satisfying $|\kappa_\rho|^2 \leq c(\mathbf{C}^0) \rho^{-n} \int_{\mathbf{C}^0 \cap B_\rho} |v|^2$;
C) Growth estimates:
\begin{gather}
\int_{\mathbf{C}^0 \cap B_{\rho/10}} R^{2-n} |\partial_R (v/R)|^2 \leq c(\mathbf{C}^0) \int_{\mathbf{C}^0 \cap B_\rho} |v|^2.
\end{gather}
\end{prop}
\begin{remark}
Even though $v$ is smooth, convergence to $v$ may be only $C^{1,\alpha}$.
\end{remark}
\begin{proof}
We first show compatibility. Let $\{e_p\}_{p=1}^n$ be an ON basis for the plane $P(j) \times \mathbb{R}^m$. Using the first-variation formula and the definition of $\phi_i$, $u_i$, one obtains directly that
\begin{gather}
\int_{W(j)\times \mathbb{R}^m} \sum_{p=1}^n (D_p u_i(j))(x + \phi_i(j)(x)) \cdot D_p \zeta(x) = \int_{\mathrm{spt} \zeta} O(|Du_i|^2 + |Du_i||D\phi_i| + |H_M|),
\end{gather}
for any $\zeta \in C^\infty_c( ((\mathrm{int}\, W(j))\times \mathbb{R}^m)\cap B_{3/4}, \mathbb{R}^{n+k})$. Therefore, using \eqref{eqn:holder-bounds-v}, and the definition of $v_i$ and blow-up sequence, we get that
\begin{gather}
\int_{W(j)\times \mathbb{R}^m} \sum_{p=1}^n D_p v \cdot D_p \zeta = 0
\end{gather}
for all such $\zeta$. We deduce that $v(j)$ is harmonic on $(\mathrm{int}W(j)\times \mathbb{R}^m)\cap B_{1/2}$.
Write $L = \cup_{j} L(j)$ for the lines of $\mathbf{C}^0_0$. Pick any $X = (x, y) \in ( (L \setminus \{0\}) \times \mathbb{R}^m) \cap B_{1/2}$. In view of Remarks \ref{rem:wedge-cont} and \ref{rem:looks-like-Y}, we can choose a fixed $\rho = \rho(X, \mathbf{C})$, a constant $c = c(m, k, \rho)$ and a sequence of rotations $q_i \to q \in SO(n+k)$, so that
\begin{gather}
(q_i(\rho^{-1}(M_i - X)), \mathbf{Y}\times \mathbb{R}^{m+1}, c \varepsilon_i, \beta_i)
\end{gather}
is a blow-up sequence w.r.t $\mathbf{Y}\times \mathbb{R}^{m+1}$, generating Jacobi field
\begin{gather}
\tilde v(Y) := \rho^{-1} (q \circ v)( X + \rho q^{-1}(Y)).
\end{gather}
By Lemma \ref{lem:compatible-Y}, $\tilde v$ satisfies the required $C^0$ and $C^1$ compatibility conditions in $B_{1/2}$, and therefore $v$ satisfies these conditions in $B_{\rho/2}(X)$. Compatibility of $v$ now follows from Lemma \ref{lem:jacobi-apriori-est}.
We now prove properties A), B), C). Fix $\rho \leq 1/4$, and recall $v_i$ as the approximating sequence which converges to $v$. We first observe that
\begin{gather}
\sum_{j=1}^d \int_{\Omega_i(j) \cap B_\rho \setminus B_\delta({ \{0\} \times \R^m })} |v_i(j)|^2 = O(\tau_i) + (1 + o(1)) \beta_i^{-2} \int_{M_i \cap B_\rho \setminus B_\delta({ \{0\} \times \R^m })} d_{\mathbf{C}_i}^2,
\end{gather}
since the Jacobian of the map $x' \mapsto x' + \phi_i(x')$ is $1 + o(1)$, and $|u_i(j)| = d_{\mathbf{C}_i}$ away from $B_{10\tau_i}(L\times \mathbb{R}^m)$.
Therefore, by the $C^{1,\mu}$ convergence of $\tilde\Omega_i(j)$ and $v_i(j)$ (as per \eqref{eqn:varying-domain-conv}) we have
\begin{gather}
\int_{\mathbf{C}^0 \cap B_\rho \setminus B_\delta({ \{0\} \times \R^m })} |v|^2 = \lim_{i \to \infty} \beta_i^{-2} \int_{M_i \cap B_\rho \setminus B_\delta({ \{0\} \times \R^m })} d_{\mathbf{C}_i}^2.
\end{gather}
On the other hand, by estimates \eqref{eqn:thm-spine-est} and \eqref{eqn:thm-k-est} we have for any $\delta \geq \tau$ and $i >> 1$:
\begin{gather}\label{eqn:blow-up-1}
\sum_j \int_{\Omega_i(j) \cap B_\delta({ \{0\} \times \R^m })\setminus B_{\tau}(L\times \mathbb{R}^m)} |u_i(j)|^2 \leq c(\mathbf{C}^0) \delta^{2-1/2} E(M_i, \mathbf{C}, 0, 1).
\end{gather}
Write $\Gamma = \limsup_i \beta_i^{-2} E_{\varepsilon_i}(M_i, \mathbf{C}_i, 0, 1)$. Passing to the limit in \eqref{eqn:blow-up-1}, and then taking $\tau \to 0$, we deduce
\begin{gather}\label{eqn:no-conc-v}
\int_{\mathbf{C}^0 \cap B_\rho \cap B_\delta({ \{0\} \times \R^m })} |v|^2 \leq c(\mathbf{C}^0, \Gamma) \delta^{2-1/2}.
\end{gather}
Similarly, we have by estimate \eqref{eqn:thm-spine-est} that (for $i >> 1$)
\begin{gather}\label{eqn:no-conc-d}
\beta_i^{-2} \int_{M_i \cap B_\rho \cap B_\delta({ \{0\} \times \R^m })} d_{\mathbf{C}_i}^2 \leq c(\mathbf{C}^0, \Gamma) \delta^{2-1/2}.
\end{gather}
Since \eqref{eqn:no-conc-v}, \eqref{eqn:no-conc-d} are valid for any fixed $\delta$ (provided $i$ sufficiently large), we deduce the strong $L^2$ convergence of A).
Let us prove B). Fix $\tau > 0$. We can apply Theorem \ref{thm:l2-est} at scale $\rho$ to deduce that, for each $i >> 1$, we have a chunky function $\kappa_{\rho,i} : (0,\rho] \times B_\rho^m \to \mathbb{R}^{\ell+k}\times\{0\}$, with the bound
\begin{gather}
|\kappa_{\rho,i}| \leq c(\mathbf{C}^0) \rho^{-n} \int_{M_i \cap B_\rho} d_{\mathbf{C}_i}^2 + c(\mathbf{C}^0) \rho \beta_i^2 \Gamma \varepsilon_i,
\end{gather}
so that
\begin{gather}\label{eqn:conv-part-B}
\rho^{2+2-1/2} \sum_{j=1}^d \int_{\Omega_i(j) \cap B_{\rho/2} \setminus B_\tau(L \times \mathbb{R}^m)} \frac{|\beta_i^{-1} u_i(j) - \beta_i^{-1} \kappa_{\rho, i}^\perp|^2}{r^{2+2-1/2}} \leq c(\mathbf{C}^0) \rho^{-n-2} \beta_i^{-2} \int_{M_i \cap B_\rho} d_{\mathbf{C}_i}^2 + c(\mathbf{C}^0) \rho \Gamma \varepsilon_i.
\end{gather}
By compactness of chunky functions, we can find a subsequence $i'$ and a chunky function $\kappa_\rho$ so that $\beta_i^{-1} \kappa_{\rho,i} \to \kappa_\rho$ pointwise, and uniformly on $B_\rho \setminus B_\tau(L\times \mathbb{R}^m)$ (for any fixed $\tau > 0$). Using A), we can therefore take the limit in $i'$ on each side of \eqref{eqn:conv-part-B}, to deduce
\begin{gather}
\rho^{2+2-1/2} \int_{\mathbf{C}^0 \cap B_{\rho/2} \setminus B_\tau(L \times \mathbb{R}^m)} \frac{|v - \kappa_\rho|^2}{r^{2+2-1/2}} \leq c(\mathbf{C}^0) \rho^{-n-2}\int_{\mathbf{C}^0 \cap B_\rho} |v|^2.
\end{gather}
Taking $\tau \to 0$ gives B).
We show C). From \eqref{eqn:thm-point-est}, we have for any $\tau > 0$ and $i >> 1$,
\begin{gather}
\sum_j \int_{\Omega_i(j) \cap B_{\rho/10} \setminus B_\tau(L\times \mathbb{R}^m)} R^{2-n} |\partial_R(u_i(j)/R)|^2 \leq c(\mathbf{C}^0) \rho^{-n-2} \int_{M_i \cap B_\rho} d_{\mathbf{C}_i}^2 + c(\mathbf{C}^0) \rho \beta_i^2 \Gamma \varepsilon_i.
\end{gather}
Therefore, using the $C^1$ convergence of $v_i(j)$ away from $\partial W(j) \times \mathbb{R}^m$, and part A) we have
\begin{gather}
\int_{\mathbf{C}^0 \cap B_{\rho/2} \setminus B_\tau(L\times \mathbb{R}^m)} R^{2-n} |\partial_R(v/R)|^2 \leq c(\mathbf{C}^0) \rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |v|^2.
\end{gather}
Now take $\tau \to 0$ to deduce C).
\end{proof}
We now prove Lemma \ref{lem:compatible-Y}, which was used above: Jacobi fields generated by blow-up sequences with respect to $\mathbf{Y}\times \mathbb{R}^m$ satisfy the compatibility conditions along the spine.
\begin{lemma}\label{lem:compatible-Y}
Suppose $(M_i^{1+m}, \mathbf{Y}\times \mathbb{R}^m, \varepsilon_i, \beta_i)$ is a blow-up sequence w.r.t $\mathbf{Y}\times \mathbb{R}^m$, generating Jacobi field $v : \mathbf{C}^0 \cap B_{1/2} \to (\mathbf{C}^0)^\perp$. Then for every $y \in B_{1/2}^m$, there is a vector $V \in \mathbb{R}^{n+k}$ so that
\begin{gather}
v(j)(0, y) = \pi_{Q(j)^\perp}(V) \quad j = 1, 2, 3, \quad \text{and} \quad \sum_{j=1}^3 \partial_n v(j)(0, y) = 0.
\end{gather}
\end{lemma}
\begin{proof}
Fix some $y \in B_{1/2}^m$, and let $V_i$ be the (unique) point in $\mathrm{sing} M_i \cap (\mathbb{R}^{\ell+k}\times \{y\}) \cap B_1$. So, we have
\begin{gather}
u_i(j)(f_i(j) (0, y)) = \pi_{Q(j)^\perp}(V_i),
\end{gather}
and from the $120^\circ$ angle condition we have
\begin{gather}\label{eqn:V-bound}
|V_i| \leq \sum_{j=1}^3 |u_i(j)(f_i(j)(0, y))|.
\end{gather}
From the blow-up procedure we have $u_i(j)(f_i(j)(0, y)) \to v(j)(0, y)$, and from \eqref{eqn:V-bound} we can pass to a subsequence $i'$ so that $V_{i'} \to V$. Then we have
\begin{gather}
v(j)(0, y) = \pi_{Q(j)^\perp}(V).
\end{gather}
This proves the $C^0$-compatibility.
We prove the $C^1$ condition. Our proof follows \cite{simon1}, but we additionally exploit the stationarity of $\mathbf{Y}\times \mathbb{R}^m$ (as a technical aside, we mention that \cite{simon1} only requires stationarity away from the axis, while we stipulate stationarity through the axis; for unions of half-planes this restricts not only the allowable surfaces but also the notion of integrability). Let $\zeta(r, y)$ be any function with $\partial_r \zeta \equiv 0$ near $\{0\}\times \mathbb{R}^m$, and $\mathrm{spt}\zeta \subset B_{1/10}(X)$ for some $X$. For ease of notation write $E_i = E(M_i, \mathbf{Y}\times \mathbb{R}^{1+m}, 1)$.
After rotation we can fix one of the $H(j) \equiv \mathbb{R}_+ \times \{0\}^k \times \mathbb{R}^m$. So, coordinates on $H(j)$ are $(x^1, y^1, \ldots, y^m)$, and coordinates on $H(j)^\perp \equiv \mathbb{R}^k$ are $(x^2, \ldots, x^{1+k})$. Ensuring $i >> 1$, we can assume $M_i$ is graphical over $H(j) \cap (B_{3/4} \setminus B_{\tau/2}(\partial H(j)))$, with graphing function $u_i(j)$. Write
\begin{gather}
U(j) = H(j)\cap (B_{1/2} \setminus B_{\tau}(\partial H(j))).
\end{gather}
Let us drop the $i$ and $j$ indices momentarily. Write $h^{pq}$ for the inverse of $h_{pq} = \delta_{pq} + D_p u \cdot D_q u$, and $\sqrt{h}$ for the square root of the determinant of $(h_{pq})$. Then we have
\begin{align*}
\int_{u(U)} \nabla x^1 \cdot \nabla \zeta
&= \underbrace{\int_{U} \sqrt{h} h^{11} \partial_{x^1} ( \zeta(\sqrt{x^2 + |u|^2}, y) )}_{=:I_1} \\
&\quad + \underbrace{\int_U \sum_{p=1}^m \sqrt{h} h^{1, 1+p} \partial_{y^p} \big(\zeta(\sqrt{x^2 + |u|^2}, y)\big)}_{=:I_2} .
\end{align*}
Since the cross terms $|h^{1,1+p}| \leq c|Du|^2$, we can bound the second term directly as
\begin{align*}
\left| I_2 \right|
&\leq \int_{U} \left| \sum_{p=1}^m \sqrt{h} h^{1, 1+p} ( (\partial_r \zeta) \frac{u \partial_{y^p} u}{\sqrt{x^2 + |u|^2}} + (\partial_{y^p} \zeta))\right| \\
&\leq c(n, \beta) (|\partial_r \zeta| + |D_y \zeta|) \int_{U} |Du|^2 (1 + |u| |Du|) \\
&\leq c(n,\tau, \beta, \zeta) E_i .
\end{align*}
Rather than bounding the first term explicitly, we compare it with $\int_U (\partial_r \zeta)(x, y)$. Recalling that $|u| \leq \beta |x|$, we have
\begin{align*}
\left| I_1 - \int_{U} (\partial_r \zeta)(x, y) \right|
&= \left|\int_{U} \sqrt{h} h^{11} (\partial_r \zeta)(\sqrt{x^2 + |u|^2}, y) \frac{x}{\sqrt{x^2 + |u|^2}} - \int_{U} (\partial_r \zeta)(x, y) \right| \\
&\leq \int_{U} | \sqrt{h} h^{11} - 1| \left| (\partial_r \zeta)(\sqrt{x^2 + |u|^2}, y) \right| \\
&\quad + \int_{U} \left| (\partial_r \zeta)(\sqrt{x^2 + |u|^2}, y) \frac{x}{\sqrt{x^2 + |u|^2}} - (\partial_r \zeta)(x, y) \right| \\
&\leq c(n, \beta) |\partial_r \zeta| \int_{U} |Du|^2 + c(n, \beta) |\partial_r^2 \zeta| \int_{U} |u|^2 \\
&\leq c(n, \beta, \tau, \zeta) E_i .
\end{align*}
For $j = 2, \ldots, 1+k$, we also have
\begin{align}
\left| \int_{u(U)} \nabla x^j \cdot \nabla \zeta - \int_U D u^j \cdot D \zeta \right|
&\leq \int_U |h^{pq} - \delta^{pq}| D_p u^j D_q (\zeta(\sqrt{x^2 + |u|^2}, y)) \\
&\quad + \int_U |D u^j| |D (\zeta(\sqrt{x^2 + |u|^2}, y)) - D \zeta(x, y) | \\
&\leq c(n, \beta) (|\partial_r \zeta| + |D_y \zeta| + |D^2 \zeta|) \int_U |Du|^2 + |u|^2 \\
&\leq c(n, \beta, \tau, \zeta) E_i.
\end{align}
Therefore, turning indices back on we have the coordinate-free expression
\begin{align}
\int_{u_i(j)(U(j))} \pi_{\mathbb{R}^{1+k}\times\{0\}}(\nabla \zeta) \label{eqn:compat-sum-errors}
&= \int_{u_i(j)(U(j))} \sum_{p=1}^{1+k} (e_p \cdot \nabla \zeta) e_p \\
&= -\int_{U(j)} (\partial_r \zeta)(r, y) n(j) + \sum_q D_q u_i(j) \cdot D_q \zeta + R_i(j),
\end{align}
Here $q$ sums over the coordinates on $H(j)$ (so, $x^1, y^1,\dots, y^m$), $n(j)$ is the outwards conormal of $\partial H(j) \subset H(j)$, and
\begin{gather}
R_i(j) \leq c(n, \beta, \tau, \zeta) E_i.
\end{gather}
We can identify any $U(j)$ and $U(j')$ by a rotation, and thereby view the integrand \eqref{eqn:compat-sum-errors} as defined on a fixed $U(1) \equiv U$. Since $\sum_{j=1}^3 n(j) = 0$, this gives
\begin{align}
\sum_{j=1}^3 \int_{u_i(j)(U(j))} \pi_{\mathbb{R}^{1+k}\times\{0\}} (\nabla \zeta)
&= \int_U \sum_q D_q (\sum_{j=1}^3 u_i(j)) \cdot D_q \zeta + \sum_{j=1}^3 R_i(j),
\end{align}
where again $q$ sums over coordinates in $H(j)$.
On the other hand, since $\partial_r \zeta \equiv 0$ near $\{0\}\times \mathbb{R}^m$, we can ensure $\tau(\zeta)$ is sufficiently small (and then take $i$ sufficiently large) so that $\partial_r \zeta \equiv 0$ on $V = B_{5\tau}(\{0\} \times \mathbb{R}^m)$. In particular, we have $\pi_{\mathbb{R}^{1+k}\times\{0\}}(D\zeta) = 0$ on $V$. Therefore, we use the $L^2$-estimates of \cite[Theorem 3.1]{simon1} to deduce
\begin{align}
\left| \int_{M_i \cap V} e_p \cdot \nabla \zeta \right|
&\leq \int_{M_i \cap V} | \pi_{\mathbb{R}^{1+k}\times\{0\}}(e_p) \cdot \pi_{M^T}(D\zeta)| \\
&= \int_{M_i \cap V} | \pi_{\mathbb{R}^{1+k}\times \{0\}}(e_p) \cdot (-\pi_{M^\perp}(D\zeta))| \\
&\leq \int_{M_i \cap V} | \pi_{M^\perp}(\pi_{\{0\}\times \mathbb{R}^m}(D\zeta))| \\
&\leq c(n, \zeta) \sqrt{\tau} \left( \int_{M_i \cap V \cap \mathrm{spt} \zeta} \langle M^\perp, \{0\}\times \mathbb{R}^m\rangle^2 \right)^{1/2} \\
&\leq c(n, \zeta) \sqrt{\tau E_i} ,
\end{align}
for each $p = 1, \ldots, 1+k$.
Since we can ensure $|u_i(j)| \leq \beta |x| \leq |x|/100$ on $U(j)$, the above estimates combine as follows: for $e_p$ an ON basis of $\mathbb{R}^{1+k}\times \{0\}$,
\begin{align}
R_1 = \sum_{p=1}^{1+k} \int_{M_i} \mathrm{div}(\zeta e_p) e_p
&= \int_{M_i} \pi_{\mathbb{R}^{1+k}\times \{0\}}(\nabla \zeta) \\
&= \sum_{j=1}^3 \int_{u_i(j)(U(j))} \pi_{\mathbb{R}^{1+k}\times\{0\}}(\nabla \zeta) + S \\
&= \int_U \sum_q (\sum_{j=1}^3 D_q u_i(j))D_q \zeta + R_2 + S, \label{eqn:compat-2}
\end{align}
where $|R_1| + |R_2| \leq c(n, \zeta) E_i$ and $|S| \leq c(n, \zeta) \sqrt{\tau E_i}$.
Multiply \eqref{eqn:compat-2} by $\beta_i^{-1}$, and by hypothesis $\beta_i^{-1} u_i(j) \to v(j)$ in $C^1$ on $U$, where $v$ is the generated Jacobi field. Therefore, we obtain
\begin{gather}
0 = \int_U \sum_q \sum_{j=1}^3 D_q v(j) D_q \zeta + S
\end{gather}
for all $U$, and $|S| \leq c(n,\zeta)\sqrt{\tau}$. Now take $\tau \to 0$ to deduce
\begin{gather}\label{eqn:compat-3}
0 = \int_{H} \sum_q \sum_{j=1}^3 D_q v(j) D_q \zeta,
\end{gather}
where we identify all the $H(j) \equiv H$ together via rotation, and $q$ sums over coordinates $(x^1, y^1, \ldots, y^m)$.
Let us write $\tilde v$ for the even extension of $\sum_{j=1}^3 v(j)$ to $Q \equiv Q(1) \equiv \mathbb{R} \times \{0\}^k \times \mathbb{R}^m$. The above condition \eqref{eqn:compat-3} implies that
\begin{gather}\label{eqn:compat-4}
\int_Q \tilde v \Delta \zeta = 0
\end{gather}
for every $\zeta(r, y)$ with $\zeta(r, y) = \zeta(-r, y)$, and supported in $B_{1/10}(X)$ for some $X$. But \eqref{eqn:compat-4} holds trivially for $\zeta$ which are odd in $r$ (since $\tilde v$ is even), and therefore $\tilde v$ is weakly harmonic. So in fact $\tilde v$ is smooth, and evenness forces $\partial_r \tilde v = 0$ on $\{0\} \times \mathbb{R}^m$, which is precisely the required $C^1$ condition.
\end{proof}
\subsection{Killing the linear part} We demonstrate that when $\mathbf{C}^0_0$ is integrable (as per Definition \ref{def:integrable}), we can adjust the blow-up sequence to obtain a field that has no linear component. Recall the notation that if $v$ is a compatible Jacobi field on $\mathbf{C}^0$, then $v_\rho := v - \psi_\rho$, where $\psi_\rho$ is the $L^2(\mathbf{C}^0 \cap B_\rho)$-projection to $\mathcal{L}$.
\begin{prop}\label{prop:kill-linear}
Let $(M_i, \mathbf{C}^0, \varepsilon_i, \beta_i)$ be a blow-up sequence w.r.t $\mathbf{C}^0$, generating Jacobi field $v : \mathbf{C}^0 \cap B_{1/2} \to {\mathbf{C}^0}^\perp$. Suppose $\mathbf{C}^0_0$ is integrable, and fix $\theta \in (0, 1/4]$. Write $\Gamma = \limsup_i \beta_i^{-2} E_{\varepsilon_i}(M_i, \mathbf{C}^0, 1)$.
Then there is a constant $\gamma(\theta, \mathbf{C}^0, \Gamma)$ so that the following holds: given any $\rho \in [\theta, 1/4]$, we can find a sequence of rotations $q_i \in SO(n+k)$, satisfying $|q_i - Id| \leq \gamma \beta_i$, so that $(M_i, q_i(\mathbf{C}), \varepsilon_i + \gamma \beta_i, \beta_i)$ is a blow-up sequence w.r.t $\mathbf{C}^0$, generating the Jacobi field $v_\rho$. In particular, we have the estimates:
A) Strong $L^2$ convergence:
\begin{gather}
\int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2 = \lim_i \beta^{-2}_i \int_{M_i \cap B_\rho} d_{q_i(\mathbf{C})}^2 ;
\end{gather}
B) Non-concentration:
\begin{gather}
\rho^{2+2-1/2} \int_{\mathbf{C}^0 \cap B_{\rho/2}} \frac{|v_\rho - \kappa_{\rho, \psi}^\perp|^2}{r^{2+2-1/2}} \leq c(\mathbf{C}^0) \rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2,
\end{gather}
where $\kappa_{\rho,\psi} : (0,\rho]\times B^m_\rho \to \mathbb{R}^{\ell+k}\times \{0\}$ is a chunky function satisfying $|\kappa_{\rho,\psi}|^2 \leq c(\mathbf{C}^0) \rho^{-n} \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2$;
C) Growth estimates:
\begin{gather}\label{eqn:growth-no-linear}
\int_{\mathbf{C}^0 \cap B_{\rho/10}} R^{2-n} |\partial_R (v/R)|^2 \leq c(\mathbf{C}^0) \int_{\mathbf{C}^0 \cap B_\rho} |v_\rho|^2.
\end{gather}
\end{prop}
\begin{remark}
Of course $\partial_R (\psi_\rho/R) \equiv 0$, so \eqref{eqn:growth-no-linear} holds for both $v$ and $v_\rho$.
\end{remark}
\begin{remark}\label{rem:blow-up-rot}
Due to our particular notion of integrability (by rotations), we can always assume our initial blow-up sequence has $\mathbf{C}_i \equiv \mathbf{C}^0$ fixed, and thereby reduce to the hypothesis of Proposition \ref{prop:kill-linear}. Proposition \ref{prop:kill-linear} holds also for general blow-up sequences (and the ``actual'' notion of integrability), using the fact that integrability is essentially an open condition on cones, but we will not need this. See \cite{simon1} pages 601-602.
\end{remark}
\begin{proof}
Fix a $\rho \in [\theta, 1/4]$. Using Proposition \ref{prop:blow-up} part A) we have
\begin{gather}
\rho^{-n-2} \int_{\mathbf{C}^0 \cap B_\rho} |\psi_\rho|^2 \leq \Gamma^2 \theta^{-n-2} ,
\end{gather}
and therefore, since $\psi_\rho$ is linear, we obtain
\begin{gather}
\sup_{\mathbf{C}^0 \cap B_1} |\psi_\rho| \leq c(\mathbf{C}^0) \Gamma^2 \theta^{-n-2}.
\end{gather}
By integrability of $\mathbf{C}^0_0$, the definition of $\mathcal{L}$, and Theorem \ref{thm:1-homo-linear}, there is a skew-symmetric matrix $A_\rho : \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$ so that $\psi_\rho = \pi_{{\mathbf{C}^0}^\perp} \circ A_\rho$, and $|A_\rho| \leq c(\mathbf{C}^0, \Gamma, \theta)$. We can therefore find a sequence of rotations $q_i \in SO(n+k)$, with $|q_i - Id| \leq c(\mathbf{C}^0, \Gamma, \theta) \beta_i$, so that if we write
\begin{gather}
q_i(\mathbf{C}) = \mathrm{graph}_{\mathbf{C}^0}(\phi_i, g_i, U_i),
\end{gather}
then each $\phi_i(j) : P(j)\times \mathbb{R}^m \to P(j)^\perp$ is a linear function satisfying
\begin{gather}
\phi_i(j) = \beta_i \psi_\rho(j) + o(\beta_i).
\end{gather}
Now we have, for $i >> 1$,
\begin{gather}
\int_{M_i \cap B_1} d_{q_i(\mathbf{C})}^2 \leq \int_{M_i \cap B_1} d_{\mathbf{C}}^2 + c(\mathbf{C}, \Gamma, \theta) \beta_i^2,
\end{gather}
and since increasing $\varepsilon_i$ does not change the property of being a blow-up sequence, we see that $(M_i, q_i(\mathbf{C}), \varepsilon_i + \gamma \beta_i, \beta_i)$ is also a blow-up sequence.
We demonstrate that this blows up to $v_\rho$ as required. As in Section \ref{sec:blow-up}, let us write
\begin{gather}
M_i = \mathrm{graph}_{q_i(\mathbf{C})}(u_i, f_i, \Omega_i), \quad M_i = \mathrm{graph}_{\mathbf{C}}(u^*_i, f^*_i, \Omega^*_i) ,
\end{gather}
and define domains $\tilde \Omega_i(j) \subset W(j)$ by the condition that
\begin{gather}
\Omega_i(j) = \{ x' + \phi_i(x') : x' \in \tilde\Omega_i(j) \}.
\end{gather}
Now by elementary geometry we have that for every $x' \in \tilde\Omega_i(j) \cap \Omega^*_i(j)$, we have
\begin{gather}
u_i(x' + \phi_i(x')) = u^*_i(x') - \phi_i(x') + O( (|u_i| + |Du_i|) |D\phi_i|) = u^*_i(x') - \beta_i \psi_\rho + o(\beta_i).
\end{gather}
Since both $\tilde\Omega_i(j)$ and $\Omega^*_i(j)$ converge to the wedge $W(j)$ as $i \to \infty$, and since $\beta_i^{-1} u^*_i \to v$ by assumption, the blow-up of $u_i$ as per Section \ref{sec:blow-up} will yield the field $v_\rho = v - \psi_\rho$.
\end{proof}
\subsection{Non-linear decay: Proof of Theorem \ref{thm:main-decay}} Propositions \ref{prop:blow-up} and \ref{prop:kill-linear} allow us to use the linear decay of Jacobi fields as in Section \ref{sec:jacobi} to prove non-linear decay of $M$.
\begin{proof}[Proof of Theorem \ref{thm:main-decay}]
Fix $\theta \in (0, 1/4]$. We first take $c_0(\mathbf{C}^0) \equiv c(\mathbf{C}^0)$ and $\gamma(\mathbf{C}^0, \theta) \equiv \gamma(\mathbf{C}^0, \theta, \Gamma = 1)$ to be the constants from Proposition \ref{prop:kill-linear}. Now take $\mu(\mathbf{C}^0) \equiv \mu(\mathbf{C}^0, \beta = c_0, \alpha = 1/2)$ the constant from Theorem \ref{thm:linear-decay}. We proceed by contradiction:
Suppose we had a sequence $M_i \in \mathcal{N}_{\varepsilon_i}(\mathbf{C}^0)$ satisfying $E_{\varepsilon_i}(M_i, \mathbf{C}, 0, 1) \leq \varepsilon_i^2$ and the $\varepsilon_i/10$-no-holes condition, with $\varepsilon_i \to 0$, but admitting for some $c_i \to \infty$ the bound
\begin{gather}
E_{\varepsilon_i}(M_i, q(\mathbf{C}^0), 0, \theta) \geq c_i \theta^\mu E_{\varepsilon_i}(M_i, \mathbf{C}^0, 0, 1)
\end{gather}
for every $q \in SO(n+k)$ satisfying $|q - Id| \leq \gamma E_{\varepsilon_i}(M_i, \mathbf{C}^0, 0, 1)^{1/2}$.
Let us set $\beta_i^2 = E_{\varepsilon_i}(M_i, \mathbf{C}, 0, 1)$, and thereby obtain a blow-up sequence $(M_i, \mathbf{C}^0, \varepsilon_i, \beta_i)$, generating some Jacobi field $v : \mathbf{C}^0 \cap B_{1/2} \to {\mathbf{C}^0}^\perp$. By Proposition \ref{prop:kill-linear} and integrability of $\mathbf{C}^0_0$, $v$ satisfies the hypotheses of Theorem \ref{thm:linear-decay} at scale $B_{1/2}$, with $\beta = c_0(\mathbf{C}^0)$, and $\alpha = 1/2$. Therefore we have the decay estimate
\begin{gather}
\theta^{-n-2} \int_{\mathbf{C}^0 \cap B_\theta} |v_\theta|^2 \leq c(\mathbf{C}^0)\theta^\mu \int_{\mathbf{C}^0 \cap B_{1/2}} |v_{1/2}|^2 \leq c(\mathbf{C}^0) \theta^\mu ,
\end{gather}
and a sequence of $q_i \in SO(n+k)$, with $|q_i - Id| \leq \gamma \beta_i$, so using the strong $L^2$-convergence of Proposition \ref{prop:kill-linear} A), we have for $i >> 1$
\begin{align}
E_{\varepsilon_i}(M_i, q_i(\mathbf{C}^0), 0, \theta)
&\equiv \theta^{-n-2} \int_{M_i \cap B_\theta} d_{q_i(\mathbf{C}^0)}^2 + \varepsilon_i^{-1} \theta ||H_{M_i}||_{L^\infty(B_\theta)} \\
&\leq \left( 2\theta^{-n-2} \int_{\mathbf{C}^0 \cap B_\theta} |v_\theta|^2 \right) E_{\varepsilon_i}(M_i, \mathbf{C}, 0, 1) + \varepsilon_i^{-1} \theta^\mu ||H_{M_i}||_{L^\infty(B_1)} \\
&\leq 4c(\mathbf{C}^0) \theta^\mu E_{\varepsilon_i}(M_i, \mathbf{C}^0, 0, 1) .
\end{align}
For large $i$ this is a contradiction.
\end{proof}
\section{Equiangular nets in $\mathbb{S}^2$}\label{sec:nets}
We demonstrate that certain polyhedral cones are integrable, in the sense of Definition \ref{def:integrable}. First, we demonstrate that $\mathbf{Y}\times \mathbb{R}$ and $\mathbf{T}$ (under certain circumstances) admit no hole conditions.
\subsection{No-holes for $\mathbf{Y}$ and $\mathbf{T}$}
The $\mathbf{Y}^1 \times \mathbb{R}^m$ cone is very special, in that closeness to this cone always guarantees the existence of good density points. No extra assumptions on the class or structure of the varifold are necessary.
\begin{prop}\label{prop:no-holes-Y}
There is an $\varepsilon(m, k, \delta)$ so that if $M^{1+m} \subset \mathbb{R}^{1+k+m}$ satisfies $M \in \mathcal{N}_{\varepsilon}(\mathbf{Y}\times \mathbb{R}^m)$, then $M$ satisfies the $\delta$-no-holes condition in $B_{1/2}$ w.r.t. $\mathbf{Y} \times \mathbb{R}^m$.
\end{prop}
\begin{proof}
By Lemma \ref{lem:global-graph}, provided $\varepsilon$ is sufficiently small $M \cap B_{3/4}\setminus B_{\delta}({ \{0\} \times \R^m })$ is a $C^{1,\alpha}$-perturbation of $\mathbf{Y}\times \mathbb{R}^m$. We claim that
\begin{gather}
\mathrm{sing} M \cap (\mathbb{R}^{1+k}\times \{y\}) \cap B_{1/2} \neq \emptyset \quad \forall y \in B_{1/2}^m .
\end{gather}
Otherwise, since $\mathrm{sing} M$ is relatively closed, by Sard's theorem we could choose a $y^*$ arbitrarily near $y$ so that $M \cap (\mathbb{R}^{1+k}\times \{y^*\}) \cap B_{1/2}$ would consist of a smooth compact $1$-manifold with exactly three boundary points, which is impossible since a compact $1$-manifold has an even number of boundary points.
Therefore, using Almgren's stratification we have for $\mathcal{H}^m$-a.e. $y \in B_{1/2}^m$ a singular point $X_y \in \mathrm{sing} M \cap (B_\delta(0)^{1+k}\times \{y\})$ which is $m$-symmetric. So there is a tangent cone at $X_y$ which is either a multiplicity $\geq 2$ plane, or a union of $\geq 3$ half-planes, either of which has density $\geq \theta_{\mathbf{Y}}(0)$.
\end{proof}
Unfortunately, the tetrahedral cones $\mathbf{T}^2 \times \mathbb{R}^m$ do not admit so nice a property, without imposing further restrictions: we can find piecewise-smooth varifolds of bounded mean curvature which look very close to $\mathbf{T}$ at scale $B_1$, but which only have singularities of type $\mathbf{Y}\times \mathbb{R}$. To rule this out one can enforce a boundary/orientability structure.
\begin{lemma}\label{lem:T-density}
Let $\mathbf{C} = \mathbf{C}_0^2 \times \mathbb{R}^m \subset \mathbb{R}^{3+m}$, where $\mathbf{C}_0$ is $2$-dimensional, stationary and singular. If (up to rotation) $\mathbf{C}_0$ is not a multiplicity $1$ plane or the $\mathbf{Y} \times \mathbb{R}$, then we have
\begin{gather}
\theta_{\mathbf{C}}(0) \geq \theta_{\mathbf{T}}(0).
\end{gather}
\end{lemma}
\begin{proof}
If $\mathbf{C}_0$ is planar, then it must be a plane with multiplicity $\geq 2 > \theta_{\mathbf{T}}(0)$. If $\mathbf{C}_0$ has $1$ degree of symmetry, then since it is neither regular nor the $\mathbf{Y}$, $\mathbf{C}_0$ must consist of $\geq 4$ half-planes meeting along an edge, which also has multiplicity $\geq 2$.
Suppose $\mathbf{C}_0$ has no symmetries. Consider the geodesic net $\Gamma := \mathbf{C}_0 \cap \partial B_1 \subset \mathbb{S}^2$. If any geodesic has multiplicity $\geq 2$, or any junction has $\geq 4$ vertices, then $\theta_{\mathbf{C}}(0) \geq 2$ and we are done. Let us suppose therefore that $\Gamma$ consists only of multiplicity-$1$ geodesics, which meet at $120^\circ$.
These nets are classified, and listed in the following subsection. One can readily verify that the net with least length, aside from the circle and $\mathbf{Y}$, is the tetrahedral net.
\end{proof}
\begin{lemma}\label{lem:T-cross-section-sing}
Let $M^2$ be a set in $\mathbb{R}^3$ which coincides with $\mathbf{T}^2$ in $B_1 \setminus B_\delta$. Suppose $\mathcal{H}^2 \llcorner M$ is an integral varifold with an associated cycle structure in $B_1$. Then there is a point $x \in M \cap B_1$, so that $M$ near $x$ is \emph{not} a $C^1$ perturbation of $\mathbb{R}^2$ or $\mathbf{Y} \times \mathbb{R}$.
\end{lemma}
\begin{proof}
By assumption $M$ divides the annulus $B_{1} \setminus B_\delta$ into four regions $A_1, A_2, A_3, A_4$. Any two $A_i$, $A_j$ share a boundary wedge $W \subset \mathbf{T}$.
Suppose, towards a contradiction, that around every point $M$ is locally a $C^1$ perturbation of $\mathbb{R}^2$ or $\mathbf{Y}\times \mathbb{R}$. Then $M \cap B_1$ consists of a finite collection of $C^1$ embedded surfaces $M_i$ meeting at $120^\circ$ along $C^1$ embedded curves $\gamma_i$. Since $M$ coincides with $\mathbf{T}$ outside $B_\delta$, we see that up to renumbering the curves $\gamma_1$, $\gamma_2$ start and end at vertices of $\mathbf{T} \cap \mathbb{S}^2$, while curves $\gamma_3, \gamma_4, \ldots$ must be closed. See Figure \ref{fig:no-holes} for an idealized picture.
\begin{figure}
\caption{A surface with only $\mathbf{Y}$-type singularities.}
\label{fig:no-holes}
\end{figure}
We can assume $\gamma_1$ starts at the vertex adjoined by regions $A_1, A_2, A_3$, while $\gamma_1$ ends at the vertex adjoined by regions $A_1, A_2, A_4$. A small tubular neighborhood of $\gamma_1$ is diffeomorphic to $\mathbf{Y}\times \mathbb{R}$, and therefore if we push $\gamma_1$ away from any bounding surface in the conormal direction, the resulting curve $\hat \gamma_1$ induces a path connecting $A_i$ ($i = 1, 2, 3$) to some $A_j$ ($j = 1, 2, 4$). After relabeling as necessary, we can thicken $\hat \gamma_1$ and adjoin the two regions it connects, to obtain a connected open set $A$, disjoint from $M$, with $A \supset A_3 \cup A_4$.
Since each associated current is codimension $1$ and without boundary, we can assume WLOG that $\mathcal{H}^2 \llcorner M$ is associated to a countable union of \emph{boundaries} $\partial [U_i]$, where $U_i$ are open sets, and we take the boundaries as $3$-currents. From the above we have $A \subset U_i$ or $A \cap U_i = \emptyset$ for every $i$. But now if $W$ is the boundary wedge shared by $A_3, A_4$, then the previous sentence implies
\begin{gather}
W \cap (B_{1/2} \setminus B_{2\delta}) \cap \mathrm{spt} \partial [U_i] = \emptyset \quad \forall i.
\end{gather}
And so $W$ \emph{cannot} be part of $M$. This is a contradiction.
\end{proof}
\begin{remark}\label{rem:general-no-holes}
If one could show either curve $\gamma_1$ or $\gamma_2$ is unknotted (as in Figure \ref{fig:no-holes}), then one could construct a Lipschitz deformation of $M$ onto two faces of the solid tetrahedron (plus one edge). This would prove Lemma \ref{lem:T-cross-section-sing} for general $(\mathbf{M}, \varepsilon, \partiallta)$-sets (at least for $\varepsilon$ sufficiently small) without any extra orientation or codimension requirements. Unfortunately, we have very little idea whether Lemma \ref{lem:T-density} holds in general codimension.
\end{remark}
\begin{prop}\label{prop:no-holes}
There is an $\varepsilon(m, \delta)$ so that the following holds. Let $M^{n = 2+m} \subset \mathbb{R}^{3+m}$ be an integral varifold with associated cycle structure in $B_1$, and suppose $M \in \mathcal{N}_{\varepsilon}(\mathbf{T}^2 \times \mathbb{R}^m)$. Then $M$ satisfies the $\delta$-no-holes condition in $B_{1/2}$.
\end{prop}
\begin{proof}
By Lemma \ref{lem:poly-global-graph}, $M \cap B_{3/4} \setminus B_\delta({ \{0\} \times \R^m })$ is a $C^{1,\alpha}$-perturbation of $\mathbf{T}\times \mathbb{R}^m$, for $\varepsilon(m, \delta)$ sufficiently small. So
\begin{gather}\label{eqn:no-holes-T-1}
\mathrm{sing} M \cap B_{3/4} \subset B_\delta({ \{0\} \times \R^m }),
\end{gather}
and there is no loss in assuming $M \cap B_{3/4} \setminus B_\delta({ \{0\} \times \R^m })$ coincides with $\mathbf{T} \times \mathbb{R}^m$.
We claim that, for every $y \in B_{1/2}^m$, there is some singular point
\begin{gather}\label{eqn:no-holes-T-2}
X_y \in \mathrm{sing} M \cap (\mathbb{R}^{3}\times \{y\}) \cap B_{1/2}
\end{gather}
which is not a (multiplicity-1) $\mathbf{Y}\times \mathbb{R}^{1+m}$. We prove this by contradiction.
First, observe that by Simon's regularity Theorem \ref{thm:Y-reg}, the set of singular points which are not a multiplicity-1 $\mathbf{Y}\times \mathbb{R}^{1+m}$ is relatively closed in $\mathrm{sing} M$, and hence closed. Therefore, if the claim failed, it would fail for $y$ in some open set $U$. Using Allard's and Simon's regularity we obtain that $M \cap (\mathbb{R}^{3}\times U)$ consists of embedded, multiplicity-one $C^1$ $n$-surfaces, meeting at $120^\circ$ along embedded $C^1$ $(n-1)$-surfaces.
Therefore by Sard's theorem, for a.e. $y \in U$, the slice $M \cap (\mathbb{R}^3 \times \{y\})$ consists of embedded $C^1$ surfaces meeting at $120^\circ$ along embedded $C^1$ curves, and coincides with $\mathbf{T}^2$ in an annulus. However, by slicing we also have that for a.e. $y \in U$, $\mathcal{H}^2 \llcorner M \cap (\mathbb{R}^3 \times \{y\})$ has an associated cycle structure in $B_{1/4}(0, y)$, contradicting Lemma \ref{lem:T-cross-section-sing}. This proves the claim.
The Proposition is completed by combining \eqref{eqn:no-holes-T-1} and the above claim with Lemma \ref{lem:T-density}.
\end{proof}
\subsection{Integrability}\label{sec:int}
We establish integrability of those polyhedral cones which arise from an equiangular geodesic net in $\mathbb{S}^2$. As discussed in Remark \ref{rem:maybe-non-int}, it seems possible to us that in higher-codimension there exist non-integrable polyhedral cones (for either definition of integrability). Indeed, even in the codimension-$1$ case we are unable to give a general abstract proof, but instead we make use of the classification of equiangular geodesics nets in $\mathbb{S}^2$ due to \cite{lamarle}, \cite{heppes} and proceed on a case-by-case basis.
\begin{theorem}
Suppose $\mathbf{C}^2 \subset \mathbb{R}^3 \subset \mathbb{R}^{2+k}$ is a polyhedral cone. Then $\mathbf{C}$ is integrable in $\mathbb{R}^{2+k}$. In particular the tetrahedron $\mathbf{T}^2 \subset \mathbb{R}^{2+k}$ is integrable.
\end{theorem}
\begin{proof}
Fix a polyhedral cone $\mathbf{C}^2 \subset \mathbb{R}^3 \subset \mathbb{R}^{2+k}$, composed of wedges $\cup_{i=1}^d W(i)$. Write $\Gamma = \mathbf{C} \cap \mathbb{S}^{1+k}$ for the corresponding equiangular geodesic net, and $\ell(i) \equiv W(i) \cap \mathbb{S}^{1+k}$ for the geodesic segments. After relabeling as necessary we can assume $\ell(1), \ell(2), \ell(3)$ share a common vertex.
Let $v : \mathbf{C} \to \mathbf{C}^\perp$ be a linear, compatible Jacobi field. We wish to show that $v = \pi_{\mathbf{C}^\perp} \circ A$ for some skew-symmetric matrix $A : \mathbb{R}^{2+k} \to \mathbb{R}^{2+k}$. From Proposition \ref{prop:baby-linear} we know this holds locally, in the sense that there is a skew-symmetric $A_0$, so that
\begin{gather}
v(i) = \pi_{\mathbf{C}^\perp} \circ A_0 \quad i = 1, 2, 3.
\end{gather}
Therefore, by considering the field $v - \pi_{\mathbf{C}^\perp}\circ A_0$, we can and shall reduce to the case when $v(1) = v(2) = v(3) = 0$.
In fact we shall prove that any linear, compatible Jacobi field $v$ satisfying $v(1) = v(2) = v(3) = 0$ must be identically zero. It is reasonable to expect this to be true, as the $v(i)$s with their compatibility conditions effectively form a system of linear equations, and one can easily verify that the total number of variables equals the total number of conditions (equals $2kd$). However, an abstract counting argument seems insufficient to establish $v \equiv 0$, as the linear independence of this system depends strongly on both the global topology and geometry of the underlying net. Thankfully, the possible nets $\Gamma$ are very well understood, and we can prove our assertion on a case-by-case basis.
Let us first assume $k = 1$. For each $i$, fix a unit speed parametrization of $\ell(i)$, and write $\hat \ell(i)$ for the induced unit tangent vector. We take $\hat\ell(i) \wedge \hat x$ to be the choice of unit normal to $W(i)$ (and hence an orientation on $W(i)^\perp$), where $\hat x$ is the unit position vector.
Define scalar functions $f(i) : \ell(i) \cong [0, \mathrm{length}(\ell(i))] \to \mathbb{R}$ by setting
\begin{gather}
f(i)(\theta) = v(i)(\theta) \cdot (\hat \ell(i) \wedge \hat x).
\end{gather}
Then each $f(i)$ completely determines $v(i)$, and takes the form
\begin{gather}\label{eqn:form-of-f}
f(i)(\theta) = a(i) \sin(\theta) + b(i) \cos(\theta), \quad \theta \in \ell(i) \cong [0, \mathrm{length}(\ell(i))],
\end{gather}
for real constants $a(i), b(i)$.
We shall prove that every $f(i)$ must be identically $0$. Recall that by hypothesis we have
\begin{gather}\label{eqn:scalar-initial-conds}
f(1) = f(2) = f(3) = 0,
\end{gather}
while using Lemma \ref{lem:vect-vs-scalar-cond}, the $C^0$- and $C^1$-compatibility conditions on $v$ imply that
\begin{gather}\label{eqn:scalar-compat-conds}
\sum_{j=1}^3 (n(i_j) \cdot \hat \ell(i_j)) f(i_j)(p) = 0, \quad \text{ and } \quad f'(i_1)(p) = f'(i_2)(p) = f'(i_3)(p),
\end{gather}
whenever $\ell(i_1), \ell(i_2), \ell(i_3)$ share a common vertex $p$. Here $n(i)$ is the outer conormal of $\ell(i)$, and $f'(i) \equiv \partial_{\hat \ell(i)} f(i)$ is the derivative in the direction $\hat\ell(i)$.
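To make the linear-algebraic structure of these conditions explicit, note that at a single vertex they are linear constraints on the coefficients in \eqref{eqn:form-of-f}: if, for the purposes of this remark, each $\ell(i_j)$ is given a unit-speed parametrization starting at $p$ (so that $p$ corresponds to $\theta = 0$), then \eqref{eqn:scalar-compat-conds} reads
\begin{gather}
\sum_{j=1}^3 (n(i_j) \cdot \hat \ell(i_j))\, b(i_j) = 0, \quad \text{ and } \quad a(i_1) = a(i_2) = a(i_3).
\end{gather}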
From the work of \cite{lamarle}, \cite{heppes}, and since $\mathbf{C} \subset \mathbb{R}^3$ cannot have additional symmetries, up to rotation $\Gamma$ can be only one of $8$ possible nets. We prove integrability case-by-case by establishing that the corresponding system of $f(i)$s satisfying \eqref{eqn:scalar-initial-conds}, \eqref{eqn:scalar-compat-conds} must vanish. In each case we give a topological diagram indicating numbering, orientation, and length (a single arrow indicates length $\theta_1$, a double arrow indicates $\theta_2$, etc.). We will additionally use the following notation: if $p$ is the vertex joining edges $\ell(1), \ell(2), \ell(3)$ (for example), then we refer to $p$ by the triple $(1,2,3)$.
The possible nets (presented in the same order as in \cite{taylor}), with their corresponding proofs of integrability, are as follows. Each edge length is given to $3$ decimal places.
\begin{enumerate}
\item \textbf{Regular tetrahedron}, having $6$ edges, each of length $\theta_1 = 109.471^\circ$.
We can apply the $C^1$ condition \eqref{eqn:scalar-compat-conds} at each end of $\ell(4)$ to obtain $f(4)'(0) = f(4)'(\theta_1) = 0$. We deduce $f(4) \equiv 0$, and by symmetry we have $f(i) \equiv 0$ for all $i$.
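In more detail, using the form \eqref{eqn:form-of-f} of $f(4)$ we have
\begin{gather}
f(4)'(\theta) = a(4)\cos\theta - b(4)\sin\theta, \qquad f(4)'(0) = a(4) = 0, \qquad f(4)'(\theta_1) = -b(4)\sin\theta_1 = 0,
\end{gather}
and since $\sin\theta_1 \neq 0$ this forces $a(4) = b(4) = 0$.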
\begin{figure}
\caption{Regular tetrahedron}
\end{figure}
\item \textbf{Regular cube}, having $12$ edges of length $\theta_1 = 70.529^\circ$.
Applying the $C^0$ and $C^1$ conditions \eqref{eqn:scalar-compat-conds}, and using that all edges have the same length, gives directly the relations
\begin{align}
&f(6) = -A \cos(\theta), \quad f(7) = A \cos(\theta), \quad f(9) = A \cos(\theta), \quad f(10) = -A \cos(\theta), \\
&\quad f(5) = -A\cos(\theta), \quad f(4) = A \cos(\theta),
\end{align}
where $A$ is the same constant. But then, applying \eqref{eqn:scalar-compat-conds} at vertex $(4, 10, 12)$ gives the relation $A \cos(\theta_1) = -A \cos(\theta_1)$, which (since $\cos\theta_1 \neq 0$) can only hold if $A = 0$. By symmetry we deduce that every $f(i) \equiv 0$.
\begin{figure}
\caption{Regular cube}
\end{figure}
\item \textbf{Prism over regular pentagon}, forming $15$ edges: ``with the pentagonal arcs having length $\theta_1 = 41.810^\circ$ and the other arcs being of length $\theta_2 = 105.245^\circ$.''
By the same reasoning as in the cube, taking into account the different lengths $\theta_1$, $\theta_2$, we have
\begin{align}
&f(6) = A \cos(\theta), \quad f(5) = -A \cos(\theta), \quad f(14) = A \cos(\theta), \quad f(11) = -A\cos(\theta),
\end{align}
for some constant $A$. We can therefore apply the $C^1$ condition at each end of $\ell(9)$, to see that
\begin{gather}
f(9) = -A \sin(\theta_1) \sin(\theta) - A(\cos(\theta_1) + 1)\cos(\theta).
\end{gather}
Apply both conditions at vertex $(7,5,4)$ to obtain
\begin{gather}
f(7) = A \sin(\theta_2)\sin(\theta) - A(\cos(\theta_2) + \sin(\theta_2) \frac{\cos(\theta_1)}{\sin(\theta_1)} )\cos(\theta).
\end{gather}
These, together with $f(6)$, give three conditions on $f(8)$, and we obtain the relation
\begin{gather}
A(4 \sin \theta_2 \cos\theta_1 + 2\sin\theta_1\cos\theta_2 - \sin\theta_2) = 0.
\end{gather}
The term in the brackets is $-3.5$, to one decimal place. We deduce that $A = 0$, and it's straightforward to verify that $f(i) \equiv 0$ for every $i$.
\begin{figure}
\caption{Prism over regular pentagon}
\end{figure}
\item \textbf{Prism over a regular triangle}, forming $9$ edges: ``the triangular arcs being of length $109.471^\circ$ and the other arcs of length $38.942^\circ$.''
By the same reasoning as for the tetrahedron, we can apply the $C^1$ condition on each side of $\ell(7)$ to see $f(7) = 0$. Apply both the $C^0$- and $C^1$-condition at vertex $(2,6,7)$ to obtain $f(6)(0) = f(6)'(0) = 0$, and hence $f(6) = 0$. Similarly, we have $f(8) = 0$. We then deduce directly that $f(5) = f(9) = f(4) = 0$.
\begin{figure}
\caption{Prism over regular triangle}
\end{figure}
\item \textbf{Regular dodecahedron}, having $30$ edges, each of length $\theta_1 = 41.810^\circ$.
\begin{figure}
\caption{Regular dodecahedron}
\end{figure}
We have immediately the equations
\begin{align}
&f(5) = A \cos\theta, \quad f(4) = -A\cos\theta, \quad f(10) = B\cos\theta, \quad f(9) = -B\cos\theta,\\
&\quad f(8) = G \cos\theta, \quad f(7) = -G\cos\theta,
\end{align}
for some constants $A, B, G$. By symmetry it will suffice to show that $A = B = G = 0$. We obtain, using the above and the compatibility conditions,
\begin{align}
&f(11) = -B\sin\theta_1 \sin\theta - (B\cos\theta_1 + A)\cos\theta, \\
&f(17) = B\sin\theta_1 \sin\theta + (G + B\cos\theta_1)\cos\theta ,\\
&f(6) = -A\sin\theta_1 \sin\theta - (A\cos\theta_1 + G)\cos\theta , \\
& f(13) = -A\sin\theta_1 \sin\theta + (G + 2A\cos\theta_1)\cos\theta , \\
& f(12) = A\sin\theta_1 \sin\theta - (2A\cos\theta_1 + B)\cos\theta, \\
& f(24) = B\sin\theta_1 \sin\theta - (2B\cos\theta_1 + G)\cos\theta, \\
& f(19) = -B\sin\theta_1\sin\theta + (A+2B\cos\theta_1)\cos\theta, \\
& f(25) = -(G + 3A\cos\theta_1)\sin\theta_1 \sin\theta - (G\cos\theta_1 + 3A\cos^2\theta_1 + 3A\cos\theta_1 + B)\cos\theta,
\end{align}
and
\begin{align}
f(21) = (3A\sin\theta_1\cos\theta_1 + B\sin\theta_1)\sin\theta + (-G-2B\cos\theta_1 - 6A\cos^2\theta_1 - 3A\cos\theta_1 + A) \cos\theta.
\end{align}
And
\begin{align}
f(20) &= \left[ A(9\cos^2\theta_1 + 3\cos\theta_1\sin\theta_1 - \sin\theta_1) + 3B\sin\theta_1\cos\theta_1 + G\sin\theta_1 \right] \sin\theta \\
&\quad + \left[ A(9\cos^3\theta_1 + 3\cos^2\theta_1 - \cos\theta_1 + 1) + B(3\cos^2\theta_1 + 3\cos\theta_1) + G\cos\theta_1 \right] \cos\theta .
\end{align}
From $f(19)$ and $f(24)$, we obtain
\begin{gather}
f(18) = -(3B\sin\theta_1 \cos\theta_1 + A\sin\theta_1)\sin\theta - (A\cos\theta_1 + 3B\cos^2\theta_1 + 3B\cos\theta_1 + G)\cos\theta.
\end{gather}
But additionally using $f(20)$, we obtain the relation:
\begin{align}
2G = A(-9\cos^2\theta_1 - 6\cos\theta_1 + 1) + B(-9\cos^2\theta_1 - 6\cos\theta_1 + 1).
\end{align}
We work upwards. We have
\begin{align}
&f(14) = G\sin\theta_1\sin\theta - (2G\cos\theta_1 + A)\cos\theta, \\
&f(16) = -G\sin\theta_1 \sin\theta + (B + 2G\cos\theta_1)\cos\theta, \\
&f(15) = (3G\cos\theta_1\sin\theta_1 + A\sin\theta_1)\sin\theta + (3G\cos^2\theta_1 + 3G\cos\theta_1 + A\cos\theta_1 + B)\cos\theta
\end{align}
And
\begin{align}
&f(26) = -(3A\sin\theta_1\cos\theta_1 + G\sin\theta_1)\sin\theta + (2G\cos\theta_1 + B + 6A\cos^2\theta_1 + 3A\cos\theta_1 - A)\cos\theta
\end{align}
We next calculate $f(27)$. Using $f(14)$ and $f(15)$, we obtain
\begin{gather}
f(27) = (3G\sin\theta_1\cos\theta_1 + A\sin\theta_1) \sin\theta + ( G - 6G\cos^2\theta_1 - 3G\cos\theta_1 - 2A\cos\theta_1 - B)\cos\theta.
\end{gather}
Combining this with $f(26)$ gives the relation:
\begin{gather}
2B = A( -9 \cos^2\theta_1 - 6\cos\theta_1 + 1) + G (-9\cos^2\theta_1 - 6\cos\theta_1 + 1).
\end{gather}
Let us proceed to the left. We have
\begin{align}
f(22) = -(3G\sin\theta_1\cos\theta_1 + B\sin\theta_1)\sin\theta + (6G\cos^2\theta_1+3G\cos\theta_1 - G + A + 2B\cos\theta_1)\cos\theta.
\end{align}
Using $f(22)$ and $f(24)$ we obtain
\begin{align}
&f(23) = \left[ G(-9\cos^2\theta_1\sin\theta_1 - 3\cos\theta_1\sin\theta_1 + \sin\theta_1) - A\sin\theta_1 - 3B\sin\theta_1\cos\theta_1 \right] \sin\theta \\
&\quad + \left[ G(-9\cos^3\theta_1 - 3\cos^2\theta_1 + \cos\theta_1 - 1) - A\cos\theta_1 - B(3\cos^2\theta_1 + 3\cos\theta_1)\right]\cos\theta.
\end{align}
But now additionally using $f(18)$, we obtain the relation
\begin{gather}
2A = G(-9\cos^2\theta_1 - 6\cos\theta_1 + 1) + B(-9\cos^2\theta_1 - 6\cos\theta_1 + 1).
\end{gather}
We thus have the three equations
\begin{gather}\label{eqn:dodec-final}
A = \alpha(B + G), \quad B = \alpha(A + G), \quad G = \alpha(A + B),
\end{gather}
where $\alpha = 1.5$ (to one decimal place). One easily verifies the only solution to \eqref{eqn:dodec-final} is when $A = B = G = 0$, and by symmetry we deduce that $f(i) = 0$ for every $i$.
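For completeness, here is one way to verify this (the argument works for any coefficient $\alpha \notin \{1/2, -1\}$, and in particular for the value above): summing the three equations in \eqref{eqn:dodec-final} gives
\begin{gather}
A + B + G = 2\alpha(A + B + G),
\end{gather}
so $A + B + G = 0$; substituting $B + G = -A$ into the first equation then gives $A = -\alpha A$, hence $A = 0$, and similarly $B = G = 0$.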
\item \textbf{Two regular quadrilaterals and eight equal pentagons}, forming $24$ edges: ``each quadrilateral surrounded by four pentagons, and each pentagon surrounded by four pentagons and one quadrilateral, the quadrilateral arcs being of length $\theta_2 = 70.529^\circ$, the arcs adjacent to no quadrilateral vertex being of length $\theta_3 = 52.448^\circ$, and the remaining edges being of length $\theta_1 = 21.428^\circ$.''
\begin{figure}
\caption{Two regular quadrilaterals and eight equal pentagons}
\end{figure}
We have directly that
\begin{align}
&f(4) = -A\cos\theta, \quad f(5) = A\cos\theta, \\
&\quad f(6) = -B\cos\theta, \quad f(7) = B\cos\theta, \quad f(8) = B\cos\theta, \quad f(9) = -B\cos\theta,
\end{align}
for some constants $A, B$. Using the compatibility conditions at various vertices, we obtain
\begin{align}
&f(10) = A\sin\theta_3\sin\theta + (A\cos\theta_3 - B \frac{\sin\theta_1}{\sin\theta_3}) \cos\theta\\
&f(11) = -A\sin\theta_3\sin\theta - (A\cos\theta_3 + B \frac{\sin\theta_1}{\sin\theta_3})\cos\theta \\
&f(12) = A\sin\theta_3\sin\theta + (-2A\cos\theta_3 + B\frac{\sin\theta_1}{\sin\theta_3})\cos\theta \\
&f(13) = -A\sin\theta_3\sin\theta + (2A\cos\theta_3 + B\frac{\sin\theta_1}{\sin\theta_3})\cos\theta \\
&f(18) = B\sin\theta_1\sin\theta + (-B\cos\theta_1 - B \sin\theta_1 \frac{\cos\theta_3}{\sin\theta_3} + A)\cos\theta \\
&f(19) = -B\sin\theta_2\sin\theta + 2B\cos\theta_2\cos\theta \\
&f(17) = B\sin\theta_1 \sin\theta - (B\cos\theta_1 + B\sin\theta_1 \frac{\cos\theta_3}{\sin\theta_3} + A)\cos\theta
\end{align}
And we have
\begin{align}
&f(22) = \left[ -B\sin\theta_2 \cos\theta_1 - 2B\cos\theta_2\sin\theta_1 \right] \sin\theta \\
&\quad + \left[ B \frac{-\cos\theta_1\sin\theta_2\cos\theta_3 - 2\sin\theta_1\cos\theta_2\cos\theta_3 - 2\sin\theta_1\cos\theta_3}{\sin\theta_3} - B\cos\theta_1 + A \right]\cos\theta.
\end{align}
Using $f(17)$ and $f(19)$, we obtain
\begin{align}
&f(21) = (2B \sin\theta_1\cos\theta_3 + B\cos\theta_1\sin\theta_3 + A\sin\theta_3)\sin\theta \\
&\quad + \left[ B \frac{2\sin\theta_1\cos^2\theta_3 + \cos\theta_1\sin\theta_2 + 2\sin\theta_1\cos\theta_2}{\sin\theta_3} + B \cos\theta_1\cos\theta_3 + A \cos\theta_3 \right] \cos\theta
\end{align}
But then we can use the $C^0$ condition together with $f(22)$ to get the relation
\begin{align}
0 &= B \left[ 2\sin\theta_1\cos\theta_3\sin\theta_3 + 2\cos\theta_1 + 2\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2 \right. \\
&\quad \left. + \frac{\cos\theta_3}{\sin\theta_3} (-2\sin^3\theta_1 + 2\cos\theta_1\sin\theta_1 + 4\sin\theta_1\cos\theta_2 ) \right] .
\end{align}
Notice that the terms involving $A$ cancel. One can readily calculate that the term in the brackets is $3.3$ (to one decimal place), and therefore we must have $B = 0$. We deduce
\begin{gather}
f(6) = f(7) = f(8) = f(9) = f(19) = 0.
\end{gather}
We now calculate
\begin{align}
&f(14) = A(-\sin\theta_3\cos\theta_1 - 2\cos\theta_3\sin\theta_1)\sin\theta \\
&\quad + A (\sin\theta_2)^{-1} \left[ -\cos\theta_1\cos\theta_2\sin\theta_3 - 2\sin\theta_1\cos\theta_2\cos\theta_3 - \cos\theta_1\sin\theta_3 - 2\cos\theta_3\cos\theta_1 \right] \cos\theta .
\end{align}
And
\begin{gather}
f(20) = -A\sin\theta_3\sin\theta + 2A\cos\theta_3\cos\theta.
\end{gather}
Since $B = 0$ we see that $f(20)$ has precisely the same form as $f(13)$, and so by using $f(20)$ and $f(12)$ we see that $f(16)$ correspondingly has the same expression as $f(14)$. Now we can additionally use the $C^0$ condition at vertex $(14, 12, 16)$ to get the condition
\begin{align}
&A \left[ -2\cos\theta_1\sin\theta_3 - 4\sin\theta_1\cos\theta_3 - 2\cos\theta_1\cos\theta_2\sin\theta_3 - 4\sin\theta_1\cos\theta_2 \right. \\
&\quad \left. + \sin\theta_1\sin\theta_2\sin\theta_3 - 2\cos\theta_1\sin\theta_2\cos\theta_3 \right] = 0.
\end{align}
The term in the brackets is $-3.8$ (to one decimal place), and we deduce that $A = 0$ as well. By symmetry we deduce $f(i) = 0$ for all $i$.
\item \textbf{Four equal quadrilaterals and four equal pentagons}, forming $18$ edges: ``each quadrilateral surrounded by three pentagons and one quadrilateral, and each pentagon by three quadrilaterals and two pentagons, and having the arcs held in common by two quadrilaterals (and the quadrilateral arcs opposite to them) being of length $\theta_3 = 83.802^\circ$ and the other quadrilateral arcs of length $\theta_2 = 58.257^\circ$ and all remaining edges of length $\theta_1 = 13.559^\circ$.''
\begin{figure}
\caption{Four equal quadrilaterals and four equal pentagons}
\end{figure}
Let us calculate. We have directly
\begin{align}
&f(4) = A\cos\theta, \quad f(10) = -A\cos\theta, \quad f(9) = -A\frac{\sin\theta_3}{\sin\theta_2}\cos\theta, \quad f(8) = A\frac{\sin\theta_3}{\sin\theta_2}\cos\theta, \\
&\quad f(7) = A\cos\theta, \quad f(6) = -A\cos\theta,
\end{align}
for some constant $A$. We have
\begin{align}
&f(5) = -A\sin\theta_1 \sin\theta - (A\sin\theta_1\frac{\cos\theta_3}{\sin\theta_3} + A \frac{\sin\theta_1}{\sin\theta_3})\cos\theta \\
&f(14) = -A\sin\theta_3 \sin\theta + (A \cos\theta_3 + A\frac{\sin\theta_3}{\sin\theta_2}\cos\theta_2)\cos\theta \\
&f(15) = A\sin\theta_3 \sin\theta - (A\cos\theta_3 + A\frac{\sin\theta_3}{\sin\theta_2} \cos\theta_2)\cos\theta
\end{align}
And
\begin{align}
&f(16) = A (\cos\theta_1\sin\theta_3 + \sin\theta_1\cos\theta_3 + \frac{\sin\theta_1\cos\theta_2\sin\theta_3}{\sin\theta_2})(\sin\theta + \frac{\cos\theta_3 + 1}{\sin\theta_3} \cos\theta).
\end{align}
We have
\begin{align}
&f(17) = A \left[ \cos\theta_1 \sin\theta_3 + \sin\theta_1\cos\theta_3 + \frac{\sin\theta_1\cos\theta_2\sin\theta_3}{\sin\theta_2} \right] \sin\theta \\
&\quad + A \left[ \sin\theta_1\sin\theta_3 - \cos\theta_1\cos\theta_3 - \frac{\cos\theta_1\cos\theta_2\sin\theta_3}{\sin\theta_2} \right. \\
&\quad\quad \left. - (\frac{\cos\theta_3 + 1}{\sin\theta_3})(\cos\theta_1\sin\theta_3 + \sin\theta_1\cos\theta_3 + \frac{\sin\theta_1\cos\theta_2\sin\theta_3}{\sin\theta_2} ) \right] \cos\theta.
\end{align}
Now using $f(4)$ and $f(5)$, we obtain
\begin{gather}
f(11) = -A\sin\theta_1\sin\theta + A(\frac{\sin\theta_1}{\sin\theta_3} + \sin\theta_1\frac{\cos\theta_3}{\sin\theta_3} + \cos\theta_1) \cos\theta.
\end{gather}
But we additionally have a condition involving $f(17)$, giving us the relation:
\begin{align}
&A\left[ 2\sin\theta_1\cos\theta_2 + 2\sin\theta_1\cos\theta_2\cos\theta_3 + 2\frac{\sin\theta_1\sin\theta_2}{\sin\theta_3} + \frac{\sin\theta_1\sin\theta_3}{\sin\theta_2} - 3\sin\theta_1\sin\theta_2\sin\theta_3 \right. \\
&\quad \left. +2 \frac{\sin\theta_1\sin\theta_2\cos\theta_3}{\sin\theta_3}+ 2\cos\theta_1\sin\theta_2 + 2\cos\theta_1\cos\theta_2\sin\theta_3 + 2\cos\theta_1\sin\theta_2\cos\theta_3 \right] = 0.
\end{align}
One can check that the term in the brackets is nonzero (it is approximately $3.2$), and therefore we must have $A = 0$. It follows directly that $f(i) = 0$ for every $i$.
\item \textbf{Three regular quadrilaterals and six equal pentagons}, forming $21$ edges: ``each quadrilateral surrounded by four pentagons and each pentagon by two quadrilaterals and three pentagons, with the quadrilateral edge being of length $\theta_2 = 70.529^\circ$, the pentagonal edge adjacent to just one quadrilateral vertex being of length $\theta_3 = 35.264^\circ$, and the remaining three edges of length $\theta_1 = 10.529^\circ$.''
\begin{figure}
\caption{Three regular quadrilaterals and six equal pentagons}
\end{figure}
We have directly that
\begin{align}
&f(7) = A\cos\theta, \quad f(8) = -A\cos\theta, \quad f(9) = -A\cos\theta, \quad f(10) = A\cos\theta \\
&\quad f(5) = B\cos\theta, \quad f(4) = -B\cos\theta,
\end{align}
for some constants $A, B$. We obtain
\begin{align}
&f(6) = -A\sin\theta_3\sin\theta + (-A\cos\theta_3 + B\frac{\sin\theta_2}{\sin\theta_3})\cos\theta \\
&f(11) = -A\sin\theta_3\sin\theta - (A\cos\theta_3 + B \frac{\sin\theta_2}{\sin\theta_3})\cos\theta \\
&f(12) = B\sin\theta_2\sin\theta - (B\sin\theta_2 \frac{\cos\theta_3}{\sin\theta_3} + B\cos\theta_2 + A)\cos\theta \\
&f(13) = -B\sin\theta_2\sin\theta + (B\sin\theta_2\frac{\cos\theta_3}{\sin\theta_3} + B\cos\theta_2 - A) \cos\theta.
\end{align}
But now we can use the $C^1$ condition at vertex $12, 19, 13$ to get the relation
\begin{gather}
B \left[ 2\cos\theta_2\sin\theta_3 + \sin\theta_2\cos\theta_3 \right] = 0,
\end{gather}
which necessitates that $B = 0$, since both terms in the bracket are positive.
We proceed by calculating
\begin{align}
&f(16) = A\sin\theta_2\sin\theta - 2A\cos\theta_2\cos\theta \\
&f(14) = f(18) = -A\sin\theta_3 \sin\theta + 2A\cos\theta_3\cos\theta \\
&f(15) = f(17) = (A\cos\theta_1\sin\theta_2 + 2A\sin\theta_1\cos\theta_2)\sin\theta \\
&\quad + A\left[ \frac{\cos\theta_1\cos\theta_2\sin\theta_2 + 2\sin\theta_1\cos^2\theta_2 + 3\sin\theta_3\cos\theta_3}{\sin\theta_2} \right] \cos\theta.
\end{align}
But now we can apply the $C^0$ condition at vertex $16, 15, 17$ to get
\begin{gather}
A \left[ 6\cos\theta_3 \sin\theta_3 + 5\sin\theta_1\cos^2\theta_2 - \sin\theta_1 \right] = 0,
\end{gather}
which implies $A = 0$, since the term in the brackets is nonzero. It then follows directly that $f(i) = 0$ for all $i$.
\end{enumerate}
This completes the proof of integrability when $k = 1$. Suppose now $k \geq 2$. We can handle the projection $\pi_{\mathbb{R}^3\times\{0\}} \circ v$ in precisely the same manner as above. On the other hand, given any coordinate vector $e \in \{0\}\times \mathbb{R}^{k-1}$, let us define
\begin{gather}
f(i)(\theta) = e \cdot v(i)(\theta),
\end{gather}
and observe $f(i)$ takes the same form \eqref{eqn:form-of-f}. By Lemma \ref{lem:vect-vs-scalar-cond} the compatibility conditions are now
\begin{gather}
f(i_1)(p) = f(i_2)(p) = f(i_3)(p), \quad \text{and} \quad \sum_{j=1}^3 (n(i_j) \cdot \hat \ell(i_j)) f'(i_j)(p) = 0,
\end{gather}
whenever $\ell(i_1), \ell(i_2), \ell(i_3)$ share a common vertex $p$. Since $f'' + f = 0$, the functions $f'(i)$ again have the form \eqref{eqn:form-of-f} and satisfy conditions \eqref{eqn:scalar-initial-conds}, \eqref{eqn:scalar-compat-conds}, so we can apply the proof above to deduce that every $f'(i) = 0$. Since $f = -f''$, this implies every $f(i) = 0$, and hence $\pi_{\{0\}\times \mathbb{R}^{k-1}} \circ v$ is zero also.
\end{proof}
\section{Corollaries}\label{sec:corollaries}
Given Theorem \ref{thm:main-decay} and some background results on $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets, the proofs of our Corollaries are essentially standard.
\begin{proof}[Proof of Corollary \ref{cor:no-spine-reg}]
The argument is standard, but we include it for completeness. Take $\delta_1(\mathbf{C})$ as in Theorem \ref{thm:main-decay}, and $\varepsilon_1(\mathbf{C}, \beta = 1/100, \tau = 1/100)$ as in Lemma \ref{lem:poly-graph}. Ensure $\delta \leq \delta_1$.
If $M$ is such that $M \in \mathcal{N}_\delta(\mathbf{C})$, and $\theta_M(0) \geq \theta_{\mathbf{C}}(0)$, then $E_{\delta_1}(M, \mathbf{C}, 1) \leq \delta_1^2$, and $M$ satisfies the $\delta'$-no-holes condition in $B_{3/4}$ w.r.t. $\mathbf{C}$, for all $\delta' > 0$. We deduce by Theorem \ref{thm:main-decay} that there is a sequence of rotations $q_i$ so that
\begin{gather}
E_{\delta_1}(M, q_i(\mathbf{C}), \theta^i) \leq 2^{-i} E_{\delta_1}(M, \mathbf{C}, 1).
\end{gather}
It follows that $|q_i - q_{i+1}| \leq c(\mathbf{C}) 2^{-i} E(M, \mathbf{C}, 1)$, and in particular there is a rotation $q$ so that
\begin{gather}
\rho^{-n-2} \int_{M \cap B_\rho} d_{q(\mathbf{C})}^2 \leq c(\mathbf{C}) \rho^{2\mu} E(M, \mathbf{C}, 1)
\end{gather}
for all $\rho \leq 1$, and for some $\mu = \mu(\mathbf{C})$. Ensuring $c(\mathbf{C}) \delta \leq \varepsilon_1$, we can apply Lemma \ref{lem:poly-graph} at any scale $B_\rho$, with $\mu$ in place of $\alpha$ (we may assume $\mu \leq \alpha$), to obtain a uniform $C^{1,\mu}$ decomposition of $M$ over $\mathbf{C}$. That is, in the sense of Definition \ref{def:poly-graph}, we have $M \cap B_{1/2} = \mathrm{graph}_{\mathbf{C}}(u, f, \Omega)$, where $u$ and $f$ admit the pointwise bounds
\begin{align}
r^{-1}|u(i)| + |Du(i)| + r^\mu [Du(i)]_{\mu, \mathbf{C}} \leq c(\mathbf{C}) r^{\mu} E(M, \mathbf{C}, 1)^{1/2} \\
r^{-1} |f(i)| + |Df(i)| + r^\mu [Df(i)]_{\mu, \mathbf{C}} \leq c(\mathbf{C})r^{\mu} E(M, \mathbf{C}, 1)^{1/2}.
\end{align}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:spine-reg}]
The argument is same as the proof given in Section \ref{sec:graph} for Theorem \ref{thm:Y-reg}, except using Proposition \ref{prop:no-holes} in place of \ref{prop:no-holes-Y}, and Simon's $\varepsilon$-regularity in addition to Allard's.
\end{proof}
To prove Theorems \ref{thm:main-reg} and \ref{thm:clusters} we need a few background results. First, we prove assertion 3) of Theorem \ref{thm:background-clusters}, as promised.
\begin{lemma}
The underlying varifold $M^n = \mathcal{H}^n \llcorner (\partial^* \mathcal{E}(1) \cup \ldots \cup \partial^* \mathcal{E}(N))$ associated to a minimizing $N$-cluster (where $\partial^*$ denotes the reduced boundary) has bounded mean curvature, and no boundary. As a corollary, $M = \mathcal{H}^n \llcorner (\partial \mathcal{E}(1) \cup \ldots\cup \partial \mathcal{E}(N))$, where $\partial$ denotes the \emph{topological} boundary.
\end{lemma}
\begin{proof}
For convenience write $V = \{ \mathbf{a} \in \mathbb{R}^{N+1} : \sum_h a_h = 0 \}$. From \cite[Theorem VI.2.3]{Alm-eps-delta}/\cite[Theorem IV.1.14]{maggi}, we have the following: for any $N$-cluster $\mathcal{E}$, there are constants $\eta, c, R$ (depending only on $\mathcal{E}$), and a $C^1$ function
\begin{gather}
\Psi : B_\eta^{N+1} \times \mathbb{R}^{n+1} \to \mathbb{R}^{n+1},
\end{gather}
with $\Psi_{\mathbf{a} = 0} = Id$, which satisfies for any $\mathbf{a} \in B_\eta^{N+1} \cap V$:
\begin{gather}\label{eqn:vol-preserving-prop}
\mathrm{spt} (\Psi_{\mathbf{a}} - Id) \subset B_R, \quad |\Psi_{\mathbf{a}}(\mathcal{E}(h)) \cap B_R| = |\mathcal{E}(h) \cap B_R| + a_h, \quad | D \Psi_{\mathbf{a}} - Id| \leq c \sum_{h=1}^N |a_h|.
\end{gather}
Of course we can also assume $B_R$ contains all the bounded chambers $\{ \mathcal{E}(h) \}_{h=1}^N$.
Now suppose $\mathcal{E}$ is a minimizing $N$-cluster, take $\Psi$ as above, and consider an arbitrary $C^1$ vector field $X$ supported in $B_R$ generating flow $\phi_t$. Define the function $F : \mathbb{R} \times (B_\eta^{N+1} \cap V) \to \mathbb{R}^N$ by setting
\begin{gather}
F^h(t, \mathbf{a}) = |\phi_t(\Psi_{\mathbf{a}}(\mathcal{E}(h)))| - |\mathcal{E}(h)| .
\end{gather}
Choosing coordinates on $V$ via the map
\begin{gather}
\mathbf{b} \in \mathbb{R}^N \mapsto (-\sum_{i=1}^N b_i, b_1, \ldots, b_N) \in \mathbb{R}^{N+1} \cap V,
\end{gather}
we obtain that
\begin{gather}
F(0, 0) = 0, \quad \partial_{b_h} F^k|_{(0, 0)} = \delta_{kh} \quad \forall k, h = 1, \ldots, N.
\end{gather}
Therefore, by the implicit function theorem we can find a $C^1$ curve $\mathbf{a} : (-\varepsilon, \varepsilon) \to B_\eta^{N+1} \cap V$, so that $\mathbf{a}(0) = 0$ and $F(t, \mathbf{a}(t)) \equiv 0$. In other words, the variation $\phi_t \circ \Psi_{\mathbf{a}(t)}$ preserves the volume vector of $\mathcal{E}$.
If $Y$ is the initial velocity vector field for $\Psi_{\mathbf{a}(t)}$, then by \eqref{eqn:vol-preserving-prop} we have $|DY| \leq c \sum_{h=1}^N |a'_h(0)|$. On the other hand, since $D_t F(t, \mathbf{a}(t)) = 0$, we have for each $h$:
\begin{gather}
0 = \int_{\partial^* \mathcal{E}(h)} (X + Y) \cdot \nu = \int_{\partial^* \mathcal{E}(h)} X \cdot \nu + a'_h(0).
\end{gather}
Therefore, since $\mathcal{E}$ is minimizing for volume-vector-preserving deformations,
\begin{gather}
\int_M div_M(X) = -\int_M div_M(Y) \leq c \int_M |X|.
\end{gather}
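Here the equality holds because the deformation $\phi_t \circ \Psi_{\mathbf{a}(t)}$ preserves the volume vector and $\mathcal{E}$ is minimizing, so the first variation of area, namely $\int_M div_M(X + Y)$, vanishes; the inequality is simply an expansion of the bounds recorded above (with constants $c$ allowed to change from line to line and to depend on $\mathcal{E}$):
\begin{gather}
-\int_M div_M(Y) \leq c \int_M |DY| \leq c\, \mu_M(B_R) \sum_{h=1}^N |a'_h(0)| \leq c \int_M |X|,
\end{gather}
using $|a'_h(0)| = |\int_{\partial^* \mathcal{E}(h)} X \cdot \nu| \leq \int_M |X|$ for each $h$.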
This shows that $\delta M$ forms a bounded linear operator on $L^1(\mu_M)$, which implies $M$ has no boundary and bounded $H_M$.
\end{proof}
Next, we give a general ``sheeting'' theorem for $(\mathbf{M}, \varepsilon,\delta)$-minimizing varifolds, which effectively says that this class is a multiplicity-one class. This is well-known, and essentially the same as \cite[Corollary II.2]{taylor}.
\begin{lemma}\label{lem:almost-min-mult}
Let $M_i^n = \mathcal{H}^n \llcorner \mathrm{spt} M_i$ be a sequence of (multiplicity-one) integral varifolds in $U \subset \mathbb{R}^{n+k}$ without boundary, such that: the $M_i$ have uniformly bounded mean curvature and mass, and each $\mathrm{spt} M_i$ is $(\mathbf{M}, \varepsilon, \delta)$-minimizing in $U$ (for uniform $\varepsilon$, $\delta$).
If $M_i \to M$ as varifolds in $U$, then $M = \mathcal{H}^n \llcorner \mathrm{spt} M$, and $\mathrm{spt} M$ is $(\mathbf{M}, \varepsilon, \delta)$-minimizing in $U$. In particular, if $\mathbf{C}$ is any tangent cone for $M$, then $\mathbf{C}$ has multiplicity one and $\mathrm{spt} \mathbf{C}$ is $(\mathbf{M}, 0, \infty)$-minimizing.
\end{lemma}
\begin{proof}
Since $M$ is integral, at $\mu_M$-a.e. $x$ we have an approximate tangent plane $P$. Fix such an $x$, and suppose towards a contradiction that $\theta_M(x) = q > 1$. By monotonicity, for sufficiently small $r$ and $i \gg 1$, both $B_r(x) \cap \mathrm{spt} M$ and $B_r(x) \cap \mathrm{spt} M_i$ lie in an $\eta r$-neighborhood of $x + P$. Therefore, if we construct a $C^1$ deformation which pushes $B_{r/2}(x)$ into $B_{r/2}(x) \cap (x + P)$, we save at least $c(n) (q-1) r^n$ of area in $M_i$. This contradicts $(\mathbf{M}, \varepsilon, \delta)$-minimality.
That $M$ is $(\mathbf{M}, \varepsilon, \delta)$-minimizing follows directly from the facts: a) any piecewise $C^1$ mapping $\phi$ induces a continuous map $\phi_\sharp$ on the space of integral varifolds; and b) any Lipschitz deformation on $M$ can be well-approximated by piecewise-$C^1$ deformations.
\end{proof}
The last crucial fact we need is Taylor's classification of $2$-dimensional, $(\mathbf{M}, 0, \infty)$-minimizing cones in $\mathbb{R}^3$. The classification for $1$-d cones is trivial. The following Lemma is a straightforward consequence of \cite[Proposition II.3]{taylor}.
\begin{lemma}\label{lem:almost-min-tangent}
Let $\mathbf{C}^n$ be an $(\mathbf{M}, 0, \infty)$-minimizing cone in $\mathbb{R}^{n+k}$. If $\mathbf{C} = \mathbf{C}_0^1 \times \mathbb{R}^{n-1}$, then up to rotation $\mathbf{C}_0 = \mathbf{Y}$. If $k = 1$ and $\mathbf{C} = \mathbf{C}_0^2\times \mathbb{R}^{n-2}$, then (up to rotation) $\mathbf{C}_0$ is either $\mathbb{R}^2$, $\mathbf{Y}\times \mathbb{R}$, or $\mathbf{T}$.
\end{lemma}
Lemma \ref{lem:almost-min-tangent} highlights the importance of the cones $\mathbf{Y}\times \mathbb{R}$ and $\mathbf{T}$: up to factors of $\mathbb{R}^m$, they are the only singular cones arising in the top three strata of $(\mathbf{M}, \varepsilon, \delta)$-minimizing sets. Moreover, they always occur with multiplicity one. From these facts Theorem \ref{thm:clusters} follows in a straightforward way from our decay Theorem \ref{thm:main-decay} and no-holes Proposition \ref{prop:no-holes}.
\begin{proof}[Proof of Theorem \ref{thm:main-reg}/Theorem \ref{thm:clusters}]
Recall the definitions of $k$-strata and $(k,\varepsilon)$-strata as given in Section \ref{sec:prelim}. Let us define $M_k = S^k(M)$ to be the $k$-th stratum, for $k = n-3, \ldots, n$. Conclusions 1), 2), 3) follow immediately from Lemmas \ref{lem:almost-min-mult}, \ref{lem:almost-min-tangent}, and the $\varepsilon$-regularity Theorems of Allard (Theorem \ref{thm:allard}), Simon (Theorem \ref{thm:Y-reg}), and Theorem \ref{thm:spine-reg}.
More generally, the aforementioned Lemmas and Theorems show each stratum $S^m(M)$ (for $m = n, n-1, n-2, n-3$) is closed in the following sense: suppose $M_i$ is a family of varifolds satisfying the hypotheses of Theorem \ref{thm:main-reg} with uniform bounds on mass, mean curvature, and uniform $\varepsilon, \delta$. If $M_i \to M$, and $x_i \in S^m(M_i)$ converge to $x \in U$, then $x \in S^m(M)$.
We claim that, for every compact $K \subset U$, there is an $\varepsilon> 0$ so that $S^{n-3} \cap K \subset S^{n-3}_\varepsilon$. This is an easy consequence of the closedness of the strata: if the claim were false, we would have sequences $x_i \to x \in K \cap S^{n-3}$, $\varepsilon_i \to 0$, and $r_i \in (0, \min\{d(x_i, \partial U), 1\})$, for which $M$ is $(n-2, \varepsilon_i)$-symmetric in $B_{r_i}(x_i)$. Let $M_i = r_i^{-1}(M - x_i)$. Then the $M_i$ have uniformly bounded mass and first-variation in $B_1$, each $M_i$ is $(n-2, \varepsilon_i)$-symmetric in $B_1$, while $0 \in S^{n-3}(M_i)$.
Passing to a subsequence, we have varifold convergence $M_i \to \mathbf{C}$, where $\mathbf{C}$ is an $(n-2)$-symmetric cone. But by the closedness property, $0 \in S^{n-3}(\mathbf{C})$. This is a contradiction. Conclusion 4) is now a consequence of Naber-Valtorta \cite{naber-valtorta}.
\end{proof}
\section{Appendix}
\subsection{Linear algebra} We require some elementary linear algebra. The following Lemma relates vectorial and scalar compatibility conditions. Notice how the scalar conditions in the different cases are dual to each other.
\begin{lemma}\label{lem:vect-vs-scalar-cond}
Let $\omega_1, \omega_2, \omega_3$ be unit vectors, with $\omega_1 + \omega_2 + \omega_3 = 0$, and take vectors $v_1, v_2, v_3$ so that $v_i \perp \omega_i$ for each $i$. Write $P^2$ for the $2$-plane spanned by the $\omega_i$.
\begin{enumerate}
\item[A)]
We can write $\pi_P(v_i) = \alpha_i e^{i\pi/2}\omega_i$. Then
\begin{gather}
\pi_{P}(v_i) = \pi_{<\omega_i>^\perp}(u) \text{ for some fixed $u$} \iff \sum_i \alpha_i = 0,
\end{gather}
and
\begin{gather}
\sum_i \pi_P(v_i) = 0 \iff \alpha_1 = \alpha_2 = \alpha_3.
\end{gather}
\item[B)] Suppose $\pi_{P^\perp}(v_i) = \alpha_i v$ for some fixed $v \in P^\perp$. Then
\begin{gather}
\pi_{P^\perp}(v_i) = \pi_{<\omega_i>^\perp}(u) \text{ for some fixed $u$} \iff \alpha_1 = \alpha_2 = \alpha_3,
\end{gather}
and
\begin{gather}
\sum_i \pi_{P^\perp}(v_i) = 0 \iff \sum_i \alpha_i = 0.
\end{gather}
\end{enumerate}
Here $<\omega_i>^\perp$ denotes the orthogonal complement to the line spanned by $\omega_i$.
\end{lemma}
\begin{proof}
Since part B) is obvious, let us concentrate on part A). For ease of notation we can swap the role of $\omega_i$ and $e^{i\pi/2}\omega_i$. Let us identify $P$ with $\mathbb{R}^2$, and the $\omega_i$ with $1, e^{2\pi i/3}, e^{4\pi i/3}$.
The ``only if'' direction of the first statement is obvious. Conversely, given $\alpha_i$ with $\sum_i \alpha_i = 0$, define
\begin{gather}
u = \alpha_1 \omega_1 + \frac{1}{\sqrt{3}}(\alpha_2 - \alpha_3) e^{i\pi/2} \omega_1.
\end{gather}
Trivially $\pi_{\omega_1}(u) = \alpha_1$, and we calculate
\begin{align}
\pi_{\omega_2}(u)
&= \alpha_1 (\omega_2 \cdot \omega_1) + \frac{1}{\sqrt{3}}(\alpha_2 - \alpha_3) (\omega_2 \cdot (e^{i\pi/2}\omega_1)) \\
&= \frac{-1}{2} \alpha_1 + \frac{1}{2}(\alpha_2 - \alpha_3) \\
&= \alpha_2.
\end{align}
By a symmetric calculation we have $\pi_{\omega_3}(u) = \alpha_3$ also.
We prove the second assertion of A). We have $e_2 \cdot \omega_2 = -e_2 \cdot \omega_3 = \sqrt{3}/2$, and $e_1 \cdot \omega_2 = e_1\cdot \omega_3 = -1/2$. Therefore,
\begin{align}
\sum_i \alpha_i \omega_i = 0
&\iff \sqrt{3}/2 (\alpha_2 - \alpha_3) = 0 \text{ and } \alpha_1 - \frac{1}{2}(\alpha_2 + \alpha_3) = 0 \\
&\iff \alpha_1 = \alpha_2 = \alpha_3. \qedhere
\end{align}
\end{proof}
\begin{lemma}\label{lem:sum-is-zero}
Suppose $\omega_1, \omega_2, \omega_3$ are unit vectors, with $\omega_1 + \omega_2 + \omega_3 = 0$. Let $v_1, v_2, v_3$ be vectors, such that $v_i \perp \omega_i$ for each $i$.
Then the following are equivalent:
\begin{enumerate}
\item[A)] $v_1 + v_2 + v_3 = 0$;
\item[B)] There is a skew-symmetric $A$, which is zero on the orthogonal complement of $\mathrm{span}(v_1, v_2, v_3, \omega_1, \omega_2, \omega_3)$, such that $Av_i = \omega_i$;
\item[C)] For any vector $u$, we have $\sum_i v_i \cdot \pi_{<\omega_i>^\perp}(u) = 0$. Here $<\omega_i>^\perp$ is the orthogonal complement to the line spanned by $\omega_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
We show A) implies B). The converse B) $\implies$ A) is trivial. If $P^2$ is the plane containing the points $0, \omega_1, \omega_2$, then clearly $\omega_3$ must lie in $P$ also. Therefore, after a suitable rotation, we can identify $P^2$ with $\mathbb{R}^2$, and the $\omega_i$ with $1, e^{2\pi i/3}, e^{4\pi i/3} \in \mathbb{R}^2$.
Let $v_i^T$ and $v_i^\perp$ be the orthogonal projections of $v_i$ to $P$ and $P^\perp$ respectively. Define the matrix
\begin{gather}
A_{ij} = \sum_{\ell=1}^3 \left( \omega_\ell \wedge \left( \frac{v_\ell^T}{3/2 + \sqrt{3}/2} + \frac{v_\ell^\perp}{3/2} \right) \right)(e_j, e_i),
\end{gather}
where $e_i$ is the standard basis of $\mathbb{R}^{n+k}$. Of course in Euclidean space we can identify vectors and covectors via the standard inner product. Clearly $A_{ij}$ is skew-symmetric.
By symmetry it will suffice to show $A\omega_1 = v_1$. First, since $\sum_i v_i^T = 0$ and $v_i^T \cdot \omega_i = 0$, one can easily check that
\begin{gather}
v_i^T = \alpha e^{i \pi/2} \omega_i \quad i = 1, 2, 3,
\end{gather}
for some fixed $\alpha \in \mathbb{R}$, i.e. each $v_i^T$ is a $90^\circ$ rotation of $\alpha \omega_i$. We therefore have
\begin{align}
(3/2 + \sqrt{3}/2) (A \omega_1)^T
&= v_1^T + (\omega_2 \cdot \omega_1) v_2^T + (\omega_3 \cdot \omega_1) v_3^T - (v_2\cdot \omega_1) \omega_2 - (v_3 \cdot \omega_1) \omega_3 \\
&= v_1^T - \frac{1}{2} (v_2^T + v_3^T) - \alpha ( (e^{i\pi/2} \omega_2)\cdot \omega_1) \omega_2 - \alpha ((e^{i\pi/2} \omega_3) \cdot \omega_1) \omega_3 \\
&= \frac{3}{2} v_1^T + \frac{\alpha}{2} (\omega_2 - \omega_3) \\
&= \frac{3}{2} v_1^T + \frac{\sqrt{3}}{2} \alpha e^{i\pi/2} \omega_1 \\
&= (3/2 + \sqrt{3}/2) v_1^T.
\end{align}
Similarly, we have
\begin{align}
\frac{3}{2} (A \omega_1)^\perp
&= v_1^\perp + (\omega_2 \cdot \omega_1) v_2^\perp + (\omega_3 \cdot \omega_1) v_3^\perp = v_1^\perp - \frac{1}{2} (v_2^\perp + v_3^\perp) = \frac{3}{2} v_1^\perp.
\end{align}
This shows $A\omega_1 = v_1$.
We show A) $\iff$ C). With $P$ as above, we trivially have that
\begin{gather}
\pi_{<\omega_i>^\perp}(u) = u \quad \forall u \in P^\perp.
\end{gather}
Therefore $\sum_i v_i^\perp = 0$ if and only if $\sum_i v_i \cdot \pi_{<\omega_i>^\perp}(u) = 0$ for all $u \in P^\perp$.
On the other hand, given $u \in P$, and our assumption $v_i \perp \omega_i$, then we can write
\begin{gather}
\pi_{<\omega_i>^\perp}(u) = \beta_i e^{i\pi/2} \omega_i, \quad v_i^T = \alpha_i e^{i\pi/2} \omega_i,
\end{gather}
where $\beta_i \in \mathbb{R}$ satisfy $\sum_i \beta_i = 0$, and $\alpha_i \in \mathbb{R}$. Then, using Lemma \ref{lem:vect-vs-scalar-cond}, we have
\begin{align}
\sum_i v_i^T = 0
&\iff \alpha_1 = \alpha_2 = \alpha_3 \\
&\iff \sum_i \alpha_i \beta_i = 0 \quad \forall \beta_i \text{ such that } \sum_i \beta_i = 0 \\
&\iff \sum_i v_i \cdot \pi_{<\omega_i>^\perp}(u) = 0 \quad \forall u \in P.
\end{align}
This completes the proof.
\end{proof}
\subsection{Two variation inequalities}\label{sec:variation}
We sketch the proof of the estimates \eqref{eqn:lem-density-1} and \eqref{eqn:lem-density-2}. Both are minor modifications of the derivation given in \cite{simon1}.
\begin{lemma}
Let $\mathbf{C} = \mathbf{C}^\ell_0 \times \mathbb{R}^m$, and take $M \in \mathcal{N}_\varepsilon(\mathbf{C})$ with $\theta_M(0) \geq \theta_\mathbf{C}(0)$. Let $\phi: \mathbb{R} \to \mathbb{R}$ be any smooth function satisfying $\phi' \leq 0$, $\phi \equiv 1$ on $[0, 1/10]$, and $\phi \equiv 0$ on $[2/10, \infty)$. Then we have
\begin{gather}\label{eqn:app-density-1}
\frac{1}{2} n (1/10)^n \int_{M \cap B_{1/10}} \frac{|X^\perp|^2}{R^{n+2}} \leq \int_M \phi^2(R) - \int_{\mathbf{C}} \phi^2(R) + c(\mathbf{C}, \phi) ||H_M||_{L^\infty(B_1)},
\end{gather}
and
\begin{align}\label{eqn:app-density-2}
\ell \left( \int_M \phi^2(R) - \int_{\mathbf{C}} \phi^2(R) \right) &\leq \left( \int_M 2\phi |\phi'| r^2/R - \int_{\mathbf{C}} 2\phi |\phi'| r^2/R \right) \\
&\quad + \int_M 2\phi (\phi')^2 |(x, 0)^\perp|^2 + c(\mathbf{C}, \phi) ||H_M||_{L^\infty(B_1)}.
\end{align}
\end{lemma}
\begin{proof}
Write $\Lambda = ||H||_{L^\infty(B_1)}$. By the monotonicity formula (see e.g. \cite{simon:gmt}) we have
\begin{gather}
e^{\Lambda \rho}\theta_M(0, \rho) - \theta_M(0) \geq \int_{M \cap B_\rho} \frac{|X^\perp|^2}{R^{n+2}} \quad \forall \rho < 1.
\end{gather}
By plugging (a $C^1$ approximation to) the vector field $(x, y) 1_{B^{n+k}_\rho}$ into the first variation \eqref{eqn:first-variation}, and using the coarea formula, we obtain
\begin{gather}
\frac{n}{\rho} \mu_M(B_\rho) - c\Lambda \leq D_\rho \int_{M \cap B_\rho} |\nabla^T R|^2 \leq D_\rho \mu_M(B_\rho).
\end{gather}
Therefore, taking $\varepsilon \geq \Lambda$ small, by the monotonicity formula and our assumption $\theta_M(0) \geq \theta_{\mathbf{C}}(0)$ we have
\begin{align}
\frac{1}{2} n \rho^{n-1} \int_{M \cap B_\rho} \frac{|X^\perp|^2}{R^{n+2}}
&\leq D_\rho \mu_M(B_\rho) - e^{-\Lambda\rho} n\rho^{n-1} \theta_\mathbf{C}(0) + c\Lambda \\
&\leq D_\rho( \mu_M(B_\rho) - \mu_\mathbf{C}(B_\rho)) + c(\mathbf{C})(\Lambda + (1 - e^{-\Lambda}))
\end{align}
Now multiply by $\phi^2(\rho)$ and integrate in $\rho \in [0, 1]$ to obtain \eqref{eqn:app-density-1}.
We prove \eqref{eqn:app-density-2}. Plugging the vector field $(x, 0) \phi^2(R)$ into the first variation, and rearranging, gives
\begin{gather}\label{eqn:app-density-3}
\int_M (\ell + \frac{1}{2} <M^\perp, \{0\}\times \mathbb{R}^m>^2)\phi^2 \leq \int_M 2\phi|\phi'| r^2/R + 2(\phi')^2 |(x, 0)^\perp|^2 + c(\mathbf{C}, \phi) \Lambda .
\end{gather}
On the other hand, using Fubini and integrating by parts in $r$, gives
\begin{gather}\label{eqn:app-density-4}
\ell \int_{\mathbf{C}} \phi^2 = \int_{ \{0\} \times \R^m } \int_0^\infty \phi(\sqrt{r^2 + |y|^2})^2 \,\ell r^{\ell-1} \theta_{\mathbf{C}}(0) dr dy = \int_\mathbf{C} -2\phi \phi' r^2/R.
\end{gather}
Now subtract \eqref{eqn:app-density-4} from \eqref{eqn:app-density-3}.
\end{proof}
\subsection{Graphicality for $\mathbf{C}_0$ smooth}\label{sec:graphical-smooth}
We prove the analogue of decomposition Lemma \ref{lem:poly-graph} when $\mathbf{C}_0^\ell$ is \emph{smooth}, which allows us in certain circumstances to remove the multiplicity-one hypothesis of \cite{simon1}. The proof is essentially the same as for Lemma \ref{lem:poly-graph}, but simpler.
In this section we always assume $\mathbf{C}^n = \mathbf{C}_0^\ell\times \mathbb{R}^m$, for $\mathbf{C}_0$ smooth. Recall the torus
\begin{gather}
U(\rho, y, \gamma) = \{ (\xi, \eta) \in \mathbb{R}^{\ell+k}\times \mathbb{R}^m: (|\xi| - \rho)^2 + |\eta - y|^2 \leq \gamma \rho^2 \},
\end{gather}
and the ``halved-torus''
\begin{gather}
U_+(\rho, y, \gamma) = U(\rho, y, \gamma) \cap \{ (\xi, \eta) : |\xi| \geq \rho \}.
\end{gather}
We first demonstrate global graphical structure, but without good estimates.
\begin{lemma}\label{lem:global-graph}
For any $\beta, \tau > 0$ there is an $\varepsilon_1(\mathbf{C}^0, \beta, \tau)$ so that the following holds. Take $M \in \mathcal{N}_{\varepsilon_1}(\mathbf{C}^0)$. Then there is a domain $\Omega \subset \mathbf{C}$, and smooth function $u : \Omega \to \mathbf{C}^\perp$, so that
\begin{gather}\label{eqn:global-graph}
M \cap B_{3/4} \setminus B_\tau(\{0\}\times \mathbb{R}^m) = \mathrm{graph}_C(u), \quad r^{-1} |u| + |\nabla u| \leq \beta.
\end{gather}
\end{lemma}
\begin{proof}
This is essentially a direct Corollary of Lemma \ref{lem:mult-one}. If the Lemma failed, we would have a counter-example sequence $M_i$. Passing to a subsequence, we have multiplicity-$1$ convergence $M_i \to \mathbf{C}^0$, on compact subsets of $B_1$. Therefore, by Allard's theorem, the convergence is smooth in $B_1 \setminus (\{0\}\times \mathbb{R}^m)$; in particular, the conclusion \eqref{eqn:global-graph} holds for $M_i$ when $i$ is large, a contradiction.
\end{proof}
\begin{lemma}\label{lem:tiny-graph}
For any $\beta > 0$ there is an $\varepsilon_2(\mathbf{C}^0, \beta)$ so that the following holds. Take $M \in \mathcal{N}_{1/10}(\mathbf{C}^0)$. Take $\rho \leq 1/2$, and $y \in B_{3/4}^m(0)$, and suppose
\begin{equation}\label{eqn:tiny-u}
M \cap U_+(\rho, y, 1/16) = \mathrm{graph}_\mathbf{C}(u), \quad r^{-1} |u| + |\nabla u| \leq 1/10,
\end{equation}
and
\begin{equation}\label{eqn:tiny-dist}
\rho^{-n-2} \int_{M \cap U(\rho, y, 1/4)} d_\mathbf{C}^2 + \rho ||H_M||_{L^\infty(U(\rho, y, 1/4))} \leq \varepsilon_2 .
\end{equation}
Then we have
\begin{gather}
M \cap U(\rho, y, 1/8) = \mathrm{graph}_C(u), \quad r^{-1} |u| + |\nabla u| \leq \beta.
\end{gather}
\end{lemma}
\begin{proof}
By dilation invariance, we can suppose $\rho = 1/2$. Suppose the Lemma is false, and consider a counterexample sequence $M_i$, $y_i$, $\varepsilon_i \to 0$. Passing to a subsequence, the $y_i \to y \in B_{3/4}^m$, and in $U(\rho, y, 1/5)$ the $M_i$'s converge to some stationary varifold supported in $\mathbf{C}^0$. The multiplicity in each disk is constant, but by the graphicality assumption we converge with multiplicity one inside $U_+(\rho, y, 1/16)$.
Therefore the convergence has multiplicity $1$, and so by Allard's theorem the conclusions of the Lemma hold when $i \gg 1$, a contradiction.
\end{proof}
\begin{lemma}[Graphicality for smooth $\mathbf{C}_0^\ell \times \mathbb{R}^m$]\label{lem:graph-smooth}
Given any $\beta, \tau > 0$, there is an $\varepsilon(\mathbf{C}^0, \beta, \tau)$ so that: if $M \in \mathcal{N}_{\varepsilon}(\mathbf{C}^0)$, then there are open sets $U \subset M$, $\Omega \subset \mathbf{C}$, with $U \supset M \cap B_{3/4} \setminus B_\tau({ \{0\} \times \R^m })$, and a function $u : \Omega \to \mathbf{C}^\perp$, so that
\begin{gather}
M \cap U = \mathrm{graph}_{\mathbf{C}}(u), \quad r^{-1} |u| + |\nabla u| \leq \beta,
\end{gather}
and
\begin{gather}
\int_{\Omega} r^2 |\nabla u|^2 + \int_{M \cap B_{3/4} \setminus U} r^2 \leq c(\mathbf{C}^0, \beta) E(M, \mathbf{C}, 1) .
\end{gather}
Note that $c$ is \emph{independent} of $\tau$.
\end{lemma}
\begin{proof}
We can assume $\beta \leq 1/10$. Ensure $\varepsilon \leq \varepsilon_1(\mathbf{C}^0, \beta, \tau)$ and $\varepsilon \leq \varepsilon_2(\mathbf{C}^0, \beta)$, the constants from Lemmas \ref{lem:global-graph}, \ref{lem:tiny-graph}. So, from Lemma \ref{lem:global-graph}, $M \cap (B_{3/4} \setminus B_\tau({ \{0\} \times \R^m })) = \mathrm{graph}_{\mathbf{C}}(u)$ with $u : \Omega \subset \mathbf{C} \to \mathbf{C}^\perp$ satisfying estimates \eqref{eqn:global-graph}.
Given $y \in B_{3/4}^m$, define
\begin{gather}
r_y = \inf \{ r' : \text{\eqref{eqn:tiny-u} holds for all $r' < \rho < 3/4$} \}.
\end{gather}
By Lemma \ref{lem:global-graph} we have $r_y \leq \tau$. By Lemma \ref{lem:tiny-graph}, necessarily \eqref{eqn:tiny-dist} must fail at $\rho = r_y$, and therefore
\begin{gather}
r_y^{n+2} \varepsilon_2 \leq \int_{M \cap U(r_y, y, 1/4)} d_{\mathbf{C}}^2 + r_y^{n+3} ||H||_{L^\infty(U(r_y, y, 1/4))}.
\end{gather}
In particular, by monotonicity we have
\begin{gather}
\int_{M \cap B_{20 r_y}(0, y)} r^2 \leq c(\mathbf{C}^0, \beta) \int_{M \cap U(r_y, y, 1/4)} d_{\mathbf{C}}^2 + c(\mathbf{C}, \beta) r_y^{n+3} ||H||_{L^\infty(B_1)}.
\end{gather}
Let $U$ be the region
\begin{gather}
U = \{ (x, y) \in M \cap B_{3/4} : |x| > r_y \},
\end{gather}
so that $U \supset M \cap B_{3/4} \setminus B_\tau({ \{0\} \times \R^m })$, and $M \cap U = \mathrm{graph}_{\mathbf{C}}(u)$.
Take a Vitali subcover $\{B_{2\rho_i}(0, y_i)\}_i$ of $\{B_{2 r_y}(0, y)\}_{y \in B_{3/4}^m}$, and then by construction $\{B_{10\rho_i}(0, y_i)\}_i$ covers $\mu_M$-a.e. point of $B_{3/4} \setminus U$, and the $U(\rho_i, y_i, 1/4) \subset B_{2\rho_i}(0, y_i)$ are disjoint. We deduce that
\begin{align}
\int_{M \cap B_{3/4} \setminus U} r^2
&\leq \sum_i \int_{M \cap B_{20\rho_i}(0, y_i)} r^2 \\
&\leq \sum_i c\int_{M \cap U(\rho_i, y_i, 1/4)} d_{\mathbf{C}}^2 + \sum_i c \rho_i^{n+3} ||H||_{L^\infty(B_1)} \label{eqn:global-graph-vitali} \\
& \leq c(\mathbf{C}^0, \beta) E(M, \mathbf{C}, 1) .
\end{align}
If $(x, y) \in \Omega$ satisfies $d((x, y), \partial \Omega) < |x|/2$, then there is $(x', y') \in \partial \Omega$ with $|x| < 2|x'|$. We have $(x', y') + u(x', y') \in B_{10\rho_i}(0, y_i)$ for some $i$, and since $|u(x', y')| \leq |x'|/10$, we have
\begin{gather}
|x| < 2|x'| < 20 \rho_i.
\end{gather}
We deduce that $\cup_i B_{20 \rho_i}(0, y_i)$ covers $\Omega' = \{ (x, y) \in \Omega : d((x, y), \partial \Omega) < |x|/2\}$.
Therefore, since $|\nabla u| \leq \beta$ we have from \eqref{eqn:global-graph-vitali} that
\begin{gather}
\int_{\Omega'} r^2 |\nabla u|^2 \leq c(\mathbf{C}^0, \beta) E(M, \mathbf{C}, 1).
\end{gather}
If $(x, y) \in \Omega \setminus \Omega'$, then we can use Allard and smallness of $\beta$ to give bounds
\begin{gather}
\int_{\mathbf{C} \cap B_{|x|/4}(x, y)} r^2 |\nabla u|^2 \leq c \int_{\mathbf{C} \cap B_{|x|/2}(x, y)} |u|^2 + c |x|^{n+3} ||H_M||_{L^\infty(B_{|x|}(x, y))}.
\end{gather}
Choose an appropriate Vitali subcover of $\{B_{|x|/4}(x, y) : (x, y) \in \Omega \setminus \Omega'\}$, then the resulting cover will have overlap bounded by $c(n)$, and therefore we have
\begin{gather}
\int_{\Omega \setminus \Omega'} r^2 |\nabla u|^2 \leq c E(M, \mathbf{C}, 1).
\end{gather}
\end{proof}
\end{document}
\begin{document}
\title{
Finiteness of outer automorphism groups of random right-angled Artin groups
}
\begin{abstract}
We consider the outer automorphism group $\mathrm{Out}(A_\Gamma)$ of the right-angled Artin group $A_\Gamma$ of a random graph $\Gamma$ on $n$ vertices in the Erd\H os--R\'enyi model.
We show that the functions $n^{-1}(\log(n)+\log(\log(n)))$ and $1-n^{-1}(\log(n)+\log(\log(n)))$ bound the range of edge probability functions for which $\mathrm{Out}(A_\Gamma)$ is finite:
if the probability of an edge in $\Gamma$ is strictly between these functions as $n$ grows, then asymptotically $\mathrm{Out}(A_\Gamma)$ is almost surely finite, and if the edge probability is strictly outside of both of these functions, then asymptotically $\mathrm{Out}(A_\Gamma)$ is almost surely infinite.
This sharpens results of Ruth Charney and Michael Farber from their preprint \emph{Random groups arising as graph products}, arXiv:1006.3378v1.
\end{abstract}
\section{Introduction}
Let $\Gamma$ be a simplicial graph with vertex set $V$.
The \emph{right-angled Artin group} $A_\Gamma$ defined by $\Gamma$ is the group with the presentation
\[\langle V |\text{$ab=ba$ for all $a,b\in V$ with $a$ adjacent to $b$}\rangle.\]
Right-angled Artin groups include free groups and free abelian groups and are common objects of study in geometric group theory.
Outer automorphism groups of right-angled Artin groups exhibit great variety: although they include infinite groups such as outer automorphism groups of free groups and $\GL(n,\Z)$, many of them are finite.
The theory of random graphs is a branch of combinatorics initiated by Erd\H os and R\'enyi in a 1959 paper~\cite{ErdosRenyi}.
Since right-angled Artin groups are indexed over graphs, it is natural to ask about the properties of random ones.
Random right-angled Artin groups were studied by Costa--Farber in~\cite{CostaFarber}, and their automorphism groups were specifically studied by Charney--Farber in~\cite{CharneyFarber}.
Charney and Farber showed that under certain conditions, a random right-angled Artin group almost certainly has a finite outer automorphism group; the results of this paper are a sharpening of their results.
\subsection{Background}\label{se:background}
This paper is about two graph-theoretic notions that arise in the study of right-angled Artin groups: \emph{domination} and \emph{star $2$--connectedness}.
Again let $\Gamma$ be a simplicial graph with vertex set $V$ and adjacency relation~$\sim$.
The \emph{star} of a vertex $a$ of $\Gamma$ is the set consisting of $a$ and all vertices adjacent to~$a$:
\[\st(a) = \{a\}\cup\{b\in V| b\sim a\}.\]
A vertex $a\in V$ is a \emph{star-cut-vertex} for $\Gamma$ if the full subgraph $\Gamma\drop\st(a)$ is disconnected.
The graph $\Gamma$ is \emph{star $2$--connected} if it has no star-cut-vertices.
If $\Gamma$ is star $2$--connected, then either $\Gamma$ is connected or $\Gamma$ is the disjoint union of exactly two complete graphs.
For a pair of distinct vertices $a,b\in V$, we say $a$ \emph{dominates} $b$ in $\Gamma$ if every vertex adjacent to $b$ is adjacent to or equal to $a$; in other words,
\[\st(b)\drop\{b\}\subset \st(a).\]
We write $a>b$ if $a$ dominates $b$, and refer to $(a,b)$ as a \emph{domination pair}.
Note that it is possible for $a$ to dominate $b$ whether $a\sim b$ or not; if $a\sim b$ then $(a,b)$ is an \emph{adjacent domination pair}, and otherwise it is a \emph{non-adjacent domination pair}.
A vertex is \emph{isolated} if it is adjacent to no other vertices, and it is \emph{central} if it is adjacent to all other vertices.
Note that if $a$ is isolated then $b>a$ for any $b$, and if $a$ is central, then $a>b$ for any $b$.
The following examples are instructive: a path with at least four vertices has exactly four domination pairs, and a path with at least five vertices is not star $2$--connected, but a cycle on at least five vertices is star $2$--connected and has no domination pairs.
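To illustrate the first of these claims, label the path on four vertices as $a$--$b$--$c$--$d$; its domination pairs are exactly
\[(b,a),\quad (c,a),\quad (c,d),\quad (b,d),\]
with the first and third adjacent and the second and fourth non-adjacent.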
The presence of domination pairs and star-cut-vertices indicate the existence of infinite order outer automorphisms of right-angled Artin groups.
M. Laurence showed in~\cite{Laurence} that the automorphism group $\Aut A_\Gamma$ of the right-angled Artin group $A_\Gamma$ of a finite graph $\Gamma$ is generated by finitely many automorphisms that fall into four classes: inversions, symmetries, dominated transvections, and partial conjugations.
While inversions and symmetries generate a finite subgroup of $\Aut A_\Gamma$, dominated transvections and partial conjugations are infinite order.
A dominated transvection always has an infinite-order image in the outer automorphism group $\Out A_\Gamma$, but a dominated transvection will only exist if $\Gamma$ has a domination pair.
If $\Gamma$ is star $2$--connected, then every partial conjugation is an inner automorphism; if there is a star-cut-vertex, then there is a partial conjugation whose image in $\Out A_\Gamma$ has infinite order.
We have explained the following fact.
\begin{fact}\label{fa:outfinite}
The group $\OAG$ is finite if and only if $\Gamma$ is star $2$--connected and has no domination pairs.
\end{fact}
For the details of this argument, we refer the reader to \S6 of Charney--Farber~\cite{CharneyFarber}.
To proceed, we must formalize our notion of random graphs.
The \emph{Erd\H os--R\'enyi model} for random graphs is the sequence of probability spaces $\gnp$, where
\begin{itemize}
\item $n$ varies over the positive integers,
\item $p=p(n)$ is a sequence of probability values in $[0,1]$,
\item the underlying set of $\gnp$ is the finite set of all simplicial graphs with vertex set $V$ of cardinality $|V|=n$,
\item each edge occurs with probability $p$ and independently of other edges.
\end{itemize}
It is easy to see that this last condition uniquely determines $\gnp$; it means that $\gnp$ assigns each graph $\Gamma$ with $m$ edges the probability
\[\prob(\Gamma)=p^m(1-p)^{\binom{n}{2}-m}.\]
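For example, when $n=3$, a fixed graph consisting of a path (two edges and one non-edge) is assigned probability
\[\prob(\Gamma)=p^2(1-p)^{\binom{3}{2}-2}=p^2(1-p).\]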
Suppose we are given a sequence of probabilities $p$ and a property of graphs $P$.
We say that $\Gamma\in\gnp$ has the property $P$ \emph{asymptotically almost surely} (\emph{a.a.s.}) if the probability that $\Gamma\in\gnp$ has $P$ goes to $1$ as $n\to\infty$.
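In symbols, $\Gamma\in\gnp$ has $P$ a.a.s.\ exactly when
\[\lim_{n\to\infty}\prob\big(\Gamma\in\gnp \text{ has } P\big)=1.\]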
This model for random graphs and related models are described in detail in Chapter 2 of Bollob\'as~\cite{Bollobas}.
The work in this paper is inspired by the following results of Charney--Farber~\cite{CharneyFarber}:
\begin{theorem}[Charney--Farber]
If $p$ is any function satisfying
\[p(1-p)n-2\ln(n)\to\infty \text{ as } n\to\infty,\]
(for example, if $p$ is constant in $n$ with $0<p<1$), then $\Gamma\in\gnp$ a.a.s.\ has no domination pairs.
\end{theorem}
\begin{theorem}[Charney--Farber]
If $p$ is constant with respect to $n$ and
\[1-\frac{1}{\sqrt{2}} < p < 1,\]
then $\Gamma\in\gnp$ a.a.s.\ is star $2$--connected.
\end{theorem}
In this paper, we find sharper descriptions of the functions $p$ for which $\Gamma\in\gnp$ a.a.s.\ has no domination pairs and is star $2$--connected.
Further, we show that for $p$ outside of these ranges, the negations of these statements hold a.a.s.
\subsection{Statement of results}
Our two main theorems explain the asymptotically almost sure existence and nonexistence of domination pairs and star-cut-vertices.
\begin{theorem}\label{th:masterdomination}
Let $C>0$ be any fixed constant, and
suppose $p=p(n)$ is a sequence of probability values.
The existence of domination pairs in $\Gamma\in\gnp$, asymptotically almost surely, is summarized as follows:
\begin{itemize}
\item If $pn^2\to 0$, then there are no adjacent domination pairs.
\item If
\[p<\frac{\log(n)+\log(\log(n)) - \omega(n)}{n}\]
for some sequence $\omega(n)$ with $\omega(n)\to+\infty$, then there are at least $C$ non-adjacent domination pairs.
\item If we still have $p<n^{-1}(\log(n)+\log(\log(n)) - \omega(n))$, but in addition we have $pn^2\to \infty$,
then there are also at least $C$ adjacent domination pairs.
\item If
\[\frac{\log(n)+\log(\log(n)) + \omega_1(n)}{n} < p < 1- \frac{\log(n)+\log(\log(n)) + \omega_2(n)}{n},\]
for some sequences $\omega_1(n)$, $\omega_2(n)$, both tending to positive infinity, then there are no domination pairs.
\item If
\[p > 1- \frac{\log(n)+\log(\log(n)) - \omega(n)}{n}\]
for some sequence $\omega(n)$ with $\omega(n)\to+\infty$, then there are at least $C$ adjacent domination pairs.
\item If we still have $p>1-n^{-1}(\log(n)+\log(\log(n)) - \omega(n))$, but in addition we have $(1-p)n^2\to \infty$,
then there are also at least $C$ non-adjacent domination pairs.
\item If $(1-p)n^2\to 0$, then there are no non-adjacent domination pairs.
\end{itemize}
\end{theorem}
\begin{proof}
It is well known that if $pn^2\to 0$, then the probability that $\Gamma\in\gnp$ is the edgeless graph goes to $1$ (since $\Gamma$ has $\binom{n}{2}$ pairs of vertices, the probability that $\Gamma$ is the edgeless graph is $(1-p)^{n(n-1)/2}\sim e^{-pn(n-1)/2}$ as $n\to\infty$).
Adjacent domination pairs require the existence of edges, and therefore we have proven the first item.
The last item follows by a dual argument: if $(1-p)n^2\to 0$, then the probability that $\Gamma$ is the complete graph goes to $1$, but non-adjacent domination pairs require pairs of vertices with no edges between them.
We prove the items that assert existence in Proposition~\ref{pr:dominationexistence} below, and Theorem~\ref{th:dominationnonexistence} below covers the item asserting the nonexistence of domination pairs.
\end{proof}
\begin{theorem}\label{th:masterstarcut}
Suppose $p=p(n)$ is a sequence of probability values.
There are a.a.s.\ no star-cut-vertices in $\Gamma\in\gnp$ if
\begin{itemize}
\item for some sequence $\omega(n)$ with $\omega(n)\to\infty$,
\[p > \frac{\log(n)+\log(\log(n))+\omega(n)}{n}\text{, and}\]
\item either $n(1-p)\to 0$ or $n(1-p)\to \infty$.
\end{itemize}
Further, if only the first hypothesis holds, then a.a.s.\
for any star-cut-vertex $a\in\Gamma$,
there is at most one component of $\Gamma\setminus\st(a)$ with more than one vertex.
\end{theorem}
The proof of this theorem appears in \S\,\ref{se:summedcounts} below.
If vertices $a,b,c\in V$ form an isolated triangle in the complement graph $\overline{\Gamma}$, then each of $a$, $b$ and $c$ is a star-cut-vertex.
Isolated triangles in $\gnp$ are asymptotically forbidden only if $np\to\infty$ or $np\to0$.
This fact is explained in Theorem~V.16 of Bollob\'as~\cite{Bollobas}.
In particular, the presence of isolated triangles in $\overline\Gamma$ explains the possibility of star-cut-vertices if the second hypothesis in Theorem~\ref{th:masterstarcut} fails.
The following corollary is the goal of the paper.
\begin{corollary}
If the probability sequence $p$ satisfies
\[\frac{\log(n)+\log(\log(n)) + \omega_1(n)}{n} < p < 1- \frac{\log(n)+\log(\log(n)) + \omega_2(n)}{n},\]
for some sequences $\omega_1,\omega_2$ limiting to $+\infty$,
then $\OAG$ is a.a.s.\ finite for $\Gamma\in\gnp$.
Conversely, if
\[p<\frac{\log(n)+\log(\log(n)) + \omega(n)}{n}\text{ or }p>1-\frac{\log(n)+\log(\log(n)) + \omega(n)}{n}\]
for some $\omega\to+\infty$, then $\OAG$ is a.a.s.\ infinite for $\Gamma\in\gnp$.
\end{corollary}
\begin{proof}
Theorem~\ref{th:masterdomination} explains that there are a.a.s.\ no domination pairs in the first case, and a.a.s.\ there exist some domination pairs in the second case.
Theorem~\ref{th:masterstarcut} implies that a.a.s.\ there are no star-cut-vertices in the first case above.
Then the corollary follows from Fact~\ref{fa:outfinite}.
\end{proof}
\begin{remark}
If $\Gamma$ has isolated vertices then $\OAG$ has a subgroup isomorphic to the automorphism group of a free group, and if $\Gamma$ has central vertices then $\OAG$ has a subgroup isomorphic to a general linear group over the integers.
By a famous theorem of Erd\H os and R\'enyi (see Theorem~\ref{th:connectivitythreshold}), $\Gamma$ will a.a.s.\ have isolated vertices if $p$ is less than $n^{-1}(\log(n)-\omega(n))$ and central vertices if $p$ is greater than $1-n^{-1}(\log(n)-\omega(n))$ for some $\omega\to\infty$.
However, there are two narrow ranges of probability functions, where $p$ or $1-p$ is between $n^{-1}(\log(n)+\omega_1(n))$ and $n^{-1}(\log(n)+\log(\log(n))-\omega_2(n))$ for some $\omega_1,\omega_2\to\infty$, such that $\Gamma$ and $\overline{\Gamma}$ are a.a.s.\ connected but $\OAG$ is a.a.s.\ infinite.
\end{remark}
We end this section with a corollary that gives some insight into the group theory of $\OAG$ in the case that $p$ does not go to zero quickly enough.
\begin{corollary}
If $p>n^{-1}(\log(n)+\log(\log(n))+\omega(n))$ for some $\omega\to\infty$, then a.a.s. $\Out(A_\Gamma)$ is generated by dominated transvections, symmetries, and inversions only---partial conjugations are unnecessary.
\end{corollary}
\begin{proof}
It is enough to explain why a partial conjugation can be expressed as a product of dominated transvections under this hypothesis.
As explained in Theorem~\ref{th:masterstarcut}, this hypothesis implies that a.a.s.\ the star of any star-cut-vertex in $\Gamma$ has at most one complementary component with more than one vertex.
Suppose $a\in\Gamma$ is a star-cut-vertex with this property, and $S\subset\Gamma$ is a union of complementary components of $\st(a)$.
The data of $a$ and $S$ determine a partial conjugation automorphism $\alpha\in\Aut(A_\Gamma)$, which is defined on generators of $A_\Gamma$ as follows:
\[\alpha(b)=\left\{\begin{array}{cc} aba^{-1} & b\in S \\ b & b\notin S.\end{array}\right.\]
If $b\in\Gamma$ is an isolated vertex in $\Gamma\setminus\st(a)$, then $a$ dominates $b$ and the dominated transvections multiplying $b$ by $a^{\pm1}$ on the right and on the left all exist; these are the four automorphisms that fix all generators other than $b$, but send $b$ to $a^{\pm1}b$ or $ba^{\pm1}$.
Let $S_1$ be the component of $\Gamma\setminus \st(a)$ with more than one vertex, if it exists.
If $\alpha$ does not fix $S_1$ (meaning that $S_1\subset S$), we can compose $\alpha$ with an inner automorphism to get an automorphism that does fix $S_1$.
In any case, the class of $\alpha$ in $\OAG$ is represented by an automorphism that conjugates certain generators by $a^{\pm1}$ and fixes the rest, and such that those generators that it does not fix are all dominated by $a$.
This automorphism is certainly a product of the dominated transvections given above.
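For instance, to make this last step concrete: if $b$ is an isolated vertex of $\Gamma\setminus\st(a)$ that the automorphism conjugates, then $b\mapsto aba^{-1}$ is the composition of the two dominated transvections $b\mapsto ab$ and $b\mapsto ba^{-1}$ (performed in either order, fixing all other generators).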
\end{proof}
\subsection{Conventions}
We always use $\Gamma$ to denote a finite graph with vertex set $V$ and edge relation $\sim$.
Between functions, $\sim$ denotes asymptotic equivalence, that is, the ratio of the two functions tends to $1$.
We use the notations $f\in O(g)$ and $f\lesssim g$ to indicate that eventually $f$ is less than a constant multiple of $g$, and we also use $O(f)$ to denote an unknown function asymptotically bounded by $f$.
\section{Domination pairs}
\subsection{Duality of domination pairs}
We will exploit the following connection between adjacent domination and non-adjacent domination.
The \emph{link} $\lk(a)$ of a vertex $a$ is $\st(a)\setminus\{a\}$; then $a>b$ if and only if $\lk(b)\subset\st(a)$.
\begin{lemma}\label{le:adpnadp}
For $a,b\in V$, we have $a>b$ in $\Gamma$ if and only if $b>a$ in the complement graph $\overline \Gamma$.
In particular, $\overline \Gamma$ has as many adjacent domination pairs as $\Gamma$ has non-adjacent ones, and vice versa.
\end{lemma}
\begin{proof}
We add subscripts to our notations for stars and links to make clear which graph we are taking these stars and links in.
Of course, $a>b$ in $\Gamma$ if and only if $\lk_\Gamma(b)\subset\st_\Gamma(a)$.
Note that $\lk_{\overline{\Gamma}}(a)=V\setminus\st_\Gamma(a)$, and $\st_{\overline{\Gamma}}(b)=V\setminus \lk_\Gamma(b)$.
Then $\lk_\Gamma(b)\subset\st_\Gamma(a)$ if and only if $\lk_{\overline{\Gamma}}(a)\subset\st_{\overline{\Gamma}}(b)$, that is, if and only if $b>a$ in $\overline{\Gamma}$, which proves the lemma.
\end{proof}
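For a concrete illustration, suppose $\Gamma$ is the path $a\sim b\sim c$. Then $\lk_\Gamma(a)=\{b\}\subset\st_\Gamma(b)$, so $(b,a)$ is an adjacent domination pair in $\Gamma$; in $\overline\Gamma$, whose only edge is $a\sim c$, we have $\lk_{\overline\Gamma}(b)=\varnothing\subset\st_{\overline\Gamma}(a)$, so $(a,b)$ is a non-adjacent domination pair there, as the lemma predicts.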
\subsection{Existence results}
Our existence results follow well-known facts about random graphs by using some straightforward deductions.
The following statement is taken from Bollob\'as~\cite{Bollobas}, Theorem~III.1, and incorporates a comment preceding that theorem.
\begin{theorem}\label{th:manyvalence1}
Let $C>0$ be fixed.
If $p$ is a sequence of probabilities such that $pn^2\to\infty$ and $pn^{3/2}\to 0$, or if $p>\epsilon n^{-3/2}$ for some $\epsilon>0$ and
\begin{equation}\label{eq:v1condition} n(n-1)p(1-p)^{n-2}\to \infty\text{ as }n\to\infty,\end{equation}
then $\Gamma\in\gnp$ a.a.s.\ has at least $C$ vertices of valence $1$.
\end{theorem}
We note an easy corollary of this.
\begin{corollary}\label{co:manyvalence1}
If $pn^2\to\infty$ and $p < n^{-1}(\log(n)+\log(\log(n))-\omega(n))$ for some $\omega(n)$ tending to positive infinity,
then $\Gamma\in\gnp$ a.a.s.\ has at least $C$ vertices of valence $1$ for any fixed $C$.
\end{corollary}
\begin{proof}
We break the sequence $\{(n,p(n))\}_n$ into three subsequences.
The first one satisfies $pn^{3/2}\to 0$,
the second satisfies $p>\epsilon n^{-3/2}$ for some $\epsilon>0$ and $p<n^{-1}((1/2)\log(n)-\omega_1(n))$ for some $\omega_1(n)$ with $\omega_1(n)\to\infty$,
and the third satisfies $p> (1/4)n^{-1}\log(n)$.
The probability that $\Gamma$ has at least $C$ vertices of valence $1$ goes to $1$ on all three subsequences;
it does so on the first one because it falls under the first clause of Theorem~\ref{th:manyvalence1} and it does so on the other subsequences by the second clause of that theorem, as we now show.
Since $p\to 0$, we know $(1-p)^{n-2}\sim e^{-np}$; then the limit in Equation~\eqref{eq:v1condition} is asymptotically equivalent to $n^2pe^{-np}$.
By substituting our bounds for $p$ in the second subsequence, we get a lower bound on this limit:
\[n^2pe^{-np} > \epsilon n^{1/2} \cdot n^{-1/2}e^{\omega_1(n)} .\]
We do the same for the third subsequence:
\[n^2pe^{-np} > (1/4) n\log(n) \cdot n^{-1}\log(n)^{-1}e^{\omega(n)}.\]
These lower bounds go to infinity, so the theorem applies.
\end{proof}
The next statement is from Bollob\'as~\cite{Bollobas}, Theorem~V.4.
An \emph{isolated edge} is one both of whose endpoints have valence $1$.
\begin{theorem}\label{th:noisoedges}
Fix $C>0$.
If $2np-\log(n)-\log(\log(n))\to\infty$, then $\Gamma\in\gnp$ a.a.s.\ does not have any isolated edges.
\end{theorem}
From these we deduce the following:
\begin{proposition}\label{pr:v1notiso}
Let $C>0$ be fixed.
If $p$ is in the range
\[\frac{\log(n)+\log(\log(n))+\omega_1(n)}{2n} < p < \frac{\log(n)+\log(\log(n))-\omega_2(n)}{n},\]
for some sequences $\omega_1(n),\omega_2(n)$ approaching positive infinity,
then $\Gamma\in\gnp$ a.a.s.\ has at least $C$ vertices of valence $1$ that are not on isolated edges.
\end{proposition}
\begin{proof}
If $\Gamma$ has less than $C$ vertices of valence $1$ that are not on isolated edges, then either some of the vertices of valence $1$ in $\Gamma$ are on isolated edges, or $\Gamma$ has less than $C$ vertices of valence $1$.
Let $R$ denote the event that $\Gamma$ has less than $C$ vertices of valence $1$ that are not on isolated edges, let $S$ denote the event that there are less than $C$ vertices of valence $1$, and let $T$ denote the event that there are some isolated edges.
Then $R\subset S\cup T$.
Any $p$ in the given range certainly satisfies the hypotheses of Theorem~\ref{th:noisoedges}, by the choice of lower bound.
Also, $p$ satisfies the hypotheses of Corollary~\ref{co:manyvalence1}.
So we know $\prob(S)$ goes to $0$ and $\prob(T)$ goes to $0$.
Then since
\[0 \leq \prob(R) \leq \prob(S) + \prob(T)\]
we have $\prob(R)\to 0$.
\end{proof}
We need one more classical result, due to Erd\H os and R\'enyi~\cite{ErdosRenyi}.
A reference is Bollob\'as~\cite[Theorem~V.3, Theorem~VII.3]{Bollobas}.
\begin{theorem}[Erd\H os--R\'enyi]
\label{th:connectivitythreshold}
If
\[p<\frac{\log(n)-\omega(n)}{n}\]
for some $\omega(n)\to\infty$, then a.a.s.\ $\Gamma\in\gnp$ has at least $C$ isolated vertices for any fixed $C>0$.
If
\[p>\frac{\log(n)+\omega(n)}{n}\]
for some $\omega(n)\to\infty$, then a.a.s\ $\Gamma\in\gnp$ is connected.
\end{theorem}
\begin{proposition}\label{pr:dominationexistence}
Let $C>0$.
If
\[p<\frac{\log(n)+\log(\log(n)) - \omega(n)}{n}\]
for some sequence $\omega(n)$ with $\omega(n)\to+\infty$, then a.a.s.\ there are at least $C$ non-adjacent domination pairs in $\Gamma\in\gnp$.
If further, $pn^2\to \infty$,
then there are also at least $C$ adjacent domination pairs.
Dually, if
\[1-p<\frac{\log(n)+\log(\log(n)) - \omega(n)}{n}\]
for some sequence $\omega(n)$ with $\omega(n)\to+\infty$, then a.a.s.\ there are at least $C$ adjacent domination pairs in $\Gamma\in\gnp$.
Finally, if $(1-p)pn^2\to \infty$ as well,
then there are also at least $C$ non-adjacent domination pairs.
\end{proposition}
\begin{proof}
It is enough to show the first two statements in the proposition, where $p\to 0$; then the second two statements, where $p\to 1$, follow dually by Lemma~\ref{le:adpnadp}.
If $b$ is an isolated vertex, then for any other vertex $a$, the pair $(a,b)$ is a non-adjacent domination pair.
If $b$ is a vertex of valence $1$ and $a$ is the vertex $b$ is adjacent to, then $(a,b)$ is an adjacent domination pair.
If $b$ is a vertex of valence $1$ adjacent to a vertex $a$, and $c$ is some third vertex adjacent to $a$, then $(c,a)$ is a non-adjacent domination pair.
So the number of adjacent domination pairs is at least the number of vertices of valence $1$, and the number of non-adjacent domination pairs is at least the number of isolated vertices, plus the number of vertices of valence $1$ not on isolated edges.
If $p$ satisfies the more general hypotheses of the proposition, we break it into at most two subsequences, one which satisfies the hypotheses of Proposition~\ref{pr:v1notiso} and the other which satisfies $np-\log(n)\to -\infty$.
On the first subsequence, the probability of having at least $C$ valence-$1$ vertices not on isolated edges goes to $1$,
and on the second subsequence, the probability of having at least $C$ isolated vertices goes to $1$ by Theorem~\ref{th:connectivitythreshold}.
So the probability of having at least $C$ non-adjacent domination pairs goes to $1$ on the entire sequence.
If $p$ satisfies the more restrictive hypotheses, then Corollary~\ref{co:manyvalence1} applies and there are a.a.s.\ at least $C$ vertices of valence $1$.
Then a.a.s.\ we also have at least $C$ adjacent domination pairs.
\end{proof}
\subsection{Nonexistence results}
We proceed to count non-adjacent domination pairs; our results on adjacent ones follow using Lemma~\ref{le:adpnadp}.
\begin{proposition}\label{pr:countnadp}
The expected number of non-adjacent domination pairs in $\Gamma$ in $\gnp$ is
\[n(n-1)(1-p)(p+(1-p)^2)^{n-2}.\]
\end{proposition}
\begin{proof}
Let $X\co \gnp\to \Z$ be the random variable with $X(\Gamma)$ equal to the number of pairs $(a,b)$ of distinct vertices in $V$ with $a$ not adjacent to $b$ and $a>b$ in $\Gamma$.
For each pair $(a,b)\in V^2$ with $a\neq b$, we define a random variable
$\widehat X_{(a,b)}\co \gnp\to \Z$ with $\widehat X_{(a,b)}(\Gamma)$ equal to $1$ if $a$ is not adjacent to $b$ and $a>b$, and equal to $0$ otherwise.
The expectation of $\widehat X_{(a,b)}$ is the probability that $(a,b)$ is a non-adjacent domination pair.
Suppose $a,b\in V$ with $a\neq b$.
For each $c\in V\drop \{a,b\}$, let $S_c\subset \gnp$ denote the event that
either $c\sim a$ or both $c\not\sim a$ and $c\not\sim b$.
This is a union of two disjoint events whose probabilities are $p$ and $(1-p)^2$.
So $\prob(S_c)=p +(1-p)^2$.
Since these events involve different edges for different choices of $c$, the events $\{S_c\}_{c\in V\drop\{a,b\}}$ are independent.
The event $\bigcap_{c\in V\drop\{a,b\}}S_c$ is exactly the event that
every vertex adjacent to $b$ is also adjacent to $a$, which is by definition the event that $a>b$.
So in particular,
\[\prob(a>b)=\prob\left(\bigcap_{c\in V\drop\{a,b\}}S_c\right)=(1-p+p^2)^{n-2}.\]
The event that $a$ dominates $b$ involves only the edges from $a$ and $b$ to other vertices; in particular, the event that $a$ is non-adjacent to $b$ is independent of it.
Then the expectation of $\widehat X_{(a,b)}$ is the product of these probabilities:
\[\Expect(\widehat X_{(a,b)})=(1-p)(p+(1-p)^2)^{n-2}.\]
Since $X$ counts the number of non-adjacent domination pairs, and each $\widehat X_{(a,b)}$ counts whether $(a,b)$ is such a pair, we have:
\[X = \sum_{a\in V}\sum_{b\in V\drop\{a\}} \widehat X_{(a,b)}.\]
Then by linearity of expectations:
\[\Expect(X)=n(n-1)(1-p)(p+(1-p)^2)^{n-2}.\]
\end{proof}
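As a quick sanity check (a small worked instance, not needed for what follows), take $n=3$ with vertices $\{a,b,c\}$: the pair $(a,b)$ is a non-adjacent domination pair exactly when $a\not\sim b$ and it is not the case that $c\sim b$ while $c\not\sim a$, so its probability is $(1-p)\bigl(1-p(1-p)\bigr)=(1-p)(p+(1-p)^2)$; summing over the $6$ ordered pairs recovers the formula with $n=3$.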
To show that the probability of a domination pair goes to zero in a certain range, we take limits of these expectations and use Markov's inequality.
We will use the following lemma in taking these limits.
In fact, counting star-cut-vertices will involve a closely related limit, so to reuse this lemma, we use a parameter $k$.
\begin{lemma}\label{le:convexbound}
Suppose $k\geq 1$ is an integer and $p=p(n)$ satisfies
\[2\frac{\log(n)+\omega(n)}{n} \leq p\leq 1-(k+1)\frac{\log(n)+\omega(n)}{n},\]
for some sequence $\omega(n)$ that approaches infinity.
Let $F(x,y)$ be defined by
\[F(x,y)=x^{k+1}(y+(1-y)^{k+1})^{x-k-1},\] for suitable $x,y\in\R$.
Then
$\lim_{n\to\infty} F(n,p)=0.$
\end{lemma}
\begin{proof}
We take the second partial derivative
\[
\begin{split}
\frac{\partial^2 F}{\partial y^2} &= x^{k+1}(x-k-1)(y+(1-y)^{k+1})^{x-k-3}\\
&\quad\cdot ((x-k-2)(1-(k+1)(1-y)^k)^2+(k+1)k(1-y)^{k-1}).
\end{split}
\]
For values of $y$ in $[0,1]$ and values of $x$ in $(k+3,\infty)$,
we see that $\partial^2 F/\partial y^2$ is positive, so that $F$ is concave up in its second input (in this range).
Let $a(n)$ be the lower bound for $p$ from the statement, and let $1-b(n)$ be the upper bound.
Then for large enough $n$, we have
\[0\leq F(n,p(n))\leq \max\{F(n,a(n)),F(n,1-b(n))\}.\]
Using the well-known bound $(1+s)^t\leq e^{st}$ (for $t>0$), we see
\[F(n,a(n))\leq n^{k+1}\exp[(n-k-1)(a(n)+(1-a(n))^{k+1}-1)].\]
We may write this bound as
\[n^{k+1}\exp(-kna(n) + O(na(n)^2))\]
as $n\to\infty$,
using the binomial expansion of $(1-a(n))^{k+1}$ and the fact that $a(n)$ is $O(na(n)^2)$.
This is equivalent to $n^{k+1}\cdot n^{-2k}e^{-2k\omega(n)}$ as $n\to\infty$, so $F(n,a(n))\to 0$.
Similarly, we have
\[F(n,1-b(n))\leq n^{k+1}\exp[(n-k-1)(b(n)^{k+1}-b(n))],\]
which can be written as
\[n^{k+1}\exp(-nb(n)+O(nb(n)^2))\]
as $n\to\infty$.
This is equivalent to $n^{k+1}\cdot n^{-k-1}\exp(-(k+1)\omega(n))$ as $n\to\infty$, so $F(n,1-b(n))\to 0$.
Then $F(n,p)\to 0$ as $n\to\infty$ for any sequence $p(n)$ in the given range.
\end{proof}
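To illustrate the lemma in the case used below, take $k=1$ and evaluate $F$ at the lower endpoint $a(n)=2n^{-1}(\log(n)+\omega(n))$, assuming for simplicity of the illustration that $na(n)^2\to 0$ (for instance $\omega(n)=\log(\log(n))$): then
\[F(n,a(n))=n^2(1-a(n)+a(n)^2)^{n-2}\lesssim n^2e^{-na(n)}=e^{-2\omega(n)}\to 0.\]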
\begin{proposition}\label{pr:firstnonexist}
Suppose $\omega(n)$ is a sequence of real numbers tending to positive infinity.
If $p=p(n)$ is a sequence of probability values satisfying
\begin{equation*}
2\frac{\log(n)+\omega(n)}{n}\leq p\leq 1-\frac{\log(n)+\log(\log(n))+\omega(n)}{n},
\end{equation*}
then a.a.s.\ $\Gamma\in\gnp$ has no non-adjacent domination pairs.
\end{proposition}
\begin{proof}
Let $X$ be the random variable from the Proposition~\ref{pr:countnadp}.
According to that proposition, we have
\[\Expect(X)\leq n^2(p+(1-p)^2)^{n-2}.\]
Then by Lemma~\ref{le:convexbound}, with $k=1$, we have that $\Expect(X)\to 0$ as $n\to\infty$ if we assume that
$p\leq 1-2n^{-1}(\log(n)+\omega(n))$.
Next we momentarily assume that
$p(n)\geq 1-3n^{-1}\log(n)$.
We change variables to use $q=1-p$.
In this case
\[\Expect(X)\leq n^2q(1-q+q^2)^{n-2}\leq n^2q\exp(-nq+O(nq^2)).\]
Then using the bounds on $q$, we have that
\[\Expect(X)\lesssim 3n\log(n)\cdot n^{-1}\log(n)^{-1}e^{-\omega(n)}.\]
This certainly limits to zero.
If $p$ satisfies the more general bounds, we subdivide the sequence $(n,p(n))$ into two subsequences, the first of which satisfies $p(n)\leq 1-2n^{-1}(\log(n)+\omega(n))$, and the second of which satisfies $p(n)\geq 1-3n^{-1}\log(n)$.
Since $\Expect(X)$ goes to zero on both of these subsequences, we have that $\Expect(X)\to 0$ as $n\to\infty$.
Markov's inequality states that for any $\lambda>0$,
\[\prob(X\geq \lambda)\leq (1/\lambda)\Expect(X).\]
Setting $\lambda=1/2$, we have $\prob(X\geq 1/2)=\prob(X\neq 0)$, which therefore goes to $0$ as $n\to\infty$.
\end{proof}
To tighten the range of probability functions in which non-adjacent domination pairs occur, we consider the following configuration, which we call a \emph{domination diamond}.
This is a quadruple $(a,b,c,d)$ of vertices, all distinct, with $a\sim b\sim c\sim d \sim a$, $a\not\sim c$, $b\not\sim d$, and $a>c$.
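For example, if $\Gamma$ is the $4$-cycle with vertices $a,b,c,d$ in cyclic order, then $\lk(c)=\{b,d\}\subset\{d,a,b\}=\st(a)$, so $a>c$ and $(a,b,c,d)$ is a domination diamond.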
\begin{lemma}\label{le:mustbeldd}
If $(a,c)$ is a non-adjacent domination pair, $c$ is not isolated, and nothing dominates $c$ adjacently, then there is a domination diamond $(a,b,c,d)$ in $\Gamma$.
\end{lemma}
\begin{proof}
Of course $\lk(c)$ is nonempty.
If the induced subgraph on $\lk(c)$ is a complete graph, then any element of $\lk(c)$ adjacently dominates $c$.
So there are two vertices $b,d\in\lk(c)$ with $b\not\sim d$.
Since $b,d\sim c$ and $a>c$, we know $b,d\sim a$.
\end{proof}
\begin{proposition}\label{pr:noldd}
If $p\to 0$ and $np\to\infty$ as $n\to\infty$, then a.a.s.\ $\Gamma\in\gnp$ has no domination diamonds.
\end{proposition}
\begin{proof}
Let $\widehat W_{(a,b,c,d)}$ be the random variable which is $1$ if $(a,b,c,d)$ is a domination diamond and $0$ otherwise.
Then
\[\Expect(\widehat W_{(a,b,c,d)})=p^4(1-p)^2(p+(1-p)^2)^{n-4},\]
since the mandated edges and non-edges among $(a,b,c,d)$ are given, and any fifth vertex $e$ must be adjacent to $a$, or else adjacent to neither $a$ nor $c$.
Setting $W$ equal to the sum of all $\widehat W_{(a,b,c,d)}$ over all choices of $(a,b,c,d)$, we have that the random variable $W$ counts the number of domination diamonds.
By additivity:
\[\Expect(W)=\frac{n!}{(n-4)!}p^4(1-p)^2(p+(1-p)^2)^{n-4}.\]
Since $p\to 0$, we know
\[(p+(1-p)^2)^{n-4}\sim \exp(-np).\]
Then
\[\Expect(W)\sim (np)^4\exp(-np).\]
Since the function $t\mapsto t^4e^{-t}$ converges to $0$ as $t\to\infty$, our hypothesis that $np\to\infty$ forces $\Expect(W)$ to converge to $0$ as $n\to\infty$.
The proposition follows immediately by Markov's inequality.
\end{proof}
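As a concrete instance of the proposition, if $p=n^{-1}\log(n)$ (so that $p\to 0$ and $np\to\infty$), then $\Expect(W)\sim(\log(n))^4e^{-\log(n)}=(\log(n))^4/n\to 0$.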
\begin{theorem}\label{th:dominationnonexistence}
Suppose $\omega_1(n), \omega_2(n)$ are sequences with $\omega_1(n),\omega_2(n)\to+\infty$ as $n\to\infty$.
If $p$ satisfies:
\[\frac{\log(n)+\log(\log(n))+\omega_1(n)}{n}<p<1-\frac{\log(n)+\log(\log(n))+\omega_2(n)}{n}\]
then $\Gamma\in\gnp$ a.a.s.\ has no domination pairs.
\end{theorem}
\begin{proof}
By Lemma~\ref{le:adpnadp}, it is enough to show that a.a.s.\ there are no non-adjacent domination pairs.
We break $p$ into two subsequences, one where Proposition~\ref{pr:firstnonexist} applies, and one where $p\to 0$.
Then it is enough to show that the theorem holds if we assume $p\to 0$ as $n\to\infty$.
Let $A$ be the event that there is some adjacent domination pair,
$B$ the event that there is some non-adjacent domination pair,
$C$ the event that there is some isolated vertex, and
$D$ the event that there is some domination diamond.
Proposition~\ref{pr:firstnonexist} and Lemma~\ref{le:adpnadp} tell us that $\prob(A)\to 0$ in this range, since replacing $p$ with $(1-p)$ puts the probability function in the range in which there are no non-adjacent domination pairs.
Theorem~\ref{th:connectivitythreshold} implies that $\prob(C)\to 0$.
Further, Proposition~\ref{pr:noldd} tell us that $\prob(D)\to 0$.
Lemma~\ref{le:mustbeldd} implies that for every non-adjacent domination pair $(a,c)$, one of the following holds: (1) $c$ is isolated, (2) there is a domination diamond $(a,b,c,d)$, or (3) something adjacently dominates $c$.
In other words, $B\subset A\cup C\cup D$.
Then
\[0\leq\prob(B)\leq\prob(A)+\prob(C)+\prob(D),\]
and therefore $\prob(B)\to 0$.
The theorem follows.
\end{proof}
\section{Star 2-connectedness}
\subsection{Star separations}
A subset $S\subset V$ is a \emph{separation} of $\Gamma$ if $S\neq\varnothing$, $S\neq V$ and there are no edges in $\Gamma$ from $S$ to $V\drop S$.
Define a \emph{star separation} of $\Gamma$ to be a pair $(a,S)$ with $a\in V$ and $S\subset V\drop \st(a)$, such that $S$ is a separation of $\Gamma\drop\st(a)$.
Call a star separation a star $k$-separation if $\abs{S}=k$.
A star separation $(a,S)$ is \emph{proper} if $S$ is not a separation of $\Gamma$.
Given a separation $S$, there is a star separation $(a,S)$ only if there is a vertex $a\in V\drop S$ such that $\st(a)\neq V\drop S$.
Of course, $\Gamma$ is star $2$-connected if and only if it has no star separations.
If $(a,\{b\})$ is a star separation, then $(a,b)$ is a non-adjacent domination pair.
However, the converse does not hold.
If $V=\st(a)\sqcup\{b\}$, then $(a,b)$ is a non-adjacent domination pair, but $(a,\{b\})$ is not a star separation.
Since this is the only way the converse can fail, in sparse graphs with many vertices non-adjacent domination pairs are practically the same thing as star $1$-separations.
This explains the similarities in the functions that describe the expected number of each.
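To make these definitions concrete, consider (as a small hypothetical example) the graph $\Gamma$ on vertex set $\{a,b,c,d\}$ whose only edges are $a\sim c$ and $b\sim c$. Then $\st(a)=\{a,c\}$, the graph $\Gamma\drop\st(a)$ has vertex set $\{b,d\}$ and no edges, and $\{b\}$ is not a separation of $\Gamma$ because of the edge $b\sim c$; so $(a,\{b\})$ is a proper star $1$-separation, and correspondingly $(a,b)$ is a non-adjacent domination pair since $\lk(b)=\{c\}\subset\st(a)$.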
\subsection{Counting small star separations}
Our first result on star-separations shows that for $k$ not depending on $n$, star $k$-separations asymptotically almost certainly do not occur in a wide range of probabilities.
\begin{proposition}\label{pr:smallstarseps}
Let $p=p(n)$ be a sequence of probabilities and let $k\geq 1$ be fixed.
Suppose
\[p\geq \frac{\log(n)+(2/k)\log(\log(n))+\omega(n)}{n}\]
for some $\omega(n)$ approaching positive infinity.
Further suppose that either $k\geq2$ or else that $n(1-p)\to 0$ or $n(1-p)\to\infty$.
Then a.a.s. $\Gamma\in\gnp$ has no star $k$-separations.
\end{proposition}
The proof appears after the next two lemmas.
We would like to proceed by fixing $k$ and counting directly the number of star $k$-separations.
However, the random variable that counts star $k$-separations has a problem: it turns out that there is a range of probabilities where the probability of a star $k$-separation existing goes to zero even though the expected number of star separations goes to infinity.
We get better bounds by counting proper star $k$-separations instead.
\begin{lemma}\label{le:countss}
Let $k>0$ be an integer and let $U_k$ be the random variable on $\gnp$ that counts the number of proper star $k$-separations.
Then the expectation of $U_k$ is
\[
\begin{split}
\Expect(U_k) & = n\binom{n-1}{k}(1-p)^k\\
&\quad \cdot
\big[(p+(1-p)^{k+1})^{n-k-1}+(1-p^{n-k-1})(1-(1-p)^{k(n-k-1)})-1\big].
\end{split}
\]
\end{lemma}
\begin{proof}
Let $U_k$ count the number of proper star $k$-separations $(a,S)$ for various $a\in V$ and $S\subset V\setminus\{a\}$ (with $|S|=k$).
Let $\wU{a}{S}$ be the random variable with value $1$ if $(a,S)$ is a proper star $k$-separation and value $0$ otherwise.
Of course the expectation $\Expect(\wU{a}{S})$ is the probability that $(a,S)$ is a proper star $k$-separation.
If $(a,S)$ is a proper star separation, then necessarily $a$ is not adjacent to any element of $S$.
This event has probability $(1-p)^k$.
This event is independent of the other aspects of the definition that we are about to describe, since we will not mention these edges again.
The next feature of the definition is that we must have every element of $V\drop (S\cup\{a\})$ either in $\lk(a)$ or not adjacent to any member of $S$.
Suppose $b\in V\drop (S\cup \{a\})$.
The event that $b$ is adjacent to $a$ has probability $p$; the event that $b$ is not adjacent to any element of $S\cup\{a\}$ is disjoint from this event and has probability $(1-p)^{k+1}$.
So the probability that $b$ is either adjacent to $a$ or not adjacent to any member of $S$ has probability $p+(1-p)^{k+1}$.
Since these events involve different edges for different choices of $b$, they are independent, and the event that every element of $V\drop (S\cup\{a\})$ is in $\lk(a)$ or not adjacent to anything in $S$ has probability
\[(p+(1-p)^{k+1})^{n-k-1}.\]
Next, we must exclude two events that are included in the previous event.
To fit the definition of a star separation, we must have that $V\drop S$ is not all of $\st(a)$.
This means that we are excluding the event that $a$ is adjacent to every element of $V\drop S$, which has probability $p^{n-k-1}$.
To ensure that $(S,a)$ is a proper star separation, we must exclude the event that $S$ is a separation.
Since we have already stipulated that there are no edges from $a$ to $S$, this is then the event that there are no edges from $V\drop (S\cup\{a\})$ to $S$.
This has probability $(1-p)^{k(n-k-1)}$.
The two events we have just described are independent, so by DeMorgan's law, the probability of either event happening is:
\[
1 - (1-p^{n-k-1})(1-(1-p)^{k(n-k-1)}).
\]
Putting this all together, we see that
\[
\begin{split}
\Expect(&\wU{a}{S})=\\
& (1-p)^k\big[(p+(1-p)^{k+1})^{n-k-1}+(1-p^{n-k-1})(1-(1-p)^{k(n-k-1)})-1\big].
\end{split}
\]
We note that there are $n$ possible choices for $a$ in $V$, and given a choice of $a$, there are $\binom{n-1}{k}$ choices for $S$.
Since $U_k$ is the sum of $\wU{a}{S}$ over all choices of $(a,S)$, the result follows from the linearity of expectations.
\end{proof}
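For instance, specializing to $k=1$ gives
\[
\Expect(U_1)=n(n-1)(1-p)\big[(p+(1-p)^2)^{n-2}+(1-p^{n-2})(1-(1-p)^{n-2})-1\big],
\]
which is at most the expected number of non-adjacent domination pairs from Proposition~\ref{pr:countnadp}; this is consistent with the earlier observation that every proper star $1$-separation $(a,\{b\})$ yields a non-adjacent domination pair $(a,b)$.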
Next we process this expression into a pair of more manageable bounds.
\begin{lemma}\label{le:newbounds}
We have the following bounds on $\Expect(U_k)$:
\[
\Expect(U_k)\leq n^2\binom{n-1}{k}(1-p)^{2k+1}(p+(1-p)^{k+1})^{n-k-2},
\]
and
\[
\Expect(U_k)\leq k n^2\binom{n-1}{k}(1-p)^{k+1}p^2(p+(1-p)^{k+1})^{n-k-2}.
\]
\end{lemma}
\begin{proof}
These bounds come from the expression in Lemma~\ref{le:countss} in essentially the same way.
We will use the following claim both times:
\begin{claim*}
Suppose $a$ and $b$ are real numbers with $0\leq a\leq b\leq 1$ and $n$ is a natural number.
Then
\[
b^n-a^n \leq nb^{n-1}(b-a).
\]
\end{claim*}
The claim follows easily from a calculus argument:
if
\[f(a)=nb^{n-1}(b-a)+a^n-b^n,\]
then $f(b)=0$ and $f'(a)=n(a^{n-1}-b^{n-1})\leq 0$ for $a$ in $(0,b)$, so $f(a)\geq f(b)=0$ whenever $0\leq a\leq b$.
Now using the claim, we deduce the lemma from Lemma~\ref{le:countss}.
Since certainly, $1-p^{n-k-1}\leq 1$, we deduce first that
\[
\Expect(U_k)\leq n\binom{n-1}{k}(1-p)^k((p+(1-p)^{k+1})^{n-k-1}-(1-p)^{k(n-k-1)}).
\]
Since $0\leq(1-p)^k\leq p+(1-p)^{k+1}\leq1$, the claim implies
\[
\begin{split}
(p&+(1-p)^{k+1})^{n-k-1}-(1-p)^{k(n-k-1)}
\\&\leq (n-k-1)(p+(1-p)^{k+1})^{n-k-2}(p+(1-p)^{k+1}-(1-p)^k)\\
&=(n-k-1)(p+(1-p)^{k+1})^{n-k-2}p(1-(1-p)^k).
\end{split}\]
Since $(1-p)^k\geq 1-kp$ for $p\in[0,1]$, it follows that
\[
\Expect(U_k)\leq kn(n-k-1)\binom{n-1}{k}p^2(1-p)^{k+1}(p+(1-p)^{k+1})^{n-k-2}.
\]
Second, since $1-(1-p)^{k(n-k-1)}<1$, we have
\[
\Expect(U_k)\leq n\binom{n-1}{k}(1-p)^k((p+(1-p)^{k+1})^{n-k-1}-p^{n-k-1}).
\]
Then since $0\leq p\leq p+(1-p)^{k+1}\leq 1$, the claim implies
\[
\Expect(U_k)\leq n(n-k-1)\binom{n-1}{k}(1-p)^{2k+1}(p+(1-p)^{k+1})^{n-k-2}.
\]
The lemma follows.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{pr:smallstarseps}]
As usual, we split $\{(n,p(n))\}_n$ into subsequences satisfying stronger hypotheses that overlap.
Therefore it is enough to show the proposition for $p$ in several subcases with stronger hypotheses.
First we suppose that $p$ satisfies
\[2\frac{\log(n)+\omega(n)}{n} \leq p\leq 1-(k+1)\frac{\log(n)+\omega(n)}{n}.\]
We get the following bound from Lemma~\ref{le:countss}.
\[
\Expect(U_k)\leq n^{k+1}(p+(1-p)^{k+1})^{n-k-1}.
\]
(This is because $(1-p^{n-k-1})(1-(1-p)^{k(n-k-1)})-1\leq 0$ and $\binom{n-1}{k}(1-p)^k\leq n^k$.)
Then by Lemma~\ref{le:convexbound}, we have that $\Expect(U_k)$ converges to $0$ for any $p$ in this range.
Next we suppose that $q\to 0$ and $nq^2\to 0$, where $q=1-p$.
By the first bound from Lemma~\ref{le:newbounds}, we see that
\[\Expect(U_k)\leq n^{k+2}q^{2k+1}(1-q+q^{k+1})^{n-k-2}.\]
Since $(1-x)^m\sim e^{-mx}$ as $x\to 0$, this bound is asymptotically equivalent to
\[n^{k+2}q^{2k+1}\exp(-nq+O(nq^2)+O(q)),\]
which is equivalent to
\[q^{k-1}(nq)^{k+2}e^{-nq}.\]
From calculus, we know that $x\mapsto x^{k+2}e^{-x}$ is bounded and tends to zero as $x\to \infty$ or $x\to 0$.
So if $k>1$ or $nq\to 0$ or $nq\to \infty$, we know that $\Expect(U_k)\to 0$.
Finally, we suppose that $p\to 0$, $np^2\to 0$, and
\[p> \frac{\log(n)+(2/k)\log(\log(n))+\omega(n)}{n}\]
for some $\omega(n)\to\infty$.
Using the second bound from Lemma~\ref{le:newbounds}, we have
\[\Expect(U_k)\leq n^{k+2}p^2(p+(1-p)^{k+1})^{n-k-2}.\]
By binomial expansion, this bound can be written as
\[n^{k+2}p^2(1-kp+O(p^2))^{n-k-2}.\]
This is asymptotically equivalent to
\[n^{k+2}p^2 \exp(-knp + O(np^2)+O(p))\sim n^{k+2}p^2e^{-knp},\]
since $np^2,p\to 0$.
Then, since $knp\geq k\log(n)>2$ eventually, the map $p\mapsto p^2e^{-knp}$ is decreasing on the relevant range of $p$, and substituting our lower bound on $p$ gives
\[
\begin{split}
\Expect(U_k)&\lesssim n^{k+2}\cdot n^{-2}(\log(n)+\log(\log(n))+\omega(n))^2\cdot n^{-k}\log(n)^{-2}e^{-k\omega(n)}\\
&=\left(\frac{\log(n)+\log(\log(n))+\omega(n)}{\log(n)}\right)^2e^{-k\omega(n)}
\lesssim\omega(n)^2e^{-k\omega(n)}
\end{split}
\]
So $\Expect(U_k)\to 0$.
We have shown that under these hypotheses, there are a.a.s.\ no proper star $k$-separations.
However, Theorem~\ref{th:connectivitythreshold} implies that a.a.s.\ there are no separations for $p$ in this range.
So a.a.s.\ there are no star $k$-separations.
\end{proof}
\subsection{Summed counts of star separations}
\label{se:summedcounts}
So far, we have shown that for each $k$, $\Expect(U_k)\to 0$ in a certain range of probability values.
We would like to show that $\Expect(\sum_k U_k)\to 0$ for $p$ in a specific range.
Effectively, this requires commuting a limit and sum, which the Lebesgue dominated convergence theorem would allow.
To meet the hypotheses of this theorem, we must compute bounds on $\Expect(U_k)$.
We do this using calculus techniques, using two different bounds that are useful when $p$ is close to $1$ and when $p$ is close to $0$, respectively.
Unfortunately, showing these bounds carefully takes a fair amount of work.
We proceed to prove Theorem~\ref{th:masterstarcut}, and then we prove the bounds on $\Expect(U_k)$.
\begin{proof}[Proof of Theorem~\ref{th:masterstarcut}]
We break the sequence $\{(n,p(n))\}_n$ into two subsequences, where one satisfies $p\leq 2/5$, and the other satisfies $p\geq 2/5$.
We prove the theorem for each subsequence; then it follows that the theorem is true for the original sequence.
We suppose the first hypothesis of the theorem, that $p>n^{-1}(\log(n)+\log(\log(n))+\omega(n))$.
We note that every star $1$-separation consists of a non-adjacent domination pair.
Then Theorem~\ref{th:dominationnonexistence} shows that a.a.s., there are no star $1$-separations if $p\leq 2/5$.
If $p\geq 2/5$, then Proposition~\ref{pr:smallstarseps} includes the fact that there are a.a.s.\ no star $1$-separations, provided that the second hypothesis also holds.
Then quoting Proposition~\ref{pr:smallstarseps}, we have that
\[\lim_{n\to\infty} \Expect(U_k)=0,\]
for any fixed $k$, if $k\geq 2$ (and $p$ satisfies the first hypothesis) or if $k=1$ and $p$ satisfies both hypotheses of the theorem.
Proposition~\ref{pr:highpbound} below states that if $p\geq 2/5$, then there is a nonnegative sequence $\{a_k\}_k$ with $\Expect(U_k)\leq a_k$ for all $n$ and for all $k$ with $2k\leq n$, and such that $\sum a_k<\infty$.
Proposition~\ref{pr:lowpbound} is the same statement in the case that $p\leq 2/5$.
Then in either case, the Lebesgue dominated convergence theorem (see, e.g. Rudin~\cite[p.26]{Rudin}) applies.
Therefore if we assume both hypotheses, we have
\begin{equation}\label{eq:ldc}
\lim_{n\to\infty}\sum_{k=1}^{\lfloor n/2\rfloor}\Expect(U_k)=\sum_{k=1}^{\infty}\lim_{n\to\infty}\Expect(U_k)=0.
\end{equation}
Of course, by linearity of expectations, we have
\[\sum_{k=1}^{\lfloor n/2\rfloor}\Expect(U_k)=\Expect\left(\sum_{k=1}^{\lfloor n/2\rfloor}U_k\right).\]
The random variable on the right will be zero only if $\Gamma\in\gnp$ has no proper star-separations: if $a\in\Gamma$ is a star-cut-vertex, then some component of $\Gamma\setminus\st(a)$ has less than $n/2$ vertices.
Then by Equation~\eqref{eq:ldc} there are a.a.s.\ no proper star separations.
Since Theorem~\ref{th:connectivitythreshold} implies there are a.a.s.\ no separations, we know there are a.a.s.\ no star-cut-vertices.
Similarly, if we assume only the first hypothesis, Equation~\eqref{eq:ldc} will be true if we sum from $k=2$ to $k=\lfloor n/2\rfloor$ on both sides, instead of starting at $k=1$.
Then we deduce that
\[\lim_{n\to\infty}\Expect\left(\sum_{k=2}^{\lfloor n/2\rfloor} U_k\right)=0.\]
Of course, this random variable will be zero only if there are no proper star $k$-separations for $2\leq k<n/2$.
If the star of a star-cut-vertex has more than one complementary component with at least two vertices, then it will have a complementary component with at least two and fewer than $n/2$ vertices.
The second statement in the theorem follows.
\end{proof}
Now we bound $\Expect(U_k)$.
The choice of $2/5$ below is somewhat arbitrary.
\begin{proposition}\label{pr:highpbound}
There is a sequence of positive numbers $a_k$ such that $\sum_{k=1}^\infty a_k<\infty$ and $\Expect(U_k)\leq a_k$ for any $n\geq 2k$ and any $p\geq 2/5$.
\end{proposition}
\begin{proof}
We write the first bound on $\Expect(U_k)$ from Lemma~\ref{le:newbounds} in terms of $q=1-p$:
\[\Expect(U_k)\leq n\binom{n-1}{k} (n-k-1)q^{2k+1}(1-q+q^{k+1})^{n-k-2}.\]
In terms of $q$, our hypothesis is that $q\leq 3/5$.
We then use a bound on $\binom{n-1}{k}$ that can be derived by considering $\log \binom{n-1}{k}$ as a Riemann sum approximation to an integral of a continuous function:
\[\binom{n-1}{k}\leq \frac{n^n}{(n-k)^{n-k}k^k}.\]
We set
\[F(k,n,q)= \frac{n^{n}}{(n-k)^{n-k}k^k}n^2q^{2k+1}(1-q+q^{k+1})^{n-k-2};\]
then
\[\Expect(U_k)\leq F(k,n,q).\]
We find the bounding sequence by using vector calculus to find critical points for $F(k,n,q)$ for fixed $k$ and for $(n,q)$ in the region $[2k,\infty)\times[0,3/5]$.
\begin{claim*}
For large enough $k$, the partial derivative $\partial \log\circ F/\partial n$ is never zero on the vertical ray $(n,q)\in[2k,\infty)\times\{3/5\}$.
\end{claim*}
To show this, we consider
\[\frac{\partial \log\circ F}{\partial n}(k,n,q)=\frac{2}{n}+\log(\frac{n}{n-k})+\log(1-q+q^{k+1}).\]
This is zero if and only if
\[e^{-2/n}(n-k)+n(-1+q-q^{k+1})=0.\]
Define $f_k(n,q)=e^{-2/n}(n-k)+n(-1+q-q^{k+1})$.
Next we define $g_k(n,q)=-(k+2)+n(q-q^{k+1})$.
We use the following bound from calculus: \[|e^{-2/n}-(1-2/n)|\leq 2/n^2.\]
This implies that
\begin{equation*}
f_k(n,q)-g_k(n,q)\leq 2/n+k(2/n^2-2/n)\leq 2/k,\end{equation*}
since $n\geq 2k$.
The function $g_k(n,q)$ is chosen so that we can solve $g_k(n,q)=0$ for $n$ easily:
\begin{equation*}
g_k(n,q)=0 \text{ if and only if } n = \frac{k+2}{q-q^{k+1}}.
\end{equation*}
To show the claim, we show that $f_k(n,q)$ is not zero on the vertical ray.
Note that $g_k(n,3/5)\geq -(k+2)+2k((3/5)-(3/5)^{k+1})$ on this ray.
Since this bound is asymptotically equivalent to $(1/5)k$ we see that for large $k$, $g_k(n,3/5)>2/k$ on the ray.
This implies that $f_k(n,3/5)$ is not zero when $n>2k$ and $k$ is large enough, proving the claim.
\begin{claim*}
For large enough $k$, the function $\log\circ F$ never has zero gradient on $[2k,\infty)\times[0,3/5]$.
\end{claim*}
Note that
\begin{equation}\label{eq:pFpq}
\frac{\partial \log\circ F}{\partial q}(k,n,q)=\frac{2k+1}{q}+(n-k-2)\frac{-1+(k+1)q^k}{1-q+q^{k+1}}.
\end{equation}
We suppose that this partial derivative is $0$ and $f_k(n,q)$ is zero as well.
If $f_k(n,q)$ is zero, then $g_k(n,q)=\epsilon$ for some $\epsilon=\epsilon(n,q,k)$ with $|\epsilon|\leq 2/k$.
Then
\[q-q^{k+1}=\frac{k+2-\epsilon}{n}.\]
Assuming that $\frac{\partial \log\circ F}{\partial q}=0$, then substituting this for one instance of $q-q^{k+1}$ in the expression in Equation~\eqref{eq:pFpq}, we get the equation
\[\frac{2k+1}{q}+\frac{n-k-2}{n-k-2+\epsilon}\cdot n(-1+(k+1)q^k)=0.\]
Next we use the substitution $n=(k+2-\epsilon)/(q-q^{k+1})$ on one of the instances of $n$ to get
\[\frac{2k+1}{q}+\frac{n-k-2}{n-k-2+\epsilon}\cdot\frac{k+2-\epsilon}{q-q^{k+1}}\cdot (-1+(k+1)q^k)=0.\]
We write $c=\frac{n-k-2}{n-k-2+\epsilon}$.
Note that $c$ is close to $1$.
From here, it is straightforward to solve for $q^k$:
\[q^k=\frac{(2-c)k+1-c(2-\epsilon)}{-ck^2+(2-c(3-\epsilon))k+1-c(2-\epsilon)}.\]
The right side of this equation is negative, so it has no real solution for $q$ if $k$ is even.
Taking the limit as $k\to\infty$ on odd $k$, we see that $q\to -1$.
This implies that for large enough $k$, we never have both $f_k(n,q)=0$ and $\frac{\partial \log\circ F}{\partial q}=0$; this proves the claim.
\begin{claim*}
For large enough $k$, the maximum of $F(k,n,q)$ for $(n,q)\in[2k,\infty)\times[0,3/5]$ is realized at $(n,q)=(2k,3/5)$.
\end{claim*}
We have shown that the gradient of $\log\circ F$ never vanishes on $[2k,\infty)\times[0,3/5]$, and that the partial with respect to $n$ never vanishes on the left boundary of the region.
Note that $F$ is zero on the right boundary.
So it is enough to show that $F$ is increasing in $q$ for $q<3/5$ when $n=2k$.
It is straightforward to see from Equation~\eqref{eq:pFpq} that:
\[\frac{\partial \log\circ F}{\partial q}\geq\frac{2k}{q}-\frac{k-2}{1-q}.\]
However, $q\mapsto \frac{2k}{q}-\frac{k-2}{1-q}$ is plainly decreasing in $q$;
evaluating it at $3/5$, we see that $\frac{\partial \log\circ F}{\partial q}\geq (5/6)k+5>0$.
This proves this last claim.
Then to prove the lemma, we note the following:
\[\lim_{k\to\infty}\frac{F(k+1,2k+2,3/5)}{F(k,2k,3/5)}=\frac{72}{125}<1.\]
So the maximum value of $\Expect(U_k)$ for $q$ in this range is eventually bounded above by an exponentially decaying function of $k$.
\end{proof}
\begin{proposition}\label{pr:lowpbound}
There is a sequence of positive numbers $a_k$ such that $\sum_{k=1}^\infty a_k<\infty$ and
$\Expect(U_k)\leq a_k$
for any $n\geq 2k$ and any $p$ satisfying
\[\frac{\log(n)+\log(\log(n))}{n} \leq p \leq 2/5.\]
\end{proposition}
\begin{proof}
Define $G(k,n,p)$ by
\[G(k,n,p)=\frac{n^n}{(n-k)^{n-k}k^{k-1}}n^2p^2(1-p)^{k+1}(p+(1-p)^{k+1})^{n-k-2}.\]
Then by Lemma~\ref{le:newbounds} (the second bound) and the bound on $\binom{n-1}{k}$ from the previous proposition, we know that
\[G(k,n,p)\geq \Expect(U_k).\]
To bound $\Expect(U_k)$ by a function of $k$, we will show that for sufficiently large fixed values of $k$, the maximum of $G$ on the region
\[R=\{(n,p)| n\geq 2k, \frac{\log(n)+\log(\log(n))}{n} \leq p \leq 2/5\}\]
is at one of the corners of $R$.
\begin{claim}
For large enough $k$, the gradient of $\log\circ G$ is never zero on $R$.
\end{claim}
Since $p\mapsto 1-p-(1-p)^{k+1}$ is concave down, it is easy to verify that for large enough $k$ and $p<2/5$, we have
\[1-p-(1-p)^{k+1}\geq \min(\frac{(k+2)p}{2},\frac{k+2}{2k}).\]
We use this inequality in a bound on
\[\frac{\partial \log\circ G}{\partial n}(k,n,p)=\frac{2}{n}+\log(1+\frac{k}{n-k})+\log(p+(1-p)^{k+1}).\]
It is immediate that
\[\frac{\partial \log\circ G}{\partial n}(k,n,p)\leq \frac{2}{n}+\frac{k}{n-k}-1+p+(1-p)^{k+1},\]
from which we deduce
\[\frac{\partial \log\circ G}{\partial n}(k,n,p)\leq \frac{2}{n}+\frac{k}{n-k} - \min(\frac{(k+2)p}{2},\frac{k+2}{2k}).\]
If $\frac{k+2}{2k}$ is the smaller quantity, then it is straightforward to check that this bound is negative (using $n\geq 2k$).
If $\frac{(k+2)p}{2}$ is the smaller quantity, then using $p\geq n^{-1}(\log(n)+\log(\log(n)))$ and $n\geq 2k$, it is also routine to check that the bound is negative for large $k$.
Then $\frac{\partial \log\circ G}{\partial n}$ is always negative on $R$, so that the gradient is never zero on $R$.
Next we deal with the sloping boundary of $R$.
\begin{claim}
For all $k$ large enough and $n\geq 2k$, the function $G(k,n,p(n))$ is decreasing in $n$, where
\[p=p(n)=\frac{\log(n)+\log(\log(n))}{n}.\]
\end{claim}
First we compute the logarithmic partial derivative of $G(k,n,p(n))$ with respect to $n$:
\[\begin{split}
\frac{\partial}{\partial n}\log(G(k,n,p(n)))&= \log(\frac{n}{n-k})+\frac{2}{n}+\frac{2p'}{p}-\frac{kp'}{1-p}
\\&\quad
+\log(p+(1-p)^{k+1}))
\\&\quad
+ p'(n-k-2)\frac{1-(k+1)(1-p)^{k}}{p+(1-p)^{k+1}}.
\end{split}
\]
In computing upper bounds on this expression, we will freely assume that $k$ is large, and we will always assume that $n\geq 2k$.
Since $1-(k+1)^{-1/k}$ goes to $0$ more slowly than $p(2k)$ as $k\to\infty$, we may assume that $1-(k+1)(1-p)^{k}$ is negative.
We may assume that $p'$ is negative.
Further, note that $p+(1-p)^{k+1}\geq (1-p)^k$.
Then
\[p'(n-k-2)\frac{1-(k+1)(1-p)^{k}}{p+(1-p)^{k+1}}\leq (n-k-2)p'((1-p)^{-k}-1-k).\]
By logarithm rules and the inequality $\log(1+x)\leq x$, we may deduce
\[
\log(\frac{n}{n-k})\leq \frac{k}{n-k}\text{ and }\log(p+(1-p)^{k+1})\leq p((1-p)^{-k-1}-1-k).\]
Then we have
\begin{equation}\label{eq:derivbound}
\begin{split}
\frac{\partial}{\partial n}\log(G(k,n,p(n)))&\leq \frac{k}{n-k}+\frac{2}{n}+\frac{2p'}{p}-\frac{kp'}{1-p}
\\&\quad
+p((1-p)^{-k-1}-1-k)
\\&\quad
+ (n-k-2)p'((1-p)^{-k}-1-k).
\end{split}
\end{equation}
To process this expression, we start combining terms.
First of all, it is straightforward to show
\[\frac{2}{n}+\frac{2p'}{p}=2\frac{\log(n)+1}{\log(n)(\log(n)+\log(\log(n)))}\leq \frac{2}{\log(n)}.\]
Next, we note
\[\frac{k}{n-k}-k(p+np')=\frac{k^2}{n(n-k)}-\frac{k}{n\log(n)}.\]
Since
\[p'\leq -\frac{\log(n)}{n^2},\]
we have
\[\frac{k}{n-k}-k(p+(n-k-2)p')\leq \frac{k^2}{n(n-k)}-\frac{k}{n\log(n)}-\frac{k(k+2)\log(n)}{n^2}.\]
Next we note that
\[\frac{-kp'}{1-p}\leq \frac{2k\log(n)}{n^2}.\]
Since the $-k(k+2)\log(n)/n^2$ dominates the positive terms, we may deduce that for any positive constant strictly less than $1$, say $1/2$, we have
\begin{equation}
\label{eq:mainbound}
\frac{k}{n-k}-k(p+(n-k-2)p')+\frac{2}{n}+\frac{2p'}{p}-\frac{kp'}{1-p}\leq -\frac{k}{n\log(n)}-\big(\frac{1}{2}\big)\frac{k^2\log(n)}{n^2}.
\end{equation}
The remaining terms in our bound from Equation~\eqref{eq:derivbound} may be written as:
\begin{equation}\label{eq:extraterms}(p+(n-k-2)p')((1-p)^{-k-1}-1)-pp'(n-k-2)(1-p)^{-k-1}.\end{equation}
Since we assume that $p<1/2$, we know $-2\log(2)p<\log(1-p)$.
Since $(1-p)^{-k-1}$ is decreasing in $n$ for fixed $k$, we may get an upper bound by plugging in for $n=2k$.
Then
\[(1-p)^{-k-1}\leq (2k\log(2k))^{\log(2)\frac{k+1}{k}}\leq 2k^{\frac{3}{4}}.\]
This gives us an immediate upper bound on the second term from Equation~\eqref{eq:extraterms} as follows, using the obvious bound $p\leq 2\log(n)/n$:
\[-pp'(n-k-2)(1-p)^{-k-1}\leq \frac{8k^{\frac{3}{4}}(\log(n))^2}{n^2}.\]
Next we bound the first term in Equation~\eqref{eq:extraterms}.
Since $(1-p)^{k+1}\geq 1-(k+1)p$, we deduce that
\[(1-p)^{-k-1}-1\leq (k+1)p(1-p)^{-k-1}\leq \frac{4(k+1)k^{\frac{3}{4}}\log(n)}{n}.\]
Then
\[
\begin{split}
(p+(n-k-2)p')((1-p)^{-k-1}-1) &\leq
\frac{8(k+1)k^{\frac{3}{4}}\log(n)}{n^2} \\
&\quad +\frac{8(k+1)(k+2)k^{\frac{3}{4}}\log(n)^2}{n^3}.
\end{split}
\]
Using our assumption that $n\geq 2k$, we may then deduce that
\[
\begin{split}
(p+(n-k-2)p')((1-p)^{-k-1}-1)& \leq \frac{8(k+1)k^{\frac{3}{4}}\log(n)}{n^2} \\
&\quad +\frac{8(k+1)\log(k)k^{\frac{3}{4}}\log(n)}{n^2}.
\end{split}
\]
Then the $-k^2\log(n)/n^2$ term from Equation~\eqref{eq:mainbound} eventually (for large $k$ and all $n\geq 2k$) dominates everything from Equation~\eqref{eq:extraterms}, and the claim follows.
\begin{claim}
For large fixed $k$, the maximum value of $G(k,n,p)$ in $R$ is realized at one of the corners of $R$:
\[(n,p)=(2k,2/5)\text{ or }(n,p)=(2k,(2k)^{-1}(\log(2k)+\log(\log(2k)))).\]
\end{claim}
We have shown that the gradient of $\log\circ G$ never vanishes on $R$ and $G$ is decreasing on the sloping boundary of $R$.
In showing that the gradient never vanishes, we showed that $\frac{\partial\log\circ G}{\partial n}$ is negative on the boundary segment $p=2/5$.
This means that the maximum cannot occur on this boundary segment (away from the corner).
Now we consider the boundary segment where $n=2k$.
We consider
\[\frac{\partial \log\circ G}{\partial p}=\frac{2}{p}-\frac{k}{1-p}+(n-k-2)\frac{1-(k+1)(1-p)^k}{p+(1-p)^{k+1}}.\]
When $n=2k$, this is the function
\[p\mapsto \frac{2}{p}-\frac{k}{1-p}+(k-2)\frac{1-(k+1)(1-p)^k}{p+(1-p)^{k+1}},\]
where $(2k)^{-1}(\log(2k)+\log(\log(2k)))\leq p\leq 2/5$.
We define a function $f\co ((2k)^{-1}(\log(2k)+\log(\log(2k))),2/5)\to\R$ to be this function with denominators cleared:
\[f(p)= (2(1-p)-kp)(p+(1-p)^{k+1})+(k-2)p(1-p)(1-(k+1)(1-p)^k).\]
Computations show:
\[f''(p)=-4k+(1-p)^{k-2}g(k,p),\]
where $g$ is some polynomial in $k$ and $p$,
and
\[f'''(p)=(1-p)^{k-3}(2 k-5 k^3-3 k^4+(-2 k+7 k^3+6 k^4+k^5) p-(2 k^3+3 k^4+k^5) p^2).\]
A computation shows that $f'''(p)$ is positive if
\[\frac{-2+2 k+3 k^2}{k^2 (2+k)} < p < 1,\]
so for large $k$, $f'''(p)$ is positive on the entire domain of $f$.
Then $f''(p)\leq f''(2/5)$, for $p$ in the domain of $f$.
It is easy to see that $f''(2/5)\sim -4k$ as $k\to\infty$, so that $f''(2/5)$ is negative for large $k$ and $p$ in the domain of $f$.
Then $f$ is concave down.
Since $f(2/5)\sim \frac{2}{25}k$ as $k\to\infty$, this means that $f$ changes sign at most once.
Then $G$ is decreasing-then-increasing as $p$ increases along the boundary segment of $R$ with $n=2k$.
In particular, this proves the claim.
Finally, we can prove the proposition.
We note that
\[\lim_{k\to\infty}\frac{G(k+1,2k+2,2/5)}{G(k,2k,2/5)}=24/25<1,\]
and
\[\lim_{k\to\infty}\frac{G(k+1,2k+2,p(2k+2))}{G(k,2k,p(2k))}=0,\]
where $p(n)=n^{-1}(\log(n)+\log(\log(n)))$.
Then the sequence of values of $G$ at each corner of $R$ is eventually dominated by an exponentially decreasing function.
\end{proof}
\section*{Acknowledgments}
I was an undergraduate participant in the 2001 SUNY Potsdam--Clarkson summer REU, where I was involved with one research project on random graphs and another project on graph products of groups.
This REU was apt preparation for a project like this paper, and I am grateful to Christino Tamon and Kazem Mahdavi for their supervisory roles in those projects.
Thanks to Hanna Bennett for conversations and notes on an earlier version of this paper.
I am grateful to Niranjan Balachandran, Ruth Charney, Helge Kr\"uger and Michael Mendenhall for conversations.
\noindent
California Institute of Technology\\
Mathematics 253-37\\
Pasadena, Ca 91125\\
E-mail: {\tt [email protected]}
\end{document}
\begin{document}
\title[Superquadratic viscous Hamilton-Jacobi equation]
{Blow-up and regularization rates, \\ loss and recovery of boundary conditions for \\ the superquadratic viscous Hamilton-Jacobi equation}
\author[Porretta]{Alessio Porretta}
\address{Universit\`a di Roma Tor Vergata,
Dipartimento di Matematica,
Via della Ricerca Scientifica 1,
00133 Roma, Italia}
\email{[email protected]}
\author[Souplet]{Philippe Souplet}
\address{Universit\'e Paris 13, Sorbonne Paris Cit\'e,
CNRS UMR 7539, Laboratoire Analyse, G\'{e}om\'{e}trie et Applications,
93430 Villetaneuse, France}
\email{[email protected]}
\begin{abstract}
We study the qualitative properties of the unique global viscosity solution of the superquadratic diffusive Hamilton-Jacobi equation
with (generalized) homogeneous Dirichlet conditions.
We are interested in the phenomena of gradient blow-up (GBU), loss of boundary conditions (LBC), recovery of boundary conditions and eventual regularization, and in their mutual connections.
In any space dimension, we establish the sharp minimal rate of GBU.
Only partial results were previously known except in one space dimension.
We also obtain the corresponding minimal regularization rate.
In one space dimension, under suitable conditions on the initial data, we give a quite detailed description of the behavior of solutions for all $t>0$. In particular, we show that
nonminimal
GBU solutions immediately lose the boundary conditions after
the blow-up time and are immediately regularized after recovering the boundary data. Moreover, both GBU and regularization occur with the minimal rates, while loss and recovery of boundary data occur with linear rate. We describe further the intermediate singular life of those solutions in the time interval between GBU and regularization.
We also study minimal GBU solutions, for which GBU occurs {\it without} LBC: those solutions are immediately regularized, but their GBU and regularization rates are more singular.
Most of our one-dimensional results crucially depend on zero-number arguments,
which do not seem to have been used so far in the context of viscosity solutions of Hamilton-Jacobi equations.
\end{abstract}
\maketitle
\section{Introduction}
Let $\Omega$ be a smooth bounded domain of $\mathbb R^n$. In this article we consider the diffusive Hamilton-Jacobi equation (also called viscous Hamilton-Jacobi equation)
\begin{equation}\label{VHJ}
\begin{cases}
u_t -\Delta u = |\nabla u|^p, & t>0,\ x\in \Omega, \\
u=0, & t>0,\ x\in \partial\Omega, \\
u(0,x)= \phi, & x\in \Omega,
\end{cases}
\end{equation}
with $p>2$ and $\phi\in X$, where
$$
X=\bigl\{\phi\in C^1(\overline\Omega),\
\phi=0 \hbox{ on $\partial\Omega$}\bigr\}.
$$
Nowadays, it is well known that problem \rife{VHJ} exhibits very interesting phenomena whenever $p>2$, i.e.~in the case of superquadratic growth of the nonlinearity.
By standard theory, problem \rife{VHJ} admits a unique, maximal classical solution with existence time $T^*(\phi)\in(0,\infty]$.
Even if the initial data $\phi$ is smooth and satisfies the compatibility condition $\phi=0$ at the boundary $\partial \Omega$,
the classical solution of \rife{VHJ} may blow up
in finite time, i.e., $T^*(\phi)<\infty$, in which case
$$\lim_{t\to T^*_-}\|u(t)\|_{C^1(\overline\Omega)}=\infty.$$
This does happen for suitably large $\phi$.
On the other hand, by the maximum principle, $u$ cannot blow up in $L^\infty$-norm; so what happens is that $\|\nabla u(t)\|_\infty$
has to blow up while the solution itself remains bounded. This is referred to in the literature as {\it gradient blow-up} (GBU). Several facts are by now well understood, and two main points are to be recalled:
\begin{itemize}
\item[(a)] GBU can only happen at the boundary $\partial \Omega$ (as a consequence of local interior universal gradient estimates),
see e.g. \cite{SZ06}.
\item[(b)] The solution survives after the blow-up time and it can be continued as a generalized viscosity solution of \rife{VHJ}. To be more precise, problem \rife{VHJ} admits a unique (generalized) viscosity solution $u$ in $(0,\infty) \times \Omega$; this solution $u$ is continuous in $[0,\infty)\times \overline \Omega$, with $u\ge 0$ on $[0,\infty)\times\partial\Omega$, and coincides with the (unique) classical solution in $(0,T^*)$
(see \cite{BDL04}).
\end{itemize}
But there is more than that. When we say a {\it generalized} viscosity solution, this means that the boundary condition should be understood in a relaxed sense, as it is done for example in first order problems. Namely, the prescribed boundary condition may not be satisfied by the solution in the classical sense, but rather in a weaker sense which is encoded in the viscosity formulation.
Somehow, when the gradient blows up at the boundary, one may need to relax the boundary prescription in order that the problem be still solvable;
the superquadratic
growth of the nonlinearity overcomes the diffusive
smoothing of the second order term and a {\it first order behavior} is observed.
Recalling that $u\in C([0,\infty) \times \overline \Omega)$, we see that such solutions,
which are meant to satisfy zero boundary conditions in a generalized sense,
nevertheless have to continuously take on {\it positive} boundary values
(at some points) for some times after the blow-up time.
This apparently paradoxical situation can however be interpreted in a more intuitive way,
when one recalls that the global viscosity solution can also be obtained as the limit
of a sequence of global classical solutions of regularized versions of problem \rife{VHJ}, with truncated nonlinearity
(see Section~\ref{Sec3}). Since this convergence is monotone increasing but not uniform up to the boundary,
the loss of boundary conditions can in this framework be seen as a more familiar boundary layer phenomenon.
The possibility of {\it loss of (classical) boundary conditions} (LBC) for problem \rife{VHJ} was suggested in \cite{BDL04},
and its actual occurrence was recently confirmed in \cite{PS2}, \cite{QR16} for suitable initial data:
\begin{itemize}
\item[(c)]
For all sufficiently large initial data $\phi$, the solution $u$ undergoes GBU as well as LBC.
Namely:
$$\hbox{$T^*(\phi)<\infty$ and there exist $t_0>T^*(\phi)$ and $x_0\in\partial\Omega$ such that $u(t_0,x_0)>0$.}$$
\end{itemize}
\noindent But another key property of solutions, which was known before (see \cite{PZ}), is the eventual {\it recovery of boundary conditions:}
\begin{itemize}
\item[(d)] Any GBU solution eventually becomes a classical solution (including the boundary conditions) for all $t$ large enough.
Moreover, any solution $u(t)$ decays exponentially to $0$ in $C^1(\overline\Omega)$ as $t\to\infty$.
\end{itemize}
Therefore, the behavior of solutions of \rife{VHJ} presents three main issues:
\vskip 1pt
\begin{itemize}
\item[(i)] gradient blow-up
\item[(ii)] loss of boundary condition
\item[(iii)] recovery of boundary condition and regularization.
\end{itemize}
\vskip 1pt
\noindent
Up to now, no description of the way the solution loses and/or recovers its boundary conditions seems to be known.
As for the GBU phenomenon itself, it has been investigated so far in several papers and many things are known,
although several questions remain open.
Sufficient blow-up conditions appear in \cite{ABG89}, \cite{A96}, \cite{S02}, \cite{HM04}.
It is known that GBU is localized on some part of the boundary \cite{SZ06} and that single-point
blow-up may occur \cite{LS}, \cite{Est18}.
The space profiles at $t=T^*$ are also investigated (see \cite{CG96}, \cite{ARS04}, \cite{SZ06}, \cite{GH08}, \cite{PS}).
Partial results on the GBU rate are available (see \cite{CG96}, \cite{GH08}, \cite{ZL13}).
On the other hand, for related one-dimensional equations, typically of the form $u_t-u_{xx}=e^{u_x}$, with zero boundary conditions,
a global weak continuation after GBU was constructed in \cite{FL94}.
As an important difference with \rife{VHJ}, the boundary conditions of this weak continuation remain lost for all $t>T^*$.
The purpose of this article is to give new results on all aspects (i)-(iii), investigating the connections between the three questions.
In any space dimension, we establish the sharp minimal rate of GBU.
Only partial results were previously known (\cite{GH08}, \cite{ZL13}), except in one space dimension \cite{CG96}.
We also obtain the corresponding minimal regularization rate, which had not been studied before.
In one space dimension, under suitable conditions on the initial data, we give a quite detailed description
of the qualitative behavior of solutions for all $t>0$.
Most of our one-dimensional results crucially depend on zero-number arguments, applied to the function $u_t$.
Zero-number arguments do not seem to have been used so far in the context of
viscosity solutions of Hamilton-Jacobi equations.
For {\it nonminimal} GBU solutions (see Definition~\ref{defmin}), we especially describe the intermediate singular life of the solution
in the time interval between GBU and eventual regularization, showing:
\vskip 1pt
\begin{itemize}
\item[-] immediate LBC after GBU and immediate regularization after recovery of boundary conditions;
\item[-] loss and recovery of boundary conditions occur at linear rates;
\item[-] $C^1$ regularity of the boundary value as a function of time in the interval of LBC;
\item[-] GBU and regularization according to the minimal rates.
\end{itemize}
\vskip 1pt
\noindent We note that upper estimates on the GBU rate were previously known only in the case of inhomogeneous boundary conditions or forcing terms
(for some class of one-dimensional or radial solutions, see \cite{GH08}, \cite{QS07}, \cite{ZL13}),
due to time-monotonicity restrictions of the existing methods.
On the other hand, we recently showed in \cite{PS2} that (in any space dimension) the LBC set can be quite arbitrary depending on the initial data,
and that gradient blow-up may also occur {\it without} LBC, typically for minimal GBU solutions.
Here we prove that, in one space dimension, minimal GBU solutions behave completely differently from nonminimal
GBU solutions: they do not lose boundary data, and
they are immediately regularized,
but their GBU and regularization rates are more singular.
\section{Main results}
We split them into three subsections.
The first one concerns minimal GBU and regularization rates and covers the general $n$-dimensional problem.
In the last two, we specialize to the one-dimensional problem and give a detailed description of a class of GBU solutions, with or without LBC, respectively.
\subsection{Minimal GBU and regularization rate}
We start with the general, minimal lower estimate of the GBU rate.
\begin{thm}\label{minimal_rate}
Let $\phi\in X$ and assume $T^*=T^*(\phi)<\infty$.
Then there exists a constant $C_1>0$ such that
\begin{equation}\label{lower-bu0}
\|\nabla u(t)\|_\infty\ge C_1(T^*-t)^{-1/(p-2)},\quad t\to T^*_-.
\end{equation}
\end{thm}
Theorem~\ref{minimal_rate} improves the result obtained in \cite{GH08},
where the lower blow-up estimate was obtained only in the
weaker form
\begin{equation}\label{lower-bu-weaker}
\sup_{s\in[0,t]}\|\nabla u(s)\|_\infty\ge C(T^*-t)^{-1/(p-2)}.
\end{equation}
Estimate \rife{lower-bu0} had been proved earlier in the case $n=1$ (see \cite{CG96}) by a completely different method
(intersection-comparison, which does not extend to higher dimensions).
We here refine some ideas from \cite{GH08} and \cite{QS07}.
The improvement from \rife{lower-bu-weaker} to \rife{lower-bu0} stems from a rather delicate argument based on the variation of constants formula.
Next, in view of property (b) in the previous subsection, we may introduce
the ultimate\footnote{We cannot a priori exclude that the solution loses and recovers regularity and/or boundary conditions again at several times, although we
presently do not know examples of such solutions.}
regularization time $T^r(\phi)$, defined by:
\begin{equation}\label{defTr}
T^r(\phi):= \inf \bigl\{\tau >T^*(\phi);\ u(t,\cdot)\in C^1_0(\overline\Omega) \hbox{ for all } t>\tau\bigr\} \in [T^*(\phi),\infty).
\end{equation}
Our next main result is a minimal lower estimate for the regularization rate.
\begin{thm}\label{minimal_rate2}
Let $\phi\in X$ with $T^*=T^*(\phi)<\infty$ and let $T^r=T^r(\phi)$ be defined by \rife{defTr}.
Then there exists a constant $C_2>0$ such that
$$
\|\nabla u(t)\|_\infty\ge C_2(t-T^r)^{-1/(p-2)},\quad t\to T^r_+.
$$
\end{thm}
Theorem~\ref{minimal_rate2} seems completely new. We refer to Section~\ref{sec-rates} for further results related to
Theorems \ref{minimal_rate} and \ref{minimal_rate2}.
\subsection{Description of a class of one-dimensional GBU solutions with LBC}
An important notion in the subsequent analysis is that of minimal/non-minimal GBU solution.
For brevity, here we use the notation $T^*(u)$ and $T^*(v)$ rather than $T^*(u(0))$, $T^*(v(0))$, respectively.
\begin{defn} \label{defmin}
Let $n\ge 1$. A solution $u$ of \rife{VHJ} is called a minimal GBU solution if
$T^*(u)<\infty$ and every solution $v$ such that $v(0)\leq u(0)$ and $v(0) \neq u(0)$ is a global classical solution, i.e. $T^*(v)=\infty$.
\end{defn}
This definition is consistent with the monotonicity of the function $T^*$. Indeed, if $v_0\le u_0$ then $T^*(v_0)\ge T^*(u_0)$
(see, e.g., \cite{LS}); and if, moreover, $v$ undergoes LBC, then so does $u$ (see \cite{PS2}).
The existence of minimal blow-up solutions in any dimension was shown in \cite{PS2}, where such solutions were constructed as threshold solutions.
Namely for any $\psi\in X$ with $\psi\ge 0$ and $\psi\not\equiv 0$,
one considers $\lambda^*=\sup\{\lambda>0;\,T^*(\lambda\psi)=\infty\}$ and
shows that $\lambda^*\in (0,\infty)$ and that $T^*(\lambda^*\psi)<\infty$ (the latter fact is not obvious).
We also recall (see \cite[Theorem 3]{PS2}) that, at least in one space dimension, loss of boundary conditions
occurs if and only if the GBU solution is nonminimal.\footnote{Actually it is not difficult to show that the implication
'LBC $\Rightarrow$ $u$ nonminimal' remains valid in any dimension, but we are presently unable to show the converse
in general. }
\begin{prop}\label{prop-min}
Let $\Omega=(0,1)$, $\phi\in X$ and assume $T^*(\phi)<\infty$. Then $u$ is a minimal blow-up solution if and only if there is no loss of boundary conditions.
\end{prop}
It turns out that, for a suitable class of initial data on $\Omega=(0,1)$, we are able to provide a fairly precise description of the
life of nonminimal solutions, including their GBU, LBC and regularization behaviors in time and in space.
We consider the initial data $\phi\in W^{3,\infty}(0,1)$
satisfying the following properties:
\begin{equation}\label{hypID1}
\hbox{$\phi$ is symmetric w.r.t. $x=\frac12$,\ \ $\phi'\ge 0$ on $[0,\frac12]$,\ \ $\phi(0)=\phi''(0)+{\phi'}^p(0)=0$,}
\end{equation}
\begin{equation}\label{hypID2}
\hbox{there exists $a\in (0,1/2)$ such that }
\phi''+{\phi'}^p
\begin{cases}
\,\ge 0 & \hbox{on $[0,a]$} \\
\,\le 0 & \hbox{on $[a,\frac12]$.} \\
\end{cases}
\end{equation}
These assumptions are motivated by intersection-comparison -- or zero-number -- arguments that will be crucially used in our proofs.
Minimal and nonminimal GBU solutions starting from such initial data can be easily constructed (see Lemma~\ref{constuctID}).
Also we will denote by $U_*$ the one-dimensional stationary solution
\begin{equation}\label{defUstar}
U_*(x):= c_p\, x^{\frac{p-2}{p-1}}, \quad x\ge 0, \qquad\hbox{with } c_p= (p-2)^{-1}(p-1)^{\frac{p-2}{p-1}}.
\end{equation}
Note that $U_*$ is singular near $x=0$ (since $U_*'(x)=((p-1)x)^{-1/(p-1)}$) and recall, as is well known, that $U_*$ plays the role of a reference GBU profile
(see, e.g., \cite{CG96}, \cite{ARS04}).
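For the reader's convenience, the stationarity of $U_*$ can be checked directly: since $U_*'(x)=((p-1)x)^{-1/(p-1)}$, we have
$$
U_*''(x)=-((p-1)x)^{-\frac{p}{p-1}}=-|U_*'(x)|^p,\qquad x>0,
$$
so that $U_*''+|U_*'|^p=0$, i.e. $U_*$ is indeed a (singular) stationary solution of the equation in \rife{VHJ}.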
\begin{thm}\label{nonmin0}
Let $\Omega=(0,1)$, let $\phi\in W^{3,\infty}(0,1)$ satisfy \eqref{hypID1}-\eqref{hypID2},
and assume that $u$ is a nonminimal GBU solution of \rife{VHJ}.
\begin{itemize}
\item[(i)] There is immediate loss of boundary conditions after GBU
and immediate and permanent regularization after recovery of boundary conditions.
More precisely, we have
$$u(t,0)>0,\quad\hbox{for all $t\in(T^*,T^r),$}$$
and $u$ is a classical solution (including boundary conditions) for all $t\in(T^r,\infty)$
(recalling the definition in \rife{defTr}).
\item[(ii)] $u$ satisfies the following GBU, detachment, reconnection and regularization estimates:
\begin{equation}\label{estA0}
c_1(T^*-t)^{-1/(p-2)}\le \|u_x(t)\|_\infty \le c_2(T^*-t)^{-1/(p-2)},\quad\hbox{ as $t\to T^*_-,$}
\end{equation}
\begin{equation}\label{estB0}
u(t,0) \sim \ell_1(t-T^*),\quad\hbox{ as $t\to T^*_+,$}
\end{equation}
\begin{equation}\label{BeqConclAa0}
u(t,0) \sim \ell_2(T^r-t),\quad\hbox{ as $t\to T^r_-,$}
\end{equation}
\begin{equation}\label{BeqConclA0}
c_3(t-T^r)^{-1/(p-2)} \le \|u_x(t)\|_\infty\le c_4(t-T^r)^{-1/(p-2)},\quad\hbox{ as $t\to T^r_+,$}
\end{equation}
with some constants $c_i,\ell_i>0$.
\item[(iii)] In the interval $[T^*,T^r]$, the solution behaves near the boundary like a shifted copy of the singular stationary profile $U_*$:
there exists $K>0$ such that
$$|u(t,x)-u(t,0)-U_*(x)|\le K\, {\frac{ x^2} 2} \quad\hbox{ in $[T^*,T^r]\times [0,1/2]$},$$
$$|u_x(t,x)-U_*'(x)|\le Kx\quad\hbox{and}\quad |u_{xx}(t,x)-U_*''(x)|\le K \ \quad\hbox{ in $[T^*,T^r]\times (0,1/2]$},$$
and the restriction of the function $u-U_*$ to $(T^*,T^r]\times [0,1/2]$ is of class $C^1$ in $t$ and $x$.
\vskip2pt
\noindent Furthermore, the boundary value enjoys the following monotonicity and regularity properties:
\vskip2pt
\begin{itemize}
\item[$\bullet$] $u(t,0)$ admits a unique global maximum at some $T_m\in (T^*,T^r)$;
\vskip1pt
\item[$\bullet$] $u(t,0)$ is increasing on $[T^*,T_m]$ and decreasing on $[T_m,T^r]$;
\vskip1pt
\item[$\bullet$] the restriction of the function $t\mapsto u(t,0)$ to $[T^*,T^r]$ is of class $C^1$
and $u_t(t,0)$ undergoes jump discontinuities at $t=T^*$ and $t=T^r$.
\end{itemize}
\vskip2pt
\noindent In addition, we have
$$
u_t\le 0\quad\hbox{ in $[T_m,\infty)\times (0,1)$.}
$$
\end{itemize}
\end{thm}
The shape of the solution described in Theorem~\ref{nonmin0} is depicted in the following figure.
$$
\includegraphics[width=12cm, height=5.5cm]{pictureLOB2}
$$
More generally, as shown in our next result,
the conclusions of Theorem~\ref{nonmin0} concerning the regularity and behavior of the function $u-U_*$
are actually valid for any LBC solution in one space dimension,
on any time interval where the boundary conditions are lost at $x=0$.
In the following, we set
$$
X_1:=\{\phi\in C^1([0,1]),\ \phi(0)=\phi(1)=0\}.
$$
\begin{thm}
\label{proppersist}
Let $\Omega=(0,1)$, and let $\phi\in X_1$ with $T^*(\phi)<\infty$.
Let $T_1, T_2$ be such that $T^*\le T_1<T_2\le T^r$ and
$$u(t,0)>0 \quad\hbox{ on $(T_1,T_2)$.}$$
\begin{itemize}
\item[(i)] Then there exists a constant $K>0$ (depending only on $u$), such that
\begin{equation}\label{shiftedcopy0}
|u(t,x)-u(t,0)-U_*(x)|\le K\, {\frac{x^2}2}\quad\hbox{ in $[T_1,T_2]\times (0,1/2]$},
\end{equation}
\begin{equation}\label{shiftedcopy}
|u_x(t,x)-U_*'(x)|\le Kx\quad\hbox{and}\quad |u_{xx}(t,x)-U_*''(x)|\le K \ \quad\hbox{ in $[T_1,T_2]\times (0,1/2]$}.
\end{equation}
\vskip 3pt
\item[(ii)] The restriction of the function $u-U_*$ to $(T_1,T_2]\times [0,1/2]$ is of class $C^1$ in $t$ and $x$
and the restriction of the function $t\mapsto u(t,0)$ to $[T_1,T_2]$ is of class $C^1$.
\end{itemize}
\end{thm}
\begin{rem}
(a) We note that the solutions in Theorem~\ref{nonmin0} behave according to the minimal GBU and regularization rates
(compare with Theorems \ref{minimal_rate} and \ref{minimal_rate2}).
As for the linear rates of loss and recovery of boundary conditions, interestingly, we see on the contrary that they are maximal.
Indeed, since all solutions satisfy bounds of the form $|u_t|\le C(t_0)$ in $[t_0,\infty)\times\Omega$ for all $t_0>0$ (see Section~\ref{Sec3}),
it is immediate that loss or recovery of the zero boundary conditions can never occur at a rate faster than linear.
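For instance, in the setting of Theorem~\ref{nonmin0}, fixing $t_0\in(0,T^*)$ and using $u(T^*,0)=u(T^r,0)=0$ together with the continuity of $u$ up to the boundary, one gets, upon letting $x\to 0$ in the bound $|u(t,x)-u(s,x)|\le C(t_0)|t-s|$,
$$
u(t,0)\le C(t_0)\,(t-T^*)\ \hbox{ for $t>T^*$}
\qquad\hbox{and}\qquad
u(t,0)\le C(t_0)\,(T^r-t)\ \hbox{ for $T^*<t<T^r$},
$$
which is consistent with \eqref{estB0}-\eqref{BeqConclAa0}.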
(b) Theorem~\ref{nonmin0} is the first known result on upper GBU estimates for the homogeneous problem~\rife{VHJ}.
Previously, the upper GBU estimate in \eqref{estA0} was only known for the modified inhomogeneous problem
\begin{equation}\label{vhjModif}
\begin{cases}
u_t-u_{xx} =|u_x|^p+\lambda, & \quad t>0,\ 0<x<1,\\
u(t,0) =0, \ u(t,1)=M, & \quad t>0,\\
u(0,x) =\phi(x), & \quad 0<x<1,\,
\end{cases}
\end{equation}
where either $\lambda=0$ and $M>c_p$, or $M=0$ and $\lambda>0$ large enough.
The result in the case $\lambda=0$ and $M>c_p$ was proved in \cite{GH08}
for time-increasing GBU solutions (such solutions exist only for $M>c_p$).
The argument of \cite{GH08} was modified in \cite{QS07} to cover the case $M=0$ with $\lambda>0$ large enough, still for time-increasing GBU solutions.
The upper GBU rate remains an open problem for the nonradial higher dimensional case
(see \cite{ZL13} for a result on a radial inhomogeneous problem).
(c)
We see from Theorem~\ref{nonmin0} that $u_t$ exists and is continuous on $((0,\infty)\times [0,1/2])\setminus \{(T^*,0),(T^r,0)\}$.
Moreover, $u_t$ is respectively uniformly positive or negative near the corners.
Namely, under the assumptions of Theorem~\ref{nonmin0}, we have
$$
u_t \ge c \ \hbox{ in $(T^*,t_0)\times(0,a)$}
\quad\hbox{ and }\quad
u_t\le -c\ \hbox{ in $(t_1,T^r)\times (0,1)$}
$$
for some times $t_0, t_1$ with $T^*<t_0<t_1<T^r$, and some $a\in (0,1/2)$ and $c>0$
(see Propositions~\ref{nonmin} and \ref{recon}).
On the other hand, the restriction of the function $V:=u-U_*$ to $Q:=[T^*,T^r]\times [0,1/2]$ is actually differentiable
(including at $(T^*,0)$), with $V_x$ continuous (see the end of the proof of Theorem~\ref{proppersist}(ii)).
But we do not know if the restriction of $u_t=V_t$ to $Q$ is continuous
at $(T^*,0)$.
Some additional qualitative properties of $u$ under the assumptions of Theorems~\ref{nonmin0}, \ref{proppersist} or \ref{minimal0}
are given in Lemmas~\ref{bounduxt}, \ref{boundutt} and \ref{minlem1}--\ref{minlem3} and in Proposition~\ref{zeronumbermonotone}.
(d) Theorem~\ref{nonmin0}, as well as Theorem~\ref{minimal0} below, remain true for a larger class of symmetric initial data,
characterized by the zero-number of $u_t$ being at most four
(see Remark~\ref{remhigherN}).
The case of higher zero-number remains an open problem. But we suspect that different behaviors might occur,
such as multiple losses and recoveries of boundary conditions.
Also, other interesting and largely open problems concern the behavior of LBC solutions in higher dimensions.
On the other hand, in Theorems~\ref{nonmin0} and \ref{minimal0}, we have assumed for simplicity that $\phi$ is symmetric and nondecreasing on $[0,1/2]$.
Without these assumptions, some results of the same type might still be obtainable by our methods,
at the expense of additional technical difficulties.
Since our purpose is to focus on the presentation of new phenomena,
we have refrained from considering
such generality.
\end{rem}
\subsection{Description of a class of one-dimensional GBU solutions without LBC}
We already know that minimal solutions have no LBC.
Within the same class of initial data as in the previous subsection,
we have the following more precise information on the behavior of minimal solutions.
As a crucial difference, we find in particular that such solutions have more singular blow-up and regularization rates
than the 'standard' ones obtained in Theorem~\ref{nonmin0} for nonminimal solutions.
Moreover, $T^r=T^*$ so that $T^*$ is the only singular time of the solution.
\begin{thm}\label{minimal0}
Let $\Omega=(0,1)$, let $\phi\in W^{3,\infty}(0,1)$ satisfy \eqref{hypID1}-\eqref{hypID2},
and assume that $u$ is a minimal GBU solution of \rife{VHJ}.
Then:
\begin{itemize}
\item[(i)] The GBU
rate is more singular than the minimal
rate:
$$
(T^*-t)^{1/(p-2)}\|u_x(t)\|_\infty\to\infty,\quad t\to T^*_-\,;
$$
\vskip0.3em
\item[(ii)] There is instantaneous and permanent regularization at $T^*_+$, namely,
$u$ is a classical solution (including boundary conditions)
on $(T^*,\infty)$;
\vskip0.3em
\item[(iii)] The regularization rate is more singular than the minimal
rate:
$$
(t-T^*)^{1/(p-2)}\|u_x(t)\|_\infty\to\infty,\quad t\to T^*_+.
$$
\end{itemize}
\end{thm}
\begin{rem}
The solutions described in Theorem~\ref{minimal0} are a kind of analogue of the {\it peaking solutions}
known for the semilinear heat equation $u_t-\Delta u=u^p$, $t>0$, $x\in \mathbb R^n$.
Such solutions (which exist for $n\ge 3$ in suitable ranges of $p$) blow up (in $L^\infty$ norm) at the origin
as $t\to T_-$ and are instantaneously and forever regularized for $t>T$.
See, e.g., \cite{GV97}, \cite{FMP05}, \cite{MM09} and the references therein for more details.
\end{rem}
The outline of the rest of the paper is as follows.
A number of preliminary facts are collected in Section~\ref{Sec3}.
Section~\ref{sec-rates} is devoted to the lower estimates of blow-up and regularization rates, including
the proofs of Theorems \ref{minimal_rate} and \ref{minimal_rate2}.
The rest of the paper is then concerned with the one-dimensional problem.
The useful, preliminary one-dimensional results are given in Section~\ref{prelim1d},
except for the basic material on zero-number, which is the object of Section~\ref{SecZ}.
The behavior of nonminimal solutions is studied in Section~\ref{Sec-nonmin},
which consists of three subsections. The first two are respectively devoted to the GBU and LBC behavior near $T^*$,
and to the reconnection and regularization behavior near~$T^r$.
The last subsection studies the singular life of the solution in the time interval $(T^*,T^r)$.
In particular, the proofs of
Theorems~\ref{nonmin0} and \ref{proppersist} are completed in Subsection~\ref{Sec-nonmin3}.
The behavior of minimal solutions is then studied in Section~\ref{Sec-min}, where Theorem~\ref{minimal0} is proved.
Finally, the paper is complemented with
two appendices, respectively devoted to an additional zero-number property and to the proof of
a useful approximation result, via Bernstein-type estimates.
\section{Preliminary facts on the viscous HJ equation}\label{Sec3}
In this section, we recall some preliminary facts about the unique solution of the problem
\begin{eqnarray}
&u_t-\Delta u=|\nabla u|^p,\ \ \ &t>0,\ x\in \Omega,\label{vhj-1}\\
&u(t,x)=0,\ &t>0,\ x\in \partial \Omega,\label{vhj-2}\\
&u(0,x)=\phi(x),\ &x\in \Omega. \label{vhj-3}
\end{eqnarray}
Recall the notation
$$
X=\{\phi\in C^1(\overline\Omega),\ \phi=0 \hbox{ on $\partial\Omega$}\}.
$$
The following theorem collects a few results established in the literature
(see, e.g., \cite{BDL04}, \cite[Section 40]{QS07} and the references therein).
\goodbreak
\begin{thm}\label{prelimprop}
Let $\Omega$ be a smooth bounded domain of $\mathbb R^n$, $p>2$ and $\phi\in X$. Then:
\begin{itemize}
\item[(i)] There exists a unique generalized viscosity solution $u$ of problem \rife{vhj-1}-\rife{vhj-3} for $t\in (0,\infty)$, and $u\in C([0,\infty) \times \overline \Omega)$ with $u\ge 0$ on $[0,\infty)\times\partial\Omega$. Moreover, if $u,v$ are the solutions corresponding to different initial data $u_0,v_0\in X$, then
\begin{equation}\label{contdep}
\|u(t)-v(t)\|_\infty\le \|u_0-v_0\|_\infty, \quad\hbox{ for all $t>0$.}
\end{equation}
\item[(ii)] There exists $T^*\in (0,\infty]$ such that $u\in C^{1,2}((0,T^*)\times\overline\Omega)$,
$\nabla u\in C([0,T^*)\times\overline\Omega)$, $u$ is a classical solution in $(0, T^*)$ and, if $T^*<\infty$, then
$$
\lim\limits_{t \to T^*-} \|\nabla u(t)\|_\infty = \infty\,.
$$
\item[(iii)] We have $u\in C^{1,2}((0,\infty) \times \Omega)$, i.e. $u$ is smooth inside the domain~$\Omega$.
\item[(iv)] For all $t>0$ we have $u_t(t,\cdot)\in L^\infty(\Omega)$
and, given any $t_0>0$, there exists a constant
$M(t_0)>0$ such that
\begin{equation}\label{u_t-bounded}
\|u_t(t)\|_\infty\le M(t_0),\quad t\ge t_0.
\end{equation}
The same estimate also holds for $t_0=0$ provided $\Delta u_0 + |\nabla u_0|^p \in C(\overline \Omega)$.
\end{itemize}
\end{thm}
As mentioned before, it is quite useful to recover the solution $u$ as the limit
of global classical solutions of regularized problems of the form:
\begin{equation}\label{approxpbm}
\begin{cases}
\partial_tu_k-\Delta u_k =F_k(\nabla u_k), & \quad t>0,\ x\in\Omega, \\
u_k=0, & \quad t>0,\ x\in\partial\Omega,\\
u_k(0,x)=\phi(x), & \quad x\in\Omega.
\end{cases}
\end{equation}
We note that the simple truncated nonlinearities $F_k(\xi)=\min\bigl(|\xi|^p,k^{p-2}|\xi|^2\bigr)$ are often used.
However, these $F_k$ are only locally Lipschitz continuous, and choices with better regularity, as well as additional features such as convexity,
may be useful for certain properties
(see Section~\ref{prelim1d}).
To this end, we formulate the following general approximation result. The first assertion shows
--~without resorting to viscosity solution theory~-- that the maximal classical solution of problem~\rife{VHJ}
admits a unique global weak continuation $U$
after $t=T^*$, defined in a natural way through monotone approximation.
This notion of ``limit solution'' $U$ is similar to the concept of ``proper extension'',
which is customary in the blow-up theory for
nonlinear parabolic equations with zero-order nonlinearities, such as
\begin{equation}\label{NLH}
u_t-\Delta u= u^p
\end{equation}
(see \cite{BC87}, \cite{GV97}, \cite{QS07})\footnote{ As an important difference
with the diffusive Hamilton-Jacobi equation,
we recall that for equation \rife{NLH} the existence of a global weak continuation after blow-up is an exceptional phenomenon
(it only occurs for ``minimal'' blow-up solutions in certain ranges of $p$), and that blow-up is generically complete. }.
As for the second assertion, it shows that the limit solution $U$ coincides with the global viscosity solution $u$.
\begin{thm}\label{prelimprop2}
Let $\Omega$ be a smooth bounded domain of $\mathbb R^n$, $p>2$ and $\phi\in X$.
(i) Consider a sequence of nonlinearities $F_k(\xi)\in W^{1,\infty}_{loc}(\mathbb R^n)$ such that:
\begin{equation}\label{approxHyp1}
\hbox{$F_k(\xi)$ is nondecreasing with respect to $k$, with
$\lim_k F_k(\xi)=|\xi|^p$ for all $\xi\in\mathbb R^n$,}
\end{equation}
\begin{equation}\label{approxHyp3}
|\nabla F_k(\xi)|\le C_1\bigl(1+|\xi|^{-2}F_k^{2-\theta}(\xi)\bigr)\quad\hbox{for all $\xi\ne 0$,}
\end{equation}
\begin{equation}\label{approxHyp4}
F_k(\xi)\ge C_2|\xi|^2\quad\hbox{ for all $|\xi|\ge 1$,}
\end{equation}
\begin{equation}\label{approxHyp4b}
0\le F_k(\xi)\le C(k)(1+|\xi|^2)\quad\hbox{ for all $\xi\in\mathbb R^n$,}
\end{equation}
where $\theta\in (0,1), C_1, C_2>0$
are constants independent of $k$ and $C(k)>0$ is a constant depending on $k$.
Then problem \rife{approxpbm} has a unique global classical solution $u_k$ and there exists a function
$U\in C([0,\infty)\times\overline\Omega)\cap C^{1,2}((0,\infty)\times\Omega)$,
with the following properties:
\begin{equation}\label{approxpbm2}
\hbox{ $u_k$ is nondecreasing with respect to $k$, \quad $\displaystyle\lim_{k\to\infty}u_k=U$ in $C^{1,2}_{loc}((0,\infty)\times\Omega)$,}
\end{equation}
and $U$ solves the problem
\begin{equation}\label{VHJ2}
\begin{cases}
U_t -\Delta U = |\nabla U|^p, & t>0,\ x\in \Omega, \\
U(0,x)= \phi, & x\in \Omega.
\end{cases}
\end{equation}
Moreover we have
\begin{equation} \label{BernsteinPhAux6ca}
U\ge 0 \ \hbox{ on $(0,\infty)\times\partial\Omega$}
\end{equation}
and $U$ is independent of the choice of the $F_k$ satisfying the above assumptions.
(ii) The solution $U$ from assertion (i) coincides with the global viscosity solution $u$ given by Theorem~\ref{prelimprop}.
\end{thm}
Results similar to Theorem~\ref{prelimprop2} are probably known, but we could not find in the literature a statement and proof suited to our needs.
We therefore give a proof in the Appendix, based on a simple Bernstein-type argument.
We shall always consider truncated nonlinearities, i.e. $F_k$ such that, for all $A>0$, there exists $k_A>0$ such that
\begin{equation}\label{eqabA00}
F_k(\xi)=|\xi|^p\quad\hbox{ for all $|\xi|\le A$ and $k\ge k_A$}.
\end{equation}
The following facts will be useful:
for each $t_0\in (0,T^*(\phi))$, there exist $k_0=k_0(t_0)$ and $M=M(t_0)>0$ such that
\begin{equation}\label{eqabA0}
u_k=u\quad\hbox{ on $[0,t_0]\times \overline\Omega$ for all $k\ge k_0$}
\end{equation}
and
\begin{equation}\label{eqabA}
|\partial_tu_k|\le M \quad\hbox{ on $[t_0,\infty)\times\Omega$ for all $k\ge k_0$}.
\end{equation}
Indeed, \rife{eqabA0} is a consequence of \rife{eqabA00} and uniqueness for problem \rife{approxpbm},
whereas property \rife{eqabA} easily follows by applying the maximum principle to the equation satisfied by $\partial_tu_k$.
\vskip1em
Recall the following standard comparison principle.
\begin{prop}\label{compP}
Let $\Omega\subset\mathbb R^n$ be a bounded domain and $T>0$. Set $Q:=(0,T)\times \Omega$ and
$$\mathcal{P}v:=v_t-\Delta v-|\nabla v|^p.$$
If $v_1,v_2\in C^{1,2}(Q)\cap C(\overline Q)$ satisfy
$\mathcal{P}v_1\le \mathcal{P}v_2$ in $Q$ and $v_1\le v_2$ on $\partial_PQ$, then $v_1\le v_2$ on $Q$.
\end{prop}
Note that the $v_i$ need not be $C^1$ up to the boundary.
Let us give the short proof for convenience (cf., e.g., \cite{SZ06}).
\begin{proof} Fix $\tau\in (0,T)$, $\varepsilon>0$ and let $w = v_1-v_2-\varepsilon t$.
We see that $w$ cannot attain a local maximum in $(0,\tau]\times \Omega$: at such a point we would have $\nabla v_1=\nabla v_2$ (since $\nabla w=0$), hence
$0\le w_t-\Delta w\le|\nabla v_1|^p-|\nabla v_2|^p-\varepsilon = -\varepsilon < 0$, a contradiction.
Therefore, $\max_{Q_\tau} w = \max_{\partial_PQ_\tau}w$, and the conclusion follows by letting $\varepsilon\to 0$ and $\tau\to T$.
\end{proof}
We shall also use the following simple comparison principle for the viscosity solution
of \rife{vhj-1}-\rife{vhj-3}.\footnote{We recall that a comparison principle holds even in the strong form,
i.e. for possibly discontinuous {\it generalized} solutions satisfying the boundary conditions in a relaxed sense (see \cite{BDL04}).
However we will not need this stronger form.}
\begin{prop}\label{compP2}
Let $\phi,\psi\in X$ and let $u, v$ be the corresponding
global viscosity solutions of \rife{vhj-1}-\rife{vhj-3}.
If $u(t_0)\le v(t_0)$ in $\overline\Omega$ for some $t_0\ge 0$,
then $u(t)\le v(t)$ in $\overline\Omega$ for all $t\in [t_0,\infty)$.
\end{prop}
For convenience, we give a short proof, based on the approximation result in Theorem~\ref{prelimprop2}.
\begin{proof}
In view of Theorem~\ref{prelimprop2}(ii), this amounts to showing that
$U\le V$ in $[t_0,\infty)\times\overline\Omega$,
where $U,V$ are the limit solutions given by Theorem~\ref{prelimprop2}(i).
Let $u_k$ be given by Theorem~\ref{prelimprop2}(i).
For each $k\ge 0$, we have $u_k(t_0)\le U(t_0)\le V(t_0)$ in~$\overline\Omega$.
Moreover $u_k(t)=0\le V(t)$ on $\partial\Omega$ for all $t\in [t_0,\infty)$.
Since both $u_k$ and $V$ have the required regularity to apply Proposition~\ref{compP},
we deduce that $u_k(t)\le V(t)$ in $\Omega$ for all $t\in [t_0,\infty)$.
Letting $k\to\infty$ it follows that $U(t)\le V(t)$ in $\Omega$ for all $t\in [t_0,\infty)$.
Finally letting $x\to\partial\Omega$, we reach the desired conclusion.
\end{proof}
\section {On the minimal GBU and regularization rate}
\label{sec-rates}
In this section we first state and prove a slightly more general version of Theorems \ref{minimal_rate} and \ref{minimal_rate2}
concerning the minimal GBU and regularization rates.
Later, at the end of the section, we will give a sufficient condition
for the GBU or regularization rates to be more singular than the minimal ones
(this will be one of the ingredients of the proof of Theorem~\ref{minimal0} in Section~\ref{Sec-min}).
\begin{thm}\label{minimal_rateG}
Let $\phi\in X$, $T>0$ {and let $u$ be the unique viscosity solution of \rife{vhj-1}-\rife{vhj-3}}.
(i)
Assume that $u$ is a classical solution of \rife{vhj-1}-\rife{vhj-2}
on the time interval $(T-\delta,T)$ for some $\delta>0$ and
is not a classical solution on any interval of the form $(T-\delta, T+\varepsilon)$ with $\varepsilon>0$.
Then
$$
\|\nabla u(t)\|_\infty\ge C(T-t)^{-1/(p-2)},\quad t\to T_-.
$$
In particular, this is true with $T=T^*(\phi)$.
(ii)
Assume that $u$ is a classical solution of \rife{vhj-1}-\rife{vhj-2}
on the time interval $(T,T+\delta)$ for some $\delta>0$ and
is not a classical solution on any interval of the form $(T-\varepsilon,T+\delta)$ with $\varepsilon>0$.
Then
$$
\|\nabla u(t)\|_\infty\ge C(t-T)^{-1/(p-2)},\quad t\to T_+.
$$
In particular, this is true with $T=T^r(\phi)$.
\end{thm}
We notice that the assumptions of Theorem~\ref{minimal_rateG}(i) guarantee that
$$\limsup_{t\to T_-} \|\nabla u(t)\|_\infty=\infty$$
as a direct consequence of the local $C^1$ theory.
This argument does not directly apply to the regularization time, due to the irreversibility of the equation.
However, it turns out that the analogous property is true under the assumptions of Theorem~\ref{minimal_rateG}(ii),
but this fact is more delicate.
This is the content of the following lemma,
which will be useful in the proof of assertion (ii).
\begin{lem}
\label{unbddT}
Under the assumptions of Theorem~\ref{minimal_rateG}(ii), we have
\begin{equation}\label{lower-bu}
\limsup_{t\to T_+} \|\nabla u(t)\|_\infty=\infty.
\end{equation}
\end{lem}
\begin{proof}
Assume for contradiction that
\begin{equation}\label{contrad-nablau}
m(t):=\|\nabla u(t)\|_\infty\le M,\quad T<t<T+\tau,
\end{equation}
for some $M>0$ and $\tau\in {(0,\delta)}$.
We shall reach a contradiction by estimating {the solutions $u_k$ of the approximate problems \rife{approxpbm} with $F_k(\xi)=\min\bigl(|\xi|^p,k^{p-2}|\xi|^2\bigr)$}.
{\bf Step 1.} We first claim that
\begin{equation}\label{contrad-nablavk}
m_k(t):=\|\nabla u_k(t)\|_\infty\le C(t-T)^{-1/2},\quad T<t<T+\tau,
\end{equation}
for some $C>0$ independent of $k$.
We prove \eqref{contrad-nablavk} by a Bernstein argument.
Note that $T^*(\phi)<\infty$ by assumption and fix some $t_0\in (0,T^*)$.
We know from \rife{u_t-bounded} that
$$
\|\partial_tu_k\|_\infty\le A,\quad t\ge t_0,
$$
for some constant $A>0$.
For fixed $k\ge 1$, the function $v=u_k$ satisfies
\begin{equation}\label{eqvg}
v_t-\Delta v=g(|\nabla v|^2) \quad\hbox{ in $Q:=(T,T+{\delta})\times\Omega$},
\end{equation}
where $g(s)=g_k(s)=(\min(s,k))^{(p-2)/2}s$.
Let $z=|\nabla v|^2$. By direct calculation, we see that $z$ {is a strong solution of }
$$z_t-\Delta z+b\cdot\nabla z +2|D^2v|^2= 0,$$
where $b:=-2g'(z)\nabla v$.
Using the fact that $|g(z)-v_t|=|\Delta v|\leq \sqrt{n}|D^2v|$, along with the bound $|v_t|\le \tilde C$
(where $\tilde C$ is independent of $k$),
we obtain
$$
n^{-1}z^2\leq n^{-1}g^2(z)\leq 2( |D^2v|^2+ |v_t|^2)\leq 2|D^2v|^2+\tilde C,
$$
hence
$$z_t-\Delta z+b\cdot\nabla z +n^{-1}z^2\le \tilde C.$$
Moreover, combining assumption \eqref{contrad-nablau} with the fact that
$u=0$ on the boundary for $t\in (T,T+\delta)$ and $0\le u_k\le u$, we have
$z=|\partial_\nu v|^2\le |\partial_\nu u|^2\le M^2$ on $(T,T+\tau)\times\partial\Omega$.
The claim now follows by comparing with a supersolution of the form $\overline z:={C_1}(t-T)^{-1}$,
{with $C_1>0$ large enough (independent of $k$). }
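For the reader's convenience, here is a brief sketch of this comparison. Since $\overline z$ is independent of $x$, we have
$$
\overline z_t-\Delta \overline z+b\cdot\nabla \overline z +n^{-1}\overline z^2
=\bigl(n^{-1}C_1-1\bigr)C_1(t-T)^{-2}\ge \tilde C \quad\hbox{ in $(T,T+\tau)\times\Omega$,}
$$
provided $C_1$ is large enough (independently of $k$), using $(t-T)^{-2}\ge\tau^{-2}$. Enlarging $C_1$ if necessary, we also have $\overline z\ge M^2\ge z$ on $(T,T+\tau)\times\partial\Omega$, whereas $\overline z(t)\to\infty$ as $t\to T_+$ while $z(T,\cdot)=|\nabla u_k(T,\cdot)|^2$ is bounded for each fixed $k$. The maximum principle, applied to $z-\overline z$ for fixed $k$, then yields $z\le \overline z$, hence \eqref{contrad-nablavk}.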
{\bf Step 2.}
Next we claim that
\begin{equation}\label{contrad-mk}
\lim_{k\to\infty} m_k(t)=m(t), \quad T<t<T+{\tau}.
\end{equation}
Indeed, it follows from \eqref{contrad-nablavk} and parabolic estimates that
$\nabla u_k(t)$ is precompact in $C(\overline\Omega)$ for each $t\in (T,T+{\tau})$.
Since we already know that
\begin{equation}\label{contrad-cvptw}
\hbox{$\nabla u_k(t,\cdot)$ converges to $\nabla u(t,\cdot)$ pointwise in $\Omega$,}
\end{equation}
we deduce that, for each $t\in (T,T+\tau)$,
$\nabla u_k(t,\cdot)\to \nabla u(t,\cdot)$ in $C(\overline\Omega)$,
hence in particular \eqref{contrad-mk}.
{\bf Step 3.} We shall then show the existence of $\eta>0$ and $k_0$ such that
\begin{equation}\label{contrad-mk5}
m_k(t)\le M+2 \quad\hbox{ for all $t\in [T-\eta,T]$ and all $k\ge k_0$.}
\end{equation}
To this end, we suitably estimate the oscillation of $m_k$ by means of the variation-of-constants formula.
For any $t_2\in (T,T+{\tau})$,
by \eqref{contrad-nablau} and \eqref{contrad-mk}, there exists an integer $k_0=k_0(t_2)$ such that
\begin{equation}\label{contrad-mk6}
m_k(t_2)\le M+1 \quad\hbox{ for all $k\ge k_0$.}
\end{equation}
For $k\ge k_0$, let
$$t_1=t_{1,k}:=\min E_k,\quad\hbox{ where $E_k:=\{t\in [t_0,t_2],\, m_k(s)\le M+2 \hbox{ on $[t,t_2]$}\}$}$$
(observe that $E_k$ is nonempty by continuity).
Since $v=u_k$ is a classical solution of \eqref{eqvg},
the function $w:= \partial_tu_k$ is a {strong} solution of
$$
w_t -\Delta w = {2} g'_k(|\nabla v|^2)\nabla v \cdot \nabla w\,,\qquad t>0,\ x\in \Omega.
$$
For $s,\tau>0$, the variation-of-constants formula for $w$ yields
$$
w(s+\tau)=e^{\tau \Delta}w(s)+
\int_0^\tau e^{(\tau-\sigma)\Delta }\bigl[{2} g_k'(|\nabla v|^2)\nabla v\cdot\nabla w\bigr](s+\sigma)\, d\sigma.
$$
Here and in the rest of the paper, $(e^{t\Delta})_{t\ge 0}$ denotes the Dirichlet heat semigroup on $\Omega$.
Define
$$L:=\max_{\sigma\in [0,t_2-t_1]} \sigma^{1/2} \|\nabla w(t_1+\sigma)\|_\infty.$$
Using $m_k(t)\le M+2$ on $[t_1,t_2]$,
{$0\le g_k'(s)\le (p/2)s^{(p-2)/2}$ a.e.~and}
$\int_0^\tau (\tau-\sigma)^{-1/2}\sigma^{-1/2}\, d\sigma=\int_0^1 (1-z)^{-1/2}z^{-1/2}\, dz$,
it follows that
$$
\begin{aligned}
\|\nabla w(t_1+\tau)\|_\infty
&\leq C\tau^{-1/2}\|w(t_1)\|_\infty+
C\int_0^\tau (\tau-\sigma)^{-1/2}\|\nabla v(t_1+\sigma)\|_\infty^{p-1}\|\nabla w(t_1+\sigma)\|_\infty\, d\sigma\\
&\leq CA\tau^{-1/2}+
C(M+2)^{p-1}\int_0^\tau (\tau-\sigma)^{-1/2}\|\nabla w(t_1+\sigma)\|_\infty\, d\sigma \\
&\leq CA\tau^{-1/2}+C_1L(M+2)^{p-1},
\end{aligned}
$$
where $C,C_1>0$ depend only on $\Omega$. Multiplying by $\tau^{1/2}$ and taking the supremum for $\tau\in [0,t_2-t_1]$, we obtain
\begin{equation}\label{contrad-mk3}
L\leq CA+C_1(t_2-t_1)^{1/2}L(M+2)^{p-1}.
\end{equation}
Now we claim that
\begin{equation}\label{contrad-mk4}
t_1\le \bar t_2:=\max\bigl[t_0,\,t_2-(2C_1(M+2)^{p-1})^{-2},\,t_2-(4CA)^{-2}\bigr].
\end{equation}
Assume for contradiction that $t_1>\bar t_2$.
In particular, we have $(t_2-t_1)^{1/2} < [2C_1(M+2)^{p-1}]^{-1}$,
so that \eqref{contrad-mk3} guarantees $L\leq 2CA$. We also have $m_k(t_1)=M+2$ by definition of $E_k$.
Using \eqref{contrad-mk6}, we then get
$$
\begin{aligned}
M+2=m_k(t_1)
&\le m_k(t_2) + \int_0^{t_2-t_1} \| {\nabla v_t}(t_1+\tau)\|_\infty\,d\tau \\
&\le M+1 + 2CA\int_0^{t_2-t_1}\tau^{-1/2}\,d\tau
\le M+1 + 4CA(t_2-t_1)^{1/2}<M+2,
\end{aligned}$$
which is impossible.
Therefore, inequality \eqref{contrad-mk4} is satisfied and \eqref{contrad-mk5} follows by choosing $t_2>T$ close enough to $T$.
{\bf Step 4.} Conclusion.
For each $t\in[T-\eta,T]$, owing to \eqref{contrad-cvptw},
we have $m(t)\le \liminf_{k\to\infty} m_k(t)\le M+2$.
This along with \eqref{contrad-nablau} implies that $u$ is actually a classical solution on $[T-\eta,T+\tau)$,
contradicting the hypotheses of Theorem~\ref{minimal_rateG}(ii).
The assumption \eqref{contrad-nablau} thus cannot hold and \eqref{lower-bu} is proved.
\end{proof}
{We now proceed to prove Theorem~\ref{minimal_rateG}.
The proof relies on estimate \rife{u_t-bounded} and on the variation-of-constants formula applied to $u_t$,
pushing further the arguments in the proof of Lemma~\ref{unbddT}.
Actually, we here modify the proof from \cite[pp.~369-370]{QS07} of the weaker estimate \rife{lower-bu-weaker}.
Namely we eliminate the possible time oscillations
by considering the supremum of $|\nabla u|$ over time-space cylinders of the form $(s(t),t)\times \Omega$
with $s(t)$ suitably chosen,
and by integrating in time the intermediate estimate \rife{varcost3} below. }
\begin{proof}[Proof of Theorem~\ref{minimal_rateG}.]
Set
$$
I=(T_1,T_2):=\ \
\begin{cases}
(T-\delta,T) \quad & \hbox{ in case (i)}\\
\noalign{\vskip 1mm}
(T,T+\delta) \quad & \hbox{ in case (ii).}
\end{cases}
$$
Since $u$ is a classical solution of \rife{vhj-1}-\rife{vhj-2} on the time interval I,
the function $w:= u_t$ is a solution of
$$
w_t -\Delta w = p|\nabla u|^{p-2}\nabla u \cdot \nabla w\,,\qquad {t\in I,\ x\in \Omega}.
$$
For $s,t\in I$ {with $s<t$,}
we may use the variation-of-constants formula for $w$ to write
\begin{equation}\label{varcost}
w(s+\tau)=e^{\tau\Delta}w(s)+
p\int_0^\tau e^{(\tau-\sigma)\Delta }(|\nabla u|^{p-1}\nabla u\cdot\nabla w)(s+\sigma)\, d\sigma,
\quad \tau\in (0,t-s).
\end{equation}
Also, fixing any $t_0\in (0,T^*)$, we know from \rife{u_t-bounded} that
\begin{equation}\label{Beqaa}
\|u_t\|_\infty\le A,\quad t\ge t_0,
\end{equation}
for some constant $A>0$. Without loss of generality, we may assume that $t_0< T_1$.
\vskip0.4em
Now we define
$$
m(t):=\|\nabla u(t)\|_\infty,\quad t\in I
$$
and
$$
M(s,t):=\max_{\tau\in[s,t]} m(\tau), \qquad
K(s,t):=\max_{\sigma\in [0,t-s]} \sigma^{1/2} \|\nabla w(s+\sigma)\|_\infty
$$
for $s,t\in I$ {with $s<t$. }
Using \rife{Beqaa} and
$\int_0^\tau (\tau-\sigma)^{-1/2}\sigma^{-1/2}\, d\sigma
=\int_0^1 (1-z)^{-1/2}z^{-1/2}\, dz$,
it follows from \rife{varcost} that
\begin{equation}\label{varcost2}
\begin{split}
\|\nabla w(s+\tau)\|_\infty
&\leq C\tau^{-1/2}\|w(s)\|_\infty+
C\int_0^\tau (\tau-\sigma)^{-1/2}\|\nabla u(s+\sigma)\|_\infty^{p-1}\|\nabla w(s+\sigma)\|_\infty\, d\sigma\\
&\leq CA\tau^{-1/2}+
CM^{p-1}(s,t)\int_0^\tau (\tau-\sigma)^{-1/2}\|\nabla w(s+\sigma)\|_\infty\, d\sigma\\
&\leq A_1\tau^{-1/2}+C_1K(s,t)M^{p-1}(s,t).
\end{split}
\end{equation}
Multiplying by $\tau^{1/2}$ and taking the supremum for $\tau\in [0,t-s]$, we obtain
\begin{equation}\label{Kst}
K(s,t)\leq A_1+C_1(t-s)^{1/2}K(s,t)\,M^{p-1}(s,t).
\end{equation}
Let now $t\in I$, and assume in addition that $t\ge T-\delta/2$ in case (i). We notice that, for fixed $t$, $M(s,t)$ is a continuous function of $s$ which satisfies
$$
\displaystyle\lim_{s\to (T_1)_+}(t-s)^{1/2}M^{p-1}(s,t)\ \
\begin{cases}
\ge (\delta/2)^{1/2}M^{p-1}(T-\delta,T-\delta/2)>0
&\quad \hbox{ in case (i)}\\
\noalign{\vskip 2mm}
=\infty
&\quad\hbox{ in case (ii)}\end{cases}
$$
(applying Lemma~\ref{unbddT} in case (ii)),
as well as
$\displaystyle\lim_{s\to t-}(t-s)^{1/2}M^{p-1}(s,t)=0$. So,
in both cases, we may choose some $s=s(t)\in (T_1,t)$ such that
\begin{equation}\label{Mst}
C_1(t-{s(t)})^{1/2}M^{p-1}({s(t)},t)=c_0,
\end{equation}
with $c_0\in (0,1/2]$ independent of $t$.
With this choice of $s$, it follows from \rife{Kst} that
$$
K(s(t),t)\leq 2A_1.
$$
{In particular, taking $\sigma=t-s$ in the definition of $K(s,t)$, we get
\begin{equation}\label{varcost3}
\|\nabla w(t)\|_\infty \le 2A_1(t-s(t))^{-\frac12}.
\end{equation}
}
On the other hand, a standard argument (see e.g. \cite[p.~454]{QS07}) shows that
$m(t)$ is locally Lipschitz and satisfies
\begin{equation}\label{Beqc}
|m'(t)|\le \|\nabla u_t(t)\|_\infty,\quad a.e.\ t\in I\,.
\end{equation}
Hence
$$
|m'(t)|\le C (t-s(t))^{-1/2},\quad a.e.\ t\in \tilde I\,,
$$
where $\tilde I = (T-\frac\delta2, T)$ in case (i) and $\tilde I=I$ in case (ii).
By integration, for $\tau\in [s(t),t)$, we get
$$m(\tau)=m(t)-\int_\tau^t m'(\sigma)\,d\sigma
\le m(t)+C\int_{s(t)}^t (\sigma-s(t))^{-1/2}\,d\sigma
\le m(t)+C (t-s(t))^{1/2},
$$
hence
$$
M(s(t),t)\le m(t)+C.
$$
Now going back to \rife{varcost3}, and using \rife{Mst}, we conclude that
$$
\|\nabla w(t)\|_\infty\le {2A_1} (t-s(t))^{-1/2} = {2A_1} \, c_0^{-1}C_1 \,M^{p-1}(s(t),t)\le C(m(t)+1)^{p-1}.
$$
Property \rife{Beqc} then guarantees that
$$
|m'(t)|\le C(m(t)+1)^{p-1},\quad a.e.\ t\in \tilde I.
$$
The desired estimates then follow by integration, using the fact that
$\limsup_{t\to T_\pm}m(t)=\infty$.
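For the reader's convenience, let us sketch this last integration in case (i), case (ii) being analogous. The above differential inequality gives
$$
\Bigl|\frac{d}{dt}\,(m(t)+1)^{2-p}\Bigr|=(p-2)\,(m(t)+1)^{1-p}|m'(t)|\le (p-2)C,\quad a.e.\ t\in \tilde I.
$$
Choosing a sequence $t_j\to T_-$ with $m(t_j)\to\infty$ and integrating over $(t,t_j)$, we obtain
$$
(m(t)+1)^{2-p}\le (m(t_j)+1)^{2-p}+(p-2)C(t_j-t)\to (p-2)C(T-t),\quad j\to\infty,
$$
hence $m(t)+1\ge \bigl[(p-2)C(T-t)\bigr]^{-1/(p-2)}$ as $t\to T_-$.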
\end{proof}
By modifying the proof of Theorem~\ref{minimal_rateG},
we obtain the following proposition, which gives a sufficient condition
for the GBU rate to be more singular than the minimal one.
We also give a similar property in case of immediate regularization after $T^*$.
This will be one of the ingredients of the proof of Theorem~\ref{minimal0} in Section~\ref{Sec-min}.
{We here denote $\delta(x):= {\rm dist}(x,\partial\Omega)$.}
\begin{prop}\label{contut}
Let $\phi\in X$ and assume $T^*<\infty$.
\vskip0.4em
(i) Assume that $u_t$ is continuous at $\{T^*_-\}\times\partial\Omega$, i.e.:
\begin{equation}\label{continuityutA}
\lim_{t\to T^*_-,\,\delta(x)\to 0}u_t(t,x)=0.
\end{equation}
Then
$$(T^*-t)^{1/(p-2)} \|\nabla u(t)\|_\infty\to\infty,\quad t\to T^*_-.$$
\vskip0.4em
(ii) Assume that $u$ becomes a classical solution
again (including boundary conditions)
on some interval $(T^*,T^*+\delta)$. Assume in addition that $u_t$ is continuous at $\{T^*_+\}\times\partial\Omega$, i.e.:
\begin{equation}\label{continuityutB}
\lim_{t\to T^*_+,\,\delta(x)\to 0}u_t(t,x)=0.
\end{equation}
Then
$$
(t-T^*)^{1/(p-2)} \|\nabla u(t)\|_\infty\to\infty,\quad t\to T^*_+.
$$
\end{prop}
\begin{proof}
We slightly modify the proof of Theorem~\ref{minimal_rateG}, keeping the same notation; recall, in particular, that $w(t,x):= u_t(t,x)$.
Now, in \rife{varcost}, we estimate $e^{\tau\Delta }w(s)$ by taking advantage of assumptions
\rife{continuityutA} and \rife{continuityutB}.
Fix $\varepsilon>0$. There exists $a=a(\varepsilon)>0$ and an interval
$I_\varepsilon=[T_\varepsilon,T^*)$ in case (i) and $I_\varepsilon=[T^*,T_\varepsilon)$ in case (ii), such that
$$\sup_{\Omega_a} |w(s,x)|\le \varepsilon,\quad s\in I_\varepsilon,\quad\hbox{ where $\Omega_a=\{x\in \Omega;\, \delta(x)\le a\}$}.$$
Denote by $G(t,x,y)$ the Dirichlet heat kernel of $\Omega$ and recall the Gaussian estimate:
$$|\nabla_x G(t,x,y)|\le c_1t^{-(n+1)/2}\exp\Bigl[-c_2{|x-y|^2\over t}\Bigr].$$
Using \rife{continuityutA}, \rife{continuityutB}, \rife{Beqaa}, we may then write,
for all $s\in I_\varepsilon$, {$\tau\in (0,1)$}
and $x\in \Omega_{a/2}$,
\begin{align*}
\bigl|\nabla [e^{\tau \Delta }w(s)](x)\bigr|
&=\Bigl|\int_\Omega \nabla_x G(\tau,x,y)w(s,y)\, dy\Bigr| \\
&\le c_1\varepsilon\int_{\Omega_a} \tau^{-\frac{n+1}{2}}\exp\Bigl[-c_2{|x-y|^2\over \tau}\Bigr]\, dy
+c_1A\int_{\Omega\setminus \Omega_a} \tau^{-\frac{n+1}{2}}\exp\Bigl[-c_2{|x-y|^2\over \tau}\Bigr]\, dy\\
&\le c_3\varepsilon \tau^{-1/2}
+c_1A \tau^{-\frac{n+1}{2}}\exp\Bigl[{-c_2a^2\over 4\tau}\Bigr].
\end{align*}
On the other hand, for all $s\in I_\varepsilon$, { $\tau\in (0,1)$ }
and $x\in \Omega\setminus\Omega_{a/2}$, we have $|\nabla w(s+\tau,x)|\le C(\varepsilon)$,
since the solution remains smooth for all times away from the boundary.
Putting this together, we obtain,
for all $s\in I_\varepsilon$, {$t\in (s,s+1)$ and $\tau\in (0,t-s)$, }
$$\|\nabla w(s+\tau)\|_\infty
\le c_3\varepsilon \tau^{-1/2}
+c_1A \tau^{-\frac{n+1}{2}}\exp\Bigl[{-c_2a^2\over 4\tau}\Bigr]+C(\varepsilon)+C_1K(s,t)M^{p-1}(s,t).$$
Multiplying by $\tau^{1/2}$, taking the supremum for $\tau\in [0,t-s]$,
and assuming $t-s\le \tau_0(\varepsilon)$ with $\tau_0(\varepsilon)>0$ sufficiently small, we obtain
$$K(s,t)\leq 2c_3\varepsilon+C_1(t-s)^{1/2}K(s,t)M^{p-1}(s,t).$$
Let now $t\in I$. Choosing $s=s(t)$ as in \rife{Mst}, we see that the conditions $s\in I_\varepsilon$ and $t-s\le \tau_0(\varepsilon)$
are satisfied whenever $t>T^*-\delta_\varepsilon$ (resp., $t<T^*+\delta_\varepsilon$), with $\delta_\varepsilon$ sufficiently small.
Indeed, in case~(i) this follows from
$M(s,t)\ge m(t)$ and $\lim_{t\to T^*_-}m(t)=\infty$,
whereas in case (ii) this is just a consequence of $T^*<s<t$.
Consequently, we obtain
$$K(s(t),t)\leq 4c_3\varepsilon.$$
Arguing as before, we end up with
$$
|m'(t)|\le c_4\varepsilon(m(t)+1)^{p-1},
\quad\hbox{ for a.e. $t\in(T^*-\delta_\varepsilon,T^*)$ \ (resp., $t\in (T^*,T^*+\delta_\varepsilon)$).}
$$
Integrating and using the fact that
$\limsup_{t\to T^*_\pm}m(t)=\infty$, we obtain
$$
m(t)+1\ge [(p-2)c_4\varepsilon(T^*-t)]^{-1/(p-2)}
\quad\hbox{ for a.e. $t\in(T^*-\delta_\varepsilon,T^*)$}
$$
(and similarly in case (ii)).
Since $\varepsilon>0$ was arbitrarily small, the conclusion follows.
\end{proof}
\section{ Preliminary results in one space dimension}
\label{prelim1d}
In this section, we state and prove a number of useful preliminary properties of the solution in one space-dimension,
with $\Omega=(0,1)$. {Let us recall the notation}
$$
X_1:=\{\phi\in C^1([0,1]),\ \phi(0)=\phi(1)=0\}.
$$
Hence, for $\phi\in X_1$, we consider the unique global viscosity solution $u$ of the problem
\begin{equation}\label{vhj1}
\begin{cases}
u_t-u_{xx} =|u_x|^p, & \quad t>0,\ 0<x<1,\\
u =0, & \quad t>0,\ x\in\{0,1\},\\
u(0,x) =\phi(x), & \quad 0<x<1.
\end{cases}
\end{equation}
We will here consider the specific approximation
\begin{equation}\label{app-1d}
\begin{cases}
u_{k,t}-u_{k,xx} =F_k(u_{k,x}), & \quad t>0,\ 0<x<1,\\
u_k =0, & \quad t>0,\ x\in\{0,1\},\\
u_k(0,x) =\phi(x), & \quad 0<x<1,
\end{cases}
\end{equation}
where the nonlinearity $F_k\in C^2(\mathbb R)$ is obtained by replacing $|s|^p$, for $|s|>k$, with its second-order Taylor expansion at $s=\pm k$.
More precisely, $F_k$ is the even function defined by
\begin{equation}\label{eqaaa}
F_k(s)=\begin{cases}
s^p,\quad 0\le s\le k, \\
\noalign{\vskip 1mm}
k^p+pk^{p-1}(s-k)+p(p-1)k^{p-2}\displaystyle{(s-k)^2\over 2},\quad s>k.
\end{cases}
\end{equation}
Note that $0\le F_k(s)\le |s|^p$
and $F_k$ is convex. Also, it is easy to check that $F_k$ satisfies
the assumptions of Theorem~\ref{prelimprop2} with $\theta=1/2$, as well as
\begin{equation}\label{eqaa}
2F_k(s)\le sF_k'(s)\le pF_k(s),\quad s\ge 0.
\end{equation}
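For the reader's convenience, let us verify \eqref{eqaa}. For $0\le s\le k$ we have $sF_k'(s)=ps^p=pF_k(s)\ge 2F_k(s)$ since $p>2$, whereas for $s\ge k$, writing $r=s-k\ge 0$, a direct computation gives
$$
sF_k'(s)-2F_k(s)=(p-2)k^p+p(p-2)k^{p-1}r\ge 0,
\qquad
pF_k(s)-sF_k'(s)=\tfrac{p(p-1)(p-2)}{2}\,k^{p-2}r^2\ge 0.
$$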
We start with some basic facts.
\begin{lem}\label{basic-prop0}
Let $\phi\in X_1$.
\vskip0.3em
(i) Let $t_0\in (0,T^*(\phi))$ and let the constant $M=M(t_0)$ be given by \rife{u_t-bounded}.
Then for all $t\ge t_0$, we have $u_{xx}\le M$ and the function $x\to u(t,x)-\frac{M}{2}x^2$ is concave in $(0,1)$.
Moreover, the limits
\begin{equation}\label{controlnormux0}
\displaystyle\lim_{x\to 0}u_x(t,x)\in \mathbb R\cup\{\infty\}\quad\hbox{and}\quad
\displaystyle\lim_{x\to 1}u_x(t,x)\in \mathbb R\cup\{-\infty\}
\end{equation}
exist. Whenever they are finite,
$u_x(t,0)$ and $u_x(t,1)$ exist (and respectively coincide with the limits in \eqref{controlnormux0}) and we have
\begin{equation}\label{controlnormux}
\|u_x(t)\|_\infty\le \max(u_x(t,0),-u_x(t,1))+M.
\end{equation}
\vskip0.3em
(ii) Assume in addition that $\phi\not\equiv 0$ is symmetric and nondecreasing on $(0,1/2)$. Then,
for all $t>0$, the functions $u_k(t,\cdot)$ ($k\ge 1$) and $u(t,\cdot)$ are symmetric and nondecreasing on $(0,1/2)$. Moreover, we have
\begin{equation}\label{utcenter}
u_t(t,1/2)<0,\quad t>0,
\end{equation}
and
\begin{equation}\label{uktcenter}
u_{k,t}(t,1/2)<0,\quad t>0.
\end{equation}
\end{lem}
\begin{proof}
(i) By \rife{u_t-bounded}, we have $u_{xx}=u_t-|u_x|^p\le M$ in $[t_0,\infty)\times (0,1)$
and the first part of the assertion immediately follows.
Since $u_x(t,x)-Mx$ is nonincreasing, the limits in \eqref{controlnormux0} exist.
Since $u(t,\cdot)\in C([0,1])\cap C^1(0,1)$,
the assertion after \eqref{controlnormux0} follows
and we have
$$u_x(t,1)-M\le u_x(t,x)-Mx\le u_x(t,0),\quad 0\le x\le 1,$$
which yields \eqref{controlnormux}.
(ii) The symmetry of the $u_k$ is guaranteed by their uniqueness.
Their monotonicity property is an immediate consequence of the maximum principle applied to the equation for $u_{k,x}$.
Both properties are inherited by $u$ after passing to the limit $k\to\infty$.
Let us show \eqref{utcenter}. Since $u$ is smooth in $(0,\infty)\times(0,1)$, we deduce from the strong maximum principle that $u_x>0$ for $x\in (0,1/2)$.
Since also $u_x(t,1/2)=0$,
and since the equation for $u_x$ in $(0,\infty)\times(1/4,1/2)$
has smooth bounded coefficients,
we may apply the Hopf lemma to get $u_{xx}(t,1/2)<0$, hence $u_t(t,1/2)=u_{xx}(t,1/2)+|u_x(t,1/2)|^p=u_{xx}(t,1/2)<0$, which is the desired conclusion.
The proof of \eqref{uktcenter} is similar.
\end{proof}
Our next lemma gives useful bounds on $u_x$.
We note that the bounds in Lemmas \ref{basic-bounds}--\ref{basic-prop} are already known for classical solutions (see, e.g., \cite{CG96}, \cite{ARS04}),
but since we here deal with viscosity solutions, with possible loss of boundary conditions, it is safer to prove exactly what we need.
\begin{lem}\label{basic-bounds}
Let $\phi\in X_1$ with $T^*(\phi)<\infty$. Then, for all $t\ge t_0>0$, we have
\begin{equation}\label{SGBUprofileUpperEst}
u_x(t,x) \le \bigl[\bigl(u_x(t,y)-My\bigr)_+^{1-p}+(p-1)(x-y)\bigr]^{-1/(p-1)}+\, Mx, \quad 0<y<x<1,
\end{equation}
\begin{equation}\label{SGBUprofileUpperEst2}
u_x(t,x) \le U_*'(x)+\, Mx, \quad 0<x<1,
\end{equation}
\vskip -7pt
\begin{equation}\label{SGBUprofileUpperEst3}
u_x(t,x) \ge -U_*'(1-x)-\, M(1-x), \quad 0<x<1,
\end{equation}
and
\begin{equation}\label{SGBUprofileLowerEst}
(u_x(t,x))_+ \ge \bigl[\bigl((u_x(t,y))_++My\bigr)^{1-p}+(p-1)(x-y)\bigr]^{-1/(p-1)}-\, Mx, \quad 0<y<x<1,
\end{equation}
where $M=M(t_0)$ is given by \rife{u_t-bounded}
and where the reference singular profile $U_*$ is defined in \eqref{defUstar}.
\end{lem}
\begin{proof}
For fixed $t\ge t_0$, let $z(x)=(u_x(t,x)-M x)_+$. The function $z$ satisfies
$$z'+z^p=(u_{xx}-M)\chi_{\{u_x>M x\}}+(u_x-Mx)_+^p,
\quad\hbox{for a.e. $x\in (0,1)$.}$$
For each $x$ such that $u_x(t,x)>Mx$,
we have $(z'+z^p)(x)\leq (u_{xx}-M+|u_x|^p)(x)\le 0$ by \rife{u_t-bounded}.
Therefore, we have
\begin{equation}\label{SGBUprofileUpperEstAux}
z'+z^p\le 0\quad\hbox{ a.e. on $(0,1)$.}
\end{equation}
In particular $z$ is nonincreasing on $(0,1)$.
We may assume that $E:=\{x\in (0,1);\, z(x)>0\}\neq\emptyset$
(otherwise there is nothing to prove) and, letting $a=\sup E\in (0,1]$, we have $z>0$ on $(0,a)$.
For each $x\in(0,a)$, by integrating
\eqref{SGBUprofileUpperEstAux} we get $z^{1-p}(x)\ge z^{1-p}(y)+(p-1)(x-y)$
for all $y\in (0,x)$.
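Indeed, since $z>0$ and $z$ is locally Lipschitz on $(0,a)$, \eqref{SGBUprofileUpperEstAux} gives
$$
\bigl(z^{1-p}\bigr)'(x)=(1-p)\,z^{-p}(x)\,z'(x)\ge p-1 \quad\hbox{ a.e. on $(0,a)$,}
$$
and it suffices to integrate this inequality over $(y,x)$.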
Since $z\le 0$ on $[a,1)$, we obtain inequality \eqref{SGBUprofileUpperEst}.
Next, inequality \eqref{SGBUprofileUpperEst2} follows by letting $y\to 0$ in \eqref{SGBUprofileUpperEst},
and \eqref{SGBUprofileUpperEst3} follows by applying \eqref{SGBUprofileUpperEst2} to the solution $v(t,x):=u(t,1-x)$.
To prove \eqref{SGBUprofileLowerEst} we now let
$z(x)=(u_x(t,x))_++\,Mx.$
The function $z$ satisfies
$$
z'+z^p=u_{xx} \chi_{\{u_x>0\}}+M+\bigl[(u_x(t,x))_++\,Mx\bigr]^p
\ge (u_{xx}+|u_x|^p) \chi_{\{u_x>0\}}+M\ge 0$$
a.e.~on $(0,1)$ by \rife{u_t-bounded}.
By integration, noting that $z>0$ on $(0,1)$, we get
$z^{1-p}(x)\le z^{1-p}(y)+(p-1)(x-y)$
and inequality \eqref{SGBUprofileLowerEst} follows.
\end{proof}
The next result guarantees that unboundedness of $u_x$ near a given time $t\ge T^*$
implies that the space behavior at that time is described by the reference singular profile $U_*$.
\begin{lem}\label{basic-prop}
Let $\phi\in X_1$ with $T^*(\phi)<\infty$.
Fix any $t_0\in (0,T^*)$ and let $M(t_0)$ be given by \rife{u_t-bounded}.
If, for some $t\ge T^*$, there exists a sequence $(t_j, x_j)\to (t,0)$ such that $u_x(t_j,x_j)\to \infty$, then
\begin{equation}\label{profile2}
|u(t,x)-u(t,0)-U_*(x)| \le K \, {\frac{x^2}2}, \qquad 0< x\le 1/2,
\end{equation}
\begin{equation}\label{profile}
|u_x(t,x)-U_*'(x)| \le K x, \qquad 0<x\le 1/2,\end{equation}
and
\begin{equation}\label{profile3}
|u_{xx}(t,x)-U_*''(x)| \le K, \qquad 0<x\le 1/2,\end{equation}
for some constant $K>0$ depending only on $p$ and $M(t_0)$.
\end{lem}
\begin{proof}
Let us first show \rife{profile}.
Since the upper bound in \rife{profile} is guaranteed by Lemma~\ref{basic-bounds}, we only need to show the lower bound.
To this end, writing \eqref{SGBUprofileLowerEst} with $t=t_j$, $y=x_j$ and letting $j\to\infty$, we have
\begin{equation}\label{SGBUprofileUpperEstAux2}
(u_x(t,x))_+ \ge ((p-1)x)^{-1/(p-1)}-\, Mx, \quad 0<x\le 1/2,
\end{equation}
with $M=M(t_0)$. Since the RHS of \eqref{SGBUprofileUpperEstAux2} is positive for $x<x_0:=(M^{1-p}/(p-1))^{1/p}$, we get
\begin{equation}\label{SGBUprofileUpperEstAux3}
u_x(t,x) \ge U_*'(x)-\, Mx, \quad 0<x<x_0.
\end{equation}
On the other hand, by \eqref{SGBUprofileUpperEst3}, we have
$$u_x(t,x)\ge -U_*'(1/2)-\, M, \quad x_0\le x\le 1/2.$$
Now choosing
$$K=K(p,M):=\max\bigl[M,x_0^{-1}(U_*'(1/2)+U_*'(x_0)+M)\bigr],$$
we get
$$
-U_*'(1/2)-\, M\ge U_*'(x_0)-\, Kx_0\ge U_*'(x)-\, Kx, \quad x_0\le x\le 1/2.
$$
This together with \eqref{SGBUprofileUpperEstAux3} yields the lower estimate on $u_x$ in \rife{profile}.
To show \rife{profile3}, we use \rife{profile} to write:
\begin{equation}\label{SGBUprofileUpperEstAux4}
|u_{xx}(t,x)-U_*''(x)|\le |u_t|+\bigl||U_*'|^p-|u_x|^p\bigr|\le |u_t|+|U_*'+Kx|^p-U_*'^p, \quad 0<x\le 1/2.
\end{equation}
On the other hand, we have
\begin{equation}\label{SGBUprofileUpperEstAux5}
|U_*'+Kx|^p-U_*'^p=U_*'^p\bigl[(1+cKx^{p/(p-1)})^p-1\bigr]\le cKU_*'^px^{p/(p-1)}=cK,
\end{equation}
where $c=c(p)$ denotes a generic positive constant.
Property \rife{profile3} then follows from \eqref{SGBUprofileUpperEstAux4},
\eqref{SGBUprofileUpperEstAux5} and \rife{u_t-bounded}.
Finally, \rife{profile2} follows from \rife{profile} by integration.
\end{proof}
We now point out that, should the solution $u$ have lost the boundary condition at some $t_0>0$,
then necessarily the gradient must blow up near the boundary at this time. This is a consequence of general results of \cite{BDL04}.
However, we shall provide a direct and more elementary proof in the next lemma.
\begin{lem}\label{bdl}
Let $\phi\in X_1$ with $T^*(\phi)<\infty$
and assume that, for some $t>T^*$, we have
\begin{equation}\label{HypInfiniteDeriv}
u(t,0)>0.
\end{equation}
Then we have
\begin{equation}\label{InfiniteDeriv}
\lim\limits_{ x \to 0_+} u_x(t,x) =\infty.
\end{equation}
\end{lem}
\begin{proof}
Fix some $t_0\in (0,T^*(\phi))$ and let $M=M(t_0)$ be given by \rife{u_t-bounded}.
Set $V(x)=u(t,x)$, $V_k(x)=u_k(t,x)$.
By Lemma~\ref{basic-prop0}, the function $V_x-Mx$ is nonincreasing and has a limit (finite or $+\infty$) as $x\to 0$.
Assume for contradiction that this limit is finite. Then there exists $A>0$ such that
$$
V_x(x)\le A+Mx\le A+M,\quad 0<x<1.
$$
Let $z(x)=(V_{k,x}(t,x))_++\,Mx.$
Then, owing to \rife{eqabA}, for all $k\ge k_0$, the function $z$ satisfies
$$
z'+z^p=V_{k,xx} \chi_{\{V_{k,x}>0\}}
+M+\bigl[(V_{k,x})_++\,Mx\bigr]^p
\ge (V_{k,xx}+|V_{k,x}|^p) \chi_{\{u_{k,x}>0\}}+M\ge 0
$$
almost everywhere on $(0,1)$.
By integration, noting that $z>0$ on $[0,1]$, we get
$$
z^{1-p}(y)\le z^{1-p}(x)+(p-1)(y-x),
\quad 0<x<y\le 1/2.
$$
Now fix $y_0\in (0,1/2)$ such that $y_0\le {(A+2M+1)^{1-p}\over 2(p-1)}$.
For $k\ge k_1$ sufficiently large, we have
$$
V_{k,x}(y_0)\le V_x(y_0)+1\le A+M+1,
$$
hence
$$
\bigl((V_{k,x})_+(x)+\,Mx\bigr)^{1-p}
\ge (A+2M+1)^{1-p}-(p-1)y_0\ge \textstyle\frac12(A+2M+1)^{1-p},
\quad 0<x<y_0.
$$
Therefore,
$$
V_{k,x}(x)\le C:={2^{\frac1{p-1}}}(A+2M+1),
\quad 0<x<y_0.
$$
Since $V_{k}(0)=0$, by integration we get
$$
V_{k}(x)\le Cx,
\quad 0<x<y_0.
$$
Passing to the limit $k\to\infty$ for each fixed $x\in(0,y_0)$, we get
$V(x)\le Cx$ for $0<x<y_0$. Then passing to the limit $x\to 0$, we finally obtain
$u(t,0)=V(0)=0$, contradicting the loss of boundary condition assumption \eqref{HypInfiniteDeriv}.
\end{proof}
As a key tool in order to
study the regularization after $T^r$ (respectively $T^*$) for nonminimal (respectively minimal) solutions,
we shall perform an analysis in terms of the distance of $u(t,\cdot)$ to the reference profile $U_*$.
The next result is a regularizing barrier argument showing how the smoothing rate can be related to the gap between $U_*$ and $u$.
\begin{lem}\label{barrier}
Let $\phi\in X_1$ with $\phi$ symmetric. {We set $\alpha= \frac1{p-1}$.}
Let $T>0$, $m\in [2, 3-\alpha)$ and assume that
\begin{equation}\label{BeqabAAa}
u(T,x)\le U_*(x)-bx^m, \quad 0<x<1,
\end{equation}
for some $b>0$. Then $u$ is a classical solution of problem \rife{vhj1} (including boundary conditions)
for all $t\in(T,\infty)$. Moreover, we have
\begin{equation}\label{BeqabAAb}
\|u_x(t)\|_\infty\le C(t-T)^{-\gamma},\quad t\to T_+,
\quad\hbox{ where }
\gamma={\alpha\over 3-m-\alpha}.
\end{equation}
We note that $\gamma={1\over p-2}$ for $m=2$ and that
$\gamma={1\over (3-m)(p-1)-1}>{1\over p-2}$ for $2<m<3-\alpha$.
\end{lem}
\begin{rem} \label{RemSepar}
Lemma~\ref{barrier} shows that the quadratic separation is sufficient in order to have the minimal smoothing rate (${1\over p-2}$ as in Theorem~\ref{minimal_rate}) and that no improvement on \rife{BeqabAAb} can be expected
if we make the stronger assumption \rife{BeqabAAa} with some $m<2$ (for instance $m=1$).
This will also be made apparent in the following proof.
\end{rem}
\begin{proof}[Proof of Lemma~\ref{barrier}]
In this proof we shall a priori consider the whole range of values $m\in [1,\infty)$,
in order to understand the phenomenon described in Remark~\ref{RemSepar}.
Shifting the origin of time to $t=T$ and denoting $U=U_*$, we look for a supersolution of the form
$$z(t,x):=U(x+a(t))-U(a(t))-bx^m,\quad 0\le t\le\delta,\ 0\le x\le 1,$$
for some $\delta>0$ and some function $a\in C([0,\delta])\cap C^1((0,\delta])$
with $a(0)=0$ and $a(t)>0$.
Writing $a$ for $a(t)$, we compute
\begin{align*}
Pz
&:=z_t-z_{xx}-|z_x|^p \\
&=[U'(x+a)-U'(a)]a'(t)-U''(x+a)+bm(m-1)x^{m-2}-|U'(x+a)-mbx^{m-1}|^p \\
&=[U'(x+a)-U'(a)]a'(t)+bm(m-1)x^{m-2}+[{U'}^p(x+a)-|U'(x+a)-mbx^{m-1}|^p].
\end{align*}
Assuming $a\le 1$ and (without loss of generality) $b<b_0(p)$ with $b_0(p)>0$ small enough,
we have $mbx^{m-1}\alpha^{-\alpha}(x+a)^{\alpha}\le 1/2$ for all $x\in (0,1)$.
We then compute
\begin{align*}
{U'}^p(x+a)-|U'(x+a)-mbx^{m-1}|^p
&=\alpha^{p\alpha}(x+a)^{-p\alpha}-\bigl[\alpha^{\alpha}(x+a)^{-\alpha}-mbx^{m-1}\bigr]^p\\
&=\alpha^{p\alpha}(x+a)^{-p\alpha} \bigl[1- \bigl[1-mbx^{m-1}\alpha^{-\alpha}(x+a)^{\alpha}\bigr]^p\bigr] \\
&\ge\alpha^{p\alpha}(x+a)^{-p\alpha}
\bigl[2^{1-p}pmbx^{m-1}\alpha^{-\alpha}(x+a)^{\alpha}\bigr]\\
&=2^{1-p}pmbx^{m-1}\alpha(x+a)^{-1},
\end{align*}
where we used $(p-1)\alpha=1$ and
$(1-t)^p\leq 1-2^{1-p}pt$ for $t\in (0,1/2)$.
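(The latter elementary inequality follows since the function $h(t):=1-2^{1-p}pt-(1-t)^p$ satisfies $h(0)=0$ and $h'(t)=p\bigl[(1-t)^{p-1}-2^{1-p}\bigr]\ge 0$ on $(0,1/2)$.)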
The inequality $Pz\ge 0$ in $(0,\delta]\times (0,1)$ will thus be ensured provided we have
\begin{equation}\label{BeqabAA}
[U'(a)-U'(x+a)]a'(t)\le bm(m-1)x^{m-2}+2^{1-p}pmbx^{m-1}\alpha(x+a)^{-1}.
\end{equation}
We distinguish now two cases according to whether $m=1$ or $m>1$.
\begin{itemize}
\item For $m=1$, \rife{BeqabAA} reduces to
$$
a'(t)\le {2^{1-p}pb{\alpha}\over \overline D_a},
\quad\hbox{ where
$\overline D_a=\displaystyle\sup_{x\in (0,1)}D_a(x)$,
\quad
$D_a(x):=[U'(a)-U'(x+a)](x+a)>0$}.
$$
{Since $U'$ is a decreasing function, }
we see that $D_a(x)$ is an increasing function of $x$,
hence $\overline D_a=D_a(1)=[U'(a)-U'(1+a)](1+a)\le 2U'(a)$
and a sufficient condition for $Pz\ge 0$ is given by
$$a'(t)\le {2^{-p}pb{\alpha}\over U'(a)}=c(p)b a^\alpha.$$
We thus choose
\begin{equation}\label{BeqabA}
a(t)=\eta\, t^{1/(1-\alpha)}
\end{equation}
with $\eta$ sufficiently small.
\item For $m>1$, we use only the first term on the RHS of
\rife{BeqabAA}, since it dominates the second one.
Condition \rife{BeqabAA} then reduces to
\begin{equation}\label{lookfora}
a'(t)\le {m(m-1)b\over \overline D_a},
\quad\hbox{ where $D_a(x):=[U'(a)-U'(x+a)]x^{2-m}>0$}.
\end{equation}
Observe that we have $\overline D_a\ge D_a(1)=U'(a)-U'(1+a)\ge c(p)U'(a)$.
$\bullet$ If $1<m\le 2$, then $\overline D_a\le U'(a)$
and we would make the same choice as in \rife{BeqabA}.
$\bullet$ Next consider the case $m>2$.
If $0<x\le a$, then $D_a(x)=-U''(a+\theta x)\,x^{3-m}$ for some $\theta(x)\in (0,1)$.
{In particular, notice that if $m>3$ we have $\overline D_a=\infty$ and the inequality \rife{lookfora} is impossible. Thus we already need to restrict to $2<m\le 3$. Now, }for $0<x\le a$, we have $D_a(x)\le c\,|U''(a)|\,a^{3-m}=c\,a^{2-\alpha-m}$.
For $a<x<1$, we have $D_a(x)\le U'(a)a^{2-m}=c\,a^{2-\alpha-m}$.
Therefore we have $c_0a^{2-\alpha-m}\le \overline D_a\le c_1\, a^{2-\alpha-m}$ for different constants $c_0,c_1$.
So we need a function $a$ such that $a'(t)\le c\,a^{\alpha+m-2}$.
However, since $a(0)=0$ and $a(t)>0$, this induces the further restriction
$\alpha+m-2<1$ i.e., $m<3-\alpha$ and leads to the choice
\begin{equation}\label{BeqabB}
a(t)=c\,t^{1/(3-m-\alpha)}.
\end{equation}
{Incidentally, we notice how the above computations offer a motivation for the restriction $m\in [2,3-\alpha)$ required in our assumptions.}
\end{itemize}
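For the reader's convenience, let us verify directly that the choices \rife{BeqabA} and \rife{BeqabB} made above do satisfy the corresponding differential inequalities (this is an elementary computation, recorded here only as a check). For \rife{BeqabA}, $a(t)=\eta\, t^{1/(1-\alpha)}$ gives
$$a'(t)={\eta\over 1-\alpha}\, t^{\alpha/(1-\alpha)}={\eta^{1-\alpha}\over 1-\alpha}\, a^\alpha(t)\le c(p)b\, a^\alpha(t)
\quad\hbox{ as soon as } \eta^{1-\alpha}\le (1-\alpha)\,c(p)b,$$
while for \rife{BeqabB}, $a(t)=c\, t^{1/(3-m-\alpha)}$ gives
$$a'(t)={c\over 3-m-\alpha}\, t^{(m+\alpha-2)/(3-m-\alpha)}={c^{3-m-\alpha}\over 3-m-\alpha}\, a^{\alpha+m-2}(t),$$
which is admissible for $c>0$ small, precisely because $3-m-\alpha>0$.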
{Now,} for each $k\ge 1$, since $z(0,x)=U(x)-bx^m\ge u(T,x)\ge u_k(T,x)$
and $z\ge 0=u_k$ at $x\in\{0,1\}$ (decreasing $b>0$ if necessary), the comparison principle yields $z(t,\cdot)\ge u_k(T+t,\cdot)$,
hence $z(t,\cdot)\ge u(T+t,\cdot)$,
{for all $t\in (0,\delta]$.}
In particular, $u$ satisfies the boundary conditions in the classical sense for all {$t\in (T,T+\delta]$.}
Moreover, we have
\begin{equation}\label{BeqabA2}
u_x(T+t,0)\le z_x(t,0)=U'(a(t))=c\, a^{-\alpha}(t),\quad 0<t<\delta.
\end{equation}
Fix $0<t_0<\min(T^*,T)$ and let $M=M(t_0)$ be given by \rife{u_t-bounded}. Combining \rife{BeqabA2} with \rife{BeqabA} or \rife{BeqabB},
{estimate \rife{BeqabAAb}}
follows from symmetry and the fact that $u_x-Mx$ is nonincreasing in $x$ by Lemma~\ref{basic-prop0}.
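Explicitly, \rife{BeqabA} gives $a^{-\alpha}(t)=\eta^{-\alpha}t^{-\alpha/(1-\alpha)}=\eta^{-\alpha}t^{-1/(p-2)}$, while \rife{BeqabB} gives $a^{-\alpha}(t)=c\,t^{-\alpha/(3-m-\alpha)}=c\,t^{-\gamma}$, which is precisely the rate appearing in \rife{BeqabAAb}.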
{Finally, since $z_t\le 0$, we have $u(T+t,x)\le z(t,x)\le U(x)-bx^m$ on $(0,\delta]\times [0,1]$; we may therefore repeat the comparison on each time interval
$[T+j\delta,T+(j+1)\delta]$, with $j$ integer, to deduce that $u(t,x)\le U(x)-bx^m$ on $[T,\infty)\times [0,1]$.
The argument in the preceding paragraph then guarantees that $u$ is a classical solution (including boundary conditions) on $(T,\infty)\times [0,1]$.}
\end{proof}
In the next elementary ODE lemma, we transform information on the behavior of $u_t$ near the boundary
(such information will be obtained in Section~\ref{Sec-nonmin})
into a separation property of $U_*-u$.
\begin{lem}\label{separation} Let $\phi\in X_1$, $T>0$ and $\ell\ge 0$.
\vskip0.4em
(i) Assume that
\begin{equation}\label{Beqdd0}
u_t(T,x)\ge -bx^\ell,\quad 0<x\le 1/2
\end{equation}
for some $b>0$, along with
\begin{equation}\label{Beqdd}
\lim_{x\to 0}u_x(T, x)=\infty.
\end{equation}
Then there exist constants $c_1, c_2>0$ such that
$$
u(T,x)\ge u(T,0)+U_*(x)-c_1x^{\ell+2}
\quad\hbox{ and }\quad
u_x(T,x)\ge U'_*(x)-c_2x^{\ell+1},
\qquad 0<x\le 1/2.
$$
\vskip0.4em
(ii) Assume that
\begin{equation}\label{HypLemmeODE}
u_t(T,x)\le -bx^\ell,\quad 0<x\le 1/2
\end{equation}
for some $b\geq 0$. Then we have
$$
u(T,x)\le u(T,0)+U_*(x)-c_1\, x^{\ell+2}
\quad\hbox{ and }\quad
u_x(T,x)\le U'_*(x)-c_2\, x^{\ell+1},
\qquad 0<x\le 1/2,
$$
for some constants $c_1, c_2>0$ if $b>0$, or with $c_1=c_2=0$ if $b=0$.
\end{lem}
\vskip0.4em
\begin{rem} We will actually use assertion (ii) in the case $b=0$ only.
In particular this says that whenever $u_t(t,x)\leq 0$ and $u$ satisfies the boundary condition $u(t,0)=0$,
then $u$ lies below the singular profile $U_*(x)$.
However we present the general conclusion of part (ii) (with, possibly, $b>0$), as it shows the optimality of the estimate in part (i).
\end{rem}
\begin{proof}
Set $V(x)=u_x(T,x)$.
(i) By \eqref{Beqdd} and Lemma \ref{basic-prop} (see \rife{profile}), we have $V(x)\ge U_*'(x)-c\,x$ for all $x\in (0,1/2]$. {Here and below, $c$ will denote a generic positive constant. }
In particular there exists $x_0\in (0,1/2]$ such that
\begin{equation}\label{Beqe}
V(x)\ge \textstyle\frac12 U_*'(x)>0,\quad 0<x\le x_0.
\end{equation}
By assumption \eqref{Beqdd0}, we thus have
\begin{equation}\label{BeqdA}
-V'=V^p-u_t(T,x)\le V^p+bx^\ell,\quad 0<x\le x_0.
\end{equation}
Next using \rife{BeqdA}, \rife{Beqe} and recalling $p\alpha=\alpha+1$, {we have}
$$
{1\over p-1}(V^{1-p})' =-V'V^{-p} \le 1+bx^\ell V^{-p}
\le 1+c\,x^{\ell+\alpha+1},\quad 0<x<{x_0}.
$$
Integrating and using assumption \rife{Beqdd} again, we get
$$
V^{1-p}(x)\le (p-1)x[1+c\,x^{\ell+\alpha+1}],\quad 0<x<{x_0},
$$
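(The integration over $(0,x)$ is legitimate here because \rife{Beqdd} forces $V(x)\to\infty$, hence $V^{1-p}(x)\to 0$, as $x\to 0_+$; moreover $\int_0^x c\,s^{\ell+\alpha+1}\,ds\le c\,x^{\ell+\alpha+2}$ after enlarging the generic constant $c$.)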
hence
$$
V(x) \ge U_*'(x)[1+c\,x^{\ell+\alpha+1}]^{-\alpha}
\ge U_*'(x)[1-c\,x^{\ell+\alpha+1}]=U_*'(x)-c\,x^{\ell+1},\quad 0<x<{x_0}.
$$
In view of {\rife{profile}},
decreasing $c$ if necessary, we deduce
$$
V(x) \ge U_*'(x)-c\,x^{\ell+1},\quad 0<x\le 1/2,
$$
and assertion (i) follows by a further integration.
\vskip1em
(ii) Since $-V'=|V|^p-u_t(T,x)\ge 0$ owing to \rife{HypLemmeODE}, {we have that $V$ is decreasing in $(0,1/2)$; hence, due to \rife{Beqdd},
there exists ${x_1}\in (0,1/2]$ such that
$V>0$ in $(0,{x_1})$ and $V\le 0$ in $(x_1,1/2]$.}
By \rife{HypLemmeODE} we now have $-V'\ge V^p+bx^\ell$ in $(0,x_1)$, hence
\begin{equation}\label{BeqdA3}
{1\over p-1}(V^{1-p})' =-V'V^{-p} \ge 1+bx^\ell V^{-p},\quad 0<x< x_1.
\end{equation}
Since $\frac{(V^{1-p})'}{p-1}\geq 1$, we first deduce that $V\le U_*'$ in $(0,x_1)$ and so
\begin{equation}\label{BeqdA4}
V\le U_*',\quad 0<x\le 1/2,
\end{equation}
since $V\le 0$ in $[x_1,1/2]$.
In the case $b=0$, this yields the desired conclusion.
Now assume $b>0$. Combining \eqref{BeqdA3} and \eqref{BeqdA4}, we obtain
$$
{1\over p-1}(V^{1-p})' \ge 1+c\,x^{\ell+\alpha+1},\quad 0<x<x_1.
$$
Consequently,
$$
V^{1-p}\ge (p-1)x[1+c\,x^{\ell+\alpha+1}],\quad 0<x<x_1.$$
Taking a smaller constant $c$ if necessary, we deduce
$$V(x) \le U_*'(x)[1+c\,x^{\ell+\alpha+1}]^{-\alpha}
\le U_*'(x)[1-c\,x^{\ell+\alpha+1}]=U_*'(x)-c\,x^{\ell+1},\quad 0<x<x_1,$$
and this remains true on the remaining interval $[x_1,1/2]$ where $V\le 0$.
{The estimate for $u(T,x)$} follows by a further integration.
\end{proof}
\vskip0.4em
\begin{rem} \label{remb0}
As a consequence of the above proof, we also have that if
$u_t(T,x)\ge 0$ in $(0,a]$ for some $T\ge T^*$ and $a\in (0,1/2)$, along with \rife{Beqdd}, then
$$
u(T,x)\ge u(T,0)+U_*(x)
\quad\hbox{ and }\quad
u_x(T,x)\ge U'_*(x),
\qquad 0<x\le a.
$$
\end{rem}
We stress a {simple but} useful
consequence of the comparison between $u$ and $U_*$ which occurs if $u_t\leq 0$ at some time $t_0$.
\begin{prop}\label{utneq} Let $\phi\in X_1$ with $\phi$
symmetric and nondecreasing on $[0,1/2]$.
Assume that there exists $t_0>0$ such that
$$
u_t(t_0,\cdot) \leq 0\quad\hbox{ in $(0,1)$} \qquad\hbox{and}\qquad u(t_0,0)=0.
$$
Then we have
\begin{equation}\label{compuUstar}
u(t,x)\leq U_*(x) \quad\hbox{ in $[t_0,\infty)\times[0,1]$,}
\end{equation}
\begin{equation}\label{compuxUstar}
u_x(t,x)\leq U_*'(x) \quad\hbox{ in $[t_0,\infty)\times (0,1)$,}
\end{equation}
\begin{equation}\label{computUstar}
u_t(t,x) \leq 0 \quad\hbox{ in $[t_0,\infty)\times (0,1)$.}
\end{equation}
\end{prop}
\begin{proof} Property \eqref{compuUstar} at $t=t_0$ is true as a consequence of Lemma~\ref{separation}(ii). Since $u_k\leq u$, we also have $u_k(t_0,x)\leq U_*(x)$ for all $k\geq 1$. Since $U_*$ is a supersolution
of problem \rife{app-1d}, by the comparison principle we deduce that $u_k(t,x)\leq U_*(x)$ for all $t>t_0$. Passing to the limit as $k\to \infty$, we get property \eqref{compuUstar} (in particular, $u$ does not lose the boundary condition after $t_0$).
In order to verify \eqref{computUstar}, fixing any $h>0$,
it suffices to show that
$$u(t+h,x)\le u(t,x)\quad\hbox{ in $[t_0,\infty)\times [0,1]$.}$$
To this end, fix any $\tau>0$, and set $Q:=(0,\tau)\times (0,1)$ and $Pv:=v_t-v_{xx}-|v_x|^p$.
First applying the comparison principle in Proposition~\ref{compP} with $v_1(t,x)=u(t_0+t,x)$, $v_2(t,x)=u(t_0,x)$ and noting that
$$Pv_2=-[u_{xx}+|u_x|^p](t_0,x)=- u_t(t_0,x)\ge 0\quad\hbox{ in Q},$$
and that $v_1=0$ at $x\in \{0,1\}$ by \eqref{compuUstar},
we obtain $u(t_0+t,x)\le u(t_0,x)$ in $Q$.
Next apply the comparison principle with $v_1(t,x)=u(t_0+t+h,x)$, $v_2(t,x)=u(t_0+t,x)$. Since $v_1(0,x)=u(t_0+h,x)\le u(t_0,x)=v_2(0,x)$ by the previous step,
we reach the desired conclusion.
Finally, \eqref{compuxUstar} follows as a consequence of \eqref{computUstar} and Lemma~\ref{separation}(ii) with $b=0$.
\end{proof}
\section{Zero-number}
\label{SecZ}
In this section, we continue the study of the one-dimensional case, with $\Omega=(0,1)$,
turning our attention to the zero-number properties of $u_t$.
The function $w:=u_t$ is a classical solution of the homogeneous linear parabolic equation
\begin{equation}\label{eqnut}
w_t-w_{xx}=b(t,x)w_x\quad\hbox{ in $Q:=(0,\infty)\times (0,1)$},
\end{equation}
with coefficient given by $b(t,x):=p|u_x|^{p-2}u_x$.
This allows for using the powerful tool of zero-number.
To this purpose, we will need to consider initial data $\phi \in C^2([0,1])$ which are compatible at order two, that~is:
\begin{equation}\label{compat2}
\phi=\phi''+|\phi'|^p=0\qquad \hbox{at $x=0$ and $x=1$.}
\end{equation}
This guarantees that $u_t$ is continuous up to $t=0$ and up to the boundary,
i.e. $u_t\in C([0,T^*)\times [0,1])$, with
\begin{equation}\label{ut0}
u_t(0,\cdot)=\phi''+|\phi'|^p.
\end{equation}
\begin{defn}
For any function $\psi\in C(0,1)$, we denote by $Z(\psi)$ the number of sign changes of $\psi$ in the interval $(0,1)$. Precisely, we have
$$Z(\psi)=\sup\bigl\{m\in \mathbb N;\ \hbox{there exist $0<x_0<\dots<x_m<1,\ \psi(x_{i-1}) \psi(x_i)<0$, $i=1,\dots,m$}\bigr\}$$
(with the convention $Z(\psi)=0$ if $ \psi$ does not change sign).
In particular,
if $\phi \in C^2([0,1])$ is compatible at order two,
we set
$$N(t)=Z(u_t(t,\cdot)),\quad t\ge 0,$$
where $u$ is the unique viscosity solution of \rife{vhj1}.
\end{defn}
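As a simple illustration of this definition (not needed in what follows), the function $\psi(x)=\sin(3\pi x)$ changes sign exactly at $x=1/3$ and $x=2/3$, so that $Z(\psi)=2$; this is the sign pattern that $u_t(t,\cdot)$ will display for the initial data considered below when $N(0)=2$.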
Let us first consider the time range $t\in (0,T^*)$. There, the function $w=u_t$ is a {\it classical solution of \eqref{eqnut} up to the boundary}, and it satisfies the
Dirichlet condition $w(t,0)=w(t,1)=0$. Consequently, the fundamental properties of the zero-number (cf.~\cite{Matano}, \cite{An88}) will be valid.
More generally, for the problem
\begin{equation}\label{genapproxpbm}
\begin{cases}
v_t-v_{xx} =F(v_x), & \quad t>0,\ x\in (0,1),\\
v(t,x)=0, & \quad t>0,\ x\in \{0,1\},\\
v(0,x)=\phi(x), & \quad x\in (0,1),
\end{cases}
\end{equation}
with $F\in W^{2,\infty}_{loc}(\mathbb R,\mathbb R)$, we have the following result as a consequence of \cite{An88}:
\begin{prop}\label{zeronumberut}
Let $F\in W^{2,\infty}_{loc}(\mathbb R,\mathbb R)$ and assume that $\phi\in W^{3,\infty}(0,1)$ satisfies $\phi=\phi''+F(\phi')=0$ for $x\in\{0,1\}$ and $Z(\phi''+F(\phi'))<\infty$.
Let $0<T\le \infty$ and let $v\in C^{1,2}([0,T)\times[0,1])$ be a classical solution of \eqref{genapproxpbm} on $(0,T)$. Then:
\begin{itemize}
\item[(i)] The function $Z(v_t(t))$ is nonincreasing on $[0,T)$.
\item[(ii)] For each $t\in (0,T)$, the zero set $\{x\in [0,1];\ v_t(t,x)=0\}$ is finite.
\item[(iii)] If $v_t$ has a degenerate zero, i.e.~$v_t(t_0,x_0)=v_{tx}(t_0,x_0)=0$, for some $t_0\in (0,T)$ and $x_0\in [0,1]$,
then $Z(v_t(t))$ drops at $t=t_0$, namely:
$$Z(v_t(s))<Z(v_t(t)),\quad 0<s<t_0<t<T.$$
\item[(iv)] For any $\varepsilon\in\mathbb R^*$, properties (i)-(iii) remain valid if the function $v_t$ is replaced with $v_t+\varepsilon$.
\end{itemize}
\end{prop}
\begin{rem}
(a) In particular, for $\phi$ as above, properties (i)-(iii) are true for
$v=u$, $T=T^*$ and $N(t)=Z(u_t(t))$.
(b) Assertion (iv) of Proposition~\ref{zeronumberut} is motivated by the study of the behavior of the solution for $t>T^*$.
Indeed, to our purposes it will be useful to consider the zero-number of the perturbed function $u_t+\varepsilon$ with small $\varepsilon$.
\end{rem}
\begin{proof}
The function $w=v_t\in C([0,T)\times [0,1])$
is a classical solution of
\begin{equation}\label{genapproxpbm2}
\begin{cases}
w_t-w_{xx} =b(t,x)w_x, & \quad 0<t<T,\ x\in (0,1),\\
w(t,x)=0, & \quad 0\le t<T,\ x\in \{0,1\},\\
\end{cases}
\end{equation}
with coefficient $b(t,x)=F'(v_x)$.
Assertions (i)-(iii) follow from \cite{An88} provided we show that $b, b_x, b_t\in L^\infty((0,\tau)\times (0,1))$ for each $\tau\in (0,T)$.
This is clear for $b$ and, since $b_x=F''(v_x)v_{xx}$ a.e.,
this is also true for $b_x$.
Let us consider $b_t=F''(v_x)v_{tx}$ a.e.
Since $w(0,\cdot)\in W^{1,\infty}(0,1)$,
a standard fixed-point argument, based on the variation of constants formula and heat semigroup estimates,
guarantees that $w_x$, hence $b_t$, is bounded for small $t>0$, hence on $(0,\tau)$.
As for assertion (iv), since $w=v_t+\varepsilon$ is a classical solution of
$$
\begin{cases}
w_t-w_{xx} =b(t,x)w_x, & \quad 0<t<T,\ x\in (0,1),\\
w(t,x)=\varepsilon, & \quad 0\le t<T,\ x\in \{0,1\},\\
\end{cases}
$$
in view of the above properties of $b$, it follows from Theorem D in \cite{An88}.
\end{proof}
Zero-number properties of $u_t$ are not a priori valid after $T^*$,
especially in case of loss of boundary conditions of $u$, since then $u_t$ is no longer controlled (it does not even need to exist) on the boundary.
In order to look for some possible zero-number properties of $u_t$ after $T^*$, it is natural to take advantage of the approximation
of $u$ by the global classical solutions $u_k$ of the truncated problems {\rife{app-1d}}.
Let us thus denote by
$$N_k(t)=Z(u_{k,t}(t,\cdot)),\quad t\ge 0,$$
the number of sign changes of $u_{k,t}(t,.)$.
It is not known whether the monotonicity property of $N(t)$ in Proposition~\ref{zeronumberut} remains true in general for $t>T^*$.
It will turn out that this is indeed the case when $N(0)\le 4$ (see Proposition~\ref{zeronumbermonotone}),
but this will follow a posteriori as a consequence of the full analysis of the behavior of the global viscosity solution.
In order to carry out this analysis, the following weaker zero-number property will be sufficient for our needs.
It is valid independently of the value of $N(0)$ and can be easily deduced from Proposition~\ref{zeronumberut}.
\begin{prop}\label{zeronumberut2}
Assume that $\phi\in W^{3,\infty}(0,1)$ is compatible at order two and satisfies $N(0)\equiv Z(\phi''+|\phi'|^p)<\infty$.
Then, for all $t_0\in [0,T^*)$, we have
\begin{equation}\label{utalternate0A}
N(t)\le N(t_0),\quad t_0<t<\infty.
\end{equation}
In addition, for any $\varepsilon\in\mathbb R^*$, we have
\begin{equation}\label{utalternate0B}
Z(u_t(t)+\varepsilon)\le Z(u_t(t_0)+\varepsilon),\quad t_0<t<\infty.
\end{equation}
\end{prop}
\begin{proof}
By property (i) of Proposition~\ref{zeronumberut} applied to problem {\rife{app-1d}},
we have
$N_k(t)\le N_k(t_0)$ for all $k>\|\phi'\|_\infty$.
By property \eqref{eqabA0}, it follows that
$$
N_k(t)\le N(t_0)\quad\hbox{ for all $k\ge k_0(t_0)$.}
$$
Assume for contradiction that $N(t)\ge m:=N(t_0)+1$. Then there exist
$0<x_0<\dots<x_m<1$ such that
\begin{equation}\label{utalternate}
u_t(t,x_{i-1}) u_t(t,x_i)<0,\quad i=1,\dots,m.
\end{equation}
By the approximation property \rife{approxpbm2}, for $k\ge k_0$ large enough, \eqref{utalternate} remains true with $u_t$ replaced with $u_{k,t}$,
hence $N_k(t)\ge m= N(t_0)+1$, which is a contradiction.
This proves \eqref{utalternate0A}.
The proof of \eqref{utalternate0B} is exactly the same, using property (iv) of Proposition~\ref{zeronumberut}.
\end{proof}
In view of Theorems~\ref{nonmin0} and \ref{minimal0}, we now turn to the special case $N(0)=2$,
for which we will be able to obtain a fairly complete picture of the behavior of solutions.
The condition $N(0)=2$ can be verified to hold for a large class of initial data
satisfying all our requirements,
and producing both minimal and nonminimal GBU solutions.
As an example of generic construction of such initial data, we have the following lemma:
\begin{lem}\label{constuctID}
Let $\varphi\in W^{1,\infty}(0,1)$ be symmetric on $[0,1]$ and satisfy, for some $x_0\in (0,1/2)$:
$$\varphi(0)=0,\quad \textstyle\int_0^{1/2} \varphi(t)dt = 0,$$
$$\hbox{$\varphi(x)>0$ on $(0,x_0)$, \quad $\varphi(x) <0$ and nonincreasing on $(x_0,1/2]$.}$$
(i) Then the function $\phi$ defined by
$$
\phi(x)=
\begin{cases}
\int_0^x \int_0^y \varphi(t)dtdy, &\quad\hbox{$0\le x\le 1/2$,} \\
\noalign{\vskip 2mm}
\phi(1-x),&\quad\hbox{$1/2< x\le 1$,}
\end{cases}
$$
has the following properties:
\vskip0.3em
\begin{itemize}
\item[(a)] $\phi\in W^{3,\infty}(0,1)$, $\phi$ is symmetric;
\vskip0.1em
\item[(b)] $\phi(0)=\phi'(0)=\phi''(0)=0$ and $\phi$ is in particular compatible at order two;
\vskip0.1em
\item[(c)] $\phi'(x)>0$ on $(0,1/2)$;
\vskip0.1em
\item[(d)] The function $V:= \phi''+( \phi')^p$ has a unique zero $z_0$ on $(0,1/2]$ with $V>0$ on $(0,z_0)$ and $V<0$ on $(z_0,1/2]$.
\end{itemize}
\vskip0.3em
\noindent (ii) For any $\lambda >0$, the function $\lambda \phi$ satisfies the same conditions (a)-(d).
Furthermore, there exists $\lambda^*>0$ such that
the corresponding solution $u^\lambda$ of \eqref{vhj1} is:
\begin{itemize}
\item[-] a nonminimal GBU solution for $\lambda>\lambda^*$,
\item[-] a minimal GBU solution for $\lambda=\lambda^*$,
\item[-] a global classical solution for $0\le\lambda<\lambda^*$.
\end{itemize}
\end{lem}
\begin{proof}
The function $\phi$ is symmetric by construction and we have
$\phi\in C^2([0,1])$ (the $C^2$ regularity at $x=1/2$ being guaranteed by $\phi'(1/2)=\int_0^{1/2} \varphi(t)dt= 0$).
Since $\phi''=\varphi$ on $[0,1/2]$, properties~(a) and (b) follow.
Property (c) is due to the fact that $\phi'(x)=\int_0^x \varphi(t)dt$ is positive on $(0,x_0)$ and decreasing on $[x_0,1/2)$ with $\phi'(1/2)=0$.
To check property (d) we note that $\phi''=\varphi$, hence
$ \phi''+( \phi')^p$ is positive on $(0,x_0]$ and is decreasing on $(x_0,1/2]$, with a negative value at $x=1/2$.
{For any $\lambda>0$, since $\lambda\varphi$ verifies the same assumptions as $\varphi$,
we deduce that $\lambda \phi$ also satisfies conditions (a)-(d).}
As for the existence of $\lambda^*>0$ with the stated properties, it follows from \cite{PS2} (see Theorems 2 and 3 and the proof of Theorem 2).
\end{proof}
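As an explicit illustration of this construction (this particular example is only given for concreteness), one may take $\varphi(x)=x\bigl(\tfrac13-x\bigr)$ on $[0,1/2]$, extended by $\varphi(x)=\varphi(1-x)$ on $[1/2,1]$: then $\varphi(0)=0$, $\int_0^{1/2}\varphi(t)\,dt=\tfrac1{24}-\tfrac1{24}=0$, $\varphi>0$ on $(0,\tfrac13)$, and $\varphi$ is negative and nonincreasing on $(\tfrac13,\tfrac12]$. The corresponding initial data from Lemma~\ref{constuctID}(i) is
$$\phi(x)={x^3\over 18}-{x^4\over 12},\quad 0\le x\le \tfrac12,\qquad \phi(x)=\phi(1-x),\quad \tfrac12< x\le 1,$$
and, by part (ii), the family $\lambda\phi$ produces global classical, minimal GBU and nonminimal GBU solutions according to the size of $\lambda>0$.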
\vskip0.5em
Our first basic result for the case $N(0)=2$ states that if $T^*<\infty$,
then $N(t)$ remains equal to~$2$ at least until the blow-up time $T^*$.
We also have a control on the zero-number of the perturbed function $u_t+\varepsilon$,
as well as of time-translates of $u$ itself,
both properties that will be useful in what follows.
\begin{lem}\label{defz}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume that $N(0)=2$ and that $T^*<\infty$. Then we have:
(i) $N(t)=2$ for all $t\in (0,T^*)$.
Moreover, $u_t(t,\cdot)$ has a unique zero $z(t)\in (0,1/2)$ and we have
\begin{equation}\label{prelimz0}
u_t(t,\cdot)>0 \ \hbox{ in $(0,z(t))\cup(1-z(t),1)$},\qquad u_t(t,\cdot)<0 \ \hbox{ in $(z(t),1-z(t))$}
\end{equation}
and
\begin{equation}\label{prelimz0B}
u_{tx}(t,z(t))<0.
\end{equation}
(ii) Denote $Z_\varepsilon(t)=Z(u_t(t)+\varepsilon)$ for $\varepsilon>0$.
Then for each $t_0\in (0,T^*)$, there exists $\varepsilon_0(t_0)>0$ such that
\begin{equation}\label{prelimz0C}
Z_\varepsilon(t)\le 2\quad\hbox{ for all $\varepsilon\in (0,\varepsilon_0)$ and all $t\ge t_0$.}
\end{equation}
(iii) For each $t_0\in (0,T^*)$, there exists $\tau_0(t_0)>0$ such that
\begin{equation}\label{prelimz0D}
Z(u(t+\tau)-u(t))\le 2\quad\hbox{ for all $\tau\in (0, \tau_0)$ and $t\ge t_0$.}
\end{equation}
\end{lem}
\begin{proof}
(i) Assume for contradiction that $N(t_0)\ne N(0)=2$ for some $t_0\in [0,T^*)$.
Then $N(t)\le N(t_0)=0$ for all $t\in(t_0,T^*)$ by Proposition~\ref{zeronumberut} and symmetry.
In view of \eqref{utcenter}, it follows that $u_t(t,.)\le 0$ in $[t_0,T^*)\times [0,1]$, hence $u_{tx}(t,0)\leq 0$ for all $t\in[t_0,T^*)$, due to $u_t(t,0)=0$.
Consequently $u_x(t,0)\le u_x(t_0,0)$ for all $t\in[t_0,T^*)$, hence $\sup_{t\in(t_0,T^*)} \|u_x\|_\infty<\infty$ by~\eqref{controlnormux},
contradicting $T^*<\infty$.
Now, for each $t\in (0,T^*)$, since $N(t)=2$, there exists $z(t)\in(0,1/2)$ such that
\begin{equation}\label{prelimz}
\hbox{$u_t(t,\cdot)\ge 0$ in $(0,z(t))\cup(1-z(t),1)$ and $u_t(t,\cdot)\le 0$ in $(z(t),1-z(t))$}
\end{equation}
Assume that $u_t(t,x)=0$ for some $t\in (0,T^*)$ and some $x\in(0,1/2)\setminus\{z(t)\}$.
Then $u_{tx}(t,x)=0$ by \eqref{prelimz}, so that $N$ drops at this time $t$ by property (iii) in Proposition~\ref{zeronumberut}.
This drop will also occur in case $u_{tx}(t,z(t))=0$.
But this contradicts the fact that $N(t)=2$ for all $t\in (0,T^*)$.
(ii) As a consequence of \eqref{prelimz0} and \eqref{prelimz0B}, there exists $\varepsilon_0>0$ such that
$Z_\varepsilon(t_0)=2$ for all $\varepsilon\in (0,\varepsilon_0)$.
Applying \eqref{utalternate0B}
in Proposition~\ref{zeronumberut2}, we deduce \eqref{prelimz0C}.
(iii) We first claim that there exists $\tau_0>0$ small, such that
\begin{equation}\label{utalternate0}
Z(u(t_0+\tau)-u(t_0))\le 2\quad\hbox{ for all $\tau\in (0, \tau_0)$}.
\end{equation}
Assume the contrary. Then there exist sequences $\tau_j\to 0$ and
$0<x_{0,j}<\dots<x_{4,j}<1$ such that
$$
w(\tau_j,x _{i-1,j}) w(\tau_j,x_{i,j})<0,\quad i=1,\dots,4,
$$
where $w(\tau,x)=u(t_0+\tau,x)-u(t_0,x)$. By the intermediate value theorem, we deduce the existence of sequences
$0<y_{1,j}<\dots<y_{4,j}<1$ such that $w(\tau_j,y_{i,j})=0$, and then, by the mean value theorem in time, of sequences $t_{i,j}\to 0$ such that
$$u_t(t_0+t_{i,j},y_{i,j})=0,\quad i=1,\dots,4.
$$
Passing to the limit, up to subsequences, we may assume $y_{i,j}\to y_i\in [0,1]$.
Consequently, we have $u_t(t_0,y_i)=0$, for $i=1,\dots,4$, with either $0<y_1<\dots<y_4<1$ or $u_{tx}(t_0,y_i)=0$ for some $i$.
In each case, this is a contradiction with assertion (i).
Consequently, \eqref{utalternate0} is true. Since $w(t,x)=u_k(t+\tau,x)-u_k(t,x)$ is a classical solution of \eqref{genapproxpbm2} with
$$b(t,x)
=\int_0^1 F'_{k}\bigl[u_{k,x}(t,x)+s(u_{k,x}(t+\tau,x)-u_{k,x}(t,x))\bigr]\, ds,$$
it follows from \cite{An88}, similarly as in the proof of Proposition~\ref{zeronumberut}, that $Z(u_k(t+\tau)-u_k(t))\le Z(u_k(t_0+\tau)-u_k(t_0))\le 2$ for $t\ge t_0$.
We finally deduce property \eqref{prelimz0D} by passing to the limit $k\to\infty$ exactly as in the proof of \eqref{utalternate0A}.
\end{proof}
The following simple property enables one to reduce the case $N(0)=4$ to the case $N(0)=2$, after a time shift.
\begin{lem}\label{higherN}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume $N(0)\equiv Z(\phi''+|\phi'|^p)<\infty$ and $T^*(\phi)<\infty$. Then:
(i) There exist $t_0\in (0,T^*)$ and an integer $N_*\le N(0)$, with $N_*\equiv 2$ \hbox{\rm [mod. $4$]}, such that
\begin{equation}\label{propNstar}
N(t)=N_*\quad\hbox{ for all $t\in [t_0,T^*)$.}
\end{equation}
(ii) In particular, if $N(0)=4$, then $N_*=2$.
\end{lem}
\begin{proof}
Since $N(t)$ is integer-valued and nonincreasing for $t\in (0,T^*)$, by Proposition~\ref{zeronumberut}(i), the existence of $t_0$ and
$N_*\le N(0)$ satisfying \rife{propNstar} follows
($N_*$ is given by $\lim_{t\to T^*}N(t)$).
By symmetry, $N_*$ is even. Assume for contradiction that $N_*\equiv 0$ \hbox{\rm [mod. $4$].}
Then, for all $t\in [t_0,T^*)$, since $u_t(t,1/2)<0$ by \eqref{utcenter}, we have $u_t(t,x)\le 0$ for $x>0$ small.
Arguing as in the first paragraph of the proof of Lemma~\ref{defz}, we reach a contradiction with $T^*<\infty$.
\end{proof}
\begin{rem} \label{remhigherN}
All the conclusions of Theorems~\ref{nonmin0} and \ref{minimal0} remain valid if assumption \eqref{hypID2}
is replaced with $N(0)\equiv Z(\phi''+|\phi'|^p)=4$.
Indeed, by Lemma~\ref{higherN}, there exists $t_0\in (0,T^*)$ such that $u(t_0,\cdot)$, considered as new initial data (at the new time origin $t_0$),
satisfies all the assumptions of Theorems~\ref{nonmin0} and \ref{minimal0}.
\end{rem}
Let us go back to the case $N(0)=2$ (under the assumptions of Lemma~\ref{defz}).
The limiting behavior of the (unique) zero point $z(t)$ of the time derivative will play a crucial role in the behavior of the solution at $T^*$.
Indeed, the key quantity to consider will turn out to be
\begin{equation}\label{defL}
L:=\liminf_{t\to T^*_-}z(t).
\end{equation}
This kind of analysis will be carried out hereafter. For convenience, we split the analysis between the case of minimal and nonminimal blow-up solutions.
\section{Behavior of nonminimal solutions. Proofs of Theorems~\ref{nonmin0} and \ref{proppersist}} \label{behavenonmin}
\label{Sec-nonmin}
\subsection{Behavior near the blow-up time $T^*$.}
\label{Sec-nonmin1}
We shall here prove the following result.
It shows that blow-up is nonminimal if and only if $L>0$, where $L$ is defined in \eqref{defL}
and $z(t)$ is the unique zero point of $u_t$ in $(0,1/2]$ (cf.~Lemma~\ref{defz}).
For nonminimal blow-up solutions, it also implies part of assertion (i) of Theorem~\ref{nonmin0}
and the GBU rate estimate \eqref{estA0} therein.
{In addition, it gives some first linear bounds on the rate of loss of boundary conditions, that will be later improved to the precise asymptotics \eqref{estB0}. }
\begin{prop}\label{nonmin}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume that $N(0)=2$ and $T^*<\infty$.
Then we have
(i) $u$ is a nonminimal blowup solution if and only if $L>0$.
Moreover, if $u$ is a minimal solution, then
\begin{equation}\label{utneg}
u_t(T^*,x)\le 0,\quad x\in (0,1).
\end{equation}
(ii) If $L>0$, then there is immediate loss of boundary conditions (LBC) for $t>T^*$
and $u$ satisfies the following estimates
\begin{equation}\label{estA}
c_1(T^*-t)^{-1/(p-2)}\le \|u_x(t)\|_\infty \le c_2(T^*-t)^{-1/(p-2)},\quad\hbox{as $t\to T^*_-$,}
\end{equation}
and
\begin{equation}\label{estB}
c_3(t-T^*)\le u(t,0) \le c_4(t-T^*),\quad\hbox{as $t\to T^*_+$,}
\end{equation}
for some constants $c_i>0$. Moreover, there exist $c,a,\delta>0$ such that
\begin{equation}\label{estB2}
u_t(t,x) \ge c \quad\hbox{ for all $t\in (T^*,T^* +\delta)$ and $x\in (0,a]$.}
\end{equation}
\end{prop}
\begin{rem} Property \eqref{estB2} shows that, interestingly, $u_t$ is uniformly {positive} near the corner $(T^*_+,0)$ and undergoes a jump discontinuity there,
while remaining bounded. A similar property will be observed near the regularization time (see~\eqref{estB2r}).
\end{rem}
\begin{proof}[Proof of Proposition~\ref{nonmin}]
First assume $L=0$. Since $u_t$ is continuous in $(0,T^*]\times (0,1)$, property \eqref{utneg} follows.
Proposition~\ref{utneq} then guarantees that the boundary conditions are preserved forever. So $u$ is minimal on account of Proposition~\ref{prop-min}.
We assume henceforth that $L>0$. We aim at showing that
$u$ loses the boundary conditions immediately after $T^*$
(hence in particular $u$ is nonminimal) and satisfies estimates \eqref{estA}, \eqref{estB}.
We divide the proof in several steps. {We consider below the approximating sequence of solutions $u_k$ of \rife{app-1d}, with $F_k$ given by \rife{eqaaa}}.
{\bf Step 1.} {\it Positivity of $u_{k,t}$ in a uniform neighborhood of $t=T^*$ and $x=0$.}
We first claim that
\begin{equation}\label{eqposut}
u_t(T^*,x)>0\quad\hbox{ in $(0,L)$.}
\end{equation}
Fix $0<L'<L$. By definition of $L$, there exists $t_0=t_0(L')$ such that $u_t(t,x)>0$ in $[t_0,T^*)\times (0,L')$.
For given $a\in (0,L')$, $v:=u_t$ is a classical solution of $v_t-v_{xx}=b(t,x)v_x$ in $Q_a:=[t_0,T^*]\times[a,L']$,
where the coefficient $b(t,x):=pu_x^{p-1}$ is bounded in $Q_a$.
The strong maximum principle then guarantees that $v(T^*,x)>0$ in $(a,L')$ and \eqref{eqposut} follows.
Set $a=L/2$. By \eqref{eqposut} and the local convergence of the $u_k$, there exist $k_0\ge 1$ and $\eta,\delta>0$ such that,
for all $k\ge k_0$,
\begin{equation}\label{eqba}
u_{k,t}(t,a)\ge \eta \quad\hbox{ for all $t\in [T^* -\delta,T^* +\delta]$.}
\end{equation}
By property (i) of Proposition~\ref{zeronumberut} applied to problem \rife{app-1d},
we have
$N_k(t)\le N_k(0)=N(0)=2$ for all $k>\|\phi'\|_\infty$.
Therefore, \rife{eqba} and \eqref{uktcenter} guarantee that
\begin{equation}\label{Nkt2}
N_k(t)=2\quad\hbox{ for all $t\in [0,T^* +\delta]$,}
\end{equation}
and we thus have
\begin{equation}\label{eqb}
u_{k,t}(t,x)>0 \quad\hbox{ in $[T^* -\delta,T^* +\delta]\times(0,a]$ for all $k\ge k_0$. }
\end{equation}
{\bf Step 2.} {\it
Maximum principle for $u_{k,x}$.} {Let us set
$$
m_k(t):=\max_{\overline Q_t} |u_{k,x}|\,.
$$}
We claim that there exists $t_0\in (T^*-\deltalta,T^*)$ such that, for all sufficiently large $k$,
\begin{equation}\label{eqbak}
m_k(t)=u_{k,x}(t,0) \ge 1,\quad t_0\le t\le T^* +\delta.
\end{equation}
Since $w=u_{k,x}$ satisfies the equation
$$w_t-w_{xx}=F'_k(u_{k,x})w_x$$
we have
$$
m_k(t)=\max\Bigl[\|\phi_x\|_\infty,\max_{\tau\in [0,t]} u_{k,x}(\tau,0)\Bigr].
$$
By \rife{Nkt2} and \eqref{uktcenter}, we see that the function $t\mapsto u_{k,x}(t,0)$ is nondecreasing on $[0,T^* +\delta]$, hence
$$
m_k(t)=\max\Bigl[\|\phi_x\|_\infty,u_{k,x}(t,0)\Bigr],\quad t\in [0,T^* +\delta].
$$
Since $\|u_x(t)\|_\infty\to\infty$ as $t\to T^*$, there exists $t_0\in (T^*-\deltalta,T^*)$ such that $\|u_x(t_0)\|_\infty>1+\|\phi_x\|_\infty$,
so that $\|u_{k,x}(t_0)\|_\infty\ge 1+\|\phi_x\|_\infty$ for all $k\ge k_0$ large enough, hence \eqref{eqbak}.
We proceed by deriving an upper bound for $m_k'$.
\begin{lem} Under the assumptions of Proposition~\ref{nonmin},
there exists a constant $C>0$ independent of $k$ such that, for all large enough $k$,
\begin{equation}\label{eqca}
0\le m'_k(t)\le CF'_k(m_k(t)),\quad t_0\le t\le T^* +\delta
\end{equation}
and
\begin{equation}\label{eqcaa}
0\le m'(t)\le CF'(m(t)),\quad t_0\le t<T^*.
\end{equation}
\end{lem}
\begin{proof}
We use (a simpler version of) arguments from the proofs of Theorem~\ref{minimal_rateG}
and Lemma~\ref{unbddT}.
Set $w:=u_{k,t}$, $F=F_k$.
Let $t\in (t_0,T^* +\delta]$, $s\in (0,t)$, and put
$K=\sup_{\sigma\in [0,t-s]} \sigma^{1/2}\|w_x(s+\sigma)\|_\infty$.
For $\tau\in (0,t-s)$, in view of the variation-of-constants formula, we have
$$w(s+\tau)=e^{\tau \Delta}w(s)+
\int_0^\tau e^{(\tau-\sigma)\Delta}(F'(u_{k,x})w_x)(s+\sigma)\, d\sigma.$$
Since
$\int_0^\tau (\tau-\sigma)^{-1/2}\sigma^{-1/2}\, d\sigma
=\int_0^1 (1-z)^{-1/2}z^{-1/2}\, dz$,
it follows that
\begin{align*}
\|w_x(s+\tau)\|_\infty
&\leq C\tau^{-1/2}\|w(s)\|_\infty+
C\int_0^\tau (\tau-\sigma)^{-1/2}\|[F'(u_{k,x})w_x](s+\sigma)\|_\infty\, d\sigma\\
&\leq C\tau^{-1/2}+CF'(m_k(t))K,
\end{align*}
where we also used the convexity of $F_k$. {Here and below $C$ denotes possibly different constants independent of $k$.}
Multiplying by $\tau^{1/2}$ and taking the supremum for $\tau\in [0,t-s]$, we obtain
$$K\leq C+C(t-s)^{1/2}F'(m_k(t))K.
$$
Now choosing $s=t-(1/4)\min\bigl(t_0,[CF'(m_k(t))]^{-2}\bigr)\in (0,t)$, we obtain
$K\leq 2C$, hence
$$\|w_x(t)\|_\infty\leq 2C(t-s)^{-1/2}\leq 4C\max(t_0^{-1/2},CF'(m_k(t)))\leq CF'(m_k(t))$$
(recalling $m_k(t)\ge 1$).
Since, by \rife{eqb} and \eqref{eqbak}, $0\le m'_k(t)=u_{k,xt}(t,0)\le \|w_x(t)\|_\infty$ for $t_0\le t\le T^* +\delta$,
estimate \rife{eqca} follows.
Finally, for each $t\in [t_0,T^*)$, we have
\begin{equation}\label{eqcaamt}
m(t)=\lim_{k\to\infty}m_k(t)=\lim_{k\to\infty}u_{k,x}(t,0)=u_{x}(t,0)
\end{equation}
(passing to the limit in \eqref{eqbak}),
and $m'(t)=u_{xt}(t,0)=\lim_{k\to\infty}u_{k,xt}(t,0)=\lim_{k\to\infty}m'_k(t)$.
Estimate \rife{eqcaa} then follows by letting $k\to\infty$ in~\rife{eqca}.
\end{proof}
{\bf Step 3.} {\it Auxiliary function argument applied to the $u_k$.}
\begin{lem} \label{lem73}
{ Under the assumptions of Proposition~\ref{nonmin}, } we have
$$u_{k,t} \ge C\Bigl(1-{u_{k,x} \over m_k(t)}\Bigr) \quad\hbox{ for all $t_0\le t\le T^* +\delta$ and $0<x\le a,$}$$
with $C$ independent of $k$.
\end{lem}
\begin{proof}
The proof is based on a modification of a device from \cite{GH08},
based on the auxiliary function in \rife{eqc2}.
Fix $k$ and denote $v:=u_k$, $F=F_k$, $\mu=m_k$ for simplicity. {We denote by $C$ possibly different constants independent of $k$. }
Consider the parabolic operator
$${\mathcal L}h:=h_t-h_{xx}-F'(v_x)h_x$$
in $Q:=[t_0,T^* +\delta]\times[0,a]$. In view of \rife{eqaa} and $v_x\ge 0$ in $Q$, we note that
\begin{equation}\label{eqc}
{\mathcal L}v=F(v_x)-F'(v_x)v_x\le -F(v_x).
\end{equation}
For $\sigma\in(0,1)$ to be chosen later,
we introduce the auxiliary function
\begin{equation}\label{eqc2}
w(t,x):=\Bigl(1+{1\over \mu^\sigma(t)}\Bigr)\Bigl(1-{v_x\over \mu(t)}\Bigr).
\end{equation}
A direct computation using ${\mathcal L}v_x=0$ shows that
$$
{\mathcal L}w=-{\sigma \mu'\over \mu^{\sigma+1}}\Bigl(1-{v_x\over \mu}\Bigr)
+\Bigl(1+{1\over \mu^\sigma}\Bigr){v_x \mu'\over \mu^2}.
$$
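(In this computation, the terms involving $v_{xt}$, $v_{xxx}$ and $F'(v_x)v_{xx}$ produced by differentiating $v_x/\mu(t)$ cancel exactly because ${\mathcal L}v_x=v_{xt}-v_{xxx}-F'(v_x)v_{xx}=0$; only the terms coming from the time dependence of $\mu$ remain.)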
Now consider the following cases separately:
\noindent {\it Case 1.} If $v_x(t,x)\le{\sigma\over \sigma +2}\mu^{1-\sigma}(t)$, then
since $\mu'(t), v_x\geq 0$ and $\mu(t)\ge 1$, we have
$${\mathcal L}w
={\mu'\over \mu^{\sigma+1}} \Bigl(-\sigma+(\sigma+1){v_x\over \mu}
+{v_x \over \mu^{1-\sigma}}\Bigr)
\leq{\mu'\over \mu^{\sigma+1}} \Bigl(-\sigma+(\sigma+2)
{v_x \over \mu^{1-\sigma}} \Bigr)\leq 0.$$
\noindent {\it Case 2.} If $v_x(t,x)>{\sigma\over \sigma +2}\mu^{1-\sigma}(t)$, then,
by \rife{eqca} and \rife{eqaa}, we have
\begin{equation}\label{eqd}{\mathcal L}w
\leq\Bigl(1+{1\over \mu^\sigma}\Bigr){v_x \mu'\over \mu^2}
\leq Cv_x{F'(\mu)\over \mu^2}
\leq Cv_x{F(\mu)\over \mu^3},
\end{equation}
and we have the following two subcases.
\vskip0.3em
\ {\it Case 2a.} If $v_x(t,x)\le k$, then, choosing $\sigma\in (0,1)$ such that $\sigma\le 2/(p-1)$,
hence $(1-\sigma)(p-1)\ge p-3$, we have
$${F(v_x)\over v_x}=v_x^{p-1}\ge C\mu^{(1-\sigma)(p-1)}\ge C\mu^{p-3}\ge C{F(\mu)\over \mu^3},$$
due to \rife{eqaaa} and $\mu(t)\ge 1$. This along with \rife{eqd} yields
\begin{equation}\label{eqe}{\mathcal L}w\le CF(v_x).
\end{equation}
\ {\it Case 2b.} If $v_x(t,x)>k$, hence $\mu(t)>k$, then
$${F(v_x)\over v_x}\ge {C\, k^{p-1} \ge C\, {F(\mu)\over \mu^2}}\ge C{F(\mu)\over \mu^3}$$
by \rife{eqaaa} and $\mu(t)\ge 1$, hence again \rife{eqe}.
In all cases we thus have \rife{eqe}, hence
$${\mathcal L}(w+Cv)
\leq 0={\mathcal L}v_t \quad\hbox{ in $Q$},$$
due to \rife{eqc}.
Moreover, we have
$$[w+Cv](t,0)=0=v_t(t,0),\quad t_0\leq t\le T^* +\delta,
$$
{where we used that $v_x(t,0)=\mu(t)$, see \rife{eqbak}.}
On the other hand, by \eqref{eqabA0}, we have $u_k\equiv u$ on $Q_{t_0}$ for all $k$ large enough.
Also, as a consequence of \rife{eqb}, we have $u_t(t_0,x)>0$ in $(0,a]$, as well as $u_{tx}(t_0,0)>0$, in view of the Hopf lemma.
Consequently there exists $K>0$ (independent of $k$) such that
$$\bigl[(w+Cv)-Kv_t\bigr](t_0,x)\leq 0,\quad 0\le x\le a. $$
Since $v\le \|\phi\|_\infty$ and $w\le 2$, by increasing the constant $K$ if necessary, \rife{eqba} implies
$w+Cv\le Kv_t$ for $x=a$ and $t\in (t_0,T^* +\delta]$.
By the maximum principle, we deduce $w+Cv\leq K v_t$
in $[t_0,T^* +\delta]\times(0,a)$, hence the conclusion of Lemma~\ref{lem73}.
\end{proof}
\vskip0.5em
{\bf Step 4.} {\it LBC and detachment estimates at $t=T^*_+$.}
For $x\in (0,a]$ and $t_0<s<T^*<t<T^* +\delta$ fixed,
we deduce from Lemma~\ref{lem73} and $m'_k\ge 0$ (cf.~\rife{eqca}) that
$$u_{k,t}(t,x) \ge C\Bigl(1-{u_{k,x}(t,x) \over m_k(s)}\Bigr).$$
Letting $k\to\infty$, we obtain
$$u_t(t,x) \ge C\Bigl(1-{u_x(t,x) \over m(s)}\Bigr)$$
and then, letting $s\to T^*$, hence $m(s)\to\infty$, we get
$$u_t(t,x) \ge C \quad\hbox{ for all $x\in (0,a]$ and $T^*<t<T^* +\delta,$}$$
that is, \eqref{estB2}.
Consequently,
$$u(t,x) \ge C(t-T^*) \quad\hbox{ for all $x\in (0,a]$ and $T^*<t<T^* +\delta.$}$$
Finally passing to the limit $x\to 0$, using the fact that $u\in C([0,\infty)\times [0,1])$, we obtain
$$u(t,0) \ge C(t-T^*) \quad\hbox{ for all $T^*<t<T^* +\delta$.}$$
The upper estimate in \rife{estB} is an immediate consequence of the bound on $u_t$.
{\bf Step 5.} {\it Completion of proof of Proposition~\ref{nonmin}: GBU estimates at $t=T^*_-$.}
{As a consequence of Lemma \ref{lem73} and the convergence of $u_k$ to $u$ for $t<T^*$, } we have
$$u_t \ge C\Bigl(1-{u_x \over m(t)}\Bigr) \quad\hbox{ for all $t_0\le t<T^*$ and $0<x\le a.$}$$
Then, following \cite{GH08}, we write, using $m(t)=u_x(t,0)$ (cf.~\rife{eqcaamt}):
$$
u_{xt}(t,0)
=\lim_{x\to 0+}{u_t(t,x)\over x}
\geq C\lim_{x\to 0+} \frac 1x \Bigl(1-{u_x(t,x) \over m(t)}\Bigr)
= -{Cu_{xx}(t,0)\over m(t)} \geq {Cu_x^p(t,0)\over m(t)}=Cu_x^{p-1}(t,0),
$$
and the upper estimate in \rife{estA} follows by integration.
As for the lower estimate, it was proved in Theorem~\ref{minimal_rate} (alternatively, in the current simpler situation, it also follows by integrating \rife{eqcaa}).
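For the reader's convenience, let us record how these integrations go; here we only use that $m(t)=u_x(t,0)$ controls $\|u_x(t)\|_\infty$ by Step~2, that $m(t)\to\infty$ as $t\to T^*_-$, and that $F'(s)\le p\,s^{p-1}$ for $s\ge 1$. Setting $g:=m^{2-p}$, the inequality $m'\ge Cm^{p-1}$ just obtained yields $g'=(2-p)m^{1-p}m'\le -(p-2)C$, hence, integrating over $(t,T^*)$ and using $g({T^*}_-)=0$,
$$m(t)\le \bigl[(p-2)C(T^*-t)\bigr]^{-1/(p-2)},$$
while \rife{eqcaa} gives $g'\ge -(p-2)CF'(m)m^{1-p}\ge -(p-2)Cp$, hence the lower bound $m(t)\ge \bigl[(p-2)Cp\,(T^*-t)\bigr]^{-1/(p-2)}$.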
\end{proof}
\subsection{Behavior near the reconnection time.}
\label{Sec-nonmin2}
We can now investigate what happens after the blow-up time of nonminimal blow-up solutions. To this end, we define the reconnection time, or (first) time of recovery of boundary conditions,
$$
T_0:=\inf\{t>T^*;\, u(t,0)=0\}.
$$
We already know that $T^*<T_0\le T^r<\infty$.
We shall prove the following result, which
shows that $u$ is regularized forever immediately after~$T_0$, i.e., $T^r=T_0$.
As a consequence, along with Proposition~\ref{nonmin}, this will imply assertion (i) of Theorem~\ref{nonmin0}.
In addition, it establishes the regularization rate \eqref{BeqConclA0} in Theorem~\ref{nonmin0}
and gives a first linear bound on the rate of recovery of boundary conditions, that will be later improved {to the precise asymptotics} \eqref{BeqConclAa0}.
\begin{prop}\label{recon}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume that $N(0)=2$,
that $T^*<\infty$ and that $u$ is a nonminimal blowup solution.
Then $u$ is a classical solution of problem \rife{vhj1} (including boundary conditions)
for all $t\in(T_0,\infty)$. In other words, $T^r=T_0$.
Moreover $u$ satisfies the following reconnection and regularization estimates:
\begin{equation}\label{BeqConclAa}
c_1(T^r-t)\le u(t,0) \le c_2(T^r-t),
\quad\hbox{as $t\to T^r_-$,}
\end{equation}
\begin{equation}\label{BeqConclA}
c_3(t-T^r)^{-1/(p-2)} \le \|u_x(t)\|_\infty\le c_4(t-T^r)^{-1/(p-2)},
\quad\hbox{as $t\to T^r_+$,}
\end{equation}
with some constants $c_i>0$.
Furthermore there exist $c>0$ and $t_1\in (T^*,T^r)$ such that
\begin{equation}\label{estB2r}
u_t\le -c\quad\hbox{ in $[t_1,T^r)\times (0,1)$.}
\end{equation}
\end{prop}
Proposition~\ref{recon} will be shown through a series of lemmas.
The first step is to initialize the uniform negativity of $u_t$.
Intuitively, since $u(t,0)$ goes from positive values for $t<T_0$ to $0$ for $t=T_0$,
the time derivative $u_t$ (although not necessarily defined at $x=0$) must be negative near $x=0$ for ``many'' values of~$t$.
However, this has to be extended to the whole interval.
The proof relies on a key zero-number argument
applied to the perturbed function $u_t+\varepsilon$ with $\varepsilon>0$ small.
\begin{lem}\label{neg1}
Under the assumptions of Proposition~\ref{recon},
for any $t_0\in [T^*,T_0)$, there exist $t_1\in (t_0,T_0)$ and $\varepsilon>0$ such that
$$
u_t(t_1,x)\le -\varepsilon,\quad 0<x<1.
$$
\end{lem}
\begin{proof}
Fix any $\tau, t_0$ such that $0<\tau<T^*<t_0<T_0$.
By \eqref{utcenter}, we have
\begin{equation}\label{BoundSigma}
\sigma:=-\max_{t\in [\tau,T_0]} u_t(t,1/2)>0
\end{equation}
and, by Lemma~\ref{defz}(ii), there exists $\varepsilon_0(\tau)\in (0,\sigma)$ such that
\begin{equation}\label{prelimz0C2}
Z_\varepsilon(t):=Z(u_t(t)+\varepsilon)\le 2\quad\hbox{ for all $\varepsilon\in (0,\varepsilon_0)$ and all $t\ge \tau$.}
\end{equation}
We claim that, for $\varepsilon\in (0,\varepsilon_0)$ to be determined,
\begin{equation}\label{ConclEpsB}
\hbox{there exists $t_1\in (t_0,T_0)$ such that $Z_\varepsilon(t_1)=0$.}
\end{equation}
Assume for contradiction that this is not the case.
Then $Z_\varepsilon(t)=2$ for all $t\in [t_0,T_0)$, due to \eqref{prelimz0C2}.
Since $u_t(t,x)+\varepsilon<0$ near $x=1/2$ by \rife{BoundSigma}, we may set
$$
\xi_\varepsilon(t)=\min\bigl\{x<1/2;\, u_t(t,x)+\varepsilon\le 0\ \hbox{ on $[x,1/2]$} \bigr\}\in (0,1/2).
$$
We note that the function $\xi_\varepsilon(t)$ is l.s.c. on $[t_0,T_0)$.
Indeed for every $t\in[t_0,T_0)$ and every $\eta>0$ there exists
$x_\eta\in (\xi_\varepsilon(t)-\eta,\xi_\varepsilon(t))$ such that
$u_t(t,x_\eta)+\varepsilon>0$, hence $u_t(s,x_\eta)+\varepsilon>0$ for all $s$ close enough to $t$,
due to $u_t\in C((0,\infty)\times(0,1))$, so that $\xi_\varepsilon(s)\ge \xi_\varepsilon(t)-\eta$.
It follows that, for any $t\in (t_0,T_0)$,
$$
x_0(t):=\inf_{s\in [t_0,t]}\xi_\varepsilon(s)>0.
$$
Then, fixing any $t\in (t_0,T_0)$ and $x\in (0,x_0(t))$, we have $u_t(s,x)+\varepsilon\ge 0$ for all $s\in [t_0,t]$ due to
$Z_\varepsilon(t)=2$. By integration, we obtain
$$
u(t_0,x)-u(t,x)=-\int_{t_0}^t u_t(s,x)\, ds\le \varepsilon(t-t_0).
$$
Letting $x\to 0$, we get $u(t_0,0)-u(t,0)\le \varepsilon(t-t_0)$, and then, letting $t\to T_0$,
$$
0<u(t_0,0)\le \varepsilon(T_0-t_0).
$$
Therefore, for any choice $\varepsilon<\min(\varepsilon_0,(T_0-t_0)^{-1}u(t_0,0))$,
property \rife{ConclEpsB} is necessarily true.
Since $u_t(t_1,x)+\varepsilon<0$ near $x=1/2$, we conclude that $u_t(t_1,x)+\varepsilon\leq 0$ in $(0,1)$.
\end{proof}
The second step is to propagate this uniform negativity.
This is done by the maximum principle applied to a suitable family of auxiliary functions
on subdomains (where the $u_x$ is bounded).
\begin{lem}\label{neg2} Under the assumptions of Proposition~\ref{recon},
there exist $\eta>0$ and $t_1\in (T^*,T_0)$ such that
\begin{equation}\label{ConclEta}
u_t\le -\eta<0\quad\hbox{ in $[t_1,T_0)\times (0,1)$}.
\end{equation}
\end{lem}
\begin{proof}
Fix $a\in (0,1/2)$ and consider the auxiliary function
\begin{equation}\label{defJa}
J(t,x)=u_t-\eta\bigl[La^\alpha (u_x+K)-1\bigr],\quad (t,x)\in Q:=[t_1,T_0)\times [a,1/2],
\end{equation}
where $\eta, K, L>0$ and $t_1\in (T^*,T_0)$ are to be chosen.
An immediate computation shows that
$$
{\mathcal L}J:=J_t-J_{xx}-p(u_x)^{p-1}J_x=0\quad\hbox{ in $Q$}.
$$
Since $u(t,0)>0$ for all $t\in (T^*,T_0)$ (by Proposition~\ref{nonmin} and the definition of $T_0$)
it follows from Lemmas~\ref{bdl} and \ref{basic-prop} that
$u_x(t,a)+K\ge d_pa^{-\alpha}$, where $d_p=(p-1)^{-1/(p-1)}$ and $K$ is independent of $t$.
We choose this $K$ in \eqref{defJa}.
Since also $u_t\le M$ by \rife{u_t-bounded}, we have
$$
J(t,a)\le M-\eta\bigl(Ld_p-1\bigr).
$$
We also have
$$
J(t,1/2)\le
-\sigma+\eta,
$$
by \rife{BoundSigma}.
Moreover, with $t_1,\varepsilon$ provided by Lemma~\ref{neg1} (applied with $t_0=T^*$), we have
$$
J(t_1,x)\le -\varepsilon+\eta.
$$
We then make the following choice:
$$
\eta=\min(\sigma, \varepsilon),\ \quad L=d_p^{-1}\bigl(1+M\eta^{-1}\bigr)
$$
(which is independent of $a$).
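With this choice, the three estimates above are indeed nonpositive (a direct check): $J(t,a)\le M-\eta(Ld_p-1)=M-\eta\cdot M\eta^{-1}=0$, $J(t,1/2)\le -\sigma+\eta\le 0$ and $J(t_1,x)\le -\varepsilon+\eta\le 0$.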
In view of the above, it then follows from the maximum principle that
$J(t,x)\le 0$ in $(t_1,T_0)\times (a,1/2)$. Consequently, we have
$$
u_t(t,x)\le \eta\bigl[La^\alpha (u_x(t,x)+K)-1\bigr],\quad t_1\le t<T_0,\ 0<a<x\le 1/2.
$$
For any given $(t,x)\in [t_1,T_0)\times (0,1/2)$, we may let $a\to 0^+$ in the
last inequality and the conclusion follows in $(0,1/2]$,
hence in $(0,1)$ by symmetry.
\end{proof}
We now have all the ingredients for the proof of Proposition~\ref{recon}.
\begin{proof}[Proof of Proposition~\ref{recon}.]
Since $u\in C^{1,2}((0,\infty)\times(0,1))$, it follows from \rife{ConclEta}
that \rife{HypLemmeODE} is true at $T=T_0$ with $\ell=0$
and $b=\eta>0$.
By Lemma~\ref{separation}(ii), we deduce that \rife{BeqabAAa} is true with $m=2$.
Lemma~\ref{barrier} then guarantees that $T^r=T_0$ and
that the upper estimate in \rife{BeqConclA} is satisfied.
As for the lower estimate in \rife{BeqConclA}, it was proved in Theorem~\ref{minimal_rate2}.
On the other hand, \eqref{estB2r} and the lower part of estimate \rife{BeqConclAa} are direct consequences of Lemma~\ref{neg2},
whereas the {upper} part is just due to the bound $|u_t|\le M$.
\end{proof}
\subsection{On the life of blow-up solutions between $T^*$ and $T^r$.}
\label{Sec-nonmin3}
This subsection investigates what happens to the nonminimal blow-up solution after the loss of boundary conditions
and before they are recovered, namely in the time interval between $T^*$ and $T^r$.
We shall first prove Theorem~\ref{proppersist}, which is
valid for any LBC solution in one space dimension.
Assertion (i) is a direct consequence of results from Section~\ref{prelim1d}.
\begin{proof}[Proof of Theorem~\ref{proppersist}(i)]
For all $t\in (T_1,T_2)$, since $u(t,0)>0$,
we know from Lemma~\ref{bdl} that $\lim_{x \to 0_+} u_x(t,x) =\infty$
and estimates \rife{shiftedcopy0}-\rife{shiftedcopy} follow from Lemma~\ref{basic-prop}.
The estimates at $t=T_1$ and $T_2$ follow by continuity since $u\in C^{1,2}((0,\infty)\times (0,1))\cap C([0,\infty)\times [0,1])$.
\end{proof}
In view of the proof of assertion (ii), which is a bit delicate,
we establish the following bounds on $u_{xt}$ and $u_{tt}$
which are the key to the boundary regularity of $u_t$.
\begin{lem} \label{bounduxt}
Under the assumptions of Theorem~\ref{proppersist}, for each $\eta>0$, we have
\begin{equation}\label{bounduxt1}
\sup_{(T_1+\eta,T_2)\times (0,1/2)} |u_{xt}|<\infty.
\end{equation}
\end{lem}
\begin{lem} \label{boundutt}
Let $\phi\in X_1$ with $T^*(\phi)<\infty$. Then, for all $t_0>0$, we have
$$\inf_{(t_0,\infty)\times (0,1)} u_{tt}>-\infty.$$
\end{lem}
The key idea for the proof of Lemma~\ref{bounduxt} is to consider the finite differences of $u_x$ in time in the interval $(T_1,T_2)$ and to:
-{\hskip 3pt}observe that they vanish at $x=0$ due to the proximity with the reference profile $U_*'$ (cf.~\rife{shiftedcopy});
-{\hskip 3pt}show that they satisfy a parabolic inequality with {\it strong nonlinear absorption},
which forces their uniform boundedness for any $t>T_1$.
\begin{proof}[Proof of Lemma~\ref{bounduxt}]
{\bf Step 1.} {\it Parabolic equation for finite differences of $u_x$ in time.}
By \rife{shiftedcopy},
there exists $a\in (0,1/2)$ such that
\begin{equation}\label{bounduxt1aux1}
u_x(t,x)\ge cx^{-\alpha}\quad\hbox{ in $[T_1,T_2]\times (0,a)$,}
\end{equation}
and
\begin{equation}\label{bounduxt1aux2}
u_{xx}=-(u_x)^p+u_t\le -cx^{-\alpha-1}\quad\hbox{ in $[T_1,T_2]\times (0,a]$,}
\end{equation}
with $\alpha=1/(p-1)$ and $c=c(p)>0$.
Next fix $h>0$ and let $w=\tau_hu_x$
where $\tau_h$ denotes the finite difference (in time) operator i.e.,
$$[\tau_h\phi](t,x):=h^{-1}(\phi(t+h,x)-\phi(t,x)).$$
In $Q:=[T_1,T_2-h]\times (0,a]$, the function $w$ satisfies
$$
\begin{aligned}
w_t-w_{xx}
&=p\tau_h[(u_x)^{p-1}u_{xx}]\\
&=p(u_x)^{p-1}(t+h,x)\,\tau_h u_{xx}+pu_{xx}(t,x)\,\tau_h [(u_x)^{p-1}].
\end{aligned}
$$
By the mean value theorem, there exists $\theta=\theta(t,x,h)\in (0,1)$ such that
$$
\tau_h [(u_x)^{p-1}]=(u_x)^{p-1}(t+h,x)-(u_x)^{p-1}(t,x)=(p-1)\bigl[\theta u_x(t+h,x)+(1-\theta)u_x(t,x)\bigr]^{p-2}w(t,x).
$$
It then follows from \rife{bounduxt1aux1}, \rife{bounduxt1aux2} that $w$ satisfies in $Q$ an equation of the form
\begin{equation}\label{bounduxt1aux2a}
w_t-w_{xx}=-A(t,x)w+B(t,x)w_x,
\end{equation}
with
\begin{equation}\label{bounduxt1aux2b}
A(t,x)\ge c(p)x^{-2}.
\end{equation}
{\bf Step 2.} {\it Control of $u_{xt}$ away from the boundary.}
We next claim that
\begin{equation}\label{bounduxt1claim}
|u_{xt}|\le Cx^{-3-\alpha}\quad\hbox{ in $[T_1,T_2]\times (0,1/2]$.}
\end{equation}
Fix some $t_1\in (0,T^*)$ and, for any $\varepsilon\in (0,1/2)$, set $Q_\varepsilon=(t_1,T_2]\times (\varepsilon,1/2)$.
Denote $H=\partial_t-\partial_x^2$.
By \eqref{SGBUprofileUpperEst2}, \eqref{SGBUprofileUpperEst3}, \rife{vhj1} and \rife{u_t-bounded},
we have $|u_x|\le C\varepsilon^{-\alpha}$ and $|u_{xx}|\le C\varepsilon^{-\alpha-1}$ in $Q_\varepsilon$, hence
$|H(u_x)|=p|u_{xx}||u_x|^{p-1}\le C\varepsilon^{-2-\alpha}$ in $Q_\varepsilon$.
By standard
parabolic estimates, for each $q\in (1,\infty)$, it follows that
\begin{equation}\label{bounduxt1aux3}
\|u_{xt}\|_{L^q(Q_\varepsilon)}\le C_q\varepsilon^{-2-\alpha}.
\end{equation}
(Here and below, $C_q$ denotes a generic positive constant depending on $q$ but independent of $\varepsilon$).
Next, since $|H(u_t)|=p|u_{xt}||u_x|^{p-1}$, we deduce from
\rife{bounduxt1aux3} that
$\|H(u_t)\|_{L^q(Q_\varepsilon)}\le C_q\varepsilon^{-3-\alpha}$. Since $|u_t|\le M$ in $Q_\varepsilon$, parabolic estimates now give
$$\|u_{tt}\|_{L^q(Q_\varepsilon)}+\|u_{txx}\|_{L^q(Q_\varepsilon)}\le C_q\varepsilon^{-3-\alpha}.$$
Taking $q$ large enough and using standard interpolation properties, we deduce that
$\|u_{tx}\|_{L^\infty(Q_\varepsilon)}\le C_q\varepsilon^{-3-\alpha}$, hence the claim.
{\bf Step 3.} {\it Comparison argument.}
By \rife{bounduxt1claim} and the mean value theorem, we have
\begin{equation}\label{bounduxt1aux4}
|w(t,x)|\le C_0x^{-3-\alpha}\quad\hbox{ in $[T_1,T_2-h]\times (0,1/2]$,}
\end{equation}
with $C_0>0$ independent of $h$.
It then follows from \rife{bounduxt1aux2a}, \rife{bounduxt1aux2b} that
$w$ satisfies the following parabolic inequality with {\it strong nonlinear absorption}:
$$w_t-w_{xx}+C_1|w|^\gamma w\le B(t,x)w_x\quad\hbox{ in $[T_1,T_2-h]\times (0,1/2]$,}$$
where $\gamma=2/(3+\alpha)>0$ and $C_1>0$ is independent of $h$.
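Indeed, by \rife{bounduxt1aux2b} and \rife{bounduxt1aux4} we have
$$A(t,x)\ge c(p)x^{-2}=c(p)\bigl(x^{-3-\alpha}\bigr)^{2/(3+\alpha)}\ge c(p)C_0^{-\gamma}|w|^{\gamma},$$
which is the absorption bound behind the displayed inequality, with $C_1=c(p)C_0^{-\gamma}$.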
To take care of the boundary conditions at $x=0$, we write
$$|(u_x)(t+h,x)-u_x(t,x)|\le |(u_x)(t+h,x)-U_*'(x)|+|(u_x)(t,x)-U_*'(x)|$$
and apply \rife{shiftedcopy} to obtain
\begin{equation}\label{bounduxt1aux6}
|w(t,x)|\le 2Kh^{-1}x\quad\hbox{ in $[T_1,T_2-h]\times (0,1/2]$.}
\end{equation}
Therefore $w$ extends to a continuous function $w\in C([T_1,T_2-h]\times [0,1/2])$ with $w(t,0)=0$.
Now, for any fixed $t_0\in [T_1,T_2-h)$, the function
$\overline w(t,x)=\overline w(t):=[C_1\gamma(t-t_0)]^{-1/\gamma}$ satisfies
$$\overline w_t- \overline w_{xx} +C_1 \overline w^{1+\gamma}\ge B(t,x)\overline w_x
\quad\hbox{ in $(t_0,\infty)\times (0,1/2]$.}
$$
{If $t-t_0<\delta$, then $\overline w\geq (C_1\gamma \delta)^{-1/\gamma}$; hence, setting $\delta=(C_0^{-\gamma}/C_1\gamma)a^{(3+\alpha)\gamma}$} and using \rife{bounduxt1aux4}, we get
$$\overline w(t,a)\ge C_0a^{-3-\alpha}\ge w(t,a)\quad\hbox{ for all $t\in (t_0,\min(T_2-h,t_0+\delta))$.}$$
Since also $|w|\le 2Kh^{-1}$ in $[T_1,T_2-h]\times [0,1/2]$, owing to \rife{bounduxt1aux6}, whereas $\overline w(t)\to \infty$ as $t\to t_0$, we may
apply the comparison principle with $\pm\overline w$, and we obtain
$$|w(t,x)|\le [C_1\gamma(t-t_0)]^{-1/\gamma}\quad\hbox{ in $(t_0,\min(T_2-h,t_0+\delta))\times (0,a]$.}$$
Letting $h\to 0_+$, it follows that, for any $t_0\in [T_1,T_2)$, we have
$$|u_{xt}|\le [C_1\gamma(t-t_0)]^{-1/\gamma}\quad\hbox{ in $(t_0,\min(T_2,t_0+\delta))\times (0,a]$,}$$
hence in $(t_0,\min(T_2,t_0+\delta)]\times (0,1/2]$ by interior regularity of $u$.
This immediately yields the desired conclusion.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{boundutt}]
Assume $t_0\in (0,T^*)$ without loss of generality.
We consider the approximating sequence of solutions $u_k$ of \rife{app-1d}, with $F_k$ given by \rife{eqaaa}.
The function $w:=u_{k,tt}$ solves the equation
$$
w_t - w_{xx}=F_k'(u_{k,x})w_x+F_k''(u_{k,x})(u_{k,xt})^2\ge F_k'(u_{k,x})w_x\,.
$$
Since $u_{k,tt}=0$ on the boundary and, by \rife{eqabA0}, $u_{k,tt}(t_0,\cdot)=u_{tt}(t_0,\cdot)$ for all $k$ sufficiently large,
we have $u_{k,tt}\ge -C(t_0)$ for all $t>t_0$,
and the conclusion follows by passing to the limit.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{proppersist}(ii)]
We first claim that
\begin{equation}\label{defphiut}
\phi(t):=\lim_{x\to 0_+} u_t(t,x)
\hbox{ exists for all $t\in (T_1,T_2]$}
\end{equation}
and
\begin{equation}\label{defphiut2}
u_t(t,0) \hbox{ exists, with $u_t(t,0)=\phi(t)$, for all $t\in (T_1,T_2]$}.
\end{equation}
Let $t\in (T_1,T_2]$. Writing
$$u_t(t,x)=u_t(t,1/2)-\int_x^{1/2} u_{tx}(t,y)\,dy, \quad 0<x<1/2,$$
and using the bound \rife{bounduxt1}, we obtain \rife{defphiut}, {with
$$
\phi(t)= u_t(t,1/2)-\int_0^{1/2} u_{tx}(t,y)\,dy\,.
$$
Moreover, since $u$ is smooth in $(0,1)$ and $|u_{tx}|$ is dominated, the right-hand side is a continuous function of $t$, so $\phi$ is continuous and actually satisfies for all $x\in (0,1/2)$:}
\begin{equation}\label{defphiut3}
u_t(t,x)=\phi(t)+\int_0^x u_{tx}(t,y)\,dy,\quad 0<x<1/2.
\end{equation}
Therefore, for all $t,s\in (T_1,T_2]$, by \rife{u_t-bounded}, \rife{defphiut} and dominated convergence, we have
$$
u(s,0)-u(t,0)=\lim_{x\to 0} \bigl(u(s,x)-u(t,x)\bigr)=\lim_{x\to 0} \int_t^s u_t(\sigma,x)\,d\sigma=\int_t^s \phi(\sigma)\,d\sigma,$$
hence \rife{defphiut2} {follows from the continuity of $\phi$}.
We next claim that
\begin{equation}\label{defphiutclaim}
\quad\hbox{the restriction of $u_t$ to $(T_1,T_2]\times[0,1/2]$ is continuous.}
\end{equation}
Let $\varepsilon>0$ and $t\in [T_1+\varepsilon,T_2]$.
For any $y\in (0,1/2)$, $s\in [T_1+\varepsilon,T_2]$ and $x\in (0,y)$, by \rife{defphiut2}, \rife{defphiut3} and \rife{bounduxt1}, we have
$$
\begin{aligned}
|u_t(s,x)-u_t(t,0)|
&\le |u_t(s,x)-u_t(s,y)|+|u_t(s,y)-u_t(t,y)|+|u_t(t,y)-u_t(t,0)| \\
&\le \int_x^y |u_{tx}(s,\xi)|\,d\xi +|u_t(s,y)-u_t(t,y)| + \int_0^y |u_{tx}(t,\xi)|\,d\xi \\
&\le |u_t(s,y)-u_t(t,y)|+C(\varepsilon)y.
\end{aligned}
$$
Since $u_t\in C((0,\infty)\times (0,1/2])$, it follows that
$$\limsup_{(s,x)\to (t,0)}|u_t(s,x)-u_t(t,0)| \le C(\varepsilon)y.$$
The claim \rife{defphiutclaim} follows by letting $y\to 0$.
We then show that:
\begin{equation}\label{defphiutclaim2a}
\quad\hbox{the restriction of the function $t\mapsto u(t,0)$ to $[T_1,T_2]$ is of class $C^1$.}
\end{equation}
In view of \rife{defphiut2} and \rife{defphiutclaim}, it suffices to show that
\begin{equation}\label{defphiutclaim2}
\lim_{t\to {T_1}_+} \phi(t) \hbox{ exists.}
\end{equation}
Fixing any $t_0<T^*$, by Lemma~\ref{boundutt} we may choose $C>0$ such that,
for all $x\in (0,1)$, the function $t\to u_t(t,x)+Ct$ is nondecreasing with respect to $t\ge t_0$.
Therefore, letting $x\to 0$ and using \rife{defphiut}, we infer that $\phi(t)+Ct$ is nondecreasing with respect to $t\in (T_1,T_2]$.
Since it is bounded as a consequence of \rife{u_t-bounded} and \rife{defphiut}, it has a finite limit as $t\to T_{1,+}$,
hence \rife{defphiutclaim2} and \rife{defphiutclaim2a}.
Let us finally show that:
\begin{equation}\label{defphiutclaim4}
\quad\hbox{the restriction $V$ of $u-U_*$ to $Q:=[T_1,T_2]\times [0,1/2]$ is differentiable.}
\end{equation}
It suffices to verify the differentiability at points $(t,0)$ with $t\in[T_1,T_2]$.
Fix $t\in[T_1,T_2]$ and take any $h$ such that $t+h \in [T_1,T_2]$ and $x\in (0,1/2]$.
Since $V$ is continuous in $Q$ and recalling \rife{defphiut2}, we deduce from the mean value theorem that
$$
\begin{aligned}
V(t+h,x)-V(t,0)
&=V(t+h,x)-V(t+h,0)+V(t+h,0)-V(t,0)\\
&=V(t+h,x)-V(t+h,0)+u(t+h,0)-u(t,0)\\
&=xV_x(t+h,\theta x)+u(t+h,0)-u(t,0),
\end{aligned}
$$
for some $\theta\in (0,1)$ depending on $t, h, x$.
Since, by \rife{shiftedcopy},
\begin{equation}\label{boundVx}
|V_x|\le Kx\quad\hbox{ in $[T_1,T_2]\times (0,1/2]$}
\end{equation}
and recalling \rife{defphiutclaim2a}, this guarantees \rife{defphiutclaim4}.
Moreover, we get
\begin{equation}\label{defphiutclaim5}
\quad\hbox{$V_t(t,0)=u_t(t,0)$ for all $t\in (T_1,T_2]$ and $V_x(t,0)=0$ for all $t\in [T_1,T_2]$.}
\end{equation}
In view of \rife{boundVx} and \rife{defphiutclaim5}, the $C^1$ regularity of $V$
is now a consequence of \rife{defphiutclaim4} and \rife{defphiutclaim}.
\end{proof}
The last result of this section, which will enable us to complete the proof of Theorem~\ref{nonmin0},
establishes the two-piece monotonicity of $u$ in the time interval $[T^*,T^r]$.
The monotonicity in the weak sense is proved by using zero-number for time translates of the solution.
The possibility of a plateau is then ruled out by using the zero-number of $u_t$, Hopf's lemma
and the regularizing barriers from the proof of Lemma~\ref{barrier}.
\begin{prop} \label{twopiece}
Under the assumptions of Proposition~\ref{recon}, there exists $T_m\in (T^*,T^r)$
such that $u(t,0)$ is increasing on $[T^*,T_m]$ and decreasing on $[T_m,T^r]$,
with moreover
\begin{equation}\label{utneg3}
u_t\le 0\quad\hbox{ in $[T_m,\infty)\times (0,1)$.}
\end{equation}
\end{prop}
\begin{proof}
{\bf Step 1.} {\it Monotonicity in the weak sense.}
Fix $t_0\in (0,T^*)$. By Lemma~\ref{defz}(iii), there exists $\tau_0>0$ such that
\begin{equation}\label{prelimz0D2}
Z(u(t+\tau)-u(t))\le 2\quad\hbox{ for all $\tau\in (0, \tau_0)$ and $t\ge t_0$.}
\end{equation}
Consider $T_m:=\sup E\in (T^*,T^r)$, where
$$E=\{T>T^*; \, \hbox{ $u(t,0)$ is nondecreasing on $[T^*,T]$}\},$$
noting that $E\ne\emptyset$ owing to \eqref{estB2} and that $E\subset (T^*,T^r)$ because of \rife{ConclEta}.
By definition there exists $\varepsilon_j\to 0_+$ such that $u(T_m+\varepsilon_j,0)<u(T_m,0)$.
Since $Z(u(T_m+\varepsilon_j)-u(T_m))\le 2$ for large enough $j$ by \eqref{prelimz0D2} and $u(T_m+\varepsilon_j,1/2)<u(T_m,1/2)$ by Lemma~\ref{basic-prop0}(ii),
it follows that $u(T_m+\varepsilon_j)\le u(T_m)$ in $[0,1]$.
We may apply the comparison principle in Proposition~\ref{compP2} to the global viscosity
solutions $u$ and $\tilde u:=u(\cdot+\varepsilon_j,\cdot)$
(indeed the assumption $\tilde u(0,\cdot)\equiv u(\varepsilon_j,\cdot)\in X$ is satisfied for $j$ large enough).
Therefore, $u(t+\varepsilon_j)\le u(t)$ in $[0,1]$
for all $t\ge T_m$.
This implies \eqref{utneg3}.
Since we know now that $u(s,x)\le u(t,x)$ for all $T_m\le t\le s\le T^r$ and all $x\in (0,1)$,
by letting $x\to 0$, we deduce that $u(t,0)$ is nonincreasing on $[T_m,T^r ]$,
whereas $u(t,0)$ is nondecreasing on $[T^*,T_m]$ by the definition of $E$.
{\bf Step 2.} {\it Strict monotonicity.}
To conclude the proof, we shall show that $u$ cannot have a plateau.
Let us assume for contradiction that $u(t,0)=\ell>0$ on $[T_1,T_2]$ for some $T^*<T_1<T_2<T^r$.
\vskip2pt
We first claim that $N(t)=0$ on $[T_1,T_2]$.
Otherwise $u_t>0$ at some point of $[T_1,T_2]\times (0,1)$.
By continuity, it follows that there exist $a\in (0,1/2)$ and $T_1<t_1<t_2<T_2$, such that
$u_t(t,a)>0$ and $N(t)=2$ for all $t\in [t_1,t_2]$.
Recalling \eqref{utcenter}, we deduce that
$u_t\ge 0$ on $[t_1,t_2]\times (0,a]$.
By Remark~\ref{remb0}, we get $u_x\ge U_*'$ and $u\ge\ell+U_*$ on $[t_1,t_2]\times (0,a]$.
The function $w:=u-\ell-U_*$ is nonnegative, continuous up to the boundary, and verifies
\begin{equation}\label{eqparabHopf}
w_t-w_{xx}=(u_x)^p-(U_*')^p\ge 0
\end{equation}
in $Q:=(t_1,t_2)\times (0,a]$ with $w(t,a)>0$ (due to $u_t(t,a)>0$).
Then, by comparing with the linear heat equation and using Hopf's Lemma,
we deduce that $w(t,x)\ge c(t)x$ in $Q$ for some $c(t)>0$, i.e.
$$u(t,x)\ge u(t,0)+U_*+c(t)x.$$
But this contradicts estimate \rife{shiftedcopy0}.
\vskip2pt
We have thus proved that $u_t\le 0$ on $[T_1,T_2]\times (0,1)$,
hence $u_x\le U_*'$ and $u\le \ell+U_*$ on $[T_1,T_2]\times (0,1/2]$ by Lemma~\ref{separation}(ii).
The new function $w:=U_*-u+\ell$ is nonnegative and verifies~\rife{eqparabHopf}
in $Q:=(T_1,T_2)\times (0,1/2]$.
Since $w(T_1,\cdot)\not\equiv 0$ {(owing to $w_x(T_1,1/2)>0$)},
Hopf's Lemma again gives $w(t,x)\ge d(t)x$ in $Q$ for some $d(t)>0$,
i.e. $u(t,x)\le u(t,0) +U_*-d(t)x$.
But this again contradicts estimate \rife{shiftedcopy0}.
\end{proof}
We have now all the ingredients to conclude the proof of Theorem~\ref{nonmin0}.
\vskip1em
\begin{proof}[Completion of proof of Theorem~\ref{nonmin0}.]
Assertion (i) follows from Proposition~\ref{nonmin} and the first part of Proposition~\ref{recon}.
\vskip2pt
In assertion (ii), estimates \rife{estA0} and \rife{BeqConclA0} are just \rife{estA} and \rife{BeqConclA} in Propositions~\ref{nonmin} and \ref{recon}
whereas \eqref{estB0} and \eqref{BeqConclAa0} follow from \eqref{estB}, \eqref{BeqConclAa} and
the regularity of $u(t,0)$ proved in Theorem~\ref{proppersist}(ii).
\vskip2pt
Assertion (iii) is a consequence of Theorem~\ref{proppersist} and Proposition~\ref{twopiece}.
\end{proof}
\section{ Behavior of minimal GBU solutions: proof of Theorem~\ref{minimal0}}
\label{Sec-min}
This section is devoted to the behavior of minimal blow-up solutions. We already know that those solutions are characterized by the fact that the boundary condition is not lost. In addition, if $N(u_t(0))=2$, we also know from Proposition~\ref{nonmin}(i), that
minimal blow-up solutions occur if and only if $L=0$, where $L$ is defined in \eqref{defL}.
We first give the:
\begin{proof}[Proof of Theorem~\ref{minimal0}(ii).]
By \eqref{utneg} and Proposition \ref{utneq}, it follows that
\begin{equation}\label{utneg2}
u_t\le 0\quad\hbox{ in $[T^*,\infty)\times(0,1)$,}
\end{equation}
and that $u$ never loses boundary conditions and satisfies $u\leq U_*$, $u_x\le U_*'$ for all times after $T^*$.
Now, set $W:=U_*-u$. This function is nonnegative and verifies
$$
W_t-W_{xx}=(U_*')^p-(u_x)^p\ge 0
$$
on $(0,1/2)$ for $t>T^*$, with $W(t,1/2)>0$ due to \eqref{utcenter}
(and $W$ is continuous up to the boundary).
Then, Hopf's Lemma gives $W(T^*+t,x)\ge c(t)x$ for all $t>0$, with some (possibly small) $c(t)>0$,
i.e. $u(T^*+t,x)\le U_*-c(t)x$. We conclude by applying Lemma~\ref{barrier}.
\end{proof}
One of the main ingredients of the proof of parts (i) and (iii) of Theorem~\ref{minimal0} is the already established Proposition~\ref{contut},
where we showed that nonminimal blow-up and smoothing rates stem from the continuity of the time derivative at the boundary.
In the following steps, we aim to obtain the necessary information on the time derivative in order to apply Proposition~\ref{contut}.
\begin{lem}\label{minlem1}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume that $N(0)=2$
and $T^*<\infty$.
Fix any $t_0\in (0,T^*)$. Then we have
\begin{equation}\label{boundutxA}
u_{tx}\ge -C,\quad (t,x)\in [t_0,T^*)\times [0,1/2],
\end{equation}
for some constant $C>0$.
As a consequence, we have
\begin{equation}\label{boundutxB}
u_t(t,x)\ge -Cx,\quad (t,x)\in [t_0,T^*]\times (0,1/2].
\end{equation}
\end{lem}
\begin{proof}
Set $w:=u_{tx}$. Differentiating the equation in \rife{vhj1} with respect to $t$ and then to $x$, we find
$$
w_t-w_{xx}=p|u_x|^{p-2}u_x u_{xtx}+p(p-1)|u_x|^{p-2}u_{xx}u_{xt}
$$
that is,
$$
w_t-w_{xx}=aw_x+bw
$$
with $a(t,x)=p|u_x|^{p-2}u_x$ and $b(t,x)=p(p-1)|u_x|^{p-2}u_{xx}$.
Moreover, we observe that
$$
b=p(p-1)|u_x|^{p-2}(u_t-|u_x|^p)\le p(p-1)|u_x|^{p-2}(M-|u_x|^p)<C_0.
$$
Since the modified function $\tilde w=e^{-C_0t}w$ then satisfies
$$
\tilde w_t-\tilde w_{xx}=a\tilde w_x+b_1\tilde w,
$$
with $b_1:=b-C_0<0$, the maximum principle yields
\begin{equation}\label{maxpplw}
\min_{\overline Q_T}\tilde w_-= \min_{\partial_PQ_T}\tilde w_-,
\quad\hbox{ for all $T\in (t_0,T^*)$,}
\end{equation}
where $Q_T=(t_0,T)\times (0,1/2)$ and $\partial_PQ_T$ denotes its parabolic boundary.
On the other hand,
by Lemma~\ref{defz}(i),
for each given $t\in [t_0,T^*)$, we have
$u_t(t,x)\ge 0$ for $x$ close to $0$, hence $u_{tx}(t,0)\ge 0$ (recalling $u_t(t,0)=0$).
Moreover, $\tilde w\ge -C$ at $x=1/2$,
since the solution remains smooth for all times away from the boundary of $(0,1)$.
Property \rife{maxpplw} then reduces to
$$
\displaystyle\min_{\overline Q_T}\tilde w_- \ge
\min\bigl[-C,\displaystyle\min_{x\in[0,1]}\tilde w_-(t_0,x)\bigr]
\quad\hbox{ for all $T\in (t_0,T^*)$,}
$$
and the conclusion follows.
\end{proof}
\begin{lem}\label{minlem2}
Under the assumptions of Theorem~\ref{minimal0}, we have
$$
\lim_{t\to T^*_-,\,x\to 0}u_t(t,x)=0.
$$
\end{lem}
\begin{proof}
We have $\displaystyle\liminf_{t\to T^*_-,\,x\to 0}u_t(t,x)\ge 0$ owing to \rife{boundutxB}.
So it suffices to show $\displaystyle\limsup_{t\to T^*_-,\,x\to 0}u_t(t,x)\le 0$.
Assume for contradiction that there exist $\eta>0$, $t_j\to T^*_-$ and $x_j\to 0$ such that
$$u_t(t_j,x_j)\ge \eta.
$$
Let $h=\eta/(2C)$ where $C$ is given by \rife{boundutxA}.
For all large $j$, we have $x_j<h$ and we deduce from \rife{boundutxA} that
$$
u_t(t_j,h)=u_t(t_j,x_j)+\int_{x_j}^h u_{tx}(t_j,x)\, dx\ge \eta-Ch=\eta/2.
$$
Letting $j\to\infty$ and using $u_t\in C((0,T^*]\times (0,1))$, we deduce that
$u_t(T^*,h)\ge\eta/2$.
This is a contradiction with \eqref{utneg2}.
\end{proof}
\begin{lem}\label{minlem3}
Under the assumptions of Theorem~\ref{minimal0}, we have
$$
\lim_{t\to T^*_+,\,x\to 0}u_t(t,x)=0.
$$
\end{lem}
\begin{proof}
In view of \rife{utneg2},
it suffices to prove that $\displaystyle\liminf_{t\to T^*_+,\,x\to 0}u_t(t,x)\ge 0$.
By \eqref{utcenter}, we have
\begin{equation}\label{BoundSigma2}
\sigma:=-\max_{t\in [T^*,T^*+1]} u_t(t,1/2)>0
\end{equation}
and, by Lemma~\ref{defz}(iii), there exists $\varepsilon_0\in (0,\sigma)$ such that
\begin{equation}\label{prelimz0C2b}
Z_\varepsilon(t):=Z(u_t(t)+\varepsilon)\le 2\quad\hbox{ for all $\varepsilon\in (0,\varepsilon_0)$ and all $t\ge T^*$.}
\end{equation}
Moreover, for all $t\in (T^*,T^*+1]$, since $u_t(t)\in C([0,1])$ and $u_t(t,0)=0$ by Theorem~\ref{minimal0}(ii), we have $Z_\varepsilon(t)=2$ due to \eqref{BoundSigma2}.
Now assume for contradiction that there exist $\varepsilon\in (0,\varepsilon_0)$, $t_j\to T^*_+$ and $x_j\to 0$ such that
$$u_t(t_j,x_j)<-\varepsilon.
$$
Then, we have
$$
u_t(t_j,x)\le - \varepsilon, \quad x_j\le x\le 1/2
$$
due to $Z_\varepsilon(t_j)=2$.
For all fixed $x\in (0,1/2)$, letting $j\to\infty$, we obtain
$u_t(T^*,x)\le - \varepsilon$, a contradiction with \rife{boundutxB} for $t=T^*$.
\end{proof}
We have now all the ingredients to conclude the proof of Theorem~\ref{minimal0}.
\begin{proof}[Proof of Theorem~\ref{minimal0}(i) and (iii).]
Assertion (i) is a direct consequence of {Proposition~\ref{contut}(i) }and Lemma~\ref{minlem2},
whereas assertion (iii) is a direct consequence of {Proposition~\ref{contut}(ii)} and Lemma~\ref{minlem3}.
\end{proof}
\section{Appendix 1. A partial monotonicity result for the zero-number of $u_t$}
\label{Sec-monot}
In this short appendix, we state the following additional result, announced before Proposition~\ref{zeronumberut2},
and prove it as an a posteriori consequence of our analysis of the behavior of the global viscosity solution.
It shows the monotonicity of the zero-number of $u_t$ for the viscosity solutions starting with $N(0)\le 4$.
\begin{prop} \label{zeronumbermonotone}
Let $\phi\in W^{3,\infty}(0,1)$ be compatible at order two,
with $\phi$ symmetric and nondecreasing on $[0,1/2]$.
Assume that $N(0)\le 4$ and $T^*<\infty$.
Then the zero-number $N(t)$ of $u_t$ is a nonincreasing function of $t$.
In particular, if $N(0)=2$, then we have
$$N(t)=
\begin{cases}
2&\hbox{ for all $t\in [0,T_d)$}, \\
0&\hbox{ for all $t\in [T_d,\infty)$,}
\end{cases}
$$
with $T_d=T_m$ if
$u$ is nonminimal and $T_d=T^*$ if $u$ is minimal.
\end{prop}
\begin{proof}
By Lemma~\ref{higherN}, the monotonicity property can be reduced to the case $N(0)=2$.
The case when $N(0)=2$ and $u$ is minimal follows from \rife{utneg2} and Lemma~\ref{defz}.
Let us thus assume that $N(0)=2$ and $u$ is nonminimal.
By Lemma~\ref{defz}, \eqref{eqposut} and Proposition~\ref{twopiece}, we have $N(t)=2$ for all $t\in (0,T^*]$ and $N(t)=0$ for all $t\in [T_m,\infty)$.
Assume for contradiction that $N(t_0)=0$ for some $t_0\in (T^*,T_m)$.
Then $u_t(t_0)\le 0$ in $(0,1)$, hence
\begin{equation}\label{monotN1}
\quad\hbox{ $u(t_0,x)\le u(t_0,0)+U_*(x)$ in $[0,1]$}
\end{equation}
by Lemma~\ref{separation}(ii) and the symmetry of $u$.
Since $u(t,0)$ is increasing on $[T^*,T_m]$ and $u_t\in C((T^*,T^r)\times [0,1])$ by Theorem~\ref{proppersist}(ii),
there exist $\varepsilon>0$ and $t_1,t_2$ with $t_0<t_1<t_2<T_m$ such that
$$\hbox{\quad $u_t(t,0)\ge \varepsilon$ on $[t_1,t_2]$ \quad and \quad $u_t(t,0)\ge 0$ on $(T^*,T_m)$.}$$
We first define the comparison function $z(t,x)=u(t,0)+U_*(x)$,
which satisfies
$$z_t-z_{xx}-|z_x|^p=u_t(t,0)-U_*''-U_*'^p=u_t(t,0)\ge 0\quad\hbox{ in $(T^*,T_m)\times (0,1)$}.$$
In view of \rife{monotN1} and of $u(t,1)=u(t,0)$, the comparison principle in Proposition~\ref{compP} guarantees that
\begin{equation}\label{monotN2}
u(t,x)\le u(t,0)+U_*(x) \quad\hbox{ in $(t_0,T_m)\times [0,1]$}.
\end{equation}
We next define the comparison function
$$W(t,x)=u(t,0)+U_*(x)-\eta(t-t_1)x
\quad\hbox{ in $[t_1,t_2]\times [0,1]$},$$
where $\eta\in (0,\varepsilon)$ is chosen small enough so that
$U_*(x)-\eta(t_2-t_1)x\ge 0$ and $U_*'(x)-\eta(t_2-t_1)\ge 0$ for all $x\in (0,1]$.
We then have $0\le W_x\le U_*'$ in $[t_1,t_2]\times (0,1]$ and $W(t,x)\ge u(t,0)$ in $[t_1,t_2]\times [0,1]$.
Consequently, we obtain
$$
\begin{aligned}
W_t-W_{xx}-|W_x|^p
&=u_t(t,0)-\eta x-U_*''-(W_x)^p \\
&\ge \varepsilon-\eta-U_*''-U_*'^p=\varepsilon-\eta>0\quad\hbox{ in $[t_1,t_2]\times (0,1)$.}
\end{aligned}
$$
Since $W(t_1,x)=u(t_1,0)+U_*(x)\ge u(t_1,x)$ in $[0,1]$ by \rife{monotN2},
applying the comparison principle again, we deduce that
$$u(t,x)\le u(t,0)+U_*(x)-\eta(t-t_1)x
\quad\hbox{ in $[t_1,t_2]\times [0,1]$}.$$
But this contradicts \rife{shiftedcopy0}.
\end{proof}
\section{Appendix 2: Proof of Theorem~\ref{prelimprop2}}
\label{Sec-App}
We first establish the following interior a priori estimate for a more general equation,
which covers the regularized problems \rife{approxpbm}.
It relies on a Bernstein type argument
(which is a modification of \cite[Theorem~3.1]{SZ06}, where only the case $F(\nabla u)=|\nabla u|^p$ was treated;
the latter was motivated by \cite{Li85} for the elliptic case). {See also the Appendix in \cite{PZ} for {\it local in time --
global in space} gradient estimates of similar type.}
\begin{lem}\label{prelimprop2lem}
Let $\Omega$ be any domain of $\mathbb R^n$.
Let $F(\xi)\in W^{1,\infty}_{loc}(\mathbb R^n)$ satisfy:
\begin{equation}\label{approxHyp4a}
F(\xi)\ge C_2|\xi|^m,\quad\hbox{ for $|\xi|\ge \xi_0$,}
\end{equation}
\begin{equation}\label{approxHyp3a}
|\nabla F(\xi)|\le C_1\bigl(1+|\xi|^{-2}F^{2-\theta}(\xi)\bigr), \quad\hbox{ for all $\xi\ne 0$,}
\end{equation}
where $m>1$, $\theta\in (0,1), C_1, C_2, \xi_0>0$.
Let $0\le t_0<T<\infty$, $v\in C^{1,2}((t_0,T)\times\Omega)$, with $\nabla v\in C([t_0,T)\times\Omega)$, be a solution of
$$v_t-\Delta v =F(\nabla v) \quad\hbox{ in $(t_0,T)\times\Omega$},$$
and assume that
\begin{equation}\label{approxHyp4a0}
|\nabla v(t_0,x)|\le M_0 \quad\hbox{ in $\Omega$}
\end{equation}
and
\begin{equation}\label{approxHyp4a1}
|v_t|\le M_1 \quad\hbox{ in $(t_0,T)\times\Omega$}.
\end{equation}
Then $\nabla v$ satisfies the estimate
\begin{equation}\label{approxpbm2aEstim}
|\nabla v(t,x)|\le C_3\bigl[1+\delta^{-1/(\theta m)}(x)+\delta^{-1/(m-1)}(x)\bigr] \quad\hbox{ in $(t_0,T)\times\Omega$},
\end{equation}
where $\delta(x)={\rm dist}(x,\partial\Omega)$ and the constant $C_3>0$ depends only on $C_1,C_2,\xi_0,M_0,M_1,m,\theta,n$.
\end{lem}
\begin{proof}
Let $x_0\in\Omega$, $R=\delta(x_0)$ and $Q_R=(t_0,T)\times B_R(x_0)$.
For $i=1,\dots,n$, put $v_i={\partial v\over\partial x_i}$.
By parabolic regularity, we have $\partial_tv_i, D^2v_i\in L^q_{loc}(Q_R)$
for all $q<\infty$.
Differentiating in $x_i$, we obtain
\begin{equation} \label{eqLocBernsteinPhA}
\partial_t v_i-\Delta v_i= (F(\nabla v))_i.
\end{equation}
Let $w=|\nabla v|^2$. Multiplying (\ref{eqLocBernsteinPhA}) by $2 v_i$, summing up, and using
$\Delta w=2\nabla v\cdot\nabla(\Delta v)+2|D^2v|^2$ (with
$|D^2v|^2=\textstyle\sum_{i,j}(v_{ij})^2$),
we deduce that
\begin{equation} \label{eqLocBernsteinPhB}
{\mathcal{L}}w:= w_t-\Delta w+b\cdot\nabla w=-2|D^2v|^2 \ \hbox{ in $Q_R$,\quad
with } b:= -\nabla F(\nabla v).
\end{equation}
Let $a\in (0,1)$ and put $R'=3R/4$. We can select a cut-off function $\eta\in C^2(\overline
B(x_0,R'))$, $0\leq \eta\leq 1$, with $\eta=0$
for $|x-x_0|=R'$, such that
\begin{equation} \label{condCutoff}
|\nabla\eta| \leq C R^{-1}\eta^a,\quad |\Delta\eta|+\eta^{-1}|\nabla\eta|^2\leq C R^{-2}\eta^a,
\quad\hbox{ for $|x-x_0|<R'$,}
\end{equation}
with $C=C(a)>0$ (see e.g. \cite[p. 374]{SZ06}).
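For the reader's convenience, one admissible explicit choice (stated here only as an illustration; it is not the specific construction of \cite{SZ06}) is
$$\eta(x)=\Bigl(1-\frac{|x-x_0|^2}{R'^2}\Bigr)_+^{k},\qquad k=\frac{2}{1-a}>2,$$
which satisfies $\eta(x_0)=1$, $\eta=0$ for $|x-x_0|=R'$, and, by a direct computation using $R'=3R/4$,
$|\nabla\eta|\le C(a,n)R^{-1}\eta^{(k-1)/k}\le C(a,n)R^{-1}\eta^{a}$ and
$|\Delta\eta|+\eta^{-1}|\nabla\eta|^2\le C(a,n)R^{-2}\eta^{(k-2)/k}=C(a,n)R^{-2}\eta^{a}$,
since $(k-2)/k=a$ and $(k-1)/k\ge a$.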
Set
$$z=\eta w.$$
In the rest of the proof, $C$ denotes a generic positive constant depending only on $C_1,C_2,\xi_0$, $M_0,M_1,m,\theta,n,a$.
We have
$$
{\mathcal{L}}z =\eta{\mathcal{L}}w+w{\mathcal{L}}\eta-2\nabla\eta\cdot\nabla w
=-2\eta|D^2v|^2+w{\mathcal{L}}\eta-2\nabla\eta\cdot\nabla w
\quad\hbox{ in $Q_{R'}$}
$$
by (\ref{eqLocBernsteinPhB}), and
$$
2|\nabla\eta\cdot\nabla w|=4\big|\sum_i\nabla\eta\cdot v_i\nabla
v_i\big| \leq 4\sum_i \eta^{-1}|\nabla\eta|^2 v_i^2+\sum_i \eta|\nabla
v_i|^2 =4\eta^{-1}|\nabla\eta|^2 w+\eta|D^2v|^2,
$$
hence, using \rife{condCutoff},
$$
{\mathcal{L}}z+\eta|D^2v|^2
\leq ({\mathcal{L}}\eta+4\eta^{-1}|\nabla\eta|^2)w
\leq C\eta^a \bigl(R^{-1} |\nabla F(\nabla v)|+R^{-2}\bigr)|\nabla v|^2
\quad\hbox{ in $Q_{R'}.$}
$$
Taking \rife{approxHyp3a} into account and denoting $\Sigma=Q_{R'}\cap\{(t,x);\,z(t,x)\ge \xi_0^2\}$, we obtain
$$
{\mathcal{L}}z+\eta|D^2v|^2
\leq CR^{-1}\eta^a\,F^{2-\theta}(\nabla v)+C(R^{-2}+R^{-1})\eta^a |\nabla v|^2
\quad\hbox{ in $\Sigma.$}
$$
Choose $a=\max\bigl((2-\theta)/2,1/m\bigr)\in (0,1)$. Using Young's inequality, $\eta\le 1$ and \rife{approxHyp4a}, it follows that,
for all $\varepsilon>0$,
\begin{align}
{\mathcal{L}}z+\eta|D^2v|^2
&\leq \varepsilon\eta^{2a/(2-\theta)} F^2(\nabla v)+C(\varepsilon)R^{-2/\theta}
+\varepsilon\eta^{am}|\nabla v|^{2m}+C(\varepsilon)(1+R^{-2m/(m-1)}) \notag \\
&\leq \varepsilon\eta(1+C_2^{-2}) F^2(\nabla v)+C(\varepsilon)(1+R^{-2/\theta}+R^{-2m/(m-1)})
\quad\hbox{ in $\Sigma.$}
\label{eqLocBernsteinPh2}
\end{align}
On the other hand, using the fact that $|F(\nabla v)-v_t|=|\Delta v|\leq \sqrt{n}|D^2v|$, along with \rife{approxHyp4a1}, we obtain
$$
{1\over 2n}F^2(\nabla v)\leq |D^2v|^2+|v_t|^2\leq |D^2v|^2+M_1^2.
$$
Combining this with (\ref{eqLocBernsteinPh2}) with the choice $\varepsilon=[4n(1+C_2^{-2})]^{-1}$, it follows that
\begin{equation} \label{eqLocBernsteinPh2b}
{\mathcal{L}}z+{1\over 2n}\eta F^2(\nabla v)
\leq {1\over 4n}\eta F^2(\nabla v)+C(1+R^{-2/\theta}+R^{-2m/(m-1)})
\quad\hbox{ in $\Sigma.$}
\end{equation}
Since $\eta F^2(\nabla v)\ge \eta |\nabla v|^{2m} \ge z^m$ in $\Sigma$, we obtain
$${\mathcal{L}}z\leq -{1\over 4n}z^m+A \quad\hbox{ in $\Sigma$,
\quad with $A=C(1+R^{-2/\theta}+R^{-2m/(m-1)})$.}$$
It follows from the maximum principle (see, e.g., \cite[Proposition~52.4 and Remark~52.11(a)]{QS07}) that
$$\sup_{Q_{R'}} z\leq \max\bigl(\max_{x\in\overline{B_{R'}}} z(t_0,x),(4nA)^{1/m}\bigr)
\leq \max\bigl(M_0^2,(4nA)^{1/m}\bigr),$$
hence
$$|\nabla v(t,x_0)|\leq \sup_{Q_{R'}} z^{1/2}\leq
C\bigl[1+R^{-1/(\theta m)}+R^{-1/(m-1)}\bigr], \quad t_0<t<T,$$
which proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{prelimprop2}(i).]
{\bf Step 1.} {\it Convergence.}
For each $k$, since $F_k$ has at most quadratic growth by \rife{approxHyp4b},
the global classical existence of $u_k$ is well known (see, e.g., \cite[Section 35]{QS07}).
In view of~\rife{approxHyp1}, the maximum principle guarantees that $u_k$ is nondecreasing with respect to $k$ and that
\begin{equation} \label{BernsteinPhAux0a}
e^{t\Delta}\phi\le u_k\le A_0:= \sup_\Omega\phi \quad\hbox{ in $[0,\infty)\times\Omega$,}
\end{equation}
for all $k\ge 1$. Therefore $U(t,x):=\lim_k u_k(t,x)$ is well defined and finite for each $t>0$, $x\in\Omega$.
Moreover, by \rife{approxHyp1}, \rife{approxHyp3}, we have
\begin{equation} \label{BernsteinPhAux0}
F_k(\xi)\le |\xi|^p, \qquad |\nabla F_k(\xi)|\le C(|\xi|^q+1),
\end{equation}
with $q=(2-\theta)p-2>0$ and $C$ independent of $k$.
By standard parabolic regularity arguments, we deduce the existence
of $t_0>0$ such that, for each $\varepsilon\in (0,t_0)$,
\begin{equation} \label{BernsteinPhAux1}
C_0:=\sup_k \|\nabla u_k\|_{L^\infty([0,t_0]\times\overline\Omega)}<\infty\quad\hbox{ and }\quad
\sup_k \|u_k\|_{C^{1,2}([\varepsilon,t_0]\times\overline\Omega)}<\infty.
\end{equation}
In particular, $M_0:=\sup_k \|\partial_t u_k(t_0)\|_{L^\infty(\Omega)}<\infty$ and,
by the maximum principle applied to $\partial_t u_k$, we deduce that
\begin{equation} \label{BernsteinPhAux2}
|\partial_t u_k|\le M_0\quad\hbox{ in $[t_0,\infty)\times\Omega$.}
\end{equation}
On the other hand, the nonlinearities $F=F_k$ satisfy the assumptions of Lemma~\ref{prelimprop2lem}
with $m=2$ and, by \rife{BernsteinPhAux1}, \rife{BernsteinPhAux2},
the solutions $v=u_k$ satisfy assumptions \rife{approxHyp4a0}, \rife{approxHyp4a1},
with constants $C_1,C_2,\xi_0,M_0,M_1$ independent of $k$.
Applying Lemma~\ref{prelimprop2lem}, we obtain an interior a priori estimate of the form
\begin{equation} \label{BernsteinPhAux3}
|\nabla u_k|\le C(\varepsilon)\quad\hbox{ in $[t_0,\infty)\times\Omega_\varepsilon$},
\end{equation}
for each $\varepsilon>0$, where $\Omega_\varepsilon=\{x\in\Omega;\, \delta(x)>\varepsilon\}$.
By \rife{BernsteinPhAux1}, \rife{BernsteinPhAux3},
parabolic estimates and a diagonal procedure, it follows that the sequence $u_k$ is
relatively compact in $C^{1,2}((\varepsilon,T)\times\Omega_\varepsilon)$ for each $\varepsilon, T>0$.
Consequently, using \rife{approxHyp1}, we deduce that $U\in C^{1,2}((0,\infty)\times\Omega)$,
that some subsequence of $u_k$ converges to $U$ in $C^{1,2}_{loc}((0,\infty)\times\Omega)$
and that $U$ is a classical solution of $U_t-\Delta U=|\nabla U|^p$ in $(0,\infty)\times\Omega$,
which also satisfies
\begin{equation} \label{BernsteinPhAux6}
U\in C^{1,2}((0,t_0]\times\overline\Omega),
\end{equation}
owing to \rife{BernsteinPhAux1}.
Moreover, the whole sequence $u_k$ actually converges due to the uniqueness of the possible (pointwise) limits.
{\bf Step 2.} {\it Continuity of $U$ up to the boundary.}
By \rife{BernsteinPhAux2}, we have
\begin{equation} \label{BernsteinPhAux4c}
|U_t|\le M_0 \quad\hbox{ in $[t_0,\infty)\times\Omega$.}
\end{equation}
Hence, for $t\geq t_0$, $U(t)$ satisfies $\Delta U + |\nabla U|^p \leq M_0$ in $\Omega$. By
\cite[Theorem 1.1]{CLP}, $U(t)$ can be extended in a continuous way up to the boundary and belongs to $C^{1-\alpha}(\overline \Omega)$, $\alpha=1/(p-1)$, with a uniform (in time) estimate
\begin{equation}\label{hold}
|U(t,x)-U(t,y)|\leq M \, |x-y|^{1-\alpha},\qquad t\geq t_0,\ x,y\in \overline \Omega,
\end{equation}
where $M$ only depends on $M_0, \|U\|_\infty, p, \Omega$. On the other hand, as a consequence of \rife{BernsteinPhAux4c} and since $U$ is smooth inside $\Omega$, we also have
\begin{equation} \label{lip-time}
\|U(\tau)-U(t)\|_{L^\infty(\Omega)}\leq M_0 |\tau-t|\,,\quad \tau, t\in [t_0, \infty).
\end{equation}
From \rife{hold} and \rife{lip-time}, one immediately deduces the continuity of $U$ up to the boundary for $t\geq t_0$, and therefore, since $U$ is classical in $[0,t_0]$, we have that $U\in C([0,\infty)\times\overline\Omega)$.
Moreover, since $U\ge e^{t\Delta}\phi$ in $[0,\infty)\times\Omega$ owing to \rife{BernsteinPhAux0a}, we also have
\begin{equation} \label{BernsteinPhAux6c}
U\ge 0 \ \hbox{ on $(0,\infty)\times\partial\Omega$.}
\end{equation}
{\bf Step 3.} {\it Uniqueness.}
We claim that $U$ is independent of the choice of the sequence $F_k$ verifying the assumptions of Theorem~\ref{prelimprop2}(i).
Let $V$ be the limit of the sequence $v_k$ associated with another sequence of nonlinearities $G_k$.
Since $\partial_t v_k-\Delta v_k=G_k(\nabla v_k)\le |\nabla v_k|^p$ and $v_k=0\le U$ on $\partial\Omega$ by~\rife{BernsteinPhAux6c},
the comparison principle in Proposition~\ref{compP} ensures that $v_k\le U$, hence $V\le U$.
Similarly, we obtain $U\le V$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{prelimprop2}(ii).]
Since $u_k=0\le u$ on $\partial\Omega$ and $u_k, u\in C([0,\infty)\times\overline\Omega)\cap C^{1,2}((0,\infty)\times\Omega)$,
the comparison principle in Proposition~\ref{compP}
ensures that $u_k\le u$, hence $U\le u$.
On the other hand, $U\in C([0,\infty)\times\overline\Omega)\cap C^{1,2}((0,\infty)\times\Omega)$ is a supersolution of \rife{VHJ} in the classical sense, hence a viscosity supersolution.
By the comparison principle for viscosity sub-/supersolutions (see \cite{BDL04}), we conclude that $U\ge u$.
\end{proof}
{\bf Acknowledgements.}
Part of this work was done during visits of Ph.S. at the Dipartimento di Matematica of Universit\`a di Roma Tor Vergata. He wishes to thank this institution for the kind hospitality.
Ph.S is partially supported by the Labex MME-DII (ANR11-LBX-0023-01). A.P. was partially supported by the Grant \lq\lq Consolidate the Foundations 2015 (IrDyCo)\rq\rq\ of University of Rome Tor Vergata and by INDAM - GNAMPA funds (2018).
\begin{thebibliography}{99}
\bibitem{A96} Alaa N.,
\textit{Weak solutions of quasilinear parabolic equations with measures as initial data},
Ann. Math. Blaise Pascal~\textbf{3} (1996), 1-15.
\bibitem{ABG89} Alikakos N.D., Bates P.W., Grant C.P.,
\textit{Blow up for a diffusion-advection equation},
Proc. Roy. Soc. Edinburgh Sect. A \textbf{113} (1989), 181-190.
\bibitem{An88}Angenent S.,
\textit{The zero set of a solution of a parabolic equation},
J. Reine Angew. Math. \textbf{390} (1988), 79-96.
\bibitem{ARS04} Arrieta J.M., Rodriguez-Bernal A., Souplet Ph.,
\textit{Boundedness of global solutions for nonlinear parabolic equations
involving gradient blow-up phenomena},
Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) \textbf{3} (2004), 1-15.
\bibitem{BC87} Baras P., Cohen L.,
\textit{Complete blow-up after $T_{max}$ for the solution of a semilinear heat equation},
J. Funct. Anal. \textbf{71} (1987), 142-174.
\bibitem{BDL04} Barles G., Da Lio F.,
\textit{On the generalized Dirichlet problem for viscous Hamilton-Jacobi equations},
J. Math. Pures Appl. \textbf{83} (2004), 53-75.
\bibitem{CLP} Capuzzo Dolcetta I., Leoni F., Porretta A.,
\textit{H\"older estimates for degenerate elliptic equations with coercive Hamiltonians},
Trans. Amer. Math. Soc. \textbf{362} (2010), 4511-4536.
\bibitem{CG96} Conner G.R., Grant C.P.,
\textit{Asymptotics of blowup for a convection-diffusion equation with conservation},
Differential Integral Equations \textbf{9} (1996), 719-728.
\bibitem{Est18} Esteve, C.,
\textit{Single-point gradient blow-up on the boundary for diffusive Hamilton-Jacobi equation in domains with non-constant curvature},
Preprint (2018).
\bibitem{FL94} Fila M., Lieberman G.M.,
\textit{Derivative blow-up and beyond for quasilinear parabolic equations},
Differential Integral Equations \textbf{7} (1994), 811-821.
\bibitem{FMP05} Fila M., Matano H., Pol\'a\v cik P.,
\textit{Immediate regularization after blow-up},
SIAM J. Math. Anal. \textbf{37} (2005), 752-776.
\bibitem{GV97}Galaktionov V.A., V\'azquez J.L.,
\textit{Continuation of blow-up solutions of nonlinear heat equations in several space dimensions},
Comm. Pure Appl. Math. \textbf{50} (1997), 1-67.
\bibitem{GH08} Guo J.-S., Hu B.,
\textit{Blowup rate estimates for the heat equation with a nonlinear gradient source term},
Discrete Contin. Dyn. Syst. \textbf{20} (2008), 927-937.
\bibitem{HM04} Hesaaraki M., Moameni A.,
\textit{Blow-up positive solutions for a family of nonlinear parabolic equations in general domain in $\mathbb{R}^N$},
Michigan Math. J. \textbf{52} (2004), 375-389.
\bibitem{LS} Li Y.-X., Souplet Ph.,
\textit{Single-point gradient blow-up on the boundary for diffusive Hamilton-Jacobi equations in planar domains},
Comm. Math. Phys. \textbf{293} (2009), 499-517.
\bibitem{Li85} Lions P.-L.,
\textit{Quelques remarques sur les probl\`emes elliptiques quasilin\'eaires du second ordre},
J. Anal. Math. \textbf{45} (1985), 234-254.
\bibitem{Matano} Matano H.,
\textit{Nonincrease of the lap-number of a solution for a one-dimensional semilinear parabolic equation},
J. Fac. Sci. Univ. Tokyo Sect. IA Math. 29 (1982), 401-441.
\bibitem{MM09} Matano H., Merle F.,
\textit{Classification of type I and type II behaviors for a supercritical nonlinear heat equation},
J. Funct. Anal. \textbf{256} (2009), 992-1064.
\bibitem{PS} Porretta A., Souplet Ph.,
\textit{The profile of boundary gradient blow-up for the diffusive Hamilton-Jacobi equation},
International Math. Research Notices
\textbf{17} (2017), 5260-5301.
\bibitem{PS2} Porretta A., Souplet Ph.,
\textit{Analysis of the loss of boundary conditions for the diffusive Hamilton-Jacobi equation},
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire
\textbf{34} (2017), 1913-1923.
\bibitem{PZ} Porretta A., Zuazua E.,
\textit{ Null controllability of viscous Hamilton-Jacobi equations},
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{29} (2012), 301-333.
\bibitem{QR16} Quaas A., Rodr\'\i guez A.,
\textit{Loss of boundary conditions for fully nonlinear parabolic equations with superquadratic gradient terms},
J. Differential Equations \textbf{264} (2018), 2897-2935.
\bibitem{QS07} Quittner P., Souplet Ph.,
Superlinear parabolic problems. Blow-up, global existence and steady states,
Birkh\"{a}user Advanced Texts: Basel Textbooks, Birkh\"{a}user Verlag, Basel, 2007.
\bibitem{S02} Souplet Ph.,
\textit{Gradient blow-up for multidimensional nonlinear parabolic equations
with general boundary conditions},
Differential Integral Equations \textbf{15} (2002), 237-256.
\bibitem{SZ06} Souplet Ph., Zhang Q.S.,
\textit{Global solutions of inhomogeneous Hamilton-Jacobi equations},
J. Anal. Math. \textbf{99} (2006), 355-396.
\bibitem{ZL13} Zhang Z.-C., Li Z.,
\textit{A note on gradient blowup rate of the inhomogeneous Hamilton-Jacobi equations},
Acta Math. Sci. Ser. B (Engl. Ed.) \textbf{33} (2013), 678-686.
\end{thebibliography}
\end{document}
\begin{document}
\sloppy
\title{On Supervised On-line Rolling-Horizon Control for Infinite-Horizon Discounted Markov Decision Processes}
\author{Hyeong Soo Chang
\thanks{H.S. Chang is with the Department of Computer Science and Engineering at Sogang University, Seoul 121-742, Korea. (e-mail:[email protected]).}
}
\maketitle
\begin{abstract}
This note re-visits the rolling-horizon control approach
to the problem of
a Markov decision process (MDP) with infinite-horizon discounted expected reward criterion.
Distinguished from the classical value-iteration approach,
we develop an asynchronous on-line algorithm based on policy iteration integrated with
a multi-policy improvement method of policy switching.
A sequence of monotonically improving solutions to the forecast-horizon sub-MDP
is generated by updating the current solution only at the currently visited state, building in effect a rolling-horizon control policy for the MDP
over infinite horizon.
Feedback from ``supervisors,'' if available, can also be
incorporated while updating.
We focus on the convergence issue with a relation to the transition
structure of the MDP.
Either a global convergence to an optimal forecast-horizon policy
or a local convergence to a ``locally-optimal'' fixed-policy in a finite time
is achieved by the algorithm depending on the structure.
\end{abstract}
\begin{keywords}
rolling horizon control, policy iteration, policy switching, Markov decision process
\end{keywords}
\section{Introduction}
Consider the rolling horizon control (see, e.g.,~\cite{her}) with a fixed finite forecast-horizon $H$ to
the problem of a Markov decision process (MDP) $M_{\infty}$ with infinite-horizon discounted expected
reward criterion.
At discrete time $k\geq 1$, the system is at a state $x_k$ in a finite state set $X$.
If the controller of the system takes an action $a$ in $A(x_k)$ at $k$, then it obtains a reward of $R(x_{k},a)$ from
a reward function $R:X\times A\rightarrow \Re$, where $A(x)$ denotes an admissible-action set of $x$ in $X$. The system then makes a random transition to a next state $x_{k+1}$
by the probability of $P_{x_{k}x_{k+1}}^{a}$ specified in an $|X|\times |X|$-matrix $P^a=[P_{xy}^a], x,y\in X$.
Let $B(X)$ be the set of all possible functions from $X$ to $\Re$. The zero function in $B(X)$ is
referred to as $0_X$ where $0_X(x)=0$ for all $x\in X$.
Let also $\Pi(X)$ be the set of all possible mappings from $X$ to $A$ where for any $\pi \in \Pi(X)$, $\pi(x)$ is constrained to be in $A(x)$ for all $x\in X$.
Let an $h$-\emph{length} \emph{policy} of the controller be an element in $\Pi(X)^h$, $h$-ary Cartesian product, $h\geq 1$. That is, $\pi \in \Pi(X)^h$ is an ordered tuple $(\pi_{1},...,\pi_{h})$ where the $m$th entry of $\pi$ is equal to $\pi_m \in \Pi(X), m \geq 1,$ and
when $\pi_m$ is applied at $x$ in $X$, then the controller is supposed to look ahead of (or forecasts) the remaining horizon $h-m$ for control.
An infinite-length policy is an element in the infinite Cartesian product of $\Pi(X)$, denoted by
$\Pi(X)^{\infty}$, and referred to as just a policy.
We say that a policy $\phi \in \Pi(X)^{\infty}$ is \emph{stationary} if $\phi_m=\pi$ for all $m\geq 1$ for some $\pi\in \Pi(X)$. Given $\pi\in \Pi(X)$, $[\pi]$ denotes a stationary policy in $\Pi(X)^{\infty}$ constructed from $\pi$ such that $[\pi]_m=\pi$ for all $m\geq 1$.
Define the $h$-horizon value function $V^{\pi}_h$ of $\pi$ in $\Pi(X)^H, H \geq h$ such that
\[
V^{\pi}_h(x) = E\left [\sum_{l=1}^{h} \gamma^{l-1} R(X_{l},\pi_{H-h+l}(X_l)) \biggl | X_1=x \right ], x\in X, h \in \{1,...,H\},
\] where $X_l$ is a random variable that denotes a state at the level (of forecast) $l$ reached by
following the entry mapping $\pi_{H-h+l}$ of $\pi$ (so that $V^{\pi}_h$ is the value of the last $h$ entries of $\pi$, which is the convention used in the recursions below) and a fixed discounting factor $\gamma$ is in $(0,1)$.
In the sequel, any operator is applied componentwise for the elements in $B(X)$ and in $\Pi(X)$, respectively. Given $\pi\in \Pi(X)^h$ and $x\in X$, $\pi(x)$ is set to be $(\pi_1(x),...,\pi_h(x))$ meaning the ``$x$-coordinate'' of $\pi$ here.
As is well known then,
there exists an optimal $h$-length policy $\pi^*(h)$ such that for any $x\in X$,
$V^{\pi^*(h)}_h(x) =\max_{\pi\in \Pi(X)^h} V^{\pi}_h(x) = V^*_h(x)$, where
$V^*_h$ is referred to as the optimal $h$-horizon value function.
In particular, $\{V^*_h, h\geq 1\}$ satisfies the optimality principle:
\[
V^*_h(x) = \max_{a\in A(x)} \biggl ( R(x,a) + \gamma \sum_{y\in X} P_{xy}^a V^{*}_{h-1}(y) \biggr ), x\in X,
\] for any fixed $V^*_0$ in $B(X)$.
Furthermore, $V^*_h$ is equal to the function in $B(X)$ obtained after applying the value iteration (VI) operator $T:B(X)\rightarrow B(X)$ iteratively $h$ times
with the initial function of $V^*_0$: $T(...T(T(T(V^*_0)))) = T^h(V^*_0) = V^*_h$, where
\[T(u)(x) = \max_{a\in A(x)} ( R(x,a) + \gamma \sum_{y\in X} P_{xy}^a u(y) ), x\in X, u\in B(X).
\]
This optimal substructure leads to a dynamic programming algorithm, backward induction, which computes $\{V^*_h\}$ in \emph{off-line} and returns an optimal $H$-horizon policy $\pi^*(H)$ that achieves the optimal value at any $x\in X$ for the $H$-horizon sub-MDP $M_H$ of $M_{\infty}$ by
\[\pi^*(H)_{H-h+1}(x) \in \mathop{\mbox{\rm arg\,max}}_{a\in A(x)} \left ( R(x,a) + \gamma \sum_{y\in X} P_{xy}^a V^{*}_{h-1}(y) \right ), x\in X, h \in \{1,...,H\}.
\]
Once obtained, the rolling $H$-horizon controller employs the first entry $\pi^*(H)_1$ of $\pi^*(H)$ or a stationary policy $[\pi^*(H)_1]$ over the system time: At each $k\geq 1$, $\pi^*(H)_1(x_k)$ is taken at $x_k$.
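For concreteness, the following minimal sketch (not part of the original development; the data structures $X$, $A$, $R$, $P$ and the discount factor are assumed, hypothetical representations) illustrates the off-line backward-induction computation of $\pi^*(H)$ and the resulting rolling $H$-horizon controller.
\begin{verbatim}
# Minimal sketch (illustrative only): backward induction for the H-horizon
# sub-MDP M_H and the rolling H-horizon controller that always applies the
# first entry pi*(H)_1.  Assumed data structures: X = range(nX), A[x] a list
# of admissible actions, R[x][a] a reward, P[a][x][y] a transition probability.

def backward_induction(X, A, R, P, gamma, H, V0=None):
    nX = len(X)
    V = [0.0] * nX if V0 is None else list(V0)   # V^*_0
    policy = [None] * H                          # policy[m-1] = pi*(H)_m
    for h in range(1, H + 1):                    # horizons 1,...,H
        Vnew, pi_h = [0.0] * nX, [None] * nX
        for x in X:
            best_a, best_q = None, float("-inf")
            for a in A[x]:
                q = R[x][a] + gamma * sum(P[a][x][y] * V[y] for y in X)
                if q > best_q:
                    best_a, best_q = a, q
            Vnew[x], pi_h[x] = best_q, best_a
        V = Vnew                                 # V = V^*_h
        policy[H - h] = pi_h                     # entry pi*(H)_{H-h+1}
    return policy, V                             # (pi*(H), V^*_H)

def rolling_horizon_action(policy, x_k):
    # the rolling H-horizon controller only ever uses pi*(H)_1
    return policy[0][x_k]
\end{verbatim}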
Given $\pi\in \Pi(X)^{\infty}$,
$V^{\pi}_{\infty}$ denotes the value function of $\pi$ over \emph{infinite horizon} and it is obtained by
letting $h$ approach infinity in $E[\sum_{l=1}^{h} \gamma^{l-1} R(X_{l},\pi_{l}(X_l))|X_1=x]$.
The optimal value function $V^*_{\infty}$ of $M_{\infty}$ is then defined such that $V^*_{\infty}(x) = \sup_{\pi\in \Pi(X)^{\infty}} V^{\pi}_{\infty}(x), x\in X$.
A performance result of $[\pi^*(H)_1]$ applied to $M_{\infty}$ (see, e.g.,~\cite{her}) is that
the infinity-norm of the difference between the value function of $[\pi^*(H)_1]$ and $V^*_{\infty}$ is upper bounded by (an error of) $O(\gamma^H ||V^*_{\infty} - V^*_0||_{\infty})$.
The term $||V^*_{\infty} - V^*_0||_{\infty}$ can be loosely upper bounded by $C/(1-\gamma)$ with some constant $C$. Due to this dependence on $(1-\gamma)^{-1}$, the performance bound deteriorates as $\gamma$ approaches one.
How to set $V^*_0$ is a critical issue in the rolling horizon control
even if the error vanishes to zero exponentially fast in $H$ with the rate of $\gamma$.
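As a rough numerical illustration (using only the loose bound $||V^*_{\infty} - V^*_0||_{\infty}\le C/(1-\gamma)$ recalled above, so the figures below are indicative rather than sharp), requiring the error term to satisfy $\gamma^H\,C/(1-\gamma)\le \epsilon$ amounts to
\[
H \ \ge\ \frac{\ln C + \ln (1/\epsilon) + \ln\bigl(1/(1-\gamma)\bigr)}{\ln(1/\gamma)},
\]
and for $\gamma=0.95$ the factor $1/\ln(1/\gamma)\approx 19.5$ multiplies all three logarithmic terms, so the required forecast horizon grows rapidly as $\gamma$ approaches one.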
This note develops an algorithm for \emph{on-line} rolling $H$-horizon control. The sub-MDP $M_H$ is \emph{not} solved in advance.
Rather with an arbitrarily selected $\pi_1(H)\in \Pi(X)^H$ for $M_H$, the algorithm generates \emph{a monotonically improving sequence of} $\{\pi_{k}(H)\}$ over time $k\geq 1$. To the algorithm, only $\pi_{k-1}(H)$ is available at $k>1$ and it updates
$\pi_{k-1}(H)$ \emph{only at} $x_k$ to obtain $\pi_k(H)$.
Either we have that $\pi_{k}(H)=\pi_{k-1}(H)$ or $\pi_k(H)(x)=\pi_{k-1}(H)(x)$ for all $x\in X\setminus\{x_k\}$ but $\pi_k(H)(x_k)\neq \pi_{k-1}(H)(x_k)$.
The algorithm has design flexibility in that feedback on the action to be used at $x_k$
by some ``supervisor'' can be incorporated while generating $\pi_k(H)$.
By setting $\phi_k = \pi_k(H)_1$ at each $k\geq 1$, a policy $\phi \in \Pi(X)^{\infty}$
is in effect built sequentially for the controller.
Once $\phi_k$ is available to the controller, it takes $\phi_{k}(x_k)$ to the system and the underlying system of $M_{\infty}$ moves to a next random state $x_{k+1}$ by the probability of $P^{\phi_k(x_k)}_{x_k x_{k+1}}$.
The behavior of such a control policy is discussed by focusing on the convergence issue with
a relation to
the transition structure of $M_{\infty}$.
We are concerned with a question about the \emph{existence of a finite time} $K<\infty$ such that
$\phi_{k}=\pi^*(H)_1$ for all $k > K$ for the infinite sequence $\{\phi_k\}$.
\section{Off-line Synchronous Policy Iteration with Policy Switching}
\label{sec:off-line}
To start with, we present an algorithm of \emph{off-line synchronous} policy iteration (PI) combined with
a multi-policy improvement method of policy switching for solving $M_H$. In what follows, we assume that $V^{\pi}_0 = 0_X$ for any $\pi\in \Pi(X)^H$ for simplicity. (The topic about how to set $V^{\pi}_0$ is beyond the scope of this note.)
\begin{thm}[Theorem 2~\cite{changps}]
Given a nonempty $\Delta \subseteq \Pi(X)^H$, construct policy switching with $\Delta$ in $\Pi(X)^H$ as $\pi_{\mathrm{ps}}(\Delta)$ such that for each possible pair of $x\in X$ and $h \in \{1,...,H\}$,
\[ \pi_{\mathrm{ps}}(\Delta)_h(x) = \phi^*_h(x) \mbox{ where } \phi^* \in \mathop{\mbox{\rm arg\,max}}_{\phi\in\Delta} V^{\phi}_{H-h+1}(x).
\]
Then $V^{\pi_{\mathrm{ps}}(\Delta)}_H\geq V^{\phi}_H$ for all $\phi \in \Delta$.
\end{thm}
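A minimal computational sketch of this construction is given below (illustrative only; the data structures follow the hypothetical representation used in the earlier sketch, a policy being stored as a list of $H$ state-to-action mappings, and the $h$-horizon values being computed by the tail recursion $V^{\phi}_h(x)=R(x,\phi_{H-h+1}(x))+\gamma\sum_{y\in X} P^{\phi_{H-h+1}(x)}_{xy}V^{\phi}_{h-1}(y)$).
\begin{verbatim}
# Minimal sketch (illustrative only): policy evaluation over horizons 0..H and
# policy switching over a set Delta of H-length policies.  A policy phi is a
# list of H mappings, phi[m-1] = phi_m, each a list indexed by state.

def eval_policy(phi, X, R, P, gamma, H, V0=None):
    V = [[0.0] * len(X) for _ in range(H + 1)]   # V[h][x] = V^phi_h(x)
    if V0 is not None:
        V[0] = list(V0)
    for h in range(1, H + 1):
        a_of = phi[H - h]                        # entry phi_{H-h+1}
        for x in X:
            a = a_of[x]
            V[h][x] = R[x][a] + gamma * sum(P[a][x][y] * V[h - 1][y] for y in X)
    return V

def policy_switching(Delta, X, R, P, gamma, H):
    vals = [eval_policy(phi, X, R, P, gamma, H) for phi in Delta]
    pi_ps = [[None] * len(X) for _ in range(H)]
    for h in range(1, H + 1):                    # entry index h of pi_ps
        for x in X:
            # pick the policy with the largest (H-h+1)-horizon value at x
            best = max(range(len(Delta)), key=lambda i: vals[i][H - h + 1][x])
            pi_ps[h - 1][x] = Delta[best][h - 1][x]
    return pi_ps
\end{verbatim}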
Given $\tilde{\pi}$ and $\pi$ in $\Pi(X)^H$,
we say that $\tilde{\pi}$ \emph{strictly improves} $\pi$ (over the horizon $H$)
if there exists some $s\in X$ such that $V^{\tilde{\pi}}_H(s) > V^{\pi}_H(s)$ and $V^{\tilde{\pi}}_H\geq V^{\pi}_H$
in which case we write as $\tilde{\pi} >_H \pi$.
Define \emph{switchable action set} $S^{\pi}_h(x)$ of $\pi\in \Pi(X)^H$ at $x\in X$ for $h\in \{1,...,H\}$ as
\[
S^{\pi}_h(x) = \biggl \{ a \in A(x) \biggl | R(x,a) + \gamma \sum_{y\in X} P^{a}_{xy}V^{\pi}_{h-1}(y) > V^{\pi}_h(x) \biggr \}
\] and also \emph{improvable-state set of} $\pi$ for $h$ as
\[I^{\pi}_h = \{ (h,x) | S^{\pi}_h(x) \neq \emptyset, x\in X \}.
\]
Set $I^{\pi,H} = \bigcup_{h=1}^{H} I^{\pi}_h$.
Observe that if $I^{\pi,H} = \emptyset$ for $\pi\in \Pi(X)^H$, then $\pi$ is an optimal $H$-length policy for $M_H$.
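The sets $S^{\pi}_h(x)$ and $I^{\pi,H}$ can be computed directly from the value functions of $\pi$; a minimal sketch (illustrative only, reusing \texttt{eval\_policy} from the previous sketch) is as follows.
\begin{verbatim}
# Minimal sketch (illustrative only): switchable action sets S^pi_h(x) and the
# improvable-state set I^{pi,H}.  V_pi = eval_policy(pi, X, R, P, gamma, H),
# so that V_pi[h][x] = V^pi_h(x).

def switchable_actions(V_pi, x, h, X, A, R, P, gamma):
    q = lambda a: R[x][a] + gamma * sum(P[a][x][y] * V_pi[h - 1][y] for y in X)
    return [a for a in A[x] if q(a) > V_pi[h][x]]

def improvable_states(V_pi, X, A, R, P, gamma, H):
    return {(h, x) for h in range(1, H + 1) for x in X
            if switchable_actions(V_pi, x, h, X, A, R, P, gamma)}
\end{verbatim}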
The following theorem provides a result for $M_H$ in analogy with the key step of the single-policy improvement (see, e.g.,~\cite{puterman})
in PI for $M_{\infty}$.
Because Banach's fixed-point theorem is difficult to invoke in the finite-horizon case, unlike in the standard proof for the infinite-horizon case, we provide a proof for completeness.
\begin{thm}
\label{thm:imp}
Given $\pi \in \Pi(X)^H$ with $I^{\pi,H} \neq \emptyset$, construct $\tilde{\pi}\in \Pi(X)^H$
with any $I$ satisfying $\emptyset \subsetneq I \subseteq I^{\pi,H}$
such that $\tilde{\pi}_{H-h+1}(x) \in S^{\pi}_h(x)$ for all $(h,x)\in I$ and $\tilde{\pi}_{H-h+1}(x)=\pi_{H-h+1}(x)$ for all $(h,x) \in (\{1,...,H\} \times X)\setminus I$. Then $\tilde{\pi} >_H \pi$.
\end{thm}
\proof
The base case holds because $V^{\tilde{\pi}}_0 = V^{\pi}_0$.
For the induction step, assume that $V^{\tilde{\pi}}_{h-1} \geq V^{\pi}_{h-1}$.
For all $x$ such that $\tilde{\pi}_{H-h+1}(x)=\pi_{H-h+1}(x)$,
\begin{eqnarray*}
\lefteqn{V^{\tilde{\pi}}_h(x) = R(x,\tilde{\pi}_{H-h+1}(x)) + \gamma \sum_{y\in X} P^{\tilde{\pi}_{H-h+1}(x)}_{xy}V^{\tilde{\pi}}_{h-1}(y)} \\
& & \geq R(x,\pi_{H-h+1}(x)) + \gamma \sum_{y\in X} P^{\pi_{H-h+1}(x)}_{xy}V^{\pi}_{h-1}(y) = V^{\pi}_h(x).
\end{eqnarray*}
On the other hand, for any $x\in X$ such that $\tilde{\pi}_{H-h+1}(x) \in S^{\pi}_h(x)$,
\begin{eqnarray*}
\lefteqn{V^{\tilde{\pi}}_h(x) = R(x,\tilde{\pi}_{H-h+1}(x)) + \gamma \sum_{y\in X} P^{\tilde{\pi}_{H-h+1}(x)}_{xy}V^{\tilde{\pi}}_{h-1}(y)} \\
& & \geq R(x,\tilde{\pi}_{H-h+1}(x)) + \gamma \sum_{y\in X} P^{\tilde{\pi}_{H-h+1}(x)}_{xy}V^{\pi}_{h-1}(y)
\mbox{ by induction hypothesis } V^{\tilde{\pi}}_{h-1} \geq V^{\pi}_{h-1} \\
& & > R(x,\pi_{H-h+1}(x)) + \gamma \sum_{y\in X} P^{\pi_{H-h+1}(x)}_{xy}V^{\pi}_{h-1}(y) = V^{\pi}_h(x)
\mbox{ because } \tilde{\pi}_{H-h+1}(x)\in S^{\pi}_h(x).
\end{eqnarray*}
Putting the two cases together makes $V^{\tilde{\pi}}_{h} \geq V^{\pi}_{h}$. In particular, we see that there must exist some $y\in X$ such that $\tilde{\pi}_{H-h+1}(y) \in S^{\pi}_h(y)$ because of our choice of $I$, having $V^{\tilde{\pi}}_h(y) > V^{\pi}_h(y)$.
By an induction argument on the level from $h$ then, it follows that $V^{\tilde{\pi}}_{H}(y) > V^{\pi}_{H}(y)$.
\endproof
The previous theorem states that if a policy was generated from a given $\pi$ by switching
the action prescribed by $\pi$ at each improvable state with its corresponding level in a
particularly chosen nonempty subset of the improvable-state set of $\pi$, then
the policy constructed strictly improves $\pi$ over the relevant finite horizon.
However, in general, even if $\pi >_H \phi$ is known, for $\pi'$ and $\phi'$ obtained from $\pi$ and $\phi$, respectively, by the method
of Theorem~\ref{thm:imp}, $\pi' >_H \phi'$ does \emph{not} necessarily hold.
(Note that this is also true for the infinite-horizon case.)
It can be merely said that $\pi'$ improves $\pi$ and $\phi'$ improves $\phi$, respectively.
Motivated by this, we consider the following. For a given $\pi$ in $\Pi(X)^{H}$, let $\beta^{\pi,H}$
be the set of \emph{all strictly better policies than} $\pi$ \emph{obtainable from} $I^{\pi,H}$:
If $I^{\pi,H} = \emptyset$, $\beta^{\pi,H} = \emptyset$. Otherwise,
\begin{eqnarray*}
\lefteqn{\beta^{\pi,H} = \bigl \{\tilde{\pi}\in \Pi(X)^H | I \in 2^{I^{\pi,H}}\setminus \{\emptyset\},
\forall (h,x)\in I \mbox{ } \tilde{\pi}_{H-h+1}(x) \in S^{\pi}_h(x)} \\
& & \hspace{3cm} \mbox{ and } \forall (h,x)\in (\{1,...,H\} \times X) \setminus I \mbox{ } \tilde{\pi}_{H-h+1}(x)=\pi_{H-h+1}(x) \bigr \}.
\end{eqnarray*} Once obtained, policy switching with respect to $\beta^{\pi,H}$ is applied
to find a \emph{single} policy no worse than all policies in the set.
We are now ready to derive an off-line synchronous algorithm, ``policy iteration with policy switching'' (PIPS), that generates a sequence of $H$-length policies for solving $M_H$: Set $\pi_1 \in \Pi(X)^H$ arbitrarily.
Loop with $n \in \{1,2,...\}$ until $I^{\pi_{n},H} = \emptyset$
where $\pi_{n+1} = \pi_{\mathrm{ps}}(\beta^{\pi_n,H})$.
The convergence to an optimal $H$-length policy for $M_H$ is trivially guaranteed because $\pi_{n+1} >_H \pi_n, n\geq 1$, and both $X$ and $A$ are finite.
Note that $\beta^{\pi_n,H}$ in $\pi_{\mathrm{ps}}(\beta^{\pi_n,H})$ can be substituted with any $\Delta_n\subseteq \Pi(X)^H$ as long as $\Delta_n \cap \beta^{\pi_n,H} \neq \emptyset$.
The generality of $\{\Delta_n\}$ then provides a broad design-flexibility of PIPS.
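As an illustration of one valid instantiation (by no means the only one), the sketch below runs synchronous PIPS with $\Delta_n$ consisting of the greedy member of $\beta^{\pi_n,H}$ (switch to an argmax action at every improvable $(h,x)$ and keep $\pi_n$ elsewhere), any externally supplied policies, and $\pi_n$ itself; enumerating all of $\beta^{\pi_n,H}$ would also be possible but is exponentially large.
\begin{verbatim}
# Minimal sketch (illustrative only): synchronous PIPS with Delta_n containing
# the greedy member of beta^{pi_n,H}, optional extra policies, and pi_n itself.
# Reuses eval_policy and policy_switching from the earlier sketches.

def greedy_member_of_beta(pi, V, X, A, R, P, gamma, H):
    tilde = [row[:] for row in pi]
    improved = False
    for h in range(1, H + 1):
        for x in X:
            q = {a: R[x][a] + gamma * sum(P[a][x][y] * V[h - 1][y] for y in X)
                 for a in A[x]}
            a_star = max(q, key=q.get)
            if q[a_star] > V[h][x]:              # (h, x) is improvable
                tilde[H - h][x] = a_star         # switch entry pi_{H-h+1} at x
                improved = True
    return tilde, improved

def synchronous_pips(pi1, X, A, R, P, gamma, H, extra_policies=()):
    pi = pi1
    while True:
        V = eval_policy(pi, X, R, P, gamma, H)
        tilde, improved = greedy_member_of_beta(pi, V, X, A, R, P, gamma, H)
        if not improved:                         # I^{pi,H} empty: pi optimal
            return pi
        pi = policy_switching([tilde, *extra_policies, pi], X, R, P, gamma, H)
\end{verbatim}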
The idea behind policy switching used in PIPS with $\beta^{\pi_n,H}$ can be attributed to
approximating the steepest ascent direction
while applying the steepest ascent algorithm. At the current location $\pi_n$,
we find ascent ``directions" relative to $\pi_n$ over the \emph{local neighborhood of} $\beta^{\pi_n,H}$.
A steepest ascent direction, $\pi_{\mathrm{ps}}(\beta^{\pi_n,H})$, is then obtained by ``combining" all
of the possible ascent directions. In particular, the \emph{greedy} ascent direction $\phi$ that satisfies that
$T(V^{\pi_n}_{H-h})(x) = R(x,\phi_h(x)) + \gamma \sum_{y\in X} P_{xy}^{\phi_h(x)} V^{\pi_n}_{H-h}(y)$
for all $x\in X$ and $h\in \{1,...,H\}$ is always included while combining.
\section{Off-line Asynchronous PIPS}
\label{sec:on-line}
An \emph{asynchronous} version can be inferred from the synchronous PIPS by the
following improvement result
when a single $H$-length policy in $\Pi(X)^H$ is updated only at a \emph{single state}:
\begin{cor}
Given $x\in X$ and $\pi\in \Pi(X)^H$, let $I^{\pi,H}_x = \{(h,x)\in I^{\pi,H} | h\in \{1,...,H\}\}$.
Suppose that $I^{\pi,H}_x\neq \emptyset$. Then for any $\phi \in \beta^{\pi,H}_{x}$,
$\phi >_H \pi$ where
\begin{eqnarray*}
\lefteqn{\hspace{-1cm}\beta^{\pi,H}_x = \bigl \{\tilde{\pi}\in \Pi(X)^H | I \in 2^{I^{\pi,H}_x}\setminus \{\emptyset\},
\forall h \in \{1,...,H\} \mbox{ such that } (h,x)\in I \mbox{ } \tilde{\pi}_{H-h+1}(x)} \\
& & \in S^{\pi}_{h}(x) \mbox{ and } \forall (h,x')\in (\{1,...,H\} \times X) \setminus I \mbox{ } \tilde{\pi}_{H-h+1}(x')=\pi_{H-h+1}(x') \bigr \}.
\end{eqnarray*}
\end{cor}
\proof
Because $\emptyset \neq I^{\pi,H}_x \subseteq I^{\pi,H}$, $\emptyset \neq \beta^{\pi,H}_x \subseteq \beta^{\pi,H}$.
\endproof
The set $\beta^{\pi,H}_x$ ``projected to the $x$-coordinate direction" from $\beta^{\pi,H}$
contains all strictly better policies than $\pi$
that can be obtained by switching the action(s) prescribed by $\pi$ at the single state $x$.
This result leads to an off-line convergent asynchronous PIPS for $M_H$:
Select $\pi_1\in \Pi(X)^H$ arbitrarily.
Loop with $n \in \{1,2,...\}$:
If $I^{\pi_n,H} \neq \emptyset$,
select $x_n \in I^{\pi_n,H}$ and
construct $\pi_{n+1}$ such that $\pi_{n+1}(x_n) = \pi_{\mathrm{ps}}(\beta^{\pi_n,H}_{x_n})(x_n)$ and $\pi_{n+1}(x)=\pi_{n}(x)$ for all $x$ in $X\setminus\{x_n\}$. If $I^{\pi_n,H} = \emptyset$, stop.
Because $x_n$ is always selected to be an improvable-state in $I^{\pi_n,H}\neq \emptyset$, $\pi_{n+1} >_H \pi_n$ for all $n\geq 1$. Therefore, $\{\pi_n\}$ converges to an optimal $H$-length policy for $M_H$.
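A minimal sketch of the single-state update used by the asynchronous versions is given below (illustrative only; again only the greedy member of $\beta^{\pi,H}_{x}$ is used, together with any extra policies such as supervisor feedback, and the result is kept only at the $x$-coordinate); looping it over states selected from $I^{\pi_n,H}$ reproduces the off-line asynchronous PIPS just described.
\begin{verbatim}
# Minimal sketch (illustrative only): single-state asynchronous PIPS update.
# Returns the updated policy and whether state x was improvable.  Reuses
# eval_policy and policy_switching from the earlier sketches.

def update_at_state(pi, x, X, A, R, P, gamma, H, extra_policies=()):
    V = eval_policy(pi, X, R, P, gamma, H)
    candidate = [row[:] for row in pi]
    improvable = False
    for h in range(1, H + 1):
        q = {a: R[x][a] + gamma * sum(P[a][x][y] * V[h - 1][y] for y in X)
             for a in A[x]}
        a_star = max(q, key=q.get)
        if q[a_star] > V[h][x]:                  # (h, x) belongs to I^{pi,H}_x
            candidate[H - h][x] = a_star         # switch entry pi_{H-h+1} at x
            improvable = True
    if not improvable:                           # beta^{pi,H}_x is empty
        return pi, False
    switched = policy_switching([candidate, *extra_policies], X, R, P, gamma, H)
    new_pi = [row[:] for row in pi]
    for m in range(H):
        new_pi[m][x] = switched[m][x]            # update only the x-coordinate
    return new_pi, True
\end{verbatim}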
Suppose now that the state given at the current step of the previous algorithm is \emph{not}
guaranteed to be in the improvable-state set of the current policy.
Such a scenario is accommodated by the following modified version:
Select $\pi_1\in \Pi(X)^H$ arbitrarily.
Loop with $n\in \{1,2,...\}$:
If $I^{\pi_n,H}=\emptyset$, stop. Select $x_n \in X$. If $x_n \in I^{\pi_n,H}$, then
construct $\pi_{n+1}$ such that $\pi_{n+1}(x_n) = \pi_{\mathrm{ps}}(\beta^{\pi_n,H}_{x_n})(x_n)$ and $\pi_{n+1}(x)=\pi_{n}(x)$ for all $x$ in $X\setminus\{x_n\}$. If $x_n \notin I^{\pi_n,H}$, $\pi_{n+1} = \pi_n$.
Unlike the previous version, this algorithm's convergence
\emph{depends on the sequence $\{x_n\}$ selected}.
Even though $\pi_{n+1} >_H \pi_n$ whenever $\pi_{n+1}\neq \pi_n$,
the stopping condition that checks for optimality may never be satisfied. In other words,
an infinite loop is possible.
The immediate problem is then how to choose an update-state sequence to achieve
a global convergence.
The reason for bringing this issue up with the modified algorithm is that the situation is
closely related to the on-line algorithm to be discussed in the next section. Dealing with this issue
here helps in understanding the convergence behavior of the on-line algorithm.
We discuss a pedagogical example of choosing an update-state sequence for the modified
off-line algorithm below.
One way of \emph{enforcing} a global convergence is to ``embed" backward induction into the update-state sequence.
For example, we generate a sequence of
$\{x_n\}=\{x_0,...,x_{n_1},...,x_{n_2},...,x_{n_h},....,x_{n_H},...\}$ whose
subsequence $\{x_{n_h},h=1,...,H\}$ produces $\{\pi^{n_h}\}$ that
solves $M_h$.
We need to follow the optimality principle such that $M_{h-1}$ is solved \emph{before} $M_h$, and so forth, until $M_H$ is finally solved. Therefore, the entries of $\pi^*(H)$ are searched from $\pi^*(H)_H$ to $\pi^*(H)_1$
over $\{x_n\}$ such that
$\pi^{n_1} = (\pi^*(H)_H, \pi^{n_1}_2,...,\pi^{n_1}_{H-1},\pi^{n_1}_H)$ where $V^{*}_{1} = V^{\pi^{n_1}}_1$, and then
$\pi^{n_2} = (\pi^*(H)_{H-1}, \pi^*(H)_H,\pi^{n_2}_3,...,\pi^{n_2}_H)$ where $V^{*}_{2} = V^{\pi^{n_2}}_2$,
$\pi^{n_h} = (\pi^*(H)_{H-h+1},...,\pi^*(H)_H,...,\pi^{n_h}_H)$ where $V^{*}_{h} = V^{\pi^{n_h}}_h$,..., and then finally,
$\pi^{n_H} = (\pi^*(H)_1,...,\pi^*(H)_{H-1},\pi^*(H)_H)$ with $V^{*}_{H} = V^{\pi^{n_H}}_H$.
Once $M_{h-1}$ has been solved, an optimal $h$-length policy $\pi^{n_h}$ for $M_h$ can be found exhaustively.
The corresponding update-state subsequence from $x_{n_{h-1}+1}$ to $x_{n_h}$ can be any permutation of the states in $X$. Visiting each $x$ in $X$ \emph{at least once} for updating causes an optimal $h$-length policy for $M_h$ to be found
because if not empty, $\beta^{\pi^{m},H}_{x}$, where $m\in \{n_{h-1}+1,...,n_h\}$, includes an $H$-length policy
whose first entry mapping maps $x$ to an action in
$\mathop{\mbox{\rm arg\,max}}_{a\in A(x)} ( R(x,a) + \gamma \sum_{y\in X} P_{xy}^a V^{*}_{h-1}(y) )$.
Even though visiting each state at least once makes the approach enumerative,
our point is showing that there \emph{exists} an update-state sequence that makes a global convergence
possible.
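A minimal sketch of such an update-state sequence (illustrative only) is simply $H$ consecutive sweeps over all states, one sweep per forecast level; any per-sweep permutation of $X$ works.
\begin{verbatim}
# Minimal sketch (illustrative only): an update-state sequence that embeds
# backward induction, namely H consecutive sweeps over all states.
def embedded_update_state_sequence(X, H):
    return [x for _ in range(H) for x in X]
\end{verbatim}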
\section{On-line Asynchronous PIPS}
We now come to the main subject of this note: solving
$M_{\infty}$ by an on-line rolling $H$-horizon control in which $M_H$ is solved not in advance but over time.
We assume that a sequence of $\{\Delta_k\}$ is available where $\Delta_k\subseteq \Pi(X)^H, k\geq 1$,
stands for a set of supervisors at $k$.
Any feedback of an action to take at a state can be represented by an element in $\Pi(X)^H$.
The controller applies a policy $\phi\in \Pi(X)^{\infty}$ to the system of $M_{\infty}$
built from the sequence of the $H$-length policies generated by the on-line asynchronous PIPS algorithm: PIPS first selects $\pi_1(H)\in \Pi(X)^H$ arbitrarily
and sets $\phi_1=\pi_1(H)_1$.
At $k >1$, if $\beta^{\pi_{k-1}(H),H}_{x_k} = \emptyset$, $\pi_k(H) = \pi_{k-1}(H)$.
Otherwise, PIPS updates $\pi_{k-1}(H)$ only at $x_k$ such that
$\pi_{k}(H)(x_k) = \pi_{\mathrm{ps}}(\beta^{\pi_{k-1}(H),H}_{x_k} \cup \Delta_k)(x_k)$
and $\pi_{k}(H)(x) = \pi_{k-1}(H)(x)$ for all $x$ in $X\setminus \{x_k\}$.
After finishing the update, PIPS sets $\pi_{k}(H)_1$ to the $k$th entry $\phi_k$ of $\phi$.
Once $\phi_k$ is available to the controller, $\phi_{k}(x_k)$ is taken to the system and
the system makes a random transition to $x_{k+1}$.
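A minimal sketch of this on-line loop is given below (illustrative only, reusing \texttt{update\_at\_state} from the previous sketch; \texttt{env\_step} is a hypothetical sampler of the next state of $M_{\infty}$ and \texttt{supervisors(k)} returns the, possibly empty, set $\Delta_k$).
\begin{verbatim}
# Minimal sketch (illustrative only) of the on-line asynchronous PIPS loop.

def online_pips(pi1, x1, X, A, R, P, gamma, H, env_step, supervisors, n_steps):
    pi, x = pi1, x1
    for k in range(1, n_steps + 1):
        pi, _ = update_at_state(pi, x, X, A, R, P, gamma, H,
                                extra_policies=tuple(supervisors(k)))
        phi_k = pi[0]                            # phi_k = pi_k(H)_1
        x = env_step(x, phi_k[x])                # system moves to x_{k+1}
    return pi
\end{verbatim}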
Before we present a general convergence result, an intuitive consequence, namely global convergence
under a sufficient condition related to the transition structure of $M_{\infty}$,
is given below as a theorem.
Note that $M_{\infty}$ is communicating if every Markov chain
induced by fixing each stationary policy $[\pi], \pi \in \Pi(X),$ in $M_{\infty}$ is communicating~\cite{kallen}.
\begin{thm}
\label{thm:mainconv}
Suppose that $M_{\infty}$ is communicating. Then for $\{\pi_k(H), k\geq 1\}$ generated
by on-line asynchronous PIPS,
there exists some $K < \infty$ such that $\pi_k(H) = \pi^*(H)$ for all $k > K$ where
$V^{\pi^*(H)}_H = V^*_H$ in $M_H$
for any $\pi_1(H)\in \Pi(X)^H$ and $\{\Delta_k\}$ where $\Delta_k\subseteq \Pi(X)^H, k\geq 1$.
\end{thm}
\proof
Because both $X$ and $A$ are finite, $B(X)$ and $\Pi(X)^H$ are finite. For any given $\{\Delta_k\}$
and any $\pi_1(H)$,
the monotonicity of $\{V^{\pi_k(H)}_H\}$ of $\{\pi_k(H)\}$ holds because
$\{\pi_k(H)\}$ satisfies that $\pi_k(H) >_H \pi_{k-1}(H)$ if $\beta^{\pi_{k-1}(H),H}_{x_k} \neq \emptyset$ and $V^{\pi_k(H)}_H \geq V^{\pi_{k-1}(H)}_H$ otherwise.
The assumption that $M_{\infty}$ is communicating ensures that every state $x$ in $X$ is visited
infinitely often within $\{x_k\}$. It follows that at some sufficiently large finite time $K$,
$\beta^{\pi_{K}(H),H} = \emptyset$ implying $I^{\pi_{K}(H),H}=\emptyset$. Therefore, $\pi_k(H) = \pi^*(H)$ for all $k > K$ where $V^{\pi^*(H)}_H = V^*_H$ in $M_H$.
\endproof
The policy $\phi$ of the controller becomes \emph{stable}
in the sense that the sequence $\{\phi_k\}$ converges to $\pi^*(H)_1$.
The question of whether checking that $M_{\infty}$ is communicating, without enumerating all stationary
policies in $\Pi(X)^{\infty}$, is possible with a polynomial time-complexity was raised
in~\cite{kallen}. Unfortunately, this problem is in general NP-complete~\cite{tsi}.
A simple and obvious sufficient condition for such connectivity is that $P^{a}_{xy} > 0$ for
all $x,y$ in $X$ and $a$ in $A(x)$.
The key to the convergence here is which states in $X$ are visited ``sufficiently often'' by
following $\{\phi_k\}$, so that an optimal action at each such state is eventually found.
The following result stated in Theorem~\ref{thm:gen} reflects this rationale.
Given a stationary policy $\phi\in \Pi(X)^{\infty}$,
the \emph{connectivity relation} $\chi^{\phi}$ is defined on $X$ from the Markov chain $M_{\infty}^{\phi}$
induced by fixing $\phi$ in $M_{\infty}$: If $x$ and $y$ in $X$ communicate with each other in $M_{\infty}^{\phi}$,
$(x,y)$ is an element of $\chi^{\phi}$.
Given $x\in X$, the equivalence class of $x$
with respect to $\chi^{\phi}$ is denoted by $[x]_{\chi^{\phi}}$. Note that
for any $x\neq y$, either $[x]_{\chi^{\phi}} = [y]_{\chi^{\phi}}$ or
$[x]_{\chi^{\phi}} \cap [y]_{\chi^{\phi}} = \emptyset$. The collection
of $[x]_{\chi^{\phi}}, x\in X,$ partitions $X$.
\begin{thm}
\label{thm:gen}
For any $\pi_1(H)\in \Pi(X)^H$ and any $\{\Delta_k\}$ where $\Delta_k\subseteq \Pi(X)^H, k\geq 1$,
$\{\pi_k(H)\}$ generated by on-line asynchronous PIPS converges to some $\lambda(H)$ in $\Pi(X)^H$
such that for some $K < \infty$, $\pi_k(H) = \lambda(H)$ for all $k > K$.
Furthermore, $\lambda(H)$ satisfies that
$V^{\lambda(H)}_H \geq V^{\pi}_H$
for all $\pi \in \bigcup_{x\in [x^*]_{\chi^{[\lambda(H)_1]}} } \beta^{[\lambda(H)_1],H}_x$, where $x^*$
is any visited state at $k>K$.
\end{thm}
\proof
By the same reasoning in the proof of Theorem~\ref{thm:mainconv}, $\{\pi_k(H)\}$
converges to an element $\lambda(H)$ in $\Pi(X)^H$ in a finite time $K$.
Because every state $x$ in $[x^*]_{\chi^{[\lambda(H)_1]}}$ is visited infinitely
often within $\{x_k\}$ for $k > K$, $I^{\lambda(H),H}_x = \emptyset$ for such $x$. No more improvement
is possible at all states in $[x^*]_{\chi^{[\lambda(H)_1]}}$. Otherwise, it contradicts
the convergence to $\lambda(H)$.
\endproof
The theorem states that $\lambda(H)$ is ``locally optimal'' over $[x^*]_{\chi^{[\lambda(H)_1]}}$
in the sense that no more improvement is possible at all states in $[x^*]_{\chi^{[\lambda(H)_1]}}$.
We remark that the above local convergence result is different from that of Proposition 2.2
by Bertsekas~\cite{bert2}.
In our case, a subset of $X$ in which every state is visited infinitely often
is \emph{not} assumed to be given in advance.
The sequence of policies generated by on-line PIPS
will eventually converge to a policy and Theorem~\ref{thm:gen}
characterizes its local optimality with respect to the communicating classes,
in which every state is visited infinitely often, induced by the policy.
Note further that Bertsekas' result is within the context of rolling an \emph{infinite-horizon}
control.
Unfortunately, we cannot provide here a useful bound on the performance of $\lambda(H)$
in Theorem~\ref{thm:gen}, e.g., an upper bound on $||V^{[\lambda(H)_1]}_{\infty} - V^*_{\infty}||_{\infty}$
because it is difficult to bound $||V^{\lambda(H)}_{H} - V^*_{H}||_{\infty}$.
The degree of approximation by $\lambda(H)$ for $\pi^*(H)$ will determine
the performance of the rolling horizon control policy by the on-line asynchronous
PIPS algorithm.
\section{Concluding Remarks}
The off-line and on-line PIPS algorithms can play the role of \emph{frameworks}
for solving MDPs with supervisors.
While the exposition of the algorithms (and their development) was carried out
mainly from the perspective of theoretical soundness and results,
both can be easily implemented by simulation if the MDP model is \emph{generative}.
In particular, for the on-line case, each $\pi \in \beta^{\pi_k,H}_{x_k}\cup \Delta_k$
is simply followed (rolled out) over a relevant forecast-horizon
(see, e.g.,~\cite{bert1} for related approaches and references) in order to
generate its sample-paths.
If $\beta^{\pi_k,H}_{x_k}\cup \Delta_k$ is large, some policies from $\beta^{\pi_k,H}_{x_k}$
and $\Delta_k$ can be ``randomly'' sampled, for example, without losing the improvement
of $\pi_k$.
A study of an actual implementation is important and is left for future work.
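To make the simulation-based use concrete, the following minimal Python sketch (the function names and the generative-model interface \texttt{step(x, a)}, which returns a sampled next state and reward, are illustrative assumptions rather than part of the algorithms above) rolls out each candidate policy, playing the role of an element of $\beta^{\pi_k,H}_{x_k}\cup \Delta_k$, from the visited state and keeps the one with the best estimate.
\begin{verbatim}
def rollout_value(step, policy, x0, H, num_paths=100):
    # Monte-Carlo estimate of the H-horizon value of `policy` at x0,
    # using the generative model step(x, a) -> (next_x, reward).
    total = 0.0
    for _ in range(num_paths):
        x, ret = x0, 0.0
        for t in range(H):
            a = policy(x, t)      # nonstationary H-horizon policy
            x, r = step(x, a)
            ret += r
        total += ret
    return total / num_paths

def switch_policy(step, candidates, x, H, num_paths=100):
    # One on-line improvement step at the visited state x: roll out every
    # candidate policy and keep the one with the largest estimated value.
    return max(candidates,
               key=lambda pi: rollout_value(step, pi, x, H, num_paths))
\end{verbatim}
Note that only a single expectation (one Monte-Carlo average of $H$-step returns) is estimated per candidate policy.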
On-line PIPS is also in the category of ``learning'' control.
Essentially, $V^*_0$ can be thought of as initial knowledge about controlling the
system, e.g., as in the value function of a self-learned Go-playing policy
of AlphaZero from a neural-network based learning-system.
The monotonically improving value-function sequence generated by PIPS
can be interpreted as learning, or obtaining better knowledge about,
controlling the system.
There exists an algorithm, ``adaptive multi-stage sampling''
(AMS)~\cite{changams}, for $M_H$ whose random estimate converges to $V^*_H(x)$
for a given $x$, in expectation,
as the number of samplings approaches infinity.
AMS employed within the rolling-horizon control is closer to its original spirit,
compared with on-line PIPS, because the value of $V^*_H(x_k)$ is approximated
at each visited $x_k$ like solving $M_H$ in advance.
In contrast with the PI-based approach here, the idea of AMS is to replace the
inside of the maximization over the admissible action set in the $T$-operator
of VI, so that the maximum selection is done with a different utility
for each action, namely a ``necessity'' measure of sampling, which
is estimated over the set of currently sampled next states with a support
from a certain upper-confidence bound that controls the degree of search by
each action.
Because the replacement is applied at each level while emulating the process
of backward induction, AMS requires a recursive process in a depth-first-search manner
that effectively builds a sampled tree whose size is exponential in $H$.
It is therefore not surprising that, similar to the complexity of backward induction,
AMS also has a (sample) complexity that depends \emph{exponentially} on $H$.
On the other hand, in general estimating the value of a policy, \emph{not the optimal value},
is a much easier task by simulation.
Generating random sample-paths of a policy has a \emph{polynomial} dependence on $H$.
More importantly, it seems difficult to discuss the convergence behaviour
of the rolling horizon AMS-control
because some characterization of a \emph{stochastic} policy, due to random
estimates of $V^*_H(x_k)$ at each $k$, needs to be made with a finite sampling-complexity.
Arguably, this would be true for any algorithm that produces a \emph{random} estimate of
the optimal value, e.g., Monte-Carlo Tree Search (MCTS) used in AlphaGo or
AlphaZero~\cite{bert1}. However, it is worthwhile to note that these algorithms' output can
act as a feedback from a supervisor in the framework of on-line PIPS.
It can be checked that another multi-policy improvement method,
parallel rollout~\cite{changps}, does not
preserve the monotonicity property under \emph{asynchronous} update
when a set of \emph{more than one} policy is used for the improvement.
Even with synchronous update, the parallel-rollout approach
requires estimating a \emph{double expectation} for each action, one for the next-state
distribution and another one in the value function (to be evaluated at the next state).
In contrast, in policy switching a single expectation for each policy needs to be estimated,
leading to a lower simulation-complexity.
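As a rough illustration of this difference (hypothetical names; it reuses \texttt{rollout\_value} and the generative-model interface assumed in the earlier sketch, and shows only one plausible form of the estimate), the per-action value used by parallel rollout nests an inner value estimate inside an average over sampled next states:
\begin{verbatim}
def parallel_rollout_q(step, policies, x, a, H, n_next=20, n_paths=20):
    # Double expectation: an outer average over next states sampled from
    # step(x, a), and, for each sampled next state, an inner Monte-Carlo
    # estimate of the best base-policy value over the remaining horizon.
    total = 0.0
    for _ in range(n_next):
        x1, r = step(x, a)
        best = max(rollout_value(step, pi, x1, H - 1, n_paths)
                   for pi in policies)
        total += r + best
    return total / n_next
\end{verbatim}
Policy switching, by comparison, averages complete $H$-step sample-paths once per candidate policy.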
\end{document}
|
\begin{document}
\title{On the asymptotic behavior of cocycles over flows}
\begin{abstract}
In 1968, V.I. Oseledets formulated the question of convergence in the Birkhoff theorem and the multiplicative ergodic theorem for measurable cocycles over flows under the condition of integrability for each individual~$t$. A.M. Stepin and the author established (2016) convergence along time subsets of density 1. In this note, we show that the convergence moreover holds modulo time subsets of finite measure.
MSC2020: 37A10, 37A30.
Key words: cocycles, flows, Birkhoff ergodic theorem, Oseledets multiplicative ergodic theorem, Lyapunov exponents
\end{abstract}
\section{Introduction}
Let $\{T^t\}$ be a measure-preserving measurable flow on a Lebesgue space $(X,\mu)$ with $\mu(X) = 1.$
A {\it cocycle} over the flow $\{T^t\}$ with values in a group~$G$ is a measurable function $\alpha \colon\mathbb R\times X \to G$ such that $$\alpha (t + s, x) = \alpha(s, T^t x)\alpha(t, x)$$ for all $t,s\in\mathbb R$ and $x\in X.$\footnote{Similarly for semiflows.} For $G = \mathbb R,$ we have an additive cocycle: $$ \alpha (t + s, x) = \alpha (t, x) + \alpha (s, T^t x);$$ and if $\alpha$ is absolutely continuous with respect to $t,$ then $\alpha(t, x) = \int_0^t f (T^s x) \, ds $ for some measurable function $f$.\footnote {As a function $f,$ one can take $f(x) = \varlimsup \limits_{n \to \infty} \frac{\alpha (\varepsilon_n, x)}{\varepsilon_n}$ for some sequence $\varepsilon_n \to0. $}
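Indeed, in the absolutely continuous case the cocycle identity can be checked directly by splitting the integral and substituting $u\mapsto u+t$ in the second summand:
$$\alpha(t+s,x)=\int_0^{t+s} f(T^u x)\,du=\int_0^{t} f(T^u x)\,du+\int_0^{s} f\bigl(T^{u}(T^t x)\bigr)\,du=\alpha(t,x)+\alpha(s,T^tx).$$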
The Oseledets multiplicative ergodic theorem (MET) \cite{O} generalizes the statement about convergence of the means $\alpha (t, x)/t $ of such cocycles (the Birkhoff theorem) to the non-commutative case. According to MET, for an arbitrary measurable cocycle $A\colon\mathbb R \times X \to GL (m, \mathbb R)$ satisfying the condition
\begin{equation}
\sup\limits_{0\leqslant t \leqslant1} \ln ^ + \|A (t, x) ^ {\pm1} \|\in L^1 (X, \mu), \label{int}
\end{equation}
almost all points with respect to the invariant measure $\mu$ are Lyapunov regular. This implies that almost everywhere there are exact Lyapunov exponents as well as the block structure of the cocycle: the vector bundle $X\times\mathbb R^m$ is decomposed into a direct sum of invariant subbundles corresponding to distinct Lyapunov exponents. If $X$ is a smooth compact manifold and $\{T^t\}$ is a flow of class $C^1$ preserving the smooth measure $\mu$ on $X,$ up to a set of measure zero the tangent bundle $TX$ admits trivialization and the differential of the flow $(t, x)\mapsto A(t, x):=D_xT^t$ is a cocycle satisfying condition \eqref{int}. This explains the great importance of MET for the theory of dynamical systems, especially for nonuniformly hyperbolic theory (see \cite{BP}). MET was generalized for local fields \cite{R}, for Hilbert \cite{Ru82, GM} and Banach \cite{M, T} spaces, spaces of non-positive curvature \cite{K87, KM}.
\par In the one-dimensional case, a sufficient condition for convergence of the means of an additive cocycle $\alpha(t, x)$, similar to \eqref{int}, has the form
\begin {equation}
\sup\limits_{0 \leqslant t\leqslant1}|\alpha (t, x)|\in L^1 (X, \mu). \label{int2}
\end{equation}
In the work \cite{O}, V.I. Oseledets posed the question about convergence in the Birkhoff theorem and MET under the condition of integrability for each individual $t.$
In the joint work of the author and A.M. Stepin \cite{LS}, it was shown that although convergence for all $t$ may fail under these conditions, it does hold along time subsets of density 1 depending on $x \in X.$\footnote{The argument there used the function $\varphi(x) = x^{-1/2}$ of page \pageref{phi} below.} In particular, exact Lyapunov exponents exist for cocycles almost everywhere in the sense of this convergence. Such generalized Lyapunov exponents seem natural from an applied point of view since they are not sensitive to rare outliers.
Here we prove a stronger statement: the convergence persists if a set of finite Lebesgue measure (depending on $x$) is discarded from the time axis.
\section {Birkhoff theorem}
Here are some examples that demonstrate the possible behavior of the Birkhoff means of additive cocycles. We first recall the construction of a suspension flow over an automorphism $S$ of a Lebesgue space $(Y,\nu)$ with a measurable roof function $f\colon Y\to\mathbb R_{+},$ $\int f d\nu=1,$ $f(y)\geqslant C>0.$ Such a flow $\{T^t\}$ acts on the space $X=\{(y,\tau)\in Y\times\mathbb R:0\leqslant\tau<f(y)\}$ with the measure $d\mu=d\nu\,dt$ by the formula $$T^t(y,\tau)=\begin{cases}
(y,\tau+t),&0\leqslant\tau+t<f(y),\\
(S^ny,\tau+t-f_n(y)),&f_n(y)\leqslant\tau+t<f_{n+1}(y),
\end{cases}$$
where $f_n(y):=\sum\limits_{i=0}^{n-1}f(S^iy).$ In this case, one can always pass to an isomorphic flow with $C\leqslant f(y) \leqslant 2C$ \cite{Ro}.
\begin{ex} Let $\{T^t\}$ be an arbitrary flow on $(X,\mu)$ without fixed points. (At the fixed points the means $\alpha(t, x)/t$ obviously converge.) Consider a coboundary, i.e., a cocycle of the form $\alpha(t,x)=h(T^tx)-h(x)$ with a function $h\in L^1$ whose values $h(T^tx)$ along the trajectory are unbounded on each of a sequence of consecutive time intervals.
The existence of such a function follows from the suspension representation of the flow. The flow under consideration is isomorphic to at most a countable sum of suspension flows with decreasing $C$. For each such suspension, we define a function on its space unbounded on each fiber $\{y\}\times[0, f(y))$. This gives an example of an integrable cocycle whose means do not converge everywhere.
\end{ex}
In the same way, one can construct a similar example of a cocycle not cohomologous to 0 using the following statement.
\begin{lemma} Any cocycle $\alpha\colon\mathbb R\times X\to \mathbb R$ over a suspension flow $\{T^t\}$ is uniquely determined by the values $\alpha(t,(y,0)),$ $0\leqslant t\leqslant f(y),$ $y\in Y$.
\end{lemma}
\begin{proof} Indeed, if $\alpha\colon\mathbb R\times X\to \mathbb R$ is a cocycle, then for $0\leqslant\tau<f(y)$ and $f_n(y)\leqslant\tau+t<f_{n+1}(y)$
we have $$\alpha(t,(y,\tau))=\alpha(\tau+t,(y,0))-\alpha(\tau,(y,0))=\sum_{i=0}^{n-1}\alpha(f(S^iy),(S^iy,0))+$$
$$+\alpha\bigl(\tau+t-f_n(y), (S^ny,0)\bigr)-\alpha(\tau,(y,0)).$$
It is easy to verify that the function $\alpha$ given by this formula is a cocycle.
\end{proof}
\begin{rem} For ergodic $S,$ a cocycle locally bounded in $t$ with the same property can be obtained by setting
$$\tilde\alpha(t,(y,0))=\min(\alpha(t,(y,0)),h(y)),$$ where $h(y)=N_k^2$ for $y\in A_k\setminus A_{k+1}$ and $$A_k\searrow\emptyset,\ \nu\Bigl(\bigcup_{n=0}^{N_k}S^{-n}A_k\Bigr)>1-\varepsilon_k,\ \varepsilon_k\searrow0.$$
\end{rem}
\begin{ex} Condition \eqref{int2} is not necessary for convergence $\mu$-a.e. of ratios $\alpha(t,x)/t$ as $t\to\infty$ even for ergodic flows. As an example, we can take a suspension flow built over an ergodic base transformation $S\colon[0,1]\to[0,1]$ preserving Lebesgue measure, under the function $f(y)=y^{-2/3},$ and define the cocycle by the formula $$\alpha(t,(y,0))=\begin{cases}\sqrt{n},& t=n\in\mathbb N\cap[0,f(y)],\\ 0,& t\in\mathbb N^c\cap[0,f(y)].\end{cases}$$
\end{ex}
Although the means of an integrable cocycle may not converge, it turns out that convergence in density takes place.
The {\it upper density} of a Borel set $\tau\subset\mathbb R_+$ is the upper limit $$\bar{\textup{d}}(\tau)=\varlimsup_{t\to\infty}\frac{\lambda(\tau\cap[0,t])}{t},$$
where $\lambda$ is Lebesgue measure.
The function $f(t)$ {\it converges in density} to $l$ as $t\to\infty$ (we will write $\bar{\textup{d}}\lim\limits_{t\to\infty}f(t)=l$) if there exists a set $\tau\subset\mathbb R_+$ of density 0 such that $$\lim\limits_{t\to\infty,\ t\notin\tau}f(t)=l,$$ or equivalently $$\forall\varepsilon>0\ \ \bar{\textup{d}}\{t>0:|f(t)-l|\geqslant\varepsilon\}=0.$$
In our case, the set $\tau$ will even have finite Lebesgue measure (and hence density 0).
\par Below in this section, we will consider measure-preserving semiflows~$\{T^t\}_{t\geqslant0}.$
\begin{theorem}\label{add} If $\alpha(t,x)\in L^1(X,\mu)$ for each $t,$ then almost everywhere there exists the limit $$\bar{\textup{d}}\lim\limits_{t\to\infty}\frac{\alpha(t,x)}{t}=\beta(x),$$ where the function $\beta$ is measurable, $T^t$-invariant, and $$\int\beta(x)\,d\mu(x)=\frac{1}{t}\int\alpha(t,x)\,d\mu(x).$$ Moreover, the neglected subset of the time axis can be chosen measurably depending on $x$ and having a finite Lebesgue measure for each $x$.
\end{theorem}
\begin{proof}
Since $$\alpha(t,x)=\sum_{n=0}^{[t]-1}\alpha(1,T^nx)+\alpha(\{t\},T^{[t]}x),$$ by the Birkhoff theorem for an automorphism, this convergence is equivalent to convergence of $\alpha(\{t\},T^{[t]}x)/t.$
We have $$\int|\alpha(t+s,x)|\,d\mu(x)\leqslant\int|\alpha(t,x)|\,d\mu(x)+\int|\alpha(s,T^tx)|\,d\mu(x)=$$ $$=\int|\alpha(t,x)|\,d\mu(x)+\int|\alpha(s,x)|\,d\mu(x).$$
Therefore, the measurable, subadditive function $t\mapsto\int|\alpha(t,x)|\,d\mu(x)$ is locally bounded (\cite{K}, p. 461).
Denote $$\Delta_n^{\varepsilon}(x):=\left\{t\in[n,n+1): \frac{|\alpha(\{t\},T^nx)|}{n}\geqslant\varepsilon\right\}.$$
Note that the series with positive terms $\sum_n\lambda(\Delta_n^{\varepsilon}(x))$ converges almost everywhere since the series of integrals converges:
$$\sum_n\int_X\lambda(\Delta_n^{\varepsilon}(x))\,d\mu(x)=\sum_n\int_X\int_0^1I_{\{|\alpha(t,x)|\geqslant\varepsilon n\}}\,dt\,d\mu(x)=$$
$$=\sum_n\int_0^1\mu\{|\alpha(t,x)|\geqslant\varepsilon n\}\,dt\leqslant\frac{1}{\varepsilon}\int_0^1\int_X|\alpha(t,x)|\,d\mu(x)\,dt\leqslant$$ $$\leqslant\frac{1}{\varepsilon}\sup\limits_{0\leqslant t\leqslant1}\int\limits_X|\alpha(t,x)|\,d\mu(x)<\infty.$$
Let $\tau_n(x)=\cup_k\Delta_k^{\varepsilon_n}(x),$ where $\varepsilon_n$ is some sequence decreasing to zero and
$$k_n(x):=\min\left\{k: \lambda(\tau_n(x)\cap[k,\infty))\leqslant\frac{1}{2^n}\right\}.$$ Then, as the required set of finite measure, on whose complement convergence holds, we can take
$$\tau(x)=\bigcup_n\{\tau_n(x)\cap[k_{n-1}(x),k_n(x))\}.$$
\end{proof}
\begin{cor} Under the condition $\alpha(t,x)\in L^1(X,\mu)$ (for all $t$) the lattice limit $\lim\limits_{n\to\infty}\alpha(nh,x)/nh$ does not depend on the lattice spacing $h.$
\end{cor}
\begin{remark} Obviously, the discarded subset of the $t$ axis in Theorem \ref{add} can be chosen of arbitrarily small measure.
\end{remark}
\begin{remark} Unlike the case of absolutely continuous cocycles, there is no local ergodic theorem for arbitrary measurable cocycles, as we see from the example of the Brownian motion cocycle: with $s=1/t,$ the process $B^t/t=sB^{1/s} = \widetilde B^s $ is itself a Brownian motion and does not converge as $s\to\infty$ in any sense.
\end{remark}
As a corollary of Theorem \ref{add}, we obtain one more asymptotic property.
\begin{theorem} Let $\{T^t\}$ be an ergodic semiflow, $\alpha(t,x)\in L^1(X,\mu)$ for each~$t,$ and $\int\alpha(t,x)\,d\mu(x)=0$ (if this is true for one $t,$ then for all). Then for every $\varepsilon>0$ and $\mu$-almost every $x,$
$$\lambda\{t>0: |\alpha(t,x)|\leqslant\varepsilon\}=\infty.$$
\end{theorem}
This statement generalizes the well-known fact about the recurrence of random walks on the line and its ergodic analogue, the Atkinson theorem \cite{A}, and is proved in the same way as in the absolutely continuous case \cite{Sh}.
We also give some constructions of suitable sets $\tau(x)$ of density 0 that are more explicit than the one in Theorem \ref{add}, although they can have infinite measure.
Put
$$\tau(x)=\bigcup_n\Delta_n(x),\ \Delta_n(x):=\left\{t\in[n,n+1): |\alpha(\{t\},T^nx)|\geqslant \frac{n}{\varphi(n)}\right\},$$
where $\varphi$ is a monotone function increasing to $\infty$. Obviously, we have
\begin{equation}\frac{\alpha(\{t\},T^{[t]}x)}{t}\to0,\ t\to\infty,\ t\notin\tau(x).\label{lim}
\end{equation}
Let us estimate the growth of the measure $\lambda(\tau(x)\cap[0,t])$. Note that the cocycle $\alpha(t, x)$ is locally integrable over $t$ for almost all $x$ since, by the Tonelli theorem, $$\int\limits_X\int\limits_0^t|\alpha(s,x)|\,ds\,d\mu(x)=\int\limits_0^t\int\limits_X|\alpha(s,x)|\,d\mu(x)\,ds\leqslant t\sup\limits_{0\leqslant s\leqslant t}\int\limits_X|\alpha(s,x)|\,d\mu(x)<\infty$$
due to the local boundedness of the function $t\mapsto\int|\alpha(t,x)|\,d\mu(x).$
It follows from the Birkhoff theorem for discrete time that there is a constant~$C,$ depending on $x,$ such that for each $n$
$$C(x)n\geqslant\sum_{k=1}^{n}\int_0^1|\alpha(s,T^kx)|\,ds\geqslant\sum_{k=1}^{n}\frac{k}{\varphi(k)}\lambda(\Delta_k(x)).$$
Applying the Abel transform to the last expression, for $S_n(x)=\sum_{k=1}^n\lambda(\Delta_k(x)),$ we get
$$S_n(x)\leqslant\varphi(n)\Bigl(C(x)+\frac{1}{n}\sum_{k=1}^n\Bigl(\frac{k}{\varphi(k)}-\frac{k-1}{\varphi(k-1)}\Bigr)S_{k-1}(x)\Bigr).$$
Hence by induction,
$$S_n(x)\leqslant C(x)\sum_{k=1}^n\frac{\varphi(k)}{k}=O(\varphi(n)\ln n).$$
We thus obtain an ``almost logarithmic'' estimate for the growth of the set $\tau$: $$\lambda(\tau(x)\cap[0,t])=O(\varphi(t)\ln t).$$
The same is true for the set $\tau$ constructed from $\Delta_n$ of the form
$$\Delta_n(x)=\left\{t\in[n,n+1): |\alpha(\{t\},T^nx)|\geqslant \frac{n}{\varphi\bigl(\frac{f(T^nx)}{n}\bigr)}\right\},$$
$$f(x)=\int_0^1|\alpha(s,x)|\,ds,$$
where $\varphi$ is a convex function such that $\lim_{x\to0+}\varphi(x)=\infty$ and $\lim_{x\to0+}x\varphi(x)=0.$ E.g., $\varphi(x)=x^{-1/2}.$\label{phi}
Since, by the Borel-Cantelli lemma, $f(T^nx)/n\to0,\ n\to\infty,$ almost everywhere,
it follows that~\eqref{lim} holds. Also,
$$\lambda(\Delta_n(x))\leqslant\frac{f(T^nx)}{n}\varphi\Bigl(\frac{f(T^nx)}{n}\Bigr)\to0,\ n\to\infty,$$
which implies that the set $\tau$ has density 0. By Jensen's inequality, $$\lambda(\tau(x)\cap[0,n+1])\leqslant\sum_{k=1}^{n}\frac{f(T^kx)}{k}\varphi\Bigl(\frac{f(T^kx)}{k}\Bigr)\leqslant\Bigl(\sum_{k=1}^{n}\frac{f(T^kx)}{k}\Bigr)\varphi\Bigl(\frac{1}{n}\sum_{k=1}^{n}\frac{f(T^kx)}{k}\Bigr).$$
Using the Abel transform and the Birkhoff theorem, one can obtain the asymptotics
$$\sum_{k=1}^{n}\frac{f(T^kx)}{k}\alphasymp \ln n,$$ from which it follows that the above estimate is almost logarithmic.
Note that an analogue of Kingman's subadditive ergodic theorem \cite{Ki} is also valid.
\begin{theorem}\label{subadd} Let $\alpha\colon \mathbb R_+\times X\to\mathbb R\cup\{-\infty\}$ be a measurable function such that $\alpha^+(t,x)\in L^1(X,\mu)$ for each $t$ and $$\alpha(t+s,x)\leqslant\alpha(t,x)+\alpha(s,T^tx)$$ for all $s,t\in\mathbb R_+,$ $x\in X.$ Then there exists a $T^t$-invariant function $\beta\colon X\to\mathbb R\cup\{-\infty\}$ such that $\beta^+\in L^1(X,\mu),$
$$\lim\limits_{t\to\infty}\frac{1}{t}\int\alpha(t,x)\,d\mu(x)=\inf\limits_{t}\frac{1}{t}\int\alpha(t,x)\,d\mu(x)=\int\beta(x)\,d\mu(x),$$
and for $\mu$-a.e. $x\in X,$ there exists the limit
$$\bar{\textup{d}}\lim\limits_{t\to\infty}\frac{\alpha(t,x)}{t}=\beta(x)$$
along the complements to subsets of the time axis of finite Lebesgue measure.
\end{theorem}
\begin{proof} We have
\begin{equation}
\alpha([t]+1,x)-\alpha(1-\{t\},T^tx)\leqslant\alpha(t,x)\leqslant\alpha([t],x)+\alpha(\{t\},T^{[t]}x).\label{1}
\end{equation}
Taking into account that the subadditive function $t\mapsto\int\alpha^+(t,x)\,d\mu(x)$ is locally bounded, applying the arguments of Theorem \ref{add} to the sets
$$\Delta_{n,1}(x)=\bigl\{t\in[n,n+1):\alpha^+(\{t\},T^nx)\geqslant\varepsilon n\bigr\},$$
$$\Delta_{n,2}(x)=\bigl\{t\in[n,n+1):\alpha^+(1-\{t\},T^tx)\geqslant\varepsilon n\bigr\},$$
we find sets $\tau_{1,2}(x)$ of finite measure, on whose complements there is convergence
$$\frac{\alpha^+(\{t\},T^{[t]}x)}{t}\to0,\ \frac{\alpha^+(1-\{t\},T^tx)}{t}\to0$$
respectively. As the required set of finite measure, we can take $\tau_1(x)\cup\tau_2(x).$ Indeed, by the subadditive ergodic theorem for discrete time, there exists the limit $\lim_{t\to\infty}\alpha([t],x)/t=:\beta(x)$ with $\beta^+\in L^1(X,\mu).$ Therefore,
$$\varlimsup\limits_{t\to\infty,\,t\notin\tau_1\cup\tau_2}\frac{\alpha(t,x)}{t}\leqslant\varlimsup\limits_{t\to\infty,\,t\notin\tau_1}\frac{\alpha(t,x)}{t}\leqslant\beta(x),$$ $$\varliminf\limits_{t\to\infty,\,t\notin\tau_1\cup\tau_2}\frac{\alpha(t,x)}{t}\geqslant\varliminf\limits_{t\to\infty,\,t\notin\tau_2}\frac{\alpha(t,x)}{t}\geqslant\beta(x).$$
Since the function $t\mapsto\int\alpha(t,x)\,d\mu(x)$ is subadditive, there exists the limit $$\lim\limits_{t\to\infty}\frac{1}{t}\int\alpha(t,x)\,d\mu(x)=\inf\limits_{t}\frac{1}{t}\int\alpha(t,x)\,d\mu(x).$$ And from $\eqref{1}$ and the fact that the function $t\mapsto\int\alpha^+(t,x)d\mu(x)$ is locally bounded it follows that $$\int\beta(x)\,d\mu(x)=\lim\limits_{t\to\infty}\frac{1}{t}\int\alpha([t]+1,x)\,d\mu(x)\leqslant\lim\limits_{t\to\infty}\frac{1}{t}\int\alpha(t,x)\,d\mu(x)\leqslant$$
$$\leqslant\lim\limits_{t\to\infty}\frac{1}{t}\int\alpha([t],x)\,d\mu(x)=\int\beta(x)\,d\mu(x).$$
\end{proof}
\section {Multiplicative ergodic theorem}
The following theorem shows that the structure of the Oseledets invariant subspaces is preserved under our weaker integrability conditions for cocycles.
\begin{theorem}[multiplicative ergodic theorem]
Let $A\colon\mathbb R\times X\to GL(m,\mathbb R)$ be a cocycle with $\ln^+\|A(t,x)^{\pm1}\|\in L^1(X,\mu)$ for each $t\in\mathbb R$. Then for almost every~$x$
\begin{itemize}
\item[\textup{(i)}] there exists the limit $$\bar{\textup{d}}\lim\limits_{t\to\infty}(A^*(t,x)A(t,x))^{\frac{1}{2t}}=:\Lambda(x);$$
\item[\textup{(ii)}] there exists a measurable splitting $\mathbb R^m=\bigoplus\limits_{i=1}^{k(x)}U_i(x)$ such that $$U_i(T^tx)=A(t,x)U_i(x)$$ and $$\bar{\textup{d}}\lim\limits_{t\to\pm\infty}\frac{1}{|t|}\ln\|A(t,x)v\|=\pm\chi_i(x),\ v\in U_i(x)\setminus\{0\}$$ uniformly on $U_i(x)\setminus\{0\}.$
The functions $k(x),$ $\chi_i(x),$ and $\dim U_i(x)$ are $T^t$-invariant and $\exp(\chi_i(x))$ are the eigenvalues of the matrix $\Lambda(x)$ with multiplicities $\dim U_i(x).$
\end{itemize}
Moreover, the convergence holds along the complements to time axis subsets of finite Lebesgue measure (which can be chosen measurably dependent on $x$).
\end{theorem}
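The simplest illustration is a diagonal cocycle $A(t,x)=\mathrm{diag}\bigl(e^{\alpha_1(t,x)},\dots,e^{\alpha_m(t,x)}\bigr)$ with additive cocycles $\alpha_i$ satisfying the assumption; the theorem then reduces to a coordinate-wise application of Theorem \ref{add}: $\Lambda(x)=\mathrm{diag}\bigl(e^{\beta_1(x)},\dots,e^{\beta_m(x)}\bigr)$ with $\beta_i(x)=\bar{\textup{d}}\lim\limits_{t\to\infty}\alpha_i(t,x)/t$, the exponents $\chi_i(x)$ are the distinct values among the $\beta_i(x)$, and each $U_i(x)$ is spanned by the corresponding coordinate vectors.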
This theorem is deduced, as for discrete time, from its one-sided version, in which (ii) is replaced by
{\it
\begin{itemize}
\item[\textup{(ii')}] $$\bar{\textup{d}}\lim\limits_{t\to\infty}\frac{1}{t}\ln\|A(t,x)v\|=\chi_i(x),\ v\in V_i(x)\setminus V_{i+1}(x),$$
$$V_i(T^tx)=A(t,x)V_i(x),$$
$$V_i(x)=\bigoplus\limits_{j=i}^{k(x)}W_j(x),$$ where $W_j(x)$ are the eigenspaces of the operator $\Lambda(x)$ corresponding to its eigenvalues $\exp(\chi_1(x))\geqslant\ldots\geqslant\exp(\chi_{k(x)}(x))$.
\end{itemize}}
Together with $(i)$ this condition is an analogue of the Lyapunov regularity and, as was noted by V.A. Kaimanovich \cite{K87}, is equivalent to the existence of a positive definite symmetric matrix $\Lambda(x)$ such that
$$\bar{\textup{d}}\lim_{t\to\infty}\frac{1}{t}\ln\|(A(t,x)\Lambda^{-t}(x))^{\pm1}\|=0\mbox{ a.e.}$$
If not all $\chi_i(x)$ are equal to 0, then the last condition means the proximity of the trajectory of the inverse cocycle $A(t,x)^{-1}p$ to the geodesic $\gamma(\theta(x)t,x)=\Lambda^{-t}(x)p$ in the symmetric space $GL(m,\mathbb R)/O(m)$ (for which $p=O(m)$) with the corresponding metric $\rho$ \cite{K87}:
$$\bar{\textup{d}}\lim_{t\to\infty}\frac{1}{t}\rho(A(t,x)^{-1}p,\gamma(\theta(x)t,x))=0.$$
(In this case, the trajectory $A(t,x)^{-1}p$ tends to a random point of the boundary at infinity $GL(m,\mathbb R)/O(m)(\infty).$) A more general statement --- a version of the Karlsson-Margulis theorem \cite{KM} --- is also true. We restrict ourselves to the ergodic case.
\begin{theorem} Let $(Y,\rho)$ be a uniformly convex, Busemann nonpositively curved, complete metric space, and let $A\colon \mathbb R_+\times X\to G$ be an ``inverse'' cocycle, i.e.,
$$A(t+s,x)=A(t,x)A(s,T^tx),$$ over an ergodic semiflow $\{T^t\}$
with values in the semigroup $G$ of nonexpanding maps of $Y$. Suppose that for a fixed point $p\in Y,$
$$\rho(A(t,x)p,p)\in L^1(X,\mu)$$ holds for all $t$. Then for almost every $x$ there exists the limit
\begin{equation}\bar{\textup{d}}\lim_{t\to\infty}\frac{1}{t}\rho(A(t,x)p,p)=:\theta,\label{v}\end{equation}
and if $\theta>0,$ then there exists a unique geodesic ray $\gamma$ in $Y,$ depending on $x,$ with $\gamma(0,x)=p,$ such that
\begin{equation}\bar{\textup{d}}\lim_{t\to\infty}\frac{1}{t}\rho(A(t,x)p,\gamma(\theta t,x))=0 \mbox{ a.e.}\label{l}\end{equation}
Moreover, the convergence holds along the complements to time axis subsets of finite Lebesgue measure.
\end{theorem}
\begin{proof}
Theorem \ref{subadd} implies the existence of the limit \eqref{v}. Statement \eqref{l} follows from the discrete time theorem, the inequalities
$$\rho\bigl(A(t,x)p,\gamma(\theta t)\bigr)\leqslant\rho\bigl(A(t,x)p,A([t],x)p\bigr)+\rho\bigl(A([t],x)p,\gamma(\theta[t])\bigr)+
\rho\bigl(\gamma(\theta[t]),\gamma(\theta t)\bigr),$$
$$\rho\bigl(A(t,x)p,A([t],x)p\bigr)=\rho\bigl(A([t],x)A(\{t\},T^{[t]}x)p,A([t],x)p\bigr)\leqslant$$ $$\leqslant\rho\bigl(A(\{t\},T^{[t]}x)p,p\bigr),$$
and the existence of the limit
$$\bar{\textup{d}}\lim_{t\to\infty}\frac{1}{t}\rho(A(\{t\},T^{[t]}x)p,p)=0.$$
The latter was in fact already used in applying Theorem \ref{subadd}.
\end{proof}
\begin{rem} The infinite-dimensional operator versions of the MET for convergence in density are also valid. Proofs in the spirit of Raghunathan can be carried out by choosing the naturally arising countable collection of ``bad'' sets of density 0 so that the series of their measures converges.
\end{rem}
\noindent Maxim E. Lipatov\\
Dept. of Mechanics and Mathematics\\
Lomonosov Moscow State University \\
Main Building, 1 Leninskiye Gory\\
Moscow 119991\\
RUSSIA
\noindent\textit{E-mail:} \texttt{[email protected]}
\end{document}
|
\begin{document}
\title {Successive Minima and Lattice Points}
\author{Martin Henk}
\address{Martin Henk, Technische Universit\"at Wien, Abteilung f\"ur Analysis,
Wiedner Hauptstr. 8-10/1142, A-1040 Wien, Austria}
\email{[email protected]}
\subjclass{52C07, 11H06, 11P21}
\thanks{The work was completed while I was
visiting the Mathematical Department of the University of Crete in
Heraklion. I would like to thank
the University of Crete for their great hospitality and support.
}
\begin{abstract} The main purpose of this note is to prove an upper
bound on the number of lattice points of a centrally symmetric
convex body in terms of the successive minima of the body. This bound
improves on former bounds and narrows the gap towards a lattice
point analogue of Minkowski's second theorem on successive minima.
Minkowski's proof of his second theorem is rather lengthy and it was
also criticised as obscure. We present a short proof of Minkowski's
second theorem on successive minima, which, however, is based on the
ideas of Minkowski's proof.
\end{abstract}
\maketitle
\section{Introduction}
In 1896 Hermann Minkowski's fundamental and guiding book ``Geometrie der
Zahlen'' \cite{Min:geozahl} was published, which
may be considered as the first systematic study on relations
between convex geometry, Diophantine approximation, and the theory of
quadratic forms (cf.~{\sc Gruber} \cite{Gru:geonum}). One of the
basic problems in
geometry of numbers is to decide whether a given set in the
$d$-dimensional Euclidean space ${\mathbb R}^d$ contains a non-trivial lattice point
of a $d$-dimensional lattice $\Lambda\subset {\mathbb R}^d$.
With respect to the class ${\mathcal K}^d_0$ of all 0-symmetric convex bodies in
${\mathbb R}^d$ with non-empty interior and the volume
$\vol(\cdot)$ -- $d$-dimensional Lebesgue measure --
Minkowski settled this problem:
\begin{equation}
\label{eq:minkowski0}
\text{\it If $\vol(K)\geq 2^d\cdot \det\Lambda$ then $K$
contains a non-zero lattice point of $\Lambda$. }
\end{equation}
Here $\det\Lambda$ denotes the determinant of the lattice $\Lambda$
and the space of all lattices $\Lambda\subset{\mathbb R}^d$ with
$\det\Lambda\ne 0$ is denoted by ${\mathcal L}^d$.
Minkowski assessed his result as ``ein Satz, der nach meinem
Daf\"urhalten zu den fruchtbarsten in der Zahlenlehre zu rechnen
ist'' (\cite{Min:geozahl}, p.~75) and indeed this theorem has many
applications (cf.~\cite{ErdGruHam:lattpoint}, sec.~3.3).
Minkowski proved even a
stronger result, for which we have to introduce his ``kleinstes
System von unabh\"angig gerichteten Strahlendistanzen im
Zahlengitter'' (\cite{Min:geozahl}, p.~178).
\begin{definition} Let $K\in{\mathcal K}^d_0$ and $\Lambda\in{\mathcal L}^d$.
For $1\leq i\leq d$
\begin{equation*}
\begin{split}
\lambda_i(K,\Lambda)=\min\big\{
\lambda\in{\mathbb R}_{\geq 0}: &\,\lambda K \,\,\text{contains }\, i
\text{ linearly independent} \\
&\text{lattice points of }\Lambda\big\}
\end{split}
\end{equation*}
is called the {\em $i$-th successive minimum of $K$ with respect to~$\Lambda$}.
\end{definition}
Obviously, we have $\lambda_1(K,\Lambda)\leq
\lambda_2(K,\Lambda)\leq\cdots\leq\lambda_d(K,\Lambda)$ and
the first successive minimum $\lambda_1(K,\Lambda)$
is the smallest dilation factor such that $\lambda_1(K,\Lambda)\, K$
contains a non-zero lattice point. With this notation Minkowski's first
theorem on successive minima reads (cf.~\cite{Min:geozahl}, pp.~75)
\begin{theorem}[Minkowski] Let $K \in
${\mathcal K}^d_0$ and $\Lambda\in{\mathcal L}^d$. Then
\begin{equation*}
\lambda_1(K,\Lambda)^d \vol(K)\leq 2^d\,\det\Lambda.
\end{equation*}
\label{thm:Minkowski_first}
\end{theorem}
So $\vol(K)\geq 2^d\det\Lambda$ implies $\lambda_1(K,\Lambda)\leq 1$,
and we get \eqref{eq:minkowski0}.
Minkowski's second theorem on successive minima is a deep improvement of the
first one and says (cf.~\cite{Min:geozahl}, pp.~199)
\begin{theorem}[Minkowski] Let $K \in
{\mathcal K}^d_0$ and $\Lambda\in{\mathcal L}^d$. Then
\begin{equation*}
\lambda_1(K,\Lambda)\cdot\lambda_2(K,\Lambda)\cdot\ldots\cdot\lambda_d(K,\Lambda)\cdot \vol(K)\leq 2^d\,\det\Lambda.
\end{equation*}
\label{thm:Minkowski_second}
\end{theorem}
This inequality is best possible. For instance, with respect to the
integral lattice ${\mathbb Z}^d$, each box with axes
parallel to the coordinate axes gives equality. Although Theorem
\ref{thm:Minkowski_second} does not have as many applications as the first
theorem on successive minima, it shows a beautiful relation
between the volume of $K$ and the expansion of $K$ with respect to
independent lattice directions of a lattice.
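To illustrate the equality case just mentioned: for the box $K=\{x\in{\mathbb R}^d : |x_i|\leq a_i,\, 1\leq i\leq d\}$ with $a_1\geq\cdots\geq a_d>0$ one has $\lambda_i(K,{\mathbb Z}^d)=1/a_i$, and hence
\begin{equation*}
\prod_{i=1}^d \lambda_i(K,{\mathbb Z}^d)\,\vol(K)=\prod_{i=1}^d\frac{1}{a_i}\cdot\prod_{i=1}^d 2\,a_i = 2^d = 2^d\,\det{\mathbb Z}^d.
\end{equation*}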
The importance of Theorem
\ref{thm:Minkowski_second} is also reflected in the number of
different proofs, see e.g.~
{\sc Bambah, Woods \& Zassenhaus} \cite{BamWooZas:succmin},
{\sc Cassels} \cite{Cas:geonum},
{\sc Danicic} \cite{Dan:succmin},
{\sc Davenport} \cite{Dav:succmin},
{\sc Estermann} \cite{Est:succmin},
{\sc Siegel} \cite{Sie:geonum} and
{\sc Weyl} \cite{Wey:succmin}.
In \cite{BetkeHenkWills:successive_minima} it was conjectured that
an inequality analogous to Theorem \ref{thm:Minkowski_second} holds for the
lattice point enumerator $ \#(K\cap\Lambda)$.
More precisely,
\begin{conjecture} Let $K
\in {\mathcal K}^d_0$ and $\Lambda\in{\mathcal L}^d$. Then
\begin{equation}
\#(K\cap\Lambda) \leq \prod_{i=1}^d \left\lfloor
\frac{2}{\lambda_i(K,\Lambda)}+1\right\rfloor.
\label{eq:conj}
\end{equation}
\label{conj:bhw}
\end{conjecture}
Here $\lfloor x \rfloor$ denotes the largest integer not greater than
$x$. An analogous statement to the first theorem of Minkowski on
successive minima was already shown in
\cite{BetkeHenkWills:successive_minima}, namely
\begin{equation}
\#(K\cap\Lambda) \leq
\left\lfloor
\frac{2}{\lambda_1(K,\Lambda)}+1\right\rfloor^d.
\label{eq:lat_min_one}
\end{equation}
It seems to be worth mentioning that if Conjecture \ref{conj:bhw}
were true then we could write by the definition of the Riemann
integral
\begin{equation*}
\frac{\vol(K)}{\det\Lambda}=\lim_{r\to 0} r^d\#(K\cap r\Lambda)
\leq \lim_{r\to 0} \prod_{i=1}^d r \left\lfloor
\frac{2}{\lambda_i(K,r\Lambda)}+1\right\rfloor =\prod_{i=1}^d
\frac{2}{\lambda_i(K,\Lambda)}.
\end{equation*}
Thus Conjecture \ref{conj:bhw} implies Minkowski's second
theorem on successive minima (Theorem \ref{thm:Minkowski_second}).
In \cite{BetkeHenkWills:successive_minima} the validity of the
conjecture was proven in the case $d=2$. Moreover, it was shown that
an upper bound of this type
exists, if in the above product
$\frac{2}{\lambda_i(K,\Lambda)}$ is replaced by
$\frac{2\,i}{\lambda_i(K,\Lambda)}$. So, roughly speaking,
\eqref{eq:conj} holds up to a factor $d!$. Here we shall improve this
bound.
\begin{theorem} Let $d\geq 2$, $K\in {\mathcal K}^d_0$ and $\Lambda\in{\mathcal L}^d$. Then
\begin{equation*}
\#(K\cap\Lambda) < 2^{d-1} \prod_{i=1}^d \left\lfloor
\frac{2}{\lambda_i(K,\Lambda)}+1\right\rfloor.
\end{equation*}
\label{thm:main}
\end{theorem}
The proof of this theorem will be given in the next section.
Minkowski's original proof
(\cite{Min:geozahl}, 199-218) of his second theorem on successive minima
was sometimes
criticised as lengthy and obscure (cf.~\cite{davenport:collected_mink},
p.91). One reason might be that in the scope of the proof he also proves
many basic facts about the volume of a convex body,
like the computation of the volume through successive integrations, etc.,
which cloud a little bit the simple and nice geometrical ideas of his
proof. Based on these ideas we present a short proof of Theorem
\ref{thm:Minkowski_second} in the last section.
For more information on lattices, successive
minima and their role in the geometry of numbers we refer to the books
of {\sc Erd\"os, Gruber and Hammer} \cite{ErdGruHam:lattpoint},
{\sc Gruber and Lekkerkerker} \cite{GruLek:geonum} and the survey of
{\sc Gruber} \cite{Gru:geonum}. For an elementary introduction to
the geometry of numbers see \cite{OldsLaxDavidoff:geometry_of_numbers}.
\section{Proof of Theorem \ref{thm:main}}
Before giving the proof we list some basic facts on
lattices, for which we refer to \cite{GruLek:geonum}.
Every lattice $\Lambda\in{\mathcal L}^d$ can be
written as $\Lambda=A{\mathbb Z}^d$, where $A$ is a non-singular $(d\times
d)$-matrix,
i.e., $A\in {\rm GL}(d,{\mathbb R})$. In particular we have
$\lambda_i(K,\Lambda)=\lambda_i(A^{-1}K,{\mathbb Z}^d)$ and
$\#(K\cap\Lambda)=\#(A^{-1}K\cap{\mathbb Z}^d)$. A lattice
$\widetilde{\Lambda}\in{\mathcal L}^d$ is called a sublattice of
$\Lambda\in{\mathcal L}^d$ if $\widetilde{\Lambda}\subset\Lambda$. For
$a,\overline{a}\in\Lambda$ and a sublattice
$\widetilde{\Lambda}\subset\Lambda$ we write
$$
a\equiv \overline{a}\bmod \widetilde{\Lambda} \Leftrightarrow
(a-\overline{a})\in\widetilde{\Lambda}.
$$
In words, $a,\overline{a}$ belong to the same residue class (coset) of
$\Lambda$ with respect to~$\widetilde{\Lambda}$. We note that there are precisely
$\det\widetilde{\Lambda}/\det\Lambda$ different residue classes of
$\Lambda$ with respect to~$\widetilde{\Lambda}$. For every set of
$d$ linearly independent lattice points
$a^1,\dots,a^d$ of a lattice $\Lambda$ there
exists a basis $b^1,\dots,b^d$ of $\Lambda$ such that
$\lin\{a^1,\dots,a^i\}=\lin\{b^1,\dots,b^i\}$,
where $\lin$ denotes the linear hull. In particular, given $d$
linearly independent lattice vectors $z^i\in{\mathbb Z}^d$, $1\leq i\leq d$,
with $z^i\in\lambda_i(K,{\mathbb Z}^d)\,K$ then there exists a unimodular
matrix $U$, i.e., $U\in {\mathbb Z}^{d\times
d}$ with $|\det U|=1$, such that
\begin{equation}
Uz^i\in\left(\lambda_i(UK,{\mathbb Z}^d)\,UK\right)
\cap\lin\{e^1,\dots,e^i\},\quad 1\leq
i\leq d,
\label{eq:succ_minima}
\end{equation}
where $e^i\in{\mathbb R}^d$ denotes the $i$-th unit vector.
Furthermore we note that for $d$ linearly independent lattice points
$a^1,\dots,a^d$ of a lattice
$\Lambda\in{\mathcal L}^d$ satisfying $a^i\in\lambda_i(K,\Lambda)\,K$, the
definition of the successive minima implies
\begin{equation}
\inter\left(\lambda_i(K,\Lambda)\,K\right)\cap \Lambda
\subset\lin\{0,a^1,\dots,a^{i-1}\}\cap \Lambda,\quad 1\leq i\leq d,
\label{eq:succ_cons}
\end{equation}
where $\inter$ denotes the interior.
For the proof of Theorem \ref{thm:main} we need the following simple
lemma.
\begin{lemma} Let $K\in{\mathcal K}^d_0$, $\Lambda\in{\mathcal L}^d$ and let $\widetilde{\Lambda}$ be a sublattice of $\Lambda$. Then
$$
\#\left(K\cap\Lambda\right)\leq
\frac{\det\widetilde{\Lambda}}{\det\Lambda}\,\#\left(2\,K\cap\widetilde{\Lambda}\right).
$$
\label{lem:lat_suc}
\end{lemma}
\begin{proof} Let $m=\#(2\,K\cap\widetilde{\Lambda})$ and
suppose there exist at least $m+1$ different lattice points
$a^1,\dots,a^{m+1}\in K\cap\Lambda$ such that $a^i\equiv
a^1\bmod\widetilde{\Lambda}$, $1\leq i\leq m+1$. Then we have
$$
a^i-a^1 \in (K-K)\cap\widetilde{\Lambda}=2K\cap
\widetilde{\Lambda}, \quad 1\leq i\leq m+1,
$$
which contradicts the assumption
$\#(2\,K\cap\widetilde{\Lambda})=m$. Thus we have shown that no
residue class of $\Lambda$
with respect to~$\widetilde{\Lambda}$ contains more than $m$
points of $K\cap\Lambda$. Since there are precisely
$\det\widetilde{\Lambda}/\det\Lambda$ different residue classes, we get the
desired bound.
\end{proof}
We remark that inequality \eqref{eq:lat_min_one} is a simple
consequence of this lemma. To see this we set $n_1=\lfloor
2/\lambda_1(K,\Lambda)+1\rfloor$ and
$\widetilde{\Lambda}=n_1\Lambda$. Next we observe that $a\in
2K\cap\widetilde{\Lambda}$ implies that $\frac{1}{n_1}a\in
\inter(\lambda_1(K,\Lambda)\,K)\cap\Lambda$ and from \eqref{eq:succ_cons} we
conclude $\#(2K\cap\widetilde{\Lambda})=1$. Thus Lemma
\ref{lem:lat_suc} gives
\begin{equation*}
\#(K\cap\Lambda) \leq \frac{\det\widetilde{\Lambda}}{\det\Lambda}=(n_1)^d =\left\lfloor
\frac{2}{\lambda_1(K,\Lambda)}+1\right\rfloor^d.
\end{equation*}
Next we come to the proof of Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}] W.l.o.g.~let
$\Lambda={\mathbb Z}^d$ and we may assume that (cf.~\eqref{eq:succ_minima}
and \eqref{eq:succ_cons})
\begin{equation}
\inter\left(\lambda_i(K,{\mathbb Z}^d)K\right)\cap{\mathbb Z}^d
\subset\lin\{0,e^1,\dots,e^{i-1}\}\cap{\mathbb Z}^d,\quad
1\leq i\leq d.
\label{eq:proof_assum}
\end{equation}
For abbreviation we set
$q_i=\lfloor\frac{2}{\lambda_i(K,\Lambda)}+1\rfloor$, $1\leq i\leq
d$, and first we determine $d$ numbers $n_i\in{\mathbb N}$ such that
\begin{equation}
\begin{split}
n_d =q_d, \quad
q_i \leq n_i
< 2\,q_i, \quad \text{ and }\quad n_{i+1} \text{ divides } n_i,\quad\,1\leq i\leq d-1.
\end{split}
\label{eq:proof_numbers}
\end{equation}
Suppose we have already found
$n_d,\dots,n_{k+1}$ with these properties. In order to determine
$n_k$ we distinguish two cases. If $n_{k+1}\geq q_k$ we
set $n_k=n_{k+1}$. Since $q_k\geq q_{k+1}$ we obtain $q_k\leq
n_k=n_{k+1}<2\,q_{k+1}\leq 2\,q_k$. Otherwise, if $n_{k+1}<q_k$ let
$q_k=m\cdot n_{k+1}+r$ with $m\in{\mathbb N}$, $m\geq 1$, and $0\leq r<n_{k+1}$. In this
case we set $n_k=q_k+n_{k+1}-r$ and obviously, $n_k$ meets the
requirements of \eqref{eq:proof_numbers}.
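For instance, if $d=3$ and $(q_1,q_2,q_3)=(7,5,2)$, this recursion yields $n_3=2$; then $5=2\cdot 2+1$ gives $n_2=5+2-1=6$, and $7=1\cdot 6+1$ gives $n_1=7+6-1=12$; indeed $q_i\leq n_i<2\,q_i$ and $n_{i+1}$ divides $n_i$.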
Now let $\widetilde{\Lambda}\subset {\mathbb Z}^d$ be the lattice generated by the vectors
$n_1\,e^1,n_2\,e^2,\dots,n_d\,e^d$. Then we have
$\det\widetilde{\Lambda}/\det\Lambda=n_1\cdot n_2\cdot\ldots\cdot n_d$
and together with the upper bounds on
the numbers $n_i$, Lemma \ref{lem:lat_suc} gives
\begin{equation}
\#\left(K\cap\Lambda\right)\leq \#\left(2\,K\cap\widetilde{\Lambda}\right)\,\prod_{i=1}^d n_i
< \#\left(2\,K\cap\widetilde{\Lambda}\right)\,2^{d-1}
\prod_{i=1}^d\left\lfloor\frac{2}{\lambda_i(K,\Lambda)}+1\right\rfloor.
\label{eq:implies_thm}
\end{equation}
Hence, in order to verify the theorem, it suffices to show $2K\cap\widetilde{\Lambda}=\{0\}$.
Suppose there exists a $g\in 2K\cap\widetilde{\Lambda}\setminus\{0\}$
and let $k$
be the largest index of a non-zero coordinate of $g$, i.e., $g_k\ne 0$
and $g_{k+1}=\cdots =g_d=0$. Then we may write
$$
g=z_1\, (n_1e^1)+z_2\,(n_2e^2) + \cdots + z_k\,(n_ke^k)\in 2K
$$
for some $z_i\in{\mathbb Z}$. Since $n_{k}$ is a divisor of
$n_1,\dots,n_{k-1}$ and since
$2/n_k<\lambda_k(K,{\mathbb Z}^d)$ (cf.~\eqref{eq:proof_numbers}) we obtain
$$
\frac{1}{n_k} g \in \left(\frac{2}{n_k} K\right) \cap
{\mathbb Z}^d\subset
\inter(\lambda_k(K,{\mathbb Z}^d)K)\cap{\mathbb Z}^d.
$$
However, since $g_k\ne 0$ this relation violates
\eqref{eq:proof_assum}. Thus we have
$2K\cap\widetilde{\Lambda}=\{0\}$ and the theorem is proven.
\end{proof}
\section{Proof of Theorem \ref{thm:Minkowski_second} }
Minkowski's proof of his second theorem on successive minima can be found
in his book ``Geometrie der Zahlen'' (\cite{Min:geozahl},
199--219) and for an English translation we refer to
\cite{hancock:geonum}, 570--603.
\noindent
\begin{proof}[Proof of Theorem \ref{thm:Minkowski_second}\,\,{\rm
(following Minkowski)}]
Again w.l.o.g.~we may assume that $\Lambda={\mathbb Z}^d$. For convenience we write
$\lambda_i=\lambda_i(K,{\mathbb Z}^d)$ and set $K_i=\frac{\lambda_i}{2}K$.
Furthermore, we assume that $z^1,\dots,z^d$ are $d$ linearly
independent lattice points with $z^i\in \lambda_i K\cap {\mathbb Z}^d$ and
$\lin\{z^1,\dots,z^i\}=\lin\{e^1,\dots,e^i\}$, $1\leq i\leq d$,
(cf.~\eqref{eq:succ_minima}). For short, we denote the linear space $\lin\{e^1,\dots,e^i\}$ by $L_i$.
For an integer $q\in {\mathbb N}$ let $M_q^d=\{z\in{\mathbb Z}^d : |z_i|\leq q,\,
1\leq i\leq d\}$
and for $1\leq j\leq d-1$ let $M_q^j=M_q^d\cap L_j$.
Since $K$ is a bounded set there exists a constant $\gamma$,
only depending on $K$, such that
\begin{equation}
\vol(M_q^d+K_d)\leq (2q+\gamma)^d.
\label{eq:second_one}
\end{equation}
By the definition of $\lambda_1$ we have
$(z+\inter(K_1))\cap(\overline{z}+\inter(K_1))=\emptyset$ for two
different lattice points $z,\overline{z}\in{\mathbb Z}^d$, because otherwise we
would get the contradiction
$z-\overline{z}\in(\inter(K_1)-\inter(K_1))\cap{\mathbb Z}^d=\inter(K_1-K_1)\cap{\mathbb Z}^d=
\inter(\lambda_1\,K)\cap{\mathbb Z}^d=\{0\}$. Thus we have
\begin{equation}
\vol(M_q^d+K_1)= (2q+1)^d\vol(K_1) = (2q+1)^d\left(\frac{\lambda_1}{2}\right)^d \vol(K).
\label{eq:second_two}
\end{equation}
In the following we shall show that for $1\leq i\leq d-1$
\begin{equation}
\vol(M_q^d+K_{i+1})\geq \left(\frac{\lambda_{i+1}}{\lambda_i}\right)^{d-i}
\vol(M_q^d+K_i).
\label{eq:second_three}
\end{equation}
To this end we may assume $\lambda_{i+1}>\lambda_i$ and let
$z,\overline{z}\in{\mathbb Z}^d$, which differ in the last $d-i$ coordinates, i.e., $(z_{i+1},\dots,z_{d})\ne (\overline{z}_{i+1},\dots,\overline{z}_{d})$. Then
\begin{equation}
\left[z+\inter(K_{i+1})\right]\cap \left[\overline{z}+\inter(K_{i+1})\right]=\emptyset.
\label{eq:cut}
\end{equation}
Otherwise the $i+1$ linearly independent lattice points $z-\overline{z},z^1,\dots,z^i$ belong to the interior of $\lambda_{i+1}K$ which contradicts the minimality of $\lambda_{i+1}$. Hence we obtain from \eqref{eq:cut}
\begin{equation*}
\begin{split}
\vol\left(M_q^d+K_{i+1}\right) & =
(2q+1)^{d-i}\,\vol\left(M_q^i+K_{i+1}\right),
\\
\vol\left(M_q^d+K_{i}\right) & =(2q+1)^{d-i}\,
\vol\left(M_q^i+K_{i}\right).
\end{split}
\end{equation*}
and in order to verify \eqref{eq:second_three} it suffices to show
\begin{equation}
\vol\left(M_q^i+K_{i+1}\right)\geq
\left(\frac{\lambda_{i+1}}{\lambda_i}\right)^{d-i} \vol(M_q^i+K_i).
\label{eq:second_five}
\end{equation}
Let $f_1,f_2:{\mathbb R}^d\to {\mathbb R}^d$ be the linear maps given by
\begin{eqnarray*}
f_1(x)&=&\left( \frac{\lambda_{i+1}}{\lambda_i} x_1,\dots,
\frac{\lambda_{i+1}}{\lambda_i} x_i, x_{i+1},\dots,x_d\right)^\intercal,\\
f_2(x)&=&\left(x_1,\dots,x_i,\frac{\lambda_{i+1}}{\lambda_i} x_{i+1},\dots,
\frac{\lambda_{i+1}}{\lambda_i} x_d\right)^\intercal.
\end{eqnarray*}
Since $M^i_q+K_{i+1}=f_2(M_q^i+f_1(K_i))$ we get
$$
\vol\left(M_q^i+K_{i+1}\right)=
\left(\frac{\lambda_{i+1}}{\lambda_i}\right)^{d-i} \vol(M_q^i+f_1(K_i))
$$
and for the proof of \eqref{eq:second_five} we have to show
\begin{equation}
\vol\left(M_q^i+f_1(K_i)\right) \geq \vol\left(M_q^i+K_i\right).
\label{eq:second_six}
\end{equation}
To this end let $L_i^\perp$ be the $(d-i)$-dimensional
orthogonal complement of $L_i$. Then it is easy to see that
for every $x\in L_i^\perp$ there exists a $t(x)\in L_i$ with
$K_i\cap (x+L_i)\subset (f_1(K_i)\cap (x+L_i))+t(x)$ and so
$$
\left(M^i_q+K_i\right)\cap\left(x+L_i\right)\subset
\left[\left(M^i_q+f_1(K_i)\right)\cap\left(x+L_i\right)\right]+t(x).
$$
Thus we get
\begin{eqnarray*}
\vol(M_q^i+K_i)&=&\int_{x\in L_i^{\perp}}
\vol_i\left((M_q^i+K_i)\cap(x+L_i)\right){\rm d}\,x \\
&\leq &
\int_{x\in L_i^{\perp}}
\vol_i\left((M_q^i+f_1(K_i))\cap(x+L_i)\right){\rm d}\,x \\
& = &
\vol(M_q^i+f_1(K_i)),
\end{eqnarray*}
where $\vol_i(\cdot)$ denotes the $i$-dimensional volume. This shows
\eqref{eq:second_six} and so we have verified \eqref{eq:second_three}.
Finally, it follows from \eqref{eq:second_one}, \eqref{eq:second_two} and \eqref{eq:second_three}
\begin{equation*}
\begin{split}
(2q+\gamma)^d & \geq
\vol\left(M^d_q+K_d\right) \geq
\left(\frac{\lambda_d}{\lambda_{d-1}}\right)
\vol\left(M^d_q+K_{d-1}\right) \\
& \geq \left(\frac{\lambda_d}{\lambda_{d-1}}\right)\left(\frac{\lambda_{d-1}}{\lambda_{d-2}}\right)^2 \vol\left(M^d_q+K_{d-2}\right)
\geq \cdots\cdots \\ & \geq
\left(\frac{\lambda_d}{\lambda_{d-1}}\right)\cdot
\left(\frac{\lambda_{d-1}}{\lambda_{d-2}}\right)^2\cdot
\ldots\cdot
\left(\frac{\lambda_{2}}{\lambda_{1}}\right)^{d-1}\,
\vol\left(M^d_q+K_1\right) \\[0.5ex]
& = \lambda_d\cdot\ldots\cdot\lambda_1\,
\frac{\vol(K)}{2^d}\,(2q+1)^d
\end{split}
\end{equation*}
and so
$$
\lambda_1\cdot\ldots\cdot\lambda_d\,
\vol(K)
\leq 2^d \cdot\left(\frac{2q+\gamma}{2q+1}\right)^d.
$$
Since this holds for all $q\in{\mathbb N}$ the theorem is proven.
\end{proof}
\end{document}
|
\begin{document}
\title{Conformal surface embeddings and extremal length}
\author[Kahn]{Jeremy Kahn}
\address{Brown University\\
151 Thayer Street,
Providence, RI 02912\\
USA}
\email{jeremy\[email protected]}
\author[Pilgrim]{Kevin M.~Pilgrim}
\address{Indiana University\\
831 E. Third St.,
Bloomington, Indiana 47405\\
USA}
\email{[email protected]}
\author[Thurston]{Dylan~P.~Thurston}
\address{Indiana University\\
831 E. Third St.,
Bloomington, Indiana 47405\\
USA}
\email{[email protected]}
\subjclass[2010]{Primary 30F60; Secondary 31A15, 32G15}
\keywords{Riemann surfaces with boundary, conformal embeddings, extremal length}
\begin{abstract}
Given two Riemann surfaces with boundary and a homotopy class of
topological embeddings between them, there is a conformal embedding
in the homotopy class if and only if the extremal length of every simple
multi-curve is decreased under the embedding. Furthermore, the
homotopy class has a conformal embedding that misses an open disk
if and only if extremal lengths are decreased by a definite
ratio. This ratio remains bounded away from one under finite covers.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec:intro}
Let $R$ and $S$ be two Riemann surfaces of finite topological type,
possibly with
boundary, and let $f \colon
R \hookrightarrow S$ be a topological embedding. The goal
of this paper is to give conditions for $f$ to
be homotopic to a \emph{conformal} embedding, possibly with extra nice
properties.
We give an answer in terms of ratios of extremal
lengths of simple multi-curves.
For us, surfaces~$S$ are of finite type and a simple multi-curve
on~$S$ is an embedded 1-manifold
in~$S$. See Definitions~\ref{def:surface}
and~\ref{def:curves} for the full
definitions.
The \emph{extremal length} $\EL_S[C]$ of a simple curve~$C$
is a measure of the fattest annulus
that can be embedded in~$S$ with core curve isotopic to~$C$. See
Section~\ref{sec:ext-length} for more on extremal length of
multi-curves.
\begin{definition}\label{def:sf}
For $f \colon R \hookrightarrow S$ a topological
embedding of Riemann surfaces, the \emph{stretch factor} of~$f$
is the maximal ratio of extremal lengths between the two surfaces:
\begin{equation*}
\SF[f] \coloneqq \sup_{C \in \CCurves^+(R)}
\frac{\EL_S[f(C)]}{\EL_R[C]},
\end{equation*}
where the supremum runs over all simple multi-curves~$C$ with
$\EL_R[C] \ne 0$.
\end{definition}
We will show that $\SF[f]$ is achieved by a ratio of
extremal lengths of two measured foliations, not multi-curves. But $f$
does not induce a natural continuous map between measured foliations
(Example~\ref{examp:erase-hole}), so Definition~\ref{def:sf} is stated
in terms of multi-curves.
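For orientation, consider the simplest example: if $R$ and $S$ are annuli of moduli $m$ and $M$, respectively, and $f$ sends the core curve of~$R$ to the core curve of~$S$, then the only non-trivial simple closed curve is the core~$C$, with $\EL_R[C]=1/m$ and $\EL_S[f(C)]=1/M$, so $\SF[f]=m/M$. Theorem~\ref{thm:emb} below then recovers the classical fact that such an $f$ is homotopic to a conformal embedding if and only if $m\le M$.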
\begin{theorem}\label{thm:emb}
Let $R$ and $S$ be Riemann surfaces and $f\colon
R \hookrightarrow S$ be a
topological embedding so that no component of $f(R)$ is contained in
a disk or a once-punctured disk. Then $f$
is homotopic to a conformal embedding if and only if $\SF[f] \le 1$.
\end{theorem}
The key part of Theorem~\ref{thm:emb} is due to Ioffe
\cite{Ioffe75:QCImbedding}.
In fact, his results show that if $\SF[f] \ge 1$, then the stretch factor is related to the
quasi-conformal constant as follows.
\begin{proposition}\label{prop:sf-qc}
Let $f\colon R \hookrightarrow S$ be a topological
embedding of Riemann surfaces. If $\SF[f] \ge 1$, then
$\SF[f]$ is equal to the smallest quasi-conformal constant of any
quasi-conformal embedding homotopic to~$f$.
\end{proposition}
We can also characterize conformal embeddings with some extra ``room''.
\begin{definition}
Let $f \colon R \hookrightarrow
S$ be a conformal embedding between Riemann surfaces. We say that $f$ is a
\emph{strict} embedding if its image omits a
non-empty open subset of each component of~$S$.
An
\emph{annular extension} of a Riemann surface $S$
is a surface $\widehat S$ obtained by attaching a non-empty conformal
annulus to each boundary component, with the boundary of $S$
smoothly embedded in $\widehat S$. An \emph{annular
conformal embedding} is one that extends to a conformal embedding
$\widehat{R} \hookrightarrow S$ for some annular
extension $\widehat{R}$ of~$R$.
\end{definition}
\begin{remark}
A similar relation for subsets of $\mathbb C$ is sometimes written $f(R)
\Subset S$ \cite[inter alia]{CPT16:Renorm}.
\end{remark}
\begin{theorem}\label{thm:strict-emb}
Let $R$ and $S$ be Riemann surfaces, with $S$ connected, and let $f\colon
R \hookrightarrow S$ be a
topological embedding so that no component of $f(R)$ is contained in
a disk or a once-punctured disk. Then the following conditions are equivalent:
\begin{enumerate}
\item\label{item:strict-strict} $f$ is homotopic to a strict
conformal embedding;
\item\label{item:strict-annular} $f$ is homotopic to an annular
conformal embedding;
\item\label{item:strict-ball} there is a neighborhood $N$ of $S$ in
Teichmüller space so that, for all $S' \in N$, $f$ is homotopic to a
conformal embedding of $R$ in~$S'$; and
\item\label{item:strict-sf} $\SF[f] < 1$.
\end{enumerate}
\end{theorem}
\begin{remark}
When $\SF[f] = 1$, the embedding guaranteed by Theorem~\ref{thm:emb}
is instead a Teichmüller embedding in the sense of Definition~\ref{def:slit},
with $K=1$ \cite{FB18:Couch}.
\end{remark}
In condition~(\ref{item:strict-ball}), $\SF[f]$ is related to the
size of the ball in Teichmüller space.
\begin{definition}\label{def:teich-embed}
Let $f \colon R \hookrightarrow S$ be a topological embedding of Riemann
surfaces. Let $\mathcal{T}_R(S)$ be the subset of the Teichmüller
space $\mathcal{T}(S)$ for which there is a conformal embedding
of~$R$ in the homotopy class~$[f]$. (This is empty if $\SF[f] > 1$.)
\end{definition}
\begin{proposition}\label{prop:sf-dist}
Let $f\colon R \hookrightarrow S$ be a topological
embedding of Riemann surfaces, and suppose $\SF[f] \le 1$.
Then
\[
d(S, \partial\mathcal{T}_R(S)) =
-\frac{1}{2}\log \SF[f].
\]
\end{proposition}
We can also control the behavior of the
stretch factor under
taking covers. Proposition~\ref{prop:sf-qc} guarantees that when
$\SF[f] \ge 1$, the stretch factor is unchanged under taking
finite covers (see Proposition~\ref{prop:SF-large-cover}).
We can control what happens when $\SF[f] < 1$, as
well.
\begin{definition}\label{def:cover-map}
For $f \colon R \hookrightarrow S$ a topological
embedding of Riemann surfaces and $p \colon \wt S \to S$ a covering
map, the corresponding \emph{cover} of $f$ is
the pull-back map~$\wt f$ in the diagram
\[
\begin{tikzpicture}
\matrix[row sep=0.7cm,column sep=0.8cm] {
\node (S1t) {$\wt R$}; &
\node (S2t) {$\wt S$}; \\
\node (S1) {$R$}; &
\node (S2) {$S$.}; \\
};
\draw[->] (S1) to node[auto=left,cdlabel] {f} (S2);
\draw[->,dashed] (S1t) to node[auto=left,cdlabel] {\wt f} (S2t);
\draw[->] (S2t) to node[right,cdlabel] {p} (S2);
\draw[->,dashed] (S1t) to node[right,cdlabel] {q} (S1);
\end{tikzpicture}
\]
Explicitly, we have
\begin{align*}
\wt R &\coloneqq \bigl\{\,(r, \wt{s}) \in R \times \wt{S}
\mid f(r) = p(\wt{s})\,\bigr\}\\
\wt{f}(r,\wt{s}) &\coloneqq \wt{s}\\
q(r,\wt{s}) &\coloneqq r.
\end{align*}
Then $\wt f$ is a topological embedding and $q$ is a
covering map.
We may also say that $\wt f$ is a cover of $f$, without
specifying~$p$.
\end{definition}
\begin{definition}\label{def:lifted-stretch}
For $f \colon R \hookrightarrow S$ a topological
embedding of Riemann surfaces, the \emph{lifted stretch factor}
$\wt{\SF}[f]$ is
\[
\wt{\SF}[f] \coloneqq \sup_{\substack{\text{$\wt f$ finite}\\\text{cover of $f$}}} \SF[\wt f].
\]
\end{definition}
\begin{theorem}\label{thm:sf-cover}
Let $f \colon R \hookrightarrow S$ be a topological
embedding of Riemann surfaces. If $\SF[f] \ge 1$, then
$\wt\SF[f] = \SF[f]$. If $\SF[f] < 1$, then
\[
\SF[f] \le \wt\SF[f] < 1.
\]
\end{theorem}
The hard part of Theorem~\ref{thm:sf-cover} is showing that
$\wt\SF[f]$ is strictly less than $1$ when $\SF[f] < 1$.
By Proposition~\ref{prop:sf-dist}, $\wt \SF[f] < 1$ is equivalent to saying
that $\mathcal{T}_{\wt R}(\wt S)$ contains a ball of uniform size
around~$\wt S$ for every finite cover of~$f$.
Theorem~\ref{thm:sf-cover} will be used in later work
\cite{Thurston16:RubberBands}
to give a positive characterization of
post-critically finite rational maps among topological branched
self-covers of the sphere. This provides a counterpoint to W. Thurston's
characterization \cite{DH93:ThurstonChar}, which characterizes
rational maps in terms of an obstruction.
\subsection{History}
\label{sec:history}
The maximum of the ratio of extremal lengths has appeared before,
usually in the context of closed surfaces, where it gives
Teichmüller distance, as first proved by Kerckhoff
(see Theorem~\ref{thm:teich-dist} below). For surfaces with
boundary
the behavior is quite different, as the stretch factor can be less
than one.
In the special case when the
target~$S$ is a closed torus, there is very precise information about
when $R$ conformally embeds inside of~$S$
\cite{Shiba87:ModuliCont,Shiba93:SpansLowGenus}. Shiba proves that in
this case
$\mathcal{T}_R(S)$ is a disk with
respect to the Teichmüller metric.
There has been earlier work on portions of
Theorem~\ref{thm:strict-emb}. In
particular, Earle and
Marden \cite{EM78:ConfEmbeddings} showed that, with extra
topological restrictions on the embedding $R \hookrightarrow S$,
if $f$ is homotopic to a strict conformal embedding then it is
homotopic to an annular conformal embedding.
It is tempting to look for an analogue of Theorem~\ref{thm:emb} using
hyperbolic length instead of extremal length, given that, by the
Schwarz lemma, hyperbolic length is decreased under conformal
inclusion. However, the results are false for hyperbolic length in
almost all cases
\cite{Masumoto00:HypEmbedding,FB14:ConverseSchwarz}.
These results were first announced in a research report by the last
author \cite{Thurston16:RubberBands}.
\subsection{Organization}
\label{sec:organization}
Section~\ref{sec:setting} reviews background material and specifies our
definitions for topological surfaces. Section~\ref{sec:conformal} does
the same for Riemann surfaces and extremal length, as well as giving
elementary properties of the stretch factor. Section~\ref{sec:slit}
proves Theorem~\ref{thm:emb}, largely based on a theorem of
Ioffe. Section~\ref{sec:strict} extends this to prove
Theorem~\ref{thm:strict-emb}. Section~\ref{sec:covers} gives the
further extension to prove Theorem~\ref{thm:sf-cover}. In
Section~\ref{sec:covers}, we also prove Theorem~\ref{thm:area-surface}, an
estimate on areas of subsurfaces with respect to quadratic
differentials; this may be of
independent interest. Section~\ref{sec:challenges} gives some
directions for future research, and in the process gives another way
to get an upper bound on $\wt\SF[f]$.
\subsection{Acknowledgments}
We thank
Matt Bainbridge,
Maxime Fortier Bourque, and
Frederick Gardiner
for many helpful conversations.
Aaron Cohen,
Russell Lodge,
Insung Park, and
Maxime Scott
gave useful comments on earlier drafts.
JK was supported by NSF grant DMS-1352721.
KMP was supported by Simons Foundation Collaboration Grant \#4429407.
DPT was supported by NSF grants DMS-1358638 and DMS-1507244.
\section{Topological Setting}
\label{sec:setting}
\begin{definition}\label{def:surface}
By a (smooth) \emph{surface}~$S$ we mean a smooth, oriented, compact
2-manifold
with boundary, together with a distinguished finite set~$P$ of points
in~$S$, the \emph{punctures}. The boundary $\partial S$ of~$S$ is
a finite union of circles. By a slight abuse of terminology, by the
interior~$S^\circ$ of~$S$ we mean $S \setminus (P \cup \partial S)$.
If we want to emphasize that we are talking about the
compact version of~$S$, we will write $\closure{S}$.
A surface is \emph{small} if it is the sphere with $0$, $1$,
or~$2$ punctures or the unit disk with $0$ or~$1$ punctures. These
are the surfaces that have no non-trivial curves by the definition
below.
By a \emph{topological map} $f \colon R \to S$ between surfaces we mean an
orientation\hyp preserving continuous map from
$R^\circ$ to~$S^\circ$ that extends to a continuous map from $\closure{R}$ to~$\closure{S}$. In
particular, the image of a puncture is a puncture or a regular
point, and embeddings are only required to be one-to-one on $R^\circ$.
Homotopies are taken within the same space of maps.
\end{definition}
\begin{definition}\label{def:curves}
A \emph{multi-curve}~$C$ on a surface~$S$ is a smooth 1-manifold with
boundary $X(C)$ together with an immersion from the interior of $X(C)$
into~$S^\circ$ that maps $\partial X(C)$ to
$\partial S$. We do not assume that $X(C)$ is connected; if it
is, $C$ is said to be \emph{connected} or a
\emph{curve}. We will mostly be concerned with \emph{simple}
multi-curves, those for which the immersion is an embedding. An \emph{arc}
is a curve for which $X(C)$ is an interval, and a \emph{loop} is a
curve for which $X(C)$ is a circle. A multi-curve is \emph{closed}
if it has no arc components.
A (multi-)curve is \emph{trivial} if
it is contained in a disk or once-punctured disk of~$S$.
\emph{Equivalence} of multi-curves is the equivalence relation generated
by
\begin{enumerate}
\item homotopy within the space of all maps taking $\partial X(C)$ to
$\partial S$ (not necessarily immersions),
\item reparametrization of the 1-manifold $X(C)$ (including
orientation reversal), and
\item dropping trivial components.
\end{enumerate}
The equivalence class of~$C$ is denoted~$[C]$. The space of simple
multi-curves on~$S$ up to homotopy is denoted $\CCurves^{\pm}(S)$. If
$\partial S \ne \emptyset$, then we distinguish two subsets of
$\CCurves^{\pm}(S)$:
\begin{itemize}
\item $\CCurves^+(S) \subset \CCurves^{\pm}(S)$ is the subset of closed
curves and
\item $\CCurves^-(S) \subset \CCurves^{\pm}(S)$ is the subset with no loops
parallel to a boundary component.
\end{itemize}
A \emph{weighted multi-curve} $C = \sum a_iC_i$ is a multi-curve in which each
connected component is given a positive real coefficient~$a_i$. When
considering equivalence of weighted multi-curves, we add the further
relation that two parallel components may be merged and their
weights added. We write $\CCurves_\mathbb R(S)$ or
$\CCurves_\mathbb Q(S)$ for the space of weighted multi-curves with real or
rational weights, respectively.
\end{definition}
\begin{definition}\label{def:foliation}
A (positive) \emph{measured foliation} $F$ on a surface $S$ is
a singular 1-dimensional foliation on $\overline{S}$, tangent
to~$\partial S$, with a non-zero transverse
measure. $F$ is allowed to have $k$-prong singularities, as
described, for instance, in
\cite{FLP79:TravauxThurston}, and summarized below.
\begin{itemize}
\item At points of $S^\circ$, we allow $k$-prong singularities for $k
\ge 3$. (If there are only $2$ prongs, it is not a singularity.)
This is also called a zero of order $k-2$.
\item At punctures, we allow $k$-prong singularities for $k \ge
1$. This is also called a zero of order $k-1$.
\item At points of~$\partial S$, we allow $k$-prong singularities for
$k \ge 3$. This is also called a zero of order $k-2$. If we double
the surface, it becomes a $(2k-2)$-prong singularity.
\end{itemize}
We also admit the empty (zero) measured foliation as a degenerate case.
A \emph{singular leaf} of a measured foliation is a leaf that ends
at a singularity. A
\emph{saddle connection} is a
singular leaf that ends at singularities in both
directions. If a saddle connection
connects two distinct singularities, and at least one of the
singularities is in
the interior, it is possible to \emph{collapse} it to form a new
measured foliation.
\emph{Whitehead equivalence} of measured foliations is
the equivalence relation generated by homotopy and collapsing saddle
connections. We denote the Whitehead equivalence class of a
measured foliation by~$[F]$, and the set of Whitehead equivalence
classes of measured foliations by $\mathcal{MF}^+(S)$.
\end{definition}
From a multi-curve $C \in \CCurves^-(S)$ and a measured foliation $F$
on~$S$, we can form the intersection number
\begin{equation}\label{eq:mf-intersect}
i([C],[F]) \coloneqq \inf_{C_1 \in [C]} \int_t \abs{F(C_1'(t))}\,dt.
\end{equation}
\begin{proposition}\label{prop:mf-measure}
The map
\begin{align*}
\mathcal{MF}^+(S) &\rightarrow \mathbb R^{\CCurves^-(S)}\\
[F] &\mapsto \bigl(i([C],[F])\bigr)_{[C] \in \CCurves^-(S)}
\end{align*}
is an injection, with image a finite-dimensional manifold determined
by its projection onto finitely many factors.
\end{proposition}
\begin{proof}[Proof sketch]
This is standard. If $S$ has non-empty boundary, take a
maximal set $(C_i)_{i=1}^n$ of non-intersecting arcs in
$\CCurves^-(S)$. Then $(i([C_i],[F]))_{i=1}^n$ determines~$F$ up
to Whitehead equivalence. If $S$ has no boundary, the construction
is more involved, and we omit it.
\end{proof}
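For instance, when $S$ is a pair of pants (a sphere minus three disks), one
natural choice is the three disjoint essential arcs joining the three pairs of
boundary components; the corresponding three intersection numbers
determine~$[F]$, matching the $3$-dimensional space of measured foliations on
the pair of pants mentioned in Remark~\ref{rem:connected} below.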
Proposition~\ref{prop:mf-measure} can be used to define a topology on
$\mathcal{MF}^+(S)$, which we will use.
\begin{proposition}
The projection map from all measured foliations (not up to
equivalence, with its natural function topology) to $\mathcal{MF}^+(S)$ is
continuous.
\end{proposition}
\begin{proof}[Proof sketch]
For any non-zero measured foliation~$F_0$ and $[C] \in \CCurves^-(S)$, there
is a quasi-transverse representative $C_0\in [C]$, which
automatically satisfies $i(C_0,F_0) = i([C],[F_0])$. If $F_1$ is any
measured foliation close to~$F_0$, then an analysis of the behavior
near singularities shows that there is a representative $C_1\in [C]$
so that $C_1$ is close to $C_0$ and $C_1$ is quasi-transverse with
respect to $F_1$. Then $i([C],[F_1]) = i(C_1, F_1)$ and $i(C_1, F_1)$
is close to $i(C_0, F_0)$.
\end{proof}
We can also use Proposition~\ref{prop:mf-measure} to define a map from
$\CCurves^+(S)$ to $\mathcal{MF}^+(S)$, sending $[C] \in \CCurves^+(S)$ to the unique
measured foliation $[F_C]\in\mathcal{MF}^+(S)$ so that $i([C'],[C]) =
i([C'],[F_C])$ for all $C' \in \CCurves^-(S)$. This map is an embedding
on equivalence classes of weighted simple multi-curves.
\begin{definition}
A \emph{train track}~$T$ on a surface~$S$ is a graph $G$ embedded
in~$S$, so that at each vertex of~$G$ (called a \emph{switch}) the
incident edges are partitioned into two non-empty subsets that are
non-crossing in the cyclic order of the incident edges around the switch. In
drawings, the elements of each subset are drawn tangent to each
other.
The \emph{complementary regions} of a train track are naturally
surfaces with cusps on the boundary.
A \emph{taut} train track is a train track with no complementary
components that are disks with no cusps or one cusp, or once-punctured
disks with no cusps.
\end{definition}
\begin{remark}
Many authors (e.g., Penner and Harer \cite{PH92:CombTrainTracks} and
Mosher \cite{Mosher03:Arational})
include our notion of tautness in the definition of a train
track, often in a stronger form
forbidding bigons (disks with two cusps) and once-punctured monogons as well.
\end{remark}
\begin{definition}
The space of positive \emph{transverse measures} or \emph{weights} on a train
track~$T$ on a surface~$S$ is the space
$\mathcal{M}(T)$ of assignments of positive numbers (``widths'') to edges of the
train track so that, at each vertex, the sum of weights on the two
sides of the vertex are equal. If $\mathcal{M}(T)$ is non-empty, then
$T$ is said to be \emph{recurrent}. We have subspaces
$\mathcal{M}_{\mathbb Q}(T)$ and $\mathcal{M}_{\mathbb Z}(T)$ for transverse measures on~$T$
with rational or integral values, respectively. For any train
track, there
is a natural map $\mathcal{M}_{\mathbb Z}(T) \to \CCurves^+(S)$, where we replace
an edge of~$T$ of weight~$k$ by $k$ parallel strands, joining the
strands in the natural way at the switches.
\end{definition}
\begin{lemma}\label{lem:tt-continuous}
Let $T$ be a recurrent taut train track on~$S$. Then there is a
natural continuous map $\mathcal{M}(T) \to
\mathcal{MF}^+(S)$ extending $\mathcal{M}_{\mathbb Z}(T) \to \CCurves^+(S)$.
\end{lemma}
We will denote the map $\mathcal{M}(T) \to \mathcal{MF}^+(S)$ by $w \mapsto T(w)$. If
$F = T(w)$ for some~$w$, we say that $T$ \emph{carries}~$F$.
(For convenience in the proof we are assuming the weights on~$T$ are
strictly positive, but in fact the lemma extends to non-negative
weights.)
\begin{proof}
Pick a small regular neighborhood $N(T)$ of~$T$, arranged so that
$S \setminus N(T)$ has a cusp near each corner where $T$ has a
cusp, as illustrated in Figure~\ref{fig:sew-train-track}. A weight
$w \in \mathcal{M}(T)$ gives a canonical measured
foliation $F_N(w)$ on $N(T)$, where an arc cutting across $N(T)$ transverse
to an edge~$e$ has measure $w(e)$.
Next pick a graph $\Gamma \subset \overline{S} \setminus N(T)$ so that
\begin{itemize}
\item $\Gamma$ contains $\partial S$,
\item $\Gamma$ has a 1-valent vertex at each cusp of $S \setminus
N(T)$ and at each puncture,
\item all other vertices of~$\Gamma$ have valence $2$ or more, and
\item $\Gamma$ is a spine for $S \setminus N(T)$, i.e., $S \setminus
N(T)$ deformation retracts onto~$\Gamma$.
\end{itemize}
(The condition that $T$ be taut guarantees that we can find such a
$\Gamma$.) Since $\Gamma$ is a spine, there is a deformation
retraction $\overline{S} \setminus N(T) \to \Gamma$. We can use this to
construct a homeomorphism $\phi \colon N(T) \to \overline{S} \setminus
\Gamma$ that is the identity on $T \subset N(T)$ and extends to a
continuous map $\partial N(T) \to \Gamma$ without backtracking. Then
$[\phi(F_N(w))]$ is the desired measured foliation $T(w)$.
\begin{figure}
\caption{Left: A portion of a taut train track~$T$. The
point~$p$ is a puncture. Right: A
neighborhood $N(T)$ of~$T$, together with a spine~$\Gamma$
for $\overline{S} \setminus N(T)$.}
\label{fig:sew-train-track}
\end{figure}
As a measured foliation (not up to Whitehead equivalence),
$\phi(F_N(w))$ depends continuously on~$w$ by construction. The
quotient map to the Whitehead equivalence class is continuous.
\end{proof}
In Lemma~\ref{lem:tt-continuous}, if a complementary
region of~$T$ is a bigon or once-punctured monogon, the corresponding spine
is necessarily an interval. Lemma~\ref{lem:tt-continuous} is false
without the assumption that $T$ is taut; see
Example~\ref{examp:erase-hole}.
\begin{lemma}\label{lem:MF-carried}
Every measured foliation~$F$ is carried by a taut train
track~$T$. Furthermore, $T$ can be chosen so that if $F$ has
$k$~zeros on a
boundary component, the
corresponding complementary component of~$T$ has at least
$k$~cusps.
\end{lemma}
Notice that the number of zeros on a boundary component is not
invariant under Whitehead equivalence.
\begin{proof}
The techniques here are standard; see, e.g., \cite[Proposition
3.6.1]{Mosher03:Arational}, or \cite[Corollary
1.7.6]{PH92:CombTrainTracks} for a different approach.
Since the definitions we use are slightly
different, we sketch the argument.
Pick a set of intervals $I_j$ on~$S$ that are transverse
to~$F$ and cut every leaf of~$F$. These intervals will become
the switches of the train track. Let $I = \bigcup_j I_j$.
Divide the leaves of~$F$ into \emph{segments} between singularities
of~$F$ and
intersections with~$I$. A \emph{regular} segment is one that
intersects $I$ in interior points on both ends. There are only a
finite number of non-regular segments (since the
number of singularities of~$F$ and ends of~$I$ is finite), while for
any regular
segment, nearby segments are isotopic relative to~$I$. There are
thus a finite number of classes of parallel regular segments.
Now construct a train track~$T$ by taking the union of~$I$ and one
element of each class of parallel regular segments, and replacing
each interval $I_j$ with a single vertex~$v_j$, joined to the
same regular segments by connecting arcs. At each
switch, the incident edges are divided according to the sides of the
corresponding~$I_j$.
Let $\Gamma$ be the union of the non-regular segments. The components of
the complement of~$T$ correspond to the connected components of~$\Gamma$,
which is a graph with vertices of valence~$1$ at cusps of~$T$ and
possibly at punctures of~$S$, and all other vertices of valence $\ge
2$. (That is, $\Gamma$ is a spine as in the proof of
Lemma~\ref{lem:tt-continuous}.) It follows that $T$ is taut.
$T$~carries $F$ with weights
equal to the width of the families of parallel segments. If $F$ has
$k$~zeros on a boundary component, then $T$ has at least $k$~cusps
by construction.
\end{proof}
\begin{proposition}\label{prop:curves-dense}
$\CCurves^+_\mathbb Q(S)$ is dense in $\mathcal{MF}^+(S)$.
\end{proposition}
\begin{proof}
For a given measured foliation~$F$, we
will produce a
sequence of weighted multi-curves approximating $[F] \in \mathcal{MF}^+(S)$. By
Lemma~\ref{lem:MF-carried}, $F = T(w)$ for a taut train track~$T$
and weight $w \in \mathcal{M}(T)$. Pick a sequence of
rational weights $w_n \in \mathcal{M}_{\mathbb Q}(T)$ approximating~$w$, and
clear denominators to write $w_n = \lambda_n w_n'$ where $w_n' \in
\mathcal{M}_{\mathbb Z}(T)$. Then $\lambda_n T(w_n')$ is a weighted multi-curve
approximating~$F$.
\end{proof}
\begin{remark}\label{rem:connected}
On a connected surface~$S$ with no boundary,
Proposition~\ref{prop:curves-dense} can be strengthened to say that
simple curves are projectively dense in measured
foliations, as well
\cite{Kerckhoff80:AsympTeich,FLP79:TravauxThurston}. This
strengthening is false for surfaces with boundary. For
instance, a pair of pants has only three distinct non-trivial
simple curves, but a 3-dimensional space of measured foliations.
\end{remark}
\section{Conformal Setting\label{sec:conformal}}
\subsection{Riemann surfaces\label{sec:riemann}}
\begin{definition}
A \emph{Riemann surface} (with boundary) is a smooth
surface~$S$, as in
Definition~\ref{def:surface}, together with
a complex structure on~${\overline{S}}$, i.e., a fiberwise linear map $J \colon
T{\overline{S}} \to T{\overline{S}}$ with $J^2 = -\textrm{id}$.
\end{definition}
\begin{convention}
For us, a Riemann surface need not be connected. We only consider
surfaces of finite topological type.
\end{convention}
Since the complex structure is on ${\overline{S}}$, not just on~$S$, the complex
structure on $S^\circ$ necessarily has a removable singularity near
every puncture.
\begin{definition}
A (holomorphic) \emph{quadratic differential} $q$ on a Riemann
surface~$S$ is a holomorphic
section of the square of the holomorphic cotangent bundle
of~$S^\circ$. If $z$ is a local coordinate on
$S^\circ$, we can
write $q = \phi(z)\,(dz)^2$ where $\phi(z)$ is holomorphic.
Naturally associated with a quadratic differential we have several objects:
\begin{itemize}
\item Local coordinates given by integrating a branch of $\sqrt{q}$
away from the zeros of~$q$. The transition maps are translations
or half-turns followed by translations, giving $S$ the structure
of a half-translation surface.
\item A horizontal measured foliation $\mathcal{F}_h(q) = \abs{\mathop{\mathrm{Im}} \sqrt{q}}$.
The tangent vectors
to the foliation are those vectors $v\in TS$ with $q(v) \ge 0$, and the
transverse length of a multi-curve~$C$ is
\[
\mathcal{F}_h(q)(C) = \int_{t} \bigl\lvert\mathop{\mathrm{Im}} \sqrt{q(C'(t))}\bigr\rvert\,dt,
\]
i.e., the total variation of the $y$ coordinate in the
half-translation coordinates.
\item Similarly, a vertical measured foliation $\mathcal{F}_v(q) = \abs{\mathop{\mathrm{Re}} \sqrt{q}}$.
\item A locally Euclidean metric $\abs{q}$ on~$S^\circ$, possibly
with cone singularities of cone angle $k\pi$ with $k \ge 3$.
The
length of a multi-curve~$C$ with respect to $\abs{q}$ is
\[
\ell(C) = \int_t \sqrt{\abs{q(C'(t))}}\,dt.
\]
\item An area measure $A_q$ on~$S$, the volume measure of $\abs{q}$.
\end{itemize}
The vector space of finite-area quadratic differentials on~$S$ that
extend analytically to $\partial S$ (but not necessarily to the punctures)
is denoted $\mathcal{Q}(S)$. The finite area constraint implies that at a
puncture of~$S$, every $q\in\mathcal{Q}(S)$ has at most a simple pole.
That is, if $z$ is a local coordinate on ${\overline{S}}$ with a puncture at
$z=0$, we can locally write $q = \phi(z)/z\,(dz)^2$ where $\phi(z)$
is holomorphic.
If $S$ has non-empty boundary, then $\mathcal{Q}(S)$ is
infinite\hyp dimensional. There is a finite\hyp dimensional subspace $\mathcal{Q}^\mathbb R(S)$,
consisting of those quadratic differentials that are real on vectors
tangent to $\partial S$. There is a convex cone $\mathcal{Q}^+(S) \subset
\mathcal{Q}^\mathbb R(S)$ consisting of those quadratic differentials that are
non-negative on $\partial S$. For non-zero $q$ in $\mathcal{Q}^+(S)$, we have
$[\mathcal{F}_h(q)] \in \mathcal{MF}^+(S)$.
\end{definition}
\subsection{Extremal length}
\label{sec:ext-length}
\begin{definition}\label{def:el-curves}
For $C$ a multi-curve on a Riemann surface~$S$, pick a Riemannian
metric~$g$ in the distinguished conformal class. Then the \emph{length}
$\ell_g[C]$ is the minimum Riemannian
length with respect to~$g$ of any rectifiable representative of~$[C]$. The
\emph{extremal length} of $C$ is
\begin{equation}\label{eq:el-sup}
\EL_S[C] \coloneqq \sup_\rho \frac{\ell_{\rho g}[C]^2}{A_{\rho g}(S)},
\end{equation}
where the supremum runs over all Borel-measurable
conformal
scale factors $\rho \colon S \to \mathbb R_{\ge 0}$ of finite, positive
area. (The scaled quantity $\rho g$ may give a pseudo-metric rather
than a metric, as, e.g., $\rho$ can be~$0$ on an open subset
of~$S$. But we can still define length
and area in a natural way.) The definition makes it clear that
extremal length does
not depend on the metric $g$ within its conformal equivalence class,
so we may talk about extremal length on~$S$ without reference to~$g$.
When the Riemann surface is clear from context, we suppress it
from the notation.
More generally, if $C = \sum_i a_i C_i$ is a weighted multi-curve, then its
length is $\ell_g[C] = \sum_i a_i \ell_g[C_i]$, i.e., the
corresponding weighted linear
combination of lengths of curves, and its extremal length is still defined by
Equation~\eqref{eq:el-sup}.
\end{definition}
We need multi-curves in Definition~\ref{def:el-curves}, as the main
theorems of this paper
are false if restricted to curves rather than multi-curves; see
Remark~\ref{rem:connected}.
We will be interested in simple multi-curves~$C$.
We must check that extremal length is well-defined on equivalence
classes of simple multi-curves. Invariance under homotopy and
reparametrization is obvious. Trivial components of a multi-curve~$C$
have no effect on $\ell_{\rho g}[C]$ since $\rho g$ has finite
area, so also have no effect on extremal length. Finally, let $C_0$ be
a simple
multi-curve with parallel components, and let $C_1$ be the weighted
multi-curve with integer weights obtained by merging parallel
components and taking the weight to be the number of merged
components. Then it is easy to see from the definitions that $\EL[C_0]
= \EL[C_1]$.
Furthermore, $\EL$ scales
quadratically, $\EL[kC] = k^2\EL[C]$, since lengths in
Equation~\eqref{eq:el-sup} scale linearly in~$k$ while areas are unchanged.
\begin{lemma}\label{lem:EL-small}
For any non-trivial multi-curve~$C$ on a Riemann surface~$S$,
$\EL[C] > 0$. In particular, if $S$ is not small, there
is a curve with non-zero extremal length.
\end{lemma}
\begin{proof}
Take any finite-area Riemannian metric~$g$ on~$S$ in the given
conformal
class. Then, since $C$ has at least one non-trivial component,
$\ell_g[C] > 0$, so $\EL[C] > 0$.
\end{proof}
We next give some
other interpretations of extremal length for simple multi-curves.
First, recall that for a conformal annulus
\[
A = \bigl([0,\ell] \times [0,w]\bigr)/\bigl((0,x) \sim (\ell,x)\bigr),
\]
its \emph{modulus} $\Mod(A)$ is $w/\ell$. Define the \emph{extremal
length} of $A$ to be $\EL(A) \coloneqq 1/\Mod(A) = \ell/w$.
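As a consistency check with Definition~\ref{def:el-curves}: for the core
curve~$C$ of the flat annulus~$A$ above, taking $\rho \equiv 1$ in
Equation~\eqref{eq:el-sup} gives
\[
\frac{\ell_g[C]^2}{A_g(A)} = \frac{\ell^2}{\ell w} = \frac{\ell}{w},
\]
and a standard averaging argument (Beurling's criterion, or the
Cauchy--Schwarz argument in the proof of Proposition~\ref{prop:el-area} below)
shows that the supremum is attained there, so $\EL_A[C] = \EL(A)$.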
Then we can see $\EL[C]$ for a simple multi-curve~$C$ as finding the fattest
set of conformal annuli around~$C$, in the sense that we minimize
total extremal length, as follows.
\begin{proposition}\label{prop:el-total-el}
Let $C = \bigcup_{i=1}^k C_i$ be a simple closed multi-curve on a Riemann
surface~$S$ with
components~$C_i$.
For $i=1,\dots,k$, let
$A_i$ be a (topological) annulus, and let $A = \bigcup_{i=1}^k A_i$.
Then
\begin{equation}
\EL[C] = \inf\limits_{\omega, f}\,\,\sum\limits_{i=1}^k\,\, \EL_\omega(A_i),
\end{equation}
where the infimum runs over all conformal structures~$\omega$ on~$A$
(which amounts to a choice of modulus for each $A_i$) and over all
conformal embeddings $f\colon A \hookrightarrow S$ so that the image of the core
curve of~$A_i$ is isotopic to~$C_i$.
More generally, if $C = \sum_{i=1}^k a_i C_i$ is a weighted simple
multi-curve on~$S$, then, with notation as above,
\begin{equation}\label{eq:el-total-el}
\EL[C] = \inf_{\omega,f}\,\,\sum\limits_{i=1}^k\,\,
a_i^2 \EL_\omega(A_i),
\end{equation}
where the infimum runs over the same set.
\end{proposition}
We delay the proof of Proposition~\ref{prop:el-total-el} a little.
We can also give a characterization of~$\EL$ in terms of Jenkins-Strebel
differentials.
\begin{definition}
A \emph{Jenkins-Strebel} quadratic differential~$q$ on~$S$ is one
where almost every leaf of $\mathcal{F}_h(q)$ is closed. In this case, the
quadratic differential gives a canonical decomposition of~$S$
into annuli foliated by the closed leaves.
\end{definition}
\begin{citethm}\label{thm:quad-diff-height}
Let $C=\sum_i a_i C_i$ be a weighted simple closed multi-curve on a Riemann
surface~$S$ so that no $C_i$ is trivial. Then there is a
unique Jenkins-Strebel
differential~$q_C\in\mathcal{Q}^+(S)$ so that $\mathcal{F}_h(q_C)$ can be
decomposed as a disjoint union of annuli~$A_i$ with each $A_i$ being
a union of leaves of transverse measure~$a_i$ and core curve
isotopic to~$C_i$. With
respect to
$\abs{q_C}$, each $A_i$ is isometric to a right Euclidean cylinder.
\end{citethm}
For a proof, see, e.g., Strebel \cite[Theorem
21.1]{Strebel84:QuadDiff}, who attributes the theorem to
Hubbard-Masur \cite{HM79:QuadDiffFol} and Renelt \cite{Renelt76:QD}.
This theorem is one of three different
standard theorems on the existence of Jenkins-Strebel differentials.
\begin{proposition}\label{prop:el-area}
For $C$ a weighted simple closed multi-curve on~$S$ with no trivial
components, let $q = q_C$ be the associated quadratic differential from
Theorem~\ref{thm:quad-diff-height}. Then
\begin{equation}
\EL[C] = A_{\abs{q}}(S).
\end{equation}
\end{proposition}
Proposition~\ref{prop:el-area} should be standard, but we have been
unable to locate it in the literature. We provide a short proof,
an easy application of Beurling's criterion.
\begin{proof}
We use $\abs{q}$ as the base metric in Equation~\eqref{eq:el-sup}
(abusing notation slightly since $\abs{q}$ is not smooth). Let
$\ell_i = \ell_{\abs{q}}[C_i]$. Since $\abs{q}$ is a locally CAT(0)
metric and local geodesics in locally CAT(0) spaces are globally
length-minimizing, $\ell_i$ is the length in~$\abs{q}$ of the core
curve of the annulus~$A_i$.
(This also follows from Teichmüller's Lemma \cite[Theorem
14.1]{Strebel84:QuadDiff}.)
Then, since
$A_{q}(S) = \sum_i a_i \ell_i$ by the construction of~$q$ and
$\ell_{\abs{q}}[C] = \sum_i a_i \ell_i$ by definition of $\ell_{\abs{q}}[C]$,
\[
\frac{\ell_{\abs{q}}[C]^2}{A_{q}(S)}
= A_{q}(S),
\]
so $\EL[C] \ge A_{q}(S)$.
For the other direction, let $\rho$ be the scaling factor relative
to~$\abs{q}$ for another metric in the conformal class. For each $i$ and
$t \in [0,a_i]$, let $C_i(t)$ be the curve on~$A_i$ at distance~$t$
from one of its boundary components, let $s_i(t) = \int_{C_i(t)}\rho(x)\,dx$,
and let
$S_i = \min_{t\in[0,a_i]} s_i(t)$. Then, using the Cauchy-Schwarz
inequality, we have
\begin{align*}
\ell_{\rho\abs{q}}[C] &\le \sum\nolimits_i a_i S_i\\
A_{\rho\abs{q}}(S) &= \iint\nolimits_S \rho^2\,dA_q
\ge \frac{1}{A_{q}(S)} \left(\iint\nolimits_S \rho\,dA_q\right)^2
\ge \frac{1}{A_{q}(S)} \left(\sum\nolimits_i a_i S_i\right)^2\\
\frac{\ell_{\rho\abs{q}}[C]^2}{A_{\rho\abs{q}}(S)}
&\le A_{q}(S).\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:el-total-el}]
The functional $\sum_{i=1}^k a_i^2 \EL(A_i)$ on the space of
disjoint embeddings of annuli $A_i$ homotopic to $C_i$
is minimized when the $A_i$ are the annuli from the decomposition of
$\mathcal{F}_h(q_C)$
from Theorem~\ref{thm:quad-diff-height} \cite[Theorem
20.5]{Strebel84:QuadDiff}. In this case the value of
the functional is $A_{q_C}(S)$, which is equal to $\EL[C]$ by
Proposition~\ref{prop:el-area}.
\end{proof}
More generally, we can work with arbitrary measured foliations, rather
than simple multi-curves.
\begin{citethm}[Heights theorem]\label{thm:heights}
Let $[F] \in \mathcal{MF}^+(S)$ be a measured foliation on a Riemann
surface~$S$. Then there is a unique quadratic differential $q_F \in
\mathcal{Q}^+(S)$ so that $[\mathcal{F}_h(q_F)] = [F]$. Furthermore, $q_F$ depends
continuously on~$F$.
\end{citethm}
Proofs of Theorem~\ref{thm:heights} have been
given by many authors
\cite{HM79:QuadDiffFol,Kerckhoff80:AsympTeich,MS84:Heights,Wolf96:RealizingMF}.
Of these, Marden and Strebel \cite{MS84:Heights} treat surfaces with
boundary.
By analogy
with Proposition~\ref{prop:el-area}, we define
\begin{equation}
\EL[F] \coloneqq A_{q_F}(S).
\end{equation}
$\EL[F]$ can also be given a definition closer to
Definition~\ref{def:el-curves}
\cite[Section~4.4]{McMullen12:RS-course}.
\subsection{Stretch factors}
\label{sec:stretch-factors}
We now turn to a few elementary facts about stretch factors, as
already defined in Definition~\ref{def:sf}.
\begin{proposition}
If $f \colon R \hookrightarrow S$ is a topological embedding of
Riemann surfaces where $R$ is not a small surface, then $\SF[f]$ is
defined and finite.
\end{proposition}
\begin{proof}
Immediate from Lemma~\ref{lem:EL-small}.
\end{proof}
\begin{definition}
By analogy with Definition~\ref{def:curves}, we say that a
subsurface $S'$ of a surface~$S$ is \emph{trivial} if~$S'$ is
contained in a disk or once-punctured disk inside~$S$.
\end{definition}
\begin{proposition}
For $f \colon R \hookrightarrow S$ a topological embedding of Riemann
surfaces where $S$ is not small, $\SF[f] = 0$ if and only if $f(R)$
is trivial as a
subsurface of~$S$.
\end{proposition}
\begin{proof}
If $f(R)$ is trivial in~$S$, it is immediate that $\SF[f] =
0$. Otherwise, there is some simple curve~$C$ on~$R$ so that $f(C)$ is
nontrivial in~$S$. It follows that $C$ is nontrivial in~$R$, and $\SF[f] \ge
\EL[f(C)]/\EL[C] > 0$.
\end{proof}
\begin{proposition}\label{prop:sf-compose}
If $f\colon S_1 \hookrightarrow S_2$ and $g \colon S_2
\hookrightarrow S_3$ are two topological embeddings of
Riemann surfaces, then
\[
\SF[g \circ f] \le \SF[f] \cdot \SF[g].
\]
\end{proposition}
\begin{proof}
Immediate from the definition: for any multi-curve~$C$ on~$S_1$,
\[
\frac{\EL_{S_3}[g(f(C))]}{\EL_{S_1}[C]}
= \frac{\EL_{S_3}[g(f(C))]}{\EL_{S_2}[f(C)]}
\cdot \frac{\EL_{S_2}[f(C)]}{\EL_{S_1}[C]}
\le \SF[g] \cdot \SF[f],
\]
and we take the supremum over~$C$.
\end{proof}
\subsection{Teichmüller space}
\label{sec:teichmuller-space}
We can assemble the Riemann surface structures on an underlying
smooth surface~$S$ into the (reduced) \emph{Teichmüller space}
$\mathcal{T}(S)$, meaning the space of Riemann surfaces $T$ together with a
homeomorphism $\phi_T \colon S \to T$, considered up to isotopies
taking the boundary to itself but not required to fix it pointwise.
The \emph{Teichmüller distance} between two points in $\mathcal{T}(S)$ is
defined by
\[
d(T,T') \colonloneqq \frac{1}{2} \log K,
\]
where $K$ is the minimal stretching of any quasi-conformal homeomorphism~$f$
from $T$ to~$T'$ so that $(\phi_{T'})^{-1} \circ f \circ \phi_T$ is
isotopic to the identity. (Note that this definition uses
homeomorphisms, rather than the embeddings used in most of the paper.)
It is a standard result that there is a map~$f$ realizing the minimal
stretching~$K$, and that its Beltrami differential has the form
\begin{equation}\label{eq:quad-beltrami}
\mu_f = \frac{K-1}{K+1} \frac{\overline{q}}{\abs{q}}
\end{equation}
for some quadratic differential $q \in \mathcal{Q}^{\mathbb R}(T)$. Concretely, we
stretch the Euclidean metric $\abs{q}$ along $\mathcal{F}_h(q)$ by a
factor of~$K$. (Since $q$ is only real and not positive on
$\partial T$, $\mathcal{F}_h(q)$ will not in general be in $\mathcal{MF}^+(T)$.)
This is usually stated and proved for closed surfaces;
the case with boundary follows by considering $T \cup \overline{T}$,
the double of $T$ along its boundary, solving the problem in that
context, and observing that the optimal map~$f$ (which is usually
unique) can be chosen to be equivariant with respect to the
anti-holomorphic involution that interchanges $T$ and~$\overline{T}$
so must be real on $\partial T$.
It follows from Equation~\eqref{eq:quad-beltrami} that
\[
\frac{\EL_{T'}(f_* \mathcal{F}_h(q))}{\EL_T(\mathcal{F}_h(q))} = K,
\]
and that this is the maximal ratio of extremal lengths. We can
approximate $\mathcal{F}_h(q)$ by a weighted multi-curve, possibly
with some arc components. We can therefore write the distance in terms
of ratios of extremal lengths. If $f \colon T \to T'$ is a homeomorphism,
define a version of the stretch factor by
\[
\SF^{\pm}[f] \coloneqq \sup_{C \in \CCurves^{\pm}_{\mathbb R}(T)}
\frac{\EL_{T'}[f(C)]}{\EL_T[C]}.
\]
That is, we allow arc components of the weighted multi-curve; extremal
length extends in the natural way to these multi-curves. If $C$ has
arc components, $f(C)$ is well-defined only because $f$ is a
homeomorphism. We have $\SF[f] \le \SF^{\pm}[f]$, since the
supremum is over a larger set.
\begin{citethm}\label{thm:teich-dist}
The Teichmüller distance between $T,T' \in \mathcal{T}(S)$ is
\[
d(T, T') = \frac{1}{2}\log \SF^{\pm}[\mathrm{id}_{T,T'}].
\]
\end{citethm}
Theorem~\ref{thm:teich-dist} was stated and proved by Kerckhoff
\cite[Theorem~4]{Kerckhoff80:AsympTeich} for closed surfaces.
He furthermore restricted to simple curves (not multi-curves); the
technique for the reduction to simple curves cannot be made
equivariant with respect to the map interchanging the two components
of the mirror of~$T$.
\section{Slit maps and Ioffe's theorem}
\label{sec:slit}
The following terminology is adapted from Ioffe
\cite{Ioffe75:QCImbedding} and Fortier Bourque \cite{FB18:Couch}.
\begin{definition}\label{def:slit}
On a connected surface $S$ with a non-zero measured
foliation~$F$, a \emph{slit} is a finite union of
closed segments of leaves of~$F$. (The leaf segments can meet
at singularities of~$F$, and so the slit may be a graph.)
A \emph{slit complement} in~$F$
is the complement of a slit, and a \emph{topological slit map} with
respect to~$F$ is
the inclusion of a slit complement into $S$. (This is the inclusion
of the interiors $R^\circ \hookrightarrow S^\circ$, which extends to
a possibly non-injective map $\closure{R} \to \closure{S}$.)
If $f \colon R \hookrightarrow S$ is a slit map with
respect to $F \in \mathcal{MF}^+(S)$, then there is a natural pull-back
measured foliation $f^* F \in \mathcal{MF}^+(R)$.
If $R$ and $S$ are Riemann surfaces, a \emph{Teichmüller
embedding} of dilatation $K \ge 1$ is an embedding $f\colon
R \hookrightarrow S$ with quadratic differentials $q_R\in
\mathcal{Q}^+(R)$ and
$q_S \in \mathcal{Q}^+(S)$ so that $f$ is a topological slit map with respect
to $\mathcal{F}_h(q_S)$ and, in the natural coordinates determined by $q_R$
and~$q_S$, the map $f$ has the form $f(x+iy) = Kx+iy$. Note
that a Teichmüller embedding is $K$-quasi-conformal, and that
$f^* \mathcal{F}_h(q_S) = \mathcal{F}_h(q_R)$.
\end{definition}
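For concreteness, the quasi-conformal constant of a Teichmüller embedding can
be read off in the natural coordinates: writing $z = x+iy$, the map
$f(x+iy) = Kx+iy$ is
\[
f(z) = \tfrac{K+1}{2}\,z + \tfrac{K-1}{2}\,\bar z,
\qquad\text{so}\qquad
\mu_f = \frac{\partial_{\bar z} f}{\partial_z f} = \frac{K-1}{K+1},
\]
and its dilatation is $(1+\abs{\mu_f})/(1-\abs{\mu_f}) = K$. Since
$q_S = (dz)^2$ in these coordinates, this is consistent with
Equation~\eqref{eq:quad-beltrami}.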
\begin{citethm}[Ioffe \cite{Ioffe75:QCImbedding}]\label{thm:ioffe}
Let $R$ and $S$ be Riemann surfaces, with $S$
connected, and let $f\colon R \hookrightarrow S$ be a
topological embedding so that no component of $R$ has trivial
image in~$S$. Suppose that $f$ is not homotopic to a
conformal embedding. Then there is a quasi-conformal embedding with
minimal dilatation in $[f]$. Furthermore, there are unique quadratic
differentials $q_R \in \mathcal{Q}^+(R)$ and $q_S \in \mathcal{Q}^+(S)$ so that
all quasi-conformal embeddings with minimal dilatation are Teichmüller
embeddings with respect to the same quadratic differentials on
$R$ and~$S$.
\end{citethm}
\begin{remark}
The Teichmüller embedding is not in general unique, but two different
embeddings differ by
translations with respect to the two measured foliations
\cite[Theorem 3.7]{FB18:Couch}.
\end{remark}
Ioffe's theorem gives a pair of measured foliations on $R$
and~$S$. To relate to Theorem~\ref{thm:emb}, we need to
approximate both of these measured foliations by simple multi-curves.
This is
more subtle than it appears at
first, since the natural map $f_* \colon \CCurves^+(R) \to
\CCurves^+(S)$ does \emph{not} generally extend to a continuous map
$\mathcal{MF}^+(R) \to \mathcal{MF}^+(S)$, as the following example shows.
\begin{example}\label{examp:erase-hole}
Let $S = S^2 \setminus \{D_a,D_b,D_c\}$ be the sphere minus three
disks and let $R = S^2 \setminus \{D_a,D_b,D_c,D_d\}$ be the subsurface
obtained by removing another disk. Pick a set of disjoint arcs
$\gamma_{a,b}$, $\gamma_{a,c}$,
$\gamma_{b,d}$, and $\gamma_{c,d}$ on~$S$ between the
respective boundary components. For $s = p/q$ a positive rational number,
there is a natural simple curve $C_s$ at slope $s$ with
\begin{align*}
i(\gamma_{a,c},C_s) &= i(\gamma_{b,d},C_s) = q\\
i(\gamma_{a,b},C_s) &= i(\gamma_{c,d},C_s) = p,
\end{align*}
as illustrated in Figure~\ref{fig:sample-curves}.
\begin{figure}
\caption{Some of the curves $C_s$ in Example~\ref{examp:erase-hole}.}
\label{fig:sample-curves}
\end{figure}
Set $F_s \coloneqq (1/q) \cdot [C_s]$ for $s \in \mathbb Q_+$, so that
\begin{align*}
i(\gamma_{a,c},F_s) &= i(\gamma_{b,d},F_s) = 1\\
i(\gamma_{a,b},F_s) &= i(\gamma_{c,d},F_s) = s.
\end{align*}
Then $F_s$ extends to a continuous family of foliations for $s \in
\mathbb R_+$.
For $s \in \mathbb Q_+$, if we push forward $F_s$ by the inclusion map~$f$,
we get a multiple
of a simple curve
on~$S$. There are only three non-trivial simple curves on~$S$ up to
equivalence: the curves $C_a$, $C_b$, and $C_c$ parallel to the respective boundary components.
we get depends only on the parity of $p$ and~$q$, where $s=p/q$ in
lowest terms:
\begin{equation}\label{eq:erase-hole}
f_*[F_s] = \frac{1}{q} \cdot
\begin{cases}
[C_a] & \text{$p$ odd, $q$ odd}\\
[C_b] & \text{$p$ odd, $q$ even}\\
[C_c] & \text{$p$ even, $q$ odd.}\\
\end{cases}
\end{equation}
This map $f_*$
has no continuous extension to~$\mathbb R_+$.
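To make the discontinuity concrete: Equation~\eqref{eq:erase-hole} gives
$f_*[F_1] = [C_a]$, while for the sequence $s_n = (n+1)/n \to 1$ we get
$f_*[F_{s_n}] = \tfrac1n [C_b]$ or $\tfrac1n [C_c]$ according to the parity
of~$n$; these tend to the zero foliation, so $f_*$ is already discontinuous
at~$s = 1$.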
\end{example}
\begin{example}
We can improve Example~\ref{examp:erase-hole} to avoid dealing with
curves around boundary components. Let $S'$ be the
surface obtained from the previous surface~$S$ by gluing a pair of
pants to $\partial D_a$, $\partial D_b$, and $\partial D_c$, and
similarly glue a pair of pants to~$R$ to get~$R'$. Then $S'$ is a
surface of genus two and $R'$ is a surface of genus two minus a
disk. Then $F_s$ can be viewed as a continuous
family of foliations on $R'$, and Equation~\eqref{eq:erase-hole}
still holds.
\end{example}
Despite Example~\ref{examp:erase-hole}, we can still do simultaneous
approximations, using the techniques of Proposition~\ref{prop:curves-dense}.
\begin{proposition}\label{prop:slit-approx}
Let $f \colon R \to S$ be a topological slit map with respect to
$F_S \in \mathcal{MF}^+(S)$. Let $F_R = f^* F_S$. Then there
is a sequence of simple multi-curves $C_n$ on $R$ and weights
$\lambda_n$ so that
\begin{align*}
\lim_{n \to \infty}\lambda_n F[C_n] &= F_R\\
\lim_{n \to \infty}\lambda_n F[f(C_n)] &= F_S.
\end{align*}
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:MF-carried}, $[F_R] = T_R(w)$ for some weight~$w$
on a taut train
track $T_R$ on~$R$. Fix a boundary component~$B$ of~$R$, and let
$\beta$ be a curve parallel to~$B$ slightly pushed in to~$R$.
If $f(\beta)$ bounds a disk in~$S$, the corresponding slit of~$F_S$
is a tree which must have at least two endpoints. Each endpoint of
the tree contributes a zero to $F_R$ on~$B$, so $F_R$ has at least
two zeros on~$B$.
Likewise,
if $f(\beta)$ bounds a once-punctured disk in~$S$, the corresponding
slit of~$F_S$ is a tree with at least two endpoints. At most one of
these endpoints may be at the puncture, so $F_R$
has at least one zero on~$B$.
Let $T_S = f(T_R)$. The second part of the statement of
Lemma~\ref{lem:MF-carried}
guarantees that $T_S$ is taut, and so $F_S = T_S(w)$. (The new disks
in~$T_S$ that were not
disks in~$T_R$ have at least two cusps, and the new once-punctured disks
have at least one cusp.)
As in the proof of Proposition~\ref{prop:curves-dense},
choose a sequence of rational weights $w_n \in
\mathcal{M}_{\mathbb Q}(T_R)$ approaching~$w$, and choose scalars~$\lambda_n$ so
that $w_n' \coloneqq w_n/\lambda_n$ is integral. Then
$T_R(w_n')$ is a multi-curve $[C_n]$ with $\lambda_n [C_n]$
approaching $[F_R]$.
We also have $[f(C_n)] = T_S(w_n')$, so by
Lemma~\ref{lem:tt-continuous}, $\lambda_n [f(C_n)]$ approaches
$F_S$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:emb}]
If $S = \bigcup_i S_i$ is not connected, with $R_i = f^{-1}(S_i)$,
then the stretch factor is
the supremum of the stretch factors of the restrictions $R_i \hookrightarrow S_i$, as
$\frac{a+b}{c+d} \le \max\bigl(\frac{a}{c}, \frac{b}{d}\bigr)$. On
the
other hand, $R$
conformally embeds in $S$ iff $R_i$ conformally
embeds in $S_i$ in the given homotopy class for all~$i$. So from now
on we assume that $S$ is
connected.
If $f\colon R \hookrightarrow S$ is homotopic to a
conformal embedding, then Proposition~\ref{prop:el-total-el}
guarantees that for
all multi-curves $[C] \in \CCurves^+(R)$, we have $\EL_S[f(C)] \le
\EL_R[C]$, as we have more maps in computing $\EL_S[f(C)]$, so
smaller infimum in Equation~\eqref{eq:el-total-el}. Thus $\SF[f] \le 1$.
Conversely, suppose $f$ is not homotopic to a conformal
embedding. Then by Theorem~\ref{thm:ioffe},
$f$ is homotopic to a Teichmüller map~$g$ of dilatation~$K$ with respect
to quadratic
differentials $q_R \in \mathcal{Q}^+(R)$ and~$q_S \in \mathcal{Q}^+(S)$. Apply
Proposition~\ref{prop:slit-approx} to find a sequence of simple
multi-curves
$C_n$ on~$R$ and weights~$\lambda_n$ so that $\lambda_n[C_n]$
approximates $\mathcal{F}_h(q_R)$ and $\lambda_n[f_*C_n]$ approximates
$\mathcal{F}_h(q_S)$. By Theorem~\ref{thm:heights}, the quadratic differentials
corresponding to $\lambda_n[C_n]$ approach $q_R$ and the quadratic
differentials corresponding to $\lambda_n [f(C_n)]$ approach
$q_S$, and therefore
\begin{equation}\label{eq:SF-bound}
\SF[f] \ge
\lim_{n \to \infty} \frac{\EL_S[f(C_n)]}{\EL_R[C_n]} =
\frac{\EL_S[\mathcal{F}_h(q_S)]}{\EL_R[\mathcal{F}_h(q_R)]} =
\frac{\norm{q_S}}{\norm{q_R}} = K > 1.\qedhere
\end{equation}
\end{proof}
When the stretch factor is larger than~$1$, we find it exactly
(Proposition~\ref{prop:sf-qc}) with the following standard fact.
\begin{lemma}\label{lem:qc-sf}
Let $f \colon R \hookrightarrow S$ be a quasi-conformal embedding of
Riemann surfaces with quasi-conformal constant $\le K$, and let $C$
be any multi-curve on~$R$. Then
\[
\EL_S[f(C)] \le K \EL_R[C].
\]
\end{lemma}
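A sketch of the standard argument, in the case of closed multi-curves (which
is all we use below): writing $C = \sum_i a_i C_i$, by
Proposition~\ref{prop:el-total-el}, for any $\varepsilon > 0$ there are
disjoint conformal annuli $A_i \hookrightarrow R$ around the components
of~$C$ with $\sum_i a_i^2 \EL(A_i) \le \EL_R[C] + \varepsilon$. The restriction
of~$f$ to each~$A_i$ is $K$-quasi-conformal, so the annulus
$f(A_i) \subset S$, with its induced conformal structure, satisfies
$\EL(f(A_i)) \le K \EL(A_i)$. Applying Equation~\eqref{eq:el-total-el} to the
annuli $f(A_i)$ around the components of~$f(C)$ gives
$\EL_S[f(C)] \le K\bigl(\EL_R[C] + \varepsilon\bigr)$, and letting
$\varepsilon \to 0$ gives the claim.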
\begin{proof}[Proof of Proposition~\ref{prop:sf-qc}]
We can again assume that $S$ is connected. If $\SF[f] = 1$, the
result is trivial: By Theorem~\ref{thm:emb}, there is a conformal
embedding, which has quasi-conformal constant equal to~$1$.
If $\SF[f] > 1$, then by Theorem~\ref{thm:emb}, the map~$f$ is not
homotopic to a conformal embedding. Ioffe's
Theorem~\ref{thm:ioffe} constructs a $K$-quasi-conformal
map. $\SF[f] \le K$ by Lemma~\ref{lem:qc-sf}, and $\SF[f] \ge
K$ by Equation~\eqref{eq:SF-bound}.
\end{proof}
\section{Strict embeddings}
\label{sec:strict}
We now turn to Theorem~\ref{thm:strict-emb}, on embeddings with stretch
factor strictly less than~$1$. We start with some preliminary lemmas.
\begin{lemma}\label{lem:area-bound}
Let $f \colon R \hookrightarrow S$ be a strict conformal
embedding. Then there is a constant $K<1$ so that for any $q \in
\mathcal{Q}^+(S)$,
\[
A_q(f(R)) \le K A_q(S).
\]
\end{lemma}
\begin{proof}
For any non-zero quadratic differential~$q$ on~$S$, the ratio
$A_q(f(R))/A_q(S)$ is less than~$1$, as the open set
missed by the image of~$f$ has some non-zero area with respect
to~$q$. Then
$A_q(f(R))/A_q(S)$ is a continuous function on the projective space
$P\mathcal{Q}^+(S)$. Since
$P\mathcal{Q}^+(S)$ is compact, the result follows.
\end{proof}
Later, in Theorem~\ref{thm:area-surface}, we will strengthen
Lemma~\ref{lem:area-bound} considerably.
\begin{lemma}\label{lem:annular-sf}
Let $R$ be a compact Riemann surface with a quadratic
differential~$q\in\mathcal{Q}^+(R)$ that is strictly positive on $\partial
R$. Let
$\widehat{R}_t$ be the annular extension of~$R$ obtained by gluing a
Euclidean cylinder of width~$t$ onto each boundary component of~$R$
with respect to the locally Euclidean metric given by~$q$. Then
\[
\lim_{t \to 0} \SF[\widehat{R}_t\to R] = 1,
\]
where $\SF[\widehat{R}_t\to R]$ is the stretch factor of the
obvious homotopy class of topological embeddings.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:sf-qc}, it suffices to construct a family of
quasi-conformal maps
$f_t\colon \widehat{R}_t \to R$ with quasi-conformal constant $K_t$
that approaches~$1$ as $t$ approaches~$0$. The assumption that $q$
is positive on $\partial
R$ guarantees that near each component~$C_i$ of $\partial R$ there is an
annulus $A_i$ foliated by leaves of~$\mathcal{F}_h(q)$, with circumference
$r_i$ and width $w_i$ (with
respect to the Euclidean metric induced
by~$q$). Let $B_{i,t}$ be the
annulus added to this boundary component in $\widehat{R}_t$, and
let $f_t\colon \widehat{R}_t \to R$ be the map that is affine from
$A_i\cup B_{i,t}$ onto $A_i$
and the identity outside of $A_i \cup B_{i,t}$. Then $f_t$
has quasi-conformal constant equal to
\[
\max_i \frac{w_i + t}{w_i},
\]
which goes to~$1$ as $t \to 0$ as desired.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:strict-emb}]
\textbf{(\ref{item:strict-annular}) $\Rightarrow$
(\ref{item:strict-strict}):} An annular conformal embedding is
also a strict conformal embedding, so this is clear.
\textbf{(\ref{item:strict-strict}) $\Rightarrow$
(\ref{item:strict-sf}):} Suppose that $f$ is a strict conformal
embedding, and let $K < 1$ be the constant from
Lemma~\ref{lem:area-bound}. For any multi-curve
$[C]\in\CCurves^+(R)$, let $q\in\mathcal{Q}^+(S)$ be the quadratic
differential that realizes extremal length for $[f(C)]$, and
consider the pull-back metric $\mu = f^*\abs{q}$ on~$R$.
Since $(R, \mu)$ and $(f(R), \abs{q})$ are isometric, but there are
more curves in the homotopy class~$[C]$ on~$S$ than those that lie in
$f(R)$, we have
$\ell_\mu[C] \ge \ell_{\abs{q}}[f(C)]$. Therefore,
\begin{align*}
\EL_R[C] &\ge \frac{\ell_\mu[C]^2}{A_\mu(R)} \ge
\frac{\ell_{\abs{q}}[f(C)]^2}{KA_{\abs{q}}(S)}
= K^{-1} \EL_S[f(C)]
\end{align*}
implying that $\frac{\EL[f(C)]}{\EL[C]} \le K$. Since $C$ was
arbitrary, $\SF[f] \le K$.
\textbf{(\ref{item:strict-sf}) $\Rightarrow$
(\ref{item:strict-annular}):} Suppose that $\SF[f] < 1$. Pick a
quadratic differential~$q\in\mathcal{Q}^+(R)$ that is real and strictly positive on
$\partial R$. Let $\widehat{R}_t$ be the family of annular
extensions of~$R$ with respect to~$q$ as in Lemma~\ref{lem:annular-sf}, and
let $\widehat{f}_t:
\widehat{R}_t \to S$ be the composite topological embeddings.
Then by Proposition~\ref{prop:sf-compose},
\[
\SF[\widehat{f}_t] \le \SF[\widehat{R}_t\to R] \cdot \SF[f].
\]
It follows from Lemma~\ref{lem:annular-sf} that for $t$
sufficiently small, $\SF[\widehat{f}_t] \le 1$, so by
Theorem~\ref{thm:emb}, $\widehat{f}_t$ is homotopic to a
conformal embedding.
\textbf{(\ref{item:strict-sf}) $\Leftrightarrow$
(\ref{item:strict-ball}):} This is a consequence of
Proposition~\ref{prop:sf-dist}, which we prove next.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:sf-dist}]
By compactness of balls in Teichmüller space, it suffices to show,
on one hand, that if $d(S,S') < -\frac{1}{2}\log\SF[f]$, then there
is a conformal embedding of $R$ in~$S'$; and, on the other hand,
that there are surfaces $S'$ with $d(S,S')$ arbitrarily close to
$-\frac{1}{2}\log\SF[f]$ so that $R$ does not conformally embed
in~$S'$.
For the first part, suppose
$d(S, S') < -\frac{1}{2}\log \SF[f]$. Let $\mathrm{id}_{S,S'}$ be the
identity map from the marking. Then
\begin{align*}
\SF[\mathrm{id}_{S,S'} \circ f] \le \SF[f] \cdot \SF[\mathrm{id}_{S,S'}]
\le \SF[f] \cdot \SF^{\pm}[\mathrm{id}_{S,S'}]
= \SF[f] \cdot e^{2d(S,S')}
< 1.
\end{align*}
as desired.
To get the other direction of the inequality, pick $\varepsilon > 0$,
and set $K = e^{\varepsilon}/\SF[f]$ and $\lambda = \frac{K-1}{K+1}$.
Find a simple multi-curve~$C$ on~$R$ near the
supremum defining $\SF$:
\[
\frac{\EL_S[f(C)]}{\EL_R[C]} > e^{-\varepsilon} \SF[f].
\]
Let $q = q_{f(C)}\in \mathcal{Q}^+(S)$ be the associated Jenkins-Strebel
quadratic differential, and set
$\mu = \lambda \cdot {\overline{q}}/{\abs{q}}$ to be an associated
Beltrami differential. Let $S'$ be $S$ stretched by~$\mu$, so that
\[
d(S,S') = \frac{1}{2}\log \SF^\pm[\mathrm{id}_{S,S'}]
= \frac{1}{2}\log \SF[\mathrm{id}_{S,S'}]
= \frac{1}{2}\log\frac{\EL_{S'}[f(C)]}{\EL_S[f(C)]}
= \frac{\log K}{2}
= -\frac{1}{2}\log \SF[f] + \frac{\varepsilon}{2}.
\]
We also have
\[
\SF[\mathrm{id}_{S,S'} \circ f]
\ge \frac{\EL_{S'}[f(C)]}{\EL_R[C]}
> \bigl(e^{\varepsilon}/\SF[f]\bigr)\bigl(e^{-\varepsilon}\SF[f]\bigr)
= 1
\]
so $S' \notin \mathcal{T}_R(S)$.
Since $\varepsilon$ can be chosen arbitrarily small, we get the
desired result.
\end{proof}
\begin{remark}
It follows from the proof that the
stretching to a nearest point on $\partial\mathcal{T}_R(S)$ is
horizontal on the boundary.
\end{remark}
\begin{remark}
The nearest point to~$S$ on $\partial\mathcal{T}_R(S)$ is not always
unique, as we can see from the fact that $\wt\SF[f] \ne \SF[f]$ in
Examples~\ref{examp:cover} and~\ref{examp:cover2} below.
Indeed, let $f \colon R \hookrightarrow S$ be a conformal embedding and
let $\wt f \colon \wt R \hookrightarrow \wt S$ be a regular covering
with $\SF[f] < \SF[\wt f] < 1$. Then if there were a unique nearest
point~$\wt S'$ to $\wt S$ on $\partial \mathcal{T}_{\wt R}(\wt S)$, it
would be invariant under the deck transformations, and so would
descend to give a point~$S'$ on $\partial\mathcal{T}_R(S)$, contradicting
$\SF[f] < \SF[\wt f]$.
\end{remark}
\section{Behavior under finite covers}
\label{sec:covers}
We now turn to the behavior of the stretch factor under finite covers. We
start with some easy statements.
\begin{lemma}\label{lem:el-cover}
Let $\pi \colon \wt{S} \to S$ be a covering map of Riemann surfaces of
finite degree~$d$. For $C$
a weighted multi-curve on~$S$, define $\pi^{-1} C$ to be the
full inverse image of~$C$, with the same weights. Then
$\EL_{\wt{S}}[\pi^{-1} C] = d\EL_S[C]$.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:el-area},
$\EL_S[C] = A_{q_C}(S)$, where $q_C$ is the Jenkins-Strebel
quadratic differential corresponding to~$C$. Then
$\pi^*(q_C)$ is a Jenkins-Strebel quadratic differential
corresponding to $\pi^{-1}(C)$, and so
\[
\EL_{\wt{S}}[\pi^{-1}(C)] = A_{\pi^*(q_C)}(\wt{S}) = d A_{q_C}(S) = d \EL_S[C].\qedhere
\]
\end{proof}
\begin{lemma}\label{lem:SF-cover-inc}
For $\wt{f}$ a finite cover of $f\colon R \hookrightarrow S$, we have
$\SF[\wt{f}] \ge \SF[f]$.
\end{lemma}
\begin{proof}
Follows from Lemma~\ref{lem:el-cover} and the definition of $\SF$,
as the supremum involved in computing $\SF[\wt{f}]$ is over a larger
set.
\end{proof}
\begin{proposition}\label{prop:SF-large-cover}
If $f \colon R \hookrightarrow S$ is a topological embedding of Riemann
surfaces with $\SF[f] \ge 1$ and $\wt f$ is a finite cover of~$f$ in the
sense of Definition~\ref{def:cover-map}, then $\SF[\wt f] = \SF[f]$.
\end{proposition}
\begin{proof}
If $\SF[f] = 1$, the result follows from
Lemma~\ref{lem:SF-cover-inc} and Theorem~\ref{thm:emb}.
If $\SF[f] > 1$,
by Proposition~\ref{prop:sf-qc} $\SF[f]$ is the minimal
quasi-conformal constant of any map homotopic to~$f$, which by
Theorem~\ref{thm:ioffe} is given by a Teichmüller embedding~$g$. Let
$\wt{g}$ be the corresponding
cover of~$g$. Then $\wt{g}$ is also a Teichmüller embedding with the same
quasi-conformal constant, and so $\SF[\wt{f}]$ is the
quasi-conformal constant of~$\wt{g}$.
\end{proof}
\begin{remark}
Proposition~\ref{prop:SF-large-cover} relies on $\wt{f}$ being a cover
of finite degree of~$f$. McMullen \cite[Corollary 1.2]{McMullen89:Amenable}
shows that, in the case that $R$ and $S$ are closed surfaces, $f$
is a Teichmüller map, and $\wt{f}$ is a non-amenable cover of~$f$,
then $\wt{f}$ does \emph{not} minimize the quasi-conformal
distortion in its bounded homotopy class.
\end{remark}
\begin{proposition}\label{prop:sf-cover-1}
For $\wt{f}$ a finite cover of $f\colon R \hookrightarrow S$,
the quantity $\SF[\wt{f}]$ is less than one, equal to one, or
greater than one exactly when $\SF[f]$ is less than one, equal to one, or
greater than one.
\end{proposition}
\begin{proof}
If $\SF[f] < 1$, by Theorem~\ref{thm:strict-emb}, $f$
is homotopic to a
strict conformal embedding. Since a cover of a strict conformal
embedding is a strict conformal embedding, we have $\SF[\wt{f}] < 1$.
The other cases follow from Proposition~\ref{prop:SF-large-cover}.
\end{proof}
Although there is some good behavior, it is not true in general that
$\SF[\wt{f}]=\SF[f]$.
\begin{example}\label{examp:cover}
Let $R$ and~$S$ both be disks with two
points removed, with $f \colon R \to S$ a strict conformal embedding
and $g \colon S \to R$ a homotopy inverse. The surfaces~$R$ and~$S$
have, up to equivalence and scale, only one non-trivial
simple multi-curve (the boundary-parallel curve), so
$\SF[f] = 1/\SF[g]$. Also, $\SF[f] < 1$, since $f$ was assumed to be
a strict conformal embedding. Now take any non-trivial finite
cover~$\wt R$ of~$R$ and the corresponding cover~$\wt S$ of~$S$. Let
the corresponding topological
embeddings be $\wt{f} \colon \wt{R}\to\wt{S}$ and $\wt g \colon \wt S \to \wt
R$. Since $\SF[g] > 1$, by Proposition~\ref{prop:SF-large-cover} we
have $\SF[\wt{g}] = \SF[g]$, with the supremum in the definition of
stretch factor realized by a symmetric multi-curve. By
Theorem~\ref{thm:ioffe}, the
quadratic differentials realizing this stretch factor are
\emph{unique}, so
for \emph{any} non-symmetric multi-curve~$C$ on~$\wt S$ (or equivalently
$\wt R$), we have
\[
\SF[g] = \SF[\wt{g}] > \frac{\EL_{\wt R}[C]}{\EL_{\wt S}[C]}.
\]
But then
\[
\SF[\wt{f}] \ge \frac{\EL_{\wt S}[C]}{\EL_{\wt R}[C]} > 1/\SF[g] = \SF[f].
\]
\end{example}
\begin{example}\label{examp:cover2}
The previous example can be improved to give examples with an
arbitrarily large gap between $\SF$ and $\wt\SF$: for any
$0 < \varepsilon < \delta < 1$, there is an embedding
$f \colon R \hookrightarrow S$ and a two-fold cover $\wt f$ so that
$\SF[f] < \varepsilon$ and $\SF[\wt f] > \delta$. This example is due
to Maxime Fortier Bourque. Let $R_t$ be the disk with two punctures
obtained by doubling a $t \times 1$ rectangle along three of its
sides, and let $S_t$ be
the double cover of~$R_t$ branched along one of the two punctures.
Then for $s < t$ the embedding $S_s \hookrightarrow S_t$ is a cover
of the embedding $R_s \hookrightarrow R_t$.
Let $C_1$ be the only non-trivial curve on $R_t$, the curve parallel
to the boundary as shown on the
left of Figure~\ref{fig:cover2}.
Let $C_2$ be the non-symmetric curve on~$S_t$
shown on the right of Figure~\ref{fig:cover2}. By construction,
$\EL_{R_t}[C_1] = 2/t$.
As $t \to \infty$,
the surface $S_t$ approaches a sphere with 4~punctures, specifically
the double of a square. The curve $C_2$ is non-trivial on the
4-punctured sphere, and so its extremal length approaches a definite
value:
\[
\lim_{t \to \infty} \EL_{S_t}[C_2] = 2.
\]
Thus, for $t \gg s \gg 0$, we have
\begin{align*}
\SF[R_s \hookrightarrow R_t] &= \frac{2/t}{2/s} = \frac{s}{t}\\
\SF[S_s \hookrightarrow S_t] &\ge \frac{\EL_{S_t}[C_2]}{\EL_{S_s}[C_2]}
\rightarrow 1,
\end{align*}
as desired.
With a little more care, one can show that
$\EL_{S_t}[C_2] \approx 2(1 + Ke^{-\pi t/2})$ for some
constant~$K$. This uses the uniformization of $S_\infty$ to the double
of a square by the composition of $z \mapsto \sin(\pi i z/2)$ and
$z \mapsto \int_{w=0}^z dw/\sqrt{w^3-w}$.
\end{example}
\begin{figure}
\caption{The surfaces from Example~\ref{examp:cover2}.}
\label{fig:cover2}
\end{figure}
In order to prove Theorem~\ref{thm:sf-cover}, we need some extra
control: a strengthening of Lemma~\ref{lem:area-bound}.
\begin{theorem}\label{thm:area-surface}
Let $f\colon R \hookrightarrow S$ be an annular conformal
embedding of Riemann surfaces. Then there is a
constant $K < 1$ so that for any quadratic differential $q \in
\mathcal{Q}(S)$,
\[
A_{f^* q}(R) \le K A_q(S).
\]
Furthermore, the constant $K$ can be chosen uniformly under
finite covers, in the sense that for any finite cover
$\wt{f}\colon \wt{R}\to\wt{S}$
of~$f$ and any
quadratic differential $\wt{q} \in \mathcal{Q}(\wt{S})$,
\[
A_{\wt{f}^* \wt{q}}(\wt{R}) \le K A_{\wt{q}}(\wt{S}).
\]
\end{theorem}
The technique in Lemma~\ref{lem:area-bound} will not work to prove
Theorem~\ref{thm:area-surface}, as
$\mathcal{Q}(S)$ is infinite-dimensional. (That bound is also not uniform
under covers.)
As in Lemma~\ref{lem:area-bound}, $K$ depends on the actual embedding,
not just the
homotopy class of the embedding.
When $S$ is a disk, Theorem~\ref{thm:area-surface} is not hard.
For $a \in \mathbb{C}$ and $r>0$, we denote by
$\DD(a,r)=\{z:\abs{z-a}<r\}$ the open disk of radius $r$ about $a$.
\begin{proposition}\label{prop:small-area}
Let $\Omega \subset \mathbb D$ be an open subset of the disk so that
$\overline{\Omega} \cap \partial\mathbb D = \emptyset$. For any
quadratic differential $q\in\mathcal{Q}(\mathbb D)$,
\[
A_q(\Omega) \le r^2 A_q(\mathbb D),
\]
where $r < 1$ is any radius with $\Omega \subset \DD(0,r)$.
\end{proposition}
Proposition~\ref{prop:small-area} is a special case of
Proposition~\ref{prop:disk-area} below, but we give a separate proof
because we can give a precise constant.
\begin{proof}
Let $r_0$ be the smallest value so that
$\Omega \subset \DD(0,r_0) \subset \mathbb D$, and let $q \in \mathcal{Q}(\mathbb D)$
be arbitrary. For $0 \le r \le 1$, we will show that
$A_q(\DD(0,r)) \le r^2 \cdot A_q(\mathbb D)$, so that $K = r_0^2$
suffices. Define
\begin{align*}
I(r) &= \int_{\theta=0}^{2\pi} \abs{q(r e^{i\theta})} \, d\theta\\
J(r) &= \int_{s=0}^r s I(s)\,ds = A_q(\DD(0,r)),
\end{align*}
where we are writing $q = q(z)\,(dz)^2$ with $q(z)$ a holomorphic
function. The function $z\mapsto\abs{q(z)}$ is subharmonic, so if $s
< r$, we have $I(s) \le I(r)$. (We would have equality between the
corresponding integrals if $\abs{q(z)}$ were harmonic; see, e.g.,
\cite[p.\ 142]{Burckel79:IntroComplex}). We therefore
have $J(r) = \int_{s=0}^r s I(s)\,ds \le r^2 I(r)/2$, and so
\[
\frac{d}{dr} \frac{J(r)}{r^2}
= \frac{r J'(r) - 2J(r)}{r^3}
\ge \frac{r^2 I(r) - r^2 I(r)}{r^3} = 0.
\]
It follows that $J(r)/r^2 \le J(1)$, as desired.
\end{proof}
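For instance (a check that the constant is sharp), taking $q = (dz)^2$ and
$\Omega = \DD(0,r)$ with $r < 1$ gives
$A_q(\Omega) = \pi r^2 = r^2 A_q(\mathbb D)$, so the factor $r^2$ in
Proposition~\ref{prop:small-area} cannot be improved.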
Proposition~\ref{prop:small-area} is false if $\overline{\Omega}$ is
allowed to intersect $\partial\mathbb D$. Suppose $\Omega$ contains a
neighborhood of a segment of $\partial\mathbb D$, and let $w$ be a point
very close to this segment. By a conformal automorphism $\phi$ of $\mathbb D$, we
can take $w$ to the center of the disk. Then
$\bigl(d\phi(z)\bigr)^2$ will have its measure concentrated near
$w\in\Omega$, as illustrated in
Figure~\ref{fig:disk-trans}.
\begin{figure}
\caption{Möbius transformations to make the area of a quadratic
differential be concentrated near a point $w$ that is close to
$\partial\mathbb D$.}
\label{fig:disk-trans}
\end{figure}
The following proposition says that this is all that can happen: if
the mass of $q$ on $\Omega$ gets large, then the mass of $q$ is
concentrating near $\partial\mathbb D$.
\begin{proposition}\label{prop:disk-area}
Let $\Omega \subset \DD$ be an open subset of the disk with an open
set~$A$
in its complement, and let $B \subset \overline{\DD}$ be a neighborhood
of $\overline{\Omega} \cap \partial \overline{\DD}$, as illustrated in
Figure~\ref{fig:surface-decomp}. Then, for every
$\varepsilon>0$, there is a $\delta>0$ so that if $q \in \mathcal{Q}(\DD)$
is such that $q \ne 0$ and
\begin{align*}
\frac{A_q(\Omega)}{A_q(\DD)} &> 1-\delta,
\shortintertext{then}
\frac{A_q(B)}{A_q(\DD)} &> 1-\varepsilon.
\end{align*}
\end{proposition}
\begin{figure}
\caption{The schematic setup of Proposition~\ref{prop:disk-area}.}
\label{fig:surface-decomp}
\end{figure}
The proposition implies that given a sequence $q_n \in
\mathcal{Q}(\DD)$, if the percentage of the $\abs{q_n}$-area of $\DD$
occupied by $\Omega$ tends to $1$, then the percentage of the
$\abs{q_n}$-area occupied by the set $B$ of ``thickened ends of $\Omega$''
also tends to $1$. Figure~\ref{fig:disk-trans} again provides an
example of how this happens.
We give two versions of the proof, one shorter, and the other more
explicit and giving (poor) bounds on the constants.
\begin{proof}[Proof of Proposition~\ref{prop:disk-area}, version 1]
If the conclusion of the proposition fails, then there exist an $\varepsilon$
with $0 < \varepsilon < 1$ and
a sequence of quadratic differentials $q_n\in\mathcal{Q}(\mathbb D)$ so that
\begin{align}
A_{q_n}(\mathbb D) &= 1\\
A_{q_n}(B) &< 1-\varepsilon\label{eq:B-small}\\
A_{q_n}(\Omega) &> 1-1/n.\label{eq:omega-large}
\end{align}
Consider $A_{q_n}$ as a measure on $\overline{\mathbb D}$. Since the
space of measures of unit area on the closed disk is compact in the
weak topology, after passing to a subsequence we may assume that
$A_{q_n}$ converges (weakly) to some limiting measure
$\mu$ (of total mass~$1$) on~$\overline{\mathbb D}$. Since holomorphic
functions on the disk
that are also in $L^1(\mathbb D)$ form a normal family,
after passing to a further subsequence, we may assume that the sequence~$q_n$
converges locally uniformly to some holomorphic function~$q_\infty$
on~$\mathbb D$. The
restriction of~$\mu$ to the open disk is then~$A_{q_\infty}$. But
$A_{q_n}(A) < 1/n$, so
$A_{q_\infty}(A) = 0$, so $q_\infty$ is identically~$0$ on~$A$ and
therefore on the entire open disk. Hence
$\mu$ is supported on $\partial\overline{\DD}$. Equation~\eqref{eq:omega-large}
implies that the support of $\mu$ is also contained in
$\overline{\Omega}$, and hence in
$\overline{\Omega}\cap\partial\overline{\DD}$. But this contradicts
Equation~\eqref{eq:B-small}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:disk-area}, version 2]
Apply a Möbius transformation so that $A$ contains~$0$. We may then
assume that $\Omega \subset \DD \setminus \overline{\DD(0,2r_0)}$ for
some $0 < r_0 < 1/2$.
We identify the space $\mathcal{Q}(\DD)$ of integrable holomorphic
quadratic differentials on $\DD$ with the Banach space of
$L^1$-integrable holomorphic functions on $\DD$, so that
$A_q(\DD)=\int_\DD\abs{q}=\norm{q}$.
Suppose $q \in \mathcal{Q}(\DD)$ satisfies $A_q(\DD)=1$. We will
quantitatively show that the $q$-area of a small ball controls the
$q$-area of a big ball.
Suppose $s$ is chosen close to~$1$ with $r_0<s<1$. Suppose
$\abs{z}\leq s$.
The Cauchy Integral Formula applied to the
concentric circles comprising the disk $\DD(z,1-s)$ shows that
\[ \abs{q(z)} \leq \frac{1}{\pi(1-s)^2}\int_{\DD(z,1-s)}\abs{q}=\frac{1}{\pi(1-s)^2}A_q(\DD(z, 1-s)),\]
i.e., $\abs{q}$ is subharmonic.
Using the assumption that $A_q(\DD)=1$, this implies
\begin{align}
\label{eqn:cif1}
\abs{z} \leq s &\implies \abs{q(z)} \le K(s):=\frac{1}{\pi(1-s)^2}.\\
\intertext{Similar reasoning shows}
\label{eqn:cif2}
\abs{z} \leq r_0 &\implies \abs{q(z)} \leq \frac{1}{\pi r_0^2} A_q(\DD(0,2r_0)).
\end{align}
For $0<t<1$, let $M_q(t)$ be $\max\{\abs{q(z)} : \abs{z}=t\}$. The
Hadamard Three Circles Theorem \cite[Theorem
6.3.13]{Conway78:Complex} implies that $\log M_q$ is a convex
function of $\log t$. Thus if $r$ and $r_1$ are chosen so that $r_0
\leq r \leq r_1 < s$ then
\begin{align*}
\log M_q(r) &\le \log M_q(r_0)+\frac{\log M_q(s)-\log M_q(r_0)}{\log s-\log r_0}(\log r-\log r_0) \\
& \le \log M_q(r_0)+\frac{\log K(s) - \log M_q(r_0)}{\log s-\log r_0}(\log r_1-\log r_0)\\
& = \left(1-\frac{\log r_1 - \log r_0}{\log s- \log r_0}\right) \log M_q(r_0)+ \log K(s) \frac{\log r_1 - \log r_0}{\log s- \log r_0}\\
& = K_1 \log M_q(r_0) + K_2
\end{align*}
where $K_1$ and $K_2$ are constants, with $K_1>0$, depending only on
$r_0$, $r_1$, and~$s$, and not on~$q$. It follows from (\ref{eqn:cif2}) that
there are positive constants $c_1$ and $c_2$ depending only on $r_0$,
$r_1$, and $s$ with
\begin{equation}
\label{eqn:compare_area}
A_q(\DD(0,r_1)) < c_2 A_q(\DD(0,2r_0))^{c_1}.
\end{equation}
Now suppose that $\delta$ is small, $0<\delta<1$, and
$A_q(\DD\setminus\Omega)<\delta$. Note that this implies that
$A_q(\DD(0,2r_0))<\delta$. For $0<r_1<1$, let $E=\DD\setminus\DD(0,r_1)$ be the
corresponding annulus. From the definition of $B$, we may fix some
$r_1$ with $0<r_1<1$ close to $1$ for which $E\cap\Omega \subset E\cap
B$. Choose $s$ so that $r_0<r_1<s<1$; we are then in the setup of the
previous paragraph. We have
\begin{align*}
1-c_2\delta^{c_1} & < A_q(E) && \mbox{by (\ref{eqn:compare_area})} \\
& = A_q(E\cap(\DD\setminus \Omega)) + A_q(E \cap \Omega) \\
& < A_q(\DD \setminus \Omega) + A_q(E\cap B) \\
& < \delta + A_q(B)
\end{align*}
and so $A_q(B)>1-c_2\delta^{c_1} - \delta$, which tends to $1$ as
$\delta$ tends to~$0$, as required.
\end{proof}
We also need an analogue of Proposition~\ref{prop:disk-area} for the
once-punctured disk. (In fact it is true
in more generality.)
\begin{proposition}\label{prop:punct-disk-area}
Let $\mathbb D^\times$ be the punctured unit disk $\mathbb D \setminus \{0\}$,
let $\Omega \subset
\mathbb D^\times$ be an open subset with an open set~$A$ in its
complement, and let $B \subset \overline{\mathbb D}^\times$ be an open
neighborhood of $\overline{\Omega}
\cap \partial\overline{\mathbb D}$.
Then, for every
$\varepsilon>0$, there is a $\delta>0$ so that if $q \in \mathcal{Q}(\DD^\times)$
is such that $q \ne 0$ and
\begin{align*}
\frac{A_q(\Omega)}{A_q(\DD)} &> 1-\delta,
\shortintertext{then}
\frac{A_q(B)}{A_q(\DD)} &> 1-\varepsilon.
\end{align*}
\end{proposition}
\begin{proof}
Let $s \colon \DD \to \DD$ be the squaring map $s(z) = z^2$. We can apply
Proposition~\ref{prop:disk-area} to the tuple
$(s^{-1}(\Omega), s^{-1}(A), s^{-1}(B))$. For every quadratic
differential $q
\in \mathcal{Q}(\DD^\times)$ with at most a simple pole at~$0$,
$s^* q$ is a quadratic differential on~$\DD^\times$ with no pole,
and can thus be considered as a quadratic differential on~$\DD$.
Since for any $X \subset \DD^\times$,
\[
A_{s^*q}(s^{-1}(X)) = 2A_q(X),
\]
the area bounds for $s^* q$ on $s^{-1}(\Omega)$ and $s^{-1}(B)$
imply the same bounds for $q$ on $\Omega$ and~$B$, as desired.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:area-surface}]
For simplicity, if $S$ has no boundary or has non-negative
Euler characteristic, remove disks from
$S\setminus R$ until it has boundary and negative
Euler characteristic. Then enlarge $R$ until it is equal to
$S$ minus an $\varepsilon$-neighborhood of $\partial S$,
and think about $R$ as a subset of~$S$.
Now choose a maximal set of simple, non-intersecting and
non-parallel arcs $\{\gamma_i\}_{i=1}^{k}$ on~$S$. These will divide
$S$ into a collection of half-pants (i.e., hexagons) and once-punctured
bigons; arrange the arcs so that they divide~$R$ in the same way,
as illustrated in Figure~\ref{fig:surface-decomp}. Let
$\{P_j\}_{j=1}^{\ell}$ be the connected components of
$S \setminus \bigcup \gamma_i$, and let $G_i$ be small disjoint
tubular
neighborhoods of the $\gamma_i$ inside $S$. Let $P_j' = P_j \cap
R$ and let $G = \bigcup_i G_i$. As detailed below, we can apply
Propositions~\ref{prop:disk-area} or~\ref{prop:punct-disk-area} to
each triple
$\bigl(P_j, P_j', P_j \cap G\bigr)$ to show that if the area
of a sequence of quadratic differentials~$q_n$ on~$S$ is concentrating
within~$R$, then
it is actually concentrating within $G$.
\begin{figure}
\caption{Two decompositions of $S$ and~$R$. In this
example, $S$ is a sphere with 4 holes and one puncture and
$R$ is a smaller copy of~$S$ shaded in red. The
arcs $\gamma_i$ (solid, in green) divide the two surfaces into
half-pants and a once-punctured bigon. The tripods $\tau_j$ (dashed,
in blue) divide the
two surfaces into rectangles and a once-punctured bigon.}
\label{fig:surface-decomp}
\end{figure}
We also pick another decomposition of $R$ and~$S$ into
disks. Within each half-pants among the~$P_j$, pick a tripod
$\tau_j$ with
ends on the three components of $P_j \cap \partial S$ and
intersecting $\partial R$ in three points, as
in Figure~\ref{fig:surface-decomp}; ensure
that $\tau_j$ is disjoint from $\overline{G}$. Let
$\{Q_i\}_{i=1}^{k}$ be the connected components of
$S \setminus \bigcup_j \tau_j$. Each $Q_i$ is a rectangle or a
once-punctured bigon. Pick a
small tubular neighborhood $T_j$ of
$\tau_j$, small enough that each $T_j$ and $G_i$ are
disjoint. Let $Q_i' = Q_i \cap R$ and $T = \bigcup_j T_j$.
Propositions~\ref{prop:disk-area} and~\ref{prop:punct-disk-area}
will again show that if
the area of a sequence of quadratic differentials on~$S$ is
concentrating
within~$R$, then it is concentrating within $T$; but this
is a contradiction, as $G$ and $T$ are
disjoint.
We now give the concrete estimates alluded to above. Since all areas
are with respect to an arbitrary quadratic differential~$q \in
\mathcal{Q}(S)$, we will omit
it from the notation for brevity. For
each~$j$, the triple $(\overline{P_j}, P_j', G \cap P_j)$ is either a
triple like $(\mathbb D, \Omega, B)$ as in the statement of
Proposition~\ref{prop:disk-area} or a triple like $(\mathbb D^\times,
\Omega, B)$ as in the statement of
Proposition~\ref{prop:punct-disk-area}. We can thus find
$\delta_j$ according to the
propositions so that if $A(P_j') > (1-\delta)A(P_j)$,
then $A(G \cap P_j) > (3/4)A(P_j)$. Let $\delta \coloneqq \min_j
\delta_j$ and $\delta' \coloneqq \delta/4$.
\begin{claim}
If $A(R) > (1-\delta')A(S)$, then $A(G) > \frac{1}{2}\cdot A(S)$.
\end{claim}
\begin{proof}
Let $J \subset \{1,\dots,\ell\}$ be the subset of indices $j$ so
that $A(P_j') > (1-\delta)A(P_j)$, and let
\begin{align*}
P_J &\coloneqq \bigcup_{j \in J} P_j & P_J' &\coloneqq P_J \cap R\\
P_{\overline{J}} &\coloneqq \bigcup_{j \notin J} P_j & P_{\overline{J}}' &\coloneqq P_{\overline{J}} \cap R.
\end{align*}
Then we have
\begin{equation*}
(1-\delta')A(S) - A(P_J') < A(P_{\overline{J}}')
\le (1-\delta)A(P_{\overline{J}})
< (1-\delta)(A(S) - A(P_J'))
\end{equation*}
which simplifies to
\begin{equation*}
A(P_J') > \frac{\delta - \delta'}{\delta}A(S) =
\frac{3}{4}\cdot A(S).
\end{equation*}
On the other hand, by the choice
of~$\delta$, we have $A(G \cap P_J) > (3/4)A(P_J)$,
so
\begin{equation*}
A(G) \ge A(G \cap P_J)
> \frac{3}{4}\cdot A(P_J)
\ge \frac{3}{4}\cdot A(P_J')
> \frac{3}{4}\cdot \frac{3}{4} \cdot A(S)
> \frac{1}{2} \cdot A(S). \qedhere
\end{equation*}
\end{proof}
An exactly parallel argument shows that there is a $\delta'' > 0$ so
that if $A(R) > (1 - \delta'')A(S)$, then $A(T) >
\frac{1}{2}\cdot A(S)$. Since $G \cap T = \emptyset$, this implies that
$A(R) \le (1-\min(\delta',\delta''))A(S)$, proving the first
statement of the theorem.
Note that the crucial constants $\delta'$ and $\delta''$ were
defined as a minimum over the triples
$(P_j,P_j',G \cap P_j)$ and $(Q_i, Q_i', T \cap Q_i)$. On a
finite cover $\wt{f} \colon \wt{R} \hookrightarrow \wt{S}$ of $f$,
we can take arcs $\wt\gamma_i$ and tripods $\wt\tau_j$ to be lifts
of $\gamma_i$ and $\tau_j$, respectively. Then the triples on
$\wt{S}$ are lifts of the triples on $S$, and the same
estimate works for~$\wt{f}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:sf-cover}]
If $\SF[f] \ge 1$, we have already proved the result in
Proposition~\ref{prop:SF-large-cover}. If $\SF[f] < 1$, by
Theorem~\ref{thm:strict-emb} we may assume that $f$ is an annular conformal
embedding. Let $K$ be the constant from
Theorem~\ref{thm:area-surface} for the map~$f$. We must show that
for any finite cover $\wt{f}\colon \wt{R} \to \wt{S}$ of~$f$ and any simple
multi-curve~$\wt{C}$ on~$\wt{R}$,
\[
\frac{\EL_{\wt{S}}[\wt{f}(\wt{C})]}{\EL_{\wt{R}}[\wt{C}]} < K.
\]
Let $\wt{q}$ be the quadratic differential realizing the extremal
length of $[\wt{f}(\wt{C})]$. Then, as in the proof of
Theorem~\ref{thm:strict-emb},
\[
\EL_{\wt{R}}[\wt{C}]
\ge\frac{\ell_{\wt{f}^*\abs{\wt{q}}}[\wt{C}]^2}{A_{\wt{f}^*\abs{\wt{q}}}(\wt{R})}
\ge\frac{\ell_{\abs{\wt{q}}}[\wt{f}(\wt{C})]^2}{K A_{\abs{\wt{q}}}(\wt{S})}
= K^{-1} \EL_{\wt{S}}[\wt{f}(\wt{C})]. \qedhere
\]
\end{proof}
\section{Future challenges}\label{sec:challenges}
There are several obvious questions raised by Theorems~\ref{thm:emb},
\ref{thm:strict-emb}, and~\ref{thm:sf-cover}.
The first is an analogue of Proposition~\ref{prop:SF-large-cover} when
$\SF[f] < 1$.
\begin{problem}\label{prob:inf-SF}
Give an intrinsic characterization of $\wt\SF[f]$ for general maps
$f \colon R \to S$ between Riemann surfaces as an infimum, not
just when $\wt\SF[f] \ge 1$.
\end{problem}
To elaborate a little, $\SF$ and $\wt\SF$ are defined as maxima. It
would be much easier to find upper bounds (as in the hard direction of
Theorem~\ref{thm:sf-cover}) if there were an alternate definition of
$\wt\SF$ as a minimum.
For example, there are two characterizations of extremal length: as a
maximum over metrics (Definition~\ref{def:el-curves}) and
as a minimum over embeddings of annuli (Proposition~\ref{prop:el-total-el}).
When $\SF[f] \ge 1$, Proposition~\ref{prop:SF-large-cover} serves this role.
When $\SF[f] < 1$, there are many different conformal embeddings
$R \hookrightarrow S$ in the homotopy class~$[f]$. The space of such
conformal embeddings is path-connected \cite{FB18:Couch}. One could
attempt to find a canonical embedding by, for instance, gluing annuli
to the boundary components of~$R$ \cite{EM78:ConfEmbeddings}. But this
embedding seems ill-suited to give tight bounds on $\SF[f]$ or
$\wt\SF[f]$. Ideally one would want a notion of ``map with
quasi-conformal constant less than one'', but that is nonsensical.
Instead, it seems likely we need to consider
some sort of ``smeared'' maps: maps from $R$ to
probability distributions on~$S$.
\begin{problem}\label{prob:SF-smeared}
Find an energy of smeared maps $g \colon R \to \mathcal{M}(S)$
whose minimum value is $\wt\SF[f]$.
\end{problem}
As an example of what we mean, we give one way to get an explicit upper
bound on $\wt\SF[f]$.
\begin{definition}
A homotopy class of topological embeddings $[f] \colon R
\hookrightarrow S$ between
Riemann surfaces is \emph{conformally loose} if, for all $y \in \overline{S}$,
there is a conformal embedding $g \in [f]$ so that
$y \notin \overline{g(R)}$.
Since $\overline{S}$ is compact, if $[f]\colon R \to S$ is conformally
loose we
can find finitely many conformal embeddings
$f_i \in [f], i = 1,\dots,n$ so that
\begin{equation}
\bigcap_{i=1}^n \overline{f_i(R)} = \emptyset.
\label{eq:finite-loose}
\end{equation}
In this case, we say that $[f]$ is \emph{$n$-loose}.
\end{definition}
\begin{proposition}\label{prop:sf-loose-bound}
If $[f]\colon R \hookrightarrow S$ is $n$-loose, then
$\wt\SF[f] \le 1-1/n$.
\end{proposition}
\begin{proof}
If $f$ is $n$-loose, then all covers are also $n$-loose. So
it suffices to prove that $\SF[f] \le 1-1/n$.
Let $(f_i)_{i=1}^n$ be the $n$ different embeddings from
Equation~\eqref{eq:finite-loose}. For a simple multi-curve
$C\in\CCurves^+(R)$, let $q = q_{f(C)} \in
\mathcal{Q}^+(S)$ be the quadratic differential corresponding to
$f(C)$ from Theorem~\ref{thm:quad-diff-height}. For at least
one~$i$, we will have
\[\frac{A_q(f_i(R))}{A_q(S)} \le 1-1/n\]
by Lemma~\ref{lem:empty-intersect} below.
Then the argument from case \eqref{item:strict-strict} $\Rightarrow$
\eqref{item:strict-sf} of the proof of Theorem~\ref{thm:strict-emb}
shows that $\EL_R[C] \le (1-1/n)\EL_S[f(C)]$, as desired.
\end{proof}
\begin{lemma}\label{lem:empty-intersect}
If $A_1, \dots, A_n \subset X$ are $n$ subsets of a measure
space~$X$ so that $\bigcap_{i=1}^n A_i = \emptyset$, then for at
least one~$i$ we must have $\mu(A_i) \le (1-1/n) \mu(X)$.
\end{lemma}
\begin{proof}
This follows from the continuous pigeonhole principle: since
$\bigcap_{i=1}^n A_i = \emptyset$, the complements $X \setminus A_i$
cover $X$, so $\sum_{i=1}^n \mu(X \setminus A_i) \ge \mu(X)$, and hence
$\mu(X \setminus A_i) \ge \mu(X)/n$ for at least one~$i$.
\end{proof}
In the language of Problem~\ref{prob:SF-smeared}, if $[f]$ is $n$-loose,
then the averaged map
\[
g(x) = \frac{1}{n} \sum_{i=1}^n f_i(x)
\]
is a smeared map from $R$ to~$S$. Likewise, if $\wt{f} \colon \wt{R} \to
\wt{S}$ is $n$-loose where $q \colon \wt{R} \to R$ is a finite cover of
degree~$k$, then the averaged map
\[
g(x) = \frac{1}{nk}\sum_{q(\wt{x}) = x} \,\sum_{i=1}^n \wt{f}_i(\wt{x})
\]
is a smeared map from $R$ to~$S$.
\begin{conjecture}\label{conj:loose}
If $f \colon R \to S$ is a strict conformal embedding of Riemann
surfaces where $S$ has no punctures, there is some finite
cover~$\wt f$
of~$f$ that is conformally loose.
\end{conjecture}
If $[f]$ maps a puncture~$x$ of~$R$ to a puncture~$y$ of~$S$, a
neighborhood of~$y$ is in the image of every map in~$[f]$, so
$[f]$ can never be conformally loose. In this case we could
pass to a branched double cover as in the proof of
Proposition~\ref{prop:punct-disk-area}.
\begin{remark}
In Problems~\ref{prob:inf-SF} and~\ref{prob:SF-smeared}, it may be
that $\wt\SF[f]$ is not
the most natural quantity to consider; there may be a more natural
quantity that bounds $\wt\SF[f]$ from above and is less than one
when $\wt\SF[f]$ is less than one.
\end{remark}
\end{document}
\begin{document}
\title{\large \bf ON THE AVERAGE SUM OF THE $K$-TH DIVISOR FUNCTION OVER VALUES OF QUADRATIC POLYNOMIALS}
\begin{abstract}Let $F({\bf x})\in\mathbb{Z}[x_1,x_2,\dots,x_n]$ be a quadratic polynomial in $n\geq 3$ variables with a nonsingular quadratic part. Using the circle method we derive an asymptotic formula for the sum
$$
\Sigma_{k,F}(X; {\mathcal{B}})=\sum_{{\bf x}\in X\mathcal{B}\cap\mathbb{Z}^{n}}\tau_{k}\left(F({\bf x})\right),
$$
for $X$ tending to infinity, where $\mathcal{B}\subset\mathbb{R}^n$ is an $n$-dimensional box such that
$\min\limits_{{\bf x}\in X\mathcal{B}}F({\bf x})\ge 0$ for all sufficiently large $X$, and $\tau_{k}(\cdot)$ is the $k$-th divisor
function for any integer $k\ge 2$.
\end{abstract}
\tableofcontents
\section{Introduction}
The $k$-th divisor function is a generalisation of the divisor function $\tau(m)=\sum_{d|m}1$: it counts the number of ways $m$
can be written as an ordered product of $k$ positive integers. It is defined as
$$
\tau_{k}(m)=\#\{(x_1,x_2,...,x_{k})\in\mathbb Z_+^{k}: m=x_1x_2...x_{k}\},
$$
where we assume that $\tau_k(0)=0$. For polynomials $F({\bf x})\in \mathbb Z[x_1,\ldots,x_n]$ consider the sums
$$T_k(F({\bf x}),X)=\sum_{\left|F({\bf x})\right|\le X}\tau_k(\left|F({\bf x})\right|)\,.$$ Understanding the average order of $\tau_k(m)$,
as it ranges over sparse sequences of values taken by polynomials, i.e. of $T_k(F,X)$, is a problem that has received a lot of attention.
\newline
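Purely for illustration (this plays no role in the arguments of the paper),
$\tau_k(m)$ can be computed by the naive recursion
$\tau_k(m)=\sum_{d\mid m}\tau_{k-1}(m/d)$; the following sketch, with assumed
small inputs, does exactly that.
\begin{verbatim}
# Naive computation of tau_k(m), the number of ordered factorisations
# m = x_1 * ... * x_k into positive integers, with tau_k(0) = 0.
from functools import lru_cache

@lru_cache(maxsize=None)
def tau(k, m):
    if m == 0:
        return 0
    if k == 1:
        return 1
    # tau_k(m) = sum over d | m of tau_{k-1}(m // d)
    return sum(tau(k - 1, m // d) for d in range(1, m + 1) if m % d == 0)

assert tau(2, 12) == 6   # the six divisors of 12
assert tau(3, 4) == 6    # (1,1,4),(1,4,1),(4,1,1),(1,2,2),(2,1,2),(2,2,1)
\end{verbatim}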
The most studied case is naturally $k=2$. For $F({\bf x})=F(x_1,x_2)$ a binary irreducible cubic form Greaves \cite{MR0263761}
showed that there exist real constants $c_1>0$ and $c_2$ depending only on $F$, such that
$$
T_2(F({\bf x}),X)= c_1X^{2/3}\log X+c_2X^{2/3} +O_{\varepsilon, F}(X^{9/14+\varepsilon}),
$$
holds for any $\varepsilon>0$ as $X\mathrm ightarrow \infty$. If $F(x_1,x_2)$ is an irreducible quartic form, Daniel \cite{MR1670278}
proved that
$$
T_2(F({\bf x}),X)= c_1X^{1/2}\log X+O_{F}(X^{1/2}\log\log X),
$$
where $c_1>0$ is a constant depending only on $F$. It seems that $\deg F=4$ is the limit of the currently available methods for treating
divisor sums over binary forms. Further related works on the case $k=2$, $n=2$ include de la Bret{\`{e}}che and Browning \cite{MR2719554},
Browning \cite{MR2861076} and Yu \cite{MR1754029}. On the other hand, with their paper from 2012 Guo and
Zhai \cite{GuoZhai2012} revived the interest in asymptotic estimates of $T_2(F({\bf x}),X)$ for forms in $n\geq 3$ variables
via the classical circle method. After many further papers extending \cite{GuoZhai2012} and dealing with diagonal forms,
in a recent work Liu \cite{Liu2019} obtained an asymptotic formula for $T_2(F({\bf x}),X)$ for any nonsingular quadratic form $F$
in $n\geq 3$ variables.\newline
For the cases when $k\ge 3$ there are only a few results in the literature. Friedlander and Iwaniec \cite{MR2289206} showed that
$$
\sum_{\substack{n_1^2+n_2^6\leq X\\ {\rm gcd}(n_1,n_2)=1}}\tau_3(n_1^2+n_2^6)=cX^{2/3}(\log X)^2
+O\left(X^{2/3}(\log X)^{7/4}(\log \log X)^{1/2}\right),
$$
where $c$ is a constant. Daniel \cite[(4.5)]{DanielThesis} described an asymptotic formula for $T_k(F({\bf x}),X)$ as $X\mathrm ightarrow\infty$ for any $k\geq 2$ for irreducible binary definite quadratic forms $F$ and in \cite[(4.7)]{DanielThesis} he proved an asymptotic formula for $T_3(F({\bf x}),X)$ as $X\mathrm ightarrow\infty$
for irreducible binary cubic forms $F$. Sun and Zhang \cite{SunZhang2016}, with the help of the circle method, obtained
\[
\sum_{1\le m_1, m_2, m_3\le X}\tau_3\left(m_1^2+m_2^2+m_3^2\right)=c_1X^3\log^2 X+c_2X^3\log X+c_3X^3+O_{\varepsilon}(X^{11/4+\varepsilon}),
\]
where $c_1, c_2, c_3$ are constants and $\varepsilon$ is any positive number. Finally Blomer \cite{Blomer} proved an asymptotic formula
for the sum $\Sigma_{k,F}(X; \mathcal{B})$ defined in \eqref{defSigma}, for any $k\geq 2$, where $F({\bf x})$ is a form of degree $k$ in $n=k-1$ variables, coming from incomplete norm form.
\newline
In this paper we investigate the average sum of the $k$-th divisor function over values of quadratic polynomials $F({\bf x})$, not
necessarily homogeneous, in $n\ge 3$ variables for \emph{any} $k\ge 2$. Every quadratic polynomial in $n$ variables can be written as
\begin{equation}\label{def:F}
F({\bf x})={\bf x}^TQ{\bf x}+{\bf L}^T{\bf x}+N\,
\end{equation}
where $Q\in\mathbb Z^{n\times n}$ is a symmetric matrix, ${\bf L}\in\mathbb Z^n$ and $N\in\mathbb Z$. Our only additional requirement is that $Q$ is nonsingular. Let $\mathcal{B}\subset\mathbb{R}^n$ be an $n$-dimensional box (i.e. a certain product of intervals)
such that $\min_{{\bf x}\in X\mathcal{B}}F({\bf x})\ge 0$ for all sufficiently large $X$,
and for each integer $k\ge 2$, consider the sum
\begin{equation}\label{defSigma}
\Sigma_{k,F}(X; \mathcal{B})=\sum_{{\bf x}\in X\mathcal{B}\cap\mathbb{Z}^{n}}\tau_{k}\left(F({\bf x})\right),
\end{equation}
as $X$ tends to infinity.
Let us also use the following notation for $q\in\mathbb Z_{+}$
\[\varrho_F(q)=\frac{1}{q^{n-1}}\#\{{\bf h}\pmod q: F({\bf h})\equiv 0\bmod q\}.\]
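Purely for illustration, this local density can be computed by brute force for
small moduli; the sketch below (the sample polynomial $F$, the dimension $n=3$ and
the modulus $q=4$ are assumptions) follows the definition directly.
\begin{verbatim}
# Brute-force computation of rho_F(q) = q^{-(n-1)} #{h mod q : F(h) = 0 mod q}.
from itertools import product

def rho_F(F, n, q):
    count = sum(1 for h in product(range(q), repeat=n) if F(h) % q == 0)
    return count / q ** (n - 1)

F = lambda h: h[0] ** 2 + h[1] ** 2 + h[2] ** 2   # Q = I, L = 0, N = 0
print(rho_F(F, 3, 4))   # prints 0.5: only vectors with all coordinates even work
\end{verbatim}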
Our main result is the following.
\begin{theorem}\label{mth} Let $F({\bf x})$ and $\Sigma_{k,F}(X; \mathcal{B})$ be defined as in \eqref{def:F} and \eqref{defSigma},
respectively, where $Q$ is a nonsingular matrix. Then for any $\varepsilon>0$ there exist real constants $C_{k,0}(F)$, $C_{k,1}(F)$,..., and $C_{k,k-1}(F)$,
such that for $X$ tending to infinity we have the asymptotic formula
$$\Sigma_{k,F}(X; \mathcal{B})=\sum_{r=0}^{k-1}C_{k,r}(F)\int_{X\mathcal{B}}(\log F({\bf t}))^r\,d{\bf t}
+O\left(X^{n-\frac{n-2}{n+2}\min\left(1,\frac{4}{k+1}\right)+\varepsilon}\right),$$
where the implied constant depends on $F$, $k$, $\mathcal{B}$ and $\varepsilon$, and
$$C_{k,r}(F)=\frac{1}{r!}\sum_{t=0}^{k-r-1}\frac{1}{t!}
\left({\frac{\,d^tL(s;k,F)}{\,ds^t}}\bigg|_{s=1}\right) {\rm Res}_{s=1}\left((s-1)^{r+t}\zeta(s)^k\right).$$
The function $L(s; k,F)$ has the Euler product presentation
$$L(s;k,F)=\prod_{p}\left(\sum_{\ell\ge 0}\frac{\varrho_F(p^{\ell})\left(\tau_{k}(p^{\ell})-p^{s-1}\tau_{k}(p^{\ell-1})\right)}{p^{\ell s}}\right)
\left(\frac{(1-p^{-s})^k}{1-p^{-1}}\right)\,,$$
with $\tau_{k}(x):=0$ for all $x\not\in\mathbb Z$, and it is absolutely convergent for all $\Re(s)>1/2$. In particular, the main term has a positive leading coefficient:
$$C_{k,k-1}(F)=\frac{1}{(k-1)!}
\prod_{p}\left(\sum_{\ell\ge 0}\frac{\varrho_F(p^{\ell})\tau_{k-1}(p^{\ell})}{p^{\ell }}\right)\left(1-\frac{1}{p}\right)^{k-1}>0.$$
\end{theorem}
First of all, we remark that since $F({\bf x})$ has a nonsingular quadratic part, the zero set of $F({\bf x})$
has Lebesgue measure $0$, so that the logarithms appearing in
the integrals above are well defined almost everywhere.
Note that we provide a formula with $k$ terms, where one can easily see that the main term is of magnitude
$X^n(\log X)^{k-1}$ (when $r=k-1$) and the last secondary term is of magnitude $X^n$ (when $r=0$). Thus the error term is indeed of smaller
order. \newline
Using Theorem \ref{mth}, one can get the asymptotic formula for $\Sigma_{2,F}(X,\mathcal{B})$ in the most studied case $k=2$.
This recovers and extends the main theorem of Liu \cite{Liu2019} to non-homogeneous quadratic polynomials, and it also provides
different expressions for the coefficients. Naturally, they can also be computed explicitly for specific polynomials, a goal we have
not pursued in the current paper. Theorem \ref{mth} also extends the formula \cite[(4.5)]{DanielThesis} of Daniel to quadratic polynomials in more than $2$ variables; further, it elucidates the form of the involved coefficients.
\paragraph{Notations.}
The symbols $\mathbb{Z}_+$, $\mathbb{Z}$ and $\mathbb{R}$ denote the positive integers, the integers and the real numbers, respectively.
$e(z):=e^{2\pi i z}$, $\zeta(s)=\sum_{n\ge 1}n^{-s}$ is the Riemann zeta function, the letter $p$ always denotes a prime.
We make use of the $\varepsilon$-convention: whenever $\varepsilon$ appears in a statement, it is asserted that the statement is true for every real $\varepsilon>0$.
This allows us to write $x^{\varepsilon}\log x\ll x^{\varepsilon}$ and $x^{2\varepsilon}\ll x^{\varepsilon}$, for example. Furthermore,
if not specially specified, all the implied constants of this paper in $O$ and $\ll$ depend on $F$, $k$, $\mathcal{B}$ and $\varepsilon$.\newline
\section{The proof of Theorem \ref{mth}}
\subsection{Setting up the circle method}
The primary technique used in the proof of the main theorem is the circle method and more precisely its treatment by Pleasants \cite{Pleas}.
The recent work on quadratic forms in $n\ge 3$ variables of Liu \cite{Liu2019} uses the same circle method techniques,
i.e. Weyl differencing, that were already used for general quadratic multivariable polynomials by Pleasants.
\par For the real $X$ from the definition \eqref{defSigma} let $L\ll X$ be a positive real parameter which we will choose later in a suitable way,
let $a,q\in\mathbb Z$, $0\le a<q\le L$ and $ {\rm gcd}(a,q)=1$. Then we define the intervals
$${\mathfrak M}_{a,q}(L):=\left\{\alpha\in\left[0,1\right]: \left|\alpha - \frac{a}{q}\right|\le \frac{L}{qX^2}\right\}.$$
The set of the major arcs is then the union
\begin{equation}\label{defMajor}
{\mathfrak M}(L)=\bigsqcup_{\substack{0\le a<q\le L\\ {\rm gcd}(a,q)=1}}{\mathfrak M}_{a,q}(L),
\end{equation}
and the set of the minor arcs is the complement ${\mathfrak m}(L)=\left[0,1\right]\setminus {\mathfrak M}(L)$.
\par We further define the following exponential sums for $\alpha\in\mathbb R$
$$S(\alpha)=\sum_{{\bf x}\in X\mathcal{B}\cap\mathbb Z^n}e\left(F({\bf x})\alpha\right)$$
and
$$T(\alpha,Y)=\sum_{0\le m\le Y}\tau_k(m)e(m\alpha). $$
Then, by the well-known identity for $u\in\mathbb Z$
\[\int_0^1 e(u\alpha)\,d\alpha=\left\{\begin{array}{lll}
1 & \text{if} &u=0,\\
0 & \text{if} &u\neq 0,
\end{array}\right.\]
we have
\begin{align*}
\Sigma_{k,F}(X; \mathcal{B})&=\sum_{{\bf x}\in X\mathcal{B}\cap\mathbb Z^{n}}\tau_{k}\left(F({\bf x})\right)\\
&=\int_{0}^{1}S(\alpha)T(-\alpha, C_{F,\mathcal{ B}}(X))\,d\alpha\\
&=\int_{{\mathfrak M}(L)}S(\alpha)T(-\alpha, C_{F,\mathcal{ B}}(X))\,d\alpha+\int_{{\mathfrak m}(L)}S(\alpha)T(-\alpha, C_{F,\mathcal{ B}}(X))\,d\alpha\\
&=I_{{\mathfrak M}(L)}+I_{{\mathfrak m}(L)},
\end{align*}
where
$$C_{F,\mathcal{ B}}(X):=\max_{{\bf x}\in X\mathcal{B}}|F({\bf x})|=X^2\max_{{\bf x}\in \mathcal{B}}\left|{\bf x}^TQ{\bf x}\right|+O(X)\asymp X^2.$$
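As a finite sanity check of the orthogonality step above (and only of that step --
it is not the circle method itself), one may replace the integral over
$\alpha\in[0,1]$ by the average over the rational points $\alpha=j/M$,
$j=0,\dots,M-1$, for any integer $M>C_{F,\mathcal{B}}(X)$; since
$\frac{1}{M}\sum_{j=0}^{M-1}e(uj/M)$ equals $1$ for $u=0$ and $0$ for integers
$0<|u|<M$, this finite average reproduces $\Sigma_{k,F}(X;\mathcal{B})$ exactly.
The sketch below (with an assumed sample polynomial, box and tiny $X$) verifies
this numerically.
\begin{verbatim}
# Finite-Fourier check of Sigma_{k,F} = (1/M) sum_j S(j/M) T(-j/M), M > C_{F,B}(X).
import numpy as np
from itertools import product

def tau(k, m):
    if m == 0:
        return 0
    if k == 1:
        return 1
    return sum(tau(k - 1, m // d) for d in range(1, m + 1) if m % d == 0)

k, X = 2, 4
F = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2      # assumed sample polynomial
box = list(product(range(1, X + 1), repeat=3))       # assumed box (0,1]^3 scaled by X
C = max(F(x) for x in box)
M = C + 1
e = lambda t: np.exp(2j * np.pi * t)

direct = sum(tau(k, F(x)) for x in box)
S = np.array([sum(e(F(x) * j / M) for x in box) for j in range(M)])
T = np.array([sum(tau(k, m) * e(-m * j / M) for m in range(C + 1)) for j in range(M)])
print(direct, round(float((S * T).sum().real) / M))  # the two values agree
\end{verbatim}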
We shall prove in Section {\rm Re}f{ssmin} that for the contribution from the minor arcs we have
\begin{equation}\label{estm1}
I_{{\mathfrak m}(L)}\ll X^{n+\varepsilon}L^{1-n/2},
\end{equation}
as long as $L\ll X$. Already here we see that we need to require that the number of variables satisfy $n\geq 3$ in order to have an
error term of a smaller magnitude than $O(X^n)$.
Further, in Section {\rm Re}f{ssmar} we will show that
\begin{equation}\label{estm2}
I_{{\mathfrak M}(L)}-\sum_{r=0}^{k-1}C_{k,r}(F)\int\limits_{X\mathcal{B}}\left(\log(F({\bf t}))\right)^r\,d {\bf t}
\ll X^{n+\varepsilon}\left( L^{1-n/2}+L^2X^{-\min\left(1,\frac{4}{k+1}\right)}\right).
\end{equation}
Here for $r=0,1,\dots,k-1$,
\begin{equation}\label{def:Ckr}
C_{k,r}(F)=\sum_{q=1}^{\infty}\beta_{k,r}(q)S_F(q)\,,
\end{equation}
where
\begin{equation}\label{def:S_F}
S_F(q)=\sum_{\substack{a\in[1,q]\cap\mathbb Z\\ \gcd(a,q)=1}}q^{-n}\sum_{{\bf h}\in [1,q]^n\cap\mathbb Z^{n}}e\left(\frac{a}{q}F({\bf h})\right),
\end{equation}
$$\beta_{k,r}(q)=\frac{1}{r!}\sum_{t=0}^{k-r-1}\frac{1}{t!}{\rm Res}_{s=1}\left((s-1)^{r+t}\zeta(s)^k\right)\left(\frac{\,d^t \phi_k(q,s)}{\,ds^t}\bigg|_{s=1}\right),$$
and the analytic function $\phi_k(q,s)$ is defined by Lemma \ref{t22}. We further consider the function
\begin{equation}\label{def:L}
L(s; k,F)=\sum_{q\ge 1}\phi_k(q,s)S_F(q)\,.
\end{equation}
In Subsection \ref{lemss} we prove that it satisfies
\begin{equation}\label{eqFL}
L(s;k,F)=\prod_{p}\left(\sum_{\ell\ge 0}\frac{\varrho_F(p^{\ell})\left(\tau_{k}(p^{\ell})-p^{s-1}\tau_{k}(p^{\ell-1})\right)}{p^{\ell s}}\right)\left(\frac{(1-p^{-s})^k}{1-p^{-1}}\right)\,,
\end{equation}
with $\tau_{k}(x):=0$ for all $x\not\in\mathbb Z$.
Then Theorem \ref{mth} follows from \eqref{estm1}, \eqref{estm2} and \eqref{eqFL}, after choosing $L=X^{\frac{2}{n+2}\min\left(1, \frac{4}{k+1}\right)}$.
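For the reader's convenience we record the balancing behind this choice of $L$.
Writing $m=\min\left(1,\frac{4}{k+1}\right)$, the two competing error terms
$L^{1-n/2}$ and $L^{2}X^{-m}$ inside the factor $X^{n+\varepsilon}$ of
\eqref{estm1} and \eqref{estm2} are of the same size exactly when
$L^{1+n/2}=X^{m}$, i.e. when $L=X^{\frac{2m}{n+2}}$; with this choice both terms
equal $X^{-\frac{(n-2)m}{n+2}}$, which produces the exponent in the error term of
Theorem \ref{mth}.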
\subsection{Contribution from the minor arcs}\label{ssmin}~
Clearly, if the positive real numbers $L$ and $L'$ satisfy $L\le L'$, then ${\mathfrak M}(L)\subset {\mathfrak M}(L')$, and if $L\ge X$, then $[0, 1]\subset {\mathfrak M}(L)$
follows from Dirichlet's approximation theorem.
We further define
$$\mathscr{F}(L)={\mathfrak M}(2L)\setminus {\mathfrak M}(L).$$
Then for a given positive number $L<X/2$,
$$[0, 1]\subset{\mathfrak M}(L)\sqcup\bigsqcup_{0\le j<N} \mathscr{F}(2^jL),$$
where $N$ is the smallest integer greater than or equal to $ (\log (X/L))/\log 2$.
Clearly, the set of the small arcs then satisfy
\begin{equation}\label{mind}
{\mathfrak m}(L)\subset \bigsqcup_{0\le j<N} \mathscr{F}(2^jL).
\end{equation}
To prove the estimate \eqref{estm1} over the minor arcs, we would use separate estimates of the two components $S(\alpha)$ and $T(\alpha,X)$
when $\alpha\in\mathscr{F}(L)$. We first state the following result.
\begin{lemma}\label{lemqe}For all positive numbers $L\ll X$,
$$\sup_{\alpha\in \mathscr{F}(L)}|S(\alpha)|\ll X^{n+\varepsilon}L^{-n/2}.$$
\end{lemma}
\begin{proof}
This estimate was done by Pleasants \cite{Pleas} even for the range $L\ll X(\log X)^{1/4}$. In the first equation of p.138 \cite{Pleas}
he proves that for $\alpha\in \mathscr{F}(L)$ we have $$|S(\alpha)|\le X^n(\log X)^n L^{-r/2},$$ where $r\ge 3$ is the rank of $Q$,
and in our case we have assumed that $r=n$.
\end{proof}
We also need the following estimate.
\begin{lemma}\label{lemde} For all positive numbers $L\ll X$,
$$\int_{\mathscr{F}(L)}\left|T(-\alpha, C_{F,\mathcal{ B}}(X))\right|\,d\alpha\ll X^{\varepsilon}L.$$
\end{lemma}
\begin{proof}By Cauchy's inequality, Parseval's identity, and the definition of the major arcs \eqref{defMajor}, we have
\begin{align*}
\int_{\mathscr{F}(L)}\left|T(-\alpha, C_{F,\mathcal{ B}}(X))\right|\,d\alpha &\ll |\mathscr{F}(L)|^{1/2}
\left(\int_{0}^1\left|T(-\alpha, C_{F,\mathcal{ B}}(X))\right|^2\,d\alpha\right)^{1/2}\\
&\ll |{\mathfrak M}(2L)|^{1/2}\left(\sum_{1\le n\le C_{F,\mathcal{ B}}(X)}\tau_k(n)^2\right)^{1/2}\\
&\ll \left(\sum_{1\le q\le L}\frac{2L}{qX^2}\varphi(q)\right)^{1/2}X^{1+\varepsilon}\ll X^{-1+1+\varepsilon}L\ll X^{\varepsilon}L,
\end{align*}
where we also applied the well known bound $\tau_k(n)\ll_k n^{\varepsilon}$ and the trivial $\varphi(q)/q\leq 1$.
\end{proof}
Now the estimate \eqref{estm1} over the minor arcs follows from \eqref{mind}, Lemma \ref{lemqe} and Lemma \ref{lemde}, namely
\begin{align*}
I_{{\mathfrak m}(L)}\ll& \sum_{0\le j<N}\int_{\mathscr{F}(2^jL)}\left|S(\alpha)T(-\alpha, C_{F,\mathcal{B}}(X))\right|\,d\alpha\\
\ll&\sum_{0\le j<N}\sup_{\alpha\in \mathscr{F}(2^jL)}|S(\alpha)|\int_{\mathscr{F}(2^jL)}\left|T(-\alpha, C_{F,\mathcal{B}}(X))\right|\,d\alpha\\
\ll &\sum_{0\le j<N} X^{n+\varepsilon}(2^jL)^{-n/2} (X^{\varepsilon}2^jL)\ll X^{n+\varepsilon}L^{1-n/2},
\end{align*}
where we used that $N\ll \log{X}$.
\subsection{Contribution from the major arcs}\label{ssmar}~
In this subsection we have $\alpha\in{\mathfrak M}_{a,q}(L)$, and we shall write $\beta=\alpha-a/q$ for the coprime integers $a$ and $q$, $|\beta|\le L/(qX^2)$
and $1\le q\le L$.
In order to prove the asymptotic formula \eqref{estm2}, we need the following statements.
\begin{lemma}\label{lemqqs} For $\alpha\in{\mathfrak M}_{a,q}(L)$, and $\beta=\alpha-a/q$, we have
$$
S(\alpha)=q^{-n}S_F(q, a)\int_{X\mathcal{B}} e\left(F({\bf t})\beta\right)\,d{\bf t}+O_{\mathcal{B}, F}\left(LX^{n-1}\right),
$$
where
$$S_F(q,a)=\sum_{{\bf h}\in [1,q]^n\cap\mathbb Z^{n}}e\left(\frac{a}{q}F({\bf h})\right).$$
\end{lemma}
\begin{proof}
To prove this result we only need to adjust the last equation in the proof of \cite[Lemma 8]{Pleas} with the upper bounds $|\beta|\le L/(qX^2)$ and $q\le L$.
Note that Pleasants carries out the analysis for a quadratic polynomial whose linear coefficients may depend on $X$.
We are dealing with a quadratic $F$ with fixed coefficients, which makes the proof even easier.
\end{proof}
\begin{lemma}\label{lemqf} Let $S_F(q,a)$ be defined as in Lemma \ref{lemqqs}. We have
$$
S_F(q, a)\ll_F q^{n/{2}+\varepsilon},
$$
where the implied constant is independent of $a$ and $q$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 10]{Pleas}.
\end{proof}
We further need the following two statements. The first one gives a general asymptotic representation of $T(\alpha, Y)$ and the second
one estimates the part of the singular integral coming from the major arcs. The proofs of Lemma \ref{lemkdf} and Lemma \ref{lemsi} will be given in
Section \ref{sec32} and Section \ref{sec41}, respectively.
\begin{lemma}\label{lemkdf}Let $ Y\asymp X^2$. We have
$$
T(\alpha, Y)=\sum_{r=0}^{k-1}\beta_{k,r}(q)\int_{0}^Y(\log u)^re(u\beta)\,du+O_{k,\varepsilon}\left(LX^{2-\frac{4}{k+1}+\varepsilon}\right),
$$
where for $r=0,1,\dots,k-1$,
$$\beta_{k,r}(q)=\frac{1}{r!}\sum_{t=0}^{k-r-1}\frac{1}{t!}{\rm Res}_{s=1}\left((s-1)^{r+t}\zeta(s)^k\right)
\left(\frac{\,d^t \phi_k(q,s)}{\,ds^t}\bigg|_{s=1}\right)\ll q^{-1+\varepsilon},$$
with $\phi_k(q,s)$ defined by Lemma \ref{t22}. In particular,
$$
T(\alpha, Y)\ll q^{-1+\varepsilon} X^{2+\varepsilon}.
$$
\end{lemma}
\begin{lemma}\label{lemsi}We have
\begin{align*}
\int\limits_{\left|\beta\right|\le L/{qX^2}}\,d\beta\int\limits_{X\mathcal{B}}\,d{\bf t}
&\int\limits_{0}^{C_{F,\mathcal{B}}(X)}e\left((F({\bf t})-u)\beta\right)(\log u)^r\,du = \int\limits_{X\mathcal{B}}(\log F({\bf t}))^r\,d{\bf t}
+O\left(\frac{q^{n/2}X^{n+\varepsilon}}{L^{n/2}}\right).
\end{align*}
\end{lemma}
We now prove the asymptotic formula \eqref{estm2}. Using \eqref{defMajor} we get
\begin{align*}
I_{{\mathfrak M}(L)}&=\int_{{\mathfrak M}(L)}S(\alpha)T(-\alpha, C_{F,\mathcal{B}}(X))\,d\alpha\\
&=\sum_{q\le L}\sum_{\substack{0\le a< q\\ {\rm gcd}(a,q)=1}}\int_{|\beta|\le L/qX^2}S(\alpha)T(-\alpha, C_{F,\mathcal{B}}(X))\,d\alpha\\
&=:\sum_{q\le L}\sum_{\substack{0\le a< q\\ {\rm gcd}(a,q)=1}}\mathcal{I}_{q,a}.
\end{align*}
Since $1\le q\le L\ll X$, we have
\begin{align*}
\mathcal{I}_{q,a}
=&\int_{|\beta|\le L/qX^2}S(\alpha)T(-\alpha, C_{F,\mathcal{B}}(X))\,d\beta\\
=&\int_{|\beta|\le L/qX^2}\left(\frac{S_F(q, a)}{q^{n}}\int_{X\mathcal{B}} e\left(F({\bf t})\beta\right)\,d{\bf t}\right)
T(-\alpha, C_{F,\mathcal{B}}(X))\,d\beta\\
&+O\left(\int_{|\beta|\le L/qX^2}\left(LX^{n-1}\right)q^{-1+\varepsilon}X^{2+\varepsilon}\,d\beta\right),\\
\end{align*}
by Lemma \ref{lemqqs}. Further, by applying Lemma \ref{lemqf}, Lemma \ref{lemkdf} and Lemma \ref{lemsi}, we get
\begin{align*}
\mathcal{I}_{q,a}
=&\sum_{r=0}^{k-1}\frac{S_F(q,a)\beta_{k,r}(q)}{q^n}
\int\limits_{\left|\beta\right|\le L/qX^2}\,d\beta\int\limits_{X\mathcal{B}} e\left(F({\bf t})\beta\right)\,d{\bf t}
\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^re(-u\beta)\,d u\\
&+O\left(\int_{|\beta|\le L/qX^2}q^{-n/2+\varepsilon}X^n\left(LX^{2-\frac{4}{k+1}+\varepsilon}\right)\,d\beta+\frac{L^2X^{n-1+\varepsilon}}{q^2}\right)\\
=&\sum_{r=0}^{k-1}\frac{S_F(q,a)\beta_{k,r}(q)}{q^n}\int\limits_{X\mathcal{B}}(\log F({\bf t}))^r\,d{\bf t}+O\left(\frac{X^{n+\varepsilon}}{qL^{n/2}}
+\frac{X^{n-\frac{4}{k+1}+\varepsilon}L^2}{q^{1+n/2}}+\frac{L^2X^{n-1+\varepsilon}}{q^2}\right).
\end{align*}
Recall the notation \eqref{def:Ckr} and note that \[S_F(q)=\sum_{\substack{a\in[1,q]\cap\mathbb Z\\ \gcd(a,q)=1}}q^{-n}S_F(q,a).\]
Then after summing over all $1\le q\le L$ and $1\leq a<q,\, \gcd(a,q)=1,$ the major arcs ${{\mathfrak M}(L)}$ contribute
\begin{align*}
I_{{\mathfrak M}(L)}=&\sum_{q\ge 1}S_F(q)\sum_{r=0}^{k-1}\beta_{k,r}(q)\int\limits_{X\mathcal{B}}(\log F({\bf t}))^r\,d {\bf t}
+O\left(\sum_{q>L}q^{-1+\varepsilon}|S_F(q)|X^{n+\varepsilon}\right)\\
&+O\left(X^{n+\varepsilon}L^{1-n/2}+X^{n-\frac{4}{k+1}+\varepsilon}L^2+L^2X^{n-1+\varepsilon}\right)\\
=&\sum_{r=0}^{k-1}C_{k,r}(F)\int_{X\mathcal{B}}(\log(F({\bf t})))^r\,d {\bf t}+O\left(X^{n+\varepsilon}E\right),
\end{align*}
with
\begin{align*}
E=L^{1-n/2}+L^2\left(X^{-1}+X^{-\frac{4}{k+1}}\right)\ll L^{1-n/2}+L^2X^{-\min\left(\frac{4}{k+1}, 1\right)}.
\end{align*}
Note that at this step, and at a few other places, we necessarily need $n\geq 3$ in order to control the error terms. This completes the proof of \eqref{estm2}.
\section{The estimates involving the $k$-th divisor function}\label{sec32}~
The usual technique for estimating asymptotically, through the circle method, average sums similar to $\Sigma_{k,F}(X,\mathcal{ B})$
is the application of
non-trivial estimates for the average of the specific arithmetic function over arithmetic progressions (e.g. \cite{GuoZhai2012}, \cite{HuYang2018}, \cite{Liu2019}).
Thus in order to prove Lemma \ref{lemkdf} we first need the following result.
\begin{lemma}\label{t21}Let $h,q$ be integers such that $1\leq h\le q$ and ${\rm gcd}(h,q)=\delta$. Then for each real number $x>1$, $q\le x^{\frac{2}{k+1}}$
and $\varepsilon>0$, we have
$$
A_k(x;h,q):=\sum_{\substack{m\le x\\ m\equiv h~(\bmod q)}}\tau_{k}(m)=M_{k}(x;h,q)+O_{k,\varepsilon}(x^{1-\frac{2}{k+1}+\varepsilon}),
$$
where
$$
M_{k}(x;h,q)={\rm Res}_{s=1}\left(\zeta(s)^{k}\frac{x^{s}}{s}f_{k}(q, \delta,s)\right)
$$
with
\begin{equation*}
f_{k}(q,\delta,s)=\frac{1}{\varphi(q/\delta)\delta^s}\left(\sum_{d |(q/\delta)}\frac{\mu(d)}{d^s}\right)^{k}
\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\right)^{s}},
\end{equation*}
where $d_1, d_2,\ldots, d_{k}$ are positive integers and the empty product $\prod_{j=k+1}^kd_j:=1$.
\end{lemma}
\begin{proof} This lemma is essentially due to Smith \cite{Smith}, and we only adjust it for our purposes. We will extend easily \cite[Theorem 3]{Smith},
which covers the case when $h$ and $q$ are coprime, to any $h$ and $q$. First, equation (30) of \cite{Smith} states that
$$
A_k(x;h,q)=\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ i=1,2,...,k\\{\rm gcd}(t_1t_2...t_{k},q/\delta)=1}}\mu({\bf t})
A_k\left(\frac{x}{\delta t_1t_2...t_{k}};\overline{t_1t_2...t_k}\frac{h}{\delta},\frac{q}{\delta}\right),$$
where $d_1,d_2,\ldots,d_{k}$ are positive integers, $\mu({\bf t})=\prod_{j=1}^k\mu(t_j)$ and $\overline{m}$ is the multiplicative inverse of $m$ modulo $q$.
Then Theorem 3 of \cite{Smith} states that
$$A_k(x;h,q)=M_{k}(x;h,q)+\Delta_k(x;h,q),$$
where
$$
M_{k}(x;h,q)=\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\mu({\bf t})
\frac{x}{\delta t_1t_2...t_{k}}
P_{k}\left(\log\left(\frac{x}{\delta t_1t_2...t_{k}}\right),\frac{q}{\delta}\right)
$$
and
\begin{align*}
\Delta_k(x;h,q)=&\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}
\mu({\bf t})\left(D_k\left(0;\overline{t_1...t_k}\frac{h}{\delta},\frac{q}{\delta}\mathrm ight)\mathrm ight)\\
&+\sum_{d_1...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\mu({\bf t})
\left(O\left(\left(\frac{x}{\delta t_1...t_{k}}\mathrm ight)^{\frac{k-1}{k+1}}\tau_{k}\left(\frac{q}{\delta}\mathrm ight)\log^{k-1} (2x)\mathrm ight)\mathrm ight).
\end{align*}
Here $P_k(\log x, q)$ is a polynomial in $\log x$ of degree $k-1$ and $D_k(s;h,q)$ is the Dirichlet series corresponding to the
sum $A_k(x;h,q)$. By the definition of $P_k(\log x, q)$, namely \cite[(13)]{Smith},
and the analysis of $D_k(s;h,q)$ given in particular in \cite[(21)]{Smith}, it is easily seen that
\[
xP_k(\log x, q)=\frac{1}{\varphi(q)}{\rm Res}_{s=1}\left(\left(\zeta(s)\sum_{d |q}d^{-s}\mu(d)\right)^{k}\frac{x^{s}}{s}\right).
\]
Hence
\begin{align*}
M_{k}(x;h,q)&=\sum_{d_1...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\varphi(q/\delta)}{\rm Re}s_{s=1}\left(\left(\zeta(s)\sum_{d |(q/\delta)}\frac{\mu(d)}{d^s}\mathrm ight)^{k}\frac{x^s/s}{\left(\delta t_1...t_{k}\mathrm ight)^{s}}\mathrm ight)\\
&={\rm Re}s_{s=1}\left(\frac{\zeta(s)^{k}x^s/s}{\varphi(q/\delta)\delta^s }\left(\sum_{d |(q/\delta)}\frac{\mu(d)}{d^s}\mathrm ight)^{k}\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left( t_1...t_{k}\mathrm ight)^{s}}\mathrm ight).
\end{align*}
Thus the main term is
$$M_{k}(x;h,q)={\rm Res}_{s=1}\left(\zeta(s)^{k}\frac{x^{s}}{s}f_{k}(q,\delta,s)\right),$$
where, as defined in the statement of the lemma, we have
\begin{align*}
f_{k}(q,\delta,s)&=\frac{1}{\varphi(q/\delta)\delta^s}\left(\sum_{d |(q/\delta)}\frac{\mu(d)}{d^s}\mathrm ight)^{k}\sum_{d_1d_2...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,q/\delta)=1\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\mathrm ight)^{s}}.
\end{align*}
Smith \cite{Smith} conjectured the validity of the estimate $D_k(0,h,q)\ll q^{\frac{k-1}{2}+\varepsilon}$ for any $(q,h)=1$. This was later affirmed by Matsumoto \cite{MR792769}. Therefore we have the bound
\begin{align*}
\Delta_k(x;h,q)&\ll\sum_{d_1...d_{k}=\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ i=1,2,...,k}}\left|\mu(t_1)\dots \mu(t_k)\mathrm ight|\left(\left({q}/{\delta}\mathrm ight)^{\frac{k-1}{2}+\varepsilon}+q^{\varepsilon}x^{\frac{k-1}{k+1}+\varepsilon}\mathrm ight)\\
&\ll_k \left(q^{\frac{k-1}{2}+\varepsilon}+x^{\frac{k-1}{k+1}+\varepsilon}\mathrm ight)\sum_{d_1...d_{k}=\delta}\tau(\delta)^{k-1}\ll x^{1-\frac{2}{k+1}+\varepsilon},
\end{align*}
using $q\leq x^{\frac 2{k+1}}$ and $\tau_k(\delta)\ll\delta^\varepsilon$.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{t22} Let $q\ge 1$ be an integer, $(a,q)=1$ and denote $\delta=(h,q)$. Also let $f_k(q,\delta, s)$ be defined as in Lemma \ref{t21}. Define
$$\phi_{k,a}(q,s)=\sum_{h=1}^q e\left(-\frac{ah}{q}\right)f_k(q,\delta,s).$$
Then $\phi_{k,a}(q,s)$ is independent of $a$ and we may write it as $\phi_k(q,s)$.
Furthermore, $\phi_k(q,s)$ is a multiplicative function of $q$ and
$$\frac{\,d^r \phi_k(q,1)}{\,ds^r}\ll_{k} q^{-1+\varepsilon}$$
holds for each integer $r=0,1,...,k-1$.
\end{lemma}
\begin{proof}
First, we have
\begin{align}\label{eq:Phi}
\partialhi_{k,a}(q,s)&=\sum_{\delta|q}\sum_{\substack{1\le h\le q\\ \gcd(h,q)=\delta}}e\left(-\frac{ah}{q}\mathrm ight)f_k(q,\delta,s)=\sum_{\delta|q}f_k(q,\delta,s)\sum_{\substack{1\le h_1\le q/\delta\\ \gcd(h_1,q/\delta)=1}}e\left(-\frac{ah_1}{q/\delta}\mathrm ight)\nonumber\\
&=\sum_{\delta|q}c_{\delta}(a)f_k(q,q/\delta,s)=\sum_{\delta|q}\mu(\delta)f_k(q,q/\delta,s),
\end{align}
where $c_{\delta}(a)$ is the Ramanujan sum and we use the fact that if $(a,q/\delta)=(a,q)=1$ then $c_{\delta}(a)=\mu(\delta)$. Therefore $\phi_{k,a}(q,s)$ is independent of $a$. Suppose that the positive integers $q_1$ and $q_2$ are coprime; then
\begin{align*}
\partialhi_k(q_1,s)\partialhi_k(q_2,s)&=\sum_{\delta_2|q_2}\sum_{\delta_1|q_1}\mu(\delta_1)\mu(\delta_2)f_k(q_1,q_1/\delta_1,s)f_k(q_2,q_2/\delta_2,s)\\
&=\sum_{(\delta_1\delta_2)|(q_1q_2)}\mu(\delta_1\delta_2)f_k(q_1,q_1/\delta_1,s)f_k(q_2,q_2/\delta_2,s),
\end{align*}
hence we just need to show that
$$f_k(q_1,q_1/\delta_1,s)f_k(q_2,q_2/\delta_2,s)=f_k(q_1q_2,q_1q_2/(\delta_1\delta_2),s)$$
whenever $\delta_1|q_1$ and $\delta_2|q_2$. For this we use the definition of $f_k(q,q/\delta,s)$, namely
$$
f_k(q,q/\delta,s)=\frac{\delta^s}{\varphi(\delta)q^s}\left(\sum_{d |\delta}\frac{\mu(d)}{d^s}\mathrm ight)^{k}\sum_{d_1d_2...d_{k}=q/\delta}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ \gcd(t_i,\delta)=1\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\mathrm ight)^{s}}.
$$
For $\sigma={\Re}(s)$ we obtain
$$
f_k(q,q/\delta,s)\ll \frac{\delta^{\sigma}}{\varphi(\delta)q^{\sigma}}\prod_{p|\delta}\left(1+\frac{1}{p^{\sigma}}\mathrm ight)^k\sum_{d_1d_2...d_{k}=q/\delta}\prod_{i=1}^{k}\prod_{\substack{p|(\prod_{j=i+1}^kd_{j})\\ \gcd(p,\delta)=1}}\left(1+\frac{1}{p^{\sigma}}\mathrm ight).
$$
Let us assume that $s$ lies on a circle with a centre $s=1$, so we can write $s=1+\rho e(\theta)$ with $\theta\in[0,1)$ and $\rho\in(0,1)$. Then it is easy to see that
$$
f_k(q,q/\delta,s)\ll \frac{\delta^{\sigma}}{\varphi(\delta)q^{\sigma}} 2^{k\omega(\delta)}\tau_k(q)2^{k\omega(q)}\ll q^{\varepsilon} \frac{\delta^{\sigma}}{\varphi(\delta)q^{\sigma}}.
$$
Here $\omega(n)$ is the number of distinct prime factors of $n$ and we used the well known fact that $\omega(n)\ll \frac{\log n}{\log\log n}$ as $n\mathrm ightarrow \infty$. Thus we have
$$\partialhi_{k}(q,s)\ll q^{\varepsilon}\sum_{\delta|q}\left|\mu(\delta)\mathrm ight|\frac{\delta^{\sigma}}{\varphi(\delta)q^{\sigma}}= q^{-\sigma+\varepsilon}\prod_{p|q}\left(1+\frac{p^{\sigma}}{p-1}\mathrm ight)\ll q^{-\sigma+\varepsilon}\prod_{p|q}\left(1+\frac{p^{\sigma}}{p}\mathrm ight).$$
On the other hand, when $\sigma\in\left(0,2\mathrm ight)$, we have
\begin{align*}
q^{-\sigma}\prod_{p|q}\left(1+\frac{p^{\sigma}}{p}\mathrm ight)\ll \begin{cases}
q^{-\sigma+\varepsilon} \quad &\sigma\in(0,1];\\
q^{-\sigma+\varepsilon}\prod_{p|q}p^{-1+\sigma}\ll q^{-1+\varepsilon} & \sigma\in(1,2).
\end{cases}
\end{align*}
Therefore for $\sigma={\Re}(s)$, $0<\sigma<2$, we get
\begin{equation}\label{pree}
\partialhi_k(q,s)\ll q^{-\min(\sigma, 1)+\varepsilon}.
\end{equation}
It is obvious that $\partialhi_{k}(q,s)$ is analytic for every $s\in{\mathbb C}$, and for every parameter $q$ which we consider. Hence one can use Cauchy's integral formula:
\begin{equation*}
\frac{\,d^r\partialhi_{k}(q,s)}{\,ds^r}{\bigg|}_{s=1}=\frac{r!}{2\pi i}\int_{|\xi-1|=\rho}\frac{\partialhi_{k}(q,\xi)}{(\xi-1)^{r+1}}\,d\xi\ll \frac{r!}{\rho^r}\max_{\theta\in[0, 1)}\left|\partialhi_{k}(q,1+\rho e(\theta))\mathrm ight|,
\end{equation*}
where $\rho\in (0,1)$.
Using ({\rm Re}f{pree}) and choosing $\rho\ll\varepsilon$, we obtain
$$\frac{\,d^r \partialhi_k(q,1)}{\,ds^r}\ll \frac{r!}{\rho^r}q^{-(1-\rho)+\varepsilon}\ll_{k, \varepsilon} q^{-1+\varepsilon},$$
as $q\mathrm ightarrow\infty$, which completes the proof of the lemma.
\end{proof}
Now we can deal with the representation of the sum $T(\alpha, Y)$.
\begin{proof}[Proof of Lemma \ref{lemkdf}]
First of all, we pick $Y\asymp X^2$. Recall that by Lemma \ref{t21} for $q\le X^{2/(k+1)}$ and $\beta=\alpha-a/q$ we have
\begin{align*}T(\alpha, Y)&=\sum_{h=1}^qe\left(\frac{ah}{q}\right)\sum_{\substack{m\le Y\\ m\equiv h\pmod q}}\tau_k(m)e(m\beta)\\
&=\sum_{h=1}^qe\left(\frac{ah}{q}\right)\int_{0}^Ye(u\beta)\,d\left(M_k(u;h,q)+O_k(u^{1-\frac{2}{k+1}+\varepsilon})\right)\\
&=\sum_{h=1}^qe\left(\frac{ah}{q}\right)\int_{0}^Ye(u\beta)M_k'(u;h,q)\,du+O_k\left(q(1+|\beta|Y)Y^{1-\frac{2}{k+1}+\varepsilon}\right).
\end{align*}
Here we also used a summation formula described for example in \cite[Lemma 3.7]{GuoZhai2012}.
It is clear that
$$\sum_{h=1}^qe\left(\frac{ah}{q}\right)M_k'(u;h,q)=\sum_{h=1}^qe\left(\frac{ah}{q}\right){\rm Res}_{s=1}\left(\zeta(s)^{k}u^{s-1}f_{k}(q,\delta,s)\right),$$
where $\delta=(q,h)$. This means that
\begin{equation}\label{eqjk}
T(\alpha, Y)=\int_{0}^{Y}e(u\beta){\rm Res}_{s=1}\left(\zeta(s)^k\phi_k(q,s)u^{s-1}\right)\,du+O\left(q(1+|\beta|Y)Y^{1-\frac{2}{k+1}+\varepsilon}\right).
\end{equation}
We now compute
$
{\rm Res}_{s=1}\left(\zeta(s)^k\phi_k(q,s)u^{s-1}\right).
$
The Riemann zeta function has a Laurent series about $s = 1$,
\[
\zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{\infty}\frac{(-1)^{n}\gamma_n}{n!}(s-1)^n,
\]
where
\[\gamma_n=\lim_{M\rightarrow\infty}\left(\sum_{d=1}^{M}\frac{\log^{n}d}{d}-\frac{\log^{n+1}M}{n+1}\right),\;\; n\in\mathbb Z_{\ge 0}\]
are the Stieltjes constants. Therefore there exist constants
$$\alpha_{k,j}={\rm Res}_{s=1}\left((s-1)^{j-1}\zeta(s)^k\right),\quad j=1,2,\dots, k,$$
and a holomorphic function $h_k(s)$ on ${\mathbb C}$ such that
\begin{equation*}
\zeta(s)^{k}=\sum_{r=1}^{k}\frac{\alpha_{k,r}}{(s-1)^r}+h_k(s).
\end{equation*}
Thus we obtain that
\begin{equation*}
\zeta(s)^{k}u^{s-1}=\sum_{r=1}^{k}\frac{1}{(s-1)^r}\sum_{r_1=0}^{k-r}\alpha_{k,r_1+r}\frac{\log^{r_1} u}{r_1!}+g_{k,u}(s),
\end{equation*}
for any $u>0$, where $g_{k,u}(s)$ is a holomorphic function of $s$ on ${\mathbb C}$. The Taylor series for $\phi_k(q,s)$ at $s=1$ is
\[\phi_k(q,s)=\sum_{d=0}^{\infty}\frac{\phi_k^{\langle d\rangle}(q,1)}{d !}(s-1)^{d}.\]
Therefore the residue of $\zeta(s)^{k}u^{s-1}\phi_k(q,s)$ at $s=1$ is
\begin{equation*}
\sum_{\substack{r-d=1\\ d\in\mathbb Z_{\ge 0},\,1\le r\le k}}\frac{\phi_k^{\langle d\rangle}(q,1)}{d !}\sum_{r_1=0}^{k-r}\alpha_{k,r_1+r}\frac{\log^{r_1} u}{r_1!}=\sum_{r=1}^{k}\frac{\log^{r-1}u}{(r-1)!}\sum_{t=0}^{k-r}\phi_k^{\langle t\rangle}(q,1)\frac{\alpha_{k,r+t}}{t!}.
\end{equation*}
Thus if we define
$$
\beta_{k,r}(q)=\frac{1}{r!}\sum_{t=0}^{k-r-1}\frac{1}{t!}{\rm Res}_{s=1}\left((s-1)^{r+t}\zeta(s)^k\right)\left(\frac{\,d^t \phi_k(q,s)}{\,ds^t}\bigg|_{s=1}\right)
$$
by Lemma \ref{t22} we obtain $\beta_{k,r}(q)\ll q^{-1+\varepsilon}$. Furthermore, the error term in \eqref{eqjk} is
$$q(1+|\beta|Y)Y^{1-\frac{2}{k+1}+\varepsilon}\ll q(1+L/q)X^{2-\frac{4}{k+1}+\varepsilon}\ll LX^{2-\frac{4}{k+1}+\varepsilon}$$
for $q\ll L=o\left(X^{\min\left(1,\frac{4}{k+1}\right)}\right)$, which completes the proof of Lemma \ref{lemkdf}.
\end{proof}
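As a concrete illustration of the coefficients just constructed (a worked special
case using only the Laurent expansion of $\zeta(s)$ recorded above), take $k=2$.
Then $\zeta(s)^2=\frac{1}{(s-1)^2}+\frac{2\gamma_0}{s-1}+\cdots$, so that
$\alpha_{2,2}=1$, $\alpha_{2,1}=2\gamma_0$, and the definition of $\beta_{k,r}(q)$
gives
$$\beta_{2,1}(q)=\phi_2(q,1),\qquad
\beta_{2,0}(q)=2\gamma_0\,\phi_2(q,1)+\frac{\,d \phi_2(q,s)}{\,ds}\bigg|_{s=1}.$$
Thus for $k=2$ the leading contribution to $T(\alpha,Y)$ on a major arc is
$\phi_2(q,1)\int_0^Y(\log u)e(u\beta)\,du$, in analogy with the classical divisor
asymptotic $\sum_{m\le x}\tau(m)=x\log x+(2\gamma_0-1)x+o(x)$.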
\section{The singular integral and series}
\subsection{The singular integral}\label{sec41}~
In this subsection we deal with the singular integral and give a proof of Lemma \ref{lemsi}. We first prove the following lemmas.
\begin{lemma}\label{lem41}
Let $\beta\in\mathbb R\setminus\{0\}$ and $Y\ge 2$. We have
$$\int_{0}^{Y}e\left(-u\beta\right)(\log u)^r\,du\ll |\beta|^{-1+\varepsilon}Y^{\varepsilon}.$$
\end{lemma}
\begin{proof}
We have
\begin{align*}
\int_{0}^{Y}e\left(-u\beta\mathrm ight)(\log u)^r\,du&\ll |\beta|^{-1}\int_{0}^{Y|\beta|}e\left(-u\beta/|\beta|\mathrm ight)(\log (u/|\beta|))^r\,du\\
&\ll |\beta|^{-1} \sum_{\ell=0}^r|\log |\beta||^{r-\ell}\left|\int_{0}^{Y|\beta|}(\log u)^{\ell}e\left(-u\frac{\beta}{|\beta|}\mathrm ight)\,du\mathrm ight|\\
&\ll |\beta|^{-1}\left(Y^{\varepsilon}|\beta|^{\varepsilon}+1+\sum_{\ell=1}^{r}\int_{1}^{Y|\beta|}\frac{|\log u|^{\ell-1}}{u}\,du\mathrm ight)\ll |\beta|^{-1+\varepsilon}Y^{\varepsilon}.
\end{align*}
This completes the proof.
\end{proof}
\begin{lemma}\label{lem42}
Let $F({\bf t})$ be defined as in \eqref{def:F} and $X\ge 2$. If $\beta\in\mathbb R$ and $|\beta|\ge X^{-2}$ then
$$I_{F, \mathcal{B}}(\beta, X):=\int_{X\mathcal{B}}e(F({\bf t})\beta)\,d{\bf t}\ll |\beta|^{-n/2+\varepsilon}.$$
\end{lemma}
\begin{proof}
First, we notice that from the fact that $Q$ is nonsingular it follows that there exists a transformation, such that
\begin{align*}
\int_{X\mathcal{B}}e\left(F({\bf t})\beta\mathrm ight)\,d{\bf t}&=\int_{X\mathcal{B}}e\left(\left({\bf t}^TQ{\bf t}+{\bf L}^T{\bf t}+N\mathrm ight)\beta\mathrm ight)\,d{\bf t}\\
&\ll \left|\int_{X\mathcal{B}}e\left(\left({\bf t}^TQ{\bf t}+{\bf L}^T{\bf t}\mathrm ight)\beta\mathrm ight)\,d{\bf t}\mathrm ight|
\ll \left|\int_{X\mathcal{B}+{\bf b_F}}e\left({\bf y}^TQ{\bf y}\beta\mathrm ight)\,d{\bf y}\mathrm ight|
\end{align*}
for some ${\bf b_F}\in \mathbb R^n$. Here $X\mathcal{B}+{\bf b_F}$ is still a box, i.e. a factor of intervals, and we can consider that $\mathcal{B}+{\bf b_F}/X$ has a maximal side length smaller than $1$. According to \cite[Lemma 5.2]{Birch} of Birch, for a quadratic nonsingular form $G$ and a box $\mathfrak{C}$ with a maximal side length smaller than $1$, we have
$$I_{G,\mathfrak{C}}(\beta,1)\ll |\beta|^{-n/2+\varepsilon},$$
where the dependence in this version is uniform on the side length of the box $\mathfrak{C}$. Indeed, we apply \cite[Lemma 5.2]{Birch} with $K=n/2, R=1, d=2$, after we have noticed that the condition (iii) from \cite[Lemma 3.2]{Birch} is not fulfilled for $k=(K-\varepsilon)\Theta$, thus \cite[Lemma 4.3]{Birch} holds in our case too, therefore Lemma 5.2 of Birch applies for our form $Q$. We point out this, since a direct look of the main theorem of Birch implies $n\geq 5$, which is, however, superfluous for \cite[Lemma 5.2]{Birch}. Therefore we have
\begin{align*}\int_{X\mathcal{B}+{\bf b_F}}e\left({\bf y}^TQ{\bf y}\beta\mathrm ight)\,d{\bf y}=I_{Q,\mathcal{B}+{\bf b_F}/X}(\beta,X)&=X^{-n}I_{Q,\mathcal{B}+{\bf b_F}/X}(\beta X^{-2},1)\ll |\beta|^{-n/2+\varepsilon}X^{-2\varepsilon}\ll |\beta|^{-n/2+\varepsilon}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemsi}] Using Lemma \ref{lem41} and Lemma \ref{lem42}, we obtain that
\begin{align*}
I_{r,F}(\beta, X):=\int_{X\mathcal{B}}\,d{\bf t}\int_{0}^{C_{F,\mathcal{B}}(X)}&e\left((F({\bf t})-u)\beta\right)(\log u)^r\,du\ll_F |\beta|^{-1-n/2+\varepsilon}X^{\varepsilon}.
\end{align*}
This implies that
\begin{equation}\label{ierror}
\int_{|\beta|\le L/qX^2}I_{r,F}(\beta, X)\,d\beta=\int_{\mathbb R}I_{r,F}(\beta, X)\,d\beta+O\left(X^{\varepsilon}(L/qX^2)^{-\frac{n}{2}}\right).
\end{equation}
Moreover,
\begin{align*}
\int_{\mathbb R}I_{r,F}(\beta, X)\,d\beta&=\int_{\mathbb R}\,d\beta\int_{X\mathcal{B}}\,d{\bf t}\int_{0}^{C_{F,\mathcal{B}}(X)}e\left((F({\bf t})-u)\beta\mathrm ight)(\log u)^r\,du\\
&=2\int_{\mathbb R_+}\,d\beta\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^r\,du\int_{X\mathcal{B}}\cos\left[2\pi(u-F({\bf t}))\beta\mathrm ight]\,d{\bf t}\\
&=\frac{1}{\pi}\int_{X\mathcal{B}}\,d{\bf t}\int_{\mathbb R_+}\,d\beta\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^r\, d\left(\frac{\sin\left[2\pi(u-F({\bf t}))\beta\mathrm ight]}{\beta}\mathrm ight)\\
&=\frac{1}{\pi}\int_{X\mathcal{B}}\,d{\bf t}\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^r\,d\left(\int_{\mathbb R_+}\frac{\sin\left[2\pi(u-F({\bf t}))\beta\mathrm ight]}{\beta}\,d\beta\mathrm ight)\\
&=\frac{1}{\pi}\int_{X\mathcal{B}}\,d{\bf t}\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^r\,d\left(\frac{\pi}{2}{\rm sgn}(u-F({\bf t}))\mathrm ight),
\end{align*}
where we have used the fact: $\int_{0}^{\infty}\frac{\sin(\alpha x)}{x}\mathrm{d}x=\frac{\pi}{2}{\rm sgn}(\alpha)$ and
\begin{align*}
{\rm sgn}(\alpha):=\begin{cases}\frac{\alpha}{\left|\alpha\right|} \qquad &\alpha\neq 0\\
~~0\qquad &\alpha=0.
\end{cases}
\end{align*}
By integration by parts we have
\begin{align*}
\mathrm int_{\mathbb R}I_{r,F}(\beta, X)\,d\beta&=\frac{1}{2}\int_{X\mathcal{B}}\,d{\bf t}\int_{0}^{C_{F,\mathcal{B}}(X)}(\log u)^r\,d\left({\rm sgn}(u-F({\bf t}))\mathrm ight)\\
&={\rm Li}m_{\epsilon\mathrm ightarrow 0^+}\int_{X\mathcal{B}}\frac{\,d{\bf t}}{2}\int_{\substack{|u-F({\bf t})|\le \epsilon\\ 0\le u\le C_{F,\mathcal{B}}(X)}}(\log u)^r\,d\left({\rm sgn}(u-F({\bf t}))\mathrm ight)\\
&={\rm Li}m_{\epsilon\mathrm ightarrow 0^+}\mathrm int_{X\mathcal{B}}\frac{\,d{\bf t}}{2}\left(\left.(\log u)^r\left({\rm sgn}(u-F({\bf t}))\mathrm ight)\mathrm ight|_{F({\bf t})-\varepsilon}^{F({\bf t})+\varepsilon}-\mathrm int_{\substack{|u-F({\bf t})|\le \epsilon\\ 0\le u\le C_{F,\mathcal{B}}(X)}}{\rm sgn}(u-F({\bf t}))\,d(\log u)^r\mathrm ight)\\
&=\int_{X\mathcal{B}}\frac{1}{2}\left(2(\log F({\bf t}))^r\,d{\bf t} +{\rm Li}m_{\epsilon\mathrm ightarrow 0^+} O(\epsilon \log^rX)\mathrm ight)\,d{\bf t}\\
&=\int_{X\mathcal{B}}(\log F({\bf t}))^r\,d{\bf t}.
\end{align*}
Using (\ref{ierror}) we get the proof of Lemma \ref{lemsi}. \end{proof}
\subsection{The singular series}\label{lemss}~
In this subsection we deal with the singular series, i.e. with the series $L(s;k,F)$ defined in \eqref{def:L}, and its presentation stated in Theorem \ref{mth}. \newline
First of all, note that from Lemma {\rm Re}f{lemqf} it follows that $S_F(q)\ll q^{1-n/2+\varepsilon}$ and Lemma {\rm Re}f{t22} gives $\displaystyle\frac{d^r\partialhi_k(q,1)}{ds^r}\ll q^{-1+\varepsilon} $ for any integer $r\in [0,k-1]$. Hence, for any $t=0,\ldots,k-1$,
$$\left.\frac{d^t L(s; k,F)}{ds^t}\mathrm ight|_{s=1}=\sum_{q=1}^\infty \frac{d^t\partialhi_k(q,1)}{ds^t}S_F(q)\ll\sum_{q=1}^\infty q^{-n/2+\varepsilon}\ll 1\,,$$
as $n\geq 3$. By their definition in Theorem {\rm Re}f{mth} this ensures that $C_{k,r}(F)$, $r=0,\ldots,k-1$, are convergent and indeed well-defined constants.
It is easily seen that $S_F(q)$ defined in \eqref{def:S_F}
is real and multiplicative. On the other hand, Lemma {\rm Re}f{t22} showed that $\partialhi_k(q,s)$ is also multiplicative. Therefore $L(s; k,F)=\sum_{q=1}^\infty \partialhi_k(q,s)S_F(q)$ has an Euler product representation as follows:
\begin{equation*}
L(s; k,F)=\prod_{p}L_p(s;k,F)
\end{equation*}
with
$$L_p(s;k,F)=1+\sum_{m\ge 1}S_F(p^{m})\Phi_k(p^{m},s).$$
By orthogonality of characters in $\mathbb Z/p^m\mathbb Z$ for integer $m\ge 1$ it easily follows that
\[\varrho_F(p^m)=p^{-nm}\sum_{1\leq a\leq p^m}S_F(p^m,a).\]
Then we have
\begin{equation}\label{Srho}
S_F(p^m)=\varrho_F(p^m)-\varrho_F(p^{m-1}).
\end{equation}
By the estimate from Lemma \ref{lemqf} we get $S_F(p^m)\ll_F (p^m)^{1-n/2+\varepsilon}$ and after telescoping summation of \eqref{Srho} we obtain
$$\varrho_F(p^\ell)-1\ll_F \sum_{m=1}^\ell (p^m)^{1-n/2+\varepsilon}\ll p^{1-n/2+\varepsilon},$$
where we again used that $n\geq 3$. Then by partial summation, using \eqref{Srho} and the estimate \eqref{pree}, we have
\[L_p(s;k,F)=\sum_{\ell\ge 0}\varrho_F(p^{\ell})\left(\Phi_k(p^{\ell},s)-\Phi_k(p^{\ell+1},s)\right),\]
where we set $\varrho_F(1)=\Phi_k(1,s)=1$. \newline
From \eqref{eq:Phi} and the definition of $f_k(q,\delta,s)$ in Lemma \ref{t21}, we see that
\begin{align*}
\Phi_{k}(p^m,s)&=f_k(p^m,p^m,s)-f_k(p^m,p^{m-1},s)\\
&=\frac{1}{p^{ms}}\sum_{d_1d_2...d_{k}=p^m}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\right)^{s}}-\frac{\left(1-p^{-s}\right)^{k}}{\varphi(p)p^{(m-1)s}}\tau_k(p^{m-1}).
\end{align*}
For the first expression above, denote
\[
I_k=\sum_{d_1d_2...d_{k}=p^m}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\right)^{s}}.\]
Then for $m\ge 1$ and $k=2$ we have
\[I_2=1+m(1-p^{-s}).\]
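We record the short computation behind this identity for the reader's convenience: for $k=2$ the variable $t_2$ must divide the empty product, so $t_2=1$, and writing $d_2=p^{j}$ with $0\le j\le m$ we get
\[
I_2=\sum_{j=0}^{m}\ \sum_{t_1\mid p^{j}}\frac{\mu(t_1)}{t_1^{s}}=1+\sum_{j=1}^{m}\left(1-p^{-s}\right)=1+m\,(1-p^{-s}).
\]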
Now using the identity $\tau_k(p^m)=\sum_{v=0}^m \tau_{k-1}(p^{m-v})$, from which it also follows that
\begin{equation}\label{tau_kminus}\tau_k(p^m)-\tau_{k-1}(p^m)=\tau_k(p^{m-1}),
\end{equation}
we see that
\begin{align*}
I_k&=\sum_{v=0}^{m}\sum_{d_1d_2...d_{k-1}=p^{m-v}}\sum_{\substack{t_i|(\prod_{j=i+1}^kd_{j})\\ i=1,2,...,k}}\frac{\mu(t_1)\dots \mu(t_k)}{\left(t_1...t_{k}\right)^{s}}\\
&=\sum_{d_1d_2...d_{k-1}=p^{m}}\sum_{\substack{t_i|(\prod_{j=i+1}^{k-1}d_{j})\\ i=1,2,...,k-1}}\frac{\mu(t_1)\dots \mu(t_{k-1})}{\left(t_1...t_{k-1}\right)^{s}}+\sum_{v=1}^{m}\sum_{d_1d_2...d_{k-1}=p^{m-v}}\left(1-\frac{1}{p^s}\right)^{k-1}\\
&=I_{k-1}+\left(1-p^{-s}\right)^{k-1}\sum_{v=1}^{m}\tau_{k-1}(p^{m-v})=I_{k-1}+\left(1-p^{-s}\right)^{k-1}\left(\tau_k(p^m)-\tau_{k-1}(p^m)\right)\\
&=1+m(1-p^{-s})+\sum_{v=3}^{k}\left(1-p^{-s}\right)^{v-1}\tau_{v}(p^{m-1})=\sum_{v=1}^{k}\left(1-p^{-s}\right)^{v-1}\tau_{v}(p^{m-1}).
\end{align*}
Hence
\begin{equation}\label{Phi_v2}
\Phi_k(p^m,s)=p^{-ms}\left(\sum_{1\le v\le k}(1-p^{-s})^{v-1}\tau_{v}(p^{m-1})-\tau_k(p^{m-1})\frac{p^s(1-p^{-s})^{k}}{p-1}\right)
\end{equation}\newline
We now aim to find the value of $\Phi_k(p^m,s)-\Phi_k(p^{m+1},s)$ for each non-negative integer $m$. When $m=0$ we have
\begin{align*}
\Phi_k(1,s)-\Phi_k(p,s)&=1-p^{-s}\left(\sum_{v=1}^{k}(1-p^{-s})^{v-1}\tau_{v}(1)-\tau_k(1)\frac{p^s(1-p^{-s})^{k}}{p-1}\right)\\
&=1-p^{-s}\left(\frac{1-(1-p^{-s})^k}{1-(1-p^{-s})}-\frac{p^s(1-p^{-s})^k}{p-1}\right)\\
&=(1-p^{-s})^k\frac{p}{p-1}=(1-p^{-1})^{-1}(1-p^{-s})^k.
\end{align*}
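This last identity is also easy to confirm mechanically; the following SymPy fragment (purely illustrative, with $x$ standing for $p^{-s}$ and the value $k=3$ an arbitrary choice) reduces the difference of the two sides to $0$.
\begin{verbatim}
import sympy as sp

p, x = sp.symbols('p x')   # x plays the role of p^{-s}
k = 3                      # any fixed small k will do
lhs = 1 - x*(sum((1 - x)**(v - 1) for v in range(1, k + 1))
             - (1 - x)**k/(x*(p - 1)))
rhs = (1 - x)**k * p/(p - 1)
print(sp.cancel(lhs - rhs))   # prints: 0
\end{verbatim}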
If $f(z)$ is a formal power series, we denote by $[z^n]f(z)$ the coefficient of $z^n$ in $f(z)$. Then for any $|z|<1$ and $m, v\in\mathbb Z_+$ we have
$$\tau_v(p^{m-1})=[z^{m-1}]\left((1-z)^{-v}\right),$$
since $\tau_v(p^{j})=\binom{j+v-1}{v-1}$ and $(1-z)^{-v}=\sum_{j\ge 0}\binom{j+v-1}{v-1}z^{j}$. Since the coefficient-extraction operator $f\mapsto[z^n]f(z)$ is linear, we have
\begin{align*}
\phi_k(p^m,s)&:=\frac{1}{p^{ms}}\sum_{v=1}^{k}(1-p^{-s})^{v-1}\tau_v(p^{m-1})\\
&=[z^{m-1}]\left(\frac{1}{p^{ms}}\sum_{v=1}^{k}\frac{(1-p^{-s})^{v-1}}{(1-z)^v}\right)\\
&=[z^{m-1}]\left(\frac{1}{p^{(m-1)s}}\frac{1}{1-p^sz}\left(1-\frac{(1-p^{-s})^{k}}{(1-z)^k}\right)\right)\\
&=1-p^{(1-m)s}(1-p^{-s})^{k}[z^{m-1}]\left((1-p^sz)^{-1}(1-z)^{-k}\mathrm ight)\\
&=1-(1-p^{-s})^{k}\sum_{0\le \ell\le m-1}p^{-s\ell }[z^{\ell}](1-z)^{-k}\\
&=1-(1-p^{-s})^{k}\left((1-p^{-s})^{-k}-\sum_{\ell\ge m}p^{-s\ell}[z^{\ell}](1-z)^{-k}\right)\\
&=(1-p^{-s})^{k}\sum_{\ell\ge m}p^{-s\ell}[z^{\ell}](1-z)^{-k}=(1-p^{-s})^{k}\sum_{\ell\ge m}p^{-s\ell}\tau_k(p^\ell).
\end{align*}
Then for $m\ge 1$ we get
$$\phi_k(p^m,s)-\phi_k(p^{m+1},s)=(1-p^{-s})^k p^{-ms}\tau_k(p^m).$$
From \eqref{Phi_v2} it follows that when $m\geq 1$ we have
\begin{align*}
\Phi_k(p^m,s)-\Phi_k(p^{m+1},s)=&\left(\phi_k(p^m,s)-\frac{(1-p^{-s})^{k}p^s}{p^{sm}(p-1)}\tau_k(p^{m-1})\right)\\
&-\left(\phi_k(p^{m+1},s)-\frac{(1-p^{-s})^{k}p^s}{p^{s(m+1)}(p-1)}\tau_k(p^{m})\right)\\
=&(1-p^{-s})^kp^{-ms}\left(\tau_{k}(p^m)-\frac{p^s}{p-1}\left(\tau_{k}(p^{m-1})-\frac{\tau_k(p^m)}{p^s}\right)\right)\\
=&\frac{(1-p^{-s})^k}{1-p^{-1}}p^{-ms}\left(\tau_{k}(p^m)-p^{s-1}\tau_{k}(p^{m-1})\right).
\end{align*}
Let $\sigma:=\Re(s)>0$. Then, according to \eqref{pree}, we have $\Phi_k(p^\ell,s)\rightarrow 0$ as $\ell\rightarrow \infty$ for fixed $s$, and after an appropriate telescoping summation we can write
\begin{align*}
L_p(s;k,F)&=1+\sum_{\ell\ge 0}(\varrho_F(p^{\ell})-1)(\Phi_k(p^\ell,s)-\Phi_k(p^{\ell+1},s))\\
&=1+\sum_{\ell\ge 1}O\left(p^{1-n/2+\varepsilon}p^{-\ell\sigma}\left(\tau_{k}(p^{\ell})+p^{\sigma-1}\tau_{k}(p^{\ell-1})\right)\right).\end{align*}
Let us further assume that $\sigma>1/2$, so that we obtain
$$L_p(s;k,F)=1+O\left(p^{1-n/2+\varepsilon-\sigma}(1+p^{\sigma-1})\right)=1+O(p^{-n/2+\varepsilon}+p^{1-n/2-\sigma+\varepsilon}).$$
Therefore if $\sigma>\max(1/2,2-n/2)=1/2$, and setting $\tau_{k}(p^{-1}):=0$, we have that the Euler product
$$L(s;k,F)=\prod_{p}\left(\sum_{\ell\ge 0}\frac{\varrho_F(p^{\ell})\left(\tau_{k}(p^{\ell})-p^{s-1}\tau_{k}(p^{\ell-1})\right)}{p^{\ell s}}\right)\left(\frac{(1-p^{-s})^k}{1-p^{-1}}\right)\,,$$
is absolutely convergent. In particular, by \eqref{tau_kminus} we have
$$L(1;k,F)=\prod_{p}\left(\sum_{\ell\ge 0}\frac{\varrho_F(p^{\ell})\tau_{k-1}(p^{\ell})}{p^{\ell}}\right)\left(1-\frac{1}{p}\right)^{k-1}>0\,.$$
Now from $$C_{k,k-1}=\frac{1}{(k-1)!}L(1;k,F)\operatorname*{Res}_{s=1}\left((s-1)^{k-1}\zeta(s)^k\right)=\frac{L(1;k,F)}{(k-1)!}$$ we conclude that $C_{k,k-1}>0$, which finalizes the proof of Theorem \ref{mth}.
\section{Final remarks}
We believe that the application of the circle method in estimating divisor sums over values of quadratic polynomials can be extended also to the sum
\[\Sigma^\ell_{k,F}(X; {\mathcal{B}}):=\sum_{{\bf x}\in X\mathcal{B}\cap\mathbb{Z}^{d}}\tau^\ell_{k}\left(F({\bf x})\right)\,.\]
The treatment of the sum $S(\alpha)$ remains the same, and one could use a level of distribution result for the function $\tau^\ell_k(m)$ given by Rieger (Satz 3, \cite{Rieger}). In this case a separate treatment for $q$ in the middle range $(\log x)^\lambda\le q\le L\ll X$ might also be required.
\noindent
{\sc Institute of Analysis and Number Theory\\
Graz University of Technology \\
Kopernikusgasse 24/II\\
8010 Graz\\
Austria}\newline
\href{mailto:[email protected]}{\small [email protected]}
\noindent
{\sc School of Mathematical Sciences\\
East China Normal University\\
500 Dongchuan Road\\
Shanghai 200241\\
PR China}\newline
\href{mailto:[email protected]}{\small [email protected]}
\end{document}
|
\begin{document}
\title{Another application of Linnik's dispersion method}
\date{\today}
\author{\'Etienne Fouvry}
\address{Laboratoire de Math\' ematiques d'Orsay, Univ. Paris--Sud, CNRS, Universit\' e Paris--Saclay, 91405 Orsay, France}
\email{[email protected]}
\author{Maksym Radziwi\l\l}
\address{
Department of Mathematics \\
Caltech \\
1200 E California Blvd \\
Pasadena \\ CA \\ 91125
}
\email{[email protected]}
\dedicatory{In memoriam Professor Yu. V. Linnik (1915-1972)}
\keywords{equidistribution in arithmetic progressions, dispersion method}
\subjclass[2010]{Primary 11N69}
\begin{abstract} Let $\alpha_m$ and $\beta_n$ be two sequences of real numbers supported on $[M, 2M]$ and $[N, 2N]$ with $M = X^{1/2 - \delta}$ and $N = X^{1/2 + \delta}$. We show that there exists a $\delta_0 > 0$ such that the multiplicative convolution of $\alpha_m$ and $\beta_n$ has exponent of distribution $\frac{1}{2} + \delta-\varepsilon$ (in a weak sense) as long as $0 \leq \delta < \delta_0$, the sequence $\beta_n$ is Siegel-Walfisz and both sequences $\alpha_m$ and $\beta_n$ are bounded above by divisor functions. Our result is thus a general dispersion estimate for ``narrow'' type-II sums. The proof relies crucially on Linnik's dispersion method and recent bounds for trilinear forms in Kloosterman fractions due to Bettin-Chandee. We highlight an application related to the Titchmarsh divisor problem.
\end{abstract}
\maketitle
\renewcommand{(\roman{enumi})}{(\roman{enumi})}
\section{Introduction} An important theme in analytic number theory is the study of the distribution of sequences in arithmetic progressions. A representative result in this field is the Bombieri-Vinogradov theorem \cite{BV}, according to which for any $A > 0$,
\begin{equation} \label{eq:primes}
\sum_{q \leq Q} \max_{(a,q) = 1} \Big | \sum_{\substack{p \leq x \\ p \equiv a \pmod{q}}} 1 - \frac{1}{\varphi(q)} \sum_{p \leq x} 1 \Big | \ll_{A} x (\log x)^{-A}
\end{equation}
provided that $Q \leq \sqrt{x} (\log x)^{-B}$ for some constant $B = B(A)$ depending on $A > 0$.
Nothing of the strength of \eqref{eq:primes} is known in the range $Q > x^{1/2 + \varepsilon}$ for any fixed $\varepsilon > 0$ and already establishing for any fixed integer $a \neq 0$ and for all $A > 0$ the weaker estimate,
\begin{equation} \label{eq:primes2}
\sum_{q \leq Q} \Big | \sum_{\substack{p \leq x \\ p \equiv a \pmod{q}}} 1 - \frac{1}{\varphi(q)} \sum_{p \leq x} 1 \Big | \ll_{a,A} x (\log x)^{-A}
\end{equation}
with $Q = x^{1/2 + \delta}$ and some $\delta > 0$ is a major open problem. If we could show \eqref{eq:primes2} then we would say that \textit{the primes have exponent of distribution $\frac 12 + \delta$ in a weak sense}. However we note that there are results of this type if one is allowed to restrict the sum over $q \leq Q$ in \eqref{eq:primes2} to integers that are $x^{\varepsilon}$-smooth, for a sufficiently small $\varepsilon > 0$ (see \cite{Zhang, Polymath}).
Any known approach to \eqref{eq:primes2} goes through combinatorial formulas which decompose the sequence of prime numbers as a linear combination of multiplicative convolutions of other sequences (see for example \cite[Chapter 13]{I-K}).
If one attempts to establish \eqref{eq:primes2} by using such a combinatorial formula then one is led to the problem of showing that for any $A > 0$,
\begin{equation} \label{eq:geh}
\sum_{\substack{q \leq Q \\ (q,a) = 1}} \Big | \sum_{\substack{M \leq m \leq 2M \\ N \leq n \leq 2N \\ m n \equiv a \pmod{q}}} \alpha_m \beta_n - \frac{1}{\varphi(q)} \sum_{\substack{M \leq m \leq 2M \\ N \leq n \leq 2N \\ (m n, q) = 1}} \alpha_{m} \beta_{n} \Big | \ll X (\log X)^{-A} \ , \ X := M N
\end{equation}
with $Q > X^{1/2 + \varepsilon}$ for some $\varepsilon > 0$. In \cite{Li} Linnik developed his ``dispersion method'' to tackle such expressions. The method relies crucially on the bilinearity of the problem, followed by the use of various estimates for Kloosterman sums of analytic or algebraic origins.
For a bound such as \eqref{eq:geh} to hold one needs to impose a ``Siegel-Walfisz condition'' on at least one of the sequences $\alpha_m$ or $\beta_n$.
\begin{definition}
We say that a sequence $\boldsymbol \beta = (\beta_n)$ satisfies a Siegel-Walfisz condition (alternatively we also say that $\boldsymbol \beta$ is \textit{Siegel-Walfisz}), if there exists an integer $k > 0$ such that for any fixed $A > 0$, uniformly in $x \geq 2$, $q > |a| \geq 1, r \geq 1$ and $(a,q) = 1$, we have,
\begin{equation*} \label{condSW}
\sum_{\substack{x < n \leq 2x \\ n \equiv a \pmod{q} \\ (n,r) = 1}} \beta_n - \frac{1}{\varphi(q)} \sum_{\substack{x < n \leq 2x \\ (n, q r) = 1}} \beta_{n} = O_{A}(\tau_k(r) \cdot x (\log x)^{-A}).
\end{equation*}
where $\tau_k(n) := \sum_{n_1 \ldots n_k = n} 1$ is the $k$th divisor function.
\end{definition}
It is widely expected (see e.g \cite[Conjecture 1]{BFI1}) that \eqref{eq:geh} should hold as soon as $\min(M,N) > X^{\varepsilon}$ provided that at least one of the sequences $\alpha_n,\beta_m$ is Siegel-Walfisz, and that there exists an integer $k > 0$ such that $|\alpha_m| \leq \tau_k(m)$ and $|\beta_n| \leq \tau_k(n)$ for all integers $m,\, n \geq 1$. We are however very far from proving a result of this type.
When $Q > X^{1/2 + \varepsilon}$ for some $\varepsilon > 0$, there are only a few results establishing \eqref{eq:geh} unconditionally in specific ranges of $M$ and $N$ (precisely \cite[Th\' eor\`eme 1]{FoActaMath}, \cite[Theorem 3]{BFI1}, \cite[Corollaire 1]{FoAnnENS}, \cite[Corollary 1.1 (i)]{FouRadz2}). All the results that establish \eqref{eq:geh} unconditionally require one of the variables $N$ or $M$ to be much smaller than the other. We call such cases ``unbalanced convolutions'' and this forms the topic of our previous paper \cite{FouRadz2}.
In applications a recurring range is one where $M$ and $N$ are roughly of the same size. This often corresponds to the case of ``type II sums'' in which one is permitted to exploit bilinearity but not much else. This is the range to which we contribute in this paper.
\begin{theorem} \label{thm:main}
Let $k \geq 1$ be an integer and $M,N \geq 1$ be given. Set $X = M N$. Let $\alpha_m$ and $\beta_n$ be two sequences of real numbers supported respectively on $[M, 2M]$ and $[N, 2N]$. Suppose that $\boldsymbol \beta = (\beta_n)$ is Siegel--Walfisz and suppose that $|\alpha_{m}| \leq \tau_k(m)$ and $|\beta_n| \leq \tau_k(n)$ for all integers $m,n \geq 1$. Then, for every $\varepsilon > 0$ and every $A > 0$,
\begin{equation} \label{eq:maineq}
\sum_{\substack{Q \leq q \leq 2Q \\ (q,a) = 1}} \Big | \sum_{\substack{m n \equiv a \pmod{q}}} \alpha_m \beta_n - \frac{1}{\varphi(q)} \sum_{(m n, q) = 1} \alpha_m \beta_n \Big | \ll_{A} X (\log X)^{-A}
\end{equation}
uniformly in $N^{56/23} X^{-17/23 + \varepsilon} \leq Q \leq N X^{-\varepsilon}$ and $1 \leq |a| \leq X$.
\end{theorem}
Setting $N = X^{1/2 + \delta}$ and $M = X^{1/2 - \delta}$ in Theorem \ref{thm:main} it follows from Theorem \ref{thm:main} and the Bombieri-Vinogradov theorem that \eqref{eq:maineq} holds for
all $Q \leq NX^{-\varepsilon}$ with $0\leq \delta < \delta_0:=\frac{1}{112}.$
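For the reader's convenience we record the elementary computation behind this value of $\delta_0$ (ignoring the $\varepsilon$'s): with $N = X^{1/2+\delta}$ the lower bound of Theorem \ref{thm:main} is
\[
N^{56/23}X^{-17/23}=X^{\frac{11}{23}+\frac{56}{23}\delta},
\]
and this does not exceed $X^{1/2}$, the range already covered by the Bombieri-Vinogradov theorem, precisely when $\frac{56}{23}\delta\le \frac12-\frac{11}{23}=\frac1{46}$, that is when $\delta\le\frac1{112}$.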
Previously the existence of such a $\delta_0 > 0$ was established conditionally on Hooley's $R^{\star}$ conjecture on cancellations in short incomplete Kloosterman sums in \cite[Th\' eor\`eme 1]{FoActaArith} and in that case one can take $\delta_0 = \frac{1}{14}$. Similarly to our previous paper, we use the work of Bettin-Chandee \cite{Be-Ch} and Duke-Friedlander-Iwaniec \cite{D-F-I} as an unconditional substitute for Hooley's $R^{\star}$ conjecture. In fact the proof of Theorem \ref{thm:main} follows closely the proof of the conditional result in \cite[Th\'eor\`eme 1]{FoActaArith} up to the point where Hooley's $R^{\star}$ conjecture is applied. Incidentally we notice that the largest $Q$ that Theorem \ref{thm:main} allows to take is $Q = X^{17/33 - 5 \varepsilon}$ provided that one chooses $N = X^{17/33 - 4 \varepsilon}$.
Unfortunately the type-II sums that our Theorem \ref{thm:main} allows to estimate are too narrow to make Theorem \ref{thm:main} widely applicable in many problems (however see \cite{Tao} for an interesting connection with cancellations in character sums). We record nonetheless below one corollary, which is related to Titchmarsh's divisor problem concerning the estimation of $\sum_{p \leq x} \tau_2(p - 1)$ (for the best results on this problem see \cite[Corollaire 2]{FoCrelle}, \cite[Corollary 1]{BFI1} and \cite{Drappeau}). The proof of the Corollary below will be given in \S \ref{proofcorollary}.
\begin{corollary}\label{appli} Let $k \geq 1$ and
let $\boldsymbol \alpha$ and $\boldsymbol \beta$ be two sequences of real numbers as in Theorem \ref{thm:main}. Let $\delta$ be a constant satisfying
$$
0 < \delta < \frac{1}{112},
$$
and let $$ X\geq 2,\ M=X^{1/2-\delta}, \text{ and } N=X^{1/2 + \delta}.
$$
Then for every $A >0$ we have the equality
\begin{equation*}
\sum_{m\sim M} \sum_{n \sim N} \alpha_m \beta_n \tau_2 (mn-1)
= 2 \sum_{q\geq 1 } \frac{1}{\varphi (q)} \underset{\substack{m\sim M, n\sim N \\ mn >q^2 \\ (mn,q) =1}} {\sum \sum} \alpha_m \beta_n
+
O \bigl( X (\log X)^{-A}\bigr).
\end{equation*}
\end{corollary}
\section{Conventions and lemmas}
\subsection{Conventions} For $M$ and $N\geq 1$, we put $X=MN$ and $\mathcal L =\log 2X$.
Whenever it appears in the subscript of a sum, the notation $n \sim N$ will mean $N \leq n < 2N$. Given an integer $a \not= 0$ and two sequences $\boldsymbol \alpha = (\alpha_m)_{M \leq m < 2M}$ and $\boldsymbol \beta = (\beta_n)_{N \leq n < 2N}$ supported respectively on $[M, 2M]$ and $[N, 2N]$ we define the discrepancy
\begin{equation*}
E (\boldsymbol \alpha, \boldsymbol \beta, M, N, q,a) :=
\underset{\substack{m\sim M, n\sim N \\ mn\equiv a \bmod q}}{\sum \sum} \alpha_m \beta_n -\frac{1}{\varphi (q)}
\underset{\substack{m\sim M, n\sim N \\ (mn, q)=1}}{\sum \sum} \alpha_m \beta_n,
\end{equation*}
and we also define the mean-discrepancy,
\begin{equation} \label{defDelta}
\Delta (\boldsymbol \alpha, \boldsymbol \beta, M, N, Q, a) := \sum_{\substack{q \sim Q \\ (q,a)=1}} |E ( \boldsymbol \alpha, \boldsymbol \beta, M, N, q, a)|.
\end{equation}
Throughout $\eta$ will denote any positive number
the value of which may change at each occurrence. The dependency on $\eta$ will not be recalled in the $O$ or $\ll$--symbols. Typical examples are $\tau_k (n) = O(n^\eta)$ or $(\log x)^{10} =O (x^\eta)$, uniformly for $x\geq 1$.
If $f$ is a smooth real function, its Fourier transform is defined by
$$
\hat f (\xi) = \int_{-\infty}^\infty f(t) e( -\xi t) \, {\rm d} t,
$$
where $e(\cdot)= \exp (2 \pi i \cdot).$
\subsection{Lemmas}
Our first lemma is a classical finite version of the Poisson summation formula
in arithmetic progressions, with a good error term.
\begin{lemma} \label{existenceofpsi}There exists
a smooth function $\psi\ : \ \mathbb R \longrightarrow \mathbb R^+$, with compact support equal to $[1/2, 5/2]$, larger than the characteristic function of the interval $[1,2]$, equal to $1$ on this interval
such that, uniformly for integers $a$ and $q \geq 1$, for $M \geq 1$ and $H\geq (q/M)\log ^4 2M$
one has the equality
\begin{equation}\label{ineqpsi}
\sum_{m\equiv a \bmod q} \psi \Bigl( \frac{m}{M}\Bigr) =\hat{\psi}(0) \frac {M}{q}
+ \frac{M}{q} \sum_{0 < \vert h \vert \leq H}e \bigl( \frac{ ah}{q} \bigr)\hat \psi \Bigl( \frac{h}{q/M}\Bigr) +O(M^{-1}).
\end{equation}
Furthermore, uniformly for $q\geq 1$ and $M\geq 1$ one has the equality
\begin{equation}\label{Poissoncoprime}
\sum_{(m,q)=1} \psi \Bigl( \frac{m}{M}\Bigr) =\frac{\varphi (q)}{q}\hat{\psi}(0) M + O \bigl(\tau_2 (q) \log^4 2M \bigr).
\end{equation}
\end{lemma}
\begin{proof} See Lemma 2.1 of \cite{FouRadz2}, inspired by \cite[Lemma 7]{B-F-I2}.
\end{proof}
We now recall a classical lemma on the average behavior of the $\tau_k$-function in arithmetic progressions (see \cite[Lemma 1.1.5]{Li}, for instance).
\begin{lemma}\label{dkinarith}
For every $k\geq 1$, for every $\varepsilon >0$, there exists $C(k,\varepsilon)$ such that, for every $x \geq 2$, for every $x^\varepsilon < y < x$, for every $1\leq q \leq yx^{ -\varepsilon}$, for every integer $a$ coprime with $q$, one has the inequality $$
\sum_{\substack{x-y < n \leq x \\ n \equiv a \bmod q}} \tau_k (n) \leq C({k, \varepsilon})\frac{ y }{\varphi (q)}(\log 2x)^{k-1}.
$$
\end{lemma}
The following lemma is one of the various forms of the so--called Barban--Davenport--Halberstam Theorem (for a proof see for instance
\cite[Theorem 0 (a)]{BFI1}).
\begin{lemma}\label{Ba-Da-Ha}Let $k > 0$ be an integer. Let $\boldsymbol \beta = (\beta_{n})$ be a Siegel--Walfisz sequence such that $|\beta_{n}| \leq \tau_k(n)$ for all integer $n \geq 1$. Then for every $A > 0$ there exists $B=B(A)$ such that, uniformly for $N \geq 1$ one has the equality
$$
\sum_{q\leq N(\log 2N)^{-B}}\ \sum_{a,\, (a,q) =1} \Bigl\vert \sum_{\substack{n\sim N \\ n\equiv a \bmod q}} \beta_n -\frac{1}{\varphi (q)} \sum_{\substack{n\sim N \\ (n,q) =1}} \beta_n
\Bigr\vert^2 =O_A \bigl( N (\log 2N)^{-A}\bigr).
$$
\end{lemma}
We now recall an easy consequence of Weil's bound for Kloosterman sums.
\begin{lemma}\label{shortkloo}
Let $a$ and $b$ be two integers $\geq 1$. Let $\mathcal I$ be an interval included in $[1, a]$. Then for every integer $\ell$ and every $\varepsilon >0$ we have the inequality
$$
\sum_{\substack{n\in \mathcal I \\ (n,ab) =1}} \frac{n}{\varphi (n)} e\Bigl( \ell \frac{\overline n}{a}\Bigr) = O_\varepsilon \Bigl( (\ell, a)^\frac 12 (ab)^\varepsilon a^\frac12
\Bigr).
$$
\end{lemma}
\begin{proof} We begin with the case $b=1$. We write the factor $\frac{n}{\varphi (n)} $ as
$$
\frac{n}{\varphi (n)} = \sum_{\nu \mid n^\infty} \nu^{-1} = \sum_{\kappa (\nu) \mid n } \nu^{-1},
$$
where $\kappa (\nu)$ is the largest squarefree integer dividing $\nu$ (sometimes $\kappa (\nu)$ is called the {\it kernel } of $\nu$). This gives the equality
$$
\Bigl\vert \ \sum_{\substack{n\in \mathcal I \\ (n,a) = 1}} \frac{n}{\varphi (n)} e\Bigl( \ell \frac{\overline n}{a}\Bigr) \ \Bigr\vert \leq
\sum_{\nu \geq 1} \nu^{-1} \Bigl\vert \ \sum_{\substack{n\in \mathcal I \\ \kappa (\nu) \mid n \\ (n,a) = 1}}
e\Bigl( \ell \frac{\overline n}{a}\Bigr) \ \Bigr\vert
= \sum_{\substack{\nu \geq 1 \\ (\nu, a)=1}} \nu^{-1} \Bigl\vert \ \sum_{\substack{m\in \mathcal I /\kappa (\nu) \\ (m,a) = 1}}
e\Bigl( \ell \frac{\overline {\kappa (\nu)}\, \overline m}{a}\Bigr) \ \Bigr\vert.
$$
In the summation we can restrict to the $\nu$ such that $\kappa (\nu) \leq a$. Applying the classical bound for short Kloosterman sums, we deduce
that
$$
\Bigl\vert \ \sum_{\substack{n\in \mathcal I \\ (n,a) = 1}} \frac{n}{\varphi (n)} e\Bigl( \ell \frac{\overline n}{a}\Bigr) \ \Bigr\vert \ll_\varepsilon (\ell, a)^\frac{1}{2} a^{\frac 12 + \varepsilon} \prod_{p \leq a} \Bigl(1-\frac{1}{p}\Bigr)^{-1} \ll_\varepsilon (\ell, a)^\frac{1}{2} a^{\frac 12 + 2 \varepsilon}.
$$
This proves Lemma \ref{shortkloo} in the case where $b=1$. When $b \not=1$, we use the M\" obius inversion formula to detect the condition $(n,b)=1$.
\end{proof}
Our central tool is a bound for trilinear forms for Kloosterman fractions, due to Bettin and Chandee \cite[Theorem 1]{Be-Ch}. The result of Bettin-Chandee builds on work of Duke-Friedlander-Iwaniec \cite[Theorem 2]{D-F-I} who considered the case of bilinear forms.
These two papers show cancellations in exponential sums involving Kloosterman fractions $e(a \overline{m} / n)$ with $m \asymp n$. We state below the main theorem of Bettin-Chandee.
\begin{lemma}\label{trilinear} For every $\varepsilon >0$ there exists $C(\varepsilon)$ such that
for every non-zero integer $\vartheta$, all sequences $\boldsymbol \alpha$, $\boldsymbol \beta$ and $\boldsymbol \nu$ of complex numbers, and all $A$, $M$ and $N\geq 1$, one has the inequality
\begin{multline*}
\Bigl\vert \, \sum_{a\sim A} \sum_{m\sim M} \sum_{n \sim N} \alpha (m) \beta (n) \nu (a) e \Bigl(\vartheta \frac{a \overline m}{n} \Bigr)\, \Bigr\vert \leq C(\varepsilon) \Vert \boldsymbol \alpha\Vert_{2, M} \, \Vert \boldsymbol \beta\Vert_{2, N} \, \Vert \boldsymbol \nu\Vert_{2, A}
\\
\times \Bigl( 1+ \frac{\vert \vartheta\vert A}{MN}\Bigr)^\frac{1}{2} \Bigl( (AMN)^{\frac{7}{20} + \varepsilon} \, (M+N) ^\frac{1}{4} +(AMN)^{\frac{3}{8} +\varepsilon} (AM+AN) ^\frac{1}{8}
\Bigr).
\end{multline*}
\end{lemma}
\section{Proof of Theorem \ref{thm:main}}
Throughout the proof we will suppose that the inequality $1 \leq |a| \leq X$ holds and that we also have
\begin{equation}\label{universal}
X ^\frac 38 \leq M\leq X^\frac 12 \leq N \text{ and } Q\leq N.
\end{equation}
\subsection{Beginning of the dispersion} Without loss of generality we can suppose that the sequence $\boldsymbol \beta$ satisfies the following property
\begin{equation}\label{betan=0}
n\mid a \Rightarrow \beta_n= 0.
\end{equation}
Such an assumption is justified because the contribution to $ \Delta( \boldsymbol \alpha, \boldsymbol \beta, M, N, Q,a) $ of the $(q,m,n)$
such that $n \mid a$ is
$$
\ll QX^\eta+X^\eta \sum_{n \mid a}\sum_{\substack{m\sim M \\ mn\not= a}} \tau_2(\vert mn -a\vert) + M X^\varepsilon \ll (M+Q)X^\eta.
$$
By \eqref{defDelta}, we have the inequality
\begin{equation*}
\Delta( \boldsymbol \alpha, \boldsymbol \beta, M, N, Q,a) \leq \sum_{q\sim Q} \, \sum_{\substack{m\sim M \\ (m,q)=1}} \vert \alpha_m \vert
\Bigl\vert \sum_{\substack{n \sim N\\ n\equiv a \overline m\bmod q}} \beta_n - \frac{1}{\varphi (q)} \sum_{\substack{n\sim N \\ (n,q)=1}} \beta_n \Bigr\vert.
\end{equation*}
Let $\psi$ be the smooth function constructed in Lemma \ref{existenceofpsi}.
By the Cauchy--Schwarz inequality, the inequality $|\alpha_m| \leq \tau_k(m)$ and by Lemma \ref{dkinarith} we deduce
\begin{equation}\label{CS}
\Delta^2( \boldsymbol \alpha, \boldsymbol \beta, M, N, Q,a) \ll MQ \mathcal L^{k^2-1}\, \Bigl\{ W(Q) -2 V(Q) +U(Q)\Bigr\},
\end{equation}
with
\begin{align}
U(Q)&= \sum_{ (q,a)=1} \frac{\psi (q/Q)}{\varphi^2 (q)}\,\Bigl( \sum_{\substack{n \sim N \\ (n,q)=1}} \beta_n \Bigr)^2 \sum_{(m,q)=1} \psi \Bigl( \frac{m}{M} \Bigr),\label{defU}\\
V(Q)& = \sum_{ (q,a)=1} \frac{\psi (q/Q)}{\varphi (q)}\,
\Bigl( \sum_{\substack{n_1 \sim N \\ (n_1,q)=1}} \beta_{n_1} \Bigr)
\Bigl( \sum_{\substack{n_2 \sim N \\ (n_2,q)=1}} \beta_{n_2} \Bigr) \sum_{ m\equiv a \overline{n_1}\bmod q } \psi \Bigl( \frac{m}{M} \Bigr),\nonumber
\\
W(Q) & = \sum_{(q,a)=1} \psi (q/Q)
\Bigl( \sum_{\substack{n_1 \sim N\\ (n_1,q)=1}} \beta_{n_1} \Bigr)
\Bigl( \sum_{\substack{n_2 \sim N\\ (n_2,q)=1}} \beta_{n_2} \Bigr) \sum_{\substack{ m\equiv a \overline{n_1}\bmod q \\ m\equiv a \overline{n_2}\bmod q}} \psi \Bigl( \frac{m}{M} \Bigr).\label{defW}
\end{align}
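Schematically, and with the smooth weights suppressed, the three sums come from expanding the square after Cauchy--Schwarz: for $(m,q)=1$,
\begin{multline*}
\Bigl| \sum_{\substack{n \sim N\\ n\equiv a \overline m\bmod q}} \beta_n - \frac{1}{\varphi (q)} \sum_{\substack{n\sim N \\ (n,q)=1}} \beta_n \Bigr|^{2}
= \underset{\substack{n_1, n_2 \sim N\\ n_1\equiv n_2\equiv a \overline m \bmod q}}{\sum\ \sum}\beta_{n_1}\beta_{n_2}\\
-\frac{2}{\varphi(q)}\underset{\substack{n_1\sim N,\ n_1\equiv a \overline m \bmod q\\ n_2\sim N,\ (n_2,q)=1}}{\sum\ \sum}\beta_{n_1}\beta_{n_2}
+\frac{1}{\varphi(q)^{2}}\underset{\substack{n_1,n_2\sim N\\ (n_1n_2,q)=1}}{\sum\ \sum}\beta_{n_1}\beta_{n_2},
\end{multline*}
and summing over $m$ (with the condition $m\equiv a\overline{n_1}\bmod q$ recovered in the first two terms) and over $q$ produces $W(Q)$, $-2V(Q)$ and $U(Q)$ respectively.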
\subsection{Study of $U(Q)$} A direct application of \eqref{Poissoncoprime} of Lemma \ref{existenceofpsi} in the definition \eqref{defU} gives the equality
\begin{align}
U(Q) &= \hat \psi (0) M \sum_{ (q,a)=1} \frac{\psi (q/Q)}{q\varphi (q)}\,\Bigl( \sum_{\substack{n \sim N\\ (n,q)=1}} \beta_n \Bigr)^2 + O \bigl( N^2 Q^{-1} X^\eta
\bigr)\nonumber\\
&= U^{\rm MT} (Q) + O \bigl( N^2 Q^{-1}X^\eta \label{UMT}
\bigr),
\end{align}
by definition.
\subsection{Study of $V(Q)$} Let $\varepsilon$ be a fixed positive number. We now apply \eqref{ineqpsi} of Lemma \ref{existenceofpsi} with
\begin{equation}\label{defH}
H = M^{-1} Q X^\varepsilon.
\end{equation}
This leads to the equality
\begin{equation}\label{V1}
V(Q)= V^{\rm MT}(Q) + V^{\rm Err1} (Q) + V^{\rm Err2} (Q),
\end{equation}
where each of the three terms corresponds to the contribution of the three terms on the right--hand side of \eqref{ineqpsi}. We directly have the equality
\begin{equation}\label{V2}
V^{\rm Err2} (Q) = O \bigl( M^{-1} N^2 X^\eta \bigr).
\end{equation}
For the main term we get
\begin{equation}\label{V3}
V^{\rm MT} (Q) = \hat \psi (0) M \sum_{ (q,a)=1} \frac{\psi (q/Q)}{q\varphi (q)}\,\Bigl( \sum_{\substack{n \sim N\\ (n,q)=1}} \beta_n \Bigr)^2.
\end{equation}
By the definition of $V^{\rm Err1} (Q)$ we have the equality
\begin{multline*}
V^{\rm Err1} (Q) = M \sum_{ (q,a)=1} \frac{\psi (q/Q)}{q\varphi (q)}\,
\Bigl( \sum_{\substack{n_2 \sim N\\ (n_2,q)=1}} \beta_{n_2} \Bigr)\\
\Bigl( \sum_{\substack{n_1 \sim N\\ (n_1,q)=1}} \beta_{n_1}
\sum_{0 < \vert h \vert \leq H} \hat \psi \Bigl(\frac{h}{q/M}\Bigr) e \Bigl( \frac{ah \overline{n_1}}{q} \Bigr)\Bigr),
\end{multline*}
from which we deduce the inequality
\begin{equation}\label{split1}
\bigl\vert \, V^{\rm Err1} (Q) \, \bigr\vert \leq M Q^{-2} \sum_{n_1\sim N} \vert \beta_{n_1}\vert \sum_{n_2\sim N} \vert \beta_{n_2}\vert
\sum_{0< \vert h \vert \leq H} \bigl\vert \,\mathcal V (n_1, n_2, h) \, \bigr\vert
\end{equation}
with
$$
\mathcal V (n_1, n_2, h) = \sum_{ (q,an_1n_2)=1} \psi (q/Q)\frac{Q^2}{q \varphi (q)} \hat \psi \Bigl(\frac{h}{q/M}\Bigr) e \Bigl( \frac{ah \overline{n_1}}{q} \Bigr).
$$
Since $(q,n_1)=1$, B\' ezout's relation gives the equality $$
\frac{ah \overline{n_1}}{q} = -ah \frac{\overline q}{n_1} + \frac{ah}{n_1 q} \mod 1.
$$
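This is the classical reciprocity for such fractions: writing $\overline{n_1}$ for the inverse of $n_1$ modulo $q$ and $\overline q$ for the inverse of $q$ modulo $n_1$, one has $n_1\overline{n_1}+q\,\overline{q}\equiv 1 \pmod{n_1q}$, hence
$$
\frac{\overline{n_1}}{q}\equiv-\frac{\overline{q}}{n_1}+\frac{1}{n_1q} \pmod 1,
$$
and it remains to multiply by $ah$.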
By the inequality $1 \leq |a| \leq X$ and by the definition of $H$, the derivative of the bounded function
$$t\mapsto \psi (t/Q)\frac{Q^2}{t^2}\hat \psi \Bigl(\frac{h}{t/M}\Bigr) e\Bigl( \frac {ah}{n_1 t}\Bigr)$$
is $\ll X^\varepsilon t^{-1}$ when $t \asymp Q$. This allows us to perform a partial summation over the variable $q$ at the cost
of a factor $X^\varepsilon$. After all these considerations, we see that there exists a subinterval $\mathcal J \subset [Q/2, 5 Q/2]$ such that we have the inequality
$$
\bigl\vert \, \mathcal V (n_1, n_2, h)\, \bigr\vert \ll X^\varepsilon \,\Bigl\vert \sum_{\substack{q \in \mathcal J\\ (q, n_1n_2)=1}} \frac{q}{\varphi (q)} e \Bigl(
ah \frac{\overline q}{ n_1}\Bigr)\, \Bigr\vert.
$$
Lemma \ref{shortkloo} leads to the bound
$$
\bigl\vert \, \mathcal V (n_1, n_2, h)\, \bigr\vert \ll X^\varepsilon (ah, n_1)^\frac12 (n_1n_2)^\eta n_1^\frac 12.
$$
Inserting this into \eqref{split1}, we obtain
$$
V^{\rm Err1} (Q) \ll MN^\frac 32 Q^{-2} X^{\varepsilon +\eta} \sum_{n_1 \sim N} \vert \beta_{n_1}\vert \sum_{0< \vert h \vert \leq H} (h,n_1)^\frac 12,
$$
which finally gives
\begin{equation}\label{V4}
V^{\rm Err1} (Q) \ll N^\frac 52 Q^{-1} X^{2\varepsilon +\eta}
\end{equation}
using the inequality $|\beta_n| \leq \tau_k(n)$ and the definition of $H$. Combining \eqref{V1}, \eqref{V2}, \eqref{V3} and \eqref{V4} we obtain the equality
\begin{equation}\label{V5}
V(Q) = V^{\rm MT} (Q) + O_\varepsilon \bigl( (M^{-1} N^2 +N ^\frac 52 Q^{-1})X^{2\varepsilon+\eta} \bigr),
\end{equation}
where $V^{\rm MT} (Q)$ is defined in \eqref{V3} and where the constant implicit in the $O_\varepsilon$--symbol is uniform for
$a$ satisfying $1 \leq |a| \leq X$.
\section{Study of $W(Q)$}
\subsection{ The preparation of the variables } The conditions of the last summation in \eqref{defW} imply the congruence restriction
\begin{equation}\label{n1=n2}
n_1\equiv n_2 \bmod q \text{ and } (n_1n_2, q)=1.
\end{equation}
In order to control the mutual multiplicative properties of $n_1$ and $n_2$ we decompose these variables as
\begin{equation}\label{decompn1n2}
\begin{cases} (n_1, n_2 ) =d,\\
n_1= d \nu_1, \ n_2 = d \nu_2, \ (\nu_1, \nu_2)=1, \\
\nu_1= d_1\nu'_1\text { with } d_1 \mid d^\infty\text{ and } (\nu'_1, d)=1.
\end{cases}
\end{equation}
Thanks to $|\beta_n| \leq \tau_k(n)$ and to \eqref{betan=0} the contribution of the pairs $(n_1,n_2)$ with $d > X^\varepsilon$ to the right--hand side of \eqref{defW} is negligible since it is
\begin{align}
& \ll X^\eta \sum_{ X^\varepsilon <d \leq 2N} \sum_{m\sim M} \, \sum_{\substack{\nu_1 \sim N/d \\ dm\nu_1-a \not=0}} \sum_{\substack{q\sim Q \\ q \mid d\nu_1 m-a}}
\sum_{\substack{\nu_2 \sim N/d \\ \nu_2\equiv \nu_1 \bmod q}} 1 \nonumber \\
& \ll X^\eta \sum_{ X^\varepsilon <d \leq 2N} \sum_{m\sim M} \, \sum_{\substack{\nu_1 \sim N/d\\ dm\nu_1-a \not=0}} \tau_2 (\vert d\nu_1 m-a\vert) \Bigl( \frac{ N}{d Q} +1\Bigr) \nonumber\\
& \ll MN^2 Q^{-1} X^{\eta -\varepsilon} + X^{1 + \eta}. \label{d>X}
\end{align}
Now consider the contribution of the pairs $(n_1,n_2)$ with $d \leq X^\varepsilon$ and $d_1 >X^\varepsilon$ to the right--hand side of \eqref{defW}. It is
\begin{align}
& \ll X^\eta \sum_{ d \leq X^\varepsilon} \sum_{\substack{X^\varepsilon <d_1 <2N\\ d_1 \mid d^\infty}} \sum_{m\sim M} \, \sum_{\substack{\nu'_1 \sim N/(dd_1) \\ dd_1m\nu'_1-a \not=0}} \sum_{\substack{q\sim Q \\ q \mid dd_1\nu'_1 m-a}}
\sum_{\substack{\nu_2 \sim N/d \\ \nu_2\equiv d_1 \nu'_1 \bmod q}} 1 \nonumber \\
& \ll X^\eta \sum_{ d \leq X^\varepsilon} \sum_{\substack{X^\varepsilon <d_1 <2N\\ d_1 \mid d^\infty}} \sum_{m\sim M} \, \sum_{\substack{\nu'_1 \sim N/(d d_1)\\ dd_1m\nu'_1-a \not=0}} \tau_2 (\vert dd_1\nu'_1 m-a\vert)
\Bigl( \frac{ N}{d Q} +1\Bigr) \nonumber\\
&\ll X^\eta MN^2Q^{-1}\sum_{d\leq X^\varepsilon} \frac{1}{d^2} \sum_{\substack{d_1 > X^\varepsilon\\ d_1 \mid d^\infty}}\frac{1}{d_1} +X^\eta MN \sum_{d\leq X^\varepsilon} \frac{1}{d} \sum_{\substack{d_1 > X^\varepsilon\\ d_1 \mid d^\infty}}\frac{1}{d_1}\nonumber \\
& \ll MN^2 Q^{-1} X^{\eta -\frac{\varepsilon}{2}} + X^{1 + \eta -\frac{\varepsilon}{2}}. \label{d1>X}
\end{align}
Consider the conditions
\begin{equation}\label{d<Xd1<X}
d<X^\varepsilon \text{ and } d_1 < X^\varepsilon,
\end{equation}
and the subsum $\widetilde {W} (Q)$ of $W(Q)$ where the variables $n_1$ and $n_2$ satisfy the condition \eqref{d<Xd1<X}.
By \eqref{d>X} and \eqref{d1>X} we have the equality
\begin{equation}\label{W=tildeW}
W(Q) = \widetilde{ W}(Q)+ O \bigl( MN^2 Q^{-1} X^{\eta-\frac{\varepsilon}{2}}+ X^{1 + \eta}\bigr).
\end{equation}
\subsection{Expansion in Fourier series} We apply Lemma \ref{existenceofpsi} to the last sum over $m$ in \eqref{defW} with $H$ defined in \eqref{defH}. This decomposes $\widetilde{W}(Q)$ into the sum
\begin{equation}\label{decompoW}
\widetilde{W}(Q)= \widetilde{W}^{\rm MT}(Q) + \widetilde{W}^{\rm Err1} (Q) +\widetilde{ W}^{\rm Err2} (Q),
\end{equation}
where each of the three terms corresponds to the contribution of each term on the right--hand side of \eqref{ineqpsi}.
The easiest term is $\widetilde{ W}^{\rm Err2} (Q)$ since, by $|\beta_n| \leq \tau_k(n)$ and \eqref{universal}, it satisfies the inequality
\begin{align}
\widetilde{ W}^{\rm Err2} (Q) &\ll M^{-1} \sum_{Q / 2 \leq q \leq 5 Q / 2}\ \underset{\substack{n_1, n_2 \sim N\\ n_1 \equiv n_2 \bmod q}}{\sum \sum} \tau_k (n_1) \tau_k (n_2)\nonumber\\
& \ll M^{-1} N^2 X^\eta.
\label{WErr2}
\end{align}
According to the restriction \eqref{n1=n2}, we see that the main term is
\begin{equation}\label{tildeWMT}
\widetilde{ W}^{\rm MT} (Q) = \hat \psi (0) M \sum_{ (q,a) =1} \frac{\psi (q/Q)}{q}\sum_{(\delta , q) =1} \Bigl(
\underset {\substack{n_1, n_2 \sim N\\ n_1\equiv n_2\equiv \delta \bmod q}}{ \sum\ \ \sum} \beta_{n_1}\beta_{n_2}\Bigr),
\end{equation}
where the variables $n_1$ and $n_2$ satisfy the conditions \eqref{d<Xd1<X}. By a similar computation leading to \eqref{d>X} and \eqref{d1>X} we can drop these conditions at the cost of the same error term. In other words the equality \eqref{tildeWMT} can be written as
\begin{equation}\label{W=W}
\widetilde{ W}^{\rm MT} (Q) =W^{\rm MT} (Q) + O \bigl( MN^2 Q^{-1} X^{\eta-\frac{\varepsilon}{2}} + X^{1 + \eta}\bigr),
\end{equation}
where $W^{\rm MT} (Q)$ is the new main term, which is defined
by
\begin{equation}\label{defWMT}
W^{\rm MT} (Q) = \hat \psi (0) M \sum_{ (q,a) =1} \frac{\psi (q/Q)}{q}\sum_{(\delta , q) =1} \Bigl( \sum_{\substack{n \sim N\\ n \equiv \delta \bmod q}} \beta_{n} \Bigr)^2.
\end{equation}
\subsection{Dealing with the main terms}
We now gather the main terms appearing in \eqref{UMT}, \eqref{V3}, \eqref{W=tildeW}, \eqref{decompoW}, \eqref{W=W}, and in \eqref{defWMT}. The main term of $W(Q)-2 V(Q) +U(Q)$ is
\begin{multline*}
W^{\rm MT} (Q) -2 V^{\rm MT} (Q) + U^{\rm MT} (Q)\\ =\hat \psi (0) M \sum_{ (q,a)=1} \frac{\psi (q/Q)}{q} \sum_{(\delta, q) =1}
\Bigl( \sum_{\substack{n\sim N \\ n\equiv \delta \bmod q}} \beta_n -\frac{1}{\varphi (q)} \sum_{\substack{n \sim N \\ (n,q) =1}} \beta_n
\Bigr)^2.
\end{multline*}
Appealing to Lemma \ref{Ba-Da-Ha} we deduce that, for any $A$, we have the equality
\begin{equation}\label{W-2V+U}
W^{\rm MT} (Q) -2 V^{\rm MT} (Q) + U^{\rm MT} (Q) = O \Bigl( M\cdot Q^{-1}\cdot N^2 (\log 2N)^{-A}\Bigr)
\end{equation}
provided that
\begin{equation}\label{Qleq}
Q \leq N(\log 2N)^{-B},
\end{equation}
for some $B =B(A).$
\subsection{Preparation of the exponential sums} By the definition \eqref{decompoW}, we have the equality
\begin{equation*}
\widetilde{ W}^{\rm Err1} (Q) = M\sum_{q} \frac {\psi (q/Q)}{q} \underset{\substack{n_1, n_2 \sim N\\ n_1 \equiv n_2 \bmod q}}{\sum \sum} \beta_{n_1} \beta_{n_2}
\sum_{0 < \vert h \vert \leq H} \hat \psi \Bigl( \frac{h}{q/M}\Bigr)\,e \Bigl( \frac{ah \overline{n_1}}{q}\Bigr),
\end{equation*}
where the variables $(n_1,n_2)$ are such that the associated $d$ and $d_1$ satisfy \eqref{d<Xd1<X}.
This implies that any pair $(n_1, n_2)$ satisfies $n_1-n_2 \not= 0$, and since we have
$n_1\equiv n_2 \bmod q$ (see \eqref{n1=n2}), these integers cannot be close to each other; indeed they satisfy the inequality
\begin{equation*}
\vert n_1-n_2 \vert \geq Q/2.
\end{equation*}
Since we have $(n_1n_2, q)=1$, we can equivalently write the congruence $n_1-n_2 \equiv 0 \bmod q$ as
\begin{equation}\label{nu1-nu2}
\nu_1-\nu_2 = d_1 \nu'_1-\nu_2= qr,
\end{equation}
and instead of summing over $q$, we will sum over $r$. Note that $1 \leq \vert r \vert \leq R/d$, where
\begin{equation}\label{defR}
R =2N Q^{-1}.
\end{equation}
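Indeed, both $\nu_1$ and $\nu_2$ lie in $[N/d,2N/d)$ by \eqref{decompn1n2}, so that $\vert\nu_1-\nu_2\vert< N/d$, while $\psi(q/Q)\neq0$ forces $q\ge Q/2$; hence
$$
1\le \vert r\vert =\frac{\vert \nu_1-\nu_2\vert}{q}< \frac{N/d}{Q/2}= \frac{2N}{dQ}=\frac{R}{d}.
$$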
In the summations, the pair of variables $(n_1,n_2)$ is replaced by the quadruple $(d, d_1, \nu'_1, \nu_2)$ (see \eqref{decompn1n2}). The variables $d$ and $d_1$ are small, so we expect no substantial cancellations
when summing over them. Hence for some
\begin{equation*}
d,\,d_1\leq X^\varepsilon,\ d_1 \mid d^\infty,
\end{equation*}
we have the inequality
\begin{equation}\label{587}
\widetilde{W}^{\rm Err1} (Q) \ll X^{2\varepsilon } MQ^{-1} \bigl\vert \, \mathcal W \, \bigr\vert,
\end{equation}
where $\mathcal W= \mathcal W(d,d_1) $ is the quadrilinear form in the four variables $r$, $\nu'_1$, $\nu_2$ and $h$ defined by
\begin{multline}\label{defmathcalW}
\mathcal W = \sum_{1\leq \vert r \vert \leq R/d} \underset{\substack{dd_1\nu'_1, d\nu_2 \sim N \\ d_1\nu'_1\equiv \nu_2 \bmod r}}
{\sum \sum}\beta_{dd_1 \nu'_1} \beta_{d\nu_2}
\frac{ \psi \bigl( (d_1\nu'_1-\nu_2)/ (rQ)\bigr)}{ (d_1\nu'_1-\nu_2)/ (rQ)}\\
\sum_{0 <\vert h \vert \leq H} \hat \psi \Bigl( \frac{h}{(d_1\nu'_1-\nu_2)/(rM)}\Bigr)
e(\cdot ),
\end{multline}
where $e(\cdot)$ is the oscillating factor
$$
e(\cdot) = e \Bigl( \frac{ah\, \overline{dd_1 \nu'_1}}{(d_1 \nu'_1- \nu_2)/r} \Bigr),
$$
and where the variables satisfy the following divisibility conditions:
$$
(d_1\nu'_1, \nu_2) =1,\ (\nu'_1, d) =1 \text{ and } (dd_1 \nu'_1 r, d_1\nu'_1 - \nu_2) =r.
$$
Using B\'ezout's reciprocity formula we transform the factor $e(\cdot)$ as follows:
\begin{equation*}
\frac{ah\, \overline{dd_1 \nu'_1}}{(d_1 \nu'_1- \nu_2)/r} = - ah\frac{ \overline {(d_1 \nu'_1- \nu_2)/r}}{dd_1 \nu'_1} +\frac{ahr}{dd_1 \nu'_1
( d_1\nu'_1 - \nu_2) } \bmod 1.
\end{equation*}
Since $(dd_1, \nu'_1)= (r,\nu'_1) =1$ we can apply B\'ezout's formula again, giving the equalities
\begin{align*}
ah \frac{ \overline {(d_1 \nu'_1- \nu_2)/r}}{dd_1 \nu'_1} &=ah \frac{ \overline{\nu'_1} \, \overline {(d_1 \nu'_1- \nu_2)/r}}{dd_1} +ah \frac {\overline{dd_1}\, \overline {(d_1 \nu'_1- \nu_2)/r}}{\nu'_1}\bmod 1\\
& = ah \frac{ \overline{\nu'_1} \, \overline {(d_1 \nu'_1- \nu_2)/r}}{dd_1} -ah \frac{r \overline{dd_1\nu_2}}{\nu'_1} \bmod 1
\end{align*}
The first term on the right--hand side of the above equality depends only on the congruence classes of $a$, $h$, $r$, $\nu'_1$ and $\nu_2$ modulo $dd_1$.
As a consequence of the above discussion, we see that there exists a coefficient $\xi=\xi (a,h, r ,\nu'_1, \nu_2)$ of modulus $1$, depending only on the congruence classes of $a$, $h$, $r$, $\nu'_1$ and $\nu_2$ modulo $dd_1$ such that we have the equality
$$
e(\cdot )= \xi\cdot e \Bigl( \frac{ahr}{dd_1 \nu'_1
( d_1\nu'_1 - \nu_2) }\Bigr) \cdot e \Bigl( \frac{ah r \overline{dd_1\nu_2} }{\nu'_1} \Bigr).
$$
Returning to \eqref{defmathcalW}, and fixing the congruence classes modulo $dd_1$ of the variables $h$, $r$, $\nu'_1$ and $\nu_2$, we see that there exists
$$0 \leq a_1, a_2, a_3, a_4 < d d_1$$
such that
$\mathcal W$
satisfies the inequality,
\begin{multline}\label{W1}
\vert \mathcal W \bigr\vert \leq X^{6\varepsilon} \\ \sum_{\substack{1\leq \vert r\vert \leq R/d \\ r \equiv a_1 \bmod{d d_1}}}
\Bigl\vert
\underset{\substack{dd_1\nu'_1, \, d\nu_2 \sim N \\ d_1\nu'_1\equiv \nu_2 \bmod r \\ \nu_1' \equiv a_2 \bmod d d_1 \\ \nu_2 \equiv a_3 \bmod d d_1}}
{\sum \sum}\beta_{dd_1 \nu'_1} \beta_{d\nu_2} \sum_{\substack{1\leq \vert h \vert \leq H \\ h \equiv a_4 \bmod d d_1}}
\Psi_r (h, \nu'_1, \nu_2)
e \Bigl( \frac{ah r \overline{dd_1\nu_2} }{\nu'_1} \Bigr)
\Bigr\vert,
\end{multline}
where $\Psi_r$ is the differentiable function
$$
\Psi_r (h, \nu'_1, \nu_2)= \frac{ \psi \bigl( (d_1\nu'_1-\nu_2)/ (rQ)\bigr)}{ (d_1\nu'_1-\nu_2)/ (rQ)} \, \hat \psi \Bigl( \frac{h}{(d_1\nu'_1-\nu_2)/(rM)}\Bigr)\, e \Bigl(\frac{ahr}{dd_1 \nu'_1
( d_1\nu'_1 - \nu_2) } \Bigr).
$$
In order to perform
the Abel summation over the variables $\nu'_1$, $\nu_2$ and $h$ (see for instance \cite[Lemme 5]{FoActaMath}) we must have information on the partial derivatives of the $\Psi_r$--function. Indeed for $0\leq \epsilon_0,\, \epsilon_1, \epsilon_2 \leq 1$, we have the inequality
\begin{equation}\label{bypart}
\frac{\partial^{\epsilon_0+ \epsilon_1+ \epsilon_2}}{\partial h^{\epsilon_0}\partial {\nu'_1}^{\epsilon_1} \partial\nu_2^{\epsilon_2}} \Psi_r (h, \nu'_1, \nu_2) \ll X^{50\varepsilon} \vert h\vert^{-\epsilon_0} \, {\nu'_1}^{-\epsilon_1} \, {\nu_2}^{-\epsilon_2} \bigl(N/(rQ)\bigr)^{ \epsilon_1 + \epsilon_2 },
\end{equation}
as a consequence of the inequality $\vert d_1\nu'_1 -\nu_2 \vert \geq rQ/2$ (see \eqref{nu1-nu2}), of the definition of $H$ (see \eqref{defH}) and of the inequality $1 \leq |a| \leq X$.
Since $(d_1 \nu'_1 \nu_2,r)=1$ we detect the congruence $d_1 \nu'_1 \equiv \nu_2 \bmod r$ by the $ \varphi (r)$ Dirichlet characters $\chi$ modulo $r$. By \eqref{bypart} we eliminate the function $\Psi_r$ in the inequality \eqref{W1} which becomes
\begin{multline}\label{W2}
\vert \mathcal W \bigr\vert \leq X^{60\varepsilon} N^2Q^{-2} \sum_{\substack{1\leq \vert r\vert \leq R/d \\ r \equiv a_1 \bmod d d_1}} \frac{1}{\varphi (r) \, r^2} \sum_{\chi \bmod r}\\
\Bigl\vert
\underset{\substack{dd_1\nu'_1\in \mathcal N_1\, d\nu_2 \in \mathcal N_2 \\ \nu_1' \equiv a_2 \bmod d d_1 \\ \nu_2 \equiv a_3 \bmod d d_1}}
{\sum \sum}\chi (d\nu'_1) \overline{\chi}(\nu_2) \beta_{dd_1 \nu'_1} \beta_{d\nu_2} \sum_{\substack{h\in \mathcal H \\ h \equiv a_4 \bmod d d_1}}
e \Bigl( \frac{ah r \overline{dd_1\nu_2} }{\nu'_1} \Bigr)
\Bigr\vert,
\end{multline}
\noindent $\bullet$ where $\mathcal N_1 $ and $\mathcal N_2$ are two intervals included in $[N, 2N]$,
\noindent $\bullet$ and where $\mathcal H$ is the union of two intervals included in $[-H, -1]$ and $[1, H]$ respectively.
Denote by $\mathcal W_1(r, \chi)$ the inner sum over $\nu'_1$, $\nu_2$ and $h$ in \eqref{W2}. Remark that the trivial bound for $\mathcal W_1 (r, \chi)$ is $O( X^\eta H N^2/(d^2d_1))$. We can now apply Lemma \ref{trilinear} to the sum $\mathcal W_1 (r, \chi)$, with the choice of parameters
$$
\vartheta \rightarrow ar, \ A \rightarrow H, M\rightarrow N \text{ and } N \rightarrow N .
$$
We obtain the bound
$$
\mathcal W_1 (r, \chi) \ll H^\frac 12 N^\frac 12 N^\frac 12 X^{ \varepsilon +\eta}\Bigl( 1 + \frac{\vert a \vert \, \vert r\vert H}{N^2}\Bigr) \Bigl( (HN^2)^{\frac{7}{20} + \varepsilon} N^\frac 14+ (HN^2)^{\frac 38 + \varepsilon} (HN)^\frac 18 \Bigr).
$$
By the definition \eqref{defR}, \eqref{defH} and the inequality $1 \leq |a| \leq X$ we deduce the inequality
\begin{equation*}
\mathcal W_1 (r, \chi) \ll X^{4\varepsilon +\eta} \bigl( H^\frac{17}{20} N^\frac{39}{20} + H N^\frac{15}{8}\bigr),
\end{equation*}
and using \eqref{defH} we finally deduce
$$
\mathcal W_1 (r, \chi) \ll X^{5\varepsilon +\eta} \bigl( M^{-\frac{17}{20}} N^\frac{39}{20} Q^\frac{17}{20} + M^{-1} N^\frac{15}{8}Q\bigr).
$$
Returning to \eqref{W2}, summing over $\chi$ and $r$ and inserting into \eqref{587} we obtain the bound
\begin{equation}\label{tildeWErr2<<}
\widetilde{W}^{\rm Err1} (Q) \ll X^{67 \varepsilon +\eta } \bigl( M^\frac {3}{20} N^\frac{79}{20} Q^{-\frac{43}{20} } + N^\frac{31}{8 } Q^{-2}
\bigr).
\end{equation}
\subsection{Conclusion} We have now all the elements to bound $ \Delta (\boldsymbol \alpha, \boldsymbol \beta, M, N, Q, a)$.
By \eqref{CS}, \eqref{UMT}, \eqref{V5}, \eqref{W=tildeW}, \eqref{decompoW}, \eqref{WErr2}, \eqref{W=W} and \eqref{tildeWErr2<<} we have
the inequality
\begin{multline*}
\Delta^2 \ll MQ\mathcal L^{k^2-1} \Bigl\{ \Bigl( W^{\rm MT} (Q) -2 V^{\rm MT} (Q)+ U^{\rm MT} (Q)\Bigr) + N^2Q^{-1} X^\eta \\
+ (M^{-1} N^2 +N ^\frac 52 Q^{-1})X^{2\varepsilon+\eta}
\\+ \bigl( MN^2 Q^{-1} X^{\eta -\frac{\varepsilon}{2}} +
X^{1 + \eta} \bigl) + X^{67 \varepsilon +\eta } \bigl( M^\frac {3}{20} N^\frac{79}{20} Q^{-\frac{43}{20} } + N^\frac{31}{8 } Q^{-2}
\bigr)
\Bigr\},
\end{multline*}
which simplifies to (recall \eqref{universal})
\begin{multline*}
\Delta^2 \ll MQ\mathcal L^{k^2-1} \Bigl\{ MN^2 Q^{-1} (\log 2N)^{-A} \\+ MN^2 Q^{-1} X^{\eta -\frac{\varepsilon}{2}}
+ X^{67 \varepsilon +\eta } \bigl( M^\frac {3}{20} N^\frac{79}{20} Q^{-\frac{43}{20} } + N^\frac{31}{8 } Q^{-2}
\bigr)
\Bigr\},
\end{multline*}
by \eqref{W-2V+U} and \eqref{Qleq} if one assumes
\begin{equation}\label{10}
Q \leq N X^{-\varepsilon}.
\end{equation}
To finish the proof of Theorem \ref{thm:main}, it remains to find sufficient conditions
over $M$, $N$ and $Q$ to ensure the bound $\Delta^2 \ll M^2 N^2 \mathcal L^{-A}$. Choosing $\eta= \varepsilon /5$, we have to ensure that the following three inequalities hold
\begin{equation}\label{system}
\begin{cases}
MQ \cdot MN^2 Q^{-1} X^{-\frac{\varepsilon}{4} }&\ll M^2N^2X^{-\frac{\varepsilon}{4}}, \\
MQ \cdot M^\frac{3}{20} N^\frac{79}{20} Q^{-\frac{43}{20} }X^{68 \varepsilon}& \ll M^2N^2X^{-\frac{\varepsilon}{4}}, \\
MQ \cdot N^\frac{31}{8} Q^{-2} X^{68 \varepsilon} & \ll M^2N^2X^{-\frac{\varepsilon}{4}}.
\end{cases}
\end{equation}
The first inequality is trivially satisfied. The second inequality of \eqref{system} is satisfied as soon as
\begin{equation}\label{14}
Q> N^\frac{56}{23} X^{-\frac {17 }{23} +65 \varepsilon}.
\end{equation}
This inequality combined with \eqref{10} implies that $N< X^\frac{17}{33}$.
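Indeed, if \eqref{14} and \eqref{10} are compatible then $N^{56/23} X^{-17/23+65\varepsilon}< NX^{-\varepsilon}$, that is $N^{33/23}<X^{17/23-66\varepsilon}$, which gives $N<X^{\frac{17}{33}-46\varepsilon}<X^{\frac{17}{33}}$.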
The last condition of \eqref{system} is satisfied as soon as
$$
Q > N^\frac{23}{8} X^{-1 +69 \varepsilon}.
$$
We can drop this condition since it is a consequence of \eqref{14} and of the inequality $N < X^\frac{17}{33}$. The proof of Theorem \ref{thm:main} is now complete.
\section{Proof of Corollary \ref{appli}} \label{proofcorollary} Let $S(M,N)$ be the sum we are studying in this corollary. We use Dirichlet's hyperbola argument to write
\begin{equation}\label{Diri}
mn -1=qr,
\end{equation}
and by symmetry we can impose the condition $q< r$. This symmetry creates a factor $2$ unless $mn-1$ is a perfect square. The contribution to $S(M,N)$ of the $(m,n)$ such that $mn-1$ is a square is bounded by $O( X^{\frac 12 +\eta})$ with $\eta >0$ arbitrary. This is a consequence of $|\beta_n| \leq \tau_k(n)$.
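In other words, for every integer $\nu\geq 1$ one has
$$
\tau_2(\nu)=\sum_{qr=\nu}1=2\,\#\{(q,r)\,:\ qr=\nu,\ q<r\}+\mathbf 1_{\nu=\square},
$$
where $\mathbf 1_{\nu=\square}$ equals $1$ if $\nu$ is a perfect square and $0$ otherwise, and we apply this with $\nu=mn-1$.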
The decomposition \eqref{Diri}, the constraint $q < r$ and the inequalities $X-1 \leq mn-1< 4X$ imply that $q\leq 2 X^\frac 12$. Conversely, if $q <X^\frac 12$ we are sure that $q <r$. Thus we have the equality
\begin{align}\label{END}
S(M,N) &=
2 \sum_{q \leq X^{1/2}} \underset{\substack{ m\sim M , n\sim N \\ mn\equiv 1 \bmod q}}{ \sum \sum} \alpha_m \beta_n + 2 \underset{\substack{mn-1 =qr, q<r \\
m\sim M, n\sim N \\ X^{1/2} < q \leq 2 X^{1/2}} }{\sum\ \sum\ \sum\ \sum}
\alpha_m \beta_n + O(X^{\frac 12 +\eta})\nonumber\\
&= 2S_0 (M,N) +2 S_1 (M,N) + O(X^{\frac 12 +\eta}),
\end{align}
by definition.
A direct application of Theorem \ref{thm:main} with $Q= X^\frac 12$ gives the equality
\begin{equation}\label{Eq1}
S_0 (M,N)= \sum_{q\leq X^{1/2}} \frac{1}{\varphi (q)} \underset{\substack{m\sim M, n\sim N\\ (mn,q) =1}}{\sum \sum} \alpha_m \beta_n + O( X \mathcal L^{-C}),
\end{equation}
for any $C$.
For the second term $S_1 (M,N)$, we must get rid of the constraint $q<r$. One technique among others is to control precisely the size of the variables $m$, $n$ and $q$.
Once this is done, $r=(mn-1)/q$ is also controlled and one can check whether it satisfies $r >q$.
We introduce the following factor of dissection:
$$
\Delta = 2^\frac{1}{[ \mathcal L^B]},
$$
where $B= B(A)$ is a parameter to be fixed later, and where $[y]$ is the largest integer $\leq y$. If we write $L_0= [\mathcal L^B]$, we see that $\Delta^{L_0} =2$ and that $\Delta =1 + O (\mathcal L ^{-B})$. We denote by $M_0$, $N_0$ and $Q_0$ any numbers in the sets
\begin{align*}
\mathcal M_0&:=\{M, \Delta M, \Delta^2 M, \Delta^3 M,\cdots ,\Delta^{L_0-1} M\}\\
\mathcal N_0&:= \{N, \Delta N, \Delta^2 N, \Delta^3 N,\cdots , \Delta^{L_0-1} N\}\\
\mathcal Q_0 & :=\{X^\frac 12, \Delta X^\frac 12, \Delta^2 X^\frac 12, \Delta^3 X^\frac 12,\cdots, \Delta^{L_0-1} X^\frac 12\},
\end{align*} respectively.
We split $S_1 (M,N)$ into
\begin{equation}\label{decompo1}
S_1 (M,N) =\sum_{M_0 \in \mathcal M_0} \ \sum_{N_0 \in \mathcal N_0} \ \sum_{Q_0 \in \mathcal Q_0} S_1(M_0,N_0,Q_0),
\end{equation}
where $S_1(M_0,N_0,Q_0)$ is defined by
$$
S_1(M_0,N_0,Q_0)=\sum_{ q\simeq Q_0}\ \underset{\substack{ m\simeq M_0, n \simeq N_0 \\ mn\equiv 1 \bmod q}}{ \sum \sum}\alpha_m \beta_n.
$$
\noindent $\bullet$ where the notation $y\simeq Y_0$ means that the integer $y$ satisfies the inequalities $Y_0 \leq y < \Delta Y_0$,
\vskip .2cm
\noindent $\bullet$ where the variables $m$, $n$ and $q$ satisfy the extra condition
\begin{equation}\label{r>q}
mn-1>q^2.
\end{equation}
Note that the decomposition \eqref{decompo1} contains
\begin{equation}\label{O(LB)}
O( \mathcal L^{3B}),
\end{equation}
terms.
Since $mn-1 \geq M_0N_0 -1 $ and $q^2 <Q_0^2 \Delta^2$ in each sum $S_1(M_0,N_0, Q_0)$, we can drop the condition
\eqref{r>q} in the definition of this sum as soon as we have
\begin{equation} \label{M0N0<Q0}
M_0N_0-1 >Q_0^2 \Delta^2.
\end{equation}
When \eqref{M0N0<Q0} is satisfied, the variables $m$, $n$ and $q$ are independent and a direct application of Theorem \ref{thm:main} gives for each sum $S_1 (M_0, N_0, Q_0) $, the equality
\begin{equation}\label{nunu}
S_1 (M_0,N_0, Q_0)= \sum_{ q\simeq Q_0} \frac{1}{\varphi (q)} \underset{\substack{ m\simeq M_0, n \simeq N_0\\ (mn,q)=1}}{ \sum \sum}\alpha_m \beta_n +O_C (X \mathcal L^{-C}),
\end{equation}
where $C$ is arbitrary.
It remains to consider the case where \eqref{M0N0<Q0} is not satisfied, which means that $(M_0, N_0, Q_0) \in \mathcal E_0$ where
\begin{equation}\label{inverse}
\mathcal E_0 := \bigl\{ (M_0, N_0, Q_0)\, ;\ M_0N_0-1 \leq Q_0^2 \Delta^2\bigr\}.
\end{equation}
We now show that the variable $n$ considered in such a $S_1 (M_0, N_0, Q_0)$ varies in a rather short interval. More precisely, since $M_0 \Delta>m, N_0 \Delta >n$ and $Q_0 < q\ $ we deduce from the definition
\eqref{inverse} that $q^2 \geq mn \Delta^{-4} -\Delta^{-2}$ which implies the inequality $q \geq (mn)^\frac 12 \Delta^{-2} -1$. Combining with \eqref{r>q}, we get the inequality
$$
(mn)^\frac 12 \Delta^{-2} -1 < q < (mn)^\frac 12
$$
which implies
$$
(q^2/m) < n< ((q+1)^2/m)\Delta^4.
$$
Using the inequality
$$
X^{1/2} \leq q \leq 2 X^{1/2} \ll (Q^2/M)(\Delta^4-1)X^{-\frac{\delta}{2}},
$$
and $|\beta_n| \leq \tau_k(n)$ we apply Lemma \ref{dkinarith} to see that
\begin{align*}
&\underset{(M_0,N_0,Q_0) \in \mathcal E_0}{\sum\ \sum\ \sum} S_1( M_0, N_0, Q_0)\\
&\ll \sum_{m \sim M} \tau_k (m) \sum_{\substack{q \sim X^{1/2}\\ (q,m)=1}} \sum_{(q^2/m) < n< ((q+1)^2/m)\Delta^4}\tau_k (n) \\
&\ll (\Delta^4 -1) \mathcal L^{k-1} \sum_{m\sim M} \tau_k (m)
\sum_{q \sim X^{1/2} }\frac{1}{\varphi (q)}\cdot \frac{q^2}{m}\\
&\ll \mathcal L^{2k-2-B} X.
\end{align*}
Actually, by reintroducing a main term, which is smaller than the error term, we can also write this bound as an equality
\begin{multline}
\underset{(M_0,N_0,Q_0) \in \mathcal E_0}{\sum\ \sum\ \sum} S_1( M_0, N_0, Q_0)
\\
= \underset{(M_0,N_0,Q_0) \in \mathcal E_0}{\sum\ \sum\ \sum} \sum_{ q\simeq Q_0}\ \frac{1}{\varphi (q)} \underset{\substack{ m\simeq M_0, n \simeq N_0\\ (mn,q)=1}}{ \sum \sum}\alpha_m \beta_n
+O( \mathcal L^{2k-2-B} X), \label{Eq2}
\end{multline}
where the variables $(m,n,q)$ continue to satisfy \eqref{r>q}.
Gathering \eqref{END}, \eqref{Eq1}, \eqref{decompo1}, \eqref{O(LB)}, \eqref{nunu}, \eqref{Eq2}
we obtain
\begin{multline*}
S(M,N)= 2 \sum_{q\leq X^{1/2}} \frac{1}{\varphi (q)} \underset{\substack{m\sim M, n\sim N\\ (mn,q) =1}}{\sum \sum} \alpha_m \beta_n\\+2\sum_{M_0 \in \mathcal M_0} \ \sum_{N_0 \in \mathcal N_0} \ \sum_{Q_0 \in \mathcal Q_0} \sum_{ q\simeq Q_0}\ \frac{1}{\varphi (q)} \underset{\substack{ m\simeq M_0, n \simeq N_0\\ (mn,q)=1}}{ \sum \sum}\alpha_m \beta_n \\
+O( \mathcal L^{3B-C} X) +O( \mathcal L^{2k-2-B} X) + O(X^{\frac 12 +\eta}),
\end{multline*}
where the variables $(m,n,q)$ continue to satisfy \eqref{r>q}.
Putting the different summations back together, we complete the proof of Corollary \ref{appli} by choosing $B$ and $C$ in order to satisfy the equalities $-A= 3B-C = 2k-2-B$.
\end{document}
|
\begin{document}
\centerline{\LARGE \bf Cascade of phases in turbulent flows}
\centerline{\sc Christophe CHEVERRY
\footnote{\footnotesize IRMAR, Universit\'e de Rennes I, Campus de
Beaulieu, 35042 Rennes cedex, France, \text{ }\text{ }\text{ }\text{ }
[email protected].}}
\noindent{\bf \small Abstract.} {\small This article is
devoted to incompressible Euler equations (or to
Navier-Stokes equations in the vanishing viscosity limit).
It describes the propagation of {\it quasi-singularities}.
The underlying phenomena are consistent with the notion
of a {\it cascade of energy}.}
\noindent{\bf \small R\'esum\'e.} {\small Cet article
\'etudie les \'equations d'Euler incompressible (ou de
Navier-Stokes en pr\'esence de viscosit\'e \'evanescente).
On y d\'ecrit la propagation de {\it quasi-singularit\'es}.
Les ph\'enom\`enes sous-jacents confirment l'id\'ee selon
laquelle il se produit une {\it cascade d'energie}.}
\setcounter{section}{0}
\section{Introduction.} $ \, $
\vskip -5mm
Consider incompressible fluid equations
\medskip
\noindent{$ (\mathcal{E}) \qquad \partial_t {\bf u} + ({\bf u} \cdot \nabla ) \, {\bf u} +
\nabla {\bf p} = 0 \, , \qquad \Div \ {\bf u} = 0 \, , \qquad (t,x) \in
[0,T] \times {\mathbb R}^d \, , $}
\medskip
\noindent{where $ {\bf u} = {}^t ( {\bf u}^1 , \cdots , {\bf u}^d) $ is the fluid
velocity and $ {\bf p} $ is the pressure. The structure of {\it weak}
solutions of $ (\mathcal{E}) $ in $ d - $space dimensions with $ d \geq 2 $
is a problem of wide current interest \cite{Chem}-\cite{Li}.
The questions are how to describe the phenomena with adequate
models and how to visualize the results in spite of their
complexity. We will achieve a small step in these two
directions.
\medskip
According to physical intuition, the appearance of
singularities is linked with the {\it increase of the vorticity}.
Along this line, one should single out the contributions \cite{BKM}
and \cite{CF}. Interesting objects are solutions which do
not blow up in finite time but whose associated vorticities
increase arbitrarily fast. These are {\it quasi-singularities}.
Their study is of practical importance.
\medskip
Typical examples of quasi-singularities are oscillations.
This is a well-known fact going back to \cite{C}-\cite{MPP}.
The works \cite{C} and \cite{MPP} rely on phenomenological
considerations and engineering experiments. Further
developments are related to homogenization \cite{E}-\cite{E2},
compensated compactness \cite{D}-\cite{Ge} and non linear
geometric optics \cite{Che}-\cite{CGM}-\cite{CGM1}.
\smallskip
$\, $
\break
$ \, $
In \cite{D}, DiPerna and Majda show the persistence of
oscillations in three dimensional Euler equations. To
this end, they select a parameter $ \varepsilon\in \, ]0,1] $
and look at
\begin{equation} \label{oscidipmaj}
\ {\bf u}^\eps_s (t,x) \, := \, {}^t \bigl( {\rm\bf g} ( x_2 , \eps^{-1}
\, x_2) , 0 , {\rm\bf h} \bigl( x_1 - {\rm\bf g} ( x_2 , \eps^{-1} \, x_2)
\, t , x_2, \eps^{-1} \, x_2 \bigr) \bigr) \quad
\end{equation}
where $ {\rm\bf g} (x_2,\theta) $ and $ {\rm\bf h} (x_1,x_2,\theta) $
are smooth bounded functions with period $ 1 $ in
$ \theta $. They remark that the functions $ {\bf u}^\eps_s $
are exact smooth solutions of $ (\mathcal{E}) $ and they let $ \varepsilon$
go to zero. Yet, this construction is of a very special
form. First, it comes from shear layers (steady 2-D solutions) as
\medskip
$ \tilde {\bf u}^\eps_s (t,x) \, = \, \tilde {\bf u}^\eps_s (0,x) \,
= \, {}^t \bigl( {\rm\bf g} ( x_2,\eps^{-1} \, x_2) , 0 \bigr) \,
\in \, {\mathbb R}^2 \, . $
\medskip
\noindent{Secondly, it involves a phase $ {\bf v}arphi_0 (t,x)
\equiv x_2 $ which does not depend on $ \varepsilon$.} Of course,
this is a common fact \cite{CG}-\cite{G}-\cite{G2}-\cite{Se}
when dealing with large amplitude high frequency waves.
Nevertheless, this is far from giving a complete idea of what
can happen.
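\medskip
\noindent{To see why the $ {\bf u}^\eps_s $ are indeed exact solutions, note that
$ {\bf u}^{\eps 1}_s $ depends only on $ x_2 $, $ {\bf u}^{\eps 2}_s = 0 $ and
$ {\bf u}^{\eps 3}_s $ does not depend on $ x_3 $, so that $ \Div \ {\bf u}^\eps_s = 0 $,
while, writing $ {\bf g} $ for $ {\bf g} (x_2, \eps^{-1} \, x_2) $,
\medskip
$ \partial_t {\bf u}^{\eps 3}_s + ({\bf u}^\eps_s \cdot \nabla) \, {\bf u}^{\eps 3}_s
\, = \, ( - {\bf g} + {\bf g} ) \ \partial_1 {\bf h} \bigl( x_1 - {\bf g} \, t ,
x_2, \eps^{-1} \, x_2 \bigr) \, = \, 0 \, . $
\medskip
\noindent{The other components are handled in the same way, so that $ (\mathcal{E}) $
holds with $ {\bf p} \equiv 0 $.}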
\mathfrak{e}dskip
Our aim in this paper is to develop a theory which allows us to
remove the two restrictions mentioned above. Fix $ \flat =
(l,N) \in {\mathbb N}^2 $ where the integers $ l $ and $ N $ are such
that $ 0 < l < N $. Introduce the {\it geometrical} phase
\medskip
$ \varphi^\eps_g (t,x) \, := \, \varphi_0 (t,x) \, + \, \sum_{k=1}^{
l-1} \, \eps^{\frac{k}{l}} \ \varphi_k (t,x) \, . $
\medskip
\noindent{In Section 2, we state Theorem \ref{appBKW}, which
provides {\it approximate} solutions $ {\bf u}^\eps_\flat $ defined
on the interval $ [0,T] $ with $ T > 0 $ and having the form}
\begin{equation} \label{formegenerale}
\begin{array} {ll}
{\bf u}^\eps_\flat (t,x) \! \! \! & = \, {}^t ( {\bf u}^{\eps 1}_\flat,
\cdots , {\bf u}^{\eps d}_\flat)(t,x) \\
\ & = \, {\bf u}_0 (t,x) \, + \, \sum_{k=1}^N \, \eps^{
\frac{k}{l}} \ U_k \bigl( t,x, \eps^{-1} \, \varphi^\eps_g (t,x)
\bigr) \qquad \qquad
\end{array}
\end{equation}
where the smooth profiles
\medskip
$ U_k (t,x,\theta) \, = \, {}^t (U^1_k, \cdots, U^d_k)
(t,x,\theta) \in {\mathbb R}^d \, , \qquad 1 \leq k \leq N \, , $
\medskip
\noindent{are periodic functions of $ \theta \in {\mathbb R} /
{\mathbb Z} $.} We assume that
\medskip
$ \exists \, (t,x,\theta) \in [0,T] \times {\mathbb R}^d \times
{\mathbb T} \, ; \qquad \partial_\theta U_1 (t,x,\theta) \not = 0 $.
\medskip
\noindent{We say that the family $ \{ {\bf u}^\eps_\flat \}_\varepsilon $
is a {\it weak}, a {\it strong} or a {\it turbulent}
oscillation according to whether we have $ l=1 $,
$ l = 2 $ or $ l \geq 3 $ respectively.}
\medskip
The order of magnitude of the energy of the oscillations
is $ \eps^{\frac{1}{l}} $. Compute the vorticities associated with the
functions $ {\bf u}^\eps_\flat $. These are the skew-symmetric
matrices $ \Omega^\eps_\flat = (\Omega^{\eps i}_{\flat j}
)_{1 \leq i,j \leq d} $ where
$$ \quad \begin{array} {l}
\Omega^{\eps i}_{\flat j} (t,x) \, := \, (\partial_j
{\bf u}^{\eps i}_\flat - \partial_i {\bf u}^{\eps j}_\flat)(t,x) \\
\ = \, \sum_{k=1}^N \, \eps^{\frac{k}{l}-1} \ ( \partial_j
\varphi^\eps_g \ \partial_\theta U^i_k - \partial_i \varphi^\eps_g \
\partial_\theta U^j_k) \bigl( t,x, \eps^{-1} \, \varphi^\eps_g (t,x)
\bigr) \\
\quad \ + \, ( \partial_j {\bf u}^i_0 - \partial_i {\bf u}^j_0)(t,x) \, + \,
\sum_{k=1}^N \, \eps^{\frac{k}{l}} \ ( \partial_j U^i_k -
\partial_i U^j_k ) \bigl( t,x, \eps^{-1} \, \varphi^\eps_g
(t,x) \bigr) \, .
\end{array} $$
The principal term in $ \Omega^\eps_\flat $ is of size
$ \eps^{\frac{1}{l}-1}$. When $ l \geq 2 $, no uniform
estimates are available on the family $ \{ \Omega^\eps_\flat
\}_{\varepsilon \in \, ]0,1] } $. In particular, if $ d = 3 $, there
is no uniform control on the enstrophy
\medskip
$ \int_0^T \int_{{\mathbb R}^3} \ \vert \omega^\eps_\flat (t,x) \vert^2 \ \,
dt \, dx \, , \qquad \omega^\eps_\flat (t,x) \, := \, (\nabla \wedge
{\bf u}^\eps_\flat)(t,x) \, \equiv \, \Omega^\eps_\flat (t,x) \, . $
\medskip
\noindent{We see here that strong and turbulent oscillations
are examples of quasi-singularities.} Observe that the
expansion ({\bf r}ef{formegenerale}) involves a more complicated
structure than in ({\bf r}ef{oscidipmaj}) though the corresponding
regime is less singular.
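\medskip
\noindent{To fix the orders of magnitude: when $ d = 3 $ and the oscillating
profile $ U^*_1 $ is not trivial, the principal term above is of size
$ \eps^{\frac{1}{l}-1} $ on a fixed region of $ [0,T] \times {\mathbb R}^3 $, so
the enstrophy is expected to behave like $ \eps^{\frac{2}{l}-2} $. This quantity
stays bounded in the weak case $ l = 1 $ but blows up as $ \varepsilon $ goes to
zero as soon as $ l \geq 2 $.}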
\mathfrak{e}dskip
The BKW analysis reveals that the phase shift $ {\bf v}arphi_1 $
and the terms $ {\bf v}arphi_k $ with $ 2 \leq k \leq l-1 $ play
different parts. The r\^ole of $ {\bf v}arphi_1 $ is partly examined
in the articles \cite{Che} and \cite{CGM} which deal with the
case $ l = 2 $. When $ l {{\bf r}m\bf g}eq 3 $, the phenomenon to emphasize
is the creation of the $ {\bf v}arphi_k $ with $ 2 \leq k \leq l-1 $.
Indeed, suppose that
\mathfrak{e}dskip
$ {\bf v}arphi_2 (0,\cdot) \equiv \, \cdots \, \equiv {\bf v}arphi_{l-1}
(0,\cdot) \equiv 0 \, , \qquad l {{\bf r}m\bf g}eq 3 \, . $
\mathfrak{e}dskip
\noindent{Then, generically, we find}
\mathfrak{e}dskip
$ \exists \, t \in \, ]0,T] \, ; \qquad {\bf v}arphi_2 (t,\cdot)
\not \equiv 0 \, , \qquad \cdots \, , \qquad {\bf v}arphi_{l-1}
(t,\cdot) \not \equiv 0 \, . $
\mathfrak{e}dskip
\noindent{Now starting with {\it large} amplitude waves (this
corresponds to the limit case $ l = + \infty $) that is}
\mathfrak{e}dskip
$ {\bf u}^\eps_\infty (0,x) \, = \, {\bf s}um_{k=0}^\infty \,
\eps^k \ U_k \bigl( 0,x, \eps^{-1} \, {\bf v}arphi_0 (0,x)
\bigr) \, , \qquad {\bf p}art_\theta U_0 \not \equiv 0 \, , $
\mathfrak{e}dskip
\noindent{the description of $ {\bf u}^\eps_\infty (t,\cdot) $ on the
interval $ [0,T] $ with $ T > 0 $ needs the introduction of an {\it
infinite cascade} of phases $ {\bf v}arphi_k $.} The scenario is
the following. Oscillations of the velocity develop spontaneously
in all the intermediate frequencies $ \eps^{{{\bf r}m\bf f}rac{k}{l}-1} $
and in all the directions $ \nabla {\bf v}arphi_k (t,x) $. This
expresses {\it turbulent} features in the flow.
\mathfrak{e}dskip
The family $ \{ {\bf u}^\eps_{{\bf r}m\bf f}lat \}_{\varepsilon\in \, ]0,1]} $
is $ \varepsilon- $stratified \cite{G2} with respect to the phase
$ {\bf v}arphi^\eps_g $ with in general $ {\bf v}arphi^\eps_g \not
\equiv {\bf v}arphi_0 $. The presence in $ {\bf v}arphi^\eps_g $ of the
non trivial functions $ {\bf v}arphi_k $ is necessary and sufficient
to encompass the {\it geometrical} features of the propagation.
It has various consequences which are detailed in Section
3. It brings information about microstructures, compensated
compactness and non linear geometric optics. It also confirms
observations made in the statistical approach to turbulence
\cite{FMRT}-\cite{L}.
\mathfrak{e}dskip
Chapter 4 is devoted to the proof of Theorem
\ref{appBKW}. Because of {\it closure problems}, the use of
the geometrical phase $ \varphi^\eps_g $ does not suffice to
perform the BKW analysis. Among other things, {\it adjusting
phases} $ \varphi_k $ with $ l \leq k \leq N $ must be
incorporated in order to put the system of formal equations
in a triangular form.
\mathfrak{e}dskip
The expressions $ {\bf u}^\eps_\flat $ are not exact solutions of
Euler equations; they yield small error terms $ {\bf f}^\eps_\flat $
as source terms. The question is whether there exist exact
solutions which coincide with $ {\bf u}^\eps_\flat (0,\cdot) $ at
time $ t = 0 $, which are defined on $ [0,T] $ with $ T > 0 $,
and which are close to the approximate divergence free solutions
$ {\bf u}^\eps_\flat $. This is a problem of {\it stability}.
\mathfrak{e}dskip
\noindent{The construction of exact solutions requires a good
understanding of the different mechanisms of amplification
which occur.} In the subsection 5.1, we make a distinction
between {\it obvious} and {\it hidden} instabilities.
\medskip
\noindent{The obvious instabilities can be detected by
looking at the BKW analysis presented before.} They
imply the non linear instability of Euler equations
(Proposition \ref{euinstap}). They need to be absorbed
by a suitable change of variables, which induces a defect
of hyperbolicity. The hidden instabilities can be revealed
by soliciting this lack of hyperbolicity. Controlling them
requires the addition of dissipation terms.
\mathfrak{e}dskip
\noindent{In the subsection 5.2, we look at incompressible fluids
with anisotropic viscosity.} This is the framework of \cite{CDGG}
though we adopt a different point of view. We consider strong
oscillations. We show (Theorem \ref{ciprin}) that {\it exact}
solutions corresponding to $ {\bf u}^\eps_{(2,N)} $ exist on some
interval $ [0,T] $ with $ T > 0 $ independent of $ \varepsilon \in
\, ]0,1] $.
\smallskip
{\small \parskip=-3pt
\tableofcontents
}
\break
\section{Euler equations in the variables $ (t,x) $.} $\,$
\vskip -5mm
The description of incompressible flows in a turbulent regime
is a delicate question. No systematic analysis is yet available.
However, special approximate solutions with rapidly varying
structure in space and time can be exhibited. Their construction
is summarized in this Chapter 2.
\subsection{Notations.}
\smallskip
\noindent{$ \bullet $ {\em Variables.}} Let $ T \in {\mathfrak{a}thbb R}^+_* $. The time
variable is $ t \in [0,T] $. Let $ d \in {\mathfrak{a}thbb N} {\bf s}etminus \{ 0 ,
1 \} $. The space variables are $ (x, \theta) \in {\mathfrak{a}thbb R}^d \times
{\mathfrak{a}thbb T} $ where $ {\mathfrak{a}thbb T} := {\mathfrak{a}thbb R} / {\mathfrak{a}thbb Z} $. Mark the ball
\mathfrak{e}dskip
$ B(0,R] \, := \, \bigl \{ \, x \in {\mathfrak{a}thbb R}^d \, ; \ {\bf v}ert x {\bf v}ert^2
:= {\bf s}um_{i=1}^d \, x_i^{\, 2} \leq R \, \bigr \} \, , \qquad R \in
{\mathfrak{a}thbb R}^+ \, . $
\mathfrak{e}dskip
\noindent{The state variables are the velocity field $ u = {}^t
(u^1,\cdots,u^d) \in {\mathfrak{a}thbb R}^d $ and the pressure $ p \in {\mathfrak{a}thbb R} $.
Given $ (u , \tilde u ) \in ({\mathfrak{a}thbb R}^d)^2 $, define
\mathfrak{e}dskip
$ u \cdot \tilde u := {\bf s}um_{i=1}^d \, u^i \, \tilde u^i \, ,
\qquad {\bf v}ert u {\bf v}ert^2 := u \cdot u \, , \qquad u \otimes \tilde
u := (u^j \, \tilde u^i )_{1 \leq i,j \leq d} \, . $
\mathfrak{e}dskip
\noindent{The symbol $ S^d_+ $ stands for the set of positive definite
quadratic forms on $ {\mathbb R}^d $.} An element $ \mathfrak{q} \in S^d_+ $ can be
represented by some $ d \times d $ matrix $ ( \mathfrak{q}_{ij} )_{1 \leq
i,j \leq d} $.
\noindent{$ \bullet $ {\em Functional spaces.}} Distinguish the
expressions $ {\bf u} (t,x) $ which do not depend on the variable
$ \theta $ from the expressions $ u(t,x,\theta) $ which depend on
$ \theta $. The boldfaced type $ {\bf u} $ is used in the first
case whereas the letter $ u $ is employed in the second
situation.
\mathfrak{e}dskip
\noindent{Note $ C^\infty_b ( [0,T] \times {\mathfrak{a}thbb R}^d ) $ the space
of functions in $ [0,T] \times {\mathfrak{a}thbb R}^d $ with bounded continuous
derivatives of any order.} Let $ m \in {\mathfrak{a}thbb N} $. The Sobolev space
$ H^m $ is the set of functions
\mathfrak{e}dskip
$ u(x,\theta) \, = \, {\bf s}um_{k \in {\mathfrak{a}thbb Z}} \, {\bf u}_k (x) \ e^{
i \, k \, \theta} $
\mathfrak{e}dskip
\noindent{such that}
\mathfrak{e}dskip
$ {\bf p}arallel u {\bf p}arallel_{H^m}^2 \, := \, {\bf s}um_{k \in {\mathfrak{a}thbb Z}} \, (1 +
{\bf v}ert k {\bf v}ert^2)^m \ \int_{{\mathfrak{a}thbb R}^d} \, (1 + {\bf v}ert \xi {\bf v}ert^2)^m \
{\bf v}ert {{\bf r}m\bf h}at {\bf u}_k (\xi) {\bf v}ert^2 \ d \xi \, < \, \infty $
\mathfrak{e}dskip
\noindent{where}
\mathfrak{e}dskip
$ \mathfrak{a}thcal{F} ( {\bf u} )(\xi) \, = \, {{\bf r}m\bf h}at {\bf u} (\xi) \, := \, ( 2 \, {\bf p}i)^{-
{{\bf r}m\bf f}rac{d}{2}} \ \int_{{\mathfrak{a}thbb R}^d} \, e^{- i \, x \cdot \xi} \ {\bf u} (x) \
dx \, , \qquad \xi \in {\mathfrak{a}thbb R}^d \, . $
\mathfrak{e}dskip
\noindent{With these conventions, the condition $ {\bf u} \in H^m $
means simply that}
\mathfrak{e}dskip
$ {\bf p}arallel {\bf u} {\bf p}arallel_{H^m}^2 \, := \, \int_{{\mathfrak{a}thbb R}^d} \, (1 +
{\bf v}ert \xi {\bf v}ert^2)^m \ {\bf v}ert {{\bf r}m\bf h}at {\bf u} (\xi) {\bf v}ert^2 \ d \xi \,
< \, \infty \, . $
\mathfrak{e}dskip
\noindent{Define}
\mathfrak{e}dskip
$ H^m_T \, := \, \bigl \{ \, u \ ; \ {\bf p}art^j_t u \in L^2 ( [0,T];
H^{m-j}) \, , \ {{\bf r}m\bf f}orall \, j \in \{0, \cdots,m \} \, \bigr \} \, , $
\mathfrak{e}dskip
$ \mathfrak{a}thcal{W}^m_T \, := \, \bigl \{ \, u \ ; \ u \in C^j ( [0,T]; H^{m-j}) \, ,
\ {{\bf r}m\bf f}orall \, j \in \{0, \cdots,m \} \, \bigr \} \, , $
\mathfrak{e}dskip
\noindent{with the corresponding norms}
\mathfrak{e}dskip
$ {\bf p}arallel u {\bf p}arallel_{H^m_T}^2 \, := \, {\bf s}um_{j=0}^m \, \int_0^T \,
{\bf p}arallel {\bf p}art^j_t u (t,\cdot) {\bf p}arallel_{H^m}^2 \ dt \, , $
\mathfrak{e}dskip
$ {\bf p}arallel u {\bf p}arallel_{\mathfrak{a}thcal{W}^m_T} \, := \, {\bf s}up_{t \in [0,T]} \ \
{\bf s}um_{j=0}^m \, {\bf p}arallel {\bf p}art^j_t u (t,\cdot) {\bf p}arallel_{H^m} \, . $
\mathfrak{e}dskip
\noindent{Consider also }
$$ \ \left. \begin{array} {lll}
H^m_\infty \, := \, \bigcap_{T \in {\mathfrak{a}thbb R}^+} \, H^m_T \, , & \quad
H^\infty_T \, := \, \bigcap_{m \in {\mathfrak{a}thbb N}} \, H^m_T \, , & \quad
H^\infty_\infty \, := \, \bigcap_{T \in {\mathfrak{a}thbb R}^+} \, H^\infty_T \, , \\
\mathfrak{a}thcal{W}^m_\infty \, := \, \bigcap_{T \in {\mathfrak{a}thbb R}^+} \, \mathfrak{a}thcal{W}^m_T \, , & \quad
\mathfrak{a}thcal{W}^\infty_T \, := \, \bigcap_{m \in {\mathfrak{a}thbb N}} \, \mathfrak{a}thcal{W}^m_T \, , & \quad
\mathfrak{a}thcal{W}^\infty_\infty \, := \, \bigcap_{T \in {\mathfrak{a}thbb R}^+} \, \mathfrak{a}thcal{W}^\infty_T \, .
\end{array} {\bf r}ight. $$
When $ m = 0 $, replace $ H^0 $ with $ L^2 $. Any function $ u
\in L^2 $ can be decomposed according to
\mathfrak{e}dskip
$ u (t,x,\theta) \, = \, \langle u {\bf r}angle (t,x) + u^* (t,x,\theta) \,
= \, \bar u (t,x) + u^* (t,x,\theta) $
\mathfrak{e}dskip
\noindent{where}
\mathfrak{e}dskip
$ \langle u {\bf r}angle (t,x) \, \equiv \, \bar u (t,x) \, := \, \int_{\mathfrak{a}thbb T} \,
u(t,x,\theta) \ d \theta \, . $
\mathfrak{e}dskip
\noindent{Let $ \Gamma $ be the symbol of any of the spaces $ H^m $,
$ H^m_T $, $ \mathfrak{a}thcal{W}^m_T $, $ \cdots $ defined before.} In order to
specify the functions with mean value zero, introduce
\mathfrak{e}dskip
$ \Gamma^* \, := \, \lbrace \, u \in \Gamma \, ; \ \bar u \equiv 0 \,
{\bf r}brace \, . $
\mathfrak{e}dskip
\noindent{Mark also}
\mathfrak{e}dskip
$ \text{supp}_x \, u^* \, := \, \text{closure of} \ \bigl \{ \, x
\in {\mathfrak{a}thbb R}^d \, ; \ {\bf p}arallel u^* (x, \cdot) {\bf p}arallel_{L^2({\mathfrak{a}thbb T})}
\not = \, 0 \, \bigr \} \, . $
\noindent{$ \bullet $ {\em Differential operators.}} Note
\mathfrak{e}dskip
$ {\bf p}art_t \equiv {\bf p}art_0 := {\bf p}art / {\bf p}art \, t \, , \qquad \ \
{\bf p}art_\theta \equiv {\bf p}art_{d+1} := {\bf p}art / {\bf p}art \, \theta \, , $
{\bf s}mallskip
$ {\bf p}art_j := {\bf p}art / {\bf p}art \, x_j \, , \qquad \qquad \ j
\in \{1 , \cdots , d \} \, , $
{\bf s}mallskip
$ \nabla := ({\bf p}art_1, \cdots ,{\bf p}art_d) \, , \qquad \Delta :=
\Delta_x + {\bf p}art^2_\theta = {\bf p}art^{\, 2}_1 + \cdots + {\bf p}art^{\,
2}_{d} + {\bf p}art^2_\theta \, . $
\mathfrak{e}dskip
\noindent{Let $ u \in \mathfrak{a}thcal{W}^\infty_T $.} Define
\mathfrak{e}dskip
$ u \cdot \nabla \, := \, u^1 \ {\bf p}art_1 + \cdots + u^d \ {\bf p}art_d \, , $
{\bf s}mallskip
$ \Div \ u \, := \, {\bf p}art_1 u^1 + \cdots + {\bf p}art_d u^d \, , $
{\bf s}mallskip
$ \Div \ (u \otimes \tilde u) \, := \, {\bf s}um_{j=1}^d \, {}^t \bigl(
{\bf p}art_j ( u^j \, \tilde u^1) \, , \cdots , {\bf p}art_j ( u^j \, \tilde
u^d) \bigr) \in {\mathfrak{a}thbb R}^d \, . $
\mathfrak{e}dskip
\noindent{Employ the bracket $ < \cdot , \cdot >_H $ for the scalar
product in the Hilbert space $ H $. Note $ \mathfrak{a}thcal{L} (E;F) $ the space of
linear continuous applications $ T : E \longrightarrow F $ where $ E $
and $ F $ are Banach spaces. The symbol $ \mathfrak{a}thcal{L} (E) $ is simply for
$\mathfrak{a}thcal{L} (E;E) $. Introduce the commutator
\mathfrak{e}dskip
$ [A;B] \, := \, A \circ B - B \circ A \, , \qquad (A,B) \in \mathfrak{a}thcal{L}(E)^2 \, . $
\mathfrak{e}dskip
\noindent{Let $ r \in {\mathfrak{a}thbb Z} $.} The operator $ T $ is in $ \mathfrak{a}thfrak{L}^r $ if
\mathfrak{e}dskip
$ {\bf p}arallel T {\bf p}arallel_{\mathfrak{a}thcal{L} (H^{m+r}_T ; H^m_T)} \, < \, \infty \, ,
\qquad {{\bf r}m\bf f}orall \, m \in {\mathfrak{a}thbb N} \, . $
\mathfrak{e}dskip
\noindent{Let $ \eps_0 > 0 $.} The family of operators $ \{ T^\varepsilon
\}_\varepsilon\in \mathfrak{a}thcal{L} (H^\infty_T)^{]0,\eps_0]} $ is in $ \mathfrak{a}thfrak{U} \mathfrak{a}thfrak{L}^r $ if
\mathfrak{e}dskip
$ {\bf s}up_{\varepsilon\in \, ]0,\eps_0]} \ \ {\bf p}arallel T^\varepsilon{\bf p}arallel_{
\mathfrak{a}thcal{L} (H^{m+r}_T ; H^m_T)} \, < \, \infty \, , \qquad {{\bf r}m\bf f}orall \, m
\in {\mathfrak{a}thbb N} \, . $
\mathfrak{e}dskip
\noindent{Consider a family $ \{ f^\varepsilon\}_\varepsilon\in (\mathfrak{a}thcal{W}^\infty_T)^{
]0,\eps_0]} $. We say that $ \{ f^\varepsilon\}_\varepsilon= \bigcirc (\eps^r ) $
if
\mathfrak{e}dskip
$ {\bf s}up_{\varepsilon\in \, ]0,\eps_0]} \ \ \eps^{-r} \ {\bf p}arallel f^\varepsilon
{\bf p}arallel_{\mathfrak{a}thcal{W}^m_T} \, < \, \infty \, , \qquad {{\bf r}m\bf f}orall \, m \in
{\mathfrak{a}thbb N} \, . $
\mathfrak{e}dskip
\noindent{Given a family $ \{ {\bf f}^\varepsilon \}_\varepsilon \in (\mathcal{W}^\infty_T)^{
]0,\eps_0]} $, we say that $ \{ {\bf f}^\varepsilon \}_\varepsilon = \bigcirc (\eps^r ) $
if
\medskip
$ \sup_{\varepsilon \in \, ]0,\eps_0]} \ \ \eps^{-r+m} \ \parallel {\bf f}^\varepsilon
\parallel_{\mathcal{W}^m_T} \, < \, \infty \, , \qquad \forall \, m \in
{\mathbb N} \, . $
\medskip
\noindent{Observe that the two preceding definitions have very
different meanings according to whether we use the letter $ f $
or the boldfaced type $ {\bf f} $.} In particular, the second inequalities
correspond to $ \varepsilon- \, $stratified estimates. The families
$ \{ f^\varepsilon \}_\varepsilon $ or $ \{ {\bf f}^\varepsilon \}_\varepsilon $ are $ \bigcirc
(\eps^\infty ) $ if they are $ \bigcirc (\eps^r ) $ for all
$ r \in {\mathbb R} \, . $
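\medskip
\noindent{A simple example illustrates the distinction: given a fixed profile
$ f \in \mathcal{W}^\infty_T $ and a fixed phase $ \varphi $ with $ \nabla \varphi
\in C^\infty_b $, the family $ {\bf f}^\varepsilon (t,x) := \eps^r \ f \bigl( t,x,
\eps^{-1} \, \varphi (t,x) \bigr) $ is $ \bigcirc (\eps^r) $ in the second
(stratified) sense, since each derivative in $ (t,x) $ costs at most a factor
$ \eps^{-1} $, so that $ \parallel {\bf f}^\varepsilon \parallel_{\mathcal{W}^m_T}
\leq C_m \ \eps^{r-m} $. On the other hand, the family of profiles
$ f^\varepsilon := \eps^r \, f $ is $ \bigcirc (\eps^r) $ in the first sense.}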
{\bf s}ubsection{Divergence free approximate solutions in $ (t,x) $.}
\noindent{$ \bullet $ {\bf A first result.}} Select smooth functions
\mathfrak{e}dskip
$ {\bf u}_{00} \in H^\infty \, , \qquad {\bf v}arphi_{00} \in C^1 ({\mathfrak{a}thbb R}^d ) \, ,
\qquad \nabla {\bf v}arphi_{00} \in C^\infty_b ({\mathfrak{a}thbb R}^d ) \, . $
\mathfrak{e}dskip
\noindent{Suppose that}
\mathfrak{e}dskip
$ \exists \, c > 0 \, ; \qquad {\bf v}ert \nabla {\bf v}arphi_{00} (x)
{\bf v}ert \, {{\bf r}m\bf g}eq \, 2 \ c \, , \qquad {{\bf r}m\bf f}orall \, x \in {\mathfrak{a}thbb R}^d \, . $
\mathfrak{e}dskip
\noindent{For $ T > 0 $ small enough, the equation $ (\mathfrak{a}thcal{E}) $
associated with}
\mathfrak{e}dskip
$ {\bf u}_0 (0,x) = {\bf u}_{00} (x) \, , \qquad {{\bf r}m\bf f}orall \, x \in {\mathfrak{a}thbb R}^d $}
\mathfrak{e}dskip
\noindent{has a smooth solution $ {\bf u}_0(t,x) \in \mathfrak{a}thcal{W}^\infty_T $.}
Solve the eiconal equation
\mathfrak{e}dskip
\noindent{$ (ei) \qquad {\bf p}art_t {\bf v}arphi_0 + ({\bf u}_0 \cdot \nabla) \,
{\bf v}arphi_0 = 0 \, , \qquad (t,x) \in [0,T] \times {\mathfrak{a}thbb R}^d $}
\mathfrak{e}dskip
\noindent{with the initial data}
\mathfrak{e}dskip
$ {\bf v}arphi_0 (0,x) = {\bf v}arphi_{00} (x) \, , \qquad {{\bf r}m\bf f}orall \, x \in
{\mathfrak{a}thbb R}^d \, . $}
\mathfrak{e}dskip
\noindent{If necessary, restrict the time $ T $ in order to have}
\begin{equation} \label{nonstaou}
{\bf v}ert \nabla {\bf v}arphi_0 (t,x) {\bf v}ert \, {{\bf r}m\bf g}eq \, c \, , \qquad {{\bf r}m\bf f}orall \,
(t,x) \in [0,T] \times {\mathfrak{a}thbb R}^d \, . \qquad \qquad \qquad
\end{equation}
Call $ \Pi_0 (t,x) $ the orthogonal projector from $ {\mathfrak{a}thbb R}^d $
onto the hyperplane
\mathfrak{e}dskip
$ \nabla {\bf v}arphi_0 (t,x)^{\bf p}erp \, := \, \bigl \lbrace \, u
\in {\mathfrak{a}thbb R}^d \, ; \ u \cdot \nabla {\bf v}arphi_0 (t,x) = 0 \, \bigr
{\bf r}brace \, .$}
{\bf s}mallskip
\begin{theo} \label{appBKW} Select any $ {{\bf r}m\bf f}lat = (l,N) \in {\mathfrak{a}thbb N}^2_* $
such that $ 0 < l \, (3 + {{\bf r}m\bf f}rac{d}{2}) \ll N $. Consider the following
initial data
\mathfrak{e}dskip
$ U_{k0}^* (x,\theta) = \Pi_0 (0,x) \, U_{k0}^* (x,\theta) \in
H^\infty \, , \qquad 1 \leq k \leq N \, , $
{\bf s}mallskip
$ \bar U_{k0} (x) \in H^\infty \, , \qquad 1 \leq k \leq
N \, , $
{\bf s}mallskip
$ {\bf v}arphi_{k0} (x) \in H^\infty \, , \qquad 1 \leq k \leq l-1 \, . $
\mathfrak{e}dskip
\noindent{First, there are finite sequences $ \{ U_k \}_{1 \leq k \leq N} $
and $ \{ P_k \}_{1 \leq k \leq N} $ with}
\mathfrak{e}dskip
$ U_k (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad P_k (t,x,\theta) \in
\mathfrak{a}thcal{W}^\infty_T \, , \qquad 1 \leq k \leq N \, , $
\mathfrak{e}dskip
\noindent{and a finite sequence $ \{ {\bf v}arphi_k \}_{1 \leq k \leq l-1} $
with}
\mathfrak{e}dskip
$ {\bf v}arphi_k (t,x) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad 1 \leq k \leq l-1 \, , $
\mathfrak{e}dskip
\noindent{which are such that}
\mathfrak{e}dskip
$ \Pi_0 (0,x) \, U_k^* (0,x,\theta) = \Pi_0 (0,x) \, U_{k0}^*
(x,\theta) \, , \qquad 1 \leq k \leq N \, , $
{\bf s}mallskip
$ \bar U_k (0,x) = \bar U_{k0} (x) \, , \qquad \! 1 \leq k \leq
N \, , $
{\bf s}mallskip
$ {\bf v}arphi_k (0,x) = {\bf v}arphi_{k0} (x) \, , \qquad \! 1 \leq k
\leq l-1 \, . $
\mathfrak{e}dskip
\noindent{Secondly, there is $ \eps_0 \in \, ]0,1 ] $ and
correctors}
\mathfrak{e}dskip
$ {{\bf r}m\bf c} {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad {{\bf r}m\bf c} {\bf p}^\eps_{{\bf r}m\bf f}lat
(t,x) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad \varepsilon\in \, ]0,\eps_0] \, , $
\mathfrak{e}dskip
\noindent{which give rise to families satisfying}
\mathfrak{e}dskip
$ \{ {{\bf r}m\bf c} {\bf u}^\eps_{{\bf r}m\bf f}lat \}_\varepsilon\, = \, \bigcirc(\eps^{{{\bf r}m\bf f}rac{N}{l}-2}) \, ,
\qquad \{ {{\bf r}m\bf c} {\bf p}^\eps_{{\bf r}m\bf f}lat \}_\varepsilon\, = \, \bigcirc(\eps^{{{\bf r}m\bf f}rac{N}{l}})
\, . $
\mathfrak{e}dskip
\noindent{Then, all these expressions are adjusted so that the functions
$ {\bf u}^\eps_\flat $ and $ {\bf p}^\eps_\flat $ defined according to}
\vskip -4mm
\begin{equation} \label{BKWdel}
\, \left. \begin{array}{l}
{\bf u}^\eps_\flat (t,x) := {\bf u}_0 (t,x) + \sum_{k=1}^N \,
\eps^{\frac{k}{l}} \ U_k \bigl( t,x, \eps^{-1} \, \varphi^\eps_g
(t,x) \bigr) + {\bf c} {\bf u}^\eps_\flat (t,x) \ \, \\
{\bf p}^\eps_\flat (t,x) := {\bf p}_0 (t,x) + \sum_{k=1}^N \,
\eps^{\frac{k}{l}} \ P_k \bigl( t,x, \eps^{-1} \, \varphi^\eps_g
(t,x) \bigr) + {\bf c} {\bf p}^\eps_\flat (t,x) \ \ \,
\end{array} \right.
\end{equation}
where $ \varphi^\eps_g (t,x) $ is the geometrical phase
\vskip -4mm
\begin{equation} \label{phasegeo}
\varphi^\eps_g (t,x) \, := \, \varphi_0 (t,x) \, + \, \hbox{$ \sum_{k=1}^{l-1}$}
\ \eps^{\frac{k}{l}} \ \varphi_k (t,x) \qquad \qquad \qquad \ \
\end{equation}
are approximate solutions of $ (\mathcal{E}) $ on the interval $ [0,T] $.
More precisely
\medskip
$ \partial_t {\bf u}^\eps_\flat + ( {\bf u}^\eps_\flat \cdot \nabla) {\bf u}^\eps_\flat
+ \nabla {\bf p}^\eps_\flat = {\bf f}^\eps_\flat \, , \qquad \Div \ {\bf u}^\eps_\flat
= 0 \, , \qquad {\bf f}^\eps_\flat = \bigcirc(\eps^{\frac{N}{l}-3-\frac{d}
{2}}) \, . $
\end{theo}
{\bf s}mallskip
\noindent{$ \bullet $ {\bf Some comments.}}
\mathfrak{e}dskip
\noindent{{\it Remark 2.2.1:}} In what follows, we suppose that
$ U^*_1 $ is non trivial.} In other words, we start with some
initial data satisfying
\begin{equation} \label{nonnontri}
\ \exists \, (x,\theta) \in {\mathfrak{a}thbb R}^d \times {\mathfrak{a}thbb T} \, ; \qquad U^*_1
(0,x,\theta) = U^*_{10} (x,\theta) \not = 0 \, . \qquad \qquad
\qquad \quad
\end{equation}
{\bf v}skip -9mm
{{\bf r}m\bf h}fill $ \triangle $
\noindent{{\it Remark 2.2.2:}} Fix any $ l \in {\mathbb N}_* $. Borel's
summation process allows us to take $ N = + \infty $ in Theorem
\ref{appBKW}. It yields BKW solutions $ ({\bf u}^\eps_\flat , {\bf p}^\eps_\flat) $
which solve $ (\mathcal{E}) $ with infinite accuracy
\mathfrak{e}dskip
$ {\bf p}art_t {\bf u}^\eps_{{\bf r}m\bf f}lat + ( {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla) {\bf u}^\eps_{{\bf r}m\bf f}lat + \nabla
{\bf p}^\eps_{{\bf r}m\bf f}lat \, = \, \bigcirc(\eps^\infty) \, , \qquad \Div \ {\bf u}^\eps_{{\bf r}m\bf f}lat
\, = \, 0 \, . $ {{\bf r}m\bf h}fill $ \triangle $
\noindent{{\it Remark 2.2.3:}} Suppose that the function $ {\bf u}_0 \in
\mathfrak{a}thcal{W}^\infty_\infty $ is a global solution of Euler equations. Suppose
also that the phase $ {\bf v}arphi_0 \in \mathfrak{a}thcal{W}^\infty_\infty $ is subjected
to ({\bf r}ef{nonstaou}) on the strip $ [0,\infty [ \times {\mathfrak{a}thbb R}^d $ and that
it is a global solution of the eiconal equation $ (ei) $. Then
the Theorem {\bf r}ef{appBKW} can be applied with any $ T \in {\mathfrak{a}thbb R}^+_* $. It
means that no blow up occurs at the level of the equations yielding the
profiles $ U_k $, $ P_k $ and the phases $ {\bf v}arphi_k $. Yet, non linear
effects are present. {{\bf r}m\bf h}fill $ \triangle $
\noindent{{\it Remark 2.2.4:}} The characteristic curves
of the field $ {\bf p}art_t + {\bf u}_0 \cdot \nabla_x $ are obtained
by solving the differential equation
\mathfrak{e}dskip
$ {\bf p}art_t \, \Gamma (t,x) \, = \, {\bf u}_0 \bigl( t, \Gamma (t,x)
\bigr) \, , \qquad \Gamma (0,x) = x \, . $
\mathfrak{e}dskip
\noindent{Suppose that the oscillations of the profiles
$ U^*_{k0} $ are concentrated in some domain $ D {\bf s}ubset
{\mathfrak{a}thbb R}^d $.} In other words
\mathfrak{e}dskip
$ \text{supp}_x \, U_{k0}^* \, {\bf s}ubset \, D \, , \qquad
{{\bf r}m\bf f}orall \, k \in \{ 1, \cdots , N \} \, . $
\mathfrak{e}dskip
\noindent{The BKW analysis reveals that for all $ t \in
[0,T] $ we have}
\mathfrak{e}dskip
$ \text{supp}_x \, U_k^* (t,\cdot) \, {\bf s}ubset \, \bigl
\lbrace \, \Gamma (t,x) \, ; \ x \in D \, \bigr {\bf r}brace
\, , \qquad {{\bf r}m\bf f}orall \, k \in \{ 1, \cdots , N \} \, . $
\mathfrak{e}dskip
\noindent{The phenomena under study have a finite speed of
propagation.} {{\bf r}m\bf h}fill $ \triangle $
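\medskip
\noindent{For instance, if $ \sup \, \vert {\bf u}_0 \vert \leq M $ on $ [0,T]
\times {\mathbb R}^d $, then $ \vert \Gamma (t,x) - x \vert \leq M \, t $, so that
$ \text{supp}_x \, U_k^* (t,\cdot) $ stays inside the $ M \, t - $neighbourhood
of $ D $: the oscillations are convected at a speed bounded by that of the mean
flow $ {\bf u}_0 $.}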
\noindent{{\it Remark 2.2.5:}} The influence of dissipation
terms will be taken into account in the subsection 4.1. The
viscosity we will incorporate is anisotropic. It is small enough
in the direction $ \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat $ in order to be
compatible with the propagation of oscillations. {{\bf r}m\bf h}fill $ \triangle $
\subsection{End of the proof of Theorem \ref{appBKW}.}
Theorem \ref{appBKW} is a consequence of Proposition
\ref{appBKWinter}, which will be stated and proved in the
subsection 4.2. Below, we just explain how to deduce Theorem
\ref{appBKW} from Proposition \ref{appBKWinter} applied with
$ \nu = 0 $.
\mathfrak{e}dskip
\noindent{$ \bullet $ {\bf Dictionary between the profiles}.} Select
arbitrary initial data for
\mathfrak{e}dskip
$ \Pi_0 (0,x) \, \tilde U_k^* (0,x,\theta) \in H^\infty \, , \qquad
\langle \tilde U_k {\bf r}angle (0,x) \in H^\infty \, , \qquad 1 \leq k
\leq N \, , $
\mathfrak{e}dskip
\noindent{and arbitrary initial data for}
\mathfrak{e}dskip
$ {\bf v}arphi_k (0,x) \in H^\infty \, , \qquad 1 \leq k \leq l-1 \, . $
\mathfrak{e}dskip
\noindent{On the contrary, impose}
\begin{equation} \label{inipreli}
{\bf v}arphi_k (0,\cdot) \equiv 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in \{l,
\cdots, N \} \, . \qquad \qquad \qquad \qquad \qquad
\end{equation}
Proposition \ref{appBKWinter} provides finite
sequences
\mathfrak{e}dskip
$ \{ \tilde U_k \}_{1 \leq k \leq N} \, , \qquad \{ \tilde P_k \}_{
1 \leq k \leq N} \, , \qquad \{ {\bf v}arphi_k \}_{1 \leq k \leq N} \, , $
\mathfrak{e}dskip
\noindent{and source terms}
\mathfrak{e}dskip
$ \tilde f^\eps_{{\bf r}m\bf f}lat (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad
\tilde g^\eps_{{\bf r}m\bf f}lat (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, . $
\mathfrak{e}dskip
\noindent{such that the associated oscillations}
$$ \left. \begin{array}{l}
\tilde {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) \, := \, {\bf u}_0 (t,x) \, + \,
{\bf s}um_{k=1}^{N} \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde U_k \bigl(
t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \bigr) \, , \\
\tilde {\bf p}^\eps_{{\bf r}m\bf f}lat (t,x) \, := \, {\bf p}_0 (t,x) \, + \,
{\bf s}um_{k=1}^{N} \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde P_k \bigl(t,
x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \bigr) \, , \qquad
\qquad \qquad \ \\
\tilde {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat (t,x) \, := \, \eps^{-1} \ \tilde
f^\eps_{{\bf r}m\bf f}lat \bigl(t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat
(t,x) \bigr) \, , \\
\tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat (t,x) \, := \, \eps^{-1} \ \tilde
g^\eps_{{\bf r}m\bf f}lat \bigl(t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat
(t,x) \bigr) \, ,
\end{array} {\bf r}ight. $$
are subjected to
\mathfrak{e}dskip
$ {\bf p}art_t \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat + ( \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla)
\tilde {\bf u}^\eps_{{\bf r}m\bf f}lat + \nabla \tilde {\bf p}^\eps_{{\bf r}m\bf f}lat = \tilde {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat
= \bigcirc ( \eps^{{{\bf r}m\bf f}rac{N+1}{l}-1}) \, , \quad \ \Div \ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat
= \tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat = \bigcirc ( \eps^{{{\bf r}m\bf f}rac{N+1}{l}-1}) \, . $
\mathfrak{e}dskip
\noindent{The oscillations $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $ and $ \tilde {\bf p}^\eps_{{\bf r}m\bf f}lat $
involve the {\it complete} phase $ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) $ which is
the sum of the geometrical phase $ {\bf v}arphi^\eps_g (t,x) $ plus some
{\it adjusting} phase $ \varepsilon\, {\bf v}arphi^\eps_a (t,x) $. More precisely
\mathfrak{e}dskip
$ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) := {\bf v}arphi^\eps_g (t,x) + \varepsilon\
{\bf v}arphi^\eps_a (t,x) \, , \qquad {\bf v}arphi^\eps_a (t,x) :=
{\bf s}um_{k=l}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l}-1} \ {\bf v}arphi_k (t,x) \, . $
\mathfrak{e}dskip
\noindent{The functions $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $ and $ \tilde
{\bf p}^\eps_{{\bf r}m\bf f}lat $ can also be written in terms of the phase
$ {\bf v}arphi^\eps_g $.} Indeed, there is a unique decomposition
\mathfrak{e}dskip
$ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat = {\bf u}^\eps_{{\bf r}m\bf f}lat + {\bf r} {\bf u}^\eps_{{\bf r}m\bf f}lat =
{\bf u}^\eps_{{\bf r}m\bf f}lat + \bigcirc (\eps^{{{\bf r}m\bf f}rac{N+1}{l}}) \, , \qquad
\tilde {\bf p}^\eps_{{\bf r}m\bf f}lat = {\bf p}^\eps_{{\bf r}m\bf f}lat + {\bf r} {\bf p}^\eps_{{\bf r}m\bf f}lat =
{\bf p}^\eps_{{\bf r}m\bf f}lat + \bigcirc (\eps^{{{\bf r}m\bf f}rac{N+1}{l}}) \, , $
\mathfrak{e}dskip
\noindent{involving the representations}
\begin{equation} \label{BKWdelbis}
\ {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) = u^\eps_{{\bf r}m\bf f}lat \bigl( t,x, \eps^{-1} \,
{\bf v}arphi^\eps_g (t,x) \bigr) \, , \qquad {\bf p}^\eps_{{\bf r}m\bf f}lat (t,x) =
p^\eps_{{\bf r}m\bf f}lat \bigl( t,x, \eps^{-1} \, {\bf v}arphi^\eps_g (t,x)
\bigr)
\end{equation}
where the profiles $ u^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) $ and
$ p^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) $ have the form
$$ \left. \begin{array}{l}
u^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) \, = \, {\bf u}_0 (t,x) + {\bf s}um_{k=1}^N \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ U_k ( t,x, \theta) \, , \qquad \qquad \qquad
\qquad \qquad \ \\
p^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) \, = \, {\bf p}_0 (t,x) + {\bf s}um_{k=1}^N \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ P_k ( t,x, \theta) \, .
\end{array} {\bf r}ight. $$
The transition from $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $ to $ {\bf u}^\eps_{{\bf r}m\bf f}lat $
is achieved through the phase shift $ {\bf v}arphi^\eps_a $
\mathfrak{e}dskip
$ \tilde U_k ( t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat ) \, = \,
\tilde U_k \bigl( t,x, \eps^{-1} \, {\bf v}arphi^\eps_g + {\bf v}arphi_l +
{\bf s}um_{k=l+1}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l} - 1} \ {\bf v}arphi_k \bigr) \, . $
\mathfrak{e}dskip
\noindent{Use the Taylor formula in order to absorb the small
term on the right.} It furnishes the following explicit link
between the $ (U_k,P_k) $ and the $ (\tilde U_k , \tilde P_k) $
\begin{equation} \label{diction}
\ \left. \begin{array}{l}
U_k \bigl( t,x,\theta - {\bf v}arphi_l (t,x) \bigr) \, := \, \tilde U_k
(t,x,\theta) \, + \, \mathfrak{a}thcal{G}^k (\tilde U_1, \cdots, \tilde U_{k-1}) (t,
x,\theta) \, , \ \ \\
P_k \bigl( t,x,\theta - {\bf v}arphi_l (t,x) \bigr) \, := \, \tilde P_k
(t,x,\theta) \, + \, \mathfrak{a}thcal{G}^k (\tilde P_1, \cdots, \tilde P_{k-1}) (t,
x,\theta) \, .
\end{array} {\bf r}ight.
\end{equation}
The application $ \mathfrak{a}thcal{G}^k $ can be put in the form
\mathfrak{e}dskip
$ \mathfrak{a}thcal{G}^k (\tilde U_1, \cdots, \tilde U_{k-1}) \, := \, {\bf s}um_{p=1}^{k-1} \,
{\bf p}artial_\theta^p \mathfrak{a}thcal{G}^k_p (\tilde U_1, \cdots, \tilde U_{k-p}) \, ,
\qquad k \in \{ 1, \cdots , N \} \, . $
\mathfrak{e}dskip
\noindent{The terms $ \mathfrak{a}thcal{G}^k_p $ are given by}
\mathfrak{e}dskip
$ \mathfrak{a}thcal{G}^k_p (\tilde U_1, \cdots, \tilde U_{k-p}) \, := \,
{{\bf r}m\bf f}rac{1}{p \, !} \ {\bf s}um_{{{\bf r}m\bf a}lpha \in \mathfrak{a}thcal{J}^k_p} \ {\bf v}arphi_{
l+1+{{\bf r}m\bf a}lpha_1} \times \cdots \times {\bf v}arphi_{l+1+{{\bf r}m\bf a}lpha_p}
\ \tilde U_{{{\bf r}m\bf a}lpha_{p+1}} \, , $
\mathfrak{e}dskip
\noindent{where the sum is taken over the set}
\mathfrak{e}dskip
$ \mathfrak{a}thcal{J}^k_p \, := \, \bigl \lbrace \, {{\bf r}m\bf a}lpha = ({{\bf r}m\bf a}lpha_1,
\cdots,{{\bf r}m\bf a}lpha_p, {{\bf r}m\bf a}lpha_{p+1}) \in {\mathfrak{a}thbb N}^{p+1} \, ; $
{\bf s}mallskip
{{\bf r}m\bf h}fill $ 0 \leq {{\bf r}m\bf a}lpha_j \leq N-l-1 \, , \qquad {{\bf r}m\bf f}orall \,
j \in \{1, \cdots, p \} \, , \qquad \qquad \quad \ \ \, $
{\bf s}mallskip
{{\bf r}m\bf h}fill $ 1 \leq {{\bf r}m\bf a}lpha_{p+1} \leq k-p \, , \qquad {{\bf r}m\bf a}lpha_1
+ \cdots + {{\bf r}m\bf a}lpha_p + {{\bf r}m\bf a}lpha_{p+1} = k-p \, \bigr {\bf r}brace
\, . $
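\medskip
\noindent{For instance, unwinding these definitions for small $ k $ gives
$ \mathcal{G}^1 = 0 $ and $ \mathcal{G}^2 (\tilde U_1) = \partial_\theta \bigl(
\varphi_{l+1} \ \tilde U_1 \bigr) = \varphi_{l+1} \ \partial_\theta \tilde U_1 $,
which is exactly the first correction furnished by the Taylor formula when the
small shift $ \eps^{\frac{1}{l}} \, \varphi_{l+1} $ is inserted into the term
$ \eps^{\frac{1}{l}} \ \tilde U_1 $.}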
\mathfrak{e}dskip
\noindent{The relation ({\bf r}ef{diction}) and the definition of
$ \mathfrak{a}thcal{G}^k $ imply that}
\mathfrak{e}dskip
$ \bar U_k (t,x) \, = \, \langle \tilde U_k {\bf r}angle (t,x) \, ,
\qquad {{\bf r}m\bf f}orall \, k \in \{ 1, \cdots , N \} \, , \qquad
{{\bf r}m\bf f}orall \, t \in [0,T] \, . $
\mathfrak{e}dskip
\noindent{Therefore, prescribing the initial data for the
$ \bar U_k $ or the $ \langle \tilde U_k {\bf r}angle $ amounts
to the same thing.} The condition ({\bf r}ef{inipreli}) yields
\mathfrak{e}dskip
$ \mathfrak{a}thcal{G}^k_p (\tilde U_1, \cdots, \tilde U_{k-p}) (0,x,\theta)
= 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in \{ 1, \cdots , N \} \, . $
\mathfrak{e}dskip
\noindent{Since $ {\bf v}arphi_l (0,\cdot) \equiv 0 $, we have}
\mathfrak{e}dskip
$ \Pi_0 (0,x) \, U_k^* (0,x,\theta) \, = \, \Pi_0 (0,x) \,
\tilde U_k^* (0,x,\theta) \, , \qquad {{\bf r}m\bf f}orall \, k \in \{ 1,
\cdots , N \} \, . $
\mathfrak{e}dskip
\noindent{It is clearly equivalent to specify the initial
data for the $ \Pi_0 \, U_k^* $ or the $ \Pi_0 \, \tilde
U_k^* $.
\noindent{$ \bullet $ {\bf The divergence free relation in
the variables $ (t,x) $.}} Consider the application
\mathfrak{e}dskip
$ \Div \, : \, H^\infty \, \longrightarrow \, \text{{\bf r}m Im} \,
(\Div) \, {\bf s}ubset \, \bigl \lbrace \, {{\bf r}m\bf g} \in H^\infty \, ; \ {{\bf r}m\bf h}at {{\bf r}m\bf g}
(0) = 0 \, \bigr {\bf r}brace \, . $
\mathfrak{e}dskip
\noindent{We can select some special right inverse.}
\begin{lem} \label{invpartbar} There is a linear operator
$ \text{{\bf r}m ridiv} \, : \, \text{{\bf r}m Im} \, (\Div) \longrightarrow
H^\infty $ with
\begin{equation} \label{exainv}
\Div \circ \text{{\bf r}m ridiv} \ {{\bf r}m\bf g} \, = \, {{\bf r}m\bf g} \, , \qquad {{\bf r}m\bf f}orall \,
{{\bf r}m\bf g} \in \text{{\bf r}m Im} \, (\Div) \, . \qquad \qquad \qquad \quad \
\end{equation}
For all $ \iota > 0 $ and for all $ m \in {\mathfrak{a}thbb N} $, there is a
constant $ C_m^\iota > 0 $ such that
\begin{equation} \label{estiinv}
{\bf p}arallel \text{{\bf r}m ridiv} \ {{\bf r}m\bf g} {\bf p}arallel_{H^m} \, \leq \, C_m \
{\bf p}arallel {{\bf r}m\bf g} {\bf p}arallel_{H^{m+1+ {{\bf r}m\bf f}rac{d}{2} + \iota}} \, , \qquad
{{\bf r}m\bf f}orall \, {{\bf r}m\bf g} \in \text{{\bf r}m Im} \, (\Div) \, . \qquad
\end{equation}
\end{lem}
{\bf s}mallskip
\noindent{\em \underline{Proof of the Lemma \ref{invpartbar}}.} Introduce a
cut-off function $ \psi \in C^\infty ({\mathbb R}^d) $ such that
\medskip
$ \bigl \{ \, \xi \, ; \ \psi (\xi) \not = 0 \, \bigr \} \,
\subset \, B (0,2] \, , \qquad \bigl \{ \, \xi \, ; \ \psi
(\xi) = 1 \, \bigr \} \, \supset \, B (0,1] \, . $
\medskip
\noindent{For $ {\bf g} \in \text{\rm Im} \, (\Div) $, take the
explicit formula}
\medskip
$ \text{ridiv} \, ({\bf g}) : = \mathcal{F}^{-1} \, \bigl( \, \int_0^1 \,
\nabla_\xi (\psi \, \hat {\bf g}) (r \, \xi) \ dr \, + \, \vert
\xi \vert^{-2} \ (1-\psi)(\xi) \ \hat {\bf g} (\xi) \times \xi \,
\bigr) \, . $
\medskip
\noindent{Since $ \hat {\bf g} (0) = 0 $, the relation (\ref{exainv})
is satisfied.} For $ s > \frac{d}{2} $, the injection $ H^s
({\mathbb R}^d) \hookrightarrow L^\infty ({\mathbb R}^d) $ is continuous. It
leads to (\ref{estiinv}). \hfill $ \Diamond $
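\medskip
\noindent{The identity behind (\ref{exainv}) is elementary. Since $ \hat {\bf g}
(0) = 0 $, we can write
\medskip
$ \xi \cdot \int_0^1 \, \nabla_\xi (\psi \, \hat {\bf g}) (r \, \xi) \ dr \, = \,
\int_0^1 \, \frac{d}{dr} \, \bigl[ (\psi \, \hat {\bf g}) (r \, \xi) \bigr] \ dr
\, = \, (\psi \, \hat {\bf g}) (\xi) \, , $
\medskip
\noindent{while the high frequency part of the formula contributes $ (1 - \psi)
(\xi) \ \hat {\bf g} (\xi) $. Summing the two pieces and going back to the physical
side (up to the normalization of the Fourier transform), the application of
$ \Div $ to $ \text{ridiv} \, ({\bf g}) $ restores $ {\bf g} $.}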
\noindent{$ \bullet $ {\bf The Leray projector in the variables $ (t,x) $.}}
Note $ \Pi (\xi) $ the orthogonal projector from $ {\mathbb R}^d $ onto the plane
\medskip
$ \xi^\perp \, := \, \lbrace \, u \in {\mathbb R}^d \, ; \ u \cdot \xi
= 0 \, \rbrace \, . $
\medskip
\noindent{Introduce the closed subspace}
\medskip
$ \text{F} \, := \, \bigl \lbrace \, {\bf u} \in L^2 \, ;
\ \Div \, {\bf u} = 0 \, \bigr \rbrace \, \subset \, L^2 \, . $
\medskip
\noindent{Call $ P $ the orthogonal projector from $ L^2 $
onto F.} It corresponds to the Fourier multiplier
\medskip
$ P \, {\bf u} \, = \, \Pi (D_x) \, {\bf u} \, := \, ( 2 \, \pi)^{-
\frac{d}{2}} \ \int_{{\mathbb R}^d} \, e^{i \, x \cdot \xi} \ \Pi
(\xi) \, \hat {\bf u} (\xi) \ d \xi \, . $
\medskip
\noindent{The application $ P $ is the Leray projector onto the
space of divergence free vector fields.} It is a self-adjoint
operator such that
\medskip
$ \ker \, \Div \, = \, \text{Im} \, P \, , \qquad \text{Im} \,
\nabla \, = \, \bigl( \ker \, (\Div) \bigr)^\perp \, = \, \ker \,
P \, . $
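\medskip
\noindent{In coordinates, the symbol of $ P $ is simply
\medskip
$ \Pi (\xi) \, = \, \text{Id} \, - \, \vert \xi \vert^{-2} \ \xi \otimes \xi
\, , \qquad \xi \in {\mathbb R}^d \setminus \{ 0 \} \, , $
\medskip
\noindent{so that $ \Pi (\xi) \, \xi = 0 $ and any gradient term is annihilated:
$ P \, \nabla {\bf p} = 0 $.}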
\mathfrak{e}dskip
\noindent{Consider the Cauchy problem}
\mathfrak{e}dskip
$ {\bf p}artial_t {\bf u} + \nabla {\bf p} = {{\bf r}m\bf f} \, , \qquad \Div \, {\bf u} = 0 \, ,
\qquad {\bf u} (0, \cdot) = {{\bf r}m\bf h} $
\mathfrak{e}dskip
\noindent{with data $ {{\bf r}m\bf f} \in L^2_T $ and $ {{\bf r}m\bf h} \in L^2 $.}
It leads to the equivalent conditions
\mathfrak{e}dskip
$ {\bf p}artial_t {\bf u} = P \, {{\bf r}m\bf f} \, , \qquad {\bf u} (0, \cdot)
= P \, {{\bf r}m\bf h} \, , \qquad \nabla {\bf p} = (\id - P) \, {{\bf r}m\bf f} \, . $
\noindent{Now we come back to the proof of Theorem {\bf r}ef{appBKW}.}
It remains to absorb the term $ \tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat \in
\text{Im} \, (\Div) $. To this end, take $ \iota = {{\bf r}m\bf f}rac{1}
{2 \, l} $. Define $ {\bf u}^\eps_{{\bf r}m\bf f}lat $ and $ {\bf p}^\eps_{{\bf r}m\bf f}lat $
as in ({\bf r}ef{BKWdel}) with the $ U_k $ and $ P_k $ of
({\bf r}ef{diction}). Introduce
\mathfrak{e}dskip
$ {{\bf r}m\bf c} {\bf u}^\eps_{{\bf r}m\bf f}lat \, := \, {\bf r} {\bf u}^\eps_{{\bf r}m\bf f}lat - \text{ridiv} \,
\tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat = \bigcirc (\eps^{{{\bf r}m\bf f}rac{N}{l}-2-{{\bf r}m\bf f}rac{d}{2}}) \, ,
\qquad {{\bf r}m\bf c} {\bf p}^\eps_{{\bf r}m\bf f}lat \, := \, {\bf r} {\bf p}^\eps_{{\bf r}m\bf f}lat = \bigcirc
(\eps^{{{\bf r}m\bf f}rac{N+1}{l}}) \, . $
\mathfrak{e}dskip
\noindent{After substitution in $ (\mathfrak{a}thcal{E}) $, we lose again a
power of $ \varepsilon$.} We find
\mathfrak{e}dskip
$ {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat = \tilde {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat - (\text{ridiv} \,
\tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat \cdot \nabla) \, \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat
- (\tilde {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla) \, \text{ridiv} \,
\tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat $
{\bf s}mallskip
$ \qquad \quad \ - \, {\bf p}art_t \text{ridiv} \, \tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat
+ (\text{ridiv} \, \tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat \cdot \nabla) \,
\text{ridiv} \, \tilde {{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat \, = \, \bigcirc(\eps^{
{{\bf r}m\bf f}rac{N}{l}-3-{{\bf r}m\bf f}rac{d}{2}}) \, . $
\mathfrak{e}dskip
\noindent{Theorem \ref{appBKW} looks like classical statements in one phase
non linear geometric optics, except that the phase $ \varphi^\eps_g $ does depend
on $ \varepsilon $.} In the next chapter, we examine the roles played by the
$ \varphi_k $ which make up $ \varphi^\eps_g $ and $ \varphi^\eps_a $.
\section{The cascade of phases.} {\it Turbulence} and {\it intermittency} are topics
which can be approached from very different points of view. Two approaches compete:
\medskip
\noindent{a) }The deterministic approach, which studies the time evolution
of flows arising in fluid mechanics \cite{B}-\cite{C}-\cite{D}-\cite{E}-\cite{MPP}.
\medskip
\noindent{b) }The statistical approach, in which the velocity of the
fluid is a random variable \cite{FMRT}-\cite{L}.
\mathfrak{e}dskip
\noindent{Theorem \ref{appBKW} is mainly connected with a)}. It brings
various pieces of information related to the propagation of quasi-singularities.
These aspects are detailed first. Then we briefly explain b) and we draw
(in the setting of Theorem \ref{appBKW}) a phenomenological
comparison between a) and b).
{\bf s}mallskip
{\bf s}ubsection{Microstructures.} The result {\bf r}ef{appBKW} is concerned
with the convection of microstructures. It is linked with the multiple
scale approach of \cite{MPP} and \cite{C}. In \cite{MPP} the authors
look for BKW solutions $ {\bf u}^\eps_a $ in the form
\mathfrak{e}dskip
$ {\bf u}^\eps_a (t,x) \, = \, {\bf u}_0 (t,x) \, + \, U_0^* \bigl( t,x,
\eps^{-1} \, t , \eps^{-1} \, {\bf v}ec {\bf v}arphi_0 (t,x) \bigr) \, + \,
\bigcirc (\eps) \, . $
\mathfrak{e}dskip
\noindent{In the more recent paper \cite{C}, the selected expansion is}
\mathfrak{e}dskip
$ \mathfrak{u}^\eps_a (t,x) \, = \, {\bf u}_0 (t,x) \, + \, \eps^{{{\bf r}m\bf f}rac{1}{3}} \
U_1 \bigl( t,x, \eps^{-{{\bf r}m\bf f}rac{2}{3}} \, t, \eps^{-1} \, {\bf v}ec {\bf v}arphi_0
(t,x) \bigr) \, + \, \bigcirc (\eps^{{{\bf r}m\bf f}rac{2}{3}}) \, . $
\mathfrak{e}dskip
\noindent{Both articles \cite{C} and \cite{MPP} use homogenization
techniques.} They perform computations involving expressions as
$ {\bf u}^\eps_a $ or $ \mathfrak{u}^\eps_a $. Simplifications (supported by
engineering experiments) are made in order to get effective
equations for the evolution of $ ({\bf u}_0 , U_0^*) $ or $ ({\bf u}_0 ,
U_1 ) $.
{\bf s}mallskip
\noindent{Consider the simple case of one phase expansions (that
is when $ {\bf v}ec {\bf v}arphi_0 \equiv {\bf v}arphi_0 $ is a scalar valued
function).} Reasons why a complete mathematical analysis based on
$ {\bf u}^\eps_a $ or $ \mathfrak{u}^\eps_a $ is not available can be drawn from
the Theorem {\bf r}ef{appBKW}. For instance, look at $ \mathfrak{u}^\eps_a $.
When $ l = 3 $, the oscillation $ \mathfrak{u}^\eps_a $ involves the same
scales as $ {\bf u}^\eps_{(3,N)} $ since
\mathfrak{e}dskip
$ \eps^{-1} \, {\bf v}arphi^\eps_g (t,x) \, = \, \eps^{-1} \ {\bf v}arphi_0
(t,x) + \eps^{- {{\bf r}m\bf f}rac{2}{3}} \ {\bf v}arphi_1 (t,x) + \eps^{- {{\bf r}m\bf f}rac{1}
{3}} \ {\bf v}arphi_2 (t,x) \, . $
\mathfrak{e}dskip
\noindent{Now the analogy stops here, since in general $ \varphi_1
(t,x) \not \equiv t $ and $ \varphi_2 (t,x) \not \equiv 0 $.} These
are geometrical obstructions which prevent us from describing the propagation
by way of $ \mathfrak{u}^\eps_a $. The asymptotic expansion $ \mathfrak{u}^\eps_a $
is not suitable.
\mathfrak{e}dskip
\noindent{Analogous arguments concerning $ {\bf u}^\eps_a $ will be
presented in the paragraph 3.5.}
{\bf s}mallskip
{\bf s}ubsection{The geometrical phase.} Let us examine
more carefully how the expression $ {\bf v}arphi^\eps_g $
is built. Because of the condition ({\bf r}ef{nonstaou}),
for $ \varepsilon$ small enough, it is still not stationary
\begin{equation} \label{nondegeat}
\ \exists \, \eps_0 > 0 \, ; \qquad \nabla {\bf v}arphi^\eps_g
(t,x) \not = 0 \, , \qquad {{\bf r}m\bf f}orall \, (\eps,t,x) \in \,
]0,\eps_0] \times [0,T] \times {\mathfrak{a}thbb R}^d \, . \ \
\end{equation}
In fact, the function $ \varphi^\eps_g $ comes from the
approximate eiconal equation
\medskip
$ \partial_t \varphi^\eps_g + ( \bar u^\eps_\flat \cdot
\nabla) \varphi^\eps_g = \bigcirc ( \varepsilon ) $
\medskip
\noindent{which is equivalent to}
\medskip
$ \partial_t \varphi_k + {\bf u}_0 \cdot \nabla \varphi_k + \sum_{j=0}^{k-1}
\, \bar U_{k-j} \cdot \nabla \varphi_j = 0 \, , \qquad \forall \, k \in \{ 1,
\cdots, l-1 \} \, . $
\medskip
\noindent{The family $ \{ {\bf u}^\eps_\flat (t,x) \}_{\varepsilon \in \, ]0,1]} $
has an $ \varepsilon- \, $stratified regularity \cite{G2} with respect to the
phase $ \varphi^\eps_g $.} This is a piece of {\it geometrical} information.
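\medskip
\noindent{For instance, the first two equations of this hierarchy read}
\medskip
$ \partial_t \varphi_1 + {\bf u}_0 \cdot \nabla \varphi_1 + \bar U_1 \cdot \nabla
\varphi_0 = 0 \, , \qquad \partial_t \varphi_2 + {\bf u}_0 \cdot \nabla \varphi_2
+ \bar U_1 \cdot \nabla \varphi_1 + \bar U_2 \cdot \nabla \varphi_0 = 0 \, . $
\medskip
\noindent{Even if $ \varphi_1 (0,\cdot) $ and $ \varphi_2 (0,\cdot) $ vanish, the
source terms $ \bar U_k \cdot \nabla \varphi_0 $ generically create non trivial
phases $ \varphi_1 $ and $ \varphi_2 $ at positive times. This is the mechanism of
creation of the $ \varphi_k $ announced in the introduction.}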
{\bf s}mallskip
\subsection{Closure problems.} We have explained why appealing
only to $ \varphi_0 $ is not sufficient. It turns out that BKW
computations relying only on the geometrical phase $ \varphi^\eps_g $
also come to nothing. This is a subtle aspect of the proof of
Theorem \ref{appBKW}. We now lay stress on it.
\mathfrak{e}dskip
\noindent{For all $ N \in {\mathfrak{a}thbb N}_* $, the application $ \mathfrak{a}thcal{G} $
defined below is one to one}
$$ \ \left. \begin{array}{rcl}
\mathfrak{a}thcal{G} \ \, : \, (\mathfrak{a}thcal{W}^\infty_T)^N & \longrightarrow & (\mathfrak{a}thcal{W}^\infty_T)^N \\
\\
\left(\begin{array}{c}
\tilde U_1 \\
\tilde U_2 \\
{\bf v}dots \\
\tilde U_N
\end{array} {\bf r}ight)(t,x,\theta) & \longmapsto &
\left(\begin{array}{c}
\tilde U_1 \\
\tilde U_2 + \mathfrak{a}thcal{G}^1 (\tilde U_1) \\
{\bf v}dots \\
\tilde U_N + \mathfrak{a}thcal{G}^N (\tilde U_1, \cdots, \tilde U_{N-1})
\end{array} {\bf r}ight)(t,x,\theta + {\bf v}arphi_l (t,x) ) \, .
\end{array} {\bf r}ight. $$
Once the $ U_j $ or the $ \tilde U_j $ are known, it is entirely
equivalent to use $ {\bf u}^\eps_{{\bf r}m\bf f}lat $ or $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $.
Before the $ U_j $ or the $ \tilde U_j $ have been identified, in
particular when performing the BKW calculus, it is deeply different to
employ $ {\bf u}^\eps_{{\bf r}m\bf f}lat $ or $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $. Indeed, there is
a unique choice of the $ {\bf v}arphi_k $ with $ l \leq k \leq N $, which
imposes a {\it specific hierarchy} between the profiles $ \tilde
U_k $, which makes possible the {\it triangulation} of the equations
obtained by the formal computations.
\mathfrak{e}dskip
\noindent{In the subsection 2.3, we will perform the BKW analysis
with the profiles $ \tilde U_k $.} It yields a sequence of equations
\begin{equation} \label{cloeetilde}
\tilde X^k ( \tilde U_1, \cdots, \tilde U_{k+l} ) \, = \, 0 \, ,
\qquad 1 \leq k \leq N \, . \qquad \qquad \quad \
\end{equation}
As usual in non linear geometric optics, this can be rewritten
in order to find a sequence of well-posed equations
\begin{equation} \label{cloeq}
\dot X^k ( \dot U_k ) \, = \, \mathfrak{a}thcal{F} ( \dot U_1, \cdots , \dot
U_{k-1} ) \, , \qquad 1 \leq k \leq N \, , \qquad \qquad
\end{equation}
where the $ \dot U_k $ are made of pieces of the $ \tilde
U_j $. Of course, the equation ({\bf r}ef{cloeq}) can be interpreted
in terms of the $ \tilde U_j $ and then in terms of the $ U_j $.
In this second step, it requires to implement the phase
shift $ {\bf v}arphi_l $ and the transformations $ \mathfrak{a}thcal{G}^j_p $
with $ 1 \leq j \leq k-1 $ and $ 1 \leq p \leq j $. Now,
the BKW analysis reveals that $ {\bf v}arphi_l $ or the various
coefficients $ {\bf v}arphi_i $ which appear in the definition
of such $ \mathfrak{a}thcal{G}^j_p $ do not depend only on $ (\dot U_1,
\cdots, \dot U_k) $ but also on some $ \dot U_i $ with
$ i > k $. The resulting system is therefore underdetermined.
Computations involving the functions $ U_j $ lead to a
sequence of equations which are not closed.
\mathfrak{e}dskip
\noindent{The insertion of the phases $ \varphi_k $ with $ 1
\leq k \leq N $ is an elegant way to introduce $ \mathcal{G} $.} The
change of variables $ \mathcal{G} $, though it is a function of $ (U_1,
\cdots, U_N) $, is needed to progress. It allows us to get round
{\it closure problems}.
{\bf s}ubsection{Compensated compactness.} Dissipation terms can be
incorporated in the discussion. In the variables $ (t,x) $, the
addition of some viscosity $ \kappa $ is compatible with the
propagation of oscillations if for instance $ \kappa = \nu \,
\eps^2 $. There are approximate solutions $ ({\bf u}^\eps_{{\bf r}m\bf f}lat ,
{\bf p}^\eps_{{\bf r}m\bf f}lat ) $ of the Navier-Stokes equations. They satisfy
({\bf r}ef{BKWdel}) and
\mathfrak{e}dskip
$ {\bf p}art_t {\bf u}^\eps_{{\bf r}m\bf f}lat + ( {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla) {\bf u}^\eps_{{\bf r}m\bf f}lat
+ \nabla {\bf p}^\eps_{{\bf r}m\bf f}lat = \nu \ \eps^2 \ \Delta_x {\bf u}^\eps_{{\bf r}m\bf f}lat +
{{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat \, , \qquad \Div \ {\bf u}^\eps_{{\bf r}m\bf f}lat = 0 \, , $
\mathfrak{e}dskip
\noindent{with $ {\bf f}^\eps_\flat = \bigcirc(\eps^\infty) $.}
When $ \nu > 0 $, Leray's theorem provides global weak
solutions $ ({\bf u}^\eps,{\bf p}^\eps) (t,x) $ of the following
Cauchy problem
$$ \left \{ \begin{array} {l}
\partial_t {\bf u}^\varepsilon + ( {\bf u}^\varepsilon \cdot \nabla) {\bf u}^\varepsilon + \nabla
{\bf p}^\varepsilon = \nu \ \eps^2 \ \Delta_x {\bf u}^\varepsilon \, , \qquad \Div \
{\bf u}^\varepsilon = 0 \, , \qquad \qquad \\
{\bf u}^\varepsilon (0,\cdot) \equiv {\bf u}^\eps_\flat (0,\cdot) \, .
\end{array} \right. $$
Suppose now that $ {\bf u}_0 \equiv 0 $. Then, we have
also the uniform controls
\begin{equation} \label{compcomp}
\, \left. \begin{array}{l}
\sup \ \bigl \lbrace \, \parallel \eps^{-\frac{1}{l}} \,
{\bf u}^\varepsilon \parallel_{L^2_T} \, ; \ \varepsilon \in \, ]0,1] \,
\bigr \rbrace \, \leq \, C \, < \, \infty \, , \\
\sup \ \bigl \lbrace \, \nu \ \eps^2 \ \int_0^T \,
\parallel \eps^{-\frac{1}{l}} \, {\bf u}^\varepsilon (t,\cdot)
\parallel^2_{H^1 ({\mathbb R}^d) } \ dt \, ; \ \varepsilon \in \,
]0,1] \, \bigr \rbrace \, \leq \, C \, < \, \infty \, . \
\end{array} \right.
\end{equation}
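Let us indicate, roughly, where (\ref{compcomp}) comes from. Since $ {\bf u}_0
\equiv 0 $, the initial data satisfies $ \parallel {\bf u}^\eps_\flat (0,\cdot)
\parallel_{L^2} = \bigcirc (\eps^{\frac{1}{l}}) $, and the Leray energy
inequality gives, for almost every $ t \in [0,T] $,
\medskip
$ \parallel {\bf u}^\varepsilon (t,\cdot) \parallel^2_{L^2} \, + \, 2 \ \nu \ \eps^2
\ \int_0^t \, \parallel \nabla {\bf u}^\varepsilon (s,\cdot) \parallel^2_{L^2} \ ds
\ \leq \ \parallel {\bf u}^\eps_\flat (0,\cdot) \parallel^2_{L^2} \, , $
\medskip
\noindent{which yields the two bounds in (\ref{compcomp}) after multiplication
by $ \eps^{-\frac{2}{l}} $.}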
Arguments from the theory of compensated compactness
\cite{Ge} can be employed to study the sequence $ \{ \eps^{
-\frac{1}{l}} \, {\bf u}^\varepsilon \}_\varepsilon $. In the spirit of \cite{D}
or \cite{E}, we can try to exploit the information contained
in (\ref{compcomp}) and the equation on $ {\bf u}^\varepsilon $ in order
to describe the asymptotic behaviour, as $ \varepsilon $ goes to
zero, of the functions $ \eps^{- \frac{1}{l}} \, {\bf u}^\varepsilon $.
However this approach does not seem to be applicable here.
\mathfrak{e}dskip
\noindent{Indeed, {\it obvious} instabilities occur.}
The related mechanisms, which induce the non linear instability
of Euler equations, are detailed in the paragraph 5.1. Below, we
just give an intuitive idea of what can happen. Use the representation
$ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $ involving the phase $ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat $.
The determination of the intermediate term $ \varphi_l $ requires
identifying $ \langle \tilde U_l \rangle $ and $ \tilde U^*_{l-1} $. This
is a consequence of the equations (\ref{phik}) and (\ref{moyj+1}).
\mathfrak{e}dskip
\noindent{In view of the formula ({\bf r}ef{diction}), when $ {\bf v}arphi_l $
is modified by an amount of $ \delta {\bf v}arphi_l $, the quantity
$ U_1 (t,x,\theta) $ undergoes a perturbation of the same order
$ \delta \varphi_l $.} When dealing with quasi-singularities,
some quantities with $ \varepsilon $ in factor (like $ \langle \tilde
U_l \rangle $) or with $ \eps^{1 - \frac{1}{l}} $ in factor
(like $ \tilde U^*_{l-1} $) can control information of size
$ \eps^{\frac{1}{l}} $. This fact is expressed by the
following rules of transformation
\begin{equation} \label{rulesoftr}
\left. \begin{array}{lcl}
\langle \tilde U_l {\bf r}angle \ / \ \langle \tilde U_l {\bf r}angle \,
+ \, \delta \langle \tilde U_l {\bf r}angle \quad &
\Longrightarrow & \quad {\bf u}^\eps_{{\bf r}m\bf f}lat \ / \ {\bf u}^\eps_{{\bf r}m\bf f}lat +
\bigcirc ( \eps^{{{\bf r}m\bf f}rac{1}{l}} ) \ \delta \langle \tilde U_l
{\bf r}angle \, , \\
\tilde U^*_{l-1} \ / \ \tilde U^*_{l-1} \, + \, \delta
\tilde U^*_{l-1} \quad & \Longrightarrow & \quad {\bf u}^\eps_{{\bf r}m\bf f}lat \
/ \ {\bf u}^\eps_{{\bf r}m\bf f}lat + \bigcirc ( \eps^{{{\bf r}m\bf f}rac{1}{l}} ) \ \delta
\tilde U^*_l \, . \
\end{array} {\bf r}ight.
\end{equation}
Now reverse the preceding reasoning. To describe features in the
principal oscillating term $ \eps^{\frac{1}{l}} \ U^*_1
\bigl(t,x,\eps^{-1} \, \varphi^\eps_g (t,x) \bigr) $, we
must identify $ \varphi_l $, which means obtaining $ \langle
\tilde U_l \rangle $ and $ \tilde U^*_{l-1} $. In other words,
we need to know quantities which have respectively $ \varepsilon $ and
$ \eps^{1- \frac{1}{l}} $ in factor. When $ l \geq 2 $ such
information is clearly not reachable by rough controls such as
(\ref{compcomp}).
\mathfrak{e}dskip
\noindent{This discussion indicates that the study of turbulent
regimes requires combining at least geometrical aspects, multiphase
analysis and high order expansions.} The tools of non linear geometric
optics seem to be appropriate. Some attempts in this direction have
already been made.
{\bf s}mallskip
{\bf s}ubsection {Non linear geometric optics.} We make in this paragraph 3.5
several comments about non linear geometric optics. They concern both old
\cite{G}-\cite{G2}-\cite{Se} and recent \cite{Che}-\cite{CGM}-\cite{CGM1}
results which all are devoted to one phase expansions of the type
\begin{equation} \label{ogmonop}
{\bf u}^\eps_\natural (t,x) \, := \, {\bf u}_0 (t,x) \, + \, {{\bf r}m\bf h}box{$ {\bf s}um_{k=1}^\infty $}
\, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ U_k \bigl( t,x, \eps^{-1} \, {\bf v}arphi_0 (t,x) \bigr)
\, . \quad \
\end{equation}
When $ l=1 $, one is faced with {\it weakly non linear geometric optics}.
The asymptotic behavior and the stability of $ {\bf u}^\eps_\natural $ are
well understood. In fact a complete theory has been achieved in the
general framework of multidimensional systems of conservation laws
(see \cite{G}-\cite{G2} and the related references). Because of the
formation of shocks, the life span of exact solutions close to
$ {\bf u}^\eps_\natural $ does not go beyond $ T {\bf s}imeq 1 $.
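\medskip
\noindent{To fix ideas, here is a minimal illustration of this limitation (a heuristic example added for the reader's convenience, stated for the scalar Burgers equation rather than for a system).} Consider
\medskip
$ \partial_t u \, + \, u \ \partial_x u \, = \, 0 \, , \qquad u(0,x) \, = \, \eps \ U ( x , \eps^{-1} \, x ) \, , \qquad \partial_\theta U \not \equiv 0 \, . $
\medskip
\noindent{The classical solution breaks down at the time $ T_\eps = - 1 / \min_x \, \partial_x [ \, \eps \, U(x, \eps^{-1} x) \, ] $. This derivative equals $ \partial_\theta U + \bigcirc(\eps) $, which (for a non constant periodic profile) takes negative values of size one, so that the blow-up time satisfies $ T_\eps \simeq 1 $ uniformly in $ \eps $, in agreement with the bound on the life span recalled above.}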
\mathfrak{e}dskip
\noindent{When $ l=2 $, expressions as $ {\bf u}^\eps_\natural $ are called
{\it strong} oscillations.} The construction of such BKW solutions can
be undertaken only if the system of conservation laws has a special
structure. {\it Transparency} conditions are needed to progress. They
can be deduced from the presence of a linearly degenerate field
\cite{CGM}. In the hyperbolic situation the family $ \{ {\bf u}^\eps_\natural
\}_{\varepsilon\in \, ]0,1]} $ is unstable \cite{CGM} on the interval $ [0,T] $.
It becomes stable on condition that a small viscosity is incorporated
\cite{Che}. Applications can be given to describe large-scale
motions in the atmosphere \cite{Che}.
\mathfrak{e}dskip
\noindent{Compressible Euler equations are the prototype of a non
linear hyperbolic system having a linearly degenerate field.} After
a finite time, singularities appear. These correspond to the generation
of shocks by compression \cite{Si}. The situation is different in the
incompressible setting. There is no genuine shock and the production
of singularities poses a much more subtle problem \cite{BKM}-\cite{CF}
which up to now remains basically open.
\mathfrak{e}dskip
\noindent{Incompressible fluid equations lie at an extreme end, in the
sense that they are the most {\it degenerate} (or the most {\it linear})
of the equations just mentioned.} Following the approach of
\cite{JMR1} related to {\it transparency}, repeating the reasoning
which goes from \cite{G}-\cite{G2} to \cite{Che}-\cite{CGM}, one
expects to go further than $ l = 2 $ when dealing with $ (\mathfrak{a}thcal{E}) $.
Now, this is precisely what the Theorem {\bf r}ef{appBKW} says, since
it allows us to reach any $ l \in {\mathfrak{a}thbb N}_* \, !$
\mathfrak{e}dskip
\noindent{To tackle the limit case $ l= \infty $, one is tempted to look
at asymptotic expansions of the form
\begin{equation} \label{ogmonopl}
\ {\bf u}^\eps_\infty (t,x) \, := \, {{\bf r}m\bf h}box{$ {\bf s}um_{k=0}^\infty $} \, \eps^k \
U_k \bigl( t,x, \eps^{-1} \, {\bf v}arphi_0 (t,x) \bigr) \, , \qquad
{\bf p}artial_\theta U_0^* \not \equiv 0 \, . \qquad \quad
\end{equation}
The oscillations contained in $ {\bf u}^\eps_\infty $ have a {\it large}
amplitude. Modulation equations for $ U_0 $ are proposed in \cite{Se}.
However, these transport equations are not hyperbolic, so that they are
ill posed (in the sense of Hadamard) with respect to the initial value
problem. This confirms that a BKW construction based on ({\bf r}ef{ogmonopl})
is not relevant{{\bf r}m\bf f}ootnote{The singularities are carried here by the velocity
field. The discussion is very different when the oscillations are
polarized on the entropy \cite{CGM1}.}.
\mathfrak{e}dskip
\noindent{The contribution \cite{Se} does not explain why the expansion
({\bf r}ef{ogmonopl}) is not the right one.} We come back to this point below.
At first sight the Theorem {\bf r}ef{appBKW} does not include large
amplitude waves since $ {\bf u}^\eps_{{\bf r}m\bf f}lat - {\bf u}_0 = \bigcirc ( \eps^{{{\bf r}m\bf f}rac
{1}{l}} ) \ll \bigcirc(1) $. A change of variables dispels
this impression. Suppose that $ {\bf u}_0 \equiv 0 $ and $ {\bf p}art_\theta
U_1^* \not \equiv 0 $. Then define
\mathfrak{e}dskip
$ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) \, := \, \eps^{-{{\bf r}m\bf f}rac{1}{l}} \ {\bf u}^\eps_{{\bf r}m\bf f}lat
(\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x) \, , \qquad \dot {\bf p}^\eps_{{\bf r}m\bf f}lat (t,x) \, :=
\, \eps^{-{{\bf r}m\bf f}rac{2}{l}} \ {\bf p}^\eps_{{\bf r}m\bf f}lat (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x) \, . $
\mathfrak{e}dskip
\noindent{Observe that the structure of $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat $ and
$ \dot {\bf p}^\eps_{{\bf r}m\bf f}lat $ is very different from the one in ({\bf r}ef{ogmonopl})
since we have
$$ \quad \left. \begin{array}{l}
\dot {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) = {\bf s}um_{k=1}^\infty \, \eps^{
{{\bf r}m\bf f}rac{k-1}{l}} \ U_k \bigl( \eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x, \eps^{-1} \,
{\bf v}arphi^\eps_g (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x) \bigr) +
\eps^{-{{\bf r}m\bf f}rac{1}{l}} \ {{\bf r}m\bf c} {\bf u}^\eps_{{\bf r}m\bf f}lat (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x)
\, , \\
\dot {\bf p}^\eps_{{\bf r}m\bf f}lat (t,x) = {\bf s}um_{k=1}^\infty \, \eps^{
{{\bf r}m\bf f}rac{k-2}{l}} \ P_k \bigl( \eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x, \eps^{-1} \,
{\bf v}arphi^\eps_g (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x) \bigr) +
\eps^{-{{\bf r}m\bf f}rac{2}{l}} \ {{\bf r}m\bf c} {\bf p}^\eps_{{\bf r}m\bf f}lat (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x) \, .
\end{array} {\bf r}ight. $$
The functions $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat $ and $ \dot {\bf p}^\eps_{{\bf r}m\bf f}lat $
satisfy
\mathfrak{e}dskip
$ {\bf p}art_t \dot {\bf u}^\eps_{{\bf r}m\bf f}lat + ( \dot {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla)
\dot {\bf u}^\eps_{{\bf r}m\bf f}lat + \nabla \dot {\bf p}^\eps_{{\bf r}m\bf f}lat = \dot {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat \, ,
\quad \ \Div \, \dot {\bf u}^\eps_{{\bf r}m\bf f}lat = 0 \, , \quad \ \dot {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat
(t,x) = \eps^{-{{\bf r}m\bf f}rac{2}{l}} \ {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat (\eps^{-{{\bf r}m\bf f}rac{1}{l}} \, t,x)
\, . $
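\medskip
\noindent{These relations are nothing but the classical scaling invariance of the incompressible Euler equations, recalled here as a side remark: if $ (v,q) $ solves the system with forcing $ F $, then, for any $ \lambda > 0 $, the functions $ \lambda \, v (\lambda t , x) $ and $ \lambda^2 \, q (\lambda t , x) $ solve it with forcing $ \lambda^2 \, F (\lambda t , x) $. The formulas above correspond to the choice $ \lambda = \eps^{- \frac{1}{l}} $.}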
\mathfrak{e}dskip
\noindent{The functions $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat $ are oscillations
of the order $ 1 $.} They are approximate solutions of $ (\mathfrak{a}thcal{E}) $
on the {\it small} interval $ [0, \eps^{{{\bf r}m\bf f}rac{1}{l}} \, T] $.
Indeed, for all $ m \in {\mathfrak{a}thbb N} $, the family $ \{ \dot {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat
\}_\varepsilon$ satisfies the uniform bound
\mathfrak{e}dskip
$ {\bf s}up_{\varepsilon\in \, ]0,\eps_0]} \quad \eps^{- {{\bf r}m\bf f}rac{N}{l} +
{{\bf r}m\bf f}rac{2}{l} + 3 + m} \ {\bf p}arallel \dot {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat {\bf p}arallel_{
\mathfrak{a}thcal{W}^m_{\! \text{\tiny $ \eps^{(1/l)} \, T $}} } \ < \, \infty \, . \ $
\mathfrak{e}dskip
\noindent{If moreover $ N = + \infty $ and}
\begin{equation} \label{speini}
\left. \begin{array}{l}
{\bf v}arphi_1 (0,\cdot) \, \equiv \, \cdots \, \equiv \, {\bf v}arphi_{l-1}
(0,\cdot) \, \equiv \, 0 \, , \\
U_{k+1} (0,\cdot) \, \equiv \, 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in {\mathfrak{a}thbb N}
{\bf s}etminus ( l \, {\mathfrak{a}thbb N}) \, , \qquad \qquad \qquad \qquad \qquad \
\end{array} {\bf r}ight.
\end{equation}
the trace $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat (0,\cdot) $ has the form
\mathfrak{e}dskip
$ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat (0,x) \, = \ {\bf s}um_{k=0}^\infty \, \eps^k \
U_{1 + l \, k} \bigl(0,x, \eps^{-1} \, {\bf v}arphi_0 (0,x) \bigr) \, ,
\qquad {\bf p}art_\theta U_1^* \, \not \equiv \, 0 \, .$
\mathfrak{e}dskip
\noindent{At the time $ t=0$, we recover ({\bf r}ef{ogmonopl}). Now
the construction underlying the Theorem {\bf r}ef{appBKW} reveals that
in general}
\begin{equation} \label{phasenonn}
{\bf v}arphi_k(t,\cdot) \, \not \equiv \, 0 \, , \qquad {{\bf r}m\bf f}orall \, t \in \, ]0,T] \, , \qquad
{{\bf r}m\bf f}orall \, k \in \{ 2, \cdots, l-1 \} \, . \quad
\end{equation}
The functions $ {\bf v}arphi_j $ with $ j \in \{2,\cdots, l-1 \} $ are
not present when $ t = 0 $. But the description of $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat
(t,\cdot) $ on the interval $ [0, \eps^{1 - {{\bf r}m\bf f}rac{k}{l}} \, T] $ with $
k \in \{2,\cdots, l-1 \} $ requires the introduction of the {\it phase
shifts} $ {\bf v}arphi_j $ for $ j \in \{2,\cdots,k\} $. More generally, the
description of $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat (t,\cdot) $ on the whole interval
$ [0, T] $ needs the introduction of an {\it infinite cascade} of phases
$ \{ {\bf v}arphi_j \}_{j \in {\mathfrak{a}thbb N}_*} $.
\mathfrak{e}dskip
\noindent{Such a phenomenon does not occur when constructing large
amplitude oscillations for systems of conservation laws in one space
dimension \cite{CG}-\cite{E2}.} It is specific to the multidimensional
framework. It explains why the classical approach of \cite{Se} fails.
\mathfrak{e}dskip
\noindent{It seems that the creation of the $ {\bf v}arphi_j $ is due to
mechanisms which have not been studied before.} It is not linked to
{\it resonances}. It is related neither to {\it dispersive} nor to
{\it diffractive} effects.
{\bf v}skip -1mm
\noindent{{\it Remark 3.5.1 (about $ {\bf v}arphi_1 $):}} The term $ {\bf v}arphi_1 $
does not appear if $ {\bf v}arphi_1 (0,\cdot) \equiv 0 $ and $ \bar U_1
(0,\cdot) \equiv 0 $. When these two conditions are not satisfied, the
phase shift $ {\bf v}arphi_1 $ can be absorbed by the technical trick
presented in \cite{CGM}. Just replace $ {\bf u}_0 (0,\cdot) $ by $ {\bf u}_0
(0,\cdot) + \delta \, \bar U_1 (0,\cdot) $. Perform the BKW calculus
with a fixed $ \delta > 0 $. Then choose $ \delta = \varepsilon$. {{\bf r}m\bf h}fill
$ \triangle $
\noindent{{\it Remark 3.5.2 (about $ {\bf v}arphi_2 $):}} In general, we
have $ {\bf v}arphi_2 \not \equiv 0 $ even if
\mathfrak{e}dskip
$ {\bf v}arphi_1 (0,\cdot) \equiv {\bf v}arphi_2 (0,\cdot) \equiv 0 \, , \qquad
\bar U_1 (0,\cdot) \equiv \bar U_2 (0,\cdot) \equiv 0 \, . $
\mathfrak{e}dskip
\noindent{Indeed the time evolution of $ \bar U_2 $ is governed
by ({\bf r}ef{moy2}).} It involves the source term $ \Div \, \langle
U^*_1 \otimes U^*_1 {\bf r}angle $ which is able to awake $ \bar U_2 $. This
influence can then be transmitted to $ {\bf v}arphi_2 $ through the
transport equation
\begin{equation} \label{trans2}
{\bf p}art_t {\bf v}arphi_2 + ({\bf u}_0 \cdot \nabla) {\bf v}arphi_2 + (\bar U_1
\cdot \nabla) {\bf v}arphi_1 + (\bar U_2 \cdot \nabla) {\bf v}arphi_0
\, = \, 0 \, . \qquad \qquad
\end{equation}
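\noindent{Let us indicate, as a sketch which anticipates the computations of paragraph 4.2, where ({\ref{trans2}}) comes from. The geometrical phase satisfies an eikonal type relation of the form $ \partial_t \varphi^\eps_g + (\bar u^\eps_\flat \cdot \nabla) \, \varphi^\eps_g = \bigcirc (\eps) $ with $ \bar u^\eps_\flat = {\bf u}_0 + \eps^{\frac{1}{l}} \, \bar U_1 + \eps^{\frac{2}{l}} \, \bar U_2 + \cdots $. Plugging $ \varphi^\eps_g = \varphi_0 + \eps^{\frac{1}{l}} \, \varphi_1 + \eps^{\frac{2}{l}} \, \varphi_2 + \cdots $ into this relation and collecting the terms of order $ \eps^{\frac{2}{l}} $ yields exactly the left hand side of ({\ref{trans2}}).}
\medskip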
Likewise, the other terms $ {\bf v}arphi_3 $, $\cdots $,
$ {\bf v}arphi_{l-1} $ are in general non trivial even if
\mathfrak{e}dskip
$ {\bf v}arphi_1 (0,\cdot) \equiv \cdots \equiv {\bf v}arphi_{l-1} (0,\cdot)
\equiv 0 \, , \qquad \bar U_1 (0,\cdot) \equiv \cdots \equiv
\bar U_{l-1} (0,\cdot) \equiv 0 \, . $
\mathfrak{e}dskip
\noindent{There is no longer any trick which allows one to get rid of
$ {\bf v}arphi_2 $, $ \cdots$, $ {\bf v}arphi_{l-1} $.} {{\bf r}m\bf h}fill $ \triangle $
\noindent{{\it Remark 3.5.3 (why turbulent flows?):}} The introduction
of the phase shifts $ {\bf v}arphi_k $ with $ 2 \leq k \leq l-1 $ cannot
be avoided. Therefore the difficulties that we deal with appear from
$ l = 3 $. When $ l {{\bf r}m\bf g}eq 3 $, the characteristic rate $ e $ of eddy
dissipation is bigger than one \cite{C}. This is the reason why such
situations are referred to as {\it turbulent regimes}. {{\bf r}m\bf h}fill $ \triangle $
\noindent{{\it Remark 3.5.4 (about shear layers):}} We have said in
the introduction that the expression $ {\bf u}^\eps_s $ given by formula
({\bf r}ef{oscidipmaj}) is of a very special form. Let us explain why.
Change the variable $ t $ into $ \eps^{{{\bf r}m\bf f}rac{1}{l}} \, t $
and $ {\bf u}^\eps_s $ into $ \dot {\bf u}^\eps_s := \eps^{{{\bf r}m\bf f}rac{1}{l}} \,
{\bf u}^\eps_s $. The main phase $ {\bf v}arphi_0 (t,x) \equiv x_2 $ remains
the same. Now we are faced with
\mathfrak{e}dskip
$ \dot {\bf u}^\eps_s (t,x) \, := \, {}^t \bigl( \eps^{{{\bf r}m\bf f}rac{1}{l}} \,
{{\bf r}m\bf g} ( x_2,\eps^{-1} \, x_2) , 0 , \eps^{{{\bf r}m\bf f}rac{1}{l}} \, {{\bf r}m\bf h} \bigl(
x_1 - \eps^{{{\bf r}m\bf f}rac{1}{l}} \, {{\bf r}m\bf g} (x_2, \eps^{-1} \, x_2) \, t , x_2,
\eps^{-1} \, x_2 \bigr) \bigr) \, . $
\mathfrak{e}dskip
\noindent{It is still a solution of Euler equations.} Now it falls
in the framework of the Theorem {\bf r}ef{appBKW}. The constraints
on $ \bar U_2 = {}^t (\bar U_2^1, \bar U_2^2, \bar U_2^3 ) $
reduce to
\mathfrak{e}dskip
$ \bar U_2^1 \, \equiv \, \bar U_2^2 \, \equiv \, 0 \, , \qquad
{\bf p}art_t \bar U_2^3 \, + \, \langle {{\bf r}m\bf g} \, {\bf p}art_1 {{\bf r}m\bf h} {\bf r}angle \,
= \, 0 \, . $
\mathfrak{e}dskip
\noindent{The contribution $ \bar U_2 $ is non trivial but it is
polarized so that $ \bar U_2 \cdot \nabla {\bf v}arphi_0 \equiv 0 $.}
Therefore it does not produce the phase shift $ {\bf v}arphi_2 $. The
same phenomenon occurs concerning $ {\bf v}arphi_3 $, $ \cdots , $
$ {\bf v}arphi_{l-1} $. These terms are not present. It turns out that
the expansion $ {\bf u}^\eps_s $ involves only the phase $ {\bf v}arphi_0
(t,x) \equiv x_2 $. {{\bf r}m\bf h}fill $ \triangle $
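\medskip
\noindent{As a complement to the remark 3.5.4, here is a direct verification (an elementary computation added for convenience) that $ \dot {\bf u}^\eps_s $ is an exact solution of the incompressible Euler equations with pressure $ p \equiv 0 $.} Write $ \dot {\bf u}^\eps_s = {}^t (u^1 , 0 , u^3) $, where $ u^1 (x) = \eps^{\frac{1}{l}} \, {\rm g} (x_2 , \eps^{-1} x_2) $ depends only on $ x_2 $ and where $ u^3 $ depends only on $ (t, x_1, x_2) $. The divergence vanishes since $ u^1 $ does not involve $ x_1 $ and $ u^3 $ does not involve $ x_3 $. Moreover $ \partial_t u^1 + ( \dot {\bf u}^\eps_s \cdot \nabla ) \, u^1 = 0 $, while
\medskip
$ \partial_t u^3 + ( \dot {\bf u}^\eps_s \cdot \nabla ) \, u^3 \, = \, - \, u^1 \ \eps^{\frac{1}{l}} \, \partial_1 {\rm h} \, + \, u^1 \ \eps^{\frac{1}{l}} \, \partial_1 {\rm h} \, = \, 0 \, . $
\medskip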
\noindent{The choice of the amplitude of the oscillations is very
important.} It is strongly related to the time scale $ T $ under
consideration. The idea is to increase the time of propagation $ T $
to reach the regime where non linear effects appear. Starting
with some large amplitude high frequency waves
\mathfrak{e}dskip
$ {\bf u}^\eps_\infty (0,x) \, = \, U_0 \bigl( 0,x, \eps^{-1} \, {\bf v}arphi_0
(0,x) \bigr) \, + \, \bigcirc (\eps) \, , \qquad {\bf p}artial_\theta U_0^*
(0,\cdot) \not \equiv 0 \, , $
\mathfrak{e}dskip
\noindent{the preceding discussion can be summarized by the following
diagram:}
$$ \left. \begin{array}{ccccccc}
T {\bf s}imeq 1 &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \\
\ & \left. \begin{array}{c}
\text{{\bf s}criptsize infinite cascade} \\
\text{{\bf s}criptsize of phases} \\
\text{{\bf s}criptsize ${\bf v}arphi_0 - ({\bf v}arphi_1) - \cdots $}
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\\
\text{{\bf s}criptsize turbulent} \\
\text{{\bf s}criptsize flows}
\end{array} {\bf r}ight.
& \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight.
& \left. \begin{array}{c}
\\
\text{{{\bf r}m\bf f}ootnotesize incompressible} \\
\text{{{\bf r}m\bf f}ootnotesize fluid equations}
\end{array} {\bf r}ight.
\\
T {\bf s}imeq \eps^{{{\bf r}m\bf f}rac{1}{3}} \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \\
\ & \text{{\bf s}criptsize ${\bf v}arphi_0 - ({\bf v}arphi_1) - {\bf v}arphi_2 $} & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{{{\bf r}m\bf f}ootnotesize turbulent} \\
\text{{{\bf r}m\bf f}ootnotesize flows}
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{{{\bf r}m\bf f}ootnotesize incompressible} \\
\text{{{\bf r}m\bf f}ootnotesize fluid equations}
\end{array} {\bf r}ight. \\
T {\bf s}imeq \eps^{{{\bf r}m\bf f}rac{1}{2}} \ & -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ & -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \\
&\text{{{\bf r}m\bf f}ootnotesize $ {\bf v}arphi_0 - ({\bf v}arphi_1) $} & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{{\bf s}mall strong}\\
\text{{\bf s}mall oscillations} \\
\cite{Che}-\cite{CGM}
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{{\bf s}mall systems of conservation}\\
\text{{\bf s}mall laws with a linearly} \\
\text{{\bf s}mall degenerate field}
\end{array} {\bf r}ight.
\\
T {\bf s}imeq \varepsilon& -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ & -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \\
& {\bf v}arphi_0 & \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{weakly} \\
\text{non linear} \\
\text{geometric}\\
\text{optics} \\
\cite{G}-\cite{G2}
\end{array} {\bf r}ight.
& \left. \begin{array}{c}
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d \\
\mathfrak{i}d
\end{array} {\bf r}ight. & \left. \begin{array}{c}
\text{\large systems} \\
\text{\large of} \\
\text{\large conservation} \\
\text{\large laws}
\end{array} {\bf r}ight. \\
T=0 & -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ &
-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-
& \ & -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!- \\
& \text{\large \bf phases} & \mathfrak{i}d & \text{\large \bf regimes} & \mathfrak{i}d & \text{\large \bf equations} \\
\end{array} {\bf r}ight. \quad $$
\noindent{This picture allows one to understand the position of the
present paper in comparison with previous results.}
{\bf s}mallskip
{\bf s}ubsection{The statistical approach.} It deals mainly with
{\it quantitative} information obtained at the level of expressions,
say $ \mathfrak{u} (x) $, which in general do not depend on the time $ t $.
The introduction of $ \mathfrak{u} $ can be achieved by looking at {\it
stationary statistical} solutions \cite{FMRT} of the Navier-Stokes
equations that is
\mathfrak{e}dskip
$ \mathfrak{u}(x) \, \equiv \, \lim_{\, T \, \longrightarrow \, \infty} \quad
{{\bf r}m\bf f}rac{1}{T} \ \int_0^T \, {\bf u}(t,x) \ dt $
\mathfrak{e}dskip
\noindent{or in conjunction with the {\it ensemble average operator}
(\cite{L}-V-6) denoted by the brackets $ < \cdot > $.} We will follow
this second option. The description below is extracted from the book
of M. Lesieur \cite{L} (chapters V and VI). We work with $ d = 3 $.
Interesting quantities are the mean kinetic energy
\mathfrak{e}dskip
$ {{\bf r}m\bf f}rac{1}{2} \ < \mathfrak{u}(x)^2 > \ {\bf s}im \, \int_{{\mathfrak{a}thbb R}^3} \, {\bf v}ert \mathfrak{u}(x) {\bf v}ert^2 \ dx \, , $
\mathfrak{e}dskip
\noindent{the enstrophy (that is the space integral of the square norm of the vorticity)}
\mathfrak{e}dskip
$ {{\bf r}m\bf f}rac{1}{2} \ < \omega (x)^2 > \ {\bf s}im \, \int_{{\mathfrak{a}thbb R}^3} \, {\bf v}ert \omega (x)
{\bf v}ert^2 \ dx \, , \qquad \omega (x) := \nabla {\bf w}edge \mathfrak{u} (x) $
\mathfrak{e}dskip
\noindent{and the rate of dissipation $ e \, {\bf s}im \, \kappa \ <
\omega (x)^2 > $.} In the setting of {\it isotropic} turbulence,
these quantities can be expressed in terms of a scalar function
$ k \longmapsto E(k) $. The real number $ E(k) $ represents the
density of kinetic energy at wave number $ k $ (or the kinetic
energy in Fourier space integrated on a sphere of radius $ k $).
The relations are the following
\mathfrak{e}dskip
\noindent{\cite{L}}-V-10-4$ \qquad \ {{\bf r}m\bf f}rac{1}{2} \ < \mathfrak{u}(x)^2 > \ = \, \int_0^{+ \infty} \,
E(k) \ dk \, . $
\mathfrak{e}dskip
\noindent{\cite{L}}-V-10-15$ \qquad \! {{\bf r}m\bf f}rac{1}{2} \ < \omega(x)^2 > \ = \, \int_0^{+ \infty} \,
k^2 \ E(k) \ dk \, . $
\mathfrak{e}dskip
\noindent{\cite{L}}-VI-3-15$ \qquad e \, = \, 2 \ \kappa \ \int_0^{+ \infty} \, k^2 \
E(k) \ dk \, . $
\mathfrak{e}dskip
\noindent{Kolmogorov's theory assumes that}
\mathfrak{e}dskip
\noindent{\cite{L}}-VI-4-1 $ \qquad \exists \ c > 0 \, ; \qquad E(k) \,= \, c \ e^{2/3} \
k^{-5/3} \, , \qquad {{\bf r}m\bf f}orall \, k \in [k_i,k_d] \, . $
\mathfrak{e}dskip
\noindent{This law is valid up to the frequency $ k_d $ with}
\mathfrak{e}dskip
\noindent{\cite{L}}-VI-4-2 $ \qquad \, k_d \ {\bf s}im \ ( \, e \, / \, \kappa^3 \, )^{1/4} \, . $
\mathfrak{e}dskip
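\noindent{As a small consistency check (this is the standard heuristic argument, reproduced here for convenience), the relation \cite{L}-VI-4-2 can be recovered by inserting the Kolmogorov law \cite{L}-VI-4-1 into \cite{L}-VI-3-15 and by cutting the integral at the wave number $ k_d $ (the contribution of the small wave numbers being neglected):}
\medskip
$ e \ \simeq \ 2 \ \kappa \ c \ e^{2/3} \, \int_0^{k_d} k^{1/3} \ dk \ = \ \frac{3 \, c}{2} \ \kappa \ e^{2/3} \ k_d^{4/3} \, , $
\medskip
\noindent{so that $ e^{1/3} \sim \kappa \ k_d^{4/3} $, which gives back $ k_d \sim ( \, e \, / \, \kappa^3 \, )^{1/4} $.}
\medskip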
\noindent{The small quantity $ \varepsilon:= k_d^{-1} $ is the Kolmogorov
dissipative scale.} The relations \cite{L}-VI-3-15 and \cite{L}-VI-4-2
imply that the rate of injection of kinetic energy $ e $ is linked to
the number $ l $ according to $ e {\bf s}im \eps^{-1+{{\bf r}m\bf f}rac{3}{l}} $. We
recover here that $ e {\bf s}im 1 $ when $ l =3 $ (see \cite{C}).
\mathfrak{e}dskip
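\noindent{Here is one heuristic way (a sketch under the conventions above, not a rigorous argument) to recover this exponent. The oscillations described in the Theorem \ref{appBKW} have amplitude $ \eps^{\frac{1}{l}} $ and wavelength $ \eps = k_d^{-1} $, so that the vorticity is of size $ \eps^{\frac{1}{l} - 1} $ and $ e \sim \kappa \, \langle \omega^2 \rangle \sim \kappa \ \eps^{\frac{2}{l} - 2} $. On the other hand \cite{L}-VI-4-2 yields $ e \sim \kappa^3 \, k_d^4 = \kappa^3 \, \eps^{-4} $. Eliminating $ e $ between these two relations gives $ \kappa \sim \eps^{1 + \frac{1}{l}} $, and therefore $ e \sim \eps^{1 + \frac{1}{l}} \ \eps^{\frac{2}{l} - 2} = \eps^{-1 + \frac{3}{l}} $.}
\medskip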
\noindent{A starting point for the conventional theory of turbulence is
the notion that, on average, kinetic energy is transferred from low wave
number modes to high wave number modes. A recent paper \cite{FMRT} puts
forward the following idea: in the spectral region below that of injection
of energy, an inverse (from high to low modes) transfer of energy takes
place. At any rate, it is a central question to determine how the kinetic
energy is distributed.}
{\bf s}ubsection{Phenomenological comparison.} The statistical
approach is concerned with the spectral properties of solutions.
Below, we draw a parallel with the propagation of quasi-singularities
as it is described in the Theorem {\bf r}ef{appBKW}.
\mathfrak{e}dskip
\noindent{Let us examine how the square modulus $ {\bf v}ert \mathfrak{a}thcal{F}({\bf u}^\eps_{{\bf r}m\bf f}lat) (t,\xi) {\bf v}ert^2 $
of the Fourier transform of $ {\bf u}^\eps_{{\bf r}m\bf f}lat(t,x) $ is distributed.} To
this end, consider the application
{\bf v}skip -5mm
$$ \left. \begin{array} {rcl}
\tilde E(t,\cdot) \, : \, {\mathfrak{a}thbb R}^+ & \longrightarrow & {\mathfrak{a}thbb R}^+ \\
k \ & \longmapsto & \tilde E(t,k) \, := \, \int_{\{ \xi
\in {\mathfrak{a}thbb R}^d \, ; \, {\bf v}ert \xi {\bf v}ert = k \}} \ {\bf v}ert \mathfrak{a}thcal{F}
({\bf u}^\eps_{{\bf r}m\bf f}lat) (t,\xi) {\bf v}ert^2 \ \, d {\bf s}igma (\xi) \, . \quad
\end{array} {\bf r}ight. \ $$
{\bf v}skip -1mm
\noindent{The initial data $ {\bf u}^\eps_{{\bf r}m\bf f}lat (0,\cdot) $ has a {\it spectral gap}.}
In other words, the graph of the function $ k \longmapsto \tilde
E(0,k) $ appears concentrated around the two characteristic wave
numbers $ k {\bf s}imeq 1 $ and $ k {\bf s}imeq\eps^{-1} = k_d $. In view of
({\bf r}ef{phasenonn}), this situation does not persist. At the time
$ t = \eps^{{{\bf r}m\bf f}rac{1}{l}} $, the concentration is around $ l $
characteristic wave numbers which are intermediate between the
two preceding ones. This corresponds to a {\it discrete} cascade
of energy.
\mathfrak{e}dskip
\noindent{Suppose now ({\bf r}ef{speini}) and consider $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat $.}
The life span of $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat (t,\cdot) $ is $ \eps^{{{\bf r}m\bf f}rac{1}{l}}
\, T $. There are various ways to obtain a family $ \{\dot {\bf u}^\eps_{{\bf r}m\bf f}lat(t,\cdot)
\}_{\varepsilon\in \, ]0,1] } $ which is defined on some interval $ [0,\tilde T] $ with
$ \tilde T > 0 $ independent of $ \varepsilon$. In particular, we can
\mathfrak{e}dskip
\noindent{a) Select any $ \tilde T > 0 $ when $ T = + \infty $.} However
nothing guarantees that the functions $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat $ are still
approximate solutions on the interval $ [0,\tilde T] $. Indeed, since
$ t $ is replaced by $ \eps^{- (1/l)} \, t $, the size of the error
terms $ \dot {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat $ depends on the increase of $ {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat $
with respect to $ t $. At this level, we are faced with secular growth
problems \cite{La}.
\mathfrak{e}dskip
\noindent{b) Use a convergence process{{\bf r}m\bf f}ootnote{When performing
the formal analysis, arbitrary values can be given to the
parameters $ \varepsilon\in \, ]0,1] $ and $ l \in {\mathfrak{a}thbb N}_* $. For instance
$ \varepsilon$ can be fixed whereas $ l $ goes to $ \infty $. Or
$ l = - (\ln \eps) / (\ln 2) $ so that $ \eps^{{{\bf r}m\bf f}rac{1}{l}} \,
T = {{\bf r}m\bf f}rac{1}{2} \, T > 0 $.} which needs the introduction of
an {\it infinite} cascade of phase shifts. The intuition{{\bf r}m\bf f}ootnote{
Even at a formal level, difficulties occur in order to justify
the different convergences. Rigorous results in this direction
seem to be a difficult task.} is that the graph of $ \tilde E $
becomes continuous (no more gap). This corresponds to the
impression of an {\it infinite} cascade of energy. This
remark is consistent with engineering experiments and
the observations reported in the statistical approach.
The turbulent phenomena which we study are very complex in their
realization. When $ t > 0 $, the description of $ \dot {\bf u}^\eps_{{\bf r}m\bf f}lat
(t,\cdot) $ involves an infinite set of phases so that computations
and representations are hard to implement. This gives the impression
of chaos. Nevertheless, our analysis reveals that these phenomena
contain no mystery in their generation. On the contrary, quantitative
and qualitative features can be predicted in the framework of non
linear geometric optics.
{\bf s}mallskip
$\, $
{\bf s}ection{Euler equations in the variables $ (t,x,\theta) $.}
As explained in the previous chapter, the demonstration of
the Theorem {\bf r}ef{appBKW} is achieved with the representation
\begin{equation} \label{BKWdelbis}
\ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat (t,x) = \tilde u^\eps_{{\bf r}m\bf f}lat \bigl(
t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \bigr) \, , \quad
\ \tilde {\bf p}^\eps_{{\bf r}m\bf f}lat (t,x) = \tilde p^\eps_{{\bf r}m\bf f}lat \bigl(
t,x, \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \bigr) \, . \ \
\end{equation}
Recall that the complete phase $ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) $ is
\begin{equation} \label{phasecomplete}
{\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \, = \, {\bf v}arphi^\eps_g (t,x) + \varepsilon\
{\bf v}arphi^\eps_a (t,x) \, = \, {\bf v}arphi_0 (t,x) + {{\bf r}m\bf h}box{$ {\bf s}um_{
k=1}^N $} \ \eps^{{{\bf r}m\bf f}rac{k}{l}} \ {\bf v}arphi_k (t,x) \
\end{equation}
and that the profiles $ \tilde u^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) $
and $ \tilde p^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) $ have the form
\begin{equation} \label{feclate}
\left. \begin{array}{l}
\tilde u^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) \, = \, {\bf u}_0 (t,x) +
{\bf s}um_{k=1}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde U_k ( t,x,
\theta) \, , \qquad \qquad \qquad \qquad \, \\
\tilde p^\eps_{{\bf r}m\bf f}lat ( t,x, \theta ) \, = \, {\bf p}_0 (t,x) +
{\bf s}um_{k=1}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde P_k ( t,x,
\theta) \, .
\end{array} {\bf r}ight.
\end{equation}
{\bf s}ubsection{Preliminaries.}
$ \bullet $ {\bf Anisotropic viscosity.} Introduce the
abbreviated notations
\mathfrak{e}dskip
$ X^\eps_{{\bf r}m\bf f}lat (t,x) := \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) =
{\bf s}um_{k=0}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ X_k (t,x) \, , \qquad
X_k (t,x) := \nabla {\bf v}arphi_k (t,x) \, , $
\mathfrak{e}dskip
$ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} (t,x) := {\bf v}ert X^\eps_{{\bf r}m\bf f}lat (t,x)
{\bf v}ert^{-1} \ X^\eps_{{\bf r}m\bf f}lat (t,x) \, . $
\mathfrak{e}dskip
\noindent{Complete the unit vector $ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} (t,x) $
into some orthonormal basis of $ {\mathfrak{a}thbb R}^d $}
\mathfrak{e}dskip
$ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} (t,x) \cdot \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat j} (t,x) =
\delta_{ij} \, , \qquad {{\bf r}m\bf f}orall \, (i,j) \in \{1, \cdots ,
d \}^2 \, , $
\mathfrak{e}dskip
\noindent{so that all the vector fields $ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} $
are smooth functions on $ [0,T] \times {\mathfrak{a}thbb R}^d $.} The
corresponding differential operators are denoted
\mathfrak{e}dskip
$ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} ({\bf p}art ) := \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} (t,x)
\cdot \nabla \, , \qquad i \in \{1, \cdots , d \} \, . $
\mathfrak{e}dskip
\noindent{Their adjoints are}
\mathfrak{e}dskip
$ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} ({\bf p}art )^* := \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} (t,x)
\cdot \nabla + \Div \, ( \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat i} ) (t,x) \, ,
\qquad i \in \{1, \cdots , d \} \, . $
\mathfrak{e}dskip
\noindent{Select $ \mathfrak{q} \in C^\infty_b ( [0,T] \times {\mathfrak{a}thbb R}^d ;
S^d_+ ) $ such that}
\mathfrak{e}dskip
$ \exists \, c > 0 \, ; \qquad \mathfrak{q} (t,x) {{\bf r}m\bf g}eq c \, , \qquad
{{\bf r}m\bf f}orall \, (t,x) \in [0,T] \times {\mathfrak{a}thbb R}^d \, . $
\mathfrak{e}dskip
\noindent{Let $ (m,n ) \in {\mathfrak{a}thbb N}^2 $. Consider the elliptic
operator $ E^{\varepsilonm}_{{{\bf r}m\bf f}lat n} ({\bf p}art) $ defined according
to}
\mathfrak{e}dskip
$ E^{\varepsilonm}_{{{\bf r}m\bf f}lat n} ({\bf p}art) \, := \, \bigl( \,
\eps^{{{\bf r}m\bf f}rac{m}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} ({\bf p}art )^* \, ,
\, \eps^{{{\bf r}m\bf f}rac{n}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 2} ({\bf p}art )^* \, ,
\, \cdots \, , \, \eps^{{{\bf r}m\bf f}rac{n}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat d}
({\bf p}art )^* \, \bigr) $
$$ \qquad \qquad \qquad \qquad
\left( \begin{array} {cccc}
\mathfrak{q}_{11} (t,x) & \mathfrak{q}_{12} (t,x) &\cdots & \mathfrak{q}_{1d} (t,x) \\
\mathfrak{q}_{21} (t,x) & \mathfrak{q}_{22} (t,x) & \cdots & \mathfrak{q}_{2d} (t,x) \\
{\bf v}dots & {\bf v}dots & \ & {\bf v}dots \\
\mathfrak{q}_{d1} (t,x) & \mathfrak{q}_{d2} (t,x) & \cdots & \mathfrak{q}_{dd} (t,x)
\end{array} {\bf r}ight)
\left(
\begin{array} {c}
\eps^{{{\bf r}m\bf f}rac{m}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} ({\bf p}art ) \\
\eps^{{{\bf r}m\bf f}rac{n}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 2} ({\bf p}art ) \\
{\bf v}dots \\
\eps^{{{\bf r}m\bf f}rac{n}{l}} \, \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat d} ({\bf p}art )
\end{array} {\bf r}ight) . $$
The introduction of the operator $ E^{\varepsilonm}_{{{\bf r}m\bf f}lat n} ({\bf p}art) $
in the right-hand side of $ (\mathfrak{a}thcal{E}) $ is compatible with the propagation
of oscillations only if $ m {{\bf r}m\bf g}eq l $ and $ n {{\bf r}m\bf g}eq 0 $. We
retain the limit case $ l=m $ and $ n = 0 $. The other
situations are easier to deal with, at least when performing
formal computations.
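\medskip
\noindent{To get a feeling for this operator, consider the simplest situation (a purely illustrative choice, not the one retained below) where $ \mathfrak{q} \equiv I_d $ and where the frame $ ( \mathfrak{X}^\eps_{\flat 1} , \cdots , \mathfrak{X}^\eps_{\flat d} ) $ is made of constant vector fields, so that $ \mathfrak{X}^\eps_{\flat i} (\partial)^* = \mathfrak{X}^\eps_{\flat i} (\partial) $. Then}
\medskip
$ E^{\eps m}_{\flat n} (\partial) \, = \, \eps^{\frac{2m}{l}} \ \mathfrak{X}^\eps_{\flat 1} (\partial)^2 \, + \, \eps^{\frac{2n}{l}} \ \sum_{i=2}^d \, \mathfrak{X}^\eps_{\flat i} (\partial)^2 \, . $
\medskip
\noindent{With $ m = l $ and $ n = 0 $, this is a Laplacian weighted by $ \eps^2 $ in the direction of $ \nabla \varphi^\eps_\flat $ and by $ 1 $ in the transverse directions, that is an anisotropic viscosity which is weak in the direction of the rapid variations and of size one across it.}
\medskip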
\noindent{$ \bullet $ {\bf Interpretation in $ (t,x,\theta) $.}}
To deal with the variables $ (t,x,\theta) $, define
\mathfrak{e}dskip
$ \mathfrak{d}_{j,\eps} \, := \, \varepsilon\ {\bf p}art_j \, + \, {\bf p}art_j
{\bf v}arphi^\eps_{{\bf r}m\bf f}lat \times {\bf p}art_\theta \, , \qquad j \in
\{0, \cdots, d \} \, , $
\mathfrak{e}dskip
$ \mathfrak{d}_\varepsilon\, := \, ( \mathfrak{d}_{1,\eps} , \cdots , \mathfrak{d}_{d,\eps} )
\, , $
\mathfrak{e}dskip
$ \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, := \, {}^t ( \mathfrak{d}_{1,\eps} , \cdots ,
\mathfrak{d}_{d,\eps} ) \, = \, \varepsilon\ \nabla + X^\eps_{{\bf r}m\bf f}lat \times
{\bf p}artial_\theta \, , $
\mathfrak{e}dskip
$ \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, := \, ( \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat)^{\bf s}tar \,
= \, \varepsilon\ \Div + X^\eps_{{\bf r}m\bf f}lat \cdot {\bf p}artial_\theta \, . $
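\medskip
\noindent{The interest of these operators lies in the following elementary chain rule identity (recalled here for the reader's convenience): for any smooth profile $ a(t,x,\theta) $ and any $ j \in \{0, \cdots, d \} $,}
\medskip
$ \eps \ \partial_j \, \bigl[ \, a \bigl( t,x, \eps^{-1} \, \varphi^\eps_\flat (t,x) \bigr) \, \bigr] \, = \, ( \mathfrak{d}_{j,\eps} \, a ) \bigl( t,x, \eps^{-1} \, \varphi^\eps_\flat (t,x) \bigr) \, . $
\medskip
\noindent{In other words, after the substitution $ \theta = \eps^{-1} \, \varphi^\eps_\flat (t,x) $, the calculus built on the operators $ \mathfrak{d}_{j,\eps} $ reproduces the usual derivatives (multiplied by $ \eps $) of the corresponding functions of $ (t,x) $.}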
\mathfrak{e}dskip
\noindent{The derivatives $ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat \dag} $ become}
\mathfrak{e}dskip
$ \varepsilon\ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} ( \mathfrak{d}_\varepsilon) \, := \, \varepsilon\
\mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1} ({\bf p}art ) \, + \, {\bf v}ert X^\eps_{{\bf r}m\bf f}lat
(t,x) {\bf v}ert \times {\bf p}art_\theta \, , $
\mathfrak{e}dskip
$ \varepsilon\ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat j} ( \mathfrak{d}_\varepsilon) \, := \, \varepsilon\
\mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat j} ({\bf p}art ) \, , \qquad {{\bf r}m\bf f}orall \, j \in
\{2, \cdots, d \} \, . $
\mathfrak{e}dskip
\noindent{The action of $ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ({\bf p}art) $
expressed in the variables $ (t,x,\theta) $ gives rise to
a negative differential operator of order two, denoted
$ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ( \mathfrak{d}_\varepsilon) $.} The coefficients
of the derivatives in $ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ( \mathfrak{d}_\varepsilon) $
are of size one, except in front of $ \mathfrak{a}thfrak{X}^\eps_{{{\bf r}m\bf f}lat 1}
({\bf p}art ) $. To avoid technicalities and to simplify the
notations, we substitute the Laplacian $ \nu \, \Delta $
for $ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ( \mathfrak{d}_\varepsilon) $.
\mathfrak{e}dskip
\noindent{When $ \nu = 0 $, we recover Euler equations.}
When $ \nu > 0 $, the action $ \nu \, \Delta $ can be
viewed as the `trace' in $ (t,x,\theta) $ of the anisotropic
viscosity $ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ({\bf p}art) $. Now, consider
the Cauchy problem
\begin{equation} \label{cauchypr}
\ \left \lbrace \begin{array} {ll}
\mathfrak{d}_{0,\eps} \, \tilde u^\eps_{{\bf r}m\bf f}lat + ( \tilde u^\eps_{{\bf r}m\bf f}lat
\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, \tilde u^\eps_{{\bf r}m\bf f}lat \! \! \! &
+ \, \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, \tilde p^\eps_{{\bf r}m\bf f}lat \\
\ & = \, \nu \ \varepsilon\ \Delta \, \tilde u^\eps_{{\bf r}m\bf f}lat + \tilde
f^\eps_{{\bf r}m\bf f}lat \, , \qquad \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, \tilde
u^\eps_{{\bf r}m\bf f}lat = \tilde g^\eps_{{\bf r}m\bf f}lat \, , \qquad \\
\tilde u^\eps_{{\bf r}m\bf f}lat (0,\cdot) \, = \, \tilde h^\eps_{{\bf r}m\bf f}lat
(\cdot) \, , & \
\end{array} {\bf r}ight.
\end{equation}
with given data
\mathfrak{e}dskip
$ \tilde f^\eps_{{\bf r}m\bf f}lat \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad \tilde
g^\eps_{{\bf r}m\bf f}lat \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad \tilde h^\eps_{{\bf r}m\bf f}lat
\in H^\infty \, . $
\mathfrak{e}dskip
\noindent{Suppose that $ \nu = 0 $ and select some smooth
solution $ ( \tilde u^\eps_{{\bf r}m\bf f}lat , \tilde p^\eps_{{\bf r}m\bf f}lat ) $
of ({\bf r}ef{cauchypr}).} The expressions $ \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat $
and $ \tilde {\bf p}^\eps_{{\bf r}m\bf f}lat $ given by the formula ({\bf r}ef{BKWdelbis})
satisfy
\begin{equation} \label{cauchyprbf}
\left \lbrace \begin{array} {l}
{\bf p}art_t \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat + ( \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat \cdot \nabla)
\tilde {\bf u}^\eps_{{\bf r}m\bf f}lat + \nabla \tilde {\bf p}^\eps_{{\bf r}m\bf f}lat \, = \, \tilde
{{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat \, , \qquad \Div \, \tilde {\bf u}^\eps_{{\bf r}m\bf f}lat = \tilde
{{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat \, , \qquad \ \, \\
\tilde {\bf u}^\eps_{{\bf r}m\bf f}lat (0,\cdot) \, = \, \tilde {{\bf r}m\bf h}^\eps_{{\bf r}m\bf f}lat (\cdot) \, ,
\end{array} {\bf r}ight. \qquad
\end{equation}
where the functions $ \tilde {{\bf r}m\bf f}^\eps_{{\bf r}m\bf f}lat (t,x) $, $ \tilde
{{\bf r}m\bf g}^\eps_{{\bf r}m\bf f}lat (t,x) $ and $ \tilde {{\bf r}m\bf h}^\eps_{{\bf r}m\bf f}lat (x) $ are
obtained by replacing the variable $ \theta $ by $ \eps^{-1} \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat
(t,x) $ in the expressions $ \eps^{-1} \ \tilde f^\eps_{{\bf r}m\bf f}lat
( t,x, \theta ) $, $ \eps^{-1} \ \tilde g^\eps_{{\bf r}m\bf f}lat ( t,x,
\theta ) $ and $ \tilde h^\eps_{{\bf r}m\bf f}lat (x, \theta) $. In other
words, any solution of ({\bf r}ef{cauchypr}) with $ \nu = 0 $ yields
a solution of ({\bf r}ef{cauchyprbf}). From now on, we proceed directly
with the relaxed system ({\bf r}ef{cauchypr}).
{\bf s}mallskip
{\bf s}ubsection{The BKW analysis.}
Select a smooth solution $ {\bf u}_0(t,x) \in \mathfrak{a}thcal{W}^\infty_T $ of
\mathfrak{e}dskip
$ {\bf p}art_t {\bf u}_0 + ({\bf u}_0 \cdot \nabla ) \, {\bf u}_0 + \nabla {\bf p}_0 \,
= \, \nu \ \Delta_x {\bf u}_0 \, , \qquad \Div \ {\bf u}_0 = 0 \, . $
\mathfrak{e}dskip
\noindent{Choose a phase $ {\bf v}arphi_0 (t,x) \in C^1 ([0,T]
\times {\mathfrak{a}thbb R}^d) $ with $ \nabla {\bf v}arphi_0 (t,x) \in C^\infty_b
([0,T] \times {\mathfrak{a}thbb R}^d) $. Suppose moreover that it satisfies
the eiconal equation $ (ei) $ and the condition ({\bf r}ef{nonstaou}).}
The main step in the construction of approximate solutions is the
following intermediate result.
{\bf s}mallskip
\begin{prop} \label{appBKWinter} Select any $ {{\bf r}m\bf f}lat = (l,N) \in {\mathfrak{a}thbb N}^2 $
such that $ 0 < l < N $. Consider the following initial data
\mathfrak{e}dskip
$ \tilde U_{k0}^* (x,\theta) = \Pi_0 (0,x) \, \tilde U_{k0}^*
(x,\theta) \in H^\infty \, , \qquad 1 \leq k \leq N \, , $
{\bf s}mallskip
$ \langle \tilde U_{k0} {\bf r}angle (x) \in H^\infty \, ,
\qquad 1 \leq k \leq N \, , $
{\bf s}mallskip
$ {\bf v}arphi_{k0} (x) \in H^\infty \, , \qquad \quad \!
1 \leq k \leq N \, . $
\mathfrak{e}dskip
\noindent{There are finite sequences $ \{ \tilde U_k \}_{1 \leq
k \leq N} $ and $ \{ \tilde P_k \}_{1 \leq k \leq N} $ with}
\mathfrak{e}dskip
$ \tilde U_k (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad \tilde
P_k (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad 1 \leq k \leq N \, , $
\mathfrak{e}dskip
\noindent{and a finite sequence $ \{ {\bf v}arphi_k \}_{1 \leq k \leq N} $
with}
\mathfrak{e}dskip
$ {\bf v}arphi_k (t,x) \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad 1 \leq k \leq N \, , $
\mathfrak{e}dskip
\noindent{which are such that}
\mathfrak{e}dskip
$ \Pi_0 (0,x) \, \tilde U_k^* (0,x,\theta) = \Pi_0 (0,x) \,
\tilde U_{k0}^* (x,\theta) \, , \qquad 1 \leq k \leq N \, , $
{\bf s}mallskip
$ \langle \tilde U_k {\bf r}angle (0,x) = \langle \tilde U_{k0}
{\bf r}angle (x) \, , \qquad \! 1 \leq k \leq N \, , $
{\bf s}mallskip
$ {\bf v}arphi_k (0,x) = {\bf v}arphi_{k0} (x) \, , \qquad \quad
\ 1 \leq k \leq N \, . $
\mathfrak{e}dskip
\noindent{Define $ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat $ as in ({\bf r}ef{phasecomplete}).}
All the preceding expressions are adjusted so that the functions
$ \tilde u^\eps_{{\bf r}m\bf f}lat $ and $ \tilde p^\eps_{{\bf r}m\bf f}lat $ associated with
the expansions in ({\bf r}ef{feclate}) are approximate solutions
on the interval $ [0,T] $. More precisely, they satisfy
({\bf r}ef{cauchypr}) with
\begin{equation} \label{conclufi}
\tilde h^\eps_{{\bf r}m\bf f}lat (x, \theta) \, = \, {\bf u}_0 (0,x) + {{\bf r}m\bf h}box{$ {\bf s}um_{
k=1}^N $} \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde U_{k0} (x,\theta) \qquad
\qquad \qquad
\end{equation}
and we have
\begin{equation} \label{conclufiaussi}
\{ \tilde f^\eps_{{\bf r}m\bf f}lat \}_\varepsilon= \bigcirc( \eps^{{{\bf r}m\bf f}rac{N+1}{l}})
\, , \qquad \{ \tilde g^\eps_{{\bf r}m\bf f}lat \}_\varepsilon= \bigcirc( \eps^{
{{\bf r}m\bf f}rac{N+1}{l}}) \, . \qquad \qquad \qquad \quad
\end{equation}
\end{prop}
\mathfrak{e}dskip
\noindent{$ \bullet $ {\bf Proof of the Proposition {\bf r}ef{appBKWinter}.}}
For convenience, in this paragraph we will drop the tilde '$\, \tilde \ \, $'
on the profiles $ u^\eps_{{\bf r}m\bf f}lat $, $ p^\eps_{{\bf r}m\bf f}lat $, $ U_k $ and $ P_k $.
This modification concerns only the present proof. We hope that it
will not cause confusion: we still work here with the complete phase
$ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat $.
\mathfrak{e}dskip
\noindent{Because of ({\bf r}ef{nondegeat}) we can define the map
$ \Pi^\eps_{{\bf r}m\bf f}lat (t,x) $ which is the orthogonal projector onto the
hyperplane $ \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x)^{\bf p}erp {\bf s}ubset {\mathfrak{a}thbb R}^d $.}
We adopt the convention
\mathfrak{e}dskip
$ \Pi^\eps_{{\bf r}m\bf f}lat (t,x) = {\bf s}um_{k=0}^\infty \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \
\Pi_k (t,x) \, , \qquad \Pi_k \in \mathfrak{a}thcal{W}^\infty_T \, , \qquad \varepsilon
\in \, ]0, \eps_0] \, . $
\mathfrak{e}dskip
\noindent{Access to $ \Pi_k $ requires only the knowledge of the $ X_j $
for $ j \leq k $.} Introduce
\mathfrak{e}dskip
$ v^\eps_{{\bf r}m\bf f}lat := X^\eps_{{\bf r}m\bf f}lat \cdot u^\eps_{{\bf r}m\bf f}lat = {\bf s}um_{k=0}^\infty \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ V_k \, , \qquad \! V_k = X_k \cdot {\bf u}_0 +
{\bf s}um_{j=0}^{k-1} \, X_j \cdot U_{k-j} \, , $
\mathfrak{e}dskip
$ w^\eps_{{\bf r}m\bf f}lat := \Pi^\eps_{{\bf r}m\bf f}lat \, u^\eps_{{\bf r}m\bf f}lat = {\bf s}um_{k=0}^\infty \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ W_k \, , \qquad W_k = \Pi_k \, {\bf u}_0 + {\bf s}um_{j=0}^{k-1}
\, \Pi_j \, U_{k-j} \, . $
\mathfrak{e}dskip
\noindent{By construction}
\mathfrak{e}dskip
$ u^\eps_{{\bf r}m\bf f}lat \, = \, v^\eps_{{\bf r}m\bf f}lat \ {\bf v}ert X^\eps_{{\bf r}m\bf f}lat {\bf v}ert^{-2}
\ X^\eps_{{\bf r}m\bf f}lat + w^\eps_{{\bf r}m\bf f}lat \, , \qquad U_k \, = \, V_k \ {\bf v}ert X_0
{\bf v}ert^{-2} \ X_0 \, + W_k + h $
\mathfrak{e}dskip
\noindent{where $ h $ depends only on the $ X_j $ for $ j \leq k $
and on the $ U_j $ for $ j \leq k-1 $.}
\mathfrak{e}dskip
\noindent{The conditions prescribed in the Proposition {\bf r}ef{appBKWinter}
on the initial data $ U_{k0}^* $ allow us to fix the functions $ \nabla
{\bf v}arphi_0(0,x) \cdot U_k^* (0,x,\theta) $ as we wish.} Since
\mathfrak{e}dskip
$ V_k^* \, = \, \nabla {\bf v}arphi_0 \cdot U_k^* \, + \, {\bf s}um_{j=1}^{k-1} \,
X_j \cdot U_{k-j}^* \, , $
\mathfrak{e}dskip
\noindent{the same is true (by induction) for the components $ V_k^*
(0,x,\theta) $.} To begin with, we impose the polarization conditions
\begin{equation} \label{polafirst}
P_k^* \, \equiv \, V_k^* \, \equiv \, 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in
\{ 1,\cdots,l \} \qquad \qquad \qquad \qquad \
\end{equation}
and we adjust {\it a priori} the geometrical phase $ {\bf v}arphi^\eps_g $
so that
\begin{equation} \label{shiftfirst}
{\bf p}art_t {\bf v}arphi_k + \bar V_k = 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in \{ 1,
\cdots, l-1 \} \qquad \qquad \qquad \qquad \quad
\end{equation}
which implies that
\mathfrak{e}dskip
$ {\bf p}art_t {\bf v}arphi^\eps_g + (\bar u^\eps_{{\bf r}m\bf f}lat \cdot \nabla)
{\bf v}arphi^\eps_g \, = \, {\bf s}um_{k=l}^\infty \, \eps^{{{\bf r}m\bf f}rac{k}{l}}
\ \bar V_k \, = \, \bigcirc (\eps) \, . $
\mathfrak{e}dskip
\noindent{It amounts to the same thing to look at the equations in
({\bf r}ef{cauchypr}) or at the following singular system (we drop here
the indices $ \varepsilon$ and $ {{\bf r}m\bf f}lat $ at the level of $ u^\eps_{{\bf r}m\bf f}lat $,
$ v^\eps_{{\bf r}m\bf f}lat $, $ w^\eps_{{\bf r}m\bf f}lat $, $ p^\eps_{{\bf r}m\bf f}lat $ , $ \Pi^\eps_{{\bf r}m\bf f}lat $
and $ {\bf v}arphi^\eps_{{\bf r}m\bf f}lat $)}
\begin{equation} \label{singu}
\left \{ \begin{array}{ll}
{\bf p}art_t u + ( u \cdot \nabla ) \, u + \nabla p & \! \! \! + \,
\eps^{-1} \ ({\bf p}art_t {\bf v}arphi + v) \ {\bf p}artial_\theta u \quad \\
& \! \! \! + \, \eps^{-1} \ {\bf p}art_\theta p \ \nabla {\bf v}arphi \,
= \, \nu \ \Delta \, u \, , \\
\Div \ u + \eps^{-1} \ {\bf p}art_\theta v = 0 \, . & \
\end{array} {\bf r}ight. \qquad \qquad \
\end{equation}
The function $ v $ satisfies
\begin{equation} \label{v}
\left. \begin{array}{rl}
{\bf p}art_t v \! \! \! \! & + \, (u \cdot \nabla) \, v + X \cdot \nabla p
+ \eps^{-1} \ ({\bf p}art_t {\bf v}arphi + v) \ {\bf p}artial_\theta v \\
\ & + \, \eps^{-1} \ {\bf p}art_\theta p \ {\bf p}arallel X {\bf p}arallel^2
- \bigl( {\bf p}art_t X + (u \cdot \nabla) \, X \bigr) \cdot u =
\nu \ X \cdot \Delta \, u \, .
\end{array} {\bf r}ight.
\end{equation}
The function $ w $ satisfies
\begin{equation} \label{w}
\left. \begin{array}{rl}
{\bf p}art_t w \! \! \! \! & + \, (u \cdot \nabla) \, w + \Pi \, \nabla
p + \eps^{-1} \ ({\bf p}art_t {\bf v}arphi + v) \ {\bf p}artial_\theta w \qquad
\qquad \quad \ \ \ \\
\ & - \, \bigl( {\bf p}art_t \Pi + (u \cdot \nabla) \, \Pi \bigr) \, u
= \nu \ \Pi \, \Delta \, u \, .
\end{array} {\bf r}ight.
\end{equation}
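\noindent{Let us briefly indicate, as a guide for the reader, how (\ref{v}) and (\ref{w}) are obtained. Take the scalar product of the first equation of (\ref{singu}) with $ X $, respectively apply $ \Pi $ to it, and use the identities}
\medskip
$ X \cdot \partial_t u \, = \, \partial_t v \, - \, (\partial_t X) \cdot u \, , \qquad X \cdot \partial_\theta u \, = \, \partial_\theta v \, , \qquad X \cdot \nabla \varphi \, = \, \Vert X \Vert^2 \, , $
\medskip
$ \Pi \, \partial_t u \, = \, \partial_t w \, - \, (\partial_t \Pi) \, u \, , \qquad \Pi \, \partial_\theta u \, = \, \partial_\theta w \, , \qquad \Pi \, \nabla \varphi \, = \, 0 \, , $
\medskip
\noindent{together with the analogous identities for the convection terms. The last relation $ \Pi \, \nabla \varphi = 0 $ explains why no term $ \partial_\theta p $ appears in (\ref{w}).}
\medskip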
Substitute the expressions $ u^\eps_{{\bf r}m\bf f}lat $ and $ p^\eps_{{\bf r}m\bf f}lat $
given by ({\bf r}ef{feclate}) into ({\bf r}ef{singu}). Then arrange the terms
according to the different powers of $ \varepsilon$ which are in factor.
The contributions coming from the orders $ \eps^{{{\bf r}m\bf f}rac{1}{l}-1} $,
$ \cdots $, $ \eps^{-{{\bf r}m\bf f}rac{1}{l}} $ and $ \eps^0 $ are eliminated
through ({\bf r}ef{polafirst}), ({\bf r}ef{shiftfirst}) and the constraints
imposed on $ ({\bf u}_0,{\bf p}_0) $.
\mathfrak{e}dskip
\noindent{Now, look at the terms in front of $\eps^{{{\bf r}m\bf f}rac{j}{l}} $
with $ j \in {\mathfrak{a}thbb N}_* $.} There remains
\begin{equation} \label{eqgj}
\left \{ \begin{array}{l}
{\bf p}art_t U_j + {\bf s}um_{k=0}^j \, ( U_k \cdot \nabla) \, U_{j-k} +
\nabla P_j + {\bf s}um_{k=0}^j \, {\bf p}art_\theta P_{l+k} \ \nabla
{\bf v}arphi_{j-k} \\
\qquad \, + \, {\bf s}um_{k=0}^{j-1} \, ({\bf p}art_t {\bf v}arphi_{l+k} + V_{l+k}) \
{\bf p}art_\theta U_{j-k} \, = \, \nu \ \Delta \, U_j \, , \\
\Div \ U_j + {\bf p}art_\theta V_{j+l} = 0 \, .
\end{array} {\bf r}ight. \quad
\end{equation}
Proceed in a similar manner with ({\bf r}ef{v}). Simply collect the terms
which come from $ X \cdot \Delta \, u $ and which do not involve
$ U_j $ into a source term $ \mathfrak{a}thcal{H}^j_V $
\begin{equation} \label{eqgjeg}
\left. \begin{array}{rl}
\, {\bf p}art_t V_j \! \! \! \! & + \, {\bf s}um_{k=0}^j \, (U_k \cdot \nabla) \,
V_{j-k} + {\bf s}um_{k=0}^{j-1} \, ({\bf p}art_t {\bf v}arphi_{l+k} + V_{l+k}) \
{\bf p}art_\theta V_{j-k} \\
\ & + \, {\bf s}um_{k=0}^j \, X_k \cdot \nabla P_{j-k} \, - \, {\bf s}um_{k=0}^j \,
{\bf p}art_t X_k \cdot U_{j-k} \\
\ & - \, {\bf s}um_{k=0}^j \, \bigl( {\bf s}um_{l=0}^k \, (U_{k-l} \cdot \nabla) \,
X_l \bigr) \cdot U_{j-k} \\
\ & + \, {\bf s}um_{k=1}^{j-1} \, \bigl( {\bf s}um_{l=0}^k \, X_l \cdot X_{k-l}
\bigr) \ {\bf p}art_\theta P_{j+l-k} + {\bf v}ert X_0 {\bf v}ert^2 \
{\bf p}art_\theta P_{j+l} \\
\ & = \, \nu \ X_0 \cdot \Delta \, U_j + \mathfrak{a}thcal{H}^j_V (t,x,\theta,U_1,X_1,
\cdots, U_{j-1},X_{j-1}, X_j) \, . \ \
\end{array} {\bf r}ight.
\end{equation}
The same operation with ({\bf r}ef{w}) yields
\begin{equation} \label{eqgjel}
\, \left. \begin{array}{rl}
{\bf p}art_t W_j \! \! \! \! & + \, {\bf s}um_{k=0}^j \, (U_k \cdot \nabla) \,
W_{j-k} \, + \, {\bf s}um_{k=0}^{j-1} \, ({\bf p}art_t {\bf v}arphi_{l+k} + V_{l+k}) \
{\bf p}art_\theta W_{j-k} \quad \\
\ & + \, {\bf s}um_{k=0}^j \, \Pi_k \, \nabla P_{j-k} \, - \, {\bf s}um_{k=0}^j \,
{\bf p}art_t \Pi_k \, U_{j-k} \\
\ & - \, {\bf s}um_{k=0}^j \, \bigl( {\bf s}um_{l=0}^k \, (U_{k-l} \cdot \nabla)
\, \Pi_l \bigr) \,U_{j-k} \\
\ & = \, \nu \ \Pi_0 \ \Delta \, U_j + \mathfrak{a}thcal{H}^j_W (t,x,\theta,U_1,X_1,
\cdots, U_{j-1},X_{j-1}, X_j) \, .
\end{array} {\bf r}ight.
\end{equation}
Then extract the mean value of ({\bf r}ef{eqgj})
\begin{equation} \label{moyj}
\ \left \{ \begin{array}{l}
{\bf p}art_t \bar U_j + ({\bf u}_0 \cdot \nabla) \, \bar U_j + (\bar U_j \cdot
\nabla) \, {\bf u}_0 + \nabla \bar P_j \qquad \qquad \qquad \qquad \qquad \\
\qquad \, + \, {\bf s}um_{k=1}^{j-1} \, \langle ( U_k \cdot \nabla) \,
U_{j-k} {\bf r}angle \\
\qquad \, + {\bf s}um_{k=1}^{j-1} \, \langle V^*_{l+k} \ {\bf p}art_\theta
U_{j-k} {\bf r}angle \, = \, \nu \ \Delta_x \, \bar U_j \, , \\
\Div \ \bar U_j = 0 \, .
\end{array} {\bf r}ight.
\end{equation}
Observe also that
\begin{equation} \label{divj}
V^*_{j+l} \, = \, - \, \Div \ {\bf p}artial_\theta^{-1} U^*_j \, , \qquad
{{\bf r}m\bf f}orall \, j \in {\mathfrak{a}thbb N}_* \, . \qquad \qquad \qquad \qquad \qquad
\end{equation}
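\noindent{This is just (\ref{eqgj}) read on the oscillating components: the divergence relation in (\ref{eqgj}) gives $ \Div \ U_j^* + \partial_\theta V^*_{j+l} = 0 $, and it suffices to apply $ \partial_\theta^{-1} $ (which is well defined on profiles with zero mean in $ \theta $) to obtain (\ref{divj}).}
\medskip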
Consider the inductive reasoning based on
\mathfrak{e}dskip
\noindent{\bf Hypothesis} $ (H_j) \, $:
\mathfrak{e}dskip
{\em
\noindent{i) The expressions $ U_1 $, $ \cdots $, $ U_j $ and $ P_1 $,
$ \cdots $, $ P_j $ are known.}
\mathfrak{e}dskip
\noindent{ii) The phases $ {\bf v}arphi_1 $, $\cdots $, $ {\bf v}arphi_j $ are
identified. The same is true for the vectors $ X_1 $, $\cdots $, $ X_j $
and the projectors $ \Pi_1 $, $\cdots $, $ \Pi_j $. Moreover, the
following relations are satisfied}
\begin{equation} \label{phik}
{\bf p}art_t {\bf v}arphi_{j+k} + \bar V_{j+k} = 0 \, , \qquad {{\bf r}m\bf f}orall \, k \in
\{ 1, \cdots, l-1 \} \, .
\qquad \end{equation}
iii) The correctors $ V^*_{j+1} $, $ \cdots $, $ V^*_{j+l} $ and
$ P^*_{j+1} $, $ \cdots $, $ P^*_{j+l} $ are identified and
\begin{equation} \label{vk+n}
V^*_{j+k} = \, - \, \Div \ {\bf p}art_\theta^{-1} U^*_{j+k-l} \, , \qquad
{{\bf r}m\bf f}orall \, k \in \{1, \cdots, l \} \, . \qquad \quad
\end{equation}
}
\noindent{- {\bf u}nderline{Verification} {\bf u}nderline{of} $ (H_1) $}. The mean
value $ \bar U_1 $ is obtained by solving
\mathfrak{e}dskip
$ {\bf p}art_t \bar U_1 + ({\bf u}_0 \cdot \nabla) \, \bar U_1 + (\bar U_1 \cdot
\nabla) \, {\bf u}_0 + \nabla \bar P_1 = \nu \ \Delta_x \, \bar U_1 \, ,
\qquad \Div \ \bar U_1 = 0 \, . $
\mathfrak{e}dskip
\noindent{Using ({\bf r}ef{shiftfirst}), this allows us to determine $ {\bf v}arphi_1 $.}
Now look at the oscillating part of ({\bf r}ef{eqgjel}) with the index $ j = 1 $.
The constraint on $ W_1^* \equiv U_1^* $ reads
\mathfrak{e}dskip
$ {\bf p}art_t W_1^* + ( {\bf u}_0 \cdot \nabla) \, W_1^* + ( {\bf p}art_t {\bf v}arphi_l
+ \bar V_l) \ {\bf p}art_\theta W_1^* = M \, W_1^* + \nu \ \Pi_0 \ \Delta
\, W^*_1 $
\mathfrak{e}dskip
\noindent{where $ M $ is the linear map}
\mathfrak{e}dskip
$ M \, U \, := \, ({\bf p}art_t \Pi_0) \, U + \bigl( ({\bf u}_0 \cdot \nabla)
\Pi_0 \bigr) \, U - \Pi_0 \, (U \cdot \nabla ) {\bf u}_0 \, . $
\mathfrak{e}dskip
\noindent{We impose ({\bf r}ef{phik}) for $ j=1 $.} In view of
({\bf r}ef{shiftfirst}), it reduces to
\mathfrak{e}dskip
$ {\bf p}art_t {\bf v}arphi_l + \bar V_l = 0 \, . $
\mathfrak{e}dskip
\noindent{The link between $ W_1^* $ and $ \bar V_l $ is
removed.} There remains the linear equation
\mathfrak{e}dskip
$ {\bf p}art_t W_1^* + ( {\bf u}_0 \cdot \nabla) \, W_1^* = M \, W_1^* + \nu \
\Pi_0 \ \Delta \, W^*_1 \, . $
\mathfrak{e}dskip
\noindent{When $ \nu = 0 $, the profile $ W^*_1 $ is obtained
by integrating along the characteristic curves $ \Gamma (\cdot,x) $.
This justifies the remark 2.2.4 for $ k = 1 $. Observe also that the
polarization condition $ W^*_1 = \Pi_0 \, W^*_1 $ is conserved since
the equation given for $ W_1^* $ is equivalent to}
\begin{equation} \label{eqW}
\ \left \{ \begin{array}{l}
\Pi_0 \, \bigl \lbrack {\bf p}art_t W_1^* + ( {\bf u}_0 \cdot
\nabla) \, W_1^* + (W_1^* \cdot \nabla) {\bf u}_0 \bigr {\bf r}brack \,
= \, \nu \ \Pi_0 \ \Delta \, W^*_1 \, , \qquad \quad \\
W_1^* \, = \, \Pi_0 \, W^*_1 \, .
\end{array} {\bf r}ight.
\end{equation}
Introduce the linear form
\mathfrak{e}dskip
$ \ell \, U \, := \ {\bf v}ert X_0 {\bf v}ert^{-2} \ \bigl \lbrack \,
{\bf p}art_t X_0 \cdot U + \bigl( ({\bf u}_0 \cdot \nabla) X_0 \bigr)
\cdot U - X_0 \cdot \bigl( (U \cdot \nabla ) {\bf u}_0 \bigr) \,
\bigr {\bf r}brack \, . $
\mathfrak{e}dskip
\mathfrak{e}dskip
\noindent{The constraint $ V_1^* \equiv 0 $ is equivalent to}
\mathfrak{e}dskip
$ P^*_{l+1} \, = \, \ell \ {\bf p}art_\theta^{-1} W_1^* \, + \,
\nu \ {\bf v}ert X_0 {\bf v}ert^{-2} \ X_0 \cdot {\bf p}art_\theta^{-1} \,
\Delta \, W^*_1 \, . $
\mathfrak{e}dskip
\noindent{We also have}
\mathfrak{e}dskip
$ V^*_{l+1} = \, - \, \Div \ {\bf p}art_\theta^{-1} W_1^* \, . $
\mathfrak{e}dskip
\noindent{At this stage, we know $ U_1 \equiv \bar U_1
+ W_1^* $ and $ P_1 \equiv 0 $.} Moreover, we have the relations
({\bf r}ef{phik}) and ({\bf r}ef{vk+n}). Thus, the hypothesis $ (H_1) $
is verified.
\noindent{- {\bf u}nderline{The} {\bf u}nderline{induction}.} Suppose
that the conditions given in $ (H_j )$ are satisfied. The
question is to obtain $ (H_{j+1} )$. Consider first ({\bf r}ef{moyj})
with the indice $ j+1 $. The relation ({\bf r}ef{vk+n}) induces
simplifications. It remains
\begin{equation} \label{moyj+1}
\left \{ \begin{array}{l}
{\bf p}art_t \bar U_{j+1} + ({\bf u}_0 \cdot \nabla) \, \bar U_{j+1}
+ (\bar U_{j+1} \cdot \nabla) \, {\bf u}_0 \qquad \qquad \\
\qquad + \, {\bf s}um_{k=1}^{j} \, ( \bar U_k \cdot \nabla) \,
\bar U_{j+1-k} + \, {\bf s}um_{k=1}^j \, \Div \ \langle
U^*_k \otimes U^*_{j+1-k} {\bf r}angle \\
\qquad + \, \nabla \bar P_{j+1} \, = \, \nu \ \Delta_x
\, \bar U_{j+1} \, , \qquad \Div \ \bar U_{j+1} = 0 \, .
\end{array} {\bf r}ight.
\end{equation}
This system gives access to $ \bar U_{j+1} $ and $ \bar P_{j+1} $.
For $ j=1 $, it yields
\begin{equation} \label{moy2}
\ \left \{ \begin{array}{l}
{\bf p}art_t \bar U_2 + ({\bf u}_0 \cdot \nabla) \, \bar U_2 + (\bar U_2 \cdot
\nabla) \, {\bf u}_0 + \nabla \bar P_2 \qquad \qquad \qquad \quad \ \ \, \\
\quad \ + \,( \bar U_1 \cdot \nabla) \, \bar U_1 + \Div \ \langle
U^*_1 \otimes U^*_1 {\bf r}angle = \, \nu \ \Delta \bar U_2 \, , \quad
\ \Div \ \bar U_2 = 0 \, .
\end{array} {\bf r}ight.
\end{equation}
Because of ({\bf r}ef{nonnontri}), the source term $ \langle U^*_1 \otimes
U^*_1 {\bf r}angle $ is guaranteed to be non trivial. We recover here that in
general $ \bar U_2 \not \equiv 0 $ even if $ \bar U_1 (0,\cdot)
\equiv \bar U_2 (0,\cdot) \equiv 0 $. The term $ \bar U_2 $
excites $ {\bf v}arphi_2 $ through ({\bf r}ef{trans2}). Generically, we
have $ {\bf v}arphi_2 \not \equiv 0 $ even if
\mathfrak{e}dskip
$ \bar U_1 (0,\cdot) \, \equiv \, \bar U_2 (0,\cdot) \, \equiv \, 0 \, ,
\qquad {\bf v}arphi_1 (0,\cdot) \, \equiv \, {\bf v}arphi_2 (0,\cdot) \, \equiv
\, 0 \, . $
\mathfrak{e}dskip
\noindent{Observe however that exceptions can happen (see the
remark 3.5.4).} The information ({\bf r}ef{phik}) for $ k=1 $ means
that
\mathfrak{e}dskip
$ {\bf p}art_t {\bf v}arphi_{j+1} + ({\bf u}_0 \cdot \nabla) \, {\bf v}arphi_{j+1}
+ X_0 \cdot \bar U_{j+1} + {\bf s}um_{l=1}^j \, X_l \cdot \bar U_{j
+1-l} \, = \, 0 \, . $
\mathfrak{e}dskip
\noindent{Deduce $ {\bf v}arphi_{j+1} $ from this equation, and
therefore $ X_{j+1} $ and $ \Pi_{j+1} $.} Complete with the
triangulation condition
\begin{equation} \label{phibi}
{\bf p}art_t {\bf v}arphi_{j+l} + \bar V_{j+l} = 0 \, . \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad \quad \quad
\end{equation}
Then extract the oscillating part of ({\bf r}ef{eqgjel}) written
with $ j+1 $. Use $ (H_j) $ and ({\bf r}ef{phibi}) in order to
simplify the resulting equation. It yields
\begin{equation} \label{oscij+1}
\ {\bf p}art_t W_{j+1}^* + ( {\bf u}_0 \cdot \nabla) \, W_{j+1}^* = M \,
W_{j+1}^* + \nu \ \Pi_0 \ \Delta \, W^*_{j+1} + f \qquad \qquad \
\end{equation}
where $ f $ is known. We get $ W^*_{j+1} $ by solving ({\bf r}ef{oscij+1}).
Therefore we have $ U^*_{j+1} $ and we can deduce $ V^*_{j+l+1} = \,
- \, \Div \ {\bf p}art_\theta^{-1} U^*_{j+1} $.
\mathfrak{e}dskip
\noindent{Now look at the constraint ({\bf r}ef{eqgjeg}) for the
index $ j+1 $.} Extract the oscillating part. This allows us to
recover $ P^*_{j+l+1} $. Thus we have $ (H_{j+1}) $.
\mathfrak{e}dskip
\noindent{Apply the induction up to $ j = N -l $.} It yields
$ U_1 $, $ \cdots $, $ U_N $. Construct the oscillations
$ \tilde u^\eps_{{\bf r}m\bf f}lat $ and $ \tilde p^\eps_{{\bf r}m\bf f}lat $ by way of
({\bf r}ef{feclate}). This furnishes source terms $ \tilde f^\eps_{{\bf r}m\bf f}lat $
and $ \tilde g^\eps_{{\bf r}m\bf f}lat $ through ({\bf r}ef{cauchypr}). By
construction, we recover ({\bf r}ef{conclufiaussi}). {{\bf r}m\bf h}fill $ \Box $
{\bf s}ubsection{Divergence free approximate solutions in $ (t,x, \theta) $.}
In this subsection 4.3, we impose on $ {\bf v}arphi_0 $ a constraint
which is more restrictive than ({\bf r}ef{nonstaou}). We suppose that
we can find a direction $ \zeta \in {\mathfrak{a}thbb R}^d {\bf s}etminus \{ 0 \} $ such
that
\begin{equation} \label{nonstaoubis}
\exists \, c > 0 \, ; \qquad \nabla {\bf v}arphi_0 (t,x) \cdot \zeta \,
{{\bf r}m\bf g}eq \, c \, , \qquad {{\bf r}m\bf f}orall \, (t,x) \in [0,T]
\times {\mathfrak{a}thbb R}^d \, . \quad \
\end{equation}
\begin{prop} \label{complement} The assumptions are as in
the Proposition {\bf r}ef{appBKWinter}. The profiles $ \tilde
U_k $, $ \tilde P_k $, and the phases $ {\bf v}arphi_k $ are
defined in the same way. Then, there are correctors
\mathfrak{e}dskip
$ \tilde {c u}^\eps_{{\bf r}m\bf f}lat (t,x,\theta) \in \mathfrak{a}thcal{W}^\infty_T \, ,
\qquad \{ \tilde{c u}^\eps_{{\bf r}m\bf f}lat \}_\varepsilon\, = \, \bigcirc(
\eps^{{{\bf r}m\bf f}rac{N}{l}}) $
\mathfrak{e}dskip
\noindent{such that the functions $ \tilde u^\eps_{{\bf r}m\bf f}lat $
and $ \tilde p^\eps_{{\bf r}m\bf f}lat $ defined according to}
$$ \, \left. \begin{array}{l}
\tilde u^\eps_{{\bf r}m\bf f}lat (t,x) := {\bf u}_0 (t,x) + {\bf s}um_{k=1}^N \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde U_k \bigl( t,x, \eps^{-1} \,
{\bf v}arphi^\eps_g (t,x) \bigr) + \tilde {c u}^\eps_{{\bf r}m\bf f}lat (t,x)
\qquad \ \ \\
\tilde p^\eps_{{\bf r}m\bf f}lat (t,x) := {\bf p}_0 (t,x) + {\bf s}um_{k=1}^N \,
\eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde P_k \bigl( t,x, \eps^{-1} \,
{\bf v}arphi^\eps_g (t,x) \bigr)
\end{array} {\bf r}ight. $$
satisfy the Cauchy problem
\begin{equation} \label{eclacpa}
\ \left \lbrace \begin{array} {l}
\mathfrak{d}_{0,\eps} \, \tilde u^\eps_{{\bf r}m\bf f}lat + ( \tilde u^\eps_{{\bf r}m\bf f}lat
\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, \tilde u^\eps_{{\bf r}m\bf f}lat +
\mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, \tilde p^\eps_{{\bf r}m\bf f}lat \\
\qquad \qquad \qquad \qquad \qquad \! = \, \nu \ \varepsilon\
\Delta \, \tilde u^\eps_{{\bf r}m\bf f}lat + \tilde f^\eps_{{\bf r}m\bf f}lat \, ,
\qquad \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, \tilde u^\eps_{{\bf r}m\bf f}lat = 0 \qquad \\
\tilde u^\eps_{{\bf r}m\bf f}lat (0,x,\theta) \, = \, {\bf u}_0 (0,x) +
{\bf s}um_{k=1}^N \, \eps^{{{\bf r}m\bf f}rac{k}{l}} \ \tilde U_{k0} (x,\theta)
\end{array} {\bf r}ight.
\end{equation}
and we still have $ \{ \tilde f^\eps_{{\bf r}m\bf f}lat \}_\varepsilon\, = \,
\bigcirc(\eps^{{{\bf r}m\bf f}rac{N+1}{l}}) $.
\end{prop}
\noindent{We need some material before proving Proposition
\ref{complement}.}
\mathfrak{e}dskip
\noindent{$ \bullet $ {\bf The divergence free relation in
the variables $ (t,x, \theta) $.}} We can select a special
right inverse of the map $ \md \mi \mv^\eps_\flat \, : \,
H^{\infty *}_T \longrightarrow H^{\infty *}_T $.
\begin{lem} \label{invpart*} There is a linear operator
$ \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, : \, \text{{\bf r}m Im} \, (
\md \mi \mv^\eps_{{\bf r}m\bf f}lat) \longrightarrow H^{\infty *}_T $ with
\begin{equation} \label{exainv'}
\md \mi \mv^\eps_{{\bf r}m\bf f}lat \circ \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \ g \, = \,
g \, , \qquad {{\bf r}m\bf f}orall \, g \in \text{{\bf r}m Im} \, (\md \mi \mv^\eps_{{\bf r}m\bf f}lat)
\, . \qquad \qquad \qquad \qquad
\end{equation}
For all $ m \in {\mathfrak{a}thbb N} $, there is a constant $ C_m > 0 $ such that
\begin{equation} \label{estiinv'}
{\bf p}arallel \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \ g {\bf p}arallel_{H^m} \, \leq \,
C_m \ {\bf p}arallel g {\bf p}arallel_{H^{m+1+ {{\bf r}m\bf f}rac{d}{2}}} \, ,
\qquad {{\bf r}m\bf f}orall \, g \in \text{{\bf r}m Im} \, (\md \mi \mv^\eps_{{\bf r}m\bf f}lat) \, .
\quad \
\end{equation}
\end{lem}
\mathfrak{e}dskip
\noindent{\em \underline{Proof of Lemma 4.1}.} Let $ n \in {\mathbb N}_* $. Set
\medskip
$ t_j := j \, T / n \, , \qquad x_k := k /n \, , \qquad 1 \leq
j \leq n-1 \, , \qquad k \in {\mathbb Z}^d \, . $
\mathfrak{e}dskip
\noindent{Consider a related partition of unity}
\mathfrak{e}dskip
$ \chi_{(j,k)} \in C^\infty ([0,T] \times {\mathfrak{a}thbb R}^d) \, , \qquad
\qquad (j,k) \in \{ 1, \cdots ,n-1 \} \times {\mathfrak{a}thbb Z}^d \, , $
{\bf s}mallskip
$ {\bf s}um_{j=1}^{n-1} \, {\bf s}um_{k \in {\mathfrak{a}thbb Z}^d} \, \chi_{(j,k)} (t,x) \,
= \, 1 \, , \qquad {{\bf r}m\bf f}orall \, (t,x) \in [0,T] \times {\mathfrak{a}thbb R}^d \, , $
{\bf s}mallskip
$ \bigl \{ \, (t,x) \, ; \ \chi_{(j,k)}(t,x) \not = 0 \, \bigr \} \,
\subset \, [t_j - \frac{2}{n} , t_j + \frac{2}{n} ] \times B (x_k,
\frac{2}{n} ] \, , $
\smallskip
$ \bigl \{ \, (t,x) \, ; \ \chi_{(j,k)}(t,x) = 1 \, \bigr \} \,
\supset \, [t_j - \frac{1}{n} , t_j + \frac{1}{n} ] \times B
(x_k,\frac{1}{n} ] \, . $
\mathfrak{e}dskip
\noindent{By hypothesis, there is a function $ v \in H^{\infty *}_T $
such that $ g = \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, v $.} Introduce
\mathfrak{e}dskip
$ v_{(j,k)} \, := \, \chi_{(j,k)} \ v \in H^{\infty *}_T \, ,
\qquad g_{(j,k)} := \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, v_{(j,k)} \, . $
\mathfrak{e}dskip
\noindent{It suffices to exhibit $ \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \,
g_{(j,k)} $ and to show ({\bf r}ef{estiinv'}) with a constant $ C_m $
which is uniform in $ {(j,k)} $.} The problem of finding $ \mathfrak{r}
\mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, g_{(j,k)} $ can be reduced to a model
situation. This can be achieved by using a change of variables
in $ (t,x) $, based on ({\bf r}ef{nonstaoubis}). From now on, the
time $ t $ is viewed as a parameter, the space variable is
$ x = (x_1,{{\bf r}m\bf h}at x) \in {\mathfrak{a}thbb R} \times {\mathfrak{a}thbb R}^{d-1} $, and we work with
\mathfrak{e}dskip
$ g = g^* = \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, v = ( \varepsilon\, {\bf p}art_1 + {\bf p}art_\theta)
v_1 + {\bf p}art_2 v_2 + \cdots + {\bf p}art_d v_d \, , $
\mathfrak{e}dskip
$ \bigl \{ \, x \, ; \ g(x,\theta) \not = 0 \, \bigr \} \, {\bf s}ubset \,
\bigl \{ \, x \, ; \ v(x,\theta) \not = 0 \, \bigr \} \, {\bf s}ubset \,
B(0,{{\bf r}m\bf f}rac{1}{2}] \, . $
\mathfrak{e}dskip
\noindent{Let $ {\bf p}si \in C^\infty ({\mathfrak{a}thbb R}^{d-1};{\mathfrak{a}thbb R}^+) $ be such that
$ \int_{{\mathfrak{a}thbb R}^{d-1}} \, {\bf p}si({{\bf r}m\bf h}at x) \ d {{\bf r}m\bf h}at x = 1 $ and
\mathfrak{e}dskip
$ \bigl \{ \, {{\bf r}m\bf h}at x \, ; \ {\bf p}si ({{\bf r}m\bf h}at x) \not = 0 \, \bigr \} \,
{\bf s}ubset \, B (0,1] \, , \qquad \bigl \{ \, {{\bf r}m\bf h}at x \, ; \ {\bf p}si
({{\bf r}m\bf h}at x) = 1 \, \bigr \} \, {\bf s}upset \, B (0,{{\bf r}m\bf f}rac{1}{2}] \, . $
\mathfrak{e}dskip
\noindent{Decompose $ g $ according to}
$ g \, = \, (g - \breve g) \ {\bf p}si + \breve g \ {\bf p}si \, , \qquad
\breve g (x) := \int_{{\mathfrak{a}thbb R}^{d-1}} \, g(x_1,{{\bf r}m\bf h}at x) \ d {{\bf r}m\bf h}at x \,
= \, ( \varepsilon\, {\bf p}art_1 + {\bf p}art_\theta) \breve v_1 \, . $
\mathfrak{e}dskip
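\noindent{(Note that $ \psi \equiv 1 $ on the ball $ B(0,\frac{1}{2}] $ of
$ {\mathbb R}^{d-1} $, which contains the projection of the support of $ g $,
so that $ g \, \psi = g $ and the identity above is indeed a decomposition
of $ g $.)}
\medskip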
\noindent{Seek a special solution $ u $ having the form}
\mathfrak{e}dskip
$ u \, = \, \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, g \, = \, {}^t \bigl( \,
a \, , \, \text{{\bf r}m ridiv} \, [ (g - \breve g) \ {\bf p}si ] \, \bigr) \, ,
\qquad a \in H^{\infty *}_T $
\mathfrak{e}dskip
\noindent{where `ridiv' is the operator of Lemma \ref{invpartbar}
applied in dimension $ d-1 $. It remains to control the
scalar function $ a $, which satisfies the constraint}
\mathfrak{e}dskip
$ \varepsilon\ {\bf p}art_1 a + {\bf p}art_\theta a \, = \, h \, := \, \breve g \
{\bf p}si \, = \, (\varepsilon\ {\bf p}art_1 + {\bf p}art_\theta) (\breve v_1 \,
{\bf p}si ) \, . $
\mathfrak{e}dskip
\noindent{Take the explicit solution}
\mathfrak{e}dskip
$ a (x_1, {{\bf r}m\bf h}at x, \theta) \, = \, \int_{- \infty}^\theta \, h
\bigl( x_1 + \varepsilon\, (s-\theta) , {{\bf r}m\bf h}at x , s \bigr) \ ds $
{\bf s}mallskip
$ \qquad \qquad \ \ = \, \eps^{-1} \ \int_{- \infty}^0 \,
h \bigl(x_1 + r , {{\bf r}m\bf h}at x , \theta + \eps^{-1} \, r \bigr)
\ dr \, . $
\mathfrak{e}dskip
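\noindent{{\it Remark:} as a quick verification (this computation is not needed
in the sequel), differentiation under the integral sign gives}
\medskip
$ \partial_\theta a \, = \, h (x_1 , {\hat x} , \theta) \, - \, \varepsilon \,
\int_{- \infty}^\theta \, \partial_1 h \bigl( x_1 + \varepsilon \, (s-\theta) ,
{\hat x} , s \bigr) \ ds \, , $
\smallskip
$ \varepsilon \ \partial_1 a \, = \, \varepsilon \, \int_{- \infty}^\theta \,
\partial_1 h \bigl( x_1 + \varepsilon \, (s-\theta) , {\hat x} , s \bigr) \ ds \, , $
\medskip
\noindent{so that $ \varepsilon \ \partial_1 a + \partial_\theta a = h $, as required.}
\medskip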
\noindent{By construction}
\mathfrak{e}dskip
$ a (x,\theta + 1) = a (x,\theta) \, , \qquad \int_{\mathfrak{a}thbb T} \, a
(x,\theta) \ d \theta \, = \, 0 \, , \qquad {{\bf r}m\bf f}orall \, (x,
\theta) \in {\mathfrak{a}thbb R}^d \times {\mathfrak{a}thbb T} \, . $
\mathfrak{e}dskip
\noindent{For $ {\bf v}ert x_1 {\bf v}ert + {\bf v}ert {{\bf r}m\bf h}at x {\bf v}ert {{\bf r}m\bf g}eq 2 $,
we find}
\mathfrak{e}dskip
$ a (x,\theta) \, = \, \int_{- \infty}^\theta \, {{\bf r}m\bf f}rac
{d}{ds} \, \bigl[ (\breve v_1 \, {\bf p}si) \bigl( x_1 +
\varepsilon\, (s-\theta) , {{\bf r}m\bf h}at x , s \bigr) \bigr {\bf r}brack \
ds \, = \, (\breve v_1 \, {\bf p}si ) (x,\theta) \, = \, 0 \, . $
\mathfrak{e}dskip
\noindent{It implies that}
\mathfrak{e}dskip
$ \bigl \{ \, (x,\theta) \, ; \ a (x,\theta) \not = 0 \,
\bigr \} \, {\bf s}ubset \, B(0;2] \, . $
\mathfrak{e}dskip
\noindent{Set $ \mathfrak{h} := \partial^{-1}_\theta h \in H^{\infty *} $.}
Obviously
\mathfrak{e}dskip
$ {\bf p}arallel \mathfrak{h} {\bf p}arallel_{H^m} \, \leq \, C_m \ {\bf p}arallel h
{\bf p}arallel_{H^m} \, , \qquad {{\bf r}m\bf f}orall \, m \in {\mathfrak{a}thbb N} \, , $
\mathfrak{e}dskip
$ \bigl \{ \, (x,\theta) \, ; \ \mathfrak{h} (x,\theta) \not = 0 \,
\bigr \} \, {\bf s}ubset \, B(0;1] \, , $
\mathfrak{e}dskip
\noindent{and we have the identity}
\mathfrak{e}dskip
$ a (x_1,{{\bf r}m\bf h}at x,\theta) \, = \, \mathfrak{h} (x_1,{{\bf r}m\bf h}at x,\theta) \,
- \, \int_{ - x_1 - 1}^{- x_1 + 1} \ {\bf p}art_1 \mathfrak{h} \bigl(
x_1 + r , {{\bf r}m\bf h}at x , \theta + \eps^{-1} \, r \bigr) \ dr \, . $
\mathfrak{e}dskip
\noindent{The term on the right is supported in $ B(0,2] $.}
Use Fubini and the Cauchy-Schwarz inequality to control the
integral of $ \partial_1 \mathfrak{h} $. This yields (\ref{estiinv'}).
\hfill $ \Diamond $
\noindent{$ \bullet $ {\bf The Leray projector interpreted in
the variables $ (t,x,\theta) $.}} Introduce the closed subspace
\mathfrak{e}dskip
$ \text{F}^\eps_{{\bf r}m\bf f}lat \, := \, \bigl \lbrace \, u^* \in L^{2*}_T \, ;
\ \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, u^* = 0 \, \bigr {\bf r}brace \, {\bf s}ubset \,
L^{2*}_T \, . $
\mathfrak{e}dskip
\noindent{Denote by $ \mathfrak{P}^\eps_\flat $ the orthogonal projector from
$ L^{2*}_T $ onto $ \text{F}^\eps_\flat $.} This is a self-adjoint
operator such that
\mathfrak{e}dskip
$ \ker \, \md \mi \mv^\eps_{{\bf r}m\bf f}lat = \text{Im} \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, , \qquad
\text{Im} \, \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat = ( \ker \, \md \mi \mv^\eps_{{\bf r}m\bf f}lat)^{\bf p}erp
= \ker \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, . $
\mathfrak{e}dskip
\noindent{Expand the function $ u^* \in L^{2*}_T $ in Fourier series
and decompose the action of $ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $ in view of the Fourier
modes}
\mathfrak{e}dskip
$ u^*(t,x,\theta) = {\bf s}um_{k \in {\mathfrak{a}thbb Z}_*} \, {\bf u}_k (t,x) \ e^{i \,
k \, \theta} \, , \qquad \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, u^* = {\bf s}um_{k \in {\mathfrak{a}thbb Z}_*} \,
\mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \, {\bf u}_k (t,x) \ e^{i \, k \, \theta}\, . $
\mathfrak{e}dskip
\noindent{Simple computations indicate that}
\mathfrak{e}dskip
$ \mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \,{\bf u}_k \, := \, e^{- \, i \, \eps^{-1} \, k \,
{\bf v}arphi^\eps_{{\bf r}m\bf f}lat} \ \Pi (D_x) \ \bigl( e^{i \, \eps^{-1} \, k \,
{\bf v}arphi^\eps_{{\bf r}m\bf f}lat} \, {\bf u}_k \bigr) \, . $
\mathfrak{e}dskip
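\noindent{{\it Remark:} this expression can be understood as follows (a short
heuristic computation, normalizations left aside). Thanks to the identity}
\medskip
$ \Div \, \bigl( e^{i \, \eps^{-1} \, k \, \varphi^\eps_\flat} \ {\bf u}_k \bigr)
\, = \, e^{i \, \eps^{-1} \, k \, \varphi^\eps_\flat} \ \bigl( \Div \, {\bf u}_k
\, + \, i \, \eps^{-1} \, k \ \nabla \varphi^\eps_\flat \cdot {\bf u}_k \bigr) \, , $
\medskip
\noindent{the divergence free condition on the Fourier mode $ k $ amounts to
$ \Div \, ( e^{i \, \eps^{-1} \, k \, \varphi^\eps_\flat} \, {\bf u}_k ) = 0 $.
Since multiplication by a unimodular phase is unitary on $ L^2 $, the orthogonal
projection onto this constraint is the Leray projector $ \Pi (D_x) $ conjugated
by the phase factor, which is the expression of $ \mathfrak{P}^\eps_{\flat k} $
given above.}
\medskip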
\noindent{The following result explains why the projector
$ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $ is replaced by $ \Pi_0 $ when performing
the BKW calculus.}
\begin{lem} \label{projeler} $ \, $
{\bf s}mallskip
\noindent{i) The family $ \{ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \}_\varepsilon$ is in
$ \mathfrak{a}thfrak{U} \mathfrak{a}thfrak{L}^0 $. We have $ [ {\bf p}art_\theta ; \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat ]
= 0 $ and}
\mathfrak{e}dskip
$ [ \mathfrak{d}_{j,\eps} ; \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat ] = 0 \, , \qquad {{\bf r}m\bf f}orall \,
j \in \{ 0 , \cdots , d \} \, . $
\mathfrak{e}dskip
\noindent{ii) The projector $ \Pi^\eps_{{\bf r}m\bf f}lat (t,x) $ is an
approximation of $ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $ in the sense that}
\mathfrak{e}dskip
$ \bigl \{ \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat - \Pi^\eps_{{\bf r}m\bf f}lat \, \bigr \}_\varepsilon
\, \in \, \varepsilon\ \mathfrak{a}thfrak{U} \mathfrak{a}thfrak{L}^{2+{{\bf r}m\bf f}rac{d}{2}} \, , \qquad \bigl \{ \,
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \ (\id - \Pi^\eps_{{\bf r}m\bf f}lat) \, \bigr \}_\varepsilon\, \in \,
\varepsilon\ \mathfrak{a}thfrak{U} \mathfrak{a}thfrak{L}^1 \, . $
\end{lem}
\noindent{\em \underline{Proof of Lemma 4.2}.} Since $ \mathfrak{P}^\eps_\flat $
is an orthogonal projector, we have
\mathfrak{e}dskip
$ {\bf p}arallel \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, u {\bf p}arallel_{L^2_T} \ \leq \
{\bf p}arallel u {\bf p}arallel_{L^2_T} \, , \qquad {{\bf r}m\bf f}orall \, (\eps,u)
\in \, ] 0, \eps_0 ] \times L^2_T \, . $
\mathfrak{e}dskip
\noindent{It shows that $ \{ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \}_\varepsilon\in \mathfrak{a}thfrak{U} \mathfrak{a}thfrak{L}^0 $.}
Compute
\mathfrak{e}dskip
$ [ \mathfrak{d}_{j,\eps} ; \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat ] \, u^* (t,x,\theta) \,
= \, {\bf s}um_{k \in {\mathfrak{a}thbb Z}_*} \, [ \, \varepsilon\ {\bf p}art_j + i \ k \
{\bf p}art_j {\bf v}arphi^\eps_{{\bf r}m\bf f}lat \, ; \, \mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \, ] \,
{\bf u}_k (t,x) \ e^{i \, k \, \theta} \, . $
\mathfrak{e}dskip
\noindent{Observe that}
\mathfrak{e}dskip
$ ( \varepsilon\ {\bf p}art_j + i \ k \ {\bf p}art_j {\bf v}arphi^\eps_{{\bf r}m\bf f}lat ) \
\mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \, {\bf u}_k \, = \, e^{- \, i \, \eps^{-1} \,
k \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat} \ \Pi (D_x) \ \varepsilon\, {\bf p}art_j \
\bigl( e^{i \, \eps^{-1} \, k \, {\bf v}arphi^\eps_{{\bf r}m\bf f}lat} \
{\bf u}_k ) $
{\bf s}mallskip
{{\bf r}m\bf h}fill $ = \, \mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \ ( \varepsilon\ {\bf p}art_j +
i \ k \ {\bf p}art_j {\bf v}arphi^\eps_{{\bf r}m\bf f}lat ) \, {\bf u}_k \, . \qquad
\qquad \qquad \qquad \, $
\mathfrak{e}dskip
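\noindent{{\it Remark:} both equalities follow from the elementary relation
$ \varepsilon \ \partial_j \, \bigl( e^{\pm \, i \, \eps^{-1} \, k \, \varphi^\eps_\flat}
\ w \bigr) \, = \, e^{\pm \, i \, \eps^{-1} \, k \, \varphi^\eps_\flat} \ ( \varepsilon
\ \partial_j \pm i \, k \ \partial_j \varphi^\eps_\flat ) \, w $, together with the fact
that the Fourier multiplier $ \Pi (D_x) $ commutes with $ \partial_j $.}
\medskip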
\noindent{All this information establishes the first
assertion i).} Now consider ii). The asymptotic expansion
formula for pseudodifferential operators says that for
all $ {\bf u}_k $ in $ C^\infty_0 ( {\mathbb R}^d_T ) $ we have
$$ \quad {{\bf r}m\bf f}orall \, (t,x) \in {\mathfrak{a}thbb R}^d_T \, , \quad \
\lim_{\varepsilon\, \longrightarrow \, 0} \ \ \bigl \{ \,
( \mathfrak{a}thfrak{P}^\eps_{{{\bf r}m\bf f}lat k} \, {\bf u}_k) (t,x) \, - \, \Pi
\bigl( \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat (t,x) \bigr) \,
{\bf u}_k (t,x) \, \bigr \} \, = \, 0 \, . $$
Since $ \Pi^\eps_{{\bf r}m\bf f}lat = \Pi ( \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat ) $,
it indicates that $ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $ is close to
$ \Pi^\eps_{{\bf r}m\bf f}lat $. We have to make this information
more precise. To this end, proceed to the decomposition
\mathfrak{e}dskip
$ u^* \, = \, v^* + \varepsilon\ \nabla p^* + {\bf p}art_\theta p^*
\times X^\eps_{{\bf r}m\bf f}lat \, , \qquad v^* = \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
u^* \, . $
\mathfrak{e}dskip
\noindent{We seek a solution $ ( v^*, p^*) $ of these
constraints such that}
\mathfrak{e}dskip
$ v^* \, = \, \Pi^\eps_{{\bf r}m\bf f}lat \, u^* + \varepsilon\ \tilde v^* \, , \qquad
p^* \, = \, {\bf p}arallel X^\eps_{{\bf r}m\bf f}lat {\bf p}arallel^{-2} \ X^\eps_{{\bf r}m\bf f}lat \cdot
{\bf p}art_\theta^{-1} u^* + \varepsilon\ \tilde p^* \, . $
\mathfrak{e}dskip
\noindent{After substitution, we find the relation}
\mathfrak{e}dskip
$ - \, \nabla \bigl( {\bf p}arallel X^\eps_{{\bf r}m\bf f}lat {\bf p}arallel^{-2} \
X^\eps_{{\bf r}m\bf f}lat \cdot {\bf p}art_\theta^{-1} u^* \bigr) \, = \,
\tilde v^* + \varepsilon\ \nabla \tilde p^* + {\bf p}art_\theta \tilde
p^* \times X^\eps_{{\bf r}m\bf f}lat $
\mathfrak{e}dskip
\noindent{which must be completed by the condition}
\mathfrak{e}dskip
$ - \, \Div \, ( \Pi^\eps_{{\bf r}m\bf f}lat \, u^* ) \, = \, \varepsilon\
\Div \, \tilde v^* + X^\eps_{{\bf r}m\bf f}lat \cdot {\bf p}art_\theta
\tilde v^* \, . $
\mathfrak{e}dskip
\noindent{It follows that}
\mathfrak{e}dskip
$ \tilde v^* \, = \, - \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \bigl \lbrack
\nabla ( {\bf p}arallel X^\eps_{{\bf r}m\bf f}lat {\bf p}arallel^{-2} \ X^\eps_{{\bf r}m\bf f}lat
\cdot {\bf p}art_\theta^{-1} u^* ) \bigr {\bf r}brack + ( \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat
- \id ) \ \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \ \bigl( \Div \, ( \Pi^\eps_{{\bf r}m\bf f}lat
\, u^* ) \bigr) \, . $
\mathfrak{e}dskip
\noindent{In view of this relation, the point ii) becomes clear.}
\hfill $ \Diamond $
\mathfrak{e}dskip
\noindent{Consider the Cauchy problem}
\mathfrak{e}dskip
$ \mathfrak{d}_{0,\eps} u^* + \eps^{-1} \ \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, p^* \,
= \, f^* \, , \qquad \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, u^* = 0 \, , \qquad
u^* (0,\cdot) = h^* (\cdot) $
\mathfrak{e}dskip
\noindent{with data $ f^* \in L^{2*}_T $ and $ h^* \in L^{2*} $.}
Compose on the left with $ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $. It yields
\mathfrak{e}dskip
$ \mathfrak{d}_{0,\eps} u^* = \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, f^* + [\mathfrak{d}_{0,\eps} ;
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat ] \, u^* \, , \qquad u^* (0,\cdot) = \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat
\, h^* (\cdot) \, . $
\mathfrak{e}dskip
\noindent{The Cauchy problem can be solved in two steps.}
First extract $ u^* $ from the above equation. Then recover
$ p^* $ from the remaining relations.
\noindent{$ \bullet $ {\bf Proof of the Proposition
{\bf r}ef{complement}.}} It remains to absorb the term $ \tilde
g^\eps_{{\bf r}m\bf f}lat $. Use the decomposition
\mathfrak{e}dskip
$ g^\eps_{{\bf r}m\bf f}lat = \langle \tilde g^\eps_{{\bf r}m\bf f}lat {\bf r}angle +
\tilde g^{\varepsilon*}_{{\bf r}m\bf f}lat \, , \qquad \langle \tilde g^\eps_{{\bf r}m\bf f}lat
{\bf r}angle \in \text{Im} \, (\Div) \, , \qquad \tilde g^{\varepsilon*}_{{\bf r}m\bf f}lat
\in \text{Im} \, (\md \mi \mv^\eps_{{\bf r}m\bf f}lat) \, . $
\mathfrak{e}dskip
\noindent{It suffices to choose}
\mathfrak{e}dskip
$ c {\bf u}^\eps_{{\bf r}m\bf f}lat \, := \, - \, \text{ridiv} \, \langle \tilde
g^\eps_{{\bf r}m\bf f}lat {\bf r}angle - \mathfrak{r} \mathfrak{i} \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, \tilde
g^{\eps*}_{{\bf r}m\bf f}lat = \bigcirc (\eps^{{{\bf r}m\bf f}rac{N+1}{l}}) \, . $
\section{Stability of strong oscillations} $ \, $
\vskip -5mm
\noindent{The case of turbulent regimes ($ l \geq 3 $)
will not be undertaken here.} From now on, fix $ l = 2 $
and $ N \gg ( 6 + d ) $. Consider the Cauchy problem
\begin{equation} \label{CPdep}
\left \{ \begin{array}{l}
{\bf p}art_t {\bf u}^\varepsilon+ ( {\bf u}^\varepsilon\cdot \nabla ) \, {\bf u}^\varepsilon+
\nabla {\bf p}^\varepsilon\, = \, \nu \ E^{\varepsilonl}_{{{\bf r}m\bf f}lat 0} ( {\bf p}art )
\, {\bf u}^\varepsilon\, , \qquad \Div \ {\bf u}^\varepsilon\, = \, 0 \, ,
\quad \\
{\bf u}^\varepsilon(0,x) \, = \, {\bf u}^\eps_{{\bf r}m\bf f}lat (0,x) \, .
\end{array} {\bf r}ight.
\end{equation}
Let $ T_\varepsilon$ be the supremum of the times $ T \geq 0 $ such
that (\ref{CPdep}) has a solution $ {\bf u}^\varepsilon\in \mathcal{W}^0_T $.
Classical results \cite{Chem} for fluid equations imply that
$ T_\varepsilon> 0 $. Our aim in this section 5 is to investigate
the singular limit `$ \varepsilon$ {\it goes to zero}'. Such an
analysis must contain at least the following two parts.
\mathfrak{e}dskip
\noindent{a) An {\it existence} result for a time $ T_0 $ which is
independent of the small parameter $ \varepsilon\in \, ]0,\eps_0] $.}
It is required that
\mathfrak{e}dskip
$ \inf \ \{ \, T_\varepsilon\, ; \ \varepsilon\in \, ]0,1] \, \} \, {{\bf r}m\bf g}eq \,
T_0 \, > \, 0 \, . $
\mathfrak{e}dskip
\noindent{When $ \nu > 0 $, or when $ \nu = 0 $ and $ d = 2 $,
we know \cite{Chem}-\cite{Li} that $ T_\varepsilon= + \infty $ so
that $ T_0 = + \infty $.} When $ \nu = 0 $ and $ d {{\bf r}m\bf g}eq 3 $,
nothing guarantees that $ T_0 > 0 $. To our knowledge, this
is an open question.
\mathfrak{e}dskip
\noindent{b) A {\it convergence} result.} The exact solution
$ {\bf u}^\varepsilon$ is not guaranteed to remain close, on the whole interval
$ [0,T_0] $, to the approximate solution $ {\bf u}^\eps_\flat $
given by Theorem \ref{appBKW}. Proving estimates on
$ {\bf u}^\varepsilon- {\bf u}^\eps_\flat $ is a delicate matter.
{\bf s}ubsection{Various types of instabilities}
$ \bullet $ {\bf Obvious instabilities.} The obvious instabilities
are the amplification mechanisms which can be detected by
looking directly at the formal expansions $ {\bf u}^\eps_\flat $.
They imply the nonlinear instability of the Euler equations.
Indeed, fix any $ T > 0 $, any $ {\bf u}_0 \in \mathcal{W}^\infty_T
({\mathbb R}^d) $ which is a solution of $ (\mathcal{E}) $, and any
$ \delta > 0 $. Work on the balls
\mathfrak{e}dskip
$ B_0 ( {\bf u}_0 ; \delta ] \, := \, \bigl \{ \, {\bf u} \in L^2 \, ;
\ {\bf p}arallel {\bf u} (\cdot) - {\bf u}_0 (0,\cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)}
\, \leq \, \delta \, \bigr \} \, . $
\mathfrak{e}dskip
$ B_T ( {\bf u}_0 ; \delta ] := \, \bigl \{ \, {\bf u} \in L^2_T \, ;
\ {\bf p}arallel {\bf u} - {\bf u}_0 {\bf p}arallel_{L^2 ([0,T] \times {\mathfrak{a}thbb R}^d)} \,
\leq \, \delta \, \bigr \} \, . $
\begin{prop} \label{euinstap} For every constant $ C > 0 $, there
are small data
\mathfrak{e}dskip
$ ({{\bf r}m\bf h},\tilde {{\bf r}m\bf h}) \in \bigl( \, B_0 ( {\bf u}_0 ; \delta ]
\cap H^\infty \, \bigr)^2 \, , \qquad ({{\bf r}m\bf f}, \tilde {{\bf r}m\bf f})
\in \bigl( \, B_T ( {\bf u}_0 ; \delta ] \cap \mathfrak{a}thcal{W}^\infty_T
\, \bigr)^2 $
\mathfrak{e}dskip
\noindent{so that the Cauchy problems}
\mathfrak{e}dskip
$ {\bf p}art_t {\bf u} + ( {\bf u} \cdot \nabla ) \, {\bf u} + \nabla {\bf p} \, =
\, {{\bf r}m\bf f} \, , \qquad \Div \ {\bf u} \, = \, 0 \, , \qquad {\bf u} (0 ,
\cdot) = {{\bf r}m\bf h}(\cdot) \, , $
{\bf s}mallskip
$ {\bf p}art_t \tilde {\bf u} + ( \tilde {\bf u} \cdot \nabla ) \, \tilde {\bf u}
+ \nabla \tilde {\bf p} \, = \, \tilde {{\bf r}m\bf f} \, , \qquad \Div \ \tilde
{\bf u} \, = \, 0 \, , \qquad \tilde {\bf u} (0 , \cdot) = \tilde {{\bf r}m\bf h} (\cdot) \, , $
\mathfrak{e}dskip
\noindent{have solutions $ ({\bf u} , \tilde {\bf u}) \in B_T ({\bf u}_0;\delta]^2 $
and there is $ t \in \, ]0,T] $ such that}
\begin{equation} \label{contrsta}
\begin{array}{rl}
\ {\bf p}arallel ({\bf u} - \tilde {\bf u})(t, \cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \
{{\bf r}m\bf g}eq \, C & \! \! \! \bigl( \, {\bf p}arallel {{\bf r}m\bf h} - \tilde {{\bf r}m\bf h} {\bf p}arallel_{L^2
({\mathfrak{a}thbb R}^d)} \\
\ & + \, {{\bf r}m\bf h}box{$\int_0^t$} \, {\bf p}arallel ({{\bf r}m\bf f} - \tilde {{\bf r}m\bf f}) (s,\cdot)
{\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \ ds \, \bigr) \, .
\end{array}
\end{equation}
\end{prop}
\mathfrak{e}dskip
\noindent{Inequalities such as (\ref{contrsta}) are well-known.} In
general \cite{CGM}-\cite{FSV}-\cite{Gr}, the proof is
achieved in two steps. First detect equilibria where instability
arises in the discrete spectrum. Then establish that linearized
instability implies nonlinear instability. The procedure we adopt
below is different. We just look at approximate solutions like
$ {\bf u}^\eps_\flat $. This yields a simpler proof of (\ref{contrsta}).
\mathfrak{e}dskip
\noindent{\em \underline{Proof of Proposition \ref{euinstap}}.}
Take $ l=2 $ and $ N \geq (8+d) $. Consider two sets of initial data
\mathfrak{e}dskip
$ \tilde U^1_k (0,x,\theta) \, , \qquad {\bf v}arphi^1_k (0,x) \, ,
\qquad 1 \leq k \leq N \, , $
\mathfrak{e}dskip
$ \tilde U^2_k (0,x,\theta) \, , \qquad {\bf v}arphi^2_k (0,x) \, ,
\qquad 1 \leq k \leq N \, . $
\mathfrak{e}dskip
\noindent{Fix these expressions in the following way}
\mathfrak{e}dskip
$ \tilde U^1_1 (0,\cdot) \equiv \tilde U^2_1 (0,\cdot) \, ,
\quad \ {\bf v}arphi^1_1 (0,\cdot) \equiv {\bf v}arphi^2_1 (0,\cdot)
\equiv 0 \, , \quad \ {\bf v}arphi^1_2 (0,\cdot) \equiv {\bf v}arphi^2_2
(0,\cdot) \equiv 0 \, . $
\mathfrak{e}dskip
\noindent{It implies that}
\mathfrak{e}dskip
$ \tilde U^1_1 (t,\cdot) \equiv \tilde U^2_1 (t,\cdot) \, ,
\qquad {\bf v}arphi^1_1 (t,\cdot) \equiv {\bf v}arphi^2_1 (t,\cdot)
\equiv 0 \, , \qquad {{\bf r}m\bf f}orall \, t \in [0,T] \, . $
\mathfrak{e}dskip
\noindent{Adjust $ \tilde U^1_2 (0,\cdot) $ and $ \tilde
U^2_2 (0,\cdot) $ so that}
\mathfrak{e}dskip
$ {\bf p}art_t \, ( {\bf v}arphi^1_2 - {\bf v}arphi^2_2 ) (0,\cdot) \, = \, - \,
\nabla {\bf v}arphi_0 \cdot \langle \tilde U^1_2 - \tilde U^2_2 {\bf r}angle
(0,\cdot) \, \not \equiv \, 0 \, . $
\mathfrak{e}dskip
\noindent{Therefore, we are sure to find some $ t > 0 $
such that $ ( {\bf v}arphi^1_2 - {\bf v}arphi^2_2 ) (t,\cdot)
\not \equiv 0 $.} It follows that
\begin{equation} \label{difama}
\begin{array} {ll}
U^1_1 (t,x,\theta) \! \! \! & = \, \tilde U^1_1 \bigl(
t, x, \theta + {\bf v}arphi^1_2( t, x) \bigr) \\
\ & \not \equiv \, U^2_1 (t, x,\theta) \, = \, \tilde
U^1_1 \bigl(t, x,\theta + {\bf v}arphi^2_2 (t,x) \bigr) \, .
\qquad \qquad \
\end{array}
\end{equation}
Denote by $ {\bf u}^{\varepsilon1}_\flat $ and $ {\bf u}^{\varepsilon2}_\flat $
the approximate solutions built with the profiles
$ \{ U^1_k \}_k $ and $\{ U^2_k \}_k $. The associated
error terms are $ {{\bf r}m\bf f}^{\varepsilon1}_\flat $ and $ {{\bf r}m\bf f}^{\varepsilon
2}_\flat $.
\mathfrak{e}dskip
\noindent{Proceed by contradiction.} Suppose that
Proposition \ref{euinstap} fails. Then, there are
$ C > 0 $ and $ \eps_1 \in \, ]0,\eps_0 ] $ such
that for all $ \varepsilon\in \, ] 0 ,\eps_1 ] $, we have
$$ \ \begin{array}{rl}
{\bf p}arallel ({\bf u}^{\varepsilon1}_{{\bf r}m\bf f}lat - {\bf u}^{\varepsilon2}_{{\bf r}m\bf f}lat)(t, \cdot)
{\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \ \leq \, C & \! \! \! \bigl( \, {\bf p}arallel
({\bf u}^{\varepsilon1}_{{\bf r}m\bf f}lat - {\bf u}^{\varepsilon2}_{{\bf r}m\bf f}lat)(0, \cdot) {\bf p}arallel_{
L^2 ({\mathfrak{a}thbb R}^d)} \\
\ & + \, {{\bf r}m\bf h}box{$ \int_0^t $} \, {\bf p}arallel ({{\bf r}m\bf f}^{\varepsilon1}_{{\bf r}m\bf f}lat -
{{\bf r}m\bf f}^{\varepsilon2}_{{\bf r}m\bf f}lat) (s,\cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \ d s \,
\bigr) \, .
\end{array} $$
Divide this inequality by $ {\bf s}qrt \varepsilon$. By construction,
we have
\mathfrak{e}dskip
$ \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ {\bf p}arallel ({\bf u}^{\varepsilon1}_{{\bf r}m\bf f}lat -
{\bf u}^{\varepsilon2}_{{\bf r}m\bf f}lat)(0, \cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \, = \,
\bigcirc ( {\bf s}qrt \varepsilon) \, , $
\mathfrak{e}dskip
$ \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ {\bf p}arallel ({{\bf r}m\bf f}^{\varepsilon1}_{{\bf r}m\bf f}lat -
{{\bf r}m\bf f}^{\varepsilon2}_{{\bf r}m\bf f}lat)(s, \cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \,
= \, \bigcirc ( {\bf s}qrt \varepsilon) \, , \qquad {{\bf r}m\bf f}orall \, s
\in [0,t] \, . $
\mathfrak{e}dskip
$ \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ {\bf p}arallel ({\bf u}^{\varepsilon1}_{{\bf r}m\bf f}lat -
{\bf u}^{\varepsilon2}_{{\bf r}m\bf f}lat)(t, \cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} $
{\bf s}mallskip
\hfill $ = \, \parallel (U^1_1 - U^2_1) \bigl( t,\cdot, \eps^{-1} \,
\varphi^\eps_g (t, \cdot) \bigr) \parallel_{L^2 ({\mathbb R}^d)} \, + \,
\bigcirc ( \sqrt \varepsilon) \, . \qquad $
\mathfrak{e}dskip
\noindent{It follows that}
$$ \ \lim_{ \varepsilon\, \longrightarrow \, 0 } \ \ \eps^{-
{{\bf r}m\bf f}rac{1}{2}} \ {\bf p}arallel ({\bf u}^{\varepsilon1}_{{\bf r}m\bf f}lat - {\bf u}^{\varepsilon
2}_{{\bf r}m\bf f}lat)(t, \cdot) {\bf p}arallel_{L^2 ({\mathfrak{a}thbb R}^d)} \, = \,
{\bf p}arallel (U^1_1 - U^2_1)(t, \cdot) {\bf p}arallel_{L^2
({\mathfrak{a}thbb R}^d \times {\mathfrak{a}thbb T})} \, = \, 0 $$
which is inconsistent with ({\bf r}ef{difama}). {{\bf r}m\bf h}fill $ \Box $
\noindent{{\it Remark 5.1.1:}} In the proof presented
above, the amplification is due to $ \varphi_2 $, which is the
principal term in the adjusting phase. The presence of $ \varphi_2 $
becomes significant in comparison with the other effects when
\mathfrak{e}dskip
$ {\bf v}ert \, \tilde U^1_1 \bigl(t,x,\theta + {\bf v}arphi^1_2 (t,x) \bigr) -
\tilde U^2_1 \bigl(t,x,\theta + {\bf v}arphi^2_2 (t,x) \bigr) \, {\bf v}ert \,
{\bf s}im \, c \ t \, {{\bf r}m\bf g}g \, {\bf s}qrt \varepsilon\, . $
\mathfrak{e}dskip
\noindent{This requires waiting for a lapse of time bigger than
$ \sqrt \varepsilon$.} This delay can be reduced by adapting the
above procedure to the cases $ l > 2 $. \hfill $ \triangle $
\mathfrak{e}dskip
\noindent{Obvious instabilities have an important consequence.}
To describe the related amplifications, it is necessary to introduce
new quantities which correspond to the phase shifts. In other words,
the only way to get $ L^2 \, - $estimates is to {\it blow up} the
state variables. This principle is detailed in \cite{Che} in the
case of compressible Euler equations.
\noindent{$ \bullet $ {\bf Hidden instabilities.}} Hidden
instabilities are the amplifications which are not detected by
the monophase description of Section 4. On the other hand,
they can be revealed by a multiphase analysis. Introduce a second
phase $ \psi_0 (t,x) \in \mathcal{W}^\infty_T $ such that
\mathfrak{e}dskip
$ {\bf p}art_t {\bf p}si_0 + ({\bf u}_0 \cdot \nabla) \, {\bf p}si_0 = 0 \, ,
\qquad \nabla {\bf p}si_0 {\bf w}edge \nabla {\bf v}arphi_0 \not \equiv 0 $
\mathfrak{e}dskip
\noindent{and disturb the Cauchy data of ({\bf r}ef{CPdep})
according to}
\mathfrak{e}dskip
$ {\bf u}^\varepsilon(0,x) \, = \, {\bf u}^\eps_{{\bf r}m\bf f}lat (0,x) \, + \, \eps^{{{\bf r}m\bf f}rac
{M}{l}} \ U \bigl(x, \eps^{-1} \ {\bf p}si_0 (0,x) \bigr) \, , \qquad
M {{\bf r}m\bf g}g N \, . $
\mathfrak{e}dskip
\noindent{The small oscillations contained in the perturbation
of size $ \eps^{\frac{M}{l}} $ are not always kept under control.
They interact with $ {\bf u}^\eps_\flat $ and with themselves. They can
be organized in such a way as to affect the leading oscillation $ {\bf u}^\eps_\flat $.
Concretely (see \cite{CGM}), we can adjust $ U $ and $ \psi_0 $ so that
there exist a constant $ C > 0 $ and times $ t_\varepsilon\in \,
]0,T_\eps[ $ going to zero with $ \varepsilon$ such that
\mathfrak{e}dskip
$ {\bf p}arallel ({\bf u}^\varepsilon- {\bf u}^\eps_{{\bf r}m\bf f}lat) (t,\cdot) {\bf p}arallel_{
L^2 ({\mathfrak{a}thbb R}^d)} \, {{\bf r}m\bf g}eq \, C \ \eps^{{{\bf r}m\bf f}rac{1}{2}} \, , \qquad
{{\bf r}m\bf f}orall \, \varepsilon\in \, ]0,\eps_0] \, . $
\mathfrak{e}dskip
\noindent{The power $ \eps^{\frac{M}{l}} $ at the time $ t=0 $
is turned into $ \eps^{\frac{1}{2}} $ at the time $ t=t_\varepsilon$.}
Such amplifications occur for any choice of $ l \geq 2 $.
They imply lower bounds like (\ref{contrsta}). However, the underlying
mechanisms are distinct from the preceding ones. They are driven
by oscillations which are transverse to $ \varphi_0 $ and whose
wavelengths are $ \bigcirc(\varepsilon) $. They are cancelled by the
addition of the anisotropic viscosity $ \nu \ E^{\varepsilon l}_{\flat 0} $.
{\bf s}ubsection{Exact solutions}
$ \bullet $ {\bf Statement of the result.} The first information
brought by the BKW construction is that the mean values $ \bar U_k $ and
the oscillations $ U^*_k $ of the profiles $ U_k $ do not play the same
role. This fact is well illustrated by the rules of transformation
(\ref{rulesoftr}). It means that we have to distinguish these quantities
if we want to go further in the analysis. This can be done by involving
the variables $ (t,x,\theta) $, that is, by working at the level of
(\ref{eclacpa}). Dealing with $ (u^\varepsilon, p^\eps) (t,x,\theta) $
instead of $ ({\bf u}^\varepsilon, {\bf p}^\eps) (t,x) $ is usual in nonlinear
geometric optics \cite{Sc}. It allows one to single out the terms apt to
induce instabilities.
\mathfrak{e}dskip
\noindent{Select some approximate solution $ (u^\eps_{(2,N)} ,
p^\eps_{(2,N)} ) $ with source term $ f^\eps_{(2,N)} $ given by
Proposition \ref{complement} and look at}
\begin{equation} \label{eclacpasui}
\ \left \lbrace \begin{array} {l}
\mathfrak{d}_{0,\eps} \, u^\varepsilon+ ( u^\varepsilon\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \,
u^\varepsilon+ \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, p^\varepsilon\, = \, \nu \ \varepsilon\ \Delta \,
u^\varepsilon\, , \qquad \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, u^\varepsilon= 0 \, , \\
u^\varepsilon(0,x,\theta) \, = \, u^\eps_{(2,N)} (0,x,\theta) \, .
\end{array} {\bf r}ight.
\end{equation}
\begin{theo} \label{ciprin} Fix any integer $ N > d + 8 $. There
are $ \eps_N \in \, ]0,1] $ and $ \nu_N > 0 $ such that for all
$ \varepsilon\in \, ]0,\eps_N] $ and for all $ \nu > \nu_N $ the Cauchy
problem ({\bf r}ef{eclacpasui}) has a unique solution $ (u^\varepsilon, p^\eps) $
defined on the strip $ [0,T] \times {\mathfrak{a}thbb R}^d \times {\mathfrak{a}thbb T} $. Moreover
\mathfrak{e}dskip
$ \bigl \lbrace u^\varepsilon- u^\eps_{(2,N)} \bigr {\bf r}brace_\varepsilon\,
= \, \bigcirc (\eps^{{{\bf r}m\bf f}rac{N}{2}-d-4}) \, . $
\end{theo}
{\bf s}mallskip
\noindent{\em \underline{Proof of Theorem \ref{ciprin}}.} The system
(\ref{eclacpasui}) is equivalent to
\begin{equation} \label{singudepmoyos}
\left \{ \begin{array}{l}
{\bf p}art_t \bar u^\varepsilon+ (\bar u^\varepsilon\cdot \nabla) \bar u^\varepsilon
+ \Div \, \langle u^{\eps*} \otimes u^{\eps*} {\bf r}angle +
\nabla \bar p^\varepsilon= \nu \ \Delta_x \,
\bar u^\varepsilon\, , \\
{\bf p}art_{0,\eps} u^{\eps*} + (\bar u^\varepsilon\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \,
u^{\eps*} + \varepsilon\ (u^{\eps*} \cdot \nabla ) \,
\bar u^\varepsilon\\
\qquad \quad \, + \, \left \lbrack ( u^{\varepsilon*} \cdot
\mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, u^{\eps*} {\bf r}ight {\bf r}brack^* +
\mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat \, p^{\varepsilon*} = \nu \ \varepsilon\
\Delta \, u^{\varepsilon*} \, , \qquad \ \\
\Div \, \bar u^\varepsilon= \md \mi \mv^\eps_{{\bf r}m\bf f}lat \, u^{\varepsilon*} = 0 \, .
\end{array} {\bf r}ight.
\end{equation}
The equation (\ref{singudepmoyos}) is also equivalent to solving
the Cauchy problem
\begin{equation} \label{singudepmoyosmod}
\left \{ \begin{array}{l}
P \, {\bf p}art_t \bar u^\varepsilon+ P \, \left \lbrack (\bar u^\varepsilon\cdot \nabla)
\bar u^\varepsilon{\bf r}ight {\bf r}brack + P \, \left \lbrack \Div \, \langle
u^{\eps*} \otimes u^{\eps*} {\bf r}angle {\bf r}ight {\bf r}brack =
\nu \ \Delta_x \, \bar u^\varepsilon\, , \qquad \\
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, {\bf p}art_{0,\eps} u^{\eps*} + \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left \lbrack
(\bar u^\varepsilon\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, u^{\eps*} {\bf r}ight
{\bf r}brack + \varepsilon\ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left \lbrack (u^{\eps*}
\cdot \nabla ) \, \bar u^\varepsilon{\bf r}ight {\bf r}brack \\
\qquad \quad \, + \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left \lbrack
( u^{\varepsilon*} \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, u^{\eps*}
{\bf r}ight {\bf r}brack^* = \nu \ \varepsilon\ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \
\Delta \, u^{\varepsilon*} \, ,
\end{array} {\bf r}ight.
\end{equation}
associated with the compatible initial data
\mathfrak{e}dskip
$ \bar u^\varepsilon(0, \cdot) = P \, \bar u^\eps_{{\bf r}m\bf f}lat (0, \cdot) \, ,
\qquad u^{\eps*} (0, \cdot) = \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, u^{\eps*}_{{\bf r}m\bf f}lat
(0, \cdot) \, . $
\mathfrak{e}dskip
\noindent{$ \bullet $ {\bf Blow up.}} Introduce the new unknown
\mathfrak{e}dskip
$ d^\varepsilon\, = \, {}^t ( \bar d^\varepsilon, d^{\eps*}) \, = \, {}^t
( \, P \, \bar d^\varepsilon\, , \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, d^{\eps*} \, ) $
{\bf s}mallskip
$ \quad \ := \,
\eps^{- \iota} \ \bigl( \, \eps^{-{{\bf r}m\bf f}rac{1}{l}} \ ( \bar u^\varepsilon
- \bar u^\eps_{{\bf r}m\bf f}lat) \, , \, ( u^{\varepsilon*} - u^{\varepsilon*}_{{\bf r}m\bf f}lat )
\bigr) \, , \qquad {{\bf r}m\bf f}lat = (2,N) \, . $
\mathfrak{e}dskip
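\noindent{In other words, since $ l = 2 $ and $ \flat = (2,N) $, the exact solution
is sought under the form}
\medskip
$ \bar u^\varepsilon \, = \, \bar u^\eps_\flat \, + \, \eps^{\, \iota + \frac{1}{2}}
\ \bar d^\varepsilon \, , \qquad u^{\varepsilon *} \, = \, u^{\varepsilon *}_\flat
\, + \, \eps^{\, \iota} \ d^{\varepsilon *} \, . $
\medskip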
\noindent{This transformation agrees with (\ref{rulesoftr}).}
The weight $ \eps^{-\frac{1}{l}} $ in front of $ ( \bar u^\varepsilon
- \bar u^\eps_\flat) $ induces a shift on the index $ l $.
The functions $ \bar U_l $ and $ U^*_{l-1} $ now play the same
role with respect to the amplifications. To write the equation
on $ d^\varepsilon$ in an abbreviated form, we need some notation.
Quasilinear terms
$$ \ \left. \begin{array}{l}
\mathfrak{a}thcal{L}_{11}^\varepsilon\, \bar d \, := \, P \, \bigl \lbrack \,
(\bar u^\eps_{{\bf r}m\bf f}lat \cdot \nabla) \bar d \, \bigr {\bf r}brack \, ,
\qquad \qquad \quad \ \\
\mathfrak{a}thcal{L}_{12}^\varepsilon\, d^* \, := \, P \, \bigl \lbrack \, \Div
\, \langle \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ u^{\varepsilon*}_{{\bf r}m\bf f}lat \otimes
d^* \, + \, d^* \otimes \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ u^{\varepsilon
*}_{{\bf r}m\bf f}lat {\bf r}angle \, \bigr {\bf r}brack \, , \\
\mathfrak{a}thcal{L}_{21}^\varepsilon\, \bar d \, := \, \eps^{{{\bf r}m\bf f}rac{1}{2}} \
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \ \bigl \lbrack \, ( u^{\varepsilon*}_{{\bf r}m\bf f}lat
\cdot \nabla) \bar d \, \bigr {\bf r}brack \, , \\
\mathfrak{a}thcal{L}_{22}^\varepsilon\, d^* \, := \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left
\lbrack \, (\bar u^\eps_{{\bf r}m\bf f}lat \cdot \nabla ) d^* \,
{\bf r}ight {\bf r}brack \, + \, \eps^{-1} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left
\lbrack \, ( u^{\varepsilon*}_{{\bf r}m\bf f}lat \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \,
d^* \, {\bf r}ight {\bf r}brack^* \qquad \qquad \qquad \\
\qquad \qquad \ + \, \eps^{-1} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
\left \lbrack \, ({\bf p}artial_t {\bf v}arphi^\eps_{{\bf r}m\bf f}lat + \bar
u^\eps_{{\bf r}m\bf f}lat \cdot \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat ) \
{\bf p}artial_\theta d^* \, {\bf r}ight {\bf r}brack \, .
\end{array} {\bf r}ight. $$
Semilinear terms
$$ \ \left. \begin{array}{l}
A_{11}^\varepsilon\, \bar d \, := \, P \, \bigl \lbrack \,
( \bar d \cdot \nabla) \bar u^\eps_{{\bf r}m\bf f}lat \, \bigr
{\bf r}brack \, , \qquad \\
A_{21}^\varepsilon\, \bar d \, := \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
\bigl \lbrack \, ( \bar d \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \,
( \eps^{- {{\bf r}m\bf f}rac{1}{2}} \ u^{\varepsilon*}_{{\bf r}m\bf f}lat ) \, \bigr
{\bf r}brack \, , \\
A_{22}^\varepsilon\, d^* \, := \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left
\lbrack \, ( d^* \cdot \nabla) \, \bar u^\eps_{{\bf r}m\bf f}lat \,
{\bf r}ight {\bf r}brack \, + \, \eps^{-1} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
\left \lbrack \, ( d^* \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \,
u^{\eps*}_{{\bf r}m\bf f}lat \, {\bf r}ight {\bf r}brack^* \, . \qquad
\qquad \quad
\end{array} {\bf r}ight. $$
Small quadratic terms
$$ \ \left. \begin{array}{l}
Q_1^\varepsilon\, := \, \eps^{{{\bf r}m\bf f}rac{3}{2}} \ P \, \bigl
\lbrack \, \Div \ ( \bar d \otimes \bar d ) \bigr
{\bf r}brack \, + \, \eps^{{{\bf r}m\bf f}rac{1}{2}} \ P \, \bigl
\lbrack \, \Div \ \langle d^* \otimes d^* {\bf r}angle \,
\bigr {\bf r}brack \, , \\
Q_2^\varepsilon\, := \, \eps^{{{\bf r}m\bf f}rac{1}{2}} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
\left \lbrack \, (\bar d \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, d^* \,
{\bf r}ight {\bf r}brack \, + \, \eps^{{{\bf r}m\bf f}rac{3}{2}} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \,
\left \lbrack \, (d^* \cdot \nabla ) \, \bar d \,
{\bf r}ight {\bf r}brack \qquad \qquad \qquad \quad \ \ \\
\qquad \quad + \, \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left
\lbrack \, ( d^* \cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, d^* \,
{\bf r}ight {\bf r}brack^* \, .
\end{array} {\bf r}ight. $$
And error terms
\mathfrak{e}dskip
$ er^\eps_1 \, := \, \eps^{-\iota-{{\bf r}m\bf f}rac{3}{2}} \ P \
\bar f^\eps_{{\bf r}m\bf f}lat \, , \qquad er^\eps_2 \, := \,
\eps^{-\iota-1} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \ f^{\varepsilon*}_{{\bf r}m\bf f}lat \, . $
\mathfrak{e}dskip
\noindent{With these conventions, the expression $ d^\varepsilon$
is subject to}
\begin{equation} \label{interface}
\ \left \{ \begin{array}{l}
P \, {\bf p}artial_t \bar d^\varepsilon+ \mathfrak{a}thcal{L}^\eps_{11} \, \bar d^\varepsilon
+ \mathfrak{a}thcal{L}^\eps_{12} \, d^{\varepsilon*} + A^\eps_{11} \, \bar d^\varepsilon\\
\qquad \quad + \, \eps^{\iota - 1} \ Q^\eps_1 + er^\eps_1
\, = \, \nu \ P \ \Delta_x \bar d^\varepsilon\, , \\
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, {\bf p}artial_t d^{\varepsilon*} + \mathfrak{a}thcal{L}^\eps_{21} \,
\bar d^\varepsilon+ \mathfrak{a}thcal{L}^\eps_{22} \, d^{\varepsilon*} + A^\eps_{21} \,
\bar d^\varepsilon+ A^\eps_{22} \, d^{\varepsilon*} \qquad \qquad
\qquad \quad \ \\
\qquad \quad + \, \eps^{\iota - 1} \ Q^\eps_2 + er^\eps_2
\, = \, \nu \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \ \Delta \, d^{\varepsilon*} \, .
\end{array} {\bf r}ight.
\end{equation}
\noindent{Energy estimates are obtained at the level
of ({\bf r}ef{interface}).} Below, we just sketch the related
arguments which are classical.
\noindent{$ \bullet $ {\bf $ L^2 - \, $estimates for the
linear problem.}} The linearized Euler
equations along the approximate solution $ u^\eps_\flat $
are obtained by removing $ Q^\eps_1 $ and $ Q^\eps_2 $
from (\ref{interface}). This yields a system which, at
first sight, involves coefficients which are singular
in $ \varepsilon$. In fact, this is not the case. Let us
explain why.
\mathfrak{e}dskip
\noindent{This is clear for $ \mathfrak{a}thcal{L}^\eps_{11} $, $ \mathfrak{a}thcal{L}^\eps_{21} $
and $ A^\eps_{11} $.}
\mathfrak{e}dskip
\noindent{Since $ u^{\eps*}_{{\bf r}m\bf f}lat = \bigcirc( \eps^{{{\bf r}m\bf f}rac{1}{l}} ) $,
this is also true for $ \mathfrak{a}thcal{L}^\eps_{12} $ and $ A^\eps_{21} $.}
\mathfrak{e}dskip
\noindent{The contributions which in $ \mathfrak{a}thcal{L}^\eps_{22} $
have $ \eps^{-1} $ in factor give no trouble since}
\mathfrak{e}dskip
$ {\bf p}art_t {\bf v}arphi^\eps_{{\bf r}m\bf f}lat + \bar u^\eps_{{\bf r}m\bf f}lat \cdot \nabla
{\bf v}arphi^\eps_{{\bf r}m\bf f}lat = \bigcirc ( \eps^{{{\bf r}m\bf f}rac{N}{2}} ) = \bigcirc (
\eps^{d+4} ) \, , \qquad u^{\varepsilon*}_{{\bf r}m\bf f}lat \cdot \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat
= v^{\varepsilon*}_{{\bf r}m\bf f}lat = \bigcirc ( \eps^{1 + {{\bf r}m\bf f}rac{1}{l}} ) \, . $
\mathfrak{e}dskip
\noindent{Now, look at $ A^\eps_{22} $.} Recall that $ d^{\varepsilon*} =
\mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, d^{\varepsilon*} $ which means that
\mathfrak{e}dskip
$ \eps^{-1} \ d^{\varepsilon*} \cdot \nabla {\bf v}arphi^\eps_{{\bf r}m\bf f}lat \, = \,
- \, \Div \, d^{\varepsilon*} \, . $
\mathfrak{e}dskip
\noindent{Therefore}
\mathfrak{e}dskip
$ \eps^{-1} \ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat \, \left \lbrack \, ( d^{\varepsilon*}
\cdot \mg \mr \ma \md^\eps_{{\bf r}m\bf f}lat) \, u^{\eps*}_{{\bf r}m\bf f}lat \, {\bf r}ight {\bf r}brack^* \,
= \, T^\varepsilon(t,x,\nabla) \, d^{\varepsilon*} \, , $
\mathfrak{e}dskip
\noindent{where $ T^\varepsilon$ is some differential operator of
order $ 1 $ with bounded coefficients.}
\mathfrak{e}dskip
\noindent{Observe that these manipulations and the blow up procedure
induce a {\it loss} of hyperbolicity.} When $ \nu = 0 $, this is the
source of hidden instabilities. When $ \nu \geq \nu_N > 0 $ with
$ \nu_N $ large enough, this can be compensated by the viscosity.
This is the key to the $ L^2 - \, $estimates.
\noindent{$ \bullet $ {\bf The nonlinear problem and higher
order estimates.}} Let $ \sigma $ be the smallest integer such
that $ \sigma \geq \frac{d+3}{2} $. If the life span $ T_\varepsilon$
of the exact solution $ u^\varepsilon$ is finite, we must have
$$ \lim_{t \, \longrightarrow \, T_\eps} \ \ {\bf p}arallel u^\varepsilon
(t,\cdot) {\bf p}arallel_{H^{\bf s}igma} \, = \, + \infty \, . \qquad
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $$
Thus, Theorem \ref{ciprin} is a consequence of the following
bound
\mathfrak{e}dskip
$ {\bf s}up \ \bigl \{ \, {\bf p}arallel u^\varepsilon(t,\cdot) {\bf p}arallel_{
H^{\bf s}igma} \, ; \ t \in [0 , \mathfrak{i}n \, (T_\eps,T) ] \, \bigr \}
\, \leq \, C \, < \, \infty \, . $
\mathfrak{e}dskip
\noindent{Consider the set}
\mathfrak{e}dskip
$ \mathfrak{a}thcal{Z}_\varepsilon\, := \, \bigl \{ \, \mathfrak{d}_{0,\eps} \, , \, \cdots \, ,
\, \mathfrak{d}_{d,\eps} \, , \, {\bf p}artial_\theta \, \bigr \} \, . $
\mathfrak{e}dskip
\noindent{Extract the operators}
\mathfrak{e}dskip
$ \mathfrak{a}thcal{Z}^k_\varepsilon\, := \, \mathfrak{a}thcal{Z}_1 \circ \, \cdots \, \circ \mathfrak{a}thcal{Z}_k \, ,
\qquad \mathfrak{a}thcal{Z}_j \in \mathfrak{a}thcal{Z}_\varepsilon\, , \qquad k \leq {\bf s}igma \, . $
\mathfrak{e}dskip
\noindent{It suffices to show that}
$$ \ \mathfrak{a}x_{\, 0 \leq k \leq {\bf s}igma} \ \ {\bf s}up \ \bigl \{ \,
{\bf p}arallel \eps^{-k} \ \mathfrak{a}thcal{Z}^k_\varepsilon\ u^\varepsilon(t,\cdot)
{\bf p}arallel_{L^2} \, ; \ t \in [0 , \mathfrak{i}n \, (T_\eps,T) ] \,
\bigr \} \, \leq \, C \, < \, \infty \, . \ $$
Pick some $ \mathfrak{a}thcal{Z}^k_\varepsilon$ with $ k \leq {\bf s}igma $. Apply
$ \mathfrak{a}thcal{Z}^k_\varepsilon$ on the left of ({\bf r}ef{interface}). Use
the point i) of Lemma {\bf r}ef{projeler} to pass through
$ \mathfrak{a}thfrak{P}^\eps_{{\bf r}m\bf f}lat $. Then, observe that the commutator
of two vector fields in $ \mathfrak{a}thcal{Z}^\varepsilon$ is a linear
combination of elements of $ \mathfrak{a}thcal{Z}^\varepsilon$ with
coefficients in $ C^\infty $. Thus, we get an
equation on $ \mathfrak{a}thcal{Z}^k_\varepsilon\, d^{\varepsilon*} $.
\mathfrak{e}dskip
\noindent{The linear part is managed as in the preceding
paragraph.} Take $ \iota = 1 $. The contributions due to
$ Q^\eps_1 $ and $ Q^\eps_2 $ are controlled by way of the
a priori estimate and the viscosity. The condition on $ N $
ensures that
\mathfrak{e}dskip
$ {{\bf r}m\bf f}rac{N}{2} - \iota - {{\bf r}m\bf f}rac{3}{2} - {\bf s}igma \, {{\bf r}m\bf g}eq \, 0 \, . $
\mathfrak{e}dskip
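\noindent{{\it Remark:} with $ \iota = 1 $, this condition reads $ N \geq 5 + 2 \, \sigma $.
Since $ \sigma $ is the smallest integer satisfying $ \sigma \geq \frac{d+3}{2} $, we have
$ 2 \, \sigma \leq d + 4 $, so the condition holds as soon as $ N \geq d + 9 $, which is
guaranteed by the hypothesis $ N > d + 8 $ of Theorem \ref{ciprin}.}
\medskip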
\noindent{Thereby, the contributions brought by the error
terms $ er^\eps_1 $ and $ er^\eps_2 $ remain bounded in
the procedure.}
\end{document}
\begin{document}
\title{Bitrate-Constrained DRO: \ Beyond Worst Case Robustness To Unknown Group Shifts}
\begin{abstract}
Training machine learning models robust to distribution shifts
is critical for real-world applications.
Some robust training algorithms (\eg Group DRO) specialize to group shifts and require group information on all training points.
Other methods (\eg CVaR DRO) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (\eg when the high loss points are randomly mislabeled training points).
In this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (indicator over group) is simple.
For example, we may expect that group shifts occur along low bitrate features (\eg image background, lighting).
Thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low bitrate features, that need not spend valuable model capacity achieving high accuracy on contrived groups of examples.
Based on this, we consider the two-player game formulation of DRO where the adversary's capacity is bitrate-constrained.
Our resulting practical algorithm, Bitrate-Constrained DRO (\bdro), does not require group information on training samples yet matches the performance of Group DRO on datasets that have training group annotations and that of CVaR DRO on long-tailed distributions.
Our theoretical analysis reveals that in some settings \bdro objective can provably yield statistically efficient and less conservative solutions than unconstrained CVaR DRO.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
Machine learning models may perform poorly when tested on distributions that differ from the training distribution.
A common form of distribution shift is \emph{group} shift, where the source and target differ only in the marginal distribution over finite groups or sub-populations, with no change in group conditionals~\citep{oren2019distributionally,duchi2019distributionally} (\eg when the groups are defined by spurious correlations and the target distribution upsamples the group where the correlation is absent~\cite{sagawa2019distributionally}).
Prior works consider various approaches to address group shift. One solution is to ensure robustness to worst case shifts using distributionally robust optimization (DRO)~\citep{bagnell2005robust,ben2013robust,duchi2016statistics}, which considers a two-player game where a {learner} minimizes risk on distributions chosen by an {adversary} from a predefined uncertainty set.
As the adversary is only constrained to propose distributions that lie within an f-divergence based uncertainty set,
DRO often yields overly conservative (pessimistic) solutions~\citep{hu2018does} and can suffer from statistical challenges~\citep{duchi2019distributionally}. This is mainly because DRO upweights high loss points that may not form a meaningful group in the real world, and may even be \emph{contrived} if the high loss points simply correspond to randomly mislabeled examples in the training set.
Methods like \gdro ~\citep{sagawa2019distributionally} avoid overly pessimistic solutions by assuming knowledge of group membership for each training example. However, these group-based methods provide no guarantees on shifts that deviate from the predefined groups (\eg when there is a new group),
and are not applicable to problems that lack group knowledge. In this work, we therefore ask: \textit{Can we train non-pessimistic robust models without access to group information on training samples?}
We address this question by considering a more nuanced assumption on the structure of the underlying groups.
We assume that, conditioned on the label, group boundaries are realized by high-level features
that depend on a small set of underlying factors (\eg background color, brightness).
This leads to simpler group functions with large margin and simple decision boundaries between groups (Figure~\ref{fig:intro-figure} \emph{(left)}).
Invoking the principle of minimum description length~\citep{grunwald2007minimum}, restricting our adversary to functions that satisfy this assumption corresponds to a bitrate constraint.
In DRO, the adversary upweights points with higher losses under the current learner, which in practice often correspond to examples that belong to a rare group, contain complex patterns, or are mislabeled~\citep{carlini2019distribution,toneva2018empirical}.
Restricting the adversary's capacity prevents it from upweighting individual hard or mislabeled examples (as they cannot be identified with simple features), and biases it towards identifying erroneous data points misclassified by simple features.
This also complements the failure mode of neural networks trained with stochastic gradient descent (SGD) that rely on simple spurious features which correctly classify points in the \emph{majority} group but may fail on \emph{minority} groups ~\citep{blodgett2016demographic}.
The main contribution of this paper is Bitrate-Constrained DRO (\bdro),
a supervised learning procedure that provides robustness to distribution shifts along groups realized by simple functions.
Despite not using group information on training examples, we demonstrate that \bdro can match the performance of methods requiring them.
We also find that \bdro is more successful in identifying true minority training points, compared to unconstrained DRO.
This indicates that not optimizing for performance on contrived worst-case shifts can reduce the pessimism inherent in DRO.
It further validates: (i) our assumption on the simple nature of group shift;
and (ii) that our bitrate constraint meaningfully structures the uncertainty set to be robust to such shifts.
As a consequence of the constraint, we also find that \bdro is robust to random noise in the training data~\citep{song2022learning}, since it cannot form ``groups'' entirely based on randomly mislabeled points with low bitrate features. This is in contrast with existing methods that use the learner's training error to up-weight arbitrary sets of difficult training points~\citep[\eg][]{liu2021just,levy2020large}, which we show are highly susceptible to label noise (see Figure~\ref{fig:intro-figure}~\emph{(right)}).
Finally, we theoretically analyze our approach---characterizing how the degree of constraint on the adversary can effect worst risk estimation and excess risk (pessimism) bounds, as well as convergence rates for specific online solvers.
\begin{figure}
\caption{\footnotesize
\textbf{Bitrate-Constrained DRO}
\label{fig:intro-figure}
\end{figure}
\section{Related Work}
\label{sec:relwork}
Prior works in robust ML~\citep[e.g.,][]{li2018learning,lipton2018detecting,goodfellow2014explaining} address various forms of adversarial or structured shifts. We specifically review prior work on robustness to group shifts. While those based on DRO
optimize for worst-case shifts in an explicit uncertainty set, the robust set is implicit for some others, with most using some form of importance weighting.
\textbf{Distributionally robust optimization (DRO).}
DRO methods generally optimize for worst-case performance on joint $(\rvx, \ry)$ distributions that lie in an $f$-divergence ball (uncertainty set) around the training distribution~\citep{ben2013robust,rahimian2019distributionally,bertsimas2018data,blanchet2019quantifying,miyato2018virtual,duchi2016statistics,duchi2021uniform}.
\citet{hu2018does} highlights that the conservative nature of DRO may lead to degenerate solutions when the unrestricted adversary uniformly upweights all misclassified points. \citet{sagawa2019distributionally} proposes to address this by limiting the adversary to shifts that only differ in marginals over predefined groups. However, in addition to it being difficult to obtain this information,
\citet{kearns2018preventing} raise ``gerrymandering'' concerns with notions of robustness that fix a small number of groups apriori. While they propose a solution that looks at exponentially many subgroups defined over protected attributes, our method does not assume access to such attributes and aims to be fair on them as long as they are realized by simple functions.
Finally, \citet{zhai2021boosted} avoid conservative solutions by solving the DRO objective over randomized predictors learned through boosting. We consider deterministic and over-parameterized learners and instead constrain the adversary's class.
\textbf{Constraining the DRO uncertainty set.}
In the marginal DRO setting, \citet{duchi2019distributionally} limit the adversary via easier-to-control reproducing kernel hilbert spaces (RKHS) or bounded H\"{o}lder continuous functions \citep{liu2014robust,wen2014robust}.
While this reduces the statistical error in worst risk estimation, the size of the uncertainty set (scales with the data) remains too large to avoid cases where an adversary can re-weight mislabeled and hard examples from the majority set~\citep{carlini2019distribution}.
In contrast, we restrict the adversary even for large datasets where the estimation error would be low,
as this would reduce excess risk when we only care about robustness to rare sub-populations defined by simple functions. Additionally, while their analysis and method prefers the adversary's objective to have a strong dual, we show empirical results on real-world datasets and generalization bounds where the adversary's objective is not necessarily convex.
\textbf{Robustness to group shifts without demographics.}
Recent works~\citep{sohoni2020no,creager2021environment,bao2022learning} that aim to achieve group robustness without access to group labels employ various heuristics where the robust set is implicit while others require data from multiple domains~\citep{arjovsky2019invariant,yao2022improving} or ability to query test samples~\citep{lee2022diversify}.
\citet{liu2021just} use training losses for a heavily regularized model trained with empirical risk minimization (ERM) to directly identify minority data points with higher losses and re-train on the dataset that up-weights the identified set. \citet{nam2020learning} take a similar approach. Other methods~\citep{idrissi2022simple} propose simple baselines that subsample the majority class in the absence of group demographics and the majority group in its presence.
\citet{hashimoto2018fairness} find that DRO over a $\chi^2$-divergence ball can reduce the otherwise increasing disparity of per-group risks in a dynamical system. Since it does not use features to upweight points (as \bdro does), it is vulnerable to label noise. The same can be said of some other works (\eg \cite{liu2021just,nam2020learning}).
\textbf{Importance weighting in deep learning.} Finally, numerous works~\citep{duchi2016statistics,levy2020large,lipton2018detecting,oren2019distributionally} enforce robustness by re-weighting losses on individual data points. Recent investigations~\citep{soudry2018implicit,byrd2019effect,lu2022importance} reveal that such objectives have little impact on the learned solution in interpolation regimes. One way to avoid this pitfall is to train with heavily regularized models~\citep{sagawa2019distributionally, sagawa2020investigation} and employ early stopping. Another way is to subsample certain points, as opposed to up-weighting~\citep{idrissi2022simple}. In this work, we use both techniques while training our objective and the baselines, ensuring that the regularized class is robust to shifts under misspecification~\citep{wen2014robust}.
\section{Preliminaries}
\label{sec:prelim}
We introduce the notation we use in the rest of the paper and describe the DRO problem. In the following section, we will formalize our assumptions on the nature of the shift before introducing our optimization objective and algorithm.
\textbf{Notation.} With covariates $\gX \subset \Real^d$ and labels $\gY$, the given source $P$ and unknown true target $Q_0$ are measures over the measurable space $(\gX \times \gY, \Sigma)$ and have densities $p$ and $q_0$ respectively (w.r.t. base measure $\mu$).
The learner's choice is a hypothesis $h: \gX \mapsto \gY$ in class $\gH \subset L^2(P)$, and the adversary's action in standard DRO is a target distribution $Q$ in set $\gQ_{P, \kappa} \coloneqq \{Q : Q \ll P,\, D_{f}(Q\, ||\, P) \leq \kappa\}$.
Here, $D_f$ is the $f$-divergence between $Q$ and $P$ for a convex function $f$\footnote{For \eg $\kl{Q}{P}$ can be derived with $f(x) = x \log x$ and for Total Variation $f(x) = |x-1|/2$.} with $f(1)=0$. An equivalent action space for the adversary is the set of re-weighting functions:
\begin{align}
\gW_{P,\kappa} = \{w: \gX \times \gY \mapsto \Real \st w \; \textrm{is measurable under}\; P, \; \E_{P}[w] = 1,\; \E_P[f(w)] \leq \kappa \}
\label{eq:adv-constraints}
\end{align}
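As a concrete illustration (ours, not part of the original formulation), the $f$-divergence constraint in \eqref{eq:adv-constraints} can be estimated on a sample from $P$ directly from the re-weighting values, since $D_f(Q\,||\,P) = \E_P[f(w)]$ for $w = q/p$; the weight vector below is purely illustrative.
\begin{verbatim}
import numpy as np

def f_divergence(w, f):
    """Estimate D_f(Q || P) from re-weighting values w = q/p evaluated on a
    sample drawn from P; w must be non-negative with mean (approximately) 1."""
    return np.mean(f(w))

w = np.array([0.5, 0.5, 2.0, 1.0])                      # a mean-1 re-weighting
print(f_divergence(w, lambda x: x * np.log(x)))         # KL(Q || P)
print(f_divergence(w, lambda x: np.abs(x - 1.0) / 2))   # total variation
\end{verbatim}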
For a convex loss function $l: \gY \times \gY \mapsto \Real_+$, we denote $l(h)$ as the function over $(\rvx, \ry)$ that evaluates $l(h(\rvx), \ry)$, and use $\lzone$ to denote the loss function $\I(h(\rvx) \neq \ry)$.
Given either distribution $Q \in \gQ_{P, \kappa}$,
or a re-weighting function $w \in \gW_{P, \kappa}$, the risk of a learner $h$ is:
\begin{align}
& \;\; \;\;\;R(h, Q) = \E_{Q}\;[l(h)] \;\;\;\;\;\;\;\;\; R(h, w) = \E_{(\rvx, \ry) \sim P} \; [l(h(\rvx), \ry) \cdot w(\rvx, \ry)] = \innerprod{l(h)}{\;w}_P
\label{eq:risk-defn}
\end{align}
Note the overload of notation for $R(h, \cdot)$. If the adversary is stochastic it picks a mixed action $\delta \in \dwp$, which is the set of all distributions over $\gW_{P, \kappa}$. Whenever it is clear, we drop $P, \kappa$.
\textbf{Unconstrained DRO~\citep{ben2013robust}.} This is a min-max optimization problem understood as a two-player game, where the learner chooses a hypothesis to minimize risk on the worst distribution that the adversary can choose from its set.
Formally, this is given by \eqref{eq:dro-main}.
The first equivalence is clear from the definitions; for the second, since $R(h, w)$ is linear in $w$, the supremum over $\dwp$ is attained by a Dirac delta on the best weighting in $\gW_{P, \kappa}$. In the next section, we will see how a bitrate-constrained adversary can only pick certain actions from $\dwp$.
\begin{align}
& \inf_{h \in \gH} \; \sup_{Q \in \gQ_{P, \kappa}} \; R(h, Q)\;\; \equiv \;\; \inf_{h \in \gH} \; \sup_{w \in \gW_{P, \kappa}} \; R(h, w) \;\; \equiv \;\; \inf_{h \in \gH} \; \sup_{\delta \in \dwp} \;\; \E_{w \sim \delta} \brck{R(h, w)}
\label{eq:dro-main}
\end{align}
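To build intuition for the adversary's action in \eqref{eq:dro-main}, the following minimal numpy sketch (an illustration of ours, using the CVaR instance of the uncertainty set that reappears in Section~\ref{sec:analysis}, not the solver used in this work) computes the worst-case re-weighting for a fixed vector of per-example losses: the supremum simply places the maximal allowed weight on the highest-loss examples.
\begin{verbatim}
import numpy as np

def worst_case_cvar_weights(losses, alpha=0.1):
    """Worst-case re-weighting over {w : 0 <= w <= 1/alpha, mean(w) = 1}
    (the CVaR instance of the DRO uncertainty set): the supremum of
    mean(losses * w) puts weight 1/alpha on the top alpha-fraction of losses."""
    n = len(losses)
    k = alpha * n                        # total mass that receives weight 1/alpha
    order = np.argsort(losses)[::-1]     # indices sorted by decreasing loss
    w = np.zeros(n)
    full = int(np.floor(k))
    w[order[:full]] = 1.0 / alpha
    if full < n:
        w[order[full]] = (k - full) / alpha   # fractional weight so that mean(w) = 1
    return w

losses = np.array([0.1, 2.0, 0.3, 5.0, 0.2])
w = worst_case_cvar_weights(losses, alpha=0.4)
print(w.mean(), np.mean(losses * w))     # mean weight 1.0, worst-case risk 3.5
\end{verbatim}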
\textbf{Group Shift.} While the DRO framework in Section~\ref{sec:prelim} is broad and addresses any unstructured shift, we focus on the specific case of group shift.
First, for a given pair of measures $P, Q$ we define what we mean by the group structure $\gG_{P, Q}$ (Definition~\ref{def:group-struc}). Intuitively, it is a set of sub-populations along which the distribution shifts, defined in a way that makes them uniquely identifiable. For \eg in the Waterbirds dataset (Figure~\ref{fig:intro-figure}), there are four groups given by combinations of (label, background). Corollary~\ref{cor:group-struc-unique} follows immediately from the definition of $\gG_{P, Q}$. Using this definition, the standard group shift assumption~\citep{sagawa2019distributionally} can be formally re-stated as Assumption~\ref{assm:group-shift}.
\begin{definition}[group structure $\gG_{P, Q}$]
\label{def:group-struc}
For $Q \ll P$ the group structure $\gG_{P, Q}$$=$$\{G_k\}_{k=1}^{K}$ is the smallest finite set of disjoint groups $\{G_k\}_{k=1}^K$ s.t. $Q(\cup_{k=1}^K G_k)$$=$$1$ and $\forall$$k$ (i) $G_k \in \Sigma$, $Q(G_k) > 0$ and (ii) $p(\rvx,\ry \mid G_k) = q(\rvx,\ry \mid G_k) > 0$ a.e. in $\mu$, where $q$ denotes the density of $Q$. If such a structure exists then $\gG_{P, Q}$ is well defined.
\end{definition}
\begin{corollary}[uniqueness of $\gG_{P, Q}$]
$\forall P, Q$, the structure $\gG(P, Q)$ is unique if it is well defined.
\label{cor:group-struc-unique}
\end{corollary}
\begin{assumption}[standard group shift]
\label{assm:group-shift}
There exists a well-defined group structure $\gG_{P, Q_0}$ s.t. target $Q_0$ differs from $P$ only in terms of marginal probabilities over all $ G \in \gG_{P, Q_0}$.
\end{assumption}
\section{Bitrate-Constrained DRO}
\label{sec:bdro}
We begin with a note on the expressivity of the adversary in Unconstrained DRO and formally introduce the assumption we make on the nature of shift. Then, we build intuition for why unconstrained adversaries fail but restricted ones do better under our assumption.
Finally, we state our main objective and discuss a specific instance of it.
\textbf{How expressive is the unconstrained adversary?}
Note that the set $\gW_{P, \kappa}$ includes all measurable functions (under $P$) such that the re-weighted distribution is bounded in $f$-divergence (by $\kappa$).
While prior works~\citep{shafieezadeh2015distributionally,duchi2016statistics} shrink $\kappa$
to construct confidence intervals,
this \textit{only controls} the total mass that can be moved between measurable sets $G_1, G_2 \in \Sigma$, but \textit{does not restrict} the choice of $G_1$ and $G_2$ itself. As noted by \citet{hu2018does}, such an adversary is highly expressive, and optimizing for the worst case only leads to the solution of empirical risk minimization (ERM) under $\lzone$ loss. Thus, we can conclude that DRO recovers degenerate solutions because the worst target in $\gW_{P,\kappa}$ lies far from the subspace of naturally occurring targets. Since it is hard to precisely characterize natural targets, we make a more nuanced assumption: the target $Q_0$ only upsamples those rare subpopulations that are misclassified by simple features. We state this formally in Assumption~\ref{assm:simple-group-shift} after we define the bitrate-constrained function class $\gW(\gamma)$ in Definition~\ref{def:bitrate-constrained-class}.
\begin{definition}
\label{def:bitrate-constrained-class} A function class $\gW(\gamma)$ is bitrate-constrained if there exists a data independent prior $\pi$, s.t. $\gW(\gamma) = \{\E[\delta] \st \delta \in \Delta(\gW), \; \kl{\delta}{\pi} \leq \gamma\}$.
\end{definition}
\begin{assumption}[simple group shift]
\label{assm:simple-group-shift}
Target $Q_0$ satisfies Assumption~\ref{assm:group-shift} (group shift) w.r.t. source $P$.
Additionally, for some prior $\pi$ and a small $\gamma^*$, the re-weighting function $q_0/p$ lies in a bitrate-constrained class $\gW(\gamma^*)$. In other words, for every group $G \in \gG(P, Q_0)$, $\exists w_G \in \gW(\gamma^*)$ s.t. $\I((\rvx, \ry) \in G) = w_G$ a.e. We refer to such a $G$ as a \textbf{simple group} that is realized in $\gW(\gamma^*)$.
\end{assumption}
Under the principle of minimum description length~\citep{grunwald2007minimum}, any deviation from the prior (\ie $\kl{\delta}{\pi}$) increases the \emph{description length} of the encoding $\delta \in \Delta(\gW)$; thus we refer to $\gW(\gamma)$ as being \emph{bitrate-constrained}, in the sense that it contains functions (means of distributions) that can be described with a limited number of bits given the prior $\pi$. See Appendix~\ref{subsec:assm-explain} for an example of a bitrate-constrained class of functions. Next, we present arguments for why identifiability of simple minority groups (those satisfying Assumption~\ref{assm:simple-group-shift}) can be critical for robustness.
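As a small illustration (our own, complementing the example in Appendix~\ref{subsec:assm-explain}), the Gaussian mean-field case used later by the VIB adversary has a closed-form bitrate: the KL divergence of a diagonal Gaussian from the standard-normal prior, so adversaries whose parameters (or latents) stay close to the prior are ``cheap'' to describe.
\begin{verbatim}
import numpy as np

def bitrate_nats(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ): the description length
    (in nats) of a Gaussian 'posterior' relative to a standard-normal prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

print(bitrate_nats(np.zeros(8), np.zeros(8)))        # 0: identical to the prior
print(bitrate_nats(0.1 * np.ones(8), np.zeros(8)))   # small deviation, few nats
print(bitrate_nats(5.0 * np.ones(8), np.zeros(8)))   # large deviation, many nats
\end{verbatim}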
\textbf{Neural networks can perform poorly on simple minorities.}
For a fixed target $Q_0$, suppose there exist two groups $\gmin, \gmaj \in \gG(P, Q_0)$ such that $P(\gmin) \ll P(\gmaj)$.
By Assumption~\ref{assm:simple-group-shift}, both $\gmin$ and $\gmaj$ are simple (realized in $\gW(\gamma^*)$), and are thus separated by some simple feature.
The learner's class $\gH$ is usually a class of overparameterized neural networks. When trained with stochastic gradient descent (SGD), these are biased towards learning simple features that classify a majority of the data~\citep{shah2020pitfalls,soudry2018implicit}. Thus, if the simple feature separating $\gmin$ and $\gmaj$ itself correlates with the label $y$ on $\gmaj$, then neural networks would fit this feature. This is precisely the case in the Waterbirds example, where the groups are defined by whether the simple background feature correlates with the label (Figure~\ref{fig:intro-figure}). Thus, our assumption on the nature of the shift complements the tendency of neural networks to perform poorly on simple minorities.
\textbf{The bitrate constraint helps identify simple unfair minorities in $\gG(P, Q_0)$.} Any method that aims to be robust on $Q_0$ must up-weight data points from $\gmin$ but without knowing its identity.
Since the unconstrained adversary upsamples any group of data points with high loss and low probability, it cannot distinguish between a rare group that is realized by simple functions in $\gW(\gamma^*)$ and a rare group of examples that share no feature in common or may even be mislabeled. On the other hand, the group of mislabeled examples cannot be separated from the rest by functions in $\gW(\gamma^*)$. Thus, a bitrate-constrained adversary can only identify simple groups and upsamples those that incur high losses -- possibly due to the simplicity bias of neural networks.
\textbf{\bdro objective.} According to Assumption~\ref{assm:simple-group-shift}, there cannot exist a target $Q_0$ such that a minority $\gmin \in \gG(P, Q_0)$ is not realized in the bitrate-constrained class $\gW(\gamma^*)$.
Thus, by constraining our adversary to a class $\gW(\gamma)$ (for some user-defined $\gamma$), we can possibly evade issues emerging from optimizing for performance on mislabeled or hard examples, even if they are rare. This gives us the objective in Equation~\ref{eq:bdro-main}, where the equalities hold from the linearity of $\innerprod{\cdot}{\cdot}$ and Definition~\ref{def:bitrate-constrained-class}.
\begin{align}
\inf_{h \in \gH} \sup_{\substack{\delta \in \Delta(\gW) \\ \kl{\delta}{\pi} \leq \gamma}} \E_{w \sim \delta} R(h, w) \; = \;
\inf_{h \in \gH} \sup_{\substack{\delta \in \Delta(\gW) \\ \kl{\delta}{\pi} \leq \gamma}} \langle l(h), \E_\delta[w] \rangle_P \;\; = \;\; \inf_{h \in \gH} \sup_{w \in \wgam} R(h, w) \label{eq:bdro-main}
\end{align}
\textbf{\bdro in practice.} We parameterize the learner $\thh \in \Theta_h$ and adversary $\thw \in \Theta_w$ as neural networks\footnote{We use $\thw, \thh$ and $l(\thh)$ to denote $w(\thw; (\rvx, \ry))$, $h(\thh; \rvx)$ and $l(h(\thh; \rvx), \ry)$ respectively.}.
In practice, we implement the adversary either as a one hidden layer variational information bottleneck (VIB)~\citep{alemi2016deep}, where the Kullback-Leibler (KL) constraint on the latent variable $\rvz$ (output of VIB's hidden layer) directly constrains the bitrate; or as an $l_2$ norm constrained linear layer.
The objective for the VIB ($l_2$) version is obtained by setting $\bvib \neq 0$ ($\bltwo \neq 0$) in \eqref{eq:bdro-prac} below. See Appendix~\ref{appsubsec:bdro-objective} for details.
Note that the objective in Equation~\ref{eq:bdro-prac} is no longer convex-concave and can have multiple local equilibria or stationary points~\citep{mangoubi2021greedy}. The adversary's objective also does not have a strong dual that can be solved through conic programs---a standard practice in DRO literature~\citep{namkoong2016stochastic}. Thus, we provide an algorithm where both learner and adversary optimize \bdro iteratively through stochastic gradient ascent/descent (Algorithm~\ref{alg:online-bdro} in Appendix~\ref{subsec:bdro-algo}).
\begin{align}
& \;\;\; \fourquad \min_{\thh \in \Theta_h} \innerprod{l({\thh})}{\thws}_P \;\;\;\; \textrm{s.t.} \;\;\;\; \thws = \argmax_{\thw \in \Theta_w} \;\;L_{\textrm{adv}}(\thw; \btheta_h, \bvib, \bltwo, \eta) \label{eq:bdro-prac} \\
& L_{\textrm{adv}}(\thw; \btheta_h, \bvib, \bltwo, \eta) = \innerprod{l({\thh}) - \eta}{\thw}_P - \bvib \; \E_{P} \kl{p(\rvz \;|\; \rvx; \thw)}{\gN(\mathbf{0}, {I}_d)} - \bltwo \|\thw\|_2^2 \nonumber
\end{align}
\textbf{Training.} For each example, the adversary takes as input: (i) the last-layer output of the current learner's feature network; and (ii) the input label. The adversary then outputs a weight (in $[0, 1]$). The idea of applying the adversary directly on the learner's features (instead of the original input) is based on recent literature \citep{rosenfeld2022domain,kirichenko2022last} that suggests re-training the prediction head is sufficient for robustness to shifts. The adversary tries to maximize weights on examples whose loss is at least $\eta$ (a hyperparameter) and minimize them on others.
In addition to the example, the learner takes as input the adversary-assigned weight for that example from the previous round and uses it to re-weight its loss in a minibatch. Both players are updated in each round (Algorithm~\ref{alg:online-bdro}).
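For concreteness, a minimal PyTorch-style sketch of one such round is given below (our illustration of \eqref{eq:bdro-prac} with the $\ltwop$-penalized adversary; the attribute names \texttt{learner.features}/\texttt{learner.head}, the sigmoid-weight adversary, and the optimizer handling are assumptions for illustration, not the exact implementation of Algorithm~\ref{alg:online-bdro}).
\begin{verbatim}
import torch

def bdro_round(learner, adversary, opt_h, opt_w, x, y, eta, beta_l2):
    """One alternating update of learner (theta_h) and l2-penalized adversary
    (theta_w). adversary(features, y) is assumed to return per-example weights
    in [0, 1] (e.g. a sigmoid over a linear head)."""
    # Adversary step: maximize <l(h) - eta, w> - beta_l2 * ||theta_w||^2.
    with torch.no_grad():
        feats = learner.features(x)                 # last-layer features (assumed API)
        losses = torch.nn.functional.cross_entropy(
            learner.head(feats), y, reduction="none")
    w = adversary(feats, y).squeeze(-1)             # weights in [0, 1]
    l2 = sum((p ** 2).sum() for p in adversary.parameters())
    adv_obj = ((losses - eta) * w).mean() - beta_l2 * l2
    opt_w.zero_grad(); (-adv_obj).backward(); opt_w.step()   # gradient ascent

    # Learner step: minimize the adversary-re-weighted loss.
    with torch.no_grad():
        w = adversary(learner.features(x), y).squeeze(-1)
    losses = torch.nn.functional.cross_entropy(learner(x), y, reduction="none")
    learner_obj = (losses * w).mean()
    opt_h.zero_grad(); learner_obj.backward(); opt_h.step()
    return adv_obj.item(), learner_obj.item()
\end{verbatim}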
\section{Theoretical Analysis}
\label{sec:analysis}
\newcommand{\hat{\eta}_D^\gamma}{\hat{\eta}_D^\gamma}
The main objective of our analysis of \bdro is to show how adding a bitrate constraint on the adversary can: (i) give us tighter statistical estimates of the worst risk; and (ii) control the pessimism (excess risk) of the learned solution. First, we provide worst-case risk generalization guarantees using the PAC-Bayes framework~\citep{catoni2007pac}, along with a result for a kernel adversary. Then, we provide convergence rates and pessimism guarantees for the solution found by our online solver for a specific instance of $\gW(\gamma)$.
For both these, we analyze the constrained form of the conditional value at risk (CVaR) DRO objective~\citep{levy2020large} below.
\textbf{Bitrate-Constrained CVaR DRO.} When the uncertainty set $\gQ$ is defined by the set of all distributions $Q$ that have a bounded likelihood ratio, \ie $\|q/p\|_\infty \leq 1/\alpha_0$, we recover the original CVaR DRO objective~\citep{duchi2021uniform}. The bitrate-constrained version of CVaR DRO is given in \eqref{eq:bitcon-cvar-dro} (see Appendix~\ref{appsec:omitted-proofs} for the derivation). Note that, slightly different from Section~\ref{sec:prelim}, we define $\gW$ as the set of all measurable functions $w: \gX \times \gY \mapsto [0,1]$, since the other convex restrictions in \eqref{eq:adv-constraints} are handled by the dual variable $\eta$. As in Section~\ref{sec:bdro}, $\gW(\gamma)$ is derived from $\gW$ using Definition~\ref{def:bitrate-constrained-class}. In \eqref{eq:bitcon-cvar-dro}, if we replace the bitrate-constrained class $\gW(\gamma)$ with the unrestricted $\gW$ then we recover the variational form of unconstrained CVaR DRO in \citet{duchi2016statistics}.
\begin{align}
\label{eq:bitcon-cvar-dro}
\gL^*_{\textrm{cvar}}(\gamma) = \inf_{h \in \gH, \eta \in \Real} \sup_{w \in \gW(\gamma)} R(h, \eta, w)\;\;\textrm{where,}\;\;
R(h, \eta, w) = (1/\alpha_0) \innerprod{l(h)-\eta}{w}_P + \eta
\end{align}
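As a sanity check (an illustration of ours, not a result from the paper), the next numpy sketch evaluates the plug-in version of \eqref{eq:bitcon-cvar-dro} when the adversary class is the fully unrestricted $\gW$: the inner supremum is attained at $w = \I(l(h) > \eta)$, and minimizing over $\eta$ recovers the classical $\alpha_0$-CVaR, \ie the mean of the largest $\alpha_0$-fraction of losses.
\begin{verbatim}
import numpy as np

def cvar_variational(losses, alpha0, etas=None):
    """Evaluate inf_eta sup_{w in [0,1]^n} (1/alpha0) * mean((losses - eta) * w) + eta.

    For unrestricted w the inner sup is attained at w = 1{losses > eta}, so the
    objective reduces to the CVaR formula (1/alpha0) * mean((l - eta)_+) + eta."""
    if etas is None:
        etas = np.unique(losses)   # the minimum is attained at a quantile of the losses
    vals = [np.maximum(losses - eta, 0.0).mean() / alpha0 + eta for eta in etas]
    return min(vals)

losses = np.array([0.1, 2.0, 0.3, 5.0, 0.2])
print(cvar_variational(losses, alpha0=0.4))   # mean of the top 40% of losses: 3.5
\end{verbatim}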
\textbf{Worst risk estimation bounds for \bdro.} Since we are only given a finite sampled dataset $\gD$ $\sim$ $P^n$, we solve the objective in \eqref{eq:bitcon-cvar-dro} using the empirical distribution $\pn$. We denote the plug-in estimates as $\hD^\gamma, \hat{\eta}_D^\gamma$.
This incurs an estimation error for the true worst risk.
But when we restrict our adversary to $\dwgam$, for a fixed learner $h$ we reduce the worst-case risk estimation error which scales with the bitrate $\kl{\cdot}{\pi}$ of the solution (deviation from prior $\pi$). Expanding this argument to every learner in $\gH$,
with high probability we also reduce the estimation error for the worst risk of $\hD^\gamma$. Theorem~\ref{thm:worst-risk-gen} states this generalization guarantee more precisely.
\begin{theorem}[worst-case risk generalization]
With probability $\geq 1-\delta$ over $\gD \sim P^n$, the worst bitrate-constrained $\alpha_0$-CVaR risk for $\hD^\gamma$ can be upper bounded by the following oracle inequality:
{
\begin{align}
\sup_{w \in \wgam} R(\hD^\gamma, \hat{\eta}_D^\gamma, w) \;\lsim \; \gL^*_{\textrm{cvar}}(\gamma) + \frac{M}{\alpha_0} \sqrt{\paren{\gamma + \log\paren{\frac{1}{\delta}} + (d+1) \log\paren{\frac{L^2n}{\gamma}} + \log n}/{(2n -1)}} \nonumber,
\end{align}}
when $l(\cdot, \cdot)$ is $[0,M]$-bounded, $L$-Lipschitz and $\gH$ is parameterized by convex set $\Theta \subset \Real^d$.
\label{thm:worst-risk-gen}
\newline
\textbf{Proof.} See Appendix~\ref{appsubsec:omitted-proofs-5.1}.
\end{theorem}
Informally, Theorem~\ref{thm:worst-risk-gen} tells us that bitrate-constraint $\gamma$ gracefully controls the estimation error $\gO(\sqrt{(\gamma + \gC(\gH))/n})$ (where $\gC(\gH)$ is a complexity measure) if we know that Assumption~\ref{assm:simple-group-shift} is satisfied. While this only tells us that our estimator is consistent with $\gO_p(1/\sqrt{n})$, the estimate may itself be converging to a degenerate predictor, \ie $ \gL^*_{\textrm{cvar}}(\gamma)$ may be very high. For example, if the adversary can cleanly separate mislabeled points even after the bitrate constraint, then presumably these noisy points with high losses would be the ones mainly contributing to the worst risk, and up-weighting these points would result in a learner that has memorized noise. Thus, it becomes equally important for us to analyze the excess risk (or the pessimism) for the learned solution. Since this is hard to study for any arbitrary bitrate-constrained class $\wgam$, we shall do so for the specific class of reproducing kernel Hilbert space (RKHS) functions.
\textbf{Special case of bounded RKHS.}
Let us assume there exists a prior $\Pi$ such that $\wgam$ in Definition~\ref{def:bitrate-constrained-class} is given by an RKHS induced by a Mercer kernel $k:\gX\times\gX \mapsto\Real$, s.t. the eigenvalues of the kernel operator decay polynomially, \ie $\mu_j \lsim j^{{-2}/{\gamma}}$ $(\gamma < 2)$. Then, if we solve for $\hD^\gamma,\hat{\eta}_D^\gamma$ by kernel ridge regression over norm-bounded ($\|f\|_{\wgam} \leq B \leq 1$) smooth functions $f$, we can control: (i) the pessimism of the learned solution; and (ii) the generalization error (Theorem~\ref{thm:special-case-rkhs}). Formally, we refer to the pessimism of the estimates $\hD^\gamma, \hat{\eta}_D^\gamma$ as the excess risk, defined as:
{
\begin{align}
\label{eq:excess-risk}
\textrm{excess risk} \coloneqq \sup_{w \in \wgam}|\inf_{h,\eta} R(h, \eta, w) - R(\hD^\gamma, \hat{\eta}_D^\gamma, w)|.
\end{align}}
\begin{theorem}[bounded RKHS]
\label{thm:special-case-rkhs} For $l, \gH$ in Theorem~\ref{thm:worst-risk-gen}, and for $\wgam$ described above, $\exists \gamma_0$ such that for all sufficiently bitrate-constrained $\gW(\gamma)$, \ie $\gamma \leq \gamma_0$, with probability at least $1-\delta$ the worst-case risk generalization error satisfies:
\begin{align*}
\sup_{w \in \wgam} R(\hD^\gamma, \hat{\eta}_D^\gamma, w) \lsim (1/n)\paren{\log(1/\delta) + (d+1) \log(nB^{-\gamma} L^{\gamma/2})}
\end{align*}
and the excess risk is $\gO(B)$ for $\hD^\gamma, \hat{\eta}_D^\gamma$ defined above.
\newline
\textbf{Proof.} See Appendix~\ref{appsubsec:omitted-proofs-5.2}.
\end{theorem}
Thus, in the setting described above we have shown how bitrate-constraints given indirectly by $\gamma, R$ can control both the pessimism and statistical estimation errors. Here, we directly analyzed the estimates $\hD^\gamma, \hat{\eta}_D^\gamma$ but did not describe the specific algorithm used to solve the objective in \eqref{eq:bitcon-cvar-dro} with $\pn$. Now, we look at an iterative online algorithm to solve the same objective and see how bitrate-constraints can also influence convergence rates in this setting.
\textbf{Convergence and excess risk analysis for an online solver.} In the following, we provide an algorithm to solve the objective in \eqref{eq:bitcon-cvar-dro} and analyze how bitrate-constraint impacts the solver and the solution.
For convex losses, the min-max objective in \eqref{eq:bitcon-cvar-dro} has a unique solution and this matches the unique Nash equilibrium for the generic online algorithm (game) we describe (Lemma~\ref{lem:nash-eq}). The algorithm is as follows: Consider a two-player zero-sum game where the learner uses a no-regret strategy to first play $h \in \gH, \eta \in \Real$ to minimize $\E_{w \sim \delta} R(h, \eta, w)$. Then, the adversary plays the follow-the-regularized-leader (FTRL) strategy to pick a distribution $\delta \in \Delta(\wgam)$ that maximizes the same.
Our goal is to analyze the bitrate-constraint $\gamma$'s effect on the above algorithm's convergence rate and the pessimistic nature of the solution found. For this, we need to first characterize the bitrate-constraint class $\gW(\gamma)$.
If we assume there exists a prior $\Pi$ such that $\wgam$ is a Vapnik--Chervonenkis (VC) class of dimension $O(\gamma)$, then in Theorem~\ref{thm:convergence-excess-guarantee} we see that the iterates of our algorithm converge to the equilibrium (solution) at rate $\gO({\sqrt{{\gamma\log n}/{T}}})$. Clearly, the degree of bitrate constraint can significantly impact the convergence rate for a generic solver that solves the constrained DRO objective.
Theorem~\ref{thm:convergence-excess-guarantee} also bounds the excess risk (\eqref{eq:excess-risk}) on $\pn$.
\begin{lemma}[Nash equilibrium]
\label{lem:nash-eq} For strictly convex $l(h)$, $l(h) \in [0,M]$, the objective in \eqref{eq:bitcon-cvar-dro} has a unique solution which is also the Nash equilibrium of the game above when played over compact sets $\gH \times [0, M]$, $\dwgam$. We denote this equilibrium as $h^*_D(\gamma), \eta^*_D(\gamma), \delta^*_D(\gamma)$.
\end{lemma}
\begin{theorem}
\label{thm:convergence-excess-guarantee}
At time step $t$, if the learner plays $(h_t, \eta_t)$ with no-regret and the adversary plays $\delta_t$ with FTRL strategy that uses a negative entropy regularizer on $\delta$
then the average iterates $(\bar{h}_{T},\bar{\eta}_{T},\bar{\delta}_{T}) = (1/T) \sum_{t=1}^T (h_t, \eta_t, \delta_t)$ converge to the equilibrium $(h^*_D(\gamma), \eta^*_D(\gamma), \delta^*_D(\gamma))$ at rate $\gO(\sqrt{{\gamma\log n}/{T}})$. Further, the excess risk defined above is $\gO((M/\alpha_0)\paren{1-\frac{1}{n^\gamma}})$.
\newline
\textbf{Proof.} See Appendix~\ref{appsubsec:omitted-proofs-5.4}.
\end{theorem}
\section{Experiments}
\label{sec:experiments}
Our experiments aim to evaluate the performance of \bdro and compare it with ERM and group shift robustness methods that do not require group annotations for training examples. We conduct empirical analyses along the following axes: (i) worst group performance on datasets that exhibit known spurious correlations; (ii) robustness to random label noise
in the training data; (iii) average performance on hybrid covariate shift datasets with unspecified groups; and (iv) accuracy in identifying minority groups. See Appendix~\ref{appsec:additional-expts} for additional experiments and details\footnote{The code used in our experiments can be found at \url{https://github.com/ars22/bitrate_DRO}.}.
\textbf{Baselines.} Since our objective is to be robust to group shifts without group annotations on training examples,
we explore baselines that either optimize for the worst minority group (CVaR DRO~\citep{levy2020large}) or use training losses to identify specific minority points (LfF~\citep{nam2020learning}, JTT~\citep{liu2021just}).
\gdro~\citep{sagawa2019distributionally} is treated as an oracle.
We also compare with the simple re-weighting baseline (RWY) proposed by \citet{idrissi2022simple}.
\textbf{Implementation details.} We train using Resnet-50~\citep{he2016deep} for all methods and datasets except CivilComments, where we use BERT~\citep{wolf2019huggingface}. For our VIB adversary, we use a $1$-hidden layer neural network encoder and decoder (one for each label). As mentioned in Section~\ref{sec:bdro}, the adversary takes as input the learner model's features and the true label to generate weights. All implementation and design choices for baselines were adopted directly from \citet{liu2021just,idrissi2022simple}. We provide model selection methodology and other details in Appendix~\ref{appsec:additional-expts}.
\newcommand{\mt}[1]{{\footnotesize{(#1)}}}
\newcommand{\mbl}[1]{#1}
\newcommand{\mbb}[1]{#1}
\begin{table}[b]
\footnotesize
\centering
\setlength{\tabcolsep}{1em}
\begin{tabular}{r|cccccc}
& \multicolumn{2}{c}{Waterbirds} & \multicolumn{2}{c}{CelebA} & \multicolumn{2}{c}{CivilComments} \\
Method & Avg & WG & Avg & WG & Avg & WG \\ \midrule
ERM & 97.1 \mt{0.1} & 71.0 \mt{0.4} & 95.4 \mt{0.2} & 46.9 \mt{1.0} & 92.3 \mt{0.2} & 57.2 \mt{0.9} \\
LfF ~\citep{nam2020learning} & 90.7 \mt{0.2} & 77.6 \mt{0.5} & 85.3 \mt{0.2} & 77.4 \mt{0.7} & 92.4 \mt{0.1} & 58.9 \mt{1.1} \\
RWY ~\citep{idrissi2022simple} & 93.7 \mt{0.3} & 85.8 \mt{0.5} & 84.9 \mt{0.2} & 80.4 \mt{0.3} & 91.7 \mt{0.2} & 67.7 \mt{0.7} \\
JTT ~\citep{liu2021just} & 93.2 \mt{0.2} & 86.6 \mt{0.4} & 87.6 \mt{0.2} & 81.3 \mt{0.5} & 90.8 \mt{0.3} & 69.4 \mt{0.8} \\
CVaR DRO ~\citep{levy2020large} & 96.3 \mt{0.2} & \mbl{75.5 \mt{0.4}} & 82.2 \mt{0.3} & \mbl{64.7 \mt{0.6}} & 92.3 \mt{0.2} & \mbl{60.2 \mt{0.8}} \\\midrule
\bdro (\vib) (ours) & 94.1 \mt{0.2} & \mbb{86.3 \mt{0.3}} & 86.7 \mt{0.2} & \mbb{80.9 \mt{0.4}} & 90.5 \mt{0.2} & \mbb{68.7 \mt{0.9}} \\
\bdro (\ltwop) (ours) & 93.8 \mt{0.2} & \mbb{86.4 \mt{0.3}} & 87.7 \mt{0.3} & \mbb{80.4 \mt{0.6}} & 91.0 \mt{0.3} & \mbb{68.9 \mt{0.7}} \\ \midrule \gdro
~\cite{sagawa2019distributionally} & 93.2 \mt{0.3} & 91.1 \mt{0.3} & 92.3 \mt{0.3} & 88.4 \mt{0.6} & 88.5 \mt{0.3} & 70.0 \mt{0.5}
\end{tabular}
\caption{\footnotesize \textbf{\bdro recovers worst group performance gap between CVaR DRO and Group DRO:} On Waterbirds, CelebA and CivilComments we report test average (Avg) and test worst group (WG) accuracies for \bdro and baselines. In ($\cdot$) we report the standard error of the mean accuracy across five runs.}
\label{tab:sc-datasets}
\end{table}
\textbf{Datasets.} For experiments in the known-groups and label-noise settings we use: (i) Waterbirds~\citep{wah2011caltech}, where the background is spurious; (ii) CelebA~\citep{liu2015deep}, where binary gender is spuriously correlated with the label ``blond''; and (iii) CivilComments (WILDS)~\citep{borkan2019nuanced}, where the task is to predict ``toxic'' texts and there are 16 predefined groups~\cite{koh2021wilds}. We use FMoW and Camelyon17~\citep{koh2021wilds} to test methods on datasets that do not have explicit group shifts. In FMoW the task is to predict land use from satellite images, where the training/test set comprises data from before/after 2013. The test set involves both subpopulation shifts over regions (\eg Africa, Asia) and domain generalization over time (year). Camelyon17 presents a domain generalization problem where the task is to detect tumors in tissue slides from different sets of hospitals in the train and test sets.
\subsection{Is \bdro robust to group shifts without training data group annotations?}
\label{subsec:known-correlations}
Table~\ref{tab:sc-datasets} compares the average and worst group accuracy for \bdro with ERM and four group shift robustness baselines: JTT, LfF, RWY, and CVaR DRO. First, we see that unconstrained CVaR DRO underperforms the other heuristic algorithms. This matches the observation made by \citet{liu2021just}. Next, we see that adding bitrate constraints on the adversary via a KL term or \ltwop penalty significantly improves the performance of \bdro (\vib) or \bdro (\ltwop),
which now matches the best performing baseline (JTT). Thus, we see the less conservative nature of \bdro allows it to recover a large portion of the performance gap between \gdro and CVaR DRO. Indirectly, this partially validates our Assumption~\ref{assm:simple-group-shift}, which states that the minority group is identified by a low bitrate adversary class. In Section~\ref{subsec:identify-min} we discuss exactly what fraction of the minority group is identified, and the role played by the strength of bitrate-constraint.
\subsection{\bdro is more robust to random label noise}
\label{subsec:rcn}
Several methods for group robustness (\eg CVaR DRO, JTT) are based on the idea of up-weighting points with high training losses. The goal is to obtain a learner with matching
performance on every (small) fraction of points in the dataset. However, when the training data has mislabeled examples, such an approach will likely yield degenerate solutions. This is because the adversary directly upweights any example where the learner has high loss, including datapoints with incorrect labels. Hence, even if the learner's prediction matches the (unknown) true label, this formulation would force the learner to \emph{memorize} incorrect labelings at the expense of learning the true underlying function.
On the other hand, if the adversary is sufficiently bitrate constrained, it cannot upweight the arbitrary set of randomly mislabeled points, as this would require it to memorize those points. Our Assumption~\ref{assm:simple-group-shift} also dictates that the distribution shift
would not upsample such high bitrate noisy examples.
Thus, our constraint on the adversary ensures \bdro is robust to label noise in the training data and our assumption on the target distribution retains its robustness to test time distribution shifts.
In Figure~\ref{fig:noise-robustness-right} we highlight this failure mode of unconstrained up-weighting methods
in contrast to \bdro. We first induce random label noise~\citep{carlini2019distribution} of varying degrees into the Waterbirds and CelebA training sets.
Then we run each method and compare worst group performance. In the absence of noise we see that the performance of JTT is comparable with \bdro, if not slightly better (Table~\ref{tab:sc-datasets}). Thus, both \bdro and JTT perform reasonably well in identifying and upsampling the simple minority group in the absence of noise. In its presence, \bdro significantly outperforms JTT and other approaches on both Waterbirds and CelebA, as it only upsamples the minority examples misclassified by simple features, ignoring the noisy examples for the reasons above.
To further verify our claims, we set up a noisily labeled synthetic dataset (see Appendix~\ref{appsec:additional-expts} for details).
In Figure~\ref{fig:noise-robustness-left} we plot the training samples
as well as the solutions learned by \bdro and JTT on the synthetic data. In Figure~\ref{fig:intro-figure}\emph{(right)} we also plot exactly which points are upweighted by \bdro and JTT. Using both figures, we note that JTT mainly upweights the noisy points (in red) and memorizes them using $\xnoise$. Without any weight on the minority points, it memorizes them as well and learns a component along the spurious feature.
On the contrary, when we restrict the adversary with \bdro to be sparse ($l_1$ penalty),
it only upweights minority samples, since no sparse predictor can separate the noisy points in the data. Thus, the learner can no longer memorize the upweighted minority and we recover the robust predictor along the core feature.
\begin{figure}
\caption{\footnotesize \emph{(Left)} Training samples and the solutions learned by \bdro and JTT on the synthetic dataset. \emph{(Right)} Worst group accuracy under varying degrees of random label noise on Waterbirds and CelebA.}
\label{fig:noise-robustness-left}
\label{fig:noise-robustness-right}
\end{figure}
\subsection{How does \bdro perform on more general covariate shifts?}
\label{subsec:gen-cov-shift}
In Table~\ref{tab:gen-cov-shifts} we report the average test accuracies for \bdro and baselines on the hybrid dataset FMoW and domain generalization dataset Camelyon17.
Given its hybrid nature, on FMoW we also report worst region accuracy. First, we note that on these datasets group shift robustness baselines do not do better than ERM. Some are either too pessimistic (\eg CVaR DRO), or require heavy assumptions
\begin{table}[!ht]
\footnotesize
\centering
\setlength\tabcolsep{1em}
\begin{tabular}{r|cc|c}
Method & \multicolumn{2}{c|}{FMoW} & \multicolumn{1}{c}{Camelyon17} \\
& Avg & W-Reg & Avg \\ \midrule
ERM & 53.3 \mt{0.1} & 32.4 \mt{0.3} & 70.6 \mt{1.6} \\ \midrule
JTT ~\cite{liu2021just} & 52.1 \mt{0.1} & 31.8 \mt{0.2} & 66.3 \mt{1.3} \\
LfF ~\cite{nam2020learning} & 49.6 \mt{0.2} & 31.0 \mt{0.3} & 65.8 \mt{1.2} \\
RWY ~\cite{idrissi2022simple} & 50.8 \mt{0.1} & 30.9 \mt{0.2} & 69.9 \mt{1.3} \\
Group DRO ~\cite{sagawa2019distributionally} & 51.9 \mt{0.2} & 30.4 \mt{0.3} & 68.5 \mt{0.9} \\
CVaR DRO ~\cite{levy2020large} & 51.5 \mt{0.1} & 31.0 \mt{0.3} & 66.8 \mt{1.3} \\ \midrule
\bdro (\vib) (ours) & 52.0 \mt{0.2} & 31.8 \mt{0.2} & 70.4 \mt{1.5} \\
\bdro (\ltwop) (ours) & 53.1 \mt{0.1} & 32.3 \mt{0.2} & 71.2 \mt{1.0} \\
\end{tabular}
\caption{\footnotesize \textbf{\bdro does better than Group DRO and other baselines on two WILDS datasets where the precise nature of shift is unknown:} Average (Avg) and worst region (W-Reg for FMoW) test accuracies on Camelyon17 and FMoW. In ($\cdot$) we report the standard error of the mean accuracy across five runs.
\label{tab:gen-cov-shifts}
}
\end{table}
(\eg Group DRO) to be robust to domain generalization.
This is also noted by~\citet{gulrajani2020search}. Next, we see that \bdro (\ltwop version) does better than other group shift baselines
on both worst-region and average accuracy on FMoW, and matches ERM performance on Camelyon17. One explanation could be that even though these datasets test models on new domains, there may be some latent groups defining these domains that are simple and form part of a latent subpopulation shift. Investigating this claim further is a promising line of future work.
\subsection{What fraction of minority is recovered by \bdro?}
\label{subsec:identify-min}
We claim that our less pessimistic objective can more accurately recover (upsample) the true minority group if indeed the minority group is simple (see Assumption~\ref{assm:simple-group-shift} for our definition of simple).
In this section, we aim to verify this claim. If we treat the examples in the top $10\%$ by assigned weight (a threshold chosen for post hoc analysis) as our predicted minorities, we can check the precision and recall of this decision on the Waterbirds and CelebA datasets. Figure~\ref{fig:precision-recall} plots these metrics at each training epoch for \bdro (with varying $\bvib$), \jtt and CVaR DRO. The precision of the random baseline tells us the true fraction of minority examples in the data. First, we note that \bdro consistently performs much better on this metric than unconstrained CVaR DRO. In fact, as we reduce the strength of $\bvib$ we recover precision/recall close to that of the latter. This controlled experiment shows that the bitrate constraint is helpful (and very much needed) in practice to identify rare simple groups.
In Figure~\ref{fig:precision-recall} we observe that asymptotically, the precision of \bdro is better than \jtt on both datasets, while the recall is similar. Since importance weighting has little impact in later stages of training with exponential-tail losses~\citep{soudry2018implicit, byrd2019effect}, other losses (\eg polytail losses~\citep{wang2021importance}) may further improve the performance of \bdro, as it gets better at identifying the minority classes when trained longer.
\begin{figure}
\caption{\footnotesize By considering the fraction of points upweighted by our adversary (top $10\%$) as the positive class, we analyze the precision and recall of this class with respect to the minority group, and do the same for JTT, a random baseline and CVaR DRO. \bdro achieves the highest precision and matches the recall of \jtt asymptotically. We also find that increasing the strength of the bitrate constraint $\bvib$ helps improve precision/recall.}
\label{fig:precision-recall}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a method for making machine learning models more robust. While prior methods optimize robustness on a per-example or per-group basis, our work focuses on features. In doing so, we avoid requiring group annotations on training samples, but also avoid the excessively conservative solutions that might arise from CVaR DRO with fully unconstrained adversaries. Our results show that our method avoids learning spurious features, is robust to noise in the training labels, and does better on other forms of covariate shifts compared to prior approaches. Our theoretical analysis also highlights other provable benefits in some settings like reduced estimation error, lower excess risk and faster convergence rates for certain solvers.
\textbf{Limitations.}
While our method lifts the main limitation of Group DRO (access to training group annotations), it does so at the cost of increased complexity. Further, to tune hyperparameters, like prior work we assume access to some group annotations on a validation set, but we also get decent performance (on some datasets) with only a balanced validation set (see Appendix~\ref{appsec:additional-expts}). Adapting group shift methods to more generic settings remains an important and open problem.
{
\textbf{Acknowledgement.} The authors would like to thank Tian Li, Saurabh Garg at Carnegie Mellon University, and Yoonho Lee at Stanford University for helpful feedback and discussion.
}
\appendix
\section*{Appendix Outline}
\ref{appsec:practical-imp} Implementing \bdro in practice
\ref{appsec:additional-expts} Additional empirical results and other experiment details
\ref{appsec:omitted-proofs} Omitted Proofs.
\input{sections/practical}
\input{sections/additional_exp}
\input{sections/proofs}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Performance measures for the two-node queue with finite buffers}
\author[label1]{Yanting Chen}
\address[label1]{College of Mathematics and Econometrics, Hunan University, Changsha, Hunan 410082, P.~R.~China\fnref{label}}
\address[label2]{Stochastic Operations Research, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands\fnref{labe2}}
\ead{[email protected]}
\author[label2]{{Xinwei Bai}}
\ead{[email protected]}
\author[label2]{Richard J. Boucherie}
\ead{[email protected]}
\author[label2]{Jasper Goseling}
\ead{[email protected]}
\address{}
\begin{abstract}
We consider a two-node queue modeled as a two-dimensional random walk. In particular, we consider the case that one or both queues have finite buffers. We develop an approximation scheme based on the Markov reward approach to error bounds in order to bound performance measures of such random walks in terms of a perturbed random walk in which the transitions along the boundaries are different from those in the original model and the invariant measure of the perturbed random walk is of product-form. We then apply this approximation scheme to a tandem queue and some variants of this model, for the case that both buffers are finite. We also apply our approximation scheme to a coupled-queue in which only one of the buffers has finite capacity.
\end{abstract}
\begin{keyword}
Two-node queue\sep Random walk \sep Finite state space \sep Product-form \sep Error bounds \sep Performance measure
\end{keyword}
\end{frontmatter}
The two-node queue is one of the most extensively studied topics in queueing theory. It can often be modeled as a two-dimensional random walk on the quarter-plane. Hence, it is sufficient to find performance measures of the corresponding two-dimensional random walk if we are interested in the performance of the two-node queue. In this work we analyze the steady-state performance of a two-node queue for the particular case that one or both of the queues have finite buffer capacity. Our aim is to develop a general methodology that can be applied to any two-node queue that can be modeled as a two-dimensional random walk on (part of) the quarter-plane.
A special case of the two-node queue with finite buffers at both queues that has been extensively studied is the tandem queue with finite buffers. An extensive survey of results on this topic is provided in~\cite{balsamo2011queueing, perros1994queueing}. Most of these papers focus on the development of approximations or algorithmic procedures to find steady-state system performances such as the throughput and the average number of customers in the system. A popular approach used in such approximations is decomposition, see~\cite{asadathorn1999decomposition, gershwin1987efficient}. The main variations of a two-node queue with finite buffers at both queues are: three or more stations in the tandem queue~\cite{shanthikumar1994bounding}, multiple servers at each station~\cite{van1989simple, van2006performance}, optimal design for allocating finite buffers to the stations~\cite{hillier1995optimal}, general service times~\cite{van1987formal, van2004error}, etc. Numerical results of such approximations often suggest that the proposed approximations are indeed bounds on the specific performance measure; however, rigorous proofs are not always available. Moreover, these approximation methods cannot be easily extended to a general method that determines the steady-state performance measure of any two-node queue with finite buffers at both queues.
Van Dijk et al.~\cite{vandijk88tandem} pioneered the development of error bounds for the system throughput using the product-form modification approach. The method has since been further developed by van Dijk et al.~\cite{van1998bounds,vandijk88perturb} and has been applied to, for instance, Erlang loss networks~\cite{boucherie2009monotonicity}, to networks with breakdowns~\cite{van1988simple}, to queueing networks with non-exponential service~\cite{van2004error} and to wireless communication networks with network coding~\cite{goseling2013energy}. An extensive description and overview of various applications of this method can be found in~\cite{vandijk11inbook}.
A major disadvantage of the error bound method mentioned above is that the verification steps that are required to apply the method can be technically quite complicated. Goseling et al.~\cite{goseling2014linear} developed a general verification technique for random walks in the quarter-plane. This verification technique is based on formulating the application of the error bounds method as solving a linear program. In doing so, it completely avoids the induction proof required in~\cite{vandijk88perturb}. Moreover, instead of only bounding performance measures for a specific queueing system, the approximation method developed in~\cite{goseling2014linear} accepts any random walk in the quarter-plane as an input.
The main contribution of the current work is to provide an approximation scheme which can be readily applied to approximate performance measures for any two-node queue in which one or both queues have finite buffer capacity. This is based on modifying the general verification technique developed in~\cite{goseling2014linear} for a two-dimensional random walk on a state space that is finite in one or both dimensions.
We apply this approximation scheme to a tandem queue with finite buffers at both queues. We show that the resulting error bounds for the blocking probability improve upon the error bounds for the blocking probability provided in~\cite{vandijk88tandem}. The method in~\cite{vandijk88tandem} is based on specific model modifications. Apart from this, our approximation scheme is more general in the sense that other interesting performance measures can also be obtained easily. This is an advantage over the methods used in~\cite{van1998bounds, vandijk88tandem, vandijk88perturb}, where different model modifications are necessary for different performance measures. Moreover, we show that the error bounds can also be obtained for variations of the tandem queue with finite buffers. In particular, we consider the case that one server speeds up or slows down when another server is idle or saturated.
For a two-node queue with finite buffers at both queues, it is also possible to find the invariant measure by solving a system of linear equations. The complexity of solving this system is at least $O(L^{2})$, where $L$ is the size of the smallest buffer. We will demonstrate that the approach that is presented in this paper has a complexity that is constant in $L$. This makes it an interesting alternative to solving for the invariant measure by brute force if $L$ is large.
Finally, we apply this approximation scheme to a two-node queue with finite buffers at only one queue. In particular, we apply our results to the coupled-queue~\cite{fayolle1979two}. Contrary to~\cite{fayolle1979two}, we consider the case that one of the queues has finite buffer capacity. The numerical results illustrate that our approximation scheme achieves tight bounds.
There are other means to analyze the two-node queue with finite buffers at one or both queues. In particular, the models considered in this paper are instances of quasi-birth-and-death (QBD) processes and are, therefore, amenable to a solution using the matrix-geometric approach~\cite{Latouche1999Matrix, Neuts1981Matrix}.
There are many variations on the matrix geometric method, in particular in how to compute the rate matrix. However, all methods share a common complexity of $O(L^3)$, where $L$ is the number of phases, which in our case corresponds to the size of the smallest buffer. Therefore, our approach, with constant complexity in $L$, provides a promising alternative to the matrix geometric method for large $L$. A drawback of our approach is that it in general does not give an exact result, but only bounds.
Another important advantage of our work is that it is possible, though outside the scope of the current paper, to extend our approach to queueing networks with more than two queues and more complicated interactions. Such an extension is not possible for the matrix-geometric method. This paper provides the necessary intermediate step in building up our approach from the first ideas in~\cite{goseling2014linear} towards a completely general method that can be applied to queueing networks for which no analysis methods currently exist.
The remainder of this paper proceeds as follows. In Section~\ref{sec:model4E}, we present the model and formulate the research problem. In Section~\ref{sec:as4E}, we provide an approximation scheme to bound performance measures for any two-node queue with finite buffers at both queues. We bound performance measures for a tandem queue with finite buffers and some variants of this model in Section~\ref{sec:errorboundsTandemE}. In Section~\ref{sec:onequeue}, we extend the approximation scheme to any two-node queue with finite buffers at only one queue. In Section~\ref{sec:errorboundsSharing4E}, this extended approximation scheme has been applied to a coupled-queue with processor sharing and finite buffers at only one queue. Finally, we provide concluding remarks in Section~\ref{sec:conclusionE}.
\section{Two-node queue with finite buffers at both queues} \label{sec:model4E}
\subsection{Two-node queue with finite buffers at both queues}
The two-node queue with finite buffers at both queues is a queueing system with two servers, each of them having finite storage capacity. If a job arrives at a server which does not have any more storage capacity, then the job is lost. In general, the two queues influence each other, {\it i.e.},\ the service rate at one of the queues depends on the number of jobs at the other.
Such a queueing system is naturally modeled as a two-dimensional finite random walk, which we introduce next. The connection between the continuous-time queueing system and the discrete-time random walk, obtained through uniformization, is made explicit for various examples in Section~\ref{sec:errorboundsTandemE} and Section~\ref{sec:errorboundsSharing4E}.
\subsection{Two-dimensional finite random walk on both axes}
\begin{figure}\label{fig:partXE}
\end{figure}
\begin{figure}\label{fig:rwE}
\end{figure}
We consider a two-dimensional random walk $R$ on $S$ where
\begin{equation*}
S = \{0,1,2, \cdots, L_1\} \times \{0,1,2, \cdots, L_2\}.
\end{equation*}
We use a pair of coordinates to represent a state, {\it i.e.},\ for $n \in S$, $n = (i,j)$. The state space is naturally partitioned in the following components (see Figure~\ref{fig:partXE}):
\begin{align*}
C_1 &= \{1,2,3, \cdots, L_1-1\} \times \{0\}, \quad C_2 = \{0\} \times \{1,2,3, \cdots, L_2-1\}, \\
C_3 &= \{1,2,3, \cdots, L_1-1\} \times \{L_2\}, \quad C_4 = \{L_1\} \times \{1,2,3,\cdots,L_2-1\}, \\
C_5 &= \{(0,0)\}, \quad C_6 = \{(0, L_2)\}, \quad C_7 = \{(L_1, L_2)\}, \quad C_8 = \{(L_1, 0)\}, \\
C_9 &= \{1,2,3, \cdots, L_1-1\} \times \{1,2,3, \cdots, L_2-1\}.
\end{align*}
We refer to this partition as the $C$-partition. The index of the component of state $n \in S$ is denoted by $k(n)$, {\it i.e.},\ $n \in C_{k(n)}$. Take for instance, $C_5 = (0,0)$. Then the index of $(0,0)$ is $5$, hence, $k((0,0)) = 5$, {\it i.e.},\ $(0,0) \in C_5$.
Transitions are restricted to the neighboring points (horizontally, vertically and diagonally). For $k = 1,2, \cdots, 9$, we denote by $N_k$ the neighbors of a state in $C_k$. More precisely, $N_1 = \{-1,0,1\} \times \{0,1\}$, $N_2 = \{0,1\} \times \{-1,0,1\}$, $N_3 = \{-1,0,1\} \times \{-1,0\}$, $N_4 = \{-1,0\} \times \{-1,0,1\}$, $N_5 = \{0,1\} \times \{0,1\}$, $N_6 = \{0,1\} \times \{-1,0\}$, $N_7 = \{-1,0\} \times \{-1,0\}$, $N_8 = \{-1,0\} \times \{1,0\}$ and $N_9 = \{-1,0,1\} \times \{-1,0,1\}$. Also, let $N = N_9$.
Again, let us consider $C_5$. The neighbor set $N_5$ is the product set $\{0,1\} \times \{0,1\}$, which denotes the coordinates of the transitions, either horizontal or vertical.
Let $p_{k,u}$ denote the transition probability from state $n$ in component $k$ to $n + u$, where $u \in N_k$. For $C_5$, we now have $p_{k,u}$ from state $n = (0,0)$ in component $k =5$ to $(0,0) + u$, where $u \in N_5$. This means $u$ could be $(0,0), (0,1), (1,0)$ and $(1,1)$. For instance, $p_{5,(1,0)}$ is the transition probability from state $(0,0)$ in component $5$ to $(0,0) + (1,0)$, {\it i.e.},\ (1,0), transition to the right. The transition diagram of a two-dimensional finite random walk can be found in Figure~\ref{fig:rwE}. The transitions from a state to itself are omitted. The system is homogeneous in the sense that the transition probabilities (incoming and outgoing) are translation invariant in each of the components, {\it i.e.},\
\begin{equation}{\label{eq:homogeneousE}}
p_{k(n-u), u} = p_{k(n),u}, \quad \text{for $n - u \in S$ and $u \in N_{k(n)}$}.
\end{equation}
Equation~\eqref{eq:homogeneousE} not only implies that the transition probabilities for each part of the state space are translation invariant, but also ensures that the transition probabilities entering the same component of the state space are translation invariant.
We assume that the random walk $R$ that we consider is aperiodic, irreducible, positive recurrent, and has invariant probability measure $m(n)$, where $m(n)$ satisfies for all $n \in S$,
\begin{equation*}
m(n) = \sum_{u \in N_{k(n)}} p_{k(n+u), -u} m(n+u).
\end{equation*}
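For small buffers, the invariant measure can of course be obtained numerically from these balance equations. The following Python sketch (an illustration with assumed rates and blocking convention, not the model analyzed in Section~\ref{sec:errorboundsTandemE}) builds the uniformized transition matrix of a tandem queue with finite buffers and solves for $m$ by brute force; it is this direct approach whose cost grows with the buffer sizes, as discussed in the introduction.
\begin{verbatim}
import numpy as np

def tandem_invariant(L1, L2, lam=0.2, mu1=0.3, mu2=0.4):
    """Brute-force invariant measure of a uniformized finite tandem queue.

    States are (i, j); an arrival to a full queue 1 is lost, and a queue-1 service
    completion is blocked when queue 2 is full.  The rates are illustrative and
    satisfy lam + mu1 + mu2 <= 1, so the remaining mass becomes a self-loop."""
    states = [(i, j) for i in range(L1 + 1) for j in range(L2 + 1)]
    idx = {s: k for k, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (i, j), k in idx.items():
        if i < L1:                      # arrival joins queue 1
            P[k, idx[(i + 1, j)]] += lam
        if i > 0 and j < L2:            # service at queue 1, job moves to queue 2
            P[k, idx[(i - 1, j + 1)]] += mu1
        if j > 0:                       # service at queue 2, job leaves
            P[k, idx[(i, j - 1)]] += mu2
        P[k, k] += 1.0 - P[k].sum()     # self-loop (uniformization / lost events)
    # Solve m P = m with sum(m) = 1.
    A = np.vstack([P.T - np.eye(len(states)), np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return states, m

states, m = tandem_invariant(L1=4, L2=3)
blocking = sum(mi for (i, j), mi in zip(states, m) if i == 4)  # P(first queue full)
print(blocking)
\end{verbatim}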
\subsection{Problem formulation}
Our goal is to approximate the steady-state performance of the random walk $R$. The performance measure of interest is
\begin{equation*}
\mathcal{F} = \sum_{n\in S} m(n)F(n),
\end{equation*}
where $F(n):S\to[0,\infty)$ is linear in each of the components from the $C$-partition, {\it i.e.},\
\begin{equation}{\label{eq:performancemeasureE}}
F(n) = f_{k(n),0} + f_{k(n),1} i + f_{k(n), 2} j, \quad \text{for} \quad n = (i,j) \in S.
\end{equation}
The constants $f_{k(n),0}$, $f_{k(n),1}$ and $f_{k(n), 2}$ are allowed to be different for different components from the $C$-partition of $S$.
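For instance (an illustrative choice of $F$), the probability that the first queue is full is obtained by taking $f_{k,0}=1$ for the components on the boundary $i = L_1$, {\it i.e.},\ for $C_4$, $C_7$ and $C_8$, and setting all other constants to zero, so that $\mathcal{F} = \sum_{j=0}^{L_2} m((L_1,j))$.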
In general, it is not possible to obtain the probability measure $m(n)$ in a closed-form. Therefore, we will use a perturbed random walk of which the invariant measure has a closed-form expression to approximate the performance measure $\mathcal{F}$.
We approximate the performance measure $\mathcal{F}$ in terms of the perturbed random walk $\bar{R}$. We consider the perturbed random walk $\bar{R}$ in which only the transition probabilities along the boundaries $(C_1, cdots, C_8)$ are allowed to be different, {\it i.e.},\ for instance, $p_{1,(-1,0)}$, $p_{1,(1,0)}$, $p_{1,(0,0)}$ for the state from $C_1$ are allowed to be different in $\bar{R}$, $p_{2,(0,1)}$, $p_{2,(0,-1)}$, $p_{2,(0,0)}$ for the state from $C_2$ are allowed to be different in $\bar{R}$, etc. An example of a perturbed random walk $\bar{R}$ can be found in Figure~\ref{fig:rwXE}.
\begin{figure}\label{fig:rwXE}
\end{figure}
We use $\bar{p}_{k,u}$ to denote the probability of $\bar{R}$ jumping from any state $n$ in component $C_k$ to $n + u$, where $u \in N_k$. Moreover, let $q_{k,u} = \bar{p}_{k,u} - p_{k,u}$. The probability measure $\bar{m}$ of $\bar{R}$ is assumed to be of product form, {\it i.e.},\
\begin{equation*}
\bar{m}(n) = \alpha \rho^i \sigma^j,
\end{equation*}
where $n = (i,j)$ for some $(\rho, \sigma) \in (0,1)^2$ and $\alpha \neq 0$. The measure $\bar{m}$ is the invariant measure of $\bar{R}$, {\it i.e.},\ it satisfies
\begin{equation}{\label{eq:balancePE}}
\bar{m}(n) = \sum_{u \in N_{k(n)}} \bar{p}_{k(n+u), -u} \bar{m}(n+u),
\end{equation}
for all $n \in S$.
In the following sections, we are going to find upper and lower bounds of $\mathcal{F}$ in terms of the perturbed random walk $\bar{R}$ defined above.
\section{Proposed approximation scheme}{\label{sec:as4E}}
In this section, we establish an approximation scheme to find upper and lower bounds for performance measures of a two-dimensional finite random walk.
In~\cite{goseling2014linear}, an approximation scheme based on a linear program is developed for a random walk in the quarter-plane. This approximation scheme has also been used in~\cite{chen2015invariant}. We will show in this paper that the technique can be extended to cover our model, {\it i.e.},\ a two-dimensional finite random walk. We will explain how this is achieved in the following sections.
\subsection{Markov reward approach to error bounds}
The fact that $R$ and $\bar{R}$ differ only along the boundaries of $S$ makes it possible to obtain the error bounds for the performance measures via the Markov reward approach. An introduction to this technique is provided in~\cite{vandijk11inbook}. We interpret $F$ as a reward function, where $F(n)$ is the one step reward if the random walk is in state $n$. We denote by $F^t(n)$ the expected cumulative reward at time $t$ if the random walk starts from state $n$ at time $0$, {\it i.e.},\
\begin{equation*}
F^t(n) =
\begin{cases}
0, \quad &\text{if } t = 0,\\
F(n) + \sum_{u \in N_{k(n)}} p_{k(n), u} F^{t - 1} (n + u), \quad &\text{if } t > 0,
\end{cases}
\end{equation*}
For convenience, let $F^t(n + u) = 0$ where $u \in \{(v,w) \mid v,w \in \{-1,0,1\}\}$ if $n + u \notin S$. Terms of the form $F^t(n + u) - F^t(n)$ play a crucial role in the Markov reward approach and are denoted as \emph{bias terms}. Let $D^t_u(n) = F^t(n + u) - F^t(n)$. For the unit vectors $e_1 = (1,0)$, $e_2 = (0,1)$, let $D_1^t(n) = D_{e_1}^t(n)$ and $D_2^t(n) = D_{e_2}^t(n)$.
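To make the recursion and the bias terms concrete, here is a small Python sketch (ours, over an arbitrary enumeration of the state space encoded by a transition matrix; the two-state example is purely illustrative): it computes $F^t$ by iterating the definition above and shows that differences of the form $F^t(n+u) - F^t(n)$ remain bounded in $t$, which is exactly the property exploited later when such terms are bounded uniformly over $t$.
\begin{verbatim}
import numpy as np

def cumulative_rewards(P, F, T):
    """Expected cumulative rewards over an enumerated state space:
    F^0 = 0 and F^t(n) = F(n) + sum_u p(n, n+u) * F^{t-1}(n+u).
    Returns the list [F^0, F^1, ..., F^T]."""
    Ft = np.zeros(len(F))
    out = [Ft.copy()]
    for _ in range(T):
        Ft = F + P @ Ft
        out.append(Ft.copy())
    return out

# Toy 2-state chain: the bias term F^t(1) - F^t(0) stays bounded in t.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
F = np.array([0.0, 1.0])
for Ft in cumulative_rewards(P, F, 30)[::10]:
    print(Ft[1] - Ft[0])
\end{verbatim}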
The next result in~\cite{vandijk11inbook} provides bounds for the approximation error for $\mathcal{F}$. We will use two non-negative functions $\bar{F}$ and $G$ to bound the performance measure $\mathcal{F}$.
\begin{theorem}[\cite{vandijk11inbook}]{\label{thm:vandijkE}}
Let $\bar{F}: S \rightarrow [0, \infty)$ and $G: S \rightarrow [0, \infty)$ satisfy
\begin{equation}{\label{eq:requirementE}}
\left|\bar{F}(n) - F(n) + \sum_{u \in N_{k(n)}} q_{k(n), u} D^t_u(n)\right| \leq G(n),
\end{equation}
for all $n \in S$ and $t \geq 0$. Then
\begin{equation}{\label{eq:resultE}}
\sum_{n \in S}[\bar{F}(n) - G(n)] \bar{m}(n) \leq \mathcal{F} \leq \sum_{n \in S}[\bar{F}(n) + G(n)] \bar{m}(n).
\end{equation}
\end{theorem}
\subsection{A linear program approach}
In this section we present a linear program approach to bound the errors. Due to our construction of $\bar{R}$, the random walks $R$ and $\bar{R}$ differ only in the transitions that are along the unit directions, {\it i.e.},\
\begin{equation}{\label{eq:boundaryE}}
q_{k,u} = \bar{p}_{k,u} - p_{k,u} = 0 \quad \text{for} \quad u \notin \{e_1, e_2, -e_1, -e_2, (0,0)\}.
\end{equation}
This restriction will significantly simplify the presentation of the result.
To start, consider the following optimization problem. We only consider how to obtain the upper bound for $\mathcal{F}$ here because the lower bound for $\mathcal{F}$ can be found similarly.
\noindent \textbf{Problem 1}
\begin{equation}{\label{eq:LPOE}}
\textit{minimize} \quad \sum_{n \in S} [\bar{F}(n) + G(n)] \bar{m}(n),
\end{equation}
\begin{align}
\textit{subject to} \quad &\left|\bar{F}(n) - F(n) + \sum_{s = 1,2} \left(q_{k(n), e_s} D_s^t(n) - q_{k(n), -e_s} D_s^t(n - e_s)\right)\right| \notag \\
&\leq G(n), \quad \text{for} \quad n \in S, t \geq 0, \label{eq:LPS1E}\\
&\bar{F}(n) \geq 0, G(n) \geq 0, \quad \text{for} \quad n \in S. \label{eq:LPS2E}
\end{align}
The variables in Problem $1$ are the functions $\bar{F}(n)$, $G(n)$ and the parameters are $F(n), \bar{m}(n), q_{k(n), e_s}$ and $D_s^t(n)$ for $n \in S$, $s = 1,2$. Hence, Problem $1$ is a linear programming problem over two non-negative variables $\bar{F}(n)$ and $G(n)$ for every $n \in S$.
This linear program has infinitely many constraints because the time horizon is unbounded. We will first bound the bias terms $D^t_s(n)$ uniformly over $t$, which leaves a linear program with a finite number of variables and constraints. However, a further reduction is still needed, because the number of variables and constraints grows rapidly with $L_1$ and $L_2$, which define the size of the state space. Our contribution is to reduce Problem $1$ to a linear programming problem in which the number of variables and constraints does not depend on the size of the finite state space. By doing so, we achieve constant complexity in the parameters $L_1$ and $L_2$, as opposed to, for instance, the matrix geometric method, which has cubic complexity.
We now verify that the objective in Problem $1$ is indeed an upper bound on the performance measure $\mathcal{F}$. Using $D_{(0,0)}^t(n) = 0$, $D_{-e_s}^t(n) = -D^t_{e_s}(n - e_s)$ for $s = 1,2$ and \eqref{eq:boundaryE}, it follows directly that~\eqref{eq:LPS1E} is equivalent to~\eqref{eq:requirementE}. Therefore, it follows from Theorem~\ref{thm:vandijkE} that the objective of Problem $1$ provides an upper bound on $\mathcal{F}$.
\subsection{Bounding the bias terms}
The main difficulty in solving Problem $1$ is the unknown bias terms $D_s^t(n)$. It is in general not possible to find closed-form expressions for the bias terms. Therefore, we introduce two functions $A_s$: $S \rightarrow [0,\infty)$ and $B_s$: $S \rightarrow [0, \infty)$, $s = 1,2$. We will formulate a finite number of constraints on the functions $A_s$ and $B_s$, $s = 1,2$, such that for any $t$ and $s = 1,2$ we have
\begin{equation}{\label{eq:LPconstrainE}}
-A_s(n) \leq D_s^t(n) \leq B_s(n),
\end{equation}
{\it i.e.},\ the functions $A_s$ and $B_s$ provide bounds on the bias terms uniformly over all $t \geq 0$. In the next section, we will find a finite number of constraints that imply~\eqref{eq:LPconstrainE}. Our method is based on the method that was developed in~\cite{goseling2014linear} for the case of an unbounded state space.
For notational convenience, as will become clear below, we define a finer partition of $S$, the $Z$-partition. This partition is depicted in Figure~\ref{fig:partitionBE}. For example, we have $Z_1=\{(0,0)\}$, $Z_2=\{(1,0)\}$, $Z_3=\{2, \dots, L_1-2\}\times\{0\}$, $Z_4=\{(L_1-1,0)\}$ and $Z_5=\{(L_1,0)\}$; the remaining components of the partition are defined similarly. Let $k^z(n)$ denote the label of the component of the $Z$-partition that contains state $n \in S$, {\it i.e.},\ $n \in Z_{k^z(n)}$. Similar to the definition of $N_k$, let $N^z_k$ denote the neighbors of a state $n$ in $Z_k$ with respect to the $Z$-partition of $S$.
\begin{figure}\label{fig:partitionBE}
\end{figure}
The constraints which ensure~\eqref{eq:LPconstrainE} are obtained based on an induction in $t$. More precisely, we express $D_s^{t+1}$ as a linear combination of $D_1^t$ and $D_2^t$ as
\begin{equation}{\label{eq:inductionDE}}
D_s^{t+1}(n) = F(n + e_s) - F(n) + \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} c_{s,k^z(n),v,u} D_v^t(n + u),
\end{equation}
where the $c_{s,k,v,u}$, $s \in \{1,2\}, k \in \{1,2, \cdots, 25\}, v \in \{1,2\}, u \in N^z_k$, are constants. An important property of the $Z$-partition is that, starting from any state $n$ in component $k^z$ of the $Z$-partition, the component $k(n+u)$ in the $C$-partition is well defined for all $u\in N^z_{k^z}$ and depends only on $k^z$ and $u$. In~\cite{goseling2014linear} it was shown, using this property, that constants $c_{s,k,v,u}$ that ensure~\eqref{eq:inductionDE} always exist and that they can be expressed as simple functions of the transition probabilities of the random walk. The results in~\cite{goseling2014linear} are derived for the random walk on the whole quarter-plane. However, a careful inspection of the results in~\cite{goseling2014linear} reveals that they also hold for our model of a random walk on a bounded state space. Therefore, we refer the reader to~\cite{goseling2014linear} and omit further details here.
We are now ready to bound the bias terms based on~\eqref{eq:inductionDE}. The result, which is easy to verify, states that if $A_s$: $S \rightarrow [0, \infty)$ and $B_s$: $S \rightarrow [0, \infty)$, $s = 1,2$, satisfy
\begin{multline*}
F(n + e_s) - F(n) \\
+ \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} \max \{-c_{s,k^z(n),v,u} A_v(n + u), c_{s,k^z(n),v,u} B_v(n + u)\} \leq B_s(n),
\end{multline*}
\begin{multline*}
F(n) - F(n + e_s) \\
+ \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} \max \{-c_{s,k^z(n),v,u} B_v(n + u), c_{s,k^z(n),v,u} A_v(n + u)\} \leq A_s(n),
\end{multline*}
for all $n \in S$, then
\begin{equation*}{\label{eq:equationCE}}
-A_s (n) \leq D_s^t(n) \leq B_s(n),
\end{equation*}
for $s = 1,2$, $n \in S$ and $t \geq 0$.
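For completeness, here is a sketch of the induction behind this claim. The base case holds since $D_s^0(n)=0\in[-A_s(n),B_s(n)]$ by the non-negativity of $A_s$ and $B_s$. If the bounds hold at time $t$, then~\eqref{eq:inductionDE} gives
\begin{equation*}
D_s^{t+1}(n) \leq F(n + e_s) - F(n) + \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} \max \{-c_{s,k^z(n),v,u} A_v(n + u),\, c_{s,k^z(n),v,u} B_v(n + u)\} \leq B_s(n),
\end{equation*}
where the last inequality is the first constraint above; the lower bound $-A_s(n) \leq D_s^{t+1}(n)$ follows in the same way from the second constraint.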
After bounding the bias terms, we can rewrite the linear program of Problem $1$ into Problem $2$ by plugging in the upper and lower bounds for $D_s^t(n)$.
\noindent \textbf{Problem 2}
\begin{equation*}{\label{eq:XLPOE}}
\textit{minimize} \quad \sum_{n \in S} [\bar{F}(n) + G(n)] \bar{m}(n),
\end{equation*}
\begin{align*}
\textit{subject to} \quad & \bar{F}(n) - F(n) + \sum_{s = 1,2} \max \{ q_{k(n), e_s} B_s(n) + q_{k(n), -e_s} A_s(n - e_s), \\
&-q_{k(n), e_s} A_s(n) - q_{k(n), -e_s} B_s(n - e_s) \} \leq G(n), \label{eq:XLPS1E}\\
&F(n) - \bar{F}(n) + \sum_{s = 1,2} \max \{ q_{k(n), e_s} A_s(n) + q_{k(n), -e_s} B_s(n - e_s), \\
&-q_{k(n), e_s} B_s(n) - q_{k(n), -e_s} A_s(n - e_s) \} \leq G(n), \\
&F(n + e_s) - F(n) + \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} \max \{-c_{s,k^z(n),v,u} A_v(n + u), \\
&c_{s,k^z(n),v,u} B_v(n + u)\} \leq B_s(n), \displaybreak[4]\\
&F(n) - F(n + e_s) + \sum_{v = 1,2} \sum_{u \in N^z_{k^z(n)}} \max \{-c_{s,k^z(n),v,u} B_v(n + u), \\
&c_{s,k^z(n),v,u} A_v(n + u)\} \leq A_s(n), \\
&\bar{F}(n) \geq 0, \; G(n) \geq 0, \; A_s(n) \geq 0, \; B_s(n) \geq 0,\\
&\text{for} \quad n \in S, \; s \in \{1,2\}.
\end{align*}
\subsection{Fixed number of variables and constraints}
The final step is to reduce Problem $2$ to a linear program with a fixed number of variables and constraints, regardless of the size of the state space.
We first introduce the notion of a piecewise-linear function on the $Z$-partition. A function $F: S \rightarrow[0, \infty)$ is called $Z$-linear if it is linear on each of the components of the $Z$-partition, {\it i.e.},\
\begin{equation*}
F(n) = f_{k^z(n),0} + f_{k^z(n),1}\, i + f_{k^z(n), 2}\, j, \quad \text{for} \quad n = (i,j) \in S,
\end{equation*}
where $f_{k^z(n),0}$, $f_{k^z(n),1}$ and $f_{k^z(n), 2}$ are the constants that define the function. In a similar fashion we define $C$-linear functions on the $C$-partition of $S$.
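As a small illustration (a sketch with hypothetical names, not code from the paper), a $Z$-linear function is fully specified by three coefficients per component, so the number of decision variables depends only on the number of components:
\begin{verbatim}
# Minimal sketch: evaluating a Z-linear function from per-component
# coefficients.  coeff[k] = (f_k0, f_k1, f_k2); component_of(n) returns
# the label k^z(n) of the component containing n = (i, j).
def z_linear(coeff, component_of, n):
    f0, f1, f2 = coeff[component_of(n)]
    i, j = n
    return f0 + f1 * i + f2 * j
\end{verbatim}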
Now, in Problem~$2$ we add the constraint that the variables $\bar{F}$, $G$, $A_s$ and $B_s$ are $C$-linear functions. These functions are then defined in terms of a number of coefficients that is independent of $L_1$ and $L_2$. Hence, the number of variables in the resulting linear program is independent of $L_1$ and $L_2$.
It remains to show that the number of constraints is independent of $L_1$ and $L_2$. Following the reasoning on the properties of the $Z$-partition below~\eqref{eq:inductionDE}, it is easy to see that all constraints in Problem~$2$ can be formulated as non-negativity constraints on $Z$-linear functions. Since an affine function on a rectangular component attains its minimum at one of the corners, non-negativity on a component is equivalent to non-negativity at its corners, which induces at most $4$ constraints per component of the $Z$-partition. Hence, the total number of constraints depends only on the number of components in the $Z$-partition and not on the size of the state space.
\subsection{The optimal solutions}
We are now able to find the upper and lower bounds of $\mathcal{F}$ based on the linear program derived above.
Let $\mathcal{P}$ denote the set of pairs $(\bar{F}, G)$ for which we are able to find functions $A_s$ and $B_s$, $s = 1,2$, such that all constraints in Problem $2$ are satisfied. Then, we obtain the upper and lower bounds for $\mathcal{F}$ as follows:
\begin{equation*}
\mathcal{F}_{up} = \min \left\{\sum_{n \in S} [\bar{F}(n) + G(n)] \bar{m}(n) \mid (\bar{F}, G) \in \mathcal{P}\right\},
\end{equation*}
and
\begin{equation*}
\mathcal{F}_{low} = \max \left\{\sum_{n \in S} [\bar{F}(n) - G(n)] \bar{m}(n) \mid (\bar{F}, G) \in \mathcal{P}\right\}.
\end{equation*}
We have now presented the complete approximation scheme to obtain upper and lower bounds for $\mathcal{F}$ using the perturbed random walk $\bar{R}$, whose probability measure is of product-form.
In the following section, we will consider some examples: a tandem queue with finite buffers and some variants of this model.
\section{Application to the Tandem queue with finite buffers}{\label{sec:errorboundsTandemE}}
In this section, we investigate the applications of the approximation scheme proposed in Section~\ref{sec:as4E}.
\subsection{Model description}
Consider a two-node tandem queue with Poisson arrivals at rate $\lambda$. Both nodes have a single server. At most a finite number of jobs, say $L_1$ and $L_2$ jobs, can be present at nodes $1$ and $2$, respectively; this includes the jobs in service. An arriving job is rejected if node $1$ is saturated, {\it i.e.},\ if there are $L_1$ jobs at node $1$. The service times at nodes $1$ and $2$ are exponentially distributed with parameters $\mu_1$ and $\mu_2$, respectively.
\begin{figure}\label{fig:TandemFBE}
\end{figure}
When node $2$ is saturated, {\it i.e.},\ when there are $L_2$ jobs at node $2$, node $1$ stops serving. When node $1$ is not blocked, a job that completes service at node $1$ is instantly routed to node $2$. All service times are independent. We also assume that the service discipline is first-in first-out.
The tandem queue with finite buffers can be represented by a continuous-time Markov process whose state space consists of the pairs $(i,j)$ where $i$ and $j$ are the number of jobs at node $1$ and node $2$, respectively. We now uniformize this continuous-time Markov process to obtain a discrete-time random walk. We assume without loss of generality that $\lambda + \mu_1 + \mu_2 \leq 1$ and uniformize the continuous-time Markov process with uniformization parameter $1$. We denote this random walk by $R_T$. All transition probabilities of $R_T$, except those for the transitions from a state to itself, are illustrated in Figure~\ref{fig:rwTE}.
\begin{figure}\label{fig:rwTE}
\end{figure}
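As an illustration of the uniformization step (a minimal sketch written directly from the model description above, not code from the paper), the one-step transition probabilities of $R_T$ out of a state $(i,j)$ can be listed as follows; with uniformization parameter $1$ and $\lambda + \mu_1 + \mu_2 \leq 1$, every rate becomes a probability and the remaining mass is a self-loop.
\begin{verbatim}
# Minimal sketch: uniformized one-step transitions of the tandem queue
# with finite buffers R_T, read off from the model description.
def tandem_transitions(i, j, lam, mu1, mu2, L1, L2):
    """Return a list of (probability, next_state) pairs from state (i, j)."""
    moves = []
    if i < L1:                       # arrival accepted at node 1
        moves.append((lam, (i + 1, j)))
    if i > 0 and j < L2:             # node 1 serves unless node 2 is saturated
        moves.append((mu1, (i - 1, j + 1)))
    if j > 0:                        # node 2 serves
        moves.append((mu2, (i, j - 1)))
    moves.append((1.0 - sum(p for p, _ in moves), (i, j)))   # self-loop
    return moves

# Example: transitions out of state (1, 0) for lam = 0.1, mu1 = mu2 = 0.2.
print(tandem_transitions(1, 0, 0.1, 0.2, 0.2, L1=10, L2=10))
\end{verbatim}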
\subsection{Perturbed random walk of $R_T$}{\label{sec:perturbRWE}}
We now present a perturbed random walk $\bar{R}_T$. The invariant measure of the perturbed random walk $\bar{R}_T$ is of product-form and only the transitions along the boundaries in $\bar{R}_T$ are different from those in $R_T$.
\begin{figure}\label{fig:PRWE}
\end{figure}
In the perturbed random walk $\bar{R}_T$, the transition probabilities in the components $C_3, C_4, C_6, C_7, C_8$ differ from those in $R_T$. More precisely, we have $\bar{p}_{3,(1,0)} = \lambda$, $\bar{p}_{3,(-1,0)} = \mu_1$, $\bar{p}_{4,(0,1)} = \lambda$, $\bar{p}_{4,(0,-1)} = \mu_2$, see Figure~\ref{fig:PRWE}. It can be readily verified, by substitution into the global balance equations~\eqref{eq:balancePE} together with the normalization requirement, that the product-form measure
\begin{equation*}
\bar{m}(i,j) = \alpha \left(\frac{\lambda}{\mu_1}\right)^i \left(\frac{\lambda}{\mu_2}\right)^j
\end{equation*}
with normalizing constant $\alpha$ (which depends on $L_1$ and $L_2$) is the probability measure of the perturbed random walk $\bar{R}_T$.
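As a quick sanity check for an interior state (the transition probabilities in the interior are those of $R_T$, since the perturbation only affects the boundary components), the global balance equation at $n=(i,j)$ reads, after cancelling the self-loop terms,
\begin{equation*}
\bar{m}(i,j)\,(\lambda + \mu_1 + \mu_2) = \lambda\, \bar{m}(i-1,j) + \mu_1\, \bar{m}(i+1,j-1) + \mu_2\, \bar{m}(i,j+1),
\end{equation*}
and substituting $\bar{m}(i,j) = \alpha (\lambda/\mu_1)^i (\lambda/\mu_2)^j$ turns the right-hand side into $\bar{m}(i,j)(\mu_1 + \mu_2 + \lambda)$, so the equation is satisfied; the boundary components are verified analogously using the modified probabilities listed above.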
\subsection{Bounding the blocking probability}{\label{sec:examplesE}}
In this section, we provide error bounds for the blocking probability of the tandem queue with finite buffers using the approximation scheme of Section~\ref{sec:as4E}. Moreover, we show that our results are better than those obtained by van Dijk et al. in~\cite{vandijk88tandem}.
For a given performance measure $\mathcal{F}$, we use $\mathcal{F}^{up}$, $\mathcal{F}^{low}$ to denote the upper and lower bounds for $\mathcal{F}$ obtained with our approximation scheme, and $\tilde{\mathcal{F}}^{up}$, $\tilde{\mathcal{F}}^{low}$ to denote the upper and lower bounds based on the method suggested by van Dijk et al.~\cite{vandijk88tandem}.
We use $\mathcal{F}_0$ to denote the blocking probability, {\it i.e.},\ the probability that an arriving job is rejected.
We now consider an example that has also been considered in~\cite{vandijk88tandem}.
\begin{example}{\label{ex:oneE}}
Consider a tandem queue with finite buffers with $\lambda = 0.1$, $\mu_1 = 0.2$ and $\mu_2 = 0.2$.
\end{example}
We would like to compute the blocking probability of the queueing system. Hence, for the performance measure function $F(n)$, defined in~\eqref{eq:performancemeasureE}, we set the coefficients $f_{k,d}$, where $k = 1,2,\cdots,9$ and $d =0,1,2$, to $f_{8,0} = 1$, $f_{4,0} = 1$, $f_{7,0} = 1$ and all others to $0$. The error bounds can be found in Figure~\ref{fig:example1f0E}. Clearly, our results outperform the error bounds obtained in~\cite{vandijk88tandem}. Moreover, the difference between the upper and lower bounds of $\mathcal{F}_0$ is shown in Figure~\ref{fig:example1f0dE}. This indicates that our error bounds are tighter than those in~\cite{vandijk88tandem}.
\begin{figure}\label{fig:example1f0E}
\end{figure}
\begin{figure}\label{fig:example1f0dE}
\end{figure}
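For intuition (a minimal sketch, not the error bounds themselves), the blocking probability of the \emph{perturbed} random walk $\bar{R}_T$, {\it i.e.},\ $\sum_n F(n)\,\bar{m}(n)$ with $F(n)=1$ on the states with $i=L_1$, can be computed directly from the product-form measure; the linear program of Problem~2 then wraps an interval around the corresponding quantity for the original walk $R_T$, so the value below is only the point the bounds are built around.
\begin{verbatim}
# Minimal sketch: blocking probability under the product-form measure of
# the perturbed tandem walk, i.e. the probability mass of {i = L1}.
def blocking_prob_perturbed(lam, mu1, mu2, L1, L2):
    r1, r2 = lam / mu1, lam / mu2
    m = [[r1 ** i * r2 ** j for j in range(L2 + 1)] for i in range(L1 + 1)]
    total = sum(sum(row) for row in m)                    # 1 / alpha
    return sum(m[L1][j] for j in range(L2 + 1)) / total   # mass of {i = L1}

# Example 1 parameters: lam = 0.1, mu1 = mu2 = 0.2.
print(blocking_prob_perturbed(0.1, 0.2, 0.2, L1=5, L2=5))
\end{verbatim}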
In addition to the improved bounds, there is another advantage to our method. There is a limitation to the model modification approach that is used in~\cite{vandijk88tandem}. This method requires a different model modification for each specific performance measure. For instance, the specific model modifications which are used to find error bounds for the blocking probability of a tandem queue with finite buffers in~\cite{vandijk88tandem} cannot be used to obtain error bounds for the average number of jobs in the first node. In addition, extra effort is needed to verify that the model modifications are indeed valid for a specific performance measure. In the next section, we will show that our method can easily provide error bounds for other performance measures without extra effort.
\subsection{Bounds for other performance measures}
In this section, we will demonstrate the error bounds for other performance measures for Example~\ref{ex:oneE}, {\it i.e.},\ a tandem queue with finite buffers.
Let $\mathcal{F}_1$ denote the average number of jobs at node $1$ and $\mathcal{F}_2$ the average number of jobs at node $2$.
In general, the models ({\it i.e.},\ the perturbed systems) used to bound the blocking probability in~\cite{vandijk88tandem} cannot be used to bound $\mathcal{F}_1$ and $\mathcal{F}_2$. The method in~\cite{vandijk88tandem} requires different upper and lower bound models for different performance measures. Moreover, this method also requires effort to verify that they are indeed upper and lower bound models for the specific performance measure. Our approximation scheme does not have this disadvantage. For a different performance measure, we only need to change the coefficients $f_{k,d}$, where $k = 1,2,\cdots,9$ and $d =0, 1,2$, in $F(n)$, which is defined in~\eqref{eq:performancemeasureE}.
It can be readily verified that the performance measure $\mathcal{F}$ is $\mathcal{F}_1$ if and only if we assign the following values to the coefficients: $f_{1,1} = 1, f_{8,1} = 1, f_{9,1} = 1, f_{4,1} = 1, f_{3,1} = 1, f_{7,1} = 1$ and all others $0$. Figure~\ref{fig:example1f1E} presents the error bounds for $\mathcal{F}_1$.
Similarly, the performance measure $\mathcal{F}$ is $\mathcal{F}_2$ if and only if we assign the following values to the coefficients: $f_{2,2} = 1, f_{9,2} = 1, f_{4,2} = 1, f_{6,2} = 1, f_{3,2} = 1, f_{7,2} = 1$ and all others $0$. Figure~\ref{fig:example1f2E} presents the error bounds for $\mathcal{F}_2$.
\begin{figure}\label{fig:example1f1E}
\end{figure}
\begin{figure}\label{fig:example1f2E}
\end{figure}
The results show that tight bounds are achieved with our approximation scheme. Moreover, the only thing that needs to change for a different performance measure is the input function; no further model modifications are required. In the next section, we show that, again without model modifications, our approximation scheme also provides error bounds for performance measures of tandem queues with finite buffers in which one server becomes slower when the other node is idle or faster when the other node is saturated.
\subsection{Tandem queue with finite buffers and server slow-down/speed-up}
In this section, we consider two variants of the tandem queue with finite buffers. More specifically, we provide error bounds for the blocking probabilities when one server in the tandem queue with finite buffers becomes slower while the other node is idle, or faster while the other node is saturated.
\subsubsection{Tandem queue with finite buffers and server slow-down}
Tandem queues with server slow-down have been studied previously in, for instance,~\cite{miretskiy2011state,van2005tandem}. In the specific type of tandem queue with finite buffers and server slow-down considered there, the service speed of node $1$ is reduced as soon as the number of jobs at node $2$ reaches a pre-specified threshold, as a protection against frequent overflows.
We consider a different scenario with server slow-down. In our case, the service rate at node $2$ is reduced when node $1$ is idle. This models the practical situation in which, when node $1$ is idle, the load on node $2$ decreases and part of its service capacity can be shifted to other tasks. We therefore consider a two-node tandem queue with Poisson arrivals at rate $\lambda$. Both nodes have a single server. At most a finite number of jobs, say $L_1$ and $L_2$ jobs, can be present at nodes $1$ and $2$, respectively. An arriving job is rejected if node $1$ is saturated. The service times at the two nodes are exponentially distributed with parameters $\mu_1$ and $\mu_2$, respectively. While node $2$ is saturated, node $1$ stops serving. When node $1$ is not blocked, a job that completes service at node $1$ is instantly routed to node $2$. While node $1$ is idle, the service rate of node $2$ becomes $\tilde{\mu}_2$ where $\tilde{\mu}_2 < \mu_2$. All service times are independent. We also assume that the service discipline is first-in first-out.
The tandem queue with finite buffers and server slow-down can be represented by a continuous-time Markov process whose state space consists of the pairs $(i,j)$ where $i$ and $j$ are the numbers of jobs at node $1$ and node $2$, respectively. We assume without loss of generality that $\lambda + \mu_1 + \mu_2 \leq 1$ and uniformize this continuous-time Markov process with uniformization parameter $1$, which yields a discrete-time random walk. We denote this random walk by $R^{sd}_T$; all transition probabilities of $R^{sd}_T$, except those for the transitions from a state to itself, are illustrated in Figure~\ref{fig:rw2E}.
It can be readily verified that the random walk $\bar{R}_T$ as defined in Section~\ref{sec:perturbRWE} is a perturbed random walk of $R^{sd}_T$ as well, {\it i.e.},\ the transition probabilities in $\bar{R}_T$ only differ from those in $R^{sd}_T$ along the boundaries. We next consider a numerical example.
\begin{figure}\label{fig:rw2E}
\end{figure}
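As before, the uniformized transitions of $R^{sd}_T$ can be written down directly from the description; the following minimal sketch (not code from the paper) differs from the tandem walk only in the service rate of node $2$ when node $1$ is idle.
\begin{verbatim}
# Minimal sketch: uniformized one-step transitions of the slow-down walk.
def tandem_slowdown_transitions(i, j, lam, mu1, mu2, mu2_tilde, L1, L2):
    moves = []
    if i < L1:
        moves.append((lam, (i + 1, j)))
    if i > 0 and j < L2:
        moves.append((mu1, (i - 1, j + 1)))
    if j > 0:
        rate2 = mu2_tilde if i == 0 else mu2   # slow-down when node 1 is idle
        moves.append((rate2, (i, j - 1)))
    moves.append((1.0 - sum(p for p, _ in moves), (i, j)))
    return moves
\end{verbatim}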
\begin{example}[slow-down]{\label{ex:twoE}}
Consider a tandem queue with finite buffers and server slow-down with $\lambda = 0.1$, $\mu_1 = 0.2$, $\mu_2 = 0.2$ and $\tilde{\mu}_2 = 0.5\mu_2$.
\end{example}
The error bounds for the blocking probability of Example~\ref{ex:twoE} are illustrated in Figure~\ref{fig:example2f0E}.
\begin{figure}\label{fig:example2f0E}
\end{figure}
Notice that our approximation scheme is sufficiently general in the sense that error bounds for the performance measures of all tandem queues with server slow-down and blocking mentioned in the previous paragraphs can be obtained with it. There are no restrictions on the input random walk.
\subsubsection{Tandem queue with finite buffers and server speed-up}
It is also of great interest to consider a tandem queue with finite buffers and server speed-up.
We consider the following scenario with server speed-up: the service rate at node $2$ increases when node $1$ is saturated. This models the practical situation in which, when node $1$ is saturated, node $2$ works harder in order to clear jobs from the queueing system. We therefore consider a two-node tandem queue with Poisson arrivals at rate $\lambda$. Both nodes have a single server. At most a finite number of jobs, say $L_1$ and $L_2$ jobs, can be present at nodes $1$ and $2$, respectively. An arriving job is rejected if node $1$ is saturated. The service times at the two nodes are exponentially distributed with parameters $\mu_1$ and $\mu_2$, respectively. When node $2$ is saturated, node $1$ stops serving. When node $1$ is not blocked, a job that completes service at node $1$ is instantly routed to node $2$. When node $1$ is saturated, the service rate of node $2$ becomes $\bar{\mu}_2$ where $\bar{\mu}_2 > \mu_2$. All service times are independent. We also assume that the service discipline is first-in first-out.
The tandem queue with finite buffers and server speed-up can be represented by a continuous-time Markov process whose state space consists of the pairs $(i,j)$ where $i$ and $j$ are the numbers of jobs at node $1$ and node $2$, respectively. We assume without loss of generality that $\lambda + \mu_1 + \bar{\mu}_2 \leq 1$ and uniformize this continuous-time Markov process with uniformization parameter $1$, which yields a discrete-time random walk. We denote this random walk by $R^{su}_T$; all transition probabilities of $R^{su}_T$, except those for the transitions from a state to itself, are illustrated in Figure~\ref{fig:rw3E}.
\begin{figure}\label{fig:rw3E}
\end{figure}
Again, it can be readily verified that the random walk $\bar{R}_T$ as defined in Section~\ref{sec:perturbRWE} is a perturbed random walk of $R^{su}_T$ because only the transitions along the boundaries in $\bar{R}_T$ are different from those in $R^{su}_T$. We next consider the following numerical example.
\begin{example}[speed-up]{\label{ex:threeE}}
Consider a tandem queue with finite buffers and server speed-up with $\lambda = 0.1$, $\mu_1 = 0.2$, $\mu_2 = 0.2$ and $\bar{\mu}_2 = 1.2\mu_2$.
\end{example}
The error bounds for the blocking probability of Example~\ref{ex:threeE} can be found in Figure~\ref{fig:example3f0E}.
\begin{figure}\label{fig:example3f0E}
\end{figure}
In the next section, we extend our approximation scheme to two-dimensional random walks in which one dimension is finite and the other dimension is infinite.
\section{Two-node queue with finite buffers at one queue}{\label{sec:onequeue}}
The two-node queue with finite buffers at one queue is a queueing system with two servers, one of them having finite storage capacity. Without loss of generality, we assume that node $1$ has finite capacity. If a job arrives at node $1$ when it has no storage capacity left, then the job is lost. There is no restriction on the capacity of node $2$. In general, the two queues influence each other. In particular, the service rate at node $2$ depends on the number of jobs at node $1$. Again we model this queueing system as a two-dimensional random walk whose state space is finite in one dimension.
We consider a two-dimensional random walk $\tilde{R}$ on $\tilde{S}$ where
\begin{equation*}
\tilde{S} = \{0,1,2, \cdots, L_1\} \times \{0,1,2,3, \cdots\}.
\end{equation*}
Next, we introduce the modified approximation scheme which will be used to find the upper and lower bounds. Similar to the development of the approximation scheme for the random walk that is finite in both dimensions, we can partition the state space and construct the approximation scheme for the random walk $\tilde{R}$ on the state space $\tilde{S}$ based on the Markov reward approach. The procedure differs only in the definition of the components $C_1, C_2, \dots$. Therefore, we omit the details and present only the numerical results that have been obtained with this model.
\section{Application to the coupled-queue with processor sharing and finite buffers at one queue}{\label{sec:errorboundsSharing4E}}
In this section, we apply the approximation scheme to a coupled-queue with processor sharing and finite buffers at one queue. The coupled-processors problem has been studied extensively. In particular, Fayolle et al. reduce the problem of finding the generating function of the invariant measure to a Riemann-Hilbert problem in~\cite{fayolle1979two}. However, when the buffers are finite, the methods developed for a coupled-queue with infinite buffers are in general no longer valid.
\subsection{Model description}
Consider a two-node queue with Poisson arrivals at rate $\lambda_1$ for node $1$ and $\lambda_2$ for node $2$. Both nodes have a single server, at most $L_1$ jobs can be present at node $1$, and there is no restriction on the capacity of node $2$. When neither of the nodes is empty they evolve independently, but when one of the queues becomes empty the service rate at the other queue changes. An arriving job for node $1$ is rejected when node $1$ is saturated. The service times at the two nodes are exponentially distributed with parameters $\mu_1$ and $\mu_2$, respectively, when neither of the queues is empty. When node $1$ is empty, the service rate at node $2$ becomes $\tilde{\mu}_2$ where $\tilde{\mu}_2 > \mu_2$. When node $2$ is empty, the service rate at node $1$ becomes $\tilde{\mu}_1$ where $\tilde{\mu}_1 > \mu_1$. All service requirements are independent. We also assume that the service discipline is first-in first-out.
This coupled-queue with processor sharing and finite buffers at one queue can be represented by a continuous-time Markov process whose state space consists of the pairs $(i,j)$ where $i$ and $j$ are the numbers of jobs at node $1$ and node $2$, respectively. We assume without loss of generality that $\lambda_1 + \lambda_2 + \tilde{\mu}_1 + \tilde{\mu}_2 \leq 1$ and uniformize this continuous-time Markov process with uniformization parameter $1$, which yields a discrete-time random walk. We denote this random walk by $R_C$. All transition probabilities of $R_C$, except those for the transitions from a state to itself, are illustrated in Figure~\ref{fig:rw4E}.
\begin{figure}\label{fig:rw4E}
\end{figure}
\subsection{Perturbed random walk $\bar{R}_C$}
We now present a perturbed random walk $\bar{R}_C$ of $R_C$ such that the probability measure of $\bar{R}_C$ is of product-form and only the transitions along the boundaries of $\bar{R}_C$ differ from those of $R_C$.
It can be readily verified, by substitution into the global balance equations~\eqref{eq:balancePE} together with the normalization requirement, that the product-form measure
\begin{equation*}
\bar{m}(n) = \alpha \left(\frac{\lambda_1}{\mu_1}\right)^i \left(\frac{\lambda_2}{\mu_2}\right)^j \quad \text{where} \quad n = (i,j),
\end{equation*}
with normalizing constant $\alpha$ (which depends on $L_1$) is the probability measure of the perturbed random walk $\bar{R}_C$ depicted in Figure~\ref{fig:rw4PE}.
\begin{figure}\label{fig:rw4PE}
\end{figure}
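As in Section~\ref{sec:perturbRWE}, the verification is immediate for an interior state $(i,j)$ with $0<i<L_1$ and $j>0$, where both nodes evolve independently: the balance equation
\begin{equation*}
\bar{m}(i,j)\,(\lambda_1 + \lambda_2 + \mu_1 + \mu_2) = \lambda_1 \bar{m}(i-1,j) + \lambda_2 \bar{m}(i,j-1) + \mu_1 \bar{m}(i+1,j) + \mu_2 \bar{m}(i,j+1)
\end{equation*}
holds because substituting the product-form expression turns the right-hand side into $\bar{m}(i,j)(\mu_1 + \mu_2 + \lambda_1 + \lambda_2)$; the boundary components require the modified probabilities of $\bar{R}_C$.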
We next illustrate a numerical example of a coupled-queue with processor sharing and finite buffers at one queue.
\subsection{Numerical results}
\begin{example}{\label{ex:fourE}}
Consider a coupled-queue with finite buffers at one queue with $\lambda_1 = \lambda_2 = 0.15$, $\mu_1 = \mu_2 = 0.2$ and $\tilde{\mu}_1 = \tilde{\mu}_2 = 0.25$.
\end{example}
We approximate the average number of jobs at node $1$, which we denote by $F_1$. The upper and lower bounds of $F_1$, denoted by $F_{1}^{up}$ and $F_{1}^{low}$, can be found in Figure~\ref{fig:example_new_2}.
\begin{figure}\label{fig:example_new_2}
\end{figure}
We see from the results in Figure~\ref{fig:example_new_2} that our approximation scheme also extends to random walks that are finite in only one dimension. Moreover, note that as $L_1$, {\it i.e.},\ the size of the first dimension, increases, the upper and lower bounds approach a limit.
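The limiting behaviour can be made plausible with a small calculation on the perturbed walk (a sketch, assuming $\lambda_2<\mu_2$ so that the measure is normalizable; this is the point estimate under $\bar{m}$, not the LP bounds themselves): the geometric factor in $j$ cancels, so the average number of jobs at node $1$ under $\bar{m}$ is the mean of a geometric distribution truncated at $L_1$.
\begin{verbatim}
# Minimal sketch: average number of jobs at node 1 under the product-form
# measure of the perturbed coupled queue (Example 4 parameters).
def mean_jobs_node1_perturbed(lam1, mu1, L1):
    r1 = lam1 / mu1
    w = [r1 ** i for i in range(L1 + 1)]
    return sum(i * wi for i, wi in enumerate(w)) / sum(w)

for L1 in (10, 20, 50, 100, 200):
    print(L1, mean_jobs_node1_perturbed(0.15, 0.2, L1))
# As L1 grows, the values approach r1 / (1 - r1) = 3.
\end{verbatim}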
In the next numerical example, we fix the service rates and present the error bounds for the corresponding performance measure as the occupation rate, {\it i.e.},\ $\rho = \frac{\lambda}{\mu}$, increases, even close to $1$.
\begin{example}{\label{ex:fiveE}}
Consider a coupled-queue with finite buffers at one queue with $\mu_1 = \mu_2 = 0.2$, $\tilde{\mu}_1 = \tilde{\mu}_2 = 0.25$ and $L_1=20$. Let $\rho$ vary from $0.5$ to $0.95$.
\end{example}
\begin{figure}\label{fig:example_new_3}
\end{figure}
We see from Figure~\ref{fig:example_new_3} that the error bounds are quite tight as well.
Next, we present several examples for the blocking probability, which is again denoted by $F_0$, based on Example~\ref{ex:fiveE}, in which the buffer size in the first dimension increases from $20$ to $10000$.
\begin{example}{\label{ex:sixE}}
Consider a coupled-queue with finite buffers at one queue with $\mu_1 = \mu_2 = 0.2$, $\tilde{\mu}_1 = \tilde{\mu}_2 = 0.25$ and $L_1 = 20$, where the occupation rate increases from $0.5$ to $0.95$.
\end{example}
The bounds for the blocking probabilities are very close in this case; hence, we use a logarithmic $y$ axis in Figure~\ref{fig:blocking_pr_rho_L20_log} and also in the following examples.
\begin{figure}\label{fig:blocking_pr_rho_L20_log}
\end{figure}
\begin{example}{\label{ex:sevenE}}
Consider a coupled-queue with finite buffers at one queue with $\mu_1 = \mu_2 = 0.2$, $\tilde{\mu}_1 = \tilde{\mu}_2 = 0.25$ and $L_1 = 500$, where the occupation rate increases from $0.98$ to $0.99$.
\end{example}
\begin{figure}\label{fig:blocking_pr_rho_L500_loglog}
\end{figure}
Next, we also extend these numerical results to the case when $L_1 = 10000$.
\begin{example}{\label{ex:eightE}}
Consider a coupled-queue with finite buffers at one queue with $\mu_1 = \mu_2 = 0.2$, $\tilde{\mu}_1 = \tilde{\mu}_2 = 0.25$ and $L_1 = 10000$, where the occupation rate increases from $0.98$ to $0.99$.
\end{example}
\begin{figure}\label{fig:blocking_pr_rho_L10000_loglog}
\end{figure}
We see from the above examples that relatively tight bounds are obtained efficiently with our approach. As discussed in the introduction, the matrix geometric method has cubic complexity in $L_1$, whereas the complexity of our approach does not depend on $L_1$.
\section{Conclusion}{\label{sec:conclusionE}}
In this paper, we presented a general approximation scheme for a two-node queue with finite buffers at either one or both queues, which establishes error bounds for a large class of performance measures. Our work is an extension of the linear programming approach developed in~\cite{goseling2014linear} to approximate performance measures of random walks in the quarter-plane.
We first developed an approximation scheme for a two-node queue with finite buffers at both queues. We then applied this approximation scheme to obtain bounds for performance measures of a tandem queue in which both buffers are finite and of some variants of this model. We also extended the approximation scheme to deal with a two-node queue with finite buffers at only one queue and applied it to a coupled-queue with finite buffers at one queue. The approximation scheme gives tight bounds for various performance measures, such as the blocking probability and the average number of jobs at node $1$. We also obtained error bounds for the blocking probabilities when the buffer size in one dimension is very large.
To summarize, the complexity of solving a system of linear equations is at least $O(L_1^2)$ and the variants of the matrix geometric method have a complexity of $O(L_1^3)$. Therefore, when $L_1$ is large, our approach, whose complexity is constant in $L_1$, is a promising alternative for approximating the invariant measure and the associated performance measures.
\section{Acknowledgment}
Yanting Chen acknowledges support through the NSFC grant 71701066, the Fundamental Research Funds for the Central Universities and a CSC scholarship [No. 2008613008]. Xinwei Bai acknowledges support through a CSC scholarship [No. 201407720012]. This work is partly supported by the Netherlands Organization for Scientific Research (NWO) grant $612.001.107$.
\end{document}
\begin{document}
\title{\Large \bf A Kernel Method for Exact Tail Asymptotics \\ --- Random Walks in the Quarter Plane}
\begin{abstract}
In this paper, we propose a kernel method for exact tail
asymptotics of random walks in the quarter
plane. This is a two-dimensional method, which does not require a
determination of the unknown generating function(s). Instead, in
terms of the asymptotic analysis and a Tauberian-like theorem, we
show that the information about the location of the dominant
singularity or singularities and the detailed asymptotic property
at a dominant singularity is sufficient for the exact tail
asymptotic behaviour for the marginal distributions and also for
joint probabilities along a coordinate direction. We provide all
details, not only for a ``typical'' case, the case with a single
dominant singularity for an unknown generating function, but also
for all non-typical cases which have not been studied before. A
total of four types of exact tail asymptotics are found for the
typical case, which have been reported in the literature. We also
show that on the circle of convergence, an unknown generating
function could have two dominant singularities instead of one,
which can lead to a new periodic phenomenon. Examples are
illustrated by using this kernel method. This paper can be
considered as a systematic summary and extension of existing
ideas, which also contains new and interesting research results.
\noindent \textbf{Keywords:} random walks in the quarter plane;
stationary distribution; generating function; kernel methods;
singularity analysis; exact tail asymptotics; light tail
\end{abstract}
\section{Introduction}
Two-dimensional discrete random walks in the quarter plane are
classical models that can be either probabilistic or
combinatorial. Studying these models is important and often
fundamental for both theoretical and applied purposes. For a
stable probabilistic model, it is of significant interest to study
its stationary probabilities. However, a closed-form solution for the
stationary probability distribution is available only for very limited
special cases. This adds value to studying
tail asymptotic properties in stationary probabilities, since
performance bounds and approximations can often be developed from
the tail asymptotic property. The focus of this paper is to
characterize exact tail asymptotics. Specifically, we propose a
kernel method to systematically study the exact tail behaviour for
the stationary probability distribution of the random walk in the
quarter plane.
The kernel method proposed here is an extension of the classical
kernel method, first introduced by Knuth~\cite{Knuth:69} and later
developed as the kernel method by Banderier \textit{et
al.}~\cite{B-BM-D-F-G-GB:02}. The standard kernel method deals
with the case of a functional equation of the fundamental form
$K(x, y)F(x, y) = A(x, y)G(x) + B(x, y)$, where $F(x, y)$ and
$G(x)$ are unknown functions. The key idea in the kernel method is
to find a branch $y=y_0(x)$, such that, at $(x,y_0(x))$, the
kernel function is zero, or $K(x, y_0(x))=0$. When analytically
substituting this branch into the right hand side of the
fundamental form, we then have $G(x)=-B(x, y_0(x))/A(x, y_0(x))$,
and hence,
\[
F(x, y) = \frac{-A(x, y)B(x,y_0(x))/A(x, y_0(x)) + B(x, y)}{K(x, y)}.
\]
However, applying the above idea to the fundamental form of a
two-dimensional random walk does not immediately lead to a
determination of the generating function $P(x,y)$. Instead, it
provides a relationship between two unknown generating functions
$\pi_1(x)$ and $\pi_2(y)$, referred to as the generating functions
for the boundary probabilities. This is the key challenge in the
analysis of using a kernel method. Therefore, a good understanding
on the interlace of these two functions is crucial.
Following the early research by Malyshev~\cite{Malyshev:72,
Malyshev:73}, the algebraic method aimed at expressing the
unknown generating functions was further systematically developed in
Fayolle, Iasnogorodski and Malyshev~\cite{FIM:1999} based on the
study of the kernel equation. The authors indicated in their book
that: ``Even if asymptotic problems were not mentioned in this
book, they have many applications and are mostly interesting for
higher dimensions.'' The proposed kernel method in this paper is a
continuation of the study in \cite{FIM:1999}. Research on tail
asymptotics for various models following the method (determination
of the unknown generating function(s) first) of \cite{FIM:1999} or
other closely related methods can be found in
Flatto and McKean~\cite{Flatto-McKeqn:77},
Fayolle and Iasnogorodski~\cite{FI:1979},
Fayolle, King and Mitrani~\cite{FKM:82},
Cohen and Boxma~\cite{cohen-boxma:83},
Flatto and Hahn~\cite{Flatto-Hahn:84},
Flatto~\cite{Flatto:85},
Wright~\cite{Wright:92},
Kurkova and Suhov~\cite{Kurkova-Suhov:03},
Bousquet-Melou~\cite{Bousquet-Melou:05},
Morrison~\cite{Morrison:07},
Li and Zhao~\cite{Li-Zhao:09, Li-Zhao:10},
Guillemin and Leeuwaarden~\cite{Guillemin-Leeuwaarden:09},
and Li, Tavakoli and Zhao~\cite{Li-Tavakoli-Zhao:11}.
Different from the work mentioned above, which requires
characterizing or expressing the unknown generating function, such
as a closed-form solution or an integral expression through
boundary value problems, the proposed kernel method only requires
the information about the dominant singularities of the unknown
function, including the location and detailed asymptotic property
at the dominant singularities. Because of this, the method makes
it possible to systematically deal with all random walks instead
of a model based treatment. In a recent research, Li and
Zhao~\cite{Li-Zhao:10b} applied this method to a specific model,
and Li, Tavakoli and Zhao~\cite{Li-Tavakoli-Zhao:11} to the
singular random walks. For exact tail asymptotics without a
determination of the unknown generating function(s) or Laplace
transformation function(s), different methods were used in the
following studies:
Abate and Whitt~\cite{Abate-Whitt:97},
Lieshout and Mandjes~\cite{LM:2008},
Miyazawa and Rolski~\cite{Miyazawa-Rolski:09},
Dai and Miyazawa~\cite{Dai-Miyazawa:10}.
Other methods for studying two-dimensional problems, including
exact tail asymptotics, also exist, for example, based on large
deviations, on properties of the Markov additive process (including matrix-analytic methods), or on
asymptotic properties of the Green functions. References include
Borovkov and Mogul'skii~\cite{Borovkov-M:01},
McDonald~\cite{McDonald:99},
Foley and McDonald~\cite{Foley-McDonald:01,Foley-McDonald:05a,Foley-McDonald:05b},
Khanchi~\cite{Khanchi:08,Khanchi:09},
Adan, Foley and McDonald~\cite{Adan-Foley-McDonald:09},
Raschel~\cite{Raschel:10},
Miyazawa~\cite{Miyazawa:07,Miyazawa:09,Miyazawa:08},
Kobayashi and Miyazawa~\cite{Kobayashi-Miyazawa:2011},
Takahashi, Fujimoto and Makimoto~\cite{TFM:01},
Haque~\cite{Haque:03},
Miyazawa~\cite{Miyazawa:04},
Miyazawa and Zhao~\cite{Miyazawa-Zhao:04},
Kroese, Scheinhardt and Taylor~\cite{KST:04},
Haque, Liu and Zhao~\cite{Haque-Liu-Zhao:05},
Li and Zhao~\cite{Li-Zhao:05},
Motyer and Taylor~\cite{Motyer-Taylor:06},
Li, Miyazawa and Zhao~\cite{Li-Miyazawa-Zhao:07},
He, Li and Zhao~\cite{He-Li-Zhao:08},
Liu, Miyazawa and Zhao~\cite{Liu-Miyazawa-Zhao:08},
Tang and Zhao~\cite{Tang-Zhao:08},
Kobayashi, Miyazawa and Zhao~\cite{Kobayashi-Miyazawa-Zhao:10}, among others.
For more references, people may refer to a recent survey on tail
asymptotics of multi-dimensional reflecting processes for queueing
networks by Miyazawa~\cite{Miyazawa:11}.
The main focus of this paper is to propose a kernel method for
exact tail asymptotics of random walks in the quarter plane
following the ideas in \cite{FIM:1999}, based on which a complete
description of the exact tail asymptotics for stationary
probabilities of a non-singular genus 1 random walk is obtained.
We claim that the unknown generating function $\pi_1(x)$, or equivalently, $\pi_2(y)$, has either one or two dominant singularities.
For the case of either one dominant singularity, or two dominant singularities with different asymptotic properties,
a total of four types of exact tail asymptotics
exists: (1) exact geometric decay; (2) a geometric decay
multiplied by a factor of $n^{-1/2}$; (3) a geometric decay
multiplied by a factor of $n^{-3/2}$; and (4) a geometric decay
multiplied by a factor of $n$. These results are essentially not
new (for examples see references \cite{Borovkov-M:01,
Foley-McDonald:01, Foley-McDonald:05b, Miyazawa:09, Khanchi:09})
except that the fourth type is missing from previous studies for
the discrete random walk, but was reported for the continuous
random walk in \cite{Dai-Miyazawa:10}. For the case of two
dominant singularities with the same asymptotic property, a new
periodic phenomenon in the tail asymptotic property is discovered,
which has not been reported in previous literature. For the tail
asymptotic behaviour of the non-boundary joint probabilities along
a coordinate direction, a new method based on recursive
relationships of probability generating functions will be applied,
which is an extension of the idea used in \cite{Li-Zhao:10b}.
For an unknown generating function of probabilities, a
Tauberian-like theorem is used as a bridge to link the asymptotic
property of the function at its dominant singularities to the tail
asymptotic property of its coefficient, or in our case, stationary
probabilities. This theorem does not require monotonicity of
the probabilities, which is required by a standard Tauberian
theorem and cannot be verified in general, or Heaviside
operational calculus, which is usually very difficult to make
rigorous. However, the price paid for applying the Tauberian-like
theorem is that it requires more in terms of analyticity of the function and detailed
information about all dominant singularities, or singularities on
the circle of convergence. Therefore we need to provide
information about how many singularities exist on the circle of
convergence and their detailed properties, such as the nature of
the singularity and the multiplicity in the case of the pole, for
the random walk. It is not always true that only one singularity
exists on the circle of convergence. Technical details are needed
to address these issues.
The kernel method immediately leads to exact tail asymptotics in
the boundary probabilities, in both directions, based on which
exact tail asymptotics in a marginal distribution will become
clear. However, it does not directly lead to exact tail asymptotic
properties for the joint probabilities along a coordinate
direction, except for the boundary probabilities as mentioned
above. Therefore, further efforts are required. In this paper, we
propose a method, based on difference equations of the unknown
generating functions, to do the asymptotic analysis, which
successfully overcomes the hurdle for exact tail asymptotics for
joint probabilities.
The rest of the paper is organized into eight sections. In
Section~2, after the model description, the so-called fundamental
form for the random walk in the quarter plane is provided,
together with a stability condition. Section~3 contains necessary
properties for the two branches (or an algebraic function) defined
by the kernel equation and for the branch points of the branches.
These properties are either directly from \cite{FIM:1999} or
further refinements. Section~4 consists of six subsections for the
purpose of characterizing the asymptotic properties of the unknown
generating functions $\pi_1(x)$ and $\pi_2(y)$ at their dominant
singularities. Specifically, two Tauberian-like theorems are
introduced in subsection~1; the interlace between the two unknown
generating functions is discussed in subsection~2, which plays a
key role in the proposed kernel method; detailed properties for
singularities of the unknown generating functions are obtained in
subsections~3--5, which finally lead to the main theorem
(Theorem~\ref{theorem3.1}) in this section provided in the last
subsection. In Section~5, asymptotic analysis for the boundary
generating functions is carried out, which directly leads to the
tail asymptotics for the boundary probabilities in terms of the
Tauberian-like theorem. In Section~6, based on the asymptotic
results obtained for the generating function of boundary
probabilities in the previous section, and the fundamental form,
exact tail asymptotic properties for the two marginal
distributions are provided.
Exact tail asymptotic
properties for joint probabilities along a coordinate direction are
addressed in Section~7, which is not a direct result from the
kernel method. Instead, we propose a difference equation method to
carry out an asymptotic analysis of a sequence of unknown
generating functions. The last section contributes to some
concluding remarks and two examples by applying the kernel method.
\section{Description of the Random Walk}
The random walk in the quarter plane used in this paper to
demonstrate the kernel method is a reflected random walk or a
Markov chain with the state space $\mathbb{Z}_+^2=\{(m,n); m, n
\text{ are non-negative integers }\}$. To describe this process,
we divide the whole quadrant $\mathbb{Z}_+^2$ into four regions:
the interior $S_+ = \{(m,n); m, n =1, 2, \ldots \}$, horizontal
boundary $S_1 = \{(m,0); m =1, 2, \ldots \}$, vertical boundary
$S_2 = \{(0,n); n =1, 2, \ldots \}$, and the origin $S_0=
\{(0,0)\}$, or $\mathbb{Z}_+^2= S_+ \cup S_1 \cup S_2 \cup S_0$.
In each of these regions, the transition is homogeneous.
Specifically, let $X_+$, $X_1$, $X_2$ and $X_0$ be random
variables having the distributions, respectively, $p_{i,j}$ with
$i, j =0, \pm 1$; $p^{(1)}_{i,j}$ with $i =0, \pm 1$ and $j =0,
1$; $p^{(2)}_{i,j}$ with $i =0, 1$ and $j =0, \pm 1$; and
$p^{(0)}_{i,j}$ with $i, j =0, 1$. Then, the transition
probabilities of the random walk (Markov chain)
$L_t=(L_1(t),L_2(t))$ are given by
\begin{align*}
P\{L_{t+1}= (m_2,n_2) &| L_t = (m_1,n_1) \} = \\
& \left \{ \begin{array}{ll}
P(X_+ = (m_2-m_1,n_2-n_1)), & \text{if } (m_2,n_2) \in S, (m_1,n_1) \in S_+, \\
P(X_k = (m_2-m_1,n_2-n_1)), & \text{if } (m_2,n_2) \in S,(m_1,n_1) \in S_k \text{ with } k=0, 1, 2.
\end{array} \right.
\end{align*}
\subsection{Ergodicity conditions}
A stability (ergodic) condition can be found in Theorem~3.3.1 of Fayolle,
Iasnogorodski and Malyshev~\cite{FIM:1999}, which has been amended
by Kobayashi and Miyazawa as Lemma~2.1 in
\cite{Kobayashi-Miyazawa:2011}.
This condition is stated in terms of the drift vectors defined by
\begin{eqnarray*}
M &=&(M_{x}, M_{y})= \biggl ( \sum_i i \Bigl ( \sum_j p_{i,j} \Bigr ), \sum_j j \Bigl ( \sum_i p_{i,j} \Bigr ) \biggr ), \\
M^{(1)} &=& (M_{x}^{(1)}, M_{y}^{(1)})=\biggl ( \sum_i i \Bigl ( \sum_j p^{(1)}_{i,j} \Bigr ), \sum_j j \Bigl ( \sum_i p^{(1)}_{i,j} \Bigr )\biggr ), \\
M^{(2)} &=& (M_{x}^{(2)}, M_{y}^{(2)})=\biggl ( \sum_i i \Bigl ( \sum_j p^{(2)}_{i,j} \Bigr ), \sum_j j \Bigl ( \sum_i p^{(2)}_{i,j} \Bigr )\biggr ).
\end{eqnarray*}
\begin{theorem}[Theorem~3.3.1 in \cite{FIM:1999} and Lemma~2.1 in
\cite{Kobayashi-Miyazawa:2011}] \label{theoremergodicity}
When $M\neq 0$, the random walk is
ergodic if and only if one of the following three conditions
holds:
\textbf{1.} $M_{x}<0$, $M_{y}<0$,
$M_{x}M_{y}^{(1)}-M_{y}M_{x}^{(1)}<0$ and
$M_{y}M_{x}^{(2)}-M_{x}M_{y}^{(2)}<0$;
\textbf{2.} $M_{x}<0$, $M_{y}\geq 0$,
$M_{y}M_{x}^{(2)}-M_{x}M_{y}^{(2)}<0$ and $M_{x}^{(1)}<0$ if
$M_{y}^{(1)}=0$;
\textbf{3.} $M_{x}\geq 0$, $M_{y}<0$,
$M_{x}M_{y}^{(1)}-M_{y}M_{x}^{(1)}<0$ and $M_{y}^{(2)}<0$ if
$M_{x}^{(2)}=0$.
\end{theorem}
Throughout the paper, we make the following assumption, unless
otherwise specified:
\begin{assumption} \label{assumption-1}
The random walk $L_t$ is irreducible, positive recurrent and aperiodic.
\end{assumption}
Under Assumption~\ref{assumption-1}, let $\pi_{m,n}$ be the unique stationary probability distribution
of the random walk.
\begin{remark}
It should be noted that for a stable random walk, the condition $M
\neq 0$ is equivalent to both sequences $\{\pi_{m,0}\}$ and
$\{\pi_{0,n}\}$ being light-tailed (for example, see Lemma~3.3 of
\cite{Kobayashi-Miyazawa:2011}); the heavy-tailed case is not the focus of this
paper. Therefore, Theorem~\ref{theoremergodicity} provides a
necessary and sufficient stability condition for the light-tailed
case.
\end{remark}
\subsection{Fundamental Form}
Define the following generating functions of the probability sequences for the interior states, horizontal boundary states and
vertical boundary states, respectively,
\begin{eqnarray*}
\pi(x,y) &=& \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \pi_{m,n}x^{m-1}y^{n-1}, \\
\pi_1(x) &=&\sum_{m=1}^{\infty} \pi_{m,0}x^{m-1}, \\
\pi_2(y) &=&\sum_{n=1}^{\infty} \pi_{0,n}y^{n-1}.
\end{eqnarray*}
The so-called fundamental form of the random walk provides a functional equation relating the three unknown generating functions $\pi(x,y)$, $\pi_1(x)$ and
$\pi_2(y)$. To state the fundamental form, we define
\begin{eqnarray*}
h(x,y) &=&xy\left( \sum_{i=-1}^{1}\sum_{j=-1}^{1}p_{i,j}x^{i}y^{j}-1\right) \\
&=&a(x)y^{2}+b(x)y+c(x)=\tilde{a}(y)x^{2}+\tilde{b}(y)x+\tilde{c}(y), \\
h_{1}(x,y) &=&x\left(\sum_{i=-1}^{1}\sum_{j=0}^{1}p_{i,j}^{(1)}x^{i}y^{j}-1\right) \\
&=&a_{1}(x)y+b_{1}(x)=\widetilde{a}_{1}(y)x^{2}+\widetilde{b}_{1}(y)x+\widetilde{c}_{1}(y), \\
h_{2}(x,y) &=&y\left(\sum_{i=0}^{1}\sum_{j=-1}^{1}p_{i,j}^{(2)}x^{i}y^{j}-1\right) \\
&=&\widetilde{a}_{2}(y)x+\widetilde{b}_{2}(y)=a_{2}(x)y^{2}+b_{2}(x)y+c_{2}(x), \\
h_{0}(x,y) &=&\left(\sum_{i=0}^{1}\sum_{j=0}^{1}p_{i,j}^{(0)}x^{i}y^{j}-1\right) \\
&=&a_{0}(x)y+b_{0}(x)=\widetilde{a}_{0}(y)x+\widetilde{b}_{0}(y),
\end{eqnarray*}
where
\begin{eqnarray*}
a(x) &=&p_{-1,1}+p_{0,1}x+p_{1,1}x^{2}, \\
b(x) &=&p_{-1,0}-(1-p_{0,0})x+p_{1,0}x^{2}, \\
c(x) &=&p_{-1,-1}+p_{0,-1}x+p_{1,-1}x^{2}, \\
\tilde{a}(y) &=&p_{1,-1}+p_{1,0}y+p_{1,1}y^{2}, \\
\tilde{b}(y) &=&p_{0,-1}-(1-p_{0,0})y+p_{0,1}y^{2}, \\
\tilde{c}(y) &=&p_{-1,-1}+p_{-1,0}y+p_{-1,1}y^{2},
\end{eqnarray*}
\begin{eqnarray*}
a_{1}(x) &=&p_{-1,1}^{(1)}+p_{0,1}^{(1)}x+p_{1,1}^{(1)}x^{2},
b_{1}(x)=p_{-1,0}^{(1)}-\left( 1-p_{0,0}^{(1)}\right) x+p_{1,0}^{(1)}x^{2},\\
\widetilde{a}_{1}(y) &=&p_{1,0}^{(1)}+p_{1,1}^{(1)}y,
\widetilde{b}_{1}(y)=p_{0,0}^{(1)}-1+p_{0,1}^{(1)}y, \widetilde{c}_{1}(y)=p_{-1,0}^{(1)}+p_{-1,1}^{(1)}y \\
a_{2}(x) &=&p_{0,1}^{(2)} + p_{1,1}^{(2)}x,
b_{2}(x)=p_{0,0}^{(2)}-1+p_{1,0}^{(2)}x,
c_{2}(x)=p_{0,-1}^{(2)}+p_{1,-1}^{(2)}x \\
\widetilde{a}_{2}(y) &=&p_{1,-1}^{(2)}+p_{1,0}^{(2)}y+p_{1,1}^{(2)}y^{2},
\widetilde{b}_{2}(y)=p_{0,-1}^{(2)}-\left( 1-p_{0,0}^{(2)}\right) y+p_{0,1}^{(2)}y^{2}, \\
a_{0}(x) &=&p_{0,1}^{(0)}+p_{1,1}^{(0)}x,b_{0}(x)=p_{1,0}^{(0)}x-\left( 1-p_{0,0}^{(0)}\right) , \\
\widetilde{a}_{0}(y) &=&p_{1,0}^{(0)}+p_{1,1}^{(0)}y,\widetilde{b}_{0}(y)=p_{0,1}^{(0)}y-\left( 1-p_{0,0}^{(0)}\right) .
\end{eqnarray*}
The basic equation of the generating function of the joint
distribution, or the fundamental form of the random walk, is given
by
\begin{equation} \label{eqn:fundamental}
-h(x,y)\pi(x,y)=h_{1}(x,y)\pi_1(x)+h_{2}(x,y)\pi_2(y)+h_{0}(x,y)\pi_{0,0}.
\end{equation}
The reason for the above functional equation to be called
fundamental is largely due to the fact that through analysis of
this equation, the unknown generating functions can be determined
or expressed, for example, through algebraic methods and boundary
value problems as illustrated in Fayolle, Iasnogorodski and
Malyshev~\cite{FIM:1999}. The kernel method presented here also
starts with the fundamental form, but without expressing
generating functions first.
\begin{remark}
The generating function $\pi(x,y)$ is defined for $m,n>0$, excluding the boundary probabilities.
(\ref{eqn:fundamental}) was proved in (1.3.6) in \cite{FIM:1999}. Based on (\ref{eqn:fundamental}), one can also obtain a similar fundamental form using generating functions including boundary
probabilities: $\Pi(x,y)=\sum_{m=0}^\infty \sum_{n=0}^\infty \pi_{m,n} x^m y^n$, $\Pi_1(x)=\sum_{m=0}^\infty \pi_{m,0} x^m$ and
$\Pi_2(y)=\sum_{n=0}^\infty \pi_{0,n} y^n$.
\end{remark}
To conclude this section, we can easily check the
following expressions, some of which will be needed in later
sections:
\begin{align}
M_{y} &= a(1)-c(1) = \widetilde{a}^{\prime }(1) + \widetilde{b}^{\prime}(1) + \widetilde{c}^{\prime}(1), &
M_{x} & = \widetilde{a}(1) - \widetilde{c}(1) = a^{\prime}(1)+b^{\prime}(1)+c^{\prime}(1), \label{eqn:MxMy} \\
M_{y}^{(1)} &= a_{1}(1) = \widetilde{a}^{\prime}_1(1) + \widetilde{b}^{\prime}_1(1) + \widetilde{c}^{\prime}_1(1), &
M_{x}^{(1)} & = \widetilde{a}_1(1) - \widetilde{c}_1(1) = a_{1}^{\prime}(1) + b_{1}^{\prime}(1), \\
M_{y}^{(2)} &= a_{2}(1) - c_{2}(1) = \widetilde{a}^{\prime}_2(1) + \widetilde{b}^{\prime}_2(1), &
M_{x}^{(2)} &= \widetilde{a}_2(1) = a_{2}^{\prime}(1) + b_{2}^{\prime}(1) + c_{2}^{\prime}(1).
\end{align}
\section{Branch Points And Functions Defined by the Kernel Equation}
The property of the random walk relies on the property of the
kernel function $h$ and functions $h_1$ and $h_2$. The kernel
function plays a key role in the kernel method.
\begin{definition}
A random walk is called non-singular if the kernel function
$h(x,y)$, as a polynomial in the two variables $x$ and $y$, is
irreducible (equivalently, if $h=fg$ then either $f$ or $g$ is a
constant) and quadratic in both variables.
\end{definition}
Throughout the paper unless otherwise specified, we make the
second assumption below.
\begin{assumption} \label{assumption-2}
The random walk considered is non-singular.
\end{assumption}
The non-singular condition for a random walk is closely related to
the irreducibility of the marginal processes $L_1(t)$ and
$L_2(t)$, but they are not the same concept. A necessary and
sufficient condition for a random walk to be singular is given, in
terms of $p_{i,j}$, in Lemma~2.3.2 in \cite{FIM:1999}. Study on
tail asymptotics for a singular random walk is either easier or
similar to the non-singular case, which can be found in Li,
Tavakoli and Zhao~\cite{Li-Tavakoli-Zhao:11}.
The starting point of our analysis is the set of all pairs $(x,y)$
that satisfy the kernel equation, or
\[
B =\{(x,y) \in \mathbb{C}^2: h(x,y)=0\},
\]
where $\mathbb{C}$ is the set of all complex numbers. The kernel
function can be considered as a quadratic form in either $x$ or
$y$ with the coefficients being functions of $y$ or $x$,
respectively. Therefore, the kernel equation can be written as
\begin{equation} \label{eqn:algebraic-functions}
a(x)y^{2}+b(x)y+c(x)=\widetilde{a}(y)x^{2}+\widetilde{b}(y)x+\widetilde{c}(y)=0.
\end{equation}
For a fixed $x$, the two solutions to the kernel equation as a quadratic form in $y$ are given by
\begin{equation*}
Y_{\pm }(x)=\frac{-b(x)\pm \sqrt{D_{1}(x)}}{2a(x)}
\end{equation*}
if $a(x) \neq 0$, where $D_{1}(x)=b^{2}(x)-4a(x)c(x)$. Notice that
non-singularity implies that $a(x) \not\equiv 0$ and, therefore,
at most two values of $x$ can lead to $a(x)=0$ since $a(x)$
is a polynomial of degree up to 2.
Similarly, for a fixed $y$, the two solutions to the kernel equation as a quadratic form in $x$ are given by
\begin{equation*}
X_{\pm }(y)=\frac{-\widetilde{b}(y)\pm \sqrt{D_{2}(y)}}{2\widetilde{a}(y)},
\end{equation*}
where $D_{2}(y)=\widetilde{b}^{2}(y)-4\widetilde{a}(y)\widetilde{c}(y)$.
It is important to study the set $B$, or equivalently $Y_{\pm}(x)$
or $X_{\pm}(y)$, since for all $(x,y) \in B$ with $|\pi(x,y)|
<\infty$, the right hand side of the fundamental form is also
zero, which provides a relationship between the two unknown
generating functions $\pi_1$ and $\pi_2$. In the above,
$\sqrt{D_1(x)}$ is well-defined if $D_1(x)\geq 0$ and similarly
$\sqrt{D_2(y)}$ is well-defined if $D_2(y)\geq 0$. As a function
of a complex variable, the square root is a two-valued function.
To specify a branch, when $z$ is complex, $\sqrt{z}$ is defined
such that $\sqrt{1}=1$.
Let $z = D_1(x)$. Then, both $Y_{-}(x)$ and $Y_{+}(x)$ are
analytic as long as $z \notin (-\infty, 0]$ and $a(x) \neq 0$. For
these two functions, we start from a region, in which they are
analytic, and consider an analytic continuation of these two
functions. In this consideration, the key is the continuation of
$\sqrt{D_1(x)}$.
\begin{definition} A branch point of $Y_{\pm}(x)$ ($X_{\pm}(y)$) is a value of $x$ ($y$) such that $D_1(x)=0$ ($D_2(y)=0$).
\end{definition}
To discuss the branch points, notice that
the discriminant $D_1$ ($D_2$) is a polynomial of degree up to four. Since the two cases are symmetric, we discuss $D_1(x)$ in detail only.
Rewrite $D_1(x)$ as
\begin{equation*}
D_{1}(x)=d_{4}x^{4}+d_{3}x^{3}+d_{2}x^{2}+d_{1}x+d_{0},
\end{equation*}
where
\begin{align*}
d_{0} &= p_{-1,0}^{2}-4p_{-1,1}p_{-1,-1}, \\
d_{1} &= 2p_{-1,0}(p_{0,0}-1)-4(p_{-1,1}p_{0,-1}+p_{0,1}p_{-1,-1}), \\
d_{2} &= (p_{0,0}-1)^{2}+2p_{1,0}p_{-1,0}-4(p_{1,1}p_{-1,-1}+p_{1,-1}p_{-1,1}+p_{0,1}p_{0,-1}), \\
d_{3} &= 2p_{1,0}(p_{0,0}-1)-4(p_{1,1}p_{0,-1}+p_{0,1}p_{1,-1}), \\
d_{4} &= p_{1,0}^{2}-4p_{1,1}p_{1,-1}.
\end{align*}
It can be easily checked that $d_1 \leq 0$ and $d_3 \leq 0$.
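As a check of these formulas on the illustrative walk introduced above (an assumed example), direct substitution gives
\begin{equation*}
d_{4}=\tfrac{1}{25},\qquad d_{3}=-\tfrac{2}{5},\qquad d_{2}=\tfrac{22}{25},\qquad d_{1}=-\tfrac{3}{5},\qquad d_{0}=\tfrac{9}{100},
\end{equation*}
so that $D_{1}(x)=\tfrac{1}{25}x^{4}-\tfrac{2}{5}x^{3}+\tfrac{22}{25}x^{2}-\tfrac{3}{5}x+\tfrac{9}{100}$, which agrees with expanding the expression for $D_{1}$ displayed in the illustration above; in particular, $d_{1}<0$ and $d_{3}<0$, as claimed.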
When $D_1$ is a polynomial of degree 4 (or $d_4 \neq 0$), there are four
branch points, denoted by $x_i$ ($y_i$), $i=1, 2, 3, 4$. Without
loss of generality, we assume that $|x_1| \leq |x_2| \leq |x_3|
\leq |x_4|$. When the
degree of $D_1(x)$ is $d<4$, for convenience, we let
$x_{d+k}=\infty$ for integer $k>0$ such that
$d+k \leq 4$. For example, if $d=3$, then $x_4=\infty$.
This can be justified as follows: consider
the polynomial $\widetilde{D}_1(\widetilde{x})=D_1(x)/x^4$ in
$\widetilde{x}$, where $\widetilde{x}=1/x$. Then,
$\widetilde{x}=0$ is a $(4-d)$-tuple zero of
$\widetilde{D}_1(\widetilde{x})$, and therefore $x=\infty$ can be
viewed as a $(4-d)$-tuple zero of $D_1(x)$.
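For a small (assumed) numerical instance of this convention, consider a walk with $p_{1,1}=p_{1,-1}=1/8$, $p_{1,0}=1/4$, $p_{0,-1}=1/4$ and $p_{-1,0}=1/4$. Then
\begin{equation*}
d_{4}=p_{1,0}^{2}-4p_{1,1}p_{1,-1}=\tfrac{1}{16}-\tfrac{1}{16}=0,
\qquad
d_{3}=2p_{1,0}(p_{0,0}-1)-4(p_{1,1}p_{0,-1}+p_{0,1}p_{1,-1})=-\tfrac{1}{2}-\tfrac{1}{8}=-\tfrac{5}{8}\neq 0,
\end{equation*}
so $D_{1}(x)$ has degree $d=3$ and $x_{4}=\infty$; this is precisely the situation $p_{1,0}=2\sqrt{p_{1,1}p_{1,-1}}$ of case (2) in Lemma~\ref{lemma1.1} below.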
The following lemma characterizes the branch points of
$Y_{\pm}(x)$ for all non-singular random walks, including the
heavy-tailed case, or the case of $M =0$.
\begin{lemma} \label{lemma1.1}
\textbf{1.} For a non-singular random walk with $M_{y}\neq 0$,
$Y_{\pm}(x)$ has two branch points $x_{1}$ and $x_{2}$ inside the
unit circle and another two branch points $x_{3}$ and $x_{4}$
outside the unit circle. All these branch points lie on the real
line. More specifically,
\begin{description}
\item[(1)] if $p_{1,0}>2\sqrt{p_{1,1}p_{1,-1}}$, then
$1<x_{3}<x_{4}<\infty$;
\item[(2)] if $p_{1,0}=2\sqrt{p_{1,1}p_{1,-1}}$, then
$1<x_{3}<x_{4}=\infty$;
\item[(3)] if $p_{1,0}<2\sqrt{p_{1,1}p_{1,-1}}$, then $1<x_{3}
\leq - x_{4} < \infty$, where the equality holds if and only if
$d_1=d_3=0$.
\end{description}
Similarly,
\begin{description}
\item[(4)] if $p_{-1,0}>2\sqrt{p_{-1,1}p_{-1,-1}}$, then
$0<x_{1}<x_{2}<1$;
\item[(5)] if $p_{-1,0}=2\sqrt{p_{-1,1}p_{-1,-1}}$, then $x_{1}=0$
and $0<x_{2}<1$;
\item[(6)] if $p_{-1,0}<2\sqrt{p_{-1,1}p_{-1,-1}}$, then $0<-x_1
\leq x_{2}<1$, where the equality holds if and only if
$d_1=d_3=0$.
\end{description}
\textbf{2.} For a non-singular random walk with $M_{y}=0$ (in this
case $M_{x}\neq 0$ since we are only considering the genus 1 case
in this paper), either $x_{2}=1$ if $M_{x}<0$; or $x_{3}=1$ if
$M_{x}>0$. In the latter case, the system is unstable.
\end{lemma}
\proof We only need to prove items (3) and (6), since all
other statements can be found in Fayolle, Iasnogorodski and
Malyshev~\cite{FIM:1999} (Lemma~2.3.8 and Lemma~2.3.9). We provide details for (3),
since (6) can be proved similarly. Suppose, on the contrary, that
$x_{3}>-x_{4}$. If $d_{1}=d_{3}=0$, then $D_{1}$ is an even polynomial, its
roots occur in pairs $\pm r$, and hence $x_{3}=-x_{4}$, a contradiction.
Otherwise, since $D_{1}(x_{3})=0$ and $d_{1}\leq 0$, $d_{3}\leq 0$, we obtain
$D_{1}(-x_{3})=-2(d_{3}x_{3}^{3}+d_{1}x_{3})>0$. On the other hand,
$D_{1}(-\infty )=-\infty$ since $d_{4}<0$, which implies that
$D_{1}(x)=0$ would have a fifth root in $(-\infty ,-x_{3})$, which is
impossible. The contradiction shows that $x_{3}\leq -x_{4}$. It is
clear that the equality holds if and only if $d_{1}=d_{3}=0$.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark}
Similar results hold for the branch points $y_i$, $i=1, 2, 3, 4$, of $X_{\pm}(y)$.
\end{remark}
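Continuing with the (assumed) illustrative walk, $D_{1}(x)$ factors as
\begin{equation*}
D_{1}(x)=\Big(\tfrac{1}{5}x^{2}-\big(1+\sqrt{\tfrac{6}{25}}\big)x+\tfrac{3}{10}\Big)\Big(\tfrac{1}{5}x^{2}-\big(1-\sqrt{\tfrac{6}{25}}\big)x+\tfrac{3}{10}\Big),
\end{equation*}
whose four roots are, numerically, $x_{1}\approx 0.2071$, $x_{2}\approx 0.9199$, $x_{3}\approx 1.6306$ and $x_{4}\approx 7.2424$. All four branch points are real, two lie inside and two outside the unit circle, and since $p_{1,0}=\tfrac{1}{5}>2\sqrt{p_{1,1}p_{1,-1}}=0$ and $p_{-1,0}=\tfrac{3}{10}>2\sqrt{p_{-1,1}p_{-1,-1}}=0$ (with $M_{y}=\tfrac{1}{5}-\tfrac{3}{10}\neq 0$), this is consistent with cases (1) and (4) of Lemma~\ref{lemma1.1}.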
\begin{definition}
$p_{i,j}$ ($p_{i,j}^{(k)}$) is called X-shaped if $p_{i,j}=0$
($p_{i,j}^{(k)}=0$) for all $i$ and $j$ such that $|i+j|=1$. A
random walk is called X-shaped if $p_{i,j}$ and also
$p_{i,j}^{(k)}$ for $k=1, 2$ are all X-shaped.
\end{definition}
Based on Lemma~\ref{lemma1.1}, we can prove the following result.
\begin{corollary} \label{corollary-X}
$x_3=-x_4$ if and only if $p_{i,j}$ is X-shaped.
\end{corollary}
Throughout the rest of the paper, we define $[x_{3},x_{4}]=[-\infty ,x_{4}]\cup [x_{3},\infty ]$ when $x_{4}<-1$. Similarly,
$[y_{3},y_{4}]=[-\infty ,y_{4}]\cup [y_{3},\infty ]$ when $y_4<-1$. We define the following cut planes:
\begin{eqnarray*}
\widetilde{\mathbb{C}}_{x} &=&\mathbb{C}_{x} \setminus [x_{3},x_{4}], \\
\widetilde{\mathbb{C}}_{y} &=&\mathbb{C}_{y} \setminus [y_{3},y_{4}], \\
\widetilde{\widetilde{\mathbb{C}}}_{x} &=&\mathbb{C}_{x} \setminus \left( [x_{3},x_{4}] \cup [x_{1},x_{2}] \right), \\
\widetilde{\widetilde{\mathbb{C}}}_{y} &=&\mathbb{C}_{y} \setminus \left( [y_{3},y_{4}] \cup [y_{1},y_{2}] \right),
\end{eqnarray*}
where $\mathbb{C}_{x}$ and $\mathbb{C}_{y}$ are the complex planes for $x$ and $y$, respectively.
We now define two complex functions on the cut plane $\widetilde{\widetilde{\mathbb{C}}}_{x}$ based on $Y_{\pm}(x)$:
\begin{equation}
Y_{0}(x) = \left \{ \begin{array}{ll}
Y_{-}(x), & \text{if } |Y_{-}(x)| \leq |Y_{+}(x)|, \\
Y_{+}(x), & \text{if } |Y_{-}(x)|>|Y_{+}(x)|; \end{array}
\right.
\end{equation}
and
\begin{equation}
Y_{1}(x) = \left \{ \begin{array}{ll}
Y_{+}(x), & \text{if } |Y_{-}(x)| \leq |Y_{+}(x)|, \\
Y_{-}(x), & \text{if } |Y_{-}(x)|>|Y_{+}(x)|. \end{array}
\right.
\end{equation}
Obviously, $Y_0$ is the one of $Y_{-}$ and $Y_{+}$ with the smaller modulus and $Y_{1}$ is the one with the larger modulus.
Functions $X_0(y)$ and $X_1(y)$ are defined on the cut plane $\widetilde{\widetilde{\mathbb{C}}}_{y}$ in the same manner.
\begin{remark}
A branch point of $Y_{\pm}(x)$ ($X_{\pm}(y)$) is also referred to
as a branch point of $Y_0(x)$ and $Y_1(x)$ ($X_0(y)$ and
$X_1(y)$).
\end{remark}
\begin{remark} It is not always the case that $Y_0$ is a continuation of $Y_-$ and
$Y_1$ a continuation of $Y_+$. However, for $x \in \widetilde{\widetilde{\mathbb{C}}}_{x}$ with $a(x) \neq 0$, $Y_0(x)$ and $Y_1(x)$ are still the two
zeros of the kernel function $h(x,y)$. Parallel comments can be made on $X_0$ and $X_1$.
\end{remark}
A list of basic properties of $Y_0$ and $Y_1$ ($X_0$ and $X_1$) is provided in the following lemma.
\begin{lemma} \label{lemma1.1-b}
\textbf{1.} For $|x|=1$, $|Y_{0}(x)|\leq 1$ and $|Y_{1}(x)|\geq
1$, with equality only possibly for $x=\pm 1$. For $x=1$, we
have
\begin{align*}
Y_{0}(1) &= \min \left( 1,\frac{c(1)}{a(1)} \right ), \\
Y_{1}(1) &= \max \left( 1,\frac{c(1)}{a(1)} \right ),
\end{align*}
where $c(1)/a(1)=\sum_{i} p_{i,-1}/\sum_{i} p_{i,1}$;
for $x=-1$, the equality holds only if $p_{i,j}$ is X-shaped, in which case
\begin{align*}
Y_{0}(-1) &= -\min \left( 1,\frac{c(1)}{a(1)} \right ), \\
Y_{1}(-1) &= -\max \left( 1,\frac{c(1)}{a(1)} \right ).
\end{align*}
\textbf{2.} The functions $Y_{i}(x)$, $i=0,1$, are meromorphic in
the cut plane $\widetilde{\widetilde{\mathbb{C}}}_{x}$. In
addition,
\begin{description}
\item[(a)] $Y_{0}(x)$ has two zeros and no poles. Hence $Y_{0}(x)$
is analytic in $\widetilde{\widetilde{\mathbb{C}}}_{x}$;
\item[(b)] $Y_{1}(x)$ has two poles and no zeros.
\item[(c)] $|Y_{0}(x)|\leq |Y_{1}(x)|$, in the whole cut complex
plane $\widetilde{\widetilde{\mathbb{C}}}_{x}$, and equality takes
place only on the cuts.
\end{description}
\textbf{3.} The function $Y_{0}(x)$ can become infinite at a point $x$ if
and only if,
\begin{description}
\item[(a)] $p_{1,1}=p_{1,0}=0$; in this case, $x=x_{4}=\infty$; or
\item[(b)] $p_{-1,1}=p_{-1,0}=0$; in this case, $x=x_{1}=0$.
\end{description}
Parallel conclusions can be made for functions $X_0(y)$ and $X_1(y)$.
\end{lemma}
\proof This lemma collects results from Lemma~2.3.4 and Theorem~5.3.3 in \cite{FIM:1999}. First, according to (ii) of Theorem~5.3.3 in
\cite{FIM:1999}, the functions $Y_0$ and $Y_1$ defined in this paper coincide with the functions $Y_0$ and $Y_1$ in \cite{FIM:1999}, due to
the uniqueness of the analytic continuation. Then, all results in 1 come from Lemma~2.3.4 and Lemma~5.3.1 in \cite{FIM:1999}, except for the expressions for $Y_0(-1)$ and $Y_1(-1)$, which
can be obtained in the same fashion as those for $Y_0(1)$ and $Y_1(1)$; the results in
2 are given in (ii) of Theorem~5.3.3 in \cite{FIM:1999}; and the conclusion in 3 is the same as in (iii) of Theorem~5.3.3 in \cite{FIM:1999}.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark} All the above properties can be directly obtained through elementary analysis of the square root function. \end{remark}
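For the (assumed) illustrative walk, part 1 is easily checked at $x=1$: there $a(1)=\tfrac{1}{5}$, $b(1)=-\tfrac{1}{2}$, $c(1)=\tfrac{3}{10}$ and $D_{1}(1)=\tfrac{1}{4}-\tfrac{6}{25}=\tfrac{1}{100}$, so
\begin{equation*}
Y_{\pm}(1)=\frac{\tfrac{1}{2}\pm\tfrac{1}{10}}{\tfrac{2}{5}},
\qquad
Y_{0}(1)=1=\min\Big(1,\tfrac{c(1)}{a(1)}\Big),
\qquad
Y_{1}(1)=\tfrac{3}{2}=\tfrac{c(1)}{a(1)}=\max\Big(1,\tfrac{c(1)}{a(1)}\Big),
\end{equation*}
in agreement with Lemma~\ref{lemma1.1-b}.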
Throughout the rest of the paper, unless otherwise specified, we
make the following assumption:
\begin{assumption} \label{assumption-3}
All branch points $x_i$ and $y_i$, $i=1,2,3,4$, are distinct.
\end{assumption}
A random walk satisfying Assumption~\ref{assumption-3} is called a genus 1 random walk.
\begin{remark}
This assumption is equivalent to the assumption that the Riemann surface defined by the kernel equation has genus 1. The Riemann surface for the random walk is either genus 1 or genus 0.
A necessary and sufficient condition for the random walk in the quarter plane
to be genus 1 is given in Lemma~2.3.10 in \cite{FIM:1999}. Most queueing application models fall into the genus 1 case. The genus 0 case can be analyzed similarly except for the heavy-tailed case,
the case where $M=0$. In general, analysis of the genus 0 case (except for the case of $M=0$) could be less challenging since expressions for the unknown generating functions $\pi_1(x)$ and $\pi_2(y)$ are either explicit or
less complex than for the genus 1 case, which can immediately lead to an analytic continuation of these unknown generating functions. Chapter~6 of \cite{FIM:1999} is devoted to the genus 0 case.
\end{remark}
\begin{corollary}
For a non-singular genus 1 random walk, if $p_{i,j}$ is X-shaped, then
all $p_{1,1}$, $p_{1,-1}$, $p_{-1,1}$ and $p_{-1,-1}$ are
positive.
\end{corollary}
\proof If only one of $p_{1,1}$, $p_{1,-1}$, $p_{-1,1}$ and
$p_{-1,-1}$ is zero, then the random walk is non-singular having
genus 0 (Lemma~2.3.10 in \cite{FIM:1999}) and if at least two of
them are zero, then the random walk is singular (Lemma~2.3.2 in
\cite{FIM:1999}).
\thicklines \framebox(6.6,6.6)[l]{}
\begin{corollary}
For a stable random walk with $M \neq 0$,
\textbf{1.} If $p^{(1)}_{i,j}$ is X-shaped, then
$p^{(1)}_{1,1}$ and $p^{(1)}_{-1,1}$ cannot be both zero; and
\textbf{2.} If $p^{(2)}_{i,j}$ is X-shaped, then
$p^{(2)}_{1,1}$ and $p^{(2)}_{-1,-1}$ cannot be both zero.
\end{corollary}
\proof Otherwise, $p^{(k)}_{0,0}=1$, $k=1$ or 2, with which the random walk cannot be stable.
\thicklines \framebox(6.6,6.6)[l]{}
For our purpose, more results about functions $Y_0$ and $Y_1$
($X_0$ and $X_1$) are needed. Once again, we consider $Y_0$ and
$Y_1$; $X_0$ and $X_1$ can be treated in the same way. Recall
that $Y_k$ ($k=0,1$) are defined on the cut plane $\mathbb{C}_{x}
\setminus \left( [x_{3},x_{4}] \cup [x_{1},x_{2}] \right)$, where the two slits
$[x_1,x_2]$ and $[x_3,x_4]$ are removed from the complex plane
so that the functions $Y_k$ always stay on one branch. Take
the slit $[x_1,x_2]$ as an example. For $x' \in [x_1,x_2]$, the
limit of $Y_k(x)$ as $x$ approaches $x'$ from above the real
axis differs from the limit as $x$ approaches $x'$ from
below the real axis. Let $x'' \in [x_1,x_2]$ be another point
satisfying $x'' >x'$. By
$Y_{0}[\underleftarrow{\overrightarrow{x'x''}}]$, we denote the
image contour obtained as the limit of $Y_0(x)$ from above the real
axis when $x$ traverses from $x'$ to $x''$ and from below the real
axis when $x$ traverses back from $x''$ to $x'$. For
convenience, we say that
$Y_{0}[\underleftarrow{\overrightarrow{x'x''}}]$ is the image of
the contour $\underleftarrow{\overrightarrow{x'x''}}$, traversed
from $x'$ to $x''$ along the upper edge of the slit $[x',x'']$ and
then back to $x'$ along the lower edge of the slit. In this way,
we can define the following image contours:
\begin{align}
\mathscr{L} & = Y_{0}[\underleftarrow{\overrightarrow{x_{1}x_{2}}}], \;\;\;
{\mathscr{L}}_{ext} = Y_{0}[\underleftarrow{\overrightarrow{x_{3}x_{4}}}]; \\
\mathscr{M} & = X_{0}[\underleftarrow{\overrightarrow{y_{1}y_{2}}}], \;\;\;
{\mathscr{M}}_{ext} = X_{0}[\underleftarrow{\overrightarrow{y_{3}y_{4}}}],
\end{align}
respectively.
Furthermore, for an arbitrary simple closed curve $\mathscr{U}$,
by $G_{\mathscr{U}}$ we denote the interior domain bounded by
$\mathscr{U}$ and by $G_{\mathscr{U}}^{c}$ the exterior domain.
The properties of the above image contours provided in the following lemma are important for the interlace between the two unknown functions $\pi_1(x)$ and
$\pi_2(y)$ discussed in the next section. To state the lemma,
define the following determinant:
\begin{eqnarray}
\Delta &=&\left\vert
\begin{array}{ccc}
p_{1,1} & p_{1,0} & p_{1,-1} \\
p_{0,1} & p_{0,0} & p_{0,-1} \\
p_{-1,1} & p_{-1,0} & p_{-1,-1}
\end{array}
\right\vert \notag .
\end{eqnarray}
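For the (assumed) illustrative walk used in the earlier examples, the only non-zero entries of this matrix are $p_{1,0}=p_{0,1}=\tfrac{1}{5}$ and $p_{-1,0}=p_{0,-1}=\tfrac{3}{10}$, and expanding the determinant along the first row gives $\Delta=0$, so that case \textbf{(c)} of Lemma~\ref{lemma1.4} below applies to this walk.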
\begin{lemma}\label{lemma1.4}
For a non-singular genus 1
random walk without branch points on the unit circle, we have the
following properties:
\textbf{1.} The curves $\mathscr{M}$ and $\mathscr{M}_{ext}$
are simple, closed and symmetric about the
real axis in the $\mathbb{C}_{x}$ plane. Moreover,
\textbf{(a)} If $\Delta >0$, then
\begin{equation*}
\lbrack x_{1},x_{2}]\subset G_{\mathscr{M}}\subset G_{\mathscr{M}_{ext}}
\;\text{ and }\; [x_{3},x_{4}]\subset G_{\mathscr{M}_{ext}}^{c};
\end{equation*}
\textbf{(b)} If $\Delta <0$, then
\begin{equation*}
\lbrack x_{1},x_{2}]\subset G_{\mathscr{M}_{ext}}\subset G_{\mathscr{M}}
\;\text{ and }\; [x_{3},x_{4}]\subset G_{\mathscr{M}}^{c};
\end{equation*}
\textbf{(c)} If $\Delta =0$, then
\begin{equation*}
\lbrack x_{1},x_{2}]\subset G_{\mathscr{M}_{ext}}=G_{\mathscr{M}}
\;\text{ and }\; [x_{3},x_{4}]\subset G_{\mathscr{M}}^{c}.
\end{equation*}
Entirely symmetric results hold for $\mathscr{L}$ and
$\mathscr{L}_{ext}$.
\textbf{2.} The branches $X_i$ and $Y_i$ have the following
properties:
\textbf{(a)} Both $X_0(y)$ and $Y_0(x)$ are conformal mappings:
$G_{\mathscr{M}}-[x_{1},x_{2}]\overset{Y_{0}(x)}{\underset{X_{0}(y)}{\rightleftarrows}}G_{\mathscr{L}}-[y_{1},y_{2}]$;
\textbf{(b)} $X_{0}(y)\in G_{\mathscr{M}} \cup
G_{\mathscr{M}_{ext}}$ and $X_{1}(y) \in G_{\mathscr{M}}^{c} \cup
G_{\mathscr{M}_{ext}}^{c}$. Symmetrically, $Y_{0}(x) \in
G_{\mathscr{L}} \cup G_{\mathscr{L}_{ext}}$ and $Y_{1}(x)\in
G_{\mathscr{L}}^{c} \cup G_{\mathscr{L}_{ext}}^{c}$;
\textbf{(c)} If $G_{\mathscr{M}} \subset G_{\mathscr{M}_{ext}}$,
then
\begin{eqnarray*}
X_{0}\circ Y_{0}(t) &=&t, \text{ if } t \in G_{\mathscr{M}}, \\
X_{0}\circ Y_{0}(t) &\neq &t, \text{ if }t\in G_{\mathscr{M}}^{c}
\text{ and } X_{0}\circ
Y_{0}(G_{\mathscr{M}}^{c})=G_{\mathscr{M}}.
\end{eqnarray*}
Symmetrically, if $G_{\mathscr{L}}\subset G_{\mathscr{L}_{ext}}$,
then
\begin{eqnarray*}
Y_{0}\circ X_{0}(t) &=&t\text{ if }t\in G_\mathscr{L}, \\
Y_{0}\circ X_{0}(t) &\neq &t\text{ if }t\in
G_\mathscr{L}^{c}\text{ and }Y_{0}\circ
X_{0}(G_\mathscr{L}^{c})=G_\mathscr{L}.
\end{eqnarray*}
\end{lemma}
\proof
A proof of the lemma can be found in Theorem~5.3.3~(i) and
Corollary~5.3.5 in \cite{FIM:1999}. Parallel results when 1 is a
branch point (or both 1 and -1 are branch points) can be found in
Lemma~2.3.6, Lemma~2.3.9 and Lemma~2.3.10 of \cite{FIM:1999}.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark}
Results in this lemma can also be directly proved through elementary analysis without using advanced mathematical concepts used in
\cite{FIM:1999}.
\end{remark}
\section{Asymptotic Analysis of the Two Unknown Functions $\pi_1(x)$
And $\pi_2(y)$}
The key idea of the kernel method is to consider all $(x,y) \in B$
such that the right hand side of the fundamental form is also
zero, which provides a relationship between the two unknown
functions $\pi_1(x)$ and $\pi_2(y)$. Then, the interlace between
the unknown functions $\pi_1(x)$ and $\pi_2(y)$ plays the key role
in the asymptotic analysis of these two functions, from which
exact tail asymptotics of the stationary distribution can be
determined according to asymptotic analysis of the unknown
function at its singularities and the Tauberian-like theorem.
\subsection{Tauberian-like theorems}
Various approaches, probabilistic as well as non-probabilistic
(analytic or algebraic), are available for establishing exact geometric
decay. However, asymptotic analysis seems unavoidable for exact
non-geometric decay. A Tauberian, or Tauberian-like, theorem
provides a tool connecting the asymptotic behaviour of an analytic
function at its dominant singularities to the tail behaviour of the
sequence of coefficients in the Taylor series of the function at zero.
In our case, an unknown generating function of a
probability sequence is analytic at zero. Since these
probabilities are unknown, in general, it cannot be verified that
the probability sequence is (eventually) monotone, which is a
required condition for applying a standard Tauberian theorem. The
tool used in this paper is a Tauberian-like theorem, which does
not require this monotonicity. Instead, it imposes an extra
condition on the analyticity of the unknown generating function.
Let $A(z)$ be analytic in $|z|<R$, where $R$ is the radius of convergence of the function $A(z)$. We first consider a special case in which $R$ is the only
singularity on the circle of convergence.
\begin{remark}
It should be noticed that, for a function analytic at 0 whose Taylor coefficients are all non-negative,
the point $z=R$, where $R>0$ is the radius of convergence, is a singularity of the function, by the well-known Pringsheim's Theorem.
\end{remark}
\begin{definition}[Definition~VI.1 in Flajolet and Sedgewick~\cite{Flajolet-Sedgewick:09}]
For given numbers $\varepsilon >0$ and $\phi$ with $0<\phi <\pi/2$, the open domain
$\Delta (\phi, \varepsilon)$ is defined by
\begin{equation}
\Delta (\phi, \varepsilon) = \left \{z \in \mathbb{C}: |z| < 1+\varepsilon, z \neq 1, |\arg(z-1)| > \phi \right \}.
\end{equation}
A domain is a $\Delta$-domain at 1 if it is a $\Delta(\phi, \varepsilon)$ for some $\varepsilon>0$ and $0< \phi< \pi/2$. For a complex number $\zeta \neq 0$,
a $\Delta$-domain at $\zeta$ is defined as the image $\zeta \cdot \Delta(\phi, \varepsilon)$ of a $\Delta$-domain $\Delta(\phi, \varepsilon)$ at 1
under the mapping $z \mapsto \zeta z$. A function is called $\Delta$-analytic if it is
analytic in some $\Delta$-domain.
\end{definition}
\begin{remark}
The region $\Delta (\phi, \varepsilon)$ is an indented disk of radius $1+\varepsilon$.
Readers may refer to Figure~VI.6 in \cite{Flajolet-Sedgewick:09} for a picture of the region.
Throughout the paper, unless
otherwise stated, the limit of a $\Delta$-analytic function is always
taken in the $\Delta$-domain.
\end{remark}
\begin{theorem}[Tauberian-like theorem for single singularity] \label{tauberian-1}
Let $A(z)=\sum_{n\geq 0}a_{n}z^{n}$ be analytic at 0 with $R$ the
radius of convergence. Suppose that $R$ is a singularity of $A(z)$
on the circle of convergence such that $A(z)$ can be continued to
a $\Delta$-domain at $R$. If for a real number $\alpha \notin \{0,
-1, -2, \ldots\}$,
\begin{equation*}
\lim_{z \rightarrow R}(1-z/R)^{\alpha}A(z)= g,
\end{equation*}
where $g$ is a non-zero constant, then,
\begin{equation*}
a_{n} \sim \frac{g}{\Gamma(\alpha)} n^{\alpha-1} R^{-n},
\end{equation*}
where $\Gamma(\alpha)$ is the value of the gamma function at $\alpha$.
\end{theorem}
\proof
This is an immediate consequence of Corollary~VI.1 in \cite{Flajolet-Sedgewick:09} after the change of variable $z \mapsto R z$.
\thicklines \framebox(6.6,6.6)[l]{}
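A minimal sanity check of Theorem~\ref{tauberian-1} (with assumed data): take $A(z)=(1-z/R)^{-\alpha}$ with $R>0$ and $\alpha\notin\{0,-1,-2,\ldots\}$. This function is $\Delta$-analytic at $R$, and $\lim_{z\to R}(1-z/R)^{\alpha}A(z)=1$, so $g=1$. On the other hand, the exact coefficients are
\begin{equation*}
a_{n}=\binom{n+\alpha-1}{n}R^{-n}=\frac{\Gamma(n+\alpha)}{\Gamma(\alpha)\Gamma(n+1)}R^{-n}\sim\frac{n^{\alpha-1}}{\Gamma(\alpha)}R^{-n},
\end{equation*}
which is exactly the asymptotic behaviour asserted by the theorem.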
For the random walks studied in this paper, we will prove that the unknown generating function $\pi_1(x)$ ($\pi_2(y)$) has only one singularity on
its circle of convergence, except for the X-shaped random walk, for which the radius of convergence $R$ and $-R$ are the only singularities. To deal with the latter case,
we introduce the following Tauberian-like theorem for the case of multiple singularities.
\begin{theorem}[Tauberian-like theorem for multiple singularities] \label{tauberian-2}
Let $A(z)=\sum_{n\geq 0}a_{n}z^{n}$ be analytic when $|z|<R$ and have a finite number of singularities $\zeta_k$, $k=1, 2, \ldots, m$ on the circle $|z|=R$ of convergence.
Assume that there exists a $\Delta$-domain $\Delta_0$ at 1 such that $A$ can be analytically continued to the intersection $D$ of the $\Delta$-domains $\zeta_k \cdot \Delta_0$ at $\zeta_k$, $k=1, 2, \ldots, m$:
\[
D = \cap_{k=1}^m (\zeta_k \cdot \Delta_0).
\]
If for each $k$, there exists a real number $\alpha_k \notin \{0, -1, -2, \ldots\}$ such that
\begin{equation*}
\lim_{z \rightarrow \zeta_k} (1-z/\zeta_k)^{\alpha_k}A(z)= g_k,
\end{equation*}
where $g_k$ is a non-zero constant, then,
\begin{equation*}
a_{n} \sim \sum_{k=1}^m \frac{g_k}{\Gamma(\alpha_k)} n^{\alpha_k-1} \zeta_k^{-n}.
\end{equation*}
\end{theorem}
\proof This is an immediate corollary of Theorem~VI.5 in
\cite{Flajolet-Sedgewick:09} for the case where $\alpha _{k}$ is
real, $\beta _{k}=0$, $\sigma_{k}(z) =
\tau_{k}(z)=(1-z)^{-\alpha_{k}}$ and $\sigma_{k,n} =
\frac{g_{k}}{\Gamma (\alpha_{k} )}n^{\alpha _{k}-1}$.
\thicklines \framebox(6.6,6.6)[l]{}
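As a simple illustration of Theorem~\ref{tauberian-2} (again with assumed data), take $A(z)=(1-z/R)^{-\alpha}+(1+z/R)^{-\alpha}$ with $R>0$ and $\alpha>0$. The two dominant singularities are $\zeta_{1}=R$ and $\zeta_{2}=-R$, with $g_{1}=g_{2}=1$ and $\alpha_{1}=\alpha_{2}=\alpha$, so the theorem predicts
\begin{equation*}
a_{n}\sim\frac{n^{\alpha-1}}{\Gamma(\alpha)}\left[R^{-n}+(-R)^{-n}\right]
=\frac{n^{\alpha-1}}{\Gamma(\alpha)}\big(1+(-1)^{n}\big)R^{-n},
\end{equation*}
matching the exact coefficients $a_{n}=\binom{n+\alpha-1}{n}\big(1+(-1)^{n}\big)R^{-n}$, which vanish for odd $n$. This period-two behaviour is precisely the kind of phenomenon that arises for the X-shaped random walks mentioned above.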
\subsection{Interlace of the two unknown functions $\pi_1(x)$ and $\pi_2(y)$}
The interlace of the unknown functions $\pi_1(x)$ and $\pi_2(y)$ is a key for asymptotic analysis of these functions.
Let
\begin{eqnarray*}
\Gamma_{a} &=& \{x\in \mathbb{C}: |x|=a \}, \\
D_{a} &=& \{x: |x|<a\}, \\
\overline{D}_{a} &=&\{x:|x|\leq a\}.
\end{eqnarray*}
When $a=1$, we write $\Gamma=\Gamma_1$, $D = D_1$ and
$\overline{D}=\overline{D}_1$.
We first state two results from the literature on the continuation of the functions $\pi_1(x)$ and $\pi_2(y)$.
\begin{lemma}[Theorem~3.2.3 in \cite{FIM:1999}] \label{lemma1.3}
For a stable non-singular random walk having genus 1, $\pi_1(x)$
is a meromorphic function in the complex cut plane
$\widetilde{\mathbb{C}}_{x}$. Similarly, $\pi_2(y)$ is a
meromorphic function in the complex cut plane
$\widetilde{\mathbb{C}}_{y}$.
\end{lemma}
This continuation result is crucial for tail asymptotic analysis.
The following intuition might help explain why such a
continuation exists. When the right hand side of the fundamental
form is zero, $x$ and $y$ are related, say through the
function $Y_0(x)$. Therefore, $x_3$ is the dominant singularity if
no other singularities exist in $(1,x_3)$. Based on
the expression for $\pi_1(x)$ obtained from the fundamental form,
all other singularities come from the zeros of $h_1(x,Y_0(x))$,
which are poles of $\pi_1(x)$, or the singularities of
$\pi_2(Y_0(x))$. A similar intuition holds for the function
$\pi_2(y)$. Based on the above intuition, it is reasonable to
expect Lemma~\ref{lemma1.3}.
\begin{remark}
An analytic continuation can be achieved through various methods.
In \cite{FIM:1999} and \cite{Flatto-Hahn:84}, it was proved in terms of
properties of Riemann surfaces. In \cite{Kobayashi-Miyazawa:2011} and \cite{Guillemin-Leeuwaarden:09}, direct methods were used
for a convergent region. For some cases, a simple proof exists by using the property of the conformal mapping $Y_0$ or $X_0$.
For example, for the case of $M_y>0$ and $M_x<0$,
we know, from Lemma~\ref{lemma1.1-b}-1, Lemma~\ref{lemma1.1} and Lemma~\ref{lemma1.1-b}-2 respectively, that
$|Y_0(x)|<1$ for $|x|=1$, $x_3>1$ and $Y_0$ is analytic in the cut
plane. Therefore, it is not difficult to see that we can find an
$\varepsilon>0$ such that for $|x|<1+\varepsilon$, the function
$\pi_2(Y_{0}(x))$ in (\ref{eqn:1.9}) is analytic, which leads to
the continuation of $\pi_1(x)$.
\end{remark}
\begin{lemma}[Lemma~2.2.1 in \cite{FIM:1999}] \label{lemma1.2}
Assume that the random walk is ergodic with $M \neq 0$ and the
polynomial $h(x,y)$ is irreducible. Then, there exists an $\varepsilon
>0$ such that the functions $\pi_1(x)$ and $\pi_2(y)$ can be
analytically continued up to the circle $\Gamma_{1+\varepsilon }$
in their respective complex plane. Moreover, they satisfy the
following equation in $D_{1+\varepsilon }^{2}\cap B$:
\begin{equation*}
h_{1}(x,y)\pi_1(x)+h_{2}(x,y)\pi_2(y)+h_{0}(x,y)\pi_{0,0}=0.
\end{equation*}
\end{lemma}
\proof The analytic continuation is a direct consequence of Lemma~\ref{lemma1.3} and the equation is
directly from the fundamental form.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{theorem} \label{theorem1.1}
\textbf{1.}
Function $\pi_2(Y_{0}(x))$ is meromorphic in the cut complex plane
$\widetilde{\widetilde{\mathbb{C}}}_{x}$. Moreover, if
$Y_{0}(x_{3})$ is not a pole of $\pi_2(y)$, then $x_{3}$ is the dominant singularity $x_{dom}$
of $\pi_2(Y_{0}(x))$, and there exist $\varepsilon >0$ and $0< \phi <\pi/2$ such that, with the limits taken in the $\Delta(\phi,\varepsilon)$-domain at $x_{3}$,
\begin{equation*}
\underset{ x\rightarrow x_{3}}{\lim }\pi_2(Y_{0}(x))= \pi_2(Y_{0}(x_{3}))
\;\text{ and }\;
\underset{ x\rightarrow x_{3}}{\lim }\pi_2^{\prime}(Y_{0}(x))=\pi_2^{\prime}(Y_{0}(x_{3})).
\end{equation*}
Similarly, $\pi_1(X_{0}(y))$ is
meromorphic in the cut complex plane $\widetilde{\widetilde{\mathbb{C}}}_{y}$. Moreover, if $X_{0}(y_{3})$
is not a pole of $\pi_1(x)$, then $y_{3}$ is the dominant singularity $y_{dom}$ of $\pi_1(X_{0}(y))$, and there exist $\varepsilon >0$ and $0< \phi <\pi/2$ such that, with the limits taken in the $\Delta(\phi,\varepsilon)$-domain at $y_{3}$,
\[
\underset{ y\rightarrow y_{3}}{\lim }\pi_1(X_{0}(y)) = \pi_1(X_{0}(y_{3}))
\;\text{ and }\;
\underset{y\rightarrow y_{3}}{\lim }\pi_1^{\prime}(X_{0}(y)) = \pi_1^{\prime}(X_{0}(y_{3})).
\]
\textbf{2.} In cut plane
$\widetilde{\widetilde{\mathbb{C}}}_{x}$, equation
\begin{equation} \label{eqn:1.8}
h_{1}(x,Y_{0}(x))\pi_1(x)+h_{2}(x,Y_{0}(x))\pi_2(Y_{0}(x))+h_{0}(x,Y_{0}(x))\pi_{0,0}=0
\end{equation}
holds except at a pole (if there is any) of $\pi_1(x)$ or
$\pi_2(Y_{0}(x))$. Therefore,
\begin{equation} \label{eqn:1.9}
\pi_1(x)=\frac{-h_{2}(x,Y_{0}(x))\pi_2(Y_{0}(x))-h_{0}(x,Y_{0}(x))\pi_{0,0}}{h_{1}(x,Y_{0}(x))},
\end{equation}
except at zero of $h_{1}(x,Y_{0}(x))$, or at a pole (if there is
any) of $\pi_1(x)$ or $\pi_2(Y_{0}(x))$.
Similarly, in the cut plane $\widetilde{\widetilde{\mathbb{C}}}_{y}$,
equation
\begin{equation}
h_{1}(X_{0}(y),y)\pi_1(X_{0}(y))+h_{2}(X_{0}(y),y)\pi_2(y)+h_{0}(X_{0}(y),y)\pi_{0,0}=0
\end{equation}
holds except at a pole (if there is any) of $\pi_2(y)$ or
$\pi_1(X_{0}(y))$. Therefore,
\begin{equation} \label{eqn:1.11}
\pi_2(y)=\frac{-h_{1}(X_{0}(y),y)\pi_1(X_{0}(y))-h_{0}(X_{0}(y),y)\pi_{0,0}}{h_{2}(X_{0}(y),y)},
\end{equation}
except at a zero of $h_{2}(X_{0}(y),y)$, or at a pole (if there is
any) of $\pi_2(y)$ or $\pi_1(X_{0}(y))$.
\end{theorem}
\proof We only prove the result for functions of $x$ and the result
for functions of $y$ can be proved in the same fashion.
\textbf{1.} From Lemma~\ref{lemma1.1} and Lemma~\ref{lemma1.3},
$Y_{0}(x)$ is analytic in the cut complex plane
$\widetilde{\widetilde{\mathbb{C}}}_{x}$ and $\pi_2(y)$ is
meromorphic in the cut complex plane $\widetilde{\mathbb{C}}_{y}$,
which implies $\pi_2(Y_{0}(x))$ is meromorphic in
$\widetilde{\widetilde{\mathbb{C}}}_{x}$ if $Y_{0}(x)\notin
[y_{3},y_{4}]$. According to Lemma~\ref{lemma1.4}-2(b), for all
$x \in \mathbb{C}_x$, $Y_{0}(x)\in G_\mathscr{L} \cup
G_{\mathscr{L}_{ext}}$ and according to Lemma~\ref{lemma1.4}-1,
$[y_{3},y_{4}] \subset (G_\mathscr{L}\cup
G_{\mathscr{L}_{ext}})^{c}$, which confirms $Y_{0}(x)\notin
[y_{3},y_{4}]$. Combining this with the assumption that
$Y_{0}(x_{3})$ is not a pole of $\pi_2(y)$, the function $\pi_2(y)$ is analytic at
$Y_0(x_3)$, and the limits in \textbf{1.} then follow from the
analytic properties of $\pi_2(Y_{0}(x))$.
\textbf{2.} Since both $\pi_1(x)$ and $ \pi_2(Y_{0}(x))$ are
meromorphic (proved in \textbf{1.}) and $Y_{0}(x)$ is analytic
(Lemma~\ref{lemma1.1}) in
$\widetilde{\widetilde{\mathbb{C}}}_{x}$, equation (\ref{eqn:1.8})
holds in the cut
plane $\widetilde{\widetilde{\mathbb{C}}}_{x}$ except at the
poles of $\pi_1(x)$ or $\pi_2(Y_{0}(x))$.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark} Let us
extend the definition of $\pi_1(x)$ to $x=x_3$ by $\pi_1(x_3) =
\lim_{x \to x_3} \pi_1(x)$ for $x$ in the cut plane. We say that
$x_3$ is a pole if the limit of $\pi_1(x)$ is infinite as $x \to
x_3$ in the cut plane.
\end{remark}
According to the above interlacing property and the Tauberian-like
theorem, for exact tail asymptotics of the boundary probabilities
$\pi_{n,0}$ and $\pi_{0,n}$, we only need to carry out an
asymptotic analysis at the dominant singularities of the functions
$\pi_1(x)$ and $\pi_2(y)$, respectively. There are only two
possible types of singularities, poles or branch points. We need
to answer the following questions:
\textbf{Q1.} How many singularities (dominant singularities) are there on the circle of convergence?
\textbf{Q2.} What is the multiplicity of a pole?
\textbf{Q3.} Is the branch point also a pole?
For the random walk considered in this paper, we will answer all
these questions. We will see that on the circle of convergence, there
is only one singularity or there are exactly two singularities.
For the former, Theorem~\ref{tauberian-1} will be applied, and for
the latter, Theorem~\ref{tauberian-2} will be applied.
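For orientation (a heuristic sketch only; in each instance the $\Delta$-analyticity required by the theorems still has to be verified): if $x_{dom}$ is a simple pole of $\pi_1$, so that $(1-x/x_{dom})\pi_1(x)\to g\neq 0$, then Theorem~\ref{tauberian-1} applies with $\alpha=1$ and the coefficients of $\pi_1$ behave like $g\,x_{dom}^{-n}$ (exact geometric decay); a double pole corresponds to $\alpha=2$ and yields an extra factor $n$; a square-root singularity with $\sqrt{1-x/x_{dom}}\,\pi_1(x)\to g\neq 0$ corresponds to $\alpha=1/2$ and yields a factor $n^{-1/2}$. In the remaining situation, where $\pi_1$ stays bounded at $x_{dom}$ but $\sqrt{1-x/x_{dom}}\,\pi_1^{\prime}(x)$ has a non-zero limit, the same argument applied to $\pi_1^{\prime}$ (whose $n$-th coefficient is $n+1$ times the $(n+1)$-st coefficient of $\pi_1$) produces a factor $n^{-3/2}$. These four situations correspond to the four cases identified later in this section.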
\subsection{Poles of $\pi_1(x)$}
Parallel properties about poles of the function $\pi_2(y)$ can be obtained in the
same fashion, which will not be detailed here.
\begin{lemma} \label{lemma1.5}
\textbf{1.} Let $x \in G_{\mathscr{M}} \cap (\overline{D})^{c}$.
Then the possible poles of $\pi_1(x)$ in $G_{\mathscr{M}}\cap
(\overline{D})^{c}$ are necessarily zeros of $h_{1}(x,Y_{0}(x))$,
and $|Y_{0}(x)|\leq 1$.
\textbf{2.} Let $ y \in G_\mathscr{L}\cap (\overline{D})^{c}$.
Then the possible poles of $\pi_2(y)$ in $G_\mathscr{L}\cap
(\overline{D})^{c}$ are necessarily zeros of $ h_{2}(X_{0}(y),y)$,
and $|X_{0}(y)|\leq 1$.
\end{lemma}
\proof \textbf{1.} When $x\in
\mathscr{M}$, we have $Y_0(x) \in [y_1,y_2]$.
From
Lemma~\ref{lemma1.1-b}, for $|x|=1$, $|Y_{0}(x)|\leq 1$. For $x \in
G_{\mathscr{M}} \cap (\overline{D})^{c}$, it then follows from the
maximum modulus principle that $|Y_{0}(x)|\leq 1$. Hence,
$\pi_2(Y_{0}(x))$ is analytic in $G_{\mathscr{M}} \cap
(\overline{D})^{c}$. From Theorem~\ref{theorem1.1}, if
$h_{1}(x,Y_{0}(x)) \neq 0$, equation (\ref{eqn:1.9}) holds, which
implies that the possible poles of $\pi_1(x)$ in $G_{\mathscr{M}}
\cap (\overline{D})^{c}$ are necessarily zeros of
$h_{1}(x,Y_{0}(x))$.
\textbf{2.} The proof is similar.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{theorem} \label{theorem1.2}
Let $x_p$ be a pole of $\pi_1(x)$ with the
smallest modulus. Assume that $|x_p| \leq x_3$. Then, one of the
following two cases must hold:
\textbf{1.} $x_p$ is a zero of $h_{1}(x,Y_{0}(x))$;
\textbf{2.} $\widetilde{y}_0=Y_{0}(x_p)$ is a zero of
$h_{2}(X_{0}(y),y)$ and $|\widetilde{y}_0|>1$.
Parallel results hold for a pole of $\pi_2(y)$.
\end{theorem}
\proof Suppose that $x_p$ is not a zero of $h_{1}(x,Y_{0}(x))$.
According to equation (\ref{eqn:1.9}) in Theorem~\ref{theorem1.1},
$x_p$ must be a pole of $\pi_2(Y_{0}(x))$ and
$|\widetilde{y}_0|>1$. Furthermore, by Lemma~\ref{lemma1.5},
$x_p \notin G_{\mathscr{M}}$. If $\widetilde{y}_0$ is not a
zero of $h_{2}(X_{0}(y),y)$, according to equation
(\ref{eqn:1.11}) in Theorem~\ref{theorem1.1},
$\widetilde{y}_0$ must be a pole of $\pi_1(X_{0}(y))$, that
is, $\widetilde{x}_0=X_0(\widetilde{y}_0)$ is a pole of $\pi_1(x)$.
It follows from Lemma~\ref{lemma1.5} that
$\widetilde{x}_0=X_{0}(\widetilde{y}_0)$ is a zero of
$h_{1}(x,Y_{0}(x))$ if $\widetilde{x}_0\in G_{\mathscr{M}}$.
There are two possible cases:
$\Delta >0$ or $\Delta \leq 0$. If $\Delta >0$, by
Lemma~\ref{lemma1.4}-1(a) and 2(c), $\widetilde{x}_0 \in
G_{\mathscr{M}}$. In the case of $\Delta \leq 0$, according to
Lemma~\ref{lemma1.4}-1(b), 1(c) and 2(b), we also have
$\widetilde{x}_0\in G_{\mathscr{M}}$. However, this case is not
possible, since otherwise according to Lemma~\ref{lemma1.4}-1 we
would have $\widetilde{x}_0=x_p$ or $\widetilde{x}_0=-x_p$, both leading
to a contradiction. This completes the proof.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark} \label{remark1.3}
We will show in the next subsection that a pole
of $\pi_1(x)$ with the smallest modulus in the disk $|x|\leq x_{3}$ is real.
\end{remark}
\subsection{Zeros of $h_{1}(x,Y_{0}(x))$}
\label{section2}
In this subsection, we provide properties on the zeros of the
function $h_1(x,Y_0(x))$. The main result is stated in the
following theorem.
\begin{theorem} \label{theorem2.1}
For a non-singular random walk having genus 1, consider
the following two possible cases:
\textbf{1.} Either $p_{i,j}$ or $p_{i,j}^{(1)}$ is not X-shaped.
In this case, either $h_{1}(x,Y_{0}(x))$ has no zeros with modulus
in $(1,x_{3}]$, or it has only one simple zero, say $x^{\ast }$,
with modulus in $(1,x_{3}]$, and $x^{\ast }$ is positive.
\textbf{2.} Both $p_{i,j}$ and $p_{i,j}^{(1)}$ are X-shaped. In
this case, either $h_{1}(x,Y_{0}(x))$ has no zeros with modulus in
$(1,x_{3}]$, or it has exactly two simple zeros, namely,
$x^{\ast}>0$ (with modulus in $(1,x_3]$) and $-x^{\ast}$, both are
zeros of $h_{1}(x,Y_{0}(x))$ or both are zeros of
$a(x)h_{1}(x,Y_{1}(x))$.
\end{theorem}
With this theorem and Theorem~\ref{theorem1.2}, we are able to
apply the Tauberian-like theorem to characterize the tail
asymptotic properties for the boundary probability sequence
$\pi_{n,0}$. To prove the above theorem, we need
several lemmas and two propositions. Instead of directly
considering the function $f_0(x)=h_{1}(x,Y_{0}(x))$, we consider a
polynomial $f(x)$, which is essentially the product of $f_0(x)$
and $f_1(x)=h_{1}(x,Y_{1}(x))$:
\[
f(x)= f_0(x)\widetilde{f_{1}}(x),
\]
where $\widetilde{f_{1}}(x)=a(x)f_{1}(x)$. It is easy to verify,
by noticing
\begin{equation*}
Y_{0}(x)Y_{1}(x)=\frac{c(x)}{a(x)}\; \text{ and } \; Y_{0}(x)+Y_{1}(x)=-\frac{b(x)}{ a(x)},
\end{equation*}
that
\begin{eqnarray}
f(x) &=& a(x)b_{1}^{2}(x)-b(x)b_{1}(x)a_{1}(x)+c(x)a_{1}^{2}(x) \label{lemma2.1} \\
&=&
d_{6}x^{6}+d_{5}x^{5}+d_{4}x^{4}+d_{3}x^{3}+d_{2}x^{2}+d_{1}x+d_{0}.
\end{eqnarray}
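For completeness, the verification is one line, recalling that $h_{1}(x,y)=a_{1}(x)y+b_{1}(x)$ (which is consistent with the relation $M_{y}^{(1)}=a_{1}(1)$ recalled at the beginning of this part of the paper):
\begin{eqnarray*}
f(x) &=& a(x)\big[a_{1}(x)Y_{0}(x)+b_{1}(x)\big]\big[a_{1}(x)Y_{1}(x)+b_{1}(x)\big] \\
 &=& a(x)\Big[a_{1}^{2}(x)Y_{0}(x)Y_{1}(x)+a_{1}(x)b_{1}(x)\big(Y_{0}(x)+Y_{1}(x)\big)+b_{1}^{2}(x)\Big] \\
 &=& c(x)a_{1}^{2}(x)-b(x)a_{1}(x)b_{1}(x)+a(x)b_{1}^{2}(x).
\end{eqnarray*}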
Hence, a zero of $f_{i}(x)$, $i=0,1$, has to be a zero of $f(x)$,
and any zero of $f(x)$ is either a zero of $f_{0}(x)$ or a zero of
$\widetilde{f_{1}}(x)=a(x)f_{1}(x)$. (The coefficients $d_{i}$ in the above expansion of the degree-six polynomial $f$ should not be confused with the coefficients $d_{i}$ of $D_{1}(x)$ introduced earlier.)
We can also write
\begin{equation}
f(x) = a(x)[a_{1}(x)]^{2}R_{-}(x)R_{+}(x), \label{eqn:2.1}
\end{equation}
where
\begin{equation}
R_{\pm}(x)=F(x)\pm \frac{\sqrt{D_{1}(x)}}{2a(x)}
\end{equation}
with
\begin{equation}
F(x)=\frac{b_{1}(x)}{a_{1}(x)}-\frac{b(x)}{2a(x)}.
\end{equation}
\begin{remark} \textbf{1.}
It can be easily seen that both $f_{0}(x)$ and
$\widetilde{f_{1}}(x)$ are analytic on the cut complex plane. In
fact, the analyticity of $f_0(x)$ is obvious and the analyticity
of $\widetilde{f}_1(x)$ is due to the cancellation of the zeros of
$a(x)$ and the pole of $f_1(x)$.
\end{remark}
All proofs for Lemmas~\ref{lemma2.2}--\ref{lemma2.6} and for
Proposition~\ref{theorem2.2} and Proposition~\ref{theorem2.3} are
organized into Appendix~\ref{appendix1}.
\begin{lemma} \label{lemma2.2}
\textbf{1.} \textbf{(a)} $Y_{0}^{\prime}(1)=\frac{M_{x}}{-M_{y}}$ if $M_{y}<0$;
\textbf{(b)} $Y_{1}^{\prime}(1)=\frac{M_{x}}{-M_{y}}$ if $M_{y}>0$; and
\textbf{(c)} $Y_{1}(1)=Y_{0}(1) $ and $x=1$ is a branch point of $Y_{1}(x)$ and $Y_{0}(x)$
if $M_{y}=0$. In this case, $Y_{1}^{\prime}(1)$ and $Y_{0}^{\prime}(1)$ do
not exist. Parallel results hold for functions $X_{k}(y)$.
\textbf{2.} If $M_{y}\neq 0$, then $f(x)$ has at least one
non-unit zero in $[x_{2},x_{3}]$ and 1 is a simple zero of $f(x)$.
Parallel results hold for the case of $M_{x}\neq 0$.
\end{lemma}
\begin{lemma} \label{lemma2.3}
\textbf{1.} Let $z$ be a branch point of $Y_{0}(x)$. If $f(z)=0$,
then $z$ cannot be a repeated root of $f(x)=0$.
\textbf{2.} $f(x)$ (therefore both $f_{0}(x)$ and $\widetilde{f}_{1}(x)$)
has (have) no zeros on the cuts, except possibly at a branch point. More
specifically, $f(x)<0$ if $a(x)<0$ and $f(x)>0$ if $a(x)>0$.
\textbf{3.} $f_{0}(x)$ and $\widetilde{f}_{1}(x)$ have no common
zeros except possibly at a branch point or at zero.
\textbf{4.} Consider the random walk in
Theorem~\ref{theorem2.1}-1. If $f_{0}(x)$ has a zero in
$[-x_{3},-1)$, then $f_{0}(x)$ has an additional (different) zero
in $[-x_{3},-1)$.
\textbf{5.} For the random walk in Theorem~\ref{theorem2.1}-1, if
$|x|\in (1,x_{3}]$, then $|Y_{0}(-|x|)|<Y_{0}(|x|)$.
\end{lemma}
\begin{lemma} \label{lemma2.4}
Consider the random walk in Theorem~\ref{theorem2.1}-1.
If $ M_{y}\leq 0$, then $x=1$ is the only zero of
$f_{0}(x)=h_{1}(x,Y_{0}(x))$ on the unit circle $|x|=1$. If
$M_{y}>0$, then $f_{0}(x)$ has no zero on unit circle $|x|=1$.
\end{lemma}
\begin{remark} \label{remark2.1}
From the proof of Lemma~\ref{lemma2.4}, we can see that for the
random walk considered in Theorem~\ref{theorem2.1}-2, $f_{0}(x)$
has no zeros with non-zero imaginary part on the unit circle.
\end{remark}
The proof of Theorem~\ref{theorem2.1} is based on detailed
properties of the function $f(x)$ and also the powerful continuity
argument to connect an arbitrary random walk to a simpler one. For
using this continuity argument, we consider the following special
random walk.
\textbf{Special Random Walk.} This is the random walk for which $p_{i,j}$ is cross-shaped (or $p_{i,j}=0$
whenever $|ij|=1$), and $p_{-1,1}^{(1)}=p_{-1,0}^{(1)}=0$.
We first prove the counterpart result to Theorem~\ref{theorem2.1} for the Special Random Walk.
\begin{proposition} \label{theorem2.2}
For the Special Random Walk, the following results hold:
\textbf{1.} $f(x)=0$ has six real roots, with exactly one non-unit root in
$[x_{2},x_{3}]$. More specifically, two roots are zero, two are in $[x_{2},x_{3}]$, one is in $(-\infty ,x_{1}]$, and one
is in $[x_{4},\infty )$.
\textbf{2.} If $f_{0}(x)$ has a zero, say $x^{\ast}$, in $(1$,
$x_{3}]$, then $x^{\ast}$ is the only zero of $f_{0}(x)$ with
modulus in $(1$, $x_{3}]$. Furthermore, $f_{0}(x)$ has no other
zeros with modulus greater than $1$ except possibly at $x=x_{4}$.
\end{proposition}
For the random walk considered in Theorem~\ref{theorem2.1}-2, we first prove the following results.
\begin{lemma} \label{lemma2.6}
For the random walk considered in Theorem~\ref{theorem2.1}-2 (or both $p_{i,j}$ and $p^{(1)}_{i,j}$ are X-shaped),
$f(1)=f(-1)=0$, and $f(x)=0$ has two more real roots, say $x_0$ and $-x_0$ with $x_0>0$ and $x_0 \neq 1$, and two complex roots.
\end{lemma}
\begin{proposition} \label{theorem2.3}
For the random walk considered in Theorem~\ref{theorem2.1}-2 (or
both $p_{i,j}$ and $p^{(1)}_{i,j}$ are X-shaped), either the
two complex zeros of $f(x)$ are zeros of
$\widetilde{f_{1}}(x)=a(x)f_{1}(x)$ or they are inside the unit
circle.
\end{proposition}
\underline{\proof of Theorem~\ref{theorem2.1}.}
\textbf{1.}
For the random walk considered here (either $p_{i,j}$ or $p^{(1)}_{i,j}$ is not X-shaped), let
\begin{eqnarray*}
\mathbf{p}&=&(p_{-1,-1}, p_{0,-1}, p_{1,-1}, p_{-1,0},p_{0,0},p_{1,0}, p_{-1,1},p_{0,1},p_{1,1}), \\
\mathbf{p}^{(1)}&=&(p^{(1)}_{-1,0},p^{(1)}_{0,0},p^{(1)}_{1,0}, p^{(1)}_{-1,1},p^{(1)}_{0,1},p^{(1)}_{1,1}).
\end{eqnarray*}
Define
\begin{equation*}
A=\Big \{ \big ( \mathbf{p},\mathbf{p}^{(1)} \big ): 0 \leq p_{i,j}, p^{(1)}_{i,j} \leq 1 \; \mbox{ and } \; \sum_{i,j} p_{i,j} = \sum_{i,j} p^{(1)}_{i,j} =1 \Big \}.
\end{equation*}
For an arbitrary random walk for which either $p_{i,j}$ or
$p^{(1)}_{i,j}$ is not X-shaped, let $\rho$ be the corresponding
point in $A$. We assume that $M_y \leq 0$ for the random walk
$\rho$ (a similar proof applies to the case $M_y>0$).
Let $\rho_{0}$ be an arbitrarily chosen point in $A$ corresponding
to the Special Random Walk. We prove the result by contradiction.
Suppose that the statement were not true. There would then be
three possible cases: (i) $\Im (x^{\ast })\neq 0$; (ii)
$-x_{3}\leq x^{\ast}< -1$; and (iii) there exists $x_{0} \in
(1,x_{3}]$ with $x_{0}\neq x^{\ast}$ such that $f_{0}(x_{0})=0$.
\textbf{Case (i).} Clearly, $\overline{x^{\ast}}$ is also a root
of $f(x)=0$. Choose a simple connected path $\ell$ in $A$ to
connect $\rho$ to $\rho_{0}$ such that on $\ell$ (excluding
$\rho$, but including $\rho_{0}$) $M_{y}<0$. The zeros of $f(x)$,
as functions of the parameters in $A$, are continuous along $\ell$. There
as a function of parameters in $A$ are continues on $\ell$. There
are two possible cases: (a) the zero function $x_0(\theta)$ (with
$x_0(\rho)=x^*$) never passes the unit circle when $\theta$
travels from $\rho$ to $\rho_0$; and (b) $x_0(\theta)$ passes the
unit circle at some point $\theta \in \ell$.
If (a) occurs, let $\theta_0$ be the first point at which
$x_0(\theta)=\overline{x}_0(\theta)$, where
$\overline{x}_0(\theta)$ is the zero function with
$\overline{x}_0(\rho)=\overline{x^*}$. If $\overline{x^*}$ were a
zero of $\widetilde{f}_{1}$, then $f_0$ and $\widetilde{f}_{1}$
would have a common zero $x_0(\theta_0)=\overline{x}_0(\theta_0)$
at $\theta_0$, which contradicts Lemma~\ref{lemma2.3}-3. Hence,
the only possibility is that $\overline{x^*}$ is also a zero of
$f_0$. From $\theta_0$ on, both $x_0(\theta)$ and
$\overline{x}_0(\theta)$ must always be zeros of $f_0$, since
a zero of $f_0$ can switch to a zero of $\widetilde{f}_1$ only at a
branch point, and all branch points are real,
which would mean that $x_0(\theta)=\overline{x}_0(\theta)$ is a branch
point and a multiple root, contradicting
Lemma~\ref{lemma2.3}-1. As $\theta$ approaches $\rho_0$, this
leads to the contradiction that $f_0$ has two zeros with modulus in $(1,x_3]$ for the Special Random Walk, which is impossible by Proposition~\ref{theorem2.2}.
If (b) occurs, we can assume, based on the proof in (a), that when
$x_0(\theta)$ crosses the unit circle it is a zero of $f_0$. Then,
$f_0$ would have two zeros on the unit circle, since 1 is always a zero
of $f_0$, independent of the parameters (or $\theta$) when $M_y<0$, and
it is a different zero from $x_0(\theta)$. This contradicts the fact
that $f_0$ has only one zero on the unit circle (Lemma~\ref{lemma2.4}).
\textbf{Case (ii).} In this case, $f_0(x)$ would have another zero
in $[-x_3,-1)$ at $\rho$ according to Lemma~\ref{lemma2.3}-4.
Consider the same two cases (a) and (b) as in (i). We can then
follow a similar proof to show that case (ii) is impossible.
\textbf{Case (iii).} A similar proof will show that the case is
impossible.
\textbf{2.} This is a direct consequence of Lemma~\ref{lemma2.6}
and Proposition~\ref{theorem2.3}.
\thicklines \framebox(6.6,6.6)[l]{}
The following Lemma gives a necessary and sufficient condition
under which $f_{0}(x)=h_1(x,Y_0(x))$ has a zero in $(1,x_{3}]$.
\begin{lemma} \label{lemma2.7}
Assume $M_{y}\neq 0$. We have the following
results:
\textbf{1.} If $f_{0}(x_{3})\geq 0$, $f_{0}(x)$ has a zero in $(1,x_{3}]$;
\textbf{2.} If $f_{0}(x_{3})<0$, $f_{0}(x)$ has no zeros in $(1,x_{3}]$.
\end{lemma}
\proof
\textbf{1.} There are two cases: $M_{y}>0$ or $M_{y}<0$.
If $M_{y}>0$, then $f_{0}(1)<0$, which leads to the conclusion. If $M_{y}<0$, then
$f_{0}^{\prime}(1)<0$, which also leads to the conclusion since $f_{0}(1)=0$ and
$f_{0}(x_{3})\geq 0$.
\textbf{2.} Again there are two cases: $M_{y}>0$ or $M_{y}<0$. By simple
calculus, in either case, we obtain that if $f_{0}(x)=0$ had a root in $(1,x_{3}]$,
then it would have another root in $(1,x_{3}]$ since $f_{0}(x_{3})<0$. This
contradicts Theorem~\ref{theorem2.1}.
\thicklines \framebox(6.6,6.6)[l]{}
\subsection{Zeros of $h_{2}(X_{0}(y),y)$}
Following the same argument in the previous subsection, we have
the following result:
\begin{theorem} \label{theorem2.1-b}
For a non-singular random walk having genus 1, consider the
following two possible cases:
\textbf{1.} Either $p_{i,j}$ or $p_{i,j}^{(2)}$ is not X-shaped.
In this case, either $h_{2}(X_{0}(y),y)$ has no zeros with modulus
in $(1,y_{3}]$, or it has only one simple zero, say $y^{\ast }$,
with modulus in $(1,y_{3}]$, and $y^{\ast }$ is positive.
\textbf{2.} Both $p_{i,j}$ and $p_{i,j}^{(2)}$ are X-shaped. In
this case, either $h_{2}(X_{0}(y),y)$ has no zeros with modulus in
$(1,y_{3}]$, or it has exactly two simple zeros, namely,
$y^{\ast}>0$ (with modulus in $(1,y_3]$) and $-y^{\ast}$, both are
zeros of $g_{0}(y)$ or both are zeros of $g_{1}(y)$, where
\[
g_0(y)=h_{2}(X_{0}(y),y) \qquad \text{and} \qquad g_1(y)=h_{2}(X_{1}(y),y).
\]
\end{theorem}
From the above analysis, we know that if $h_{1}(x,Y_{0}(x))$ has a
zero in $(1,x_{3}]$, then such a zero is unique. Similarly, if
$h_{2}(X_{0}(y),y)$ has a zero in $(1,y_{3}]$, then such a zero is
unique. For convenience, we make the following convention:
\begin{convention} Let $x^{\ast}$ be the unique
zero in $(1,x_{3}]$ of the function $h_{1}(x,Y_{0}(x))$, if such a
zero exists, otherwise let $x^{\ast}=\infty$. Similarly, let
$y^{\ast}$ be the unique zero in $(1,y_{3}]$ of the function
$h_{2}(X_{0}(y),y)$ if such a zero exists, otherwise let
$y^{\ast}=\infty$.
\end{convention}
According to Theorem~\ref{theorem1.2}, the unique pole in $(1,
x_3]$ of $\pi_1(x)$ either equals $x^{\ast}$, or its image
under $Y_0$ is a zero of $h_2(X_0(y),y)$. Our focus in this
subsection is on the latter case, which involves $y^*$.
\begin{theorem} \label{theorem-h2}
If the pole in $(1, x_3]$ of $\pi_1(x)$ is not $x^{\ast}$, then
this pole, denoted by $\widetilde{x}_1$, satisfies:
\textbf{1.} $\widetilde{x}_1=X_1(y^*)$, where $y^*$ is the unique
zero in $(1,y_{3}]$ of the function $h_{2}(X_{0}(y),y)$;
\textbf{2.} $\widetilde{x}_1$ is the only pole of $\pi_1(x)$ with
modulus in $(1,x_3]$, except for the case where both $p_{i,j}$ and
$p_{i,j}^{(2)}$ are X-shaped, for which $-\widetilde{x}_1$ is the
other pole of $\pi_1(x)$ with modulus in $(1,x_3]$.
\end{theorem}
\proof \textbf{1.} Let $\widetilde{x}$ be the solution of $y^*=Y_{0}(x)$.
Then,
$\widetilde{x}=\widetilde{x}_{0}\stackrel{\triangle}{=}X_{0}(y^*)$
or
$\widetilde{x}=\widetilde{x}_{1}\stackrel{\triangle}{=}X_{1}(y^*)$.
If $y^*\in G_\mathscr{L}$, then $\widetilde{x}=\widetilde{x}_{0}$
so that $y^*=Y_{0}(X_{0}(y^*))$. In this case, by
Lemma~\ref{lemma1.5}, $\widetilde{x}_{0}<1$. If $y^*\in
G_\mathscr{L}^{c}$, then $\widetilde{x}=\widetilde{x}_{1}$ so that
$y^*=Y_{0}(X_{1}(y^*))$ and $\widetilde{x}_{1}\in G_{\mathscr{M}}^{c}$.
\textbf{2.} It follows from the fact that the zero, $y^{\ast}$, of
$h_{2}(X_{0}(y),y)$ in $(1,y_{3}]$ is unique and the fact that
$y^{\ast}=Y_{0}(x)$ has only two possible solutions
$\widetilde{x}_{0}<1$ and $\widetilde{x}_{1}$. In the case where
both $p_{i,j}$ and $p_{i,j}^{(2)}$ are X-shaped, $-y^{\ast }$ is
the other zero of $h_{2}(X_{0}(y),y)$ with either $-y^{\ast
}=Y_{0}(-\widetilde{x}_{1})$ or $-y^{\ast
}=Y_{0}(-\widetilde{x}_{0})$.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{corollary} Let $\widetilde{x}$ be a solution of $y^*=Y_{0}(x)$.
In order for $\widetilde{x}$ to be in $(1,x_3]$ we need $y^* \in
G_\mathscr{L}^{c}$. Furthermore, we have
$y^*<y_{3}$.
\end{corollary}
\proof
The first conclusion follows directly from the proof of Theorem~\ref{theorem-h2}, and the second one
follows from the fact that, by Lemma~\ref{lemma1.4}-1 and
Lemma~\ref{lemma1.4}-2(b), there exists no $x \in (1, x_3]$ such that
$y^*=y_3=Y_0(x)$. Therefore, we should have
$y^*<y_{3}$.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{convention} Let $\widetilde{x}_1=X_1(y^*)$ if the unique zero $y^*$ in
$(1,y_{3}]$ of the function $h_{2}(X_{0}(y),y)$ exists, otherwise
let $\widetilde{x}_1=\infty$.
\end{convention}
\subsection{Asymptotic behaviour of $\pi_1(x)$ and $\pi_2(y)$}
\label{section3}
In this subsection, we provide the asymptotic behaviour of the two unknown functions $\pi_1(x)$ and $\pi_2(y)$.
We only provide details for $\pi_1(x)$, since the behaviour of $\pi_2(y)$ can be characterized in the same fashion.
It follows from the discussion so far that:
\begin{description}
\item[(1)] If $p_{i,j}$ is not X-shaped, then, independent of the
properties of $p_{i,j}^{(1)}$ and $p_{i,j}^{(2)}$, there is only
one dominant singularity, which is the smallest one of $x^*$,
$\widetilde{x}_1$ and $x_3$. Here $x^*$, $\widetilde{x}_1$ and
$x_3$ are not necessarily all different.
\item[(2)] If $p_{i,j}$ is X-shaped, then both $x_3$ and $-x_3$
are branch points.
\begin{description}
\item[(a)] If $p_{i,j}^{(1)}$ is not X-shaped, then $h_1(x,Y_0(x))$
has either no zero or one zero $x^*$ in $(1,x_3]$; and if
$p_{i,j}^{(1)}$ is X-shaped, then $h_1(x,Y_0(x))$ has either no
zero or two zeros $x^* \in (1,x_3]$ and $-x^*$.
\item[(b)] Similar to (a), $h_2(X_0(y),y)$ has either no zero in
$(1,y_3]$ or one zero $y^*$ in it. For the latter, if $p_{i,j}^{(2)}$ is not X-shaped, then
$\widetilde{x}_1=X_1(y^*)$ is the only pole of $\pi_2(Y_0(x))$
with modulus in $(1,x_3]$; and if $p_{i,j}^{(2)}$ is X-shaped,
then $\widetilde{x}_1=X_1(y^*) \in (1,x_3]$ and
$-\widetilde{x}_1=X_1(-y^*)$ are the only two poles of
$\pi_2(Y_0(x))$ with modulus in $(1,x_3]$.
\end{description}
Therefore, in case (2), we either have only one dominant
singularity or exactly two dominant singularities depending on
which of $x^*$, $\widetilde{x}_1$ and $x_3$ is smallest and the
property of $p_{i,j}^{(k)}$, $k=1,2$.
\end{description}
The theorem in this subsection provides detailed asymptotic
properties at a dominant singularity for all possible cases. Let $x_{dom}$ be a dominant singularity of $\pi_1(x)$. Clearly,
$|x_{dom}|=x^{\ast}$, $|x_{dom}|=\widetilde{x}_{1}$ or
$|x_{dom}|=x_{3}$.
To
state this theorem for the cases
where $x_{dom}=\pm x_{3}$, notice that through simple calculation we can
write
\begin{equation}
\label{eqn:3.1}
h_{1}(x,Y_{0}(x))=p_{1}(x)+q_{1}(x)\sqrt{1-\frac{x}{x_{dom}}},
\end{equation}
\begin{equation} \label{eqn:3.2}
Y_{0}(x)=p(x)+q(x)\sqrt{1-\frac{x}{x_{dom}}},
\end{equation}
\begin{equation} \label{eqn:3.3}
Y_{0}(x_{dom})-Y_{0}(x) = \left(1-\frac{x}{x_{dom}}\right)p^{\ast}(x)-q(x)\sqrt{1-\frac{x}{x_{dom}}},
\end{equation}
\begin{equation} \label{eqn:3.4}
h_{1}(x,Y_{0}(x))-h_{1}(x_{dom},Y_{0}(x_{dom})) = \left( 1-\frac{x}{x_{dom}}
\right) p_{1}^{\ast }(x)+q_{1}(x)\sqrt{1-\frac{x}{x_{dom}}},
\end{equation}
where
\begin{align*}
p(x) &=\frac{-b(x)}{2a(x)}, \quad p^{\ast }(x)=\frac{\frac{b(x)}{2a(x)}-
\frac{b(x_{dom})}{2a(x_{dom})}}{\frac{1}{x_{dom}}(x-x_{dom})}, \quad
p_{1}(x)=\frac{-b(x)a_{1}(x)}{2a(x)}+b_{1}(x), \\
p_{1}^{\ast }(x) &= x_{dom}\left( \frac{a_{1}(x)-a_{1}(x_{dom})+b_{1}(x)-b_{1}(x_{dom})}{x_{dom}-x}\right),
\end{align*}
\[
q(x)=\left\{\begin{array}{ll}
-\frac{1}{2a(x)}\sqrt{\frac{D_{1}(x)}{1-\frac{x}{x_{dom}}}}, & \text{ if } x_{dom}=x_{3}, \\
\frac{1}{2a(x)}\sqrt{\frac{D_{1}(x)}{1-\frac{x}{x_{dom}}}}, & \text{ if } x_{dom}=-x_{3},
\end{array}
\right.
\]
and $q_{1}(x)=a_{1}(x)q(x)$.
Define
\begin{eqnarray*}
L(x) &=&\frac{[h_{2}(x,Y_{0}(x))\pi_2(Y_{0}(x))+h_{0}(x,Y_{0}(x))\pi_{0,0}]h_{1}(x,Y_{1}(x))a(x)}{xf^{\prime}(x)}, \\
\widetilde{L}(y)
&=&\frac{[h_{1}(X_{0}(y),y)\pi_1(X_{0}(y))+h_{0}(X_{0}(y),y)\pi_{0,0}]h_{2}(X_{1}(y),y)\widetilde{a}(y)}{yg^{\prime}(y)},
\end{eqnarray*}
where $f(x)=a(x)h_{1}(x,Y_{0}(x))h_{1}(x,Y_{1}(x))$ is a
polynomial defined in Section~\ref{section2} and
$g(y)=\widetilde{a}(y)h_{2}(X_{1}(y),y)h_{2}(X_{0}(y),y)$ is
the counterpart polynomial for function $h_2$.
The following theorem gives the behaviour of $\pi_1(x)$ at
$x_{dom}$. Throughout the theorem, $\widetilde{y}_0$ denotes $Y_{0}(x_{dom})$, as specified in each case.
\begin{theorem} \label{theorem3.1}
Assume that both
$h_{2}(x^{\ast},Y_0(x^{\ast}))\pi_2(Y_0(x^{\ast}))+h_{0}(x^{\ast},Y_0(x^{\ast}))\pi_{0,0}
\neq 0$ and $h_{1}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)$
$\pi_1(X_{0}(\widetilde{y}_0))+h_{0}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)\pi_{0,0}
\neq 0$. For the function $\pi_1(x)$, a total of four types of
asymptotics exist as $x$ approaches a dominant singularity of
$\pi_1(x)$, depending on the detailed properties of the dominant
singularity.
\textbf{Case 1}: If $|x_{dom}|=x^{\ast}<\min
\{\widetilde{x}_{1},x_{3}\}$, or $|x_{dom}|=\widetilde{x}_{1} <
\min\{x^{\ast},x_{3}\}$, or
$|x_{dom}|=x^{\ast}=\widetilde{x}_{1}=x_{3}$, then
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right) \pi_1(x)=c_{0,1}(x_{dom}),
\end{equation*}
where
\[
c_{0,1}(x_{dom}) = \left \{ \begin{array}{ll}
L(x_{dom}), & \text{if } x^{\ast}<\min \{\widetilde{x}_{1},x_{3}\}; \\
\displaystyle
\frac{-h_{2}(x_{dom},\widetilde{y}_0)\widetilde{y}_0\widetilde{L}(\widetilde{y}_0)}{h_{1}(x_{dom},\widetilde{y}_0)Y_{0}^{\prime}(x_{dom})x_{dom}},
&
\text{if }\widetilde{x}_{1}<\min \{x^{\ast},x_{3}\}; \\
\displaystyle
\frac{h_{2}(x_{dom},\widetilde{y}_0)\widetilde{L}(\widetilde{y}_0)\widetilde{y}_0}{q_{1}(x_{dom})q(x_{dom})},
& \text{if } x^{\ast}=\widetilde{x}_{1}=x_{3}, \end{array} \right.
\]
with $\widetilde{y}_0=Y_{0}(x_{dom})$.
\textbf{Case 2}: If $|x_{dom}|=x^{\ast}=x_{3}<\widetilde{x}_{1}$
or $|x_{dom}|=\widetilde{x}_{1}=x_{3}<x^{\ast}$, then
\begin{equation*}
\lim_{\frac{x}{x_{dom}}\rightarrow 1}\sqrt{1-x/x_{dom}}\pi_1(x)=c_{0,2}(x_{dom}),
\end{equation*}
where
\[
c_{0,2}(x_{dom}) = \left \{ \begin{array}{ll}
\displaystyle
\frac{h_{2}(x_{dom},\widetilde{y}_0)\pi_2(\widetilde{y}_0)+h_{0}(x_{dom},\widetilde{y}_0)\pi_{0,0}}{-q_{1}(x_{dom})}, & \text{if }x^{\ast}=x_{3}<\widetilde{x}_{1}; \\
\displaystyle
\frac{h_{2}(x_{dom},\widetilde{y}_0)\widetilde{y}_0\widetilde{L}(\widetilde{y}_0)}{h_{1}(x_{dom},\widetilde{y}_0)q(x_{dom})}, & \text{if }
\widetilde{x}_{1}=x_{3}<x^{\ast}, \end{array} \right.
\]
with $\widetilde{y}_0=Y_{0}(x_{dom})$.
\textbf{Case 3:} If $|x_{dom}|=x_{3}<\min
\{\widetilde{x}_{1},x^{\ast}\}$, then
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\sqrt{1-x/x_{dom}}\pi_1^{\prime}(x)=c_{0,3}(x_{dom}),
\end{equation*}
where
\begin{equation*}
c_{0,3}(x_{dom})=-\frac{q(x_{dom})x_{dom}^{2}}{2}\frac{d}{dy}\left [
\frac{h_{2}(x_{dom},y)\pi_2(y)+h_{0}(x_{dom},y)\pi_{0,0}}{-h_{1}(x_{dom},y)}
\right ] \bigg |_{y=Y_{0}(x_{dom})}.
\end{equation*}
\textbf{Case 4:} If $|x_{dom}|=x^{\ast}=\widetilde{x}_{1}<x_{3}$,
then
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right)
^{2}\pi_1(x)=c_{0,4}(x_{dom}),
\end{equation*}
where
\begin{equation*}
c_{0,4}(x_{dom})=\frac{h_{2}(x_{dom},\widetilde{y}_0)\left[h_{1}(\widetilde{x}_0,\widetilde{y}_0)
\pi_1(\widetilde{x}_0)+h_{0}(\widetilde{x}_0,\widetilde{y}_0)\pi_{0,0}\right]}{(x^{\ast})^{2}h_{1}^{\prime}(x_{dom},\widetilde{y}_0)
Y_{0}^{\prime}(x_{dom})h_{2}^{\prime}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)},
\end{equation*}
with $\widetilde{y}_0=Y_{0}(x_{dom})$ and
$\widetilde{x}_0=X_{0}(\widetilde{y}_0)$.
\end{theorem}
\proof
\textbf{Case~1.} If $x^{\ast}<\widetilde{x}_{1}$, then $x_{dom}$
is not a pole of $\pi_2(Y_{0}(x))$. According to
Theorem~\ref{theorem2.1}, $x_{dom}$ is a simple pole of
$\pi_1(x)$. From equation (\ref{eqn:1.9}) in
Theorem~\ref{theorem1.1} and Lemmas~\ref{lemma2.2} and
\ref{lemma2.3}, we have
\begin{eqnarray*}
\pi_1(x) &=&\frac{-h_{2}(x,Y_0(x))\pi_2(Y_0(x))-h_{0}(x,Y_0(x))\pi_{0,0}}{h_{1}(x,Y_0(x))} \\
&=&\frac{-[h_{2}(x,Y_0(x))\pi_2(Y_0(x))+h_{0}(x,Y_0(x))\pi_{0,0}]h_{1}(x,Y_{1}(x))a(x)}{f(x)} \\
&=&\frac{-[h_{2}(x,Y_0(x))\pi_2(Y_0(x))+h_{0}(x,Y_0(x))\pi_{0,0}]h_{1}(x,Y_{1}(x))a(x)}{(x-x_{dom})f^{\ast}(x)},
\end{eqnarray*}
where $f^{\ast}(x_{dom})=f^{\prime}(x_{dom})\neq 0$. It follows
that
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right)\pi_1(x)=L(x_{dom}).
\end{equation*}
Similarly, if $\widetilde{x}_{1}<x^{\ast}$, following the same
argument used in the above, we have
$\lim_{y\rightarrow \widetilde{y}_0}\left( 1-\frac{y}{\widetilde{y}_0} \right)
\pi_2(y)=\widetilde{L}(\widetilde{y}_0)$ and
\begin{eqnarray*}
&&\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right) \pi_1(x) \\
&=&\lim_{x\rightarrow x_{dom}}\frac{-h_{2}(x,Y_0(x))
\left(1-\frac{Y_0(x)}{ \widetilde{y}_0}\right) \pi_2(Y_0(x))-\left(1-\frac{Y_0(x)}{\widetilde{y}_0}\right)
h_{0}(x,Y_0(x))\pi_{0,0}}{\frac{1-\frac{Y_0(x)}{\widetilde{y}_0}}{1-\frac{x}{x_{dom}}}h_{1}(x,Y_0(x))} \\
&=&\frac{-h_{2}(x_{dom},\widetilde{y}_0)\widetilde{L}(\widetilde{y}_0)\widetilde{y}_0}{h_{1}(x_{dom},\widetilde{y}_0)Y_0^{\prime}(x_{dom})x_{dom}}.
\end{eqnarray*}
In the case of $x^{\ast}=\widetilde{x}_{1}=x_{3}=|x_{dom}|$, we
first have $\lim_{x\rightarrow x_{dom}} \left(
1-\frac{Y_0(x)}{\widetilde{y}_0} \right)
\pi_2(Y_{0}(x))=\widetilde{L}(\widetilde{y}_0)$. Then, using
equations (\ref{eqn:3.3}), (\ref{eqn:3.4}) and the expression for
$h_{1}(x_{3},\widetilde{y}_0)$, we obtain
\begin{eqnarray*}
&&\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right) \pi_1(x) \\
&=&\lim_{x\rightarrow x_{dom}}\frac{-h_{2}(x,Y_0(x))
\frac{\sqrt{1-\frac{x}{x_{dom}}}}{1-\frac{Y_0(x)}{\widetilde{y}_0}}
\left[\left( 1-\frac{Y_0(x)}{\widetilde{y}_0} \right)
\pi_2(Y_0(x))\right] -\sqrt{1-\frac{x}{x_{dom}}}
h_{0}(x,Y_0(x))\pi_{0,0}}{h_{1}(x,Y_0(x))/\sqrt{1-\frac{x}{x_{dom}}}} \\
&=&\frac{\widetilde{L}(\widetilde{y}_0)h_{2}(x_{dom},\widetilde{y}_0)\widetilde{y}_0}{q_{1}(x_{dom})q(x_{dom})}.
\end{eqnarray*}
\textbf{Case 2.} If $x^{\ast}=x_{3}$, then
$h_{1}(x_{dom},\widetilde{y}_0)=p_{1}(x_{dom})=0$, using equations
(\ref{eqn:1.9}), (\ref{eqn:3.1}), (\ref{eqn:3.3}) and
(\ref{eqn:3.4}), we can rewrite $\pi_1(x)$ as
\[
\pi_1(x) =\frac{-h_{2}(x,Y_{0}(x))\pi_2(Y_{0}(x))-h_{0}(x,Y_{0}(x))\pi_{0,0}}{\sqrt{1-x/x_{dom}}\left[\sqrt{1-x/x_{dom}}p_{1}^{\ast}(x)+q_{1}(x) \right] }.
\]
It follows that
\begin{equation*}
\lim_{x \to x_{dom}}\sqrt{1-x/x_{dom}}\pi_1(x)=\lim_{x \to x_{dom}}
\frac{-h_{2}(x,Y_{0}(x))
\pi_2(Y_{0}(x))-h_{0}(x,Y_{0}(x))\pi_{0,0}}{\left[
\sqrt{1-x/x_{dom}}p_{1}^{\ast}(x)+q_{1}(x)\right] }=c_{0,2}(x_{dom}).
\end{equation*}
Note that $q_{1}(x_{dom})\neq 0$.
Similarly, if $\widetilde{x}_{1}=x_{3}$, then $\widetilde{y}_0$ is
a pole of $\pi_2(y)$, which gives $\lim_{y\rightarrow
\widetilde{y}_0}\left( 1-\frac{y}{\widetilde{y}_0}\right)
\pi_2(y)=\widetilde{L}(\widetilde{y}_0)$. Again, using
equations (\ref{eqn:1.9}), (\ref{eqn:3.1}), (\ref{eqn:3.3}) and
(\ref{eqn:3.4}), we obtain
\begin{eqnarray*}
&&\lim_{x\rightarrow x_{dom}}\sqrt{1-\frac{x}{x_{dom}}}\pi_1(x) \\
&=&\lim_{x\rightarrow
x_{dom}}\frac{-h_{2}(x,Y_0(x))\frac{\sqrt{1-\frac{x}{
x_{dom}}}}{1-\frac{Y_0(x)}{\widetilde{y}_0}}\left(
1-\frac{Y_0(x)}{\widetilde{y}_0}\right)
\pi_2(Y_0(x))-\sqrt{1-\frac{x}{x_{dom}}}h_{0}(x,Y_0(x))\pi_{0,0}}{
h_{1}(x,Y_{0}(x))} \\
&=&-\frac{h_{2}(x_{dom},\widetilde{y}_0)\widetilde{L}(\widetilde{y}_0)}{
h_{1}(x_{dom},\widetilde{y}_0)}\lim_{x\rightarrow
x_{dom}}\frac{\widetilde{y}_0\sqrt{1-
\frac{x}{x_{dom}}}}{(1-x/x_{dom})p^{\ast}(x)-q(x)\sqrt{1-x/x_{dom}}} \\
&=&\frac{h_{2}(x_{dom},\widetilde{y}_0)\widetilde{L}(\widetilde{y}_0)\widetilde{y}_0}{
h_{1}(x_{dom},\widetilde{y}_0)q(x_{dom})}.
\end{eqnarray*}
\textbf{Case 3.} Let
\begin{equation*}
T(x,y)=\frac{h_{2}(x,y)\pi_2(y)+h_{0}(x,y)\pi_{0,0}}{-h_{1}(x,y)}.
\end{equation*}
Then,
\begin{equation*}
\pi_1^{\prime}(x)=\frac{\partial T}{\partial x}+\frac{\partial T}{\partial y} \frac{dY_{0}(x)}{dx}
\end{equation*}
with
\begin{equation*}
\frac{dY_{0}(x)}{dx}=p^{\prime}(x)+q^{\prime}(x)\sqrt{1-x/x_{dom}}-\frac{q(x)}{2x_{dom}\sqrt{1-x/x_{dom}}},
\end{equation*}
\begin{equation*}
\frac{\partial T}{\partial x}=\frac{\widetilde{a}_{2}(y)\pi_2(y)
+\widetilde{a}_{0}(y)+[a_{1}^{\prime}(x)y+b_{1}^{\prime}(x)]T(x,y)}{-h_{1}(x,y)}
\end{equation*}
and
\begin{equation*}
\frac{\partial T}{\partial y}=\frac{\frac{\partial h_{2}(x,y)}{\partial y}
\pi_2(y)+h_{2}(x,y)\pi_2^{\prime}(y)+\frac{\partial
h_{0}(x,y)}{\partial y}\pi_{0,0}+\frac{\partial
h_{1}(x,y)}{\partial y}T(x,y) }{-h_{1}(x,y)},
\end{equation*}
where $p(x)$ and $q(x)$ are defined by equation (\ref{eqn:3.2}).
Since $\lim_{x\rightarrow x_{dom}}\sqrt{1-x/x_{dom}}\frac{\partial
T}{\partial x}=0$, $\lim_{x\rightarrow x_{dom}}
\sqrt{1-x/x_{dom}}\frac{dY_{0}(x)}{dx}=-\frac{q(x_{dom})}{2x_{dom}}$
and
$\frac{\partial T}{\partial y}$ is continuous at
$(x_{dom},Y_0(x_{dom}))$,
\begin{eqnarray}
\lim_{x\rightarrow x_{dom}}\sqrt{1-x/x_{dom}}\pi_1^{\prime}(x)
&=&-\frac{q(x_{dom})}{2x_{dom}}\frac{\partial T}{\partial y}|_{(x_{3},\widetilde{y}_0)} \\
&=&-\frac{q(x_{dom})}{2x_{dom}}\frac{dT(x_{dom},y)}{dy}|_{y=\widetilde{y}_0}=c_{0,3}(x_{dom}).
\end{eqnarray}
It is easy to see $c_{0,3}(x_{dom}) \neq 0$, since otherwise
$\pi_1^{\prime}(x_{dom})<\infty $, which contradicts the fact that
$x_{3}$ is a branch point of $\pi_1(x)$.
\textbf{Case 4.} From equation (\ref{eqn:1.9}) and
(\ref{eqn:1.11}) in Theorem~\ref{theorem1.1}, we have
\[
\pi_1(x) =
\frac{h_{2}(x,Y_0(x))h_{1}(X_{0}(Y_0(x)),Y_0(x))\pi_1(X_{0}(Y_0(x)))
+ N(x)} {h_{1}(x,Y_0(x))h_{2}(X_{0}(Y_0(x)),Y_0(x))},
\]
where
\[
N(x)=[h_{2}(x,Y_0(x))h_{0}(X_{0}(Y_0(x)),Y_0(x))-h_{2}(X_{0}(Y_0(x)),Y_0(x))h_{0}(x,Y_0(x))]\pi_{0,0}.
\]
Since
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\frac{h_{1}(x,Y_0(x))}{x-x_{dom}} =\lim_{x\rightarrow x_{dom}}\frac{
h_{1}(x,Y_{0}(x))-h_{1}(x_{dom},Y_{0}(x_{dom}))}{x-x_{dom}}=h_{1}^{\prime}(x_{dom},\widetilde{y}_0)
\end{equation*}
and
\begin{eqnarray*}
\lim_{x\rightarrow x_{dom}}\frac{h_{2}(X_{0}(Y_0(x)),Y_0(x))}{x-x_{dom}}
&=&\lim_{x\rightarrow x_{dom}}\frac{h_{2}(X_{0}(Y_0(x)),Y_0(x))-h_{2}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)}{x-x_{dom}} \\
&=&Y_{0}^{\prime}(x_{dom})h_{2}^{\prime}(X_{0}(\widetilde{y}_0),\widetilde{y}_0),
\end{eqnarray*}
we obtain
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\frac{\left( 1-\frac{x}{x_{dom}}\right )^{2}}{h_{1}(x,Y_0(x))h_{2}(X_{0}(Y_0(x)),Y_0(x))}
=\frac{1}{x_{dom}^{2}h_{1}^{\prime}(x_{dom},\widetilde{y}_0)Y_{0}^{\prime}(x_{dom})h_{2}^{\prime}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)},
\end{equation*}
which yields
\begin{equation*}
\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right )^{2}\pi_1(x)=c_{0,4}(x_{dom}).
\end{equation*}
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark}
It should be noted that the above theorem provides the asymptotic
behaviour at a dominant singularity, either positive or negative.
\end{remark}
\begin{corollary} \label{corollary3.1}
If
$h_{2}(x^{\ast},Y_0(x^{\ast}))\pi_2(Y_0(x^{\ast}))+h_{0}(x^{\ast},Y_0(x^{\ast}))\pi_{0,0}=
0$ or
$h_{1}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)\pi_1(X_{0}(\widetilde{y}_0))+h_{0}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)\pi_{0,0} = 0$, then the function $\pi_1(x)$,
as $x$ approaches to its dominant singularity, has one of the
three types of asymptotic properties shown in Case~1 to Case~3 of
Theorem~\ref{theorem3.1}.
\end{corollary}
\proof First suppose that $h_{2}(x^{\ast },Y_{0}(x^{\ast
}))\pi_{2}(Y_{0}(x^{\ast }))+h_{0}(x^{\ast },Y_{0}(x^{\ast
}))\pi_{0,0}=0$, but
$h_{1}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi_{1}(X_{0}(\widetilde{y}_{0}))
+ h_{0}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi_{0,0} \neq
0$. We then have the following four cases:
\textbf{1.} If $x^{\ast} < \widetilde{x}_{1}<x_{3}$, then
$\widetilde{x}_{1}$ is a pole and the dominant singular point of
$\pi_1(x)$ since $x^{\ast}$ is a removable singular point of
$\pi_1(x)$, which leads to $\lim_{x\rightarrow \widetilde{x}_{1}
}\left( 1-\frac{x}{\widetilde{x}_{1}}\right) \pi_1(x)=C$, the same
type in Case~1.
\textbf{2.} If $x^{\ast}<\widetilde{x}_{1}=x_3$, the same type of
asymptotic result as in Case~2 can be obtained.
\textbf{3.} If $x^{\ast }=x_{3}<\widetilde{x}_{1}$, then the
factor $\sqrt{1-x/x_{dom}}$ is cancelled out from both the
denominator and the numerator in the expression for $\pi _{1}(x)$.
By considering $\pi _{1}^{\prime }(x)$, we obtain the same type of
asymptotic result as that given in Case~3.
\textbf{4.} If $x^{\ast }=\widetilde{x}_{1}$, then $x^{\ast }$
would be a pole of $\pi _{2}(Y_{0}(x))$. This would imply
$\pi_{2}(Y_{0}(x^{\ast }))=\infty$, which contradicts
$h_{2}(x^{\ast },Y_{0}(x^{\ast }))
\pi_{2}(Y_{0}(x^{\ast}))+h_{0}(x^{\ast },Y_{0}(x^{\ast }))
\pi_{0,0}=0$ since $h_{2}(X_{0}(Y_{0}(x^{\ast })),Y_{0}(x^{\ast
}))=0$ implies $h_{2}(x^{\ast },Y_{0}(x^{\ast }))\neq 0$. Hence
this case is impossible.
Next, assume that both $h_{2}(x^{\ast },Y_{0}(x^{\ast }))
\pi_{2}(Y_{0}(x^{\ast})) + h_{0}(x^{\ast },Y_{0}(x^{\ast
}))\pi_{0,0}=0$ and
$h_{1}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi_{1}(X_{0}(\widetilde{y}_{0}))
+ h_{0}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi _{0,0}=0$.
We then have the following two cases:
\textbf{1.} If $\max \{\widetilde{x}_{1}$, $x^{\ast }\}<x_{3}$,
then both $\widetilde{x}_{1}$ and $x^{\ast }$ are removable singularities
of $\pi _{1}(x)$. By considering $\pi_{1}^{\prime }(x)$, we obtain
the same type of asymptotic result as that given in Case~3.
\textbf{2.} If $\max \{\widetilde{x}_{1}$, $x^{\ast }\}=x_{3}$,
then the factor $\sqrt{1-x/x_{dom}}$ is cancelled out from both
the denominator and the numerator in the expression for
$\pi_{2}(Y_{0}(x))$ if $\widetilde{x}_{1}=x_{3}$ and for
$\pi_{1}(x)$ if $x^{\ast }=x_{3}$. By considering
$\pi_{1}^{\prime}(x)$, we obtain the same type of asymptotic
result as that given in Case~3.
Finally, the case in which
$h_{1}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi_{1}(X_{0}(\widetilde{y}_{0}))
+ h_{0}(X_{0}(\widetilde{y}_{0}),\widetilde{y}_{0})\pi _{0,0}=0$,
but $h_{2}(x^{\ast },Y_{0}(x^{\ast })) \pi_{2}(Y_{0}(x^{\ast}))$
$+ h_{0}(x^{\ast },Y_{0}(x^{\ast }))\pi_{0,0} \neq 0$ can be
similarly considered.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{remark}
We believe that both
$h_{2}(x^{\ast},Y_0(x^{\ast}))\pi_2(Y_0(x^{\ast}))+h_{0}(x^{\ast},Y_0(x^{\ast}))\pi_{0,0}
\neq 0$ and $h_{1}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)$
$\pi_1(X_{0}(\widetilde{y}_0))+h_{0}(X_{0}(\widetilde{y}_0),\widetilde{y}_0)\pi_{0,0}
\neq 0$ always hold, though at this moment we could not find a
proof. However, no new type of asymptotic property will appear
without this condition as shown in Corollary~\ref{corollary3.1}.
In the rest of the paper, the analysis is carried out under
this condition; the results remain valid without it.
\end{remark}
\begin{remark} \label{remark3.2}
When $x_{dom}=|x_{3}|<\min \{x^{\ast },\widetilde{x}_{1}\}$, the
numerator in the expression for $\pi_{1}(x)$ is not zero at
$x_{3}$.
\end{remark}
\section{Tail Asymptotics of Boundary Probabilities $\pi_{n,0}$ and $\pi_{0,n}$}
Since $\pi_1(x)$ and $\pi_2(y)$ are symmetric, properties for
$\pi_1(x)$ can be easily translated to the counterpart properties
for $\pi_2(y)$. Therefore, tail asymptotics for the boundary
probabilities $\pi_{0,n}$ can be directly obtained by symmetry.
The exact tail
asymptotics of the boundary probabilities $\pi_{n,0}$ is a direct
consequence of Theorem~\ref{theorem3.1} and a Tauberian-like theorem applied to the function $\pi_1(x)$.
Specifically, if $\pi_1(x)$ has only one dominant singularity, then Theorem~\ref{tauberian-1}
is applied; and if $\pi_1(x)$ has two dominant singularities, then Theorem~\ref{tauberian-2}
is applied.
The following theorem shows that there are four types of exact
tail asymptotics, for large $n$, together with a possible periodic
property if $\pi_1(x)$ has two dominant singularities that have
the same asymptotic property.
In the theorem, let $x_{dom}$ be the positive dominant singularity of $\pi_1(x)$.
Consider the following four cases regarding which of $x^*$, $\widetilde{x}_1$ and $x_3$ will
be $x_{dom}$:
\begin{description}
\item[Case 1.] $x_{dom}=\min\{x^{\ast}, \widetilde{x}_{1}\}<x_{3}$
with $x^{\ast} \neq \widetilde{x}_{1}$, or
$x_{dom}=\widetilde{x}_{1}=x^{\ast}=x_{3}$;
\item[Case 2.] $x_{dom}=x_{3}=\min\{x^{\ast},\widetilde{x}_{1}\}$ with $x^{\ast}\neq
\widetilde{x}_{1}$;
\item[Case 3.] $x_{3}=x_{dom}<\min\{x^{\ast},\widetilde{x}_{1}\}$;
\item[Case 4.] $x_{dom}=x^{\ast}=\widetilde{x}_{1}<x_{3}$.
\end{description}
\begin{theorem}
\label{theorem4.1-a} Consider the stable non-singular genus 1
random walk. Corresponding to the above four cases, we have the
following tail asymptotic properties for the boundary
probabilities $\pi_{n,0}$ for large $n$. In all cases,
$c_{0,i}(x_{dom})$ ($1\leq i\leq 4$) are given in
Theorem~\ref{theorem3.1}.
\begin{description}
\item[1.] If $p_{i,j}$ is not X-shaped, then there are four types
of exact tail asymptotics:
\textbf{Case 1:} (Exact geometric decay)
\begin{equation} \label{eqn:exact}
\pi_{n,0} \sim c_{0,1}(x_{dom})\left( \frac{1}{x_{dom}}\right)^{n-1};
\end{equation}
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$)
\begin{equation}
\pi_{n,0} \sim \frac{c_{0,2}(x_{dom})}{\sqrt{\pi}}n^{-1/2}\left(\frac{1}{x_{dom}}\right)^{n-1}; \label{eqn:1/2}
\end{equation}
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$)
\begin{equation} \label{eqn:3/2}
\pi_{n,0} \sim \frac{c_{0,3}(x_{dom})}{\sqrt{\pi}}n^{-3/2}\left(\frac{1}{x_{dom}}\right)^{n-1};
\end{equation}
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
\begin{equation}
\pi_{n,0} \sim c_{0,4}(x_{dom})n\left( \frac{1}{x_{dom}}\right)^{n-1}; \label{eqn:n}
\end{equation}
\item[2.] If $p_{i,j}$ is X-shaped,
but both $p^{(1)}_{i,j}$ and $p^{(2)}_{i,j}$ are not X-shaped, we then have the following
exact tail asymptotic properties:
\textbf{Case 1:} (Exact geometric decay) It is given by (\ref{eqn:exact});
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$) It is given by (\ref{eqn:1/2});
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$)
\begin{equation} \label{eqn:p-3/2}
\pi_{n,0} \sim \frac{\left[c_{0,3}(x_{dom})+(-1)^{n-1}c_{0,3}(-x_{dom})
\right] }{\sqrt{\pi}}n^{-3/2}\left( \frac{1}{x_{dom}}\right)^{n-1};
\end{equation}
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
It is given by (\ref{eqn:n});
\item[3.] If $p_{i,j}$ and $p^{(1)}_{i,j}$ are X-shaped, but
$p^{(2)}_{i,j}$ is not, we then have the following
exact tail asymptotic properties:
\textbf{Case 1:} (Exact geometric decay) When $x^{\ast} \geq
\widetilde{x}_{1}$, it is given by (\ref{eqn:exact}); when
$x_{dom}=x^{\ast} < \widetilde{x}_{1}$, it is given by
\begin{equation} \label{eqn:p-exact}
\pi_{n,0} \sim \left[c_{0,1}(x_{dom})+(-1)^{n-1}c_{0,1}(-x_{dom})
\right] \left( \frac{1}{x_{dom}}\right)^{n-1};
\end{equation}
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$) When $x^{\ast}>\widetilde{x}_{1}$, it is given by (\ref{eqn:1/2}); when
$x_{dom}=x^{\ast} < \widetilde{x}_{1}$, it is given by
\begin{equation} \label{eqn:p-1/2}
\pi_{n,0} \sim \frac{\left[c_{0,2}(x_{dom})+(-1)^{n-1}c_{0,2}(-x_{dom})
\right] }{\sqrt{\pi}}n^{-1/2}\left( \frac{1}{x_{dom}}\right)^{n-1};
\end{equation}
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$) It is given by (\ref{eqn:p-3/2}).
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
It is given by (\ref{eqn:n}).
\item[4.] If $p_{i,j}$ and $p^{(2)}_{i,j}$ are X-shaped, but
$p^{(1)}_{i,j}$ is not, then it is the symmetric case to \textbf{3}. All
expressions in \textbf{3} are valid after switching $x^{\ast}$ and
$\widetilde{x}_{1}$.
\item[5.] If all $p_{i,j}$, $p^{(1)}_{i,j}$ and $p^{(2)}_{i,j}$ are
X-shaped, we then have the following exact tail asymptotic properties:
\textbf{Case 1:} (Exact geometric decay) When $x^* \leq \widetilde{x}_1$, it is given by (\ref{eqn:p-exact}); when $x^* > \widetilde{x}_1$, it is also given by (\ref{eqn:p-exact})
by replacing the dominant singularity $x^*$ by $\widetilde{x}_1$.
\textbf{Case 2:} (Geometric decay multiplied by a factor of $n^{-1/2}$)
When $x^* < \widetilde{x}_1$, it is given by (\ref{eqn:p-1/2}); when $x^* > \widetilde{x}_1$, it is also given by (\ref{eqn:p-1/2})
by replacing the dominant singularity $x^*$ by $\widetilde{x}_1$.
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$) It is given by (\ref{eqn:p-3/2}).
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$) It is given by
\begin{equation} \label{eqn:p-n}
\pi_{n,0} \sim \left[c_{0,4}(x_{dom})+(-1)^{n-1}c_{0,4}(-x_{dom})
\right] n\left( \frac{1}{x_{dom}}\right)^{n-1}.
\end{equation}
\end{description}
\end{theorem}
\proof \textbf{1.} Since $p_{i,j}$ is not X-shaped, none of $-x_3$, $-x^*$ and $-\widetilde{x}_1$ is a
dominant singularity according to Corollary~\ref{corollary-X}, Theorem~\ref{theorem2.1} and Theorem~\ref{theorem2.1-b}.
Therefore, there is only one dominant singularity for $\pi_1(x)$. The tail asymptotic properties of $\pi_{n,0}$ follow from
Theorem~\ref{theorem3.1} and the direct application of the Tauberian-like theorem (Theorem~\ref{tauberian-1}).
\textbf{2.} We only provide a proof for the cases that are not identical to those in 1.
\begin{description}
\item[Case 1.] For the case that $x_{dom}=\widetilde{x}_{1}=x^{\ast}=x_{3}$, we notice that $-x_3$ is also a dominant singularity
(Corollary~\ref{corollary-X}). In this case, the Tauberian-like theorem (Theorem~\ref{tauberian-2}) is used to have a tail asymptotic expression
consisting of two terms, one, corresponding to the positive dominant singularity, with the exact geometric decay rate and the other, corresponding to
the negative dominant singularity, with the geometric decay rate multiplied by a factor of $n^{-3/2}$. Therefore, the term with the pure geometric decay rate is the dominant (slower decaying)
term leading to the same tail asymptotic property given in (\ref{eqn:exact}).
\item[Case 2.] Similar to Case~1, $-x_3$ is also a dominant singularity. The Tauberian-like theorem (Theorem~\ref{tauberian-2}) leads to a tail asymptotic expression
consisting of two terms, one with the geometric rate multiplied by a factor of $n^{-1/2}$ (dominant term) and the other by $n^{-3/2}$.
\item[Case 3.] In this case, both $x_3$ and $-x_3$ are dominant singularities having the same asymptotic property according to Theorem~\ref{theorem3.1}.
The tail asymptotic expression follows from the application of the Tauberian-like theorem (Theorem~\ref{tauberian-2}).
\end{description}
\textbf{3.} In this case, $-x_3$ and $-x^*$ are singularities, but $-\widetilde{x}_1$ is not. We only provide a proof for the cases that are not identical to those in 1 or in 2.
\begin{description}
\item[Case 1.] For the case when $x^*=\widetilde{x}_1=x_3$, there are two dominant singularities. The Tauberian-like theorem (Theorem~\ref{tauberian-2}) leads
to a tail asymptotic expression consisting of two terms, one (corresponding to the positive singularity) with a geometric decay rate, and the other (corresponding to
the negative singularity) with the same geometric decay rate multiplied by a factor of $n^{-1/2}$ that is dominated by the geometric decay.
When $x^* < \widetilde{x}_1$, both $x^*$ and $-x^*$ are dominant singularities with the same asymptotic property, which leads to the tail asymptotic expression by using
Theorem~\ref{tauberian-2}.
\item[Case 2.] For the case when $x_3=x^*$, there are two dominant singularities having the same asymptotic property. The tail asymptotic expression follows from
Theorem~\ref{tauberian-2}.
\item[Case 4.] In this case, there are two dominant singularities, but the contribution from the positive dominant singularity dominates that from the negative
dominant singularity. The tail asymptotic expression follows from Theorem~\ref{tauberian-2}.
\end{description}
\textbf{4.} The symmetric case to 3.
\textbf{5.} In this case, all of $-x^*$, $-\widetilde{x}_1$ and $-x_3$ are singularities. We only provide a proof for the cases that are not considered above.
\begin{description}
\item[Case 1.] The only new situation here is the case when $x^*=\widetilde{x}_1=x_3$. In this case, we have the same asymptotic property at both dominant singularities, which leads to
(\ref{eqn:p-exact}).
\item[Case 4.] In this case, we have the same asymptotic property at both dominant singularities, which leads to
(\ref{eqn:p-n}).
\end{description}
\thicklines \framebox(6.6,6.6)[l]{}
From the above theorem, it is clear that if there is only one dominant singularity,
then
the boundary probabilities $\pi_{n,0}$ have the following four
types of asymptotics: \textbf{1.} exact geometric; \textbf{2.}
geometric multiplied by a factor of $n^{-1/2}$; \textbf{3.}
geometric multiplied by a factor of $n^{-3/2}$; and \textbf{4.}
geometric multiplied by a factor of $n$.
If there are two dominant singularities, but with different asymptotic properties,
$\pi_{n,0}$ also has one of the above four types of tail asymptotic properties.
Finally, if we have the same asymptotic property at both dominant singularities, then $\pi_{n,0}$
reveals a periodic property with the above four types of tail asymptotics, which is a new discovery.
\section{Tail Asymptotics of the Marginal Distributions}
In the previous section, we have seen that the asymptotic
behaviour of the function $\pi_1(x)$ ($\pi_2(y)$) at its dominant
singularity or singularities determines the tail asymptotic property of the
boundary probabilities $\pi_{n,0}$ ($\pi_{0,n}$). According to
the fundamental form of the random walk, this behaviour, together with the
property of the kernel function $h(x,y)$, also determines the
tail asymptotic property of the marginal distribution
$\pi_{n}^{(1)} = \sum_j \pi_{n,j}$ (and $\pi_{n}^{(2)} = \sum_i
\pi_{i,n}$).
In this section, we provide details for the exact tail asymptotics of
the marginal distribution $\pi_{n}^{(1)}$. The exact tail asymptotics of $\pi_{n}^{(2)}$
can be easily obtained by symmetry. First, based on the
fundamental form, we have
\[
\pi(x,y)=\frac{h_{1}(x,y)\pi_1(x)+h_{2}(x,y)\pi_2(y)+h_{0}(x,y)\pi_{0,0}}{-h(x,y)}
\]
and therefore,
\begin{eqnarray*}
\pi(x,1) &=&\frac{h_{1}(x,1)\pi_1(x)+h_{2}(x,1)\pi_2(1)+h_{0}(x,1)\pi_{0,0}}{-h(x,1)} \\
&=&\frac{h_{1}(x,1)\pi_1(x)+h_{2}(x,1)\pi_2(1)+h_{0}(x,1)\pi_{0,0}}{-\widetilde{a}(1)[x-X_{0}(1)][x-X_{1}(1)]}.
\end{eqnarray*}
If $M_{x} \geq 0$, then $X_{1}(1)=1$, which implies that the
denominator of the expression for $\pi(x,1)$ does not have any
zero outside the unit circle. In this case, $\pi_{n}^{(1)}$ has
the same tail asymptotics as $\pi_{n,0}$. The only difference is
the expression for the coefficient, which can be obtained from
straightforward calculations.
If $M_{x}<0$, then $X_{0}(1)=1$ and $X_{1}(1)>1$. If $p_{i,j}$ is
not X-shaped, the analysis is standard, and the details are
provided below. If $p_{i,j}$ is X-shaped, then there are
four subcases based on whether or not $p^{(k)}_{i,j}$ is X-shaped. For
these cases, the detailed analysis varies, but is similar. We provide
details here for the case where both $p^{(k)}_{i,j}$ for $k=1, 2$
are not X-shaped. Let
$z=\min \{x^{\ast},\widetilde{x}_{1}\}$
and consider the following four cases:
\textbf{1.} $\min (X_{1}(1),z)<x_{3}$ and $X_{1}(1)\neq z$. In
this case, $\pi_{n}^{(1)}$ has an exact geometric decay with the
decay rate equal to $x_{dom}=\min (X_{1}(1),z)$:
\[
\pi_{n}^{(1)}\sim c_{1}^{(x)} \left(
\frac{1}{x_{dom}}\right)^{n-1},
\]
where
\[
c_{1}^{(x)}=\left\{
\begin{array}{ll}
\frac{\lbrack h_{1}(X_{1}(1),1)\pi _{1}(X_{1}(1))+h_{2}(X_{1}(1),1)\pi
_{2}(1)+h_{0}(X_{1}(1),1)\pi _{0,0}]X_{1}(1)}{\widetilde{a}(1)(X_{1}(1)-1)},
& X_{1}(1)<z, \\
\frac{h_{1}(z,1)c_{0,1}(z)+h_{2}(z,1)\pi _{2}(1)+h_{0}(z,1)\pi _{0,0}}{-
\widetilde{a}(1)[z-X_{0}(1)][z-X_{1}(1)]}, & X_{1}(1)>z,
\end{array}
\right.
\]
with $c_{0,1}(z)$ being given in Theorem~\ref{theorem3.1}.
\textbf{2.} $X_{1}(1)=z<x_{3}$. In this case, $X_{1}(1)=
\widetilde{x}_{1}$ is impossible, since otherwise
$h(X_{1}(1),1)=0$, which implies $1=Y_{0}(\widetilde{x}_{1})$ or
$1=Y_{1}(\widetilde{x}_{1})$. This contradicts
$Y_{0}(\widetilde{x}_{1})>1$. Hence, only $X_{1}(1)=x^{\ast}$ may
hold. There are two subcases:
\textbf{2(a):} $1=Y_{0}(x^{\ast})$. In this case,
$h_{1}(x^{\ast},1)=$ $h_{1}(x^{\ast},Y_{0}(x^{\ast}))=0$. We can
write $h_{1}(x,1)$ as
$h_{1}(x,1)=a_{1}(x)+b_{1}(x)=(x-X_{1}(1))h_{1}^{\ast}(x)$ with
$\frac{ h_{1}(x,1)}{x-X_{1}(1)}$ being a linear function of $x$,
which yields
\begin{eqnarray*}
\pi(x,1) &=&\frac{h_{1}(x,1)\pi_1(x)+h_{2}(x,1)\pi_2
(1)+h_{0}(x,1)\pi_{0,0}}{-\widetilde{a}(1)(x-1)[x-X_{1}(1)]} \\
&=&\frac{h_{1}^{\ast}(x)\pi_1(x)}{-\widetilde{a}(1)(x-1)}+\frac{h_{2}(x,1)
\pi_2(1)+h_{0}(x,1)\pi_{0,0}}{-\widetilde{a}(1)(x-1)[x-X_{1}(1)]}.
\end{eqnarray*}
Therefore, $\pi(x,1)$ has a single pole $X_{1}(1)$, which leads to
an exact geometric decay (recalling $\pi(1,1)\neq 1$):
\[
\pi_{n}^{(1)}\sim c_{2,1}^{(x)}\left( \frac{1}{X_{1}(1)}\right)^{n-1}
\]
with the
coefficient given by
\[
c_{2,1}^{(x)}=\lim_{x\rightarrow X_{1}(1)}\left( 1-\frac{x}{X_{1}(1)}\right)
\pi(x,1)=\frac{[h_{2}(X_{1}(1),1)\pi_2(1)+h_{0}(X_{1}(1),1)\pi_{0,0}]X_{1}(1)}{\widetilde{a}(1)(X_{1}(1)-1)}.
\]
\textbf{2(b):} $1=Y_{1}(x^{\ast})$. In this case, since
$h_{1}(x^{\ast},Y_{0}(x^{\ast}))=0$ and
$Y_{0}(x^{\ast})<Y_{1}(x^{\ast})$, we obtain $ h_{1}(x^{\ast},1)=$
$h_{1}(x^{\ast},Y_{1}(x^{\ast}))>0$, which implies that $x^{\ast}$
is a double pole of $\pi(x,1)$ (noting that
$h_{2}(x^{\ast},1)\pi_2(1)+h_{0}(x^{\ast},1)\pi_{0,0}>0$ since
$h_{2}(x^{\ast},1)>h_{2}(X_{0}(1),1)=0$ and
$h_{0}(x^{\ast},1)\pi_{0,0}>0$). The corresponding tail asymptotic
is given by
\[
\pi_{n}^{(1)}\sim c_{2,2}^{(x)}n\left(
\frac{1}{X_{1}(1)}\right)^{n-1},
\]
where
\begin{eqnarray*}
c_{2,2}^{(x)}&=& \lim_{x\rightarrow X_{1}(1)}\left(1-\frac{x}{X_{1}(1)}\right
)^{2}\pi(x,1)\\
&=&\frac{X_{1}(1)[h_{1}(X_{1}(1),1)c_{0,1}(x^{\ast})+h_{2}(X_{1}(1),1)
\pi_2(1)+h_{0}(X_{1}(1),1)\pi_{0,0}]}{\widetilde{a}
(1)[X_{1}(1)-1]}
\end{eqnarray*}
with $c_{0,1}(x^{\ast})$ given in Theorem~\ref{theorem3.1}.
\textbf{3.} $\min (X_{1}(1),z)=x_{3}$. In this case, there are
four possible subcases, for which proofs are omitted since they
are similar to that for the previous cases:
\textbf{3(a):} $X_{1}(1)=z=x_{3}$ leading to an exact geometric
decay:
\[
\pi_{n}^{(1)}\sim c_{3,1}^{(x)}\left( \frac{1}{x_{3}}\right
)^{n-1},
\]
where
\[
c_{3,1}^{(x)} = \lim_{x\rightarrow x_{3}} \left (1-\frac{x}{x_{3}}\right )
\pi(x,1)=\frac{h_{2}(x_{3},1)\pi_2(1)+h_{0}(x_{3},1)\pi_{0,0}}{x_{3}\widetilde{a}(1)(x_{3}-1)}.
\]
\textbf{3(b):} $X_{1}(1)=x_{3}<z$ leading to an exact geometric
decay:
\[
\pi_{n}^{(1)}\sim c_{3,2}^{(x)}\left(\frac{1}{x_{3}}\right)^{n-1},
\]
where
\[
c_{3,2}^{(x)}= \lim_{x\rightarrow x_{3}}\left(
1-\frac{x}{x_{3}}\right) \pi(x,1) =
\frac{h_{1}(x_{3},1)\pi_1(x_{3})+h_{2}(x_{3},1)\pi_2
(1)+h_{0}(x_{3},1)\pi_{0,0}}{x_{3}\widetilde{a}(1)(x_{3}-1)}.
\]
\textbf{3(c):} $z=x_{3}<X_{1}(1)$ with $x^{\ast }\neq
\widetilde{x}_{1}$ leading to a geometric decay multiplied by the
factor $n^{-1/2}$:
\[
\pi_{n}^{(1)}\sim c_{3,3}^{(x)}n^{-1/2}\left(
\frac{1}{x_{3}}\right)^{n-1},
\]
where
\[
c_{3,3}^{(x)}=\lim_{x\rightarrow x_{3}}\left(
1-\frac{x}{x_{3}}\right)^{1/2}\pi(x,1)=\frac{
h_{1}(x_{3},1)c_{0,2}(x_3)}{\widetilde{a}(1)(X_{1}(1)-1)[X_{1}(1)-x_{3}]}
\]
with $c_{0,2}(x_3)$ given in Theorem~\ref{theorem3.1}.
\textbf{3(d):} $z=x^{\ast }=\widetilde{x}_{1}=x_{3}<X_{1}(1)$
leading to an exact geometric decay
\[
\pi _{n}^{(1)}\sim c_{3,4}^{(x)}\left( \frac{1}{x_{3}}\right) ^{n-1},
\]
where
\[
c_{3,4}^{(x)}=\lim_{x\rightarrow z}\left( 1-\frac{x}{z}\right) \pi (x,1)=
\frac{h_{1}(z,1)c_{0,1}(z)+h_{2}(z,1)\pi _{2}(1)+h_{0}(z,1)\pi _{0,0}}{-\widetilde{a}(1)[z-X_{0}(1)][z-X_{1}(1)]}.
\]
\textbf{4.} $x_{3}<\min (z,X_{1}(1))$ leading to a geometric decay
multiplied by the factor $ n^{-3/2}$:
\[
\pi_{n}^{(1)}\sim c_{4}^{(x)}n^{-3/2}\left(
\frac{1}{x_{3}}\right)^{n-1},
\]
where
\[
c_{4}^{(x)}=\lim_{x\rightarrow x_{3}}\left(1-\frac{x}{x_{3}} \right)^{1/2}\pi^{\prime}(x,1)=
\frac{h_{1}(x_{3},1)c_{0,3}(x_{3})}{\widetilde{a}(1)(x_{3}-1)[X_{1}(1)-x_{3}]
}
\]
with $c_{0,3}(x_{3})$ given in Theorem~\ref{theorem3.1}.
For completeness, we provide a summary of tail asymptotic
properties for the marginal distribution $\pi_{n}^{(1)}$ for all
possible cases. For this purpose, let $x_{dom}$ be the positive
dominant singularity of $\pi(x,1)$. Note that $X_{1}(1)\neq
\widetilde{x}_{1}$. The following are all the possible cases
according to which of $\widetilde{x}_{1}$, $x^{\ast }$, $x_{3}$
and $X_{1}(1)$ is $x_{dom}$.
\textbf{Case A.} $x_{dom}=\min
\{\widetilde{x}_{1},x^{\ast},x_{3}\}<X_{1}(1)$;
\textbf{Case B.} $x_{dom}=X_{1}(1)<\min
\{\widetilde{x}_{1},x^{\ast },x_{3}\}$;
\textbf{Case C.} $x_{dom}=X_{1}(1)=x^{\ast }<\min
\{\widetilde{x}_{1},x_{3}\}$;
\textbf{Case D.} $x_{dom}=X_{1}(1)=x_{3}<x^{\ast }$;
\textbf{Case E.} $x_{dom}=X_{1}(1)=x_{3}=x^{\ast }$.
\begin{remark}
The cases here are different from the cases classified in the
previous section and the next section.
\end{remark}
The exact tail asymptotic properties are obtained according to the
expression of $\pi(x,1)$ and the Tauberian-like theorem.
\begin{theorem}
For the stable non-singular genus 1 random walk, the exact tail
asymptotic properties for the marginal distribution
$\pi_{n}^{(1)}$, as $n$ is large, are summarized as:
\begin{description}
\item[Case~A:] This case includes Cases~1--4 in the previous
section. $\pi_{n}^{(1)}$ has the same types of asymptotic
properties as $\pi_{n,0}$ given in Theorem~\ref{theorem4.1-a},
respectively, with possible different expressions for the
coefficients.
\item[Case~B:] $\pi_{n}^{(1)}$ has an exact geometric decay.
\item[Case~C:] $\pi_{n}^{(1)}$ has an exact geometric decay if
$Y_{0}(x^{\ast })=1$ and a geometric decay multiplied by a factor
of $n$ if $Y_{1}(x^{\ast })=1$, respectively.
\item[Case~D:] $\pi_{n}^{(1)}$ has an exact geometric decay.
\item[Case~E:] $\pi_{n}^{(1)}$ has an exact geometric decay.
\end{description}
\end{theorem}
\section{Tail Asymptotics for Joint Probabilities}
\label{section4}
In the previous sections, we have seen how we can derive exact
tail asymptotic properties for the boundary probabilities and for
the marginal distributions based on the asymptotic property of
$\pi_1(x)$ ($\pi_2(y)$) and the kernel function. However, the
exact tail asymptotic behaviour for joint probabilities cannot be
obtained directly from them. Further tools are needed for this
purpose. Our goal is to characterize the exact tail asymptotics
for $\pi_{n,j}$ for each fixed $j$ and $\pi_{i,n}$ for each fixed
$i$. Due to the symmetry, in this section, we provide details only
for the former.
The relevant balance equations of the random walk are given by
\begin{eqnarray*}
(1-p_{0,0}^{(0)})\pi_{0,0} &=&p_{-1,0}^{(1)}\pi_{1,0}+p_{0,-1}^{(2)}\pi_{0,1}+p_{-1,-1}\pi_{1,1}, \\
(1-p_{0,0}^{(1)})\pi_{1,0} &=&p_{1,0}^{(0)}\pi_{0,0}+p_{-1,0}^{(1)}\pi_{2,0}+p_{-1,-1}\pi_{2,1}+p_{1,-1}^{(2)}\pi_{0,1}+p_{0,-1}\pi_{1,1}, \\
(1-p_{0,0}^{(1)})\pi_{i,0} &=&p_{1,0}^{(1)}\pi_{i-1,0}+p_{-1,0}^{(1)}\pi_{i+1,0}+p_{-1,-1}\pi_{i+1,1}+p_{1,-1}\pi_{i-1,1}+p_{0,-1}\pi_{i,1},\;\; i\geq 2, \\
(1-p_{0,0})\pi_{i,j} &=&p_{1,-1}\pi_{i-1,j+1}+p_{-1,-1}\pi_{i+1,j+1}+p_{0,-1}\pi_{i,j+1}+p_{1,0}\pi_{i-1,j}+p_{-1,0}\pi_{i+1,j} \\
&&+p_{1,1}\pi_{i-1,j-1}+p_{0,1}\pi_{i,j-1}+p_{-1,1}\pi_{i+1,j-1},\;\;
j\geq 2.
\end{eqnarray*}
Let
\begin{eqnarray*}
\varphi_{j}(x) &=&\sum_{i=1}^{\infty }\pi_{i,j}x^{i-1},\;\;\; j\geq 0, \\
\psi_{i}(y) &=&\sum_{j=1}^{\infty }\pi_{i,j}y^{j-1},\;\;\; i\geq 0.
\end{eqnarray*}
From the above definition, it is clear that $\varphi_{0}(x)=\pi_1(x)$ and $\psi_0(y)=\pi_2(y)$.
From the relevant balance equations, we obtain
\begin{eqnarray}
c(x)\varphi_{1}(x)+b_{1}(x)\varphi_{0}(x) &=&a_{0}^{\ast}(x), \label{eqn:4.1} \\
c(x)\varphi_{2}(x)+b(x)\varphi_{1}(x)+a_{1}(x)\varphi_{0}(x) &=&a_{1}^{\ast}(x), \label{eqn:4.2} \\
c(x)\varphi_{j+1}(x)+b(x)\varphi_{j}(x)+a(x)\varphi_{j-1}(x) &=&a_{j}^{\ast}(x),\;\;\; j\geq 2, \label{eqn:4.3}
\end{eqnarray}
or
\begin{equation} \label{eqn:4.4}
\varphi_{j+1}(x)=\frac{-b(x)\varphi_{j}(x)-a(x)\varphi_{j-1}(x)+a_{j}^{\ast}(x)}{c(x)},\;\;\; j \geq 2,
\end{equation}
where
\begin{eqnarray*}
a_{0}^{\ast}(x) &=&-c_{2}(x)\pi_{0,1}-b_{0}(x)\pi_{0,0}, \\
a_{1}^{\ast}(x) &=&-c_{2}(x)\pi_{0,2}-b_{2}(x)\pi_{0,1}-a_{0}(x)\pi_{0,0}, \\
a_{j}^{\ast}(x) &=&-c_{2}(x)\pi_{0,j+1}-b_{2}(x)\pi_{0,j}-a_{2}(x)\pi_{0,j-1},\;\;\; j \geq 2.
\end{eqnarray*}
First, we establish the fact that a zero of $c(x)$ is not a pole of $\varphi_{j}(x)$ for all $j\geq 0$. Therefore $\varphi_{j}(x)$ has the same
singularities as $\varphi_{0}(x)$.
Let $y=Y_{0}(x)$ be in the cut plane $\widetilde{\mathbb{C}}_{x}$,
and let $y_{dom}$ and $x_{dom}$ be the positive dominant
singular points of $\psi_{0}(y)$ and $\varphi_{0}(x)$,
respectively. Let
\[
f_{k}(x)=-a_{2}(x)\sum_{j=k-1}^{\infty
}\pi_{0,j}y^{j-(k-1)}-b_{2}(x)\sum_{j=k}^{\infty
}\pi_{0,j}y^{j-k}-c_{2}(x)\sum_{j=k+1}^{\infty
}\pi_{0,j}y^{j-(k+1)},\;\;\; k\geq 1,
\]
then,
\begin{align}
f_{1}(x) & = y f_{2}(x)-c_{2}(x)\pi _{0,2}-b_{2}(x)\pi _{0,1}, \nonumber \\
\label{eqn:4.5}
f_{k}(x) & =yf_{k+1}(x)+a_{k}^{\ast}(x), \quad k \geq 2.
\end{align}
According to Theorem~\ref{theorem1.1}, when $|x|<x_{dom}$, we obtain
\begin{align}
h_{1}(x,y)\varphi_{0}(x) &= -h_{2}(x,y)\psi_{0}(y)-h_{0}(x,y)\pi_{0,0}
\nonumber \\
&= y[f_{1}(x)-a_{0}(x)\pi _{0,0}]+a_{0}^{\ast }(x) \label{eqn:4.6} \\
&= y^{2}f_{2}(x)+ya_{1}^{\ast}(x)+a_{0}^{\ast}(x) \label{eqn:4.7} \\
&= y^{3}f_{3}(x)+y^{2}a_{2}^{\ast}(x)+ya_{1}^{\ast}(x)+a_{0}^{\ast}(x). \label{eqn:4.8}
\end{align}
Let $u=\frac{y}{c(x)}=\frac{Y_{0}(x)}{c(x)}$. Since a zero of $c(x)$ is a
zero of $Y_{0}(x)$, $u$ is analytic on the cut plane $\widetilde{\mathbb{C}}_{x}$.
Using $\frac{b(x)}{a(x)}=-Y_{1}(x)-Y_{0}(x)$ and $\frac{c(x)}{a(x)}
=Y_{1}(x)Y_{0}(x)$, we obtain
\begin{equation} \label{eqn:4.9}
1+b(x)u=-yua(x)\text{ and }\frac{1+b(x)u}{c(x)}=-a(x)u^{2}.
\end{equation}
Write $a=a(x)$, $b=b(x)$, $c=c(x)$, $a_{i}=a_{i}(x)$, $b_{i}=b_{i}(x)$,
$a_{j}^{\ast}=a_{j}^{\ast}(x)$, $f_{j}=f_{j}(x)$ and $\varphi_{j}=\varphi_{j}(x)$. We have the following lemma, which
confirms that a zero of $c(x)$ is not a pole of $\varphi_{j}(x)$ for all $j\geq 0$. Therefore,
$\varphi_{j}(x)$ has the same singularities as $\varphi_{0}(x)$.
\begin{lemma} \label{lemma4.1}
Let
\begin{equation}
w_{-1} =-\frac{a_{1}}{au},\;\;\;
w_{0} =-b_{1},\;\;\; \label{eqn:4.10}
w_{j} =buw_{j-1}+(1+bu)w_{j-2},\;\;\; j\geq 1.
\end{equation}
Then,
\begin{equation} \label{eqn:4.11}
(-1)^{j}\left[ buw_{j-1}+(1+bu)w_{j-2}\right]+b_{1}+a_{1}y=(-1)^{j}(1+bu)w_{j-1},\;\;\; j\geq 1,
\end{equation}
and
\begin{eqnarray}
h_{1}\varphi_{1} &=&yuf_{2}w_{0}+ug_{1}, \label{eqn:4.12} \\
h_{1}\varphi_{j} &=&(-1)^{j+1}yuf_{j+1}w_{j-1}+u\sum_{k=0}^{j-2}(-1)^{j+1-k}a_{j-k}^{\ast}w_{j-1-k}(au)^{k}+ug_{1}(au)^{j-1},\;\;\; j\geq 2, \label{eqn:4.13}
\end{eqnarray}
where $g_{1}=a_{0}^{\ast}(x)a_{1}(x)-b_{1}(x)a_{1}^{\ast}(x)$.
\end{lemma}
\proof By applying (\ref{eqn:4.9}) and (\ref{eqn:4.10}), we easily obtain equation
(\ref{eqn:4.11}) for $j=1$. Assume that equation (\ref{eqn:4.11}) is true for $j\leq k$, we show
\begin{equation} \label{eqn:4.14}
(-1)^{k+1}\left[ buw_{k}+(1+bu)w_{k-1}\right ] +b_{1}+a_{1}y=(-1)^{k+1}(1+bu)w_{k}.
\end{equation}
From the inductive assumption and the definition of $w_{j}$, we have
\begin{eqnarray} \label{eqn:4.15}
b_{1}+a_{1}y &=&(-1)^{k}(1+bu)w_{k-1}-(-1)^{k}\left[ buw_{k-1}+(1+bu)w_{k-2} \right ], \nonumber \\
&=&(-1)^{k}(1+bu)w_{k-1}+(-1)^{k+1}w_{k},
\end{eqnarray}
which yields equation (\ref{eqn:4.14}). Equation (\ref{eqn:4.12}) is obtained by the direct
substitutions of equations (\ref{eqn:4.1}) and (\ref{eqn:4.6}).
Next, we show equation (\ref{eqn:4.13}). We use the induction again.
According to
equations (\ref{eqn:4.2}), (\ref{eqn:4.7}) and (\ref{eqn:4.12}),
\begin{eqnarray*}
c(x)h_{1}\varphi_{2} &=&-bh_{1}\varphi_{1}-a_{1}h_{1}\varphi_{0}+h_{1}a_{1}^{\ast} \\
&=&-b[yuf_{2}w_{0}+ug_{1}]-a_{1}[y^{2}f_{2}+ya_{1}^{\ast}+a_{0}^{\ast}]+a_{1}^{\ast}[a_{1}y+b_{1}].
\end{eqnarray*}
It follows from equations (\ref{eqn:4.9}) and (\ref{eqn:4.10}) that
\begin{eqnarray*}
h_{1}\varphi_{2}&=&-bu^{2}f_{2}w_{0}-a_{1}yuf_{2}+u(au)g_{1}=-uf_{2} \left [buw_{0}+a_{1}\frac{1+bu}{-au} \right ]+u(au)g_{1} \\
&=&-uf_{2}[buw_{0}+(1+bu)w_{-1}]+u(au)g_{1}=-yuf_{3}w_{1}-a_{2}^{\ast}uw_{1}+u(au)g_{1},
\end{eqnarray*}
which gives equation (\ref{eqn:4.13}) for $j=2$. Assume that equation (\ref{eqn:4.13}) is true for
$j\leq n$. We prove the result for $j=n+1$. From equations (\ref{eqn:4.9}), (\ref{eqn:4.11}), (\ref{eqn:4.5}) and the
inductive assumption, we have
\begin{eqnarray*}
&& c(x)h_{1}\varphi_{n+1}=-bh_{1}\varphi_{n}-ah_{1}\varphi_{n-1}+h_{1}a_{n}^{\ast} \\
&=&(-1)^{n+2}ybuf_{n+1}w_{n-1}+bu\sum_{k=0}^{n-2}(-1)^{n+2-k}a_{n-k}^{\ast}w_{n-1-k}(au)^{k}-bug_{1}(au)^{n-1} \\
&&+(-1)^{n+1}yauf_{n}w_{n-2}+au\sum_{k=0}^{n-3}(-1)^{n+1-k}a_{n-1-k}^{\ast}w_{n-2-k}(au)^{k}-g_{1}(au)^{n-1}+a_{n}^{\ast}[a_{1}y+b_{1}] \\
&=&(-1)^{n+2}yf_{n+1}[buw_{n-1}-yauw_{n-2}]+a_{n}^{\ast}\left\{
(-1)^{n+2}buw_{n-1}+(-1)^{n+1}yauw_{n-2}+a_{1}y+b_{1}\right\} \\
&&+bu\sum_{k=1}^{n-2}(-1)^{n+2-k}a_{n-k}^{\ast}w_{n-1-k}(au)^{k}+\sum_{k=0}^{n-3}(-1)^{n+1-k}a_{n-1-k}^{\ast}w_{n-2-k}(au)^{k+1}-(1+bu)g_{1}(au)^{n-1} \\
&=&(-1)^{n+2}yf_{n+1}[buw_{n-1}+(1+bu)w_{n-2}]+(-1)^{n+2}a_{n}^{\ast}(1+bu)w_{n-1} \\
&&+(1+bu)\sum_{k=1}^{n-2}(-1)^{n+2-k}a_{n-k}^{\ast}w_{n-1-k}(au)^{k}-(1+bu)g_{1}(au)^{n-1},
\end{eqnarray*}
which yields
\begin{eqnarray*}
h_{1}\varphi_{n+1} &=&(-1)^{n+2}uf_{n+1}w_{n}+(-1)^{n+1}a_{n}^{\ast}au^{2}w_{n-1}+au^{2}\sum_{k=1}^{n-2}(-1)^{n+1-k}a_{n-k}^{\ast}w_{n-1-k}(au)^{k}+ug_{1}(au)^{n} \\
&=&(-1)^{n+2}yuf_{n+2}w_{n}+u\sum_{k=0}^{n-1}(-1)^{n+2-k}a_{n+1-k}^{\ast}w_{n-k}(au)^{k}+ug_{1}(au)^{n}.
\end{eqnarray*}
This completes the proof.
\thicklines \framebox(6.6,6.6)[l]{}
\begin{corollary}
$\varphi_{0}(x)$ and $\varphi_{j}(x)$, $j \geq 1$, have the same
singularities.
\end{corollary}
The following Lemma is useful in characterizing the tail asymptotics of $\pi_{n,j}$ for a fixed $j$.
\begin{lemma} \label{lemma4.2}
If $\min \{x^{\ast},\widetilde{x}_{1}\}>x_{3}$, then
\[
\lim_{x\rightarrow x_{dom}}\sqrt{1-\frac{x}{x_{dom}}}\varphi_{j}^{\prime}(x)=c_{3,j}(x_{dom}),
\]
where $c_{3,0}(x_{dom})$ is given in Theorem~\ref{theorem3.1} and
\begin{equation} \label{eqn:4.16}
c_{3,j+1}(x_{dom})=[A_{3}(x_{dom})+B_{3}(x_{dom})j]\left(\frac{1}{ Y_{1}(x_{dom})}\right)^{j},\;\;\; j\geq 0,
\end{equation}
with
\begin{align} \label{eqn:4.17}
A_{3}(x_{dom})
&=-\frac{c_{3,0}(x_{dom})b_{1}(x_{dom})}{c(x_{dom})}, \\
B_{3}(x_{dom})
&=\frac{-h_{1}(x_{dom},Y_{0}(x_{dom}))c_{3,0}(x_{dom})}{c(x_{dom})}. \label{eqn:4.17-b}
\end{align}
\end{lemma}
\proof When $\min \{x^{\ast},\widetilde{x}_{1}\}>x_{3}$, we have
$x_{dom}=\pm x_{3}$. Without loss of
generality, we assume $x_{dom}=x_{3}$ in the proof.
Since $\varphi_{j}(x)$, $j\geq 0$, is continuous at $x_{3}$, $\lim_{x\rightarrow x_{3}}\sqrt{1-\frac{x}{x_{3}}}\varphi_{j}(x)=0$.
Let $j=1$. Then,
\[
\varphi_{1}^{\prime}(x)=\frac{-c^{\prime}(x)\varphi_{1}(x)-b_{1}^{\prime}(x)\varphi_{0}(x)-b_{1}(x)\varphi_{0}^{\prime}(x)+a_{0}^{\ast\prime}(x)}{c(x)}
\]
and
\[
\lim_{x\rightarrow x_{3}}\sqrt{1-\frac{x}{x_{3}}}\varphi_{1}^{\prime}(x)=\frac{-b_{1}(x_{3})c_{3,0}(x_{3})}{c(x_{3})}=c_{3,1}(x_{3}).
\]
Assume that $\lim_{x\rightarrow x_{3}}\sqrt{1-\frac{x}{x_{3}}}\varphi_{k}^{\prime}(x)$ exists for $k \leq j$ with
\[
\lim_{x\rightarrow x_{3}}\sqrt{1-\frac{x}{x_{3}}}\varphi_{k}^{\prime}(x)=c_{3,k}(x_{3}).
\]
Then we have
\[
\varphi_{k+1}^{\prime}(x) =\frac{-c^{\prime}(x)\varphi_{k+1}(x)-b(x)\varphi_{k}^{\prime}(x)-b^{\prime}(x)\varphi_{k}(x)-a(x)\varphi_{k-1}^{\prime}(x)-a^{\prime}(x)\varphi_{k-1}(x)+a_{k}^{\ast\prime}(x)}{c(x)},
\]
and
\[
\lim_{x\rightarrow
x_{3}}\sqrt{1-\frac{x}{x_{3}}}\varphi_{k+1}^{\prime}(x)=
\frac{-b(x_{3})c_{3,k}(x_{3})-a(x_{3})c_{3,k-1}(x_{3})}{c(x_{3})}=c_{3,k+1}(x_{3}).
\]
Therefore, we can inductively have
\begin{eqnarray}
c_{3,1}(x_{3}) c(x_{3})+c_{3,0}(x_{3}) b_{1}(x_{3}) &=&0, \label{eqn:4.18} \\
c_{3,2}(x_{3}) c(x_{3})+c_{3,1}(x_{3}) b(x_{3})+c_{3,0}(x_{3}) a_{1}(x_{3}) &=&0, \label{eqn:4.19} \\
c_{3,j+1}(x_{3}) c(x_{3})+b(x_{3})c_{3,j}(x_{3})
+a(x_{3})c_{3,j-1}(x_{3}) &=&0, \;\;\; j\geq 2. \label{eqn:4.20}
\end{eqnarray}
It follows that $\left\{ c_{3,k}(x_{3}) \right\}$ is the solution
of the second order recursive relation determined by equations
(\ref{eqn:4.18})--(\ref{eqn:4.20}). Since $
b^{2}(x_{3})-4a(x_{3})c(x_{3})=0$, $c_{3,j}(x_{3})$ takes the form
given by equation (\ref{eqn:4.16}). $A_{3}(x_{3})$ and
$B_{3}(x_{3})$ are obtained by using the initial equations:
\begin{eqnarray*}
A_{3}(x_{3})c(x_{3})+c_{3,0}(x_{3})b_{1}(x_{3}) &=&0, \\
\frac{[A_{3}(x_{3})+B_{3}(x_{3})]c(x_{3})}{Y_{1}(x_{3})}
+A_{3}(x_{3})b(x_{3})+c_{3,0}(x_{3})a_{1}(x_{3}) &=&0.
\end{eqnarray*}
\thicklines \framebox(6.6,6.6)[l]{}
We are now ready to prove the main theorem of this section, in
which
\begin{eqnarray}
A_{1}(x_{dom}) &=& -B_{1}(x_{dom})+\frac{-c_{1,0}(x_{dom})b_{1}(x_{dom})
}{c(x_{dom})} \label{eqn:4.29} \\
&=&\left( \frac{h_{1}(x_{dom},Y_{0}(x_{dom}))}{
a(x_{dom})[Y_{1}(x_{dom})-Y_{0}(x_{dom})]Y_{0}(x_{dom})}-\frac{b_{1}(x_{dom})
}{c(x_{dom})}\right) c_{1,0}(x_{dom}), \nonumber
\end{eqnarray}
\begin{equation}
A_{2}(x_{dom})=-\frac{c_{2,0}(x_{dom})b_{1}(x_{dom})}{c(x_{dom})},
\label{eqn:4.31}
\end{equation}
$A_3(x_{dom})$ is given in (\ref{eqn:4.17}),
\begin{equation} \label{eqn:A4}
A_{4}(x_{dom})=-\frac{b_{1}(x_{dom})c_{0,4}(x_{dom})}{c(x_{dom})},
\end{equation}
\begin{equation}
B_{1}(x_{dom})=\frac{-h_{1}(x_{dom},Y_{0}(x_{dom}))c_{1,0}(x_{dom})}{
a(x_{dom})[Y_{1}(x_{dom})-Y_{0}(x_{dom})]Y_{0}(x_{dom})},
\label{eqn:4.28}
\end{equation}
\begin{equation}
B_{2}(x_{dom})=\frac{-c_{2,0}(x_{dom})h_{1}(x_{dom},Y_{0}(x_{dom}))}{
a(x_{dom})Y_{0}^{2}(x_{dom})}, \label{eqn:4.30}
\end{equation}
and $B_{3}(x_{dom})$ is given in (\ref{eqn:4.17-b}).
\begin{theorem} \label{theorem4.1}
Consider the stable non-singular genus 1 random walk.
Corresponding to the above four cases, we then have the following tail
asymptotic properties for the joint probabilities $\pi_{n,j}$ for
large $n$.
\textbf{1.} If $p_{i,j}$ is not X-shaped, then there are four
types of exact tail asymptotics:
\textbf{Case 1:} (Exact geometric decay)
\begin{equation} \label{eqn:n7-1}
\pi_{n,j} \sim \left[ A_{1}(x_{dom})\left(\frac{1}{Y_{1}(x_{dom})}\right)^{j-1} +
B_{1}(x_{dom})\left( \frac{1}{Y_{0}(x_{dom})}\right)^{j-1}\right]
\left( \frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$)
\begin{equation} \label{eqn:n7-2}
\pi_{n,j} \sim \frac{\lbrack A_{2}(x_{dom})+(j-1)B_{2}(x_{dom})]}{\sqrt{\pi}}
\left( \frac{1}{Y_{1}(x_{dom})}\right)^{j-1}n^{-1/2}\left(
\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$)
\begin{equation} \label{eqn:n7-3}
\pi_{n,j}\sim \frac{\lbrack A_{3}(x_{dom})+(j-1)B_{3}(x_{dom})]}{\sqrt{\pi}}
\left( \frac{1}{Y_{1}(x_{dom})}\right)^{j-1}n^{-3/2}\left(
\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
\begin{equation} \label{eqn:n7-4}
\pi_{n,j}\sim \left[ A_{4}(x_{dom})\left(\frac{1}{Y_{1}(x_{dom})}\right)^{j-1}\right] n
\left( \frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1.
\end{equation}
\textbf{2.} If $p_{i,j}$ is X-shaped, but both $p_{i,j}^{(1)}$ and $
p_{i,j}^{(2)}$ are not X-shaped, we then have the following exact
tail asymptotic properties:
\textbf{Case 1:} (Exact geometric decay) It is given by
(\ref{eqn:n7-1});
\textbf{Case 2: }(Geometric decay multiplied by a factor of
$n^{-1/2}$) It is given by (\ref{eqn:n7-2});
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$) It is given by
\begin{equation} \label{eqn:n7-5}
\pi_{n,j} \sim \frac{\lbrack A_{3}(x_{dom})+(-1)^{n+j}A_{3}(-x_{dom})]}{\sqrt{\pi }}
\left(\frac{1}{Y_{1}(x_{dom})}\right)^{j-1}n^{-3/2}
\left(\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
It is given by (\ref{eqn:n7-4}).
\textbf{3.} If $p_{i,j}$ and $p_{i,j}^{(1)}$ are X-shaped, but $
p_{i,j}^{(2)} $ is not, we then have the following exact tail
asymptotic properties:
\textbf{Case 1:} (Exact geometric decay) When
$\widetilde{x}_{1}<x^{\ast }$, it is given by (\ref{eqn:n7-1});
when $\widetilde{x}_{1}=x^{\ast }=x_{3}$, it is also given by
(\ref{eqn:n7-1}); when $x^{\ast }<\widetilde{x}_{1}$, it is given
by
\begin{equation} \label{eqn:n7-6}
\pi_{n,j} \sim \left[A_{1}(x_{dom})+(-1)^{n+j}A_{1}(-x_{dom})\right]
\left( \frac{1}{Y_{1}(x_{dom})}\right)^{j-1}
\left(\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$) When $x^{\ast }>\widetilde{x}_{1}$, it is given by
(\ref{eqn:n7-2}); when $x^{\ast }< \widetilde{x}_{1}$, it is given
by
\begin{equation} \label{eqn:n7-7}
\pi_{n,j} \sim \frac{\lbrack A_{2}(x_{dom})+(-1)^{n+j}A_{2}(-x_{dom})]}{\sqrt{\pi }}
\left( \frac{1}{Y_{1}(x_{dom})}\right )^{j-1} n^{-1/2}
\left(\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\end{equation}
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$) It is given by
\[
\pi_{n,j} \sim \frac{\lbrack A_{3}(x_{dom})+(-1)^{n+j}A_{3}(-x_{dom})]}{\sqrt{\pi }}
\left( \frac{1}{Y_{1}(x_{dom})}\right)^{j-1}n^{-3/2}
\left(\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1;
\]
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
It is given by (\ref{eqn:n7-4}).
\textbf{4.} If $p_{i,j}$ and $p_{i,j}^{(2)}$ are X-shaped, but $
p_{i,j}^{(1)} $ is not, then it is the symmetric case to
\textbf{3}. All
expressions in \textbf{3} are valid after switching $x^{\ast }$ and $
\widetilde{x}_{1}$.
\textbf{5.} If all $p_{i,j}$, $p_{i,j}^{(1)}$ and $p_{i,j}^{(2)}$
are X-shaped, we then have the following exact tail asymptotic
properties:
\textbf{Case 1:} (Exact geometric decay) When $x^{\ast }\leq
\widetilde{x}_{1}$, it is given by (\ref{eqn:n7-6}); when
$x^{\ast}>\widetilde{x}_{1}$, it is also given by (\ref{eqn:n7-6})
by replacing the dominant singularity $x^{\ast}$ by
$\widetilde{x}_{1}$;
\textbf{Case 2:} (Geometric decay multiplied by a factor of
$n^{-1/2}$) When $x^{\ast}<\widetilde{x}_{1}$, it is given by
(\ref{eqn:n7-7}); when $x^{\ast }> \widetilde{x}_{1}$, it is also
given by (\ref{eqn:n7-7}) by replacing the dominant singularity
$x^{\ast}$ by $\widetilde{x}_{1}$;
\textbf{Case 3:} (Geometric decay multiplied by a factor of
$n^{-3/2}$) It is given by (\ref{eqn:n7-5});
\textbf{Case 4:} (Geometric decay multiplied by a factor of $n$)
It is given by
\[
\pi_{n,j} \sim \left[A_{4}(x_{dom})+(-1)^{n+j}A_{4}(-x_{dom})\right]
\left( \frac{1}{Y_{1}(x_{dom})}\right)^{j-1} n \left(
\frac{1}{x_{dom}}\right)^{n-1}, \;\;\;j\geq 1.
\]
\end{theorem}
\proof \textbf{1.}
\textbf{Case 1:} It follows from Section~\ref{section3} that
$\lim_{x\rightarrow x_{dom}}\left( 1-\frac{x}{x_{dom}}\right)
\varphi_{0}(x)=c_{0,1}(x_{dom})$. By the induction and equations
(\ref{eqn:4.1})--(\ref{eqn:4.3}), $\lim_{x\rightarrow
x_{dom}}\left( 1-\frac{x}{x_{dom}}\right)
\varphi_{j}(x)=c_{1,j}(x_{dom})$
with
\begin{eqnarray*}
c_{1,1}(x_{dom})c(x_{dom})+c_{1,0}(x_{dom})b_{1}(x_{dom}) &=&0, \\
c_{1,2}(x_{dom})c(x_{dom})+c_{1,1}(x_{dom})b(x_{dom})+c_{1,0}(x_{dom})a_{1}(x_{dom}) &=&0, \\
c_{1,j+1}(x_{dom})c(x_{dom})+c_{1,j}(x_{dom})b(x_{dom})+c_{1,j-1}(x_{dom})a(x_{dom})
&=&0,\;\;\; j\geq 2.
\end{eqnarray*}
Since $c_{1,j}(x_{dom})$, $j\geq 0$, satisfies the second order
recursive relation above, it takes the form of
\[
c_{1,j+1}(x_{dom})=A_{1}(x_{dom}) \left( \frac{1}{Y_{1}(x_{dom})}\right )^{j}
+B_{1}(x_{dom}) \left( \frac{ 1}{Y_{0}(x_{dom})}\right)^{j},
\;\;\;j\geq 0.
\]
To determine $A_1=A_{1}(x_{dom})$ and $B_1=B_{1}(x_{dom})$, we use
the initial equations:
\begin{align}
(A_{1}+B_{1})c(x_{dom})+c_{1,0}(x_{dom})b_{1}(x_{dom}) &=0, \label{eqn:4.26} \\
\left[ A_{1}\left( \frac{1}{Y_{1}(x_{dom})}\right) +B_{1}\left(\frac{1}{Y_0(x_{dom})} \right) \right]
c(x_{dom})+(A_{1}+B_{1})b(x_{dom})+c_{1,0}(x_{dom})a_{1}(x_{dom})
&=0. \label{eqn:4.27}
\end{align}
Multiplying both sides of equation (\ref{eqn:4.27}) by
$Y_{0}(x_{dom})$, adding the resulting one to (\ref{eqn:4.26}),
and taking into account
$a(x_{dom})Y_{0}^{2}(x_{dom})+b(x_{dom})Y_{0}(x_{dom})+c(x_{dom})=0$,
$h_{1}(x_{dom},Y_{0}(x_{dom}))=a_{1}(x_{dom})Y_{0}(x_{dom})+b_{1}(x_{dom})$
and $c(x_{dom})=Y_{0}(x_{dom})Y_{1}(x_{dom})a(x_{dom})$ yield:
\begin{eqnarray*}
(A_{1}+B_{1})c(x_{dom})+c_{1,0}(x_{dom})b_{1}(x_{dom}) &=&0, \\
A_{1}\frac{Y_{0}(x_{dom})}{Y_{1}(x_{dom})}
c(x_{dom})+B_{1}c(x_{dom})+(A_{1}+B_{1})b(x_{dom})Y_{0}(x_{dom})+c_{1,0}(x_{dom})a_{1}(x_{dom})Y_{0}(x_{dom})
&=&0,
\end{eqnarray*}
which gives (\ref{eqn:4.28}) and (\ref{eqn:4.29}).
So, $B_{1}(x_{dom})=0$ if $x_{dom}=x^{\ast }$ and $B_{1}(x_{dom})\neq 0$ if $
x_{dom}=\widetilde{x}_{1}$. By the Tauberian-like theorem, we
obtain (\ref{eqn:n7-1}).
\textbf{Case 2:} Similar to that for 1-Case 1. From the proof, we
have (\ref{eqn:4.30}) and (\ref{eqn:4.31}).
\textbf{Case 3:} Write
\[
\varphi_{j}^{\prime}(x)=\sum_{n=0}^{\infty }(n+1)\pi_{n+2,j}x^{n}=\sum_{n=0}^{\infty }(n+1)x_{3}^{n}\pi_{n+2,j}\left( \frac{x}{
x_{3}}\right)^{n}.
\]
According to Lemma~\ref{lemma4.2} and the Tauberian-like theorem, we
have
\[
(n+1)x_{3}^{n}\pi_{n+2,j}\sim \frac{c_{3,j}(x_{3})}{\sqrt{\pi}}n^{-1/2},
\]
which is equivalent to (\ref{eqn:n7-3}).
\textbf{Case 4:} The results can be proved in the same fashion as
in Case~1 and Case~2.
The proofs of the other cases are omitted due to the similarity to
\textbf{1} and Theorem~\ref{theorem4.1-a}.
\thicklines \framebox(6.6,6.6)[l]{}
\section{Examples and Concluding Remarks}
In this paper, for a non-singular genus 1 random walk, we proposed
a kernel method to study the exact tail asymptotic behaviour of
the joint stationary probabilities along a coordinate direction,
when the value of the other coordinate is fixed,
and also the exact tail asymptotic behaviour for the two marginal
distributions.
A total of four different types of exact tail asymptotics exists.
The fourth one, a geometric decay multiplied by a factor $n$, was
not reported before for this discrete-time model (the same type
was reported recently for a continuous-time random walk model by
Dai and Miyazawa~\cite{Dai-Miyazawa:10}). In this study, we also
revealed a new periodic phenomenon for all four types of exact tail
asymptotics when there are two dominant singularities for the
unknown generating function, say $\pi_1(x)$, with the same
asymptotic property at them.
The key idea of this kernel method is simple and the use of the
Tauberian-like theorem greatly simplifies the analysis, which,
unlike in the situation when a standard Tauberian theorem is used,
is also rigorous. Under the assumption that there is only one
dominant singularity, this method provides a straightforward
routine analysis for the exact tail asymptotic behaviour. However,
without this assumption, the analysis is not simple: despite our
best efforts, it is difficult to tell how many dominant singularities there are and
when a pole is simple. It is also challenging to characterize the
exact tail asymptotic along a coordinate direction when the value
of the other coordinate is not zero, since it is not a direct
consequence of the kernel method.
This kernel method can also be used for characterizing the exact
tail asymptotics for the non-singular genus 0 case and the
singular random walks (see Li, Tavakoli and
Zhao~\cite{Li-Tavakoli-Zhao:11}). With the detailed analysis
provided in this paper, we expect further research in applying
this kernel method to more general models.
The complete characterization of the exact tail asymptotic
behaviour provided in this paper does not necessarily imply that
for any specific model, a characterization explicitly in terms of
the system parameters exists. However, we are confident that for
any specific model, if using a different method could lead to
such a characterization, in terms of system parameters, then it can
be done using the kernel method.
Finally, we mention two
examples, which have been analyzed by using the proposed kernel
method.
\textbf{Example 1.} A generalized two-demand model was considered
in Li and Zhao~\cite{Li-Zhao:10b} using the same idea proposed in
this paper. For this model, let $\lambda$ and $\lambda_k$
($k=1,2$) be the Poisson arrival rate with two demands and the
arrival rate of the two dedicated Poisson arrivals, respectively.
Furthermore, let $\mu_k$ ($k=1,2$) be the exponential service
rates of the two independent parallel servers. For a detailed
description of the model, one may refer to \cite{Li-Zhao:10b}. For
this model, the three regions, on which the joint probabilities
along a coordinate direction, say queue 1, have an exact geometric
decay, a geometric decay multiplied by a factor $n^{-1/2}$ and a
geometric decay multiplied by a factor $n^{-3/2}$ are extremely
simple, which are: (a) $\frac{\mu_1}{\lambda+\lambda_1} <
\frac{\mu_2-\lambda_2}{\lambda}$; (b)
$\frac{\mu_1}{\lambda+\lambda_1} =
\frac{\mu_2-\lambda_2}{\lambda}$; and (c)
$\frac{\mu_1}{\lambda+\lambda_1} >
\frac{\mu_2-\lambda_2}{\lambda}$, respectively.
\textbf{Example 2.} Consider the simple random walk, or a random
walk for which $p_{i,j}$ and both $p^{(k)}_{i,j}$ ($k=1,2$) are
cross-shaped. We then can follow the general results obtained in
this paper to have refined properties. For example, consider the
case of $M_y>0$ and $M_x<0$ and assume that the system is stable.
Then, along the $x$-direction, $\pi_{n,j}$ has three types of exact
asymptotics in the following respective regions:
\textbf{1. Exact geometric:}
\[
\frac{x_{3}}{x_{3}-1}\left[\sqrt{\frac{p_{0,-1}}{p_{0,1}}}-1\right]
p_{0,1}^{(1)}+p_{1,0}^{(1)}x_{3}>p_{-1,0}^{(1)};
\]
\textbf{2. Geometric with a factor $n^{-1/2}$:}
\[
\frac{x_{3}}{x_{3}-1}\left[\sqrt{\frac{p_{0,-1}}{p_{0,1}}}-1\right]
p_{0,1}^{(1)}+p_{1,0}^{(1)}x_{3}=p_{-1,0}^{(1)};
\]
\textbf{3. Geometric with a factor $n^{-3/2}$:}
\[
\frac{x_{3}}{x_{3}-1}\left[\sqrt{\frac{p_{0,-1}}{p_{0,1}}}-1\right]
p_{0,1}^{(1)}+p_{1,0}^{(1)}x_{3}<p_{-1,0}^{(1)}.
\]
When $M_y<0$ and $M_x<0$, this example also reveals the fourth
type of exact tail asymptotic property, or a geometric decay
multiplied by the factor $n$ along the $x$-coordinate direction in
the region defined by the following conditions:
\begin{equation} \label{eqn:9.1}
\frac{x_{3}}{x_{3}-1}\left[ \sqrt{\frac{p_{0,-1}}{p_{0,1}}}-1\right]
p_{0,1}^{(1)}+p_{1,0}^{(1)}x_{3}\geq p_{-1,0}^{(1)},
\end{equation}
\begin{equation} \label{eqn:9.2}
\frac{y_{3}}{y_{3}-1}\left[ \sqrt{\frac{p_{-1,0}}{p_{1,0}}}-1\right]
p_{1,0}^{(2)}+p_{0,1}^{(2)}y_{3}\geq p_{0,-1}^{(2)},
\end{equation}
\begin{eqnarray}
h_{1}(x^{\ast },\widetilde{y}_0) &=&0, \\
\frac{p_{-1,0}}{p_{1,0}} &<&\frac{p_{-1,0}^{(1)}}{p_{1,0}^{(1)}},
\end{eqnarray}
and
\begin{equation}
\frac{(x^{\ast}-1)p_{0,-1}^{(2)}p_{1,0}+p_{1,0}^{(2)}p_{0,-1}}{(x^{\ast}-1)p_{0,1}^{(2)}p_{1,0}+p_{1,0}^{(2)}p_{0,1}}
=1+\frac{(x^{\ast}-1)[p_{-1,0}^{(1)}-p_{1,0}^{(1)}x^{\ast}]}{p_{0,1}^{(1)}x^{\ast}}.
\end{equation}
Here, $x^{\ast} \in (1,x_{3}]$ and $y^* \in (1,y_{3}]$
are the zeros of $h_{1}(x,Y_{0}(x))$ and $h_{2}(X_{0}(y),y)$,
respectively, whose existence is guaranteed by
Lemma~\ref{lemma2.7} under conditions (\ref{eqn:9.1}) and
(\ref{eqn:9.2}); $\widetilde{y}_0=Y_{0}(x^{\ast})$ and in this case we
have $\widetilde{y}_0=y^{\ast}$; and $\widetilde{x}_0=X_{0}(Y_{0}(x^{\ast}))$.
It is not very difficult to see that this is not an empty region. The
last thing which we need to check is that the coefficient
\begin{equation} \label{eqn:9.6}
c_{0,4}(x_{dom})=\frac{h_{2}(x_{dom},y^{\ast })[h_{1}(\widetilde{x}_0,
y^{\ast })\pi_1(\widetilde{x}_0) + h_{0}(\widetilde{x}_0,y^{\ast})\pi_{0,0}]}{x^{\ast2}h_{1}^{\prime}(x_{dom},y^{\ast})Y_{0}^{\prime}(x_{dom})
h_{2}^{\prime}(X_{0}(y^{\ast}),y^{\ast})}\neq 0,
\end{equation}
or
\[
h_{1}(\widetilde{x}_0,y^{\ast })\pi_1(\widetilde{x}_0)+h_{0}(\widetilde{x}_0,y^{\ast })\pi _{0,0}\neq 0,
\]
which is true since $h_{2}(x_{dom},y^{\ast })=h_{2}(X_{1}(y^{\ast
}),y^{\ast})>h_{2}(X_{0}(y^{\ast }),y^{\ast })=0$.
\vspace*{1cm}
\noindent \textbf{Acknowledgements:} The authors thank the
anonymous referee for the valuable comments and suggestions, which
significantly improved the quality of the paper, and the late
Dr.~P. Flajolet of INRIA for the discussion of the Tauberian-like
theorem. This work was supported in part by Discovery Grants from
NSERC of Canada.
\appendix
\section{Proof to Lemmas~\ref{lemma2.2}--\ref{lemma2.6} and Propositions~\ref{theorem2.2}--\ref{theorem2.3}}
\label{appendix1}
\underline{\proof of Lemma~\ref{lemma2.2}.} \textbf{1.} From $h(x,y)=0$, we have
\begin{equation*}
y^{\prime}=-\frac{a^{\prime}(x)y^{2}+b^{\prime}(x)y+c^{\prime}(x)}{2a(x)y+b(x)}.
\end{equation*}
Using $a(1)+b(1)+c(1)=0$, the property in (\ref{eqn:MxMy}) and the
expression for $Y_k(1)$ in Lemma~\ref{lemma1.1}, we obtain (a) and
(b). (c) is obvious.
\textbf{2.} There are two possible cases: $M_{y}<0$ and $M_{y}>0$. If
$M_{y}<0$, according to the ergodicity condition in Theorem~\ref{theoremergodicity}, $M_{y}^{(1)}M_{x}-M_{y}M_{x}^{(1)}<0$ must hold, which yields
\begin{eqnarray*}
f^{\prime}(1) &=&a(1)h_{1}^{\prime}(1,Y_{0}(1))h_{1}(1,Y_{1}(1)) \\
&=&a(1)\left[ a_{1}^{\prime}(1)+a_{1}(1)Y_{0}^{\prime}(1)+b_{1}^{\prime}(1)\right] h_{1}(1,Y_{1}(1)) \\
&=&\frac{a(1)h_{1}(1,Y_{1}(1))}{-M_{y}}\left[M_{y}^{(1)}M_{x}-M_{y}M_{x}^{(1)}\right] <0.
\end{eqnarray*}
From equation (\ref{eqn:2.1}), $f(x_{3})\geq 0$. It follows that $f(x)=0$ has a root in
$(1,x_{3}]$ since $f(1)=0$ and $f^{\prime}(1)<0$.
If $M_{y}>0$, we have
\begin{eqnarray*}
f^{\prime}(1) &=&a(1)h_{1}(1,Y_{0}(1))h_{1}^{\prime}(1,Y_{1}(1)) \\
&=&\frac{-a(1)h_{1}(1,Y_{0}(1))[M_{x}M_{y}^{(1)}-M_{y}M_{x}^{(1)}]}{M_{y}}.
\end{eqnarray*}
If $M_{x}M_{y}^{(1)}-M_{y}M_{x}^{(1)}<0$, from $f(x_{2})\geq 0$,
$f(1)=0$ and $f^{\prime}(1)>0$, $f(x)=0$ has a root in
$[x_{2},1)$. Similarly, if $M_{x}M_{y}^{(1)}-M_{y}M_{x}^{(1)}>0$,
we have $f^{\prime}(1)<0$, which implies that $f(x)=0$ has a root
in $(1,x_{3}]$. Also, 1 is not a repeated root of $f(x)=0$ since
$f^{\prime}(1)\neq 0$ when $M_{y}\neq 0$.
\thicklines \framebox(6.6,6.6)[l]{}
\underline{\proof of Lemma~\ref{lemma2.3}.} \textbf{1.} Suppose
$f(z)=0$. From equation (\ref{eqn:2.1}), we have $F(z)=0$. So we
can write $F(x)=(x-z)G(x)$. Similarly, since $D_{1}(z)=0$ (recall
$D_{1}(x)=b^{2}(x)-4a(x)c(x)$), we can write
$D_{1}(x)=(x-z)D^{\ast}(x)$, where $D^{\ast}(x)$ is a polynomial.
It follows that $f(x)=(x-z)T(x)$, where
\[
T(x)=a(x)[a_{1}(x)]^{2}\left\{ (x-z)[G(x)]^{2}-\frac{D^{\ast}(x)}{4a^{2}(x)}\right\}.
\]
Since the random walk has genus 1, $z$ is not a repeated
root of $D_{1}(x)=0$, which implies $a(z)[a_{1}(z)]^{2}\frac{D^{\ast}(z)}{4a^{2}(z)}\neq 0$ (note that $a(z)\neq 0$ since $D_{1}(z)=0$ and $b(z)>0$
when $z<0$). It follows that $T(z)\neq 0$, that is, $z$ is not a repeated
root of $f(x)=0$.
\textbf{2.} This is a direct consequence of equation
(\ref{eqn:2.1}).
\textbf{3.} Suppose $x^{\prime}$ is a common root. If
$a(x^{\prime})\neq 0$, it is easy to obtain that $x^{\prime}$ is a
branch point. Assume $a(x^{\prime})=0$. Clearly, $x^{\prime}$
cannot be a positive number. Since
$\widetilde{f}_{1}(x^{\prime})=a_{1}(x^{\prime})[-2b(x^{\prime})]=0$,
$f_{0}(x^{\prime})=\frac{a_{1}(x^{\prime})c(x^{\prime})}{-b(x^{\prime})}+b_{1}(x^{\prime})=0$
and $b(x^{\prime})\neq 0$, we obtain $a_{1}(x^{\prime})=0$ and
$b_{1}(x^{\prime})=0$, which implies that $x^{\prime}=0$ since
$b_{1}(x)$ has only nonnegative zeros.
\textbf{4.} Let $-|z|$ be a negative root of $f_{0}(x)=0$ in
$[-x_{3},-1)$. From the definition of $f_{0}(x)$, we have
$\sum_{i\geq -1,j\geq 0}p_{i,j}^{(1)}[-|z|]^{i}Y_{0}^{j}(-|z|)=1$,
which implies $f_{0}(|z|)>0$ since $Y_{0}(|z|)>|Y_{0}(-|z|)|$.
According to $f_{0}(1)\leq 0$ and Lemma~\ref{lemma2.2}-1,
$f_{0}(x)=0$ has a root, say $z^{\prime }$ in $(1,|z|)$. Again,
from $Y_{0}(|z^{\prime }|)>|Y_{0}(-|z^{\prime }|)|$,
$f_{0}(-|z^{\prime}|)<0$, which implies $f_{0}(x)=0$ has a root in
$(-|z^{\prime }|,-1)$ since $f_{0}(-1)>0$. Clearly, this root is
greater than $-|z|$.
\textbf{5.} Let $|x|\in (1,x_{3}]$. Since $-b(-|x|)<0$,
$Y_{0}(-|x|)=\frac{-b(-|x|)+\sqrt{D_{1}(-|x|)}}{2a(-|x|)}
=\frac{2c(-|x|)}{-b(-|x|)-\sqrt{D_{1}(-|x|)}}$. From $b(-|x|)\geq
b(|x|)$, $\sqrt{D_{1}(-|x|)}> \sqrt{D_{1}(|x|)}$ and
$|c(-|x|)|\leq c(|x|)$, we obtain $|Y_{0}(-|x|)|<
\frac{2c(|x|)}{b(|x|)+\sqrt{D_{1}(|x|)}}=Y_{0}(|x|)$.
\thicklines \framebox(6.6,6.6)[l]{}
\underline{\proof of Lemma~\ref{lemma2.4}.} Assume $|z|=1$ and
$z\neq 1$ or $-1$. From Lemma~\ref{lemma1.1-b}-1, $|Y_{0}(z)|<1$.
Since
\[
h_{1}(x,y)=a_{1}(x)y+b_{1}(x)=x\left( \sum_{i\geq -1,j\geq
0}p_{i,j}x^{i}y^{j}-1\right)
\]
and when $|z|=1$, $\left\vert \frac{1}{z}\right\vert =1$ as well,
we obtain
\[
\left\vert \sum_{i\geq -1,j\geq
0}p_{i,j}z^{i}Y_{0}(z)^{j}\right\vert \leq \sum_{i\geq -1,j\geq
0}p_{i,j}|z^{i}||Y_{0}(z)|^{j}<1,
\]
which yields $f_{0}(z)=h_{1}(z,Y_{0}(z))\neq 0$.
For $z=-1$, $|Y_{0}(-1)|<1$ if $p_{i,j}$ is not X-shaped, and
$Y_{0}(-1)=-1$ if $p_{i,j}$ is X-shaped and $p_{i,j}^{(1)}$ is not
X-shaped. It follows that $f_{0}(-1)>0$ in both cases since
$b_{1}(-1)\geq | a_{1}(-1)|$ and $b_{1}(-1)>0$ in the first case
and $b_{1}(-1)>|a_{1}(-1)|$ in the second case.
\thicklines \framebox(6.6,6.6)[l]{}
For the special random walk 1, we have
\begin{equation}
a_{1}(x)=p_{0,1}^{(1)}x+p_{1,1}^{(1)}x^{2}\; \text{ and }\;
b_{1}(x)=-x+p_{1,0}^{(1)}x^{2},
\end{equation}
\begin{equation} \label{eqn:2.4}
a(x)=p_{0,1}x, b(x)=p_{-1,0}-x+p_{1,0}x^{2}\; \text{ and }\;
c(x)=p_{0,-1}x.
\end{equation}
In this case, $f(x)$ becomes
\begin{equation} \label{eqn:2.5}
f(x) = x^{2}f^{\ast}(x),
\end{equation}
where
\begin{eqnarray*}
f^{\ast}(x) &=&d_{4}^{\ast}x^{4}+d_{3}^{\ast}x^{3}+d_{2}^{\ast}x^{2}+d_{1}^{\ast}x+d_{0}^{\ast} \\
&=&p_{0,1}x[1-p_{1,0}^{(1)}x]^{2}+[p_{-1,0}-x+p_{1,0}x^{2}][1-p_{1,0}^{(1)}x][p_{0,1}^{(1)}+p_{1,1}^{(1)}x]+p_{0,-1}x[p_{0,1}^{(1)}+p_{1,1}^{(1)}x]^{2}
\end{eqnarray*}
with
\begin{equation*}
d_{0}^{\ast}=p_{-1,0}p_{0,1}^{(1)}\text{ and }d_{4}^{\ast}=-p_{1,0}p_{1,0}^{(1)}p_{1,1}^{(1)}.
\end{equation*}
\underline{\proof of Proposition~\ref{theorem2.2}.} \textbf{1.}
From equation (\ref{eqn:2.5}) and Lemma~\ref{lemma2.2},
$f(x)=0$ has at least four real roots with two in $[x_{2},x_{3}]$
and two equal to zero. The facts that $f(x_{1})\geq 0$,
$f(x_{4})\geq 0$ and $f(\pm \infty )=-\infty$ yield one root in
$(-\infty ,x_{1}]$ and another root in $ [x_{4},+\infty )$.
\textbf{2.} It is a direct result of
Proposition~\ref{theorem2.2}-1
and Lemma~\ref{lemma2.3}-4.
\thicklines \framebox(6.6,6.6)[l]{}
For the random walk considered in Theorem~\ref{theorem2.1}-2 (i.e., both $p_{i,j}$ and $p^{(1)}_{i,j}$
are X-shaped), we have
\begin{equation}
a_{1}(x)=p_{-1,1}^{(1)}+p_{1,1}^{(1)}x^{2},\;\;\; b_{1}(x)=-x,
\end{equation}
\begin{equation}
a(x)=p_{-1,1}+p_{1,1}x^{2}, \;\;\; b(x)=-x,\;\;\;c(x)=p_{-1,-1}+p_{1,-1}x^{2}.
\end{equation}
Therefore, $f(x)$ becomes
\begin{eqnarray}
f(x) &=& a(x)b_{1}^{2}(x)-b(x)b_{1}(x)a_{1}(x)+c(x)a_{1}^{2}(x) \notag \\
&=&x^{2}[p_{-1,1}+p_{1,1}x^{2}]-x^{2}[p_{-1,1}^{(1)}+p_{1,1}^{(1)}x^{2}]+[p_{-1,-1}+p_{1,-1}x^{2}][p_{-1,1}^{(1)}+p_{1,1}^{(1)}x^{2}]^{2} \notag \\
&=&d_{6}x^{6}+d_{4}x^{4}+d_{2}x^{2}+d_{0}, \label{eqn:2.8}
\end{eqnarray}
where
\begin{eqnarray*}
d_{6} &=&\left[ p_{1,1}^{(1)}\right] ^{2}p_{1,-1}, \\
d_{4} &=&p_{1,1}-p_{1,1}^{(1)}+2p_{1,-1}p_{-1,1}^{(1)}p_{1,1}^{(1)}+p_{-1,-1}\left[ p_{1,1}^{(1)}\right] ^{2}, \\
d_{2} &=&p_{-1,1}-p_{-1,1}^{(1)}+2p_{-1,-1}p_{-1,1}^{(1)}p_{1,1}^{(1)}+p_{1,-1}\left[ p_{-1,1}^{(1)}\right] ^{2}, \\
d_{0} &=&\left[ p_{-1,1}^{(1)}\right] ^{2}p_{-1,-1}.
\end{eqnarray*}
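As a quick symbolic check of the expansion \eqref{eqn:2.8} (our own verification sketch, not part of the original proof; it assumes the \texttt{sympy} library and treats the transition probabilities as free symbols, with \texttt{q} standing in for $p^{(1)}$):
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x')
# one-step probabilities of the X-shaped walk (free symbols)
pm1_1, p1_1, pm1_m1, p1_m1 = sp.symbols('pm1_1 p1_1 pm1_m1 p1_m1', positive=True)
qm1_1, q1_1 = sp.symbols('qm1_1 q1_1', positive=True)  # boundary probabilities p^(1)

a1 = qm1_1 + q1_1*x**2           # a_1(x)
b1 = -x                          # b_1(x)
a  = pm1_1 + p1_1*x**2           # a(x)
b  = -x                          # b(x)
c  = pm1_m1 + p1_m1*x**2         # c(x)

f = sp.expand(a*b1**2 - b*b1*a1 + c*a1**2)
# coefficients of x^6, x^5, ..., x^0; the odd powers vanish and the even
# ones reproduce d_6, d_4, d_2, d_0 listed above
print(sp.Poly(f, x).all_coeffs())
\end{verbatim}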
\underline{\proof of Lemma~\ref{lemma2.6}.} $f(1)=f(-1)=0$ follows
from $Y_{i}(1)=-Y_{i}(-1)$, $a_{1}(1)=a_{1}(-1)$ and
$b_{1}(1)=-b_{1}(-1)$. From Lemma~\ref{lemma2.3}, there exists an
$x_0 \neq 1$, $x_0 \in [x_{2},x_{3}]$ such that $f(x_0)=0$. We
provide details for the case $f_{1}(x_0)=0$; a similar proof
applies to the other case. Since $x_{2}< x_0 \leq x_{3}$,
$-b(x_0)>0$. It follows that
\begin{equation}
Y_{1}(x_0)=\frac{-b(x_0)}{2a(x_0)}+\frac{\sqrt{b^{2}(x_0)-4a(x_0)c(x_0)}}{2a(x_0)}
\end{equation}
and
\begin{equation}
f_{1}(x_0)=a_{1}(x_0)Y_{1}(x_0)+b_{1}(x_0)=0.
\end{equation}
On the other hand, from $-x_{3}\leq -x_0<-1$ we have $-b(-x_0)<0$, which yields
\begin{equation}
Y_{1}(-x_0)=\frac{b(x_0)}{2a(x_0)}-\frac{\sqrt{b^{2}(x_0)-4a(x_0)c(x_0)}}{2a(x_0)} = -Y_{1}(x_0)
\end{equation}
and
\begin{equation*}
f_{1}(-x_0)=a_{1}(-x_0)Y_{1}(-x_0)+b_{1}(-x_0)=-f_{1}(x_0)=0.
\end{equation*}
It follows from equation (\ref{eqn:2.8}) that $f(x)$ can be written as
\begin{equation*}
f(x)=d_{6}(x^{2}-1)(x^{2}-x_0^{2})(x^{2}+\eta ).
\end{equation*}
Since $\frac{d_{0}}{d_{6}}>0$, we have $\eta >0$, which indicates that $f(x)=0$
has two complex roots.
\thicklines \framebox(6.6,6.6)[l]{}
\underline{\proof of Proposition~\ref{theorem2.3}.} Suppose that
one of the two complex roots is a root of $f_{0}(x)=0$. First
assume $\frac{d_{0}}{d_{6}}\leq 1$. Then, $x_0^{2}\eta =
\frac{d_{0}}{d_{6}}\leq 1$ implies $|\eta |<1$. In the case of
$\frac{d_{0}}{d_{6}}>1$, we choose a path $\ell$ to connect the
random walk here to the one with $\frac{d_{0}}{d_{6}}\leq 1$. Then
on $\ell $, the two complex roots of $f(x)=0$ have to pass through
the unit circle, which is impossible according to
Remark~\ref{remark2.1} and Lemma~\ref{lemma2.4}.
\end{document}
\begin{document}
\title{Behavior of quantum correlations under
nondissipative decoherence by means of the correlation matrix}
\author{D. G. Bussandri$^{1,2}$, T. M. Os\'an$^{1,3}$, A. P. Majtey$^{1,3}$, P. W. Lamberti$^{1,2}$}
\affiliation{$^1$Facultad de Matem\'atica, Astronom\'{\i}a, F\'{\i}sica y Computaci\'on, Universidad Nacional de C\'ordoba, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA C\'ordoba, Argentina}
\affiliation{$^2$Consejo Nacional de Investigaciones Cient\'{i}ficas y T\'ecnicas de la Rep\'ublica Argentina, Av. Rivadavia 1917, C1033AAJ, CABA, Argentina}
\affiliation{$^3$ Instituto de F\'isica Enrique Gaviola, Consejo Nacional de Investigaciones Cient\'{i}ficas y T\'ecnicas de la Rep\'ublica Argentina, Av. Medina Allende s/n, X5000HUA, C\'ordoba, Argentina}
\begin{abstract}
In this paper we use the Fano representation of two-qubit states, from which we can identify a correlation matrix containing the information about the classical and quantum correlations present in the bipartite quantum state. To illustrate the use of this matrix, we analyze the behavior of the correlations under non-dissipative decoherence in two-qubit states with maximally mixed marginals. From the behavior of the elements of the correlation matrix before and after making measurements on one of the subsystems, we identify the classical and quantum correlations present in the Bell-diagonal states. In addition, we use the correlation matrix to study the phenomenon known as freezing of quantum discord. We find that under some initial conditions where freezing of quantum discord takes place, the actual quantum correlations may instead not remain constant. In order to explore these results further, we also compute a non-commutativity measure of quantum correlations to analyze the behavior of quantum correlations under non-dissipative decoherence. We conclude from our study that freezing of quantum discord may not always be identified as equivalent to the freezing of the actual quantum correlations.
\keywords{Freezing Quantum discord \and Non-commutativity \and Quantum Correlations \and Fano Representation}
\end{abstract}
\maketitle
\section{Introduction\label{sec:intro}}
In quantum information processing and quantum computing a central issue is to improve our capability of identifying which features in the quantum realm are responsible for the speed-up of quantum algorithms over their classical counterparts. For a long time the prime suspect was entanglement. However, it has been proven both theoretically and experimentally that there exist separable mixed states, having negligible entanglement, which provide a computational speedup in some quantum computation models compared to classical procedures [13,24]. Several results indicate that the increase in the efficiency is due to correlations of a quantum nature different from entanglement \cite{Knill98}-\cite{Lanyon08}.
Quantum discord (QD) is a widely accepted measure of quantum correlations, beyond just entanglement, and it is useful in many ways to indicate a divergence from classicality. However, even though there is strong evidence that states with non-zero discord play a central role in mixed state protocols, in the context of quantum state algorithms there is still interest in understanding the elusive source for the quantum speed up. Thus, besides QD, several measures of quantum correlations have been proposed \cite{ABC16}.
Of special relevance for this work is the non-commutativity measure of quantum correlations (NCMQC) introduced in \cite{Guo16} and \cite{Majtey17}.
On the other hand, it is well-known that quantum correlations are usually destroyed under the effects of decoherence, i.e., uncontrolled interactions between the system and its environment. As a consequence, the system becomes less efficient for the realization of a number of quantum information tasks. Thus, in order to faithfully perform quantum information protocols it is essential to know the time scales over which the involved quantum resources can be securely preserved and manipulated.
Recent studies of the dynamics of general quantum correlations in open quantum systems under Markovian or non-Markovian evolutions indicate that QD is typically more robust than entanglement and does not suffer from sudden death issues \cite{Modi2012,Czele2011,LoFranco2013,Maziero2009,Ferraro2010}. In particular, a peculiar phenomenon known as \textit{discord freezing} can occur for two-qubit states undergoing nondissipative decoherence. Indeed, under Markovian conditions and for certain initial conditions, QD may remain constant or frozen for a time interval \cite{Mazzola2010}. Moreover, when a non-Markovian dynamics is considered, a forever frozen discord \cite{Haikka2013} or multiple intervals of recurring frozen discord \cite{Mazzola2011,Mannone2013,LoFranco2012} may take place. Even though necessary and sufficient conditions for the freezing have been investigated \cite{You2012}, this phenomenon is still not completely understood. Besides, it is natural to question whether the freezing phenomenon is a consequence of a mathematical artifact originated in the particular definition of QD or whether it reflects the actual freezing of quantum correlations present in the physical system. The aim of this work is precisely to gain more insight in order to answer this question.
This paper is organized as follows. In Sect.~\ref{sec:theory} we outline the theoretical framework for our work, including the definition of quantum discord, the Fano form and the correlation matrix for two-qubit states, the properties of Bell-diagonal states, and also the definition of a non-commutativity measure of quantum correlations. In Sect.~\ref{sec:Results}, we present our main results. We obtain the correlation matrix after a measurement has been performed on one of the subsystems. By using the correlation matrix we determine the character of the correlations present in a two-qubit Bell-diagonal state. By considering a dynamical scenario, corresponding to a non-dissipative decoherence process, we discuss the freezing phenomenon of quantum discord analyzing the behavior of the correlations given by the correlation matrix, the QD measure, and according to the non-commutativity measure of quantum correlations. Finally, some conclusions are addressed in Sect.~\ref{sec:conclusions}.
\section{Theoretical framework\label{sec:theory}}
\subsection{Quantum discord}
A widely accepted information-theoretic measure of the total correlations contained in a bipartite quantum state $\rho$ is the (von Neumann) Quantum Mutual Information $\mathcal{I}(\rho)$ defined as:
\begin{align}
\mathcal{I}(\rho) \doteq S(\rho_A) + S(\rho_B) - S(\rho).
\label{eq:QMI}
\end{align}
In eq. \eqref{eq:QMI}, $\rho$ stands for a general bipartite quantum state,
$\rho_A=\textrm{Tr}_B\left[\rho \right]$, $\rho_B=\textrm{Tr}_A\left[\rho \right]$ represent the corresponding reduced (marginal) states and $S(\rho)$ represents the von Neumann entropy given by
\begin{align}
S(\rho) \doteq -\mbox{Tr} \left[ \rho \log_2 \rho\right] .
\label{def8}
\end{align}
It is worth mentioning that $\mathcal{I}(\rho)$ describes the correlations between the whole subsystems rather than a correlation between just two observables.\par
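As a quick numerical illustration of Eqs. \eqref{eq:QMI} and \eqref{def8} (a minimal sketch of our own, assuming only the \texttt{numpy} library; it is not part of the original analysis), the von Neumann entropy and the quantum mutual information of a two-qubit density matrix can be evaluated as follows:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr[rho log2 rho], computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information(rho):
    # rho is a 4x4 two-qubit state; partial traces give the marginals
    rho4 = rho.reshape(2, 2, 2, 2)
    rho_A = np.einsum('ijkj->ik', rho4)   # Tr_B[rho]
    rho_B = np.einsum('ijik->jk', rho4)   # Tr_A[rho]
    return (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
            - von_neumann_entropy(rho))
\end{verbatim}
For instance, a Bell state gives $\mathcal{I}=2$, while a product state gives $\mathcal{I}=0$.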
Classical correlations present in a quantum state $\rho$ of a bipartite quantum system can be quantified by means of the measure $\mathcal{J}_S(\rho)$ defined as~\cite{OZ02,HV01}
\begin{align}
\mathcal{J}_S\left(\rho\right)\doteq S(\rho_B)-\min_{\mathcal{M}} \sum_j \ p'_j \ S (\rho_{B|j}^\mathcal{M} ), \label{eq:classcorrVedral}
\end{align}
\noindent with $\mathcal{M}=\left\{M_j\right\}_{j=1}^{m}$ ($m\in\mathbb{N}$) being a von Neumann measurement on subsystem $A$ (i.e., a complete set of rank-1 orthonormal projective measurements on $\mathcal{H}_A$), and
\begin{align}
\rho_{B|j}^\mathcal{M}&=\textrm{Tr}_A\left[( M_j \otimes \mathbb{I}) \rho \right]/p'_j \label{rhoBJ} \\
p'_j&=\textrm{Tr}\left[( M_j \otimes \mathbb{I}) \rho \right], \label{PprimaJ}
\end{align}
\noindent being the resulting state of the subsystem $B$ after obtaining the result $M_j$ when $\mathcal{M}$ is measured on subsystem $A$ and $p'_j$ being its corresponding probability. States given by Eq. \eqref{rhoBJ} are commonly referred to as \textit{conditional states}.\par
The difference between total correlations given by $\mathcal{I}(\rho)$ [cf. Eq. \eqref{eq:QMI}] and classical correlations as measured by $\mathcal{J}_S(\rho)$ [cf. Eq. \eqref{eq:classcorrVedral}] provides the measure of quantum correlations known as Quantum Discord, which can be written as~\cite{OZ02,HV01}
\begin{align}
\mathcal{D} (\rho) \doteq S(\rho_A) - S(\rho) + \min_{\mathcal{M}} \sum_j \ p'_j \ S (\rho_{B|j}^\mathcal{M} ).
\label{eq:QDdef}
\end{align}
Besides, after a measurement $\mathcal{M}$ is performed on party $A$, the state of the composite system $A+B$ (without observing the outcome) can be written as
\begin{align}\label{rhoM}
\rho^\mathcal{M}=\sum_j (M_j\otimes\mathbb{I}) \rho (M_j\otimes\mathbb{I}).
\end{align}
Thus, bearing in mind equation \eqref{rhoM}, it can be easily verified that Quantum Discord can also be written as
\begin{align}
\mathcal{D} (\rho) \doteq \mathcal{I}(\rho) - \max_{\mathcal{M}}\mathcal{I}(\rho^\mathcal{M}).
\label{eq:QDdef2}
\end{align}
It is worth pointing out that, as the measure $\mathcal{J}_S\left(\rho\right)$ is not symmetric under the exchange of subsystems $A$ and $B$, there exists a \textit{directionality} over $\mathcal{J}_S\left(\rho\right)$ and in consequence over the quantity $\mathcal{D}(\rho)$.
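To make the optimization in Eq. \eqref{eq:QDdef2} concrete, the following sketch (our own illustration, reusing \texttt{mutual\_information} from the sketch above; the grid scan over Bloch directions is a brute-force stand-in for the analytical optimization) evaluates $\mathcal{D}(\rho)$ for measurements on party $A$:
\begin{verbatim}
import numpy as np

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def discord(rho, n_grid=60):
    # D(rho) = I(rho) - max_M I(rho^M), scanning rank-1 projective
    # measurements on A parametrized by a Bloch direction (theta, phi)
    best = -np.inf
    for th in np.linspace(0, np.pi, n_grid):
        for ph in np.linspace(0, 2*np.pi, n_grid, endpoint=False):
            n = (np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th))
            P = (I2 + sum(ni*s for ni, s in zip(n, SIG))) / 2
            M = [np.kron(P, I2), np.kron(I2 - P, I2)]
            rho_m = sum(m @ rho @ m for m in M)   # state after measurement
            best = max(best, mutual_information(rho_m))
    return mutual_information(rho) - best
\end{verbatim}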
\subsection{Correlation matrix, classical and quantum correlations\label{sec:Tmatrix}}
A general two-qubit state $\rho$ may always be written, up to local unitary transformations, in the Fano form \cite{Fano1983,Hioe1981,Schlienz1995} as follows
\begin{align}
\rho = \rho_A \otimes \rho_B + \frac{1}{4}\sum_{ij} T_{ij}\, \sigma^A_i\otimes \sigma^B_j.\label{Fanoform}
\end{align}
Here $\rho_{A,B}=\textrm{Tr}_{B,A}[\rho]$; $\{\sigma^A_i\}$, $\{\sigma^B_i\}$ denote the Pauli matrices acting on the Hilbert spaces $A$, $B$ respectively; and the elements
\begin{align}
T_{ij}\doteq\left\langle\sigma^A_i\otimes\sigma^B_j\right\rangle_\rho-\left<\sigma^A_i\otimes\mathbb{I}\right>_\rho\left<\mathbb{I}\otimes\sigma^B_j\right>_\rho,
\end{align}
\noindent define the \textit{correlation} matrix $\mathbb{T}$, being $\left<O\right>_\rho \doteq \textrm{Tr}[O\rho]$. \par
By analogy with the concept of correlation functions for describing correlation effects in many-body physics \cite{Ma1985}, and taking into account that correlation functions are directly related to observables \cite{Ma1985}, it can be verified that the information related to both classical and quantum correlations present in the composite quantum system is in fact contained in the elements $T_{ij}$ of the correlation matrix $\mathbb{T}$. This matrix was used to investigate, for example, the dynamics of open quantum systems in the presence of initial correlations \cite{Peter2001}, and to study correlations in the quantum state of a composite system \cite{Huang2008,Dong2010}.
\subsection{Two-qubit states with maximally mixed marginals}
Bell--diagonal (BD) states are two-qubit states with maximally mixed marginals which can be written as
\begin{align}\label{BD states}
\rho^{BD}=\frac{1}{4}\left( \mathbb{I}_2 \otimes \mathbb{I}_2 + \sum_{i=1}^3 c_{i} \sigma^A_i \otimes \sigma^B_i \right),
\end{align}
with $\mathbb{I}_2$ the identity matrix of dimension 2.
Any two-qubit state satisfying $\langle\sigma_j^A\rangle=0=\langle\sigma_j^B\rangle$, i.e., having maximally mixed marginal density operators $\rho_A=\mathbb{I}_2/2=\rho_B$, can be brought into a Bell-diagonal form by using local unitary operations on the two qubits to diagonalize the correlation matrix $\langle\sigma_j^A\otimes\sigma_k^B\rangle$. Since quantum and classical correlations are both invariant under local unitary transformations, for our purpose it will be sufficient to consider the set of BD states.
The eigenvalues of a BD state are given by
\begin{align}
\lambda_0&=\frac{1}{4}(1-c_1-c_2-c_3),\\
\lambda_1&=\frac{1}{4}(1-c_1+c_2+c_3),\\
\lambda_2&=\frac{1}{4}(1+c_1-c_2+c_3),\\
\lambda_3&=\frac{1}{4}(1+c_1+c_2-c_3),
\end{align}
\noindent where the coefficients $\{c_j\}$ are such that $0\leq\lambda_i\leq 1$, $i=0,\ldots, 3$.
BD states are a three-parameter set which includes the subsets of separable and classical states~\cite{Horodeckis2009}. They can be specified by the 3-tuple $(c_1,c_2,c_3)$.
Two-qubit states with maximally mixed marginals also include Werner states ($|c_1|= |c_2|= |c_3|=c$) and Bell states ($|c_i|=1$, $|c_j|=0$, $|c_k|=0$, with $(i,j,k)$ any permutation of $(1,2,3)$). Thus, the state represented by Eq. \eqref{BD states} encompasses a wide set of quantum states.\par
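The following short sketch (again our own illustration, reusing \texttt{SIG} and \texttt{I2} from the previous sketch) builds a BD state from the 3-tuple $(c_1,c_2,c_3)$ and verifies both the eigenvalues $\lambda_i$ and the fact, used below, that its correlation matrix is $\mathbb{T}=\mathrm{diag}(c_1,c_2,c_3)$:
\begin{verbatim}
import numpy as np

def bd_state(c):
    # rho_BD = (I (x) I + sum_i c_i sigma_i (x) sigma_i) / 4
    rho = np.kron(I2, I2).astype(complex)
    for ci, s in zip(c, SIG):
        rho += ci * np.kron(s, s)
    return rho / 4

def correlation_matrix(rho):
    # T_ij = <sigma_i (x) sigma_j> - <sigma_i (x) I><I (x) sigma_j>
    T = np.zeros((3, 3))
    for i, si in enumerate(SIG):
        for j, sj in enumerate(SIG):
            T[i, j] = (np.trace(rho @ np.kron(si, sj)).real
                       - np.trace(rho @ np.kron(si, I2)).real
                         * np.trace(rho @ np.kron(I2, sj)).real)
    return T

rho = bd_state((0.6, -0.6, 0.6))
print(np.round(np.linalg.eigvalsh(rho), 4))     # the four lambda_i
print(np.round(correlation_matrix(rho), 4))     # diag(c_1, c_2, c_3)
\end{verbatim}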
\subsection{Non-commutativity measure of quantum correlations}\label{sec:NCMQC}
In \cite{Guo16} a non-commutativity measure of quantum correlations (NCMQC) was introduced as another tool for studying the behavior of quantum correlations in bipartite quantum systems.\par
Any state $\rho$ of a bipartite system $A+B$ can always be expressed as
\begin{equation}\label{rho}
\rho=\sum_{i,j} A_{ij}\otimes |i_B\rangle\langle j_B|,
\end{equation}
\noindent where $\{|i_B\rangle\}$ stands for an orthonormal basis of $\mathcal{H}_B$, and
\begin{equation} \label{As}
A_{ij}\doteq \mathrm{Tr}_B[(\mathbb{I}_A\otimes|j_B\rangle\langle i_B|)\rho].
\end{equation}
By considering this representation of the states, Guo \cite{Guo16} introduced the following measure of quantum correlations:
\begin{equation}
D_{A}(\rho) \doteq\sum_{\Omega}||[A_{ij},A_{kl}]||_2,\label{Dtraza}
\end{equation}
where $||\cdot||_2$ is the Hilbert-Schmidt norm, $||A||_2=\sqrt{\mathrm{Tr}(A^{\dagger}A)}$, and $\Omega$ is the set of all possible pairs (regardless of the order).
According to \cite{Majtey17}, the measure $D_A(\rho)$ depends upon the representation basis $\{\ket{i_B}\}$ of the state $\rho$ \eqref{rho}. Thus, it fails to satisfy all the criteria required for a physically well-behaved measure of quantum correlations. With the aim of overcoming this drawback, the following improved measure of quantum correlations has been proposed \cite{Majtey17}:
\begin{align}
\label{dnos}
d_{A}(\rho) \doteq \min_{\mathcal{R}}D_{A}(\rho),
\end{align}
where the minimum is taken over all possible representations of the state $\rho$.
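For illustration, the non-minimized quantity $D_A(\rho)$ of Eq. \eqref{Dtraza} can be evaluated directly from the blocks $A_{ij}$ written in the computational basis of $\mathcal{H}_B$ (a sketch of our own, reusing \texttt{bd\_state} from the previous sketch; the minimization over representations in Eq. \eqref{dnos} is not performed here):
\begin{verbatim}
import numpy as np
from itertools import combinations

def ncm_DA(rho):
    # blocks A_ij = Tr_B[(I_A (x) |j_B><i_B|) rho] in the computational
    # basis of B, then D_A = sum over unordered pairs of ||[A_ij, A_kl]||_2
    rho4 = rho.reshape(2, 2, 2, 2)
    blocks = [rho4[:, i, :, j] for i in range(2) for j in range(2)]
    total = 0.0
    for X, Y in combinations(blocks, 2):
        C = X @ Y - Y @ X
        total += np.sqrt(np.trace(C.conj().T @ C).real)
    return total

# example: a BD state with (c1, c2, c3) = (0.6, -0.6, 0.6)
print(round(ncm_DA(bd_state((0.6, -0.6, 0.6))), 4))
\end{verbatim}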
\section{Results}\label{sec:Results}
In this section we analyze the (quantum or classical) character of the correlations present in a bipartite state $\rho$ by means of the Fano representation \eqref{Fanoform} and the correlation matrix $\mathbb{T}$. We shall focus on two-qubit BD states.
\subsection{Correlation matrix as a tool to identify classical and quantum correlations \label{sec:correMatr}}
The computation of the QD involves an optimization of the \textit{classical correlations} $\mathcal{J}_S$ [cf. eq. \eqref{eq:classcorrVedral}] over all possible von Neumann measurements. Let us introduce local measurements for party $A$,
\begin{equation}
\{E_j=\ket{j}\bra{j} \ / \ j\in\{0,1\}\},
\end{equation}
that is, $\{E_j\}$ is a PVM (Projection-Valued Measure) over the subsystem $A$ given in the computational basis $\{\ket{j}\}$. Any other projective measurement will be given by a unitary transformation:
\begin{equation}\label{param1}
\{M_j=V\ket{j}\bra{j}V^\dagger \ / \ j\in\{0,1\}\},
\end{equation}
with $V\in U(2)$. A useful parametrization of these unitary operators, up to a constant phase, is
\begin{align}\label{param2}
V=\vect{s}\cdot(\mathbb{I}_2,i \vect{\sigma}),
\end{align}
with $\vect{s}\in \Gamma$, and $\Gamma =\{\vect{s}\in\mathbb{R}^4 \ / \ s_0^2+s_1^2+s_2^2+s_3^2=1\}$.
Once the measurement is parametrized by
the vector $\vect{s}$, and considering Bell diagonal states \eqref{BD states}, the conditional states of the subsystem $B$ [cf. Eq. \eqref{rhoBJ}] are given by \cite{Luo08b}
\begin{align}
\rho^{BD}_{B|0}(\vect{s})&=\frac{1}{2}\left(\mathbb{I}_2 + \sum_{i=1}^3 c_i z_i(\vect{s}) \sigma^B_i\right),\label{eq:rhob0}\\
\rho^{BD}_{B|1}(\vect{s})&=\frac{1}{2}\left(\mathbb{I}_2 - \sum_{i=1}^3 c_i z_i(\vect{s}) \sigma^B_i\right).\label{eq:rhob1}
\end{align}
In Eqs. \eqref{eq:rhob0} and \eqref{eq:rhob1} we defined
\begin{align}
z_1(\vect{s})&=2(-s_0s_2+s_1s_3)\label{eq:z1},\\
z_2(\vect{s})&=2(s_0s_1+s_2s_3)\label{eq:z2},\\
z_3(\vect{s})&=s_0^2+s_3^2-s_1^2-s_2^2,\label{eq:z3}
\end{align}
\noindent and the conditional probabilities are $p_{0}(\vect{s})\!=\!p_{1}(\vect{s})\!=\!\frac{1}{2}$ for all $\vect{s}\in \Gamma$. \par
By using \eqref{eq:z1}, \eqref{eq:z2}, and \eqref{eq:z3} the measure $\mathcal{J}_S$ [cf. Eq. \eqref{eq:classcorrVedral}] of classical correlations can be evaluated. It turns out that $\mathcal{J}_S$ is a non-decreasing function of the parameter $\theta(\vect{s}):=\sqrt{ |c_1z_1(\vect{s})|^2+|c_2z_2(\vect{s})|^2+|c_3z_3(\vect{s})|^2}$. Therefore, the \textit{optimal measurement} is defined by the vector $\vect{s}$ such that $\theta(\vect{s})$ is maximum. \par
If we set $c=\max\{|c_1|,|c_2|,|c_3|\}$ it can be verified that $\theta(\vect{s}) \leq c$. Thus, the optimal measurement is given by the vector $\vect{s}_M$ satisfying $\theta(\vect{s}_M)=c$. More specifically, we have the following cases (an explicit choice of $\vect{s}_M$ for each case is sketched right after the list):\par
\begin{enumerate}
\item If $c=|c_1|$ $\Rightarrow$ $|z_1(\vect{s}_M)|=1$, $z_2(\vect{s}_M)=z_3(\vect{s}_M)=0$;
\item If $c=|c_2|$ $\Rightarrow$ $|z_2(\vect{s}_M)|=1$, $z_1(\vect{s}_M)=z_3(\vect{s}_M)=0$;
\item If $c=|c_3|$ $\Rightarrow$ $|z_3(\vect{s}_M)|=1$, $z_2(\vect{s}_M)=z_1(\vect{s}_M)=0$.
\end{enumerate}
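As an explicit illustration (our own check, not taken from \cite{Luo08b}), each case can be realized by a simple choice of $\vect{s}_M$ in Eqs. \eqref{eq:z1}--\eqref{eq:z3}:
\begin{align*}
\vect{s}_M&=\left(\tfrac{1}{\sqrt{2}},0,-\tfrac{1}{\sqrt{2}},0\right) &&\Rightarrow\quad (z_1,z_2,z_3)=(1,0,0),\\
\vect{s}_M&=\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},0,0\right) &&\Rightarrow\quad (z_1,z_2,z_3)=(0,1,0),\\
\vect{s}_M&=(1,0,0,0) &&\Rightarrow\quad (z_1,z_2,z_3)=(0,0,1),
\end{align*}
so that $\theta(\vect{s}_M)$ attains $|c_1|$, $|c_2|$ or $|c_3|$, respectively.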
The correlation matrix for an arbitrary BD state [cf. \eqref{BD states}] takes the form,
\begin{align}
\mathbb{T}=\begin{bmatrix}
c_1 & 0 & 0 \\
0 & c_2 & 0 \\
0 & 0 & c_3
\end{bmatrix},
\end{align}
revealing the presence of correlations between the Pauli spin observables $\sigma^A_i$ and $\sigma^B_i$ of each subsystem, $i\in \{1,2,3\}$. \par
After some algebra, when a measurement parametrized by equations (\ref{param1}) and (\ref{param2}) is performed on subsystem $A$, it is straightforward to verify that the correlation matrix associated with the state after the measurement can be written as,
\begin{align}
\mathbb{T}(\vect{s})=\begin{bmatrix}
c_1z_1(\vect{s})^2 & c_2z_1(\vect{s})z_2(\vect{s}) & c_3z_1(\vect{s})z_3(\vect{s}) \\
c_1z_1(\vect{s})z_2(\vect{s}) & c_2z_2(\vect{s})^2 & c_3z_2(\vect{s})z_3(\vect{s}) \\
c_1z_1(\vect{s})z_3(\vect{s}) & c_2z_2(\vect{s})z_3(\vect{s}) & c_3z_3(\vect{s})^2
\end{bmatrix}.
\end{align}
If we choose $\vect{s}$ maximizing $\mathcal{J}_S$, i.e., $\vect{s}=\vect{s}_M$, the correlation matrix becomes diagonal with only one non--vanishing element given by $c=\max\{|c_1|,|c_2|,|c_3|\}$. For example,
\begin{itemize}
\item if $c=|c_1|$,
\begin{align}\label{matrixc1}
\mathbb{T}(\vect{s}_M)=\begin{bmatrix}
c_1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\end{align}
\item if $c=|c_2|$,
\begin{align}\label{matrixc2}
\mathbb{T}(\vect{s}_M)=\begin{bmatrix}
0 & 0 & 0 \\
0 & c_2 & 0 \\
0 & 0 & 0
\end{bmatrix},
\end{align}
\item if $c=|c_3|$,
\begin{align}
\mathbb{T}(\vect{s}_M)=\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & c_3
\end{bmatrix}.
\end{align}
\end{itemize}
Taking into account that after a measurement the state can only exhibit classical correlations \cite{Majtey17,HV01,ABC16}, the elements in the correlation matrix which remain invariant after the (optimal) measurement can be associated with this kind of correlations \cite{HV01,OZ02,Luo08b}. Thus, BD states exhibit only one classical correlation in the direction determined by $c=\max\{|c_1|,|c_2|,|c_3|\}$. On the other hand, those elements of $\mathbb{T}$ suppressed by the measurement can be identified with quantum correlations.
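The identification above can be checked numerically (a sketch of our own, reusing \texttt{bd\_state}, \texttt{correlation\_matrix}, \texttt{SIG} and \texttt{I2} from the previous sketches): applying the optimal projective measurement on $A$ and recomputing $\mathbb{T}$ leaves only the element along the direction of $c$.
\begin{verbatim}
import numpy as np

def post_measurement_state(rho, n):
    # projective measurement on A along the Bloch direction n (unit vector)
    P = (I2 + sum(ni*s for ni, s in zip(n, SIG))) / 2
    M = [np.kron(P, I2), np.kron(I2 - P, I2)]
    return sum(m @ rho @ m for m in M)

rho = bd_state((0.6, -0.6, 0.6))
# here |c_3| is (one of) the largest coefficients, so measure along z
rho_M = post_measurement_state(rho, (0.0, 0.0, 1.0))
print(np.round(correlation_matrix(rho_M), 4))   # only T_33 = c_3 survives
\end{verbatim}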
\subsection{Behavior of correlations under nondissipative decoherence\label{sec:decores1}}
Now, we turn to the study of a dynamical scenario where we shall consider two non-interacting qubits $A$ and $B$ under the influence of local and identical non-dissipative decoherence channels. In this case, the evolution of a two-qubit state $\rho$ can be written by means of the Kraus operators formalism, e.g.,
\begin{align}
\Lambda[\rho]=\sum_{i,j=1}^4(E^A_i\otimes E^B_j) \rho (E_i^{A\dagger} \otimes E_j^{B\dagger}),
\end{align}
where the Kraus operators are
\begin{align}
&E_{k}^m=\sqrt{\frac{1-\exp(-\gamma t)}{2}}\sigma^m_k, \\
&E_{4}^m=\sqrt{\frac{1+\exp(-\gamma t)}{2}}\mathbb{I}_2, \\
&E_{i}^m=0 \;\;\text{for}\;\; i\neq k,4,
\end{align}
\noindent and $m=A,B$ stands for the qubit $A$ or $B$, $k\in\{1,2,3\}$ is in correspondence with the $\{$\textit{bit flip}, \textit{bit-phase flip}, \textit{phase flip}$\}$ channels, and $\gamma\in\mathbb{R}_{\geq0}$ is the decoherence rate. A particular choice of $k$ defines the direction $x$, $y$, $z$ of the noise in the Bloch sphere and establishes the decoherence process.
If the $A+B$ system is initially in a BD state its structure will remain unchanged for all $t$ \cite{Modi2012,Czele2011,LoFranco2013,Maziero2009,Ferraro2010}. In this scenario, the coefficients $c_i$ are functions of $t$ and are given by
\begin{align}
&c_k(t)=c_k(0), \\
&c_{i}(t)=c_{i}(0)e^{-2\gamma t} \;\;\text{for}\;\; i\neq k.
\end{align}
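These decay laws can be reproduced directly from the Kraus representation (again a sketch of our own, for the phase-flip case $k=3$, reusing \texttt{bd\_state}, \texttt{correlation\_matrix}, \texttt{SIG} and \texttt{I2}):
\begin{verbatim}
import numpy as np

def phase_flip_channel(rho, gamma, t):
    # identical local phase-flip channels (k = 3) acting on both qubits
    p = (1 - np.exp(-gamma*t)) / 2
    E = [np.sqrt(p) * SIG[2], np.sqrt(1 - p) * I2]
    out = np.zeros_like(rho)
    for Ea in E:
        for Eb in E:
            K = np.kron(Ea, Eb)
            out += K @ rho @ K.conj().T
    return out

c0, gamma, t = (0.6, -0.6, 0.6), 1.0, 0.4
rho_t = phase_flip_channel(bd_state(c0), gamma, t)
print(np.round(np.diag(correlation_matrix(rho_t)), 4))
# expected: (c_1(0) e^{-2 gamma t}, c_2(0) e^{-2 gamma t}, c_3(0))
\end{verbatim}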
The freezing phenomenon of QD may occur if certain particular initial conditions are satisfied, as for example:
\begin{align}
c_i(0)&=\pm 1, \\
c_j(0)&=\mp c_k(0),
\end{align}
with $\left|c_k(0)\right|\equiv c_0$ and $k\in\{1,2,3\}$ denoting the corresponding channels.
The evolution of the system from the above initial conditions gives rise to a peculiar dynamics. In particular, some measures of quantum correlations \cite{Cianciaruso2015} remain constant for all $t\in [0,t^*]$, where $t^*=-\frac{1}{2\gamma}\log c_0$. However, for $t>t^*$ they start to decay with $t$.\par
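For instance (an illustrative value, using the parameters of the examples below), with $\gamma=1$ and $c_0=0.6$ the transition time is $t^*=-\frac{1}{2}\ln 0.6 \approx 0.255$, which is where the sudden change of behavior appears in Fig.~\ref{Fig2}.\par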
\subsubsection{Correlation matrix}
Here we analyze the dynamics of quantum and classical correlations under a non-dissipative decoherence process by using the results of Sect.~\ref{sec:correMatr}.
In the case of the phase flip channel ($k=3$), the correlation matrix for a BD state can be written as
\begin{align}
\mathbb{T}=\begin{bmatrix}
c_{10}e^{-2\gamma t} & 0 & 0 \\
0 & c_{20}e^{-2\gamma t} & 0 \\
0 & 0 & c_{30}
\end{bmatrix},
\end{align}
with $c_j(0)=c_{j0}$, $j\in\{1,2,3\}$. After performing the optimal measurement, the structure of the correlation matrix will be determined by $$c(t)=\max\{|c_1(t)|,|c_2(t)|,|c_3(t)|\}$$ which in turn will depend upon the initial conditions. \par
In order to illustrate the use of the correlation matrix for the analysis of the correlations present in the system, in what follows we shall consider two examples corresponding to two different sets of initial conditions.\par
\noindent \textit{Example 1:} Let us consider the simple case where we set $c_{10}=c_0$, $c_{20}=-c_0$ and $c_{30}=c_0$. As $c_1(t)$ and $c_2(t)$ both decay with time, it is clear that $c(t)=c_0$. In this case we have,
\begin{align}
\mathbb{T}=\begin{bmatrix}
c_0e^{-2\gamma t} & 0 & 0 \\
0 & -c_0e^{-2\gamma t} & 0 \\
0 & 0 & c_0
\end{bmatrix},
\end{align}
and after performing the optimal measurement the correlation matrix takes the form,
\begin{align}
\mathbb{T}(\vect{s}_M)=\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & c_0
\end{bmatrix},
\end{align}
for all $t\in[0,\infty)$. Therefore, after the measurement, the invariant element turns out to be $T_{33}=c_0$. Thus this element is associated with a classical correlation. In contrast, as the elements $c_1(t)$, $c_2(t)$ are suppressed under measurement, they are associated with quantum correlations. Here we consider the optimal measurement corresponding to the computation of the classical correlations $\mathcal{J}_S$. Following our analysis, quantum discord $\mathcal{D}$ should also decay and the classical correlations should remain constant. This matches perfectly with the behaviour of correlations shown in Fig. \ref{Fig1}.\par
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig1v4.pdf}
\caption{Dynamics of $\mathcal{D}$ (solid line), $\mathcal{I}$ (dashed line), $\mathcal{J}_S$ (dotted line) and $d_A$ (dash-dotted line) as a function of $t$ ($\gamma=1$) for $c_1(0)=-c_2(0)=c_3(0)=0.6$ and $k=3$. All shown quantities are dimensionless.}
\label{Fig1}
\end{figure}
\noindent \textit{Example 2:} Now, let us consider the freezing phenomenon of quantum discord. In this case, we set the initial conditions as follows: $c_{10}=1$, $c_{20}=-c_0$, and $c_{30}=c_0$. The behavior of the (total, classical, and quantum) correlations measures and the matrix elements are plotted in Fig. \ref{Fig2}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig2v4.pdf}
\caption{Dynamics of $\mathcal{D}$ (solid line), $\mathcal{I}$ (dashed line), $\mathcal{J}_S$ (dotted line) and $d_A$ (dash-dotted line) as a function of $t$ ($\gamma=1$) for $c_1(0)=1$, $c_3(0)=-c_2(0)=0.6$ and $k=3$. All shown quantities are dimensionless.}
\label{Fig2}
\end{figure}
In this case there exists a clear change in the correlation matrix $\mathbb{T}(\vect{s}_M)$ at $t^*$ and a sharp transition of quantum and classical correlations is also observed. The correlation matrix takes the form,
\begin{align}
\label{matrixTfreezing}
\mathbb{T}=\begin{bmatrix}
e^{-2\gamma t} & 0 & 0 \\
0 & -c_0e^{-2\gamma t} & 0 \\
0 & 0 & c_0
\end{bmatrix},
\end{align}
and after performing the optimal measurement, for $t\in (t^*,\infty)$ we have $c(t)=|c_3|=|c_0|$ leading to the following matrix,
\begin{align}
\mathbb{T}(\vect{s}_M)=\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & c_0
\end{bmatrix}.
\end{align}
Thus, $T_{33}=c_0$ is associated with a classical correlation whereas the remaining elements $T_{11}=e^{-2\gamma t}$, $T_{22}=-c_0e^{-2\gamma t}$ are associated with quantum correlations. This analysis is in agreement with Fig. \ref{Fig2} where it can be seen that while the measure of classical correlations remains constant, quantum discord decays with $t$. However, for $t\in [0,t^*)$ we have $c(t)=|c_1(t)|$. Therefore, the correlation matrix takes the form,
\begin{align}
\label{matrixTfreezingCC2}
\mathbb{T}(\vect{s}_M)=
\begin{bmatrix}
e^{-2\gamma t} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}.
\end{align}
\noindent If we compare Eqs. \eqref{matrixTfreezingCC2} and (\ref{matrixTfreezing}), we can see that now $T_{11}=e^{-2\gamma t}$ is associated with a classical correlation and $T_{22}=-c_0e^{-2\gamma t}$, $T_{33}=c_0$ are associated with quantum correlations. The measure of classical correlations decays in time in the same way as the von Neumann total information does. Thus, quantum discord remains constant in this case, exhibiting the freezing phenomenon. By analyzing the corresponding correlation matrix, this behavior seems controversial because only one of the elements associated with quantum correlations remains invariant with $t$ ($T_{33}$). Since quantum discord is also a function of the elements of $\mathbb{T}$, a question that naturally arises is whether QD truly reflects what happens with the \textit{actual} quantum correlations in this time interval.
\subsubsection{NCMQC for Bell-diagonal states\label{sec:opt}}
In order to explore further the results obtained in the previous section, we now compute the non-commutativity measure of quantum correlations introduced in Sect.~\ref{sec:NCMQC} [cf. Eq. \eqref{dnos}] for Bell-diagonal states [cf. Eq. \eqref{BD states}] under the influence of local non-dissipative decoherence. We consider the same two examples as in the previous section.
Following \cite{Bussandri19} we have,
\begin{align}
&\left[A_{ij},A_{kl}\right]=\frac{i}{8}\Big[ c_1c_2\alpha^{(12)}_{ijkl}\sigma_3+c_1c_3\alpha^{(31)}_{ijkl}\sigma_2+c_2c_3\alpha^{(23)}_{ijkl}\sigma_1 \Big],
\end{align}
with $\alpha_{ijkl}^{(mn)}=\sigma_m^{ij} \sigma_n^{kl} - \sigma_n^{ij} \sigma_m^{kl}$. Thus, after some algebra the HS norm of the commutators can be written as
\begin{align}\label{26}
\begin{Vmatrix}
\left[A_{ij},A_{kl}\right]
\end{Vmatrix}_2^2&=\frac{1}{2^5}\Big(
\left|c_1c_2\right|^2\left|\alpha_{ijkl}^{(12)} \right|^2 +\left|c_1c_3\right|^2\left|\alpha_{ijkl}^{(31)} \right|^2 +\left|c_2c_3\right|^2\left|\alpha_{ijkl}^{(23)} \right|^2\Big),
\end{align}
where $\sigma_k^{ij}=\bra{i_B}\sigma_k \ket{j_B}$.
The optimization procedure involved in the NCMQC requires a suitable parametrization of the basis of $\mathcal{H}_B$. In this case, we have considered $\{\ket{i_B}\}=\{U\ket{i_B}_c\}$, with $\{\ket{i_B}_c\}$ being the computational basis and $U$ a unitary operator which can also be parametrized according to Eq. \eqref{param2}.
After some straightforward calculations, the resulting expression to be minimized turns out to be
\begin{align}
D_A(\rho)=&\frac{1}{\sqrt{2^3}} \sqrt{c_2^2c_3^2z_1(\vect{s})^2 + c_1^2c_3^2z_2(\vect{s})^2 + c_2^2c_1^2z_3(\vect{s})^2 }+\nonumber\\
+&\frac{1}{\sqrt{2}}\sqrt{c_2^2c_3^2\zeta_1(\vect{s}) + c_1^2c_3^2\zeta_2(\vect{s}) + c_2^2c_1^2\zeta_3(\vect{s}) }\label{eq:DANCMQD},
\end{align}
with $z_1(\vect{s})$, $z_2(\vect{s})$ and $z_3(\vect{s})$ defined according to Eqs. \eqref{eq:z1}, \eqref{eq:z2}, and \eqref{eq:z3} respectively, and $\zeta_i(\vect{s})=1-z_i(\vect{s})^2, i=1,2,3$.
After some algebra (see appendix \ref{sec:optimNCMQD}) the optimized measure can be written as:
\begin{equation}
\begin{split}
d_A(\rho)=\frac{1}{\sqrt{2^3}}\min \{ \ &|c_1c_2|+2\sqrt{(c_2c_3)^2+(c_1c_3)^2} , \\ &|c_2c_3|+2\sqrt{(c_1c_2)^2+(c_1c_3)^2}, \\ &|c_1c_3|+2\sqrt{(c_1c_2)^2+(c_2c_3)^2} \ \}.
\label{eq:dArhooptim}
\end{split}
\end{equation}
We evaluate Eq. \eqref{eq:dArhooptim} for the initial conditions corresponding to both of the examples considered before. In the first case, we choose $c_{10}=c_0$, $c_{20}=-c_0$ and $c_{30}=c_0$, and the freezing phenomenon of QD is absent. In the second case, we choose $c_{10}=1$, $c_{20}=-c_0$, and $c_{30}=c_0$, and the freezing phenomenon of QD takes place. The dynamics of the NCMQC for each set of initial conditions is shown in Figs. \ref{Fig1} and \ref{Fig2}. The behavior exhibited by this measure seems to follow more reliably the behaviour shown by the correlations associated with the elements of the matrix $\mathbb{T}$ than the measure of QD does.
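For completeness, Eq. \eqref{eq:dArhooptim} can be evaluated along the decohered trajectories of the two examples with a few lines of code (a sketch of our own, assuming only \texttt{numpy}):
\begin{verbatim}
import numpy as np

def d_A_bd(c1, c2, c3):
    # optimized NCMQC for a BD state, cf. Eq. (eq:dArhooptim)
    a, b, c = abs(c1*c2), abs(c2*c3), abs(c1*c3)
    return min(a + 2*np.sqrt(b**2 + c**2),
               b + 2*np.sqrt(a**2 + c**2),
               c + 2*np.sqrt(a**2 + b**2)) / np.sqrt(8)

gamma, c0 = 1.0, 0.6
for t in (0.0, 0.1, 0.3, 0.5):
    decay = np.exp(-2*gamma*t)
    ex1 = d_A_bd(c0*decay, -c0*decay, c0)   # Example 1 (no discord freezing)
    ex2 = d_A_bd(1.0*decay, -c0*decay, c0)  # Example 2 (discord freezing)
    print(f"t={t:.1f}  d_A(ex1)={ex1:.4f}  d_A(ex2)={ex2:.4f}")
\end{verbatim}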
\section{Concluding remarks\label{sec:conclusions}}
In this paper we studied the behavior of correlations under non-dissipative decoherence in two-qubit states with maximally mixed marginals by means of the Fano representation which allows us to identify a correlation matrix. From the behavior of the elements of this correlation matrix before and after making measurements on one of the subsystems, we have been able to identify the classical and quantum correlations present in the bipartite states. In addition, we used the correlation matrix to study the phenomenon of freezing of quantum discord under non-dissipative decoherence. We found that under some initial conditions, where freezing of quantum discord takes place, the actual quantum correlations may not remain constant. In order to obtain further insights into these findings we also computed a non-commutativity measure of quantum correlations in the same dynamical scenario. We conclude from our study that freezing of quantum discord may not always be identified as equivalent to the freezing of the actual quantum correlations.\par
Naturally, our conclusions may be extended to other measures or quantifiers of quantum correlations eventually reflecting the same kind of freezing behavior. It seems that caution must be exercised regarding the interpretation of freezing of a certain measure of quantum correlations as equivalent to the freezing of the actual quantum correlations present in the physical system.
\begin{acknowledgements}
D.B., T.M.O, A.P.M., and P.W.L. acknowledge the Argentinian agency SeCyT-UNC and CONICET for financial support. D. B. has a fellowship from CONICET.
\end{acknowledgements}
\section*{Appendix: Optimization of the NCMQC}
\label{sec:optimNCMQD}
In order to minimize Eq. \eqref{eq:DANCMQD} let us consider the function $f(\vect{x})$:
\begin{align}
f(\vect{x})=&\sqrt{\sum_{i=1}^3 \alpha_i x_i^2} + 2\sqrt{\sum_{i=1}^3 \alpha_i (1-x_i^2)}, \\
g(\vect{x})=&\sum_i x_i^2=1\label{eq:constfx},
\end{align}
with $g(\vect{x})=1$ and $\vect{x}=(x_1,x_2,x_3)$. Up to a constant factor, $D_A(\rho)$ is a particular case of $f(\vect{x})$: $\alpha_1=(c_2c_3)^2$, $\alpha_2=(c_1c_3)^2$, $\alpha_3=(c_2c_1)^2$ and the variables $\vect{x}=(x_1,x_2,x_3)$ represent the quantities $\vect{z}=( z_1(\vect{s}) ,z_2(\vect{s}),z_3(\vect{s}) )$.
Following the method of Lagrange multipliers we have
\begin{equation}
\frac{\partial f}{\partial x_p}=\lambda\frac{\partial g}{\partial x_p},
\end{equation}
where $p\in\{1,2,3\}$. Thus, we can write the following equations:
\begin{align}
\frac{\alpha_k x_k}{\sqrt{\theta}}-2\frac{\alpha_k x_k}{\sqrt{\alpha - \theta}}=2 \lambda x_k \label{eq:xk}, \\
\frac{\alpha_i x_i}{\sqrt{\theta}}-2\frac{\alpha_i x_i}{\sqrt{\alpha - \theta}}=2 \lambda x_i \label{eq:xi}, \\
\frac{\alpha_j x_j}{\sqrt{\theta}}-2\frac{\alpha_j x_j}{\sqrt{\alpha - \theta}}=2 \lambda x_j \label{eq:xj},
\end{align}
where $\theta=\sum_{p} \alpha_p x_p^2$, $\alpha=\sum_p \alpha_p $ and $i, j, k$ ($i\neq j \neq k$) are numbers belonging to the set $\{1,2,3\}$.
Without loss of generality, we may assume that $\alpha_k$, $\alpha_i$ and $\alpha_j$ are different from zero. In view of the constraint $g(x_1,x_2,x_3)=1$ [cf. Eq. \eqref{eq:constfx}], let us suppose that $x_k=0$, $x_i=0$ and $x_j=1$. Then Eqs. $\eqref{eq:xk}$ and $\eqref{eq:xi}$ are satisfied and Eq. $\eqref{eq:xj}$ can be fulfilled by taking
\begin{equation}
\lambda=\frac{\alpha_j}{2\sqrt{\theta}}-\frac{\alpha_j}{\sqrt{\alpha - \theta}}. \label{eq:xj2}
\end{equation}
Thus, bearing in mind the permutation of $i,j,k$, we obtain three extremal points, i.e., $\vect{x}_e\in \{(0,0,1),(0,1,0),(1,0,0)\}$.
Let us take now $x_k=0$, and $x_i\not =0$, $x_j\not =0$. Then, we have the three quantities $x_i$, $x_j$ and $\lambda$ to be determined taking into account \eqref{eq:xi}, \eqref{eq:xj} and $x_i^2+x_j^2=1$. Accordingly,
\begin{align}
\lambda=\frac{\alpha_i}{2}\left(\frac{1}{\sqrt{\theta}}-\frac{2}{\sqrt{\alpha-\theta}}\right).
\end{align}
Now, if $\alpha_i \not = \alpha_j$ from Eq. \eqref{eq:xj} we obtain
\begin{align}
\frac{1}{\sqrt{\theta}}=\frac{2}{\sqrt{\alpha-\theta}}\label{eq:thetaeq}.
\end{align}
The equality in Eq. \eqref{eq:thetaeq} holds iff $\theta=\frac{1}{5}\alpha$. Therefore, the extremal points are given by
\begin{align}
x_k&=0, \\
x_i^2+x_j^2&=1, \\
\alpha_ix_i^2+\alpha_jx_j^2&=\frac{1}{5}\sum_{p=1}^3\alpha_p.
\end{align}
On the contrary, if $\alpha_i = \alpha_j$, Eqs. \eqref{eq:xi} and \eqref{eq:xj} are trivially fulfilled. It can be verified that the general case, i.e., $x_k \not =0$, $x_i \not =0$ and $x_j \not =0$, can be solved following the previous calculations and gives the same extreme value $\theta=\frac{1}{5}\alpha$.
In summary, we have two types of extremal points. First $\vect{x}_e\in\{(0,0,1),(0,1,0),(1,0,0)\}$, secondly, $\vect{x}_e$ such that $\sum_p x_p^2=1$ and $\theta=\frac{1}{5}\alpha$.
Let us see which of them is a minimum of the function $f(\vect{x})$. Consider the one-dimensional function $\hat{f}(\theta)=\sqrt{\theta}+2\sqrt{\alpha-\theta}$. It is easy to see that it is a concave function of $\theta$ with an extremal point at $\theta=\frac{1}{5}\alpha$. Therefore, this case corresponds to a local maximum. Thus, the minimum of the function should be attained at the boundary points:
\begin{align}
\theta_{\min}=\min_{\vect{x}\in\mathcal{G}} \{ \theta \},\\
\theta_{\max}=\max_{\vect{x}\in\mathcal{G}} \{\theta\},
\end{align}
being $\mathcal{G}=\{\vect{x}\in \mathbb{R}^3 : g(\vect{x})=1 \}$. Following \cite{Luo08b}, as $\theta=\sum_p \alpha_p x_p^2\leq c_{+}\sum_p x^2_p=c_{+}$ (and similarly $\theta\geq c_{-}$), we have
\begin{align}
\theta_{\min}=c_{-}=\min\{\alpha_1,\alpha_2,\alpha_3\},\\
\theta_{\max}=c_{+}=\max \{ \alpha_1,\alpha_2,\alpha_3 \}.
\end{align}
These $\theta$ values do coincide with our first type of extremal points $\vect{x}_e$. As a consequence, the minimum of the function $f(\vect{x})$ turns out to be:
\begin{align}
f_{\min}=\min \{ \ &\sqrt{\alpha_1}+2\sqrt{\alpha_2+\alpha_3} \ , \ \sqrt{\alpha_2}+2\sqrt{\alpha_1+\alpha_3} \ , \nonumber \\ &\sqrt{\alpha_3}+2\sqrt{\alpha_2+\alpha_1} \ \}.
\end{align}
Finally, the optimized measure $d_A(\rho)$ can be written as:
\begin{equation}
\begin{split}
d_A(\rho)=\frac{1}{\sqrt{2^3}}\min \{ \ &|c_1c_2|+2\sqrt{(c_2c_3)^2+(c_1c_3)^2} , \\ &|c_2c_3|+2\sqrt{(c_1c_2)^2+(c_1c_3)^2}, \\ &|c_1c_3|+2\sqrt{(c_1c_2)^2+(c_2c_3)^2} \ \}.
\end{split}
\end{equation}
It is important to realize that the extremal points $\vect{z}_e=(z_1^e,z_2^e,z_3^e)\in\{ (0,0,1),(0,1,0),(1,0,0) \}$ can always be attained by a suitable choice of $\vect{s}$ \cite{Luo08b}.
\begin{thebibliography}{}
\bibitem{Knill98} Knill, E., Laflamme, R.: Power of One Bit of Quantum Information. Phys. Rev. Lett. {\bf 81}, 5672 (1998)
\bibitem{LCNV} Laflamme, R., Cory, D. G., Negrevergne, C. \& Viola, L.: NMR quantum information processing and entanglement. Quantum. Inf. and Comp. {\bf 2}, 166 (2002)
\bibitem{Braun99} Braunstein, S. L., Caves, C. M., Jozsa, R., Linden, N., Popescu, S., Schack R.: Separability of very noisy mixed states and implications for NMR Quantum computing. Phys. Rev. Lett. {\bf 83}, 1054 (1999)
\bibitem{Meyer00} Meyer, D., A.: Sophisticated Quantum Search Without Entanglement. Phys. Rev. Lett. {\bf 85}, 2014 (2000)
\bibitem{Datta05} Datta, A., Flammia, S., T., Caves, C., M.: Entanglement and the power of one qubit. Phys. Rev. A {\bf 72}, 042316 (2005)
\bibitem{Datta07} Datta A., Vidal, G.: Role of entanglement and correlations in mixed-state quantum computation. Phys. Rev. A {\bf 75}, 042310 (2007)
\bibitem{Datta08} Datta, A., Shaji, A., Caves, C. M.: Quantum Discord and the Power of One Qubit. Phys. Rev. Lett. {\bf 100}, 050502 (2008)
\bibitem{Lanyon08} B. P. Lanyon, M. Barbieri, M. P. Almeida, A. G. White, Phys. Rev. Lett. {\bf 101}, 200501 (2008).
\bibitem{ABC16} Adesso, G., Bromley, T., R., Cianciaruso, M.: Measures and applications of quantum correlations. J. Phys. A: Math. Theor. {\bf 49}, 473001 (2016)
\bibitem{Guo16} Guo Y.: Non-commutativity measure of quantum discord. Sci. Rep. {\bf 6}, 25241 (2016)
\bibitem{Majtey17} Majtey, A., P., Bussandri, D., G., Ossan T., G., Lamberti P., W., Valdés-Hernández A.: Problem of quantifying quantum correlations with non-commutative discord. Quantum Inf. Process {\bf 16}, 226 (2017)
\bibitem{Modi2012} Modi, K., Brodutch, A., Cable, H., Paterek, T., Vedral, V.: The classical-quantum boundary for correlations: Discord and related measures. Rev. Mod. Phys. \textbf{84}, 1655 (2012)
\bibitem{Czele2011} C\'eleri, L., C., Maziero, J., and Serra, R., M., Theoretical and experimental aspects of quantum discord and related measures, Int. J. Quantum Inf. \textbf{09}, 1837 (2011).
\bibitem{LoFranco2013} Lo Franco, R., Bellomo, B., Maniscalco, S., and Compagno, G., Dynamics of quantum correlations in two-qubit systems within non-Markovian environments, Int. J. Mod. Phys. B \textbf{27}, 1345053 (2013).
\bibitem{Maziero2009} Maziero, J., C\'eleri, L., C., Serra, R., M., and Vedral, V., Classical and quantum correlations under decoherence, Phys. Rev. A \textbf{80}, 044102 (2009).
\bibitem{Ferraro2010} Ferraro, A., Aolita., L., Cavalcanti, D., Cucchietti, F. M., and Acín, A., Optimal reconstruction of the states in qutrit systems, Phys. Rev. A \textbf{81}, 044102 (2010).
\bibitem{Mazzola2010} Mazzola, L., Piilo, J., and Maniscalco, S., Sudden Transition between Classical and Quantum Decoherence, Phys. Rev. Lett. \textbf{104}, 200401 (2010).
\bibitem{Haikka2013} Haikka, P., Johnson, T., H., and Maniscalco, S., Non-Markovianity of local dephasing channels and time-invariant discord, Phys. Rev. A \textbf{87}, 010103(R) (2013).
\bibitem{Mazzola2011} Mazzola L., Piilo, J., and Maniscalco, S., Frozen discord in non-Markovian dephasing channels, Int. J. Quantum Inf. {\bf 09}, 981 (2011).
\bibitem{Mannone2013} Mannone, M., Lo Franco, R., and Compagno, G., Comparison of non-Markovianity criteria in a qubit system under random external fields, Phys. Scr. T153, 014047 (2013).
\bibitem{LoFranco2012} Lo Franco, R., Bellomo, B., Andersson, E., and Compagno, G., Revival of quantum correlations without system-environment back-action, Phys. Rev. A {\bf 85}, 032318 (2012).
\bibitem{You2012} You B. , and Cen L.-X., Necessary and sufficient conditions for the freezing phenomena of quantum discord under phase damping, Phys. Rev. A {\bf 86}, 012102 (2012).
\bibitem{HV01} Henderson, L., Vedral, V.: Classical, quantum and total correlations. J. Phys. A {\bf 34}, 6899 (2001)
\bibitem{OZ02} Ollivier, H., Zurek, W., H.: Quantum Discord: A Measure of the Quantumness of Correlations. Phys. Rev. Lett {\bf 88}, 017901 (2001)
\bibitem{Fano1983} Fano, U., Correlations of two excited electrons, Rep. Prog. Phys. {\bf 46}, 97 (1983).
\bibitem{Hioe1981} Hioe, F., T., and Eberly J., H., N-Level Coherence Vector and Higher Conservation Laws in Quantum Optics and Quantum Mechanics, Phys. Rev. Lett. {\bf 47}, 838 (1981).
\bibitem{Schlienz1995} Schlienz, J. and Mahler, G. , Description of entanglement, Phys. Rev. A {\bf 52}, 4396 (1995).
\bibitem{Ma1985} Ma, S-K, \emph{Statistical Mechanics} (World Scientific, Singapore, 1985).
\bibitem{Peter2001} Peter Š. and Vladimír B., Dynamics of open quantum systems initially entangled with environment: Beyond the Kraus representation, Phys. Rev. A {\bf 64}, 062106 (2001).
\bibitem{Huang2008} Huang, J.-H. and Zhu, S.-Y., Measure of classical correlation in a two-qubit state, J. Phys. A: Math. Theor. {\bf 41}, 125301 (2008).
\bibitem{Dong2010} Dong R. X. and Zhou D. L., Correlation function and mutual information, J. Phys. A: Math. Theor. {\bf 43} 445302 (2010)
\bibitem{Horodeckis2009} Horodecki, R., Horodecki, P., Horodecki, M., and Horodecki, K., Quantum entanglement, Rev. Mod. Phys. {\bf 81}, 865 (2009).
\bibitem{Luo08b} Luo S.: Quantum discord for two-qubit systems. Phys. Rev. A {\bf 77}, 042303 (2008)
\bibitem{Cianciaruso2015} Cianciaruso, M., Bromley, T., R., Roga, W., Lo Franco R., and G. Adesso, Universal freezing of quantum correlations within the geometric approach, Sci. Rep. \textbf{5}, 10177 (2015).
\bibitem{Bussandri19} Bussandri, D., G., Majtey, A., P.,Vald\'es-Hern\'andez, A., Generalized approach to quantify correlations in bipartite quantum systems. Quantum Inf. Process. {\bf 18}, 47 (2019).
\end{thebibliography}
\end{document}
\begin{document}
\title{\Large Control-Affine Extremum Seeking Control with Attenuating Oscillations: A Lie Bracket Estimation Approach \thanks{Dr. Eisa acknowledges the fund of the 2023 Office of Research URC Faculty Scholars Research Award at the University of Cincinnati, OH-USA.}}
\author{Sameer Pokhrel\thanks{Ph.D. student at the Department of Aerospace Engineering and Engineering Mechanics, University of Cincinnati, OH, USA. Email: [email protected]}
\and Sameh A. Eisa \thanks{Assistant professor at the Department of Aerospace Engineering and Engineering Mechanics, University of Cincinnati, OH, USA. Email: [email protected] and [email protected].}}
\date{}
\maketitle
\begin{abstract} \small \baselineskip=9pt
Control-affine Extremum Seeking Control (ESC) systems have been increasingly studied and applied in the last decade. In a recent effort, many control-affine ESC structures have been generalized in a unifying class and their stability was analyzed. However, guaranteeing vanishing oscillations at the extremum point for said class requires strong conditions that may not be feasible or easy to check/design by the user, especially when the gradient of the objective function is unknown. In this paper, we introduce a control-affine ESC structure that remedies this problem such that: (i) its oscillations attenuate structurally via a novel application of a geometric-based Kalman filter and a Lie bracket estimation approach; and (ii) its stability is characterized by a time-dependent (one-bound) condition that is easier to check and relaxed when compared to the generalized approach mentioned earlier. We provide numerical simulations of three problems to demonstrate the effectiveness of our proposed ESC; these problems cannot be solved with vanishing oscillations using the generalized approach in the literature.
\end{abstract}
\section{Introduction} \label{s: Intro}
Extremum Seeking Control (ESC) is a model-free adaptive control technique that stabilizes a steady state map of a dynamical system around the extremum point of an objective function to which we have access only through measurements, not through its expression. In 2000, what is known as the classic ESC structure \cite{KRSTICMain} was analyzed through singular perturbation and averaging (refer to \cite{Maggia2020higherOrderAvg} for more on averaging theory), and its stability was characterized. This classic structure has been extended to many other forms and found many applications \cite{ariyur2003real,scheinker2017modelfreebook}.
In this paper, we focus on control-affine ESC systems \cite{ESCTracking, DURR2013, scheinker2016boundedDither,BoundedUpdateKrstic,ExpStabSUTTNER2017,eisa2023}, which often deal with problems/applications naturally expressed in a control-affine formulation -- such as, but not limited to, multi-agent systems. In control-affine ESC structures, the objective function is not perturbed/modulated directly by the input signal; rather it -- or a functional of it -- is multiplied by the input signal, resulting in the objective function being incorporated within the vector fields of the control-affine ESC system. This has invoked the use of Lie bracket-based analysis, as these tools are very natural to control-affine systems as known in geometric control theory \cite{bullo2019geometric}. Durr et al. \cite{DURR2013} first used the concept of Lie Bracket System (LBS) approximation of ESCs, mainly for stability characterization. They called it ``the corresponding LBS." They showed that under certain assumptions, the practical stability property of a control-affine ESC system follows from the asymptotic stability property of the corresponding LBS; in essence, LBS characterization of control-affine ESCs is analogous to singular perturbation and averaging characterization of classic ESC-related structures. In the years following \cite{DURR2013}, many researchers adopted structures that utilize LBS-based approaches for characterizing stability and improving the ESC performance (e.g., \cite{DURR2017,GRUSHKOVSKAYA2017}). Recently, Grushkovskaya et al. \cite{VectorFieldGRUSHKOVSKAYA2018} provided a generalized class for many control-affine ESC structures \cite{scheinker2016boundedDither,BoundedUpdateKrstic,ExpStabSUTTNER2017}, including the work of Durr et al. \cite{DURR2013}. In their work, if certain conditions and bounds on the objective function and its derivatives are guaranteed, one is able to guarantee vanishing control inputs at the extremum point (this effort has been extended in \cite{taha2021vanishing}); hence, asymptotic convergence of the ESC system is achieved.
Since Durr et al. \cite{DURR2013} introduced their ESC approach, until the generalization of control-affine ESCs by Grushkovskaya et al. \cite{VectorFieldGRUSHKOVSKAYA2018}, there seems to be a consistent trade-off: approaches of ESC with a simple stability condition but persistent oscillations (e.g., \cite{DURR2013}), or approaches of ESC with complex stability conditions to guarantee vanishing oscillations (e.g., the generalized ESC approach in \cite{VectorFieldGRUSHKOVSKAYA2018}). Our motivation is to balance said trade-off by achieving relaxed stability conditions with vanishing oscillations. By examining the literature of classic ESC-related structures, the contribution made recently in \cite{AttenuatedOscillaiion2021} has made it possible to implement the classic ESC structure with attenuating oscillations and without over-complicating the stability condition of the classic ESC structure in \cite{KRSTICMain}. The key was finding the proper adaptation law for the amplitude of the control input. We aim at introducing an analogous concept to \cite{AttenuatedOscillaiion2021} \textit{but for control-affine ESC systems} to achieve the results of \cite{VectorFieldGRUSHKOVSKAYA2018} with minimal complications to the relaxed stability conditions in \cite{DURR2013}. In this paper, we propose an LBS-estimation approach for control-affine ESC systems which: (i) has attenuating oscillations that vanish at the extremum point; and (ii) has its stability characterized by one time-dependent condition/bound, with no bounds on the gradient derivatives.
Moreover, the framework proposed in this paper introduces a novel adaptation law utilizing a novel geometric-based Kalman filtering (GEKF) approach to estimate the LBS approximating a control-affine ESC. We solve numerically three problems including a multi-agent system to demonstrate the effectiveness of the proposed approach; examples chosen in this paper are for problems the generalized ESC approach in \cite{VectorFieldGRUSHKOVSKAYA2018} cannot solve with vanishing oscillations.
\section{Background and preliminaries}
Control-affine ESC systems can be written as \cite{DURR2013}:
\begin{equation}\label{eqn:ESC_back}
\dot{\bm{x}}=\bm{b_d}(t,\bm{x})+ \sum\limits_{i=1}^{m} \bm{b_i}(t,\bm{x})\sqrt{\omega}u_i(t,\omega t),\\
\end{equation}
with $\bm{x}(t_0)=\bm{x_0}\in \mathbb{R}^n$ and $\omega \in (0,\infty)$.
Here $\bm{x}$ is the state space vector, $\bm{b}_d$ is the drift vector field of the system, $u_i$ are the control inputs, $m$ is the number of control inputs, and $\bm{b}_i$ are the control vector fields.
Now, the corresponding LBS \cite{DURR2013} to (\ref{eqn:ESC_back}) is defined as in \eqref{eqn:Lie_back}:
\begin{equation}\label{eqn:Lie_back}
\dot{\bm{z}}=\bm{b_d}(t,\bm{z})+ \sum_{\substack{i=1\\j=i+1}}^m [\bm{b_i},\bm{b_j}](t,\bm{z})\nu_{j,i}(t),
\end{equation}
with $\nu_{j,i}(t)=\frac{1}{T}\int_0^T u_j(t,\theta)\int_0^\theta u_i(t,\tau)d\tau d\theta$.
The operation $[\cdot,\cdot]$ is the Lie bracket operation between two vector fields $\bm{b_i},\bm{b_j}: \mathbb{R} \times\mathbb{R}^n \rightarrow \mathbb{R}^n$ with $\bm{b_i}(t,\cdot),\bm{b_j}(t,\cdot)$ being continuously differentiable, and is defined as $[\bm{b_i},\bm{b_j}](t,\bm{x}):=\frac{\partial \bm{b_j}(t,\bm{x})}{\partial \bm{x}}\bm{b_i}(t,\bm{x})-\frac{\partial \bm{b_i}(t,\bm{x})}{\partial \bm{x}}\bm{b_j}(t,\bm{x})$. A particular class of the control-affine ESC in (\ref{eqn:ESC_back}), which generalizes many ESC systems in control-affine form found in the literature \cite{ESCTracking, DURR2013, scheinker2016boundedDither,BoundedUpdateKrstic,ExpStabSUTTNER2017,VectorFieldGRUSHKOVSKAYA2018, taha2021vanishing}, is provided below:
\begin{equation}\label{eqn:ESC}
\dot{{x}}={b_1}(f({x}))u_1+ {b_2}(f({x}))u_2,
\end{equation}
where $f({x})$ is the objective function. This generalized structure of ESCs \eqref{eqn:ESC} is shown in the unshaded parts of figure \ref{fig:ESC_scheme}, with $u_1=a\sqrt{\omega}\hat{u}_1(\omega t)$, $u_2=a \sqrt{\omega}\hat{u}_2(\omega t)$, where $a \in \mathbb{R}$ is the amplitude of the input signal. Now, we impose the following assumptions on ${b}_1,{b}_2$, $\hat{u}_1$ and $\hat{u}_2$.
\begin{enumerate}[label=A\arabic*.]
\item
$b_i\in C^2: \mathbb{R} \to \mathbb{R}, i={1,2}$ and for a compact set $\mathscr{C} \subseteq \mathbb{R}$, there exist $A_1, ..., A_3 \in [0,\infty)$ such that $|b_i(x)|\leq A_1,
| \frac{\partial b_i(x)}{\partial x}|\leq A_2,
|\frac{\partial [{b_j},{b_k}](x)}{\partial x}| \leq A_3$ for all $x\in \mathscr{C}, i={1,2};\; j={1,2};\; k={1,2}.$
\item
$\hat{u}_i: \mathbb{R} \times \mathbb{R} \to \mathbb{R} , i=1,2$, are measurable functions. Moreover, there exist constants $M_i \in (0,\infty)$ such that $\sup_{\omega t \in \mathbb{R}}|\hat{u}_i(\omega t)|\leq M_i$, and
$\hat{u}_i(\cdot)$ is $T$-periodic, i.e. $\hat{u}_i(\omega t + T)=\hat{u}_i(\omega t),$ and has zero average, i.e. $\int_0^T \hat{u}_i(\tau) d\tau = 0,$ with $T \in (0,\infty)$ for all $\omega t \in \mathbb{R}$.
\item
There exists an ${x}^* \in \mathscr{C}$ such that $\nabla f({x}^*)=0, \nabla f({x})\ne 0$ for all ${x}\in \mathscr{C}\backslash \{{x}^*\}; f({x}^*)=f^* \in \mathbb{R}$ is an isolated extremum value.
\end{enumerate}
\begin{figure}
\caption{Proposed ESC structure.}
\label{fig:ESC_scheme}
\end{figure}
\begin{Remark}\label{remark:assumptions1_3}
Assumptions A1-A3 are typical in ESC literature \cite{KRSTICMain,DURR2013}. A1-A2 ensure well-posedness, boundedness, zero-mean average, and measurability of the control inputs. A3 ensures the objective function $f$ has an isolated local extremum $f^*$ at $\bm{x}^*$.
\end{Remark}
\begin{theorem}\label{thm:from2018}
(from \cite{VectorFieldGRUSHKOVSKAYA2018}) Let A1-A2 be satisfied, then for the ESC system in (\ref{eqn:ESC}), the corresponding LBS is:
\begin{equation}\label{eqn:Lie}
\dot{{z}}=-\nu_{2,1} \nabla f({z}) b_0(f({z})),
\end{equation}
where $b_0(z)=b_2(z) \frac{db_1(z)}{dz}-b_1(z)\frac{db_2(z)}{dz}, z\in \mathbb{R}$.
For the multi-variable ESC ($\bm{x}\in \mathbb{R}^n$) in the following form
\begin{equation}\label{eqn:ESC_multi}
\dot{\bm{x}}=\sum \limits_{i=1}^n \left( b_{1i}(f({\bm{x}}))u_{1i}+ {b_{2i}}(f({\bm{x}}))u_{2i}\right) e_i,
\end{equation}
the corresponding LBS is given by
\begin{equation}\label{eqn:Lie_multi}
\dot{\bm{z}}=-\sum \limits_{i=1}^n \nu_{2i,1i} \frac{\partial f(\bm{z})}{\partial z_i} b_{0i}(f({\bm{z}})) e_i.
\end{equation}
where $f: \mathbb{R}^n \to \mathbb{R}$, $e_i$ denotes the $i^{th}$ unit vector in $\mathbb{R}^n$.
\end{theorem}
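As a quick illustration of Theorem \ref{thm:from2018} (a sanity check using only the definitions above), consider the common choice $b_1(f)=f$, $b_2(f)=1$ with sinusoidal dithers $\hat{u}_1(\omega t)=\cos(\omega t)$, $\hat{u}_2(\omega t)=\sin(\omega t)$ and $T=2\pi$ (keeping the $\sqrt{\omega}$ factor explicit as in \eqref{eqn:ESC}, so that $u_i=a\hat{u}_i$ enters the definition of $\nu_{j,i}$). Then
\begin{align*}
b_0(f)&=b_2(f)\frac{db_1(f)}{df}-b_1(f)\frac{db_2(f)}{df}=1,\\
\nu_{2,1}&=\frac{1}{2\pi}\int_0^{2\pi} a\sin\theta\left(\int_0^{\theta}a\cos\tau \,d\tau\right) d\theta=\frac{a^2}{2},
\end{align*}
so the corresponding LBS \eqref{eqn:Lie} reduces to the gradient flow $\dot{z}=-\frac{a^2}{2}\nabla f(z)$; this is exactly the LBS quoted later for case 1 in section \ref{s:Simulation}.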
Moreover, we introduce Theorem \ref{thm:esc_lbs} from \cite{DURR2013} (this theorem is also stated in \cite{VectorFieldGRUSHKOVSKAYA2018}) which links the stability properties of LBSs (\ref{eqn:Lie}) and (\ref{eqn:Lie_multi}) with ESC systems (\ref{eqn:ESC}) and (\ref{eqn:ESC_multi}), respectively.
\begin{theorem}\label{thm:esc_lbs}
Let assumptions A1-A3 be satisfied and suppose that a compact set $\mathscr{C}$ is locally (uniformly) asymptotically stable for (\ref{eqn:Lie_multi}). Then $\mathscr{C}$ is locally practically (uniformly) asymptotically stable for (\ref{eqn:ESC_multi}).
\end{theorem}
Finally, we introduce the Chen-Fliess functional expansion \cite[chapter 3]{isidori1985nonlinear}, which can be used to represent the input-output behavior of a nonlinear system in the control-affine form (\ref{eqn:ESC_back}) with an associated output function $y=h(\bm{x})$, $y\in \mathbb{R}$. Let $T$ be a fixed time and let $u_1,...,u_m$ be real-valued piecewise continuous functions defined on $[0,T]$. The iterated integral for each multi-index ($i_k,...,i_0$) is defined recursively as:
\begin{equation}
\int_0^t d\xi_{i_k}...d\xi_{i_0}=\int_0^t d\xi_{i_k}(\tau) \int_0^\tau d\xi_{i_{k-1}}... d\xi_{i_0},
\end{equation}
where $0\le t \le T$; $\xi_0(t)=t$; $\xi_i(t)=\int_0^t u_i(\tau)d\tau$ for $1 \le i \le m$. Now, the evolution of the output $y(t)$ for a short time $t\in[0,T]$ is given by:
\begin{equation}\label{eqn:chenFliess}
y(t)=h(\bm{x}_0)+\sum \limits_{k=0}^\infty \sum \limits_{i_0 , ..., i_k = 0}^m L_{\bm{b}_{i_0}}... L_{\bm{b}_{i_k}} h(\bm{x}_0) \int_0^t d\xi_{i_k}...d\xi_{i_0},
\end{equation}
where $L_{\bm{b}}h$ is the Lie derivative of $h$ along $\bm{b}$.
\section{Main Results and Discussion} \label{s: MainResults}
The LBS in (\ref{eqn:Lie_multi}) approximates, captures the behavior of, and characterizes the ESC system in (\ref{eqn:ESC_multi}). One can easily observe that, with the exception of the gradient components, the LBS in (\ref{eqn:Lie_multi}) is obtainable from measurements of the objective function $f(\bm{x})$ and the predetermined structure choices. For instance, $b_0$ in (\ref{eqn:Lie}) is determined by $b_1$ and $b_2$, which are built into the structure, and the term $\nu_{2,1}$ depends on the chosen control inputs. As a result, estimating the LBS amounts to estimating the gradient of the objective function. This leads us to formally introduce the concept of the estimated LBS, denoted by $\hat{\bm{z}}$ and defined as follows:
\begin{equation}\label{eqn:LieEst}
\dot{\hat{\bm{z}}}=-\sum \limits_{i=1}^n ( \nu_{2i,1i} \frac{\partial f(\bm{z})}{\partial z_i} b_{0i}(f({\bm{z}})) + \eta_i(t)) e_i =\bm{J}(t,\bm{z}).
\end{equation}
The estimated LBS in (\ref{eqn:LieEst}) is similar to (\ref{eqn:Lie_multi}), but with the addition of an error term ${\eta}_i(t)$ representing the inaccuracies introduced during the estimation. We impose the following assumption on ${\eta}_i(t)$:
\begin{enumerate}[label=A\arabic*.]
\setcounter{enumi}{3}
\item
${\eta}_i(t): \mathbb{R} \rightarrow \mathbb{R},i=1,...,n$, is a measurable function and there exist constants $\theta_0,\epsilon_0 \in (0,\infty)$ such that $|{\eta}_i(t_2)-{\eta}_i(t_1)| \le \theta_0|t_2-t_1|$ for all $t_1,t_2 \in \mathbb{R}$ and $\sup_{t \in \mathbb{R}}|{\eta}_i(t)| \le \epsilon_0$.
Furthermore, $\lim_{t\to \infty} {\eta}_i (t)={0}$.
\end{enumerate}
Assumption A4 implies that the error term $\eta_i(t)$ is measurable, bounded, and vanishes as $t\to\infty$. Having the knowledge of the estimated LBS, we now propose an ESC structure that couples the control-affine ESC in (\ref{eqn:ESC_multi}) with an adaptation law for the amplitude of the control input, which depends on the estimated LBS, as:
\begin{align}
\dot{\bm{x}}&=\sum \limits_{i=1}^n \left( b_{1i}(f(\bm{x}))\sqrt{\omega} a_i(t)\hat{u}_{1i}+ {b_{2i}}(f(\bm{x}))\sqrt{\omega} a_i(t) \hat{u}_{2i}\right) e_i \label{eqn:generalizedSystem},\\
\dot{\bm{a}}&=\sum \limits_{i=1}^n \left(-\lambda_i ({a}_i(t)-{J}_i(t,\bm{x})) \right ) e_i,\label{eqn:law}
\end{align}
where $a_i \in \mathbb{R}$ is the amplitude of the input signal, and $\lambda_i >0$ is a tuning parameter. Our proposed structure for $n=1$ is shown in figure \ref{fig:ESC_scheme}. Note that our structure in figure \ref{fig:ESC_scheme} without the shaded area is the same as the generalized ESC systems in \cite{VectorFieldGRUSHKOVSKAYA2018}.
Next, we introduce our theorem on the proposed structure.
\begin{theorem}\label{thm:proposed_theorem}
Let A1-A4 be satisfied with some $\omega \in (\omega^*,\infty), \omega^* >0$, and suppose that there exists $t^* >t_0 =0$ such that $|J_i(t,\bm{z})| \le 1/t^p$ for all $t>t^*$ and some $p>1$. Then (i) the equilibrium point $\hat{\bm{z}}^* \in \mathscr{C}$ is locally asymptotically stable for the estimated LBS in \eqref{eqn:LieEst}; (ii) $a_i$ in (\ref{eqn:law}) converges asymptotically to 0; and (iii) the system in (\ref{eqn:generalizedSystem}) is practically asymptotically stable.\\
\emph{The proof can be found in the appendix \ref{sec:Proof}}.
\end{theorem}
\begin{Remark} Theorem \ref{thm:proposed_theorem} means that if, after some time, the estimated right-hand side of the LBS is consistently bounded by $1/t^p$ ($p>1$), then the proposed ESC system is stable with guaranteed vanishing oscillations.
\end{Remark}
The above-mentioned results, up to this point, depend on the knowledge of the estimated LBS. So, it is necessary to introduce a sound technique to obtain the estimated LBS in line with Theorem \ref{thm:proposed_theorem}. Estimating the gradient of the objective function is not a new concept in the ESC literature; however, it has been done only for classic ESC-related structures using Kalman filtering (KF) \cite{Chicka2006,AttenuatedOscillaiion2021,ESC_KF}. In said structures, the KF measurement update equation is easily found via a natural Taylor expansion of the perturbed objective function \cite{Chicka2006,AttenuatedOscillaiion2021}. That is, the addition of the input $du$ after the integrator in such structures perturbs the function argument as:
\begin{equation}\label{eqn:taylor}
f(x)=f(\hat{x}+du)=f(\hat{x})+f'(\hat{x}) du+O(|du|^2),
\end{equation}
where $\hat{x}$ denotes the current estimate of the variable $x$ and $f'(\hat{x})$ denotes the gradient estimate.
However, (\ref{eqn:taylor}) \textit{cannot be used for control-affine ESC structures} (the focus of this work), as the input signal $du$ does not perturb the function argument in such systems. It is important to emphasize that there is hardly \textit{any} literature on estimating the gradient or LBSs in control-affine ESCs. Here, we introduce a novel geometric-based KF framework applicable to control-affine ESC systems, which we call ``GEKF.'' The Chen-Fliess series (\ref{eqn:chenFliess}) (see also \cite{grushkovskaya2021extremum}) can be used to analyze control-affine ESC systems. Arguably, the Chen-Fliess series is a control-affine analog of the Taylor series. We set $y=f(x)$ in (\ref{eqn:chenFliess}) and truncate after first-order terms:
\begin{multline}\label{eqn:measurementEqn1_mainresult}
f(\bm{x})|_{t_2}= f(\bm{x})|_{t_1}+ {\bm{b}_1} \cdot \nabla f(\bm{x})|_{t_1} U_1 +{\bm{b}_2} \cdot \nabla f(\bm{x}) |_{t_1} U_2 \\
+ O((\Delta t)^2),
\end{multline}
with $U_1 = \int_{t_1}^{t_2}{u}_1 d\tau$, $U_2 = \int_{t_1}^{t_2}{u}_2 d\tau$, and $t_2=t_1+\Delta t$, where $\Delta t$ is the time step. We set $\Delta t$ to be very small compared to the dither period; hence, $\Delta t = K/\omega$ for some constant $K$, and the higher-order terms are $O((1/\omega)^2)$. The higher-order terms are sufficiently small when the chosen frequency is sufficiently large.
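To make the truncation concrete, the short sketch below numerically compares $f(\bm{x})|_{t_2}$, obtained by integrating a scalar control-affine ESC with the case-1 structure $b_1(f)=f$, $b_2(f)=1$, against the first-order prediction in (\ref{eqn:measurementEqn1_mainresult}); all numbers are illustrative assumptions (not the implementation used for the results below), and the gap between the two values is of the order of the neglected higher-order terms.
\begin{verbatim}
import numpy as np

# Sanity check of the first-order Chen-Fliess truncation (measurement
# equation) on the scalar structure b1(f) = f, b2(f) = 1.
a, omega, xstar = 1.0, 100.0, 1.0
f  = lambda x: 2.0 * (x - xstar) ** 2          # objective function
df = lambda x: 4.0 * (x - xstar)               # its gradient
u1 = lambda t: a * np.sqrt(omega) * np.cos(omega * t)
u2 = lambda t: a * np.sqrt(omega) * np.sin(omega * t)

t1, x1 = 0.3, 2.0
dt = (2.0 * np.pi / omega) / 50.0              # step << dither period
t2 = t1 + dt

# fine forward-Euler integration of dx/dt = f(x) u1(t) + u2(t) on [t1, t2]
n, h, x = 2000, dt / 2000.0, x1
for k in range(n):
    t = t1 + k * h
    x += (f(x) * u1(t) + u2(t)) * h

# exact integrals U1, U2 of the dithers over the step
U1 = (a / np.sqrt(omega)) * (np.sin(omega * t2) - np.sin(omega * t1))
U2 = -(a / np.sqrt(omega)) * (np.cos(omega * t2) - np.cos(omega * t1))

pred = f(x1) + f(x1) * df(x1) * U1 + 1.0 * df(x1) * U2
print(f(x), pred)    # agree up to the neglected higher-order terms
\end{verbatim}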
Now, (\ref{eqn:measurementEqn1_mainresult}) can be used as the measurement update equation for our continuous-discrete extended KF (GEKF) to estimate the LBS. It is worth noting that in our structure (figure \ref{fig:ESC_scheme}), the measurement of the objective function is taken from the ESC system; hence, a zero-mean error is expected to persist in the measurement equation, given the oscillatory nature of the ESC trajectory about the averaged LBS. We treat the sum of this error and the higher-order term $O((1/\omega)^2)$ in (\ref{eqn:measurementEqn1_mainresult}) as Gaussian measurement noise $\nu (t) \sim N(0,r)$. Next, we formulate the state propagation model of the GEKF to approximate the LBS corresponding to a control-affine ESC system with state space $\bm{x} \in \mathbb{R}^n$. We define the state variables of the GEKF as follows:
\begin{align}\label{eqn:GE_states}
\bm{\bar{X}}=\begin{bmatrix}
[\bar{\bm{x}}_1]_{n \times 1}\\
[\bar{\bm{x}}_2]_{n \times 1}\\
[\bar{x}_3]_{1 \times 1}\\
\end{bmatrix}=\begin{bmatrix}
-\sum \limits_{i=1}^n \nu_{2i,1i} \frac{\partial f(\bm{x})}{\partial x_i} b_{0i}(f({\bm{x}})) e_i \\
\dot{\bar{\bm{x}}}_1\\
f(\bm{x})|_{t_1}\\
\end{bmatrix},
\end{align}
where $\bar{\bm{x}}_1$ is equivalent to the right-hand side of the LBS (\ref{eqn:Lie_multi}). We assume $\bar{\bm{x}}_1$ has a constant derivative, given by $\bar{\bm{x}}_2$. The state $\bar{x}_3$ holds the value of the objective function at the previous measurement time, treated as constant between measurements. Thus, the state propagation model, used in between the measurements at $t_1$ and $t_2$, is:
\begin{align}\label{eqn:GEKFDynamics}
\dot{\bar{\bm{X}}}=\begin{bmatrix}
\bar{\bm{x}}_2\\
\bm{0}_{n\times 1}\\
0
\end{bmatrix} + \bm{\Omega},
\end{align}
where $\bm{\Omega}$ is a random variable representing the process noise which is assumed to be Gaussian with zero mean and covariance $\bm{Q}$.
Furthermore, the process noise and the measurement noise are assumed to be uncorrelated. Note that the first $n$ states of the filter estimate the right-hand side of the LBS through the filter dynamics; the integral of these states over time is therefore proportional to their average. So, by the time the ESC reaches its limit cycle, the average of said right-hand side incorporates the exact gradient. This can be extracted from the states of the filter to finally obtain the estimated LBS.
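For concreteness, a minimal scalar ($n=1$) sketch of such a continuous-discrete GEKF is given below, written for the case-1 structure $b_1(f)=f$, $b_2(f)=1$ (so $b_0=1$ and $\nu_{2,1}=a^2/2$). The noise covariances, the numerical Jacobian, and the choice of resetting $\bar{x}_3$ to the latest measurement are illustrative assumptions rather than prescriptions coming from the analysis above.
\begin{verbatim}
import numpy as np

# Minimal scalar GEKF sketch for the case-1 structure (illustrative
# tuning values; not the implementation used for the results below).
a, omega = 1.0, 100.0
dt = (2.0 * np.pi / omega) / 50.0
nu21, b0 = a ** 2 / 2.0, 1.0

X = np.zeros(3)                     # [x1bar, x2bar, x3bar]
P = np.eye(3)                       # state covariance
Q = np.diag([1e-4, 1e-4, 1e-2])     # process noise covariance (tuning)
r = 1e-2                            # measurement noise variance (tuning)
F = np.array([[1.0, dt, 0.0],       # discretized propagation model:
              [0.0, 1.0, 0.0],      # x1bar integrates x2bar,
              [0.0, 0.0, 1.0]])     # x3bar held constant

def h(X, U1, U2):
    # measurement model: f|t2 = f|t1 + (b1*U1 + b2*U2)*grad f, where
    # grad f is recovered from x1bar = -nu21*b0*grad f and b1(f) = f
    # is approximated by the stored value x3bar
    grad = -X[0] / (nu21 * b0)
    return X[2] + (X[2] * U1 + 1.0 * U2) * grad

def gekf_step(X, P, y, U1, U2):
    X, P = F @ X, F @ P @ F.T + Q                   # propagate
    eps = 1e-6                                      # numerical Jacobian of h
    H = np.array([(h(X + eps * e, U1, U2) - h(X, U1, U2)) / eps
                  for e in np.eye(3)])
    S = H @ P @ H + r
    K = P @ H / S
    X = X + K * (y - h(X, U1, U2))                  # measurement update
    P = (np.eye(3) - np.outer(K, H)) @ P
    X[2] = y        # latest sample of f becomes f|t1 for the next step
    return X, P     # X[0] is the estimated right-hand side of the LBS
\end{verbatim}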
The use of the GEKF to estimate the right-hand side of the LBS, as discussed above, introduces an error. We argue that, due to the choice of sufficiently large frequency and choice of our propagation dynamics and measurement equation of the filter, the error is bounded and decreases with time, i.e. it has the same characteristics as the error $\eta(t)$ introduced in (\ref{eqn:LieEst}) and satisfies assumption A4. Thus, our proposed technique of estimation can be used to obtain the estimated LBS.
It is essential to briefly highlight the advantages of the proposed control-affine ESC structure when compared to the generalized approach of Grushkovskaya et al. \cite{VectorFieldGRUSHKOVSKAYA2018}, which unified the most significant control-affine ESC systems \cite{ESCTracking, DURR2013, scheinker2016boundedDither,BoundedUpdateKrstic,ExpStabSUTTNER2017}. First highlight: the stability of our proposed ESC is characterized by a time-dependent condition that, in principle, is simpler and easier to verify for a given ESC system than the stability conditions introduced in \cite{VectorFieldGRUSHKOVSKAYA2018}. The stability conditions of \cite{VectorFieldGRUSHKOVSKAYA2018} -- provided in appendix \ref{sec:StabilityConditionsB} -- are harder to obtain through estimation/approximation methods applied to measurements of the objective function, as said conditions require bounds on the objective function, its gradient, and even higher-order derivatives.
Second highlight: in order to guarantee vanishing oscillations of the control inputs at the extremum point, a strong condition from \cite{VectorFieldGRUSHKOVSKAYA2018} (provided in appendix \ref{sec:StabilityConditionsB} as B2) has to be satisfied. However, B2 is hard to verify, check, or apply, especially without knowing the objective function or its derivatives, as discussed above. Moreover, there is no apparent alternative for attenuating the oscillations when B2 is not satisfied. Our approach, on the other hand, guarantees the vanishing of oscillations even when condition B2 is not satisfied. Next, we show the merit of our approach by studying three ESC cases. The first case is a simple single-variable ESC system taken from \cite{VectorFieldGRUSHKOVSKAYA2018}, the second case is a multi-variable single-agent vehicle case taken from \cite{BoundedUpdateKrstic}, and the third case is a multi-agent problem using single-integrator dynamics similar to Durr et al. \cite{DURR2013}. In the first and second cases, oscillations cannot be attenuated, as stated in \cite{VectorFieldGRUSHKOVSKAYA2018}, because condition B2 is not satisfied. We also show in appendix \ref{sec:StabilityConditionsB} that the third case problem from \cite{DURR2013} cannot be solved with vanishing oscillations due to the violation of B2. Nevertheless, with our proposed approach, all three ESC cases are solved with vanishing oscillations where the generalized approach \cite{VectorFieldGRUSHKOVSKAYA2018} did not succeed.
\section{Simulation Results}\label{s:Simulation}
\textbf{Case 1.}
This simple ESC system was used in \cite{VectorFieldGRUSHKOVSKAYA2018} to demonstrate that, when condition B2 (see the appendix) fails to hold, the ESC exhibits non-vanishing oscillations at the extremum point. We resolved the same problem with our proposed ESC (\ref{eqn:generalizedSystem})--(\ref{eqn:law}), and it works effectively. The equation of the system is $\dot{x}=f(x)u_1(t)+ u_2 (t)$ with $f(x)=2(x-x^*)^2, x\in \mathbb{R}, x^*=1$, $u_1=a\sqrt{\omega}\cos(\omega t)$, $u_2=a\sqrt{\omega}\sin(\omega t)$, and $\omega=8$. The LBS is $\dot{z}=- (\alpha/2) \nabla f(z)$ with $\alpha = a^2$, and the estimated LBS is $\dot{\hat{z}}=- (\alpha/2) \nabla f(\hat{z})+\eta (t)$. We used the GEKF as described in section \ref{s: MainResults}. Figure \ref{fig:comparison_case1} shows the simulation results for this case with initial conditions $a_0=1, x_0=2$ and parameter $\lambda =0.1$; the top plot shows the advantage of our proposed ESC (vanishing oscillation). Similarly, verification of Theorem \ref{thm:proposed_theorem} is provided in the bottom plot of figure \ref{fig:comparison_case1}, where the estimated and exact right-hand sides of the LBS are shown to be under the viable bound $1/t^p$, $p=1.05$.
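A compact simulation sketch of this case is given below; for brevity, the LBS right-hand side $J$ fed to the adaptation law (\ref{eqn:law}) is evaluated with the analytic gradient, standing in for the GEKF estimate used in our actual results, and the explicit-Euler step size is an illustrative choice.
\begin{verbatim}
import numpy as np

# Case 1 with the proposed adaptation law (illustrative sketch; the
# analytic gradient replaces the GEKF estimate of the LBS).
omega, lam, xstar = 8.0, 0.1, 1.0
f  = lambda x: 2.0 * (x - xstar) ** 2
df = lambda x: 4.0 * (x - xstar)

h, Tend = 1e-3, 50.0
x, a = 2.0, 1.0                      # x0 = 2, a0 = 1 as in the text
for k in range(int(Tend / h)):
    t = k * h
    u1 = a * np.sqrt(omega) * np.cos(omega * t)
    u2 = a * np.sqrt(omega) * np.sin(omega * t)
    J = -(a ** 2 / 2.0) * df(x)      # LBS right-hand side (exact stand-in)
    x += (f(x) * u1 + u2) * h        # ESC dynamics of case 1
    a += -lam * (a - J) * h          # amplitude adaptation law
print(x, a)   # x approaches x* = 1 while the dither amplitude decays
\end{verbatim}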
\begin{figure}
\caption{Comparison between our ESC system (convergent) and that of \cite{VectorFieldGRUSHKOVSKAYA2018}.}
\label{fig:comparison_case1}
\end{figure}
\noindent\textbf{Case 2.} In this case, we consider a single-vehicle example moving in a two-dimensional plane in a GPS-denied environment, taken from \cite{BoundedUpdateKrstic}. The ESC system drives the vehicle to its optimal position based on a measurable, but analytically unknown, cost function $f(x,y)$, where $x$ and $y$ are the coordinates of the vehicle. As mentioned in \cite{VectorFieldGRUSHKOVSKAYA2018}, this problem cannot be solved with vanishing oscillations since condition B2 is not satisfied. The equation of motion for the system is given as
\begin{equation}\label{eqn:ESC_BU}
\begin{split}
\dot{x}= \cos({k f}) u_1 - \sin({k f}) u_2\\
\dot{y}= \sin({k f}) u_1 + \cos({k f}) u_2,
\end{split}
\end{equation}
where $a$ and $\omega$ are the amplitude and angular frequency of the dither signal, respectively, and $k$ is a constant. The control inputs are taken as $u_1 = a \sqrt{\omega}\cos \omega t$ and $u_2=a\sqrt{\omega} \sin \omega t$. The system is in the generalized multi-variable structure of (\ref{eqn:ESC_multi}), and the corresponding LBS for (\ref{eqn:ESC_BU}) is given by
\begin{equation}\label{eqn:LBS_BU}
\begin{split}
\dot{z}_x= -\frac{k \alpha }{2}\frac{\partial f}{\partial z_x};
\dot{z}_y= -\frac{k \alpha }{2}\frac{\partial f}{\partial z_y},
\end{split}
\end{equation}
where $\alpha = a^2$. Similar to case 1, we obtained the estimated LBS using the GEKF as described in section \ref{s: MainResults}. For the simulation, we take the parameters $\alpha=0.5, k=2, \omega =25$, with $f(x,y)=x^2+y^2$.
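For completeness, a sketch of the bracket computation behind \eqref{eqn:LBS_BU}: writing the control vector fields of \eqref{eqn:ESC_BU} as $\bm{b}_1=[\cos(kf),\,\sin(kf)]^T$ and $\bm{b}_2=[-\sin(kf),\,\cos(kf)]^T$, one finds
\begin{equation*}
[\bm{b}_1,\bm{b}_2]=\frac{\partial \bm{b}_2}{\partial (x,y)}\bm{b}_1-\frac{\partial \bm{b}_1}{\partial (x,y)}\bm{b}_2=-k\nabla f,
\end{equation*}
so that, with $\nu_{2,1}=a^2/2$ for the sinusoidal dithers above, the LBS \eqref{eqn:Lie_back} is $\dot{\bm{z}}=-\frac{k a^2}{2}\nabla f(\bm{z})$, i.e., \eqref{eqn:LBS_BU}.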
Figure \ref{fig:plotWithTime_BU} shows the trajectories of $x$ and $y$ coordinates of the vehicle employing ESC from \cite{BoundedUpdateKrstic} (red) and our proposed system (black).
We verified Theorem \ref{thm:proposed_theorem} by tracking the estimated right-hand side of the LBS, i.e., $|J_{x}|$ and $|J_{y}|$ vs. time, and with a viable upper bound in the form $1/t^{1+\epsilon}$ (we choose $\epsilon =0.05$). As seen in figure \ref{fig:RHS_LBS_BU}, Theorem \ref{thm:proposed_theorem} is verified for both $x$ and $y$.
\begin{figure}
\caption{Trajectories of the $x$ and $y$ coordinates employing the ESC from \cite{BoundedUpdateKrstic} and our proposed system.}
\label{fig:plotWithTime_BU}
\end{figure}
\begin{figure}
\caption{Verification of Theorem \ref{thm:proposed_theorem} for case 2.}
\label{fig:RHS_LBS_BU}
\end{figure}
\noindent\textbf{Case 3.}
In this case, a three-agent vehicle system from \cite{DURR2013} is considered. We show in appendix \ref{sec:StabilityConditionsB} that the objective function used in \cite{DURR2013} does not satisfy condition B2; hence, this problem does not attain vanishing oscillations using the approach in \cite{VectorFieldGRUSHKOVSKAYA2018}. On the other hand, by following a process similar to the two cases above, we successfully resolved the problem with vanishing oscillations -- refer to figures \ref{fig: vehicle_SI}-\ref{fig: vehicle2lie_SI}.
\begin{figure}
\caption{Trajectories of the $x$ and $y$ coordinates of vehicle-2 employing the ESC with single-integrator dynamics from \cite{DURR2013}.}
\label{fig: vehicle_SI}
\end{figure}
\begin{figure}
\caption{Verification of Theorem \ref{thm:proposed_theorem} for case 3.}
\label{fig: vehicle2lie_SI}
\end{figure}
\section{Conclusion}\label{s: Conclusion}
The ESC system proposed in this paper is based on the estimation of the LBS via the novel GEKF, for both stability characterization and attenuation of oscillations, and it shows promising results for control-affine ESC structures: (i) it has stability characteristics that are less complex than those in the literature (dependent on one time-dependent condition); (ii) it guarantees vanishing oscillations by design; and (iii) it resolved problems that the generalized approach in \cite{VectorFieldGRUSHKOVSKAYA2018} did not succeed in solving.
\appendix
\section{Proof of Theorem \ref{thm:proposed_theorem}}\label{sec:Proof}
Throughout the proof, $i=1,...,n$ denotes the $i^{th}$ component of an $n$-vector, and we assume A1-A4 to be satisfied. It is clear that $\bm{J}$ in (\ref{eqn:LieEst}) is Riemann integrable. We fix the initial time to $t=0$ and apply $\int_0^t(\cdot) d\tau$ to both sides of (\ref{eqn:LieEst}), with $|J_i|\le {1}/{t^p} \: \forall \: t\in (t^*,\infty)$ and $p>1$; rearranging, for all $i$, we get:
\begin{align}
\hat{z}_i(t)-\hat{z}_{i}(0)=\int_0^t J_i d\tau =\int_0^{t^*} J_i d\tau + \int_{t^*}^t J_i d\tau.
\end{align}
Denoting the value of the integral $\int_0^{t^*} J_i d\tau$ by $\beta_i^*$, we have
\begin{equation}
\hat{z}_i(t)=\hat{z}_i(0)+\beta_i^*+\int_{t^*}^t J_i d\tau \label{eqn:a1}
\end{equation}
Let us consider two arbitrary initial conditions $\bm{l}$ and $\bm{m}$ in the compact set $\mathscr{C}$ such that
\begin{align}
\hat{z}_i(t;l_i) &=l_i+\beta^*_{i,l} + \int_{t_l^*}^t J_i d\tau \label{eqn:initialCond1}\\
\hat{z}_i(t;m_i) &=m_i+ \beta^*_{i,m} + \int_{t_m^*}^t J_i d\tau \label{eqn:initialCond2},
\end{align}
where $\beta^*_{i,l}=\int_0^{t_l^*} J_i d\tau$ and $\beta^*_{i,m}=\int_0^{t_m^*} J_i d\tau $ are finite values. Now, from (\ref{eqn:initialCond1}) and (\ref{eqn:initialCond2}), we get:
\begin{equation}\label{eqn:bounds_lm}
\begin{split}
\vert l_i-m_i\vert \leq \vert \hat{z}_i(t;l_i)-\hat{z}_i(t;m_i)\vert + \vert \beta^*_{i,l}-\beta^*_{i,m}\vert +\\
\left \vert \int_{t_l^*}^t J_i d\tau-\int_{t_m^*}^t J_i d\tau \right \vert .\\
\end{split}
\end{equation}
Note that the finite quantity $\vert \beta^*_{i,l}-\beta^*_{i,m}\vert \le \vert \beta^*_{i,l}\vert +\vert \beta^*_{i,m}\vert \le M_1$, where $M_1$ is some positive constant. With $\vert J_i\vert \le \frac{1}{t^p}$ $\forall \: t\in (t^*,\infty)$ and $p>1$, we have
\begin{equation}
\int_{t^*}^t J_i d\tau\leq\int_{t^*}^t \vert J_i\vert d\tau\leq\int_{t^*}^t \frac{1}{\tau^p}d\tau. \label{eqn:time_bound}
\end{equation}
Therefore, $\left \vert \int_{t_l^*}^t J_i d\tau-\int_{t_m^*}^t J_i d\tau \right \vert $ has a finite bound, since
$\left \vert \int_{t_l^*}^t J_i d\tau-\int_{t_m^*}^t J_i d\tau \right \vert \le
\left \vert \int_{t_l^*}^t J_i d\tau\right\vert + \left\vert \int_{t_m^*}^t J_i d\tau \right \vert
\le
\left \vert \int_{t_l^*}^t \frac{1}{\tau^p} d\tau\right\vert + \left\vert \int_{t_m^*}^t \frac{1}{\tau^p} d\tau \right \vert
\le M_2$, where $M_2$ is some positive constant.
Then, (\ref{eqn:bounds_lm}) can be written as
\begin{equation}\label{eqn:lm_bounded}
\vert l_i-m_i\vert \leq \vert \hat{z}_i(t;l_i)-\hat{z}_i(t;m_i)\vert +M,
\end{equation}
where $M=M_1+M_2$. From (\ref{eqn:lm_bounded}), it is clear that, for every $\epsilon>0$, we have $\delta=\delta(\epsilon)=\epsilon+M>0$ such that
\begin{equation}
\vert l_i-m_i\vert <\delta \:\: \Rightarrow \:\: \vert \hat{z}_i(t;l_i)-\hat{z}_i(t;m_i)\vert <\epsilon, \:\: \forall \epsilon>0. \label{cond2}
\end{equation}
The $(\epsilon,\delta)$ stability condition in \eqref{cond2} (see section 4.3 in \cite{aastrom2008feedback} or chapter 4 in \cite{khalil2002nonlinear}) proves the stability of (\ref{eqn:LieEst}) in the Lyapunov sense. Next, we show the local asymptotic stability of the equilibrium point $\hat{z}_i^*\in \mathscr{C}$ (which is also the extremum point, as in A3).
Since $\lim_{t\to \infty}[\int_{t^*}^t \frac{1}{\tau^p}d\tau]$ is convergent for $p>1$, then $\lim_{t\to \infty}[\int_{t^*}^t J_i d\tau]$ is also convergent per \eqref{eqn:time_bound}. We let $\lim_{t\to \infty}[\int_{t^*}^t J_i d\tau]=c_i^*$, so with \eqref{eqn:a1}, the following condition follows:
\begin{equation}
\lim_{t\to \infty} \hat{z}_i(t)=\hat{z}_{i}(0)+{\beta}_i^* +c_i^* = k_i^*. \label{eqn:cond1}
\end{equation}
As per (\ref{eqn:cond1}), the LBS trajectory in (\ref{eqn:LieEst}) is convergent to some constant value of $k_i^*$, which means $k_i^*$ is an equilibrium point. But from assumption A3, $\hat{z}_i^*$ is the only isolated equilibrium point. Hence, $k_i^*$ must be $\hat{z}_i^*$, and all the solutions starting from arbitrary initial conditions $l_i$ and $m_i$ within the compact set $\mathscr{C}$ converge to $\hat{z}_i^*$. Thus, $\hat{z}_i^*$ is locally asymptotically stable for the estimated LBS system (\ref{eqn:LieEst}). This completes the first part of the proof of Theorem \ref{thm:proposed_theorem}.
Now, for the proof of the second part of Theorem \ref{thm:proposed_theorem}, we rewrite (\ref{eqn:law}) as $\dot{a}_i+ \lambda_i a_i=\lambda_i J_i$. Using the integrating factor $e^{\lambda_i t}$,
\begin{align}
\int_0^t \frac{d}{d\tau} (a_i e^{\lambda_i \tau}) d\tau &=\int_0^t \lambda_i J_i e^{\lambda_i \tau} d\tau. \label{a12}
\end{align}
Applying $|\cdot|$, using the bound on $J_i$, and taking $\lim\limits_{t\to \infty}$,
\begin{equation*}
\begin{split}
0 \le \lim\limits_{t\to \infty}|a_i(t)| \le \cancelto{0}{\lim\limits_{t\to \infty} \left [\int_0^{t^*} \lambda_i |J_i|e^{\lambda_i \tau} d\tau \right] e^{-\lambda_i t}} \\
+\lim\limits_{t\to \infty} \frac{\int_{t^*}^t \lambda_i \frac{1}{\tau ^p} e^{\lambda_i \tau} d \tau}{e^{\lambda_i t}}
+\cancelto{0}{ \lim\limits_{t\to \infty}a_0 e^{-\lambda_i t}}.
\end{split}
\end{equation*}
Now, by using Taylor expansion of $e^{\lambda_i t}$ in $t$, we get
\begin{equation}\label{eqn: TaylerExp}
0\le
\lim\limits_{t\to \infty} |a_i(t)|
\le \lambda_i \lim\limits_{t\to \infty}
\frac{\int_{t^*}^t \frac{1}{\tau^p } (1+ \lambda_i \tau + \frac{(\lambda_i \tau) ^2}{2!} + ...) d\tau }{1+\lambda_i t+\frac{(\lambda_i t)^2}{2!}+...}.
\end{equation}
Let $c$ denote the contribution of the lower limit $t^*$ to the integral in (\ref{eqn: TaylerExp}); then, writing (\ref{eqn: TaylerExp}) in polynomial form,
\begin{equation*}
0\le \lim\limits_{t\to \infty} |a_i(t)| \le \lambda_i \lim\limits_{t\to \infty}
\frac{c+a_1 t^{1-p}+ a_2t^{2-p}+...}{c_0 + c_1 t+ c_2 t^2 + ...}.
\end{equation*}
Finally, using the Squeeze theorem
\begin{align}\label{eqn:converging_a}
\lim\limits_{t\to \infty} |a_i(t)|=0.
\end{align}
This proves the second claim of {Theorem \ref{thm:proposed_theorem}}.
Next, we prove the third claim of Theorem \ref{thm:proposed_theorem}. First, we note that the proof of asymptotic stability for the class of estimated LBSs (\ref{eqn:LieEst}) includes, as a particular case, the exact corresponding LBS (\ref{eqn:Lie_multi}); this follows from the fact that $\eta_i(t) =0$ satisfies A4, so the exact LBS (\ref{eqn:Lie_multi}) is covered by the proof of the first claim of Theorem \ref{thm:proposed_theorem}. As a result, Theorem \ref{thm:esc_lbs} applies. Let $\bm{x}_1(t)$ be the solution of the ESC system in (\ref{eqn:generalizedSystem}) with constant amplitude $a(t)=a_0$, as in (\ref{eqn:ESC_back}); then, for a fixed $\omega$ and every $\epsilon >0$, there exists $\delta(\epsilon) >0$ such that $|{x}_i(0)-{x}_i^*|<\delta$ implies $|{x}_{1i}(t)-{x}_i^*|<\epsilon $, where ${x}_i^*={z}_i^*$ is the equilibrium point of (\ref{eqn:generalizedSystem}) and $\lim_{t \rightarrow \infty} \hat{z}_i(t)={z}_i^*={x}_i^*$. Now, let $\bm{x}_2(t)$ be the solution of the proposed ESC system in (\ref{eqn:generalizedSystem}) with time-varying $a(t)$. For both systems, $\bm{x}_1(0)=\bm{x}_2(0)=\bm{x}(0)$ and $\bm{x}_1^*=\bm{x}_2^*=\bm{x}^*$. We need to show that (\ref{eqn:generalizedSystem}) is also practically asymptotically stable, similarly to (\ref{eqn:ESC_back}), i.e., that the ($\epsilon$,$\delta$) condition satisfied above by $\bm{x}_1(t)$ is satisfied by $\bm{x}_2(t)$.
Now for fixed $\omega$, we have
\begin{equation}\label{eqn:constant_a}
{x}_{1i}(t)={x}_{i}(0)+\int_0^t \sqrt{\omega}a_0 \hat{u}_{1i} b_{1i} d\tau + \int_0^t \sqrt{\omega}a_0 \hat{u}_{2i} b_{2i}d\tau.
\end{equation}
Equation (\ref{eqn:constant_a}) can be rewritten as
\begin{equation*}
\begin{split}
&\big|\int_0^t \sqrt{\omega}a_0 \hat{u}_{1i} b_{1i}d\tau + \int_0^t \sqrt{\omega}a_0 \hat{u}_{2i} b_{2i}d\tau\big| \\ &=|({x}_{1i}(t)-x_{i}^*)-({x}_{i}(0)-x_{i}^*)|\\
&\le |x_{1i}(t)-x_i^*|+|x_i(0)-x_i^*|
\le \delta(\epsilon) + \epsilon.
\end{split}
\end{equation*}
\noindent Then, for all $\epsilon>0$, we let $|\int_0^t \sqrt{\omega}a_0 \hat{u}_{1i} b_{1i}d\tau| + |\int_0^t \sqrt{\omega}a_0 \hat{u}_{2i} b_{2i}d\tau| \le \delta(\epsilon)+\epsilon= K_i(\epsilon)$. We also have:
\begin{equation}\label{eqn:variable_a}
{x}_{2i}(t)={x}_{i}(0)+\int_0^t \sqrt{\omega}a_i(\tau) \hat{u}_{1i} b_{1i}d\tau + \int_0^t \sqrt{\omega} a_i(\tau) \hat{u}_{2i} b_{2i}d\tau,
\end{equation}
Subtracting (\ref{eqn:constant_a}) from (\ref{eqn:variable_a}), we get:
\begin{multline}\label{eqn:diff_a}
x_{2i}(t)-x_{1i}(t)=\sqrt{\omega} \Bigl(\int_0^t a_i(\tau)\hat{u}_{1i} b_{1i}d\tau+\\
\int_0^t a_i(\tau) \hat{u}_{2i} b_{2i}d\tau \Bigr)
- \sqrt{\omega} \Bigl(\int_0^t a_{0} \hat{u}_{1i} b_{1i}d\tau+ \int_0^t a_0 \hat{u}_{2i} b_{2i}d\tau \Bigr).
\end{multline}
From (\ref{eqn: TaylerExp}) and (\ref{eqn:converging_a}), $a_i(t)$ is bounded for all $t$; hence, there exists $K_0>0$ such that $|a_i(t)|\le K_0 a_0$. Now, from (\ref{eqn:diff_a}),
\begin{equation}
|x_{2i}(t)-x_{1i}(t)|\le (K_0 +1)K_i(\epsilon)
\end{equation}
Now,
\begin{align}
x_{2i}(t)-x_i^*&=(x_{2i}(t)-x_{1i}(t))+x_{1i}(t)-x_i^*\\
|x_{2i}(t)-x_i^*| & \le (K_0+1)K_i(\epsilon)+\epsilon =\epsilon^*
\end{align}
Thus, for every $\epsilon^*>0$, there exists $\delta>0$ such that $|x_i(0)-x_i^*| < \delta$ implies $|x_{2i}(t)-x_i^*| <\epsilon^* $ for a given $\omega$. This completes the proof of Theorem \ref{thm:proposed_theorem}.
\section{Stability conditions in \cite{VectorFieldGRUSHKOVSKAYA2018} vs. case 3.}\label{sec:StabilityConditionsB}
For a domain $D\subseteq \mathbb{R}^n$ (see \cite[section 3]{VectorFieldGRUSHKOVSKAYA2018}), condition B1 below is needed for practical stability, whereas condition B2 is additionally needed for asymptotic stability (vanishing oscillations):
\begin{enumerate}[label=B\arabic*.]
\item
There exist constants $ \gamma_1, \gamma_2, \kappa_1, \kappa_2, \mu$ and $m_1 \ge 1$ such that for all $\bm{x} \in D$,
\begin{equation*}{\label{eqn:grushExample}}
\begin{split}
\gamma_1 ||\bm{x}-\bm{x}^*||^{2m_1} &\le \tilde{f}(\bm{x}) \le \gamma_2 ||\bm{x}-\bm{x}^*||^{2m_1},\\
\kappa_1\Tilde{f}(\bm{x})^{2- \frac{1}{m_1}} &\le ||\nabla f(\bm{x})||^2 \le \kappa_2 \tilde{f}(\bm{x})^{2-\frac{1}{m_1}},\\
\left | \left|\frac{\partial^2f(\bm{x})}{\partial \bm{x}^2}\right | \right| &\le \mu \tilde{f}(\bm{x})^{1-\frac{1}{m_1}},
\end{split}
\end{equation*}
where $||.||$ is the $l_2$ norm, and $\tilde{f}(\bm{x})=f(\bm{x})-f^*$.
\item
The functions $b_{si}(\tilde{f}(\bm{x}))$ are Lipschitz on each compact set from $D$, and
\begin{equation*}
\begin{split}
\alpha_1 \tilde{f}^{m_2}(x) \le b_{0i}(\tilde{f}(x))\le \alpha_2 \tilde{f}^{m_2}(x)\\
|b_{si}(\tilde{f}(x))| \le M\tilde{f}^{m_3}(x)\\
||L_{b_{ql}}L_{b_{pj}}b_{si}(\tilde{f}(\cdot))|| \le H \tilde{f}^{m_4}(x)
\end{split}
\end{equation*}
for all $x\in D$, $s,p,q=\overline{1,2}, i,j,l=\overline{1,n}$ with $m_2 \ge 1/m_1-1, m_3=(m_2+1)/2, m_4=3(1+m_2)/2-1/m_1$, and some $\alpha_1,\alpha_2, M>0, H\ge0$.
\end{enumerate}
Now, we check condition B2, which is needed to guarantee vanishing oscillations of the control input at the extremum point, for the multi-agent system with single-integrator dynamics in \cite[section 4]{DURR2013}, which we solved in case 3 (section \ref{s:Simulation}) with vanishing oscillations. Without loss of generality, we limit our analysis to the objective function (map) assigned to vehicle-3 of the multi-agent system and compare it against the bounds in condition B2. The objective function for vehicle-3 is $f_3 = -(x_3+1)^2/2-3(y_3-1)^2/2+10$. The extremum point is $x_3^*=-1$ and $y_3^* =1$, corresponding to the optimal value of the cost function $f_3^*=10$. Thus, $\tilde{f}_3=f_3-f_3^*=-(x_3+1)^2/2-3(y_3-1)^2/2$. Now, the elements of the vector fields are
\begin{equation*}
\begin{split}
b_{11}(\tilde{f_3})&=c_3(f_3(\bm{x})-x_{3e}h_3),\: b_{21}(\tilde{f_3})=a_3,\\
b_{12}(\tilde{f_3})&=a_3,\: b_{22}(\tilde{f_3})=-c_3(f_3(\bm{x})-x_{3e}h_3),\\
b_{13}(\tilde{f_3})&=0,\quad b_{23}(\tilde{f_3})=0.
\end{split}
\end{equation*}
As per condition B2, the following inequality should be satisfied for all $x\in D$
\begin{equation*}
|b_{si}(\tilde{f}(x))| \le M\tilde{f}^{m_3}(x),
\end{equation*}
where $s\in \{1,2\}$ and $i \in \{1,2,3\}$. Let us analyze $b_{12}(\tilde{f_3})=a_3$ as an element of the vector field, and take the extremum point $(x_3,y_3)=(-1,1)$ as the point of reference. Since $M\tilde{f}^{m_3}=0$ at the reference point for any $M>0$ and $m_3$, the inequality becomes $|a_3| \le 0$. However, $a_3$ is the amplitude of the input signal and is defined to be a positive constant \cite{DURR2013}, which establishes a contradiction. Thus, condition B2 is not satisfied for vehicle-3 in the multi-agent system of \cite{DURR2013}.
\end{document}
\begin{document}
\begin{abstract}
We study the asymptotic dynamics for solutions to a system of nonlinear Schr\"odinger equations with cubic interactions, arising in nonlinear optics. We provide sharp threshold criteria leading to global well-posedness and scattering of solutions, as well as formation of singularities in finite time for (anisotropic) symmetric initial data. The free asymptotic results are proved by means of Morawetz and interaction Morawetz estimates. The blow-up results are shown by combining variational analysis and an ODE argument, which overcomes the unavailability of the convexity argument based on virial-type identities.
\end{abstract}
\maketitle
\section{Introduction}
\label{Intro}
\setcounter{equation}{0}
In this paper, we consider the Cauchy problem for the following system of nonlinear Schr\"odinger equations with cubic interaction
\begin{equation}\label{SNLS}
\left\{
\begin{aligned}
i\partial_t u + \Delta u - u &= -\left(\dfrac{1}{9} |u|^2 + 2|v|^2 \right) u - \dfrac{1}{3} \overline{u}^2 v, \\
i \gamma \partial_t v + \Delta v - \mu v &= -\left( 9 |v|^2 + 2 |u|^2\right) v - \dfrac{1}{9} u^3,
\end{aligned}
\right.
\end{equation}
with initial datum
$\left.(u, v)\right|_{t=0} =(u_0,v_0).$ Here $u, v: \mathbb R \times \mathbb R^3 \rightarrow \mathbb C$, $u_0,v_0: \mathbb R^3 \rightarrow \mathbb C$, and the parameters $\gamma, \mu$ are strictly positive real numbers.
The system \eqref{SNLS} is the dimensionless form of a system of nonlinear Schr\"odinger equations as derived in \cite{SBK-OL} (see also \cite{SBK-JOSA}), where the interaction between an optical beam at some fundamental frequency and its third harmonic is investigated. More precisely, from a physical point of view, \eqref{SNLS} models the interplay of an optical monochromatic beam with its third harmonic in a Kerr-type medium (we refer to \cite{OP} for the latter terminology, as well as for a sketch of the derivation of \eqref{SNLS}).
Models such as in \eqref{SNLS} arise in nonlinear optics in the context of the so-called cascading nonlinear processes. These processes can indeed generate effective higher-order nonlinearities, and they stimulated the study of spatial solitary waves in optical materials with $\chi^2$ or $\chi^3$ susceptibilities (or nonlinear response, equivalently).
Let us mention, following \cite{CdMS}, the difference between $\chi^2$ (quadratic) and $\chi^3$ (cubic) media. The contrast basically reflects the order of expansion (in terms of the electric field) of the polarization vector, when decomposing the electrical induction field appearing in the Maxwell equations as the sum of the electric field $\mathbb E$ and the polarization vector $\mathbb P$. Indeed, for ``small'' intensities of the electric field, the polarization response is linear, while for ``large'' intensities of $\mathbb E,$ the vector $\mathbb P$ has a non-negligible nonlinear component, denoted by $\mathbb P_{nl}$. Thus, when considering the Taylor expansion for $\mathbb P_{nl}$, one gets the presence of (at least) quadratic and cubic terms whose coefficients $\chi^j$, which depend on the frequency of the electric field $\mathbb E,$ are called the $j$-th optical susceptibilities. For $j=2,3,$ they are usually denoted by $\chi^2$ and $\chi^3$. Therefore, quadratic media arise from approximations of the type $\mathbb P_{nl}\sim \chi^2\mathbb E^2,$ and similarly one can define cubic media. The so-called non-centrosymmetric crystals are typical examples of $\chi^2$ materials. Moreover, it can be shown, see \cite{Fib}, that isotropic materials have $\chi^{2n}=0$ susceptibility, namely the even-order nonlinear responses vanish. In the latter case, the leading-order term in the expansion of $\mathbb P_{nl}$ is cubic, and such isotropic materials are called Kerr materials. See the monographs \cite{Fib,SS, Boy} for more discussions. In addition, we refer to \cite{AP, BDST, B99, CdMS, Kivshar, LGT, SBK-OL, SBK-JOSA,ZZS}, and references therein, for more insights on physical motivations and physical results (both theoretical and numerical) about \eqref{SNLS} and other NLS systems with cubic and quadratic interactions. Models such as \eqref{SNLS} are therefore physically relevant, and they deserve a rigorous mathematical investigation. In particular, we are interested in qualitative properties of solutions to \eqref{SNLS}.\\
Our main goal is to understand the asymptotic dynamics of solutions to \eqref{SNLS}, by establishing conditions ensuring global existence and their long time behavior, or leading to formation of singularities in finite time. \\
Let us mention at once that, once the Strichartz machinery has been established (and this is nowadays classical), local well-posedness of \eqref{SNLS} at the energy regularity level (i.e., $H^1(\mathbb R^3),$ mathematically speaking) is relatively straightforward to obtain (see below for a precise definition of the functional space in which to employ a fixed point argument).
The dynamics of solutions to NLS-type equations is intimately related to the existence of ground states (see below for a more precise definition). The analysis of solitons is a very important physical problem, and the main difference between $\chi^2$ media and $\chi^3$ media is that, in the latter case, the cubic nonlinearity is $L^2$ supercritical, while in the former the quadratic nonlinearities are $L^2$ subcritical. These two regimes dramatically affect the possibility for the problem to be globally well-posed, and the stability/instability properties of the solitons are different. See \cite{CdMS} for further discussions, and for a rigorous analysis of solitons in quadratic media. \\
Regarding system \eqref{SNLS}, existence of ground states and their instability properties were established in a recent paper by Oliveira and Pastor, see \cite{OP}. Our aim is to push forward their achievements and obtain a qualitative description of solutions to \eqref{SNLS}, by giving sharp thresholds, defined by means of quantities linked to the ground state, which are sufficient to guarantee a linear asymptotic dynamics for large times (i.e., scattering) or finite time blow-up of the solutions.\\
Let us start our rigorous mathematical discussion about \eqref{SNLS}.
The existence of solutions is quite simple to obtain. As said above, it is well-known that \eqref{SNLS} is locally well-posed in $H^1(\mathbb R^3) \times H^1(\mathbb R^3),$ (see e.g., \cite{Cazenave}). More precisely, for $(u_0,v_0) \in H^1(\mathbb R^3)\times H^1(\mathbb R^3)$, there exist $T_{\pm}>0$ and a unique solution $(u,v)\in X((-T_{-},T_{+})) \times X((-T_{-},T_{+}))$, where
\[
X((-T_{-},T_{+})):= C((-T_{-},T_{+}), H^1(\mathbb R^3)) \cap L^q_{\loc}((-T_{-}, T_{+}), W^{1,r}(\mathbb R^3))
\]
for any Strichartz $L^2$-admissible pair $(q,r)$, i.e., $\frac{2}{q}+\frac{3}{r}=\frac{3}{2},$ for $2\leq r \leq 6.$ See Section \ref{sec:pre}.
In addition, the maximal times of existence obey the blow-up alternative, i.e., either $T_{+}=\infty$, or $T_{+}<\infty$ and $\lim_{t\nearrow T_{+}} \|(u(t),v(t))\|_{H^1(\mathbb R^3)\times H^1(\mathbb R^3)} =\infty$, and similarly for $T_{-}$. When $T_{\pm} =\infty$, we call the solution global.
Solutions to \eqref{SNLS} satisfy conservation laws of mass and energy, namely
\begin{align*}
M_{3\gamma}(u(t),v(t)) &= M_{3\gamma}(u_0,v_0), \tag{Mass} \\
E_\mu(u(t),v(t)) &= \frac{1}{2} \left( K(u(t),v(t)) + M_\mu(u(t),v(t)) \right) - P(u(t),v(t)) = E_\mu(u_0,v_0), \tag{Energy}
\end{align*}
where
\begin{align}
M_\mu(f,g) &:= \|f\|^2_{L^2(\mathbb R^3)} + \mu \|g\|^2_{L^2(\mathbb R^3)}, \label{defi-M-mu} \\
K(f,g) &:= \|\nabla f\|^2_{L^2(\mathbb R^3)} + \|\nabla g\|^2_{L^2(\mathbb R^3)}, \label{defi-K} \\
P(f,g) &:= \int_{\mathbb R^3} \frac{1}{36} |f(x)|^4 + \frac{9}{4} |g(x)|^4 + |f(x)|^2 |g(x)|^2 + \frac{1}{9} \rea \left( \overline{f}^3(x) g(x)\right) dx. \label{defi-P}
\end{align}
\noindent It is worth introducing already at this point the Pohozaev functional
\begin{equation} \label{defi-G}
G(f,g):= K(f,g) - 3 P(f,g),
\end{equation}
and, for later purposes, we rewrite the functional $P$ (see \eqref{defi-P}) by means of its density, namely
\[
P(f,g)=\int_{\mathbb R^{3}}N(f(x),g(x))dx
\]
where
\begin{equation}\label{defi-N}
N(f(x),g(x)):= \frac{1}{36} |f(x)|^4 + \frac{9}{4} |g(x)|^4 + |f(x)|^2 |g(x)|^2 + \frac{1}{9} \rea\left(\overline{f}^3(x) g(x)\right).
\end{equation}
The previous conservation laws can be formally proved by the usual integration by parts; a rigorous justification can then be obtained by a classical regularization argument, see \cite{Cazenave}.\\
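As an illustration of the role played by the weight $3\gamma$ in the mass, here is the formal computation (a sketch, assuming enough smoothness and decay to integrate by parts): multiplying the first equation in \eqref{SNLS} by $\overline{u}$ and the second by $3\overline{v}$, integrating in space and taking imaginary parts, all linear and self-interaction terms give real contributions and drop out, while the cubic couplings cancel each other,
\[
\frac{1}{2}\frac{d}{dt}M_{3\gamma}(u(t),v(t))
=-\frac{1}{3}\,\mathrm{Im}\int_{\mathbb R^{3}}\left(\overline{u}^{3}v+u^{3}\overline{v}\right)dx=0,
\]
since $\overline{u}^{3}v+u^{3}\overline{v}=2\rea\left(\overline{u}^{3}v\right)$ is real-valued. An analogous (longer) computation gives the conservation of $E_\mu$.\\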
In order to introduce another invariance of the equations, let us give the following definition.
\begin{definition}
We say that the initial-value problem \eqref{SNLS} satisfies the mass-resonance condition provided that $\gamma=3.$
\end{definition}
\noindent For $\gamma=3,$ \eqref{SNLS} has the Galilean invariance: namely, if $(u,v)$ is a solution to \eqref{SNLS}, then
\begin{equation}\label{gal-trans}
u_\xi(t,x):= e^{ix\cdot\xi}e^{-t|\xi|^{2}i} u(t, x-2t\xi), \quad v_\xi(t,x):= e^{3ix\cdot\xi}e^{-3t|\xi|^{2}i}v(t, x-2t\xi),
\quad \xi\in \mathbb R^{3},
\end{equation}
is also a solution to \eqref{SNLS} with initial data
$(e^{ix\cdot\xi}u_{0}, e^{3ix\cdot\xi}v_{0}).$
\begin{remark}\label{rmk:mass-res}
Notice that if $\gamma\neq3$, the system \eqref{SNLS} is not invariant under the Galilean transformations as in \eqref{gal-trans}.
\end{remark}
As, in this paper, we are interested in long time behavior of solutions to \eqref{SNLS}, let us recall the notion of scattering.
\begin{definition}
We say that a global solution $(u(t), v(t))$ to \eqref{SNLS} scatters in $H^{1}(\mathbb R^{3})\times H^{1}(\mathbb R^{3})$ if there exists a scattering state $(u_{\pm},v_{\pm})\in H^1(\mathbb R^3)\times H^1(\mathbb R^3)$ such that
\begin{equation}\label{def:scattering}
\lim_{t\to\pm\infty}\|(u(t),v(t))-(\mathcal S_{1}(t)u_{\pm},\mathcal S_{2}(t) v_{\pm})\|_{H^{1}(\mathbb R^{3})\times H^{1}(\mathbb R^{3})}=0,
\end{equation}
where
\begin{equation}\label{def:propagators}
\mathcal S_{1}(t)=e^{it (\Delta-1)} \hbox{ \quad and \quad }\mathcal S_{2}(t)=e^{i\frac{t }{\gamma}(\Delta-\mu)}
\end{equation}
are linear Schr\"odinger propagators.
\end{definition}
Note that the set of initial data such that solutions to \eqref{SNLS} satisfy \eqref{def:scattering} is non-empty, as solutions corresponding to small $H^1(\mathbb R^3)\times H^1(\mathbb R^3)$-data do scatter (see Section \ref{sec:pre}). \\
As already mentioned above, it is well-known that the dynamics of nonlinear Schr\"odinger-type equations is strongly related to the notion of ground states. Hence, we recall some basic facts about ground state standing waves related to \eqref{SNLS}. By standing waves, we mean solutions to \eqref{SNLS} of the form
\[
(u(t,x), v(t,x)) = \left(e^{i\omega t} f(x), e^{3 i\omega t} g(x)\right),
\]
where $\omega \in \mathbb R$ is a frequency and $(f,g)$ is a real-valued solution to the system of elliptic equations
\begin{align} \label{syst-elli}
\left\{
\begin{aligned}
\Delta f - (\omega+1) f + \left(\frac{1}{9} f^2 + 2g^2 \right) f + \frac{1}{3} f^2 g &=0,\\
\Delta g - (\mu + 3\gamma \omega) g +(9g^2 + 2f^2) g +\frac{1}{9} f^3&=0.
\end{aligned}
\right.
\end{align}
It was proved by Oliveira and Pastor, see \cite{OP}, that solutions to \eqref{syst-elli} exist, provided that
\begin{align} \label{cond-omega}
\omega > -\min \left\{ 1, \frac{\mu}{3\gamma}\right\}.
\end{align}
Moreover, a non-trivial solution $(\phi, \psi)$ to \eqref{syst-elli} is called ground state related to \eqref{syst-elli} if it minimizes the action functional
\begin{equation}\label{Afu}
S_{\omega, \mu,\gamma} (f,g) := E_\mu(f,g) + \frac{\omega}{2} M_{3\gamma}(f,g),
\end{equation}
over all non-trivial solutions to \eqref{syst-elli}. Under the assumption \eqref{cond-omega}, the set of ground states related to \eqref{syst-elli} denoted by
\[
\mathcal G(\omega, \mu,\gamma):= \left\{ (\phi,\psi) \in \mathcal A_{\omega, \mu, \gamma} \ : \ S_{\omega, \mu,\gamma}(\phi,\psi) \leq S_{\omega,\mu,\gamma}(f,g), \,\forall (f,g) \in \mathcal A_{\omega, \mu, \gamma}\right\}
\]
is not empty, where $\mathcal A_{\omega,\mu, \gamma}$ is the set of all non-trivial solutions to \eqref{syst-elli}. In particular, $\mathcal G(0, 3\gamma, \gamma) \ne \emptyset$.
It was shown (see \cite[Theorem 3.10]{OP}) that if $(u_0,v_0) \in H^1(\mathbb R^3) \times H^1(\mathbb R^3)$ satisfies
\begin{align}
E_{\mu}(u_0,v_0) M_{3\gamma}(u_0,v_0) &< \frac{1}{2}E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi), \label{cond-ener} \\
K(u_0,v_0) M_{3\gamma}(u_0,v_0) &< K(\phi,\psi) M_{3\gamma}(\phi,\psi), \label{cond-gwp}
\end{align}
where $(\phi,\psi) \in \mathcal G(0,3\gamma,\gamma)$, then the corresponding solution to \eqref{SNLS} exists globally in time. The proof of this result is based on a continuity argument and the following sharp Gagliardo-Nirenberg inequality
\begin{align} \label{GN-ineq}
P(f,g) \leq C_{\opt} \left(K(f,g)\right)^{\frac{3}{2}} \left(M_{3\gamma}(f,g)\right)^{\frac{1}{2}}, \quad \forall (f,g) \in H^1(\mathbb R^3) \times H^1(\mathbb R^3).
\end{align}
This type of Gagliardo-Nirenberg inequality was established in \cite[Lemma 3.5]{OP}. Note that in \cite{OP}, this inequality was proved for real-valued $H^1$-functions. However, we can state it for complex-valued $H^1$-functions as well since $P(f,g) \leq P(|f|,|g|)$ and $\|\nabla(|f|)\|_{L^2(\mathbb R^3)} \leq \|\nabla f\|_{L^2(\mathbb R^3)}$. \\
We are now in a position to state our first main result. The following theorem provides sufficient conditions for scattering of solutions. More precisely, for data belonging to the set given by conditions \eqref{cond-ener} and \eqref{cond-gwp}, solutions to \eqref{SNLS} satisfy \eqref{def:scattering} for some scattering state $(u_{\pm},v_{\pm})$.
\begin{theorem}\label{Th1}
Let $\mu, \gamma>0$, and $(\phi,\psi) \in \mathcal G(0, 3\gamma, \gamma)$. Let $(u(t),v(t))$ be the corresponding solution of \eqref{SNLS} with initial data $(u_{0}, v_{0})\in H^{1}(\mathbb R^{3})\times H^{1}(\mathbb R^{3})$ satisfying \eqref{cond-ener} and \eqref{cond-gwp}.
Provided that
\begin{itemize}[leftmargin=5mm]
\item (non-radial case) either $|\gamma-3|<\eta$ for some $\eta=\eta(E_{3\gamma}(u_{0}, v_{0}), M_{3\gamma}(u_{0}, v_{0}))>0$ small enough,
\item (radial case) or $(u_{0}, v_{0})$ is radial,
\end{itemize}
then the solution of \eqref{SNLS} is global and scatters in $H^{1}(\mathbb R^{3})\times H^{1}(\mathbb R^{3})$.
\end{theorem}
Our proof of the scattering results is based on the recent works by Dodson and Murphy \cite{DM-MRL} (for non-radial solutions) and \cite{DM-PAMS} (for radial solutions), using suitable scattering criteria and Morawetz-type estimates. In the non-radial case, we make use of an interaction Morawetz estimate to derive a space-time estimate. In the radial case, we make use of localized Morawetz estimates and radial Sobolev embeddings to show a suitable space-time bound of the solution.
Let us highlight the main novelties of this paper regarding the linear asymptotic dynamics. For the classical focusing cubic equation in $H^1(\mathbb R^3),$ scattering (and blow-up) below the mass-energy threshold was proved by Holmer and Roudenko in \cite{HR} for radial solutions, by exploiting the concentration/compactness and rigidity scheme in the spirit of Kenig and Merle, see \cite{KM}. The latter scattering result was then extended to non-radial solutions by Duyckaerts, Holmer, and Roudenko \cite{DHR}. To remove the radiality assumption, a crucial role is played by the invariance of the cubic NLS under Galilean boosts, which makes it possible to reduce to a soliton-like solution with zero momentum. As observed in Remark \ref{rmk:mass-res}, equation \eqref{SNLS} lacks the Galilean invariance unless $\gamma=3.$ Hence we cannot rely on the Kenig-Merle road map to achieve our scattering results, and we instead build our analysis on the recent method developed by Dodson and Murphy, see \cite{DM-PAMS, DM-MRL}. In the latter two works, Dodson and Murphy give alternative proofs of the scattering results contained in \cite{HR, DHR}, which avoid the use of the concentration/compactness and rigidity method. They give shorter, though still quite technical, proofs based on Morawetz-type estimates. In our work, borrowing from \cite{DM-PAMS, DM-MRL}, we prove interaction Morawetz and Morawetz estimates for \eqref{SNLS}, and we prove Theorem \ref{Th1} for non-radial solutions which do not satisfy the mass-resonance condition, as well as for radially symmetric solutions. In the latter case, we only need (localized) Morawetz estimates, which are less involved than the interaction Morawetz ones, as we can take advantage of the spatial decay of radial Sobolev functions. \\
Our second main result concerns the formation of singularities in finite time for solutions to \eqref{SNLS}. We state it for two classes of initial data. Indeed, besides the fact that these initial data must satisfy the a-priori bounds given by \eqref{cond-ener} and \eqref{cond-blow} -- the latter (see below) replacing the condition \eqref{cond-gwp} yielding global well-posedness -- they can belong either to the space of radial functions, or to the anisotropic space of cylindrical functions having finite variance in the last variable. The theorem reads as follows.
\begin{theorem} \label{theo-blow}
Let $\mu, \gamma>0$, and $(\phi,\psi) \in \mathcal G(0, 3\gamma, \gamma)$. Let $(u_0,v_0) \in H^1(\mathbb R^3) \times H^1(\mathbb R^3)$ satisfy either $E_\mu(u_0,v_0)<0$, or, if $E_\mu(u_0,v_0) \geq 0$, assume moreover that \eqref{cond-ener} holds and
\begin{align} \label{cond-blow}
K(u_0,v_0) M_{3\gamma}(u_0,v_0) > K(\phi,\psi) M_{3\gamma}(\phi,\psi).
\end{align}
If the initial data satisfy
\begin{itemize}[leftmargin=5mm]
\item either $(u_0,v_0)$ is radially symmetric,
\item or $(u_0,v_0) \in \Sigma_3 \times \Sigma_3$,
where
\[
\Sigma_3:= \left\{ f \in H^1(\mathbb R^3) \ : \ f(y,z) = f(|y|,z), z f \in L^2(\mathbb R^3) \right\}
\]
with $x=(y,z), y=(x_1,x_2) \in \mathbb R^2$ and $z \in \mathbb R,$
\end{itemize}
then the corresponding solution to \eqref{SNLS} blows-up in finite time.
\end{theorem}
Let us now comment on previously known blow-up results for \eqref{SNLS} and on the one stated above, and highlight the main novelties of this paper regarding the blow-up achievements with respect to the previous literature.\\
In the mass-resonance case, i.e., $\gamma=3$, and provided $\mu=3\gamma=9$, the existence of finite time blow-up solutions to \eqref{SNLS} with finite variance initial data was proved in \cite[Theorems 4.6 and 4.8]{OP}. More precisely, it was proved that if $(u_0,v_0) \in \Sigma (\mathbb R^3)\times \Sigma(\mathbb R^3)$, with $\Sigma(\mathbb R^3) = H^1(\mathbb R^3) \cap L^2(\mathbb R^3, |x|^2 dx)$, satisfies either $E_9(u_0,v_0) <0$ or, if $E_9(u_0,v_0) \geq 0$, the additional assumptions
\begin{align*}
E_9(u_0,v_0) M_{9}(u_0,v_0) &< \frac{1}{2}E_{9}(\phi,\psi) M_{9}(\phi,\psi),\\
K(u_0,v_0) M_{9}(u_0,v_0) &> K(\phi,\psi) M_{9}(\phi,\psi),
\end{align*}
where $(\phi,\psi) \in \mathcal G(0,9,3)$, then the corresponding solution to \eqref{SNLS} blows-up in finite time. The proof of the blow-up result in \cite{OP} is based on the following virial identity (see Remark \ref{rem-viri-iden})
\begin{equation}\label{eq:gla}
\frac{d^2}{dt^2} V(t) = 4 G(u(t),v(t)),
\end{equation}
where
\[
V(t):= \int |x|^2 \left( |u(t,x)|^2 + 9 |v(t,x)|^2\right) dx.
\]
Using \eqref{eq:gla}, the finite time blow-up result follows from a convexity argument. For the power-type NLS equation, this kind of convexity strategy goes back to the early work of Glassey, see \cite{Glassey}, for finite variance solutions with negative initial energy. See the works by Ogawa and Tsutsumi \cite{OT} for the removal of the finiteness hypothesis of the variance, but with the addition of the radial assumption. See the already mentioned paper \cite{HR} for an extension to the cubic NLS up to the mass-energy threshold, of the results by Glassey, and Ogawa and Tsutsumi.
If we do not assume the mass-resonance condition, or if $\mu\neq3\gamma,$ the identity \eqref{eq:gla} ceases to be valid. Thus the convexity argument is no longer applicable in our general setting. The proof of Theorem \ref{theo-blow} above relies instead on an ODE argument, in the same spirit as our previous work \cite{DF}, using localized virial estimates and the negativity of the Pohozaev functional (see Lemma \ref{lem-nega-G}). We point out that our result not only extends the one in \cite{OP} to radial and cylindrical solutions, but also extends it to the whole range of $\mu, \gamma>0$. It is worth mentioning that blow-up in full generality, i.e., for infinite-variance solutions with no symmetry assumptions, is still an open problem even for the classical cubic NLS. \\
We conclude this introduction by presenting some notation used throughout the paper, and by describing how the paper is organized.
\subsection{Notations}
We use the notation $X\lesssim Y$ to denote $X\leq C Y$ for some constant $C>0$. When $X\lesssim Y$ and $Y\lesssim X$ (possibly with two different universal constants), we write $X\sim Y$. We also use the `big O' notation $\mathcal O$, e.g., $X=\mathcal O(Y)$. For $I\subset \mathbb R$ an interval, we denote the mixed norm
\[
\|f\|_{L^{q}_{t}L^{r}_{x}(I\times\mathbb R^{3})}= \left(\int_I \left( \int_{\mathbb R^3} |f(t,x)|^r dx\right)^{\frac{q}{r}} dt\right)^{\frac{1}{q}}
\]
with the usual modifications when either $r$ or $q$ is infinite. When $q=r$, we simply write $\|f\|_{L^{q}_{t,x}(I\times\mathbb R^{3})}$. For $f, g \in L^q_t L^r_x(I \times \mathbb R^3)$, we denote
\[
\|(f,g)\|_{L^q_t L^r_x \times L^q_t L^r_x(I\times \mathbb R^3)} := \|f\|_{L^{q}_{t}L^{r}_{x}(I\times\mathbb R^{3})} + \|g\|_{L^{q}_{t}L^{r}_{x}(I\times\mathbb R^{3})}
\]
and if $q=r$, we simply write
\[
\|(f,g)\|_{L^q_{t,x}\times L^q_{t,x}(I\times \mathbb R^3)} := \|f\|_{L^q_{t,x}(I\times\mathbb R^{3})} + \|g\|_{L^{q}_{t,x}(I\times\mathbb R^{3})}.
\]
The $L^p(\mathbb R^3)$ spaces, with $1\leq p\leq\infty$, are the usual Lebesgue spaces, while the $W^{k,p}(\mathbb R^3)$ spaces and their homogeneous versions are the classical Sobolev spaces.
To lighten the notation, we will often avoid writing $\mathbb R^3$ (unless necessary), as we are dealing with a three-dimensional problem.
\subsection{Structure of the paper} This paper is organized as follows. In Section \ref{sec:pre}, we state preliminary results that will be needed throughout the paper, and we prove some coercivity conditions which play a vital role in obtaining the scattering results. In Section \ref{sec:VM}, we introduce localized quantities, and we derive localized virial estimates, as well as Morawetz and interaction Morawetz estimates, which will be the fundamental tools to establish the main results. The latter a priori estimates will be shown in both radial and non-radial settings. In Section \ref{sec:sct}, we give scattering criteria for radial and non-radial solutions. We eventually prove, in Section \ref{sec:proofs-main}, the scattering and blow-up results, by employing the tools developed in the previous sections. We conclude with Appendices \ref{sec:app:A} and \ref{sec:app:B}, devoted to the proofs of some results used throughout the paper.
\section{Preliminary tools}\label{sec:pre}
In this section, we introduce some basic tools toward the proof of our main results. Specifically, we give a small data scattering result, as well as useful properties related to the ground states. We postpone the proofs of some of the following results to Appendix \ref{sec:app:A}.
\subsection{Small data theory}
We have the following small data scattering result, which will be useful in the sequel.
\begin{lemma}\label{lem-smal-scat}
Let $\mu,\gamma>0$, and $T>0$. Suppose that $(u,v)$ is a global $H^1$-solution to \eqref{SNLS} satisfying
\[
\sup_{t\in \mathbb R}\|(u(t),v(t))\|_{H^{1}\times H^{1}}\leq E
\]
for some constant $E>0$. There exists $\epsilon_{\sd}=\epsilon_{\sd}(E)>0$ such that if
\begin{equation}\label{Small-sc}
\| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}<\epsilon_{\sd},
\end{equation}
then the solution scatters forward in time.
\end{lemma}
\begin{proof}
See Appendix \ref{sec:app:A}.
\end{proof}
\subsection{Variational analysis}\label{Vari-Anal}
We first recall some basic properties of ground states in $\mathcal G(0,3\gamma,\gamma)$ and then show a coercivity condition (see \eqref{coer-prop}), which plays a vital role in obtaining the scattering results.
It was shown in \cite[Lemma 3.5]{OP} that any ground state $(\phi,\psi) \in \mathcal G(0,3\gamma,\gamma)$ optimizes the Gagliardo-Nirenberg inequality \eqref{GN-ineq}, that is
\[
C_{\opt} = \frac{P(\phi,\psi)}{\left(K(\phi,\psi)\right)^{\frac{3}{2}} \left(M_{3\gamma}(\phi,\psi)\right)^{\frac{1}{2}}}.
\]
Using the Pohozaev identities (see \cite[Lemma 3.4]{OP})
\begin{align} \label{poho-iden}
P(\phi,\psi) = S_{0,3\gamma,\gamma}(\phi,\psi) = E_{3\gamma}(\phi,\psi) = M_{3\gamma}(\phi,\psi)= \frac{1}{3} K(\phi,\psi),
\end{align}
we obtain, after substituting $P(\phi,\psi)=\frac{1}{3}K(\phi,\psi)$ into the expression for $C_{\opt}$ above,
\begin{align} \label{opti-cons}
C_{\opt} = \frac{1}{3} \left( K(\phi,\psi) M_{3\gamma}(\phi,\psi)\right)^{-\frac{1}{2}}.
\end{align}
To employ some Morawetz estimates in the proof of the scattering theorem, we will also use the following refined Gagliardo-Nirenberg inequality.
\begin{lemma} \label{lem-refi-GN-ineq}
Let $(\phi,\psi)\in \mathcal G(0, 3\gamma, \gamma)$. For any $(f,g)\in H^{1}\times H^{1}$ and $\xi_{1}$, $\xi_{2}\in \mathbb R^{3}$,
we have
\begin{equation}\label{refi-GN-ineq}
P(|f|,|g|)\leq \frac{1}{3}\left(\frac{K(f,g) M_{3\gamma}(f,g)}{K(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right)^{\frac{1}{2}}K(e^{ix\cdot\xi_{1}}f,e^{ix\cdot\xi_{2}}g).
\end{equation}
\end{lemma}
\begin{proof}
See Appendix \ref{sec:app:A}.
\end{proof}
We conclude this preliminary section by giving the following two coercivity results.
\begin{lemma}\label{L22}
Let $\mu, \gamma>0$, and $(\phi,\psi)\in \mathcal G(0, 3\gamma, \gamma)$. Let $(u_{0},v_{0})\in H^{1}\times H^{1}$ satisfy \eqref{cond-ener} and \eqref{cond-gwp}. Then the corresponding solution to \eqref{SNLS} exists globally in time and satisfies
\begin{equation}\label{est-K}
\sup_{t\in \mathbb R} K(u(t),v(t))\leq 6E_{\mu}(u_{0},v_{0}).
\end{equation}
Moreover, there exists $\delta=\delta(u_0,v_0,\phi,\psi)>0$ such that
\begin{equation}\label{coer-1}
K(u(t),v(t))M_{3\gamma}(u(t),v(t)) \leq (1-\delta)K(\phi,\psi)M_{3\gamma}(\phi,\psi)
\end{equation}
for all $t\in \mathbb R$.
\end{lemma}
\begin{proof}
See Appendix \ref{sec:app:A}.
\end{proof}
\begin{lemma} \label{lem-coer-2}
Let $\mu,\gamma>0$, and $(\phi,\psi) \in \mathcal G(0,3\gamma,\gamma)$. Let $(u_{0},v_{0})\in H^{1}\times H^{1}$ satisfy \eqref{cond-ener} and \eqref{cond-gwp}. Let $\delta$ be as in \eqref{coer-1}. Then there exists $R=R(\delta, u_0,v_0, \phi,\psi)>0$ sufficiently large such that for any $z\in \mathbb R^{3}$,
\begin{equation}\label{Cb2}
\begin{aligned}
K\left(\Gamma_{R}(\cdot-z)u(t),\Gamma_{R}(\cdot-z)v(t)\right) &M_{3\gamma}\left(\Gamma_{R}(\cdot-z)u(t),\Gamma_{R}(\cdot-z)v(t)\right) \\
&\leq \left(1-\frac{\delta}{2}\right)K(\phi,\psi)M_{3\gamma}(\phi,\psi)
\end{aligned}
\end{equation}
uniformly for $t\in \mathbb R$, where $\Gamma_{R}(x):=\Gamma\left(\frac{x}{R}\right)$ with $\Gamma$ a cutoff function satisfying $0\leq \Gamma(x)\leq 1$ for all $x\in \mathbb R^3$. Moreover, there exists $\nu=\nu(\delta) >0$ independent of $t$ such that
for any $\xi_{1}, \xi_{2}\in \mathbb R^{3}$, and any $z\in \mathbb R^3$,
\begin{equation}\label{coer-prop}
\begin{aligned}
K\left(\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{1}}u(t),\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{2}}v(t)\right) &- 3 P\left(\Gamma_{R}(\cdot-z)u(t),\Gamma_{R}(\cdot-z)v(t)\right) \\
&\geq \nu K\left(\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{1}}u(t),\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{2}}v(t)\right)
\end{aligned}
\end{equation}
for any $t\in \mathbb R$.
\end{lemma}
\begin{proof}
See Appendix \ref{sec:app:A}.
\end{proof}
\section{Virial and Morawetz estimates}\label{sec:VM}
This section is devoted to the proof of virial-type, Morawetz-type, and interaction Morawetz-type estimates, which will be crucial for the proof of the main Theorems \ref{Th1} and \ref{theo-blow}.
\subsection{Virial estimates}
We start with the following identities. In what follows we use the Einstein convention, so repeated indices are summed.
\begin{lemma} \label{Imporide}
Let $\mu, \beta, \gamma>0$, and $(u,v)$ be an $H^1$-solution to \eqref{SNLS}. Then the following identities hold:
\begin{align}\label{Idg}
\partial_{t}(|u|^{2}+\gamma\beta|v|^{2})&=-2\nabla\cdot\IM (\overline{u}\nabla u)
-2\beta\nabla\cdot\IM (\overline{v}\nabla v)+\frac{2}{3}\left(1-\frac{\beta}{3}\right)\IM (u^{3}\overline{v}),\\
\label{Idn}
\partial_{t}\IM (\overline{u}\partial_k u+\gamma\overline{v} \partial_k v)&=
\frac{1}{2}\partial_{k}\Delta (|u|^{2}+|v|^{2}) -2\partial_{j}\rea(\partial_j\overline{u} \partial_k u+\partial_j\overline{v} \partial_kv)+2\partial_{k}N(u,v),
\end{align}
where $N$ is as in \eqref{defi-N}. In particular, taking $\beta=\gamma$ and $\beta=3$ in \eqref{Idg}, respectively, we have
\begin{align*}
\partial_{t}(|u|^{2}+\gamma^{2}|v|^{2})&=-2\nabla\cdot\IM (\overline{u}\nabla u)
-2\gamma\nabla\cdot\IM (\overline{v}\nabla v)+\frac{2}{3}\left(1-\frac{\gamma}{3}\right)\IM (u^{3}\overline{v}),\\
\partial_{t}(|u|^{2}+3\gamma|v|^{2})&=-2\nabla\cdot\IM (\overline{u}\nabla u)
-6\nabla\cdot\IM (\overline{v}\nabla v).
\end{align*}
\end{lemma}
\begin{proof}
See Appendix \ref{sec:app:B}.
\end{proof}
A direct consequence of Lemma \ref{Imporide} is the following localized virial identity related to \eqref{SNLS}.
\begin{lemma} \label{lem-viri-iden}
Let $\mu,\gamma >0$, and $\varphi: \mathbb R^3 \rightarrow \mathbb R$ be a sufficiently smooth and decaying function. Let $(u,v)$ be an $H^1$-solution to \eqref{SNLS} defined on the maximal time interval $(-T_-,T_+)$. Define
\begin{align} \label{defi-M-varphi}
\mathcal M_\varphi(t):= 2 \ima \int \nabla \varphi(x) \cdot (\nabla u\overline{u}+\gamma\nabla v \overline{v})(t,x) dx.
\end{align}
Then we have for all $t\in (-T_-,T_+)$,
\begin{align*}
\frac{d}{dt}\mathcal M_\varphi(t)&=- \int \Delta^2 \varphi(x) ( |u|^2 + |v|^2 )(t,x) dx + 4\rea\int \partial^2_{jk}\varphi(x)(\partial_j\overline{u} \partial_k u+ \partial_j\overline{v} \partial_k v)(t,x)dx \\
&-4\int \Delta \varphi(x)N(u,v)(t,x) dx.
\end{align*}
\end{lemma}
The following corollary readily follows.
\begin{corollary} \label{rem-viri-iden}
Recall the definition of $G,N,P$ in \eqref{defi-G}, \eqref{defi-N}, and \eqref{defi-P}, respectively.
\begin{itemize}[leftmargin=5mm]
\item[(i)] If $\varphi(x) = |x|^2$,
\begin{equation}\label{eq:variance}
\frac{d}{dt} \mathcal M_{|x|^2}(t) = 8 G(u(t),v(t)).
\end{equation}
\item[(ii)] If $\varphi$ is radially symmetric, by denoting $|x|=r,$ we have
\begin{equation}\label{cor:ii}
\begin{aligned}
\frac{d}{dt} \mathcal M_\varphi(t) &= -\int \Delta^2 \varphi(x) (|u|^2 + |v|^2)(t,x) dx + 4\int \frac{\varphi'(r)}{r} (|\nabla u|^2 + |\nabla v|^2 )(t,x) dx \\
&+ 4 \int \left(\frac{\varphi''(r)}{r^2} - \frac{\varphi'(r)}{r^3} \right) (|x\cdot \nabla u|^2 + |x\cdot \nabla v|^2 )(t,x) dx \\
&-4\int \Delta \varphi(x) N(u,v)(t,x)dx.
\end{aligned}
\end{equation}
\item[(iii)] If $\varphi$ is radial and $(u,v)$ is also radial, then
\begin{equation}\label{cor:iii}
\begin{aligned}
\frac{d}{dt} \mathcal M_\varphi(t) &= -\int \Delta^2 \varphi(x) (|u|^2 + |v|^2)(t,x) dx + 4 \int \varphi''(r) (|\nabla u|^2 + |\nabla v|^2)(t,x) dx \\
&- 4\int \Delta \varphi(x) N(u,v)(t,x)dx.
\end{aligned}
\end{equation}
\item[(iv)] Denote $x=(y,z)$ with $y=(x_1, x_2) \in \mathbb R^2$ and $z \in \mathbb R$. Let $\psi: \mathbb R^2 \rightarrow \mathbb R$ be a sufficiently smooth and decaying function. Set $\varphi(x) = \psi(y) + z^2$. If $(u(t),v(t)) \in \Sigma_3 \times \Sigma_3$ for all $t\in (-T_-,T_+)$, then we have
\begin{equation}\label{cor:iv}
\begin{aligned}
\frac{d}{dt} \mathcal M_\varphi(t) &= -\int \Delta^2_y \psi(y) (|u|^2 + |v|^2)(t,x) dx + 4\int \psi''(\rho) (|\nabla_y u|^2 + |\nabla_y v|^2)(t,x) dx \\
& + 8 \left(\|\partial_z u(t)\|^2_{L^2} + \|\partial_z v(t)\|^2_{L^2}\right) - 8 P(u(t),v(t)) -4\int \Delta_y \psi(y) N(u,v)(t,x)dx,
\end{aligned}
\end{equation}
where $\rho = |y|.$
\end{itemize}
\end{corollary}
\begin{proof}
See Appendix \ref{sec:app:B}.
\end{proof}
We now aim to construct precise localization functions that we will use to get the desired main results of the paper. Let $\zeta: [0,\infty) \rightarrow [0,2]$ be a smooth function satisfying
\[
\zeta(r):= \left\{
\begin{array}{ccl}
2 &\text{if}& 0 \leq r \leq 1, \\
0 &\text{if}& r\geq 2.
\end{array}
\right.
\]
We define the function $\vartheta:[0,\infty) \rightarrow [0, \infty)$ by
\begin{align} \label{defi-vartheta}
\vartheta(r):= \int_0^r \int_0^\tau \zeta(s) ds d\tau.
\end{align}
For $R>0$, we define the radial function $\varphi_R: \mathbb R^3 \rightarrow \mathbb R$ by
\begin{align} \label{defi-varphi-R}
\varphi_R(x)=\varphi_R(r):= R^2 \vartheta(r/R), \quad r=|x|.
\end{align}
We readily check that, $ \forall x \in \mathbb R^3$ and $\forall r\geq 0,$
\[
2\geq \varphi''_R(r) \geq 0, \quad 2-\frac{\varphi'_R(r)}{r} \geq 0, \quad 6-\Delta \varphi_R(x) \geq 0.
\]
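Indeed, a direct computation from \eqref{defi-vartheta} and \eqref{defi-varphi-R} gives these bounds: since $0\leq \zeta\leq 2$,
\[
\varphi''_R(r)=\vartheta''(r/R)=\zeta(r/R)\in[0,2],
\qquad
\frac{\varphi'_R(r)}{r}=\frac{\vartheta'(r/R)}{r/R}=\frac{1}{r/R}\int_0^{r/R}\zeta(s)\,ds\leq 2,
\]
and hence $\Delta \varphi_R(x)=\varphi''_R(r)+\frac{2}{r}\varphi'_R(r)\leq 6$. Moreover, as $\zeta\equiv 2$ on $[0,1]$, we have $\vartheta(r)=r^2$ for $0\leq r\leq 1$, so that $\varphi_R(x)=|x|^2$ on $\{|x|\leq R\}$; this observation will be used repeatedly below.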
We are ready to state the first virial estimate for radially symmetric solutions.
\begin{lemma} \label{lem-viri-est-rad}
Let $\mu,\gamma >0$. Let $(u,v)$ be a radial $H^1$-solution to \eqref{SNLS} defined on the maximal time interval $(-T_-,T_+)$. Let $\varphi_R$ be as in \eqref{defi-varphi-R} and denote $\mathcal M_{\varphi_R}(t)$ as in \eqref{defi-M-varphi}. Then we have for all $t\in (-T_-,T_+)$,
\begin{align} \label{viri-est-rad}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) \leq 8 G(u(t),v(t)) + C R^{-2} K(u(t),v(t)) + CR^{-2}
\end{align}
for some constant $C>0$ depending only on $\mu, \gamma$, and $M_{3\gamma}(u_0,v_0)$, where $G$ is as in \eqref{defi-G}.
\end{lemma}
\begin{proof}
By \eqref{cor:iii}, we have for all $t\in (-T_-,T_+)$,
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) &= -\int \Delta^2 \varphi_R(x) (|u|^2 + |v|^2)(t,x) dx + 4 \int_{\mathbb R^3} \varphi''_R(r) (|\nabla u|^2 + |\nabla v|^2)(t,x) dx \\
&-4\int \Delta \varphi_R(x) N(u,v)(t,x)dx.
\end{align*}
We rewrite, using $G-K+3P=0,$
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t)&= 8 G(u(t),v(t)) - 8 K(u(t),v(t)) + 24 P(u(t),v(t)) \\
&-\int \Delta^2 \varphi_R(x) (|u|^2 + |v|^2)(t,x) dx + 4 \int \varphi''_R(r) (|\nabla u|^2 + |\nabla v|^2)(t,x) dx \\
& -4\int \Delta \varphi_R(x) N(u,v)(t,x)dx \\
&= 8 G(u(t),v(t)) -\int \Delta^2 \varphi_R(x) (|u|^2 + |v|^2)(t,x) dx \\
& - 4 \int (2-\varphi''_R(r)) (|\nabla u|^2 + |\nabla v|^2)(t,x) dx + 4 \int (6-\Delta \varphi_R(x)) N(u,v)(t,x)dx.
\end{align*}
As $\|\Delta^2 \varphi_R\|_{L^\infty} \lesssim R^{-2}$, the conservation of mass implies that
\[
\left| \int_{\mathbb R^3} \Delta^2 \varphi_R(x) (|u|^2 +|v|^2)(t,x) dx \right| \lesssim R^{-2}.
\]
The latter, together with $\varphi''_R (r) \leq 2$ for all $r\geq0$, $\|\Delta \varphi_R\|_{L^\infty} \lesssim 1$, $\varphi_R(x) = |x|^2$ on $|x| \leq R$, and H\"older's inequality, yields
\[
\frac{d}{dt}\mathcal M_{\varphi_R}(t) \leq 8 G(u(t),v(t)) + CR^{-2} + C\int_{|x|\geq R} |u(t,x)|^4 + |v(t,x)|^4dx,
\]
where we have used the fact that (see \eqref{defi-N})
\[
|N(u,v)| \lesssim |u|^4 + |v|^4.
\]
To estimate the last term, we recall the following radial Sobolev embedding (see e.g., \cite{CO}): for a radial function $f\in H^1(\mathbb R^3)$, we have
\begin{align} \label{rad-sobo}
\sup_{x \ne 0} |x| |f(x)| \leq C\|\nabla f\|^{\frac{1}{2}}_{L^2} \|f\|^{\frac{1}{2}}_{L^2}.
\end{align}
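For completeness, we also recall the elementary computation behind \eqref{rad-sobo}, for smooth and rapidly decaying radial $f$ (the general case follows by density): writing $r=|x|>0$, the fundamental theorem of calculus and the Cauchy-Schwarz inequality give
\[
r^2|f(r)|^2=-\int_r^\infty \partial_s\left(s^2|f(s)|^2\right)ds
\leq 2\int_r^\infty s^2|f(s)||f'(s)|\,ds
\leq \frac{1}{2\pi}\|f\|_{L^2}\|\nabla f\|_{L^2},
\]
where in the last step we used $\int_{\mathbb R^3}|g|^2\,dx=4\pi\int_0^\infty s^2|g(s)|^2\,ds$ for radial functions $g$.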
Thanks to \eqref{rad-sobo} and the conservation of mass, we estimate
\begin{align*}
\int_{|x|\geq R} |u(t,x)|^4 dx &\leq \sup_{|x|\geq R} |u(t,x)|^2 \|u(t)\|^2_{L^2}\\
&\lesssim R^{-2} \sup_{|x|\geq R} \left(|x||u(t,x)|\right)^2 \|u(t)\|^2_{L^2}\\
&\lesssim R^{-2} \|\nabla u(t)\|_{L^2} \|u(t)\|^3_{L^2} \\
&\lesssim R^{-2} \|\nabla u(t)\|_{L^2} \\
&\lesssim R^{-2} \left(\|\nabla u(t)\|^2_{L^2} +1\right).
\end{align*}
It follows that
\[
\frac{d}{dt} \mathcal M_{\varphi_R}(t) \leq 8 G(u(t),v(t)) + CR^{-2} + CR^{-2} \left(\|\nabla u(t)\|^2_{L^2}+\|\nabla v(t)\|^2_{L^2}\right).
\]
The proof is complete.
\end{proof}
Next we derive localized virial estimates for cylindrically symmetric solutions (we also mention here \cite{BF20, BFG20, Mar, DF, Inui1, Inui2}, for the qualitative analysis of dispersive-type equations in anisotropic spaces).
To this end, we introduce
\begin{align} \label{defi-psi-R}
\psi_R(y)= \psi_R(\rho) := R^2 \vartheta(\rho/R), \quad \rho =|y|
\end{align}
and set
\begin{align} \label{defi-varphi-R-psi}
\varphi_R(x):= \psi_R(y) + z^2.
\end{align}
\begin{lemma} \label{lem-viri-est-cyli}
Let $\mu,\gamma>0$. Let $(u,v)$ be a $\Sigma_3$-solution to \eqref{SNLS} defined on the maximal time interval $(-T_-,T_+)$. Let $\varphi_R$ be as in \eqref{defi-varphi-R-psi} and denote $\mathcal M_{\varphi_R}(t)$ as in \eqref{defi-M-varphi}. Then we have for all $t\in (-T_-,T_+)$,
\begin{align} \label{viri-est-cyli}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) \leq 8 G(u(t),v(t)) + CR^{-1} K(u(t),v(t)) + CR^{-2}
\end{align}
for some constant $C>0$ depending only on $\mu,\gamma$, and $M(u_0,v_0)$.
\end{lemma}
\begin{proof}
By \eqref{cor:iv}, we have for all $t\in (-T_-,T_+)$,
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) &= -\int \Delta^2_y \psi_R(y) (|u|^2 + |v|^2)(t,x) dx + 4\int_{\mathbb R^3} \psi''_R(\rho) (|\nabla_y u|^2 + |\nabla_y v|^2)(t,x) dx \\
& + 8 \left(\|\partial_z u(t)\|^2_{L^2} + \|\partial_z v(t)\|^2_{L^2}\right) - 8 P(u(t),v(t)) -4\int \Delta_y \psi_R(y) N(u,v)(t,x)dx,
\end{align*}
where $\rho=|y|$. It follows that
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) &\leq 8 G(u(t),v(t)) +CR^{-2} - 4 \int (2-\psi''_R(\rho)) ( |\nabla_y u|^2 + |\nabla_y v|^2)(t,x) dx \\
&+4 \rea \int (4-\Delta_y\psi_R(y)) N(u,v)(t,x)dx.
\end{align*}
As $\psi''_R(\rho) \leq 2$ and $\|\Delta_y \psi_R\|_{L^\infty} \lesssim 1$, H\"older's inequality implies that
\begin{align} \label{est-cyli}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) \leq 8 G(u(t),v(t)) + CR^{-2} + C \int_{|y|\geq R} |u(t,x)|^4 + |v(t,x)|^4 dx.
\end{align}
We estimate
\begin{align*}
\int_{|y|\geq R} |u(t,x)|^4 dx &\leq \int_{\mathbb R} \|u(t,z)\|^2_{L^2_y} \|u(t,z)\|^2_{L^\infty_y(|y|\geq R)} dz \\
&\leq \sup_{z \in \mathbb R} \|u(t,z)\|^2_{L^2_y} \left(\int_{\mathbb R} \|u(t,z)\|^2_{L^\infty_y(|y|\geq R)} dz \right).
\end{align*}
Setting $g(z):= \|u(t,z)\|^2_{L^2_y}$, we have
\begin{align*}
g(z) = \int_{-\infty}^{z} \partial_s g(s) ds &= 2 \int_{-\infty}^{z} \rea \int_{\mathbb R^2} \overline{u}(t,y,s) \partial_s u(t,y,s) dy ds \\
&\leq 2 \|u(t)\|_{L^2_x} \|\partial_z u(t)\|_{L^2_x}
\end{align*}
which, by the conservation of mass, implies that
\begin{align} \label{est-cyli-1}
\sup_{z \in \mathbb R} \|u(t,z)\|^2_{L^2_y} \lesssim \|\partial_z u(t)\|_{L^2_x}.
\end{align}
By the radial Sobolev embedding \eqref{rad-sobo} with respect to the $y$-variable, we have
\begin{align}
\int \|u(t,z)\|^2_{L^\infty_y(|y|\geq R)} dz &\lesssim R^{-1} \int \|\nabla_y u(t,z)\|_{L^2_y} \|u(t,z)\|_{L^2_y} dz \nonumber \\
&\lesssim R^{-1} \left( \int \|\nabla_y u(t,z)\|^2_{L^2_y} dz\right)^{1/2} \left( \int \|u(t,z)\|^2_{L^2_y} dz\right)^{1/2} \nonumber \\
&\lesssim R^{-1} \|\nabla_y u(t)\|_{L^2_x} \|u(t)\|_{L^2_x} \nonumber \\
&\lesssim R^{-1} \|\nabla_y u(t)\|_{L^2_x}. \label{est-cyli-2}
\end{align}
Collecting \eqref{est-cyli-1} and \eqref{est-cyli-2}, we get
\begin{align*}
\int_{|y|\geq R} |u(t,x)|^4 dx &\lesssim R^{-1} \|\nabla_y u(t)\|_{L^2_x} \|\partial_z u(t)\|_{L^2_x} \\
&\lesssim R^{-1} \left(\|\nabla_y u(t)\|^2_{L^2_x} + \|\partial_z u(t)\|^2_{L^2_x}\right) \\
&\lesssim R^{-1} \|\nabla u(t)\|^2_{L^2_x}.
\end{align*}
The latter and \eqref{est-cyli} give \eqref{viri-est-cyli}. The proof is complete.
\end{proof}
\subsection{Interaction Morawetz estimates. Non-radial setting}
Following \cite{WY}, let $\chi$ be a decreasing radial smooth function such that $\chi(x)=1$ for $|x|\leq 1-\sigma$, $\chi(x)=0$ for $|x|\geq 1$, and $|\nabla \chi| \lesssim \sigma^{-1}$, where $0<\sigma<1$ is a small constant.
Let $R>1$ be a large parameter. We define the following radial functions
\begin{align*}
\Phi_R(x) &=\frac{1}{\omega_{3}R^{3}}\int\chi_{R}^{2}(x-z)\chi_{R}^{2}(z)dz, \\
\Phi_{1,R}(x,y)&=\frac{1}{\omega_{3}R^{3}}\int\chi_{R}^{2}({x-z})\chi_{R}^{4}({y-z})dz,
\end{align*}
where $\chi_{R}(x):=\chi\left(\frac{x}{R}\right)$ and $\omega_{3}$ is the volume of the unit ball in $\mathbb R^{3}$. We also define the functions
\[
\Psi_R(x)=\frac{1}{|x|}\int^{|x|}_{0}\Phi_R(r)dr, \quad \Theta_R(x)=\int^{|x|}_{0} r \Psi_R(r)dr.
\]
We collect below some properties of the above functions.
\begin{remark}[\cite{DM-MRL}] \label{Rema1}
Straightforward calculations give:
\begin{itemize}[leftmargin=5mm]
\item the identities $\partial_{j}\Theta_R(x)=x_j\Psi_R(x)$ and $\partial_{j}\Psi_R(x)=\frac{x_{j}}{|x|^{2}}(\Phi_R(x)-\Psi_R(x))$, and in particular,
\begin{align} \label{prop-cutoff-1}
\Delta \Theta_R(x)=2\Psi_R(x)+\Phi_R(x), \quad \partial^2_{jk}\Theta_R(x)=\delta_{jk}\Phi_R(x)+P_{jk}(x)(\Psi_R(x)-\Phi_R(x)),
\end{align}
where $P_{jk}(x)=\delta_{jk}-\frac{x_{j}x_{k}}{|x|^{2}}$ with $\delta_{jk}$ the Kronecker symbol;
\item and that the estimates below are satisfied:
\begin{equation}\label{prop-cutoff-2}
\begin{aligned}
&\Psi_R(x)-\Phi_R(x)\geq 0,\quad\qquad |\Psi_R(x)|\lesssim \min\left\{1, \frac{R}{|x|}\right\}, \\
& |\nabla \Phi_R(x)|\lesssim \frac{1}{\sigma R}, \,\,\,\quad\quad\qquad
|\nabla \Psi_R(x)|\lesssim \frac{1}{\sigma}\min \left\{\frac{1}{R}, \frac{R}{|x|^2}\right\}, \\
& |\Phi_R(x)-\Phi_{1,R}(x)|\lesssim \sigma, \quad\quad |\Psi_R(x)-\Phi_R(x)|\lesssim \frac{1}{\sigma}\min\left\{\frac{|x|}{R}, \frac{R}{|x|}\right\}.
\end{aligned}
\end{equation}
\end{itemize}
\end{remark}
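For instance, since $\Theta_R$ is radial with $\partial_r\Theta_R(r)=r\Psi_R(r)$, the first identity in \eqref{prop-cutoff-1} can be checked directly: using $\partial_{j}\Theta_R(x)=x_j\Psi_R(x)$ and $\partial_{j}\Psi_R(x)=\frac{x_{j}}{|x|^{2}}(\Phi_R(x)-\Psi_R(x))$,
\[
\Delta\Theta_R(x)=\partial_j\left(x_j\Psi_R(x)\right)=3\Psi_R(x)+x_j\partial_j\Psi_R(x)=3\Psi_R(x)+\left(\Phi_R(x)-\Psi_R(x)\right)=2\Psi_R(x)+\Phi_R(x);
\]
the formula for $\partial^2_{jk}\Theta_R$ follows in the same way by differentiating $x_j\Psi_R$ with respect to $x_k$.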
Let $(u,v)$ be a global $H^1$-solution to \eqref{SNLS} with initial data $(u_0,v_0)$ satisfying \eqref{cond-ener} and \eqref{cond-gwp}. We define the interaction Morawetz quantity adapted to system \eqref{SNLS} by
\[
\mathcal M^{\otimes 2}_{R}(t)=2\iint L_{\gamma}(u,v)(t,y)\nabla \Theta_R(x-y)\cdot \IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)dxdy,
\]
where
\[
L_{\gamma}(u,v)(t, x):=(|u|^{2}+\gamma^{2}|v|^{2})(t,x).
\]
From the conservation of mass, \eqref{est-K}, and \eqref{prop-cutoff-2}, we have
\[
\sup_{t\in\mathbb R}|\mathcal M^{\otimes 2}_{R}(t)| \lesssim R.
\]
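For the reader's convenience, we note that this bound follows from $|\nabla\Theta_R(x)|=|x|\,\Psi_R(x)\lesssim \min\{|x|,R\}\leq R$ (see \eqref{prop-cutoff-2}) together with the Cauchy-Schwarz inequality:
\[
|\mathcal M^{\otimes 2}_{R}(t)|\lesssim R\int L_{\gamma}(u,v)(t,y)\,dy\int \left(|u||\nabla u|+\gamma|v||\nabla v|\right)(t,x)\,dx\lesssim R,
\]
where the last step uses the conservation of mass and \eqref{est-K}.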
By Lemma \ref{Imporide}, we have
\begin{equation}\label{dtm}
\partial_{t}L_{\gamma}(u,v)=-2\nabla\cdot\IM (\overline{u}\nabla u)
-2\gamma\nabla\cdot\IM (\overline{v}\nabla v)+\frac{2}{3}\left(1-\frac{\gamma}{3}\right)\IM (u^{3}\overline{v})
\end{equation}
and
\begin{align*}
\partial_{t}\IM (\overline{u} \partial_k u+\gamma\overline{v} \partial_kv)&=
-2\partial_{j}\rea(\partial_j\overline{u} \partial_k u+ \partial_j\overline{v} \partial_k v)+
\frac{1}{2}\partial_{k}\Delta (|u|^{2}+|v|^{2})+2\partial_{k}N(u,v),
\end{align*}
where we recall that
\[
N(u,v)=\frac{1}{36}|u|^{4}+\frac{9}{4}|v|^{4}+|u|^{2}|v|^{2}+\frac{1}{9}\rea (\overline{u}^{3}v).
\]
Here repeated indices are summed. Moreover, by using integration by parts, we readily see that
\begin{align}\label{Ln}
\frac{d}{dt} \mathcal M^{\otimes 2}_{R}(t)=&4\iint L_{\gamma}(u,v)(t,y) \nabla \Theta_R(x-y)\cdot \nabla N(u,v)(t,x)dxdy
\\ \label{Ln1}
&+\iint L_{\gamma}(u,v)(t,y) \nabla \Theta_R(x-y)\cdot \nabla\Delta (|u|^{2}+|v|^{2})(t,x)dxdy
\\\label{Ln2}
&-4\iint L_{\gamma}(u,v)(t,y) \partial_{k} \Theta_R(x-y)\partial_{j}\rea(\partial_j\overline{u} \partial_k u+\partial_j \overline{v} \partial_kv)(t,x)dxdy
\\ \label{Ln3}
&+2\iint \partial_{t}L_{\gamma}(u,v)(t,y)\nabla \Theta_R(x-y)\cdot \IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)dxdy.
\end{align}
We are now able to prove the following interaction Morawetz estimates, which will play a fundamental role in the proof of the scattering theorem in the non-radial framework.
\begin{proposition}\label{Imnn}
Let $\mu,\gamma>0$, and $(\phi,\psi) \in \mathcal G(0,3\gamma,\gamma)$. Let $(u_{0},v_{0}) \in H^1\times H^1$ satisfy \eqref{cond-ener} and \eqref{cond-gwp}. Let $(u,v)$ be the corresponding global solution to \eqref{SNLS}. Then for arbitrarily small $\epsilon>0$, there exist $T_{0}=T_{0}(\epsilon)$, $J=J(\epsilon)$, $R_{0}=R_{0}(\epsilon, u_0,v_0, \phi,\psi)$ sufficiently large and $\sigma=\sigma(\epsilon)$,
$\eta=\eta(\epsilon)$ sufficiently small such that if $|\gamma-3|<\eta$, then for any $a\in \mathbb R$,
\begin{multline}\label{Pl11}
\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}(\chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t)) \\
\times K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dz\frac{dR}{R}dt \lesssim \epsilon,
\end{multline}
where $(u^{\xi}(t,x), v^{\xi}(t,x)):= (e^{ix\cdot\xi}u(t,x), e^{i\gamma x\cdot\xi}v(t,x))$ for some $\xi=\xi(t,z,R)\in \mathbb R^{3}$ and
\[
W_{\gamma}(f,g)=\int_{\mathbb R^{3}}L_{\gamma}(f,g)(x)dx.
\]
\end{proposition}
\begin{proof}
Since $\Delta \Theta_R(x-y)=3\Phi_{1,R}(x,y)+3(\Phi_R-\Phi_{1,R})(x,y)+2(\Psi_R-\Phi_R)(x-y)$, by integration by parts, we have
\begin{align}\label{S1}
\eqref{Ln}=&-12\iint L_{\gamma}(u,v)(t,y) \Phi_{1,R}(x-y)N(u,v)(t,x)dxdy
\\\label{S2}
&-12\iint L_{\gamma}(u,v)(t,y) (\Phi_R-\Phi_{1,R})(x-y)N(u,v)(t,x)dxdy
\\\label{S3}
&-8\iint L_{\gamma}(u,v)(t,y) (\Psi_R-\Phi_R)(x-y)N(u,v)(t,x)dxdy.
\end{align}
Again, by integration by parts and Remark \ref{Rema1}, we have
\begin{align} \label{Ln11}
\eqref{Ln1}=\iint L_{\gamma}(u,v)(t,y) \nabla (3\Phi_R(x-y)+2(\Psi_R-\Phi_R)(x-y))\cdot \nabla(|u|^{2}+|v|^{2})(t,x)dxdy.
\end{align}
We will treat \eqref{S2}, \eqref{S3}, and \eqref{Ln11} as error terms. Moreover, by Remark \ref{Rema1}, we get
\begin{align}\label{R1}
\eqref{Ln2}&= 4\iint L_{\gamma}(u,v)(t,y) \Phi_R(x-y)(|\nabla u|^{2}+|\nabla v|^{2})(t,x)dxdy\\\label{R2}
&+4\iint L_{\gamma}(u,v)(t,y) (\Psi_R-\Phi_R)(x-y)P_{jk}(x-y)\rea(\partial_j\overline{u} \partial_k u+ \partial_j\overline{v} \partial_k v)(t,x)dxdy.
\end{align}
Similarly, by \eqref{dtm} and Remark \ref{Rema1}, we see that
\begin{align}\label{J1}
\eqref{Ln3}&=-4\iint \Phi_R(x-y) \IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,y)\cdot
\IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)dxdy\\\label{J2}
&-4\iint (\Psi_R-\Phi_R)(x-y)P_{jk}(x-y)\IM(\overline{u} \partial_ku+\gamma\overline{v} \partial_k v)(t,y)
\IM(\overline{u} \partial_j u +\gamma\overline{v} \partial_j v)(t,x)dxdy\\\label{J3}
&+\frac{4}{3}\left(1-\frac{\gamma}{3}\right)\iint \nabla \Theta_R(x-y)\cdot \IM(\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)\IM(u^{3}\overline{v})(t,y)dxdy.
\end{align}
Now, let $\slashed\nabla_y$ denote the angular derivative centered at $y$, namely
\[
\slashed{\nabla}_y f(x) := \nabla f(x) - \frac{x-y}{|x-y|} \left(\frac{x-y}{|x-y|}\cdot \nabla f(x)\right)
\]
and similarly for $\slashed{\nabla}_x$. We have
\begin{equation}\label{Pds}
\begin{aligned}
\eqref{R2}+\eqref{J2} &=4\iint (\Psi_R-\Phi_R)(x-y)\Big((|\slashed{\nabla}_{y}u|^{2}+|\slashed{\nabla}_{y}v|^{2})(t,x) (|u|^{2}+\gamma^{2}|v|^{2})(t,y)\\
& -\IM (\overline{u}\slashed{\nabla}_{y}u+\gamma \overline{v}\slashed{\nabla}_{y}v)(t,x)\cdot
\IM (\overline{u}\slashed{\nabla}_{x}u+\gamma \overline{v}\slashed{\nabla}_{x}v)(t,y)\Big)dxdy.
\end{aligned}
\end{equation}
Since $\Psi_R-\Phi_R$ is radial and non-negative, by the Cauchy-Schwarz inequality, we infer that
\[
\eqref{R2}+\eqref{J2}=\eqref{Pds}\geq 0.
\]
On the other hand, as $\chi_{R}$ is radial and non-negative, we have
\begin{align}\nonumber
\eqref{R1}+\eqref{J1} &=\frac{4}{\omega_{3}R^{3}}\iiint \chi^2_{R}(x-z)\chi^2_{R}(y-z)\Big(
(|\nabla u|^{2}+|\nabla v|^{2})(t,x) (|u|^{2}+\gamma^{2}| v|^{2})(t,y) \nonumber \\
& - \IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,y)\cdot
\IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)\Big) dxdydz \nonumber\\
&=\frac{4}{\omega_{3}R^{3}}\int B(u,v)(t,z)dz, \label{Gin}
\end{align}
where
\begin{align*}
B(u,v)(t,z):&=\int \chi^{2}_{R}(x-z)(|\nabla u|^{2}+|\nabla v|^{2})(t,x)dx
\int \chi^{2}_{R}(y-z)(|u|^{2}+\gamma^{2}|v|^{2})(t,y)dy\\
& -\left|\int \chi^{2}_{R}(x-z)\IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)dx\right|^{2}.
\end{align*}
Notice that $B(u,v)$ is invariant under the gauge transformation
\[(u(t,x), v(t,x))\mapsto (u^{\xi}(t,x), v^{\xi}(t,x)):= (e^{ix\cdot\xi}u(t,x), e^{i\gamma x\cdot\xi}v(t,x))\]
for any $\xi\in \mathbb R^{3}$. Indeed, we see that
\begin{align*}
L_{\gamma}(u^{\xi},v^{\xi})&= L_{\gamma}(u,v), \quad V_{\gamma}(u^{\xi},v^{\xi})=\xi L_{\gamma}(u,v)+V_{\gamma}(u,v),\\
H(u^{\xi},v^{\xi})&=|\xi|^{2} L_{\gamma}(u,v)+H(u,v)+2\xi\cdot V_{\gamma}(u,v),
\end{align*}
where
\[
V_{\gamma}(u,v)(t,x):=\IM (\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x), \quad
H(u,v)(t,x):= (|\nabla u|^{2}+|\nabla v|^{2})(t,x),
\]
which implies that $B(u^{\xi},v^{\xi})=B(u,v)$. Next, we define
\[
\xi(t,z,R):=-\frac{\mathlarger{\int}\chi^{2}_{R}(x-z)V_{\gamma}(u,v)(t,x)dx}
{\mathlarger{\int}\chi^{2}_{R}(x-z)L_{\gamma}(u,v)(t,x)dx}
\]
provided that the denominator is non-zero; otherwise we can define $\xi(t,z,R)\equiv 0$. With this choice of $\xi$, we have
\[
\int \chi^{2}_{R}(x-z)V_{\gamma}(u^{\xi},v^{\xi})(t,x)dx=0.
\]
Combining this with \eqref{Gin}, we infer that
\begin{align*}
\eqref{R1}+\eqref{J1}=\frac{4}{\omega_{3}R^{3}}\int \left(\int \chi^{2}_{R}(x-z)H(u^{\xi},v^{\xi})(t,x)dx
\int \chi^{2}_{R}(y-z)L_{\gamma}(u,v)(t,y)dy\right)dz.
\end{align*}
Therefore, by the above identity, \eqref{S1}, \eqref{S2}, \eqref{S3}, \eqref{Ln11}, and \eqref{J3}, we get
\begin{align}
\frac{4}{\omega_{3}R^{3}}&\int_{\mathbb R^{3}}\left(\int \chi^{2}_{R}(y-z)L_{\gamma}(u,v)(t,y)dy\right) \nonumber \\
&\times \left(\int \chi^{2}_{R}(x-z)H(u^{\xi},v^{\xi})(t,x)-3\chi^{4}_{R}(x-z)N(u,v)(t,x) dx\right)dz \label{Mc1} \\
&\leq \frac{d}{dt} \mathcal M^{\otimes 2}_{R}(t) \label{Mc2}\\
\label{Mc3}
&+\iint L_{\gamma}(u,v)(t,y) (12(\Phi_R-\Phi_{1,R})+8(\Psi_R-\Phi_R))(x-y) N(u,v)(t,x)dxdy\\\label{Mc4}
&-\iint L_{\gamma}(u,v)(t,y) (3 \nabla\Phi_R+ 2 \nabla(\Psi_R-\Phi_R))(x-y)
\cdot \nabla(|u|^2+|v|^2)(t,x)dxdy\\\label{Mc5}
&+\frac{4}{3}\left(\frac{\gamma}{3}-1\right)\iint \nabla \Theta_R(x-y)\cdot \IM(\overline{u}\nabla u+\gamma \overline{v}\nabla v)(t,x)\IM(u^{3}\overline{v})(t,y)dxdy.
\end{align}
Now, we consider \eqref{Mc1}. Since
\[
\int |\nabla (\chi f)|^{2}dx=\int \chi^{2}|\nabla f|^{2}dx-\int\chi\Delta\chi |f|^{2}dx,
\]
we get
\begin{equation}\label{Sust}
\begin{aligned}
\int \chi^{2}_{R}(x-z)H(u^{\xi},v^{\xi})(t,x)dx&=
\int H(\chi_{R}(\cdot-z)u^{\xi},\chi_{R}(\cdot-z)v^{\xi})(t,x)dx\\
&+\int \chi_{R}(x-z)\Delta\left(\chi_{R}(x-z)\right)(|u|^{2}+|v|^{2})(t,x)dx.
\end{aligned}
\end{equation}
Thus, substituting \eqref{Sust} in \eqref{Mc1} and using Lemma \ref{lem-coer-2} with $\chi_R$ instead of $\Gamma_R$, we see that there exists $\nu >0$ such that
\begin{align*}
\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\eqref{Mc1}\frac{dR}{R}dt &\geq \frac{4\nu}{\omega_{3}JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\frac{1}{R^{3}}\int_{\mathbb R^{3}}
\bigg( W_{\gamma}(\chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t)) \\
&\mathrel{\phantom{\quad\quad\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}}} \times K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dz\bigg)\frac{dR}{R}dt\\
&+\frac{4\nu}{\omega_{3}JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}(\chi_{R}(\cdot-z)u(t), \chi_{R}(\cdot-z)v(t))\\
&\mathrel{\phantom{\quad}} \times \left(\int_{\mathbb R^{3}}\chi_{R}(\cdot-z)\Delta\left( \chi_{R}(\cdot-z)\right)(|u|^{2}+|v|^{2})(t,x)dx \right)dz\frac{dR}{R}dt.
\end{align*}
By the conservation of mass and the fact that $\|\Delta(\chi_R)\|_{L^\infty} \lesssim R^{-2}$, the absolute value of the second term on the right-hand side can be bounded by
\[
\frac{4\nu}{\omega_3 J T_0} \int_a^{a+T_0} \int_{R_0}^{R_0e^J} CR^{-2}\frac{dR}{R} dt \lesssim \frac{1}{JR_0^2}.
\]
This implies that
\begin{align}\nonumber
\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}&\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}( \chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t))K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dz\frac{dR}{R}dt\\\label{Plcsa}
&\lesssim \frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\eqref{Mc1}\frac{dR}{R}dt+\frac{1}{JR_{0}^{2}}.
\end{align}
Next, as $|\mathcal M^{\otimes 2}_{R}(t)|\lesssim R$, we have
\begin{equation}\label{Cul2}
\left|\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}} \eqref{Mc2}\frac{dR}{R} dt\right| \leq \frac{1}{JT_0} \int^{R_{0}e^{J}}_{R_{0}} \sup_{t\in [a, a+T_0]} |\mathcal M^{\otimes 2}_R(t)| \frac{dR}{R} \lesssim \frac{R_{0}e^{J}}{JT_0}.
\end{equation}
By \eqref{prop-cutoff-2}, the conservation of mass, \eqref{est-K}, and Sobolev embedding, we have
\begin{align*}
\Big| \frac{1}{JT_0} \int_a^{a+T_0} &\int_{R_0}^{R_0e^J} \iint L_\gamma(u,v)(t,y) (\Phi_R-\Phi_{1,R})(x-y) N(u,v)(t,x) dx dy \frac{dR}{R} dt \Big| \\
&\lesssim \frac{1}{JT_0} \int_a^{a+T_0} \int_{R_0}^{R_0e^J} \sigma \frac{dR}{R} dt \lesssim \sigma,
\end{align*}
where we have used the fact that
\begin{align*}
\int |L_\gamma(u,v)(t,y)|dy &\lesssim M_{3\gamma}(u(t),v(t)), \\
\int |N(u,v)(t,x)|dx &\lesssim \|(u(t),v(t))\|^4_{L^4 \times L^4 } \lesssim \|(u(t),v(t))\|^4_{H^1 \times H^1 }.
\end{align*}
Using \eqref{prop-cutoff-2}, we see that
\begin{align*}
\Big| \frac{1}{JT_0} &\int_a^{a+T_0} \int_{R_0}^{R_0e^J} \iint L_\gamma(u,v)(t,y) (\Psi_R-\Phi_R)(x-y) N(u,v)(t,x) dxdy \frac{dR}{R}dt \Big| \\
&\lesssim \frac{1}{ \sigma JT_0} \int_a^{a+T_0} \int_{R_0}^{R_0e^J} \iint |L_\gamma(u,v)(t,y)| \min \left\{\frac{|x-y|}{R},\frac{R}{|x-y|} \right\} |N(u,v)(t,x)|dxdy \frac{dR}{R} dt \\
&\lesssim \frac{1}{ \sigma JT_0} \int_a^{a+T_0} \iint |L_\gamma(u,v)(t,y)| \left( \int_{R_0}^{R_0e^J} \min \left\{\frac{|x-y|}{R},\frac{R}{|x-y|} \right\} \frac{dR}{R} \right) |N(u,v)(t,x)|dxdy dt \\
&\lesssim \frac{1}{ \sigma J},
\end{align*}
where we have used the fact that
\[
\int_0^\infty \min \left\{\frac{|x-y|}{R},\frac{R}{|x-y|} \right\} \frac{dR}{R} \lesssim 1.
\]
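Indeed, setting $a:=|x-y|>0$ (the contribution of $x=y$ being trivial), one computes
\[
\int_0^\infty \min\left\{\frac{a}{R},\frac{R}{a}\right\}\frac{dR}{R}
=\int_0^a \frac{R}{a}\,\frac{dR}{R}+\int_a^\infty \frac{a}{R}\,\frac{dR}{R}=1+1=2.
\]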
We thus get
\begin{equation}\label{Cul3}
\left|\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}} \eqref{Mc3}\frac{dR}{R} dt\right| \lesssim \sigma +\frac{1}{ \sigma J}.
\end{equation}
As $|\nabla\Phi_R(x)|, |\nabla(\Psi_R-\Phi_R)(x)| \lesssim \frac{1}{ \sigma R}$, we see that
\begin{align} \label{Cul4}
\left|\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}} \eqref{Mc4}\frac{dR}{R} dt\right| \lesssim \frac{1}{\sigma JR_0}.
\end{align}
Finally, as $|\gamma-3|<\eta$ and $|\nabla \Theta_R(x)| \lesssim R$, we infer from the conservation of mass, \eqref{est-K}, and Sobolev embedding that
\begin{equation}\label{bnms}
\left|\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\eqref{Mc5} \frac{dR}{R}dt\right|\lesssim \frac{\eta}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}dRdt \lesssim \eta \frac{R_{0}e^{J}}{J}.
\end{equation}
Combining these estimates \eqref{Plcsa}, \eqref{Cul2}, \eqref{Cul3}, \eqref{Cul4}, and \eqref{bnms}, we obtain
\begin{align*}
\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}&\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}( \chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t))K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dz\frac{dR}{R}dt\\
&\lesssim \frac{1}{JR_{0}^{2}}+\frac{R_{0}e^{J}}{JT_0}+\sigma+\frac{1}{\sigma J}+\frac{1}{ \sigma JR_{0}}+\eta \frac{R_{0}e^{J}}{J},
\end{align*}
which shows \eqref{Pl11} by choosing $\sigma=\epsilon$, $J=\epsilon^{-3}$, $R_{0}=\epsilon^{-1}$, $T_{0}=e^{\epsilon^{-3}}$, and $\eta=e^{-\epsilon^{-3}}$; indeed, with these choices, $\frac{1}{JR_{0}^{2}}=\epsilon^{5}$, $\frac{R_{0}e^{J}}{JT_0}=\epsilon^{2}$, $\sigma=\epsilon$, $\frac{1}{\sigma J}=\epsilon^{2}$, $\frac{1}{\sigma JR_{0}}=\epsilon^{3}$, and $\eta \frac{R_{0}e^{J}}{J}=\epsilon^{2}$, so that each term on the right-hand side is $\lesssim \epsilon$. The proof is complete.
\end{proof}
\subsection{Morawetz estimates. Radial setting}
We now turn our attention to the proof of the radial version of the Morawetz estimate which will be essential in the proof of the scattering theorem in the radially symmetric setting. In this context, we take advantage of the radial Sobolev embedding to get some spatial decay.
\begin{lemma}\label{Mst1}
Let $\mu,\gamma>0$, and $(\phi,\psi)\in \mathcal G(0, 3\gamma, \gamma)$. Let $(u_0,v_0)\in H^1 \times H^1 $ be radially symmetric satisfying \eqref{cond-ener} and \eqref{cond-gwp}. Then for any $T>0$ and $R=R(u_0,v_0, \phi,\psi)>0$ sufficiently large, the corresponding global solution to \eqref{SNLS} satisfies
\begin{equation}\label{Mirs}
\frac{1}{T}\int^{T}_{0}\int_{|x|\leq \frac{R}{2}}\left(|u(t,x)|^{\frac{10}{3}}+|v(t,x)|^{\frac{10}{3}}\right)dxdt
\lesssim \frac{R}{T}+\frac{1}{R^{2}}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\varphi_R$ be as in \eqref{defi-varphi-R} and define $\mathcal M_{\varphi_R}(t)$ as in \eqref{defi-M-varphi}. By the Cauchy-Schwarz inequality, the conservation of mass, and \eqref{est-K}, we have
\begin{align} \label{est-M-varphi-R}
\sup_{t\in \mathbb R} |\mathcal M_{\varphi_R}(t)| \lesssim R.
\end{align}
By \eqref{cor:iii}, we have
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) &= - \int \Delta^2 \varphi_R(x) (|u|^2+|v|^2)(t,x) dx + 4\int\varphi''_R(r) (|\nabla u|^2+|\nabla v|^2)(t,x) dx\\
&- 4 \int\Delta \varphi_R(x) N(u,v)(t,x) dx.
\end{align*}
As $\varphi_R(x)=|x|^2$ for $|x|\leq R$, we see that
\begin{align*}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) &= 8 \left( \int_{|x|\leq R} (|\nabla u|^2+|\nabla v|^2)(t,x) dx - 3 \int_{|x|\leq R} N(u,v)(t,x) dx \right) \\
&- \int \Delta^2\varphi_R(x) (|u|^2+|v|^2)(t,x) dx + 4 \rea \int_{|x| >R} \partial^2_{jk} \varphi_R(x) (\partial_j \overline{u} \partial_k u + \partial_j \overline{v} \partial_k v) (t,x) dx \\
&- 4 \int_{|x|>R} \Delta \varphi_R(x) N(u,v)(t,x) dx.
\end{align*}
Since $\|\Delta^2\varphi_R\|_{L^\infty } \lesssim R^{-2}$, the conservation of mass implies
\[
\int \Delta^2 \varphi_R(x) (|u|^2+|v|^2)(t,x) dx \lesssim R^{-2}.
\]
As $(u,v)$ is radially symmetric, we use the fact
\[
\partial_j = \frac{x_j}{r} \partial_r, \quad\partial^2_{jk} =\left(\frac{\delta_{jk}}{r} - \frac{x_j x_k}{r^3}\right) \partial_r + \frac{x_j x_k}{r^2} \partial^2_r
\]
to get
\[
\partial^2_{jk} \varphi_R(x) \partial_j \overline{u}(t,x) \partial_k u(t,x) = \varphi''_R(r) |\partial_r u(t,r)|^2 \geq 0
\]
which implies
\[
\rea \int_{|x|>R} \partial^2_{jk}\varphi_R(x) (\partial_j \overline{u}\partial_k u + \partial_j \overline{v} \partial_k v)(t,x)\, dx \geq 0.
\]
On the other hand, by arguing as in the proof of Lemma \ref{lem-viri-est-rad}, we have
\[
\left| \int_{|x|>R} \Delta \varphi_R(x) N(u,v)(t,x) dx \right| \lesssim R^{-2} K(u(t),v(t)) \lesssim R^{-2}.
\]
Thus we get
\begin{align} \label{est-M-varphi-R-app}
\frac{d}{dt} \mathcal M_{\varphi_R}(t) \geq 8 \left( \int_{|x|\leq R} (|\nabla u|^2+|\nabla v|^2)(t,x) dx - 3 \int_{|x|\leq R} N(u,v)(t,x) dx \right) - C R^{-2}
\end{align}
for all $t\in \mathbb R$. Now, let $\varrho_R(x)=\varrho(x/R)$ with $\varrho$ as in \eqref{defi-varrho}. We have
\begin{align*}
\int |\nabla(\varrho_R u(t))|^2 dx &= \int \varrho^2_R |\nabla u(t)|^2 dx - \int \varrho_R \Delta\varrho_R |u(t)|^2 dx \\
&= \int_{|x|\leq R} |\nabla u(t)|^2 dx - \int_{R/2 \leq |x| \leq R} (1-\varrho^2_R) |\nabla u(t)|^2 dx - \int \varrho_R \Delta\varrho_R |u(t)|^2 dx
\end{align*}
and
\[
\int N(\varrho_R u, \varrho_R v)(t,x) dx = \int_{|x| \leq R} N(u,v)(t,x) dx + \int_{R/2\leq |x| \leq R} \left(N(\varrho_R u, \varrho_R v) - N(u,v) \right)(t,x) dx.
\]
It follows that
\begin{align*}
\int_{|x|\leq R} &(|\nabla u|^2 +|\nabla v|^2)(t,x) dx - 3 \int_{|x|\leq R} N(u,v)(t,x) dx \\
&=\int (|\nabla(\varrho_R u)|^2 + |\nabla(\varrho_R v)|^2)(t,x) dx - 3\int N(\varrho_R u, \varrho_R v)(t,x) dx \\
&+ \int_{R/2 \leq |x| \leq R} (1-\varrho_R^2(x)) (|\nabla u|^2 +|\nabla v|^2) (t,x) dx \\
&+ \int \varrho_R(x) \Delta \varrho_R(x) (|u|^2+|v|^2) (t,x) dx - 3\int_{R/2 \leq |x| \leq R} \left( N(\varrho_R u, \varrho_R v) - N(u,v)\right) (t,x) dx.
\end{align*}
Since $0 \leq \varrho_R \leq 1$ and $\|\Delta \varrho_R\|_{L^\infty } \lesssim R^{-2}$, by the conservation of mass, \eqref{est-K}, and the radial Sobolev embedding, we have
\begin{align*}
\int_{|x|\leq R} (|\nabla u|^2 +|\nabla v|^2)(t,x) dx &- 3 \int_{|x|\leq R} N(u,v)(t,x) dx \\
&\geq K(\varrho_R u(t), \varrho_R v(t))- 3 P(\varrho_R u(t), \varrho_R v(t)) + \mathcal O(R^{-2}).
\end{align*}
Thanks to \eqref{coer-prop} with $\varrho_R$ in place of $\Gamma_R$ and $z=\xi_1= \xi_2 =0$, there exist $R=R(u_0,v_0,\phi,\psi)>0$ sufficiently large and $\nu=\nu(u_0,v_0,\phi,\psi)>0$ such that
\[
\int_{|x|\leq R} (|\nabla u|^2 +|\nabla v|^2)(t,x) dx - 3 \int_{|x|\leq R} N(u,v)(t,x) dx \geq \nu K(\varrho_R u(t), \varrho_R v(t)) + \mathcal O(R^{-2})
\]
for all $t\in \mathbb R$. This together with \eqref{est-M-varphi-R-app} yield
\[
\nu K(\varrho_R u(t), \varrho_R v(t)) \leq \frac{d}{dt} \mathcal M_{\varphi_R}(t) + C R^{-2}
\]
for all $t\in \mathbb R$. Integrating on $[0, T]$ and using \eqref{est-M-varphi-R}, we get
\[
\frac{1}{T}\int^{T}_{0}K(\varrho_{R}u(t), \varrho_{R}v(t))dt\lesssim
\frac{R}{T}+\frac{1}{R^{2}}.
\]
In particular, we have
\[
\frac{1}{T}\int^{T}_{0}\|\nabla(\varrho_{R}u(t))\|^{2}_{L^{2} }dt\lesssim
\frac{R}{T}+\frac{1}{R^{2}}
\]
which together with the Gagliardo-Nirenberg inequality
\[
\| u \|^{\frac{10}{3}}_{L^{\frac{10}{3}} }\lesssim \| \nabla u \|^{2}_{L^{2} }
\| u \|^{\frac{4}{3}}_{L^{2} }
\]
imply
\[
\frac{1}{T}\int^{T}_{0}\|\varrho_{R}u(t)\|^{\frac{10}{3}}_{L^{\frac{10}{3}} }dt
\lesssim \frac{1}{T}\int^{T}_{0}\|\nabla(\varrho_{R}u(t))\|^{2}_{L^{2} }dt\lesssim
\frac{R}{T}+\frac{1}{R^{2}}.
\]
By the choice of $\varrho_R$, we obtain
\[
\frac{1}{T}\int^{T}_{0}\int_{|x|\leq \frac{R}{2}}|u(t,x)|^{\frac{10}{3}}dxdt\lesssim
\frac{R}{T}+\frac{1}{R^{2}}.
\]
A similar estimate holds for $v$. The proof is complete.
\end{proof}
\section{Scattering criteria}\label{sec:sct}
In this section, we give scattering criteria for solutions to \eqref{SNLS} in the spirit of Dodson and Murphy \cite{DM-MRL, DM-PAMS} (see also \cite{WY}). Let us start with the scattering criterion for non-radial solutions.
\begin{proposition} \label{prop-scat-crit-non-rad}
Let $\mu, \gamma>0$. Suppose that $(u, v)$ is a global $H^1$-solution to \eqref{SNLS} satisfying
\begin{equation}\label{Est1}
\sup_{t\in \mathbb R}\| (u(t),v(t)) \|_{H^{1} \times H^{1} }
\lesssim E
\end{equation}
for some constant $E>0$. Then there exist $\epsilon=\epsilon(E)>0$ small enough and $T_{0}=T_{0}(\epsilon, E)>0$ sufficiently large such that if for any $a\in \mathbb R$, there exists $t_0\in(a,a+T_{0})$ such that
\begin{equation}\label{Dtn}
\| (u(t),v(t)) \|_{L^{5}_{t,x}\times L^{5}_{t,x}([t_0-\epsilon^{-\frac{1}{4}}, t_0] \times \mathbb R^3)} \lesssim \epsilon,
\end{equation}
then the solution scatters forward in time.
\end{proposition}
\begin{proof}
By Lemma \ref{lem-smal-scat}, it suffices to show that there exists $T>0$ such that
\begin{equation}\label{Oc}
\| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \lesssim \epsilon^{\frac{1}{32}}.
\end{equation}
To prove \eqref{Oc}, we first write
\[
(\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T))=(\mathcal S_{1}(t)u_{0},\mathcal S_{2}(t)v_{0})+i \int_0^T (\mathcal S_1(t-s)F_1(s), \mathcal S_2(t-s)F_2(s)) ds.
\]
By Sobolev embedding, Strichartz estimates, and the monotone convergence theorem, there exists
$T_{1}>0$ sufficiently large such that if $T>T_{1}$, then
\begin{equation}\label{Cdt}
\|(\mathcal S_{1}(t)u_{0},\mathcal S_{2}(t)v_{0})\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \lesssim \epsilon.
\end{equation}
We take $a=T_{1}$ and $T=t_0$, where $a$ and $t_0$ are as in \eqref{Dtn}, and we write
\[
i \int_0^T (\mathcal S_1(t-s)F_1(s), \mathcal S_2(t-s)F_2(s)) ds =: H_1(t) + H_2(t),
\]
where
\[
H_{j}(t)=i\int_{I_{j}}(\mathcal S_{1}(t-s)F_{1}(s), \mathcal S_{2}(t-s)F_{2}(s))ds, \quad I_{1}=[0, T-\epsilon^{-\frac{1}{4}}], \quad
I_{2}=[T-\epsilon^{-\frac{1}{4}}, T].
\]
To estimate $H_2$, we observe that
\begin{equation}\label{on1}
\|(u,v)\|_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}
\lesssim 1.
\end{equation}
Indeed, by Strichartz estimates, fractional chain rule, \eqref{Est1}, and \eqref{Dtn}, we have
\begin{align*}
\|(u,v)\|&_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}
\\
&\lesssim E+ \| (u,v) \|^{2}_{L^{5}_{t,x}\times L^{5}_{t,x}([T-\epsilon^{-\frac{1}{4}}, T] \times \mathbb R^3)}
\|(u,v)\|_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}\\
&\lesssim E+ \epsilon^2\|(u,v)\|_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}.
\end{align*}
By choosing $\epsilon$ small enough, we get \eqref{on1}. By Sobolev embedding and Strichartz estimates, we see that
\begin{align*}
\|H_{2}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \lesssim \|(u,v)\|^{2}_{L_{t,x}^{5}\times L_{t,x}^{5}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}
\|(u,v)\|_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}([T-\epsilon^{-\frac{1}{4}}, T]\times \mathbb R^{3})}
\end{align*}
which together with \eqref{Dtn} and \eqref{on1} imply
\begin{equation}\label{Pii}
\|H_{2}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}\lesssim \epsilon^2.
\end{equation}
On the other hand, we claim that
\begin{equation}\label{cla1}
\|H_{1}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}\lesssim \epsilon^{\frac{1}{32}}.
\end{equation}
In fact, we notice that
\[
H_{1}(t)=(\mathcal S_{1}(t-T+\epsilon^{-\frac{1}{4}})u(T-\epsilon^{-\frac{1}{4}}), \mathcal S_{2}(t-T+\epsilon^{-\frac{1}{4}})v(T-\epsilon^{-\frac{1}{4}}))
-(\mathcal S_{1}(t)u_{0}, \mathcal S_{2}(t)v_{0})
\]
which, by Strichartz estimates, implies
\begin{equation*}
\|H_{1}\|_{L_{t}^{4}L^{3}_{x}\times L_{t}^{4}L^{3}_{x}([T,\infty)\times \mathbb R^{3})}
\lesssim \|(u(T-\epsilon^{-\frac{1}{4}}), v(T-\epsilon^{-\frac{1}{4}}))\|_{L^2\times L^2 } + \|(u_{0}, v_{0})\|_{L^2 \times L^2}\lesssim E.
\end{equation*}
Moreover, as
\[
\|(F_{1}(t), F_{2}(t))\|_{L^1 \times L^1 } \lesssim \|(u(t),v(t))\|^3_{L^3 \times L^3 } \lesssim \| (u(t),v(t)) \|^{3}_{H^{1}\times H^{1} }
\lesssim E^{3},
\]
we have from the dispersive estimate \eqref{Dpe} and Young's inequality that
\begin{equation*}
\|H_{1}\|_{L_{t}^{4}L^{\infty}_{x}\times L_{t}^{4}L^{\infty}_{x}([T,\infty)\times \mathbb R^{3})}
\lesssim
\left\|\int^{T-\epsilon^{-\frac{1}{4}}}_{0}|t-s|^{-3/2} ds\right\|_{L^{4}_{t}([T,\infty))}
\lesssim \epsilon^{\frac{1}{16}}.
\end{equation*}
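Here the last bound follows from an explicit computation: for $t\geq T$ and $0\leq s\leq T-\epsilon^{-\frac{1}{4}}$,
\[
\int^{T-\epsilon^{-\frac{1}{4}}}_{0}|t-s|^{-3/2}\,ds\leq 2\left(t-T+\epsilon^{-\frac{1}{4}}\right)^{-\frac{1}{2}},
\qquad
\left\|\left(t-T+\epsilon^{-\frac{1}{4}}\right)^{-\frac{1}{2}}\right\|_{L^{4}_{t}([T,\infty))}=\left(\int_0^\infty\left(\tau+\epsilon^{-\frac{1}{4}}\right)^{-2}d\tau\right)^{\frac{1}{4}}=\epsilon^{\frac{1}{16}}.
\]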
By interpolation, we get
\begin{align*}
\|H_{1}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \leq \|H_{1}\|^{1/2}_{L_{t}^{4}L^{3}_{x}\times L_{t}^{4}L^{3}_{x}([T,\infty)\times \mathbb R^{3})}
\|H_{1}\|^{1/2}_{L_{t}^{4}L^{\infty}_{x}\times L_{t}^{4}L^{\infty}_{x}([T,\infty)\times \mathbb R^{3})}
\lesssim \epsilon^{\frac{1}{32}}
\end{align*}
which proves \eqref{cla1}. Collecting \eqref{Cdt}, \eqref{Pii}, and \eqref{cla1}, we obtain \eqref{Oc}, and the proof is complete.
\end{proof}
Let us now give an analogue of the previous criterion in the radial setting.
\begin{proposition}[Scattering criterion for radial solutions]\label{prop-scat-crit-rad}
Let $\mu,\gamma>0$. Suppose that $(u,v)$ is a global $H^1$-solution to \eqref{SNLS} satisfying
\begin{equation}\label{Lrs}
\sup_{t\in \mathbb R}\|(u(t),v(t))\|_{H^{1} \times H^{1}}\leq E
\end{equation}
for some constant $E>0$. Then there exist $\epsilon=\epsilon(E)>0$ and $R=R(E)>0$ such that if
\begin{equation}\label{Taos}
\liminf_{t\rightarrow \infty}\int_{|x|\leq R} \left(|u(t,x)|^{2}+3\gamma|v(t,x)|^{2}\right)dx\leq \epsilon^{2},
\end{equation}
then the solution scatters forward in time.
\end{proposition}
\begin{proof}
Let $\epsilon>0$ be a small constant. By Lemma \ref{lem-smal-scat}, it suffices to show the existence of $T=T(\epsilon)>0$ such that
\begin{align} \label{scat-crit-app}
\| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}<\epsilon^{\frac{1}{32}}.
\end{align}
To show this, we follow the argument of \cite[Lemma 2.2]{DM-PAMS}. By the Strichartz estimates and the monotone
convergence theorem, there exists $T=T(\epsilon)>0$ sufficiently large such that
\begin{equation}\label{Cddf}
\|(\mathcal S_{1}(t)u_{0},\mathcal S_{2}(t)v_{0})\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}<\epsilon.
\end{equation}
As in the proof of Proposition \ref{prop-scat-crit-non-rad}, we write
\[
(\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) = (\mathcal S_{1}(t)u_{0},\mathcal S_{2}(t)v_{0})+H_{1}(t)+H_{2}(t),
\]
where
\[
H_{j}(t)= i\int_{I_{j}}(\mathcal S_{1}(t-s)F_{1}(s), \mathcal S_{2}(t-s)F_{2}(s))ds, \quad I_{1}=[0, T-\epsilon^{-\frac{1}{4}}], \quad
I_{2}=[T-\epsilon^{-\frac{1}{4}},T].
\]
By \eqref{Taos} and enlarging $T$ if necessary, we have
\begin{equation}\label{Tapro}
\int\varrho_{R}(x)\left(|u(T,x)|^{2}+3\gamma|v(T,x)|^{2}\right)dx\leq \epsilon^{2},
\end{equation}
where $\varrho_{R}(x)=\varrho(x/R)$ with $\varrho:\mathbb R^3 \rightarrow [0,1]$ a smooth cut-off function satisfying
\begin{align} \label{defi-varrho}
\varrho(x) = \left\{
\begin{array}{ccl}
1 &\text{if}& |x| \leq 1/2, \\
0 &\text{if}& |x| \geq 1.
\end{array}
\right.
\end{align}
Using the fact (see Lemma \ref{Imporide}) that
\[
\partial_{t}(|u|^{2}+3\gamma|v|^{2})=-2\nabla\cdot\IM (\overline{u}\nabla u)
-6\nabla\cdot\IM (\overline{v}\nabla v),
\]
\eqref{Lrs}, and $\|\nabla \varrho_{R}\|_{L^\infty(\mathbb R^3)}\lesssim R^{-1}$, an integration by parts and the H\"older inequality yield
\[
\left| \partial_{t}\int\varrho_{R}(x) (|u(t,x)|^{2}+3\gamma|v(t,x)|^{2}) dx \right|\lesssim R^{-1}.
\]
Taking $R$ sufficiently large such that $R^{-1}\epsilon^{-\frac{1}{4}}\ll \epsilon^{2}$, we infer from \eqref{Tapro} that
\[
\left\|\int \varrho_{R}(x)(|u(\cdot,x)|^{2}+3\gamma|v(\cdot,x)|^{2})dx\right\|_{L_{t}^{\infty}(I_{2})}
\lesssim \epsilon^2.
\]
This inequality, together with $\varrho_R^2 \leq \varrho_R$ (recall $0\leq \varrho_R\leq 1$), implies that
\begin{equation}\label{Igf}
\| \varrho_R u \|_{L_{t}^{\infty}L^{2}_{x}(I_{2}\times \mathbb R^{3})}\lesssim \epsilon \quad \mbox{and}\quad \| \varrho_R v \|_{L_{t}^{\infty}L^{2}_{x}(I_{2}\times \mathbb R^{3})}\lesssim \epsilon.
\end{equation}
Thanks to the radial Sobolev embedding \eqref{rad-sobo}, we have from \eqref{Lrs} and \eqref{Igf} that
\begin{align*}
\|u\|_{L^\infty_t L^3_x(I_2\times \mathbb R^3)} &\leq \|\varrho_R u\|_{L^\infty_tL^3_x(I_2\times \mathbb R^3)} + \|(1-\varrho_R) u\|_{L^\infty_tL^3_x(I_2\times \mathbb R^3)} \\
&\lesssim \|\varrho_R u\|^{1/2}_{L^\infty_tL^2_x(I_2\times \mathbb R^3)} \|\varrho_R u\|^{1/2}_{L^\infty_tL^6_x(I_2\times \mathbb R^3)} \\
& + \|(1-\varrho_R)u\|^{1/3}_{L^\infty_t L^\infty_x(I_2\times \mathbb R^3)} \|(1-\varrho_R) u\|^{2/3}_{L^\infty_tL^2_x(I_2\times \mathbb R^3)} \\
&\lesssim \epsilon^{\frac{1}{2}} + R^{-\frac{1}{3}} \lesssim \epsilon^{\frac{1}{2}}
\end{align*}
provided that $R>\epsilon^{-\frac{3}{2}}$. A similar estimate holds for $v$. In particular, we get
\begin{equation}\label{Estils}
\|(u,v)\|_{L_{t}^{\infty}L^{3}_{x}\times L_{t}^{\infty}L^{3}_{x} (I_{2}\times \mathbb R^{3})} \lesssim\epsilon^{\frac{1}{2}}.
\end{equation}
Moreover, we have from the local theory that
\[
\|(u,v)\|_{L_{t}^2L^{\infty}_{x}\times L_{t}^2L^{\infty}_{x} (I_2\times \mathbb R^3)} +
\|(u,v)\|_{L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x}\times L_{t}^{2}\dot{W}^{\frac{1}{2},6}_{x} (I_2\times \mathbb R^3)}
\lesssim (1+|I_2|)^{\frac{1}{2}} \lesssim \epsilon^{-\frac{1}{8}}.
\]
By Sobolev embedding and Strichartz estimates, we see that
\begin{multline} \label{est-H2}
\|H_{2}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \\
\lesssim
\|(u,v)\|_{L^\infty_t L^3_x \times L^\infty_t L^3_x (I_2\times \mathbb R^3)} \|(u,v)\|_{L^2_tL^\infty_x \times L^2_t L^\infty_x (I_2\times \mathbb R^3)} \|(u,v)\|_{L^2_t \dot{W}^{\frac{1}{2},6}_x \times L^2_t \dot{W}^{\frac{1}{2},6}_x(I_2\times \mathbb R^3)} \lesssim \epsilon^{\frac{1}{4}}.
\end{multline}
On the other hand, the same argument developed in the proof of \eqref{cla1} shows that
\begin{align} \label{est-H1}
\|H_{1}\|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}
\lesssim \epsilon^{\frac{1}{32}}.
\end{align}
Collecting \eqref{Cddf}, \eqref{est-H2}, and \eqref{est-H1}, we prove \eqref{scat-crit-app}, and the proof is complete.
\end{proof}
\section{Proofs of the main Theorems}\label{sec:proofs-main}
By exploiting the tools obtained in the previous parts of the paper, we are now able to prove the scattering for non-radial and radial solutions to \eqref{SNLS} given in Theorem \ref{Th1}. See \cite{MX,WY,XX} for analogous results for NLS systems of quadratic type.
\subsection{Proof of the scattering results}
\begin{proof}[{Proof of Theorem \ref{Th1} for non-radial solutions}]
It suffices to check the scattering criterion given in Proposition \ref{prop-scat-crit-non-rad}. To this end, we are inspired by \cite{XZZ}. Fix $a \in \mathbb R$ and let $\epsilon>0$ be a sufficiently small constant. Let $T_0=T_0(\epsilon)>0$ be a sufficiently large constant to be chosen later. We will show that there exists $t_0 \in (a, a+T_0)$ such that
\begin{align} \label{scat-crit-non-rad-app}
\|(u,v)\|_{L^5_{t,x}\times L^5_{t,x}([t_0-\epsilon^{-\frac{1}{4}}, t_0] \times \mathbb R^3)} \lesssim \epsilon^{\frac{3}{140}}.
\end{align}
By Proposition \ref{Imnn}, there exist $T_0=T_0(\epsilon), J=J(\epsilon), R_0=R_0(\epsilon, u_0,v_0,\phi,\psi)$, $\sigma=\sigma(\epsilon)$, and $\eta=\eta(\epsilon)$ such that if $|\gamma-3|<\eta$, then
\begin{multline*}
\frac{1}{JT_{0}}\int^{a+T_{0}}_{a}\int^{R_{0}e^{J}}_{R_{0}}\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}(\chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t)) \\
\times K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dz\frac{dR}{R}dt \lesssim \epsilon.
\end{multline*}
It follows that there exists $R\in [R_{0}, e^{J}R_{0}]$ such that
\[
\frac{1}{T_{0}}\int^{a+T_{0}}_{a}\frac{1}{R^{3}}\int_{\mathbb R^{3}}
W_{\gamma}( \chi_{R}(\cdot-z)u(t),\chi_{R}(\cdot-z)v(t))K(\chi_{R}(\cdot-z)u^{\xi}(t),\chi_{R}(\cdot-z)v^{\xi}(t))dzdt
\lesssim \epsilon.
\]
In particular,
\[
\frac{1}{T_{0}}\int^{a+T_{0}}_{a}\frac{1}{R^{3}}\int
\|\chi_{R}(\cdot-z)u(t)\|^{2}_{L^{2}} \|\nabla\left(\chi_{R}(\cdot-z)u^{\xi}(t)\right)\|^{2}_{L^{2}}dzdt
\lesssim \epsilon
\]
and similarly for $v$. By the change of variable $z=\frac{R}{4}(w+\theta)$ with $w\in \mathbb Z^3$ and $\theta \in [0,1]^3$, we deduce from the integral mean value theorem and Fubini's theorem that there exists $\theta\in [0,1]^{3}$ such that
\[
\frac{1}{T_{0}}\int^{a+T_{0}}_{a}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R} \left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{2}_{L^{2}}
\left\|\nabla\left(\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u^{\xi}(t)\right)\right\|^{2}_{L^{2}}dt
\lesssim \epsilon.
\]
By splitting the interval $[a+T_0/2, a+3T_0/4]$ into $T_0 \epsilon^{\frac{1}{4}}$ subintervals of the same length $\epsilon^{-\frac{1}{4}}$, we infer that there exists $t_0 \in [a+T_0/2, a+3T_0/4]$ such that $I_0:=[t_0-\epsilon^{-\frac{1}{4}}, t_0]\subset (a,a+T_0)$ and
\begin{equation}\label{Csit}
\int_{I_0}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{2}_{L^{2}}
\left\|\nabla\left(\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u^{\xi}(t)\right)\right\|^{2}_{L^{2}}dt
\lesssim \epsilon^{\frac{3}{4}}.
\end{equation}
In particular, by the classical Gagliardo-Nirenberg inequality
\[
\|f\|^{4}_{L^{3}}\lesssim \|f\|^{2}_{L^{2}}\|\nabla f\|^{2}_{L^{2}},
\]
applied to $\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u^{\xi}(t)$, together with $|u^{\xi}|=|u|$ and \eqref{Csit}, we obtain
\begin{equation}\label{dir}
\int_{I_0}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{4}_{L^{3}}\lesssim\epsilon^{\frac{3}{4}}.
\end{equation}
On the other hand, by using the H\"older inequality and the Sobolev embedding, we get
\begin{align}
\sum_{w\in \mathbb Z^{3}}
&\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{2}_{L^{3}} \nonumber\\
&\lesssim
\sum_{w\in \mathbb Z^{3}}\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|_{L^{2}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|_{L^{6}} \nonumber \\
&\leq \Big( \sum_{w \in \mathbb Z^3} \Big\| \chi_R\Big(\cdot -\frac{R}{4}(w+\theta)\Big) u(t)\Big\|_{L^2}^2 \Big)^{1/2} \Big( \sum_{w\in \mathbb Z^3} \Big\|\chi_R \Big(\cdot-\frac{R}{4}(w+\theta) \Big) u(t)\Big\|^2_{L^6} \Big)^{1/2} \nonumber \\
&\lesssim \|u(t)\|_{L^2} \|u(t)\|_{H^1} \lesssim 1.\label{Oinm}
\end{align}
For the last line above we used the following: by Sobolev,
\begin{align*}
\sum_{w \in \mathbb Z^3} \Big\|\chi_R &\Big(\cdot-\frac{R}{4}(w+\theta)\Big) u(t)\Big\|^2_{L^6} \\
&\lesssim \sum_{w \in \mathbb Z^3} \Big\|\chi_R \Big(\cdot-\frac{R}{4}(w+\theta)\Big) \nabla u(t)\Big\|^2_{L^2}+ \frac{1}{R^2} \Big\|(\nabla \chi)_R \Big(\cdot-\frac{R}{4}(w+\theta)\Big) u(t)\Big\|^2_{L^2} \\
&\lesssim \|\nabla u(t)\|^2_{L^2} + \frac{1}{R^2\sigma^2} \|u(t)\|^2_{L^2} \lesssim \|u(t)\|^2_{H^1}
\end{align*}
as $|\nabla\chi| \lesssim \sigma^{-1}$ and $R>R_0 =\epsilon^{-1} =\sigma^{-1}$ (see the end of the proof of Proposition \ref{Imnn}).
It follows from \eqref{dir}, \eqref{Oinm}, and the almost orthogonality that
\begin{align}\nonumber
\| u &\|^{3}_{L^{3}_{t,x}(I_0\times \mathbb R^{3})}\lesssim \int_{I_0}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{3}_{L^{3} }\\\nonumber
&\leq \int_{I_0}\left(\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{4}_{L^{3} }\right)^{\frac{1}{2}}
\left(\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{4}_{L^{2} }\right)^{\frac{1}{2}}\\\nonumber
&\leq \left(\int_{I_{0}}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{4}_{L^{3} }\right)^{\frac{1}{2}}
\left(\int_{I_{0}}\sum_{w\in \mathbb Z^{3}}
\left\|\chi_{R}\left(\cdot-\frac{R}{4}(w+\theta)\right)u(t)\right\|^{2}_{L^{3}}\right)^{\frac{1}{2}}\\\label{Nmbs}
&\lesssim \epsilon^{\frac{1}{4}}.
\end{align}
On the other hand, by Strichartz estimates, Sobolev embedding, and a standard continuity argument, we deduce that
\[
\|u\|_{L^{10}_{t,x}(I_{0}\times \mathbb R^{3})} \lesssim \left\langle I_{0}\right\rangle^{\frac{1}{10}}.
\]
This inequality, \eqref{Nmbs}, and interpolation imply that
\[
\|u\|_{L^{5}_{t,x}(I_{0}\times \mathbb R^{3})}\lesssim
\|u\|^{\frac{3}{7}}_{L^{3}_{t,x}(I_{0}\times \mathbb R^{3})}
\|u\|^{\frac{4}{7}}_{L^{10}_{t,x}(I_{0}\times \mathbb R^{3})} \lesssim \epsilon^{\frac{3}{140}}.
\]
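For the reader's convenience, the powers of $\epsilon$ in the last step can be tracked explicitly: since $|I_0|=\epsilon^{-\frac{1}{4}}$ and $\epsilon$ is small (so that $\langle I_{0}\rangle\lesssim \epsilon^{-\frac{1}{4}}$), \eqref{Nmbs} gives $\|u\|_{L^{3}_{t,x}(I_{0}\times \mathbb R^{3})}\lesssim \epsilon^{\frac{1}{12}}$, and therefore
\[
\|u\|^{\frac{3}{7}}_{L^{3}_{t,x}(I_{0}\times \mathbb R^{3})}
\|u\|^{\frac{4}{7}}_{L^{10}_{t,x}(I_{0}\times \mathbb R^{3})}
\lesssim \big(\epsilon^{\frac{1}{12}}\big)^{\frac{3}{7}}\big(\epsilon^{-\frac{1}{4}}\big)^{\frac{1}{10}\cdot\frac{4}{7}}
=\epsilon^{\frac{1}{28}-\frac{1}{70}}=\epsilon^{\frac{3}{140}}.
\]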
Similarly, we have
\[
\|v\|_{L^{5}_{t,x}(I_{0}\times \mathbb R^{3})}\lesssim \epsilon^{\frac{3}{140}}.
\]
Therefore, \eqref{scat-crit-non-rad-app} holds, and the proof is complete.
\end{proof}
\begin{proof}[{Proof of Theorem \ref{Th1} for radial solutions}]
We fix $\epsilon>0$ and $R$ as in Proposition \ref{prop-scat-crit-rad}. From \eqref{Mirs} and the mean value theorem, we infer that there exist sequences of times $t_n\rightarrow\infty$
and radii $R_n\rightarrow\infty$ such that
\begin{align} \label{est-n}
\lim_{n \to \infty}\int_{|x|\leq {R_{n}}} \left(|u(t_n,x)|^{\frac{10}{3}}+|v(t_n,x)|^{\frac{10}{3}}\right)dx=0.
\end{align}
Choosing $n$ sufficiently large so that $R_{n}\geq R$, the H\"older inequality yields
\begin{align*}
\int_{|x|\leq R} \left(|u(t_n,x)|^{2}+3\gamma|v(t_n,x)|^{2} \right) dx\lesssim R^{\frac{3}{5}} \left[\left(\int_{|x|\leq R_{n}}|u(t_n,x)|^{\frac{10}{3}}dx \right)^{\frac{3}{5}}
+\left(\int_{|x|\leq R_{n}}|v(t_n,x)|^{\frac{10}{3}}dx \right)^{\frac{3}{5}}\right]
\end{align*}
which, by \eqref{est-n}, shows \eqref{Taos}. By Proposition \ref{prop-scat-crit-rad}, the solution scatters forward in time.
\end{proof}
\subsection{Proof of the blow-up results}
It remains to prove the blow-up results as stated in Theorem \ref{theo-blow}. Let us start with the following observation.
\begin{lemma} \label{lem-nega-G}
Let $\mu, \gamma>0$, and $(\phi,\psi) \in \mathcal G(0, 3\gamma, \gamma)$. Let $(u_0,v_0) \in H^1 \times H^1$ satisfy either $E_\mu(u_0,v_0)<0$ or, in the case $E_\mu(u_0,v_0) \geq 0$, conditions \eqref{cond-ener} and \eqref{cond-blow}. Let $(u,v)$ be the corresponding solution to \eqref{SNLS} with initial data $(u_0,v_0)$ defined on the maximal time interval $(-T_-, T_+)$. Then for $\varepsilon>0$ sufficiently small, there exists $c=c(\varepsilon)>0$ such that
\begin{align} \label{est-G}
G(u(t),v(t)) + \varepsilon K(u(t),v(t)) \leq -c
\end{align}
for all $t\in (-T_-, T_+)$.
\end{lemma}
\begin{proof}
If $E_\mu(u_0,v_0)<0$, then the conservation of energy implies that
\begin{align*}
G(u(t),v(t)) + \frac{1}{2} K(u(t),v(t)) &= 3 E_\mu(u(t),v(t)) - \frac{3}{2} M_\mu(u(t),v(t)) \\
&\leq 3 E_\mu(u(t),v(t)) = 3 E_\mu(u_0,v_0).
\end{align*}
This shows \eqref{est-G} with $\varepsilon =\frac{1}{2}$ and $c=-3E_\mu(u_0, v_0)>0$.
We next consider the case $E_\mu(u_0,v_0)\geq 0$. In this case, we assume \eqref{cond-ener} and \eqref{cond-blow}.
By the same argument as in the proof of \cite[Theorem 4.6]{OP} using \eqref{cond-ener} and \eqref{cond-blow}, we have
\[
K(u(t),v(t)) M_{3\gamma}(u(t),v(t)) > K(\phi,\psi) M_{3\gamma}(\phi,\psi), \quad \forall t\in (-T_-,T_+).
\]
Moreover, by taking $\rho=\rho(u_0,v_0,\phi,\psi)>0$ such that
\begin{align} \label{defi-rho}
E_\mu(u_0,v_0) M_{3\gamma}(u_0,v_0) \leq \frac{1}{2}(1-\rho) E_{3\gamma}(\phi, \psi) M_{3\gamma}(\phi,\psi),
\end{align}
we can prove (see again the proof of \cite[Theorem 4.6]{OP}) the existence of $\delta = \delta(u_0,v_0,\phi,\psi)>0$ such that
\begin{align} \label{est-solu-blow}
K(u(t),v(t)) M_{3\gamma}(u(t),v(t)) \geq (1+\delta) K(\phi,\psi) M_{3\gamma}(\phi,\psi), \quad \forall t\in (-T_-,T_+).
\end{align}
Now for $\varepsilon>0$ small to be chosen later, we have from \eqref{defi-rho}, \eqref{est-solu-blow}, and \eqref{poho-iden} that
\begin{align*}
\Big( G(u(t),v(t)) &+ \varepsilon K(u(t),v(t)) \Big) M_{3\gamma}(u(t),v(t)) \\
&= \Big( 3 E_\mu(u(t),v(t)) - \frac{3}{2} M_\mu(u(t),v(t)) -\Big(\frac{1}{2}-\varepsilon\Big) K(u(t),v(t)) \Big) M_{3\gamma}(u(t),v(t)) \\
&\leq 3 E_\mu(u(t),v(t)) M_{3\gamma}(u(t),v(t)) - \Big(\frac{1}{2}-\varepsilon\Big) K(u(t),v(t)) M_{3\gamma}(u(t),v(t)) \\
&= \frac{3}{2}(1-\rho) E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi) - \Big(\frac{1}{2}-\varepsilon\Big)(1+\delta) K(\phi,\psi) M_{3\gamma}(\phi,\psi) \\
&=-\Big( \frac{1}{2}(\rho+\delta) - \varepsilon(1+\delta)\Big) K(\phi,\psi) M_{3\gamma}(\phi,\psi)
\end{align*}
for all $t\in (-T_-,T_+)$. By choosing $0<\varepsilon<\frac{\rho+\delta}{2(1+\delta)}$, the conservation of mass yields
\[
G(u(t),v(t)) +\varepsilon K(u(t),v(t)) \leq -\Big( \frac{1}{2}(\rho+\delta) - \varepsilon(1+\delta)\Big) K(\phi,\psi) \frac{M_{3\gamma}(\phi,\psi)}{M_{3\gamma}(u_0,v_0)}
\]
for all $t \in (-T_-,T_+)$. The proof is complete.
\end{proof}
We are now able to provide a proof of Theorem \ref{theo-blow}. To the best of our knowledge, the strategy of using an ODE argument -- when classical virial estimates based on the second derivative in time of (localized) variance break down -- goes back to the work \cite{BHL}, where fractional radial NLS is investigated. See also \cite{DF, IKN-NA} for some blow-up results for quadratic NLS systems.\\
\noindent {\it Proof of Theorem \ref{theo-blow}.}
We only consider the case of radial data; the case of $\Sigma_3$-data is treated in a similar manner using \eqref{viri-est-cyli}. Let $(u_0,v_0) \in H^1\times H^1$ be radially symmetric and satisfy either $E_\mu(u_0,v_0)<0$ or, in the case $E_\mu(u_0,v_0) \geq 0$, conditions \eqref{cond-ener} and \eqref{cond-blow}. Let $(u,v)$ be the corresponding solution to \eqref{SNLS} defined on the maximal time interval $(-T_-,T_+)$. We only show that $T_+<\infty$, since the proof of $T_-<\infty$ is similar. Assume by contradiction that $T_+=\infty$. By Lemma \ref{lem-nega-G}, for $\varepsilon>0$ sufficiently small there exists $c=c(\varepsilon)>0$ such that
\begin{align} \label{nega-G-app}
G(u(t),v(t)) + \varepsilon K(u(t),v(t)) \leq -c
\end{align}
for all $t\in [0,\infty)$. On the other hand, by Lemma \ref{viri-est-rad}, we have for all $t\in [0,\infty)$,
\begin{align} \label{viri-est-rad-app-1}
\frac{d}{dt} M_{\varphi_R}(t)\leq 8G(u(t),v(t)) + CR^{-2} K(u(t),v(t)) + CR^{-2},
\end{align}
where $\varphi_R$ is as in \eqref{defi-varphi-R} and $M_{\varphi_R}(t)$ is as in \eqref{defi-M-varphi}. It follows from \eqref{nega-G-app} and \eqref{viri-est-rad-app-1} that for all $t\in [0,\infty)$,
\begin{align*}
\frac{d}{dt} M_{\varphi_R}(t) \leq -8c - 8\varepsilon K(u(t),v(t)) +CR^{-2} K(u(t),v(t)) + CR^{-2}.
\end{align*}
By choosing $R>1$ sufficiently large, we get
\begin{align} \label{viri-est-rad-app-2}
\frac{d}{dt}M_{\varphi_R}(t) \leq -4c -4\varepsilon K(u(t),v(t))
\end{align}
for all $t\in [0,\infty)$. Integrating the above inequality, we see that $M_{\varphi_R}(t) <0$ for all $t\geq t_0$ with some $t_0>0$ sufficiently large. We infer from \eqref{viri-est-rad-app-2} that
\begin{align} \label{viri-est-rad-app-3}
M_{\varphi_R}(t) \leq -4\varepsilon \int_{t_0}^t K(u(s),v(s)) ds
\end{align}
for all $t\geq t_0$. On the other hand, by H\"older's inequality and the conservation of mass, we have
\begin{align}
|M_{\varphi_R}(t)| &\leq C \|\nabla \varphi_R\|_{L^\infty} \left( \|\nabla u(t)\|_{L^2} \|u(t)\|_{L^2} + \|\nabla v(t)\|_{L^2} \|v(t)\|_{L^2} \right) \nonumber \\
&\leq C(\varphi_R, M_{3\gamma}(u_0,v_0)) \sqrt{K(u(t),v(t))}. \label{viri-est-rad-app-4}
\end{align}
From \eqref{viri-est-rad-app-3} and \eqref{viri-est-rad-app-4}, we get
\begin{align} \label{viri-est-rad-app-5}
M_{\varphi_R}(t) \leq -A \int_{t_0}^t |M_{\varphi_R}(s)|^2 ds
\end{align}
for all $t\geq t_0$, where $A=A(\varepsilon, \varphi_R, M_{3\gamma}(u_0,v_0))>0$. Set
\begin{align} \label{viri-est-rad-app-6}
z(t):= \int_{t_0}^t |M_{\varphi_R}(s)|^2 ds, \quad t\geq t_0.
\end{align}
We see that $z(t)$ is non-decreasing and non-negative. Moreover,
\[
z'(t) = |M_{\varphi_R}(t)|^2 \geq A^2 z^2(t), \quad \forall t\geq t_0.
\]
For $t_1>t_0$, we integrate over $[t_1,t]$ to obtain
\[
z(t) \geq \frac{z(t_1)}{1-A^2z(t_1)(t-t_1)}, \quad \forall t\geq t_1.
\]
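Indeed, since $z(t_1)>0$ and $z'(s)\geq A^{2}z^{2}(s)$ for $s\geq t_0$, dividing by $z^{2}(s)$ and integrating over $[t_1,t]$ gives
\[
\frac{1}{z(t_1)}-\frac{1}{z(t)}=\int_{t_1}^{t}\frac{z'(s)}{z^{2}(s)}\,ds\geq A^{2}(t-t_1),
\]
and rearranging yields the stated lower bound as long as $1-A^{2}z(t_1)(t-t_1)>0$.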
This shows that $z(t) \rightarrow +\infty$ as $t \nearrow t^*$, where
\[
t^*:= t_1 + \frac{1}{A^2 z(t_1)} >t_1.
\]
In particular, we have
\[
M_{\varphi_R}(t) \leq -Az(t) \rightarrow -\infty
\]
as $t\nearrow t^*$, hence $K(u(t), v(t)) \rightarrow +\infty$ as $t\nearrow t^*$. Thus the solution cannot exist for all time $t\geq 0$. The proof is complete.
$\Box$
\appendix
\section{Proofs of Lemmas \ref{lem-smal-scat}, \ref{lem-refi-GN-ineq}, \ref{L22}, and \ref{lem-coer-2}}\label{sec:app:A}
Let $I \subset \mathbb R$ be an interval containing zero. We recall that a pair of functions $(u,v)\in C(I, H^1(\mathbb R^3)) \times C(I,H^1(\mathbb R^3))$ is called a solution to the problem \eqref{SNLS} if $(u,v)$ satisfies the Duhamel formula
\[
(u(t),v(t))=(\mathcal S_{1}(t)u_{0},\mathcal S_{2}(t)v_{0})+i\int^{t}_{0}(\mathcal S_{1}(t-s)F_{1}(s),\mathcal S_{2}(t-s)F_{2}(s))ds
\]
for all $t\in I$, where
\begin{align} \label{F1F2}
\begin{aligned}
F_{1}(s)&:=\left(\frac{1}{9}|u(s)|^{2}+2|v(s)|^{2}\right)u(s)+\frac{1}{3}\overline{u}^{2}(s)v(s),\\
F_{2}(s)&:=\left(9|v(s)|^{2}+2|u(s)|^{2}\right)v(s)+\frac{1}{9}u^{3}(s).
\end{aligned}
\end{align}
The linear operators $\mathcal S_1$ and $\mathcal S_2$ introduced in \eqref{def:propagators} satisfy the following dispersive estimates: for $j=1,2$, and $2\leq r \leq \infty$,
\begin{equation}\label{Dpe}
\|\mathcal S_{j}(t) f\|_{L^r(\mathbb R^{3})}\lesssim |t|^{-\left(\frac{3}{2}-\frac{3}{r}\right)} \|f\|_{L^{r'}(\mathbb R^3)}, \quad f \in L^{r'}(\mathbb R^3)
\end{equation}
for all $t\ne 0$, which in turn yield the following Strichartz estimates: for any interval $I\subset \mathbb R$ and any Strichartz $L^2$-admissible pairs $(q,r)$ and $(m, n),$ i.e., pairs of real numbers satisfying
\begin{align} \label{Sch-adm}
\frac{2}{q}+\frac{3}{r}=\frac{3}{2}, \quad 2\leq r \leq 6,
\end{align}
we have, for $j=1,2$,
\begin{align*}
\| \mathcal S_{j}(t)f \|_{L_{t}^{q}L^{r}_{x}(I\times\mathbb R^{3})}&\lesssim \|f\|_{L^2(\mathbb R^{3})}, \quad f \in L^2(\mathbb R^3),\\
\left\| \int^{t}_{0} \mathcal S_{j}(t-s) F(s) ds \right\|_{L_{t}^{q}L^{r}_{x}(I\times\mathbb R^{3})}
&\lesssim \| F\|_{L_{t}^{m'}L^{n'}_{x}(I\times\mathbb R^{3})}, \quad F \in L^{m'}_t L^{n'}_x(I \times \mathbb R^3),
\end{align*}
where $(m,m')$ and $(n,n')$ are H\"older conjugate pairs. We refer the reader to the books \cite{Cazenave, LP, Tao} for a general treatment of the Strichartz estimates for NLS equations.\\
We are ready to prove Lemma \ref{lem-smal-scat}.
\begin{proof}[Proof of Lemma \ref{lem-smal-scat}]
From the Duhamel formula, we have
\[
(u(t),v(t))=(\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T))+i\int^{t}_{T}(\mathcal S_{1}(t-s)F_{1}(s),\mathcal S_{2}(t-s)F_{2}(s))ds.
\]
By using Sobolev embedding, Strichartz estimates, and interpolation, we get
\begin{align*}
\| (u,v) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} &\leq \| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x} \times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \\
& + C \|(F_1, F_2)\|_{L^2_t W^{1,\frac{6}{5}}_x \times L^2_t W^{1,\frac{6}{5}}_x([T,\infty)\times \mathbb R^{3})} \\
& \leq \| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})} \\
& + C\| (u,v) \|^{2}_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}\|(u,v)\|_{L_{t}^{\infty}L^3_{x} \times L^\infty_t L^3_x([T,\infty)\times \mathbb R^{3})} \\
&\leq \| (\mathcal S_{1}(t-T)u(T),\mathcal S_{2}(t-T)v(T)) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}\\
&+E\| (u,v) \|^{2}_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}.
\end{align*}
Choosing $\epsilon_{\sd}=\epsilon_{\sd}(E)>0$ small enough, the standard continuity argument implies that if \eqref{Small-sc} holds, then
\[
\| (u,v) \|_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([T,\infty)\times \mathbb R^{3})}\lesssim \epsilon_{\sd}.
\]
Now, for $0<\tau<t$, we have
\begin{align*}
\|(\mathcal S_{1}(-t)u(t),\mathcal S_{2}(-t)v(t))&-(\mathcal S_{1}(-\tau)u(\tau),\mathcal S_{2}(-\tau)v(\tau)) \|_{H^{1} \times H^1 }\\
&= \left\| \int^{t}_{\tau}(\mathcal S_{1}(-s)F_{1}(s),\mathcal S_{2}(-s)F_{2}(s))ds \right\|_{H^{1}\times H^1}\\
&\lesssim \| (u,v) \|^{2}_{L_{t}^{4}L^{6}_{x}\times L_{t}^{4}L^{6}_{x}([\tau,t]\times \mathbb R^{3})}\|(u,v)\|_{L_{t}^{\infty}H^{1}_{x}\times L^\infty_t H^1_x([\tau,t]\times \mathbb R^{3})} \rightarrow 0
\end{align*}
as $\tau$, $t\to\infty$. Therefore, $(\mathcal S_{1}(-t)u(t),\mathcal S_{2}(-t)v(t))$ satisfies the Cauchy criterion in
$H^{1} \times H^{1}$ as $t\to \infty$. In particular, the solution $(u,v)$ scatters forward in time.
\end{proof}
In the following, we provide the proofs for Lemmas \ref{lem-refi-GN-ineq}, \ref{L22}, and \ref{lem-coer-2}.
\begin{proof}[Proof of Lemma \ref{lem-refi-GN-ineq}]
By the sharp Gagliardo-Nirenberg inequality \eqref{GN-ineq}, $K(|f|,|g|) \leq K(f,g)$, and \eqref{opti-cons}, we get
\[
P(|f|,|g|)\leq \frac{1}{3}\left(\frac{K(f,g) M_{3\gamma}(f,g)}{K(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right)^{\frac{1}{2}} K(f,g).
\]
Thus
\begin{align*}
P(|f|,|g|)&\leq \frac{1}{3}\inf_{\xi_{1},\xi_{2}\in \mathbb R^{3}}\left(
\left(\frac{K(e^{ix\cdot\xi_{1}}f,e^{ix\cdot\xi_{2}}g) M_{3\gamma}(f,g) }{K(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right)^{\frac{1}{2}}
K(e^{ix\cdot\xi_{1}}f,e^{ix\cdot\xi_{2}}g)\right)\\
&\leq\frac{1}{3}\inf_{\xi_{1},\xi_{2}\in \mathbb R^{3}}
\left(\frac{K(e^{ix\cdot\xi_{1}}f,e^{ix\cdot\xi_{2}}g) M_{3\gamma}(f,g)}{K(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right)^{\frac{1}{2}}
\times \inf_{\xi_{1},\xi_{2}\in \mathbb R^{3}} K(e^{ix\cdot\xi_{1}}f,e^{ix\cdot\xi_{2}}g),
\end{align*}
which implies \eqref{refi-GN-ineq}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L22}]
By \eqref{GN-ineq} and $\mu>0$, we have
\begin{align*}
E_\mu(u(t),v(t)) M_{3\gamma}(u(t),v(t)) &\geq \frac{1}{2} K(u(t),v(t)) M_{3\gamma}(u(t),v(t)) - C_{\opt} \left( K(u(t),v(t)) M_{3\gamma}(u(t),v(t))\right)^{\frac{3}{2}} \\
&=: G\left(K(u(t),v(t)) M_{3\gamma}(u(t),v(t))\right)
\end{align*}
for all $t\in (-T_-,T_+)$, where $G(\lambda):=\frac{1}{2} \lambda - C_{\opt} \lambda^{\frac{3}{2}}$. Using \eqref{opti-cons}, we compute
\[
G\left( K(\phi,\psi) M_{3\gamma}(\phi,\psi)\right) = \frac{1}{6} K(\phi,\psi) M_{3\gamma}(\phi,\psi) = \frac{1}{2} E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi).
\]
By the conservation of mass and energy, and \eqref{cond-ener}, we have
\begin{align*}
G\left( K(u(t),v(t)) M_{3\gamma}(u(t),v(t))\right) &\leq E_\mu(u(t),v(t)) M_{3\gamma}(u(t),v(t)) \\
&= E_\mu(u_0,v_0) M_{3\gamma}(u_0,v_0) \\
&< \frac{1}{2} E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi) = G\left( K(\phi,\psi) M_{3\gamma}(\phi,\psi)\right)
\end{align*}
for all $t\in (-T_-,T_+)$. Using this and \eqref{cond-blow}, the continuity argument yields
\begin{align} \label{est-solu-gwp}
K(u(t),v(t)) M_{3\gamma}(u(t),v(t)) < K(\phi,\psi) M_{3\gamma}(\phi,\psi)
\end{align}
for all $t\in (-T_-,T_+)$. The blow-up alternative then implies that $T_-=T_+=\infty$. Next, by \eqref{GN-ineq}, \eqref{opti-cons}, and \eqref{est-solu-gwp}, we have
\[
P(u(t),v(t))\leq \frac{1}{3}
\left(\frac{K(u(t),v(t)) M_{3\gamma}(u(t),v(t))}{K(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right)^{\frac{1}{2}}
K(u(t),v(t))\leq\frac{1}{3}K(u(t),v(t))
\]
for all $t\in \mathbb R$. It follows that
\begin{equation}\label{est-E}
E_{\mu}(u(t),v(t))=\frac{1}{2}\left(K(u(t),v(t))+M_{\mu}(u(t),v(t))\right)-P(u(t),v(t))\geq \frac{1}{6}K(u(t),v(t))
\end{equation}
which, by the conservation of energy, implies \eqref{est-K}.
\noindent From \eqref{est-E} and \eqref{opti-cons}, we see that
\begin{align}\label{Ess}
\begin{aligned}
K(u(t),v(t))M_{3\gamma}(u(t),v(t))&\leq 6E_{\mu}(u(t),v(t))M_{3\gamma}(u(t),v(t)) \\
&=6\left(\frac{E_{\mu}(u(t),v(t))M_{3\gamma}(u(t),v(t))}{E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right) E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi)\\
&=
\left(\frac{E_{\mu}(u(t),v(t))M_{3\gamma}(u(t),v(t))}{\frac{1}{2} E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi)}\right) K(\phi,\psi) M_{3\gamma}(\phi,\psi)
\end{aligned}
\end{align}
for all $t\in \mathbb R$. On the other hand, by \eqref{cond-ener}, there exists $\delta=\delta(u_0,v_0,\phi,\psi)>0$ such that
\[
E_{\mu}(u_{0},v_{0}) M_{3\gamma}(u_{0},v_{0}) \leq (1-\delta) \frac{1}{2} E_{3\gamma}(\phi,\psi) M_{3\gamma}(\phi,\psi).
\]
Then from \eqref{Ess} and the conservation laws of mass and energy, we obtain
\[
K(u(t),v(t))M_{3\gamma}(u(t),v(t))\leq (1-\delta)K(\phi,\psi)M_{3\gamma}(\phi,\psi)
\]
for all $t\in \mathbb R$. The proof is complete.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem-coer-2}]
It follows from straightforward calculations that $\|\Gamma_R f\|^2_{L^2 } \leq \|f\|^2_{L^2 }$ and
\[
\int \Gamma^2_R(x) |\nabla f(x)|^{2}dx=\int |\nabla (\Gamma_R(x) f(x))|^{2}dx + \int \Gamma_R(x)\Delta\Gamma_R(x) |f(x)|^{2}dx, \quad f \in H^1.
\]
As $\|\Delta \Gamma_R\|_{L^\infty} \lesssim R^{-2}$, we infer from \eqref{coer-1} and the conservation of mass that there exists a sufficiently large $R=R(\delta, u_0,v_0, \phi,\psi)$ so that
\[
K\left(\Gamma_{R}(\cdot-z)u(t),\Gamma_{R}(\cdot-z)v(t)\right) M_{3\gamma}\left(\Gamma_{R}(\cdot-z)u(t),\Gamma_{R}(\cdot-z)v(t)\right) \leq \left(1-\frac{\delta}{2}\right)K(\phi,\psi) M_{3\gamma}(\phi,\psi)
\]
for all $t\in \mathbb R$. The refined Gagliardo-Nirenberg inequality \eqref{refi-GN-ineq} implies that
\[
P\left(\Gamma_{R}(\cdot-z)|u(t)|,\Gamma_{R}(\cdot-z)|v(t)|\right)\leq \frac{1}{3}\left(1-\frac{\delta}{2}\right)^{\frac{1}{2}} K\left(\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{1}}u(t),\Gamma_{R}(\cdot-z)e^{ix\cdot\xi_{2}}v(t)\right)
\]
which in turn implies \eqref{coer-prop} with $\nu:= 1-\left(1-\frac{\delta}{2}\right)^{\frac{1}{2}}>0$.
\end{proof}
\section{Virial Identities}\label{sec:app:B}
This Appendix is devoted to the proof of the virial identities in Section \ref{sec:VM}.
\begin{proof}[Proof of Lemma \ref{Imporide}]
Notice that
\begin{equation}\label{Idf1}
\partial_{t}(|u|^{2}+\gamma\beta|v|^{2})=2\rea (\overline{u}\partial_{t} u +\gamma\beta\overline{v}\partial_{t}v).
\end{equation}
Moreover, multiplying the equation \eqref{SNLS} with $(\overline{u}, \beta\overline{v})$ and taking the imaginary part, we have
\begin{equation}\label{Idf22}
\begin{split}
\rea (\overline{u}\partial_{t} u +\gamma\beta\overline{v}\partial_{t}v)=
-\IM (\overline{u}\Delta u +\beta\overline{v}\Delta v)-\IM \left(\frac{1}{3}\overline{u}^{3}v+\frac{\beta}{9}{u}^{3}\overline{v}\right) \\
=-\IM (\overline{u}\Delta u +\beta\overline{v}\Delta v)+
\frac{1}{3}\left(1-\frac{\beta}{3}\right)\IM (u^{3}\overline{v}).
\end{split}
\end{equation}
Combining \eqref{Idf1} and \eqref{Idf22}, we infer that
\begin{align*}
\partial_{t}(|u|^{2}+\gamma \beta|v|^{2}) &=
-2\IM (\overline{u}\Delta u +\beta\overline{v}\Delta v)+
\frac{2}{3}\left(1-\frac{\beta}{3}\right)\IM (u^{3}\overline{v})\\
&=-2\nabla\cdot\IM (\overline{u}\nabla u) -2\beta\nabla\cdot\IM (\overline{v}\nabla v)+\frac{2}{3}\left(1-\frac{\beta}{3}\right)\IM (u^{3}\overline{v}),
\end{align*}
which implies \eqref{Idg}. On the other hand, we rewrite \eqref{SNLS} as
\[
\left\{
\begin{array}{ccl}
i\partial_{t} u+\Delta u &=&H,\\
i\gamma\partial_{t} v+\Delta v &=&G,
\end{array}
\right.
\]
where $H=H_1+H_2+H_3$ and $G=G_1+G_2+G_3$ with
\begin{align*}
H_{1}& =u, & H_{2}&=-\left(\frac{1}{9}|u|^{2}+2|v|^{2}\right)u, & H_{3}&=-\frac{1}{3}\overline{u}^{2}v,\\
G_{1}& =\mu v, & G_{2}&=-(9|v|^{2}+2|u|^{2})v, & G_{3}&=-\frac{1}{9}u^{3}.
\end{align*}
It follows from straightforward computations that
\begin{align} \nonumber
\partial_{t}\IM (\overline{u} \partial_ku+\gamma\overline{v} \partial_k v)&=
\frac{1}{2}\partial_{k}\Delta (|u|^{2}+|v|^{2})-2\partial_{j}\rea(\partial_j\overline{u} \partial_ku+\partial_j\overline{v} \partial_kv)\\
& + (2\rea (\overline{H}\partial_{k}u)-\partial_{k}\rea(\overline{H} u))+
(2\rea (\overline{G}\partial_{k}v)-\partial_{k}\rea (\overline{G} v)). \label{iden-prof}
\end{align}
A simple calculation leads to
\[
(2\rea (\overline{H}_{1}\partial_{k}u)-\partial_{k}\rea (\overline{H}_{1} u))+
(2\rea(\overline{G}_{1}\partial_{k}v)-\partial_{k}\rea (\overline{G}_{1} v))=0.
\]
Moreover, since
\begin{align*}
\partial_{k}(|u|^{2}|v|^{2})&=2\rea(\overline{u}\partial_{k} u)|v|^{2}+2\rea(\overline{v}\partial_{k} v)|u|^{2}\\
\partial_{k}(|u|^{4})&=4|u|^{2}\rea(\overline{u}\partial_{k} u), \quad
\partial_{k}(|v|^{4})=4|v|^{2}\rea (\overline{v}\partial_{k} v),
\end{align*}
we obtain that
\[
(2\rea(\overline{H}_{2}\partial_{k}u)-\partial_{k}\rea(\overline{H}_{2} u))+
(2\rea (\overline{G}_{2}\partial_{k}v)-\partial_{k}\rea(\overline{G}_{2} v))=
\partial_{k}\left(\frac{1}{18}|u|^{4}+\frac{9}{2}|v|^{4} +2|u|^{2}|v|^{2}\right).
\]
Finally, as
\[
\partial_{k}\rea(\overline{u}^{3} v)=3\rea (\overline{u}^{2}v \partial_{k}\overline{u})+\rea(\overline{u}^{3} \partial_{k}v),
\]
it follows that
\[
(2\rea(\overline{H}_{3}\partial_{k}u)-\partial_{k}\rea(\overline{H}_{3} u))+
(2\rea (\overline{G}_{3}\partial_{k}v)-\partial_{k}\rea(\overline{G}_{3} v))=
\frac{2}{9}\partial_{k}\rea(\overline{u}^{3} v).
\]
Collecting the above identities, we obtain
\begin{align*}
(2\rea (\overline{H}\partial_{k}u)-\partial_{k}\rea(\overline{H} u))+
(2\rea (\overline{G}\partial_{k}v)-\partial_{k}\rea (\overline{G} v))=
2\partial_{k}N(u,v),
\end{align*}
which, together with \eqref{iden-prof}, shows \eqref{Idn}. The proof is complete.
\end{proof}
\begin{proof}[Proof of Corollary \ref{rem-viri-iden}]
The proof of the identity \eqref{eq:variance} is straightforward. The relation \eqref{cor:ii} comes from the fact that
\[
\partial_j = \frac{x_j}{r} \partial_r, \quad \partial^2_{jk} = \left( \frac{\delta_{jk}}{r} - \frac{x_jx_k}{r^3} \right) \partial_r + \frac{x_j x_k}{r^2} \partial^2_r,
\]
for radial functions. Hence
\begin{align*}
\rea \int \partial^2_{jk} \varphi(x) \partial_j \overline{u} (t,x) \partial_k u(t,x) dx
= \int \frac{\varphi'(r)}{r} |\nabla u(t,x)|^2 dx + \int \left(\frac{\varphi''(r)}{r^2}-\frac{\varphi'(r)}{r^3}\right) |x \cdot \nabla u(t,x)|^2 dx,
\end{align*}
where $r=|x|$, which in turn implies \eqref{cor:iii}.
If $\varphi$ is radial and $(u,v)$ is radial as well, then
\begin{align*}
\frac{d}{dt} \mathcal M_\varphi(t) &= -\int \Delta^2 \varphi(x) (|u|^2 + |v|^2)(t,x) dx + 4 \int \varphi''(r) (|\nabla u|^2 + |\nabla v|^2)(t,x) dx \\
& -4\int \Delta \varphi(x) N(u,v)(t,x)dx.
\end{align*}
From the choice of the function $\varphi(x) = \psi(y) + z^2,$ we have
\begin{align*}
\frac{d}{dt} \mathcal M_\varphi(t) &= -\int\Delta^2_y \psi(y) (|u|^2 + |v|^2)(t,x) dx + 4 \rea \int\partial^2_{jk} \psi(y) (\partial_j \overline{u}\partial_k u + \partial_j \overline{v} \partial_k v)(t,x) dx \\
& + 8\left(\|\partial_z u(t)\|^2_{L^2} + \|\partial_z v(t)\|^2_{L^2}\right) - 8 P(u(t),v(t)) -4\int \Delta_y \psi(y) N(u,v)(t,x)dx
\end{align*}
which in turn gives \eqref{cor:iv}.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{AP}{article}{
author={Angulo, J. P.},
author={Pastor, A. F.},
title={Stability of periodic optical solitons for a nonlinear Schr\"odinger system},
journal={Proc. Roy. Soc. Edinburgh Sect. A},
volume={139},
number={5},
pages={927},
year={2009},
publisher={Cambridge University Press}
}
\bib{BF20}{article}{
author={Bellazzini, {J.}},
author={Forcella, {L.}},
title={Dynamical collapse of cylindrical symmetric dipolar Bose-Einstein condensates},
journal={preprint},
eprint={https://arxiv.org/abs/2005.02894},
}
\bib{BFG20}{article}{
author={Bellazzini, {J.}},
author={Forcella, {L.}},
author={Georgiev, {V.}},
title={Ground state energy threshold and blow-up for NLS with competing nonlinearities},
journal={preprint},
eprint={https://arxiv.org/abs/2012.10977},
}
\bib{BHL}{article}{
author={Boulenger, T.},
author={Himmelsbach, D.},
author={Lenzmann, E.},
title={Blowup for fractional NLS},
journal={J. Funct. Anal.},
volume={271},
date={2016},
number={9},
pages={2569--2603},
issn={0022-1236},
}
\bib{Boy}{book}{
author={Boyd, R. W.},
title={Nonlinear optics},
edition={3},
publisher={Elsevier/Academic Press, Amsterdam},
date={2008},
pages={xx+613},
isbn={978-0-12-369470-6},
}
\bib{BDST}{article}{
author={Buryak, A. V.},
author={Di Trapani, P.},
author={Skryabin, D. V.},
author={Trillo, S.},
title={Optical solitons due to quadratic nonlinearities: from basic physics to futuristic applications},
journal={Phys. Rep.},
volume={370},
number={2},
pages={63--235},
year={2002},
publisher={Elsevier}
}
\bib{B99}{article}{
author={Buryak, A. V.},
author={Steblina, V.},
author={Sammut, R.},
title={Solitons and collapse suppression due to parametric interaction in bulk Kerr media},
journal={Opt. Lett.},
year={1999},
volume={24},
pages={1859--1861}
}
\bib{Cazenave}{book}{
author={Cazenave, T.},
title={Semilinear Schr\"{o}dinger equations},
series={Courant Lecture Notes in Mathematics},
volume={10},
publisher={New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI},
date={2003},
pages={xiv+323},
isbn={0-8218-3399-5},
}
\bib{CO}{article}{
author={Cho, Y.},
author={Ozawa, T.},
title={Sobolev inequalities with symmetry},
journal={Commun. Contemp. Math.},
volume={11},
date={2009},
number={3},
pages={355--365},
issn={0219-1997},
}
\bib{CdMS}{article}{
author={Colin, M.},
author={Di Menza, L.},
author={Saut, J. C.},
title={Solitons in quadratic media},
journal={Nonlinearity},
volume={29},
date={2016},
number={3},
pages={1000--1035},
issn={0951-7715},
}
\bib{DF}{article}{
author={Dinh, V. D.},
author={Forcella, L.},
title={Blow-up results for systems of nonlinear Schr\"odinger equations with quadratic interaction},
journal={preprint},
eprint={https://arxiv.org/abs/2010.14595},
}
\bib{DM-PAMS}{article}{
author={Dodson, B. },
author={Murphy, J.},
year = {2017},
month = {11},
pages = {4859--4867},
title = {A new proof of scattering below the ground state for the 3D radial focusing cubic NLS},
volume = {145},
journal = {Proc. Amer. Math. Soc.},
}
\bib{DM-MRL}{article}{
author={Dodson, B.},
author={Murphy, J.},
title={A new proof of scattering below the ground state for the non-radial focusing NLS},
journal={Math. Res. Lett.},
volume={25},
date={2018},
number={6},
pages={1805--1825},
issn={1073-2780},
}
\bib{DHR}{article}{
author={Duyckaerts, T.},
author={Holmer, J.},
author={Roudenko, S.},
title={Scattering for the non-radial 3D cubic nonlinear Schr\"odinger
equation},
journal={Math. Res. Lett.},
volume={15},
date={2008},
number={6},
pages={1233--1250},
issn={1073-2780},
}
\bib{Fib}{book}{
author={Fibich, G.},
title={The nonlinear Schr\"{o}dinger equation},
series={Applied Mathematical Sciences},
volume={192},
note={Singular solutions and optical collapse},
publisher={Springer, Cham},
date={2015},
pages={xxxii+862},
isbn={978-3-319-12747-7},
isbn={978-3-319-12748-4},
}
\bib{Glassey}{article}{
author={Glassey, R. T.},
title={On the blowing up of solutions to the Cauchy problem for nonlinear
Schr\"odinger equations},
journal={J. Math. Phys.},
volume={18},
date={1977},
number={9},
pages={1794--1797},
issn={0022-2488},
}
\bib{HR}{article}{
author={Holmer, J.},
author={Roudenko, S.},
title={A sharp condition for scattering of the radial 3D cubic nonlinear
Schr\"odinger equation},
journal={Comm. Math. Phys.},
volume={282},
date={2008},
number={2},
pages={435--467},
issn={0010-3616},
}
\bib{KM}{article}{
author={Kenig, C. E.},
author={Merle, F.},
title={Global well-posedness, scattering and blow-up for the
energy-critical, focusing, nonlinear Schr\"odinger equation in the radial
case},
journal={Invent. Math.},
volume={166},
date={2006},
number={3},
pages={645--675},
issn={0020-9910},
}
\bib{Kivshar}{article}{
author={Kivshar, Y. S.},
title={Bright and dark spatial solitons in non-Kerr media},
journal={Opt. Quant. Electron.},
volume={30},
number={7-10},
pages={571--614},
year={1998},
publisher={Springer}
}
\bib{Inui1}{article}{
author={Inui, T.},
title={Global dynamics of solutions with group invariance for the
nonlinear Schr\"{o}dinger equation},
journal={Commun. Pure Appl. Anal.},
volume={16},
date={2017},
number={2},
pages={557--590},
issn={1534-0392},
}
\bib{Inui2}{article}{
author={Inui, {T.}},
title={Remarks on the global dynamics for solutions with an infinite
group invariance to the nonlinear Schr\"{o}dinger equation},
conference={
title={Harmonic analysis and nonlinear partial differential equations},
},
book={
series={RIMS K\^{o}ky\^{u}roku Bessatsu, B70},
publisher={Res. Inst. Math. Sci. (RIMS), Kyoto},
},
date={2018},
pages={1--32},
}
\bib{IKN-NA}{article}{
author={Inui, {T.}},
author={Kishimoto, {N.}},
author={Nishimura, {K.}},
title={Blow-up of the radially symmetric solutions for the quadratic nonlinear Schr\"{o}dinger system without mass-resonance},
journal={Nonlinear Anal.},
volume={198},
date={2020},
pages={111895, 10},
issn={0362-546X},
}
\bib{LP}{book}{
author={Linares, F.},
author={Ponce, G.},
title={Introduction to nonlinear dispersive equations},
series={Universitext},
edition={2},
publisher={Springer, New York},
date={2015},
pages={xiv+301},
isbn={978-1-4939-2180-5},
isbn={978-1-4939-2181-2},
}
\bib{LGT}{article}{
title={On existence of solitons for the 3rd harmonic of a light beam in planar waveguides},
author={Long, V. C.},
author={Goldstein, P.},
author={Trippenbach, M.},
journal={Acta Phys. Polo. A},
volume={5},
number={105},
pages={437--444},
year={2004}
}
\bib{Mar}{article}{
author={Martel, Y.},
title={Blow-up for the nonlinear Schr\"{o}dinger equation in nonisotropic spaces},
journal={Nonlinear Anal.},
volume={28},
date={1997},
number={12},
pages={1903--1908},
issn={0362-546X},
}
\bib{MX}{article}{
author={Meng, F.},
author={Xu, C.},
title={Scattering for mass-resonance nonlinear Schr\"odinger system in 5D},
journal={J. Differential Equations},
volume={275},
year={2021},
pages={837--857},
}
\bib{OT}{article}{
author={Ogawa, T.},
author={Tsutsumi, Y.},
title={Blow-up of $H^1$ solution for the nonlinear Schr\"odinger equation},
journal={J. Differential Equations},
volume={92},
date={1991},
number={2},
pages={317--330},
issn={0022-0396},
}
\bib{OP}{article}{
author={Oliveira, F.},
author={Pastor, A.},
title={On a Schr{\"o}dinger system arising in nonlinear optics},
journal={preprint},
eprint={http://arxiv.org/abs/1810.08231},
}
\bib{SBK-OL}{article}{
author={Sammut, {R.} A.},
author={Buryak, A. V.},
author={Kivshar, Y. S.},
title={Modification of solitary waves by third-harmonic generation},
journal={Opt. Lett.},
volume={22},
number={18},
pages={1385--1387},
year={1997},
publisher={Optical Society of America}
}
\bib{SBK-JOSA}{article}{
title={Bright and dark solitary waves in the presence of third-harmonic generation},
author={Sammut, R. A.},
author={Buryak, A. V.},
author={Kivshar, Y. S.},
journal={J. Opt. Soc. Am. B},
volume={15},
number={5},
pages={1488--1496},
year={1998},
publisher={Optical Society of America}
}
\bib{SS}{book}{
author={Sulem, C.},
author={Sulem, P.-L.},
title={The nonlinear Schr\"{o}dinger equation},
series={Applied Mathematical Sciences},
volume={139},
note={Self-focusing and wave collapse},
publisher={Springer-Verlag, New York},
date={1999},
pages={xvi+350},
isbn={0-387-98611-1},
}
\bib{Tao}{book}{
author={Tao, T.},
title={Nonlinear dispersive equations},
series={CBMS Regional Conference Series in Mathematics},
volume={106},
note={Local and global analysis},
publisher={Published for the Conference Board of the Mathematical
Sciences, Washington, DC; by the American Mathematical Society,
Providence, RI},
date={2006},
pages={xvi+373},
isbn={0-8218-4143-2},
}
\bib{WY}{article}{
author={Wang, H.},
author={Yang, Q.},
title={Scattering for the 5D quadratic NLS system without mass-resonance},
journal={J. Math. Phys.},
volume={60},
date={2019},
number={12},
pages={121508, 23},
issn={0022-2488},
}
\bib{XX}{article}{
author={Xia, S.},
author={Xu, C.},
year = {2019},
month = {08},
pages = {1--17},
title = {On dynamics of the system of two coupled nonlinear Schr\"odinger in $\mathbb R^{3}$},
volume = {42},
journal = {Math. Meth. Appl. Sci.},
}
\bib{XZZ}{article}{
author={Xu, C.},
author={Zhao, T.},
author={Zheng, J.},
title={Scattering for 3D cubic focusing NLS on the domain outside a convex obstacle revisited},
journal={preprint},
eprint={https://arxiv.org/pdf/1812.09445.pdf},
}
\bib{ZZS}{article}{
title={Higher dimensional solitary waves generated by second-harmonic generation in quadratic media},
author={Zhao, L.},
author={Zhao, F.},
author={Shi, J.},
journal={Calc. Var. Partial Differential Equations},
volume={54},
number={3},
pages={2657--2691},
year={2015},
publisher={Springer}
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{A time-optimal algorithm for solving (block-)tridiagonal linear systems of dimension $N$ on a distributed computer of $N$ nodes}
\begin{abstract}
We are concerned with the fastest possible direct numerical solution algorithm for a thin-banded or tridiagonal linear system of dimension $N$ on a distributed computing network of $N$ nodes that is connected in a binary communication tree. Our research is driven by the need for faster ways of numerically solving discretized systems of coupled one-dimensional black-box boundary-value problems.
Our paper presents two major results: First, we provide an algorithm that achieves the optimal parallel time complexity for solving tridiagonal and thin-banded linear systems. Second, we prove that it is impossible to improve the time complexity of this method by any polynomial degree.
To solve a system of dimension $m\cdot N$ and bandwidth $m \in \Omega(N^{1/6})$ on $2 \cdot N-1$ computing nodes, our method requires a time complexity of $\mathcal{O}(\log(N)^2 \cdot m^3)$.
\end{abstract}
\section{Introduction}
\paragraph{Motivation of the problem}
Many computational engineering tasks deal with the solution of (systems) of one-dimensional differential-algebraic boundary-value problems \cite{Russell1972}. Examples are numerical simulations of the following physical phenomena: the deformation of a clamped beam, the dynamic pressure in a gas-pipe, the trajectory of a missile, and constrained optimal control problems.
Using a numerical discretization method, a large-dimensional system of equations results, which is typically solved via Newton's method. The arising linear systems are thin-banded and of very large dimension.
Often, the Newton system results from a minimization principle, either of an objective or of a natural model that aims to minimize potential energy. In these cases the arising linear systems are not only thin-banded and of very large dimension, but they are also symmetric positive definite, which is clearly desirable for reasons of numerical stability.
Since the system matrix is thin-banded, it can be interpreted as block-tridiagonal, where the block size is identical to the bandwidth. Thus, for ease of presentation, in the following we present an algorithm for (block-)tridiagonal systems where the block size is $m\ll N$.
\paragraph{Motivation of the problem statement and computing model}
As a consequence of the emergence of massively parallel computing systems, nowadays the problem in numerical computing is typically not to solve a given mathematical problem, but rather to solve it on a given computing system while exploiting its resources in an optimal way. Especially for parallel computing systems it is difficult to spread the computational task in a way that enables the full use of the computing system's capacity.
As for our case of solving thin-banded linear systems, it
is well known that for a bandwidth bounded by $m \in \mathcal{O}(1)$ the optimal time complexity for solving a banded linear system on a serial computer is
\begin{align*}
\mathcal{O}(N)\,,
\end{align*}
as can be achieved by use of Gaussian elimination. This means that solving the problem on a serial machine is literally as expensive as either of the following two tasks: \textit{reading} the problem, or \textit{writing} the problem's solution into the memory.
So at first glance it seems like nothing can be done to improve on this: The time to communicate the problem to a solver would already predominate over the time that the solver actually needs. So where is the point in trying to make the solver faster?
The point is that the problem does not need to be communicated. One can assemble the rows of a linear equation system in parallel, in a distributed way, in the memory of independent computing nodes that are connected via a network. Using the algorithm that we propose, all the computing nodes together solve the one big linear system, but each of them only \textit{reads a tiny part} of the problem and only \textit{writes a tiny part} of the solution vector.
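To make this concrete, the following is a minimal sketch of such a distributed assembly, assuming one MPI process per leaf node; the routine \texttt{assemble\_local\_block} is a hypothetical stand-in for a user-supplied black-box discretization, and the call to the actual solver of Section~2 is omitted.
\begin{verbatim}
// Sketch: each process builds only its own m x m blocks and its local
// right-hand side, so the full (N*m)-dimensional system is never
// gathered in any single memory.
#include <mpi.h>
#include <vector>

// Hypothetical stand-in for a black-box discretization of row-block i.
void assemble_local_block(int i, int m, std::vector<double>& A,
                          std::vector<double>& B, std::vector<double>& C,
                          std::vector<double>& Y) {
  for (int j = 0; j < m; ++j) A[j * m + j] = 2.0;              // dummy values
  for (int j = 0; j < m; ++j) B[j * m + j] = C[j * m + j] = -1.0;
  for (int j = 0; j < m; ++j) Y[j] = static_cast<double>(i);
}

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  const int m = 4;  // block size / bandwidth; one right-hand side
  std::vector<double> A(m * m, 0.0), B(m * m, 0.0), C(m * m, 0.0), Y(m, 0.0);
  assemble_local_block(rank, m, A, B, C, Y);  // reads only a tiny part
  // ... run the distributed solver of Section 2; afterwards this process
  //     writes only its own m-dimensional slice of the solution ...
  MPI_Finalize();
  return 0;
}
\end{verbatim}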
\paragraph{Literature review}
We consider the problem of solving a thin-banded linear system as a generalization of solving tridiagonal linear systems in parallel, which is why our literature review refers to parallel tridiagonal solvers. Such solvers can be applied by interpreting the thin-banded linear system as a block-tridiagonal system of dense blocks.
There are several popular methods for the solution of tridiagonal linear systems. These have in common that they use a concept described as \textit{parallel factorizations} \cite{Amodio92parallelfactorizations,MATTOR19951769}: The matrix is multiplied from the left with a block-diagonal matrix that decouples a portion of the unknowns through an interface system. The reduced system is solved. The reduced solution is distributed to all processors so that they can compute in parallel the formerly removed unknowns.
According to \cite{austin-2004-linesolver}, the first parallel tridiagonal solver is called \textit{cyclic reduction} and was presented in 1965 in \cite{Hockney:1965:FDS:321250.321259}. The \textit{recursive doubling algorithm} was introduced in 1973 in \cite{Stone:1973:EPA:321738.321741}. In both of these algorithms each processor holds one row of the system. Cyclic reduction works by successively expressing the odd variables of the solution vector in terms of the even ones. This is repeated until finally there is a system in one variable that is solved directly. An implementation is provided in \cite{DBLP:journals/siamsc/BrownFJ00}.
\textit{Wang's method} \cite{Wang:1981:PMT:355945.355947}, introduced in 1981, is a parallel algorithm where the order of the number of processors is smaller than the order of the number of rows of the system to be solved. This algorithm has been proven to be numerically stable \cite{Yalamov:1999:SPA:300088.300141}. The idea of this method is the assembly of an interface problem whose solution can be used directly to solve for the remaining variables in a backward-substitution step.
In 1991 Bondeli introduced a divide-and-conquer algorithm for tridiagonal systems \cite{Bondeli:1991:PDC:1746085.1746141}. The idea of this algorithm is to solve a block-diagonal approximation of the system in parallel, where each processor holds a diagonal block. This results in a reduced interface system that is solved by cyclic reduction, cf. \cite{austin-2004-linesolver}.
\paragraph{Organization of the paper}
In the remainder of this section we describe the parallel computing system that we use for our algorithm. We then describe the mathematical problem that the algorithm solves. It is important that the problem is given to the algorithm in a special way. In particular, the problem data must be provided in distributed memory before the algorithm is called. This is important because it would take too much time to move the data from a central storage to the distributed memory.
In Section~2 we present the algorithm. We start with an implementation and afterwards show how this algorithm can be derived from familiar matrix algorithms. At the end of the section we analyse the time complexity of the algorithm and remark on the optimality of this complexity result.
In Section~3 we give lower bounds on the time complexity for solving tridiagonal linear systems on a parallel distributed-memory machine. We show that our algorithm is able to yield optimal time complexity.
Finally, we draw conclusions in Section~4.
\paragraph{Computing system}
We need to describe the computing system before we describe the problem statement because otherwise we cannot describe where we presume the problem data to be placed. Above we described why this is crucial: We have to make sure that the problem is provided in the right way because moving the problem data around would cost too much time.
For a system of dimension $N \cdot m$ with bandwidth $m$ we consider a computing system of $2 \cdot N-1$ computing nodes that each have their own memory and that each run the solution algorithm in parallel. The nodes are connected via a network of cables. Nodes can send data to other nodes, and nodes can wait to receive data from others. Figure~\ref{fig:ComputingSystem} illustrates the computing system. The black circles symbolize the computing nodes and the black lines illustrate their connections via the cable network.
For the network we require a special structure: For our algorithm we need a two-tree network. In a two-tree, also called a dual tree, each node is connected to three other nodes: a \textit{parent}, an \textit{up-child}, and a \textit{down-child}, cf. the figure. There are exceptions: There exists one node called the \textit{root}. This node does not have a parent. Further, there exist $N$ nodes that are called \textit{leaves}. A leaf only has a parent; it has neither an \textit{up-child} nor a \textit{down-child}.
The right part of the figure assigns numbers to the nodes. Each node has a \textit{processor number} and a \textit{level}. The levels are defined recursively: Each leaf has a level of zero, and the parent of each node has a level that is by one larger than the level of the node itself. The processor number is obtained by counting from 1 from the uppermost node to the lowermost node of each level.
Each node holds identifying variables like a passport: The variable \texttt{my\_level} gives the level of this node. The variable \texttt{my\_proc\_num} is a list. The following values are well-defined:
\begin{align*}
\texttt{my\_proc\_num}(\ell) \quad \text{ for }\ell \in \lbrace \texttt{my\_level},\dots,d\rbrace\,,
\end{align*}
where $d$ is the level of the root. The value $\texttt{my\_proc\_num}(\ell)$ gives the processor number of the node on level $\ell$ through which a signal from the root would have to travel in order to reach this node. Figure~\ref{fig:ComputingSystem} gives an example of this: To reach the node on level $0$ with processor number $5$, a signal from the root would have to traverse node $2$ of level $2$ and node $3$ of level $1$.
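As an aside, one consistent way to compute these identifiers follows from the regular structure of the two-tree: assuming processor numbers start at 1 on every level (as in the figure), the ancestor of a leaf with processor number $p$ on level $\ell$ has processor number $\lceil p/2^{\ell}\rceil$. The following sketch (our own illustration, not part of the message passing interface) reproduces the example above.
\begin{verbatim}
// Sketch (assumption: 1-based processor numbers per level, as in the
// figure): the ancestor of leaf p on level l has processor number
// ceil(p / 2^l). For leaf 5 and d = 3 this prints "5 3 2 1".
#include <iostream>

int ancestor_proc_num(int leaf_proc_num, int level) {
  int stride = 1 << level;                      // 2^level
  return (leaf_proc_num + stride - 1) / stride; // ceiling division
}

int main() {
  const int leaf = 5, d = 3;                    // N = 8 leaves, root on level d
  for (int l = 0; l <= d; ++l)
    std::cout << ancestor_proc_num(leaf, l) << ' ';
  std::cout << '\n';
  return 0;
}
\end{verbatim}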
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Images_PDF/ComputingSystem}
\caption{Structure of the computing system: $2 \cdot N-1$ nodes are connected in a two-tree communication network. Each node can send messages to its parent and children. The nodes are classified in levels. Further, each node per level is given a processor number. Each node knows its level and the processor number of itself and its parents.}
\label{fig:ComputingSystem}
\end{figure}
We presume the nodes to be identical serial computing units. As is common, we presume that the basic operations plus, minus, times, divide, and copy of scalar values each require a fixed amount of time on a respective node.
For communications over the network, we use the following message passing interface of six commands:
\begin{itemize}
\item \texttt{send\_to\_up\_child}($\mathbf{M}$). If this node has an up-child then it sends a copy of the matrix $\mathbf{M}$ to its up-child and waits until the up-child has received it.
\item \texttt{receive\_from\_up\_child}($\mathbf{M}$). If this node has an up-child then it waits until it receives a matrix by copy from its up-child. This node stores the matrix in its variable $\mathbf{M}$, and the copy used for the transmission is destroyed.
\item \texttt{send\_to\_down\_child}($\mathbf{M}$); analogous to above, but the data is sent to the down-child of this node.
\item \texttt{receive\_from\_down\_child}($\mathbf{M}$); analogous to above, but the data is received from the down-child of this node.
\item \texttt{send\_to\_parent}($\mathbf{M}$); analogous to above, but the data is sent to the parent of this node.
\item \texttt{receive\_from\_parent}($\mathbf{M}$); analogous to above, but the data is received from the parent of this node.
\end{itemize}
For each communication we assume a time complexity of the number of elements of $\mathbf{M}$ plus a constant amount of time $c^{\text{Lat}}_N$ that is due to latency. The latency accounts for the phenomenon that information travels through the cable at the speed of light, so it takes a while until the beginning of a message has moved through the cable.
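In formulas, writing $|\mathbf{M}|$ for the number of entries of the transmitted matrix and $T_{\text{comm}}$ for the transmission time (this notation is ours and merely summarizes the assumption just stated), one transmission costs
\[
T_{\text{comm}}(\mathbf{M}) \in \mathcal{O}\left(|\mathbf{M}| + c^{\text{Lat}}_{N}\right),
\]
so that sending or receiving a matrix $\mathbf{M}\in\mathbb{R}^{m\times k}$ costs $\mathcal{O}\left(m\cdot k + c^{\text{Lat}}_{N}\right)$ time units.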
\paragraph{Problem statement}
We consider the numerical solution of a banded linear system
\begin{align}
\underline{\mathbf{A}} \cdot \underline{\mathbf{X}} = \underline{\mathbf{Y}}\label{eqn:LinearSystem}
\end{align}
where $\underline{\mathbf{A}} \in \mathbb{R}^{(N\cdot m) \times (N \cdot m)}$ has bandwidth $b\leq m$, and $\underline{\mathbf{Y}} \in \mathbb{R}^{(N \cdot m) \times k}$ is a dense matrix of $k$ right-hand sides. The task is to find numerical values for $\underline{\mathbf{X}} \in \mathbb{R}^{(N \cdot m) \times k}$. We assume $N \in 2^{\mathbb{N}}$.
\largeparbreak
As formerly discussed, the time for writing the solution into memory in a sequential way would already exceed the time that is actually needed to solve the system. This is why in the following we describe very precisely in which form the data $\underline{\mathbf{A}}$, $\underline{\mathbf{Y}}$ must be provided to our computing system.
The system matrix, the right-hand sides, and the solution vectors are stored in a distributed way in the leaves of our two-tree. Figure~\ref{fig:Block_Matrix_Storage} illustrates the situation for $N=8$. Each leaf holds five matrices in its private storage: $\mathbf{A},\mathbf{B},\mathbf{C} \in \mathbb{R}^{m \times m}$ and $\mathbf{X},\mathbf{Y} \in \mathbb{R}^{m \times k}$. Matrices of two distinct leaves can have totally different values.
Comparing the upper and lower part of Figure~\ref{fig:Block_Matrix_Storage}, we find that the original matrices $\underline{\mathbf{A}}$, $\underline{\mathbf{X}}$, $\underline{\mathbf{Y}}$ can be composed from the matrices $\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{X},\mathbf{Y}$ of all leaves. Two matrices fall out of the pattern: $\mathbf{C}$ in the uppermost leaf and $\mathbf{B}$ in the lowermost leaf. We require that these matrices are zero-matrices.
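Under one natural reading of Figure~\ref{fig:Block_Matrix_Storage} (we assume here that $\mathbf{C}$ couples a row-block to its predecessor and $\mathbf{B}$ to its successor, which is consistent with the two required zero blocks), row-block $i\in\lbrace 1,\dots,N\rbrace$ of \eqref{eqn:LinearSystem} reads
\[
\mathbf{C}_i\,\mathbf{X}_{i-1} + \mathbf{A}_i\,\mathbf{X}_i + \mathbf{B}_i\,\mathbf{X}_{i+1} = \mathbf{Y}_i,
\qquad \mathbf{C}_1=\mathbf{0},\quad \mathbf{B}_N=\mathbf{0},
\]
where the subscript $i$ denotes the matrices stored in the $i$-th leaf.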
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Images_PDF/Distributed_Matrix}
\caption{Distributed storage of the linear system. At the top: The linear system of bandwidth $b \leq m$ is chunked into row-blocks of size $m$. At the bottom: Each row-block is stored in one leaf using five matrices.}
\label{fig:Block_Matrix_Storage}
\end{figure}
\section{The algorithm}
This section is organized as follows. We first present the algorithm as code that could be directly used for an implementation in a programming language such as C++ or Fortran together with MPI. Then we sketch our derivation of the algorithm, which arose from applying the divide-and-conquer paradigm to the SPIKE algorithm due to Sameh \cite{Polizzi2007}. We explain how our algorithm operates and give an example to illustrate the algorithmic steps. Finally, we analyse the parallel time complexity.
\paragraph{The algorithm}
The following algorithm is launched on all nodes of the computing system at the same time with their respective local data.
\begin{algorithmic}[1]
\Procedure{ParallelSolver}{$\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{Y},N,d$}
\State \textit{// matrices: }$\mathbf{V}^{\lbrace j\rbrace}\in \mathbb{R}^{m \times m}$ for $j=1,...,d$; $\mathbf{V}_{\text{up}},\mathbf{V}_{\text{down}} \in \mathbb{R}^{m \times m}$
\State \textit{// matrices (cont. 1): }$\mathbf{Z}^{\lbrace j\rbrace}_{\text{V,up}},\mathbf{Z}_{\text{V}}^{\lbrace j \rbrace},\mathbf{Z}^{\lbrace j \rbrace}_{\text{V,down}} \in \mathbb{R}^{m \times m}$ for $j=1,...,d$
\State \textit{// matrices (cont. 2): } $\mathbf{Z}_{\text{X,up}},\mathbf{Z}_{\text{X}},\mathbf{Z}_{\text{X,down}} \in \mathbb{R}^{m \times k}$
\If{$\texttt{my\_level}==0$}
\State \textit{// - - - write wings}
\State $j_B := \texttt{my\_proc\_num(my\_level)}$\ ; \quad $k:=N/2$
\For{$j=d\ :\ -1\ :\ 1$}
\If{$j_B\geq k$}
\State $j_B := j_B - k$
\EndIf
\If{$j_B==0$}
\State $\mathbf{V}^{\lbrace j\rbrace} := \mathbf{V}^{\lbrace j\rbrace} + \mathbf{B}$\ ; \quad \textbf{break for-loop}
\EndIf
\State $k:=k/2$
\EndFor
\State $j_C := \texttt{my\_proc\_num(my\_level)}-1$\ ; \quad $k:=N/2$
\For{$j=d\ :\ -1\ :\ 1$}
\If{$j_C\geq k$}
\State $j_C := j_C - k$
\EndIf
\If{$j_C==0$}
\State $\mathbf{V}^{\lbrace j\rbrace} := \mathbf{V}^{\lbrace j\rbrace} + \mathbf{C}$\ ; \quad \textbf{break for-loop}
\EndIf
\State $k:=k/2$
\EndFor
\State \textit{// - - - block-diagonal inversion}
\State $[\mathbf{V}^{\lbrace 1\rbrace},...,\mathbf{V}^{\lbrace d\rbrace},\mathbf{X}] := \mathbf{A} \backslash [\mathbf{V}^{\lbrace 1\rbrace},...,\mathbf{V}^{\lbrace d\rbrace},\mathbf{Y}]$
\EndIf
\For{$\ell=1\ :\ 1\ :\ d$}
\If{$\texttt{my\_level}==0$}
\State \texttt{send\_to\_parent}($\mathbf{V}^{\lbrace\ell\rbrace},...,\mathbf{V}^{\lbrace d \rbrace}\,,\,\mathbf{X}$)
\Else
\State \texttt{receive\_from\_up\_child}($\mathbf{V}^{\lbrace\ell\rbrace}_{\text{up}},...,\mathbf{V}^{\lbrace d\rbrace}_{\text{up}}\,,\,\mathbf{X}_{\text{up}}$)
\State \texttt{receive\_from\_down\_child}($\mathbf{V}^{\lbrace\ell\rbrace}_{\text{down}},...,\mathbf{V}^{\lbrace d\rbrace}_{\text{down}}\,,\,\mathbf{X}_{\text{down}}$)
\If{$\texttt{my\_level}<\ell$}
\If{$\texttt{my\_proc\_num}(\ell-1)$ is odd}
\State \texttt{send\_to\_parent}($\mathbf{V}^{\lbrace\ell\rbrace}_{\text{down}},...,\mathbf{V}^{\lbrace d\rbrace}_{\text{down}}\,,\,\mathbf{X}_{\text{down}}$)
\Else
\State \texttt{send\_to\_parent}($\mathbf{V}^{\lbrace\ell\rbrace}_{\text{up}},...,\mathbf{V}^{\lbrace d\rbrace}_{\text{up}}\,,\,\mathbf{X}_{\text{up}}$)
\EndIf
\EndIf
\EndIf
\State \textit{// above: nodes of level $\ell$ receive $\mathbf{V}^{\lbrace \ell,...,d \rbrace }_{\text{up}},\mathbf{V}^{\lbrace \ell,...,d \rbrace }_{\text{down}},\mathbf{X}_{\text{up}},\mathbf{X}_{\text{down}}$}
\If{$\texttt{my\_level}==\ell$}
\State $\mathbf{S} := \begin{bmatrix}
\mathbf{I}_{m \times m} & \mathbf{V}_{\text{up}}^{\lbrace \ell \rbrace} \\
\mathbf{V}_{\text{down}}^{\lbrace \ell \rbrace} & \mathbf{I}_{m \times m}
\end{bmatrix}$
\State \mbox{$\begin{bmatrix}
\mathbf{Z}_{\text{V,up}}^{\lbrace \ell+1 \rbrace},...,\mathbf{Z}_{\text{V,up}}^{\lbrace d \rbrace} & \mathbf{Z}_{\text{X,up}}\\
\mathbf{Z}_{\text{V,down}}^{\lbrace \ell+1 \rbrace},...,\mathbf{Z}_{\text{V,down}}^{\lbrace d \rbrace} & \mathbf{Z}_{\text{X,down}}
\end{bmatrix} := \mathbf{S} \, \backslash\, \begin{bmatrix}
\mathbf{V}_{\text{up}}^{\lbrace \ell+1 \rbrace},...,\mathbf{V}_{\text{up}}^{\lbrace d \rbrace} & \mathbf{X}_{\text{up}}\\
\mathbf{V}_{\text{down}}^{\lbrace \ell+1 \rbrace},...,\mathbf{V}_{\text{down}}^{\lbrace d \rbrace} & \mathbf{X}_{\text{down}}
\end{bmatrix}$}
\State \texttt{send\_to\_up\_child}($\mathbf{Z}_{\text{V,down}}^{\lbrace \ell+1 \rbrace},...,\mathbf{Z}_{\text{V,down}}^{\lbrace d \rbrace}\,,\,\mathbf{Z}_{\text{X,down}}$)
\State \texttt{send\_to\_down\_child}($\mathbf{Z}_{\text{V,up}}^{\lbrace \ell+1 \rbrace},...,\mathbf{Z}_{\text{V,up}}^{\lbrace d \rbrace}\,,\,\mathbf{Z}_{\text{X,up}}$)
\EndIf
\If{$\texttt{my\_level}<\ell$}
\State \texttt{receive\_from\_parent}($\mathbf{Z}^{\lbrace \ell+1 \rbrace}_{\text{V}},...,\mathbf{Z}^{\lbrace d \rbrace}_{\text{V}}\,,\,\mathbf{Z}_{\text{X}}$)
\If{$\texttt{my\_level}>0$}
\State \texttt{send\_to\_up\_child}($\mathbf{Z}^{\lbrace \ell+1 \rbrace}_{\text{V}},...,\mathbf{Z}^{\lbrace d \rbrace}_{\text{V}}\,,\,\mathbf{Z}_{\text{X}}$)
\State \texttt{send\_to\_down\_child}($\mathbf{Z}^{\lbrace \ell+1 \rbrace}_{\text{V}},...,\mathbf{Z}^{\lbrace d \rbrace}_{\text{V}}\,,\,\mathbf{Z}_{\text{X}}$)
\EndIf
\If{$\texttt{my\_level}==0$}
\State \mbox{$[\, \mathbf{V}^{\lbrace \ell+1 \rbrace},...,\mathbf{V}^{\lbrace d \rbrace}\,,\,\mathbf{X} \,] := [\, \mathbf{V}^{\lbrace \ell+1 \rbrace},...,\mathbf{V}^{\lbrace d \rbrace}\,,\,\mathbf{X} \,] - \mathbf{V}^{\lbrace \ell \rbrace} \cdot [\mathbf{Z}_{\text{V}}^{\lbrace \ell+1 \rbrace},...,\mathbf{Z}_{\text{V}}^{\lbrace d \rbrace}\,,\,\mathbf{Z}_{\text{X}}]$}
\EndIf
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\subsection{Derivation of the algorithm}
\paragraph{Origins in SPIKE}
We derived the above algorithm by applying the SPIKE algorithm \cite{10.1007/978-3-642-03869-3_74} due to Sameh in a divide-and-conquer fashion. We explain the derivation with the help of Figure~\ref{fig:Derivation_Spike}.
Initially, we are given a banded linear system $\mathbf{A}\cdot \mathbf{X} = \mathbf{Y}$, as shown in part 1 of the figure. As is common for the divide-and-conquer approach, the system is split in the middle. We express the system as the composition of equally dimensioned square matrices $\mathbf{A}_1,\mathbf{A}_2$, and matrices $\mathbf{X}_1,\mathbf{X}_2$, $\mathbf{Y}_1,\mathbf{Y}_2$. Since $\mathbf{A}$ is banded, we need two additional matrices $\mathbf{V}_1,\mathbf{V}_2$ in order to be able to express $\mathbf{A}$ in terms of blocks. $\mathbf{V}_1,\mathbf{V}_2$ are tall and thin: they have the height of half the dimension of the original system and their breadth is equal to the bandwidth of the original system. We introduce names for the objects in part 2: the system we call a \textit{blade}, and the matrices $\tilde{\mathbf{V}}_1,\tilde{\mathbf{V}}_2$ we call \textit{wings}. We will come back to this later.
In part 2 of the figure we see the transformed system that is obtained by multiplying the inverse of the block-diagonal matrix $\mathbf{D}$,
\begin{align*}
\mathbf{D} = \begin{bmatrix}
\mathbf{A}_1 & \mathbf{0}\\
\mathbf{0} & \mathbf{A}_2
\end{bmatrix}
\end{align*}
from the left onto the system. If the dimension of $\mathbf{A}_1$ and $\mathbf{A}_2$ is large then it is unlikely that $\mathbf{D}$ is singular. If $\mathbf{A}$ is symmetric positive definite then $\mathbf{D}$ is regular with a condition number bounded by that of $\mathbf{A}$ \cite{Saad:2003:IMS:829576}. Thus, there exist conditions on $\mathbf{A}$ such that the system considered in part 2 of the figure is well-posed. The obtained system has an interesting structure: it is an identity matrix plus two dense sub-blocks $\tilde{\mathbf{V}}_1,\tilde{\mathbf{V}}_2$ that have the breadth of $\mathbf{A}$'s bandwidth.
In part 3 of the figure we consider a sub-system that is obtained by extracting the red portions from the matrices in part 2. This sub-system is decoupled from the total system. Its dimension is twice the bandwidth, and it can be solved directly (e.g.\ by an $LU$-factorization followed by forward and backward substitutions) for the extracted (red-marked) portion of $\mathbf{X}$. This approach would be followed in the usual SPIKE algorithm \cite{Polizzi2007}. However, we use a different approach. As shown in part 3 of our figure, we solve the sub-system with system matrix $\mathbf{S}$ and write the solution into a matrix that we call $\mathbf{Z}$. $\mathbf{Z}$ is composed vertically of two blocks $\mathbf{Z}_\text{up}$ and $\mathbf{Z}_\text{down}$, each of which has the height of the bandwidth of $\mathbf{A}$ and the breadth of $\mathbf{X}$ and $\mathbf{Y}$.
In part 4 we again consider a modified system. Using $\mathbf{Z}$ from step 3, we see that we can change the right-hand sides such that the system matrix simplifies to an identity. This transformation is easily derived: as the first block row of the system in part 2, we have
\begin{align*}
\mathbf{I} \cdot \mathbf{X}_1 + [\tilde{\mathbf{V}}_1,\mathbf{0}] \cdot \mathbf{X}_2 = \tilde{\mathbf{Y}}_1\,.
\end{align*}
Replacing $[\tilde{\mathbf{V}}_1,\mathbf{0}] \cdot \mathbf{X}_2$ by $\tilde{\mathbf{V}}_1 \cdot \mathbf{Z}_\text{down}$ and moving the second term to the right-hand side yields the formula for $\hat{\mathbf{Y}}_1$. The formula for $\hat{\mathbf{Y}}_2$ is found in an analogous way.
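To make parts 1--4 concrete, the following NumPy sketch carries out a single split serially (function and variable names are ours; the two dense half solves with $\mathbf{A}_1$, $\mathbf{A}_2$ stand in for the recursive calls of the distributed algorithm) and checks the result against a dense solve.
\begin{verbatim}
import numpy as np

def spike_step(A, Y, m):
    # One divide-and-conquer SPIKE step (parts 1-4 of the figure), serial sketch.
    # A is banded with half-bandwidth m; Y may have several columns.
    n = A.shape[0]
    h = n // 2
    A1, A2 = A[:h, :h], A[h:, h:]
    V1 = A[:h, h:h + m]        # only nonzero columns of the upper coupling block
    V2 = A[h:, h - m:h]        # only nonzero columns of the lower coupling block
    # Part 2: multiply by blkdiag(A1, A2)^{-1}.
    tV1, tY1 = np.linalg.solve(A1, V1), np.linalg.solve(A1, Y[:h])
    tV2, tY2 = np.linalg.solve(A2, V2), np.linalg.solve(A2, Y[h:])
    # Part 3: decoupled reduced ("blade corner") system of dimension 2m.
    S = np.block([[np.eye(m), tV1[-m:]],
                  [tV2[:m],  np.eye(m)]])
    Z = np.linalg.solve(S, np.vstack([tY1[-m:], tY2[:m]]))
    Z_up, Z_down = Z[:m], Z[m:]
    # Part 4: update the right-hand sides; the system matrix becomes the identity.
    return np.vstack([tY1 - tV1 @ Z_down, tY2 - tV2 @ Z_up])

# check against a dense solve on a diagonally dominant banded test matrix
rng = np.random.default_rng(0)
n, m = 16, 2
A = np.triu(np.tril(rng.standard_normal((n, n)), m), -m) + n * np.eye(n)
Y = rng.standard_normal((n, 3))
assert np.allclose(spike_step(A, Y, m), np.linalg.solve(A, Y))
\end{verbatim}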
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Images_PDF/Derivation_Spike}
\caption{Derivation of our algorithm by applying SPIKE in a divide-and-conquer fashion. The system is divided into two equally sized systems of smaller dimension. After solving these, the solutions can be put together through the solution of a decoupled system (part 3) that has a small dimension.}
\label{fig:Derivation_Spike}
\end{figure}
\paragraph{Parallelism, recursions, and the computing system}
In Figure~\ref{fig:Derivation_Spike}, there are two stages where parallelism can be exploited: The transformation from part 1 to part 2 involves the solution of the following banded linear systems:
\begin{subequations}
\begin{align}
\mathbf{A}_1 \cdot [\tilde{\mathbf{V}}_1,\tilde{\mathbf{Y}}_1] &= [\mathbf{V}_1,\mathbf{Y}_1]\\
\mathbf{A}_2 \cdot [\tilde{\mathbf{V}}_2,\tilde{\mathbf{Y}}_2] &= [\mathbf{V}_2,\mathbf{Y}_2]
\end{align}\label{eqn:DC_linSys}
\end{subequations}
These two systems can be solved independently of each other. The second stage where parallelism can be exploited is the computation of the two components $\hat{\mathbf{Y}}_1$ and $\hat{\mathbf{Y}}_2$:
\begin{subequations}
\begin{align}
\hat{\mathbf{Y}}_1 &= \tilde{\mathbf{Y}}_1 - \tilde{\mathbf{V}}_1\cdot \mathbf{Z}_\text{down}\\
\hat{\mathbf{Y}}_2 &= \tilde{\mathbf{Y}}_2 - \tilde{\mathbf{V}}_2\cdot \mathbf{Z}_\text{up}
\end{align}\label{eqn:DC_hY}
\end{subequations}
The problems in \eqref{eqn:DC_linSys} and \eqref{eqn:DC_hY} can be expressed as recursions. The recursion for \eqref{eqn:DC_linSys} is obvious because we develop an algorithm that solves banded linear systems and internally requires the solution of smaller banded linear systems. A recursion for \eqref{eqn:DC_hY} is found by distributing the computation over the rows of $\hat{\mathbf{Y}}_1,\hat{\mathbf{Y}}_2$. This is shown in Figure~\ref{fig:Recursion_MatMat}. The computation of $\hat{\mathbf{Y}}_1$ and $\hat{\mathbf{Y}}_2$ can each be expressed in the form $\mathbf{M} := \mathbf{M} - \mathbf{U} \cdot \mathbf{W}$, in some programming languages also written as $\mathbf{M} \mathrel{-}= \mathbf{U} \cdot \mathbf{W}$. The figure shows how the computation of the update of $\mathbf{M}$ can be distributed through a two-tree network by dividing it in the vertical direction.
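A minimal serial sketch of this row-wise splitting (names are ours; the two recursive calls mark where the up and down children of the two-tree would take over the halves):
\begin{verbatim}
import numpy as np

def recursive_matmul_update(M, U, W, rows_min=2):
    # Compute M - U @ W by splitting the rows recursively, as in the figure.
    r = M.shape[0]
    if r <= rows_min:
        return M - U @ W                                         # leaf: local dense update
    h = r // 2
    top = recursive_matmul_update(M[:h], U[:h], W, rows_min)     # would go to the "up" child
    bot = recursive_matmul_update(M[h:], U[h:], W, rows_min)     # would go to the "down" child
    return np.vstack([top, bot])

rng = np.random.default_rng(0)
M, U, W = rng.random((8, 3)), rng.random((8, 2)), rng.random((2, 3))
assert np.allclose(recursive_matmul_update(M, U, W), M - U @ W)
\end{verbatim}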
Altogether, we can formulate our whole algorithm for solving a banded linear system in a recursive way. The following code demonstrates this:
\begin{algorithmic}[1]
\Procedure{RecursiveSolver}{$\mathbf{A},\mathbf{Y}$}
\If{$\mathbf{A}$ has small dimension}
\State $\mathbf{X} = \mathbf{A} \backslash \mathbf{Y}$
\State \Return $\mathbf{X}$
\EndIf
\State Decompose the system into $\mathbf{A}_1,\mathbf{A}_2,\mathbf{V}_1,\mathbf{V}_2,\mathbf{Y}_1,\mathbf{Y}_2,\mathbf{X}_1,\mathbf{X}_2$.
\State $[\tilde{\mathbf{V}}_1,\tilde{\mathbf{Y}}_1] := $\textsc{RecursiveSolver}($\mathbf{A}_1,[\mathbf{V}_1,\mathbf{Y}_1]$)
\State $[\tilde{\mathbf{V}}_2,\tilde{\mathbf{Y}}_2] := $\textsc{RecursiveSolver}($\mathbf{A}_2,[\mathbf{V}_2,\mathbf{Y}_2]$)
\State Compose $\mathbf{S}$ and solve the reduced linear system for $\mathbf{Z}_\text{up}$, $\mathbf{Z}_\text{down}$.
\State \textit{// $\hat{\mathbf{Y}}_1$ \underline{is} $\tilde{\mathbf{Y}}_1$, $\hat{\mathbf{Y}}_2$ \underline{is} $\tilde{\mathbf{Y}}_2$}
\State $\hat{\mathbf{Y}}_1 := $\textsc{RecursiveMatMul}($\tilde{\mathbf{Y}}_1,\tilde{\mathbf{V}}_1,\mathbf{Z}_\text{down}$)
\State $\hat{\mathbf{Y}}_2 := $\textsc{RecursiveMatMul}($\tilde{\mathbf{Y}}_2,\tilde{\mathbf{V}}_2,\mathbf{Z}_\text{up}$)
\State \textit{// $\mathbf{X}$ \underline{is} $\begin{bmatrix}
\hat{\mathbf{Y}}_1\\
\hat{\mathbf{Y}}_2
\end{bmatrix}$ }
\State \Return $\mathbf{X}$
\EndProcedure
\end{algorithmic}
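As a sketch under the same assumptions, \textsc{RecursiveSolver} can be prototyped serially in NumPy. The code below only illustrates the data flow of the recursion, not the distributed two-tree implementation; the base-case threshold is our own choice.
\begin{verbatim}
import numpy as np

def recursive_solver(A, Y, m):
    # Serial sketch of RecursiveSolver for a matrix with half-bandwidth m.
    n = A.shape[0]
    if n <= 2 * m:                          # small dimension: solve directly
        return np.linalg.solve(A, Y)
    h = n // 2
    A1, A2 = A[:h, :h], A[h:, h:]
    V1, V2 = A[:h, h:h + m], A[h:, h - m:h]
    # the two independent half problems (lines 7-8 of the pseudocode above)
    tVY1 = recursive_solver(A1, np.hstack([V1, Y[:h]]), m)
    tVY2 = recursive_solver(A2, np.hstack([V2, Y[h:]]), m)
    tV1, tY1 = tVY1[:, :m], tVY1[:, m:]
    tV2, tY2 = tVY2[:, :m], tVY2[:, m:]
    # reduced 2m x 2m system, then hat{Y}_i := tilde{Y}_i - tilde{V}_i * Z
    S = np.block([[np.eye(m), tV1[-m:]], [tV2[:m], np.eye(m)]])
    Z = np.linalg.solve(S, np.vstack([tY1[-m:], tY2[:m]]))
    return np.vstack([tY1 - tV1 @ Z[m:], tY2 - tV2 @ Z[:m]])

rng = np.random.default_rng(1)
n, m = 64, 3
A = np.triu(np.tril(rng.standard_normal((n, n)), m), -m) + n * np.eye(n)
Y = rng.standard_normal((n, 2))
assert np.allclose(recursive_solver(A, Y, m), np.linalg.solve(A, Y))
\end{verbatim}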
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Images_PDF/Recursion_MatMat}
\caption{Recursive divide-and-conquer approach to compute the matrix update $\mathbf{M} := \mathbf{M} - \mathbf{U} \cdot \mathbf{W}$ on a parallel two-tree computing system. Each node divides the computational problem vertically into two problems of smaller dimension. These are then solved recursively by its children.}
\label{fig:Recursion_MatMat}
\end{figure}
The computing system has been chosen as a two-tree in order to exploit the recursive nature of the algorithm: initially, the root has to solve the linear system. According to the above code it would call its children in lines 7--8 to solve recursively the subsystems with $\mathbf{A}_1$ and $\mathbf{A}_2$. This is supposed to be done in parallel. Afterwards, the root composes $\mathbf{S}$ from the small portions $\tilde{\mathbf{V}}_\text{1,down},\tilde{\mathbf{V}}_\text{2,up}$ of $\tilde{\mathbf{V}}_1,\tilde{\mathbf{V}}_2$, and computes $\mathbf{Z}_\text{up},\mathbf{Z}_\text{down}$. Finally, it again calls its children in order to compute $\hat{\mathbf{Y}}_1,\hat{\mathbf{Y}}_2$.
\paragraph{Access pattern on the distributed memory}
So far our explanation helps to understand how the algorithm works and why the results for $\mathbf{X}$ are correct. Yet it is not easy to see how communication-intensive the algorithm is and how the recursion acts on the global system. In this paragraph we give illustrations for both.
Our algorithm can be interpreted in an elegant way. To solve a system
\begin{align*}
\underline{\mathbf{A}} \cdot \underline{\mathbf{X}} = \underline{\mathbf{Y}}
\end{align*}
we can view the algorithm as applying the iterative scheme
\begin{align*}
\mathbf{A}^{(0)} &:= \underline{\mathbf{A}} & \mathbf{Y}^{(0)} &:= \underline{\mathbf{Y}}\\
\mathbf{A}^{(j+1)} &:= \big(\mathbf{D}^{(j)}\big)^{-1} \cdot \mathbf{A}^{(j)} & \mathbf{Y}^{(j+1)} &:= \big(\mathbf{D}^{(j)}\big)^{-1} \cdot \mathbf{Y}^{(j)}\quad \text{ for }j=0,...,d\,,
\end{align*}
where $d = \log_2(N)$, $\mathbf{A}^{(d+1)} = \mathbf{I}$, and thus $\underline{\mathbf{X}} = \mathbf{Y}^{(d+1)}$. The matrices $\mathbf{D}^{(j)}$ are block-diagonal matrices whose blocks are blades. As we have seen, the inverses of blades can be applied in parallel through \textsc{RecursiveMatMul}. We only need one upward communication of $\mathbf{V}_\text{1,up}$, $\mathbf{V}_\text{2,down}$ and one downward communication of $\mathbf{Z}_\text{down}$, $\mathbf{Z}_\text{up}$.
Figure~\ref{fig:Matrix_Pattern} illustrates the iteration for $N=8$, $d=3$. The figure shows the system matrix in the right part, starting at the top with the original matrix and ending at the bottom with an identity. The left part of the figure shows the matrices $\mathbf{D}^{(j)}$ for $j=0,...,d$\,.
\largeparbreak
We want to explain how this interpretation of the algorithm's action can be derived from \textsc{RecursiveSolver} and at the same time discuss the algorithmic steps as represented by the figure. The recursion in lines 7--8 results in a partitioning of the system matrix along the diagonal blocks. Black lines indicate the recursive diagonal partitioning. Grey coloring shows the non-zero pattern. At the bottom of the recursion the inverses of the smallest diagonal blocks are applied from the left onto the system, so the matrix $\mathbf{D}^{(0)}$ consists solely of diagonal elements. The matrix $\mathbf{A}^{(1)}$ has identity matrices on the diagonal blocks by construction. The sub-diagonal blocks are no longer triangular but dense. Since level 0 is the recursive bottom, we now ascend. The matrix $\mathbf{D}^{(1)}$ consists of four blades. Multiplication from the left with its inverse yields a matrix $\mathbf{A}^{(2)}$ that has identities on larger diagonal blocks (since these were identical to the blades on the diagonal of $\mathbf{D}^{(1)}$). However, the height of the wings in $\mathbf{A}^{(2)}$ increases, because there are subdiagonal blocks in $\mathbf{A}^{(1)}$ that were not represented in $\mathbf{D}^{(1)}$. Ascending further, the matrix $\mathbf{D}^{(2)}$ consists of two diagonal blocks, which are blades of dimension $4$. The two subdiagonal blocks in $\mathbf{A}^{(3)}$ suffer from fill-in in the vertical direction while the diagonal blocks become identity matrices. Finally, $\mathbf{A}^{(3)}$ has a blade structure, and thus $(\mathbf{D}^{(3)})^{-1}\cdot\mathbf{A}^{(3)}$ yields the identity.
\begin{figure}
\centering
\includegraphics[height=15cm]{Images_PDF/Matrix_Pattern}
\caption{Successive multiplication of the inverses of block-diagonal matrices from the left onto the system matrix. The block-diagonal matrices consist of blades. Their inverses are easy to apply in parallel.}
\label{fig:Matrix_Pattern}
\end{figure}
\largeparbreak
From the sparsity patterns of the system matrix in each iteration we can draw conclusions about the memory that the nodes need to hold in order to compute $\mathbf{A}^{(0)},...,\mathbf{A}^{(d+1)}$ without any overhead for, e.g., dynamically changing a sparse-memory representation of $\mathbf{A}$. In our algorithm \textsc{ParallelSolver} we store $\underline{\mathbf{A}}$ as wing matrices $\mathbf{V}^{\lbrace j\rbrace}$, $j=1,...,d$\,. The top of Figure~\ref{fig:MemoryPattern} depicts this: the fill-in pattern of $\underline{\mathbf{A}}$ fits into $\log_2(N)$ column vectors.
For this data structure the product of $\underline{\mathbf{A}}$ with the inverse of one of the above block-diagonal matrices $\mathbf{D}^{(j)}$ can be computed efficiently, as shown in the bottom of the figure: say we want to compute the product with $(\mathbf{D}^{(j)})^{-1}$ for $j=3$. In this case, the leaves send the data for the reduced system (compare with the red-framed portions in Figure~\ref{fig:Derivation_Spike}, part 2) to the roots of the sub-trees of level $j=3$. The roots of the sub-trees compute the reduced solutions $\mathbf{Z}_\text{up},\mathbf{Z}_\text{down}$, which they then send back to all leaves through their sub-trees. The leaves update the wing matrices by performing the same computations for them as for updating $\mathbf{Y}$.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Images_PDF/MemoryPattern}
\caption{Top: The matrix is represented by $d=\log_2(N)$ dense wing matrices $\mathbf{V}^{\lbrace j\rbrace}$, $j=1,...,d$, which are each distributed in row-chunks over the leaves. Bottom: The product with the inverse of a blade. The leaves send extracted matrices to the root of the sub-tree. The root computes the decoupled solution $\mathbf{Z}$ and sends it back through the tree to all leaves, so they can update the wing matrices.}
\label{fig:MemoryPattern}
\end{figure}
\paragraph{Parallel time complexity}
We derive the time complexity from the following list of observations:
\begin{enumerate}[label=(\roman*)]
\item According to Figure~\ref{fig:Matrix_Pattern} the algorithm performs $\mathcal{O}\xspace(\log(N))$ iterations.
\item The communicated pieces of data per iteration are the node-local matrices
\begin{align*}
[\mathbf{V}^{\lbrace 1 \rbrace},...,\mathbf{V}^{\lbrace d \rbrace},\mathbf{Y}] \in \mathbb{R}^{ m \times (m \cdot d + k)}\,,
\end{align*}
which consist of $\mathcal{O}\xspace\big( m \cdot (m \cdot \log(N)+k) \big)$ elements.
The communication cost per iteration is bounded by the time required for sending these elements from an arbitrary leaf to the root or vice versa (since $\mathbf{Z}_\text{V},\mathbf{Z}_\text{X}$ have the same dimensions as $[\mathbf{V}^{\lbrace 1 \rbrace},...,\mathbf{V}^{\lbrace d \rbrace},\mathbf{Y}]$). Equivalently, the time complexity for communication per iteration is
\begin{align*}
\mathcal{O}( c^\text{Lat}_N + m \cdot (m \cdot \log(N)+k ) )\,,
\end{align*}
where $c^\text{Lat}_N$ is the physical time needed to send a single number from the root to the leaf that is farthest away from the root in terms of cable length.
\item The computational complexity per iteration per node is as follows: the nodes that are not leaves either compute nothing or they compute a decomposition of a matrix $\mathbf{S} \in \mathbb{R}^{(2 \cdot m) \times (2 \cdot m)}$ and apply it to the columns of the matrix $[\mathbf{V}^{\lbrace 1 \rbrace},...,\mathbf{V}^{\lbrace d \rbrace},\mathbf{Y}]$ in order to compute $\mathbf{Z}_\text{V},\mathbf{Z}_\text{X}$. The leaves, on the other hand, compute matrix-matrix products of an $m \times m$ matrix with $\mathbf{Z}_\text{V}$ and $\mathbf{Z}_\text{X}$. Thus, the computational complexity per node per iteration is bounded by $\mathcal{O}( m^3 \cdot \log(N) + m^2 \cdot k )$.
\end{enumerate}
Combining the items, we find that the parallel time complexity of our method is:
\begin{align}
\mathcal{O}\Bigg(\ \log(N)\cdot \Big(\,c^\text{Lat}_N+m^2 \cdot \big( m \cdot \log(N) + k \big) \,\Big) \ \Bigg) \label{eqn:ComplexityOrder}
\end{align}
Unfortunately, for a computing system of $N$ nodes, each of size $\Theta(1)$, the minimum possible value of $c^\text{Lat}_N$ is in $\Theta( N^{1/3} )$.
We consider two special cases:
\begin{enumerate}
\item Assuming $k,m \in \mathcal{O}\xspace(1)$ the complexity result simplifies to
\begin{align*}
\mathcal{O}\xspace\Big(\ \log(N) \cdot N^{1/3} \ \Big)\,.
\end{align*}
\item Assuming $k,m \in \Omega(N^{1/6})$, the time complexity is
\begin{align*}
\mathcal{O}\xspace\Big(\ \log(N) \cdot m^2 \cdot \big(m \cdot \log(N)+k\big) \ \Big)\,.
\end{align*}
\end{enumerate}
Whereas the second result is obviously optimal for solving a band matrix with a dense band of bandwidth $m$ (since it has $N$ diagonal blocks of size $m$ that need to be factorized), it turns out that the first result is not optimal; it can be made optimal by fine-tuning.
\section{Lower complexity bound for the parallel solution of tridiagonal linear systems}
In this section we prove in order the following statements, where $s$ is the physical dimension in which the computing system is built (e.g., if the computing system is built on the surface of a planet then $s=2$, and if the computing system is a/the planet then $s=3$):
\begin{enumerate}
\item The latency $c^\text{Lat}_N$ is bounded from below by $\Omega(N^{1/s})$, regardless of how many cables are used.
\item The time complexity of any algorithm for solving a tridiagonal linear system of dimension $N$ on a distributed-memory machine of computing nodes is bounded from below by $\Omega(N^{1/(s+1)})$, no matter how many nodes and cables are used.
\end{enumerate}
\paragraph{Latency}
We discuss the latency time for a computing system of $N$ nodes that are $s$-dimensional spheres, where the latency for one node to communicate with another is bounded from below by the order of their physical distance in space.
For $N$ nodes placed on a line, the diameter is $\geq N-2$ because the two outermost nodes have $N-2 \in \Omega(N)$ nodes between them. For an illustration of this, consider Figure~\ref{fig:Block_Matrix_Storage}, which shows (twice) the topology of the two-tree computing system with $N=8$ nodes. Since the leaves are placed on a line with unit distance, the physical distance from the root to the farthest leaf is bounded from below by $\Omega(N)$.
Now let us consider the case where we use $s=2$ dimensions: we place the computing nodes into the smallest possible circle. Since the total area of $N$ nodes is $\Theta(N)$, the radius $r$ of the circle must be in $\Theta(N^{1/2})$. Then, the communication between two nodes requires at most a time of $2 \cdot r \in \Theta(N^{1/2})$. In $s=3$ dimensions the situation is even better: here the radius must only be of length $r \in \Theta(N^{1/3})$. Given the positions of the nodes in the sphere, it is trivial to find an almost optimal communication network for them: connecting them as a tree yields a maximum diameter of the communication tree of $\mathcal{O}(\,\log(N) \cdot N^{1/s})$, because each cable can have length at most $\mathcal{O}(N^{1/s})$ and a tree network has diameter $\mathcal{O}(\log(N))$.
\paragraph{Lower bound on the time complexity for solving tridiagonal linear systems}
Suppose we are to solve a tridiagonal system $\mathbf{A} \cdot \mathbf{x} = \mathbf{y}$ of dimension $N$, and suppose that we use $\mathcal{O}(N^p)$ nodes, each of which holds at most $\mathcal{O}(N^q)$ pieces of data. We must have $q\geq 1-p$ because $\mathcal{O}(N)$ pieces of data must be stored in total.
It is known that each entry of the solution vector $\mathbf{x}$ depends on every entry of the right-hand side $\mathbf{y}$ and every entry of the matrix $\mathbf{A}$. Thus the following hold:
\begin{enumerate}[label=(\roman*)]
\item Each node must read at least $\mathcal{O}\xspace(N^{1-p})$ pieces of its data.
\item Each node must communicate at least once (either directly or indirectly) to each other node.
\end{enumerate}
Combining the two properties, we find a lower bound for the time complexity with a fixed value of $p$:
\begin{align*}
\Omega\Big(\ \underbrace{N^{1-p}}_\text{read problem} + \underbrace{N^{\frac{p}{s}}}_\text{communicate} \ \Big) = \Omega\big(N^{\omega}\big)\,, \qquad \omega = \max\big(1-p,\ \tfrac{p}{s}\big)\,.
\end{align*}
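Since $N^{1-p}$ decreases with $p$ while $N^{p/s}$ increases, the minimum of the bound is attained where the two exponents balance, which yields the entries of the table:
\begin{align*}
1-p^* = \frac{p^*}{s}
\quad\Longrightarrow\quad
p^* = \frac{s}{s+1}\,,\qquad
\omega^* = 1 - p^* = \frac{1}{s+1}\,.
\end{align*}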
The minimal polynomial complexity order $\omega$ and the corresponding optimal order $p$ of the number of nodes are given, depending on $s$, in Table~\ref{table:ComplexityOrders}.
\begin{table}
\centering
\begin{tabular}{||c|c|c||}
\hline
\hline dimension $s$ & order $p$ of number of nodes & order $\omega$ of time complexity \\
\hline $1$ & $1/2$ & $1/2$ \\
$2$ & $2/3$ & $1/3$ \\
$3$ & $3/4$ & $1/4$ \\
\hline
\hline
\end{tabular}
\caption[]{Lower complexity bounds for solving a linear equation system on a parallel distributed-memory system with arbitrarily many nodes.}\label{table:ComplexityOrders}
\end{table}
The question arises why we have not presented our algorithm with a number of nodes in $\mathcal{O}(N^{3/4})$, since then it would be logarithmically close to the optimal time complexity. The reason is that for our PhD thesis we will have to solve problems where $N \approx 10^6$ and $m \approx 100$, so for our applications the execution time is dominated by $\log(N)^2\cdot m^3$. The factor $m^3$ arises from the fact that $\underline{\mathbf{A}}$ has $N$ dense $m \times m$ matrices on the diagonal that must be decomposed. In theory a fast matrix-multiplication algorithm could be employed to reduce the complexity order of this step, but this is not practical. Thus, for the problems that we need to solve, the complexity of our algorithm is already logarithmically close to optimal, or maybe even optimal.
\section{Conclusions}
We presented an algorithm for the efficient parallel solution of (block-)tridiagonal linear systems. We provided an accurate implementation of the algorithm that uses a common message-passing interface. Following our analysis, the algorithm has a parallel time complexity that could be made optimal for tridiagonal systems (by simply using fewer nodes) and that is clearly optimal for block-tridiagonal linear systems and for banded linear systems of small bandwidth.
Though a proper implementation has been provided for the algorithm, it will be rather difficult to utilize its full potential in practice. This is because the time that is required to send the data to the solver would already destroy the benefit. Software that uses this solver needs to be highly sophisticated. In particular, problems must be instantiated in a way such that the linear system is already distributed in the memory of the leaves of our computing system at the time when our solver is called.
Future work will attempt to implement an optimal control solver by direct transcription, in which the linear systems arising within the non-linear programming solver are solved by means of the linear solver presented in this work.
\FloatBarrier
\end{document}
\begin{document}
\begin{frontmatter}
\title{Embedded discontinuous Galerkin transport schemes with
localised limiters}
\author[math]{C.~J. Cotter}
\address[math]{Department of Mathematics, Imperial College London,
South Kensington Campus, London SW7 2AZ}
\author[dmitri]{D. Kuzmin}
\address[dmitri]{Institute of Applied Mathematics, Dortmund University of Technology, Vogelpothsweg 87, D-44227 Dortmund, Germany}
\begin{abstract}
Motivated by finite element spaces used for representation of
temperature in the compatible finite element approach for numerical
weather prediction, we introduce locally bounded transport schemes for
(partially-)continuous finite element spaces. The underlying
high-order transport scheme is constructed by injecting the
partially-continuous field into an embedding discontinuous finite
element space, applying a stable upwind discontinuous Galerkin (DG)
scheme, and projecting back into the partially-continuous space; we
call this an embedded DG transport scheme. We prove that this scheme
is stable in $L^2$ provided that the underlying upwind DG scheme
is. We then provide a framework for applying limiters for embedded DG
transport schemes. Standard DG limiters are applied during the
underlying DG scheme. We introduce a new localised form of
element-based flux-correction which we apply to limiting the
projection back into the partially-continuous space, so that the whole
transport scheme is bounded. We provide details in the specific case
of tensor-product finite element spaces on wedge elements that are
discontinuous P1/Q1 in the horizontal and continuous \revised{P2} in the
\revised{vertical}. The framework is illustrated with numerical tests.
\end{abstract}
\begin{keyword}
Discontinuous Galerkin \sep slope limiters \sep flux corrected transport
\sep convection-dominated transport \sep numerical weather prediction
\end{keyword}
\end{frontmatter}
\section{Introduction}
\revised{Recently there has been a lot of activity in the development
of finite element methods for numerical weather prediction (NWP),
using continuous (mainly spectral) finite elements as well as
discontinuous finite elements
\citep{fournier2004spectral,thomas2005ncar,dennis2011cam,kelly2012continuous,kelly2013implicit,marras2013simulations,brdar2013comparison,bao2015horizontally};
see \citet{marras2015review} for a comprehensive review. A key
aspect of NWP models is the need for transport schemes that preserve
discrete analogues of properties of the transport equation such as
monotonicity (shape preservation) and positivity; these properties are
particularly important when treating tracers such as
moisture. Discontinuous Galerkin methods can be interpreted as a
generalisation of finite volume methods and hence the roadmap for the
development of shape preserving and positivity preserving methods is
relatively clear (see \citet{cockburn2001runge} for an introduction to
this topic). However, this is not the case for continuous Galerkin
methods, and so different approaches must be used. In the NWP
community, limiters for CG methods have been considered by
\citet{marras2012variational}, who used first-order subcells to reduce
the method to first-order upwind in oscillatory regions, and
\citet{guba2014optimization}, who exploited the monotonicity of the
element-averaged scheme in the spectral element method to build a
quasi-monotone limiter. }
\revised{ In this paper, we address the problem of finding suitable
limiters for the partially continuous finite element spaces for
tracers that arise in the framework of compatible finite element
methods for numerical weather prediction models
\citep{cotter2012mixed,cotter2014finite,staniforth2013analysis,mcrae2014energy}. Compatible
finite element methods have been proposed as an evolution of the
C-grid staggered finite difference methods that are very popular in
NWP.} \revised{Within the UK dynamical core ``Gung-Ho'' project},
this \revised{evolution} is being driven by the need to move away from
the latitude-longitude grids which are currently used in NWP models,
since they prohibit parallel scalability
\revised{\citep{staniforth2012horizontal}}. \revised{Compatible finite
element} methods rely on choosing compatible finite element spaces
for the various prognostic fields (velocity, density, temperature,
etc.), in order to avoid spurious numerical wave propagation that
pollutes the numerical solution on long time scales. In particular,
in three dimensional models, this calls for the velocity space to be a
div-conforming space such as Raviart-Thomas, and the density space is
the corresponding discontinuous space. Many current operational
forecasting models, such as the Met Office Unified Model
\citep{davies2005new}, use a Charney-Phillips grid staggering in the
vertical, to avoid a spurious mode in the vertical. When translated
into the framework of compatible finite element spaces, this requires
the temperature space to be a tensor product of discontinuous
functions in the horizontal and continuous functions in the vertical
(more details are given below). Physics/dynamics coupling then
requires that other tracers (moisture, chemical species \emph{etc.})
also use the same finite element space as temperature.
A critical requirement for numerical weather prediction models is that
the transport schemes for advected tracers do not lead to the creation
of new local maxima and minima, since their coupling back into the
dynamics is very sensitive. In the compatible finite element
framework, this calls for the development of limiters for
partially-continuous finite element spaces. Since there is a
well-developed framework for limiters for discontinuous Galerkin
methods
\revised{\citep{biswas94parallel,burbeau01:problem,cockburn2001runge,hoteit04:newgalerkin,krivodonova04:shockdg,tu05:slope,kuzmin2010vertex,zhang2011maximum}},
in this paper we pursue the three stage approach of (i) injecting the
solution into an embedding discontinuous finite element space at the
beginning of the timestep, then (ii) applying a standard discontinuous
Galerkin timestepping scheme, before finally (iii) projecting the
solution back into the partially continuous space. If the
discontinuous Galerkin scheme is combined with a slope limiter, the
only step where overshoots and undershoots can occur is in the final
projection. In this paper we describe a localised limiter for the
projection stage, which is a modification of element-based limiters
\citep{lohner1987finite,fctools} previously applied to remapping
in \citet{Lohner2008,kuzmin2010failsafe}. This leads to a locally bounded
advection scheme when combined with the other steps.
\revised{
The main results of this paper are:
\begin{enumerate}
\item The introduction of an embedded discontinuous Galerkin scheme which
is demonstrated to be linearly stable,
\item The introduction of localised element-based limiters to remove
spurious oscillations when projecting from discontinuous to
continuous finite element spaces, which are necessary to make the
whole transport scheme bounded,
\item When combined with standard limiters for the discontinuous
Galerkin stage, the overall scheme remains locally bounded,
addressing the previously unsolved problem of how to limit partially
continuous finite element spaces that arise in the compatible finite
element framework.
\end{enumerate}
Our bounded transport scheme can also be used for continuous finite
element methods, although other approaches are available that do not
involve intermediate use of discontinuous Galerkin methods.}
The rest of the paper is structured as follows. The problem is
formulated in Section \ref{sec:formulation}. In particular, more
detail on the finite element spaces is provided in Section
\ref{sec:spaces}. The embedded discontinuous Galerkin schemes are
introduced in Section \ref{sec:embedded}; it is also shown that these
schemes are stable if the underlying discontinuous Galerkin scheme is
stable. The limiters are described in Section \ref{sec:bounded}.
In Section \ref{sec:numerics} we provide some numerical
examples. Finally, in Section \ref{sec:outlook} we provide a summary
and outlook.
\section{Formulation}
\label{sec:formulation}
\subsection{Finite element spaces}
\label{sec:spaces}
We begin by defining the partially continuous finite element spaces
under consideration. In three dimensions, the element domain is
constructed as the tensor product of a two-dimensional horizontal
element domain (a triangle or a quadrilateral) and a one-dimensional
vertical element domain (i.e., an interval); we obtain triangular
prism or hexahedral element domains aligned with the vertical
direction. For a vertical slice geometry in two dimensions (frequently
used in testcases during model development), the horizontal domain is
also an interval, and we obtain quadrilateral elements aligned with
the vertical direction.
To motivate the problem of transport schemes for a partially
continuous finite element space, we first consider a compatible finite
element scheme that uses a discontinuous finite element space for
density. This is typically formed as the tensor product of the $DG_k$
space in the horizontal (degree $k$ polynomials on triangles or bi-$k$
polynomials on quadrilaterals, allowing discontinuities between
elements) and the $DG_l$ space in the vertical. We consider the case
where the same degree is chosen in horizontal and vertical,
i.e. $k=l$, although there are no restrictions in the framework. We
will denote this space as $DG_k\times DG_k$.
In the compatible finite element framework, the vertical velocity
space is staggered in the vertical from the pressure space; the
staggering is selected by requiring that the divergence (i.e., the
vertical derivative of the vertical velocity) maps from the vertical
velocity space to the pressure space. This means that vertical
velocity is stored as a field in $DG_k\times CG_{k+1}$ (where
$CG_{k+1}$ denotes degree $k+1$ polynomials in each interval element,
with $C^0$ continuity between elements). To avoid spurious hydrostatic
pressure modes, one may then choose to store (potential) temperature
in the same space as vertical velocity (this is the finite element
version of the Charney-Phillips staggering). \revised{Figure
\ref{fig:com-fem} provides diagrams showing the nodes for these
spaces in the case $k=1$.} Details of how to automate the
construction of these finite element spaces within a code generation
framework are provided in \citet{mcrae2014automated}.
\begin{figure}
\caption{Diagrams showing the nodes of the finite element spaces described in the text, for the case $k=1$.}
\label{fig:com-fem}
\end{figure}
Monotonic transport schemes for temperature are often required,
particularly in challenging testcases such as baroclinic front
generation. Further, dynamics-physics coupling requires that other
tracers such as moisture must be stored at the same points as
temperature; many of these tracers are involved in parameterisation
calculations that involve switches and monotonic advection is required
to avoid spurious formation of rain patterns at the gridscale, for
example. Hence, we must address the challenge of monotonic advection
in the partially continuous $DG_k\times CG_{k+1}$ space.
In this paper, we shall concentrate on the case of $DG_1\times CG_2$.
This is motivated by the fact that we wish to use standard DG upwind
schemes where the advected tracer is simply evaluated on the upwind
side; the lowest order space $DG_0\times CG_1$ leads to a first order
scheme in this case. We may return to higher order spaces in future
work.
\subsection{Embedded Discontinuous Galerkin schemes}
\label{sec:embedded}
\begin{figure}
\caption{Illustration of the partially-continuous space $V$ and the embedding discontinuous space $\hat{V}$ used for the temperature field.}
\label{fig:vtheta-vthetadg}
\end{figure}
\revised{In this section we describe the basic embedded transport
scheme as a linear transport scheme without limiters. The scheme,
which can be applied to continuous or partially-continuous finite
element spaces, is motivated by the fact that limiters are most
easily applied to fully discontinuous finite element spaces. We call
the continuous or partially-continuous finite element space $V$, and
let $\hat{V}$ be the smallest fully discontinuous finite element
space containing $V$. A diagram illustrating $V$ and $\hat{V}$ in
our case of interest, namely the finite element space for
temperature described in the previous section, is shown in Figure
\ref{fig:vtheta-vthetadg}.
Before describing the transport scheme, we make a few definitions.
\begin{definition}[Injection operator]
For $u\in V\subset \hat{V}$, we denote $I:V\to \hat{V}$
the natural injection operator.
\end{definition}
The injection operator does nothing mathematically except to identify
$Iu$ as a member of $\hat{V}$ instead of just $V$. However, in a
computer implementation, it requires us to expand $u$ in a new basis.
This can be cheaply evaluated element-by-element.
\begin{definition}[Propagation operator]
Let $A:\hat{V}\to\hat{V}$ denote the operator representing the
application of one timestep of an $L^2$-stable discontinuous Galerkin
discretisation of the transport equation.
\end{definition}
For example, $A$ could be the combination of an upwind discontinuous
Galerkin method with a suitable Runge-Kutta scheme.
\begin{definition}[Projection operator]
For $\hat{u}\in \hat{V}$ we define the projection $P:\hat{V}\to V$ by
\[
\langle v,P\hat{u}\rangle = \langle v,\hat{u}\rangle, \quad \forall
v\in V.
\]
\end{definition}
In a computer implementation, this requires the inversion of the mass matrix
associated with $V$.
We now combine these operators to construct our embedded discontinuous Galerkin
scheme.
\begin{definition}[Embedded discontinuous Galerkin scheme]
Let $V\subset\hat{V}$, with injection operator $I$, projection operator $P$
and propagation operator $A$. Then one step of the embedded discontinuous
Galerkin scheme is defined as
\[
\theta^{n+1} = PAI\theta^n, \quad \theta^n,\theta^{n+1}\in V.
\]
\end{definition}
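In terms of assembled matrices the scheme is just this operator composition. The following sketch assumes that the mass matrix of $V$, the mixed mass matrix between $V$ and $\hat{V}$, the coefficient injection and one DG timestep on $\hat{V}$ are already given; it shows only the composition $PAI$, not the finite element assembly, and all names are ours.
\begin{verbatim}
import numpy as np

def embedded_dg_step(theta, I_mat, A_hat, B, M_V):
    # theta^{n+1} = P A I theta^n with assembled matrices:
    #   I_mat : coefficient injection V -> Vhat
    #   A_hat : one (limited or unlimited) DG timestep on Vhat
    #   B     : mixed mass matrix  B[i, j] = <phi_i, psi_j>
    #   M_V   : mass matrix of V (its solve realises the L2 projection P)
    theta_hat = A_hat @ (I_mat @ theta)
    return np.linalg.solve(M_V, B @ theta_hat)

# sanity check: with V = Vhat and A = identity the step is the identity
M = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0   # toy SPD "mass matrix"
theta = np.array([1.0, 3.0])
assert np.allclose(embedded_dg_step(theta, np.eye(2), np.eye(2), M, M), theta)
\end{verbatim}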
The $L^2$ stability of this scheme is ensured by the following
result.
\begin{proposition}
Let $\alpha>0$ be the stability constant of
the propagation operator $A$, such that
\begin{equation}
\|A\| = \sup_{\hat{z}\in \hat{V},\|\hat{z}\|>0}\frac{\|A\hat{z}\|}{\|\hat{z}\|} \leq \alpha,
\end{equation}
where $\|\cdot\|$ denotes
the $L^2$ norm. Then, the stability constant $\alpha^*$ of the
embedded discontinuous Galerkin scheme on $V$ satisfies
$\alpha^*\leq\alpha$.
\end{proposition}
\begin{proof}
\begin{equation}
\sup_{z\in V,\|z\|>0}\frac{\|PAIz\|}{\|z\|} =
\sup_{z\in V,\|z\|>0}
\frac{\|PAIz\|}{\|Iz\|}
\leq \sup_{\hat{z}\in \hat{V},\|\hat{z}\|>0}
\frac{\|PA\hat{z}\|}{\|\hat{z}\|}
\leq \sup_{\hat{z}\in \hat{V},\|\hat{z}\|>0}\frac{\|A \hat{z}\|}{\|\hat{z}\|} \leq \alpha,
\end{equation}
as required. In the last inequality we used the fact that $\|P\hat{z}\|
\leq \|\hat{z}\|$, which is a consequence of the Riesz representation
theorem.
\end{proof}
\begin{corollary}
For a given velocity field $\MM{u}$, let $A_{\Delta t}$ denote the
propagation operator for timestep size $\Delta t$. Let $\Delta t^*$ denote
the critical timestep for $A_{\Delta t}$, \emph{i.e.},
\[
\|A_{\Delta t}\|\leq1, \quad \mbox{ for } \Delta t \leq \Delta t^*.
\]
Then, the critical timestep size $\Delta t^\dagger$ for the embedded
discontinuous Galerkin scheme $PA_{\Delta t}I$ is at least as large
as $\Delta t^*$.
\end{corollary}
\begin{proof}
If $\Delta t\leq\Delta t^*$, then
\[
\|PA_{\Delta t}I\|\leq \|A_{\Delta t}\| \leq 1,
\]
as required.
\end{proof}
Hence, the embedded DG scheme is $L^2$ stable whenever the propagation
operator $A$ is.
For the numerical examples in this paper, we consider the case
$V=DG_1\times CG_2$ (our temperature space) and $\hat{V}=DG_1\times
DG_2$. For a given divergence-free velocity field $\MM{u}$, defined
on the domain $\Omega$ and satisfying $\MM{u}\cdot\MM{n}=0$ on the
domain boundary $\partial \Omega$, $A$ represents the application of
one timestep applied to the transport equation
\begin{equation}
\theta_t = -\MM{u}\cdot\nabla\theta = -\nabla\cdot(\MM{u}\theta),
\end{equation}
discretised using the usual Runge-Kutta discontinuous Galerkin
discretisation (see \citet{cockburn2001runge} for a review). To do
this, first we define $L:\hat{V}\to\hat{V}$ by
\begin{equation}
\int_\Omega \gamma L\theta \diff x =
-\Delta t \int_{\Omega} \nabla\gamma\cdot\MM{u} \theta \diff x
+\Delta t \int_{\Gamma} \jump{\MM{u}\gamma}\tilde{\theta}\diff S,
\end{equation}
where $\Gamma$ is the set of interior facets in the finite element
mesh, with the two sides of each facet arbitrarily labelled by $+$ and
$-$, the jump operator denotes $\jump{\MM{v}}=\MM{v}^+\cdot\MM{n}^++\MM{v}^-\cdot\MM{n}^-$, and where $\tilde{\theta}$ is the upwind value of $\theta$
defined by
\[
\tilde{\theta} = \left\{
\begin{array}{cc}
\theta^+ & \mbox{ if } \MM{u}\cdot\MM{n}^+<0, \\
\theta^- & \mbox{ otherwise.}
\end{array}
\right.
\]
Then, the timestepping method is defined by the usual
3rd order 3 step SSPRK timestepping method
\citep{shu1988efficient},
\begin{align}
\phi^1 &= \theta^n + L\theta^n, \\
\phi^2 &= \frac{3}{4}\theta^n + \frac{1}{4}(\phi^1 + L\phi^1), \\
A\theta^n = \theta^{n+1} &= \frac{1}{3}\theta^n + \frac{2}{3}
(\phi^2 + L\phi^2).
\end{align}
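For illustration, the same three-stage update can be written as a function of an abstract operator $L$ (already scaled by $\Delta t$); applied to the scalar model problem $\theta_t=-\theta$ it reproduces the third-order Taylor polynomial of $e^{-\Delta t}$, as expected. This is only a sketch of the time stepping, not of the DG spatial discretisation.
\begin{verbatim}
import numpy as np

def ssprk3_step(theta, L):
    # One step of the 3-stage, 3rd-order SSPRK scheme written above.
    phi1 = theta + L(theta)
    phi2 = 0.75 * theta + 0.25 * (phi1 + L(phi1))
    return theta / 3.0 + 2.0 / 3.0 * (phi2 + L(phi2))

dt = 0.1
L = lambda th: -dt * th              # scalar model problem theta_t = -theta
out = ssprk3_step(np.array([1.0]), L)
assert np.isclose(out[0], 1 - dt + dt**2 / 2 - dt**3 / 6)
\end{verbatim}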
Since the finite element space $V$
is discontinuous in the horizontal, the projection $P:\hat{V}\to V$
decouples into independent problems to solve in each column
(\emph{i.e.}, the mass matrix for $DG_1\times CG_2$ is column-block
diagonal).
}
\subsection{Bounded transport}
\label{sec:bounded}
Next we wish to add limiters to the scheme. This is done in two
stages. \revised{ First, a slope limiter should be incorporated into the
$\hat{V}$ propagator, $A$; we call the resulting scheme $\tilde{A}$.
A suitable limiter is defined in Section \ref{sec:dg limiter}.
After replacing $A$ with $\tilde{A}$, the only way that the solution
can generate overshoots and undershoots is after the application of the
projection $P$.} To control these unwanted
oscillations, we apply a (conservative) flux correction to the
projection, referred to as flux corrected remapping
\citep{kuzmin2010failsafe}\revised{; this is described in Section \ref{sec:fcr}.
We denote the flux corrected remapping $\tilde{P}$, and the resulting
bounded transport scheme may be written as $\theta^{n+1}=\tilde{P}\tilde{A}I\theta^n$.
}
\subsubsection{Slope limiter \revised{for the propagator $A$}}
\label{sec:dg limiter}
\revised{In principle, any suitable discontinuous Galerkin slope
limiter can be used in the propagator $A$.} In this paper we used
the vertex-based slope limiter of \citet{kuzmin2010vertex}. This
limiter is both very easy to implement, and supports a treatment of
the quadratic structure in the vertical. \revised{Before presenting
the limiter for $\hat{V}=DG_1 \times DG_2$ (recall that this is the
space we must use to obtain a transport scheme for our $DG_1\times
CG_2$ space used for temperature), we first review the concepts in
the simpler case of $\hat{V}=DG_1\times DG_1$.} The basic idea for
\revised{$\theta\in\hat{V}=DG_1\times DG_1$} is to write
\begin{equation}
\theta = \bar{\theta} + \Delta \theta,
\end{equation}
where $\bar{\theta}$ is the projection of $\theta$ into $DG_0$,
\emph{i.e.} in each element $\bar{\theta}$ is the element-averaged
value of $\theta$. Then, for each vertex $i$ in the mesh, we compute
maximum and minimum bounds $\theta_{\max,i}$ and $\theta_{\min,i}$ by
computing the maximum and minimum values of $\bar{\theta}$ over all
the elements that contain that vertex, respectively. In each element
$e$ we then compute a constant $0\leq \alpha_e\leq 1$ such that
value of
\begin{equation}
\theta_{\min,i} \leq \theta_e(\MM{x}_i) = \bar{\theta}_e + \alpha_e(\Delta \theta)_e(\MM{x}_i) \leq \theta_{\max,i},
\end{equation}
holds at each vertex $i$ of element $e$. {The optimal value of the correction
factor $\alpha_e$ can be determined using the formula of \citet{barthjesp1989}
\begin{equation}
\alpha_e=\min\limits_{i\in {\cal N}_e}\left\{\begin{array}{ll}
\min\left\{1,\frac{\theta_{\max,i}-\bar \theta_e}{\theta_{e,i}-\bar \theta_e}\right\} &
\mbox{if}\quad \theta_{e,i}-\bar \theta_e>0,\\[0.1cm]
1 &\mbox{if} \quad \theta_{e,i}-\bar \theta_e=0,\\[0.0cm]
\min\left\{1,\frac{\theta_{\min,i}-\bar \theta_e}{\theta_{e,i}-\bar\theta_e}\right\} &
\mbox{if}\quad \theta_{e,i}-\bar \theta_e<0,
\end{array}\right.\label{dglim}
\end{equation}
where ${\cal N}_e$ is the set of vertices of element $e$ and $\theta_{e,i}=
\bar\theta_e+(\Delta\theta)_e(\MM{x}_i)$ is
the unconstrained value of the $DG_1$ shape function at the $i$-th vertex.}
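A direct transcription of \eqref{dglim} for a single element reads as follows (a sketch; the function name is ours and the per-vertex bounds are assumed to have been gathered beforehand).
\begin{verbatim}
import numpy as np

def element_correction_factor(theta_bar_e, theta_e_vertices, theta_min, theta_max):
    # Correction factor alpha_e for one element:
    #   theta_bar_e      : element mean
    #   theta_e_vertices : unconstrained vertex values theta_{e,i}
    #   theta_min/max    : per-vertex bounds from the surrounding element means
    alpha = 1.0
    for th, lo, hi in zip(theta_e_vertices, theta_min, theta_max):
        d = th - theta_bar_e
        if d > 0:
            alpha = min(alpha, (hi - theta_bar_e) / d)
        elif d < 0:
            alpha = min(alpha, (lo - theta_bar_e) / d)
    return max(0.0, min(1.0, alpha))

# a vertex value overshooting its bound is scaled back exactly onto the bound
a = element_correction_factor(1.0, [1.4, 0.9], [0.8, 0.8], [1.2, 1.2])
assert np.isclose(a, 0.5)   # (1.2 - 1.0) / (1.4 - 1.0)
\end{verbatim}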
\revised{For our temperature space $DG_1 \times DG_2$ applied to
numerical weather prediction applications, we assume that we have a
columnar mesh. This means that the prismatic elements are stacked
vertically in layers, with vertical sidewalls (but possibly with
tilted top and bottom faces to facilitate terrain-following meshes,
so that the elements are trapezia). This allows us to adopt a Taylor
basis in the vertical, \emph{i.e.} the basis in local coordinates is
the tensor product of a Taylor basis in the vertical with a Lagrange
basis in the horizontal. We write}
\begin{equation}
\label{eq:2d-decomp}
\theta = {\bar{\theta}} +
(\theta_1 - \bar{\theta}) + (\theta-\theta_1),
\end{equation}
\revised{
where $\theta_1\in DG_1\times DG_1$, and satisfies the following conditions:
\begin{enumerate}
\item $\bar{\theta}_1=\bar{\theta}$,
\item $\pp{\theta_1}{z}$ and $\pp{\theta}{z}$ take the same values
along the horizontal element midline in local coordinates.
\end{enumerate}
}\noindent Then, $\pp{\theta}{z} \in DG_1\times
DG_1$ whilst $\pp{\theta_1}{z} \in DG_1 \times DG_0$.
First, we limit the quadratic component in the vertical (the third term in
Equation \eqref{eq:2d-decomp}), performing the following steps.
\begin{enumerate}
\item In each element, compute $\pp{\theta_1}{z}$, and
\revised{evaluate the derivative at the horizontal cell midline to
obtain $\overline{\pp{\theta_1}{z}} \in DG_1 \times DG_0$.} If
the quadratic component $\theta-\theta_1$ is limited to zero then
$\pp{\theta_1}{z}$ will become equal to
$\overline{\pp{\theta_1}{z}}$.
\item In each column, at each vertex $i$, compute bounds
$\pp{\theta}{z}|_{\min,i}$ and $\pp{\theta}{z}|_{\max,i}$ by taking
the minimum and maximum values of $\overline{\pp{\theta_1}{z}}$ at that vertex over the
elements \revised{sharing that vertex} in the column.
\item In each element, compute element correction factors $\alpha_{1,e}$
according to
\begin{equation}
\alpha_{1,e}=\min\limits_{i\in {\cal N}_e}\left\{\begin{array}{ll}
\min\left\{1,\frac{\pp{\theta}{z}|_{\max,i}-\overline{\pp{\theta}{z}}|_{e,i}}
{\pp{\theta}{z}|_{e,i}-\overline{\pp{\theta}{z}}|_{e,i}}\right\} &
\mbox{if}\quad \pp{\theta}{z}|_{e,i}-\overline{\pp{\theta}{z}}|_{e,i}>0,\\[0.1cm]
1 &\mbox{if} \quad \revised{\pp{\theta}{z}|_{e,i}}-\overline{\pp{\theta}{z}}|_{e,i}=0,\\[0.0cm]
\min\left\{1,\frac{\pp{\theta}{z}|_{\min,i}-\overline{\pp{\theta}{z}}|_{e,i}}
{\pp{\theta}{z}|_{e,i}-\overline{\pp{\theta}{z}}|_{e,i}}\right\} &
\mbox{if}\quad \pp{\theta}{z}|_{e,i}-\overline{\pp{\theta}{z}}|_{e,i}<0.
\end{array}\right.
\end{equation}
\end{enumerate}
This approach can also be extended to meshes in spherical geometry for
which all side walls are parallel to the radial
direction\footnote{Such meshes arise when terrain following grids are
used in spherical geometry.}, having replaced $\pp{}{z}$ by the
radial derivative.
Second, we apply the vertex-based limiter to the {$DG_1\times DG_1$}
component $\theta_1$, obtaining limiting constants $\alpha_0$. We then
finally evaluate
\begin{equation}
\theta \mapsto \theta = \bar{\theta} + \alpha_0(\theta_1 - \bar{\theta}) + \alpha_1(\theta-\theta_1).
\end{equation}
To reduce diffusion of smooth extrema, it was recommended in
\citet{kuzmin2010vertex} to recompute the $\alpha_0$ coefficients
according to
\begin{equation}
\alpha_{0,e} \mapsto \max(\alpha_{0,e},\alpha_{1,e}).
\end{equation}
However, this does not work in the case of $DG_1 \times DG_2$ since
there is no quadratic component in the horizontal direction, and hence
nonsmooth extrema in the horizontal direction will not be detected.
{A possible remedy is to use $\alpha_{0,e}$ for the horizontal
gradient and $\max(\alpha_{0,e},\alpha_{1,e})$ for the vertical
gradient or to limit the directional derivatives separately using an
anisotropic version of the vertex-based slope limiter
(\citet{dg_anisotropic}).}
\revised{This limiter is applied to the input to $\tilde{A}$ and after
each SSPRK stage, to ensure that no new maxima or minima appear in
the solution over the timestep.}
\subsubsection{Flux corrected remapping}
\label{sec:fcr}
\revised{The final step of the embedded DG scheme is the projection
$P$ of the DG solution (which we denote here as $\hat{\theta}$) back
into $V$. We obtain a high-order, but oscillatory solution, which we
denote $\theta^H$. To obtain a bounded solution, we introduce a
localised element-based limiter that blends $\theta^H$ with a
low-order bounded solution ${\theta}^L$, such that high-order
approximation is preserved wherever overshoots and undershoots are not
present.}
\revised{First, we must obtain the low-order bounded solution. Using
the Taylor basis, we remove the quadratic part of $\hat{\theta}$,
to obtain $\tilde{\theta}\in DG_1\times DG_1$. A low-order bounded solution can then be obtained
by applying a lumped mass projection,}
\begin{equation}
\revised{M_i \theta_i^L = \int_\Omega \MM{P}hi_i\tilde{\theta}\diff x
= \sum_{k=1}^m Q_{ik}\tilde{\theta}_k, \MM{Q}uad i=1,\ldots,m, }
\label{l2lumped}
\end{equation}
where the lumped mass $M$ is defined by
\begin{equation}
M_i = \int_\Omega \MM{P}hi_i \diff x,
\end{equation}
the projection matrix $Q$ is defined by
\revised{
\begin{equation}
Q_{ik} = \int_\Omega \MM{P}hi_i\MM{P}si_k\diff x, \MM{Q}uad
i=1,\ldots,n,\, k=1,\ldots,\revised{m,}
\end{equation}}
$\{\MM{P}si_i\}_{i=1}^\revised{m}$ is a \revised{Lagrange} basis for
$DG_1\times DG_1$ \revised{and $\{\MM{P}hi_i\}_{i=1}^n$ is a
\revised{Lagrange} basis for $DG_1\times CG_2$. }
The lumped mass $M$ and projection matrix $Q$ both have strictly
positive entries. This means that for each $1\leq i \leq n$, the basis
coefficient $\theta^L_i$ is a weighted average of values of
\revised{$\tilde{\theta}$} coming from elements that lie in $S(i)$, the support of
$\MM{P}hi_i$. The weights are all positive, and hence the value of
$\theta^L_i$ is bounded by the maximum and minimum values of
\revised{$\tilde{\theta}$} in $S(i)$. Hence, no new maxima or minima appear in the
low order solution.
\revised{Next, we combine the low order and high order solutions
element-by-element, in a process called element-based flux
correction. Element based flux correction was introduced in \citep{lohner1987finite} and formulated for conservative remapping in
{\citep{Lohner2008,kuzmin2010failsafe}}. Here, we use a new localised
element-based formulation, where element contributions to the low and
high order solutions are blended locally and then assembled.}
\revised{To formulate the element-based limiter, we note that the} {consistent mass counterpart of (\ref{l2lumped}) is given by
\begin{equation}
\sum_{j=1}^nM_{ij}\theta_j^H = \int_\Omega \MM{P}hi_i\hat{\theta}\diff x, \MM{Q}uad i=1,\ldots,n,
\label{l2consistent}
\end{equation}
where
\begin{equation}
M_{ij} = \int_\Omega \MM{P}hi_i\MM{P}hi_j \diff x.
\end{equation}
}
First, by repeated addition and subtraction of terms, we write (with no implied sum over the
index $i$)
\revised{\begin{equation}
M_i\theta^H_i = M_i\theta^L_i + f_i
\end{equation}
where
\begin{align}
f_i&=M_i\theta^H_i-\sum_jM_{ij}\theta^H_j
+ M_i\theta^L_i + \sum_jM_{ij}\theta^H_j,\\
&= M_i\theta^H_i-\sum_jM_{ij}\theta^H_j
+ \int_{\Omega} \MM{P}hi_i(\hat{\theta}-\tilde{\theta})\diff x.
\end{align}}
This can be decomposed into elements to obtain
\revised{
\begin{equation}
M_i\theta^H_i = \sum_e\left(M_i^e\theta^L_i + f_i^e\right), \MM{Q}uad
f_i^e = M_i^e\theta^H_i-\sum_jM_{ij}^e\theta^H_j
+ \int_e \MM{P}hi_i(\hat{\theta}-\tilde{\theta})\diff x,
\end{equation}
where
\begin{equation}
M_i^e = \int_e \MM{P}hi_i \diff x, \MM{Q}uad \mbox{ and }
M_{ij}^e = \int_e \MM{P}hi_i\MM{P}hi_j \diff x.
\end{equation}}
\revised{Importantly, the contributions $f^e_i$ of element $e$
to its vertices sum to zero, since
\begin{align}
\nonumber
\sum_{i=1}^nf_i^e &=\sum_{i=1}^nM_i^e\theta^H_i-\sum_{i=1}^n\sum_{j=1}^nM_{ij}^e\theta_j^H + \int_e\underbrace{\sum_{i=1}^n\MM{P}hi_i}_{=1}(\hat{\theta}-\tilde{\theta})\diff x, \\
&=
\underbrace{\sum_{i=1}^nM_i^e\theta^H_i-\sum_{j=1}^nM_{j}^e\theta_j^H}_{=0}
+ \underbrace{\int_e(\hat{\theta}-\tilde{\theta})\diff x}_{=0}=0.
\end{align}
It follows that the total mass of the solution remains unchanged
(i.e., $\sum_{i=1}^nM_i\theta_i^H=\sum_{i=1}^nM_i\theta_i^L$) if all
contributions of the same element are reduced by the same amount. }
We can then choose element limiting constants $\alpha^e$
to get
\begin{equation}
M_i\theta^H_i = \sum_e\left(M_i^e\theta^L_i + \alpha_e f_i^e\right),
\end{equation}
where $0\leq\alpha_e\leq 1$ is a limiting constant for each element
which is chosen to satisfy {vertex bounds obtained from the nodal
values of $\hat{\theta}$.}
{
The bounds in each vertex are obtained as follows. First element
bounds $\theta_{\max}^e$ and $\theta_{\min}^e$ are obtained from
$\hat{\theta}$ by maximising/minimising over the vertices
of element $e$. Then for each vertex $i$, maxima/minima are
obtained by maximising/minimising over the elements
containing the vertex:
\begin{equation}
\theta_{\max,i}=\max_e\theta_{\max}^e,\MM{Q}uad
\theta_{\min,i}=\min_e\theta_{\min}^e.
\end{equation}
The correction factor $\alpha_e$ is chosen so as to enforce the
local inequality constraints
\begin{equation}
M_i^e\theta_{\min,i}\le M_i^e\theta_i^L+\alpha_ef_i^e\le
M_i^e\theta_{\max,i}
\end{equation}
Summing over all elements, one obtains the corresponding global estimate
\begin{equation}
M_i\theta_{\min,i}\le M_i\theta_i^L+\sum_e\alpha_ef_i^e\le
M_i\theta_{\max,i},
\end{equation}
which proves that the corrected value $\theta_i^C:=
\theta_i^L+\frac{1}{M_i}\sum_e\alpha_ef_i^e$ is bounded by
$\theta_{\max,i}$ and $\theta_{\min,i}$.}
To enforce the above maximum principles, we limit
the element contributions $f_i^e$ using
\begin{equation}
\alpha_e=\min\limits_{i\in {\cal N}_e}
\left\{\begin{array}{ll}
\min\left\{1,\frac{M_i^e( \theta_{\max,i} -\theta^L_i)}{f_{i}^{e}}\right\} &
\mbox{if}\MM{Q}uad f_{i}^{e}>0,\\[0.1cm]
1 &\mbox{if} \MM{Q}uad f_{i}^{e}=0,\\[0.0cm]
\min\left\{1,\frac{M_i^e (\theta_{\min,i}- \theta^L_i)}{f_{i}^{e}}\right\} &
\mbox{if}\MM{Q}uad f_{i}^{e}<0.
\end{array}\right.
\end{equation}
This definition of $\alpha_e$ corresponds to a localised version of
the element-based multidimensional FCT limiter
(\citep{lohner1987finite,fctools}) and has the same structure as
formula (\ref{dglim}) for the correction factors that we used to
constrain the $DG_1$ approximation. \revised{A further advantage of
the localised formulation is that the limited fluxes can be built
independently in each element, before assembling globally and
dividing by the global lumped mass by iterating over nodes.}
\section{Numerical Experiments}
\label{sec:numerics}
\begin{figure}
\caption{\label{fig:solid}
\label{fig:solid}
\end{figure}
In this section, we provide some numerical experiments demonstrating the
localised limiter for embedded Discontinuous Galerkin schemes.
\subsection{Solid body rotation}
\label{sec:solid}
In this standard test case, the transport equations are solved in the
unit square $\Omega=(0,1)^2$ with velocity field
$\MM{u}(x,y)=(0.5-y,x-0.5)$, \emph{i.e.} a solid body rotation in
anticlockwise direction about the centre of the domain, so that the
exact solution at time $t=2\MM{P}i$ is equal to the initial condition. The
initial condition is chosen to be the standard hump-cone-slotted
cylinder configuration defined in \citet{leveque96:highres}, and
solved on a regular mesh with element width $h=1/100$ \revised{and Courant number 0.3}. The result,
shown in Figure \ref{fig:solid}, is comparable with the result for the
\revised{$DG_1$} discontinuous Galerkin vertex-based limiter shown in
Figure 2 of \citet{kuzmin2010vertex}; it is free from over- and
undershoots and exhibits a similar amount of numerical
diffusion. \revised{It is also hard to distinguish between the
$x$-direction, where the finite element space is discontinuous, and
the $y$-direction, where the space is continuous. This suggests
that we have achieved our goal of constructing a limited transport
scheme for our partially-continuous finite element space.}
\revised{\subsection{Advection of a discontinous function with curvature}
\label{sec:curvybump}
In this test case, the transport equations are solved in the unit
square $\Omega=(0,1)^2$ with velocity field $\MM{u}=(1,0)$,
\emph{i.e.} steady translation in the $x$-direction (which is the
direction of discontinuity in the finite element space). The initial
condition is
\begin{equation}
\theta = \left\{
\begin{array}{cc}
4y(1-y) + 1 & \mbox{ if } 0.2<x<0.4, \\
4y(1-y) & \mbox{ otherwise. }
\end{array}\right.
\end{equation}
This test case is challenging because the height of the ``plateau''
next to the continuity varies as a function of $y$ (\emph{i.e.}, in
the direction tangential to the discontinuity); this means that the
behaviour of the limiter is more sensitive to the process of obtaining
local bounds.
The equations are integrated until $t=0.4$ in a $100\times100$ square
grid and Courant number 0.3. The results are showing in Figure
\ref{fig:curvybump}. One can see qualitatively that the degradation in
the solution due to the limiter and numerical errors is not too great.
}
\begin{figure}
\caption{\label{fig:curvybump}
\label{fig:curvybump}
\end{figure}
\subsection{Convergence test: deformational flow}
\label{sec:convergence}
In this test, we consider the advection of a smooth function by a
deformational flow field that is reversed so that the function at time
$t=1$ is equal to the initial condition. As is standard for this type
of test, we add a translational component to the flow and solve the
problem with periodic boundary conditions to eliminate the possibility
of fortuitous error cancellation due to the time reversal.
The transport equations are solved in a unit square, with periodic boundary
conditions in the $x$-direction. The initial condition is
\[
\theta(\MM{x},0) = 0.25(1+\cos(r)),
\MM{Q}uad r = \min\left(0.2,\sqrt{(x-0.3)^2 + (y-0.5)^2}/0.2\right),
\]
and the velocity field is
\[
\MM{u}(\MM{x},t) = \left(1-5(0.5-t)\sin(2\MM{P}i(x-t))\cos(\MM{P}i y),
5(0.5-t)\cos(2\MM{P}i(x-t))\sin(\MM{P}i y)\right),
\]
\revised{where $\MM{x}=(x,y)$}. The problem was solved on a sequence
of regular meshes with square elements \revised{at fixed timestep $\Delta t
=0.000856898$}, and the $L^2$ error was computed. A plot of the errors
is provided in Figure \ref{fig:convergence}. As expected, we obtain
second-order convergence (the quadratic space in the vertical does not
enhance convergence rate because the full two-dimensional quadratic
space is not spanned).
\begin{figure}
\caption{\label{fig:convergence}
\label{fig:convergence}
\end{figure}
\section{Summary and Outlook}
\label{sec:outlook}
In this paper we described a limited transport scheme for
partially-continuous finite element spaces. Motivated by numerical
weather prediction applications, where the finite element space for
temperature and other tracers is imposed by hydrostatic balance and
wave propagation properties, we focussed particularly on the case of
tensor-product elements that are continuous in the vertical direction
but discontinuous in the horizontal. However, the entire methodology
applies to standard \revised{$C_0$} finite element spaces. The
transport scheme was demonstrated in terms of convergence rate on
smooth solutions and dissipative behaviour for non-smooth solutions in
some standard testcases.
Having a bounded transport scheme for tracers is a strong requirement
for numerical weather prediction algorithms; the development of our
scheme advances the practical usage of the compatible finite element
methods described in the introduction. The performance of this
transport scheme applied to temperature in a fully coupled atmosphere
model will be evaluated in 2D and 3D testcases as part of the ``Gung
Ho'' UK Dynamical Core project in collaboration with the Met Office.
In the case of triangular prism elements we anticipate that it may be
necessary to modify the algorithm above to limit the time derivatives
as described in \citet{kuzmin2013slope}.
A key novel aspect of our transport scheme is the localised
element-based FCT limiter. This limiter has much broader potential
for use in FCT schemes for continuous finite element spaces, which
will be explored and developed in future work.
\end{document}
|
\begin{equation}gin{document}
\title{Berry-Esseen bounds in the inhomogeneous Curie-Weiss model with external field}
\author{
Sander Dommers
\footnote{University of Hull, School of Mathematics and Physical Sciences, Cottingham Road, HU6 7RX Hull, United Kingdom. {\tt [email protected]}}
\and
Peter Eichelsbacher
\footnote{Ruhr-Universit\"at Bochum, Fakult\"at f\"ur Mathematik, Universit\"atsstra\ss e 150, 44780 Bochum, Germany. {\tt [email protected]}}
}
\maketitle
\begin{equation}gin{abstract}
We study the inhomogeneous Curie-Weiss model with external field, where the inhomogeneity is introduced by adding a positive weight to every vertex and letting the interaction strength between two vertices be proportional to the product of their weights. In this model, the sum of the spins obeys a central limit theorem outside the critical line. We derive a Berry-Esseen rate of convergence for this limit theorem using Stein's method for exchangeable pairs. For this, we, amongst others, need to generalize this method to a multidimensional setting with unbounded random variables.
\end{abstract}
\section{Introduction, model and main results}
The inhomogeneous Curie-Weiss model (ICW) was recently introduced in~\cite{GiaGibHofPri16}. In this model, every vertex has an Ising spin attached to it and also has a positive weight. The spins interact with each other, where the (ferromagnetic) interaction strength between two spins is proportional to the product of the weights of the vertices, and the spins also interact with an external field.
This model arose in the study of the annealed Ising model on inhomogeneous random graphs \cite{GiaGibHofPri16}. In the inhomogeneous random graph model, an edge between two vertices is present in the graph with a probability that is proportional to the product of the weights of the vertices. Annealing the Ising model over these random graphs by taking appropriate expectations results in a mean-field type model where spins interact with an average of their neighborhood. When two weights are large, there will be an edge between them more often in the random graph, and therefore the interaction strength in the annealed model will be large as well. Indeed, it can be shown that the resulting interaction is, approximately, also proportional to the product of the weights.
In~\cite{GiaGibHofPri16}, it is proved that in the ICW, and hence also the annealed Ising model on inhomogeneous random graphs, the sum of spins, in the presence of an external field or above the critical temperature, satisfies a central limit theorem. The study of this model continued in~\cite{DomGiaGibHofPri16}, where critical exponents were computed and a non-standard limit theorem was obtained at the critical point, and in~\cite{DomGiaGibHof18}, where large deviations of the sum of spins were studied.
Stein's method for exchangeable pairs was introduced in \cite{Ste86} and is now a popular method to obtain rates of convergence for central and other limit theorems. Given a random variable $X$, Stein's
method is based on the construction of another variable $X'$ (some coupling) such that the pair
$(X,X')$ is exchangeable, i.e., their joint distribution is symmetric. The approach essentially uses the
elementary fact that if $(X,X')$ is an exchangeable pair, then $\mathbb{E} g(X,X') = 0$ for all antisymmetric
measurable functions $g(x, y)$ such that the expectation exists. A theorem of Stein shows that a measure of proximity of
$X$ to normality may be provided in terms of
the exchangeable pair, requiring $X'-X$ to be sufficiently small, see \cite[Theorem 1, Lecture III]{Ste86}.
Stein's approach has been successfully applied in many models, see e.g.\ \cite{bookDiaconis} and references
therein. In \cite{RR}, the range of application was extended by replacing the linear regression property
by a weaker condition. Moreover the method was successfully applied to several mean-field models in statistical mechanics, including the (homogeneous) Curie-Weiss model~\cite{ChaSha11, EicLow10}, the Hopfield model~\cite{EicMar14}, the Curie-Weiss-Potts model~\cite{EicMar15} and $O(N)$ models~\cite{KirMec13,KirNaw16}.
In this paper, we derive a Berry-Esseen rate of convergence for the central limit theorem of the sum of spins in the ICW, i.e., we show that the Kolmogorov distance between the normalized sum of spins and the normal distribution is bounded from above by a constant divided by the square root of the number of vertices. This generalizes the results in~\cite{EicLow10} to the inhomogeneous setting and also to the setting with an external field.
When deriving the so-called {\it regression equation} for the sum of spins, which is the starting point of Stein's method for exchangeable pairs, one sees that not only the sum of spins, but also a weighted sum of spins shows up, where every spin value is multiplied by the weight of its vertex. Hence, one obtains a two-dimensional regression equation. Looking at the joint distribution of the sum of spins and the weighted sum of spins is for example also used to study their large deviations~\cite{DomGiaGibHof18}. Another complication that arises is that the weighted spin sum is not necessarily uniformly bounded.
Multidimensional versions of Stein's method for exchangeable pairs are for example studied in \cite{GesineAdrian}
and \cite{FanRol15}. Stein's method for unbounded exchangeable pairs have for example been studied in~\cite{CheSha12} and~\cite{ShaZha17}. We combine ideas from the latter paper with ideas from~\cite{FanRol15} to derive bounds between marginals of unbounded multidimensional random variables to the standard normal distribution.
The rest of this paper is organized as follows. In the next subsections we formally introduce the ICW, state our main results and provide a short discussion. In Section~\ref{sec-Stein}, we prove the version of Stein's method we need. Finally, in Section~\ref{sec-BEICW}, we use this to prove the Berry-Esseen bound for the ICW.
\subsection{The inhomogeneous Curie-Weiss model}
We now formally introduce the model and present some preliminary results on this model. We write $[n]:=\{1,\ldots,n\}$ and to every vertex $i\in [n]$ we assign a weight $w_i>0$. We need to make some assumptions on the weight sequence $(w_i)_{i\in[n]}$ which are stated below, where we write $W_n=w_I$, with $I\sim Uni[n]$.
\begin{equation}gin{condition}[Weight regularity]\label{cond-WeightReg} There exists a random variable $W$ such that, as $n\rightarrow\infty$,
\begin{equation}gin{enumerate}[(i)]
\item $W_n \stackrel{d}{\longrightarrow} W$,
\item $\mathbb{E}[W_n^2] =\frac{1}{n}\sum_{i\in [n]} w^2_i \rightarrow \mathbb{E}[W^2]< \infty$,
\item $\mathbb{E}[W_n^3] =\frac{1}{n}\sum_{i\in [n]} w_i^3 \rightarrow \mathbb{E}[W^3]< \infty$.
\end{enumerate}
Further, we assume that $\mathbb{E}[W]>0$.
\end{condition}
The inhomogeneous Curie-Weiss model is then defined as follows:
\begin{equation}gin{definition}[Inhomogeneous Curie-Weiss model]
Given the weights $(w_i)_{i\in[n]}$,
the inhomogeneous Curie-Weiss model is defined by the Boltzmann-Gibbs measure which is, for
any
\noindent
${\sigma = \{\sigma_i\}_{i\in[n]} \in \{-1,1\}^{n}}$, given by
\begin{equation}\label{eq-BoltzmannGibbs}
\mu_{n}(\sigma) = \frac{e^{-H_n(\sigma)}}{Z_n},
\end{equation}
where $H_n(\sigma)$ is the Hamiltonian given by
$$
H_n(\sigma)=-\frac{\begin{equation}ta}{2 \ell_n}\biggl(\sum_{i\in[n]}w_i \sigma_{i}\biggr)^2-h\sum_{i \in[n]}\sigma_i,
$$
with $\begin{equation}ta\geq0$ the inverse temperature, $h\in\mathbb{R}$ the external magnetic field and
$$
\ell_n=\sum_{i\in [n]} w_i =n\mathbb E[W_n],
$$
and where $Z_{n}$ is the normalizing partition function, i.e.,
$$
Z_n = \sum_{\sigma \in \{-1,1\}^{n}}e^{-H_n(\sigma)}.
$$
\end{definition}
Note that we retrieve the standard Curie-Weiss model with external field by choosing $w_i\equiv 1$.
The inhomogeneous Curie-Weiss model was obtained in~\cite{GiaGibHofPri16} by annealing the Ising model over inhomogeneous random graphs with these weights. In that case $\begin{equation}ta$ has to be replaced by $\sinh \begin{equation}ta$ and several error terms have to be considered. For simplicity, we here only study the model stated above.
For a given configuration $\sigma$, let $m_n$ be the average spin value, i.e.,
$$
m_n=\frac{1}{n} \sum_{i\in[n]}\sigma_i.
$$
Several properties of $m_n$ under the Boltzmann-Gibbs measure~\eqref{eq-BoltzmannGibbs} have been obtained in~\cite{GiaGibHofPri16}. We summarize the results that are important for this paper below.
In~\cite{GiaGibHofPri16} first of all, it is shown that, for $h\neq0$, the magnetization in the thermodynamic limit equals
\begin{equation}\label{eq-magnetization}
M(\begin{equation}ta,h) := \lim_{n\to\infty} \mathbb E[m_n] = \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right) \right],
\end{equation}
where $x^* := x^*(\begin{equation}ta,h)$ is equal to the unique solution with the same sign as $h$ of the fixed point equation
\begin{equation}\label{eq-fixedpoint-x}
x^* = \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}} W \right].
\end{equation}
When $h\to0$, the model undergoes a phase transition, i.e., there exists a $\begin{equation}ta_c\geq0$, such that the spontaneous magnetization
$$
M(\begin{equation}ta, 0^+):= \lim_{h\searrow0} M(\begin{equation}ta,h) \left\{\begin{equation}gin{array}{ll} =0, \qquad & {\rm for\ }\begin{equation}ta<\begin{equation}ta_c, \\ >0, \qquad & {\rm for\ }\begin{equation}ta>\begin{equation}ta_c.\end{array} \right.
$$
In~\cite{DomGiaGibHofPri16}, it is shown that for $\begin{equation}ta=\begin{equation}ta_c$ we also have that $M(\begin{equation}ta, 0^+)=0$. The critical value is given by
$$
\begin{equation}ta_c = \frac{\mathbb E[W]}{\mathbb E[W^2]}.
$$
We define the uniqueness CLT regime as
\begin{equation} \label{CLTunique}
\mathcal{U} = \{(\begin{equation}ta,h) \,:\, \begin{equation}ta\geq0, h\neq0 {\rm\ or\ } 0<\begin{equation}ta<\begin{equation}ta_c, h=0 \}.
\end{equation}
In the uniqueness CLT regime, the fixed point equation \eqref{eq-fixedpoint-x} has a unique solution, and the sum of spins satisfies the central limit theorem, i.e., for $(\begin{equation}ta,h) \in \mathcal{U}$,
$$
\sqrt{n}\left(m_n- \mathbb E[m_n] \right) \stackrel{d}{\longrightarrow} \mathcal{N}(0,\chi),
$$
where $\chi$ is the susceptibility given by
\begin{equation}
\chi := \chi(\begin{equation}ta,h) :=\lim_{n\to\infty} \frac{\partial}{\partial h} \mathbb E[m_n] = \frac{\partial}{\partial h} M(\begin{equation}ta,h).
\end{equation}
This was proved in \cite{GiaGibHofPri16} by analyzing cumulant generating functions. In this paper, we analyze the rate of convergence for this central limit theorem.
We can make the value of the susceptibility more explicit by carrying out the differentiation of the magnetization:
\begin{equation}gin{align*}
\chi(\begin{equation}ta,h) &=\frac{\partial}{\partial h} M(\begin{equation}ta,h) = \frac{\partial}{\partial h} \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right) \right] \nonumber\\
&= \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right)\left(1+\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W \frac{\partial x^*}{\partial h}\right) \right].
\end{align*}
Using the fixed point equation~\eqref{eq-fixedpoint-x},
\begin{equation}gin{align*}
\frac{\partial x^*}{\partial h} &= \frac{\partial }{\partial h}\mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}} W \right] \nonumber\\
&= \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right)\left(1+\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W \frac{\partial x^*}{\partial h} \right)\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}} W \right].
\end{align*}
Solving for $\frac{\partial x^*}{\partial h}$ gives
$$
\frac{\partial x^*}{\partial h} = \frac{\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}} \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right) W \right]}{1-\frac{\begin{equation}ta}{\mathbb E[W]}\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right) W^2 \right]},
$$
and hence
\begin{equation}\label{eq-susceptibility}
\chi(\begin{equation}ta,h)=1- \mathbb E\left[\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right) \right]+\frac{\frac{\begin{equation}ta}{\mathbb E[W]} \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right) W \right]^2}{1-\frac{\begin{equation}ta}{\mathbb E[W]}\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W]}}W x^* + h\right)\right) W^2 \right]}.
\end{equation}
We define the finite size analogues of $M(\begin{equation}ta,h)$ and $\chi(\begin{equation}ta,h)$, given in~\eqref{eq-magnetization} and~\eqref{eq-susceptibility} respectively, as
\begin{equation}gin{equation} \label{Mn}
M_n := M_n(\begin{equation}ta,h) := \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right) \right],
\end{equation}
and
\begin{equation}gin{eqnarray}\label{eq-defchin}
\chi_n := \chi_n(\begin{equation}ta,h) &:= &1- \mathbb E\left[\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right) \right]
\\ \nonumber
&&+\frac{\frac{\begin{equation}ta}{\mathbb E[W_n]} \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right) W_n \right]^2}{1-\frac{\begin{equation}ta}{\mathbb E[W_n]}\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right) W_n^2 \right]},
\end{eqnarray}
respectively, where $x_n^* := x_n^*(\begin{equation}ta,h)$ is equal to the unique solution (see Lemma~\ref{lem-globalminGn} below that shows this uniqueness) with the same sign as $h$ of the fixed point equation
\begin{equation}\label{eq-fixedpoint-xn}
x_n^* = \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} W_n \right].
\end{equation}
\subsection{Main results}
Let $d_K$ denote the Kolmogorov distance, i.e., for random variables $X$ and $Y$,
$$
d_K(X,Y) := \sup_{z\in \mathbb{R}} | \mathbb P(X\leq z) - \mathbb P(Y\leq z)|.
$$
Our main result is then as follows.
\begin{equation}gin{theorem}[Berry-Esseen bound for the ICW]\label{thm-berryesseen-magnetization}
Let
\begin{equation}\label{eq-defXn}
X_n =\sqrt{n} \frac{m_n - M_n}{\sqrt{\chi_n}},
\end{equation}
with $M_n$ and $\chi_n$ defined in \eqref{Mn} and \eqref{eq-defchin} and let $Z\sim \mathcal{N}(0,1)$. Suppose that the weights $(w_i)$ satisfy Condition~\ref{cond-WeightReg}(i)--(iii). Then, for all $(\begin{equation}ta,h) \in \mathcal{U}$, there exists a constant $0<C=C(\begin{equation}ta,h, (w_i)_i)<\infty$, such that
\begin{equation}\label{eq-berryesseenbound}
d_K(X_n,Z) \leq \frac{C}{\sqrt{n}}.
\end{equation}
Note that the constant depends on the entire weight sequence $(w_i)_{i\geq1}$. Since the quantities are uniformly bounded, one can determine them knowing the entire sequence.
Under Condtition~\ref{cond-WeightReg}(i)--(ii),
$$
d_K(X_n,Z) =o(1).
$$
\end{theorem}
\noindent
Note that we do not normalize $m_n$ using its expectation and variance as was done in~\cite{GiaGibHofPri16}, but instead use explicit quantities for this.
\noindent
We prove this theorem in Section~\ref{sec-BEICW} by using a version of Stein's method to estimate the distance from a standard normal distribution of a one dimensional marginal if one has a $d$-dimensional regression equation for exchangeable pairs. For this, suppose that $X$ and $X'$ are $d$-dimensional random vectors for some $d\geq 1$, and that $(X,X')$ is an exchangeable pair, i.e., their joint distribution is symmetric. We write the vector of differences as $D=X-X'$.
\noindent
We suppose that we have a {\it regression equation} for $(X,X')$ of the form
\begin{equation}\label{eq-regressionMD}
\mathbb E[ D \,|\, X] = \lambda \Lambda X + \lambda R,
\end{equation}
for some $0<\lambda<1$, invertible matrix $\Lambda$ and vector $R$.
\begin{equation}gin{theorem}[Stein's method]\label{thm-MarginalStein}
Suppose that $(X,X')$ is an exchangeable pair for $d$-dimensional vectors $X$ and $X'$
such that~\eqref{eq-regressionMD} holds. Then, with $Z$ a standard normal random variable and with $X_1$ denoting
the first component of vector $X$, we obtain
\begin{equation}\label{eq-thmStein}
d_K(X_1,Z) \leq \mathbb E\left[\left|1-\frac{1}{2\lambda}\mathbb E\left[\ell D D_1 \,|\, X\right]\right|\right]+ \frac{1}{\lambda} \mathbb E\left[\bigl|\mathbb E\left[|\ell D| D_1 \,|\, X\right]\bigr|\right]+ \frac{\sqrt{2\pi}}{4} \mathbb E\left[|\ell R|\right],
\end{equation}
where $D_1 = X_1 - X_1'$, $\ell$ is the first row of $\Lambda^{-1}$, i.e., $\ell := e_1^t \Lambda^{-1}$, and $\ell D$ and $\ell R$, respectively, denote the
Euclidean scalar product of the vectors.
\end{theorem}
\noindent
The proof of this theorem can be found in Section~\ref{sec-Stein}.
\subsection{Discussion}
\paragraph{Berry-Esseen bound for the sum of weighted spins} In Section~\ref{sec-MGFs}, we prove that also the sum of weighted spins $\sum_{i\in[n]} w_i\sigma_i$ satisfies the central limit theorem. Berry-Esseen bounds for this limit theorem can be derived in a similar way as is done for the sum of spins, although one needs to assume the convergence of one more moment of $W$ compared to Condition~\ref{cond-WeightReg} because of the extra factor $w_i$. We make some more detailed remarks at the end of the paper.
\paragraph{Limit theorems on the critical line.} For $h=0$ and $\begin{equation}ta>\begin{equation}ta_c$, the solution to the fixed point equation~\eqref{eq-fixedpoint-x} is not unique. We expect that our bounds still hold when one conditions on the magnetization being close to the value that corresponds to appropriate fixed point as was done for example in the Curie-Weiss-Potts model, see~\cite[Theorem 1.5]{EicMar15}.
For $h=0$ and $\begin{equation}ta=\begin{equation}ta_c$ the central limit theorem no longer holds. In~\cite{DomGiaGibHofPri16}, it is shown that one has to rescale the sum of spins with a different power of $n$ to obtain a limit theorem, and that the limit is nonnormal. It would be interesting to generalize our methods also to this case, for example by generalizing the density approach used in~\cite{EicLow10,ChaSha11}. However, when the weight distribution has a sufficiently heavy tail, the limiting distribution is of a form that is not covered anymore by the density approach.
\paragraph{Annealed Ising model on inhomogeneous random graphs} As mentioned, the ICW arose as an approximation for the annealed Ising model on inhomogeneous random graphs. We expect that our results remain true for this model, although if one wants to prove this, extra error terms caused by the approximation with the ICW have to be taken into account.
\paragraph{Quenched Ising model on inhomogeneous random graphs} One can also look at the quenched Ising model on inhomogeneous random graphs, i.e., the Ising model on a fixed realization of the random graph. In~\cite{GiaGibHofPri15}, it is shown that also in this case the central limit theorem holds in the uniqueness CLT regime \eqref{CLTunique}. It would be interesting to also obtain the rate of convergence for this model. This model is not of mean-field type, but spins only interact with their direct neighbors, which makes finding a suitable regression equation more difficult.
\paragraph{Inhomogeneous versions of other mean-field models} It would be interesting to see if results for other mean-field models, such as the ones studied in \cite{EicMar14, EicMar15, KirMec13, KirNaw16}, can be generalized to an inhomogeneous setting. For this, a complete multi-dimensional version of Stein's method for unbounded random variables will have to be derived.
In~\cite{DomKulSch17}, continuous spin models on random graphs were studied in the annealed setting, also resulting in a mean-field approximation. It would be interesting to see if the central limit theorem for the sum of spins can also be proved using our techniques for that model.
\section{Stein's method, proof of Theorem~\ref{thm-MarginalStein}}\label{sec-Stein}
In this section, we prove the bound in~\eqref{eq-thmStein}, by using ideas from~\cite[Theorem~2.2]{ShaZha17} and~\cite{FanRol15}. Similar ideas to obtain one-dimensional CLTs in a multidimensional setting were used in~\cite[Construction 1C]{CheRol10}.
\begin{equation}gin{proof}[Proof of Theorem~\ref{thm-MarginalStein}]
Note that it follows from the regression equation~\eqref{eq-regressionMD} that, for any function $F:\mathbb{R}^d\to\mathbb{R}^d$ such that all expectations below exist,
\begin{equation}gin{align*}
\frac1{2\lambda} &\mathbb E\left[\left(\Lambda^{-1}(X'-X)\right)^t \left(F(X')-F(X)\right)\right] \nonumber\\
& = \frac1{2\lambda} \mathbb E\left[\left(\Lambda^{-1}(X'-X)\right)^t \left(F(X')+F(X) \right)\right] + \frac{1}{\lambda}\mathbb E\left[\left(\Lambda^{-1}(X-X')\right)^t F(X)\right]\nonumber\\
& = \frac{1}{\lambda}\mathbb E\left[\left(\Lambda^{-1}\mathbb E[ D \,|\, X] \right)^t F(X)\right]\nonumber\\
&= \mathbb E\left[X^t F(X)\right] + \mathbb E\left[(\Lambda^{-1}R)^t F(X)\right],
\end{align*}
where we used exchangeability in the second equality and~\eqref{eq-regressionMD} in the last equality.
In particular, by choosing $F=f e_1$ for some function $f:\mathbb{R}\to\mathbb{R}$ such that all expectations below exist and rewriting,
\begin{equation}\label{eq-XfX}
\mathbb E\left[X_1 f(X_1)\right] = \frac{1}{2\lambda} \mathbb E\left[\ell D \left(f(X_1)-f(X'_1)\right)\right] - \mathbb E\left[\ell R f(X_1)\right].
\end{equation}
Here $\ell$ denotes the first row of $\Lambda^{-1}$.
\noindent
For $z\in\mathbb{R}$, let $f_z$ be the solution of the Stein equation
\begin{equation}\label{eq-steineq}
f'_{z}(x)-x f_{z}(x) = \ind{x \leq z} - \Phi(z),
\end{equation}
where $\Phi(z)$ is the distribution function of a standard normal random variable. The background of this equation
reads as follows. A standard Gaussian random variable $Z$ is characterized by the fact that for every absolutely continuous function $f:\mathbb{R} \to \mathbb{R}$ for which $\mathbb E \big[Zf(Z)\big]<\infty$ it holds that
\begin{equation}gin{equation} \label{steinnormal}
\mathbb E \big[f'(Z)-Zf(Z)\big]=0.
\end{equation}
This together with the definition of the Kolmogorov-distance is the motivation to study the Stein equation.
If we replace $x$ by a random variable $X$ and take expectations in the Stein equation \eqref{eq-steineq}, we infer that
$$\mathbb E \big[f_z'(X)-Xf_z(X)\big]= \mathbb P [X \leq z] - \Phi(z).
$$
The curious fact is that the left hand side of the last equation is frequently much simpler to bound than the right hand side and leads to the successfulness of the method.
As shown in, e.g.,~\cite[Lemma~2.3]{CheGolSha11}, $f_z$ satisfies, for all $x\in\mathbb{R}$,
\begin{equation}\label{eq-steineqprop1}
|x f_z(x)|\leq1,\qquad |f'_z(x)|\leq 1, \qquad 0<f_z(x)\leq \frac{\sqrt{2\pi}}{4},
\end{equation}
and $x f_z(x)$ is an increasing function of $x$.
\noindent
If we take $x=X_1$ in~\eqref{eq-steineq} and take expectations on both sides, we get, also using~\eqref{eq-XfX},
\begin{equation}gin{align*}
\mathbb P[X_1\leq z] - \Phi(z) &= \mathbb E\left[f'_z(X_1)-X_1 f_z(X_1)\right] \nonumber\\
&=\mathbb E\left[f'_z(X_1)\right] - \frac{1}{2\lambda} \mathbb E\left[\ell D \left(f_z(X_1)-f_z(X'_1)\right)\right] + \mathbb E\left[\ell R f_z(X_1)\right]\nonumber\\
&=\mathbb E\left[f'_z(X_1)\left(1-\frac{1}{2\lambda}\ell D D_1\right)\right] +\frac{1}{2\lambda} \mathbb E\left[\ell D \int_{-D_1}^0 f_z'(X_1) {\rm d} t \right]\nonumber\\
&\qquad - \frac{1}{2\lambda} \mathbb E\left[\ell D \int_{-D_1}^0 f'_z(X_1+t) {\rm d} t\right] + \mathbb E\left[\ell R f_z(X_1)\right]\nonumber\\
&=\mathbb E\left[f'_z(X_1)\left(1-\frac{1}{2\lambda}\mathbb E\left[\ell D D_1\,|\, X\right]\right)\right] +\frac{1}{2\lambda} \mathbb E\left[\ell D \int_{-D_1}^0 \bigl( f_z'(X_1)-f'_z(X_1+t) \bigr) {\rm d} t \right]\nonumber\\
&\qquad + \mathbb E\left[\ell R f_z(X_1)\right].
\end{align*}
Hence, we can bound, using~\eqref{eq-steineqprop1},
\begin{equation}gin{align}\label{eq-absPminusPhi}
\left|\mathbb P[X_1\leq z] - \Phi(z)\right| &\leq \mathbb E\left[\left|1-\frac{1}{2\lambda}\mathbb E\left[\ell D D_1 \,|\, X\right]\right|\right] + \frac{1}{2\lambda}\left|\mathbb E\left[\ell D \int_{-D_1}^0 \bigl( f_z'(X_1)-f'_z(X_1+t) \bigr) {\rm d} t \right] \right| \nonumber\\
&\qquad + \frac{\sqrt{2\pi}}{4} \mathbb E\left[|\ell R|\right].
\end{align}
We use~\eqref{eq-steineq} again to rewrite the second term as:
\begin{equation}gin{align*}
\frac{1}{2\lambda}&\left|\mathbb E\left[\ell D \int_{-D_1}^0 \bigl( f_z'(X_1)-f'_z(X_1+t) \bigr) {\rm d} t \right] \right| \nonumber\\
& = \frac{1}{2\lambda}\left|\mathbb E\left[\ell D \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t)+\ind{X_1\leq z}-\ind{X_1+t\leq z} \bigr) {\rm d} t \right] \right| \nonumber\\
&\leq \frac{1}{2\lambda}\left(\left|\mathbb E\left[\ell D \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t) \bigr) {\rm d} t \right] \right|+\left|\mathbb E\left[\ell D \int_{-D_1}^0 \bigl( \ind{X_1\leq z}-\ind{X_1+t\leq z} \bigr) {\rm d} t \right] \right|\right) \nonumber\\
&=: \frac{1}{2\lambda}\left(\left|I_1 \right|+\left|I_2\right|\right).
\end{align*}
Since $x f_z(x)$ is increasing in $x$,
\begin{equation}gin{align*}
0\leq \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t) \bigr) {\rm d} t &\leq \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1-D_1)f_z(X_1-D_1) \bigr) {\rm d} t \nonumber\\
&=D_1 \left(X_1f_z(X_1)-X_1' f_z(X_1')\right).
\end{align*}
Hence,
\begin{equation}gin{align*}
I_1 &= \mathbb E\left[\ell D\left(\ind{\ell D<0}+\ind{\ell D>0}\right) \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t) \bigr) {\rm d} t \right] \nonumber\\
&\leq \mathbb E\left[\ell D\ind{\ell D>0} \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t) \bigr) {\rm d} t \right] \nonumber\\
&\leq \mathbb E\left[|\ell D| \ind{\ell D>0} D_1 \left(X_1f_z(X_1)-X_1' f_z(X_1')\right) \right] \nonumber\\
&= \mathbb E\left[|\ell D| \left(\ind{\ell D<0}+\ind{\ell D>0}\right) D_1 X_1f_z(X_1) \right] \nonumber\\
&= \mathbb E\left[\mathbb E\left[|\ell D| D_1 \,|\, X\right] X_1f_z(X_1) \right] \nonumber\\
&\leq \mathbb E\left[\left|\mathbb E\left[|\ell D| D_1 \,|\, X\right] \right| \right],
\end{align*}
where we used that it follows from exchangeability that
$$
\mathbb E\left[|\ell D| \ind{\ell D>0} D_1 \left(X_1' f_z(X_1')\right) \right] =-\mathbb E\left[|\ell D| \ind{\ell D<0} D_1 \left(X_1 f_z(X_1)\right) \right],
$$
and that $|X_1f_z(X_1)|\leq1$. Similarly,
\begin{equation}gin{align*}
I_1 &\geq \mathbb E\left[\ell D\ind{\ell D<0} \int_{-D_1}^0 \bigl( X_1f_z(X_1)-(X_1+t)f_z(X_1+t) \bigr) {\rm d} t \right] \nonumber\\
&\geq -\mathbb E\left[|\ell D| \ind{\ell D<0} D_1 \left(X_1f_z(X_1)-X_1' f_z(X_1')\right) \right] \nonumber\\
&\geq -\mathbb E\left[\left|\mathbb E\left[|\ell D| D_1 \,|\, X\right] \right| \right],
\end{align*}
Combining the upper and lower bound on $I_1$ gives,
\begin{equation}\label{eq-bdI1}
|I_1| \leq \mathbb E\left[\left|\mathbb E\left[|\ell D| D_1 \,|\, X\right] \right| \right].
\end{equation}
\noindent
We can show in a similar way, using that $\ind{x \leq z}$ is non-increasing in $x$, that also
\begin{equation}\label{eq-bdI2}
|I_2| \leq \mathbb E\left[\left|\mathbb E\left[|\ell D| D_1 \,|\, X\right] \right| \right].
\end{equation}
Combining~\eqref{eq-absPminusPhi},~\eqref{eq-bdI1} and~\eqref{eq-bdI2} proves the theorem.
\end{proof}
\section{Berry-Esseen bound for the ICW, proof of Theorem~\ref{thm-berryesseen-magnetization}}\label{sec-BEICW}
We now use Theorem~\ref{thm-MarginalStein} to prove the Berry-Esseen bound in~\eqref{eq-berryesseenbound}. First, we define our exchangeable pairs and derive the regression equation in Section~\ref{sec-regression}. Then we prove that the central limit theorem holds for the weighted spin sum in Section~\ref{sec-MGFs} and in particular also show that certain moments converge to that of the normal distribution. Finally, in Section~\ref{sec-errorterms}, we bound all terms of~\eqref{eq-thmStein} to prove Theorem~\ref{thm-berryesseen-magnetization}.
\subsection{Exchangeable pairs and regression equation}\label{sec-regression}
We let $X_n$ be as in~\eqref{eq-defXn}.
Let $I \sim Uni[n]$ and let $\sigma'_I$ be drawn from the conditional distribution given $(\sigma_j)_{j\neq I}$. Define $X'_n$ as
$$
X'_n = X_n - \frac{1}{\sqrt{n}}\frac{\sigma_I- \sigma'_I}{\sqrt{\chi_n}}.
$$
Then, $(X_n, X'_n)$ indeed is an exchangeable pair.
\noindent
Let
$$
\tilde{M}_n := \tilde{M}_n(\begin{equation}ta,h) := \sqrt{\frac{\mathbb E[W_n]}{\begin{equation}ta}} x_n^*,
$$
with $x_n^*$ given in \eqref{eq-fixedpoint-xn}. Let
$$
\tilde{m}_n = \frac1n \sum_{j\in[n]} w_j \sigma_j.
$$
Let us define the constant
\begin{equation}gin{equation} \label{sigma}
\sigma^2(x_n^*, \begin{equation}ta,h) := \frac{1}{1-\frac{\begin{equation}ta}{\mathbb E[W_n]}\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right)W_n^2\right]},
\end{equation}
and let $\tilde{\chi}_n$ be the constant given by
\begin{equation}gin{align}\label{eq-choicetildechi}
\tilde{\chi}_n := \tilde{\chi}_n(\begin{equation}ta,h) &:= \sigma^2(x_n^*, \begin{equation}ta,h) \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right)W_n^2\right] \nonumber\\
&= \frac{\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right)W_n^2\right]}{1-\frac{\begin{equation}ta}{\mathbb E[W_n]}\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right)W_n^2\right]}.
\end{align}
\noindent
Now define $\tilde{X}_n$ as
\begin{equation}\label{eq-deftildeXn}
\tilde{X}_n =\sqrt{n} \frac{\tilde{m}_n - \tilde{M}_n}{\sqrt{\tilde{\chi}_n}},
\end{equation}
and
$$
\tilde{X}'_n = \tilde{X}_n - \frac{1}{\sqrt{n}}\frac{w_I(\sigma_I- \sigma'_I)}{\sqrt{\tilde{\chi}_n}},
$$
so that also $(\tilde{X}_n, \tilde{X}'_n)$ is an exchangeable pair.
\noindent
From now on, we write $X=(X_n,\tilde{X}_n)^t$ and $X'=(X'_n,\tilde{X}'_n)^t$.
Denote by $\mathcal{F}_n$ the sigma algebra generated by $(\sigma_i)_{i\in[n]}$ and by $\mathcal{F}_n^i$ the sigma algebra generated by $(\sigma_j)_{j\in[n], j \neq i}$.
Also define
\begin{equation}gin{equation} \label{mni}
\tilde{m}_n^i = \frac1n \sum_{j\in[n]: j\neq i} w_j \sigma_j.
\end{equation}
Then, we can compute
\begin{equation}gin{align*}
\mu_n(\sigma_i \,|\, \mathcal{F}_n^i) &= \frac{\exp\left(\frac{\begin{equation}ta}{2\ell_n} \left(\sum_{j\in[n]: j\neq i} w_j\sigma_j + w_i\sigma_i\right)^2+h\sum_{j\in[n]}\sigma_j\right)}{\sum_{\sigma'_i\in\{-1,1\}}\exp\left(\frac{\begin{equation}ta}{2\ell_n} \left(\sum_{j\in[n]: j\neq i} w_j\sigma_j + w_i\sigma'_i\right)^2+h\sum_{j\in[n]:j\neq i}\sigma_j+h\sigma'_i \right)} \nonumber\\
&= \frac{\exp\left(\frac{\begin{equation}ta}{\ell_n} w_i \sigma_i \sum_{j\in[n]: j\neq i} w_j\sigma_j + h\sigma_i\right)}{\exp\left(\frac{\begin{equation}ta}{\ell_n} w_i \sum_{j\in[n]: j\neq i} w_j\sigma_j + h\right)+\exp\left(-\left(\frac{\begin{equation}ta}{\ell_n} w_i \sum_{j\in[n]: j\neq i} w_j\sigma_j + h\right)\right)},
\end{align*}
and hence,
\begin{equation}\label{eq-sigmaprimegivenF}
\mathbb E[\sigma'_i \,|\, \mathcal{F}_n] =\mathbb E[\sigma_i \,|\, \mathcal{F}_n^i]= \tanh\left(\frac{\begin{equation}ta}{\ell_n} w_i \sum_{j\in[n]: j\neq i} w_j\sigma_j + h\right) = \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right).
\end{equation}
We obtain the following regression equation.
\begin{equation}gin{lemma}\label{lem-regression} Let us define
\begin{equation}\label{eq-deGn}
G_n(x; s) = \frac{x^2}{2} - \mathbb E\biggl[\log \cosh \biggl(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n (x+s)+h\biggr)\biggr],
\end{equation}
and $G_n(x) = G_n(x;0)$. Then for $(\begin{equation}ta,h)\in\mathcal{U}$ we obtain that
\begin{equation}\label{eq-regressionICW}
\mathbb E\left[\begin{equation}gin{pmatrix}X_n \\ \tilde{X}_n \end{pmatrix}-\begin{equation}gin{pmatrix} X'_n \\ \tilde{X}'_n \end{pmatrix} \,|\, \mathcal{F}_n\right] = \lambda \begin{equation}gin{pmatrix}1 & -c \\ 0 & 1/\sigma^2(x_n^*,\begin{equation}ta,h) \end{pmatrix} \begin{equation}gin{pmatrix}X_n \\ \tilde{X}_n \end{pmatrix} +\lambda \begin{equation}gin{pmatrix} R_1 +R_2 \\ \tilde{R_1}+ \tilde{R_2}\end{pmatrix},
\end{equation}
where $\sigma^2(x_n^*,\begin{equation}ta,h)$ is given in \eqref{sigma}, and
\begin{equation}\label{eq-deflambdasigma}
\lambda=1/n, \qquad \sigma^2(x_n^*,\begin{equation}ta,h)= \frac{1}{G''_n(x^*_n)},
\end{equation}
(the latter equality follows from \eqref{eq-deGn}, see below \eqref{eq-Gn2})
\begin{equation}\label{eq-defc}
c=\frac{\sqrt{\tilde{\chi}_n}}{\sqrt{\chi_n}} \frac{\begin{equation}ta}{\mathbb E[W_n]} \mathbb E\left[ \left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right) W_n \right],
\end{equation}
and the error terms are given by
\begin{equation}gin{align}
\label{eq-defR1}
R_1&= \frac{\sqrt{n}}{\sqrt{\chi_n}}\frac{1}{n}\sum_{i\in[n]} \left( \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) - \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right) \right), \\
\label{eq-defR2}
R_2&=\frac{\sqrt{n}}{\sqrt{\chi_n}} \frac{1}{n} \sum_{i\in[n]}\left( \tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} w_i x_n^*+h\right) - \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) \right)+c \tilde{X}_n, \\
\label{eq-deftildeR1}
\tilde{R}_1 &= \frac{\sqrt{n}}{\sqrt{\tilde{\chi}_n}} \frac{1}{n}\sum_{i\in[n]}w_i \left( \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) - \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right) \right), \\
\label{eq-deftildeR2}
\tilde{R}_2 &= \frac{\sqrt{n}}{\sqrt{\tilde{\chi}_n}} \sqrt{\frac{\mathbb E[W_n]}{\begin{equation}ta}} G'_n\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} \tilde{m}_n\right) - \frac{1}{\sigma^2(x_n^*,\begin{equation}ta,h)}\tilde{X}_n.
\end{align}
\end{lemma}
\begin{equation}gin{proof}
We start by computing $\mathbb E[\tilde{X}_n-\tilde{X}_n' \,|\, \mathcal{F}_n] $. For this, note that
\begin{equation}\label{eq-Xn-Xnprime}
\tilde{X}_n-\tilde{X}_n' = \frac{1}{\sqrt{n}}\frac{w_I(\sigma_I- \sigma'_I)}{\sqrt{\tilde{\chi}_n}}.
\end{equation}
Hence,
\begin{equation}gin{align*}
\mathbb E[\tilde{X}_n-\tilde{X}_n' \,|\, \mathcal{F}_n] & =\frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \frac{1}{n} \sum_{i\in[n]} w_i \mathbb E[\sigma_i-\sigma'_i \,|\, \mathcal{F}_n] \nonumber\\
&=\frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \tilde{m}_n -\frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \frac{1}{n}\sum_{i\in[n]}w_i \mathbb E[\sigma'_i \,|\, \mathcal{F}_n]\nonumber\\
&= \frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \tilde{m}_n - \frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \frac{1}{n}\sum_{i\in[n]}w_i \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right) \nonumber\\
& = \frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \tilde{m}_n -\frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \frac{1}{n}\sum_{i\in[n]}w_i \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) +\lambda \tilde{R}_1,
\end{align*}
where $\tilde{R}_1$ is given in~\eqref{eq-deftildeR1}. Observe that it follows immediately from the definition of $G_n$ in~\eqref{eq-deGn} that
\begin{equation}gin{align}\label{eq-der-Gn}
G_n'(x) &= x - \mathbb E\left[\tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n x + h\right) \sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}W_n \right] \nonumber\\
&= x -\frac{1}{n} \sum_{i\in[n]} \tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i x + h\right) \sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i,
\end{align}
and hence, with $x=\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}\tilde{m}_n$,
\begin{equation}\label{eq-regressiontildeX}
\mathbb E[\tilde{X}_n-\tilde{X}_n' \,|\, \mathcal{F}_n] -\lambda \tilde{R}_1 = \frac{1}{\sqrt{n}\sqrt{\tilde{\chi}_n}} \sqrt{\frac{\mathbb E[W_n]}{\begin{equation}ta}} G'_n\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} \tilde{m}_n\right)= \frac{\lambda}{\sigma^2(x_n^*,\begin{equation}ta,h)} \tilde{X}_n +\lambda\tilde{R}_2,
\end{equation}
where $\lambda$ and $\sigma^2(x_n^*,\begin{equation}ta,h)$ are given in~\eqref{eq-deflambdasigma} and $\tilde{R}_2$ in~\eqref{eq-deftildeR2}. Note that $\frac{\lambda}{\sigma^2(x_n^*,\begin{equation}ta,h)} \tilde{X}_n$ is equal to the first order Taylor expansion of $G_n'$ around $x_n^*$ and therefore it is to be expected that $\lambda\tilde{R}_2$, which is the error made in doing so, is small.
Similarly,
\begin{equation}gin{align*}
\mathbb E[X_n&-X_n' \,|\, \mathcal{F}_n] = \frac{1}{\sqrt{n}\sqrt{\chi_n}} m_n - \frac{1}{\sqrt{n}\sqrt{\chi_n}} \frac{1}{n}\sum_{i\in[n]} \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right) \\
&= \lambda X_n + \frac{1}{\sqrt{n}\sqrt{\chi_n}} \frac{1}{n} \sum_{i\in[n]}\left( \tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} w_i x_n^*+h\right) - \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) \right) +\lambda R_1. \nonumber
\end{align*}
Using a Taylor expansion, we see that $\tanh(x) \approx \tanh(a)+(1-\tanh^2(a))(x-a)$ for $x$ close to $a$. Hence,
\begin{equation}gin{align}\label{eq-TaylorR2}
\frac{1}{\sqrt{n}\sqrt{\chi_n}} \frac{1}{n} &\sum_{i\in[n]}\left( \tanh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} w_i x_n^*+h\right) - \tanh\left(\frac{\begin{equation}ta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) \right) \nonumber\\
&\approx- \frac{1}{\sqrt{n}\sqrt{\chi_n}} \frac{1}{n} \sum_{i\in[n]} \left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} w_i x_n^*+h\right)\right) w_i \frac{\begin{equation}ta}{\mathbb E[W_n]} \left(\tilde{m}_n-\sqrt{\frac{\mathbb E[W_n]}{\begin{equation}ta}}x_n^* \right) \nonumber\\
&= - \frac{1}{\sqrt{n}\sqrt{\chi_n}} \mathbb E\left[ \left(1-\tanh^2\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right) W_n \right] \frac{\begin{equation}ta}{\mathbb E[W_n]} \frac{\sqrt{\tilde{\chi}_n}}{\sqrt{n}}\tilde{X}_n \nonumber\\
&= -\lambda c \tilde{X}_n,
\end{align}
where $c$ is defined in~\eqref{eq-defc}.
Hence, we write
\begin{equation}\label{eq-regressionX}
\mathbb E[X_n-X_n' \,|\, \mathcal{F}_n] = \lambda (X_n - c \tilde{X}_n) + \lambda (R_1+R_2),
\end{equation}
where $R_2$ is given in~\eqref{eq-defR2}.
\noindent
The lemma follows by combining~\eqref{eq-regressiontildeX} and~\eqref{eq-regressionX}.
\end{proof}
\subsection{Central limit theorem for weighted spin sums}\label{sec-MGFs}
In this section, we show that the weighted sum of spins $\sum_{i\in[n]} w_i \sigma_i$ obeys a central limit theorem. We do this by showing that, if we normalize the sum properly, the moment generating function converges to that of a normal distribution. This implies that the normalized sum converges to a normal in distribution and, more importantly for us, that also all moments converge to that of this normal. We also investigate sums of differently weighted spins.
We use the methods to prove the convergence of pressure of the inhomogeneous Curie-Weiss model in~\cite[Sec.~2.1]{GiaGibHofPri16} to prove that certain cumulant generating functions converge, and the methods to prove the CLT for the spin sum in~\cite[Sec.~2.2]{GiaGibHofPri16}, of which the details can be found in the proof of the CLT for the quenched Ising model on random graphs in~\cite[Sec.~2.3]{GiaGibHofPri15}.
\begin{equation}gin{lemma}\label{lem-cmf-weigthedsum}
Define the cumulant generating function
$$
c_n(s) = \frac1n \log \mathbb E\biggl[\exp\biggl(s \sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}\sum_{i\in[n]} w_i\sigma_i\biggr)\biggr].
$$
Then, for any constant $a$,
$$
c_n(s) = \frac1n \log \frac{\int_{-\infty}^\infty e^{-n G_n\left(\frac{x}{\sqrt{n}}+a;s\right)} {\rm d} x}{\int_{-\infty}^\infty e^{-n G_n\left(\frac{x}{\sqrt{n}}+a\right)} {\rm d} x},
$$
where $G_n(x;s)$ is defined in \eqref{eq-deGn} with $G_n(x) := G_n(x;0)$.
\end{lemma}
\begin{equation}gin{proof}
Note that
\begin{equation}gin{align}\label{eq-cmf-weigthedsum}
\mathbb E\left[e^{s \sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}\sum_{i\in[n]} w_i\sigma_i}\right] &= \frac{\sum_{\sigma\in\{-1,1\}^n}e^{s \sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}\sum_{i\in[n]} w_i\sigma_i}e^{\frac{\begin{equation}ta}{2 n\mathbb E[W_n]}\left(\sum_{i\in[n]}w_i \sigma_{i}\right)^2+h\sum_{i \in[n]}\sigma_i} }{\sum_{\sigma\in\{-1,1\}^n}e^{\frac{\begin{equation}ta}{2 n\mathbb E[W_n]}\left(\sum_{i\in[n]}w_i \sigma_{i}\right)^2+h\sum_{i \in[n]}\sigma_i}}\nonumber\\
&= \frac{\sum_{\sigma\in\{-1,1\}^n}e^{\frac{\begin{equation}ta}{2 n\mathbb E[W_n]}\left(\sum_{i\in[n]}w_i \sigma_{i}\right)^2+\sum_{i \in[n]}(s\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i+h)\sigma_i} }{\sum_{\sigma\in\{-1,1\}^n}e^{\frac{\begin{equation}ta}{2 n\mathbb E[W_n]}\left(\sum_{i\in[n]}w_i \sigma_{i}\right)^2+h\sum_{i \in[n]}\sigma_i}}.
\end{align}
Hence, we can interpret the numerator as an inhomogeneous Curie-Weiss model, where also the field is inhomogeneous, i.e., the field at vertex $i$ is given by $s\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i+h$. We can use the Hubbard-Stratonovich transform $e^{\frac{t^2}{2}}=\mathbb E[e^{tZ}]$, where $Z$ is a standard normal random variable, to rewrite the numerator of~\eqref{eq-cmf-weigthedsum} as
\begin{equation}gin{align*}
\sum_{\sigma\in\{-1,1\}^n}&\mathbb E\Bigl[e^{\sqrt{\frac{\begin{equation}ta}{n\mathbb E[W_n]}}\left(\sum_{i\in[n]}w_i \sigma_{i}\right)Z}\Bigr]e^{\sum_{i \in[n]}(s\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i+h)\sigma_i}
= 2^n \mathbb E\Bigl[e^{\sum_{i\in[n]}\log\cosh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i \left(\frac{Z}{\sqrt{n}}+s\right)+h\right)} \Bigr] \nonumber\\
&= \frac{2^n}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{\sum_{i\in[n]}\log\cosh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}}w_i \left(\frac{z}{\sqrt{n}}+s\right)+h\right)}e^{-\frac{z^2}{2}} {\rm d} z\nonumber\\
&= \frac{2^n}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-n\left[\frac12\left(\frac{z}{\sqrt{n}}\right)^2-\mathbb E\left[\log\cosh\left(\sqrt{\frac{\begin{equation}ta}{\mathbb E[W_n]}} W_n \left(\frac{z}{\sqrt{n}}+s\right)+h\right)\right]\right]} {\rm d} z \nonumber\\
&=\frac{2^n}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-nG_n\left(\frac{z}{\sqrt{n}};s\right)} {\rm d} z=\frac{2^n}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-nG_n\left(\frac{x}{\sqrt{n}}+a;s\right)} {\rm d} x,
\end{align*}
where we used the change of variables $x=z-\sqrt{n}a$ in the last equality.
The same computation can be done for the denominator of~\eqref{eq-cmf-weigthedsum}, by setting $s=0$.
\end{proof}
An important role is played by the global minimum of $G_n(x)$. In~\cite{GiaGibHofPri16}, it is shown that for $(\begin{equation}ta,h)\in\mathcal{U}$ in the limit $n\to\infty$ the global minimizer is given by the unique solution with the same sign as $h$ of the fixed point equation~\eqref{eq-fixedpoint-x}.
We give a characterization of the global minimizer for finite $n$ in the next lemma.
\begin{lemma}\label{lem-globalminGn}
Suppose that Condition~\ref{cond-WeightReg}(i)--(ii) holds and that $n$ is large enough. Then, for $0\leq\beta <\beta_c$
and $h=0$, the global minimizer of $G_n(x)$ is given by $x_n^*=0$. For $\beta\geq 0, h\neq 0$, the global minimizer of $G_n(x)$ is given by the unique fixed point $x_n^*$ with the same sign as $h$ of the fixed point equation \eqref{eq-fixedpoint-xn}.
Furthermore, for all $(\beta,h) \in \mathcal{U}$,
$$
G_n''(x_n^*)>0.
$$
\end{lemma}
\begin{proof}
Note that $G_n$ is continuous and
$$
G_n'(x) = x - \mathbb E\left[\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n \right],
$$
and hence the global minimizer has to satisfy~\eqref{eq-fixedpoint-xn}. We can also compute the second derivative:
\begin{equation}\label{eq-Gn2}
G_n''(x)=1-\mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\right)\frac{\beta}{\mathbb E[W_n]} W_n^2 \right].
\end{equation}
For $x\neq0$, it holds that $0<\tanh^2(x)\leq1$, and hence we can bound
$$
G_n''(x) > 1-\beta \frac{\mathbb E[W_n^2]}{\mathbb E[W_n]}.
$$
Therefore, $G_n''(x)>0$ for $x\neq0$, $\beta < \beta_c$ and $n$ large enough by Condition~\ref{cond-WeightReg}(ii), i.e., $G_n$ is strictly convex. Furthermore, using $\log\cosh x \leq |x|$,
\begin{equation}\label{eq-limG-nxtoinfty}
\lim_{|x|\to\infty} G_n(x) \geq \lim_{|x|\to\infty} \frac{x^2}{2} - \mathbb E\left[\left| \sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x+h \right|\right]=\infty.
\end{equation}
Therefore, for $\beta<\beta_c$, $G_n(x)$ has a unique local minimum, which must be a global minimum. For $h=0$, clearly $x_n^*=0$ is a fixed point, proving the first statement.
Now suppose that $h>0$. Define
$$
H_n(x) = \mathbb E\left[\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n \right].
$$
Then $H_n$ is continuous and
$$
H_n'(x)= \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\right)\frac{\beta}{\mathbb E[W_n]} W_n^2 \right],
$$
and
$$
H_n''(x)=-2 \mathbb E\left[\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x + h\right)\right)\left(\frac{\beta}{\mathbb E[W_n]}\right)^{3/2} W_n^3\right].
$$
Since $W_n$ is a positive random variable, we conclude that $H_n$ is concave for $x\geq0$. Since $H_n(0)>0$ and $H_n$ is bounded, there is a unique positive solution to $x=H_n(x)$; call this solution $x_n^*$. Since $G_n'(0)<0$, it follows from~\eqref{eq-limG-nxtoinfty} that $x_n^*$ is a local minimizer.
For any solution $x^-<0$ of~\eqref{eq-fixedpoint-xn}, we have that
$$
G_n(x^-) > G_n(-x^-) \geq G_n(x_n^*),
$$
since $x_n^*$ is the unique positive local minimizer. Hence, $x_n^*$ is also the unique global minimizer.
\noindent
The proof for $h<0$ is similar.
\noindent
Since $x_n^*$ is the unique global minimizer, we must have that $G_n''(x_n^*) \geq 0$, so it only remains to show that this inequality is strict for $(\beta,h) \in \mathcal{U}$.
For $h\neq 0$, we know that
$$
\lim_{n\to\infty} \chi_n(\beta,h) = \chi(\beta,h) < \infty.
$$
Since we can rewrite~\eqref{eq-defchin} as
$$
\chi_n = 1- \mathbb E\left[\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x_n^* + h\right) \right]+\frac{\frac{\beta}{\mathbb E[W_n]} \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right) W_n \right]^2}{G_n''(x_n^*)},
$$
it must hold that $G_n''(x_n^*)>0$ for $n$ large enough.
\end{proof}
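For a concrete finite weight sequence, the minimizer characterized in Lemma~\ref{lem-globalminGn} can be computed by iterating the fixed point map from \eqref{eq-fixedpoint-xn}, and the positivity of $G_n''(x_n^*)$ in \eqref{eq-Gn2} can be verified numerically. A minimal Python sketch, with illustrative parameter values, might look as follows.
\begin{verbatim}
import numpy as np

# Illustrative parameters (not from the paper).
w = np.array([0.5, 1.0, 1.5, 2.0, 1.0, 0.8])
beta, h = 0.6, 0.2
a = np.sqrt(beta / w.mean())          # sqrt(beta / E[W_n])

def H(x):
    # Right-hand side of the fixed point equation x = E[tanh(a W_n x + h) a W_n].
    return np.mean(np.tanh(a * w * x + h) * a * w)

def G2(x):
    # Second derivative G_n''(x) as in the display above.
    return 1.0 - np.mean((1.0 - np.tanh(a * w * x + h) ** 2) * (a * w) ** 2)

x = float(np.sign(h))                 # start from the sign of h (0 if h = 0)
for _ in range(200):                  # fixed point iteration x <- H_n(x)
    x = H(x)

print("x_n^* ~", x, "   G_n''(x_n^*) =", G2(x))   # expect a strictly positive value
\end{verbatim}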
We can now investigate the moment generating function of the normalized weighted spin sum.
\begin{proposition}\label{prop-cltweighted}
Suppose that Condition~\ref{cond-WeightReg}(i)--(ii) holds. Then,
$$
\lim_{n\to\infty} \mathbb E\biggl[\exp\biggl\{s\Bigl(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \frac{1}{\sqrt{n}}\sum_{i\in[n]}w_i\sigma_i - \sqrt{n}x_n^*\Bigr)\biggr\}\biggr] = e^{c''(0) \frac{s^2}{2}},
$$
where
$$
c(s) = \lim_{n\to\infty} c_n(s),
$$
and $c_n$ is defined in Lemma \ref{lem-cmf-weigthedsum}.
In particular,
$$
\sqrt{\frac{\beta}{\mathbb E[W_n]}} \frac{1}{\sqrt{n}}\sum_{i\in[n]}w_i\sigma_i - \sqrt{n}x_n^* \stackrel{d}{\longrightarrow} \mathcal{N}\left(0,c''(0) \right),
$$
and all moments of the l.h.s.~converge to those of this normal distribution.
\end{proposition}
\begin{proof}
Note that, with $s_n = \frac{s}{\sqrt{n}}$,
\begin{align*}
\log \mathbb E\biggl[\exp\biggl\{s\Bigl(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \frac{1}{\sqrt{n}}\sum_{i\in[n]}w_i\sigma_i &- \sqrt{n}x_n^*\Bigr)\biggr\}\biggr] = n c_n(s_n) - n s_n x_n^*\nonumber\\
&=n\left(c_n(0) + (c_n'(0)- x_n^*)s_n + c_n''(s_n^*)\frac{s_n^2}{2}\right)\nonumber\\
&=nc_n(0) +\sqrt{n} (c_n'(0)- x_n^*)s + c_n''(s_n^*)\frac{s^2}{2},
\end{align*}
for some $s_n^* \in (0,s_n)$. Clearly $c_n(0)=0$.
As mentioned, the numerator of~\eqref{eq-cmf-weigthedsum} can be interpreted as the partition function of an Ising model, so that $c_n(s)$ is the difference of two pressures. Hence, the convergence of $c_n(s)$ can be proved as in~\cite[Sec.~2.1]{GiaGibHofPri16}. Moreover, this means that the monotonicity and convexity properties of the Ising model can be used to show that
$$
\lim_{n\to\infty} c_n''(s_n) = c''(0),
$$
see~\cite[Sec.~2.3]{GiaGibHofPri15} for details.
It remains to show that $\sqrt{n} (c_n'(0)- x_n^*) = o(1)$. For this, we use Lemma~\ref{lem-cmf-weigthedsum} with $a=x_n^*$ and
$$
\frac{{\rm d}}{{\rm d} s} G_n(x;s) = G_n'(x+s)-(x+s),
$$
to obtain that
\begin{align*}
\sqrt{n} (c_n'(0)- x_n^*) &= \sqrt{n}\frac{\int_{-\infty}^\infty \left(\frac{x}{\sqrt{n}}+x_n^*-G'_n(\frac{x}{\sqrt{n}}+x_n^*)\right)e^{-n G_n\left(\frac{x}{\sqrt{n}}+x_n^*\right)} {\rm d} x}{\int_{-\infty}^\infty e^{-n G_n\left(\frac{x}{\sqrt{n}}+x_n^*\right)} {\rm d} x} - \sqrt{n}x_n^* \nonumber\\
&=\frac{\int_{-\infty}^\infty \left(x-\sqrt{n}G'_n(\frac{x}{\sqrt{n}}+x_n^*)\right)e^{-n G_n\left(\frac{x}{\sqrt{n}}+x_n^*\right)} {\rm d} x}{\int_{-\infty}^\infty e^{-n G_n\left(\frac{x}{\sqrt{n}}+x_n^*\right)} {\rm d} x}.
\end{align*}
Taylor expanding $G'_n(\frac{x}{\sqrt{n}}+x_n^*)$ and $G_n(\frac{x}{\sqrt{n}}+x_n^*)$ around $x_n^*$ gives
\begin{align*}
\sqrt{n} (c_n'(0)- x_n^*) &= \frac{\int_{-\infty}^\infty \left(x-\sqrt{n}G'_n(x_n^*) - G_n''(x_n^*)x + \mathcal{O}(1/\sqrt{n})\right)e^{-n G_n\left(x_n^*\right)-\sqrt{n}G_n'(x_n^*) x - G_n''(x_n^*)\frac{x^2}{2}+ \mathcal{O}(1/\sqrt{n})} {\rm d} x}{\int_{-\infty}^\infty e^{-n G_n\left(x_n^*\right)-\sqrt{n}G_n'(x_n^*) x - G_n''(x_n^*)\frac{x^2}{2}+ \mathcal{O}(1/\sqrt{n})} {\rm d} x} \nonumber\\
&=(1-G_n''(x_n^*))\frac{\int_{-\infty}^\infty xe^{ - G_n''(x_n^*)\frac{x^2}{2}+ \mathcal{O}(1/\sqrt{n})} {\rm d} x}{\int_{-\infty}^\infty e^{- G_n''(x_n^*)\frac{x^2}{2}+ \mathcal{O}(1/\sqrt{n})} {\rm d} x} +\mathcal{O}(1/\sqrt{n})=\mathcal{O}(1/\sqrt{n}),
\end{align*}
where we used that $G_n'(x_n^*)=0$ and that in the limit $n\to\infty$ the integral in the numerator equals $0$ since this is an integral over an odd function.
\end{proof}
In the above proposition, we use an explicit centering. Instead, we can also center with the expectation. In that case, we can prove a similar result even when the spins are weighted by different quantities, as we now show.
\begin{lemma}\label{lem-clt-tweighted}
Suppose that $(t_i)_{i\in[n]}$ is a sequence satisfying Condition~\ref{cond-WeightReg}(i)--(ii) with $W_n$ replaced by $T_n:=t_I$, with $I\sim Uni[n]$. Let
$$
\tilde{c}_n(s) = \frac1n \log \mathbb E\biggl[\exp\biggl(s \sum_{i\in[n]} t_i\sigma_i\biggr)\biggr].
$$
Then,
$$
\lim_{n\to\infty} \mathbb E\biggl[\exp\biggl\{s\Bigl(\frac{1}{\sqrt{n}}\sum_{i\in[n]}t_i(\sigma_i - \mathbb E[\sigma_i])\Bigr)\biggr\}\biggr] = e^{\tilde{c}''(0) \frac{s^2}{2}},
$$
where
$$
\tilde{c}(s) = \lim_{n\to\infty} \tilde{c}_n(s).
$$
In particular,
$$
\frac{1}{\sqrt{n}}\sum_{i\in[n]}t_i(\sigma_i - \mathbb E[\sigma_i]) \stackrel{d}{\longrightarrow} \mathcal{N}\left(0,\tilde{c}''(0)\right),
$$
and all moments of the l.h.s.\ converge to those of this normal distribution.
\end{lemma}
\begin{proof}
We proceed as in the previous proposition. We write, with $s_n = \frac{s}{\sqrt{n}}$,
$$
\log \mathbb E\biggl[\exp\biggl\{s\Bigl(\frac{1}{\sqrt{n}}\sum_{i\in[n]}t_i(\sigma_i - \mathbb E[\sigma_i])\Bigr)\biggr\}\biggr] =n\tilde{c}_n(0) +\sqrt{n} \biggl(\tilde{c}_n'(0)- \frac{1}{n}\sum_{i\in[n]}t_i \mathbb E[\sigma_i]\biggr)s + \tilde{c}_n''(s_n^*)\frac{s^2}{2},
$$
for some $s_n^* \in (0,s_n)$. Again, $\tilde{c}_n(0)=0$. Since $\tilde{c}_n(s)$ is a cumulant generating function,
$$
\tilde{c}'_n(0) = \frac1n \mathbb E\biggl[\sum_{i\in[n]}t_i \sigma_i\biggr],
$$
so that $\tilde{c}_n'(0)- \frac{1}{n}\sum_{i\in[n]}t_i \mathbb E[\sigma_i]=0$. That $\lim_{n\to\infty}\tilde{c}_n''(s_n^*)=\tilde{c}''(0)$ can be shown as above.
\end{proof}
\subsection{Bounds on error terms}\label{sec-errorterms}
We are working in the setting of Lemma \ref{lem-regression}. Recall that $X=(X_n, \tilde{X}_n)^t$, $X' = (X_n', \tilde{X}_n')^t$ and $\lambda = \frac 1n$.
Note that the inverse of the matrix $\Lambda$ in~\eqref{eq-regressionICW} is given by
$$
\Lambda^{-1} = \begin{pmatrix}1 & c \, \sigma^2(x_n^*,\beta,h) \\ 0 & \sigma^2(x_n^*,\beta,h) \end{pmatrix},
$$
so that with the notations of Theorem \ref{thm-MarginalStein} we obtain
$$
\ell D D_1 = (X_n-X'_n)^2 +c \, \sigma^2(x_n^*,\beta,h) \, (X_n-X'_n)(\tilde{X}_n-\tilde{X}'_n).
$$
To prove our main result, we apply Theorem~\ref{thm-MarginalStein}. We first bound the first term of~\eqref{eq-thmStein}:
\begin{lemma}\label{lem-bound1sttermerrors}
We have the following bound:
\begin{eqnarray*}
\mathbb E\left[\left|1-\frac{1}{2\lambda} \mathbb E\left[(X_n-X'_n)^2 +c\, \sigma^2(x_n^*,\beta,h) \, (X_n-X'_n)(\tilde{X}_n-\tilde{X}'_n) \,\big|\, \mathcal{F}_n\right] \right|\right]
& & \\ \leq \mathbb E[|R_3+R_4+R_5+\hat{R}_3+\hat{R}_4+\hat{R}_5|],
\end{eqnarray*}
where
\begin{align*}
R_3&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\sigma_i \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right), \\
R_4&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\mathbb E[\sigma_i] \right), \\
R_5&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\mathbb E[\sigma_i]-\sigma_i \right),\\
\hat{R}_3&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n}\sum_{i\in[n]}w_i\sigma_i \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right),\\
\hat{R}_4&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n} \sum_{i\in[n]}w_i\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\mathbb E[\sigma_i] \right),\\
\hat{R}_5&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n} \sum_{i\in[n]}w_i\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\mathbb E[\sigma_i]-\sigma_i \right).
\end{align*}
\end{lemma}
\begin{proof}
Note that
$$
(X_n-X_n')^2 = \frac{1}{n\chi_n} (\sigma_I-\sigma_I')^2 = \frac{2}{n\chi_n}(1-\sigma_I\sigma'_I).
$$
Hence, also using~\eqref{eq-sigmaprimegivenF},
\begin{align*}
\frac{1}{2\lambda}&\mathbb E[(X_n-X_n')^2 \,|\, \mathcal{F}_n] =\frac{1}{\chi_n}\biggl(1-\frac{1}{n} \sum_{i\in[n]}\sigma_i \mathbb E[\sigma'_i \,|\, \mathcal{F}_n]\biggr) \nonumber\\
&=\frac{1}{\chi_n}\biggl(1-\frac{1}{n} \sum_{i\in[n]}\sigma_i \tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\biggr) \nonumber\\
&=\frac{1}{\chi_n}\biggl(1-\mathbb E\left[\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right]\biggr) \nonumber\\
& \qquad + \frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\sigma_i \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right) \nonumber\\
&\qquad + \frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\mathbb E[\sigma_i] \right)\nonumber\\
& \qquad + \frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]}\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\mathbb E[\sigma_i]-\sigma_i \right) \nonumber\\
&=\frac{1}{\chi_n}\biggl(1-\mathbb E\left[\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right]\biggr) +R_3+R_4+R_5.
\end{align*}
Since
$$
(X_n-X_n')(\tilde{X}_n-\tilde{X}_n') = \frac{w_I}{n\sqrt{\chi_n\tilde{\chi}_n}} (\sigma_I-\sigma_I')^2 = \frac{2 w_I}{n\sqrt{\chi_n\tilde{\chi}_n}}(1-\sigma_I\sigma'_I),
$$
it can be shown in a similar way, by incorporating the extra factor $w_I$, that
\begin{align*}
\frac{c \, \sigma^2(x_n^*,\beta,h)}{2\lambda} &\mathbb E\left[(X_n-X'_n)(\tilde{X}_n-\tilde{X}'_n) \,\big|\, \mathcal{F}_n\right] \nonumber\\
&= \frac{c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\mathbb E\left[ \left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right) W_n \right] +\hat{R}_3+\hat{R}_4+\hat{R}_5.
\end{align*}
The lemma follows by observing that
\begin{align*}
\frac{1}{\chi_n}&\biggl(1-\mathbb E\left[\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right]\biggr)+\frac{c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\mathbb E\left[ \left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right) W_n \right]\nonumber\\
&=\frac{1}{\chi_n}\biggl(1-\mathbb E\left[\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right] + \frac{\frac{\beta}{\mathbb E[W_n]} \mathbb E\left[ \left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} W_n x_n^*+h\right)\right) W_n \right]^2}{G_n''(x_n^*)}\biggr)\nonumber\\
&=1,
\end{align*}
which follows from~\eqref{eq-Gn2} and~\eqref{eq-defchin}.
\end{proof}
We bound the second term of~\eqref{eq-thmStein} in a similar way:
\begin{lemma}\label{lem-bound2ndtermerrors}
$$
\frac{1}{\lambda}\mathbb E\left[\left|\mathbb E\left[\left|(X_n-X'_n) +c \, \sigma^2(x_n^*,\beta,h) \, (\tilde{X}_n-\tilde{X}'_n)\right|(X_n-X'_n) \,\big|\, \mathcal{F}_n\right] \right|\right] \leq 2\mathbb E[|\bar{R}_3+\bar{R}_4+\bar{R}_5+\check{R}_3+\check{R}_4+\check{R}_5|],
$$
where
\begin{align*}
\bar{R}_3&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]} \left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right), \\
\bar{R}_4&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]} \left(\mathbb E[\sigma_i]-\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \right), \\
\bar{R}_5&=\frac{1}{\chi_n}\frac{1}{n} \sum_{i\in[n]} \left(\sigma_i -\mathbb E[\sigma_i]\right),\\
\check{R}_3&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n}\sum_{i\in[n]}w_i\left(\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right),\\
\check{R}_4&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n} \sum_{i\in[n]}w_i\left(\mathbb E[\sigma_i]-\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \right),\\
\check{R}_5&=\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{1}{n} \sum_{i\in[n]}w_i\left(\sigma_i -\mathbb E[\sigma_i]\right).
\end{align*}
\end{lemma}
\begin{proof}
We have that
\begin{align*}
|\ell D| D_1 &= |(X_n-X'_n) +c\, \sigma^2(x_n^*,\beta,h) \, (\tilde{X}_n-\tilde{X}'_n)|(X_n-X'_n)\nonumber\\
&=\frac1n\left[\frac{1}{\chi_n}+\frac{c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n \tilde\chi_n}}w_I\right] | \sigma_I-\sigma'_I | (\sigma_I-\sigma'_I) \nonumber\\
& = \frac2n\left[\frac{1}{\chi_n}+\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n \tilde\chi_n}}w_I\right] (\sigma_I-\sigma'_I),
\end{align*}
where the last equality follows, since $| \sigma_I-\sigma'_I | $ can only take values $2$ or $0$.
\noindent
Hence with $\tilde{m}_n^i$ given by \eqref{mni} we obtain
\begin{align*}
\frac{1}{\lambda} \mathbb E\left[ |\ell D| D_1 \,|\, \mathcal{F}_n\right] &= \frac{2}{n} \sum_{i=1}^n \left[\frac{1}{\chi_n}+\frac{c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n \tilde\chi_n}}w_i\right] \left[\sigma_i - \tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right] \\
& \hspace{-4cm} = \frac{2}{n} \sum_{i=1}^n \left[\frac{1}{\chi_n}+\frac{c\, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n \tilde\chi_n}}w_i\right] \biggl[\left( \tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)- \tanh \left( \frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h \right) \right) \\
& \qquad + \left( \mathbb E[\sigma_i]- \tanh \left( \sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h \right) \right) + \left(\sigma_i-\mathbb E[\sigma_i]\right)\biggr].
\end{align*}
Expanding out both square brackets gives the six error terms of the lemma.
\end{proof}
The error terms can be bounded as follows.
\begin{lemma}\label{lem-errorterms}
\begin{align*}
\mathbb E[|R_1|]&\leq \frac{\beta}{\sqrt{\chi_n}}\frac{\mathbb E[W_n^2]}{\mathbb E[W_n]} \frac{1}{\sqrt{n}}, \\
\mathbb E[|R_2|]&\leq\frac{\tilde{\chi}_n}{\sqrt{\chi_n}} \left(\frac{\beta}{\mathbb E[W_n]}\right)^2 \mathbb E[W_n^2] \mathbb E[\tilde{X}_n^2] \frac{1}{\sqrt{n}}, \\
\mathbb E[|R_3|],\mathbb E[|\bar{R}_3|],\mathbb E[|R_4|],\mathbb E[|\bar{R}_4|]&\leq \frac{\beta \sqrt{\tilde{\chi}_n}}{\chi_n} \mathbb E[|\tilde{X}_n|] \frac{1}{\sqrt{n}} + \frac{\beta}{\chi_n}\frac{\mathbb E[W_n^2]}{\mathbb E[W_n]} \frac{1}{n}, \\
\mathbb E[|\tilde{R}_1|] &\leq \frac{\beta}{\sqrt{\tilde{\chi}_n}} \frac{\mathbb E[W_n^3]}{\mathbb E[W_n]} \frac{1}{\sqrt{n}}, \\
\mathbb E[|\tilde{R}_2|] &\leq 2\sqrt{\tilde{\chi}_n}\left(\frac{\beta}{\mathbb E[W_n]}\right)^2 \mathbb E[W_n^3]\mathbb E[\tilde{X}_n^2] \frac{1}{\sqrt{n}}, \\
\mathbb E[|\hat{R}_3|],\mathbb E[|\check{R}_3|],\mathbb E[|\hat{R}_4|],\mathbb E[|\check{R}_4|]&\leq\frac{\beta c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n}} \frac{\mathbb E[W_n^2]}{\mathbb E[W_n]} \mathbb E[|\tilde{X}_n|] \frac{1}{\sqrt{n}}+\frac{\beta c \, \sigma^2(x_n^*,\beta,h)}{\sqrt{\chi_n\tilde{\chi}_n}}\frac{\mathbb E[W_n^3]}{\mathbb E[W_n]} \frac{1}{n}.
\end{align*}
\end{lemma}
\begin{proof}
Since $\tanh$ is $1$-Lipschitz, $\tilde{m}_n-\tilde{m}_n^i = w_i\sigma_i/n$ and $|\sigma_i|=1$,
$$
\left|\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n+h\right) - \tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right| \leq \frac{\beta w_i^2}{n\mathbb E[W_n]}.
$$
From this, the bounds on $R_1$ and $\tilde{R}_1$ follow. Using that $\tanh$ is $1$-Lipschitz, it also follows with \eqref{eq-deftildeXn} that
\begin{align*}
\left|\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)-\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right| & \leq w_i \left|\sqrt{\frac{\beta}{\mathbb E[W_n]}} x_n^*-\frac{\beta}{\mathbb E[W_n]} \tilde{m}_n^i\right| \nonumber\\
= w_i \frac{\beta}{\mathbb E[W_n]} \left|\tilde{M}_n- \tilde{m}_n+w_i\frac{\sigma_i}{n}\right|& \leq w_i \frac{\beta \sqrt{\tilde{\chi}_n}}{\mathbb E[W_n]} |\tilde{X}_n| \frac{1}{\sqrt{n}}+\frac{\beta w_i^2}{n\mathbb E[W_n]}.
\end{align*}
From this, the bounds on $R_3, \bar{R}_3, \hat{R}_3$ and $\check{R}_3$ follow. Observe that, by~\eqref{eq-sigmaprimegivenF},
$$
\mathbb E[\sigma_i] = \mathbb E\left[\mathbb E[\sigma_i \,|\, \mathcal{F}_n^i]\right] = \mathbb E\left[\tanh\left(\frac{\beta w_i}{\mathbb E[W_n]} \tilde{m}_n^i+h\right)\right],
$$
and $|\tanh(x)| \leq 1$, so that the bounds for $R_3$ and $\hat{R}_3$ also hold for $R_4,\bar{R}_4$ and $\hat{R}_4,\check{R}_4$, respectively.
To bound $R_2$, we use the Taylor expansion
$$
\tanh(x) = \tanh(a)+(1-\tanh^2(a))(x-a)-\tanh(\xi)(1-\tanh^2(\xi)) (x-a)^2,
$$
for some $\xi$ between $x$ and $a$. From the computations in~\eqref{eq-TaylorR2} it follows that
\begin{align*}
|R_2| & \leq \left|\frac{\sqrt{n}}{\sqrt{\chi_n}} \frac{1}{n} \sum_{i\in[n]} \tanh(\xi_i)(1-\tanh^2(\xi_i))\left(\frac{\beta w_i}{\mathbb E[W_n]}\tilde{m}_n-\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*\right)^2 \right|\nonumber\\
&= \left|\frac{\sqrt{n}}{\sqrt{\chi_n}} \frac{1}{n} \sum_{i\in[n]} \tanh(\xi_i)(1-\tanh^2(\xi_i))\left(\frac{\beta w_i}{\mathbb E[W_n]}\right)^2\frac{\tilde{\chi}_n}{n}\tilde{X}_n^2 \right| \nonumber\\
&\leq \frac{\tilde{\chi}_n}{\sqrt{\chi_n}} \left(\frac{\beta}{\mathbb E[W_n]}\right)^2 \mathbb E[W_n^2] \tilde{X}_n^2 \frac{1}{\sqrt{n}},
\end{align*}
where we used that $|\tanh(x)|\leq1$.
To bound $\tilde{R}_2$, we expand $G_n'\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{m}_n\right)$ around $x_n^*$ and use that $G_n'(x_n^*)=0$ by definition of $x_n^*$,
and use \eqref{eq-deftildeXn} and \eqref{eq-deflambdasigma} to obtain that, for some $\xi$ between $\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{m}_n$ and $x_n^*$,
\begin{align*}
\tilde{R}_2&=\frac{\sqrt{n}}{\sqrt{\tilde{\chi}_n}} \sqrt{\frac{\mathbb E[W_n]}{\beta}} G'_n\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{m}_n\right) - \frac{1}{\sigma^2(x_n^*,\beta,h)}\tilde{X}_n \nonumber\\
&=\frac{\sqrt{n}}{\sqrt{\tilde{\chi}_n}} \sqrt{\frac{\mathbb E[W_n]}{\beta}} G_n''(x_n^*) \left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{m}_n-x_n^*\right)-G_n''(x_n^*)\tilde{X}_n\nonumber\\
&\qquad +\frac{\sqrt{n}}{\sqrt{\tilde{\chi}_n}} \sqrt{\frac{\mathbb E[W_n]}{\beta}} G_n'''(\xi) \left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{m}_n-x_n^*\right)^2 \nonumber\\
&= \frac{1}{\sqrt{n}}\sqrt{\tilde{\chi}_n}\sqrt{\frac{\beta}{\mathbb E[W_n]}} \tilde{X}_n^2 G_n'''(\xi).
\end{align*}
Differentiating~\eqref{eq-Gn2} gives
$$
G'''_n(\xi) = 2\left(\frac{\beta}{\mathbb E[W_n]}\right)^{3/2}\mathbb E\left[\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n \xi + h\right)\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n \xi + h\right)\right) W_n^3 \right].
$$
Since $|\tanh x|\leq1$, we obtain
$$
|G'''_n(\xi)| \leq 2\left(\frac{\beta}{\mathbb E[W_n]}\right)^{3/2} \mathbb E[W_n^3],
$$
from which the bound on $\tilde{R}_2$ follows.
\end{proof}
\noindent
We now combine all results to prove our main result.
\begin{proof}[Proof of Theorem~\ref{thm-berryesseen-magnetization}]
We apply Theorem~\ref{thm-MarginalStein}. To show that the first term of~\eqref{eq-thmStein} is $\mathcal{O}(1/\sqrt{n})$ it suffices to show, by Lemma~\ref{lem-bound1sttermerrors}, that
$$
\mathbb E[|R_3|],\mathbb E[|R_4|],\mathbb E[|R_5|], \mathbb E[|\hat{R}_3|],\mathbb E[|\hat{R}_4|],\mathbb E[|\hat{R}_5|] \leq \frac{C}{\sqrt{n}},
$$
where $C$ is a constant not depending on $n$ that may change from line to line.
For the second term of~\eqref{eq-thmStein}, it suffices to show, by Lemma~\ref{lem-bound2ndtermerrors}, that
$$
\mathbb E[|\bar{R}_3|],\mathbb E[|\bar{R}_4|],\mathbb E[|\bar{R}_5|], \mathbb E[|\check{R}_3|],\mathbb E[|\check{R}_4|],\mathbb E[|\check{R}_5|] \leq \frac{C}{\sqrt{n}}.
$$
Note that it follows from Proposition~\ref{prop-cltweighted} that $\mathbb E[|\tilde{X}_n|]$ is uniformly bounded. By Condition~\ref{cond-WeightReg}(i)--(iii), also the first three moments of $W_n$ are uniformly bounded. From this and Lemma~\ref{lem-errorterms}, the bounds on $\mathbb E[|R_3|], \mathbb E[|\bar{R}_3|],\mathbb E[|\hat{R}_3|], \mathbb E[|\check{R}_3|], \mathbb E[|R_4|], \mathbb E[|\bar{R}_4|],\mathbb E[|\hat{R}_4|]$ and $\mathbb E[|\check{R}_4|]$ follow. Remark that this is one of the places where we see that
the constant in our Berry-Esseen bound depends on $(w_i)_{i \geq 1}$.
By rewriting
$$
\mathbb E[|R_5|]=\frac{1}{\chi_n}\mathbb E\left[\left|\frac{1}{\sqrt{n}} \sum_{i\in[n]}\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right) \left(\sigma_i-\mathbb E[\sigma_i] \right)\right|\right]\frac{1}{\sqrt{n}},
$$
it can be seen that $\mathbb E[|R_5|]$ is of the form considered in Lemma~\ref{lem-clt-tweighted} with $t_i=\tanh\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}} w_i x_n^*+h\right)$. Hence, it follows from Lemma~\ref{lem-clt-tweighted} that $\sqrt{n}\mathbb E[|R_5|]$ is uniformly bounded. A similar argument holds for $\bar{R}_5, \hat{R}_5$ and $\check{R}_5$.
For the third term in~\eqref{eq-thmStein}, note that by~\eqref{eq-regressionICW}, $R = \begin{pmatrix} R_1 +R_2 \\ \tilde{R}_1+ \tilde{R}_2\end{pmatrix}$, and we have that
$$
\mathbb E\left[|\ell R|\right] = \mathbb E\left[|R_1+R_2+c\,\sigma^2(x_n^*,\beta,h) (\tilde{R}_1+\tilde{R}_2)|\right],
$$
and it follows from Lemma~\ref{lem-errorterms} and the uniform boundedness of the first three moments of $W_n$ and all moments of $\tilde{X}_n$ that also this term can be bounded from above by $C/\sqrt{n}$.
If we only assume Condition~\ref{cond-WeightReg}(i)--(ii) and suppose that $\max_{i\in[n]}w_i \geq c\sqrt{n}$ for some $c>0$, then
$$
\mathbb E[W_n^2] = \frac1n \sum_{i\in[n]} w_i^2 \geq c^2+\frac1n \sum_{i: i\neq \argmax w_j} w_i^2 \stackrel{n\to\infty}{\longrightarrow} c^2 + \mathbb E[W^2],
$$
which is in contradiction to Condition~\ref{cond-WeightReg}(ii). Hence, $\max_{i\in[n]}w_i = o(\sqrt{n})$ and also
$$
\mathbb E[W_n^3] = \frac1n \sum_{i\in[n]} w_i^3 \leq \max_{i\in[n]}w_i\ \mathbb E[W_n^2] = o(\sqrt{n}).
$$
This suffices to prove the second statement of Theorem~\ref{thm-berryesseen-magnetization}.
\end{proof}
When one wants to prove the Berry-Esseen bound for the weighted sum of spins, the roles of $X_n$ and $\tilde{X}_n$ can be interchanged. In fact, $X_n$ can be ignored in that case, and in the first term of~\eqref{eq-thmStein} we have to estimate
\begin{align}\label{eq-approxweighted1stterm}
\frac{1}{2\lambda} \mathbb E[\ell D D_1 \,|\, \mathcal{F}_n] &= \frac{\sigma^2(x_n^*,\beta,h)}{\tilde{\chi}_n} \frac{1}{n}\sum_{i\in[n]}w_i^2\left(1-\sigma_i\mathbb E[\sigma_i'\,|\, \mathcal{F}_n]\right) \nonumber\\
&\approx \frac{\sigma^2(x_n^*,\beta,h)}{\tilde{\chi}_n} \mathbb E\left[\left(1-\tanh^2\left(\sqrt{\frac{\beta}{\mathbb E[W_n]}}W_n x_n^* + h\right)\right)W_n^2\right].
\end{align}
Since we want this to be equal to $1$, we choose $\tilde{\chi}_n$ as in \eqref{eq-choicetildechi}.
The approximation in~\eqref{eq-approxweighted1stterm} can be made precise as in Lemma~\ref{lem-bound1sttermerrors} and the resulting error terms can be shown to be $\mathcal{O}(1/\sqrt{n})$ as in Lemma~\ref{lem-errorterms} under the assumption of one extra moment in Condition~\ref{cond-WeightReg}. Also the other terms in~\eqref{eq-thmStein} can then be shown to be $\mathcal{O}(1/\sqrt{n})$ as in Lemmas~\ref{lem-bound2ndtermerrors} and~\ref{lem-errorterms}.
\paragraph*{Acknowledgements.}
A large part of this work was carried out at the Ruhr-Universit\"at Bochum, supported by the Deutsche Forschungsgemeinschaft (DFG) via RTG 2131 \emph{High-dimensional Phenomena in Probability -- Fluctuations and Discontinuity}.
\begin{equation}gin{thebibliography}{99}
\bibitem{ChaSha11}
S.~Chatterjee and Q.-M.~Shao.
\newblock Nonnormal approximation by Stein's method of exchangeable pairs with application to the Curie-Weiss model.
\newblock {\em The Annals of Applied Probability}, {\bf 21}(2):464--483, (2011).
\bibitem{CheGolSha11}
L.H.Y.~Chen, L.~Goldstein and Q.-M.~Shao.
\newblock Normal Approximation by Stein's Method.
\newblock Springer, Berlin Heidelberg, (2011).
\bibitem{CheRol10}
L.H.Y.~Chen and A.~R\"ollin.
\newblock Stein couplings for normal approximation.
\newblock Preprint, arXiv:1003.6039, (2010).
\bibitem{CheSha12}
Y.~Chen and Q.-M.~Shao.
\newblock Berry-Esseen inequality for unbounded exchangeable pairs.
\newblock In: {\em Probability Approximations and Beyond}, pp.~13--30, Springer, New York, (2012).
\bibitem{DomGiaGibHof18}
S.~Dommers, C.~Giardin\`a, C.~Giberti and R.~van~der~Hofstad.
\newblock Large deviations for the annealed Ising model on inhomogeneous random graphs: spins and degrees.
\newblock {\em Journal of Statistical Physics}, {\bf 173}(3--4):1045--1081, (2018).
\bibitem{DomGiaGibHofPri16}
S.~Dommers, C.~Giardin\`a, C.~Giberti, R.~van~der~Hofstad and M.L.~Prioriello.
\newblock Ising critical behavior of inhomogeneous Curie-Weiss models and annealed random graphs.
\newblock {\em Communications in Mathematical Physics}, {\bf 348}(1):221--263, (2016).
\bibitem{DomKulSch17}
S.~Dommers, C.~K\"ulske and P.~Schriever.
\newblock Continuous spin models on annealed generalized random graphs.
\newblock {\em Stochastic Processes and their Applications}, {\bf 127}(11):3719--3753, (2017).
\bibitem{EicLow10}
P.~Eichelsbacher and M.~L\"owe.
\newblock Stein's method for dependent random variables occurring in statistical mechanics.
\newblock {\em Electronic Journal of Probability}, {\bf 15}(30):962--988, (2010).
\bibitem{EicMar14}
P.~Eichelsbacher and B.~Martschink.
\newblock On rates of convergence for the overlap in the Hopfield model.
\newblock {\em M\"unster Journal of Mathematics}, {\bf 7}:731--752, (2014).
\bibitem{EicMar15}
P.~Eichelsbacher and B.~Martschink.
\newblock On rates of convergence in the Curie-Weiss-Potts model with an external field.
\newblock {\em Annales de l'Institut Henri Poincar\'e, Probabilit\'es et Statistiques}, {\bf 51}(1):252--282, (2015).
\bibitem{EllNew78}
R.S.~Ellis and C.M.~Newman.
\newblock Limit theorems for sums of dependent random variables occurring in statistical mechanics.
\newblock {\em Zeitschrift f\"ur Wahrscheinlichkeitstheorie und verwandte Gebiete}, {\bf 44}(2):117--139, (1978).
\bibitem{FanRol15}
X.~Fang and A.~R\"ollin.
\newblock Rates of convergence for multivariate normal approximation with applications to dense graphs and doubly indexed permutation statistics.
\newblock {\em Bernoulli}, {\bf 21}(4):2157--2189, (2015).
\bibitem{GiaGibHofPri15}
C.~Giardin\`a, C.~Giberti, R.~van~der Hofstad and M.L.~Prioriello.
\newblock Quenched central limit theorems for the Ising model on random graphs.
\newblock {\em Journal of Statistical Physics}, {\bf 160}(6):1623--1657, (2015).
\bibitem{GiaGibHofPri16}
C.~Giardin\`a, C.~Giberti, R.~van~der Hofstad and M.L.~Prioriello.
\newblock Annealed central limit theorems for the Ising model on random graphs.
\newblock {\em ALEA, Latin American Journal of Probability and Mathematical Statistics}, {\bf 13}(1):121--161, (2016).
\bibitem{KirMec13}
K.~Kirkpatrick and E.~Meckes.
\newblock Asymptotics of the mean-field Heisenberg model.
\newblock {\em Journal of Statistical Physics}, {\bf 152}(1):54--92, (2013).
\bibitem{KirNaw16}
K.~Kirkpatrick and T.~Nawaz.
\newblock Asymptotics of mean-field $O(N)$ models.
\newblock {\em Journal of Statistical Physics}, {\bf 165}(6):1114--1140, (2016).
\bibitem{GesineAdrian}
G.~Reinert and A.~R\"ollin.
\newblock Multivariate normal approximation with Stein's method of exchangeable pairs under a general linearity condition.
\newblock {\em The Annals of Probability}, {\bf 37}(6):2150--2173, (2009).
\bibitem{RR}
Y.~Rinott and V.~Rotar.
\newblock On coupling constructions and rates in the CLT for dependent summands
with applications to the antivoter model and weighted U-statistics.
\newblock {\em The Annals of Applied Probability}, {\bf 7}(4):1080--1105, (1997).
\bibitem{ShaZha17}
Q.-M.~Shao and Z.-S.~Zhang.
\newblock Berry-Esseen bounds of normal and non-normal approximation for unbounded exchangeable pairs.
\newblock {\em The Annals of Probability}, {\bf 47}(1):61--108, (2019).
\bibitem{Ste86}
C.~Stein.
\newblock Approximate computation of expectations.
\newblock {\em IMS Lecture Notes -- Monograph Series}, {\bf 7}, (1986).
\bibitem{bookDiaconis}
C.~Stein, P.~Diaconis, S.~Holmes and G.~Reinert.
\newblock Use of exchangeable pairs in the analysis of
simulations, Stein's method: expository lectures and applications.
\newblock {\em IMS Lecture Notes -- Monograph Series}, {\bf 46}, (2004).
\end{thebibliography}
\end{document}
\begin{document}
\title{Factoring the Cycle Aging Cost of Batteries \\ Participating in Electricity Markets}
\author{Bolun~Xu,~\IEEEmembership{Student Member,~IEEE,}
Jinye~Zhao,~\IEEEmembership{Member,~IEEE,}
Tongxin~Zheng,~\IEEEmembership{Senior Member,~IEEE,}
Eugene~Litvinov,~\IEEEmembership{Fellow,~IEEE},
Daniel S.~Kirschen,~\IEEEmembership{Fellow,~IEEE}
\thanks{B.~Xu and D.S.~Kirschen are with the University of Washington, USA (emails: \{xubolun, kirschen\}@uw.edu). }
\thanks{J.~Zhao, T.~Zheng, and E.~Litvinov are with ISO New England Inc., USA (emails: \{jzhao, tzheng, elitvinov\}@iso-ne.com). }
}
\maketitle
\makenomenclature
\begin{abstract}
When participating in electricity markets, owners of battery energy storage systems must bid in such a way that their revenues will at least cover their true cost of operation. Since cycle aging of battery cells represents a substantial part of this operating cost, the cost of battery degradation must be factored in these bids. However, existing models of battery degradation either do not fit market clearing software or do not reflect the actual battery aging mechanism. In this paper we model battery cycle aging using a piecewise linear cost function, an approach that provides a close approximation of the cycle aging mechanism of electrochemical batteries and can be incorporated easily into existing market dispatch programs. By defining the marginal aging cost of each battery cycle, we can assess the actual operating profitability of batteries. A case study demonstrates the effectiveness of the proposed model in maximizing the operating profit of a battery energy storage system taking part in the ISO New England energy and reserve markets.
\end{abstract}
\begin{IEEEkeywords}
Energy storage, battery aging mechanism, arbitrage, ancillary services, economic dispatch
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
In 2016, about 200~MW of stationary lithium-ion batteries were operating in grid-connected installations worldwide~\cite{eu_bat}, and more deployments have been proposed~\cite{isone_outlook,cal_rm}.
To accommodate this rapid growth in installed energy storage capacity, system operators and regulatory authorities are revising operating practices and market rules to take advantage of the value that energy storage can provide to the grid. In particular, the U.S. Federal Energy Regulatory Commission (FERC) has required independent system operators (ISO) and regional transmission organizations (RTO) to propose market rules that account for the physical and operational characteristics of storage resources~\cite{ferc_rm}. For example, the California Independent System Operator (CAISO) has already designed a market model that supports the participation of energy-limited storage resources and considers constraints on their state of charge (SoC) as well as on their maximum charge and discharge capacity~\cite{caiso_bpm}.
As electricity markets evolve to facilitate participation by battery energy storage (BES), owners of these systems must develop bidding strategies which ensure that they will at least recover their operating cost. Battery degradation must be factored in the operating cost of a BES because the life of electrochemical battery cells is very sensitive to the charge and discharge cycles that the battery performs and is thus directly affected by the way it is operated ~\cite{vetter2005ageing,xu2016modeling}. Existing models of battery degradation either do not fit dispatch calculations, or do not reflect the actual battery degradation mechanism. In particular, traditional generator dispatch models based on heat-rate curves cannot be used to represent the cycle aging characteristic of electrochemical batteries.
This paper proposes a new and accurate way to model the cost of battery cycle aging, which can be integrated easily in economic dispatch calculations. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item It proposes a piecewise linear cost function that provides a close approximation of the cost of cycle aging in electrochemical batteries.
\item System operators can incorporate this model in market clearing calculations to facilitate the participation of BES in wholesale markets by allowing them to properly reflect their operating cost.
\item Since this approach defines the marginal cost of battery cycle aging, it makes it possible for BES owners to design market offers and bids that recover at least the cost of battery life lost due to market dispatch.
\item The effectiveness of the proposed model is demonstrated using a full year of price data from the ISO New England energy markets.
\item The accuracy of the proposed model in predicting the battery cycle aging cost is demonstrated using an ex-post calculation based on a benchmark model.
\item The model accuracy increases with the number of linearization segments, and the error compared to the benchmark model approaches zero with sufficient linearization segments.
\end{itemize}
Section~II reviews the existing literature on battery cycle aging modeling. Section~III describes the proposed predictive battery cycle aging cost model. Section~IV shows how this model is incorporated in the economic dispatch. Section~V describes and discusses case studies performed using ISO New England market data. Section~VI draws conclusions.
\section{Literature Review}
\subsection{Battery Operating Cost}
Previous BES economic studies typically assume that battery cells have a fixed lifetime and do not include the cost of replacing the battery in the BES variable operating and maintenance (O\&M) cost~\cite{kintner2010energy}. The Electricity Storage Handbook from Sandia National Laboratories assumes that a BES performs only one charge/discharge cycle per day, and that the variable O\&M cost of a lithium-ion BES is constant and about 2~\$/MWh~\cite{akhil2013doe}. Similarly, Zakeri \emph{et al.}~\cite{zakeri2015electrical} assume that battery cells in lithium-ion BES are replaced every five years, and assume the same 2~\$/MWh O\&M cost. Other energy storage planning and operation studies also assume that the operating cost of BES is negligible and that they have a fixed lifespan~\cite{pandzic2015near,pozo2014unit,qiu2016stochastic}.
These assumptions are not valid if the BES is cycled multiple times per day because more frequent cycling increases the rate at which battery cells degrade and hastens the time at which they need to be replaced.
To secure the battery lifespan, Mohsenian-Rad~\cite{mohsenian2016optimal} caps the number of cycles a battery can perform per day. However, artificially limiting the cycling frequency prevents operators from taking advantage of a BES's operational flexibility and significantly lessens its profitability. To take full advantage of the ability of a BES to take part in energy and ancillary services markets, its owner must be able to cycle it multiple times per day and to follow irregular cycles. Under these conditions, its lifetime can no longer be considered as being fixed and its replacement cost can no longer be treated as a capital expense. Instead, the significant part of the battery degradation cost that is driven by cycling should be treated as an operating expense.
A BES performs temporal arbitrage in an electricity market by charging with energy purchased at a low price, and discharging this stored energy when it can be sold at a higher price. The profitability of this form of arbitrage depends not only on the price difference but also on the cost of the battery cycle aging caused by these charge/discharge cycles. When market prices are stable, the expected arbitrage revenue is small and the BES owner may therefore opt to forgo cycling to prolong the battery lifetime and reduce its cycle aging cost. On the other hand, if the market exhibits frequent large price fluctuations, the BES owner could cycle the BES multiple times a day to maximize its profits. Fig.~\ref{Fig:MP} shows that the price profile in a given market can change significantly from day to day. Although the average market price is higher in Fig.~\ref{Fig:MP_a}, arbitrage is not profitable in this case because the price fluctuations are small, and the aging cost from cycling is likely to be higher than the revenue from arbitrage. On the other hand, a BES owner is likely to perform three arbitrage cycles if the price profile is similar to the one shown on Fig.~\ref{Fig:MP_b}, because the revenue opportunities arising from the large price fluctuations are likely to be larger than the associated cycle aging cost. It is thus crucial to accurately incorporate the cost of cycle aging into the optimal operation of a BES.
\begin{figure}
\caption{\footnotesize Market price daily variation examples (Source: ISO New England).}
\label{Fig:MP_a}
\label{Fig:MP_b}
\label{Fig:MP}
\end{figure}
\subsection{Electrochemical Battery Degradation Mechanisms}
Electrochemical batteries have limited cycle life~\cite{dunn2011electrical} because of the fading of active materials caused by the charging and discharging cycles. This cycle aging is caused by the growth of cracks in the active materials, a process similar to fatigue in materials subjected to cyclic mechanical loading~\cite{vetter2005ageing,fatemi1998cumulative,li2011crack,wang2011cycle,laresgoiti2015modeling}. Chemists describe this process using partial differential equations~\cite{ramadesigan2012modeling}. These models have good accuracy but cannot be incorporated in dispatch calculations. On the other hand, heuristic battery lifetime assessment models assume that degradation is caused by a set of stress factors, each of which can be represented by a stress model derived from experimental data. The effect of these stress factors varies with the type of battery technology. In this paper, we focus on lithium-ion batteries because they are widely considered as having the highest potential for grid-scale applications. For our purposes, it is convenient to divide these stress factors into two groups depending on whether or not they are directly affected by the way a grid-connected battery is operated:
\begin{itemize}
\item Non-operational factors: ambient temperature, ambient humidity, battery state of life, calendar time~\cite{kassem2012calendar}.
\item Operational factors: Cycle depth, over charge, over discharge, current rate, and average state of charge (SoC)~\cite{vetter2005ageing}.
\end{itemize}
\subsubsection{Cycle depth}
Cycle depth is an important factor in a battery's degradation, and is the most critical component in the BES market dispatch model. A 7~Wh Lithium Nickel Manganese Cobalt Oxide (NMC) battery cell can perform over 50,000 cycles at 10\% cycle depth, yielding a lifetime energy throughput (i.e. the total amount of energy charged and discharged from the cell) of 35~kWh. If the same cell is cycled at 100\% cycle depth, it can only perform 500 cycles, yielding a lifetime energy throughput of only 3.5~kWh~\cite{ecker2014calendar}. This nonlinear aging property with respect to cycle depth is observed in most static electrochemical batteries~\cite{ruetschi2004aging, byrne2012estimating, xu2016modeling, wang2014degradation}. Section~\ref{BES:CA} explains in detail our modeling of the cycle depth stress.
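The throughput figures quoted above follow from the simple product of cycle count, cycle depth and cell capacity; the short check below merely reproduces them with the values quoted above.
\begin{verbatim}
cell_wh = 7.0                                  # NMC cell capacity (Wh)
print(50_000 * 0.10 * cell_wh / 1000, "kWh")   # 10% cycle depth -> 35 kWh
print(500 * 1.00 * cell_wh / 1000, "kWh")      # 100% cycle depth -> 3.5 kWh
\end{verbatim}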
\subsubsection{Current rate}
While high charging and discharging currents accelerate the degradation rate, grid-scale BES normally have durations (energy-to-power ratios) greater than 15 minutes. The effect of current rate on degradation is thus small in energy markets, according to the results of laboratory tests~\cite{wang2014degradation}. We will therefore not consider the current rate in our model. If necessary, a piecewise linear cost curve can be used to model the current rate stress function as a function of the battery's power output.
\subsubsection{Over charge and over discharge} In addition to the cycle depth effect, extreme SoC levels significantly reduce battery life~\cite{vetter2005ageing}. However, over-charging and over-discharging are avoided by enforcing upper and lower limits on the SoC either in the dispatch or by the battery controller.
\subsubsection{Average state of charge} The average SoC level in each cycle has a highly non-linear but slight effect on the cycle aging rate~\cite{ecker2014calendar, millner2010modeling}. Therefore we do not consider this stress factor in the proposed model.
\subsection{The Rainflow Counting Algorithm}\label{Sec:rf}
\begin{figure}
\caption{\footnotesize Using the rainflow algorithm to identify battery cycle depths.}
\label{Fig:rf1}
\label{Fig:rf2}
\label{Fig:rf}
\end{figure}
The rainflow counting algorithm is used extensively in materials stress analysis to count cycles and quantify their cumulative impact. It has also been applied to battery life assessment~\cite{xu2016modeling, muenzel2015multi}. Given a SoC profile with a series of local extrema (i.e., points where the direction of the current changes) $s_0$, $s_1$, $\dotsc$, the rainflow method identifies cycles as follows~\cite{amzallag1994standardization}:
\begin{enumerate}
\item Start from the beginning of the profile (as in Fig.~\ref{Fig:rf1}).
\item Calculate $\Delta s_1 = |s_0-s_1|$, $\Delta s_2 = |s_1-s_2|$, $\Delta s_3 = |s_2-s_3|$.
\item If $\Delta s_2\leq \Delta s_1$ and $\Delta s_2 \leq \Delta s_3$, then a full cycle of depth $\Delta s_2$ associated with $s_1$ and $s_2$ has been identified. Remove $s_1$ and $s_2$ from the profile, and repeat the identification using points $s_0$, $s_3$, $s_4$, $s_5$...
\item If a cycle has not been identified, shift the identification forward and repeat the identification using points $s_1$, $s_2$, $s_3$, $s_4$...
\item The identification is repeated until no more full cycles can be identified throughout the remaining profile.
\end{enumerate}
The remainder of the profile is called the rainflow residue and contains only half cycles~\cite{marsh2016review}. A half cycle links each pair of adjoining local extrema in the rainflow residue profile. A half cycle with decreasing SoC is a discharging half cycle, while a half cycle with increasing SoC is a charging half cycle. For example, the SoC profile shown on Fig.~\ref{Fig:rf2} has two full cycles of depth 10\% and one full cycle of depth 40\%, as well as a discharging half cycle of depth 50\% and a charging half cycle of depth 50\%.
The rainflow algorithm does not have an analytical mathematical expression~\cite{benasciutti2005spectral} and cannot be integrated directly within an optimization problem. Nevertheless, several efforts have been made to optimize battery operation by simplifying the rainflow algorithm. Abdulla \emph{et al.}~\cite{abdulla2016optimal} and Tran \emph{et al.}~\cite{tran2013energy} simplify the cycle depth as the BES energy output within each control time interval. Koller \emph{et al.}~\cite{koller2013defining} define a cycle as the period between battery charging and discharging transitions. These model simplifications enable the incorporation of cycle depth in the optimization of BES operation, but introduce additional errors in the degradation model. He \emph{et al.}~\cite{he2015optimal} decompose the battery degradation model and optimize BES market offers iteratively. This method yields more accurate dispatch results, but is too complicated to be incorporated in an economic dispatch calculation.
We will use the rainflow algorithm as the basis for an ex-post benchmark method for assessing battery cycle life. In this model, the total life loss $L$ from a SoC profile is assumed to be the sum of the life losses from all $I$ cycles identified by the rainflow algorithm. If the life loss from a cycle of depth $\delta$ is given by a cycle depth stress function $\Phi(\delta)$ of polynomial form, we have:
\begin{align}
L = \textstyle \sum_{i=1}^{I} \Phi(\delta_i)\,.
\label{Eq:rf1}
\end{align}
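For illustration, the three-point identification rule described above and the life loss sum in \eqref{Eq:rf1} can be implemented in a few lines. The Python sketch below is a minimal version: the stress function $\Phi(\delta)=k\delta^\alpha$ and its parameters are hypothetical placeholders, and half cycles in the residue are weighted by one half, which is one common convention (the dispatch model of Section~\ref{BES:CA} instead attributes aging only to discharging half cycles).
\begin{verbatim}
def rainflow_full_cycles(soc):
    """Identify full-cycle depths in a SoC profile of alternating local
    extrema, following the three-point rule described above; returns the
    full-cycle depths and the residue (which contains only half cycles)."""
    pts = list(soc)
    full = []
    i = 0
    while i + 3 < len(pts):
        d1 = abs(pts[i] - pts[i + 1])
        d2 = abs(pts[i + 1] - pts[i + 2])
        d3 = abs(pts[i + 2] - pts[i + 3])
        if d2 <= d1 and d2 <= d3:
            full.append(d2)           # full cycle of depth d2 identified
            del pts[i + 1:i + 3]      # remove its two inner extrema
            i = 0                     # restart the scan from the beginning
        else:
            i += 1                    # shift the identification window
    return full, pts

def life_loss(soc, k=5.24e-4, alpha=2.03):
    """Total life loss: sum of Phi(depth) over the identified cycles, with
    Phi(d) = k * d**alpha a hypothetical stress function and residue half
    cycles counted with weight one half."""
    full, residue = rainflow_full_cycles(soc)
    halves = [abs(x - y) for x, y in zip(residue, residue[1:])]
    return sum(k * d ** alpha for d in full) + 0.5 * sum(k * d ** alpha for d in halves)

# Example SoC profile given as a sequence of local extrema (per-unit values).
profile = [0.5, 1.0, 0.9, 1.0, 0.5, 0.9, 0.4, 0.5, 0.0, 0.5]
print(rainflow_full_cycles(profile))
print(life_loss(profile))
\end{verbatim}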
\section{Marginal Cost of Battery Cycling}\label{BES:CA}
\begin{figure}
\caption{Upper-approximation to the cycle depth aging stress function.
}
\label{Fig:dod}
\end{figure}
In order to participate fully in electricity markets, owners of batteries must be able to submit offers and bids that reflect their marginal operating cost. As we argued above, this marginal cost curve should reflect the cost of battery degradation caused by each cycle. In order to keep the model simple, and to obtain a cost function similar to those used in existing market dispatch programs, we assume that battery cycle aging only occurs during the discharge stage of a cycle, so that a discharging half cycle causes the same cycle aging as a full cycle of the same depth, while a charging half cycle causes no cycle aging. This is a reasonable assumption because the amounts of energy charged and discharged from a battery are almost identical when assessed on a daily basis.
During a cycle, if the BES is discharged from a starting SoC $e\up{up}$ to an end SoC $e\up{dn}$ and later charged back (or vice-versa), the depth of this cycle is the relative SoC difference $(e\up{up}-e\up{dn})/E\up{rate}$, where $E\up{rate}$ is the energy capacity of the BES.
Let a battery be discharged from a cycle depth $\delta_{t-1}$ at time interval $t-1$. This battery's cycle depth at time $t$ can be calculated from its discharge power $g_t$ (assuming the time interval duration is one hour):
\begin{align}
\delta_t = \frac{1}{\eta\up{dis}E\up{rate}}g_t + \delta_{t-1}\,,
\label{ES:dod}
\end{align}
where $\eta\up{dis}$ is the BES discharge efficiency, and $g_t$ has non-negative values because we ignore charging for now. The incremental aging resulting from this cycle is $\Phi(\delta_t)$, and the marginal cycle aging can be calculated by taking the derivative of $\Phi(\delta_t)$ with respect to $g_t$ and substituting from \eqref{ES:dod}:
\begin{align}
\frac{\partial \Phi(\delta_t)}{\partial g_t} = \frac{d\Phi(\delta_t)}{d\delta_t}\frac{\partial \delta_t}{\partial g_t} = \frac{1}{\eta\up{dis}E\up{rate}}\frac{d\Phi(\delta_t)}{d\delta_t}\,.
\label{ES:inc}
\end{align}
To define the marginal cost of cycle aging, we prorate the battery cell replacement cost $R$ (\$) to the marginal cycle aging, and construct a piecewise linear upper-approximation function $c$. This function consists of $J$ segments that evenly divide the cycle depth range (from 0 to 100\%):
\begin{equation}
c\up{}(\delta_t) =
\begin{cases}
c\up{}_1 & \text{if } \delta_t \in [0, \frac{1}{J}) \\
\vdots & \\
c\up{}_j & \text{if }
\delta_t \in [\frac{j-1}{J}, \frac{j}{J}) \\
\vdots & \\
c\up{}_{J} & \text{if }
\delta_t \in [\frac{J-1}{J}, 1]
\end{cases}\,,
\label{ES:ca_pl}
\end{equation}
where
\begin{equation}
c\up{}_j = \frac{R}{\eta\up{dis}E\up{rate}}J\big[\Phi(\frac{j}{J})-\Phi(\frac{j-1}{J})\big]\,,
\end{equation}
and $\delta_t$ is the cycle depth of the battery at time $t$. Fig.~\ref{Fig:dod} illustrates the cycle depth stress function and its piecewise linearization with different numbers of segments.
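Once a cycle depth stress function has been fitted, the segment costs in \eqref{ES:ca_pl} are straightforward to tabulate. The sketch below computes the $c_j$ for a hypothetical power-law stress function and illustrative replacement cost, capacity and efficiency values; all numbers are placeholders rather than fitted cell data.
\begin{verbatim}
import numpy as np

def segment_costs(R, E_rate, eta_dis, J, phi):
    """Marginal aging cost c_j ($/MWh) of each of the J evenly spaced cycle
    depth segments, obtained by prorating the replacement cost R over the
    increments of the stress function phi."""
    grid = np.arange(J + 1) / J
    return R / (eta_dis * E_rate) * J * np.diff(phi(grid))

# Hypothetical inputs: 1 MWh of cells costing $300k, 95% discharge efficiency.
phi = lambda d: 5.24e-4 * d ** 2.03
c = segment_costs(R=300_000.0, E_rate=1.0, eta_dis=0.95, J=10, phi=phi)
print(np.round(c, 2))
\end{verbatim}
Because the stress function is convex, the resulting $c_j$ increase with $j$, so deeper cycle depth segments are priced higher than shallow ones.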
\section{Optimizing the BES Dispatch}\label{Sec:opt}
Having established a marginal cost function for a BES, we are now able to optimize how it should be dispatched assuming that it acts as a price-taker on the basis of perfect forecasts of the market prices for energy and reserve. A formal description of this optimization requires the definitions of the following parameters:
\begin{itemize}
\item $T$: Number of time intervals in the optimization horizon, indexed by $t$
\item $J$: Number of segments in the cycle aging cost function, indexed by $j$
\item $M$: Duration of a market dispatch time interval
\item $S$: Sustainability time requirement for reserve provision
\item $E\up{0}$: Initial amount of energy stored in the BES
\item $E\up{final}$: Amount of energy that must be stored at the end of the optimization horizon
\item $E\up{min}$ and $E\up{max}$: Minimum and maximum energy stored in the BES
\item $D$, $G$: Discharging and charging power ratings
\item $c_j$: Marginal aging cost of cycle depth segment $j$
\item $\overline{e}\up{}_j$: Maximum amount of energy that can be stored in cycle depth segment $j$
\item $\overline{e}\up{0}_j$: Initial amount of energy of cycle depth segment $j$
\item $\eta\up{ch}$, $\eta\up{dis}$: Charge and discharge efficiencies
\item $\lambda\up{e}_t$, $\lambda\up{q}_t$: Forecasts of the energy and reserve prices at $t$
\end{itemize}
This optimization uses the following decision variables:
\begin{itemize}
\item $p\up{ch}_{t,j}$, $p\up{dis}_{t,j}$: Charge and discharge power for cycle depth segment $j$ at time $t$
\item $e\up{}_{t,j}$: Energy stored in marginal cost segment $j$ at time $t$
\item $d_t$, $g_t$: Charging and discharging power at time $t$
\item $d\up{q}_t$, $g\up{q}_t$: BES baseline charging and discharging power at time $t$ for reserve provision
\item $q_t$: Reserve capacity provided by the BES at time $t$
\item $v_t$: Operating mode of the BES: if at time $t$ the BES is charging then $v_t=0$; if it is discharging then $v_t=1$. If the BES is idling, this variable can take either value. If some sufficient conditions are satisfied (for example, that market clearing prices are non-negative), the binary variable $v_t$ can be relaxed~\cite{li2016sufficient}
\item $u_t$: If at time $t$ the BES provides reserve then $u_t=1$, else $u_t=0$
\end{itemize}
The objective of this optimization is to maximize the operating profit $\Omega$ of the BES. This profit is defined as the difference between the revenues from the energy and reserve markets and the cycle aging cost $C$:
\begin{align}
\max_{\mathbf{p,g,d,q}} \Omega := \textstyle\sum_{t=1}^{T}M\Big[ \lambda\up{e}_t(g_t-d_t) + \lambda\up{q}_t q_t\Big] - C\up{}\,.
\label{ES:obj}
\end{align}
Depending on the discharge power, the depth of discharge during each time interval extends over one or more segments.
To model the cycle depth in multi-interval operation, we assign a charge power component $p\up{ch}_{t,j}$ and an energy level $e\up{}_{t,j}$ to each cycle depth segment, so that we can track the energy level of each segment independently and identify the current cycle depth. For example, assume we divide the cycle depth of a 1~MWh BES into 10 segments of 0.1~MWh. If a cycle of 10\% depth starts with a discharge, as between $s_2$ and $s_3$ in Fig.~\ref{Fig:rf1}, the BES must have previously undergone a charge event which stored more than 0.1~MWh according to the definition from the rainflow method. Because the marginal cost curve is convex, the BES always discharges from the cheapest (shallowest) available cycle depth segment towards the more expensive (deeper) segments.
The proposed model provides a close approximation to the rainflow cycle counting algorithm; detailed proofs and a numerical example are included in the appendix.
The cycle aging cost $C$ is the sum of the cycle aging costs associated with each segment over the horizon:
\begin{equation}
C\up{} = \textstyle\sum_{t=1}^{T}\sum_{j=1}^{J}Mc\up{}_jp\up{dis}_{t,j}\,.
\label{ES:cost}
\end{equation}
This optimization is subject to the following constraints:
\begin{align}
d_t &= \textstyle\sum_{j=1}^{J}p\up{ch}_{t,j}
\label{Eq:CP_1}\\
g_t &= \textstyle\sum_{j=1}^{J}p\up{dis}_{t,j}
\label{Eq:CP_2}\\
d_t &\leq D(1-v_t)
\label{Eq:CP_3}\\
g_t &\leq Gv_t
\label{Eq:CP_4}\\
e\up{}_{t,j}-e\up{}_{t-1,j} &= M(p\up{ch}_{t,j}\eta\up{ch}- p\up{dis}_{t,j}/\eta\up{dis})
\label{Eq:CE_1}\\
e\up{}_{t,j} &\leq \overline{e}\up{}_j
\label{Eq:CE_2}\\
E\up{min} \leq \textstyle \textstyle\sum_{j=1}^{J}e\up{}_{t,j} &\leq E\up{max}
\label{Eq:CE_4}\\
e\up{}_{1,j} &= e\up{0}_j
\label{Eq:CE_5}\\
\textstyle\sum_{j=1}^{J}e\up{}_{T,j} &\geq E\up{final}\,,
\label{Eq:CE_3}
\end{align}
Eq.~\eqref{Eq:CP_1} states that the BES charging power drawn from the grid is the sum of the charging powers associated with each cycle depth segment. Eq.~\eqref{Eq:CP_2} is the equivalent for the discharging power. Eqs.~\eqref{Eq:CP_3}--\eqref{Eq:CP_4} enforce the BES power rating, with the binary variable $v_t$ preventing simultaneous charging and discharging~\cite{go2016assessing}. Eq.~\eqref{Eq:CE_1} tracks the evolution of the energy stored in each cycle depth segment, factoring in the charging and discharging efficiency. Eq.~\eqref{Eq:CE_2} enforces the upper limit on each segment while Eq.~\eqref{Eq:CE_4} enforces the minimum and maximum SoC of the BES. Eq.~\eqref{Eq:CE_5} sets the initial energy level in each cycle depth segment, and the final storage energy level is enforced by Eq.~\eqref{Eq:CE_3}.
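To make the structure of this model concrete, the sketch below assembles the energy-arbitrage part of the formulation (objective \eqref{ES:obj} without the reserve term, cycle aging cost \eqref{ES:cost}, and constraints \eqref{Eq:CP_1}--\eqref{Eq:CE_2}) with the open-source PuLP modeler. It is an illustrative sketch rather than the implementation used in this paper: the price data, the initial segment energy levels, and the aging cost prefactor are placeholder assumptions, and the reserve, SoC-limit, and terminal-energy constraints are omitted for brevity.
\begin{verbatim}
# Illustrative sketch only (not the code used for the case study).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

T, J, M = 24, 16, 1.0                    # intervals, segments, interval length [h]
D = G = 20.0                             # charge/discharge power ratings [MW]
eta_ch = eta_dis = 0.95
E_rate = 12.5                            # energy capacity [MWh]
R = 300e3 * E_rate                       # assumed prorated replacement cost [$]
e_bar = [E_rate / J] * J                 # per-segment energy limits [MWh]
phi = lambda d: 5.24e-4 * d**2.03        # cycle depth stress function
c = [R * (phi((j+1)/J) - phi(j/J)) / (eta_dis * e_bar[j]) for j in range(J)]
lam = [30.0 + 20.0 * (6 <= t < 18) for t in range(T)]   # toy price forecast

prob = LpProblem("bes_dispatch", LpMaximize)
idx = [(t, j) for t in range(T) for j in range(J)]
p_ch  = LpVariable.dicts("p_ch",  idx, lowBound=0)
p_dis = LpVariable.dicts("p_dis", idx, lowBound=0)
e     = LpVariable.dicts("e",     idx, lowBound=0)
d = LpVariable.dicts("d", range(T), lowBound=0)
g = LpVariable.dicts("g", range(T), lowBound=0)
v = LpVariable.dicts("v", range(T), cat=LpBinary)

# objective (ES:obj) without reserve revenue, minus cycle aging cost (ES:cost)
prob += lpSum(M * lam[t] * (g[t] - d[t]) for t in range(T)) \
      - lpSum(M * c[j] * p_dis[(t, j)] for (t, j) in idx)

for t in range(T):
    prob += d[t] == lpSum(p_ch[(t, j)] for j in range(J))    # Eq. (CP_1)
    prob += g[t] == lpSum(p_dis[(t, j)] for j in range(J))   # Eq. (CP_2)
    prob += d[t] <= D * (1 - v[t])                            # Eq. (CP_3)
    prob += g[t] <= G * v[t]                                  # Eq. (CP_4)
    for j in range(J):
        e_prev = e[(t-1, j)] if t > 0 else 0.5 * e_bar[j]     # assumed e0_j
        prob += e[(t, j)] - e_prev == M * (p_ch[(t, j)] * eta_ch
                                           - p_dis[(t, j)] / eta_dis)  # (CE_1)
        prob += e[(t, j)] <= e_bar[j]                         # Eq. (CE_2)

prob.solve()
\end{verbatim}
In a market-clearing or bidding application the same structure is retained, with the remaining SoC and terminal-energy constraints \eqref{Eq:CE_4}--\eqref{Eq:CE_3} and the reserve constraints added back in.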
Because the revenues that a BES collects from providing reserve capacity are co-optimized with the revenues from the energy market, the BES must also abide by the requirements that the North American Electric Reliability Corporation (NERC) imposes on the provision of reserve by energy storage. In particular, NERC requires that a BES must have enough energy stored to sustain its committed reserve capacity and baseline power dispatch for at least one hour~\cite{nerc_reserve}. This requirement is automatically satisfied when market resources are cleared over an hourly interval, as is the case for the ISO New England day-ahead market. If the dispatch interval is shorter than one hour (e.g., for the five-minute ISO New England real-time market), this one-hour sustainability requirement has significant implications for the dispatch of a BES because of the interactions between its power and energy capacities. For example, let us consider a 36~MW BES with 3~MWh of stored energy. If this BES is not scheduled to provide reserve, it can dispatch up to 36~MW of generation for the next 5-minute market period. On the other hand, if it is scheduled to provide 1~MW of reserve, its generation capacity is also constrained by the one-hour sustainability requirement, and it can therefore only provide up to 2~MW of baseline generation for the next 5-minute market period. The following constraints model the provision of reserve by the BES:
\begin{align}
0\leq d_t - d\up{q}_t &\leq D(1-u_t)
\label{Eq:CR_1}\\
0\leq g_t - g\up{q}_t &\leq G(1-u_t)
\label{Eq:CR_2}\\
d\up{q}_t &\leq Du_t
\label{Eq:CR_3}\\
g\up{q}_t &\leq Gu_t
\label{Eq:CR_4}\\
g\up{q}_t + q_t - d\up{q}_t &\leq Gu_t
\label{Eq:CR_5}\\
q_t & \geq \varepsilon u_t
\label{Eq:CR_6}\\
S(g\up{q}_t + q_t - d\up{q}_t) &\leq \textstyle\sum_{j=1}^{J}\overline{e}\up{}_j\,,
\label{Eq:CR_7}
\end{align}
Equations \eqref{Eq:CR_1}--\eqref{Eq:CR_7} enforce the constraints related to the provision of reserve by a BES. In particular, Eq.~\eqref{Eq:CR_7} enforces the one-hour reserve sustainability requirement. Depending on the requirements of the reserve market, the binary variable $u_t$ and constraints \eqref{Eq:CR_1}--\eqref{Eq:CR_7} can be simplified or relaxed.
The optimization model described above can be used by the BES owner to design bids and offers or to self-schedule based on price forecasts. The ISO can also incorporate this model into the market clearing program to better capture the aging characteristics of a BES. In this case, the cycle aging cost function should be included in the welfare maximization, while constraints \eqref{Eq:CP_1}--\eqref{Eq:CR_7} should be added to the market clearing program constraints. A BES owner should include the cycle aging parameters $c_j$ and $\overline{e}_j$ in its market offers, as well as the parameters $D$, $G$, $E\up{min}$, $E\up{final}$, $\eta\up{ch}$, $\eta\up{dis}$, so that the ISO can manage its SoC and its upper/lower charge limits.
\section{Case Study}
The proposed model has been tested using data from ISO New England to demonstrate that it improves the profitability and longevity of a BES participating in this market. All simulations were carried out in GAMS using the CPLEX solver~\cite{GAMS}, and the optimization period is 24 hours in all cases.
\subsection{BES Test Parameters}\label{Sec:CS:BES}
The BES simulated in this case study has the following parameters:
\begin{itemize}
\item Charging and discharging power rating: 20 MW
\item Energy capacity: 12.5 MWh
\item Charging and discharging efficiency: 95\%
\item Maximum state of charge: 95\%
\item Minimum state of charge: 15\%
\item Battery cycle life: 3000 cycles at 80\% cycle depth
\item Battery shelf life: 10 years
\item Cell temperature: maintained at 25$^\circ C$
\item Battery pack replacement cost: 300,000~\$/MWh
\item $\mathrm{Li(NiMnCo)O_2}$-based 18650 lithium-ion battery cells
\end{itemize}
These cells have a near-quadratic cycle depth stress function~\cite{laresgoiti2015modeling}:
\begin{align}
\Phi(\delta) = (5.24\text{E-4})\delta\up{2.03}\,.
\label{Eq:DoD}
\end{align}
Fig.~\ref{Fig:dod} shows this stress function along with several possible piecewise linearizations. We assume that all battery cells are identically manufactured, that the battery management system is ideal, and thus that all battery cells in the BES age at the same rate.
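To illustrate how such a linearization can be obtained, the short sketch below converts a stress function into per-segment marginal aging costs by taking the increments of $\Phi$ over equal cycle depth ranges; the prefactor $R/(\eta\up{dis}\overline{e}_j)$ is an assumption here, chosen to match the construction used in the appendix. With the quadratic example function $100\delta^2$ and $J=10$ it reproduces the marginal costs $1,3,\dotsc,19$ of the appendix example, and with \eqref{Eq:DoD} it yields curves of the kind shown in Fig.~\ref{Fig:dod} for any number of segments.
\begin{verbatim}
# Illustrative sketch only: piecewise linearization of a cycle depth
# stress function into per-segment marginal aging costs.
def marginal_costs(phi, J, R=1.0, e_seg=1.0, eta_dis=1.0):
    """Marginal cost of each of the J equal cycle depth segments."""
    return [R * (phi((j + 1) / J) - phi(j / J)) / (eta_dis * e_seg)
            for j in range(J)]

print(marginal_costs(lambda d: 100 * d**2, J=10))
# -> approximately [1, 3, 5, ..., 19], the cost curve of the appendix example
print(marginal_costs(lambda d: 5.24e-4 * d**2.03, J=16))
# -> increasing marginal costs, i.e. a convex 16-segment aging cost curve
\end{verbatim}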
Since the BES dispatch is performed based on perfectly accurate price forecasts, our results provide an upper bound on its profitability in this market.
\begin{figure*}
\caption{BES dispatch for different cycle aging cost models (ISO New England SE-MASS Zone, Jan 5th \& 6th, 2015).}
\label{Fig:dispatch_price}
\label{Fig:dispatch_soc}
\label{Fig:dispatch_p}
\label{Fig:comp_block}
\end{figure*}
\subsection{Market Data}\label{Sec:CS:DS}
BES dispatch simulations were performed using 2015 zonal prices for the Southeast Massachusetts (SE-MASS) region of the ISO New England market~\cite{iso_express}, because energy storage has the highest profit potential in this price zone~\cite{yury2016ensuring}. Three market scenarios were simulated:
\begin{itemize}
\item \emph{Day-Ahead Market (DAM)}: Generation and demand are settled using hourly day-ahead prices in this energy market. The DAM does not clear operating reserve capacities. It is a purely financial market, and is used in this study to demonstrate the BES dispatch under stable energy prices.
\item \emph{Real-Time Market (RTM) with 1-hour settlement period}: The real-time energy market clears every five minutes and generates 5-minute real-time energy and reserve prices. Generations, demands, and reserves are settled hourly using an average of these 5-minute prices. The reserve sustainability requirement is one hour.
\item \emph{RTM with 5-minute settlement periods}: ISO New England plans to launch the 5-minute subhourly settlement on March 1,~2017~\cite{isone_rte}. The reserve sustainability requirement remains one hour.
\end{itemize}
Fig.~\ref{Fig:dispatch_price} compares the energy prices in these different markets and shows that the 5-minute real-time prices fluctuate the most, while the day-ahead prices are more stable than real-time prices.
\subsection{Accuracy of the Predictive Aging Model}
Fig.~\ref{Fig:dispatch_soc} and \ref{Fig:dispatch_p} compare the BES dispatches for piecewise-linear cycle aging cost functions with different numbers of cycle depth segments. A cost curve with more segments is a closer approximation of the actual cycle aging function. The price signal for these examples is the 5-minute RTM price curve shown in Fig.~\ref{Fig:dispatch_price}. Fig.~\ref{Fig:dispatch_soc} shows the SoC profile while Fig.~\ref{Fig:dispatch_p} shows the corresponding output power profile, where positive values correspond to discharging periods, and negative values to charging periods.
The gray curve in Fig.~\ref{Fig:comp_block} shows the dispatch of the BES assuming zero operating cost. This is the most aggressive dispatch, and the BES assigns full power to arbitrage as long as there are price fluctuations, regardless of the magnitude of the price differences. Fig.~\ref{Fig:dispatch_p} shows that the BES frequently switches between charging and discharging, and Fig.~\ref{Fig:dispatch_soc} that it ramps aggressively. This dispatch maximizes the market revenue for the BES, but not its lifetime profit, because the arbitrage decisions ignore the cost of cycle aging. We will show in Section~\ref{Sec:CS_MP} that this dispatch actually results in negative profits for all market scenarios.
\begin{table*}[t]
\centering
\caption{Dispatch of a 20MW~/~12.5MWH BES in ISO-NE Energy Markets (full-year 2015).}
\begin{tabular}{l || c c c || c c c || c c c}
\hline
\hline
Market & \multicolumn{3}{c||}{DAM} & \multicolumn{3}{c||}{RTM with hourly settlement} & \multicolumn{3}{c}{RTM with 5-minute settlement} \Tstrut\Bstrut\\
\hline
Cycle aging cost model & no cost & 1-seg. & 16-seg. & no cost & 1-seg. & 16-seg. & no cost & 1-seg. & 16-seg. \Tstrut\Bstrut\\
\hline
Annual market revenue [k\$] & 138.8 & 0 & 21.3 & 382.5 & 197.5 & 212.5 & 789.3 & 303.8 & 372.3 \Tstrut\Bstrut\\
\hline
Revenue from reserve [\%] & \multicolumn{3}{c||}{No price for reserve in DAM}& 29.6 & 74.1 & 73.6 & 13.8 & 34.9 & 29.8 \Tstrut\Bstrut\\
\hline
Annual life loss from cycling [\%] & 24.4 & 0 & 0.3 & 43.6 & 1.0 & 1.1 & 77.0 & 2.2 & 2.6 \Tstrut\Bstrut\\
\hline
Annual prorated cycle aging cost [k\$] & 913.8 & 0 & 11.3 & 1626.3 & 36.3 & 38.8 & 2887.5 & 81.3 & 96.3 \Tstrut\Bstrut\\
\hline
Annual prorated profit [k\$] & -775.0 & 0 &10 & -1243.8 & 161.3 & 173.8 & -2101.3 & 222.5 & 276.3 \Tstrut\Bstrut\\
\hline
Profit from reserve [\%] & \multicolumn{3}{c||}{No price for reserve in DAM} & - & 90.7 & 90.0 & - & 47.7 & 40.2 \Tstrut\Bstrut\\
\hline
Battery life expectancy [year] & 2.9 & 10.0 & 9.7 & 1.9 & 9.1 & 9.1 & 1.1 & 8.2 & 8.0 \Tstrut\Bstrut\\
\hline
\hline
\end{tabular}
\label{tab:bes_dispatch}
\end{table*}
The yellow curve in Fig.~\ref{Fig:comp_block} illustrates the dispatch of the BES when the cycle aging cost curve is approximated by a single cycle depth segment. In this case, the marginal cost of cycle aging is constant and, as shown in Fig.~\ref{Fig:dod}, it overestimates the marginal cost of aging over a wide range of cycle depths. Therefore, this dispatch yields the most conservative arbitrage response, and the BES remains idle unless price deviations are very large, as demonstrated in Fig.~\ref{Fig:dispatch_p}. Consequently, the BES collects the smallest market revenues, but the BES never loses money from market dispatch because the actual cycle aging is always smaller than the value predicted by the model.
As the number of segments increases, the BES dispatch becomes more sensitive to the magnitude of the price fluctuations, and a tighter correlation can be observed between the market price in Fig.~\ref{Fig:dispatch_price} and the BES SoC in Fig.~\ref{Fig:dispatch_soc}. The red curve shows the dispatch of the BES using a 16-segment linearization of the cycle aging cost curve. When small price fluctuations occur, the BES only dispatches at a fraction of its power rating, even though it has sufficient energy capacity. This ensures that the marginal cost of cycle aging does not exceed the marginal market arbitrage income.
Besides considering the impact of the piecewise linearization on the BES dispatch, it is also important to compare the cycle aging cost used by the predictive model incorporated in the dispatch calculation with an ex-post calculation of this cost using the benchmark rainflow-counting algorithm. Using the $\hat{e}_{t,j}$ obtained from the optimal dispatch model \eqref{ES:obj}, we generate a percentage SoC series:
\begin{align}
\sigma_t = \textstyle \sum_{j=1}^J {\hat{e}_{t,j}}/{E\up{rate}}\,,
\end{align}
This SoC series is fed into the rainflow method as described in Section~\ref{Sec:rf}, and the cycle life loss $L$ is calculated as in \eqref{Eq:rf1} with the cycle stress function \eqref{Eq:DoD}. The relative error $\epsilon$ on the cycle aging cost is calculated as:
\begin{align}
\epsilon = {|\hat{C} - RL|}/({RL})\,,
\end{align}
where $\hat{C}$ is the cycle aging cost from \eqref{ES:cost}. Fig.~\ref{Fig:cost_err} shows the difference between the predicted and ex-post calculations for the simulations based on the RTM with a 5-minute settlement. As the number of segments increases to 16, the error becomes negligible.
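The sketch below shows one way to carry out this ex-post check. It uses the open-source \texttt{rainflow} package as a stand-in for the benchmark counting algorithm (any rainflow implementation could be substituted), and it weighs every half cycle by one half, whereas the benchmark model in this paper costs only discharge half cycles, so the result should be read as an approximation.
\begin{verbatim}
# Illustrative ex-post check only (not the code used in the case study).
import rainflow

def expost_aging_cost(soc, R, phi):
    """soc: SoC series as a fraction of E_rate; R: replacement cost."""
    life_loss = 0.0
    for depth, count in rainflow.count_cycles(soc):
        life_loss += count * phi(depth)     # cycle life loss, cf. Eq. (rf1)
    return R * life_loss                    # prorated cycle aging cost

phi = lambda d: 5.24e-4 * d**2.03           # Eq. (DoD)
R = 300e3 * 12.5                            # assumed pack replacement cost [$]
soc = [0.6, 0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.5,
       0.4, 0.3, 0.4, 0.3, 0.2, 0.1, 0.6]   # an example normalized SoC profile
C_expost = expost_aging_cost(soc, R, phi)
# relative error against the model's predicted cost C_hat, cf. the text:
# eps = abs(C_hat - C_expost) / C_expost
\end{verbatim}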
\begin{figure}
\caption{Difference between the cycle aging cost calculated using the predictive model and an ex-post calculation using the benchmark rainflow method for a full-year 5-minute RTM dispatch simulation.}
\label{Fig:cost_err}
\end{figure}
\subsection{BES Market Profitability Analysis}~\label{Sec:CS_MP}
Table~\ref{tab:bes_dispatch} summarizes the economics of BES operation under the three markets described in Section \ref{Sec:CS:DS} and for three cycle aging cost models: \emph{no operating cost}; \emph{single segment cycle aging cost}; and \emph{16-segment cycle aging cost}. The market revenue, profit, and battery life expectancy calculations are based on dispatch simulations using market data spanning all of 2015. In the fifth row, the life loss due to market dispatch is calculated using the benchmark cycle life loss model of Eqs.~\eqref{Eq:rf1} and \eqref{Eq:DoD}. In the sixth row, we calculate the cycle aging cost by prorating the battery cell replacement cost to the dispatch life loss. In the seventh row, the cost of cycle aging is subtracted from the market revenue to calculate the operating profit. In the last row, we estimate the battery cell life expectancy assuming the BES repeats the same operating pattern in future years. The life estimation $L\up{exp}$ includes shelf (calendar) aging and cycle aging
\begin{align}
L\up{exp} = ({100\%})/({\Delta L\up{cal} + \Delta L\up{cycle}})\,,
\end{align}
where $\Delta L\up{cal}$ is the 10\% annual shelf life loss listed in Section~\ref{Sec:CS:BES}, and $\Delta L\up{cycle}$ is the annual life loss due to cycle aging shown in the fifth row of Table~\ref{tab:bes_dispatch}.
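As a quick numerical check of this formula (the inputs are simply read off the 16-segment, 5-minute RTM column of Table~\ref{tab:bes_dispatch}, not recomputed):
\begin{verbatim}
dL_cal, dL_cycle = 10.0, 2.6                   # annual life loss [% per year]
life_expectancy = 100.0 / (dL_cal + dL_cycle)  # ~7.9 years, ~8.0 in the table
\end{verbatim}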
The 16-segment model generates the largest profit in all market scenarios. Compared to the 16-segment model, the no-cost model results in a more aggressive operation of the BES, while the 1-segment model is more conservative. Because the no-cost model encourages arbitrage in response to all price differences, it results in a very large negative profit and a very short battery life expectancy in all market scenarios. The 1-segment model only arbitrages during large price deviations. In particular, the BES is never dispatched in the day-ahead market because its prices are very stable.
The BES achieves the largest profits in the 5-minute RTM because this market has the largest price fluctuations. The revenue from reserve is lower in the 5-minute RTM than in the hourly RTM, which shows that the proposed approach shifts the focus of BES operation from reserve to arbitrage when market price fluctuations become large. In the RTM, the BES collects a substantial portion of its profits from the provision of reserve, especially in the hourly RTM. A BES is more flexible than generators at providing reserve because it does not have a minimum stable generation level, can start immediately, and can remain idle until called; since it remains idle, the provision of reserve causes no cycle aging. In the hourly RTM, the provision of reserve represents about 74\% of the market revenue and 90\% of the prorated profit for this BES.
\section{Conclusion}
This paper proposes a method for incorporating the cost of battery cycle aging into economic dispatch, market clearing, or the development of bids and offers. This approach takes advantage of the flexibility that a battery can provide to the power system while ensuring that its operation remains profitable in a market environment. The cycle aging model closely approximates the actual electrochemical battery cycle aging mechanism, while being simple enough to be incorporated into market models such as economic dispatch. Based on simulations performed using a full year of actual market price data, we demonstrated the effectiveness and accuracy of the proposed model. These simulation results show that modeling battery degradation using the proposed model significantly improves the actual BES profitability and life expectancy.
\appendix
In this appendix we prove that the proposed piecewise linear model of the battery cycle aging cost is a close approximation of the benchmark rainflow-based battery cycle aging model, and that the accuracy of the model increases with the number of linearization segments. \emph{With an adequate number of linearization segments, the proposed model produces the same aging cost as the benchmark aging model for the same battery operation profile.} To prove this, we first explicitly characterize the cycle aging cost calculated using the proposed model (Theorem~1). We then show that this cost approaches the benchmark result when the number of linearization segments approaches infinity (Theorem~2).
We consider the operation of a battery over the period $\mathcal{T} = \{1,2,\dotsc,T\}$. The physical battery operation constraints are ($\forall t\in \mathcal{T}$)
\begin{align}
d_t &\leq D(1-v_t)
\label{Eq:App_1}\\
g_t &\leq Gv_t
\label{Eq:App_2}\\
e_t - e_{t-1} &= M(d_t\eta\up{ch}-g_t/\eta\up{dis})
\label{Eq:App_3}
\end{align}
We denote by $\mathbf{d}=\{d_1, d_2, \dotsc, d_T\}$ the set of all battery charge powers, and by $\mathbf{g}=\{g_1, g_2, \dotsc, g_T\}$ the set of all discharge powers. Hence, a set of the form $(\mathbf{d}, \mathbf{g})$ is sufficient to describe the dispatch of a battery over $\mathcal{T}$. Let $\mathcal{P}(e_0)$ denote the set of all feasible battery dispatches that satisfy the physical battery operation constraints \eqref{Eq:App_1}--\eqref{Eq:App_3} given an initial battery energy level $e_0$.
Since we are only interested in characterizing the aging cost calculated by the proposed model for a given battery operation profile, we regard the battery operation profile as known in this proof. It is easy to see that once the dispatch profile $(\mathbf{d}, \mathbf{g})$ is determined, any battery dispatch problem that involves the proposed model with a linearization segment set $\mathcal{J}=\{1,2,\dotsc, J\}$, such as the one formulated in Section~\ref{Sec:opt}, can be reduced to the following problem if we neglect any operation prior to the operation interval $\mathcal{T}$
\begin{align}
\hat{\mathbf{p}} \in \mathrm{arg}&\min_{\mathbf{p}\in \mathbb{R}^+} \textstyle \sum_{t=1}^{T}\sum_{j=1}^{J}Mc\up{}_jp\up{dis}_{t,j}\,,
\label{Eq:Pro_obj}\\
&\text{s.t. }\nonumber\\
&d_t = \textstyle\sum_{j=1}^{J}p\up{ch}_{t,j}
\label{Eq:Pro_C1}\\
&g_t = \textstyle\sum_{j=1}^{J}p\up{dis}_{t,j}
\label{Eq:Pro_C2}\\
&e\up{}_{t,j}-e\up{}_{t-1,j} = M(p\up{ch}_{t,j}\eta\up{ch}- p\up{dis}_{t,j}/\eta\up{dis})
\label{Eq:Pro_C3}\\
&0\leq e\up{}_{t,j} \leq \overline{e}\up{}_j
\label{Eq:Pro_C4}\\
&\textstyle \sum_{j=1}^{J} e\up{}_{0,j} = e_0
\label{Eq:Pro_C5}
\end{align}
where $(\mathbf{d}, \mathbf{g})\in \mathcal{P}(e_0)$ is a feasible battery dispatch set, and $\mathbf{p} = \{p\up{ch}_{t,j}, p\up{dis}_{t,j}|t\in \mathcal{T}, j\in \mathcal{J}\}$ denotes the set of battery charge and discharge powers for all segments during all dispatch intervals. Although the objective is still cost minimization, the problem \eqref{Eq:Pro_obj}--\eqref{Eq:Pro_C5} does not optimize the battery dispatch; instead, it simulates the cycle operations $\mathbf{p}$ and calculates the cycle aging cost associated with a given dispatch profile $(\mathbf{d}, \mathbf{g})$. Hence, the evaluation criterion for this problem is its accuracy compared to the benchmark aging cost model.
Let $\mathbf{c}=\{c_j|j\in \mathcal{J}\}$ denote a set of piecewise linear battery aging cost segments derived as in equation \eqref{ES:ca_pl}, so that $c_j$ is associated with the cycle depth range $[(j-1)/J, j/J)$ and $J=|\mathcal{J}|$ is the number of segments. We say that a battery has a \emph{convex} aging cost curve (i.e., non-decreasing marginal cycle aging cost) if a \emph{shallower} cycle depth segment (i.e., indexed with smaller $j$) is associated with a \emph{cheaper} marginal aging cost such that $c_1\leq c_2 \leq \dotsc \leq c_J$, and let $\mathcal{C}$ denote the set of all convex battery aging cost linearizations.
\begin{theorem}
Let $\hat{\mathbf{p}} = \{\hat{p}\up{ch}_{t,j}, \hat{p}\up{dis}_{t,j}|t\in \mathcal{T}, j\in \mathcal{J}\}$ and
\begin{align}
\hat{p}\up{ch}_{t,j} &= \textstyle \min\big[d_t-\sum_{\zeta=1}^{j-1}\hat{p}\up{ch}_{t,\zeta}, \;(\overline{e}_j- \hat{e}\up{}_{t-1,j})/(\eta\up{ch}M)\big]
\label{Eq:the1}\\
\hat{p}\up{dis}_{t,j} &= \textstyle \min\big[g_t-\sum_{\zeta=1}^{j-1}\hat{p}\up{dis}_{t,\zeta}, \;\eta\up{dis}\hat{e}\up{}_{t-1,j}/M\big]
\label{Eq:the2}\\
\hat{e}\up{}_{0,j} &= \textstyle \min\big[\overline{e}_j, \max(0, e_0-\sum_{\zeta=1}^{j-1}\hat{e}\up{}_{0,\zeta})\big]
\label{Eq:the3}\\
\hat{e}\up{}_{t,j} &= \hat{e}\up{}_{t-1,j} + M(\hat{p}\up{ch}_{t,j}\eta\up{ch}- \hat{p}\up{dis}_{t,j}/\eta\up{dis})\,.
\label{Eq:the4}
\end{align}
Then $\hat{\mathbf{p}}$ is a minimizer of the problem \eqref{Eq:Pro_obj}--\eqref{Eq:Pro_C5} as long as the battery dispatch is feasible and the cycle aging cost curve is convex, i.e.,
\begin{align}
&\hat{\mathbf{p}} \in \mathrm{arg}\min_{\mathbf{p}\in \mathbb{R}^+} \text{ \eqref{Eq:Pro_obj}--\eqref{Eq:Pro_C5}}\,,\nonumber\\
&\text{$\forall (\mathbf{d}, \mathbf{g})\in \mathcal{P}(e_0)$, $e_0 \in [E\up{min}, E\up{max}]$, $\mathbf{c} \in \mathcal{C}$}.
\end{align}
\end{theorem}
\begin{proof}
Equations \eqref{Eq:the1}--\eqref{Eq:the4} describe a battery operating policy for the proposed piecewise linear model. To calculate this policy, we start from \eqref{Eq:the3}, which determines the initial segment energy levels from the battery initial SoC $e_0$. \eqref{Eq:the3} is evaluated in the order $j=1,2,3,\dotsc, J$ as follows (note that $\sum_{\zeta=1}^{0}\hat{e}\up{}_{0,\zeta} = 0$)
\begin{align}
\hat{e}\up{}_{0,1} &= \textstyle \min\big[\overline{e}_1, \max(0, e_0)\big]\nonumber\\
\hat{e}\up{}_{0,2} &= \textstyle \min\big[\overline{e}_2, \max(0, e_0-\hat{e}\up{}_{0,1})\big]\nonumber\\
\hat{e}\up{}_{0,3} &= \textstyle \min\big[\overline{e}_3, \max(0, e_0-\hat{e}\up{}_{0,1}-\hat{e}\up{}_{0,2})\big]\nonumber\\
& \dots\,,\nonumber
\end{align}
so that energy in $e_0$ is first assigned to $\hat{e}_{0,1}$ which corresponds to the shallowest cycle depth range $[0, 1/J]$, the remaining energy is then assigned to the second shallowest segment $\hat{e}_{0,2}$, and the procedure repeats until all the energy in $e_0$ has been assigned.
We then calculate all battery segment charge powers at $t=1$ in the order $j=1,2,3,\dotsc, J$ as
\begin{align}
\hat{p}\up{ch}_{1,1} &= \textstyle \min\big[d_1, \;(\overline{e}_1- \hat{e}\up{}_{0,1})/(\eta\up{ch}M)\big] \nonumber\\
\hat{p}\up{ch}_{1,2} &= \textstyle \min\big[d_1-\hat{p}\up{ch}_{1,1}, \;(\overline{e}_2 - \hat{e}\up{}_{0,2})/(\eta\up{ch}M)\big] \nonumber\\
\hat{p}\up{ch}_{1,3} &= \textstyle \min\big[d_1-\hat{p}\up{ch}_{1,1}-\hat{p}\up{ch}_{1,2}, \;(\overline{e}_3- \hat{e}\up{}_{0,3})/(\eta\up{ch}M)\big]\nonumber\\
&\dots\,,\nonumber
\end{align}
and the procedure is similar for segment discharge power $\hat{p}\up{dis}_{1,j}$. We calculate the segment energy level $\hat{e}_{1,j}$ at the end of $t=1$ using \eqref{Eq:the4}, and move the calculation to $t=2$. This procedure repeats until all values in $\hat{\mathbf{p}}$ have been calculated. Therefore in this policy, the battery always prioritizes energy in shallower segments for charge or discharge dispatch. For example, if the battery is required to discharge a certain amount of energy, it will first dispatch segment 1, then the remaining discharge requirement (if any) is dispatched from segment 2, then segment 3, etc.
Given this policy, the theorem holds if the battery cycle aging cost curve $\mathbf{c}$ is convex, i.e., $\mathbf{c}\in \mathcal{C}$, which means that a shallower segment is associated with a cheaper marginal operating cost. Since the objective function \eqref{Eq:Pro_obj} minimizes the battery aging cost and the problem involves no market price, a minimizer of the problem \eqref{Eq:Pro_obj}--\eqref{Eq:Pro_C5} gives a cheaper segment a higher operation priority, which is equivalent to the policy described in \eqref{Eq:the1}--\eqref{Eq:the4}.
\end{proof}
Following Theorem~1, the cycle aging cost calculated by the proposed piecewise linear model $C\up{pwl}$ for a battery dispatch profile $(\mathbf{d}, \mathbf{g})$ can be written as a function of this profile and the linearization cost set as
\begin{align}
C\up{pwl}(\mathbf{c}, \mathbf{d}, \mathbf{g}) = \textstyle\sum_{t=1}^{T}\sum_{j=1}^{J}Mc\up{}_j\hat{p}\up{dis}_{t,j}\,,
\label{Eq:the_cost}
\end{align}
where $\hat{\mathbf{p}}$ is calculated as in \eqref{Eq:the1}--\eqref{Eq:the4}.
Let $\Phi(\delta)$ be a convex battery cycle aging stress function, and $\mathbf{c}(\Phi)$ be a set of piecewise linearizations of $\Phi(\delta)$ determined using the method described in equation \eqref{ES:ca_pl}. Let $|\mathbf{c}(\Phi)|$ denote the cardinality of $\mathbf{c}(\Phi)$, i.e. the number of segments in this piecewise linearization.
For a feasible battery dispatch profile $(\mathbf{d}, \mathbf{g}) \in \mathcal{P}(e_0)$, let $\Delta$ be the set of all full cycles identified from this operation profile using the rainflow method, $\Delta\up{dis}$ for all discharge half cycles, and $\Delta\up{ch}$ for all charge half cycles. The benchmark cycle aging cost $C\up{ben}$ resulting from $(\mathbf{d}, \mathbf{g})$ can be written as a function of the profile and the cycle aging function $\Phi$ (recall that a full cycle has symmetric depths for charge and discharge)
\begin{align}
C\up{ben}(\Phi, \mathbf{d}, \mathbf{g}) = \textstyle R\sum_{i=1}^{|\Delta|}\Phi(\delta_i) + R\sum_{i=1}^{|\Delta\up{dis}|}\Phi(\delta\up{dis}_i)\,.
\end{align}
\begin{theorem}
When the number of linearization segments approaches infinity, the proposed piecewise linear cost model yields the same result as the benchmark rainflow-based cost model:
\begin{align}
\lim_{|\mathbf{c}(\Phi)|\to \infty} C\up{pwl}\big(\mathbf{c}(\Phi), \mathbf{d}, \mathbf{g}\big) = C\up{ben}\big(\Phi, \mathbf{d}, \mathbf{g}\big)\,.
\label{Eq:the_2}
\end{align}
\end{theorem}
\begin{proof}
First we rewrite equation \eqref{Eq:the_cost} as
\begin{align}
\textstyle\sum_{j=1}^{J}c\up{}_j\sum_{t=1}^{T}M\hat{p}\up{dis}_{t,j} = \sum_{j=1}^{J}c\up{}_j\Theta_j\,,
\end{align}
where $\Theta_j = \sum_{t=1}^{T}M\hat{p}\up{dis}_{t,j}$ is the total amount of energy discharged over the cycle depth range between $(j-1)/J$ and $j/J$. As the number of segments $|\mathbf{c}(\Phi)| = J$ approaches infinity, we can rewrite $\Theta_j$ as a function $\Theta(\delta)$ indicating the energy discharged at a specific cycle depth $\delta$, where $\delta\in [0, 1]$. With an infinite number of segments, we substitute \eqref{ES:inc} and rewrite the cycle aging function in \eqref{Eq:the_cost} in the continuous form
\begin{align}
C\up{pwl}(\Phi, \mathbf{d}, \mathbf{g}) = \int_{0}^{1}\frac{R}{\eta\up{dis}E\up{rate}}\Theta(\delta)\frac{d\Phi(\delta)}{d\delta}d\delta\,.
\end{align}
We define a new function $N\up{dis}_T(\delta)$ as the number of discharge cycles of depth equal to or greater than $\delta$ during the operation period from $t=0$ to $t=T$, accounting for all discharge half cycles and the discharge stages of all full cycles. $N\up{dis}_T(\delta)$ can be calculated by normalizing $\Theta(\delta)$ with the discharge efficiency and the energy rating of the battery
\begin{align}
N\up{dis}_T(\delta) = \frac{1}{\eta\up{dis}E\up{rate}}\Theta(\delta)\,,
\end{align}
recall that $\Theta(\delta)$ is the amount of energy discharged from the cycle depth $\delta$. This relationship is proved in Lemma 1 after this theorem.
Now the proposed cost function becomes
\begin{align}
C\up{pwl}(\Phi, \mathbf{d}, \mathbf{g}) = R\int_{0}^{1}\frac{d\Phi(\delta)}{d\delta}N\up{dis}_T(\delta)d\delta\,,
\label{eq:the2_p1}
\end{align}
which is a standard formulation for calculating rainflow fatigue damage~\cite{rychlik1996extremes}, and the function $N\up{dis}_T(\delta)$ is an alternative way of representing a rainflow cycle counting result. We substitute \eqref{Eq:lemma1} from Lemma 1 into \eqref{eq:the2_p1}
\begin{align}
&C\up{pwl}(\Phi, \mathbf{d}, \mathbf{g}) \nonumber\\
& = R\int_{0}^{1}\frac{d\Phi(\delta)}{d\delta}\Bigg( \sum_{i=1}^{|\Delta|}1_{[\delta \leq \delta_i]} + \sum_{i=1}^{|\Delta\up{dis}|}1_{[\delta \leq \delta\up{dis}_i]} \Bigg)d\delta\nonumber\\
& = R\sum_{i=1}^{|\Delta|}\int_{0}^{1} \frac{d\Phi(\delta)}{d\delta} 1_{[\delta \leq \delta_i]} d\delta + R\sum_{i=1}^{|\Delta\up{dis}|}\int_{0}^{1} \frac{d\Phi(\delta)}{d\delta}1_{[\delta \leq \delta\up{dis}_i]} d\delta\nonumber\\
&= R\sum_{i=1}^{|\Delta|} \Phi(\delta_i) + R\sum_{i=1}^{|\Delta\up{dis}|} \Phi(\delta\up{dis}_i)\nonumber\\
&= C\up{ben}(\Phi, \mathbf{d}, \mathbf{g})\,,
\end{align}
This shows that the theorem holds provided that the proposed model yields the same counting result $N\up{dis}_T(\delta)$ as the rainflow algorithm, which is proved in Lemma~1.
\end{proof}
\begin{lemma}
Assume that the proposed model has an infinite number of segments. Then $N\up{dis}_T(\delta)$, as defined in Theorem 2, is the number of discharge cycles of depth equal to or greater than $\delta$ during the operation period from $t=0$ to $t=T$, accounting for all discharge half cycles and the discharge stages of all full cycles, hence
\begin{align}
N\up{dis}_T(\delta) &= \Theta(\delta)/(\eta\up{dis}E\up{rate}) \label{Eq:lemma1_2}\\
&= \textstyle\sum_{i=1}^{|\Delta|}1_{[\delta \leq \delta_i]} + \sum_{i=1}^{|\Delta\up{dis}|}1_{[\delta \leq \delta\up{dis}_i]}\,,
\label{Eq:lemma1}
\end{align}
where $1_{[x]}$ has a value of one if $x$ is true, and zero otherwise.
\end{lemma}
\begin{proof}
\eqref{Eq:lemma1_2} defines $N\up{dis}_T(\delta)$ as the number of times that energy is discharged from the cycle depth $\delta$, while \eqref{Eq:lemma1} counts the number of cycles with depth at least $\delta$. In this lemma we therefore prove that these two definitions are equivalent, and hence that the proposed model has the same cycle counting result as the rainflow method.
Let $N\up{dis}_t(\delta)$ be the number of times energy is discharged from the depth $\delta$ during the operation period $[0,t]$, accounting for all discharge half cycles and the discharge stages of all full cycles. Similarly, define $N\up{ch}_t(\delta)$ accounting for all charge half cycles and the charge stages of all full cycles.
\begin{figure}
\caption{Cycle counting example.}
\label{Fig:lemma1}
\end{figure}
Because we assume that charge dispatches cause no aging cost, we can alternatively model the battery initial energy level $e_0$ as an empty battery being charged to $e_0$ at the beginning of operation (as in Fig.~\ref{Fig:lemma1}); hence at $t=0$ we have
\begin{align}
N\up{ch}_0(\delta) = \begin{cases} 1 & \delta \leq e_0 \\ 0 & \delta > e_0 \end{cases}\,,\quad N\up{dis}_0(\delta) = 0\,.
\end{align}
Now assume that at time $t_1$ the battery switches from charging to discharging, eventually resulting in a cycle of depth $x$ that ends at $t_2$, regardless of whether it is a half cycle or a full cycle. We also assume that no other cycles occur from $t_1$ to $t_2$. Since we model the battery as starting from empty, the rainflow method implies that the battery must previously have been charged by at least a depth $x$ worth of energy. Therefore, according to Theorem 1, the segments in the range $[0,x]$ must be full at $t_1$, hence
\begin{align}
N\up{ch}_{t_1}(\delta) - N\up{dis}_{t_1}(\delta) = 1 \quad \forall \delta \leq x\,,
\end{align}
which, according to Theorem 1, is a sufficient condition for all the discharge energy in this cycle to be dispatched from segments in the depth range $[0,x]$. After performing this cycle, all and only the segments within the range $[0,x]$ have been discharged one more time; in other words, all and only the cycle depths in the range $[0,x]$ have one more count at the end of this cycle ($t_2$) than at $t_1$, when the discharge begins, hence
\begin{align}
N\up{dis}_{t_2}(\delta)-N\up{dis}_{t_1}(\delta) = 1_{[\delta\leq x]}.
\end{align}
Therefore the proposed model has the same counting result as the rainflow method for any cycle, which proves this lemma.
\end{proof}
\subsection{Numerical example}
\begin{figure}
\caption{An example of SoC profile.}
\label{Fig:app}
\end{figure}
\begin{table}[!hbt]
\begin{center}
\centering
\caption{Battery Operation Example.}
\label{tab:es_example}
\begin{tabular}{r c c c c}
\hline
\hline
t & SoC & energy segments & discharge power & cost \Tstrut \\
& & $[\mathbf{e}_t]$ & $[\mathbf{p}\up{dis}_t]$ & $C_t$ \Bstrut\\
\hline
- & - & $\to$ deeper depth $\to$ & $\to$ deeper depth $\to$ & -\Tstrut\Bstrut\\
\hline
0 & 60 & 1,1,1,1,1,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
1 & 10 & 0,0,0,0,0,1,0,0,0,0 & 1,1,1,1,1,0,0,0,0,0 & 25 \Tstrut\Bstrut\\
\hline
2 & 20 & 1,0,0,0,0,1,0,0,0,0 & 1,1,1,1,1,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
3 & 30 & 1,1,0,0,0,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
4 & 20 & 0,1,0,0,0,1,0,0,0,0 & 1,0,0,0,0,0,0,0,0,0 & 1 \Tstrut\Bstrut\\
\hline
5 & 30 & 1,1,0,0,0,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
6 & 40 & 1,1,1,0,0,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
7 & 50 & 1,1,1,1,0,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
8 & 40 & 0,1,1,1,0,1,0,0,0,0 & 1,0,0,0,0,0,0,0,0,0 & 1 \Tstrut\Bstrut\\
\hline
9 & 30 & 0,0,1,1,0,1,0,0,0,0 & 0,1,0,0,0,0,0,0,0,0 & 3 \Tstrut\Bstrut\\
\hline
10 & 40 & 1,0,1,1,0,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
11 & 30 & 0,0,1,1,0,1,0,0,0,0 & 1,0,0,0,0,0,0,0,0,0 & 1 \Tstrut\Bstrut\\
\hline
12 & 20 & 0,0,0,1,0,1,0,0,0,0 & 0,0,1,0,0,0,0,0,0,0 & 5 \Tstrut\Bstrut\\
\hline
13 & 10 & 0,0,0,0,0,1,0,0,0,0 & 0,0,0,1,0,0,0,0,0,0 & 7 \Tstrut\Bstrut\\
\hline
14 & 60 & 1,1,1,1,1,1,0,0,0,0 & 0,0,0,0,0,0,0,0,0,0 & 0 \Tstrut\Bstrut\\
\hline
all & - & - & - & 43 \Tstrut\Bstrut\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
We include a step-by-step example to illustrate how the proposed model is a close approximation of the benchmark rainflow cost model using the battery operation profile shown in Fig.~\ref{Fig:app}. To simplify this example, we assume a perfect efficiency of $1$ and that the cycle aging cost function is $100\delta^2$. We consider 10 linearization segments, with each segment representing a 10\% cycle depth range. The proposed model therefore has the following cycle aging cost curve
\begin{align}
\mathbf{c}=\{1, 3, 5, 7, 9, 11, 13, 15, 17, 19\}.
\end{align}
According to the rainflow method demonstrated in Fig.~\ref{Fig:rf}, this example profile has the following cycle counting results:
\begin{itemize}
\item Two full cycles of depth 10\%, each costs 1
\item One full cycle of depth 40\% that costs 16
\item One discharge half cycle of depth 50\% that costs 25
\item One charge half cycle that costs zero,
\end{itemize}
hence the total aging cost identified by the benchmark rainflow-based model is 43.
We implement this operation profile using the policy in Theorem~1 and record the marginal cost during each time interval. The results are shown in Table~\ref{tab:es_example}. In this table, the first two columns are the time step and SoC. The third column shows the energy level of each linearization segment, represented in vector form as $\mathbf{e}_t$; $\mathbf{e}_t$ is a $10\times 1$ vector whose entries are sorted from shallower to deeper depths. Segment energy levels are normalized so that one means the segment is full and zero means the segment is empty. The fourth column shows how much energy is discharged from each segment during a time interval, represented by a discharge power vector $\mathbf{p}\up{dis}_t$, which is calculated as (the discharge efficiency is 1)
\begin{align}
\mathbf{p}\up{dis}_t = [\mathbf{e}_{t-1}-\mathbf{e}_t]^+\,.
\end{align}
The last column shows the operating cost that arises from each time interval, which is calculated as
\begin{align}
C_t = \mathbf{c}\mathbf{p}\up{dis}_t\,.
\end{align}
This example profile results in the same cost of 43 in both the proposed model and the benchmark model, as proved in Theorem~2.
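The bookkeeping of Table~\ref{tab:es_example} can be reproduced with the short sketch below, which applies the charge/discharge priority of Theorem~1 directly to the SoC column of the table. It is an illustrative sketch only; perfect efficiency and one interval per SoC step are assumed, as in the example.
\begin{verbatim}
# Illustrative sketch of the Theorem 1 policy on the example profile.
J, cap = 10, 10.0                           # 10 segments of 10% depth each
c = [100 * ((j / J)**2 - ((j - 1) / J)**2) for j in range(1, J + 1)]  # ~1,3,...,19
soc = [60, 10, 20, 30, 20, 30, 40, 50, 40, 30, 40, 30, 20, 10, 60]

e = [min(cap, max(0.0, soc[0] - j * cap)) for j in range(J)]  # Eq. (the3)
total_cost = 0.0
for t in range(1, len(soc)):
    delta = soc[t] - soc[t - 1]
    if delta < 0:                           # discharge shallowest segments first
        rem = -delta
        for j in range(J):
            p = min(rem, e[j])              # Eq. (the2)
            e[j] -= p
            rem -= p
            total_cost += c[j] * (p / cap)
    else:                                   # charge shallowest segments first
        rem = delta
        for j in range(J):
            p = min(rem, cap - e[j])        # Eq. (the1)
            e[j] += p
            rem -= p

print(total_cost)                           # ~43, matching the rainflow-based cost
\end{verbatim}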
\begin{IEEEbiographynophoto}{Bolun Xu}
(S'14) received B.S. degrees in Electrical and Computer Engineering
from Shanghai Jiaotong
University, Shanghai, China in 2011, and the M.Sc degree in Electrical
Engineering from Swiss Federal Institute of Technology, Zurich, Switzerland
in 2014.
He is currently pursuing the Ph.D. degree in Electrical Engineering at the
University of Washington, Seattle, WA, USA. His research interests include
energy storage, power system operations, and power system economics.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Jinye Zhao}
(M'11) received the B.S. degree from East China Normal University,
Shanghai, China, in 2002 and the M.S. degree in mathematics from National
University of Singapore in 2004. She received the M.E. degree in operations
research and statistics and the Ph.D. degree in mathematics from Rensselaer
Polytechnic Institute, Troy, NY, in 2007.
She is a lead analyst at ISO New England, Holyoke, MA. Her main interests
are game theory, mathematical programming, and electricity market modeling.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Tongxin Zheng}
(SM'08) received the B.S. degree in electrical engineering
from North China University of Electric Power, Baoding, China, in 1993, the
M.S. degree in electrical engineering from Tsinghua University, Beijing, China,
in 1996, and the Ph.D. degree in electrical engineering from Clemson
University, Clemson, SC, USA, in 1999.
Currently, he is a Technical Manager
with the ISO New England, Holyoke, MA, USA. His main interests are power
system optimization and electricity market design.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Eugene Litvinov}
(SM'06-F'13) received the B.S. and M.S. degrees from the
Technical University, Kiev, Ukraine, and the Ph.D. degree from Urals
Polytechnic Institute, Sverdlovsk, Russia.
Currently, he is the Chief
Technologist at the ISO New England, Holyoke, MA. His main interests
include power system market-clearing models, system security, computer
applications in power systems, and information technology.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Daniel S. Kirschen}
(M'86-SM'91-F'07) received his electrical and mechanical engineering degree from the Universite Libre de Bruxelles, Brussels, Belgium, in 1979 and his M.S. and Ph.D. degrees from the University of Wisconsin, Madison, WI, USA, in 1980, and 1985, respectively.
He is currently the Donald W. and Ruth Mary Close Professor of Electrical Engineering at the University of Washington, Seattle, WA, USA. His research interests include smart grids, the integration of renewable energy sources in the grid, power system economics, and power system security.
\end{IEEEbiographynophoto}
\end{document}
\begin{document}
\title{On Time-Periodic Solutions to Parabolic Boundary Value Problems of Agmon-Douglis-Nirenberg Type}
\author{
Mads Kyed\\
Fachbereich Mathematik\\
Technische Universit\"at Darmstadt\\
Schlossgartenstr. 7, 64289 Darmstadt, Germany\\
Email: \texttt{[email protected]}
\and
Jonas Sauer\\
Max-Planck-Institut f\"ur Mathematik in den Naturwissenschaften\\
Inselstr. 22, 04103 Leipzig, Germany\\
Email: \texttt{[email protected]}
}
\date{\today}
\title{On Time-Periodic Solutions to Parabolic Boundary Value Problems of Agmon-Douglis-Nirenberg Type}
\begin{abstract}
Time-periodic solutions to partial differential equations of parabolic type corresponding to an operator that is elliptic in the sense of Agmon-Douglis-Nirenberg are investigated.
In the whole- and half-space case we construct an explicit formula for the solution and establish coercive $\LR{p}$ estimates.
The estimates generalize a famous result of Agmon, Douglis and Nirenberg for elliptic problems to the time-periodic case.
\end{abstract}
\noindent\textbf{MSC2010:} Primary 35B10, 35B45, 35K25.\\
\noindent\textbf{Keywords:} Time-periodic, parabolic, boundary value problem, a priori estimates.
\newCCtr[c]{c}
\newCCtr[C]{C}
\newCCtr[M]{M}
\newCCtr[B]{B}
\newCCtr[\varepsilon]{eps}
\CcSetCtr{eps}{-1}
\section{Introduction}
We investigate time-periodic solutions to parabolic boundary value problems
\begin{align}\label{intro_maineq}
\begin{pdeq}
\partial_t u+A u&=f &&\text{in }\mathbb{R}\times\Omega,\\
B_j u&=g_j && \text{on }\mathbb{R}\times\partial\Omega,
\end{pdeq}
\end{align}
where $A$ is an elliptic operator of order $2m$ and $B_1,\ldots,B_m$ satisfy an appropriate complementing boundary condition.
The domain $\Omega$ is either the whole-space, the half-space or a bounded domain, and $\mathbb{R}$ denotes the time-axis.
The solutions $u(t,x)$ correspond to time-periodic data $f(t,x)$ and $g_j(t,x)$ of the same (fixed) period $\per>0$.
Using the simple projections
\begin{align*}
\proj u = \iper\int_0^\per u(t,x)\,{\mathrm d}t,\quad \projcompl:=\id-\proj,
\end{align*}
we decompose \eqref{intro_maineq} into an elliptic problem
\begin{align}\label{intro_projeq}
\begin{pdeq}
A\proj u&=\proj f &&\text{in }\Omega,\\
B_j\proj u&=\proj g_j && \text{on }\partial\Omega,
\end{pdeq}
\end{align}
and a \emph{purely oscillatory} problem
\begin{align}\label{intro_projcompleq}
\begin{pdeq}
\partial_t\projcompl u+A\projcompl u&=\projcompl f &&\text{in }\mathbb{R}\times\Omega,\\
B_j\projcompl u&=\projcompl g_j && \text{on }\mathbb{R}\times\partial\Omega.
\end{pdeq}
\end{align}
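This decomposition can be verified directly. Since the coefficients of $A$ and $B_j$ depend only on the spatial variable, both operators commute with $\proj$ and $\projcompl$, and $\proj\partial_t u=\iper\int_0^\per\partial_t u(t,x)\,{\mathrm d}t=0$ for every $\per$-time-periodic $u$; applying the two projections to \eqref{intro_maineq} therefore yields \eqref{intro_projeq} and \eqref{intro_projcompleq},
\begin{align*}
\proj\big(\partial_t u+A u\big)=A\proj u=\proj f,\qquad
\projcompl\big(\partial_t u+A u\big)=\partial_t\projcompl u+A\projcompl u=\projcompl f,
\end{align*}
and summing the two problems recovers \eqref{intro_maineq} because $\proj+\projcompl=\id$.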
The problem \eqref{intro_projeq} is elliptic in the sense of Agmon-Douglis-Nirenberg, for which a comprehensive $\LR{p}$ theory was established in \cite{ADN1}.
In this article, we develop a complementary theory for the purely oscillatory problem \eqref{intro_projcompleq}.
Employing ideas going back to \textsc{Peetre} \cite{Peetre61} and \textsc{Arkeryd} \cite{Arkeryd67}, we are able to establish an explicit formula for the solution to \eqref{intro_projcompleq} when the domain is either
the whole- or the half-space.
We shall then introduce a technique based on tools from abstract harmonic analysis to show coercive $\LR{p}$ estimates. As a consequence, we obtain
a time-periodic version of the celebrated theorem of
\textsc{Agmon}, \textsc{Douglis} and \textsc{Nirenberg} \cite{ADN1}.
The decomposition \eqref{intro_projeq}--\eqref{intro_projcompleq} is essential as the two problems
have substantially different properties.
In particular, we shall show in the whole- and half-space case that the principal part of the linear operator in the purely oscillatory problem
\eqref{intro_projcompleq} is a
homeomorphism in a canonical setting of time-periodic Lebesgue-Sobolev spaces. This is especially remarkable since the elliptic problem \eqref{intro_projeq} clearly does not satisfy this property.
Another truly remarkable characteristic of \eqref{intro_projcompleq} is that the $\LR{p}$ theory we shall develop for this problem leads directly to a similar $\LR{p}$ theory, sometimes referred to as maximal regularity, for the parabolic initial-value problem associated to \eqref{intro_maineq}.
We consider general differential operators
\begin{align}\label{DefOfDiffOprfull}
A(x,D):=\sum_{|\alpha|\leq 2m}a_\alpha(x) D^\alpha,\quad B_j(x,D):=\sum_{|\alpha|\leq m_j}b_{j,\alpha}(x) D^\alpha\quad (j=1,\ldots,m)
\end{align}
with complex coefficients
$a_\alpha:\Omega\to\mathbb{C}$ and
$b_{j,\alpha}:\partial\Omega\to\mathbb{C}$. Here, $\alpha\in\mathbb{N}^{n}$ is a multi-index and
$D^\alpha:=i^{|\alpha|}\partial_{x_1}^{\alpha_1}\ldots\partial_{x_n}^{\alpha_n}$.
We denote the principal part of the operators by
\begin{align}
A^H(x,D):=\sum_{|\alpha|= 2m}a_\alpha(x) D^\alpha,\quad B^H_j(x,D):=\sum_{|\alpha|= m_j}b_{j,\alpha}(x) D^\alpha.
\end{align}
We shall assume that $A^H$ is elliptic in the following classical sense:
\begin{defn}[Properly Elliptic]\label{TPADN_ProperlyElliptic}
The operator $A^H$ is said to be \emph{properly elliptic} if
for all $x\in\Omega$ and all $\xi\in\mathbb{R}^n\setminus\set{0}$ it holds $A^H(x,\xi)\neq 0$, and
for all $x\in\Omega$ and all linearly independent vectors ${\zeta},\xi\in\mathbb{R}^n$ the polynomial
$P(\tau):= A^H(x,{\zeta}+\tau\xi)$
has $m$ roots in $\mathbb{C}$ with positive imaginary part,
and
$m$ roots in $\mathbb{C}$ with negative imaginary part.
\end{defn}
Ellipticity, however, does not suffice to establish maximal $\LR{p}$ regularity for the time-periodic problem.
We thus recall Agmon's condition, also known as parameter ellipticity.
\begin{defn}[Agmon's Condition]\label{TPADN_AgmonCondA}
Let $\theta\in[-\pi,\pi]$. A properly elliptic operator $A^H$ is said to satisfy Agmon's condition on the ray $\e^{i\theta}$ if
for all $x\in\Omega$ and all $\xi\in\mathbb{R}^n\setminus\set{0}$ it holds $A^H(x,\xi)\notin \setc{r\e^{i\theta}}{r\geq 0}$.
\end{defn}
If $A^H$ satisfies Agmon's condition on the ray $\e^{i\theta}$, then,
since the roots of a polynomial depend continuously on its coefficients,
the polynomial
$Q(\tau):=-r\e^{i\theta}+A^H(x,{\zeta}+\tau\xi)$
has $m$ roots $\tau_h^+(r\e^{i\theta},x,{\zeta},\xi)\in\mathbb{C}$ with positive imaginary part,
and $m$ roots $\tau_h^-(r\e^{i\theta},x,{\zeta},\xi)\in\mathbb{C}$ with negative imaginary part ($h=1,\ldots,m$).
Consequently, the following assumption on the operator $(A^H,B^H_1,\ldots,B^H_m)$ is meaningful.
\begin{defn}[Agmon's Complementing Condition]\label{TPADN_AgmonCondAB}
Let $\theta\in[-\pi,\pi]$. If $A^H$ is a properly elliptic operator, then $(A^H,B^H_1,\ldots,B^H_m)$
is said to satisfy Agmon's complementing condition on the ray $\e^{i\theta}$ if:
\begin{enumerate}[(i)]
\item $A^H$ satisfies Agmon's condition on the ray $\e^{i\theta}$.
\item For all $x\in\partial\Omega$, all pairs ${\zeta},\xi\in\mathbb{R}^n$ with ${\zeta}$ tangent to $\partial\Omega$ and $\xi\in\mathbb{R}^n$ normal to $\partial\Omega$ at $x$, and all $r\geq 0$, let $\tau_h^+(r\e^{i\theta},x,{\zeta},\xi)\in\mathbb{C}$ ($h=1,\ldots,m$) denote the $m$ roots of the polynomial
${Q(\tau):=-r\e^{i\theta}+A^H(x,{\zeta}+\tau\xi)}$
with positive imaginary part. The polynomials
$P_j(\tau):=B^H_j(x,{\zeta}+\tau\xi)$ ($j=1,\ldots,m$) are linearly independent modulo the polynomial
$\Pi_{h=1}^m \bp{\tau-\tau_h^+(r\e^{i\theta},x,{\zeta},\xi)}$.
\end{enumerate}
\end{defn}
The property specified in Definition \ref{TPADN_AgmonCondAB} was first introduced by \textsc{Agmon} in \cite{Agmon62},
and later by \textsc{Agranovich} and \textsc{Vishik} in \cite{AgranovichVishik1964} as \textit{parameter ellipticity}. The condition was introduced in order to identify the additional requirements on the differential operators needed to extend the result of \textsc{Agmon}, \textsc{Douglis} and \textsc{Nirenberg} \cite{ADN1} from the
elliptic case to the corresponding parabolic initial-value problem.
The theorem of \textsc{Agmon}, \textsc{Douglis} and \textsc{Nirenberg} \cite{ADN1} requires $(A^H,B^H_1,\ldots,B^H_m)$
to satisfy Agmon's complementing condition only at the origin (\emph{not} on a full ray), in which case
$(A^H,B^H_1,\ldots,B^H_m)$ is said to be \textit{elliptic in the sense of Agmon-Douglis-Nirenberg}.
Analysis of the associated initial-value problem relies heavily on properties of the resolvent equation
\begin{align}\label{intro_resolventeq}
\begin{pdeq}
\lambda u+A^H u&=f &&\text{in }\Omega,\\
B^H_j u&=0 && \text{on }\partial\Omega.
\end{pdeq}
\end{align}
It was shown by \textsc{Agmon} \cite{Agmon62} that a necessary and sufficient condition for the
resolvent of $(A^H,B^H_1,\ldots,B^H_m)$ to lie in the negative complex half-plane (and thereby for the generation of an analytic semi-group) is that
Agmon's complementing condition is satisfied for all rays with $\snorm{\theta}\geq\frac{\pi}{2}$.
However, the step from
analyticity of the semi-group to maximal $\LR{p}$ regularity for the parabolic initial-value problem
proved to be highly non-trivial. Although many articles were dedicated to this problem after the publication of \cite{Agmon62},
it was not until the celebrated work of \textsc{Dore} and \textsc{Venni} \cite{DoreVenni87} that a framework was developed with which maximal regularity could be established comprehensively from the assumption that Agmon's condition is satisfied for all rays with $\snorm{\theta}\geq\frac{\pi}{2}$.
To apply \cite{DoreVenni87}, one has to show that $(A^H,B^H_1,\ldots,B^H_m)$
admits bounded imaginary powers.
Later, it was shown that maximal regularity is in fact equivalent to ${\mathcal R}$-boundedness of an appropriate resolvent family; see \cite{DenkHieberPruess_AMS2003}.
Remarkably, our result for the time-periodic problem \eqref{intro_projcompleq} leads to a new and relatively short proof of
maximal regularity for the parabolic initial-value problem \emph{without} the use of either bounded imaginary powers or the notion of ${\mathcal R}$-boundedness; see Remark \ref{Intr_Rem} below. Under the assumption that $(A^H,B^H_1,\ldots,B^H_m)$ generates an analytic semi-group,
maximal regularity for the parabolic initial-value problem follows almost immediately as a corollary from our main theorem.
We emphasize that our main theorem of maximal regularity for the time-periodic problem does \emph{not} require
the principal part of $(A,B_1,\ldots,B_m)$ to generate an analytic semi-group.
As a novelty of the present paper, and in contrast to the initial-value problem, we establish
that maximal $\LR{p}$ regularity for the time-periodic problem requires Agmon's complementing condition to be satisfied only on the two rays with $\theta=\pm\frac{\pi}{2}$, that is, only on the imaginary axis.
Our main theorem for the purely oscillatory problem \eqref{intro_projcompleq} concerns the half-space case and the question of existence of a unique solution satisfying a coercive $\LR{p}$ estimate in
the Sobolev space $\WSRper{1,2m}{p}(\mathbb{R}\times\mathbb{R}^n_+)$ of time-periodic functions
on the time-space domain $\mathbb{R}\times\mathbb{R}^n_+$. We refer to Section \ref{pre} for definitions of the function spaces.
\begin{thm}\label{MainThm_HalfSpace}
Let $p\in (1,\infty)$, $\per>0$, $n\geq 2$.
Assume that $A^H$ and $(B^H_1,\ldots,B^H_m)$ have constant coefficients.
If $A^H$ is properly elliptic
and $(A^H,B^H_1,\ldots,B^H_m)$ satisfies Agmon's complementing condition on the two rays $\e^{i\theta}$ with $\theta=\pm\frac{\pi}{2}$,
then for all functions $f\in\projcompl\LRper{p}(\mathbb{R}\times\wspace_+)$
and $g_j\in\projcompl\WSRper{\kappa_j,2m\kappa_j}{p}(\mathbb{R}\times\partial\wspace_+)$ with $\kappa_j:=1-\frac{m_j}{2m}-\frac{1}{2mp}$ ($j=1,\ldots,m$)
there exists a unique solution $u\in\projcompl\WSRper{1,2m}{p}(\mathbb{R}\times\mathbb{R}^n_+)$
to
\begin{align}\label{MainThm_HalfSpace_Eq}
\begin{pdeq}
\partial_t u+A^H u&=f &&\text{in }\mathbb{R}\times\mathbb{R}^n_+,\\
B^H_j u&=g_j && \text{on }\mathbb{R}\times\partial\mathbb{R}^n_+.
\end{pdeq}
\end{align}
Moreover,
\begin{align}\label{MainThm_HalfSpace_Est}
\begin{aligned}
\norm{u}_{\WSRper{1,2m}{p}(\mathbb{R}\times\mathbb{R}^n_+)}
\leq C
\Bp{\norm{f}_{\LRper{p}(\mathbb{R}\times\mathbb{R}^n_+)}+
\sum_{j=1}^m \norm{g_j}_{\WSRper{\kappa_j,2m\kappa_j}{p}(\mathbb{R}\times\partial\mathbb{R}^n_+)}}
\end{aligned}
\end{align}
with $C=C(p,\per,n)$.
\end{thm}
Our proof of Theorem \ref{MainThm_HalfSpace} contains two results that are interesting in their own right. Firstly, we
establish a similar assertion in the whole-space case. Secondly, we provide an explicit formula for the solution; see \eqref{weak_sol_lem_repformula} below.
Moreover, our proof is carried out fully in a setting of time-periodic functions and follows an argument adopted from
the elliptic case. This is remarkable in view of the fact that analysis of time-periodic problems in existing literature
typically is based on theory for the corresponding initial-value problem; see for example \cite{Lieberman99}. A novelty of our approach is the introduction of suitable tools from abstract harmonic analysis that
allow us to give a constructive proof and avoid completely the classical indirect characterizations of time-periodic solutions as
fixed points of a Poincar\'{e} map, that is, as
special solutions to the corresponding initial-value problem. The circumvention of the initial-value problem
also enables us to avoid having to assume Agmon's condition for all $\snorm{\theta}\geq\frac{\pi}{2}$ and instead carry out our
investigation under the weaker condition that Agmon's condition is satisfied only for $\theta=\pm\frac{\pi}{2}$.
We shall briefly describe the main ideas behind the proof of Theorem \ref{MainThm_HalfSpace}.
We first consider the problem in the whole space $\mathbb{R}\times\mathbb{R}^n$ and replace the time axis $\mathbb{R}$ with the torus ${\mathbb T}:=\mathbb{R}/\per\mathbb{Z}$
in order to reformulate the $\per$-time-periodic problem as a partial differential equation on the locally compact abelian group $G:={\mathbb T}\times\mathbb{R}^n$.
Utilizing the Fourier transform $\mathscr{F}_G$ associated with $G$, we obtain an explicit representation formula for the time-periodic solution.
Since $\mathscr{F}_G=\mathscr{F}_{\mathbb T}\circ\mathscr{F}_{\mathbb{R}^n}$, this formula simply corresponds to a Fourier series expansion in time of the solution and subsequent Fourier
transform in space of all its Fourier coefficients.
While it is relatively easy to obtain $\LR{p}$ estimates (in space) for each Fourier coefficient separately,
it is highly non-trivial to deduce from these individual estimates an $\LR{p}$ estimate in space \emph{and} time via the corresponding Fourier series.
Instead, we turn to the representation formula given in terms of $\mathscr{F}_G$
and show that the corresponding Fourier multiplier defined on the dual group $\widehat{G}$ is an $\LR{p}(G)$ multiplier. For this purpose, we use the so-called \emph{Transference Principle} for Fourier multipliers in a group setting, and obtain the necessary estimate in the whole-space case.
In the half-space case, \textsc{Peetre} \cite{Peetre61} and \textsc{Arkeryd} \cite{Arkeryd67} utilized the Paley-Wiener Theorem
in order to construct a representation formula for solutions to elliptic problems; see also \cite[Section 5.3]{Triebel_InterpolationTheory1978}. We adapt their ideas to our setting and establish $\LR{p}$ estimates from the ones already obtained in the whole-space case.
Theorem \ref{MainThm_HalfSpace} can be reformulated as the assertion that the operator
\begin{multline*}
(\partial_t+A^H,B^H_1,\ldots,B^H_m):\\
\projcompl\WSRper{1,2m}{p}(\mathbb{R}\times\mathbb{R}^n_+)\rightarrow\projcompl\LRper{p}(\mathbb{R}\times\wspace_+)\times
\Pi_{j=1}^m \projcompl\WSRper{\kappa_j,2m\kappa_j}{p}(\mathbb{R}\times\partial\wspace_+)
\end{multline*}
is a homeomorphism.
By a standard localization and perturbation argument,
a purely periodic version of the celebrated theorem of
\textsc{Agmon}, \textsc{Douglis} and \textsc{Nirenberg} \cite{ADN1} in the general case of operators with variable coefficients and $\Omega$ being
a sufficiently smooth domain follows.
In fact, combining the classical result \cite{ADN1} for the elliptic case with Theorem
\ref{MainThm_HalfSpace}, we obtain the following time-periodic version of the Agmon-Douglis-Nirenberg Theorem:
\begin{thm}[Time-Periodic ADN Theorem]\label{MainThm_TPADN}
Let $p\in (1,\infty)$, $\per>0$, $n\geq 2$ and $\Omega$ be a domain with a boundary that is uniformly $C^{2m}$-smooth.
Assume $a_\alpha$ is bounded and uniformly continuous on $\overline{\Omega}$ for $\snorm{\alpha}=2m$,
and $a_\alpha\in\LR{\infty}(\Omega)$ for $\snorm{\alpha}<2m$.
Further assume $b_{j,\beta}\in C^{2m-m_j}(\partial\Omega)$ with bounded and uniformly
continuous derivatives up to the order {$2m-m_j$}.
If $A^H$ is properly elliptic
and $(A^H,B^H_1,\ldots,B^H_m)$ satisfies Agmon's complementing condition on the two rays $\e^{i\theta}$ with $\theta=\pm\frac{\pi}{2}$,
then the estimate
\begin{align}\label{TPADNEst}
\begin{aligned}
&\norm{u}_{\WSRper{1,2m}{p}(\mathbb{R}\times\Omega)}\\
&\qquad\qquad\leq C \Bp{\norm{\partial_t u + Au}_{\LRper{p}(\mathbb{R}\times\Omega)}+\sum_{j=1}^m
\norm{B_j u}_{\WSRper{\kappa_j,2m\kappa_j}{p}(\mathbb{R}\times\partial\Omega)}+ \norm{u}_{\LRper{p}(\mathbb{R}\times\Omega)}}
\end{aligned}
\end{align}
holds for all $u\in\WSRper{1,2m}{p}(\mathbb{R}\times\Omega)$, where $\kappa_j:=1-\frac{m_j}{2m}-\frac{1}{2mp}$ ($j=1,\ldots,m$).
\end{thm}
Since time-independent functions are trivially also time-periodic, we have $\mathscr{W}SR{2m}{p}(\Omega)\subset\mathscr{W}SRper{1,2m}{p}(\mathbb{R}\tildemes\Omega)$.
If estimate \eqref{TPADNEst} is restricted to functions in $\mathscr{W}SR{2m}{p}(\Omega)$, Theorem \ref{MainThm_TPADN} reduces to the classical theorem of Agmon-Douglis-Nirenberg \cite{ADN1},
which has played a fundamental role in the analysis of
elliptic boundary value problems for more than half a century now. This classical theorem for scalar equations was extended to systems in \cite{ADN2}. We shall only treat scalar equations in the following, but we believe the method developed here could be extended to include systems.
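To fix ideas, we note the simplest instance of Theorem \ref{MainThm_TPADN}, namely the time-periodic heat equation with Dirichlet boundary condition; this is merely an illustration, and the required proper ellipticity and Agmon's complementing condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$ are classical facts for this example. Taking $m=1$, $A=-\Delta$ and the single boundary operator $B^Hfull_1u=u$ on ${{p}}artial\Omega$ (of order $m_1=0$), we have $\kappa_1=1-frac{1}{2p}$, and estimate \eqref{TPADNEst} reads
\begin{align*}
\norm{u}_{\mathscr{W}SRper{1,2}{p}(\mathbb{R}\tildemes\Omega)}
\leq \mathbb{C}cn{C} \Bp{\norm{{{p}}artial_tu-\Delta u}_{\LRper{p}(\mathbb{R}\tildemes\Omega)}
+\norm{u}_{\mathscr{W}SRper{1-frac{1}{2p},2-frac{1}{p}}{p}(\mathbb{R}\tildemes{{p}}artial\Omega)}
+ \norm{u}_{\LRper{p}(\mathbb{R}\tildemes\Omega)}}.
\end{align*}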
Time-periodic problems of parabolic type have been investigated in numerous articles over the years, and it would be too far-reaching to
list them all here. We mention only the article of \textsc{Lieberman} \cite{Lieberman99}, the recent article by \textsc{Geissert}, \textsc{Hieber} and \textsc{Nguyen} \cite{GeissertHieberNguyen16}, as well as the monographs \cite{HessBook91,VejvodaBook82}, and refer the reader to the references therein.
Finally, we mention the article \cite{KyedSauer17} by the present authors in which some of the ideas utilized in the following were introduced in a much simpler setting.
\begin{rem}\label{Intr_Rem}
The half-space case treated in Theorem \ref{MainThm_HalfSpace}
is also pivotal in the $\LR{p}$ theory
for parabolic initial-value problems.
Denote by $A^H_B^Hfull$ the realization of the operator $A^H(D)$ in $\LR{p}(\mathbb{R}^n_+)$ with domain
\begin{align*}
D(A^H_B^Hfull):=\setc{u\in \mathscr{W}SR{2m}{p}(\mathbb{R}^n_+)}{B^H_j(D)u=0, \mathfrak{q}uad j=1,\ldots,m}.
\textbf{e}_nd{align*}
Maximal regularity for parabolic initial-value problems of Agmon-Douglis-Niren\-berg type
is based on an investigation of the initial-value problem
\begin{align}\label{IVP}
\begin{pdeq}
{{p}}artial_tu+A^H_B^Hfullu&=f, &&t>0, \\
u(0)&=0.
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align}
Maximal regularity for \eqref{IVP} means that
for each function $f\in \LR{p}(0,{{p}}er;\LR{p}(\mathbb{R}^n_+))$ there is a unique solution $u\in \LR{p}(0,{{p}}er;D(A^H_B^Hfull))\cap \mathscr{W}SR{1}{p}(0,{{p}}er;\LR{p}(\mathbb{R}^n_+))$ which satisfies the estimate
\begin{align}\label{IVP_Est}
\|u,{{p}}artial_tu, D^{2m}u\|_{\LR{p}(0,{{p}}er;\LR{p}(\mathbb{R}^n_+))}\leq c\|f\|_{\LR{p}(0,{{p}}er;\LR{p}(\mathbb{R}^n_+))}.
\textbf{e}_nd{align}
We shall briefly sketch how to obtain maximal regularity for \eqref{IVP} from Theorem \ref{MainThm_HalfSpace}.
For this purpose, it is required that
$-A^H_B^Hfull$ generates an analytic semi-group
$\set{\e^{-tA^H_B^Hfull}}_{t>0}$, which follows from resolvent estimates going back
to \textsc{Agmon} \cite[Theorem 2.1]{Agmon62} derived under the assumption that
$(A^H,B^H_1,\ldots,B^H_m)$ satisfies Agmon's complementing condition for all rays with $\snorm{\theta}geqfrac{{{p}}i}{2}$; see also
\cite[Theorem 5.5]{TanabeBook}. We would like to point out that these resolvent estimates can also be established with the arguments in our proof of Theorem \ref{MainThm_HalfSpace}. One can periodically extend any
$f\in \LR{p}(0,{{p}}er;\LR{p}(\mathbb{R}^n_+))$ to a ${{p}}er$-periodic function
$f\in\LRper{p}(\mathbb{R}\tildemes\wspace_+)$. With $u$ denoting the solution from Theorem \ref{MainThm_HalfSpace} corresponding to ${{p}}rojcomplf$, the function
\begin{align}\label{Intr_Rem_IVPSol}
\tildeldeu := u + \int_0^t \e^{-(t-s)A^H_B^Hfull}{{p}}rojf\,{\mathrm d}s - \e^{-tA^H_B^Hfull}u(0)
\textbf{e}_nd{align}
is the unique solution to \eqref{IVP}. The desired $\LR{p}$ estimates of $u$ follow from Theorem \ref{MainThm_HalfSpace}, while estimates of the two latter terms on the right-hand side in \eqref{Intr_Rem_IVPSol} follow by
standard theory for analytic semi-groups; see for example \cite[Theorem 4.3.1]{LunardiBookSemigroups}.
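Indeed, one may verify directly that $\tildeldeu$ solves \eqref{IVP}; we sketch this under the assumptions above. By construction, ${{p}}artial_tu+A^H_B^Hfullu={{p}}rojcomplf$ with $u(t)\in D(A^H_B^Hfull)$ for almost every $t$, and the standard semi-group identities
\begin{align*}
\Bp{{{p}}artial_t+A^H_B^Hfull}\int_0^t \e^{-(t-s)A^H_B^Hfull}{{p}}rojf\,{\mathrm d}s={{p}}rojf,
\mathfrak{q}quad
\Bp{{{p}}artial_t+A^H_B^Hfull}\e^{-tA^H_B^Hfull}u(0)=0
\end{align*}
yield ${{p}}artial_t\tildeldeu+A^H_B^Hfull\tildeldeu={{p}}rojcomplf+{{p}}rojf=f$ as well as $\tildeldeu(0)=u(0)+0-u(0)=0$.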
For more details, see also \cite[Theorem 5.1]{MaS17}. The connection between
maximal regularity for parabolic initial-value problems and
corresponding time-periodic problems was observed for the first time in the work of \textsc{Arendt} and \textsc{Bu} \cite[Theorem 5.1]{ArendtBu2002}.
\textbf{e}_nd{rem}
\section{Preliminaries and Notation}\label{pre}
\subsection{Notation}
Unless otherwise indicated, $x$ denotes an element in $\mathbb{R}^n$ and $x':=(x_1,\ldots,x_{n-1})\in\mathbb{R}^{n-1}$.
The same notation is employed for $\xi\in\mathbb{R}^n$ and $\xi':=(\xi_1,\ldots,\xi_{n-1})\in\mathbb{R}^{n-1}$.
We denote by $\mathbb{C}_+:=\setc{z\in\mathbb{C}}{\impart(z)>0}$ and $\mathbb{C}_-:=\setc{z\in\mathbb{C}}{\impart(z)<0}$ the upper and lower complex plane, respectively.
The notation ${{p}}artial_j:={{p}}artial_{x_j}$ is employed for partial derivatives with respect to spatial variables. Throughout,
${{p}}artial_t$ shall denote the partial derivative with respect to the time variable.
For a multi-index $\alpha\in\mathbb{N}^{n}$, we employ the notation
$D^\alpha:=i^{|\alpha|}{{p}}artial_{x_1}^{\alpha_1}\ldots{{p}}artial_{x_n}^{\alpha_n}$.
We introduce the parabolic length
\begin{align*}
forall (\eta,\xi)\in\mathbb{R}\tildemes\mathbb{R}^n:\mathfrak{q}uad {{p}}arnorm{\eta}{\xi}:=(|\eta|^2+|\xi|^{4m})^{frac{1}{4m}}.
\textbf{e}_nd{align*}
We call a generic function $g$ \emph{parabolically $\alpha$-homogeneous} if $\lambda^\alpha g(\eta,\xi) = g(\lambda^{2m}\eta,\lambda\xi)$ for all $\lambda>0$.
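For instance, the parabolic length itself is parabolically $1$-homogeneous, and for $m=1$ the symbol of the heat operator is parabolically $2$-homogeneous:
\begin{align*}
{{p}}arnorm{\lambda^{2m}\eta}{\lambda\xi}=\lambda\,{{p}}arnorm{\eta}{\xi},\mathfrak{q}quad
i\lambda^{2}\eta+\snorm{\lambda\xi}^{2}=\lambda^{2}\bp{i\eta+\snorm{\xi}^{2}}\mathfrak{q}quad(\lambda>0);
\end{align*}
both identities follow directly from the definitions above.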
\subsection{Paley-Wiener Theorem}
\begin{defn}\label{hardy_def}
The Hardy space $\mathscr{H}_+^2\np{\mathbb{R}}$ consists of all functions $f\in\LR{2}\np{\mathbb{R}}$ admitting a holomorphic extension to the lower complex plane
$\tildeldef:\mathbb{C}_-\rightarrow\mathbb{C}$ with
\begin{align*}
\sup_{y<0}\myint{\mathbb{R}}{|\tildeldef(x+iy)|^2}{x}<\infty,\mathfrak{q}uad \lim_{y\to 0-}\myint{\mathbb{R}}{|\tildeldef(x+iy)-f(x)|^2}{x}=0.
\textbf{e}_nd{align*}
The Hardy space $\mathscr{H}_-^2\np{\mathbb{R}}$ consists of all functions $f\in\LR{2}\np{\mathbb{R}}$ admitting a similar holomorphic extension to the upper complex plane.
\textbf{e}_nd{defn}
\begin{prop}[Paley-Wiener Theorem]\label{paley_wiener_prop}
Let $f\in \LR{2}\np{\mathbb{R}}$.
Then $\supp f\subset \mathbb{R}_+$ if and only if $ft{f}\in\mathscr{H}_+^2$.
Moreover, $\supp f\subset \mathbb{R}_-$ if and only if $ft{f}\in\mathscr{H}_-^2$.
\textbf{e}_nd{prop}
\begin{proof}
See for example \cite[Theorems VI.4.1 and VI.4.2]{Yos80}.
\textbf{e}_nd{proof}
\subsection{Time-periodic function spaces}
Let $\Omega\subset\R^n$ be a domain and let
\begin{align*}
\mathbb{C}Rciper(\mathbb{R}\tildemes\Omega) := \setc{f\in\mathbb{C}Ri(\mathbb{R}\tildemes\Omega)}{f(t+{{p}}er,x)=f(t,x),\ f\in\mathbb{C}Rci\bp{[0,{{p}}er]\tildemes\Omega}}
\textbf{e}_nd{align*}
denote the space of smooth time-periodic functions with compact support in the spatial variable. Clearly,
\begin{align}
&\norm{f}_p := \Bp{\iper\int_0^{{p}}er\int_{\Omega}\snorm{f(t,x)}^p\,{\mathrm d}x{\mathrm d}t}^{frac{1}{p}},\label{intro_defofLpNorm}\\
&\norm{f}_{1,2m,p} :=
\Bp{\norm{{{p}}artial_t f}_{p}^p +\sum_{0\leq \snorm{\alpha}\leq 2m} \norm{{{p}}artial_x^\alpha f}_{p}^p }^{frac{1}{p}}\label{intro_defofSobNorm}
\textbf{e}_nd{align}
are norms on $\mathbb{C}Rciper(\mathbb{R}\tildemes\Omega)$. We define
Lebesgue and anisotropic Sobolev spaces of time-periodic functions as completions
\begin{align*}
&\LRper{p}(\mathbb{R}\tildemes\Omega):= \closure{\mathbb{C}Rciper(\mathbb{R}\tildemes\Omega)}{\norm{\cdot}_{p}},\mathfrak{q}uad
\mathscr{W}SRper{1,2m}{p}\bp{\mathbb{R}\tildemes\Omega} := \closure{\mathbb{C}Rciper(\mathbb{R}\tildemes\overline{\Omega})}{\norm{\cdot}_{1,2m,p}}.
\textbf{e}_nd{align*}
One may identify
\begin{align*}
\LRper{p}(\mathbb{R}\tildemes\Omega) = \setc{f\in\LRloc{p}(\mathbb{R}\tildemes\overline{\Omega})}{f(t+{{p}}er,x)=f(t,x) \text{ for almost every } (x,t)}.
\textbf{e}_nd{align*}
On a similar note, one readily verifies that
\begin{align*}
\mathscr{W}SRper{1,2m}{p}\np{\mathbb{R}\tildemes\Omega}=\setc{f\in\mathscr{W}SRloc{1,2m}{p}\np{\mathbb{R}\tildemes\overline{\Omega}}}{f(t+{{p}}er,x)=f(t,x) \text{ for almost every } (x,t)},
\textbf{e}_nd{align*}
provided $\Omega$ satisfies the segment condition.
We introduce anisotropic fractional order Sobolev spaces (Sobolev-Slobodecki\u{\i} spa\-ces) by real interpolation:
\begin{align*}
\mathscr{W}SRper{s,2ms}{p}(\mathbb{R}\tildemes\Omega) = \bp{\LRper{p}(\mathbb{R}\tildemes\Omega),\mathscr{W}SRper{1,2m}{p}\np{\mathbb{R}\tildemes\Omega}}_{s,p}, \mathfrak{q}quad s\in(0,1).
\textbf{e}_nd{align*}
For a $\mathbb{C}R{2m}$-smooth manifold $G_{\om}amma\subset\mathbb{R}^n$, anisotropic Sobolev spaces $\mathscr{W}SRper{s,2ms}{p}(\mathbb{R}\tildemesG_{\om}amma)$ are defined in a similar manner.
We can identify (see also Section \ref{gr} below) the trace space of $\mathscr{W}SRper{1,2m}{p}(\mathbb{R}\tildemes\Omega)$
as $\mathscr{W}SRper{1-1/2mp,2m-1/p}{p}(\mathbb{R}\tildemes{{p}}artial\Omega)$ in the sense that the trace operator maps the former continuously onto the latter.
\subsection{Function Spaces and the Torus Group Setting}\label{gr}
We shall further introduce a setting of function spaces in which the time axis $\mathbb{R}$ in the underlying domains is replaced with the torus ${\mathbb T}:=\mathbb{R}/{{p}}er\mathbb{Z}$.
In such a setting, all functions are inherently ${{p}}er$-time-periodic. We shall therefore never have to verify
periodicity of functions \textit{a posteriori}, and it will always be clear in which sense the functions are periodic.
The setting of ${\mathbb T}$-defined functions is formalized in terms of the canonical quotient mapping
$\mathfrak{q}uotientmap :\mathbb{R}\tildemes\mathbb{R}^n \rightarrow {\mathbb T} \tildemes \mathbb{R}^n,\ \mathfrak{q}uotientmap(t,x):=([t],x)$.
A differentiable structure on ${\mathbb T} \tildemes \mathbb{R}^n$ is inherited via the quotient mapping from $\mathbb{R}\tildemes\mathbb{R}^n$. More specifically,
for any domain $\Omega\subset\mathbb{R}^n$ we let
\begin{align*}
\mathbb{C}Ri({\mathbb T}\tildemes\Omega) := \setc{u:{\mathbb T}\tildemes\Omega\rightarrow\mathbb{C}}{u\circ\mathfrak{q}uotientmap\in\mathbb{C}Ri(\mathbb{R}\tildemes\Omega)}
\textbf{e}_nd{align*}
and define for $u\in\mathbb{C}Ri({\mathbb T}\tildemes\Omega)$ derivatives by ${{p}}artial^\alphau := \bp{{{p}}artial^\alpha \nb{u\circ\mathfrak{q}uotientmap}}\circ\mathfrak{q}uotientmap_{|[0,{{p}}er)\tildemes\Omega}^{-1}$. We let
\begin{align*}
\mathbb{C}Rci\np{{\mathbb T}\tildemes\Omega} := \setc{u\in\mathbb{C}Ri({\mathbb T}\tildemes\Omega)}{\suppu\text{ is compact}}
\textbf{e}_nd{align*}
denote the space of compactly supported smooth functions. Introducing the normalized Haar measure on ${\mathbb T}$, we define norms $\norm{\cdot}_p$ and $\norm{\cdot}_{1,2m,p}$ on $\mathbb{C}Rci\np{{\mathbb T}\tildemes\Omega}$ as in \eqref{intro_defofLpNorm}--\eqref{intro_defofSobNorm}.
The quotient mapping trivially respects derivatives and is isometric with respect to $\norm{\cdot}_p$ and $\norm{\cdot}_{1,2m,p}$. Letting
\begin{align*}
&\LR{p}({\mathbb T}\tildemes\Omega):= \closure{\mathbb{C}Rci({\mathbb T}\tildemes\Omega)}{\norm{\cdot}_{p}},\mathfrak{q}uad
\mathscr{W}SR{1,2m}{p}\bp{{\mathbb T}\tildemes\Omega} := \closure{\mathbb{C}Rci({\mathbb T}\tildemes\overline{\Omega})}{\norm{\cdot}_{1,2m,p}},
\textbf{e}_nd{align*}
we thus obtain Lebesgue and Sobolev spaces that are isometrically isomorphic to the spaces
$\LRper{p}(\mathbb{R}\tildemes\Omega)$ and $\mathscr{W}SRper{1,2m}{p}\bp{\mathbb{R}\tildemes\Omega}$, respectively.
Defining weak derivatives with respect to test functions $\mathbb{C}Rci({\mathbb T}\tildemes\Omega)$, one readily verifies that
\begin{align*}
\mathscr{W}SR{1,2m}{p}\bp{{\mathbb T}\tildemes\Omega} = \setc{u\in\LR{p}({\mathbb T}\tildemes\Omega)}{u,{{p}}artial_tu,{{p}}artial_x^\alphau\in\LR{p}({\mathbb T}\tildemes\Omega)\text{ for all }\snorm{\alpha}\leq 2m},
\textbf{e}_nd{align*}
provided $\Omega$ satisfies the segment property.
For $s\in(0,1)$, we define fractional order Sobolev spaces by real interpolation
\begin{align*}
\mathscr{W}SR{s,2ms}{p}({\mathbb T}\tildemes\Omega) = \bp{\LR{p}({\mathbb T}\tildemes\Omega),\mathscr{W}SR{1,2m}{p}\np{{\mathbb T}\tildemes\Omega}}_{s,p},
\textbf{e}_nd{align*}
and thereby obtain spaces isometrically isomorphic to $\mathscr{W}SRper{s,2ms}{p}(\mathbb{R}\tildemes\Omega)$.
In the half-space case, we clearly have
\begin{align*}
\mathscr{W}SR{1,2m}{p}({\mathbb T}\tildemes\mathbb{R}^n_+) =
\LR{p}\bp{\mathbb{R}_+;\mathscr{W}SR{1,2m}{p}\np{{\mathbb T}\tildemes\mathbb{R}^{n-1}}}\cap
\mathscr{W}SR{2m}{p}\bp{\mathbb{R}_+;\LR{p}\np{{\mathbb T}\tildemes\mathbb{R}^{n-1}}}.
\textbf{e}_nd{align*}
Hence, for $l\in\mathbb{N}, l\leq 2m$ the trace operator
\begin{align}\label{DefOfTraceOpr}
\begin{aligned}
&\trace_l:\mathbb{C}Rci({\mathbb T}\tildemes\overline{\mathbb{R}^n_+})\rightarrow\mathbb{C}Rci({\mathbb T}\tildemes{\mathbb{R}^{n-1}})^l,\\
&\trace_l(u)(t,x'):=\bp{u\np{t,x',0}, {{p}}artial_nu\np{t,x',0},\ldots,{{p}}artial_n^{l-1}u\np{t,x',0}}
\textbf{e}_nd{aligned}
\textbf{e}_nd{align}
extends to a bounded operator
\begin{align}\label{TraceOprMappingproperties}
\trace_l: \mathscr{W}SR{1,2m}{p}({\mathbb T}\tildemes\mathbb{R}^n_+) \rightarrow {{p}}rod_{j=0}^{l-1}\mathscr{W}SR{1-frac{j}{2m}-frac{1}{2mp},2m-j-frac{1}{p}}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})
\textbf{e}_nd{align}
that is onto; see for example \cite[Theorem 1.8.3]{Triebel_InterpolationTheory1978}. The existence of a bounded right inverse to
$\trace_l$ can be shown by applying \cite[Theorem 2.9.1]{Triebel_InterpolationTheory1978}.
We further introduce the operators
\begin{align}\label{intro_DefOfProjections}
{{p}}roj,{{p}}rojcompl:\mathbb{C}Rci({\mathbb T}\tildemes\Omega)\rightarrow\mathbb{C}Rci({\mathbb T}\tildemes\Omega),\mathfrak{q}uad {{p}}roj f := \int_{\mathbb T} f(t,x)\,{\mathrm d}t,\mathfrak{q}uad {{p}}rojcompl := \id - {{p}}roj,
\textbf{e}_nd{align}
which are clearly complementary projections.
Since ${{p}}roj f$ is independent of the time variable $t\in\mathbb{R}$, we may at times treat ${{p}}roj f$ as a function of the space variable $x\in\Omega$ only.
Both ${{p}}roj$ and ${{p}}rojcompl$ extend to bounded operators on the Lebesgue space $\LR{p}({\mathbb T}\tildemes\Omega)$ and Sobolev space
$\mathscr{W}SR{1,2m}{p}\bp{{\mathbb T}\tildemes\Omega}$. We employ the notation $\LRcompl{p}({\mathbb T}\tildemes\Omega):={{p}}rojcompl\LR{p}({\mathbb T}\tildemes\Omega)$
and $\mathscr{W}SRcompl{1,2m}{p}\bp{{\mathbb T}\tildemes\Omega}:={{p}}rojcompl\mathscr{W}SR{1,2m}{p}\bp{{\mathbb T}\tildemes\Omega}$ for the subspaces of ${{p}}rojcompl$-invariant functions.
This notation is canonically extended to other spaces such as interpolation spaces of Lebesgue and Sobolev spaces. We sometimes refer to functions with $f={{p}}rojcomplf$
as \emph{purely oscillatory}.
Finally, we let
\begin{align*}
\iota_j := 1-frac{j-1}{2m}-frac{1}{2mp},\mathfrak{q}uad
\kappa_j := 1-frac{m_j}{2m}-frac{1}{2mp},\mathfrak{q}uad (j=1,\ldots,m)
\textbf{e}_nd{align*}
and put
\begin{align*}
&\tracespacecompl{\iota}{p}({\mathbb T}\tildemes\Omega):={{p}}rod_{j=1}^m\mathscr{W}SRcompl{\iota_j,2m\iota_j}{p}({\mathbb T}\tildemes\Omega),\mathfrak{q}uad \tracespacecompl{\kappa}{p}({\mathbb T}\tildemes\Omega):={{p}}rod_{j=1}^m\mathscr{W}SRcompl{\kappa_j,2m\kappa_j}{p}({\mathbb T}\tildemes\Omega).
\textbf{e}_nd{align*}
\subsection{Schwartz-Bruhat Spaces and Distributions}
When the spatial domain is the whole-space $\mathbb{R}^n$, we employ the notation ${\mathfrak{g}p:={\mathbb T}\tildemes\mathbb{R}^n}$. Equipped with the quotient topology via $\mathfrak{q}uotientmap$, $\mathfrak{g}p$ becomes a locally compact abelian group. Clearly, the $\LR{p}(\mathfrak{g}p)$ space corresponding to the Haar measure on $\mathfrak{g}p$, appropriately normalized, coincides with the $\LR{p}\np{{\mathbb T}\tildemes\mathbb{R}^n}$ space introduced in the previous section.
We identify the dual group of $\mathfrak{g}p$ with $\widehat{G}={{p}}erf\mathbb{Z}\tildemes\mathbb{R}^n$ by associating
$(k,\xi)\in{{p}}erf\mathbb{Z}\tildemes\mathbb{R}^n$ with the character
$\chi:\mathfrak{g}p\rightarrow\mathbb{C}Numbers,\ \chi(t,x):=\e^{ix\cdot\xi+ik t}$.
By default, $\widehat{G}$ is equipped with the compact-open topology, which in this case coincides with the product of the
discrete topology on ${{p}}erf\mathbb{Z}$ and the
Euclidean topology on $\mathbb{R}^n$.
The Haar measure on $\widehat{G}$ is simply the product of the Lebesgue measure on $\mathbb{R}^n$ and the counting measure on ${{p}}erf\mathbb{Z}$.
The Schwartz-Bruhat
space $\mathscr{S}(\mathfrak{g}p) $ of generalized Schwartz functions
(originally introduced in \cite{Bruhat61}) can be described in terms of the semi-norms
\begin{align*}
forall(\alpha,\beta,gamma)\in\mathbb{N}_0\tildemes\mathbb{N}_0^n\tildemes\mathbb{N}_0^n:\mathfrak{q}uad
\rho_{\alpha,\beta,gamma}(u):=\sup_{(t,x)\in\mathfrak{g}p} \snorm{x^gamma{{p}}artial_t^\alpha{{p}}artial_x^\betau(t,x)}
\textbf{e}_nd{align*}
as
\begin{align*}
\mathscr{S}(\mathfrak{g}p):=\setc{u\in\mathbb{C}Ri(\mathfrak{g}p)}{forall(\alpha,\beta,gamma)\in\mathbb{N}_0\tildemes\mathbb{N}_0^n\tildemes\mathbb{N}_0^n:\ \rho_{\alpha,\beta,gamma}(u)<\infty}.
\textbf{e}_nd{align*}
The vector space $\mathscr{S}(\mathfrak{g}p)$ is endowed with the semi-norm
topology.
The topological dual space $\mathscr{S^\prime}(\mathfrak{g}p)$ of $\mathscr{S}(\mathfrak{g}p)$ is referred to as the space of tempered distributions on $\mathfrak{g}p$.
Observe that both $\mathscr{S}(\mathfrak{g}p)$ and $\mathscr{S^\prime}(\mathfrak{g}p)$ remain closed under multiplication
by smooth functions that have at most polynomial growth with respect to the spatial variables.
For a tempered distribution $u\in\mathscr{S^\prime}(\mathfrak{g}p)$, distributional derivatives
${{p}}artial_t^\alpha{{p}}artial_x^\betau\in\mathscr{S^\prime}(\mathfrak{g}p)$ are defined by duality in the usual manner.
Also the support $\suppu$ is defined in the classical way. Moreover, we may restrict the distribution $u$ to a subdomain ${\mathbb T}\tildemes\Omega$ by
considering it as a functional defined only on the test functions from $\mathscr{S}(\mathfrak{g}p)$ supported in ${\mathbb T}\tildemes\Omega$.
A differentiable structure on $\widehat{G}$ is obtained by introduction of the space
\begin{align*}
\mathbb{C}Ri(\widehat{G}):=\setc{w\in\mathbb{C}R{}(\widehat{G})}{forall k\in{{p}}erf\mathbb{Z}:\ w(k,\cdot)\in\mathbb{C}Ri(\mathbb{R}^n)}.
\textbf{e}_nd{align*}
The Schwartz-Bruhat space on the dual group $\widehat{G}$ is defined in terms of the semi-norms
\begin{align*}
forall (\alpha,\beta,gamma)\in\mathbb{N}_0\tildemes\mathbb{N}_0^n\tildemes\mathbb{N}_0^n:\ \hat{\rho}_{\alpha,\beta,gamma}(w):=
\sup_{(k,\xi)\in\widehat{G}} \snorm{k^\alpha \xi^gamma {{p}}artial_\xi^\beta w(k,\xi)}
\textbf{e}_nd{align*}
as
\begin{align*}
\begin{aligned}
\mathscr{S}(\widehat{G})&:=\setc{w\in\mathbb{C}Ri(\widehat{G})}{forall (\alpha,\beta,gamma)\in\mathbb{N}_0\tildemes\mathbb{N}_0^n\tildemes\mathbb{N}_0^n:\ \hat{\rho}_{\alpha,\beta,gamma}(w)<\infty}.
\textbf{e}_nd{aligned}
\textbf{e}_nd{align*}
We also endow $\mathscr{S}(\widehat{G})$ with the corresponding semi-norm topology and denote by $\mathscr{S^\prime}(\widehat{G})$ the topological dual space.
\subsection{Fourier Transform}
As a locally compact abelian group, $\mathfrak{g}p$ has a Fourier transform
$\mathscr{F}_{\mathfrak{g}p}$ associated to it. The ability to utilize a Fourier transform that acts simultaneously in time $t\in{\mathbb T}$ and space $x\in\mathbb{R}^n$ shall play a key role in the following.
The Fourier transform $\mathscr{F}_\mathfrak{g}p$ on $\mathfrak{g}p$ is given by
\begin{align*}
\mathscr{F}_\mathfrak{g}p:\LR{1}(\mathfrak{g}p)\rightarrow\mathbb{C}R{}(\widehat{G}),\mathfrak{q}uad \mathscr{F}_\mathfrak{g}p(u)(k,\xi):=ft{u}(k,\xi):=
\int_{\mathbb T}\int_{\mathbb{R}^n} u(t,x)\,\e^{-ix\cdot\xi-ik t}\,{\mathrm d}x{\mathrm d}t.
\textbf{e}_nd{align*}
If no confusion can arise, we simply write $\mathscr{F}$ instead of $\mathscr{F}_\mathfrak{g}p$.
The inverse Fourier transform is formally defined by
\begin{align*}
\mathscr{F}^{-1}:\LR{1}(\widehat{G})\rightarrow\mathbb{C}R{}(\mathfrak{g}p),\mathfrak{q}uad \mathscr{F}^{-1}(w)(t,x):=\ift{w}(t,x):=
\sum_{k\in{{p}}erf\mathbb{Z}}\,\int_{\mathbb{R}^n} w(k,\xi)\,\e^{ix\cdot\xi+ik t}\,{\mathrm d}xi.
\textbf{e}_nd{align*}
It is standard to verify that $\mathscr{F}:\mathscr{S}(\mathfrak{g}p)\rightarrow\mathscr{S}(\widehat{G})$ is a homeomorphism with $\mathscr{F}^{-1}$ as the actual inverse, provided the Lebesgue measure ${\mathrm d}xi$ is normalized appropriately.
By duality, $\mathscr{F}$ extends to a bijective mapping $\mathscr{F}:\mathscr{S^\prime}(\mathfrak{g}p)\rightarrow\mathscr{S^\prime}(\widehat{G})$.
The Fourier transform provides us with a calculus between the differential operators on $\mathfrak{g}p$ and the
polynomials on $\widehat{G}$. As one easily verifies, for $u\in\mathscr{S^\prime}(\mathfrak{g}p)$ and $(\alpha,\beta)\in\mathbb{N}_0\tildemes\mathbb{N}_0^n$ we have
$\mathscr{F}\bp{{{p}}artial_t^\alpha{{p}}artial_x^\betau}=i^{\snorm{\alpha}+\snorm{\beta}}\,k^\alpha\,\xi^\beta\,\mathscr{F}(u)$
as an identity in $\mathscr{S^\prime}(\widehat{G})$.
The projections introduced in \eqref{intro_DefOfProjections} can be extended trivially to projections on the Schwartz-Bruhat space
${{p}}roj,{{p}}rojcompl:\mathscr{S}(\mathfrak{g}p)\rightarrow\mathscr{S}(\mathfrak{g}p)$.
Introducing the delta distribution $\delta_\mathbb{Z}$ on ${{p}}erf\mathbb{Z}$, that is, $\delta_\mathbb{Z}(k):=1$ if $k=0$ and $\delta_\mathbb{Z}(k):=0$ for $k\neq 0$,
we observe that
${{p}}roju = \mathscr{F}^{-1}_\mathfrak{g}p\bb{\delta_\mathbb{Z} \mathscr{F}_\mathfrak{g}p\nb{u}}$ and
${{p}}rojcomplu = \mathscr{F}^{-1}_\mathfrak{g}p\bb{(1-\delta_\mathbb{Z}) \mathscr{F}_\mathfrak{g}p\nb{u}}$.
Using these representations for ${{p}}roj$ and ${{p}}rojcompl$, we naturally extend the projections to
operators ${{p}}roj,{{p}}rojcompl:\mathscr{S^\prime}(\mathfrak{g}p)\rightarrow\mathscr{S^\prime}(\mathfrak{g}p)$. In accordance with the notation introduced above, we put
$\mathscr{S^\prime}compl(\mathfrak{g}p):={{p}}rojcompl\mathscr{S^\prime}(\mathfrak{g}p)$.
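In terms of Fourier series in time, these projections take a transparent form: writing $u\in\mathscr{S}(\mathfrak{g}p)$ as $u(t,x)=\sum_{k\in{{p}}erf\mathbb{Z}} u_k(x)\,\e^{ik t}$ with $u_k(x):=\int_{\mathbb T} u(t,x)\,\e^{-ikt}\,{\mathrm d}t$, one finds
\begin{align*}
{{p}}roju = u_0,\mathfrak{q}quad {{p}}rojcomplu=\sum_{k\in{{p}}erf\mathbb{Z}\setminus\set{0}} u_k(x)\,\e^{ik t},
\end{align*}
that is, ${{p}}roju$ is the time average of $u$, and ${{p}}rojcomplu$ collects the remaining, purely oscillatory, modes.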
In general, we shall utilize smooth functions $\mathfrak{m}per\in\mathbb{C}Ri(\widehat{G})$ with at most polynomial growth as Fourier multipliers by introducing
the corresponding operator
\begin{align*}
\mathsf{op}\,[\mathfrak{m}per]:\mathscr{S}(\mathfrak{g}p)\rightarrow\mathscr{S^\prime}(\mathfrak{g}p),\mathfrak{q}uad \mathsf{op}\,[\mathfrak{m}per]u:=\mathscr{F}^{-1}_\mathfrak{g}p \bb{\mathfrak{m}per\mathscr{F}_\mathfrak{g}p\nb{u}}.
\textbf{e}_nd{align*}
For $p\in(1,\infty)$, we call $\mathfrak{m}per$ an $\LR{p}(\mathfrak{g}p)$-multiplier if $\mathsf{op}\,[\mathfrak{m}per]$ extends to a bounded operator on $\LR{p}(\mathfrak{g}p)$.
The following lemmas provide us with criteria to determine if $\mathfrak{m}per$ is an $\LR{p}(\mathfrak{g}p)$-multiplier.
\begin{lem}\label{MultiplierLem_ZeroHomoMultipliers}
Let $\mathfrak{m}per\in\mathbb{C}Ri(\widehat{G})$. If $\mathfrak{m}per=\mathfrak{m}restriction{\mathfrak{m}}{\widehat{G}}$ for some parabolically $0$-homogeneous $\mathfrak{m}:\mathbb{R}\tildemes\mathbb{R}^n\rightarrow\mathbb{C}$, then $\mathsf{op}\,\nb{\mathfrak{m}per}$ extends to a bounded operator $\mathsf{op}\,\nb{\mathfrak{m}per}:\LR{p}(\mathfrak{g}p)\rightarrow\LR{p}(\mathfrak{g}p)$.
\textbf{e}_nd{lem}
\begin{proof}
The Transference Principle (established originally by de Leeuw \cite{dLe65} and later extended to a general setting of locally compact abelian groups
by Edwards and Gaudry \cite[Theorem B.2.1]{EdG77}),
makes it possible to ``transfer'' the investigation of Fourier multipliers from one group setting into another.
In our case, \cite[Theorem B.2.1]{EdG77} yields that $\mathfrak{m}per$ is an $\LR{p}(\mathfrak{g}p)$-multiplier, provided $\mathfrak{m}$ is an
$\LR{p}(\mathbb{R}\tildemes\mathbb{R}^n)$-multiplier. To show the latter, we can employ one of the classical multiplier theorems available in
the Euclidean setting. Since $\mathfrak{m}$ is parabolically $0$-homogeneous, it is easy to verify that $\mathfrak{m}$ meets for instance the conditions of the
Marcinkiewicz multiplier theorem (\cite[Chapter IV, \S 6]{Stein70}). Thus, $\mathfrak{m}$ is an
$\LR{p}(\mathbb{R}\tildemes\mathbb{R}^n)$-multiplier, and by \cite[Theorem B.2.1]{EdG77} $\mathfrak{m}per$ is therefore an $\LR{p}(\mathfrak{g}p)$-multiplier.
\textbf{e}_nd{proof}
\begin{lem}\label{MultiplierLem_NegHomoMultipliers}
Let $\mathfrak{m}per\in\mathbb{C}Ri(\widehat{G}\setminus\set{(0,0)})$ and $\alpha\leq 0$. If $\mathfrak{m}per=\mathfrak{m}restriction{\mathfrak{m}}{\widehat{G}}$ for some parabolically $\alpha$-homogeneous function $\mathfrak{m}:\mathbb{R}\tildemes\mathbb{R}^n\setminus\set{(0,0)}\rightarrow\mathbb{C}$, then $\mathsf{op}\,\nb{\mathfrak{m}per}$ extends to a bounded operator $\mathsf{op}\,\nb{\mathfrak{m}per}:\LRcompl{p}(\mathfrak{g}p)\rightarrow\LRcompl{p}(\mathfrak{g}p)$.
\textbf{e}_nd{lem}
\begin{proof}
Let $\chi\in\mathbb{C}Ri(\mathbb{R})$ be a ``cut-off'' function with
$\chi(\eta)=0$ for $\snorm{\eta}<frac{{{p}}i}{{{p}}er}$ and
$\chi(\eta)=1$ for $\snorm{\eta}geq {{p}}erf$.
Put $\mathfrak{M}(\eta,\xi):=\chi(\eta)\mathfrak{m}(\eta,\xi)$.
Utilizing that $\mathfrak{m}$ is $\alpha$-homogeneous and $\alpha\leq 0$, one readily verifies that $\mathfrak{M}$ satisfies
the conditions of Marcinkiewicz's multiplier theorem (\cite[Chapter IV, \S 6]{Stein70}). Consequently, $\mathfrak{M}$ is an $\LR{p}(\mathbb{R}\tildemes\mathbb{R}^n)$-multiplier.
For $u\in\LRcompl{p}(\mathfrak{g}p)$, we have $u={{p}}rojcomplu$ and thus
\begin{align*}
\mathsf{op}\,\nb{\mathfrak{m}per}\np{u}
= \mathscr{F}^{-1}_\mathfrak{g}p \bb{\mathfrak{m}per\mathscr{F}_\mathfrak{g}p\nb{{{p}}rojcomplu}}
= \mathscr{F}^{-1}_\mathfrak{g}p \bb{\mathfrak{m}per(1-\delta_\mathbb{Z})\mathscr{F}_\mathfrak{g}p\nb{{{p}}rojcomplu}}.
\textbf{e}_nd{align*}
Since $\mathfrak{m}per(1-\delta_\mathbb{Z})=\mathfrak{m}restriction{\mathfrak{M}}{\widehat{G}}$, we obtain from \cite[Theorem B.2.1]{EdG77} that
$\mathfrak{m}per(1-\delta_\mathbb{Z})$ is an $\LR{p}\np{\mathfrak{g}p}$-multiplier. Consequently, $\norm{\mathsf{op}\,\nb{\mathfrak{m}per}\np{u}}_p\leq\mathbb{C}cn{C}\norm{u}_p$ for all $u\in\LRcompl{p}(\mathfrak{g}p)$.
\textbf{e}_nd{proof}
\begin{cor}\label{MultiplierLem}
Let $p\in(1,\infty)$ and $\beta\in[0,1]$. If $M\in\mathbb{C}Ri\bp{({{p}}erf\mathbb{Z}\setminus\set{0})\tildemes\mathbb{R}^{n}}$ is parabolically $0$-homogeneous, then
$\mathsf{op}\,[M]$ extends to a bounded operator $\mathsf{op}\,[M]:\mathscr{W}SRcompl{\beta,2m\beta}{p}(\mathfrak{g}p)\rightarrow\mathscr{W}SRcompl{\beta,2m\beta}{p}(\mathfrak{g}p)$.
\textbf{e}_nd{cor}
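As a minimal illustration of how these multiplier results enter (here for $m=1$ and the heat operator; the general case is the subject of Section \ref{WholeSpaceSection}), consider
\begin{align*}
\mathfrak{m}:\mathbb{R}\tildemes\mathbb{R}^n\setminus\set{(0,0)}\rightarrow\mathbb{C},\mathfrak{q}uad
\mathfrak{m}(\eta,\xi):=frac{i\eta}{i\eta+\snorm{\xi}^{2}},
\end{align*}
which is smooth away from the origin and parabolically $0$-homogeneous. Lemma \ref{MultiplierLem_NegHomoMultipliers} thus shows that $\mathsf{op}\,\bb{funcrestriction{\mathfrak{m}}{\widehat{G}}}$ extends to a bounded operator on $\LRcompl{p}(\mathfrak{g}p)$, which amounts to the purely oscillatory maximal-regularity estimate
$\norm{{{p}}artial_tu}_{p}\leq\mathbb{C}cn{C}\norm{{{p}}artial_tu-\Delta u}_{p}$
for $u\in\mathscr{W}SRcompl{1,2}{p}(\mathfrak{g}p)$.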
\subsection{Time-Periodic Bessel Potential Spaces}
Time-periodic Bessel Potential spaces can be defined via the Fourier transform $\mathscr{F}_\mathfrak{g}p$. We shall only introduce Bessel Potential spaces of purely oscillatory distributions:
\begin{align*}
\HSRcompl{s}{p}:=\setc{f\in\mathscr{S^\prime}compl(\mathfrak{g}p)}{\mathsf{op}\,\nb{{{p}}arnorm{k}{\xi}^s}{f}\in\LR{p}\np{\mathfrak{g}p}}\mathfrak{q}uad\text{for } s\in\mathbb{R},\,p\in(1,\infty).
\textbf{e}_nd{align*}
Utilizing Lemma \ref{MultiplierLem_NegHomoMultipliers}, one readily verifies that $\HSRcompl{s}{p}$ is a Banach space with respect to the norm
\begin{align*}
\norm{f}_{s,p}:= \norm{\mathsf{op}\,\nb{{{p}}arnorm{k}{\xi}^s}{f}}_p.
\textbf{e}_nd{align*}
Time-periodic Bessel Potential spaces on the half-space are defined via restriction of distributions in the time-periodic Bessel Potential spaces defined above:
\begin{align*}
&\HSRcompl{s}{p}(\text{or }us\times\wspace_+):=\setc{funcrestriction{f}{\text{or }us\times\wspace_+}}{f\in\HSRcompl{s}{p}},\\
&\norm{f}_{s,p,\text{or }us\times\wspace_+}:=
\inf\setc{\norm{F}_{s,p}}{F\in\HSRcompl{s}{p},\ funcrestriction{F}{\text{or }us\times\wspace_+}=f}.
\textbf{e}_nd{align*}
Identifying $\HSRcompl{s}{p}(\text{or }us\times\wspace_+)$ as a factor space of $\HSRcompl{s}{p}$ in the canonical way, we see that $\HSRcompl{s}{p}(\text{or }us\times\wspace_+)$ is a Banach
space in the norm $\norm{\cdot}_{s,p,\text{or }us\times\wspace_+}$.
\begin{prop}\label{TPBesselEqualsTPSobWholespacecaseProp}
Let $p\in(1,\infty)$. Then $\HSRcompl{2m}{p}(\mathfrak{g}p)=\mathscr{W}SRcompl{1,2m}{p}(\mathfrak{g}p)$ and $\HSRcompl{2m}{p}(\text{or }us\times\wspace_+)=\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$
with equivalent norms.
\textbf{e}_nd{prop}
\begin{proof}
It follows from Lemma \ref{MultiplierLem_NegHomoMultipliers} that $\mathsf{op}\,\bb{{{p}}arnorm{\eta}{\xi}^{-2m}}$ extends to a bounded operator on $\LRcompl{p}(\mathfrak{g}p)$, which implies that $\norm{f}_{2m,p}$ is equivalent to the norm $\norm{f}_p+\norm{f}_{2m,p}$. From this, we infer that
$\HSRcompl{2m}{p}=\mathscr{W}SRcompl{1,2m}{p}(\mathfrak{g}p)$. A standard method (see for example
\cite[Theorem 4.26]{adams:sobolevspaces}) can be used to construct an extension operator $\mathbb Ext:\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)\rightarrow\mathscr{W}SRcompl{1,2m}{p}(\mathfrak{g}p)$.
The existence of an extension operator combined with the fact that $\HSRcompl{2m}{p}=\mathscr{W}SRcompl{1,2m}{p}(\mathfrak{g}p)$ implies
$\HSRcompl{2m}{p}(\text{or }us\times\wspace_+)=\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$.
\textbf{e}_nd{proof}
\begin{prop}\label{TPBesselEqualsTPSobHalfspacecaseProp}
Let $p\in\np{1,\infty}$ and $s\in\mathbb{R}$. Then
\begin{align}
&\norm{u}_{s+1,p,\text{or }us\times\wspace_+}\leq \mathbb{C}cn{C} \bp{ \norm{{{p}}artial_nu}_{s,p,\text{or }us\times\wspace_+} + \norm{\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}u}_{s,p,\text{or }us\times\wspace_+}},\label{TPBesselEqualsTPSobHalfspacecasePropEst}\\
&\norm{{{p}}artial_ju}_{s,p,\text{or }us\times\wspace_+}\leq { \norm{u}_{s+1,p,\text{or }us\times\wspace_+}}\mathfrak{q}uad(j=1,\ldots,n)\label{TPBesselEqualsTPSobHalfspacecasePropEst2}\\
&\norm{{{p}}artial_tu}_{s,p,\text{or }us\times\wspace_+}\leq { \norm{u}_{s+2m,p,\text{or }us\times\wspace_+}}\label{TPBesselEqualsTPSobHalfspacecasePropEst3}
\textbf{e}_nd{align}
for all $u\in\mathscr{S^\prime}(\mathfrak{g}p)$.
\textbf{e}_nd{prop}
\begin{proof}
The complex function $z\mapsto(iz+{{p}}kxp)^{-1}$ is holomorphic in the lower complex plane.
Due to Lemma \ref{MultiplierLem_ZeroHomoMultipliers}, we can employ Proposition \ref{paley_wiener_prop} to
conclude
\begin{align}\label{TPBesselEqualsTPSobHalfspacecaseProp_SuppProperty}
\supp{{p}}si\subset\text{or }us\times\wspace_-closed \mathfrak{q}uad \mathbb{R}a \mathfrak{q}uad \supp\Bp{\mathsf{op}\,\bb{(i\xi_n+{{p}}kxp)^{-1}}{{p}}si}\subset\text{or }us\times\wspace_-closed
\textbf{e}_nd{align}
for all ${{p}}si\in\mathscr{S}compl(\mathfrak{g}p)$. By duality, the same is true for all ${{p}}si\in\mathscr{S^\prime}compl(\mathfrak{g}p)$.
We employ Lemma \ref{MultiplierLem_ZeroHomoMultipliers} to estimate
\begin{align*}
\norm{u}_{s+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C}\,\inf\setcl{\norml{\mathsf{op}\,\bb{i\xi_n+{{p}}kxp}U}_{s,p}}{U\in\HSRcompl{s+1}{p},\ funcrestriction{U}{\text{or }us\times\wspace_+}=funcrestriction{u}{\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
It follows from \eqref{TPBesselEqualsTPSobHalfspacecaseProp_SuppProperty} that
\begin{align*}
funcrestriction{U}{\text{or }us\times\wspace_+}=funcrestriction{u}{\text{or }us\times\wspace_+} \mathfrak{q}uad \iff\mathfrak{q}uad
funcrestriction{\mathsf{op}\,\bb{i\xi_n+{{p}}kxp}U}{\text{or }us\times\wspace_+}=funcrestriction{\mathsf{op}\,\bb{i\xi_n+{{p}}kxp}u}{\text{or }us\times\wspace_+}.
\textbf{e}_nd{align*}
We thus conclude
\begin{align*}
\norm{u}_{s+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C}\, \norm{\mathsf{op}\,\bb{i\xi_n+{{p}}kxp}u}_{s,p,\text{or }us\times\wspace_+}
\textbf{e}_nd{align*}
and thereby \eqref{TPBesselEqualsTPSobHalfspacecasePropEst}. Furthermore,
\begin{align*}
\norm{{{p}}artial_nu}_{s,p,\text{or }us\times\wspace_+}
&=\inf\setc{\norm{U}_{s,p}}{U\in\HSRcompl{s}{p},\ U={{p}}artial_nu\ \tilden\text{or }us\times\wspace_+}\\
&\leq\inf\setc{\norm{{{p}}artial_n V}_{s,p}}{V \in\HSRcompl{s+1}{p},\ V=u\ \tilden\text{or }us\times\wspace_+}\\
&\leq\inf\setc{\norm{V}_{s+1,p}}{V \in\HSRcompl{s+1}{p},\ V=u\ \tilden\text{or }us\times\wspace_+} = \norm{u}_{s+1,p,\text{or }us\times\wspace_+},
\textbf{e}_nd{align*}
where the last inequality follows by an application of Lemma \ref{MultiplierLem_NegHomoMultipliers}.
We have thus shown \eqref{TPBesselEqualsTPSobHalfspacecasePropEst2}. One may verify \eqref{TPBesselEqualsTPSobHalfspacecasePropEst3} in
a similar manner.
\textbf{e}_nd{proof}
\begin{lem}\label{BS_SobSloboMultiplierLem}
Let $\beta\in(0,1)$ and $\alpha\in\bp{2m(\beta-1),2m\beta}$. Then $\mathsf{op}\,\bb{{{p}}kx^\alpha}$ extends to a bounded operator
$\mathsf{op}\,\bb{{{p}}kx^\alpha}:\mathscr{W}SRcompl{\beta,2m\beta}{p}({\mathbb T}\tildemes\mathbb{R}^n)\rightarrow\mathscr{W}SRcompl{\beta-frac{\alpha}{2m},2m\beta-\alpha}{p}({\mathbb T}\tildemes\mathbb{R}^n)$.
\textbf{e}_nd{lem}
\begin{proof}
By interpolation, we directly obtain that $\mathsf{op}\,\bb{{{p}}kx^\alpha}$ extends to a bounded operator
\begin{align*}
\mathsf{op}\,\bb{{{p}}kx^\alpha}:\bp{\HSRcompl{\alpha}{p},\HSRcompl{2m}{p}}_{\theta,p}\rightarrow
\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^n),\HSRcompl{2m-\alpha}{p}}_{\theta,p}
\textbf{e}_nd{align*}
for any $\theta\in(0,1)$. Choose $\theta :=frac{2m\beta-\alpha}{2m-\alpha}$. Using a dyadic decomposition of the Fourier space with respect to the parabolic length, $\HSRcompl{s}{p}$ can be identified as the complex interpolation space $[\HSRcompl{s_0}{p},\HSRcompl{s_1}{p}]_{\omegaega}$ with $s=(1-\omegaega)s_0+\omegaega s_1$ by verifying that it is a retract of $L^p(l^{s,2})$ as in \cite[Theorem 6.4.3]{BergLoefstroemBook}. In fact, relying on the Transference Principle, we have Mikhlin's multiplier theorem at our disposal, which is the key ingredient in \cite[Theorem 6.4.3]{BergLoefstroemBook}. Hence, by reiteration and Proposition \ref{TPBesselEqualsTPSobWholespacecaseProp}
\begin{align*}
\bp{\HSRcompl{\alpha}{p},\HSRcompl{2m}{p}}_{\theta,p} =
\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^n),\HSRcompl{2m}{p}}_{\beta,p}
=\mathscr{W}SRcompl{\beta,2m\beta}{p}({\mathbb T}\tildemes\mathbb{R}^n).
\textbf{e}_nd{align*}
Similarly,
$\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^n),\HSRcompl{2m-\alpha}{p}}_{\theta,p}=\mathscr{W}SRcompl{\beta-frac{\alpha}{2m},2m\beta-\alpha}{p}({\mathbb T}\tildemes\mathbb{R}^n)$.
\textbf{e}_nd{proof}
Finally, we
characterize the trace spaces of the time-periodic Bessel Potential spaces.
\begin{lem}\label{BS_TraceSpaceBesselPotentialLem}
Let $p\in(1,\infty)$.
The trace operator $\trace_m$ defined in \eqref{DefOfTraceOpr} extends to a bounded operator
\begin{align}\label{BS_TraceSpaceBesselPotentialLemTraceOpr}
\trace_m:\HSRcompl{m}{p}(\text{or }us\times\wspace_+)\rightarrow
{{p}}rod_{j=0}^{m-1}\mathscr{W}SR{\half-frac{j}{2m}-frac{1}{2mp},m-j-frac{1}{p}}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})
\textbf{e}_nd{align}
that is onto and has a bounded right inverse. If $u\in\HSRcompl{m}{p}(\mathfrak{g}p)$ with $\supp(u)\subset\text{or }us\times\wspace_+closed$, then
$\trace_m\bp{funcrestriction{u}{\text{or }us\times\wspace_+}}=0$. If $u\in\HSRcompl{m}{p}(\text{or }us\times\wspace_+)$ with $\trace_m\bp{funcrestriction{u}{\text{or }us\times\wspace_+}}=0$, then
$u$ is the restriction of a function $U\in\HSRcompl{m}{p}(\mathfrak{g}p)$ with $\supp U\subset\text{or }us\times\wspace_+closed$.
\textbf{e}_nd{lem}
\begin{proof}
For either $I=\mathbb{R}$ or $I=\mathbb{R}_+$, put
\begin{align*}
V(I):=\LR{p}\bp{I;\HSRcompl{m}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})}\cap\HSR{m}{p}\bp{I;\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})}.
\textbf{e}_nd{align*}
We first verify that
$\HSRcompl{m}{p}=V(\mathbb{R})$ with equivalent norms.
It is straightforward to obtain the embedding $\HSRcompl{m}{p}\hookrightarrow V(\mathbb{R})$. To show the reverse embedding, consider $u\in V(\mathbb{R})$.
Then
$\norm{\mathsf{op}\,\nb{{{p}}arnorm{k}{{\xi'}}^m}u}_p\leq \norm{u}_{V(\mathbb{R})}$ and
$\norm{\mathsf{op}\,\nb{\xi_n^m}u}_p\leq \norm{u}_{V(\mathbb{R})}$. By Lemma \ref{MultiplierLem_ZeroHomoMultipliers},
\begin{align*}
\mathfrak{m}per:\mathfrak{g}p\rightarrow\mathbb{C},\mathfrak{q}uad\mathfrak{m}per(k,\xi):=frac{{{p}}arnorm{k}{\xi}^m}{{{p}}arnorm{k}{{\xi'}}^m+\xi_n^m}
\textbf{e}_nd{align*}
is an $\LR{p}(\mathfrak{g}p)$ multiplier. It follows that
$\norm{\mathsf{op}\,\nb{{{p}}arnorm{k}{\xi}^m}u}_p\leq \norm{u}_{V(\mathbb{R})}$ and thus the embedding $V(\mathbb{R})\hookrightarrow\HSRcompl{m}{p}$.
We conclude $\HSRcompl{m}{p}=V(\mathbb{R})$. It is standard to show existence of an extension operator $V(\mathbb{R}_+)\rightarrowV(\mathbb{R})$; see for example
\cite[Lemma 2.9.1]{Triebel_InterpolationTheory1978}. By restriction to $\text{or }us\times\wspace_+$, it thus follows that $\HSRcompl{m}{p}(\text{or }us\times\wspace_+)=V(\mathbb{R}_+)$.
The classical trace method now implies that the trace operator extends to a bounded operator
\begin{align*}
\trace_m: \HSRcompl{m}{p}(\text{or }us\times\wspace_+)\rightarrow
{{p}}rod_{j=0}^{m-1}\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^{n-1}),\HSRcompl{m}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})}_{1-frac{j}{m}-frac{1}{mp},p}
\textbf{e}_nd{align*}
that is onto; see for example \cite[Theorem 1.8.3]{Triebel_InterpolationTheory1978}.
The existence of a bounded right inverse can be shown as in \cite[Theorem 2.9.1]{Triebel_InterpolationTheory1978}. Again by reiteration
we identify
\begin{align*}
&\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^{n-1}),\HSRcompl{m}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})}_{1-frac{j}{m}-frac{1}{mp},p} \\
&\ =\bp{\LRcompl{p}({\mathbb T}\tildemes\mathbb{R}^{n-1}),\HSRcompl{2m}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1})}_{\half-frac{j}{2m}-frac{1}{2mp},p}
=\mathscr{W}SR{\half-frac{j}{2m}-frac{1}{2mp},m-j-frac{1}{p}}{p}({\mathbb T}\tildemes\mathbb{R}^{n-1}).
\textbf{e}_nd{align*}
Thus, we conclude \eqref{BS_TraceSpaceBesselPotentialLemTraceOpr}.
Consider now
$u\in\HSRcompl{m}{p}(\mathfrak{g}p)$ with $\supp(u)\subset\text{or }us\times\wspace_+closed$. As above we can identify $u$ as an element of $V(\mathbb{R})$, which, due to the support condition, necessarily satisfies
${{p}}artial_n^ju(0)=0$ for $j=0,\ldots,m-1$. It follows that $\trace_m\bp{funcrestriction{u}{\text{or }us\times\wspace_+}}=0$.
If on the other hand $u\in\HSRcompl{m}{p}(\text{or }us\times\wspace_+)$ with $\trace_m\bp{funcrestriction{u}{\text{or }us\times\wspace_+}}=0$, then it is standard to show that
$u$ can be approximated by a sequence of functions from $\mathbb{C}Rci\np{\text{or }us\times\wspace_+}$; see for example \cite[Theorem 2.9.1]{Triebel_InterpolationTheory1978}.
Clearly, this sequence also converges in $\HSRcompl{m}{p}(\mathfrak{g}p)$. The limit function $U\in\HSRcompl{m}{p}(\mathfrak{g}p)$ satisfies $\supp U\subset\text{or }us\times\wspace_+closed$
and $funcrestriction{U}{\text{or }us\times\wspace_+}=u$.
\textbf{e}_nd{proof}
\section{Constant Coefficients in the Whole- and Half-Space}
In this section, we establish the assertion of Theorem \ref{MainThm_HalfSpace}.
We first treat the whole-space case, and then show the theorem as stated in the half-space case.
Since we consider only differential operators with constant coefficients in this section, we employ the simplified notation
$A^Hfull(D)$ instead of $A^Hfull(x,D)$. Replacing the differential operator $D$ with $\xi\in\mathbb{R}^n$, we refer to $A^Hfull(\xi)$
as the symbol of $A^Hfull(D)$.
\subsection{The Whole Space}\label{WholeSpaceSection}
We consider first the case of the spatial domain being the whole-space $\mathbb{R}^n$.
The time-space domain then coincides with the locally compact abelian group $\mathfrak{g}p$, and we can thus employ the Fourier transform $\mathscr{F}_\mathfrak{g}p$
and base the proof on an investigation of the corresponding Fourier multipliers.
\begin{lem}\label{wholeop_bdd_lem}
Assume that $A^H$ is properly elliptic and satisfies
Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $p\in\np{1,\infty}$, $s\in\mathbb{R}$ and
\begin{align}\label{wholeop_bdd_lem_FparopperDef}
&\mathfrak{M}per: \widehat{G}\rightarrow\mathbb{C}, \mathfrak{q}quad \mathfrak{M}per(k,\xi):=ik+A^H(\xi).
\textbf{e}_nd{align}
Then the linear operators
\begin{align*}
{{p}}arop:=\mathsf{op}\,[\mathfrak{M}per]:\mathscr{S}_\bot(\mathfrak{g}p)\to \mathscr{S}'_\bot(\mathfrak{g}p),\mathfrak{q}uad
{{p}}aropinv:=\mathsf{op}\,[\mathfrak{M}per^{-1}]:\mathscr{S}_\bot(\mathfrak{g}p)\to \mathscr{S}'_\bot(\mathfrak{g}p)
\textbf{e}_nd{align*}
extend uniquely to bounded operators
\begin{align}\label{wholeop_bdd_lem_Mappingproperties}
{{p}}arop:\HSR{s}{p}_\bot\np{\mathfrak{g}p}\to\HSR{s-2m}{p}_\bot\np{\mathfrak{g}p},\mathfrak{q}uad
{{p}}aropinv:\HSR{s-2m}{p}_\bot\np{\mathfrak{g}p}\to\HSR{s}{p}_\bot\np{\mathfrak{g}p}.
\textbf{e}_nd{align}
In the setting \eqref{wholeop_bdd_lem_Mappingproperties}, ${{p}}aropinv$ is the actual inverse of ${{p}}arop$.
\textbf{e}_nd{lem}
\begin{proof}
Let
\begin{align*}
\mathfrak{m}:\mathbb{R}\tildemes\mathbb{R}^n\setminus\set{(0,0)}\rightarrow\mathbb{C},\mathfrak{q}uad \mathfrak{m} (\eta,\xi)&:=frac{i\eta+A^H(\xi)}{{{p}}ex^{2m}}.
\textbf{e}_nd{align*}
Clearly, $\mathfrak{m}$ is parabolically $0$-homogeneous. By Lemma \ref{MultiplierLem_NegHomoMultipliers}, it follows that $\mathfrak{m}per:=funcrestriction{\mathfrak{m}}{\widehat{G}}$ is an $\LR{p}(\mathfrak{g}p)$-multiplier.
Since $\mathfrak{M}per={{{p}}kx^{2m}\mathfrak{m}per(k,\xi)}$, we conclude that
\begin{align*}
\norm{{{p}}aropf}_{s-2m,p}&=\norm{\mathsf{op}\,\bb{{{p}}kx^{2m}\mathfrak{m}per(k,\xi)}f}_{s-2m,p} = \norm{\mathsf{op}\,\bb{\mathfrak{m}per(k,\xi)}f}_{s,p}
\leq \mathbb{C}cn{C}\norm{f}_{s,p}.
\textbf{e}_nd{align*}
Since $A^H$
satisfies Agmon's condition for $\theta={{p}}mfrac{{{p}}i}{2}$, it follows that
$A^H(\xi)\notin i\mathbb{R}$ for all $\xi\in\mathbb{R}^n\setminus\set{0}$. Consequently,
$\mathfrak{m}^{-1}:\mathbb{R}\tildemes\mathbb{R}^n\setminus\set{(0,0)}\rightarrow\mathbb{C}$ is well-defined and clearly parabolically $0$-homogeneous. We deduce as above that
\begin{align*}
\norm{{{p}}aropinvf}_{s,p}&=\norm{\mathsf{op}\,\bb{{{p}}kx^{-2m}\mathfrak{m}per(k,\xi)^{-1}}f}_{s,p} = \norm{\mathsf{op}\,\bb{\mathfrak{m}per(k,\xi)^{-1}}f}_{s-2m,p}
\leq \mathbb{C}cn{C}\norm{f}_{s-2m,p}.
\textbf{e}_nd{align*}
Consequently, ${{p}}arop$ and ${{p}}aropinv$ extend uniquely to bounded operators ${{p}}arop:\HSR{s}{p}_\bot\to\HSR{s-2m}{p}_\bot$
and ${{p}}aropinv:\HSR{s-2m}{p}_\bot\to\HSR{s}{p}_\bot$, respectively.
Clearly, ${{p}}aropinv$ is the actual inverse of ${{p}}arop$.
\textbf{e}_nd{proof}
\begin{thm}\label{MainThm_ws}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $s\in\mathbb{R}$ and $p\in (1,\infty)$. There is a constant $\mathbb{C}cn{C}>0$ such that
\begin{align}\label{TPADNEst_ws}
\norm{u}_{s,p}\leq \mathbb{C}cn{C} \Bp{\norm{{{p}}artial_tu+A^Hfullu}_{s-2m,p}+ \norm{u}_{s-1,p}}
\textbf{e}_nd{align}
for all $u\in\HSR{s}{p}_\bot(\mathfrak{g}p)$.
\textbf{e}_nd{thm}
\begin{proof}
Since ${{p}}aropu= {{p}}artial_tu + A^Hu$, we employ Lemma \ref{wholeop_bdd_lem} to estimate
\begin{align*}
\norm{u}_{s,p} \leq
\mathbb{C}cn{C}\norm{{{p}}artial_tu + A^Hu}_{s-2m,p}
\leq \mathbb{C}cn{C}\Bp{\norm{{{p}}artial_tu + A^Hfullu}_{s-2m,p} + \norm{\bb{A^Hfull-A^H}u}_{s-2m,p}}.
\textbf{e}_nd{align*}
Since the differential operator $A^Hfull-A^H$ contains derivatives of order at most $2m-1$,
we conclude \eqref{TPADNEst_ws} by a similar multiplier argument as in the proof of Lemma \ref{wholeop_bdd_lem}.
\textbf{e}_nd{proof}
\subsection{The Half Space with Dirichlet Boundary Condition}\label{HalfSpaceDirichletBoundaryCondSection}
In the next step, we consider the case of the spatial domain being the half-space $\wspace_+$ and boundary operators corresponding to Dirichlet boundary
conditions.
As in the whole-space case, we shall work with the symbol of ${{p}}artial_t+A^H$. In the following lemma, we collect its key properties.
\begin{lem}\label{SymbolPropertiesLem}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$. Let
\begin{align*}
\mathfrak{M}:\mathbb{R}\tildemes\mathbb{R}^n\rightarrow\mathbb{C},\mathfrak{q}uad \mathfrak{M}(\eta,\xi',\xi_n):=i\eta+A^H(\xi',\xi_n).
\textbf{e}_nd{align*}
\begin{enumerate}[(1)]
\item\label{SymbolPropertiesLem_PropNumberOfRoots} For every $(\eta,\xi')\in{\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}}$ the complex polynomial
$z\mapsto\mathfrak{M}(\eta,\xi',z)$ has exactly
$m$ roots $\rho_j^+(\eta,\xi')\in\mathbb{C}_+$ in the upper complex plane, and
$m$ roots $\rho_j^-(\eta,\xi')\in\mathbb{C}_-$ in the lower complex plane ($j\in\set{1,\ldots,m})$.
\item The functions
\begin{align}\label{FparopDef}
\begin{aligned}
&\mathfrak{M}_{{p}}m:\mathbb{R}\tildemes\mathbb{R}^n\setminus\setc{(\eta,\xi)}{(\eta,{\xi'})=(0,0)} \rightarrow\mathbb{C},\\
&\mathfrak{M}_{{p}}m(\eta,\xi):={{p}}rod_{j=1}^{m}\bp{\xi_n-\rho_j^{{p}}m(\eta,\xi')}
\textbf{e}_nd{aligned}
\textbf{e}_nd{align}
are parabolically $m$-homogeneous.
\item The coefficients of the polynomials
$z\mapsto\mathfrak{M}_{{p}}m(\eta,\xi',z)$, more specifically the functions
$\mathfrak{M}coef^{{p}}m_\alpha: \mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}\rightarrow\mathbb{C}$ $(\alpha=0,\ldots,m)$
with the property that
\begin{align}\label{SymbolPropertiesLem_FparopAsSumWithCoefficients}
\mathfrak{M}_{{p}}m(\eta,\xi',z) = \sum_{\alpha=0}^m \mathfrak{M}coef^{{p}}m_\alpha(\eta,{\xi'})z^{m-\alpha},
\textbf{e}_nd{align}
are analytic. Moreover, $\mathfrak{M}coef^{{p}}m_\alpha$ is parabolically $\alpha$-homogeneous.
\textbf{e}_nd{enumerate}
\textbf{e}_nd{lem}
\begin{proof}\mbox{}
\begin{enumerate}[(1)]
\item Since $A^H$ is properly elliptic, the polynomial $z\mapsto\mathfrak{M}(0,\xi',z)$ has exactly $m$ roots in the upper and in the lower complex plane, respectively.
Recall that $A^H(\xi)\notin i\mathbb{R}$ for all $\xi\in\mathbb{R}^n\setminus\set{0}$. Since the roots of a polynomial depend continuously on the polynomial's coefficients, we deduce part (\ref{SymbolPropertiesLem_PropNumberOfRoots}) of the lemma.
\item Since $\mathfrak{M}$ is parabolically $2m$-homogeneous, the roots $\rho_j^{{p}}m$ are parabolically $1$-homo\-geneous. It follows that $\mathfrak{M}_{{p}}m$
is parabolically $m$-homogeneous.
\item The analyticity of the coefficients $\mathfrak{M}coef^{{p}}m_\alpha$ follows by a classical argument; see for example \cite[Chapter 4.4]{TanabeBook}.
The coefficient $\mathfrak{M}coef^{{p}}m_\alpha$ being parabolically $\alpha$-homogeneous is a direct consequence of $\mathfrak{M}_{{p}}m$ being $m$-homogeneous. \mathfrak{q}edhere
\textbf{e}_nd{enumerate}
\textbf{e}_nd{proof}
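For the heat operator ($m=1$, $A^H(\xi)=\snorm{\xi}^{2}$), the objects appearing in Lemma \ref{SymbolPropertiesLem} can be computed explicitly; we include this brief illustration for orientation. The roots of $z\mapsto i\eta+\snorm{\xi'}^{2}+z^{2}$ are
\begin{align*}
\rho_1^{{p}}m(\eta,\xi')={{p}}m i\sqrt{i\eta+\snorm{\xi'}^{2}},
\end{align*}
where $\sqrt{\cdot}$ denotes the principal branch, so that $\rho_1^+(\eta,\xi')\in\mathbb{C}_+$ and $\rho_1^-(\eta,\xi')\in\mathbb{C}_-$ for $(\eta,\xi')\neq(0,0)$. Consequently,
\begin{align*}
\mathfrak{M}_{{p}}m(\eta,\xi)=\xi_n\mp i\sqrt{i\eta+\snorm{\xi'}^{2}},\mathfrak{q}quad
\mathfrak{M}_+(\eta,\xi)\,\mathfrak{M}_-(\eta,\xi)=i\eta+\snorm{\xi}^{2}=\mathfrak{M}(\eta,\xi),
\end{align*}
and both $\mathfrak{M}_{{p}}m$ are parabolically $1$-homogeneous, in accordance with the lemma.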
\begin{lem}\label{halfop_bdd_lem}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Put $\mathfrak{M}per_{{p}}m:=\mathfrak{M}_{{p}}m|_{\widehat{G}}$, where $\mathfrak{M}_{{p}}m$ is defined by \eqref{FparopDef}.
Let $p\in\np{1,\infty}$ and $s\in\mathbb{R}$. Then the linear operators
\begin{align*}
{{p}}aroppm:=\mathsf{op}\,[\mathfrak{M}per_{{p}}m]:\mathscr{S}_\bot(\mathfrak{g}p)\to \mathscr{S}'_\bot(\mathfrak{g}p),\mathfrak{q}uad
{{p}}aroppminv:=\mathsf{op}\,[\mathfrak{M}per_{{p}}m^{-1}]:\mathscr{S}_\bot(\mathfrak{g}p)\to \mathscr{S}'_\bot(\mathfrak{g}p)
\textbf{e}_nd{align*}
extend uniquely to bounded and mutually inverse operators ${{p}}aroppm:\HSR{s}{p}_\bot(\mathfrak{g}p)\to\HSR{s-m}{p}_\bot(\mathfrak{g}p)$ and
${{p}}aroppminv:\HSR{s-m}{p}_\bot(\mathfrak{g}p)\to\HSR{s}{p}_\bot(\mathfrak{g}p)$, respectively.
\textbf{e}_nd{lem}
\begin{proof}
The assertion of the lemma follows as in the proof of Lemma \ref{wholeop_bdd_lem}, provided we can show that the restriction to $\widehat{G}$ of the
multiplier
\begin{align*}
\mathfrak{m}:\mathbb{R}\tildemes\mathbb{R}^n\setminus\setc{(\eta,\xi)}{(\eta,{\xi'})=(0,0)}\rightarrow\mathbb{C},\mathfrak{q}uad \mathfrak{m}(\eta,\xi)&:=frac{\mathfrak{M}_{{p}}m(\eta,\xi)}{{{p}}ex^{m}}
\textbf{e}_nd{align*}
and its inverse are $\LRcompl{p}(\mathfrak{g}p)$-multipliers. Although $\mathfrak{m}$ is parabolically $0$-homogeneous, we cannot apply
Lemma \ref{MultiplierLem_NegHomoMultipliers} directly since $\mathfrak{m}$ is not defined on all of $\mathbb{R}\tildemes\mathbb{R}^n\setminus\set{(0,0)}$.
Instead, we recall \eqref{SymbolPropertiesLem_FparopAsSumWithCoefficients} and observe that
\begin{align}\label{halfop_bdd_lem_FparopSeparationTrick}
\mathfrak{m}(\eta,\xi) = \sum_{\alpha=0}^m frac{ \mathfrak{M}coef^{{p}}m_\alpha(\eta,{\xi'})}{{{p}}exp^\alpha} \cdot frac{\xi_n^{m-\alpha}{{p}}exp^\alpha}{{{p}}ex^m}
=:\sum_{\alpha=0}^m\mathfrak{m}_1^\alpha \cdot \mathfrak{m}_2^\alpha.
\textbf{e}_nd{align}
Owing to the $\alpha$-homogeneity of $\mathfrak{M}coef^{{p}}m_\alpha$, Lemma \ref{MultiplierLem_NegHomoMultipliers} yields that both
$funcrestriction{{\mathfrak{m}_1^\alpha}}{\widehat{G}}$ and $funcrestriction{{\mathfrak{m}_2^\alpha}}{\widehat{G}}$ are $\LR{p}(\mathfrak{g}p)$-multipliers. Consequently, also $\mathfrak{m}$
is an $\LR{p}(\mathfrak{g}p)$-multiplier, and we thus conclude as in the proof of Lemma \ref{wholeop_bdd_lem} that ${{p}}aroppm$
extends uniquely to a bounded operator ${{p}}aroppm:\HSR{s}{p}_\bot(\mathfrak{g}p)\to\HSR{s-m}{p}_\bot(\mathfrak{g}p)$.
To show the corresponding property for ${{p}}aroppminv$, we introduce a cut-off function $\chi\in\mathbb{C}Ri(\mathbb{R})$ with
$\chi(\eta)=0$ for $\snorm{\eta}<frac{{{p}}i}{{{p}}er}$ and $\chi(\eta)=1$ for $\snorm{\eta}geq {{p}}erf$.
We claim that
\begin{align*}
\mathfrak{\widetilde{m}}:\mathbb{R}\tildemes\mathbb{R}^n\rightarrow\mathbb{C},\mathfrak{q}uad \mathfrak{\widetilde{m}}(\eta,\xi):= \chi(\eta) frac{{{p}}ex^m}{\mathfrak{M}_{{p}}m(\eta,\xi)}
\textbf{e}_nd{align*}
is an $\LR{p}\np{\mathbb{R}\tildemes\mathbb{R}^n}$-multiplier. Indeed, utilizing that $\mathfrak{M}_{{p}}m$ is $m$-homogeneous, we see
that $\mathfrak{M}_{{p}}m$ can be bounded below by
\begin{align}\label{halfop_bdd_lem_FparopBoundBelow}
\snorm{\mathfrak{M}_{{p}}m(\eta,\xi)} geq {{p}}ex^m\, \inf_{{{p}}arnorm{\widetilde{\eta}}{\widetilde{\xi}}=1,(\widetilde{\eta},\widetilde{\xi}')\neq(0,0)} \snorm{\mathfrak{M}_{{p}}m(\widetilde{\eta},\widetilde{\xi})},
\textbf{e}_nd{align}
where the infimum above is strictly positive: since the roots in definition \eqref{FparopDef} satisfy $\lim_{(\eta,{\xi'})\rightarrow(0,0)}\rho_j^{{p}}m(\eta,{\xi'})=0$, the function $\mathfrak{M}_{{p}}m$ extends continuously to the compact parabolic unit sphere, where it takes the non-zero value $\xi_n^m$ at the points with $(\eta,\xi')=(0,0)$ and does not vanish elsewhere. Using only \eqref{halfop_bdd_lem_FparopBoundBelow} and
the $\alpha$-homogeneity of the coefficients $\mathfrak{M}coef^{{p}}m_\alpha$ as in \eqref{halfop_bdd_lem_FparopSeparationTrick}, it is now straightforward to verify that $\mathfrak{\widetilde{m}}$ satisfies the condition of the
Marcinkiewicz multiplier theorem (\cite[Chapter IV, \S 6]{Stein70}). Thus, $\mathfrak{\widetilde{m}}$ is an
$\LR{p}(\mathbb{R}\tildemes\mathbb{R}^n)$-multiplier, and by \cite[Theorem B.2.1]{EdG77} $funcrestriction{{\mathfrak{\widetilde{m}}}}{{\widehat{G}}}$ is therefore an $\LR{p}(\mathfrak{g}p)$-multiplier.
Since the restriction of the corresponding operator $\mathsf{op}\,\bb{funcrestriction{{\mathfrak{\widetilde{m}}}}{{\widehat{G}}}}:\LR{p}(\mathfrak{g}p)\rightarrow\LR{p}(\mathfrak{g}p)$ to the subspace of purely oscillatory functions coincides with $\mathsf{op}\,\bb{funcrestriction{{\mathfrak{m}^{-1}}}{{\widehat{G}}}}:\LRcompl{p}(\mathfrak{g}p)\rightarrow\LRcompl{p}(\mathfrak{g}p)$,
we deduce as in the proof of Lemma \ref{wholeop_bdd_lem} that ${{p}}aroppminv$ extends uniquely to a bounded operator
${{p}}aroppminv:\HSR{s-m}{p}_\bot(\mathfrak{g}p)\to\HSR{s}{p}_\bot(\mathfrak{g}p)$.
\textbf{e}_nd{proof}
The lemma above provides us with a decomposition of the differential operators in \eqref{wholeop_bdd_lem_Mappingproperties}, that is, for
${{p}}arop:\HSR{s}{p}_\bot(\mathfrak{g}p)\to\HSR{s-2m}{p}_\bot(\mathfrak{g}p)$ and ${{p}}aropinv:\HSR{s-2m}{p}_\bot(\mathfrak{g}p)\to\HSR{s}{p}_\bot(\mathfrak{g}p)$ the decompositions ${{p}}arop={{p}}aropp{{p}}aropm={{p}}aropm{{p}}aropp$ and ${{p}}aropinv={{p}}aroppinv{{p}}aropminv={{p}}aropminv{{p}}aroppinv$
are valid provided ${{p}}arop$ is normalized accordingly. Employing the Paley-Wiener Theorem,
we shall now show that the operators ${{p}}arop_{{p}}m$ and ${{p}}aropinv_{{p}}m$ ``respect'' the support of a function in the upper (lower) half-space.
\begin{lem}\label{paley_wiener_lem}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $p\in\np{1,\infty}$, $s\in\mathbb{R}$ and consider $u\in \HSR{s}{p}_\bot(\mathfrak{g}p)$.
\begin{enumerate}
\item\label{paley_wiener_lem_i} If $\suppu\subset\text{or }us\times\wspace_+closed$, then $\supp{{p}}aroppu\subset\text{or }us\times\wspace_+closed$ and $\supp{{p}}aroppinvu\subset\text{or }us\times\wspace_+closed$.
\item\label{paley_wiener_lem_ii} If $\suppu\subset\text{or }us\times\wspace_-closed$, then $\supp{{p}}aropmu\subset\text{or }us\times\wspace_-closed$ and $\supp{{p}}aropminvu\subset\text{or }us\times\wspace_-closed$.
\textbf{e}_nd{enumerate}
\textbf{e}_nd{lem}
\begin{proof}
We shall prove only part \eqref{paley_wiener_lem_i}, for part \eqref{paley_wiener_lem_ii} follows analogously. We employ the notation $\mathfrak{g}pprime:={\mathbb T}\tildemes\mathbb{R}^{n-1}$ and the canonical decomposition $\mathscr{F}_\mathfrak{g}p=\mathscr{F}_\mathfrak{g}pprime\mathscr{F}_\mathbb{R}$ of the Fourier transform. In view of Lemma \ref{halfop_bdd_lem}, it suffices to consider only $u\in \mathscr{S}(\mathfrak{g}p)$ with $\suppu\subset\text{or }us\times\wspace_+closed$.
For fixed ${k}\in{{p}}erf\mathbb{Z}\setminus\set{0}$ and $\xi'\in\mathbb{R}^{n-1}$, we let $\mathsf{D}({k},\xi'):=\mathscr{F}_{\mathbb{R}}^{-1}\mathfrak{M}per_+({k},\xi',\cdot)\mathscr{F}_{\mathbb{R}}$. Since $\mathfrak{M}per_+$ is a polynomial with respect to the variable $\xi_n$, $\mathsf{D}({k},\xi')$ is a differential operator in $x_n$ and hence $\supp(\mathsf{D}({k},\xi')f)\subset \overline{\mathbb{R}_+}$ for every $f\in\mathscr{S}'(\mathbb{R})$ with $\suppf\subset\overline{\mathbb{R}_+}$.
Clearly, $\supp([\mathscr{F}_\mathfrak{g}pprimeu]({k},\xi',\cdot))\subset\overline{\mathbb{R}_+}$.
Since $\mathscr{F}_\mathfrak{g}pprime[{{p}}aroppu]({k},\xi',\cdot)=[\mathsf{D}({k},\xi')\mathscr{F}_\mathfrak{g}pprimeu]({k},\xi',\cdot)$, we conclude $\supp{{p}}aroppu\subset\text{or }us\times\wspace_+closed$.
To show the same property for ${{p}}aroppinvu$, we employ the version of the Paley-Wiener Theorem presented in Proposition \ref{paley_wiener_prop}. Since $u\in \mathscr{S}(\mathfrak{g}p)\subset \LR{2}(\mathfrak{g}p)$, we immediately obtain that for fixed ${k}\in{{p}}erf\mathbb{Z}$ and $\xi'\in\mathbb{R}^{n-1}$, the Fourier transform $[\mathscr{F}_\mathfrak{g}pu]({k},\xi',\cdot)$ is in the Hardy space $\mathscr{H}_+^2(\mathbb{R})$. Let
\begin{align*}
\widetilde{\mathfrak{M}per_+^{-1}}({k},\xi',\cdot):\mathbb{C}Numbers_-\to\mathbb{C}Numbers,\mathfrak{q}uad \widetilde{\mathfrak{M}per_+^{-1}}({k},\xi',z):={{p}}rod_{j=1}^m(z-\rho_j^+({k},\xi'))^{-1}
\textbf{e}_nd{align*}
denote the extension of $\mathfrak{M}per_+^{-1}({k},\xi',\cdot)$ to the lower complex plane. Since all the roots $\rho_j^+$ lie in the upper complex plane,
this extension is holomorphic and bounded.
It follows that $[\mathfrak{M}per_+^{-1}\mathscr{F}_\mathfrak{g}pu]({k},\xi',\cdot)\in \mathscr{H}_+^2(\mathbb{R})$. Hence, taking the inverse Fourier transform, Proposition \ref{paley_wiener_prop} yields $\supp{{p}}aroppinvu\subset\text{or }us\times\wspace_+closed$.
\textbf{e}_nd{proof}
The above properties of ${{p}}arop_{{p}}m$ and ${{p}}aropinv_{{p}}m$ lead to a surprisingly simple representation formula, see \eqref{weak_sol_lem_repformula} below, for the solution $u$ to the problem ${{p}}artial_tu+A^Hu=f$ in the half-space ${\mathbb T}\tildemes\wspace_+$ with Dirichlet boundary conditions.
The problem itself can be formulated elegantly as \eqref{weak_sol_1}.
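The formulation \eqref{weak_sol_1} indeed encodes the Dirichlet problem; briefly, for $u\in\HSRcompl{m}{p}(\mathfrak{g}p)$ one has, recalling Lemma \ref{BS_TraceSpaceBesselPotentialLem} and ${{p}}aropu={{p}}artial_tu+A^Hu$,
\begin{align*}
&\suppu\subset\text{or }us\times\wspace_+closed \mathfrak{q}uad\mathbb{R}a\mathfrak{q}uad \trace_m\bp{funcrestriction{u}{\text{or }us\times\wspace_+}}=0,\\
&\supp\np{{{p}}aropu-f}\subset\text{or }us\times\wspace_-closed \mathfrak{q}uad\mathbb{R}a\mathfrak{q}uad {{p}}artial_tu+A^Hu=f\ \tilden\ \text{or }us\times\wspace_+.
\end{align*}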
\begin{lem}\label{weak_sol_lem}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $p\in\np{1,\infty}$ and $f\in \HSR{-m}{p}_\bot(\mathfrak{g}p)$.
Let $Y_+$ denote the characteristic function on $\text{or }us\times\wspace_+$.
Then
\begin{align}\label{weak_sol_lem_repformula}
u:={{p}}aroppinvY_+{{p}}aropminvf,
\textbf{e}_nd{align}
is the unique solution in $\HSRcompl{m}{p}(\mathfrak{g}p)$ to
\begin{align}\label{weak_sol_1}
\suppu\subset \text{or }us\times\wspace_+closed, \mathfrak{q}quad \text{and } \mathfrak{q}quad \supp\np{{{p}}aropu-f}\subset \text{or }us\times\wspace_-closed.
\textbf{e}_nd{align}
Moreover, there is a constant $c=c(n,p)>0$ such that
\begin{align}\label{weak_sol_est}
\norm{u}_{m,p,\text{or }us\times\wspace_+}\leq \mathbb{C}cn{C} \norm{f}_{-m,p,\text{or }us\times\wspace_+}.
\textbf{e}_nd{align}
\textbf{e}_nd{lem}
\begin{proof}
By Lemma \ref{halfop_bdd_lem}, ${{p}}aropminvf\in\LRcompl{p}(\mathfrak{g}p)$. Clearly then $Y_+{{p}}aropminvf\in\LRcompl{p}(\mathfrak{g}p)$ and trivially
$\supp\np{Y_+{{p}}aropminvf}\subset\text{or }us\times\wspace_+closed$.
Lemma \ref{paley_wiener_lem} now implies
$\supp\np{{{p}}aroppinvY_+{{p}}aropminvf}\subset\text{or }us\times\wspace_+closed$, which concludes the first part of \eqref{weak_sol_1}.
Since
$\supp\bp{\np{Y_+-\id}{{p}}aropminvf}\subset\text{or }us\times\wspace_-closed$,
Lemma \ref{paley_wiener_lem} implies
$\supp\bp{{{p}}aropm\np{Y_+-\id}{{p}}aropminvf}\subset\text{or }us\times\wspace_-closed$. However,
${{p}}aropu-f={{p}}aropm\np{Y_+-\id}{{p}}aropminvf$, whence the second part of \eqref{weak_sol_1} follows.
To show uniqueness, let $v\in\HSRcompl{m}{p}$ be a solution to \eqref{weak_sol_1} with $f=0$. Since ${{p}}aroppv={{p}}aropminv{{p}}aropv$, Lemma \ref{paley_wiener_lem} yields
$\supp\np{{{p}}aroppv}\subset\text{or }us\times\wspace_-closed$. On the other hand, $\suppv\subset\text{or }us\times\wspace_+closed$ by assumption, whence $\supp{{p}}aroppv\subset\text{or }us\times\wspace_+closed$ by Lemma \ref{paley_wiener_lem}.
Recalling that ${{p}}aroppv\in\LRcompl{p}(\mathfrak{g}p)$ by
Lemma \ref{halfop_bdd_lem}, we thus deduce ${{p}}aroppv=0$ and consequently $v=0$.
This concludes the assertion of uniqueness.
It remains to show \eqref{weak_sol_est}.
For $F\in\HSRcompl{-m}{p}(\mathfrak{g}p)$ with $\supp\np{F-f}\subset\text{or }us\times\wspace_-closed$, we can utilize Lemma \ref{halfop_bdd_lem} and Lemma \ref{paley_wiener_lem} as
above to conclude that $Y_+{{p}}aropminv F = Y_+{{p}}aropminv f$, which in turn implies $u={{p}}aroppinvY_+{{p}}aropminv F$.
Recalling Lemma \ref{wholeop_bdd_lem}, we estimate
\begin{align*}
\norm{u}_{m,p,\text{or }us\times\wspace_+} &\leq \norm{u}_{m,p}\\
&= \inf\setc{\norm{{{p}}aroppinvY_+{{p}}aropminv F}_{m,p}}{F\in\HSRcompl{-m}{p}(\mathfrak{g}p),\ \supp\np{F-f}\subset\text{or }us\times\wspace_-closed}\\
&\leq \mathbb{C}cn{C}\, \inf\setc{\norm{F}_{-m,p}}{F\in\HSRcompl{-m}{p}(\mathfrak{g}p),\ \supp\np{F-f}\subset\text{or }us\times\wspace_-closed} \\
&=\mathbb{C}cn{C}\, \norm{f}_{-m,p,\text{or }us\times\wspace_+}.
\textbf{e}_nd{align*}
\textbf{e}_nd{proof}
Finally, we can establish the main theorem in the case of the spatial domain being the half-space $\wspace_+$ and boundary operators corresponding to Dirichlet boundary
conditions.
\begin{thm}\label{dirichlet_sol_thm}
Assume $A^H$ is properly elliptic and satisfies Agmon's condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $p\in\np{1,\infty}$. For $f\in \LRcompl{p}(\text{or }us\times\wspace_+)$ and
$g\in\tracespacecompl{\iota}{p}({\mathbb T}\tildemes\mathbb{R}^n_+)$
there is a unique $u\in \mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$ subject to
\begin{align}
\begin{pdeq}\label{dirichlet_sol_prob}
{{p}}artial_tu+A^H u&=f && \tilden \text{or }us\times\wspace_+, \\
\trace_{m}u&=g && \text{on } {\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+.
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align}
Moreover, there is a constant $c=c(n,p)>0$ such that
\begin{align}\label{dirichlet_sol_est}
\norm{u}_{\mathscr{W}SR{1,2m}{p}(\text{or }us\times\wspace_+)}\leq c (\norm{f}_{\LR{p}(\text{or }us\times\wspace_+)}+\norm{g}_{\tracespacecompl{\iota}{p}({\mathbb T}\tildemes\mathbb{R}^n_+)}).
\textbf{e}_nd{align}
\textbf{e}_nd{thm}
\begin{proof}
We first assume $g=0$. Extending $f$ by zero to the whole space ${\mathbb T}\tildemes\mathbb{R}^n$, we have $f\in \LR{p}_\bot(\mathfrak{g}p)\subset \HSR{-m}{p}_\bot(\mathfrak{g}p)$.
Let $u\in\HSRcompl{m}{p}(\mathfrak{g}p)$ be the solution to \eqref{weak_sol_1} from Lemma \ref{weak_sol_lem}. Lemma \ref{BS_TraceSpaceBesselPotentialLem} yields $\trace_{m}u=0$. Thus, $u$ is a solution to \eqref{dirichlet_sol_prob}. We shall establish
higher order regularity of $u$ iteratively. For this purpose, we employ Proposition \ref{TPBesselEqualsTPSobHalfspacecaseProp} to estimate
\begin{align*}
\norm{u}_{m+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C} \Bp{\norm{{{p}}artial_n^{2m}u}_{-m+1,p,\text{or }us\times\wspace_+} +\sum_{j=0}^{2m-1}\norm{{{p}}artial_n^j\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}^{2m-j}}}u}_{-m+1,p,\text{or }us\times\wspace_+}}\\
&\leq \mathbb{C}cn{C} \bp{\norm{{{p}}artial_n^{2m}u}_{-m+1,p,\text{or }us\times\wspace_+} +\norm{\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}u}_{m,p,\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
Since the symbol of ${{p}}arop$ reads $\mathfrak{M}per({k},\xi',\xi_n)=a\xi_n^{2m}+i{k}+\sum_{k=0}^{2m-1}\sum_{|\alpha|= 2m-k} a_{\alpha,k}(\xi')^\alpha \xi_n^k$ with $a\neq 0$, we deduce with the help of Lemma \ref{MultiplierLem_NegHomoMultipliers} that
\begin{align*}
\norm{{{p}}artial_n^{2m}u}_{-m+1,p,\text{or }us\times\wspace_+}\leq \mathbb{C}cn{C}\bp{\norm{Au}_{-m+1,p,\text{or }us\times\wspace_+}+\norm{\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}u}_{m,p,\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
Consequently,
\begin{align*}
\norm{u}_{m+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C} \bp{\norm{f}_{-m+1,p,\text{or }us\times\wspace_+} +\norm{\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}u}_{m,p,\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
Clearly, $\mathsf{op}\,\bb{{{p}}arnorm{k}{{\xi'}}}$ commutes with ${{p}}aroppinvY_+{{p}}aropminv$, whence
\begin{align*}
\norm{u}_{m+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C} \bp{\norm{f}_{-m+1,p,\text{or }us\times\wspace_+} +\norm{{{p}}aroppinvY_+{{p}}aropminv\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}f}_{m,p,\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
By Lemma \ref{weak_sol_lem}, we finally obtain
\begin{align*}
\norm{u}_{m+1,p,\text{or }us\times\wspace_+}
&\leq \mathbb{C}cn{C} \bp{\norm{f}_{-m+1,p,\text{or }us\times\wspace_+} +\norm{\mathsf{op}\,\bb{{{{p}}arnorm{k}{{\xi'}}}}f}_{-m,p,\text{or }us\times\wspace_+}}
\leq \mathbb{C}cn{C} {\norm{f}_{-m+1,p,\text{or }us\times\wspace_+}}.
\textbf{e}_nd{align*}
Iterating this procedure, we obtain after $m$ steps the desired regularity $u\in\HSRcompl{2m}{p}(\text{or }us\times\wspace_+)$ together with the estimate
$\norm{u}_{2m,p,\text{or }us\times\wspace_+}\leq\mathbb{C}cn{C}\norm{f}_p$. Recalling from Proposition \ref{TPBesselEqualsTPSobWholespacecaseProp} that $\HSRcompl{2m}{p}(\text{or }us\times\wspace_+)=\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$, we conclude \eqref{dirichlet_sol_est} in the case $g=0$.
If $g\neq 0$, we recall the properties \eqref{TraceOprMappingproperties} of the trace operator and
choose a function $v\in \mathscr{W}SR{1,2m}{p}({\mathbb T}\tildemes\mathbb{R}^n_+)$ with $\trace_mv=g$ and
$\norm{v}_{2m,p}\leq \mathbb{C}cn{C} \norm{g}_{\tracespacecompl{\iota}{p}({\mathbb T}\tildemes\mathbb{R}^n_+)}$.
With $w:=u-v$, problem \eqref{dirichlet_sol_prob} is then reduced to
\begin{align*}
\begin{pdeq}
{{p}}artial_tw+A^Hw&=f-({{p}}artial_tv+A^Hv) && \tilden \text{or }us\times\wspace_+, \\
\trace_{m}w&=0 && \text{on } {\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+,
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align*}
and the assertion readily follows from the homogeneous part already proven.
To show uniqueness, assume $u\in \mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$ is a solution to \eqref{dirichlet_sol_prob} with homogeneous data $f=g=0$.
By Lemma \ref{BS_TraceSpaceBesselPotentialLem} there is an extension $U\in\mathscr{W}SRcompl{1,2m}{p}(\mathfrak{g}p)$ of $u$ with $\supp U\subset\text{or }us\times\wspace_+closed$.
By Lemma \ref{weak_sol_lem}, $U=0$.
\textbf{e}_nd{proof}
\subsection{The Half Space with General Boundary Conditions}\label{HalfSpaceGeneralBoundaryConditionsSection}
Let $\Omega:=\mathbb{R}^n_+$ and $(A^Hfull,B^Hfull_1,\ldots,B^Hfull_m)$ be differential operators of the form \eqref{DefOfDiffOprfull} with constant
coefficients.
\begin{lem}\label{CharmatrixLem}
Assume $A^H$ is properly elliptic
and $(A^H,B^H_1,\ldots,B^H_m)$ satisfies Agmon's complementing condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $(k,\xi')\in{{p}}erf\mathbb{Z}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}$. Consider as polynomials in $z$ the mappings $z\mapsto\mathfrak{M}per_+(k,\xi',z)$ and $z\mapstoB^H_j(\xi',z)$, where $\mathfrak{M}per_+$ is defined as in Lemma \ref{halfop_bdd_lem}.
For $j=1,\ldots,m$ let
$F_{(j-1)l}(k,\xi')$, $l=0,\ldots,m-1$, denote the coefficients of the polynomial
$B^H_j(\xi',z)\mod\mathfrak{M}per_+(k,\xi',z)$.
The corresponding matrix
$F(k,\xi')\in\mathbb{C}^{m\tildemes m}$ is called \emph{characteristic matrix}.
The characteristic matrix function
$F:{{p}}erf\mathbb{Z}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}\rightarrow\mathbb{C}^{m\tildemes m}$
has an extension
$F:\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}\rightarrow\mathbb{C}^{m\tildemes m}$ that satisfies $(j,l=0,\ldots,m-1)$
\begin{align}
&forall\lambda>0:\mathfrak{q}uad F_{jl}(\eta,\xi') = \lambda^{l-m_{j+1}}F_{jl}(\lambda^{2m}\eta,\lambda\xi'),\label{CharmatrixLem_Scaling}\\
&F_{jl}\in\mathbb{C}Ri\bp{\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}}^{m\tildemes m},\label{CharmatrixLem_Smooth}\\
&\snorm{F_{jl}(\eta,{\xi'})}\leq \mathbb{C}cn[CharmatrixLem_GrowthBoundConst]{C}\,{{{p}}arnorm{\eta}{{\xi'}}}^{m_{j+1}-l}.\label{CharmatrixLem_GrowthBound}
\textbf{e}_nd{align}
Moreover, $F(\eta,{\xi'})$ is invertible and the inverse matrix function $F^{-1}(\eta,{\xi'})$ satisfies
\begin{align}
&forall\lambda>0:\mathfrak{q}uad F^{-1}_{jl}(\eta,\xi') = \lambda^{m_{l+1}-j}F^{-1}_{jl}(\lambda^{2m}\eta,\lambda\xi'),\label{CharmatrixLem_InvScaling}\\
&F^{-1}_{jl}\in\mathbb{C}Ri\bp{\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}}^{m\tildemes m},\label{CharmatrixLem_InvSmooth}\\
&\snorm{F^{-1}_{jl}(\eta,{\xi'})}\leq \mathbb{C}cn[CharmatrixLem_InvGrowthBoundConst]{C}\,{{{p}}arnorm{\eta}{{\xi'}}}^{j-m_{l+1}}.\label{CharmatrixLem_InvGrowthBound}
\textbf{e}_nd{align}
\textbf{e}_nd{lem}
\begin{proof}
For $j=1,\ldots,m$ the coefficients $F_{(j-1)l}(\eta,\xi')$, $l=0,\ldots,m-1$, of the polynomial
$z\mapsto\bb{B^H_j(\xi',z)\mod\mathfrak{M}_+(\eta,\xi',z)}$ clearly yield an extension of the $j$'th row of the characteristic matrix $F$ to $(\eta,{\xi'})\in\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}$. By definition we have
\begin{align}\label{CharmatrixLem_DivWithRest}
B^H_j(\xi',z) = Q_j(\eta,\xi',z)\mathfrak{M}_+(\eta,\xi',z) + \sum_{l=0}^{m-1} F_{(j-1)l}(\eta,\xi')\,z^l
\textbf{e}_nd{align}
for some polynomial $z\mapsto Q_j(\eta,\xi',z)$. Recalling \eqref{FparopDef} and that the roots $\rho_j^+$ are parabolically $1$-homogeneous, we deduce for $\lambda>0$ that
\begin{align*}
B^H_j(\xi',z) = \lambda^{-m_j}B^H_j(\lambda \xi',\lambdaz) = \widetilde{Q_j}(\eta,\xi',z)\mathfrak{M}_+(\eta,\xi',z) + \sum_{l=0}^{m-1} \lambda^{l-m_j}F_{(j-1)l}(\lambda^{2m }\eta,\lambda \xi')\,z^l
\textbf{e}_nd{align*}
for some polynomial $z\mapsto\widetilde{Q_j}(\eta,\xi',z)$. Comparing coefficients in the two expressions for the polynomial $z\mapstoB^H_j(\xi',z)$ above yields \eqref{CharmatrixLem_Scaling}. By \eqref{FparopDef} we have
$\mathfrak{M}_+(k,\xi',z) = \sum_{\alpha=0}^m \mathfrak{M}coef_\alpha(\eta,{\xi'})z^{m-\alpha}$
with coefficients $\mathfrak{M}coef_\alpha(\eta,{\xi'})\in\mathbb{C}Ri\bp{\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}}$ and $\mathfrak{M}coef_0(\eta,{\xi'})=1$.
Polynomial division of $z\mapstoB^H_j({\xi'},z)$ with $z\mapsto\mathfrak{M}_+(k,\xi',z)$ thus yields a polynomial with coefficients in $\mathbb{C}Ri\bp{\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}}$, which establishes \eqref{CharmatrixLem_Smooth}. Choosing $\lambda:={{p}}arnorm{\eta}{{\xi'}}^{-1}$ in \eqref{CharmatrixLem_Scaling} we obtain
$\snorm{F_{jl}(\eta,{\xi'})}\leq {{{p}}arnorm{\eta}{{\xi'}}}^{m_{j+1}-l} \sup_{{{p}}arnorm{s}{gamma}=1}\snorm{F_{jl}(s,gamma)}$
and thus \eqref{CharmatrixLem_GrowthBound}. As a direct consequence of Definition \ref{TPADN_AgmonCondAB},
the rows of $F(k,\xi')$ are linearly independent and $F(k,{\xi'})$ is therefore invertible for
$(k,{\xi'})\in{{p}}erf\mathbb{Z}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}$.
The scaling property \eqref{CharmatrixLem_Scaling} implies that $F(\eta,{\xi'})$ is also invertible for
$(\eta,{\xi'})\in\mathbb{R}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}$. The corresponding scaling property \eqref{CharmatrixLem_InvScaling} for the inverse $F^{-1}(\eta,{\xi'})$ follows by multiplying \eqref{CharmatrixLem_Scaling} with
$\lambda^lF^{-1}_{l\alpha}(\eta,{\xi'})F_{\beta j}(\lambda^{2m}\eta,\lambda{\xi'})$ and summing over $j$ and $l$. Since $F^{-1}(\eta,{\xi'})=\bp{\det F(\eta,{\xi'})}^{-1}\cof F(\eta,{\xi'})^\top$, \eqref{CharmatrixLem_InvSmooth} follows
from \eqref{CharmatrixLem_Smooth}. Finally, \eqref{CharmatrixLem_InvGrowthBound} follows from \eqref{CharmatrixLem_InvScaling}
in the same way \eqref{CharmatrixLem_GrowthBound} was derived from \eqref{CharmatrixLem_Scaling}.
\textbf{e}_nd{proof}
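To make the construction of the characteristic matrix more concrete, the following Python/SymPy sketch computes $F$ at a single frequency for a toy example. The concrete symbol (corresponding to the choice $A^H=\Delta^2$ in two space dimensions, so $m=2$), the two boundary symbols $1$ and $z^3$, and the sample frequency are placeholder assumptions made only for this illustration; they are not the operators considered in the text.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
k_val, xi_val = 3, sp.Rational(3, 2)   # arbitrary sample frequency (k, xi')

# Symbol of d_t + A^H for the toy choice A^H = Delta^2 (m = 2):
#   M(k, xi', z) = (xi'^2 + z^2)^2 + i*k
M = sp.expand((xi_val**2 + z**2)**2 + sp.I * k_val)

roots = sp.Poly(M, z).nroots()
upper = [r for r in roots if sp.im(r) > 0]   # the m roots rho_j^+ above the real axis
assert len(upper) == 2

M_plus = sp.expand(sp.Mul(*(z - r for r in upper)))   # the factor M_+(k, xi', z)

# Two placeholder boundary symbols in the variable z: B_1 = 1, B_2 = z^3.
B = [sp.Integer(1), z**3]

F = sp.zeros(2, 2)
for j, Bj in enumerate(B):
    rem = sp.rem(Bj, M_plus, z)          # remainder of B_j modulo M_+
    for l in range(2):
        F[j, l] = sp.expand(rem).coeff(z, l)

print(F)          # the characteristic matrix at this frequency
print(F.det())    # Agmon's complementing condition amounts to det F != 0
\end{verbatim}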
\begin{lem}\label{CharmatrixMultiplierLem}
Let $p\in(1,\infty)$. Under the same assumptions as in Lemma \ref{CharmatrixLem}, the operators $\mathsf{op}\,\bb{F}$ and $\mathsf{op}\,\bb{F^{-1}}$ extend to bounded operators
\begin{align}
&\mathsf{op}\,\bb{F}:
\tracespacecompl{\iota}{p}({{\mathbb T}\times\mathbb{R}^{n-1}})\to\tracespacecompl{\kappa}{p}({{\mathbb T}\times\mathbb{R}^{n-1}}),
\label{CharmatrixMultiplier}\\
&\mathsf{op}\,\bb{F^{-1}}:
\tracespacecompl{\kappa}{p}({{\mathbb T}\times\mathbb{R}^{n-1}})\to\tracespacecompl{\iota}{p}({{\mathbb T}\times\mathbb{R}^{n-1}}).\label{InvCharmatrixMultiplier}
\textbf{e}_nd{align}
\textbf{e}_nd{lem}
\begin{proof}
For $g\in\mathscr{S}compl({{\mathbb T}\times\mathbb{R}^{n-1}})^m$ we recall Lemma \ref{BS_SobSloboMultiplierLem} and estimate
\begin{align*}
\norm{\mathsf{op}\,\bb{F^{-1}}g}_{\tracespacecompl{\iota}{p}}
&\leq \sum_{j=1}^m \sum_{l=1}^m
\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}\bb{F^{-1}_{(j-1)(l-1)}\mathscr{F}_{{\mathbb T}\times\mathbb{R}^{n-1}}\np{g_{l-1}}}}_{\mathscr{W}SRcompl{\iota_j,2m\iota_j}{p}({{\mathbb T}\times\mathbb{R}^{n-1}})}\\
&\leq \sum_{j=1}^m \sum_{l=1}^m
\norm{\mathscr{F}^{-1}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}\bb{{{p}}arnorm{k}{{\xi'}}^{m_l-(j-1)}F^{-1}_{(j-1)(l-1)}\mathscr{F}_{{\mathbb T}\times\mathbb{R}^{n-1}}\np{g_{l-1}}}}_{\mathscr{W}SRcompl{\kappa_l,2m\kappa_l}{p}}.
\textbf{e}_nd{align*}
By \eqref{CharmatrixLem_InvScaling}, ${{p}}arnorm{k}{\xi'}^{m_l-(j-1)}F^{-1}_{(j-1)(l-1)}(k,{\xi'})$ is parabolically $0$-homogeneous.
Corollary \ref{MultiplierLem} thus implies
\eqref{InvCharmatrixMultiplier}. Assertion \eqref{CharmatrixMultiplier} is shown in a similar manner.
\textbf{e}_nd{proof}
\begin{lem}\label{HalfspaceGeneralBrdCondsSolutionRepformularLem}
Let the assumptions be as in Lemma \ref{CharmatrixLem}.
Let $(k,\xi')\in{{p}}erf\mathbb{Z}\tildemes\mathbb{R}^{n-1}\setminus\set{(0,0)}$ and $gamma$ be a rectifiable Jordan curve in $\mathbb{C}$ that encircles all the roots $\rho_j^+(k,\xi')$ $(j=1,\ldots,m)$ of
$z\mapsto\mathfrak{M}per_+(k,\xi',z)$. Let $c_l^+(k,\xi')\in\mathbb{C}$, $l=0,\ldots,m$, denote coefficients such that
$\mathfrak{M}per_+(k,\xi',z)=\sum_{l=0}^m c_l^+(k,\xi')z^{m-l}$.
Put
\begin{align*}
L(k,\xi',\cdot):\mathbb{R}^+\rightarrow\mathbb{C}^m,\mathfrak{q}uad L_\alpha(k,\xi',x_n):=
frac{1}{2{{p}}i i}\int_{gamma} frac{\sum_{l=0}^{m-\alpha-1} c_l^+(k,\xi')z^{m-\alpha-l}}{\mathfrak{M}per_+(k,\xi',z)} \e^{ix_nz}\,\dargz.
\textbf{e}_nd{align*}
Then for any $g=(g_0,\ldots,g_{m-1})\in\mathbb{C}^m$ the unique solution $u\in\mathscr{W}SR{2m}{2}(\mathbb{R}^+)$ to
\begin{align}\label{HalfspaceDirichletBrdCondsSolutionRepformularLem_ODE}
\begin{pdeq}
& {{p}}arop(k,\xi',{{p}}artial_n) u = 0 && \tilden\mathbb{R}^+,\\
& \trace_{m}u(0) = g &&
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align}
is given by $u(k,\xi',x_n) = L(k,\xi',x_n)\cdot g$.
Moreover,
\begin{align}\label{HalfspaceDirichletBrdCondsSolutionRepformularLem_FormulaDerivatives}
B^H_j(\xi',{{p}}artial_n) u(k,\xi',0) = \sum_{l=0}^{m-1}F_{(j-1)l}(k,\xi')\,g_l\mathfrak{q}quad (j=1,\ldots,m).
\textbf{e}_nd{align}
\textbf{e}_nd{lem}
\begin{proof}
With $u$ as above, Cauchy's integral theorem yields
\begin{align*}
{{p}}arop(k,\xi',{{p}}artial_n) u =
\sum_{\alpha=0}^{m-1}\Bp{frac{1}{2{{p}}i i}\int_{gamma} {\sum_{l=0}^{m-\alpha-1} c_l^+(k,\xi')z^{m-\alpha-l}}\mathfrak{M}per_-(k,\xi',z) \e^{ix_nz}\,\dargz}
g_\alpha
= 0.
\textbf{e}_nd{align*}
By the same token
\begin{align*}
B^H_j(\xi',{{p}}artial_n) u(0)
&= \sum_{\alpha,\beta=0}^{m-1}
\Bp{frac{1}{2{{p}}i i}\int_{gamma}frac{\sum_{l=0}^{m-\alpha-1} c_l^+(k,\xi')z^{m-\alpha-l+\beta}}{\sum_{l=0}^m c_l^+(k,\xi')z^{m-l}}\,\dargz}
F_{(j-1)\beta}(k,\xi')g_\alpha.
\textbf{e}_nd{align*}
Since
\begin{align*}
\Bp{frac{1}{2{{p}}i i}\int_{gamma}frac{\sum_{l=0}^{m-\alpha-1} c_l^+(k,\xi')z^{m-\alpha-l+\beta}}{\sum_{l=0}^m c_l^+(k,\xi')z^{m-l}}\,\dargz} = \delta_{\alpha\beta},
\textbf{e}_nd{align*}
which follows by choosing $gamma$ to be a circle of sufficiently large radius $R$ and letting $R\rightarrow\infty$ (see for example \cite[Chapter 2, Proposition 4.1]{LionsMagenes1}), \eqref{HalfspaceDirichletBrdCondsSolutionRepformularLem_FormulaDerivatives} follows.
\textbf{e}_nd{proof}
\begin{lem}\label{HalfspaceCharmatrixTransformationBrdValuesL2Setting}
Let the assumptions be as in Lemma \ref{CharmatrixLem}.
If $u\in\mathscr{W}SRcompl{1,2m}{2}(\text{or }us\times\wspace_+)$ satisfies ${{p}}aropu=0$, then $B^Hu= \mathsf{op}\,\nb{F}\trace_mu$.
\textbf{e}_nd{lem}
\begin{proof}
Apply the partial Fourier transform $\mathscr{F}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}$ to the equation ${{p}}aropu=0$, which in view of Plancherel's theorem implies
${{p}}arop(k,\xi',{{p}}artial_n)\mathscr{F}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}(u)=0$ for almost every $(k,\xi')$.
By Lemma \ref{HalfspaceGeneralBrdCondsSolutionRepformularLem},
$B^H_j(\xi',{{p}}artial_n)\mathscr{F}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}(u)=\sum_{l=0}^{m-1}F_{(j-1)l}(k,\xi')\bp{\trace_m u(0)}_l$. Employing $\mathscr{F}^{-1}_{{{\mathbb T}\times\mathbb{R}^{n-1}}}$, we obtain
$B^Hu= \mathsf{op}\,\nb{F}\trace_mu$.
\textbf{e}_nd{proof}
\begin{thm}\label{generalbrdvalues_sol_thm}
Assume $A^H$ is properly elliptic
and $(A^H,B^H_1,\ldots,B^H_m)$ satisfies Agmon's complementing condition on the two rays $\e^{i\theta}$ with $\theta={{p}}mfrac{{{p}}i}{2}$.
Let $p\in\np{1,\infty}$. For $f\in \LR{p}_\bot(\text{or }us\times\wspace_+)$ and $g\in\tracespacecompl{\kappa}{p}({\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+)$ there is a unique $u\in \mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$ subject to
\begin{align}
\begin{pdeq}\label{generalbrdvalues_sol_prob}
{{p}}artial_tu+A^H u&=f && \tilden \text{or }us\times\wspace_+, \\
B^Hu&=g && \text{on } {\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+.
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align}
Moreover,
\begin{align}\label{generalbrdvalues_sol_est}
\norm{u}_{\mathscr{W}SR{1,2m}{p}(\text{or }us\times\wspace_+)}\leq c (\norm{f}_{\LR{p}(\text{or }us\times\wspace_+)}+\norm{g}_{\tracespacecompl{\kappa}{p}({\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+)}),
\textbf{e}_nd{align}
where $c=c(n,p)>0$.
\textbf{e}_nd{thm}
\begin{proof}
As in the proof of Theorem \ref{dirichlet_sol_thm}, it suffices to show existence of a solution to \eqref{generalbrdvalues_sol_prob} satisfying
\eqref{generalbrdvalues_sol_est} for $f=0$ and $g\in\mathscr{S}compl({{\mathbb T}\times\mathbb{R}^{n-1}})^m$.
Since $F^{-1}$ is smooth away from the origin \eqref{CharmatrixLem_InvSmooth} and has at most polynomial growth \eqref{CharmatrixLem_InvGrowthBound},
it follows that $\mathsf{op}\,\nb{F^{-1}} g\in\mathscr{S}compl({{\mathbb T}\times\mathbb{R}^{n-1}})$.
Consequently, Theorem \ref{dirichlet_sol_thm} yields existence of a solution
$u\in\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)\cap \mathscr{W}SRcompl{1,2m}{2}(\text{or }us\times\wspace_+)$ to
\begin{align}\label{generalbrdvalues_sol_thm_transformedeq}
\begin{pdeq}
{{p}}artial_tu+A^H u&=0 && \tilden \text{or }us\times\wspace_+, \\
\trace_mu&= \mathsf{op}\,\nb{F^{-1}} g && \text{on } {\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+.
\textbf{e}_nd{pdeq}
\textbf{e}_nd{align}
From Lemma \ref{HalfspaceCharmatrixTransformationBrdValuesL2Setting} it follows that $u$ is in fact a solution to \eqref{generalbrdvalues_sol_prob}.
Additionally, Theorem \ref{dirichlet_sol_thm} and Lemma \ref{CharmatrixMultiplierLem} imply
\begin{align*}
\norm{u}_{\mathscr{W}SR{1,2m}{p}(\text{or }us\times\wspace_+)}
\leq \mathbb{C}cn{c}\,\norm{\mathsf{op}\,\nb{F^{-1}} g}_{\tracespacecompl{\iota}{p}({\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+)}
\leq \mathbb{C}cn{c}\,\norm{ g}_{\tracespacecompl{\kappa}{p}({\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+)}.
\textbf{e}_nd{align*}
It remains to show uniqueness. Assume for this purpose that $u\in \mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$ is a solution to \eqref{generalbrdvalues_sol_prob}
with homogeneous right-hand side $f=g=0$. Let $\seqN{g_n}\subset\mathscr{S}compl({{\mathbb T}\times\mathbb{R}^{n-1}})^m$ be a sequence with $\lim_{n\rightarrow\infty}g_n=\trace_mu$ in $\tracespacecompl{\iota}{p}({{\mathbb T}\times\mathbb{R}^{n-1}})$. By virtue of Theorem \ref{dirichlet_sol_thm} there is a $u_n\in\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)\cap \mathscr{W}SRcompl{1,2m}{2}(\text{or }us\times\wspace_+)$ with $\bb{{{p}}artial_t+A^H}u_n=0$ and $\trace_mu_n=g_n$.
Theorem \ref{dirichlet_sol_thm} and Lemma \ref{CharmatrixMultiplierLem} imply that $\lim_{n\rightarrow\infty}u_n=u$ in $\mathscr{W}SRcompl{1,2m}{p}(\text{or }us\times\wspace_+)$ and
thus $B^Hu_n\rightarrowB^Hu=0$ in $\tracespacecompl{\kappa}{p}({{\mathbb T}\times\mathbb{R}^{n-1}})$.
By Lemma \ref{HalfspaceCharmatrixTransformationBrdValuesL2Setting}, $B^Hu_n=\mathsf{op}\,\nb{F}g_n$. Lemma \ref{CharmatrixMultiplierLem} thus
yields $\trace_mu = \lim_{n\rightarrow\infty}g_n=0$. We conclude $u=0$ by Theorem \ref{dirichlet_sol_thm}.
\textbf{e}_nd{proof}
\section{Proof of the Main Theorems}
\begin{proof}[Proof of Theorem \ref{MainThm_HalfSpace}]
As already noted in Section \ref{gr}, the canonical bijection
between $\mathbb{C}Rci\np{{\mathbb T}\tildemes\mathbb{R}^n_+}$ and $\mathbb{C}Rciper\np{\mathbb{R}\tildemes\mathbb{R}^n_+}$ implies that
$\mathscr{W}SR{1,2m}{p}\bp{{\mathbb T}\tildemes\mathbb{R}^n_+}$ and $\mathscr{W}SRper{1,2m}{p}(\mathbb{R}\tildemes\mathbb{R}^n_+)$ are isometrically isomorphic.
It follows that
$\mathscr{W}SRcompl{1,2m}{p}\bp{{\mathbb T}\tildemes\mathbb{R}^n_+}$ and ${{p}}rojcompl\mathscr{W}SRper{1,2m}{p}(\mathbb{R}\tildemes\mathbb{R}^n_+)$
as well as $\tracespacecompl{\kappa}{p}({\mathbb T}\tildemes{{p}}artial\mathbb{R}^n_+)$ and
$\Pi_{j=1}^m {{p}}rojcompl\mathscr{W}SRper{\kappa_j,2m\kappa_j}{p}(\mathbb{R}\tildemes{{p}}artial\wspace_+)$ are
isometrically isomorphic.
Consequently, Theorem \ref{MainThm_HalfSpace} follows from Theorem \ref{generalbrdvalues_sol_thm}.
\textbf{e}_nd{proof}
\begin{proof}[Proof of Theorem \ref{MainThm_TPADN}]
Theorem \ref{MainThm_TPADN} follows from Theorem \ref{MainThm_HalfSpace} by a
standard localization and perturbation argument. One can even apply the argument used
in the elliptic case \cite{ADN1}; see also \cite[Chapter 4.8]{TanabeBook}.
\textbf{e}_nd{proof}
\begin{thebibliography}{10}
\bibitem{adams:sobolevspaces}
R.~A. Adams.
\newblock {\em Sobolev spaces}.
\newblock Academic Press, New York-London, 1975.
\bibitem{Agmon62}
S.~{Agmon}.
\newblock {On the eigenfunctions and on the eigenvalues of general elliptic
boundary value problems.}
\newblock {\em {Commun. Pure Appl. Math.}}, 15:119--147, 1962.
\bibitem{ADN1}
S.~{Agmon}, A.~{Douglis}, and L.~{Nirenberg}.
\newblock {Estimates near the boundary for solutions of elliptic partial
differential equations satisfying general boundary conditions. I.}
\newblock {\em {Commun. Pure Appl. Math.}}, 12:623--727, 1959.
\bibitem{ADN2}
S.~{Agmon}, A.~{Douglis}, and L.~{Nirenberg}.
\newblock {Estimates near the boundary for solutions of elliptic partial
differential equations satisfying general boundary conditions. II.}
\newblock {\em {Commun. Pure Appl. Math.}}, 17:35--92, 1964.
\bibitem{AgranovichVishik1964}
M.~{Agranovich} and M.~{Vishik}.
\newblock {Elliptic problems with a parameter and parabolic problems of general
type.}
\newblock {\em {Russ. Math. Surv.}}, 19(3):53--157, 1964.
\bibitem{ArendtBu2002}
W.~{Arendt} and S.~{Bu}.
\newblock {The operator-valued Marcinkiewicz multiplier theorem and maximal
regularity.}
\newblock {\em {Math. Z.}}, 240(2):311--343, 2002.
\bibitem{Arkeryd67}
L.~{Arkeryd}.
\newblock {On the $L\sp P$ estimates for elliptic boundary problems.}
\newblock {\em {Math. Scand.}}, 19:59--76, 1967.
\bibitem{BergLoefstroemBook}
J.~{Bergh} and J.~{L\"ofstr\"om}.
\newblock {\em {Interpolation spaces. An introduction.}}
\newblock Berlin-Heidelberg-New York: Springer-Verlag, 1976.
\bibitem{Bruhat61}
F.~Bruhat.
\newblock {Distributions sur un groupe localement compact et applications \`a
l'\'etude des repr\'esentations des groupes $p$-adiques.}
\newblock {\em Bull. Soc. Math. Fr.}, 89:43--75, 1961.
\bibitem{dLe65}
K.~de~Leeuw.
\newblock {On $L\sb p$ multipliers.}
\newblock {\em Ann. Math. (2)}, 81:364--379, 1965.
\bibitem{DenkHieberPruess_AMS2003}
R.~{Denk}, M.~{Hieber}, and J.~{Pr\"uss}.
\newblock {${\mathcal R}$-boundedness, Fourier multipliers and problems of
elliptic and parabolic type.}
\newblock {\em {Mem. Am. Math. Soc.}}, 788:114, 2003.
\bibitem{DoreVenni87}
G.~{Dore} and A.~{Venni}.
\newblock {On the closedness of the sum of two closed operators.}
\newblock {\em {Math. Z.}}, 196:189--201, 1987.
\bibitem{EdG77}
R.~Edwards and G.~Gaudry.
\newblock {\em {Littlewood-Paley and multiplier theory.}}
\newblock {Berlin-Heidelberg-New York: Springer-Verlag}, 1977.
\bibitem{GeissertHieberNguyen16}
M.~{Geissert}, M.~{Hieber}, and T.~H. {Nguyen}.
\newblock {A general approach to time periodic incompressible viscous fluid
flow problems.}
\newblock {\em {Arch. Ration. Mech. Anal.}}, 220(3):1095--1118, 2016.
\bibitem{HessBook91}
P.~{Hess}.
\newblock {\em {Periodic-parabolic boundary value problems and positivity.}}
\newblock Harlow: Longman Scientific \& Technical; New York: John Wiley \&
  Sons, Inc., 1991.
\bibitem{KyedSauer17}
M.~{Kyed} and J.~{Sauer}.
\newblock {A method for obtaining time-periodic $L^{p}$ estimates.}
\newblock {\em {J. Differ. Equations}}, 262(1):633--652, 2017.
\bibitem{Lieberman99}
G.~M. {Lieberman}.
\newblock {Time-periodic solutions of linear parabolic differential equations.}
\newblock {\em {Commun. Partial Differ. Equations}}, 24(3-4):631--663, 1999.
\bibitem{LionsMagenes1}
J.~{Lions} and E.~{Magenes}.
\newblock {\em {Non-homogeneous boundary value problems and applications. Vol.
I. Translated from the French by P. Kenneth.}}
\newblock Springer-Verlag, Berlin-Heidelberg-New York, first edition, 1972.
\bibitem{LunardiBookSemigroups}
A.~{Lunardi}.
\newblock {\em {Analytic semigroups and optimal regularity in parabolic
problems. Reprint of the 1995 hardback ed.}}
\newblock Basel: Birkh\"auser, reprint of the 1995 hardback ed. edition, 2013.
\bibitem{MaS17}
Y.~Maekawa and J.~Sauer.
\newblock Maximal regularity of the time-periodic Stokes operator on unbounded
and bounded domains.
\newblock {\em J. Math. Soc. Japan}, in press.
\bibitem{Peetre61}
J.~{Peetre}.
\newblock {On estimating the solutions of hypoelliptic differential equations
near the plane boundary.}
\newblock {\em {Math. Scand.}}, 9:337--351, 1961.
\bibitem{Stein70}
E.~M. Stein.
\newblock {\em {Singular integrals and differentiability properties of
functions.}}
\newblock {Princeton, N.J.: Princeton University Press}, 1970.
\bibitem{TanabeBook}
H.~{Tanabe}.
\newblock {\em {Functional analytic methods for partial differential
equations.}}
\newblock New York, NY: Marcel Dekker, 1997.
\bibitem{Triebel_InterpolationTheory1978}
H.~{Triebel}.
\newblock {\em {Interpolation theory. Function spaces. Differential
operators.}}
\newblock {Berlin: Deutscher Verlag des Wissenschaften}, 1978.
\bibitem{VejvodaBook82}
O.~{Vejvoda}.
\newblock {\em {Partial differential equations: time-periodic solutions.}}
\newblock {The Hague - Boston - London: Martinus Nijhoff Publishers; Prague:
SNTL, Publishers of Technical Literature.}, 1982.
\bibitem{Yos80}
K.~Yosida.
\newblock {\em Functional analysis}, volume 123 of {\em Grundlehren der
Mathematischen Wissenschaften [Fundamental Principles of Mathematical
Sciences]}.
\newblock Springer-Verlag, Berlin-New York, sixth edition, 1980.
\textbf{e}_nd{thebibliography}
\textbf{e}_nd{document}
\begin{document}
\begin{abstract}
We consider ideals in a polynomial ring that are generated by
regular sequences of homogeneous polynomials and are stable under
the action of the symmetric group permuting the variables. In
previous work, we determined the possible isomorphism types for
these ideals. Following up on that work, we now analyze the possible
degrees of the elements in such regular sequences. For each case of
our classification, we provide some criteria guaranteeing the
existence of regular sequences in certain degrees.
\end{abstract}
\title{Degrees of regular sequences with a symmetric group action}
\section{Introduction}
Consider the graded polynomial ring $R = \mathbb{C} [x_1,x_2,\dots,x_n]$. A
set of $n$ homogeneous polynomials $f_1,f_2,\dots,f_n$ is a maximal
regular sequence in $R$ if the only common zero of these $n$
polynomials is the point $(0,0,\dots,0)$. A sequence
$g_1,g_2,\dots,g_t$ is a regular sequence in $R$ if it can be extended
to a maximal regular sequence in $R$.
We suppose that $G$ is a group acting linearly on $R$ via an action
which preserves the grading. The subring
$R^G := \{f \in R : \forall \sigma \in G, \sigma\cdot f = f\}$ is
called the ring of invariants. There has been some interest in
determining the degrees $(d_1,d_2,\dots,d_t)$ for which there exists a
regular sequence in $R^G$ with $\deg (f_i) = d_i$. Dixmier
\cite{Dixmier} made a conjecture concerning this question for the
classical case of the action of $\text{SL}(2,\mathbb{C})$ on an irreducible
representation. This conjecture has attracted some attention. See
for example \cite{L-P,DD,BBS}. Recently, a few authors have taken up
this question for the natural action of the symmetric group on $R$.
See \cite{CKW,Chen,K-M}.
We consider a more general question. Our goal is to determine the
degrees of a maximal regular sequence $f_1,f_2,\dots,f_n$ in $R$ such
that the ideal $I:=(f_1,f_2,\dots,f_n)$ is stable under the group
action. This is equivalent to the artinian quotient algebra $R/I$
inheriting the action of the group.
We will also restrict our attention to the natural action of the
symmetric group $\mathcal{S}Gp$ permuting the variables. In our earlier
paper \cite{SCI1}, it is shown that there are four possible
representation types for the action of $\mathcal{S}Gp$ on $I$ (the notation
follows that of \cite{Sagan}):
\begin{enumerate}
\item the trivial representation $S^{(n)}$, given by all $f_i$ being
symmetric polynomials;
\item the alternating representation $S^{(1^n)}$, given by one
alternating polynomial, together with up to $n-1$ symmetric
polynomials;
\item the standard representation $S^{(n-1,1)}$, possibly together
with one symmetric polynomial;
\item the representation $S^{(2,2)}$, together with up to two
symmetric polynomials (this only occurs when $n=4$).
\end{enumerate}
Our earlier paper shows examples of regular sequences corresponding to
all four cases, but does not address the question of how ``often''
such regular sequences can appear or, more precisely, in what degrees
they can be realized. Here we give explicit answers showing in which
degrees it is possible to find a regular sequence for each of the
above four representation types for $n \leqslant 4$. We also derive a
number of results for general values of $n$.
Note also that our results relating to the first case above actually
apply to the degrees of regular sequences of homogeneous polynomials
in the polynomial ring $\mathbb{C} [y_1,\ldots,y_n]$, with the non-standard
grading given by $\deg (y_i)=i$. This case corresponds geometrically
to the homogeneous coordinate ring of a weighted projective space.
\section{Regular sequences of symmetric polynomials}
We consider the polynomial ring $R = \mathbb{C} [x_1,x_2,\dots,x_n]$ in $n$
indeterminates equipped with the standard grading. The symmetric
group $\mathcal{S}Gp$ acts naturally on $R$ by permuting the variables. It is
well known that the invariant subring $R^\mathcal{S}Gp$ can be identified
with the subalgebra $\mathbb{C} [e_1,e_2,\dots,e_n]$ generated by the
elementary symmetric polynomials \cite{Goodman}. In particular,
$R^\mathcal{S}Gp$ is a polynomial ring equipped with the non-standard grading
$\deg (e_i) = i$.
\subsection{Degree sequences}
We are concerned with the degrees of elements of homogeneous regular sequences in $R^\mathcal{S}Gp$.
\begin{definition}
Let $(d_1,d_2,\dots,d_n)$ be an (unordered) sequence of $n$ positive integers. If there exists a homogeneous regular
sequence $f_1,f_2,\dots,f_n \in R^\mathcal{S}Gp$ with $\deg(f_i)=d_i$, then we say that $(d_1,d_2,\dots,d_n)$ is a {\em regular degree sequence}.
\end{definition}
\begin{prop}\label{beta condition}
Suppose $(d_1,d_2,\dots,d_n)$ is a regular degree sequence. For
$i=1,2,\dots,n$ we define
$\beta_i := \#\{1\leqslant j \leqslant n : i \mid d_j\}$. Then
\begin{equation}
\beta_i \geqslant \bigg\lfloor \frac n i \bigg\rfloor \quad \text{for all } i = 1,2,\dots,n \tag{\raisebox{-2pt}{*}}
\end{equation}
In particular, $n! \mid \prod_{j=1}^n d_j$.
\end{prop}
\begin{proof}
If $ (d_1,d_2,\dots,d_n)$ is a regular degree sequence, then there
exists a homogeneous regular sequence $f_1,f_2,\dots,f_n$ in
$R^\mathcal{S}Gp$ with $\deg(f_i)=d_i$. The graded subring
$A=\mathbb{C}[f_1,f_2,\dots,f_n]$ is a polynomial ring and $R^\mathcal{S}Gp$ is a free
$A$-module:
$R^\mathcal{S}Gp \cong \oplus_{\gamma \in \Gamma}\,A \!\cdot\!\gamma$ for some
set of homogeneous elements $\Gamma \subset R^\mathcal{S}Gp$ \cite[Lemma
6.4.13]{Bruns}. Thus the Hilbert series of $R^\mathcal{S}Gp$ and $A$ are related
by
$$\mathcal{H}(R^\mathcal{S}Gp,t) = \sum_{\gamma \in \Gamma} t^{\deg(\gamma)} \mathcal{H}(A,t).$$
Since $\mathcal{H}(R^\mathcal{S}Gp,t) = \prod_{i=1}^n (1-t^i)^{-1}$ and
$\mathcal{H}(A,t) = \prod_{i=1}^n (1-t^{d_i})^{-1}$, we see that
$$\prod_{i=1}^n \frac{1-t^{d_i}}{1-t^i} = \sum_{\gamma\in\Gamma} t^{\deg(\gamma)}$$
is a non-negative integer polynomial.
Working over $\mathbb{Q}$, all the irreducible factors of $(1-t^d)$ are
cyclotomic polynomials. Specifically
$(1-t^d) = \prod_{i \mid d}\Phi_i(t)$, where $\Phi_i$ denotes the
$i^\text{th}$ cyclotomic polynomial. Since
$\#\{1 \leqslant j \leqslant n : i \mid j \} = \lfloor n / i \rfloor$, we see
that $\prod_{i=1}^n \frac{1-t^{d_i}}{1-t^i}$ is an integer
polynomial if and only if
$ \beta_i \geqslant \lfloor n / i \rfloor \quad \text{for all } i =
1,2,\dots,n$.
To prove the final assertion, we cancel the factors of $(1-t)$ from
the numerator and denominator. Thus
$$\prod_{i=1}^n \frac{1+t+t^2+\dots+t^{d_i}}{1+t+t^2+\dots + t^i} = \sum_{\gamma\in\Gamma} t^{\deg(\gamma)}\ .$$
Evaluating at $t=1$ we see that
$\left(\prod_{i=1}^n d_i\right)/n! = |\Gamma|$, the rank of
$R^\mathcal{S}Gp$ as a free $A$-module; in particular, this is a positive integer, so $n! \mid \prod_{i=1}^n d_i$.
\end{proof}
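As a sanity check, both condition $(*)$ and the Hilbert-series criterion in the proof are easy to test by machine. The following Python sketch (an illustration only; the helper names are ours) expands the quotient of Hilbert series with SymPy and tests whether it is a polynomial with non-negative integer coefficients.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

def hilbert_quotient(degrees):
    """Expand prod_i (1 - t^d_i) / prod_{i=1}^n (1 - t^i)."""
    n = len(degrees)
    num = sp.Mul(*[1 - t**d for d in degrees])
    den = sp.Mul(*[1 - t**i for i in range(1, n + 1)])
    return sp.cancel(num / den)

def satisfies_star(degrees):
    """Condition (*): beta_i >= floor(n/i) for i = 1, ..., n."""
    n = len(degrees)
    return all(sum(1 for d in degrees if d % i == 0) >= n // i
               for i in range(1, n + 1))

for degs in [(1, 5, 6, 4), (2, 5, 2, 12), (1, 2, 3, 5)]:
    q = hilbert_quotient(degs)
    ok = q.is_polynomial(t) and all(c >= 0 for c in sp.Poly(q, t).all_coeffs())
    print(degs, satisfies_star(degs), ok)
\end{verbatim}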
\begin{remark}
Our inequality (*) was first observed by Conca, Krattenthaler and
Watanabe for regular sequences of power sums \cite[Lemma 2.6
(2)]{CKW}. The three authors also showed that the product of the
$d_i$ is divisible by $n!$ \cite[Lemma 2.8]{CKW}. However, the stronger restriction that
$$\prod_{i=1}^n \frac{1-t^{d_i}}{1-t^i} = \sum_{\gamma\in\Gamma} t^{\deg(\gamma)}$$
must be a non-negative integer polynomial seems not to have been observed before.
\end{remark}
Suppose $(d_1,d_2,\dots,d_n)$ is a regular degree sequence. Since
$\oplus_{d \leqslant i} R^\mathcal{S}Gp_d \subset \mathbb{C}[e_1,e_2,\dots,e_i]$ and
hence cannot contain a regular sequence with more than $i$ terms, we
deduce that $(d_1,d_2,\dots,d_n)$ must also satisfy the following
condition
\begin{equation*}
\#\{j : d_j \leqslant i \} \leqslant i \quad \text{for all } i = 1,2,\dots n\ . \tag{\dag}
\end{equation*}
\begin{definition}
Let $(d_1,d_2,\dots,d_n)$ be an (unordered) sequence of $n$ positive
integers. We say that $(d_1,d_2,\dots,d_n)$ is {\em permissible} if
it satisfies the two conditions $(*)$ and $(\dag)$. Thus every
regular degree sequence is permissible.
\end{definition}
Note that if there exists a {\em matching}, i.e., a permutation
$\pi \in \mathcal{S}Gp$ such that $i$ divides $d_{\pi(i)}$ for all
$i=1,2,\dots,n$ then $(d_1,d_2,\dots,d_n)$ is a regular degree
sequence as is shown by the regular sequence of polynomials
$(e_i)^{d_{\pi(i)}/i}$ for $i = 1,2, \dots, n$.
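For small $n$ the existence of such a matching can be checked by brute force; the following Python sketch (illustrative only) simply tries all permutations.
\begin{verbatim}
from itertools import permutations

def has_matching(degrees):
    """True if some reordering (d'_1, ..., d'_n) of degrees satisfies i | d'_i."""
    n = len(degrees)
    return any(all(d % i == 0 for i, d in zip(range(1, n + 1), perm))
               for perm in permutations(degrees))

print(has_matching((1, 3, 2, 4)))   # True:  1|1, 2|2, 3|3, 4|4 after reordering
print(has_matching((1, 5, 6, 4)))   # False: no matching, yet regular (cf. Table 1)
\end{verbatim}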
\subsection{Regular degree sequences for $n \leqslant 4$}
\begin{theorem}
\label{srs}
\hspace{2em}
\begin{enumerate}
\item \label{n is 2} For $n=2$, a degree sequence is regular if and
only if it is permissible if and only if it satisfies $(*)$.
\item \label{n is 3} For $n=3$, a degree sequence is regular if and
only if it is permissible.
\item \label{n is 4} For $n=4$, every permissible degree sequence
except $(1,2,5,12\delta), (2,2,5,12\delta)$ and $(5,2,5,12\delta)$
is regular.
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{n is 2}) If $(d_1,d_2)$ satisfies $(*)$ then at least one of
$d_1$ or $d_2$ is even and so we have a matching.
(\ref{n is 3}) Let $n=3$ and suppose that $(d_1,d_2,d_3)$ is
permissible but has no matching. Then, without loss of generality,
6 divides $d_3$ while $d_1$ and $d_2$ are both odd numbers not
divisible by 3 with $d_2 \geqslant d_1$. Now condition $(\dag)$ implies
that $d_2 \geqslant 2$; since $d_2$ is odd and not divisible by 3, in fact $d_2 \geqslant 5$. Thus we have a regular
sequence $e_1^{d_1}, e_3 e_2^{(d_2-3)/2}, (e_2^3 + e_3^2)^{d_3/6}$
with degrees $(d_1,d_2,d_3)$.
(\ref{n is 4}) Let $n=4$ and suppose that $(d_1,d_2,d_3,d_4)$ is
permissible but has no matching. Note that
$R^\mathcal{S}Gp_1 \oplus R^\mathcal{S}Gp_2 \oplus R^\mathcal{S}Gp_5$ is contained in the
ideal generated by $e_1$ and $e_2$. This implies that a regular
degree sequence must satisfy $\#\{i : d_i \in \{1,2,5\} \} \leqslant 2$.
This shows that the three permissible degree sequences
$(1,5,2,12\delta), (2,5,2,12\delta), (5,5,2,12\delta)$ are not
regular.
Condition $(*)$ implies that two of the $d_i$ are even, one is
divisible by 3 and one is divisible by 4. Without loss of
generality, $d_3$ and $d_4$ are both even, $d_4 = 4\delta$ is
divisible by 4 and $d_2 \geqslant d_1$. Since there is no matching,
neither $d_1$ nor $d_2$ is divisible by 3. Thus either $d_3$ is a
multiple of 6 or $d_4$ is a multiple of 12.
First we consider the case where $d_3 = 6 \beta$ is a multiple of 6.
Since there is no matching, both $d_1$ and $d_2$ are odd integers not
divisible by 3. Thus $(\dag)$ implies that $d_2 \geqslant 2$, and hence $d_2 \geqslant 5$. Therefore
$e_1^{d_1}, e_3 e_2^{(d_2-3)/2}, (e_2^3 + e_3^2)^{\beta},
e_4^{\delta}$ is a regular sequence of the required degrees.
Thus we may suppose that $d_4 = 12 \delta$ is a multiple of 12. Now
we adjust our labelling as follows. We suppose that $d_3$ is the
largest of those elements of $\{d_1,d_2,d_3\}$ which are even.
Further we assume that $d_2 \geqslant d_1$. Furthermore, since there is
no matching, 3 divides neither $d_1$ nor $d_2$.
Since $d_2 \geqslant 2$ we may write $d_2 = 2p + 3q$ where $p$ and $q$
are non-negative integers. Suppose first that $d_3 \geqslant 4$ and
define
$$f :=
\begin{cases}
e_4^{d_3/4}, & \mbox{if } d_3 \equiv 0 \pmod 4;\\
e_4^{(d_3-6)/4}(e_2^3+ e_3^2), & \mbox{if } d_3 \equiv 2 \pmod 4.
\end{cases}
$$
Then $e_1^{d_1}, e_2^p e_3^q, f, (e_2^6 + e_3^4 + e_4^3)^{\delta}$
is a regular sequence of degrees $(d_1,d_2,d_3,d_4)$.
Finally we suppose that $d_3 = 2$. Then either $d_2=2$ or $d_2$ is
odd. But we have seen that $(1,2,2,12\delta)$ and
$(2,2,2,12\delta)$ are not regular degree sequences and thus $d_2$
must be odd. Since 3 does not divide $d_2$, we have $d_2\geqslant 5$.
If $d_2 = 5$ then $d_1 \in \{1,2,5\}$ which is again not possible
since $(1,5,2,12\delta)$, $(2,5,2,12\delta)$ and $(5,5,2,12\delta)$
are not regular degree sequences. Therefore $d_2 \geqslant 7$ if
$d_3=2$. Thus we may write $d_2 =3 p + 4q$. Then
$e_1^{d_1}, e_3^p e_4^q, e_2, (e_3^4+e_4^3)^{\delta}$ is a regular
sequence of the required degrees.
\end{proof}
Table \ref{tab:symmetric} summarizes the regular sequences we have
found when $n=4$.
The first row corresponds to a matching.
\begin{table}[htb]
\centering
$\begin{array}{|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\rm \bf Degrees} & \multicolumn{4}{|c|}{\rm\bf Symmetric\ Polynomials}\\
\hline
\multicolumn{1}{|c|}{d_1} & \multicolumn{1}{|c|}{d_2} & \multicolumn{1}{|c|}{d_3} & \multicolumn{1}{|c|}{d_4}
& \multicolumn{1}{|c|}{f_1} & \multicolumn{1}{|c|}{f_2} & \multicolumn{1}{|c|}{f_3} & \multicolumn{1}{|c|}{f_4}\\ \hline
d_1& 3\beta & 2\gamma & 4\delta & e_1^{d_1} & e_3^{\beta} & e_2^{\gamma} & e_4^{\delta}\\
d_1 & d_2\geqslant 5 &6\gamma &4\delta& e_1^{d_1} & e_3 e_2^{(d_2-3)/2} & (e_2^3 + e_3^2)^{\gamma}& e_4^{\delta}\\
d_1 & d_2 \geqslant 2 & 4\beta & 12\delta & e_1^{d_1} & e_2^p e_3^q & e_4^\beta & (e_2^6+e_3^4+e_4^3)^{\delta}\\
d_1 & d_2 \geqslant 2 & 4\beta+2 \geqslant 6 & 12\delta & e_1^{d_1} & e_2^p e_3^q & (e_2^3+e_3^2)e_4^{\beta-1} & (e_2^6+e_3^4+e_4^3)^{\delta}\\
d_1 & d_2 \geqslant 7 & 2 & 12\delta & e_1^{d_1} & e_3^p e_4^q & e_2 & (e_3^4+e_4^3)^{\delta}\\
\hline
\end{array}$
\caption{Regular sequences of symmetric polynomials for $n=4$}\label{tab:symmetric}
\end{table}
Note that we have in fact proved the following result.
\begin{cor}\label{improved cor}
Suppose that $d_2,d_3,d_4$ are three positive integers such that
$4\mid d_4$, $d_3$ is even, and $3\mid d_2 d_3 d_4$. Then there
exist three symmetric polynomials $f_2,f_3,f_4$ (as given in
Table~\ref{tab:symmetric}) of degrees $d_2,d_3,d_4$ respectively such that
$e_1,f_2,f_3,f_4$ is a regular sequence.
\end{cor}
\begin{remark}
Note that the degree sequence $(2,5,2,12)$ (which is not regular)
has the property that
$$ \frac{(1-t^2)(1-t^5)(1-t^2)(1-t^{12})}{(1-t)(1-t^2)(1-t^3)(1-t^4)} = 1 + t + t^3 + 2t^4 + 2t^7 +t^8 + t^{10} + t^{11}$$
is a non-negative integer polynomial.
\end{remark}
For larger values of $n$ little is known. The following statement was
proved in \cite[Prop.~2.9]{CKW} using sequences of power sums and
homogeneous symmetric polynomials.
\begin{prop}\label{consecutive}
For every positive integer $a$ the sequence of consecutive degrees
$(a, a+1, a+2, \dots, a+n-1)$ is a regular degree sequence.
\end{prop}
\subsection{Regular sequences with an alternating polynomial}
\label{sec:alt-rep}
A polynomial $f\in R$ is said to be \emph{alternating} if, for all
$\sigma \in \mathcal{S}Gp$, $\sigma f = \pm f$, depending on the sign of
$\sigma$. As an example, the Vandermonde determinant
$$\Delta := \det
\begin{bmatrix}
1 &x_1 &x_1^2 &\dots &x_1^{n-1}\\
1 &x_2 &x_2^2 &\dots &x_2^{n-1}\\
\vdots &\vdots &\vdots & &\vdots\\
1 &x_n &x_n^2 &\dots &x_n^{n-1}
\end{bmatrix} = \prod_{1\leqslant i<j \leqslant n} (x_j-x_i) \in R$$ is clearly
alternating. In fact, every homogeneous alternating polynomial in $R$
is divisible by $\Delta$, the quotient being a homogeneous symmetric
polynomial.
As noted in \cite[Prop.~2.5]{SCI1}, there exist homogeneous regular
sequences $f_1,f_2,\dots,f_t,g\Delta$ in $R$ with $f_1,f_2,\dots,f_t$
and $g$ symmetric polynomials. These sequences are closely related to
sequences of symmetric polynomials.
\begin{lemma}\label{lem:reg_seq_factors}
Let $f_1,f_2,\dots,f_t,g,h\in R$ be homogeneous polynomials. Then
the sequence $f_1,f_2,\dots,f_t,g h$ is regular if and only if both
$f_1,f_2,\dots,f_t,g$ and $f_1,f_2,\dots,f_t,h$ are regular.
\end{lemma}
\begin{proof}
Suppose $f_1,f_2,\dots,f_t$ form a regular sequence. Then $g h$ is
not a zero-divisor modulo $(f_1,f_2,\dots,f_t)$ if and only if both
$g$ and $h$ are not zero-divisors modulo $(f_1,f_2,\dots,f_t)$.
\end{proof}
The following is an immediate consequence of Lemma
\ref{lem:reg_seq_factors}.
\begin{prop}\label{reg_seq_alt}
Let $f_1,f_2,\dots,f_t,g\in R$ be homogeneous symmetric polynomials. The
sequence $f_1,f_2,\dots,f_t,g\Delta$ is regular if and only if both
$f_1,f_2,\dots,f_t,g$ and $f_1,f_2,\dots,f_t,\Delta$ are regular.
\end{prop}
Proposition \ref{reg_seq_alt} allows us to rule out the existence of regular
sequences of certain degrees that contain an alternating polynomial.
\begin{example}
For $n=4$, $\Delta$ has degree 6. By Theorem \ref{srs}
(\ref{n is 4}), there is no regular sequence of homogeneous
symmetric polynomials $f_1,f_2,f_3,g$ of degrees
$1,2,5,12\delta$. Therefore, Proposition \ref{reg_seq_alt} implies
there is no regular sequence $f_1,f_2,f_3,g\Delta$ of degrees
$1,2,5,12\delta + 6$.
\end{example}
\begin{remark}
The polynomial $\Delta^{2k}$ is symmetric for all positive integers
$k$. Moreover, the sequence $f_1,f_2,\dots,f_t,\Delta$ is regular if
and only if $f_1,f_2,\dots,f_t,\Delta^{2k}$ is regular
(cf. \cite[Cor.~17.8 a]{Eisenbud}). As a consequence, we can exclude
the existence of regular sequences in certain degrees. For example,
there is no regular sequence of homogeneous polynomials
$f_1,f_2,f_3,\Delta$ with $f_1,f_2,f_3$ symmetric of degrees $1,2,5$
because $f_1,f_2,f_3,\Delta^2$ would violate Theorem
\ref{srs} (\ref{n is 4}).
\end{remark}
\section{Regular sequences and the standard representation}
\label{sec:std-rep}
We begin this section by recalling some basic facts about the
representation theory of the symmetric group $\mathcal{S}Gp$ over a field of
characteristic zero. We refer the reader to \cite[Ch.~2]{Sagan} for
the details.
We write $\lambda \vdash a$ to denote that
$\lambda=(\lambda_1,\lambda_2,\dots,\lambda_r)$ is a partition of the
integer $a$,
i.e., that $\lambda_1 + \lambda_2 + \dots + \lambda_r=a$ and
$\lambda_1 \geqslant \lambda_2 \geqslant \dots \geqslant \lambda_r >
0$.
The irreducible representations of $\mathcal{S}Gp$ are in bijection with the
partitions of $n$; for $\lambda \vdash n$, we denote by $S^\lambda$
the corresponding irreducible. Every finite dimensional
representation of $\mathcal{S}Gp$ decomposes into a direct sum of copies of
the $S^\lambda$.
The irreducible representation $S^{(n-1,1)}$ of $\mathcal{S}Gp$ is often
called the \emph{standard representation}. It can be described as the
$\mathcal{S}Gp$-stable complement of the subspace spanned by $e_1$ inside the
representation $R_1 = \langle x_1,x_2,\dots,x_n \rangle$. The
polynomials $x_1-x_n,x_2-x_n,\dots,x_{n-1}-x_n$ give an explicit basis
of the complement.
Let $\mathfrak{m} = (x_1,x_2,\dots,x_n)$ be the irrelevant maximal
ideal of $R$. In this section, we study homogeneous regular sequences
$f_1,f_2,\dots,f_t \in R$ such that the ideal $I=(f_1,f_2,\dots,f_t)$
is stable under the action of $\mathcal{S}Gp$ and $I / \mathfrak{m}I$
contains a copy of the standard representation. As shown in
\cite[Prop.~2.5]{SCI1}, there are two possibilities:
$I / \mathfrak{m}I \cong S^{(n-1,1)}$ or
$I / \mathfrak{m}I \cong S^{(n-1,1)} \oplus S^{(n)}$, where $S^{(n)}$
is the one-dimensional trivial representation.
\subsection{Regular sequences of type $S^{(n-1,1)}$}\label{reduced rep}
Here we prove the existence of regular sequences of type $S^{(n-1,1)}$
in every positive degree.
Let $\mathcal{V}_d \subset \mathbb{A}^n$ denote the affine variety cut out by the
polynomials $x_1^d-x_n^d,x_2^d-x_n^d,\dots,x_{n-1}^d-x_n^d$ and the equation $x_n=1$; i.e.,
$$\mathcal{V}_d = \{(z_1,z_2,\dots,z_n) \in \mathbb{A}^n : z_i^d=1, z_n=1\}.$$
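For small $n$ and $d$ the set $\mathcal{V}_d$ is finite and can be enumerated directly. The small helper below (illustrative only, working with floating-point roots of unity) is convenient for the numerical experiments sketched later in this section.
\begin{verbatim}
import cmath
from itertools import product

def roots_of_unity(d):
    return [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]

def V(n, d):
    """All points (z_1, ..., z_n) with z_i^d = 1 for every i and z_n = 1."""
    return [zs + (1.0 + 0.0j,) for zs in product(roots_of_unity(d), repeat=n - 1)]

print(len(V(3, 4)))   # 4^2 = 16 points
\end{verbatim}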
\begin{theorem}
\label{red_rep}
Let $d$ be a positive integer. The polynomials
$x_1^d - x_n^d,x_2^d - x_n^d,\dots,x_{n-1}^d - x_n^d$ form a regular
sequence of type $S^{(n-1,1)}$.
\end{theorem}
\begin{proof}
The polynomials in question form a basis of the $\mathcal{S}Gp$-stable
complement of the one-dimensional invariant subspace spanned by
$x_1^d + x_2^d + \dots + x_n^d$ inside
$\langle x_1^d, x_2^d, \dots, x_n^d\rangle$. It is clear from the
comments at the beginning of the section that this complement is
isomorphic to $S^{(n-1,1)}$.
To prove $x_1^d - x_n^d,x_2^d - x_n^d,\dots,x_{n-1}^d - x_n^d$ form
a regular sequence, we extend it by adding the polynomial $x_n^d$.
It is clear that the two ideals
$(x_1^d - x_n^d,x_2^d - x_n^d,\dots,x_{n-1}^d - x_n^d,x_n^d)$ and
$(x_1^d ,x_2^d,\dots,x_{n-1}^d ,x_n^d)$ are equal and that the
latter is generated by a regular sequence. Thus the extended
sequence, and so also the original, is a regular sequence.
\end{proof}
\subsection{Regular sequences of type $S^{(n-1,1)} \oplus S^{(n)}$}
Let $I\subseteq R$ be an $\mathcal{S}Gp$-stable homogeneous ideal such that
$I/\mathfrak{m}I \cong S^{(n-1,1)} \oplus S^{(n)}$. Then $I$ admits a
generating set $g_1,g_2,\dots,g_{n-1},f$ such that:
\begin{itemize}
\item $\deg (g_i) = d$ for $i=1,2,\dots,n-1$ and the vector space
spanned by $g_1,g_2,\dots,g_{n-1}$ is a representation of $\mathcal{S}Gp$
isomorphic to $S^{(n-1,1)}$;
\item $\deg (f) = a$ and $f \in R^\mathcal{S}Gp$.
\end{itemize}
We are interested in understanding the possible choices of degrees $d$
and $a$ for which such an ideal $I$ can be generated by a regular
sequence. For simplicity, we restrict to the case
$g_i = x_i^d - x_n^d$ for $i=1,2,\dots,n-1$. This is the instance of
regular sequence described in Theorem \ref{red_rep}. Therefore our
main question becomes: when can a symmetric polynomial $f$ of degree
$a$ be chosen so that
$x_1^d - x_n^d, x_2^d - x_n^d, \dots, x_{n-1}^d - x_n^d, f$ is a
regular sequence?
\begin{definition}\label{good_bad}
Let $n,d,a$ be three positive integers. We say the triple $(n,d,a)$
is {\em good} if there exists $f \in R_a^{\mathcal{S}Gp}$ such that
$x_1^d-x_n^d,x_2^d-x_n^d,\dots,x_{n-1}^d-x_n^d,f$ is a regular
sequence. Otherwise $(n,d,a)$ is called {\em bad}.
\end{definition}
\begin{remark}
\label{rem:M2_rem}
Clearly, if $(n,d,a)$ is good, then there exists a regular sequence
of type $S^{(n-1,1)} \oplus S^{(n)}$ with $S^{(n-1,1)}$ in degree
$d$ and $S^{(n)}$ in degree $a$. However, the converse is not true
in general. For example, the triple $(5,6,1)$ is bad{} because
$x_1^6-x_5^6,x_2^6-x_5^6,x_3^6-x_5^6,x_4^6-x_5^6,e_1$ is not a
regular sequence. However, if we set
$g_i = \sum_{j=2}^5 e_j (x_i^{6-j} - x_5^{6-j})$ for $i=1,2,3,4$,
then $g_1,g_2,g_3,g_4,e_1$ is a regular sequence. The assertions
about these sequences of polynomials can be verified computationally
using the software Macaulay2 \cite{M2}, and the code provided in
Appendix \ref{sec:macaulay2-code}.
\end{remark}
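Independently of the Macaulay2 computation, the first assertion of the remark can be certified numerically: the point exhibited below lies in $\mathcal{V}_6$ (for $n=5$) and annihilates $e_1$, and every symmetric form of degree $1$ is a scalar multiple of $e_1$, so the triple $(5,6,1)$ is indeed bad. The sketch is an illustration only.
\begin{verbatim}
import cmath

zeta3 = cmath.exp(2j * cmath.pi / 3)
Q = (-1.0, zeta3, zeta3**2, 1.0, 1.0)      # a point of V_6 with last coordinate 1

print(all(abs(z**6 - 1) < 1e-12 for z in Q))   # True: all coordinates are 6th roots of unity
print(abs(sum(Q)) < 1e-12)                     # True: e_1(Q) = 0
\end{verbatim}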
Observe that, if $f \in R$ is homogeneous, then
$x_1^d-x_n^d,x_2^d-x_n^d,\dots,x_{n-1}^d-x_n^d,f$ is a regular
sequence if and only if $f$ does not vanish on $\mathcal{V}_d$.
For a positive integer $a$, the \emph{power sum}
$\mathcal{P}_a = x_1^a + x_2^a + \dots + x_n^a$ is a homogeneous symmetric
polynomial of degree $a$. Furthermore, given a partition
$\lambda = (\lambda_1, \lambda_2, \dots, \lambda_r)$ of $a$, we write
$\mathcal{P}_\lambda$ for the symmetric polynomial
$\prod_{t=1}^r \mathcal{P}_{\lambda_t}$ of degree $a$. The set of
$\mathcal{P}_\lambda$ with
$\lambda = (\lambda_1, \lambda_2, \dots, \lambda_r)$ a partition of
$a$ whose parts $\lambda_i$ do not exceed $n$ is a basis of
$R^\mathcal{S}Gp_a$ as a complex vector space
(cf. \cite[Prop.~7.8.2]{Stanley}).
\begin{lemma}\label{power sums suffice}
The triple $(n,d,a)$ is bad{} if and only if there exists a point
$Q \in \mathcal{V}_d$ such that $\mathcal{P}_\lambda(Q)=0$ for every partition
$\lambda \vdash a$.
\end{lemma}
\begin{proof}
If such a point $Q$ exists, then it is clear that $(n,d,a)$ is bad.
Suppose then that $(n,d,a)$ is bad{}. Enumerate the partitions
$\lambda \vdash a$ whose parts do not exceed $n$ and denote them by
$\lambda^{(1)}, \lambda^{(2)}, \dots, \lambda^{(t)}$. Introduce the
homogeneous symmetric polynomial
$$f := \sum_{i=1}^t \pi^i \mathcal{P}_{\lambda^{(i)}}$$
of degree $a$. Since $(n,d,a)$ is bad{}, there exists $Q\in \mathcal{V}_d$
such that
$$0 = f(Q) = \sum_{i=1}^t \pi^i \mathcal{P}_{\lambda^{(i)}} (Q).$$
Since the coordinates of $Q$ are algebraic numbers,
$\mathcal{P}_{\lambda^{(i)}} (Q)$ is algebraic for all $i=1,2,\dots,t$.
Then $f(Q)=0$ implies $\mathcal{P}_{\lambda^{(i)}} (Q) = 0$ for all
$i=1,2,\dots,t$ because $\pi$ is transcendental, so the powers $\pi,\pi^2,\dots,\pi^t$ are linearly independent over the field of algebraic numbers. The result follows.
\end{proof}
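For small parameters, Lemma \ref{power sums suffice} translates directly into a brute-force test: enumerate $\mathcal{V}_d$ and look for a point annihilating every $\mathcal{P}_\lambda$ with $\lambda \vdash a$. The following Python sketch (illustrative only; the tolerance and helper names are ours) implements this test with floating-point roots of unity.
\begin{verbatim}
import cmath
from functools import reduce
from itertools import product
from operator import mul

def partitions_of(a, max_part=None):
    """Yield the partitions of a as non-increasing tuples."""
    max_part = a if max_part is None else max_part
    if a == 0:
        yield ()
        return
    for first in range(min(a, max_part), 0, -1):
        for rest in partitions_of(a - first, first):
            yield (first,) + rest

def is_bad(n, d, a, tol=1e-9):
    """Search V_d for a point at which every P_lambda, lambda |- a, vanishes."""
    roots = [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]
    lambdas = list(partitions_of(a))
    for tail in product(roots, repeat=n - 1):
        Q = tail + (1.0 + 0.0j,)
        p = {m: sum(z**m for z in Q) for m in range(1, a + 1)}
        if all(abs(reduce(mul, (p[part] for part in lam), 1)) < tol for lam in lambdas):
            return True
    return False

print(is_bad(5, 6, 1))   # True, cf. the remark above
print(is_bad(3, 4, 2))   # False: (3, 4, 2) is good
\end{verbatim}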
The following is an immediate consequence of Lemma \ref{power sums
suffice}.
\begin{cor}\label{single Q}
The triple $(n,d,a)$ is bad{} if and only if there exists a point
$Q \in \mathcal{V}_d$ such that $f(Q)=0$ for every $f \in R_a^{\mathcal{S}Gp}$.
\end{cor}
Lemma \ref{power sums suffice} suggests it might be useful to
understand the vanishing of power sums at roots of unity. The
following result is due to Lam and Leung \cite[Thm.~5.2]{vanishing
sums}.
\begin{theorem}\label{LL}
Let $d$ be a positive integer and let $\Gamma(d)$ denote the
numerical semi-group generated by the prime divisors of $d$. Then
there exist $d^\text{th}$ roots of unity $z_1,z_2,\dots,z_n$ (not
necessarily distinct) such that $z_1+z_2+\dots+z_n=0$ if and only
if $n \in \Gamma(d)$.
\end{theorem}
Note that $\Gamma(1):=\{0\}$ here.
\begin{cor}\label{LL_cor}
Let $a,d$ be positive integers and let $g:=\gcd(a,d)$. Then
there exist $d^\text{th}$ roots of unity $z_1,z_2,\dots,z_n$ (not
necessarily distinct) such that $\mathcal{P}_a(z_1,z_2,\dots,z_n)=0$ if and
only if $n \in \Gamma(d/g)$.
\end{cor}
\begin{proof}
Assume there exist $d^\text{th}$ roots of unity $z_1,z_2,\dots,z_n$
such that $\mathcal{P}_a(z_1,z_2,\dots,z_n)=0$. Note that $z_i^a$ is a
$(d/g)^\text{th}$ root of unity. Then
$$z_1^a+z_2^a+\dots+z_n^a =\mathcal{P}_a(z_1,z_2,\dots,z_n) =0$$
implies $n \in \Gamma (d/g)$ by Theorem \ref{LL}.
Conversely, assume $n \in \Gamma(d/g)$. Then Theorem \ref{LL} implies
the existence of $(d/g)^\text{th}$ roots of unity $w_1,w_2,\dots,w_n$
such that $w_1+w_2+\dots+w_n=0$. Since $g = \gcd(a,d)$, we have
$1 = \gcd(a,d/g)$. By B\'ezout's identity \cite[Prop.~5.1]{Lang},
there exist integers $u,v$ such that $au+(d/g)v=1$. Note that
$z_i = w_i^u$ is a $d^\text{th}$ root of unity. Therefore we get
$$0 = \sum_{i=1}^n w_i = \sum_{i=1}^n w_i^{au+(d/g)v} =
\sum_{i=1}^n (w_i^u)^a (w_i^{d/g})^v = \mathcal{P}_a (z_1,z_2,\dots,z_n).$$
\end{proof}
\begin{remark} \label{Galois remark}
Let $\zeta_d$ be a primitive $d^{\rm th}$ root of unity. The Galois
group of the cyclotomic field $\mathbb{Q} (\zeta_d)$ is isomorphic to
$(\mathbb{Z}/d\mathbb{Z})^\times$, the group of units modulo $d$. An element of
$(\mathbb{Z}/d\mathbb{Z})^\times$ is represented by the class of an integer $s$
coprime to $d$. Let $\gamma_s$ denote the corresponding Galois
automorphism of $\mathbb{Q} (\zeta_d)$, which is defined by fixing $\mathbb{Q}$
and sending $\zeta_d$ to $\zeta_d^s$. If $z$ is a $d^{\rm th}$ root
of unity, then $z$ is a power of $\zeta_d$, therefore
$\gamma_s (z) = z^s$.
Now let $Q = (z_1,z_2,\dots,z_n)\in \mathcal{V}_d$. We have that
\begin{equation*}
\begin{split}
\mathcal{P}_s (Q) &= z_1^s + z_2^s + \dots + z_n^s
=\gamma_s (z_1) + \gamma_s (z_2) + \dots + \gamma_s (z_n) =\\
&=\gamma_s (z_1 + z_2 + \dots + z_n) =\gamma_s (\mathcal{P}_1 (Q)).
\end{split}
\end{equation*}
Therefore $\mathcal{P}_s (Q) = 0$ if and only if $\mathcal{P}_1 (Q) = 0$.
\end{remark}
\subsection{Numerical criteria for good and bad triples}
\label{sec:numerical-criteria}
Throughout the rest of this section, $(n,d,a)$ denotes a
triple of positive integers. We present criteria to decide whether
$(n,d,a)$ is good{} or bad{} in the sense of Definition
\ref{good_bad}.
\begin{prop}
\label{n_notin_Gamma}
Let $g:=\gcd(a,d)$. If
$n \notin \Gamma(d/g)$, then $(n,d,a)$ is good.
In particular, if $n \notin \Gamma(d)$, then $(n,d,a)$ is good{}
for every $a$.
\end{prop}
\begin{proof}
If $n \notin \Gamma (d/g)$, then $\mathcal{P}_a$ does not vanish on $\mathcal{V}_d$
by Corollary \ref{LL_cor}, thus $(n,d,a)$ is good. The second
assertion follows from the fact that
$\Gamma (d/g) \subseteq \Gamma (d)$ for any divisor $g$ of $d$.
\end{proof}
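The criterion of Proposition \ref{n_notin_Gamma} is easy to evaluate by machine. The following self-contained Python sketch (illustrative only; the helper names are ours) decides membership in $\Gamma(\cdot)$ by exhausting bounded sums of the prime divisors.
\begin{verbatim}
from math import gcd
from sympy import primefactors

def in_Gamma(n, d):
    """Is n in the numerical semigroup generated by the prime divisors of d?
    By convention Gamma(1) = {0}."""
    reachable = {0}
    for g in primefactors(d):
        reachable = {r + k * g for r in reachable for k in range(n // g + 1)}
    return n in reachable

def good_by_gamma(n, d, a):
    """Sufficient criterion: n not in Gamma(d / gcd(a, d)) implies (n, d, a) is good."""
    return not in_Gamma(n, d // gcd(a, d))

print(good_by_gamma(3, 4, 2))   # True: (3, 4, 2) is good
print(good_by_gamma(5, 6, 1))   # False: the criterion is inconclusive here
\end{verbatim}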
\begin{remark}
The proof of Proposition \ref{n_notin_Gamma} uses a power sum as the
symmetric polynomial of degree $a$. It seems that we might be able
to use Theorem \ref{LL} to handle more cases by using some other
  symmetric polynomial $f$.  While it is possible that
  $n \in \Gamma(d)$ and that some homogeneous $f \in R^\mathcal{S}Gp$ has $m$
  terms with $m \notin \Gamma(d)$, this happens in only two cases.
The first case is $f = e_n$, the $n^{\rm th}$ elementary symmetric
polynomial, which consists of a single term and does not vanish on
$\mathcal{V}_d$. In particular, this shows that if $n$ divides $a$, then
$(n,d,a)$ is good.
The second case is essentially when $d$ is a power of a prime. See
Corollaries~\ref{prime power} and \ref{essentially} below. In
fact, suppose two distinct primes $p,q$ divide $d$, $n \geqslant p+q$,
$n \in \Gamma(d)$ and let $f$ be a non-constant symmetric polynomial
having $m$ terms. Then $n \geqslant p+q$ implies that
$\binom{n}{2} \geqslant (p-1)(q-1)$. Thus, if
$m \geqslant \binom{n}{2}$, then $m \geqslant (p-1)(q-1)$, which implies
$m \in \Gamma(pq)$ (cf. \cite[Thm.~2.1.1]{Sylvester}). Since
$\Gamma (pq) \subseteq \Gamma (d)$, we deduce that
$m \geqslant \binom{n}{2}$ implies $m\in \Gamma (d)$. Therefore, if
$m \notin \Gamma(d)$, then $m < \binom{n}{2}$. Since we are
assuming $n\in \Gamma(d)$, this implies that $f = \lambda e_n$ for
some scalar $\lambda$.
\end{remark}
\begin{prop}\label{S semigroup}
Define $S := \left\{ q : q \mid d, n \notin \Gamma(d/q)
\right\}$. If $a$ lies in the numerical semi-group generated by $S$,
then the triple $(n,d,a)$ is good.
\end{prop}
\begin{proof}
By the hypothesis, we can write $a = \sum_{i=1}^r \lambda_i$, where
$\lambda_i \in S$ for $i=1,2,\dots,r$ and
$\lambda_1 \geqslant \lambda_2 \geqslant \dots \geqslant \lambda_r$. Then
$\lambda = (\lambda_1,\lambda_2,\dots,\lambda_r)$ is a partition of
$a$ and $\mathcal{P}_\lambda$ is a symmetric polynomial of degree $a$.
Since $\lambda_i \in S$, we have that $\lambda_i \mid d$, hence
$\gcd (\lambda_i,d) = \lambda_i$. Moreover,
$n\notin \Gamma (d/\lambda_i)$. Therefore Corollary \ref{LL_cor}
implies that $\mathcal{P}_{\lambda_i}$ does not vanish on $\mathcal{V}_d$. Since this
holds for all indices $i=1,2,\dots,r$, we conclude that
$\mathcal{P}_\lambda (Q)$ does not vanish on $\mathcal{V}_d$. Therefore $(n,d,a)$ is
good.
\end{proof}
\begin{remark}
Note that $d \in S$ always. Furthermore, if
$d = p_1^{b_1} p_2^{b_2}\cdots p_t^{b_t}$ is the prime factorization
of $d$, then the set
$$\left\{ \frac{d}{p_i^{b_i}} : p_i \nmid n\right\}$$
is a subset of $S$.
\end{remark}
\begin{remark}
Proposition \ref{S semigroup} remains true if we use $S\cup \{n\}$
instead of $S$. In fact, if $a$ lies in the numerical semi-group
generated by $S\cup \{n\}$, then $a = b + cn$, where $b,c$ are
positive integers and $b$ lies in the numerical semi-group generated
by $S$. By the proof of Proposition \ref{S semigroup}, there exists
$\lambda \vdash b$ such that $\mathcal{P}_\lambda$ does not vanish on
$\mathcal{V}_d$. At the same time, the elementary symmetric polynomial $e_n$
does not vanish on $\mathcal{V}_d$. Therefore $\mathcal{P}_\lambda e_n^c$ is a
homogeneous symmetric polynomial of degree $a$ which does not vanish
on $\mathcal{V}_d$.
\end{remark}
\begin{prop}\label{coprime lemma}
Suppose that $n \in \Gamma(d)$ and
$a \notin \Gamma(d)$. Then $(n,d,a)$ is bad.
\end{prop}
\begin{proof}
Since $n \in \Gamma(d)$, there exists $Q \in \mathcal{V}_d$ such that
$\mathcal{P}_1(Q)=0$ by Theorem \ref{LL}. If
$\lambda=(\lambda_1,\lambda_2,\dots,\lambda_r) \vdash a$, then some
part $\lambda_t$ is coprime to $d$ since $a \notin \Gamma(d)$.
Hence, by Remark \ref{Galois remark}, we have $\mathcal{P}_{\lambda_t}(Q)=0$
and thus $\mathcal{P}_\lambda(Q) = 0$. The reasoning holds for all
$\lambda \vdash a$. Therefore $(n,d,a)$ is bad{} by Lemma
\ref{power sums suffice}.
\end{proof}
\begin{prop}\label{two proofs}
Let $g: = \gcd(d,n)$. If $g\nmid a$, then $(n,d,a)$ is bad.
\end{prop}
\begin{proof}
Let $\omega$ be a primitive $g^{\rm th}$ root of unity and define
$Q = (\omega,\omega^2,\dots,\omega^n) \in \mathcal{V}_d$. Observe that
$\omega^i = \omega^{i+gj}$ for all $i,j\in\mathbb{Z}$. Hence, using the
auxiliary variable $y$, we have
$$\prod_{i=1}^n (y-\omega^i) = \left[ \prod_{i=1}^g (y-\omega^i)
\right]^{n/g} = (y^g - 1)^{n/g}.$$
On the other hand
$$\prod_{i=1}^n (y-\omega^i) = \sum_{j=0}^n (-1)^j e_j (Q) y^{n-j}.$$
By comparing the two expressions, we deduce that $e_j (Q) = 0$
whenever $g\nmid j$. Thus the only symmetric polynomials potentially
not vanishing at $Q$ are the ones in the subring
$\mathbb{C} [e_j : g\mid j]$. Note how the degree of any element in this
subring is divisible $g$. Since $g\nmid a$, $(n,d,a)$ is bad{} by
Corollary \ref{single Q}.
\end{proof}
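To illustrate Proposition \ref{two proofs}, take $(n,d,a)=(6,4,3)$: here $g=\gcd(4,6)=2$ and $2\nmid 3$, so the triple is bad. Concretely, the proof uses $\omega=-1$ and $Q=(-1,1,-1,1,-1,1)\in\mathcal{V}_4$; since $\prod_{i=1}^6(y-\omega^i)=(y^2-1)^3=y^6-3y^4+3y^2-1$, we have $e_j(Q)=0$ for every odd $j$, and hence every symmetric polynomial of degree $3$ vanishes at $Q$.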
\begin{prop}\label{a_bigger_than}
Let $g:=\gcd (d,n)$ and assume that
$$a \geqslant \frac{(n-g)(d-g)}{g}.$$
Then $(n,d,a)$ is bad{} if and only if $g\nmid a$.
\end{prop}
\begin{proof}
If $g\nmid a$, then the triple is bad{} by Proposition \ref{two
proofs}.
Assume $g\mid a$ and let $a'=a/g$, $n'=n/g$, and $d'=d/g$. The
inequality in the assumption gives
$$a' = \frac{a}{g} \geqslant \frac{n-g}{g} \frac{d-g}{g}
= (n'-1)(d'-1).$$
By \cite[Thm.~2.1.1]{Sylvester}, $a'$ belongs to the numerical
semi-group generated by $d'$ and $n'$. Thus we can write
$a' = sd'+tn'$, for some non-negative integers $s$ and
$t$. Multiplying by $g$, we obtain $a = sd+tn$. This equality
implies that the homogeneous symmetric polynomial
$f := \mathcal{P}_d^s e_n^t$ has degree $a$. For all $Q\in \mathcal{V}_d$, we have
$\mathcal{P}_d (Q) = n \neq 0$. Moreover, $e_n$ does not vanish on
$\mathcal{V}_d$. Therefore $f$ does not vanish on $\mathcal{V}_d$ and the triple
$(n,d,a)$ is good.
\end{proof}
\subsection{Triples and prime factors}
\label{sec:triples-with-prime}
Here we analyze the property of a triple $(n,d,a)$ being good{} or
bad{} in relation to certain prime factors of $n$, $d$, and $a$. We
begin by developing some technical results.
Let $z_1,z_2,\dots,z_n$ be $d^{\rm th}$ roots of unity and consider
the point $Q=(z_1,z_2,\dots,z_n) \in \mathbb{A}^n$. For an integer $v$, we
say that $Q$ is \emph{$v$-symmetric} if, given a primitive
$v^{\rm th}$ root of unity $\epsilon$, there exists $\tau \in \mathcal{S}Gp$
such that
$$(\epsilon z_1, \epsilon z_2, \dots, \epsilon z_n) =
(z_{\tau(1)},z_{\tau(2)},\dots,z_{\tau(n)}).$$ In other words, $Q$ is
$v$-symmetric if rotating each of the complex coordinates $z_i$ by
$2\pi/v$ radians produces a point in the $\mathcal{S}Gp$-orbit of $Q$.
Note that $v\mid d$ because
$1=z_{\tau (1)}^d=\epsilon^d z_1^d=\epsilon^d$ and $\epsilon$ is
primitive.
\begin{lemma}\label{is symmetric}
The point $Q \in \mathcal{V}_d \subset \mathbb{A}^n$ is $v$-symmetric if and only if
$v \mid n$ and $e_j(Q)=0$ for all $j$ such that $v\nmid j$.
\end{lemma}
\begin{proof}
First suppose that $Q$ is $v$-symmetric. The coordinates of $Q$
split into orbits under the cyclic group of order $v$ acting on the
complex plane by rotation. Since $Q \in \mathcal{V}_d$, we have $z_i \neq 0$
for all $i$. Therefore all the above orbits have cardinality $v$ and
$v\mid n$.
Since $Q$ is $v$-symmetric, there is a primitive $v^{\rm th}$ root
of unity $\epsilon$ such that, up to reordering, we may write
$z_{(j-1)v+i}= \epsilon^i \omega_j$ for $1 \leqslant i \leqslant v$,
$1 \leqslant j \leqslant n/v$, and for some $d^{\rm th}$ roots of unity
$\omega_j$. Using the auxiliary variable $y$, we have
\begin{align*}
\sum_{j=0}^n (-1)^{j} e_j(Q) y^{n-j}
& = \prod_{i=1}^n(y-z_i) = \prod_{j=1}^{n/v} \prod_{i=1}^v (y-\epsilon^i \omega_j) =
\prod_{j=1}^{n/v} \omega_j^v \prod_{i=1}^v (y/\omega_j-\epsilon^i)\\
&= \prod_{j=1}^{n/v} \omega_j^v \left( (y/\omega_j)^v - 1\right) = \prod_{j=1}^{n/v} (y^v - \omega_j^v).
\end{align*}
Thus $e_j(Q)=0$ whenever $v\nmid j$.
Conversely, suppose that $v\mid n$, $Q\in\mathcal{V}_d$ and $e_j(Q)=0$
whenever $v\nmid j$. We have
$$\prod_{i=1}^n(y-z_i) = \sum_{j=0}^n (-1)^{j} e_j(Q) y^{n-j} = f(y^v),$$
where $f$ is a polynomial in one variable. At the same time, since
$\epsilon^v=1$ and $v\mid n$,
$$\prod_{i=1}^n(y-\epsilon z_i) = \epsilon^n \prod_{i=1}^n
(y/\epsilon - z_i) = \epsilon^n f((y/\epsilon)^v) = f(y^v).$$
Therefore, comparing factors, we deduce that $Q$ is $v$-symmetric.
\end{proof}
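For instance, the point $Q=(1,i,-1,-i)\in\mathcal{V}_4\subset\mathbb{A}^4$ is $4$-symmetric, since multiplication by the primitive fourth root of unity $i$ permutes its coordinates cyclically. Accordingly, $\prod_{j=1}^4(y-z_j)=y^4-1$, so $e_1(Q)=e_2(Q)=e_3(Q)=0$ while $e_4(Q)=-1\neq 0$, as predicted by Lemma \ref{is symmetric}.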
\begin{lemma}\label{p-symmetric}
Suppose $Q=(z_1,z_2,\dots,z_n) \in \mathcal{V}_d$ is $v^m$-symmetric and
$(z_1^{v^m},z_2^{v^m},\dots,z_n^{v^m})$ is $v$-symmetric. Then $Q$
is $v^{m+1}$-symmetric.
\end{lemma}
\begin{proof}
Proceeding as in the proof of Lemma \ref{is symmetric}, $Q$ being
$v^m$-symmetric implies the existence of a primitive
$(v^m)^{\rm th}$ root of unity $\epsilon$ such that, up to
reordering, we may write $z_{(j-1)v^m+i}= \epsilon^i \omega_j$ for
$1 \leqslant i \leqslant v^m$, $1 \leqslant j \leqslant n/v^m$, and for some $d^{\rm th}$
roots of unity $\omega_j$. Using the auxiliary variable $y$, we
have
\begin{equation}\label{eq:5}
\prod_{i=1}^n (y-z_i^{v^m}) =
\prod_{j=1}^{n/v^m} \prod_{i=1}^{v^m} (y-\omega_j^{v^m})=
\prod_{j=1}^{n/v^m} (y-\omega_j^{v^m})^{v^m}=
\left( \prod_{j=1}^{n/v^m} (y-\omega_j^{v^m}) \right)^{v^m}.
\end{equation}
Since $(z_1^{v^m},z_2^{v^m},\dots,z_n^{v^m})$ is $v$-symmetric,
Lemma \ref{is symmetric} implies
\begin{equation}
\label{eq:6}
\prod_{i=1}^n (y-z_i^{v^m}) = f(y^v),
\end{equation}
for some polynomial $f$ in one variable. The only way to reconcile
equations \eqref{eq:5} and \eqref{eq:6} is if
\begin{equation*}
\label{eq:7}
\prod_{j=1}^{n/v^m} (y-\omega_j^{v^m}) = g(y^v),
\end{equation*}
for some polynomial $g$ in one variable.
Therefore we must have
\begin{equation*}
\begin{split}
\prod_{i=1}^n (y-z_i)
&=\prod_{j=1}^{n/v^m} \prod_{i=1}^{v^m} (y-z_{(j-1)v^m+i})
=\prod_{j=1}^{n/v^m} \prod_{i=1}^{v^m} (y-\epsilon^i \omega_j)=\\
&=\prod_{j=1}^{n/v^m} (y^{v^m}- \omega_j^{v^m}) =
g \left( (y^{v^m})^v \right) = g (y^{v^{m+1}}).
\end{split}
\end{equation*}
Using Lemma \ref{is symmetric} again, we conclude that $Q$ is
$v^{m+1}$-symmetric.
\end{proof}
\begin{prop}\label{symmetric case}
Let $p$ be prime and suppose that all points
$Q\in \mathcal{V}_d \subseteq \mathbb{A}^n$ with $\mathcal{P}_1 (Q)=0$ are $p$-symmetric.
Let $g:=\gcd (d,n)$ and assume $p\mid g$. Then $(n,d,a)$ is bad{}
if and only if $g\nmid a$.
\end{prop}
\begin{proof}
If $g\nmid a$, then $(n,d,a)$ is bad{} by Proposition \ref{two
proofs}.
We prove the other implication by contradiction, so suppose that
$g\mid a$. Let $n=p^r n'$, $d=p^s d'$ and $a = p^t a'$, where
$\gcd(p,n')=\gcd(p,d')=\gcd(p,a')=1$. Set $k = \min\{r,s\}$. Since
$p^k \mid g$, the condition $g\mid a$ implies $p^k \mid a$ and
therefore $k\leqslant t$.
The hypothesis $p\mid g$ implies $s\geqslant 1$; hence $p\in \Gamma(d)$.
At the same time, $p\mid g$ also implies $r\geqslant 1$; hence
$n\in \Gamma(d)$. Thus, by Theorem \ref{LL}, there exists
$Q\in \mathcal{V}_d \subseteq \mathbb{A}^n$ such that $\mathcal{P}_1 (Q) =0$. By the
hypothesis, $Q$ is $p$-symmetric. However, $Q$ is not
$p^{k+1}$-symmetric because either $p^{k+1} \nmid n$ or
$p^{k+1} \nmid d$. Therefore there is an integer $m$, with
$1\leqslant m\leqslant k$, such that $Q$ is $p^m$-symmetric but not
$p^{m+1}$-symmetric.
Now suppose that $\mathcal{P}_{p^m} (Q) = 0$. Then we would have
$$\mathcal{P}_1 (z_1^{p^m},z_2^{p^m},\dots,z_n^{p^m}) = \mathcal{P}_{p^m} (Q) = 0.$$
Our hypothesis would imply that
$(z_1^{p^m},z_2^{p^m},\dots,z_n^{p^m})$ is $p$-symmetric. However,
Lemma \ref{p-symmetric} would give that $Q$ is $p^{m+1}$-symmetric,
contradicting our choice of $m$. Therefore $\mathcal{P}_{p^m} (Q) \neq 0$.
Thus the homogeneous polynomial $(\mathcal{P}_{p^m})^{a'p^{t-m}}$ has degree
$a$ and does not vanish at $Q$. We conclude that $(n,d,a)$ is
good{} by Corollary \ref{single Q}.
\end{proof}
In \cite{vanishing sums}, Lam and Leung consider sequences
$z_1,z_2,\dots,z_n$ with each $z_i$ a $d^{\rm th}$ root of unity and
whose sum is 0, in particular, points
$Q = (z_1,z_2,\dots,z_n) \in \mathcal{V}_d$ such that $\mathcal{P}_1 (Q) =0$.
Corollary~3.4 of \cite{vanishing sums} shows that if $d=p^r$ is a
prime power, then $Q$ must be $p$-symmetric. This yields the
following corollary of Proposition~\ref{symmetric case}.
\begin{cor}
\label{prime power}
Suppose $d=p^s$ for some prime $p$ and positive integer $s$. Let
$g:=\gcd(d,n)$. Then $(n,d,a)$ is bad{} if and only if $g\nmid a$.
\end{cor}
\begin{proof}
If $p\mid g$, then the result follows from \cite[Cor.~3.4]{vanishing
sums} and Proposition~\ref{symmetric case}.
Assume $p\nmid g$. In this case $g=1$, so $g\mid a$ and we must show that
$(n,d,a)$ is good. Note that $p\nmid g$ implies $p\nmid n$. Hence
$n\notin \Gamma (d) = \langle p \rangle$. Therefore $(n,d,a)$ is
good{} by Proposition \ref{n_notin_Gamma}.
\end{proof}
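For example, with $d=9$ and $n=6$ we have $g=\gcd(9,6)=3$, so $(6,9,a)$ is bad{} precisely when $3\nmid a$: the triple $(6,9,4)$ is bad{}, while $(6,9,3)$ is good{}.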
Lam and Leung also showed that if $(z_1,z_2,\dots,z_n)$ is not
$p$-symmetric for all primes $p$ dividing $d$, then
$n \geqslant p_1(p_2-1) + p_3 - p_2$, where $p_1 < p_2 <p_3$ are the three
smallest primes dividing $d$ \cite[Thm.~4.8]{vanishing sums}. This
yields the following corollary of Proposition~\ref{symmetric case}.
\begin{cor}\label{essentially}
Suppose that at least two distinct primes divide $d$ and that
$n < p + q$ where $p$ and $q$ are the smallest two distinct primes
dividing $d$. Let $g:=\gcd(d,n)$. Then $(n,d,a)$ is bad{} if and
only if $g\nmid a$.
\end{cor}
\begin{proof}
Let $d = p^s \prod_{i=1}^m q_i^{s_i}$ be the prime factorization of
$d$, where $p < q_1 < q_2 < \dots < q_m$. Suppose that
$Q = (z_1,z_2,\dots,z_n) \in \mathcal{V}_d$ is such that $\mathcal{P}_1(Q)=0$. Since
$$p(q_1-1) + q_2-q_1 \geqslant 2(q_1-1) + q_2 - q_1 = (q_1-1) + (q_2-1) > (q_1-1) + p \geqslant n,$$
\cite[Thm.~4.8]{vanishing sums} implies that every non-empty
minimal subset $I \subset \{1,2,\dots,n\}$ such that
$\sum_{i \in I} z_i = 0$ corresponds to a $v$-symmetric point
$(z_i : i \in I)$, where $v$ is a prime dividing $d$. Moreover, $v$
divides the cardinality of $I$.
Clearly, we may partition $\{1,2,\dots,n\}$ into a disjoint union
$I_1 \sqcup I_2 \sqcup \dots \sqcup I_t$ of such minimal subsets.
Thus $n = \# I_1 + \# I_2 + \dots + \# I_t$. Since the cardinality
of each $I_j$ is either $p$ or some $q_i$, the hypothesis $n<p+q_1$
implies we must have either $t=1$ and $n= \# I_1 = q_i$ for some
$i$, or else $\# I_j =p$ for all $j$ and $n=tp$.
Thus there are two possibilities: either $n=q_i$ for some $q_i$, or
else $n=pt$. In the former case, $q_i \mid g$ and every
$Q \in \mathcal{V}_d$ with $\mathcal{P}_1(Q)=0$ is $q_i$-symmetric. In the latter
case, $p\mid g$ and every $Q \in \mathcal{V}_d$ with $\mathcal{P}_1(Q)=0$ is
$p$-symmetric. Thus the hypotheses of Proposition~\ref{symmetric
case} are satisfied (either with the prime $q_i$ or with $p$).
\end{proof}
\subsection{Generating good and bad triples}
\label{sec:generating-triples}
We illustrate how to obtain more good{} and bad{} triples from the
ones already at our disposal.
\begin{prop}\label{inc_bad}
Let $k$ be a positive integer.
\begin{enumerate}
\item\label{inc1} If $(n,d,a)$ is bad, then $(n,kd,a)$ is also
bad.
\item\label{inc2} If $(n,d,a)$ is bad, then $(kn,d,a)$ is also
bad.
\item\label{inc5} If $(n,d,a)$ is bad, then $(kn,kd,ka)$ is also
bad.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose that $(n,d,a)$ is bad. By Corollary~\ref{single Q}, there
is a point $Q=(z_1,z_2,\dots,z_n) \in \mathcal{V}_{d} \subset \mathbb{A}^{n}$
such that $f(Q)=0$ for all $f \in R_{a}^\mathcal{S}Gp$. Assertion
(\ref{inc1}) follows immediately since $\mathcal{V}_d \subset \mathcal{V}_{kd}$.
For the second assertion, choose a point
$Q = (z_1,z_2,\dots,z_n)\in \mathcal{V}_d$. Define the point
$Q' = (z'_1,z'_2,\dots,z'_{kn}) \in \mathcal{V}_{d} \subset \mathbb{A}^{kn}$ by
$z'_{i+n(j-1)} := z_i$ for $1 \leqslant i \leqslant n$ and $1 \leqslant j \leqslant k$.
Assume, by way of contradiction, that there exists
$f' \in \mathbb{C}[x_1,x_2,\ldots,x_{kn}]_{a}^ {{\mathfrak{S}_{kn}}}$ such
that $f'(Q') \neq 0$. The polynomials $\mathcal{P}_\lambda$ with $\lambda$ a
partition of $a$ whose parts do not exceed $kn$ form a basis of
$\mathbb{C}[x_1,x_2,\ldots,x_{kn}]_{a}^ {{\mathfrak{S}_{kn}}}$. Then
$f'(Q')\neq 0$ implies that there exists a partition
$\lambda=(\lambda_1,\lambda_2,\dots,\lambda_r) \vdash a$ with
$\mathcal{P}_{\lambda}(Q') \neq 0$. Hence $\mathcal{P}_{\lambda_t}(Q') \neq 0$ for
all $t=1,2,\dots,r$. Since
$$\mathcal{P}_{\lambda_t}(Q') = k z_1^{\lambda_t} + k z_2^{\lambda_t} + \dots +
k z_n^{\lambda_t} = k \mathcal{P}_{\lambda_t}(Q),$$ we have
$\mathcal{P}_{\lambda_t}(Q) \neq 0$ for all $t=1,2,\dots,r$, and therefore
$\mathcal{P}_\lambda (Q) \neq 0$. Because $Q\in \mathcal{V}_d$ is arbitrary, Lemma
\ref{power sums suffice} shows $(n,d,a)$ is not bad. This
contradicts the assumption, thus proving (\ref{inc2}).
Now we prove part (\ref{inc5}). By contradiction, assume
$(kn,kd,ka)$ is not bad. Given $Q\in \mathcal{V}_d$, we will construct
$f\in R^\mathcal{S}Gp_a$ such that $f(Q)\neq 0$, which will prove $(n,d,a)$
is not bad. Consider the primitive $d^{\rm th}$ root of unity
$\zeta := e^{2\pi i/d}$. We have
$Q = (\zeta^{b_1}, \zeta^{b_2}, \dots, \zeta^{b_n})$ for some
positive integers $b_1,b_2,\dots,b_n$. Let
$\omega := e^{2\pi i/(kd)}$; observe that $\omega$ is a
$(kd)^{\rm th}$ root of unity and $\omega^k = \zeta$. Define the
point $Q' = (z'_1,z'_2,\dots,z'_{kn}) \in \mathcal{V}_{kd} \subset\mathbb{A}^{kn}$
by $z'_{k(j-1)+i} := \omega^{b_j + id}$ for $1\leqslant i\leqslant k$ and
$1\leqslant j\leqslant n$. Since we have assumed that $(kn,kd,ka)$ is not
bad, by Lemma \ref{power sums suffice}, there exists a partition
$\lambda = (\lambda_1,\lambda_2,\dots,\lambda_r) \vdash ka$ such
that $\mathcal{P}_\lambda (Q') \neq 0$. In particular,
$\mathcal{P}_{\lambda_t} (Q') \neq 0$ for all $t=1,2,\dots,r$.
Using the auxiliary variable $y$, we can write
\begin{align*}
\prod_{t=1}^{kn} (y-z'_t)
&=\prod_{j=1}^{n}\prod_{i=1}^k (y-\omega^{b_j + id}) =
\prod_{j=1}^{n}\prod_{i=1}^k \omega^{b_j}(y/\omega^{b_j}-\omega^{id}) =\\
&=\prod_{j=1}^{n} \omega^{kb_j}
\prod_{i=1}^k(y/\omega^{b_j}-(\omega^d)^i).
\end{align*}
Since $\omega^d$ is a primitive $k^{\rm th}$ root of unity, the $k$
elements $(\omega^d)^1,(\omega^d)^2,\dots,(\omega^d)^k$ are all the
$k^{\rm th}$ roots of unity. Therefore we get
$$\prod_{i=1}^k (y/\omega^{b_j}-(\omega^d)^i) = (y/\omega^{b_j})^k -1.$$
Combining the two previous equations, we obtain
$$\prod_{t=1}^{kn} (y-z'_t) = \prod_{j=1}^{n} \omega^{k b_j}
[(y/\omega^{b_j})^k - 1] = \prod_{j=1}^{n} (y^k-\zeta^{b_j}).$$
On the other hand, we have
$$\prod_{t=1}^{kn} (y-z'_t) = \sum_{j=0}^{kn} (-1)^j e_j (Q') y^{kn-j}.$$
By comparing these expressions, we deduce that $e_j (Q') = 0$
whenever $k \nmid j$. This implies that every homogeneous polynomial
in $\mathbb{C} [x_1,x_2,\dots,x_{kn}]^{\mathfrak{S}_{kn}}$ whose degree is
not divisible by $k$ vanishes at $Q'$.
Thus the above integers $\lambda_1,\lambda_2,\dots,\lambda_r$ are
all divisible by $k$ and we set $c_t := \lambda_t / k$ for all
$t=1,2,\dots,r$. We have
\begin{align*}
\mathcal{P}_{\lambda_t}(Q')
&= \mathcal{P}_{kc_t}(Q') = \sum_{s=1}^{kn} (z'_s)^{kc_t} =
\sum_{j=1}^{n} \sum_{i=1}^k (\omega^{b_j+id})^{kc_t} =
\sum_{i=1}^{k} \sum_{j=1}^{n} (\omega^k)^{(b_j+id)c_t} =\\
& = \sum_{i=1}^{k} \sum_{j=1}^{n} \zeta^{(b_j+id)c_t} =
\sum_{i=1}^k \sum_{j=1}^{n} (\zeta^{b_j})^{c_t} =
\sum_{i=1}^k \mathcal{P}_{c_t}(Q) = k \mathcal{P}_{c_t}(Q).
\end{align*}
We deduce that $\mathcal{P}_{c_t}(Q) \neq 0$ for all $t=1,2,\dots,r$.
Define $f := \prod_{t=1}^r \mathcal{P}_{\lambda_t / k} \in R$ and observe
that $f$ is an element of $R^\mathcal{S}Gp_a$ with $f(Q)\neq 0$. This
concludes the proof.
\end{proof}
As the following example illustrates, $(n,d,a)$ being bad{} does not
imply that any of $(n,d,ka)$, $(n,kd,ka)$, or $(kn,d,ka)$ is bad.
\begin{example}
Consider $(n,d,a)=(8,15,4)$. Since
$8 = 5+3 \in \Gamma(d) = \Gamma(15)$ and $4 \notin \Gamma(d)$, we
see that $(8,15,4)$ is bad{} by Proposition~\ref{coprime lemma}.
Let $k=2$. The triples $(n,d,ka)=(8,15,8)$ and $(n,kd,ka)=(8,30,8)$
are good{} because $e_8$ clearly does not vanish on $\mathcal{V}_{15}$ nor
on $\mathcal{V}_{30}$.
Now consider the triple $(kn,d,ka)=(16,15,8)$. Observe that
$$S = \{q : q\mid 15, 16\notin \Gamma(15/q)\} = \{3,5,15\},$$
hence the numerical semi-group $\langle 3,5\rangle$ generated by $S$
contains $ka=8$. Therefore $(16,15,8)$ is good{} by Proposition
\ref{S semigroup}.
\end{example}
\begin{remark}
Consider the triple $(n,d,a)$ and let $g:=\gcd(n,d)$. By
Proposition~\ref{two proofs}, $(n,d,a)$ is bad{} if $g\nmid a$.
Thus we suppose that $g\mid a$. By Proposition \ref{inc_bad}
(\ref{inc5}), if $(n,d,a)$ is good{}, then $(n/g,d/g,a/g)$ is also
good.
\end{remark}
\begin{prop}\label{inc_good}
Let $k$ be a positive integer.
\begin{enumerate}
\item\label{inc3} If $(n,d,a)$ is good, then $(n,d,ka)$ is also
good.
\item\label{inc4} If $(n,d,a)$ is good, then $(n,kd,ka)$ is also
good.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose that $(n,d,a)$ is good. This implies that there exists
$f \in R_{a}^\mathcal{S}Gp$ which does not vanish on $\mathcal{V}_d$. Assertion
(\ref{inc3}) now follows since $f^k \in R_{ka}^\mathcal{S}Gp$ also does not
vanish on $\mathcal{V}_d$.
To prove (\ref{inc4}), define
$$f'(x_1,x_2,\dots,x_n) := f(x_1^k,x_2^k,\dots,x_n^k) \in
R_{ka}^\mathcal{S}Gp.$$ For every point
$Q'=(z_1,z_2,\dots,z_n) \in \mathcal{V}_{kd}$, the point
$Q = (z_1^k,z_2^k,\dots,z_n^k)$ lies in $\mathcal{V}_d$; moreover,
$f'(Q') =f(Q) \neq 0$. Thus $(n,kd,ka)$ is good.
\end{proof}
As the following example illustrates, $(n,d,a)$ being good{} does not
imply that any of $(kn,d,a)$, $(n,kd,a)$, $(kn,kd,a)$, $(kn,d,ka)$ or
$(kn,kd,ka)$ is good.
\begin{example}
Consider $(n,d,a)=(4,15,1)$. Since
$n=4 \notin \Gamma(d) = \Gamma(15) = \langle 3,5 \rangle$, we see
that $(4,15,1)$ is good{} by Proposition \ref{n_notin_Gamma}.
Now consider $k=2$ and $(kn,d,a)=(8,15,1)$. Since
$8 \in \Gamma(15)$ and $1\notin \Gamma(15)$, we see that $(8,15,1)$
is bad{} by Proposition \ref{coprime lemma}. The triple
$(n,kd,a)=(4,30,1)$ is bad{} for similar reasons. Then
$(kn,kd,a)=(8,30,1)$ is bad{} as well by
Proposition~\ref{inc_bad}~(\ref{inc2}).
We claim the triple $(kn,d,ka)=(8,15,2)$ is also bad{}. Using the
fact that $(8,15,1)$ is bad, we deduce that there exists
$Q\in \mathcal{V}_{15}$ such that $\mathcal{P}_1 (Q) = 0$. Since $2$ and $15$ are
coprime, Remark \ref{Galois remark} implies $\mathcal{P}_2 (Q) = 0$. Given
that $\mathcal{P}_2$ and $\mathcal{P}_{(1,1)} = \mathcal{P}_1^2$ form a basis of the
symmetric polynomials of degree 2, their simultaneous vanishing at
$Q$ implies the claim by Lemma \ref{power sums suffice}. Finally,
the claim just proved, together with
Proposition~\ref{inc_bad}~(\ref{inc1}), implies that
$(kn,kd,ka)=(8,30,2)$ is bad.
\end{example}
\section{Regular Sequences of Type $S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$}
Throughout this section we fix $n=4$, so $R=\mathbb{C}[x_1,x_2,x_3,x_4]$.
As proved in \cite[Prop.~2.5]{SCI1}, there exist homogeneous regular
sequences $g_1,g_2,f_1,f_2$ in $R$ such that $g_1,g_2$ form a basis of
a graded representation isomorphic to $S^{(2,2)}$ and $f_1,f_2$ are
symmetric polynomials. If $I \subset R$ is the ideal generated by
$g_1,g_2,f_1,f_2$, then $I/\mathfrak{m} I$ is isomorphic to
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$. Setting
$a := \deg(g_1) = \deg(g_2)$, $c:= \deg(f_1)$ and $d := \deg(f_2)$, we
seek the possible tuples $(a,a,c,d)$ corresponding to regular
sequences $g_1,g_2,f_1,f_2$ of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$.
\subsection{Sequences in low degree}
\label{sec:sequences-low-degree}
We recall some facts of invariant theory; more details can be found in
\cite[Ch.~3,4]{refl_groups}. There is an isomorphism
$R \cong R^{\mathfrak{S}_4} \otimes_\mathbb{C} R/(e_1,e_2,e_3,e_4)$ of graded
$\mathfrak{S}_4$-representations. The symmetric group acts trivially
on $R^{\mathfrak{S}_4}$. On the other hand, the coinvariant algebra
$R/(e_1,e_2,e_3,e_4)$ is isomorphic to the regular representation of
$\mathfrak{S}_4$. We worked out the graded character of
$R/(e_1,e_2,e_3,e_4)$ in \cite[Ex.~3.1]{SCI1}. In particular,
$R/(e_1,e_2,e_3,e_4)$ contains two copies of the irreducible
representation $S^{(2,2)}$, one in degree 2 and one in degree 4.
Let us find an explicit description of these two
representations. Specht's original construction shows that the
polynomials
\begin{equation}\label{eq:1}
(x_1-x_2)(x_3-x_4), (x_1-x_3)(x_2-x_4)
\end{equation}
span a copy of $S^{(2,2)}$ inside the degree 2 component of $R$
(cf. \cite[\S 7.4, Ex.~17]{Fulton}). Now observe that the polynomials
\begin{equation}\label{eq:2}
(x_1^2-x_2^2)(x_3^2-x_4^2), (x_1^2-x_3^2)(x_2^2-x_4^2)
\end{equation}
behave in the same way under the action of $\mathfrak{S}_4$. Therefore
they span a copy of $S^{(2,2)}$ inside the degree 4 component of
$R$. Note also that the polynomials in \eqref{eq:1} and \eqref{eq:2}
do not belong to the ideal $(e_1,e_2,e_3,e_4)$. Therefore their
residue classes span the desired copies of $S^{(2,2)}$ inside
$R/(e_1,e_2,e_3,e_4)$.
Using the isomorphism
$R \cong R^{\mathfrak{S}_4} \otimes_\mathbb{C} R/(e_1,e_2,e_3,e_4)$ together
with our construction above, we can establish the following
fundamental fact: any copy of $S^{(2,2)}$ contained inside the degree
$a$ component of $R$ is spanned by
\begin{equation}\label{eq:3}
\begin{split}
g_1 &= h (x_1-x_2)(x_3-x_4) + h' (x_1^2-x_2^2)(x_3^2-x_4^2),\\
g_2 &= h (x_1-x_3)(x_2-x_4) + h' (x_1^2-x_3^2)(x_2^2-x_4^2)
\end{split}
\end{equation}
for some symmetric polynomials $h$ of degree $a-2$ and $h'$ of degree
$a-4$.
Thus, when searching for degree tuples $(a,a,c,d)$ corresponding to
regular sequences $g_1,g_2,f_1,f_2$ of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$, we can assume that
$g_1,g_2$ have the form given in equation \eqref{eq:3}.
We consider the cases where $a \leqslant 4$ first.
Clearly we must have $a \geqslant 2$.
\begin{prop}
Let $a=2$ or $4$. A regular sequence of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$ with degree tuple
$(a,a,c,d)$ exists if and only if $d\geqslant 2$. If $a=3$, then no such
sequence exists.
\end{prop}
\begin{proof}
Let $a=2$. We form polynomials $g_1,g_2$ as in equation
\eqref{eq:3}. By degree considerations, $h$ is a unit and
$h'=0$. Therefore we may take
\begin{equation*}
g_1=(x_1-x_2)(x_3-x_4), g_2=(x_1-x_3)(x_2-x_4).
\end{equation*}
Now we need symmetric polynomials $f_1,f_2$ such that
$g_1,g_2,f_1,f_2$ is a regular sequence. Note that $f_1,f_2$ cannot
both be linear, otherwise they would be scalar multiples of
$e_1$. However, if we assume that $d = \deg (f_2) \geqslant 2$, then we
can write $d=2p+3q$, where $p,q$ are non-negative integers, and set
$f_1:=e_1^c$, $f_2:=e_2^p e_3^q$. The sequence $g_1,g_2,f_1,f_2$ is
regular with degree tuple $(2,2,c,d)$.
Now let $a=4$. We need $h'$ to be a unit. In fact, we can take
$h'=1$ and $h=e_2$; this gives
\begin{equation*}
\begin{split}
&g_1=e_2 (x_1-x_2)(x_3-x_4) + (x_1^2-x_2^2)(x_3^2-x_4^2),\\
&g_2=e_2 (x_1-x_3)(x_2-x_4) + (x_1^2-x_3^2)(x_2^2-x_4^2).
\end{split}
\end{equation*}
Again $f_1,f_2$ cannot both be linear. In fact, choosing the same
$f_1,f_2$ as before gives a regular sequence $g_1,g_2,f_1,f_2$ with
degree tuple $(4,4,c,d)$ for $d\geqslant 2$.
Finally let $a=3$. In this case, $h'=0$ while $h$ is a scalar
multiple of $e_1$. Thus $g_1,g_2$ have a common factor and do not
form a regular sequence.
\end{proof}
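The regularity assertions in the proof above are also easy to confirm by machine. The following Macaulay2 fragment, modelled on the code in Appendix~\ref{sec:macaulay2-code} and assuming the \texttt{Depth} package is available, checks the $a=2$ case for the illustrative choice $c=3$ and $d=7$ (so $f_1=e_1^3$ and $f_2=e_2^2e_3$); it is expected to return \texttt{true}.
\begin{verbatim}
needsPackage "Depth"
R = QQ[x_1..x_4]
-- the list entry e_i is the elementary symmetric polynomial of degree i+1
e = apply(4, i -> sum(apply(subsets(gens R, i+1), product)))
g1 = (x_1-x_2)*(x_3-x_4)
g2 = (x_1-x_3)*(x_2-x_4)
f1 = e_0^3        -- e_1^c with c = 3
f2 = e_1^2*e_2    -- e_2^p*e_3^q with d = 2p+3q = 7
isRegularSequence({g1, g2, f1, f2})
\end{verbatim}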
\subsection{Sequences with $a\geqslant 5$}
\label{sec:sequences-with-ageq5}
Here we obtain general results about regular sequences
$g_1,g_2,f_1,f_2$ of type $S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$
with degree tuple $(a,a,c,d)$ and $a \geqslant 5$. We still refer to the
form of $g_1,g_2$ given in equation \eqref{eq:3}.
\begin{lemma}\label{reg seq conditions}
Let
\begin{equation*}
\begin{split}
&h_1 := h + h' (x_1 + x_2)(x_3 + x_4),\\
&h_2 := h + h' (x_1 + x_3)(x_2 + x_4),
\end{split}
\end{equation*}
so that $g_1 = (x_1-x_2)(x_3-x_4)h_1$ and
$g_2 = (x_1-x_3)(x_2-x_4)h_2$. The sequence $g_1,g_2,f_1,f_2$ is
regular if and only if the sequences
\begin{enumerate}
\item $h,h',f_1,f_2$
\item $(x_1-x_2)(x_3-x_4),(x_1-x_3)(x_2-x_4),f_1,f_2$
\item $(x_1-x_2)(x_3-x_4),h_2,f_1,f_2$
\end{enumerate}
are regular.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:reg_seq_factors}, $g_1,g_2,f_1,f_2$ is regular if
and only if
\begin{enumerate}
\renewcommand\labelenumi{(\roman{enumi})}
\item $h_1,h_2,f_1,f_2$
\item $(x_1-x_2)(x_3-x_4),(x_1-x_3)(x_2-x_4),f_1,f_2$
\item $(x_1-x_2)(x_3-x_4),h_2,f_1,f_2$
\item $h_1,(x_1-x_3)(x_2-x_4),f_1,f_2$
\end{enumerate}
are regular. Note that (ii) and (iii) are the same as (2) and (3)
above. Moreover, the transposition $(2\,3)\in\mathfrak{S}_4$
permutes (iii) and (iv), therefore it is enough to assume (iii) is
regular. Thus the statement of the lemma will follow if we can prove
that (1), (2), and (3) are regular if and only if (i), (ii), and
(iii) are regular.
Let us show that if (i) is regular then (1) is regular. Since we
have an equality of ideals
$(h_1,h_2,f_1,f_2) = (h_1,h_2-h_1,f_1,f_2)$ and (i) is regular,
$h_1,h_2-h_1,f_1,f_2$ is also regular. Notice that
\begin{equation}
\label{eq:4}
h_2 - h_1 = h' (x_1-x_4)(x_2-x_3).
\end{equation}
This implies that $h_1,h',f_1,f_2$ is regular. We deduce that (1) is
regular, because of the equality
$(h_1,h',f_1,f_2)=(h,h',f_1,f_2)$.
Now assume that (1) and (3) are regular and let us prove that (i) is
regular. Since (1) is regular, the equality
$(h,h',f_1,f_2)=(h_1,h',f_1,f_2)$ implies that $h_1,h',f_1,f_2$ is
regular. As previously observed, (3) being regular implies (iii) and
(iv) are regular. Note that $(3\,4) h_1 = h_1$. Therefore, applying
$(3\,4)$ to (iv), we obtain the regular sequence
$h_1,(x_1-x_4)(x_2-x_3),f_1,f_2$. Since both $h_1,h',f_1,f_2$ and
$h_1,(x_1-x_4)(x_2-x_3),f_1,f_2$ are regular, we can multiply their
second elements to obtain a new regular sequence. By equation
\eqref{eq:4}, this sequence is simply $h_1,h_2-h_1,f_1,f_2$. Finally
the ideal equality $(h_1,h_2-h_1,f_1,f_2) = (h_1,h_2,f_1,f_2)$
allows us to conclude that (i) is regular.
\end{proof}
By Lemma \ref{reg seq conditions}, a necessary condition for
$g_1,g_2,f_1,f_2$ to be a homogeneous regular sequence of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$ and degrees $(a,a,c,d)$ is
that $(a-2,a-4,c,d)$ is a regular degree sequence. In fact, we will
show this condition is also sufficient when $a \geqslant 5$.
\begin{prop}
\label{pro:a_geq_5}
Let $a \geqslant 5$. Suppose that $(a-2,a-4,c,d)$ is a regular degree
sequence for $n=4$. Then there exists a homogeneous regular
sequence of type $S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$ and
degrees $(a,a,c,d)$.
\end{prop}
\begin{proof}
First we suppose that $a$ is even. By Proposition \ref{beta
condition}, $4! \mid (a-2)(a-4)cd$. Note that both $a-2$ and $a-4$
are even, and at least one of them is divisible by 4. Therefore it
is enough to account for 3 dividing $(a-2)(a-4)cd$. Moreover, we can
assume $c\geqslant 2$ by condition $(\dag)$; in particular, we can write
$c=2p+3q$ for some non-negative integers $p,q$. Table \ref{tab:even}
contains our choices of polynomials $h,h',f_1,f_2$; one can easily
verify that, in each case, $h,h',f_1,f_2$ is a regular sequence (see
Remark \ref{rem:reg_seq_proof}). The polynomials $g_1,g_2$ are
obtained using equation \eqref{eq:3}. Using Lemma~\ref{reg seq
conditions}, we conclude that, in each case, $g_1,g_2,f_1,f_2$ is
a regular sequence of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$.
\begin{table}[htb]
\centering
$\begin{array}{|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\rm \bf Degrees} & \multicolumn{4}{|c|}{\rm\bf Symmetric\ Polynomials}\\ \hline
\multicolumn{1}{|c|}{a-2} & \multicolumn{1}{|c|}{a-4} & \multicolumn{1}{|c|}{c} & \multicolumn{1}{|c|}{d}
& \multicolumn{1}{|c|}{h} & \multicolumn{1}{|c|}{h'} & \multicolumn{1}{|c|}{f_1} & \multicolumn{1}{|c|}{f_2}\\ \hline
4\alpha+2 & 4\alpha & 3\gamma & d & e_2^{2\alpha+1} & e_4^\alpha & e_3^\gamma & e_1^d\\
4\alpha & 4\alpha-2 & 3\gamma & d & e_4^\alpha & e_2^{2\alpha-1} & e_3^\gamma & e_1^d\\
12\alpha+2 & 12\alpha & 2p+3q & d & (e_2^3+e_3^2)e_4^{3\alpha-1} & (e_2^6+e_3^4+e_4^3)^\alpha & e_2^p e_3^q & e_1^d\\
12\alpha & 12\alpha-2 & 2p+3q & d & (e_2^6+e_3^4+e_4^3)^\alpha & (e_2^3+e_3^2)e_4^{3\alpha-2} & e_2^p e_3^q & e_1^d\\
6\alpha,\ (2\nmid \alpha) & 6\alpha-2 & 2p+3q & d & (e_2^3+e_3^2)^\alpha & e_4^{(3\alpha-1)/2} & e_2^p e_3^q & e_1^d\\
6\alpha+2,\ (2\nmid \alpha) & 6\alpha & 2p+3q & d & e_4^{(3\alpha+1)/2} & (e_2^3+e_3^2)^\alpha & e_2^p e_3^q & e_1^d\\
\hline
\end{array}$
\caption{$a$ even}\label{tab:even}
\end{table}
Next suppose that $a$ is odd. We must have $8\mid cd$ so, without
loss of generality, we may assume that $2\mid c$ and $4\mid
d$. Furthermore, $3 \mid (a-2)(a-4)cd$. We outline our choices of
$h,h',f_1,f_2$ in Table \ref{tab:odd}. As before, one can verify that
the corresponding sequence $g_1,g_2,f_1,f_2$ is regular.
\begin{table}[htb]\centering
$\begin{array}{|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{\rm \bf Degrees} & \multicolumn{4}{|c|}{\rm\bf Symmetric\ Polynomials}\\
\hline
\multicolumn{1}{|c|}{a-2} & \multicolumn{1}{|c|}{a-4} & \multicolumn{1}{|c|}{c} & \multicolumn{1}{|c|}{d}
& \multicolumn{1}{|c|}{h} & \multicolumn{1}{|c|}{h'} & \multicolumn{1}{|c|}{f_1} & \multicolumn{1}{|c|}{f_2}\\ \hline
3\alpha &3\alpha-2 & 2\gamma & 4\delta & e_3^\alpha & e_2^{(3\alpha-3)/2} e_1 & (e_1^2 + e_2)^\gamma & e_4^\delta\\
3\alpha+2 &3\alpha & 2\gamma & 4\delta & e_2^{(3\alpha +1)/2} e_1 & e_3^\alpha & (e_1^2 + e_2)^\gamma & e_4^\delta\\
4\alpha-1 &4\alpha-3 & 6\gamma & 4\delta & e_2^{2\alpha-2} e_3 & e_4^{\alpha-1}e_1 & (e_2^3 + e_3^2)^{\gamma} & (e_1^4+e_4)^{\delta}\\
4\alpha-1 &4\alpha-3 & 2\gamma & 12\delta & e_4^{\alpha-1} e_3 & e_2^{2\alpha-2}e_1 & (e_1^2 + e_2)^{\gamma} & (e_3^4+e_4^3)^{\delta}\\
4\alpha+1 &4\alpha-1 & 6\gamma & 4\delta & e_4^\alpha e_1 & e_2^{2\alpha-2}e_3 & (e_2^3 + e_3^2)^{\gamma} & (e_1^4+e_4)^{\delta}\\
4\alpha+1 &4\alpha-1 & 2\gamma & 12\delta & e_2^{2\alpha}e_1 & e_4^{\alpha-1}e_3 & (e_1^2 + e_2)^{\gamma} & (e_3^4+e_4^3)^{\delta}\\
\hline
\end{array}$\\
\caption{$a$ odd}
\label{tab:odd}
\end{table}
\end{proof}
\begin{remark}\label{rem:reg_seq_proof}
For each line in Table \ref{tab:even} and Table \ref{tab:odd}, one
can prove that the polynomials $h,h',f_1,f_2$ form a regular
sequence using Lemma \ref{lem:reg_seq_factors} and \cite[Cor.~17.8
a]{Eisenbud}. As an example, we show that the polynomials in the
third line of Table \ref{tab:even}, specifically
\begin{equation*}
(e_2^3+e_3^2)e_4^{3\alpha-1}, \quad (e_2^6+e_3^4+e_4^3)^\alpha,
\quad e_2^p e_3^q, \quad e_1^d,
\end{equation*}
form a regular sequence.
By Lemma \ref{lem:reg_seq_factors} and \cite[Cor.~17.8 a]{Eisenbud},
it is enough to show that the sequences
\begin{itemize}
\item $e_2^3+e_3^2, e_2^6+e_3^4+e_4^3, e_2, e_1$
\item $e_2^3+e_3^2, e_2^6+e_3^4+e_4^3, e_3, e_1$
\item $e_4, e_2^6+e_3^4+e_4^3, e_2, e_1$
\item $e_4, e_2^6+e_3^4+e_4^3, e_3, e_1$
\end{itemize}
are regular.
Let us show the first sequence is regular. The ideal it generates is
equal to $(e_3^2,e_3^4+e_4^3, e_2, e_1)$, therefore it suffices to
show that these generators form a regular sequence. Using
\cite[Cor.~17.8 a]{Eisenbud} again, it is enough to prove that
$e_3,e_3^4+e_4^3, e_2, e_1$ is regular. Because of the ideal
equality $(e_3,e_3^4+e_4^3, e_2, e_1)=(e_3,e_4^3, e_2, e_1)$, we
only need to prove that $e_3,e_4^3, e_2, e_1$ is regular. This
follows immediately from \cite[Cor.~17.8 a]{Eisenbud} and the fact
that the elementary symmetric polynomials form a regular sequence.
The other sequences are handled similarly.
\end{remark}
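As a sanity check, these reductions can also be verified directly by machine. The following Macaulay2 fragment, written in the style of Appendix~\ref{sec:macaulay2-code} and assuming the \texttt{Depth} package, tests the polynomials of the third line of Table \ref{tab:even} for the smallest instance $\alpha=1$, $p=q=1$, $d=2$; it is expected to return \texttt{true}.
\begin{verbatim}
needsPackage "Depth"
R = QQ[x_1..x_4]
-- the list entry e_i is the elementary symmetric polynomial of degree i+1
e = apply(4, i -> sum(apply(subsets(gens R, i+1), product)))
h  = (e_1^3 + e_2^2)*e_3^2   -- (e_2^3+e_3^2)*e_4^2 with alpha = 1
hp = e_1^6 + e_2^4 + e_3^3   -- e_2^6+e_3^4+e_4^3
f1 = e_1*e_2                 -- e_2^p*e_3^q with p = q = 1
f2 = e_0^2                   -- e_1^d with d = 2
isRegularSequence({h, hp, f1, f2})
\end{verbatim}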
In summary, we have the following result.
\begin{theorem}
There exists a regular sequence of type
$S^{(2,2)} \oplus S^{(4)} \oplus S^{(4)}$ and degrees $(a,a,c,d)$ if
and only if
\begin{enumerate}
\item $a = 2$ or $4$ and $(c,d) \neq (1,1)$, or
\item $a\geqslant 5$ and $(a-2,a-4,c,d)$ is a regular degree sequence.
\end{enumerate}
\end{theorem}
\begin{appendix}
\section{Macaulay2 code}
\label{sec:macaulay2-code}
We present here the Macaulay2 code used to produce the example in
Remark \ref{rem:M2_rem}.
\begin{verbatim}
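-- the list entry e_i is the elementary symmetric polynomial of degree i+1,
--   so e_0 is e_1(x_1..x_5) and e_4 is e_5(x_1..x_5)
-- l consists of x_i^6 - x_5^6 for i = 1,...,4
-- g consists of the four sums over j of e_(j+1)*(x_(i+1)^(4-j) - x_5^(4-j))
-- each list is tested for regularity together with e_0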
needsPackage "Depth"
R=QQ[x_1..x_5]
e=apply(5,i->sum(apply(subsets(gens R,i+1),product)))
l=apply(4,i->x_(i+1)^6-x_5^6)
g=apply(4,i->sum(apply(4,j->e_(j+1)*(x_(i+1)^(4-j)-x_5^(4-j)))))
isRegularSequence(l|{e_0})
isRegularSequence(g|{e_0})
\end{verbatim}
\end{appendix}
\def\Dbar{\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex
\accent"16\hss}D}
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{thm}{\sc Theorem}
\newtheorem*{s-thm}{\sc Theorem}
\newtheorem{lem}{\sc Lemma}[section]
\newtheorem{d-thm}[lem]{\sc Theorem}
\newtheorem{prop}[lem]{\sc Proposition}
\newtheorem{cor}[lem]{\sc Corollary}
\theoremstyle{definition}
\newtheorem{conj}[lem]{\sc Conjecture}
\newtheorem{prob}[lem]{\sc Open Problem}
\newtheorem{defn}[lem]{\sc Definition}
\newtheorem{qn}[lem]{\sc Question}
\newtheorem{ex}[lem]{\sc Example}
\newtheorem{rmk}[lem]{\sc Remark}
\newtheorem{rmks}[lem]{\sc Remarks}
\newtheorem*{ack}{\sc Acknowledgment}
\begin{abstract}
We establish sufficient conditions for extension of weighted-$L^2$ holomorphic functions from a possibly singular hypersurface $W$ to the ambient space ${\mathbb C} ^n$. The $L^2$-norms we use are the so-called generalized Bargmann-Fock norms, and thus there are restrictions on the singularities of $W$ as well as the density of $W$. Our sufficient conditions are that $W$ has density less than $1$ and is uniformly flat in a sense that extends to singular varieties the notion of uniform flatness introduced in \cite{osv}. We present an example of Ohsawa showing that uniform flatness is not necessary for extension in the singular case, and find an example showing that, for rather different reasons, uniform flatness is also not necessary in the smooth case. The latter answers in the negative a question posed in \cite{osv}.
\end{abstract}
\title{Bargmann-Fock Extension From Singular Hypersurfaces}
\setcounter{tocdepth}{1}
\section*{Introduction}
In this article we consider the problem of extending, from an analytic hypersurface $W$ in ${\mathbb C} ^n$, holomorphic functions that are square-integrable with respect to some ambient weight, in such a way that the extension is also square-integrable with respect to the same weight. When the hypersurface is smooth and uniformly flat, the result we present here was proved in \cite{osv}: If $W$ is uniformly flat and the density of $W$ is less than $1$ then extension is possible. (See Sections \ref{flat-section} and \ref{density-section} for the definition of uniform flatness and of density respectively.) On the other hand, if $W$ is singular then in general extension is not possible. The precise---by which we mean necessary and sufficient---conditions for such extension on a possibly singular hypersurface are not even conjectured.
Below we extend the notion of uniformly flat hypersurface to a possibly singular hypersurface. The notion places strong restrictions on the singularities of the hypersurface, among other things. We show that the results of \cite{osv} extend to the case of uniformly flat singular varieties.
To state our results precisely, we introduce some notation. Let $\omega := \tfrac{\sqrt{-1}}{2} \partial \bar \partial |z|^2$ denote the K\"ahler form associated to the Euclidean metric. To a smooth function $\varphi : {\mathbb C} ^n \to {\mathbb R}$ we associate the Hilbert space
\[
{\mathscr H} ({\mathbb C} ^n,\varphi) := {\mathcal O} ({\mathbb C} ^n ) \cap L^2 (e^{-\varphi}) = \left \{ f \in {\mathcal O} ({\mathbb C} ^n) \ ;\ \int _{{\mathbb C} ^n} |f|^2 e^{-\varphi} \omega ^n < +\infty\right \}.
\]
Let $W$ be a possibly singular complex analytic hypersurface. To $\varphi$ and $W$ we associate the Hilbert space
\[
{\mathfrak H} (W,\varphi) := \left \{ f \in {\mathcal O} (W)\ ;\ \int _{W_{\rm reg}} |f|^2 e^{-\varphi} \omega ^{n-1} < +\infty \right \}.
\]
The different letters ${\mathscr H}$ and ${\mathfrak H}$ stress that in the latter case, the weight $\varphi$ is defined on the entire ambient space ${\mathbb C} ^n$ that contains $W$. We emphasize that, by definition, a function is holomorphic on $W$ if each point $x \in W$ has a neighborhood $U$ in ${\mathbb C} ^n$ and a holomorphic function $\tilde f \in {\mathcal O} (U)$ such that $\tilde f |_W = f$. Since $W \subset {\mathbb C} ^n$ is a closed analytic subset, we may even take $U= {\mathbb C} ^n$.
Let us denote by ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^n,\varphi) \to {\mathfrak H}(W,\varphi)$ the map that sends $F\in {\mathscr H} ({\mathbb C} ^n,\varphi)$ to its restriction to $W$. In general, this restriction map is not bounded. For example, when $n=1$, ${\mathscr R} _W$ is bounded if and only if $W$ is a finite union of uniformly separated sequences. But we will not discuss the boundedness of ${\mathscr R} _W$ in this article; our main concern is with the surjectivity of ${\mathscr R} _W$.
In Section \ref{flat-section} we define the notion of uniformly flat complex analytic hypersurface $W$ and in Section \ref{density-section} the upper density $D^+_{\varphi}(W)$ of a hypersurface with respect to a weight $\varphi$. The notions of uniform flatness and upper density were defined for smooth hypersurfaces in \cite{osv}. Here we introduce modifications of both definitions to the case of possibly singular varieties.
We can now state our first main result.
\begin{thm}\label{suff-analytic}
Let $\varphi:{\mathbb C} ^n \to {\mathbb R}$ be a ${\mathscr C} ^2$-smooth function satisfying
\begin{equation}\label{gbf-weight}
\varepsilon \omega \le \sqrt{-1} \partial \bar \partial \varphi \le C\omega
\end{equation}
for some positive constants $\varepsilon$ and $C$, and let $W \subset {\mathbb C} ^n$ be a possibly singular, uniformly flat complex hypersurface such that $D^+_{\varphi}(W) < 1$. Then ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^n,\varphi) \to {\mathfrak H} (W,\varphi)$ is surjective.
\end{thm}
In fact we will show that if $W$ is {\it smooth} and uniformly flat then condition \eqref{gbf-weight} can be dropped completely. We therefore conjecture that the same is true for Theorem \ref{suff-analytic}.
The method used to prove Theorem \ref{suff-analytic} has two parts, the second of which bears similarity to the work in \cite{osv}. The first part, which concerns local extension with estimates near the singularities, therefore constitutes one of the main contributions of this paper.
The second main contribution of the present paper is to show that the conditions of Theorem \ref{suff-analytic} are not necessary; especially, uniform flatness need not hold in general. We show this by example. In the case of singular $W$, we present an example told to us by Ohsawa. In the smooth case, we find a new example that took us rather by surprise when we first discovered it. The example is the content of Theorem \ref{non-necess-smooth} below.
The examples we find suggest that the kinds of separation conditions we expect to constrain extension are ``in the large", rather than local. The exact condition, which must reduce to the necessary condition of uniformly separated sequence in the case where $W$ is zero-dimensional, remains undiscovered so far as the authors know.
\section{Weighted mean-value inequalities}
In what follows, we will repeatedly use the following result.
\begin{lem}\label{quimbo-trick}\cite{quimbo,lindholm}
Let $\varphi$ be a plurisubharmonic function on the unit ball ${\mathbb B}_k$ in ${\mathbb C} ^k$ such that $\sqrt{-1} \partial \bar \partial \varphi \le M\omega$ for some $M>0$. Then there exist a positive constant $K$, depending only on $M$, and a holomorphic function $G \in {\mathcal O} (\tfrac{1}{2}{\mathbb B}_k )$ such that $G(0)=0$ and
\[
\sup _{{\mathbb B} _k \left (0,1/2\right )} \left |\varphi - \varphi (0) - 2\,{\rm Re}\, G \right | \le K.
\]
Moreover, if $\varphi$ depends smoothly on a parameter, then so does $G$.
\end{lem}
Lemma \ref{quimbo-trick} shows that for each $z \in {\mathbb C} ^n$ and $r >0$ there is a function $G_z \in {\mathcal O} (B(z,r))$ such that
\[
2\,{\rm Re}\, G_z(w) = \varphi (w) - \varphi (z) + \psi _z(w)
\]
where $\psi_z(w)$ is bounded on $B(z,r)$ and $G_z(z) = \psi _z(z) = 0$. By averaging over $B(z,r)$ we see that, with
\[
\varphi _r (z) = \Xint- _{B(z,r)} \varphi (\zeta)\omega ^n (\zeta),
\]
we have the estimate
\[
\left | \varphi (z) - \varphi _r (z) \right | \le C_r.
\]
It follows that ${\mathscr H} ({\mathbb C} ^n,\varphi) = {\mathscr H} ({\mathbb C} ^n, \varphi _r)$ and ${\mathfrak H}(W,\varphi) = {\mathfrak H}(W,\varphi_r)$ in the sense that the identity map is a bounded vector space isomorphism. We will use this uniform comparison between $\varphi$ and $\varphi_r$ constantly and often without mention.
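For instance, in the model case $\varphi(z)=|z|^2$ the average can be computed explicitly: the linear terms integrate to zero, and averaging $|\zeta-z|^2$ over $B(z,r)$ gives $\tfrac{n}{n+1}r^2$, so that $\varphi_r(z)=|z|^2+\tfrac{n}{n+1}r^2$ and one may take $C_r=\tfrac{n}{n+1}r^2$.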
We will also need some uniform and ${\mathscr C}^1$ estimates for weighted-$L^2$ holomorphic functions in ${\mathbb C} ^n$. These estimates, the first of which often also goes by the name {\it weighted Bergman inequality}, might be thought of as weighted analogues of the Cauchy estimates for a function and its derivative.
\begin{lem}\label{quimbo-cauchy-est}\cite{quimseep, lindholm}
Let $\psi$ be a function satisfying
\[
-C\omega \le \sqrt{-1} \partial \bar \partial \psi \le C\omega.
\]
Then there is a constant $K > 0$ such that for any $F \in {\mathscr H} ({\mathbb C} ^n, \psi)$,
\begin{eqnarray}
\label{norm} \sup _{{\mathbb C} ^n} |F|^2e^{-\psi} &\le& K\int _{{\mathbb C} ^n} |F|^2 e^{-\psi} \omega ^n\\
\nonumber \text{and} && \\
\label{d-of-norm} \sup _{{\mathbb C} ^n} |d(|F|e^{-\tfrac{1}{2} \psi})^r| &\le& K \left ( \int _{{\mathbb C} ^n} |F|^2 e^{-\psi} \omega ^n \right )^{r/2}.
\end{eqnarray}
\end{lem}
\noindent Strictly speaking, inequality \eqref{d-of-norm} was probably not explicitly proved in the literature when $n \ge 2$, but a slight modification of the proof of Lemma \ref{quimbo-trick} in \cite{lindholm} can be used to obtain ${\mathscr C} ^1$-estimates for the function $\psi _z(w)$ above, and thus generalize the proof of \cite{quimseep} to higher dimensions. For details, the reader can see, for example, \cite[Lemma 2.1]{sv-toeplitz}.
\section{Uniform flatness}\label{flat-section}
We extend to possibly singular hypersurfaces the notion of uniform flatness introduced for smooth hypersurfaces in \cite{osv}.
\begin{defn}
\begin{enumerate}
\item[(i)] For a subset $A \subset {\mathbb C} ^n$ and a positive number $\varepsilon$, we define
\[
U_{\varepsilon} (A) := \{ x\in {\mathbb C} ^n\ ;\ {\rm dist}(x,A) = \inf _{a\in A} |x-a| < \varepsilon \}
\]
\item[(ii)] Let $Y \subset {\mathbb C}^n$ be a smooth complex hypersurface with boundary. If $\varepsilon : Y \to (0,\infty)$ is a continuous function, the union
\[
N_{\varepsilon}(Y):= \bigcup _{y\in Y}\left \{y+t \tfrac{df(y)}{|df(y)|}\ ;\ t\in {\mathbb C} \text{ and } |t|< \varepsilon (y), (f)={\mathcal O} _{Y,y}\right \}
\]
is said to be a tubular neighborhood of $Y$ if it is diffeomorphic to a neighborhood of the zero section in the normal bundle of $Y$.
$\diamond$
\end{enumerate}
\end{defn}
\noindent In the rest of the paper, the function $\varepsilon$ in the definition of $N_{\varepsilon}(Y)$ will always be constant.
Our goal in this section is to extend to certain singular varieties the notion of uniform flatness introduced in \cite{osv} for smooth hypersurfaces in ${\mathbb C} ^n$. To motivate our definition, let us recall the notion of uniformly flat smooth hypersurfaces.
\begin{defn}[Uniform flatness, smooth case \cite{osv}]
A smooth hypersurface $W \subset {\mathbb C} ^n$ is said to be uniformly flat if there exists a positive constant $\varepsilon _o$ such that $U_{\varepsilon_o}(W) = N_{\varepsilon _o}(W)$.
$\diamond$
\end{defn}
We take this opportunity to remind the reader of the following proposition describing the basic properties of uniformly flat smooth hypersurfaces. We also recall the notation
\[
D_W(w,\varepsilon_o) := T_{W,w} \cap B(w,\varepsilon _o) \oplus \{v\in T_{{\mathbb C} ^n,w}\ ; \partial f(w) v = 1 \text{ and } |v| <\varepsilon _o \}.
\]
\begin{prop}\cite[Proposition 3.2]{osv}\label{osv-uf}
Let $W \subset {\mathbb C} ^n$ be a uniformly flat hypersurface and let $\varepsilon _o$ be a constant such that $U_{\varepsilon_o}(W) = N_{\varepsilon _o}(W)$. Then the following hold.
\begin{enumerate}
\item[(G)] Assume $n \ge 2$. Then for all $w \in W$, $W \cap D_W(w,\varepsilon_o)$ is given as a graph $y = f(x)$, over $T_{W,w}\cap B(w,\varepsilon _o)$, of a function $f : T_{W,w} \cap B(w,\varepsilon_o) \to {\mathbb C}$ satisfying
\begin{equation}\label{quad-graph}
|f(w+x)| \le \frac{|x|^2}{\varepsilon_o}.
\end{equation}
Here $D_W(w,\varepsilon_o)$ denotes the union of the disks with centers on $T_{W,w}\cap B(w,\varepsilon _o)$ and radius $\varepsilon _o$ that are orthogonal to $T_{W,w} \subset {\mathbb C}^n$.
\item[(A)] For each $R > 0$ there exists a constant $C_R>0$ such that for all $z \in {\mathbb C} ^n$
\[
{\rm Area}(W \cap B(z,R)) \le C_R.
\]
\end{enumerate}
\end{prop}
\noindent In extending the notion of uniform flatness to the singular setting, we aim to achieve the following:
\begin{enumerate}
\item[(i)] Away from the singular locus of $W$, the notion of uniform flatness should be the same as for general smooth varieties in ${\mathbb C} ^n$, and
\item[(ii)] near the singular locus, the hypersurface should look like a finite number of uniformly flat smooth hypersurfaces intersecting pairwise-transversely, and the transversality should be uniform. The local notion of uniform flatness will be the graph property (G).
\end{enumerate}
With these comments in mind, we propose the following definition.
\begin{defn}[Uniform flatness]\label{u-flat-defn}
A singular analytic variety $W$ is said to be uniformly flat if there are numbers $\varepsilon _o>0$ and $a>1$ with the following properties.
\begin{enumerate}
\item[(R)] The set $N_{\varepsilon_o} (W- U_{a \varepsilon_o}(W_{\rm{sing}}))$ is a tubular neighborhood of the smooth hypersurface with boundary $W- U_{a \varepsilon_o}(W_{\rm sing})$ in ${\mathbb C} ^n$.
\item[(S)] For each $p \in W_{\rm sing}$ the set $W \cap B(p,\varepsilon _o)$ is a union of smooth hypersurfaces $W_1,...,W_{N_p}$ each of which is given as the graph of a function on its tangent space with the property \eqref{quad-graph}. Moreover,
\begin{enumerate}
\item[(S$_N$)] $N_p \le \varepsilon _o^{-1}$ for all $p$, and
\item[(S$_A$)] the angle at $p$ between any two of the $W_i$ lies in $[\varepsilon _o, \pi - \varepsilon _o]$.
$\diamond$
\end{enumerate}
\end{enumerate}
\end{defn}
\begin{rmk}
Note that $N_p \le n$ when $W$ has only simple normal crossing singularities.
$\diamond$
\end{rmk}
\begin{rmk}
At one point, we had hoped to use the following definition of uniform flatness: a singular analytic variety $W$ was to be uniformly flat if there are positive numbers $\varepsilon _o$ and $a$ such that the following holds. For any $0<\varepsilon <\varepsilon _o$, the set
\[
N_{\varepsilon} (W- U_{a \varepsilon}(W_{\rm{sing}}))
\]
is a tubular neighborhood of the smooth hypersurface with boundary $W- U_{a \varepsilon}(W_{\rm sing})$ in ${\mathbb C} ^n$.
So far, we have been unable to prove that this notion of uniform flatness is the same as the one we have taken above. The more natural, but possibly less general, definition is the one made in this remark. It would be nice to decide whether the two definitions are the same.
$\diamond$
\end{rmk}
\noindent The following are two important consequences of uniform flatness that we will use in the sequel. These properties follow easily from Proposition \ref{osv-uf} and the definition of uniform flatness.
\begin{lem}\label{flat-properties}
Let $W {\mathscr U}bset {\mathbb C} ^n$ be a uniformly flat hypersurface, and let $\varepsilon _o$ and $a$ be as in the definition of uniform flatness. Then the following hold.
\begin{enumerate}
\item[(G)] Assume $n \ge 2$. Let $B_{W,w}(\varepsilon _o):= T_{W,w} \cap B(w,\varepsilon _o)$. Then for all $w \in W-U_{(a+1)\varepsilon_o}(W_{\rm sing})$, $W\cap N_{\varepsilon_o}(B_{W,w}(\varepsilon _o))$ is given as a graph over $B_{W,w}(\varepsilon _o)$ of a function $f : B_{W,w}(\varepsilon_o) \to {\mathbb C}$ satisfying
\[
|f(w+x)| \le \frac{|x|^2}{\varepsilon _o}.
\]
\item[(A)] For each $R > 0$ there is a constant $C_R
>0$ such that for all $z \in {\mathbb C} ^n$,
\[
{\rm Area}(W \cap B(z,R)) \le C_R.
\]
\end{enumerate}
\end{lem}
\section{Density}\label{density-section}
\begin{defn}
Let $T \in {\mathcal O} ({\mathbb C} ^n)$ be a holomorphic function such that $W = T^{-1} (0)$ and $dT$ is nowhere zero on $W_{\rm{reg}}$. (That is to say, $T$ generates the ideal of functions vanishing on $W$.) For any $z\in{\mathbb C}^n$ and any $r>0$ consider the (1,1)-form
\[
\Upsilon ^W _r(z) := \frac{1}{2\pi} \sum _{i ,\bar j =1} ^n \left (\Xint- _{B(z,r)} \frac{\partial ^2\log |T|^2}{\partial \zeta ^i \partial \bar \zeta ^j} \omega ^n(\zeta) \right ) \sqrt{-1} dz^i \wedge d\bar z ^j.
\]
The $(1,1)$-form $\Upsilon _r^W(z)$ is called the total density tensor of $W$.
$\diamond$
\end{defn}
\begin{rmk}
Note that the total density tensor is identical in form to the total density tensor for smooth uniformly flat hypersurfaces introduced in \cite{osv}; the only thing different is that we no longer assume that $W$ is smooth. As was pointed out then, the definition of $\Upsilon ^W_r (z)$ is independent of the choice of the function $T$ defining $W$. Moreover, if $[W]$ denotes the current of integration over $W$ then $\Upsilon _r ^W$ is the average of $[W]$ in a ball of center $z$ and radius $r$:
\[
\Upsilon ^W_r = [W]*\frac{\mathbf{1}_{B(0,r)}}{\operatorname{Vol}(B(0,r))},
\]
where $\mathbf{1}_A$ denotes the characteristic function of a set $A$ and $*$ is convolution.
$\diamond$
\end{rmk}
A useful concept in the study of interpolation and sampling for a smooth hypersurface with respect to strictly plurisubharmonic weights is that of density of the hypersurface. The definition given in \cite{osv}, which we now recall, extends immediately to the setting of possibly singular hypersurfaces.
\begin{defn}\label{d-def}
Assume $\varphi$ is strictly plurisubharmonic. The number
\[
D_r(W;z) := \sup \left \{ \frac{\Upsilon _r ^W (z)(v,v)}{\sqrt{-1} \partial \bar \partial \varphi _r (z)(v,v)}\ ;\ v\in T_{{\mathbb C} ^n, z}-\{0\}\right \}
\]
is called the density of $W$ in the ball of radius $r$ and center $z$. The upper density of $W$ is
\[
D^+_{\varphi}(W) := \limsup _{r \to \infty} \sup _{z \in {\mathbb C} ^n} D_r(W;z).
\]
The lower density of $W$ is
\[
D^-_{\varphi}(W) := \liminf_{r \to \infty}\inf_{z \in {\mathbb C} ^n} D_r(W;z).
\]
(We will not use $D^-_{\varphi}(W)$ in the present article.)
$\diamond$
\end{defn}
As stated, the upper and lower densities are well-defined only for strictly plurisubharmonic functions. However, one can reformulate the definition as follows.
\[
D^+_{\varphi}(W)= \inf \left \{ \alpha \ge 0 \ ;\ \sqrt{-1} \partial \bar \partial \varphi _r - \tfrac{1}{\alpha}\Upsilon _r^W \ge 0 \text{ for all }r>>0 \right \}
\]
and
\[
D^-_{\varphi}(W)= \sup \{ \gamma \ge 0\ ;\ \gamma \sqrt{-1} \partial \bar \partial \varphi _r(z) - \Upsilon _r^W(z) \not \ge 0 \text{ for all }z \in {\mathbb C} ^n \text{ and all }r>>0\}.
\]
These equivalent formulations of the upper and lower densities make sense for $\varphi$ that are plurisubharmonic but not necessarily strictly plurisubharmonic.
Thus for example, $D^+_{\varphi}(W) < 1$ if and only if there exists a constant $\delta > 0$ such that for all $r >> 0$,
\[
\sqrt{-1} \partial \bar \partial \varphi _r \ge (1+\delta) \Upsilon ^W_r.
\]
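In particular, for the model Bargmann-Fock weight $\varphi(z)=\alpha|z|^2$ with $\alpha>0$ we have $\sqrt{-1}\partial\bar\partial\varphi_r = \sqrt{-1}\partial\bar\partial\varphi = 2\alpha\,\omega$, so the condition $D^+_{\varphi}(W)<1$ simply asks that $\Upsilon^W_r \le \tfrac{2\alpha}{1+\delta}\,\omega$ for some $\delta>0$ and all sufficiently large $r$.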
\section{Proof of Theorem \ref{suff-analytic}}
Part of our proof of Theorem \ref{suff-analytic} resembles the method of proof used in \cite{osv}. The twisted Bochner-Kodaira technique used in \cite{fv} cannot be used directly to prove Theorem \ref{suff-analytic} when $W$ is not smooth. We will illustrate this claim in Paragraph \ref{smooth-case}.
\subsection{A singular function}
As is usual in the $L^2$ approach to extension, we need to produce a function that is singular on $W$. The function we choose is a tried-and-true one (see \cite{osv,fv}), but we contribute one new insight to the definition.
\begin{defn}\label{sing-fn-defn}
We define the function
\[
s_r(z) = \log |T(z)|^2 - \Xint- _{B(z,r)} \log |T(\zeta)|^2 \frac{\omega ^n}{n!}
(\zeta),
\]
where $T \in {\mathcal O} ({\mathbb C} ^n)$ is a holomorphic function such that $W = \{ T=0 \}$ and $dT$ is not identically zero on $W$.
$\diamond$ \end{defn}
\begin{rmk}
Note that the function $s_r$ depends only on $W$, and not on the generator $T$ of the ideal ${\mathscr I} _W$ of germs of holomorphic functions vanishing on $W$.
$\diamond$
\end{rmk}
Definition \ref{sing-fn-defn} and the Poincar\'e-Lelong Identity yield the following proposition.
\begin{prop}\label{t-rep}
Let $s_r$ be as in Definition \ref{sing-fn-defn}. Then
\[
\frac{\sqrt{-1}}{2\pi} \partial \bar \partial s_r = [W]-[W]*\frac{\mathbf{1} _{B(0,r)}} {\operatorname{Vol}(B(0,r))} = [W]-\Upsilon ^W_r.
\]
\end{prop}
\noindent The proof can be found in {\mathcal I}te{osv} where the following lemma was also established.
\begin{lem}\label{s-prop}
The function $s_r$ has the following properties.
\begin{enumerate}
\item[(a)] It is non-positive.
\item[(b)] For each $r,\varepsilon > 0$ there is a constant $C_{r,\varepsilon}$ such that if $\operatorname{dist}(z, W) \ge \varepsilon$, then $s_r(z) \ge - C_{r,\varepsilon}$.
\item[(c)] The function $e^{-s_r}$ is not integrable on any open subset that intersects $W$.
\end{enumerate}
\end{lem}
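For illustration (and only for illustration; nothing below depends on it), consider the one-variable case $n=1$, $W=\{0\}$, $T(z)=z$. The mean value property of the harmonic function $\log|\zeta|^2$ gives $s_r(z)=0$ for $|z|\ge r$, while the logarithmic potential of the normalized area measure of $B(z,r)$ gives
\[
s_r(z) = \log\frac{|z|^2}{r^2} - \frac{|z|^2}{r^2} + 1 \ \le\ 0, \qquad |z| < r .
\]
In particular $e^{-s_r}$ behaves like a constant multiple of $|z|^{-2}$ near the origin, hence is not integrable on any neighborhood of $0$, and
\[
\frac{\sqrt{-1}}{2\pi}\partial\bar\partial s_r = \delta_0 - \frac{\mathbf{1}_{B(0,r)}}{\pi r^2}\,\omega = [W]-\Upsilon^W_r,
\]
in agreement with Proposition \ref{t-rep} and with properties (a) and (c) of Lemma \ref{s-prop}.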
\begin{rmk}
The function $s_r$ is the logarithm of the length of a defining section $T$ of the (trivial) line bundle associated to $W$, measured with a metric constructed from $T$, namely $e^{-{\mathbb P}si _T}$, where
\[
{\mathbb P}si _T (z) := \Xint- _{B(z,r)} \log |T(\zeta)|^2 {\mathfrak r}ac{\omega ^n(\zeta)}{n!}.
\]
Note that the curvature of $e^{-{\mathbb P}si _T}$ is $\sqrt{-1} \partial \bar \partial {\mathbb P}si _T = {\mathbb U}psilon ^W_r$, which depends only on $W$.
$\partialamond$
\end{rmk}
\begin{rmk}
The notation suppresses the dependence of ${\mathbb P}si _T$ on the radius $r$, but we have not overlooked it. In our work this dependence will be irrelevant as soon as $r$ is sufficiently large, but how large an $r$ is needed depends on $W$. Perhaps this dependence is important in other considerations.
$\partialamond$
\end{rmk}
{\mathscr U}bsection{The smooth case}\label{smooth-case}
In {\mathcal I}te{v-tak}, the following $L^2$-extension theorem was proved.
\begin{d-thm}\label{ot-thm}
Let $X$ be a Stein manifold with K\"ahler form $\omega$, $Z {\mathscr U}bset X$ a smooth hypersurface, $e^{-\eta}$ a singular Hermitian metric for the holomorphic line bundle associated to the smooth divisor $Z$, and $T$ a holomorphic section of this line bundle such that $Z= \{T=0\}$. Assume that $e^{-\eta}|_Z$ is still a singular Hermitian metric, and that
\[
{\mathscr U}p _X |T|^2 e^{-\eta} = 1.
\]
Let $H \to X$ be a holomorphic line bundle with singular Hermitian metric $e^{-\kappa}$ whose curvature $\sqrt{-1} \partial \bar \partial \kappa$ is non-negative in the sense of currents. Suppose also that
\[
\sqrt{-1} \partial \bar \partial \kappa + {\rm Ricci}(\omega) \ge (1+\delta) \sqrt{-1} \partial \bar \partial \eta
\]
for some positive number $\delta$. Then for each section $f \in H^0(Z, H)$ satisfying
\[
\int _{Z} {\mathfrak r}ac{|f|^2 e^{-\kappa}}{|dT|^2 e^{-\eta}} {\mathfrak r}ac{\omega ^{n-1}}{(n-1)!} < +\infty
\]
there is a section $F \in H^0(X,H)$ such that
\[
F|_Z = f \quad \text{and}\quad \int _{X} |F|^2 e^{-\kappa} {\mathfrak r}ac{\omega ^n}{n!} \le {\mathfrak r}ac{C}{\delta} \int _Z {\mathfrak r}ac{|f|^2 e^{-\kappa}}{|dT|^2 e^{-\eta}} {\mathfrak r}ac{\omega ^{n-1}}{(n-1)!},
\]
where the constant $C$ is universal.
\end{d-thm}
\begin{rmk}
Theorem {\rm Re\ }f{ot-thm} can be easily extended to the case of singular $Z$ with essentially the same proof, provided that integration over $Z$ is replaced by integration over $Z_{\rm reg}$.
$\partialamond$
\end{rmk}
Theorem {\rm Re\ }f{ot-thm} looks rather similar to Theorem {\rm Re\ }f{suff-analytic}, except for the denominator $|dT|^2e^{-\eta}$ used on the subvariety $Z$. In fact, let us take $X = {\mathbb C} ^n$, $Z=W$, $\omega = \tfrac{\sqrt{-1}}{2} \partial \bar \partial |z|^2$, $\kappa = \varphi$ and
\[
\eta(z) := {\mathbb P}si _T(z) = \Xint- _{B(z,r)} \log |T|^2 \omega ^n.
\]
If we define{\mathfrak o}otnote{Note that $\rho _r$ is not differentiable at $W$, but $|\partial \rho _r|^2$ is well-defined on $W$ and therefore on ${\mathbb C} ^n$.}
\[
\rho _r :=e^{\tfrac{1}{2} s_r}
\]
and
\[
{\mathcal H} (W,\varphi) := \left\{ f \in {\mathcal O} (W)\ ;\ \int _{W_{\rm reg}}{\mathfrak r}ac{|f|^2e^{-\varphi}}{|\partial \rho _r|^2} \omega ^{n-1} < +\infty\right \},
\]
then we have the following theorem.
\begin{d-thm}\label{suff-anal-sing}
Let $W$ be a singular hypersurface in ${\mathbb C} ^n$ and $\varphi$ a plurisubharmonic function in ${\mathbb C} ^n$. Suppose $D^+_{\varphi}(W) < 1$. Then the restriction map ${\mathcal R} _W : {\mathscr H} ({\mathbb C} ^n,\varphi) \to {\mathcal H} (W,\varphi)$ is surjective.
\end{d-thm}
\begin{prob}
Is the converse of Theorem {\rm Re\ }f{suff-anal-sing} true under the additional assumption \eqref{gbf-weight}?
\end{prob}
As the next lemma shows, Theorem {\rm Re\ }f{suff-analytic} follows from Theorem {\rm Re\ }f{ot-thm} when $W$ is smooth (and uniformly flat), and in fact the latter is more general than the former in the case of smooth $W$, since the curvature hypotheses on $\varphi$ are weaker.
\begin{lem}\label{flat-lem}
If $W$ is smooth and uniformly flat then there is a constant $C_r$ such that
\[
\inf _{x \in W} |\partial \rho _r(x)|^2 \ge C_r.
\]
\end{lem}
\begin{proof}
We choose a point $z \in W$, and we will show that there is a lower bound for $|\partial \rho _r(z)|^2$ that does not depend on $z$. To this end, let us first fix $T \in {\mathcal O} ({\mathbb C} ^n)$ such that $W= \{ T=0\}$ and $dT|_W$ is never zero.
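Let us first spell out, only for the reader's convenience, the elementary identity behind the formula used in the next display (we suppress a universal multiplicative constant coming from the precise normalization of $|\partial\,\cdot\,|^2$; it is irrelevant for the lemma). Since $s_r = \log|T|^2 - {\mathbb P}si_T$, a formal computation off $W$ gives
\[
\partial\rho_r = \tfrac12\, e^{\frac12 s_r}\,\partial s_r
= \tfrac12\, e^{-\frac12 {\mathbb P}si_T}\Big(\frac{|T|}{T}\,\partial T - |T|\,\partial {\mathbb P}si_T\Big),
\]
and letting the base point tend to a point of $W$ (where $T=0$) the second term disappears while $|T|/T$ has modulus one, so that on $W$ the quantity $|\partial\rho_r|^2$ agrees, up to that harmless constant, with $|dT|^2\exp(-{\mathbb P}si_T)$.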
Now, one can write
\begin{eqnarray*}
|\partial \rho _r(z)|^2 &=& |dT(z)|^2 \exp \left ( - \Xint- _{B(z,r)} \log |T|^2 \omega ^n \right ) \\
&=& |dT(z)|^2 \exp \left ( - \Xint- _{B(z,a)} \log |T|^2 \omega ^n \right ) \times \exp \left ( \left ( \Xint- _{B(z,a)} - \Xint- _{B(z,r)} \right ) \log |T|^2 \omega ^n \right ).
\end{eqnarray*}
The two factors in the last line are both independent of the choice of function $T$ that cuts out $W$. For the first factor, we need only use a function that cuts out $W$ in $B(z,a)$, while for the second factor we may use a function that cuts out $W$ in $B(z,r)$. We make choices for each factor, so as to obtain universal lower bounds.
Let us begin with the first factor, which involves the average over $B(z,a)$. By using the function $T= y - f(x)$ given by Proposition {\rm Re\ }f{osv-uf}(G), representing the graph of $W$ near the point $z$ in question, we get a uniform lower bound for this factor. To have such a defining function, it suffices by the uniform flatness hypothesis to take $a$ sufficiently small but independent of $z$.
Let us now turn to the second factor. Of course, this factor is bounded above by $1$, because of the increasing property of (pluri)subharmonic averages, but we are interested in a lower bound. To obtain the latter, we proceed as follows. Consider the closed positive $(1,1)$-current
\[
{\mathbb U}psilon ^W_a (x) := {\mathfrak r}ac{\sqrt{-1}}{2{\mathbb P}i} \partial \bar \partial \Xint- _{B(0,a)} \log |T(\zeta + x)|^2 \omega ^n(\zeta) = [W]* {\mathfrak r}ac{\mathbf{1}_{B(0,a)}}{{\rm Vol}(B(0,a))}(x).
\]
The trace of ${\mathbb U}psilon ^W_a(x)$ is ${\rm Area}(W {\mathcal A}p B(x,a))$, and therefore by Proposition {\rm Re\ }f{osv-uf}(A), ${\mathbb U}psilon ^W_a$ is bounded above by a multiple of the Euclidean metric, the multiple depending only on $a$. Below we are going to consider balls of the form $B(x,a)$ as we let $x$ vary in $B(z,r)$, and therefore we want to work in $B(z,2r)$ for the moment. From the proof of Lemma {\rm Re\ }f{quimbo-trick} (for example, in {\mathcal I}te{lindholm}) we deduce that for $r > 0$ there is a (plurisubharmonic) function $u = u_{z,a,r}$ such that $\sqrt{-1} \partial \bar \partial u = {\mathbb U}psilon ^W_a$ in $B(z,2r)$ and
\[
{\mathscr U}p _{B(z,r+2a)} |u| \le A_{a,r},
\]
where $A_{a,r}$ is independent of $z$.
Now, the function $h(x) := u(x) - \Xint-_{B(x,a)} \log |T|^2 \omega ^n$ is pluriharmonic in $B(z,2r)$, and therefore, choosing $H \in {\mathcal O} (B(z,2r))$ such that $h = 2{\rm Re\ } H$, we have
\[
u(x) = \Xint- _{B(x,a)} \log |Te^{H}|^2 \omega ^n, \quad x\in B(z,r+a).
\]
Letting $T_o := T e^H$, we have
\[
\left | \Xint- _{B(x,a)}\log |T_o|^2 \omega ^n \right | \le A_{a,r}, \qquad x\in B(z,r+a).
\]
By the sub-mean value property, we have
\[
\log |T_o(x)|^2 \le \Xint- _{B(x,a)} \log |T_o|^2 \omega ^n \le A_{a,r},
\]
and thus setting $T_1 := T_o e^{ - \tfrac{1}{2} A_{a,r}}$ we have a function $T_1 \in {\mathcal O} (B(z,r+a))$ such that $\log |T_1(x)|^2 \le 0$ for all $x\in B(z,r+a)$, and therefore
\[
\Xint- _{B(x,r)} \left(- \log |T_1|^2\right) \omega ^n \ge 0.
\]
Moreover,
\[
\Xint- _{B(z,a)} \log |T_1|^2 \omega ^n \ge - \left | \Xint- _{B(z,a)} \log |T_1|^2 \omega ^n \right | \ge - 2A_{a,r}.
\]
The proof is complete.
\end{proof}
\begin{rmk}
Lemma {\rm Re\ }f{flat-lem} clearly fails for non-smooth $W$. It is for this reason that we said we could not use the method of {\mathcal I}te{fv} to prove Theorem {\rm Re\ }f{suff-analytic} in the non-smooth case. Even more, when $W$ is singular it is in fact the case that the spaces ${\mathcal H} (W,\varphi)$ and ${\mathfrak H}(W,\varphi)$ are different. Thus the two results {\rm Re\ }f{suff-analytic} and {\rm Re\ }f{suff-anal-sing} discuss extension from two completely different spaces of holomorphic functions.
$\partialamond$
\end{rmk}
Notice that our proof of Theorem {\rm Re\ }f{suff-analytic} when $W$ is smooth requires only that the weight $\varphi$ be plurisubharmonic, and does not need the stronger positivity hypothesis \eqref{gbf-weight}. On the other hand, in the proof of Theorem {\rm Re\ }f{suff-analytic} for the general case, we will use the strong curvature hypothesis \eqref{gbf-weight}. We see no reason at the moment why Theorem {\rm Re\ }f{suff-analytic} cannot be extended to the case of weights that are only plurisubharmonic, and thus state the following conjecture.
\begin{conj}\label{weak-gbf}
Theorem {\rm Re\ }f{suff-analytic} holds for any plurisubharmonic weight $\varphi$ such that $D^+_{\varphi}(W) <1$.
\end{conj}
We now turn our attention to the singular case. As already indicated, we will use the approach, taken in {\mathcal I}te{osv}, of locally extending the data from $W$, and then patching together the local extensions using H\"ormander's Theorem.
{\mathscr U}bsection{Local extensions}
Let $f \in {\mathfrak H} (W,\varphi)$ be the function to be extended.
\begin{d-thm}\label{local-extensions-with-bounds}
There exists a covering of $W$ by a locally finite collection of balls $\{ B_{\alpha}\}$ each having radius $r \in [\varepsilon _o/2, (a+1)\varepsilon _o]$, with the following additional properties.
\begin{enumerate}
\item There is a number $N$ having the property that each point of ${\mathbb C} ^n$ is contained in at most $N$ balls.
\item Write $f _{\alpha} = f|_{W {\mathcal A}p B_{\alpha}}$. Then for each ${\alpha}$, there exists $F_{\alpha} \in {\mathcal O} (B_{\alpha})$ such that $F_{\alpha} |_{W{\mathcal A}p B_{\alpha}} = f_{\alpha}$ and
\[
\int _{B'_{\alpha}}|F_{\alpha}|^2 e^{-\varphi} \omega ^n \le C \int _{W_{\rm reg}{\mathcal A}p B_{\alpha}} |f|^2 e^{-\varphi} \omega ^{n-1},
\]
where $B'_{\alpha}$ is a ball with the same center as $B_{\alpha}$ and with radius $\lambda r$ for some $\lambda \in (0,1)$ independent of $\alpha$, and the constant $C$ is independent of $\alpha$.
\end{enumerate}
\end{d-thm}
We now embark on the proof of Theorem {\rm Re\ }f{local-extensions-with-bounds}. We begin by dispensing with the uniformity of the number $N$ of open balls containing any point: locally such a number $N$ clearly exists, and by uniform flatness the local picture is the same everywhere.
Next, observe that by the uniform flatness hypothesis we can cover $W$ by open balls $B_{\alpha}$ of a fixed radius (which we may shrink a few times below) such that for each $\alpha$, $W {\mathcal A}p B_{\alpha}$ is a finite union of smooth hypersurfaces cut out by holomorphic functions $T_1,...,T_k$, where $k=k_{\alpha} \le N$ for some positive integer $N$ independent of $\alpha$. By the definition of uniform flatness, particularly Property (S) of Definition {\rm Re\ }f{u-flat-defn}, as well as Property (G) of Lemma {\rm Re\ }f{flat-properties}, we may assume, perhaps after decreasing the radius of the balls $B_{\alpha}$ if necessary, that
\begin{equation}\label{cauchy-est-tj-bounds}
{\mathfrak r}ac{1}{C} \le |dT_j(z)| \le C, \quad z\in B_{\alpha}
\end{equation}
for some constant $C > 0$ independent of $\alpha$ and $j$. Indeed, since the $T_j$ are holomorphic, the result follows from a simple application of the Cauchy estimates to the function whose graph is cut out by $T_j$.
Uniform flatness also means that the angles between the unit vectors orthogonal to the branches $W_i := \{T_i=0\}$ at the center of $B_{\alpha}$ are uniformly bounded away from zero (the uniformity being with respect to ${\alpha}$). For our purposes, this property takes the following form: there exists a constant $\tilde C$ independent of the center of $B_{\alpha}$, such that for all $1 \le i \neq j \le k$,
\begin{equation}\label{dt-lb}
\left | dT_j(v) \right | \ge \tilde C |v| \quad \text{for all } v\in T^{1,0}_{W_i} -\{0\}.
\end{equation}
We have the following lemma.
\begin{lem}\label{vanish=divide}
Define the variety
\[
W_{\alpha} := \bigcup _{1 \le j \le k} \{T_j=0\} {\mathscr U}bset B_{\alpha}.
\]
Suppose $0 \in W_{\alpha}$ is a singular point, and that $F \in {\mathcal O} (B_{\alpha})$ vanishes on $W_{\alpha}$. Then for any multi-index $I=(i_1,...,i_{j-1})$ of distinct indices with $1 \le i_{\ell} \le k$, $F$ is divisible by the product $T_{i_1}{\mathcal D}ot ...{\mathcal D}ot T_{i_{j-1}}$.
\end{lem}
\begin{proof}
It suffices to prove that
\[
{\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}} \in {\mathcal O} (B_{\alpha}).
\]
Indeed, if ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_k}$ is holomorphic, then we can eliminate any undesired factor $T_{\ell}$ from the denominator simply by multiplying by $T_{\ell}$.
To prove the holomorphicity of the latter, observe that ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}}$ is holomorphic at all the smooth points of $W_{\alpha}$, i.e., the points of $W_i$ where $T_j \neq 0$ for all $j \neq i$. Since ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}}$ is clearly holomorphic away from $W_{\alpha}$, it follows that the poles of ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}}$ are contained in the set of points of $W_{\alpha}$ where at least two of the $T_j$ vanish. But since any two branches of $W_{\alpha}$ intersect transversally, the polar set of ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}}$ is contained in a subvariety of codimension at least $2$. On the other hand, the polar set of ${\mathfrak r}ac{F}{T_1{\mathcal D}ots T_{k}}$ is a divisor, and therefore it must be empty.
\end{proof}
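The simplest instance, which we record only to make the statement concrete, is the pair of transverse branches $T_1 = x$, $T_2 = y$ in ${\mathbb C}^2$: if $F$ is holomorphic near the origin and vanishes on $\{xy=0\}$, then
\[
F(x,0)\equiv 0 \ \Longrightarrow\ F = y\,F_1, \qquad F(0,y)=y\,F_1(0,y)\equiv 0 \ \Longrightarrow\ F_1 = x\,F_2,
\]
so that $F$ is divisible by $T_1T_2 = xy$.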
For ease of notation, we now drop the subscript $\alpha$ and work on a fixed ball $B$, whose center we may assume, without loss of generality, is the origin.
\begin{lem}\label{denom-clear}
Let $g :W_j \to {\mathbb C}$ be a holomorphic function such that ${\mathfrak r}ac{g}{T_1{\mathcal D}ot \dots T_{j-1}} \in {\mathcal O} (W_j)$. Then there exist positive constants $C < {\mathfrak r}ac{1}{\varepsilon_o}$ and $\widehat C$, independent of the center of $B$, such that, with $B^{k,\varepsilon_o}$ denoting the ball whose center is the center of $B$ and whose radius is $(1-C\varepsilon_o)^k$ times that of $B$,
\begin{equation}\label{local-est-for-denom-clear}
\int _{W_j {\mathcal A}p B^{j-1,\varepsilon_o}} {\mathfrak r}ac{|g|^2e^{-\varphi}}{|T_1 {\mathcal D}ots T_{j-1}|^2} \omega ^{n-1} \le \widehat C \int _{W_j} |g|^2 e^{-\varphi} \omega ^{n-1}.
\end{equation}
\end{lem}
\begin{proof}
By Lemma {\rm Re\ }f{quimbo-trick} we may assume, upon replacing $g$ by $ge^{G}$ for an appropriate holomorphic function $G$ defined on a neighborhood of the closure of $B$, that $\varphi = 0$.
Suppose first that $j=2$. We can assume without loss of generality (perhaps after slightly shrinking the ball $B$ at the outset by a small factor independent of the center of $B$) that $W_2 {\mathcal A}p B$ is the unit ball of ${\mathbb C}^{n-1}$ with coordinates given by $T_1,z^2,...,z^{n-1}$. Uniform flatness renders all of the scaling uniform in the center of $B$. Let $P \in W_2$ satisfy $|P|=1-2\varepsilon_o$. If $|T_1(P)| \ge {\mathfrak r}ac{\varepsilon _o}{2^{n+1}}$ then \[
{\mathfrak r}ac{|g(P)|}{|T_1(P)|} \le {\mathfrak r}ac{2^{n+1}}{\varepsilon _o} |g(P)|.
\]
On the other hand, if $|T_1(P)| < {\mathfrak r}ac{\varepsilon _o}{2^{n+1}}$ then by the Cauchy formula we have
\begin{eqnarray*}
\left |{\mathfrak r}ac{g(P)}{T_1(P)}\right | &=& {\mathfrak r}ac{1}{2{\mathbb P}i} \left | \int _{|T_1-T_1(P)|= {\mathfrak r}ac{\varepsilon_o}{2^n}}{\mathfrak r}ac{g(T_1,z^2(P),...,z^{n-1}(P))}{T_1(T_1-T_1(P))}dT_1 \right |\\
&\le &{\mathfrak r}ac{1}{2{\mathbb P}i}\int _{|T_1-T_1(P)|= {\mathfrak r}ac{\varepsilon_o}{2^n}} {\mathfrak r}ac{|g(T_1,z^2(P),...,z^{n-1}(P))|}{|T_1-T_1(P)|^2\left (1-{\mathfrak r}ac{|T_1(P)|}{|T_1-T_1(P)|}\right )} |dT_1|\\
&\le & {\mathfrak r}ac{1}{2{\mathbb P}i}\int _{|T_1-T_1(P)|= {\mathfrak r}ac{\varepsilon_o}{2^n}} {\mathfrak r}ac{2 |g(T_1,z^2(P),...,z^{n-1}(P))|}{|T_1-T_1(P)|}{\mathfrak r}ac{|dT_1|}{|T_1-T_1(P)|}\\
&\le & {\mathfrak r}ac{2^{n+1}}{\varepsilon _o} {\mathscr U}p \{ |g(T_1(Q),z^2(P),...,z^{n-1}(P)))|\ ;\ |T_1(Q)-T_1(P)| \le 2^{-n}\varepsilon _o\}.
\end{eqnarray*}
Thus
\[
{\mathfrak r}ac{|g(P)|}{|T_1(P)|} \le {\mathfrak r}ac{2^{n+1}}{\varepsilon _o}{\mathscr U}p \{ |g(Q)|\ ;\ {\mathscr Q}rt{|T_1(Q)-T_1(P)|^2+|z'(Q)-z'(P)|^2} \le 2^{-n}\varepsilon _o\},
\]
where $z'=(z^2,...,z^{n-1})$. By the maximum principle,
\[
{\mathscr U}p _{B^{1,2\varepsilon_o} {\mathcal A}p W_2} {\mathfrak r}ac{|g|}{|T_1|} \le {\mathfrak r}ac{2^{n+1}}{\varepsilon _o}\, {\mathscr U}p _{B^{1,\varepsilon _o}{\mathcal A}p W_2} |g|.
\]
From these sup-norm estimates the result easily follows, and we have the case $j=2$. If we write
\[
{\mathfrak r}ac{g}{T_1{\mathcal D}ots T_{j-1}} = {\mathfrak r}ac{g/(T_2{\mathcal D}ots T_{j-1})}{T_1},
\]
the remaining cases follow by induction.
\end{proof}
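The mechanism of the proof is already visible in one complex variable; the following elementary observation is included only as orientation and is not used below. If $g$ is holomorphic on the unit disc and $g/t$ extends holomorphically across $t=0$, then for $0<\varepsilon<\tfrac12$,
\[
\sup_{|t|\le 1-2\varepsilon}\left|\frac{g(t)}{t}\right| \ \le\ \frac{2}{\varepsilon}\,\sup_{|t|\le 1-\varepsilon}|g(t)| ;
\]
indeed, when $|t|\ge \varepsilon/2$ the bound is immediate, and when $|t|<\varepsilon/2$ the maximum principle applied to $g/t$ on the disc $\{|\tau|\le\varepsilon\}$ gives $|g(t)/t|\le \varepsilon^{-1}\max_{|\tau|=\varepsilon}|g(\tau)|$. Passing from such sup-norm bounds to the weighted $L^2$-estimate \eqref{local-est-for-denom-clear} is then a matter of the sub-mean value property, exactly as in the proof above.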
\begin{proof}[Proof of Theorem {\rm Re\ }f{local-extensions-with-bounds}]
Let
\[
f_i := f|_{W_i}.
\]
Consider first the function $f_1$. By \eqref{dt-lb} we have
\[
\int _{W_1} {\mathfrak r}ac{|f_1|^2e^{-\varphi}}{|dT_1|^2} \omega ^{n-1} \lesssim \int _{W_1} |f_1|^2e^{-\varphi} \omega ^{n-1} < +\infty.
\]
By Theorem {\rm Re\ }f{ot-thm} with $\eta = 0$ and $\omega$ the Euclidean metric in ${\mathbb C} ^n$, there exists a holomorphic extension $F_1$ of $f_1$ to $B$ such that
\[
\int _{B} |F_1|^2e^{-\varphi} \omega ^n \lesssim \int _{W_1} |f_1|^2e^{-\varphi} \omega ^{n-1}.
\]
Let $F^1 := F_1$, and notice that
\[
F^1|_{W_1} = f_1 \quad \text{and} \quad \int _B |F_1|^2 e^{-\varphi} \omega ^n \lesssim \int _{W{\mathcal A}p B} |f|^2e^{-\varphi} \omega ^{n-1}.
\]
We now argue by induction. Suppose we have found $F^{j-1} \in {\mathcal O} (B)$ such that
\[
F^{j-1}|_{W_i} = f_i \quad \text{for} \quad 1 \le i \le j-1
\]
and
\[
\int _{B} |F^{j-1}|^2 e^{-\varphi} \omega ^n \lesssim \int _{B{\mathcal A}p W} |f|^2e^{-\varphi} \omega ^{n-1}.
\]
Consider the function
\[
f_j^* := f_j - F^{j-1}|_{W_j}.
\]
We observe that for each $1\le i \le j-1$,
\begin{equation}\label{vanish-on-i}
f_j^* |_{W_i {\mathcal A}p W_j} = f_j|_{W_i {\mathcal A}p W_j} - f_i |_{W_i {\mathcal A}p W_j} = 0.
\end{equation}
The last equality follows because $f$ is assumed to have a local extension to a neighborhood of $W {\mathcal A}p B$ in $B$. It follows from Lemma {\rm Re\ }f{vanish=divide} that
\begin{equation}\label{quotient-fn}
{\mathfrak r}ac{f_j^*}{{\mathbb P}rod _{i=1} ^{j-1} T_i} \in {\mathcal O} (W_j - ((W_1{\mathcal A}p W_j) {\mathcal U}p ... {\mathcal U}p (W_{j-1} {\mathcal A}p W_j)))
\end{equation}
extends holomorphically to $W_j$, and thus by Lemma {\rm Re\ }f{denom-clear} the extension satisfies the estimate
\begin{equation}\label{claim-est}
\int _{W_j{\mathcal A}p B^{j,\varepsilon_o}} {\mathfrak r}ac{|f^*_j|^2e^{-\varphi}}{|T_1{\mathcal D}ot \dots {\mathcal D}ot T_{j-1}|^2{|dT_j|^2}} \omega ^{n-1} \lesssim \int _{W_j} |f_j^*|^2e^{-\varphi} \omega ^{n-1} < +\infty.
\end{equation}
By Theorem {\rm Re\ }f{ot-thm} there exists a holomorphic function $F_j^* \in {\mathcal O} (B^{j,\varepsilon})$ such that
\[
F_j^* |_{W_j} = f^*_j \quad \text{and} \quad \int _{B^{j,\varepsilon}} {\mathfrak r}ac{|F_j^*|^2e^{-\varphi}}{|T_1 {\mathcal D}ot \dots {\mathcal D}ot T_{j-1}|^2} \omega ^n \lesssim \int _{W} |f|^2e^{-\varphi} \omega ^{n-1}.
\]
In particular, $F_j^*|_{W_i} = 0$ for all $1 \le i < j$. Let
\[
F^j := F^{j-1} + F^*_j.
\]
Then for $1 \le i < j$,
\[
F^j|_{W_i} = F^{j-1}|_{W_i} = f_i.
\]
Moreover,
\[
F^j|_{W_j} = F^{j-1}|_{W_j} + F_j ^*|_{W_j} = f_j.
\]
Finally,
\begin{eqnarray*}
\int _{B^{j,\varepsilon}} |F^j|^2 e^{-\varphi} \omega^n & \lesssim & \int _{B^{j-1,\varepsilon}} |F^{j-1}|^2 e^{-\varphi} \omega ^n + \int _{B^{j,\varepsilon}} |F^*_j|^2 e^{-\varphi} \omega ^n \\
& \lesssim & \int _{B^{j,\varepsilon}} |F^{j-1}|^2 e^{-\varphi} \omega ^n + \int _{B^{j,\varepsilon}} {\mathfrak r}ac{|F^*_j|^2 e^{-\varphi}}{|T_1 {\mathcal D}ot \dots {\mathcal D}ot T_{j-1}|^2} \omega ^n \\
& \lesssim & \int _{W{\mathcal A}p B} |f|^2 e^{-\varphi} \omega ^{n-1}.
\end{eqnarray*}
By induction on $j$ we obtain the existence of a function $F:= F^k \in {\mathcal O} (B)$ which evidently satisfies the desired conclusions. This completes the proof.
\end{proof}
Now that we have found our local extensions with good bounds, we patch them together.
{\mathscr U}bsection{The patching process}\label{patching}
We begin with the balls $B_{\alpha}$ and functions $F_{\alpha}$ of Theorem {\rm Re\ }f{local-extensions-with-bounds}. We can assume that our open cover $\{B_{\alpha}\}$ is such that
\[
B_{\alpha} = B(w_{\alpha} ,2a_{\alpha}\varepsilon) \quad \text{and} \quad \bigcup _{\alpha} B(w_{\alpha},a_{\alpha}\varepsilon) {\mathscr U}pset W.
\]
Here $a_{\alpha}=a$ if $w_{\alpha} \in W_{\rm sing}$ and $a_{\alpha} = 1$ if $w_{\alpha} \in W_{\rm reg}$. For simplicity of exposition, let
\[
\widehat B_{\alpha} := B(w_{\alpha},a_{\alpha}\varepsilon )
\]
be the notation for the `half-balls'. We assume the index ${\alpha}$ begins at $1$, and add the set
\[
B_0 = \widehat B_0 := {\mathbb C} ^n - \left (U_{\varepsilon}(W) {\mathcal U}p U_{a\varepsilon}(W_{\rm sing})\right )
\]
to the open cover, to obtain an open cover of ${\mathbb C} ^n$. We let $F_0 = 0$. Then we fix a partition of unity $\{ {\mathbb P}hi _{\alpha} \}$ subordinate to the cover $\{ \widehat B_{\alpha}\}$, which we will assume has the property
\[
{\mathscr U}m _{\alpha} |d {\mathbb P}hi _{\alpha}|^2 \le C.
\]
We seek a global holomorphic function $F$ such that
\[
F|_W= f \quad \text{and} \quad \int _{{\mathbb C} ^n} |F|^2 e^{-\varphi} \omega ^n < +\infty.
\]
To this end, let
\[
G_{\alpha\beta} = F_{\alpha} - F_{\beta} \in {\mathcal O} (\widehat B_{\alpha} {\mathcal A}p \widehat B_{\beta}).
\]
Let us set
\begin{equation}\label{psi-choice}
{\mathbb P}si := \varphi _r + s_r.
\end{equation}
We have
\[
G_{\alpha \beta}|_{W{\mathcal A}p \widehat B_{\alpha} {\mathcal A}p \widehat B_{\beta}} \equiv 0 \quad \text{and} \quad \int _{\widehat B_{\alpha} {\mathcal A}p \widehat B_{\beta}} |G_{\alpha \beta}|^2 e^{-{\mathbb P}si} \omega ^n \lesssim \int _{W_{\rm reg} {\mathcal A}p B_{\alpha} {\mathcal A}p B_{\beta}}|f|^2 e^{-\varphi} \omega ^{n-1}.
\]
The vanishing of the restriction is obvious, and the inequality is established in exactly the same way as in the proof of Lemma 4.3 in {\mathcal I}te{osv}. We also take this opportunity to observe that
\begin{equation}\label{pre-hormander}
\sqrt{-1} \partial \bar \partial {\mathbb P}si \ge \sqrt{-1} \partial \bar \partial \varphi _r - {\mathbb U}psilon ^W_r = \delta ' \sqrt{-1} \partial \bar \partial \varphi _r + (1-\delta') \sqrt{-1} \partial \bar \partial \varphi _r - {\mathbb U}psilon^W_r \ge \delta ' \omega
\end{equation}
for some $\delta' >0$ sufficiently small; here we used that $D^+_{\varphi}(W)<1$, so that $(1-\delta')\sqrt{-1} \partial \bar \partial \varphi _r - {\mathbb U}psilon^W_r \ge 0$ for $\delta'$ small and $r$ large, together with the uniform positivity of $\sqrt{-1} \partial \bar \partial \varphi _r$ coming from \eqref{gbf-weight}.
We claim that there are functions $G_{\alpha}$ on $\widehat B_{\alpha}$ such that
\begin{equation}\label{cousin-correction}
G_{\alpha} |_{W {\mathcal A}p B_{\alpha}} \equiv 0, \quad \int _{\widehat B_{\alpha}}|G_{\alpha}|^2 e^{-\varphi} \omega ^n \lesssim \int _{W_{\rm reg}{\mathcal A}p B_{\alpha}} |f|^2 e^{-\varphi} \omega ^{n-1} \quad \text{and} \quad G_{\alpha} \in {\mathcal O} (\widehat B_{\alpha}).
\end{equation}
The functions
\[
\tilde G_{\alpha} := {\mathscr U}m _{\beta} G_{\alpha \beta} {\mathbb P}hi _{\beta}
\]
have the first two of the properties \eqref{cousin-correction}. Moreover, we have
\[
\bar \partial (\tilde G_{\alpha} - \tilde G_{\beta}) = \bar \partial G_{\alpha \beta} = 0 \quad \text{on }\widehat B_{\alpha} {\mathcal A}p \widehat B_{\beta},
\]
and thus we can define the global $\bar \partial$-closed $(0,1)$-form
\[
H = \bar \partial \tilde G_{\alpha} = {\mathscr U}m _{\beta} G_{\alpha \beta} \bar \partial {\mathbb P}hi _{\beta} \quad \text{on }\widehat B_{\alpha}.
\]
We calculate that
\begin{eqnarray*}
\int _{{\mathbb C} ^n} |H|^2 e^{-{\mathbb P}si} \omega ^n &\lesssim & {\mathscr U}m_{\alpha \beta} \int _{\widehat B_{\alpha} {\mathcal A}p \widehat B_{\beta}} |G_{\alpha \beta}|^2 |d{\mathbb P}hi _{\beta}|^2 \omega ^n \\
&\lesssim & {\mathscr U}m_{\alpha \beta} \int _{W_{\rm reg} {\mathcal A}p B_{\alpha} {\mathcal A}p B_{\beta}} |f|^2 e^{-\varphi} \omega ^{n-1} \\
&\lesssim & \int _{W_{\rm reg}} |f|^2 e^{-\varphi} \omega ^{n-1}.
\end{eqnarray*}
Since the right side is finite, H\"ormander's Theorem (which, in view of \eqref{pre-hormander}, may be used) provides a function $u$ satisfying
\[
\bar \partial u = H \quad \text{and} \quad \int _{{\mathbb C} ^n} |u|^2 e^{-\varphi}\omega ^{n} \le \int _{{\mathbb C} ^n} |u|^2 e^{-{\mathbb P}si}\omega ^{n} < +\infty.
\]
The second estimate implies that $u|_W\equiv 0$: on each $\widehat B_{\alpha}$ the function $u$ differs from the smooth function $\tilde G_{\alpha}$ by a holomorphic function, so $u$ is continuous, and if $u$ were nonzero at a point of $W$ then the non-integrability of $e^{-s_r}$ (Lemma {\rm Re\ }f{s-prop}(c)) would contradict the finiteness of $\int_{{\mathbb C}^n} |u|^2 e^{-{\mathbb P}si}\omega^n$. The functions
\[
G_{\alpha} := \tilde G_{\alpha} - u
\]
are therefore holomorphic and satisfy \eqref{cousin-correction}, and the function
\[
F:= F_{\alpha} - G_{\alpha} \quad \text{on }\widehat B_{\alpha}
\]
satisfies
\[
F|_W = f \quad \text{and} \quad \int _{{\mathbb C} ^n}|F|^2 e^{-\varphi} \omega ^n < +\infty.
\]
The proof of Theorem {\rm Re\ }f{suff-analytic} is thus complete.
\qed
{\mathscr E}ction{Non-uniformly flat hypersurfaces may have extension}\label{non-necess}
In this last section, we give a negative answer to the question of whether uniform flatness is necessary for the surjectivity of ${\mathscr R} _W$. In fact, there are two cases one should treat separately. The first case is that of singular hypersurfaces, and the second that of smooth hypersurfaces.
{\mathscr U}bsection{Non-necessity for singular hypersurfaces}
The content of this section is an observation of Ohsawa {\mathcal I}te{o-ober} to the effect that the restriction map ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^n, \varphi) \to {\mathfrak H} (W,\varphi)$ may be surjective even if $W$ is not uniformly flat. In fact, let $W = \{ T=0\}$ be a hypersurface with an isolated singularity at $0$, i.e.,
\[
T(z)= dT(z)= 0 \iff z=0.
\]
Let $f \in {\mathfrak H} (W, \varphi)$. Then there is a polynomial $P \in {\mathbb C} [z^1,...,z^n]$ such that
\[
\int _{W_{\rm reg}} {\mathfrak r}ac{|f-P|^2}{|dT|^2}e^{-\varphi} \omega ^{n-1} < +\infty.
\]
Assume now that
\[
D^+_{\varphi}(W) < 1 \quad \text{and} \quad \int _{{\mathbb C} ^n} |P|^2 e^{-\varphi} \omega ^n < +\infty
\]
(which holds, for example, if $W$ is algebraic and $\varphi(z) = |z|^2$). Then Theorem {\rm Re\ }f{suff-anal-sing} implies that there is a function $F_o \in {\mathscr H} ({\mathbb C} ^n,\varphi)$ such that
\[
F_o |_W = f- P|_W.
\]
It follows that $F:= F_o+P$ is the desired extension.
{\mathscr U}bsection{Non-necessity for smooth hypersurfaces}
The main result of this section is the following theorem.
\begin{thm}\label{non-necess-smooth}
The embedded smooth curve
\[
W:= \{ (x,y) \in {\mathbb C} ^2\ ;\ xy{\mathscr I}n y = 1\} \hookrightarrow {\mathbb C} ^2
\]
is not uniformly flat, but nevertheless ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^2, |{\mathcal D}ot |^2) \to {\mathfrak H} (W,|{\mathcal D}ot |^2)$ is surjective.
\end{thm}
Near the points $(0,2{\mathbb P}i n)$ for $n \in {\mathbb Z}$ with $|n| >>0$, the curve $W$ looks a lot like the curve
\[
V_n := \{ (x,y)\ |\ x(y-2{\mathbb P}i n) = {\mathfrak r}ac{1}{2{\mathbb P}i n}\}.
\]
From this approximation of $W$ by such model curves, it follows immediately that $W$ is not uniformly flat. On the other hand, the model curve $V_n$ `converges', as $|n|\to\infty$, to a singular curve which, by Theorem {\rm Re\ }f{suff-analytic}, induces a surjective restriction map. Thus if we can establish some continuity in the extension process, we will be able to prove the claimed surjectivity of ${\mathscr R} _W$.
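To make the non-uniform-flatness assertion concrete, we record the following elementary observation (only the conclusion that uniform flatness fails is used). The model curve $V_n$ is invariant under the point symmetry about $c_n := (0,2\pi n)$, and
\[
{\rm dist}(c_n, V_n)^2 = \min_{t\neq 0}\Big(|t|^2 + \frac{1}{(2\pi n)^2|t|^2}\Big) = \frac{1}{\pi |n|} \longrightarrow 0 \qquad (|n|\to\infty).
\]
Hence, for any fixed $\varepsilon>0$ and all $|n|$ large, the point $c_n$ lies in $U_{\varepsilon}(V_n)$ and, by the symmetry, has at least two nearest points on $V_n$; so $U_{\varepsilon}(V_n)$ cannot be a tubular neighborhood with $\varepsilon$ independent of $n$, which is (one formulation of) what uniform flatness would require; see Definition \ref{u-flat-defn}. Since $W$ is uniformly close to $V_n$ near $(0,2\pi n)$ for $|n|$ large, the same conclusion holds for $W$.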
The approach is to begin by establishing the desired continuity in the model case, and then patching together perturbations of the model extensions.
{\mathscr U}bsubsection{\sf The model case}
The continuity of the extension process we seek in this model case may be expressed by saying that there is an extension operator ${\mathscr E} _{s} : {\mathfrak H} (W_s, |{\mathcal D}ot |^2) \to {\mathscr H} ({\mathbb C} ^2, |{\mathcal D}ot |^2)$, where $W_s := \{(x,y)\in {\mathbb C}^2\ ;\ xy=s\}$ denotes the model curve, whose square norm
\[
||{\mathscr E} _s||^2 = {\mathscr U}p \left \{ \int _{{\mathbb C} ^2}|{\mathscr E} _sf|^2 e^{-|{\mathcal D}ot |^2}\omega ^2\ ;\ \int _{W_s}|f|^2 e^{-|{\mathcal D}ot |^2}\omega=1 \right \}
\]
is bounded independently of $s$. The extension operator in question is going to be the operator of minimal-norm extension. To bound it, it suffices to exhibit {\it any} extension operator with the desired bounds.
We now define an extension operator that works. Consider the map
\[
j : {\mathbb C} ^* \to W_s ; t \mapsto (t, st^{-1}).
\]
We pull back functions and integrals on $W_s$ to functions and integrals on ${\mathbb C} ^*$. If we have $f \in {\mathfrak H} (W_s, |{\mathcal D}ot |^2)$ then with
\[
j ^*f (t) = {\mathscr U}m _{j \in {\mathbb Z}} a_j t^j
\]
we have
\[
||f||^2 = \int _{{\mathbb C} ^*} \left ( {\mathscr U}m _{j,k\in {\mathbb Z}}a_j\bar a_k t^j \bar t^k \right )(1+ |s|^2|t|^{-4}) e^{-(|t|^2 + |s|^2 |t|^{-2})}{\mathfrak r}ac{\sqrt{-1}}{2} dt \wedge d\bar t= {\mathscr U}m _{j \in {\mathbb Z}} |a_j|^2 C_{j,|s|},
\]
where
\[
C_{j,|s|} = {\mathbb P}i \int _0 ^{\infty}r^{2j}(1+|s|^2r^{-4}) e^{-(r^2 + |s|^2 r^{-2})}r dr.
\]
We will need lower bounds for the constants $C_{j,|s|}$. For positive $j$, it is easy to estimate these constants, and the symmetry of the curve $W_s$ will give us a handle on the case of negative $j$.
\begin{lem}\label{laurent-constants}
The constants $C_{j,|s|}$ have the following properties.
\begin{enumerate}
\item[(i)] For $j \ge 0$ and $|s|$ sufficiently small, $C_{j,|s|} \ge {\mathfrak r}ac{{\mathbb P}i}{2} (j!)$.
\item[(ii)] For all $j$ and $s$, $C_{j,|s|} = |s|^{2j}C_{-j,|s|}$.
\end{enumerate}
\end{lem}
\begin{proof}
First, let $j \ge 0$. Then
\begin{eqnarray*}
C_{j,|s|} &\le & {\mathbb P}i \int _0 ^{\infty} r^{2j+1}e^{-r^2} dr + {\mathbb P}i \int _0 ^{\infty} (r^{2j}e^{-r^2}) e^{-|s|^2/r^2} {\mathfrak r}ac{|s|^2}{r^3} dr\\
& \le & {\mathbb P}i (j!) + {\mathfrak r}ac{{\mathbb P}i}{2} j^je^{-j}\int _0^{\infty} e^{-u} du\\
& \le & 2{\mathbb P}i (j!)
\end{eqnarray*}
Thus by the Dominated Convergence Theorem,
\[
\lim _{|s| \to 0} C_{j,|s|} = {\mathbb P}i (j!),
\]
and Property (i) is proved. Property (ii) is obtained by substituting $r \mapsto |s|/r$ in the integral.
\end{proof}
We now define the extension of $f$ to be the holomorphic function $F$ given by the Taylor series
\[
F(x,y) = a_0 + {\mathscr U}m _{n > 0} (a_n x^n + s^{-n}a_{-n} y^n),
\]
where
\[
j^*f (t) = {\mathscr U}m _{n \in {\mathbb Z}} a_n t^n.
\]
Note that $j^* F= j^*f$ (indeed $j^*(x^n) = t^n$ and $j^*(y^n) = s^n t^{-n}$), so that $F$ extends $f$, and that
\begin{eqnarray*}
||F||^2 &:=& \int _{{\mathbb C} ^2} |F(x,y)|^2 e^{-(|x|^2+|y|^2)}{\mathfrak r}ac{\sqrt{-1}}{2} dx \wedge d\bar x \wedge {\mathfrak r}ac{\sqrt{-1}}{2} dy \wedge d\bar y \\
& = & \int _0 ^{\infty}\!\!\!\! \int _0 ^{\infty}\!\!\!\! \int _0 ^{2{\mathbb P}i} \!\!\!\! \int _0 ^{2{\mathbb P}i}\!\!\!\! (a_0 + {\mathscr U}m _{n > 0} (a_n e^{\sqrt{-1} n\theta }r^n + s^{-n}a_{-n} e^{\sqrt{-1} n{\mathbb P}hi} \rho ^n))\times \\
&& \qquad (\bar a_0 + {\mathscr U}m _{m > 0} (\bar a_m e^{-\sqrt{-1} m\theta} r^m + \bar s^{-m}\bar a_{-m}e^{-\sqrt{-1} m{\mathbb P}hi}\rho^m)) d\theta d{\mathbb P}hi e^{-r^2} rdr e^{-\rho ^2}\rho d\rho\\
& = & 4 {\mathbb P}i ^2 \int _0 ^{\infty}\!\!\!\! \int _0 ^{\infty}\!\!\!\! \left (|a_0|^2 + {\mathscr U}m _{n>0} |a_n|^2 r^{2n}+ |s|^{-2n}|a_{-n}|^2 \rho ^{2n} \right )e^{-r^2} rdr e^{-\rho ^2}\rho d\rho\\
& = & {\mathbb P}i ^2\left ( {\mathscr U}m _{n \ge 0} |a_n|^2 n! + {\mathscr U}m _{n < 0} |a_n|^2 |s|^{2n} (-n)! \right )\\
&& \quad \le 2 {\mathbb P}i {\mathscr U}m _{n \in {\mathbb Z}} C_{n,|s|} |a_n|^2 = 2 {\mathbb P}i ||f||^2.
\end{eqnarray*}
The inequality is of course a consequence of Lemma {\rm Re\ }f{laurent-constants}. We have thus proved the following lemma.
\begin{lem}\label{model-extension-estimate}
There exists a positive number $r_o$ such that for $0<|s| < r_o$, the minimal extension operator ${\mathscr E} _s : {\mathfrak H} (W_s,|{\mathcal D}ot |^2)\to {\mathscr H} ({\mathbb C} ^2, |{\mathcal D}ot |^2)$ satisfies
\[
||{\mathscr E} _s|| \le{\mathscr Q}rt{2 {\mathbb P}i}.
\]
\end{lem}
{\mathscr U}bsubsection{\sf Clipping and translating the model}
Let $c=(a,b) \in {\mathbb C}^2$ and $\varepsilon > 0$. Define
\[
W_{s}(\varepsilon;c) := \left \{ (x,y)\in {\mathbb C}^2\ ;\ (x-a)(y-b)=s \text{ and }
\max (|x-a|, |y-b|)< \varepsilon \right \} {\mathscr U}bset {\mathbb D}elta ^2 _{c}(\varepsilon),
\]
where ${\mathbb D}elta ^2 _{c}(\varepsilon)=\{(x,y) \in {\mathbb C} ^2\ ; \max( |x-a|, |y-b|) < \varepsilon \}$. We will now modify the calculation for the model to show the following.
\begin{lem}\label{clipped-model-est}
For each $f \in {\mathcal O} (\overline{W_s(\varepsilon;c)})$ there exists $F \in {\mathcal O} ({\mathbb D}elta ^2_{c}(\varepsilon))$ such that
\[
F|_{W_s(\varepsilon;c)}= f \quad \text{and} \quad \int _{{\mathbb D}elta ^2_{c}(\varepsilon)} |F(z)|^2 e^{-|z|^2}dV(z) \le 2{\mathbb P}i \int _{W_s(\varepsilon;c)} |f(z)|^2 e^{-|z|^2} \omega (z).
\]
\end{lem}
\begin{proof}
To begin, we translate $W_s(\varepsilon;c)$ to the origin with the change of variables
\[
\zeta = (\xi,\eta) = (x-a,y-b)=z-c.
\]
Then
\[
\int _{W_s(\varepsilon;c)} |f(z)|^2 e^{-|z|^2} \omega (z) = \int _{W_s(\varepsilon;0)} |f(\zeta+c) e^{-\zeta {\mathcal D}ot \bar c - \tfrac{1}{2} |c|^2} |^2 e^{-|\zeta|^2} \omega (\zeta).
\]
Thus the function
\[
f_c(\zeta) = f(\zeta +c)e^{-\zeta {\mathcal D}ot \bar c - \tfrac{1}{2}|c|^2}
\]
is square integrable on the part of the model $W_s$ contained in ${\mathbb D}elta ^2_0(\varepsilon)$. An extension of this function to a function $F_c$ that is square integrable on ${\mathbb D}elta^2 _0(\varepsilon)$ would then lead to the extension
\[
F (z) = F_c(z-c)e^{(z-c) {\mathcal D}ot \bar c + \tfrac{1}{2}|c|^2},
\]
and the latter is square integrable on ${\mathbb D}elta ^2_c(\varepsilon)$. Thus we have reduced to the situation $c=0$.
The remainder of the proof proceeds in a manner directly analogous to the proof of Lemma {\rm Re\ }f{model-extension-estimate}, except that we work inside the bidisk ${\mathbb D}elta ^2_0(\varepsilon)$. $W_s(\varepsilon;0)$ is now parameterized by an annulus $A_{\varepsilon,|s|} = \{ |s|/\varepsilon < |t| < \varepsilon \}${\mathfrak o}otnote{Note that we must have $\varepsilon^2 > |s|$, but this must be the case if $W_s(\varepsilon;0) \neq \emptyset$} but the parameterization is still the same map, namely
\[
t \mapsto (t,s/t).
\]
The integrals to be estimated, instead of being over the entire plane, are now constrained to lie in $A_{\varepsilon,|s|}$, but the integrands are rather concentrated in this annulus anyway. We leave it to the reader to check that if
\[
j^*f (t) = {\mathscr U}m _{n \in {\mathbb Z}} a_n t^n
\]
then
\[
\int _{W_s(\varepsilon;0)} |f|^2 e^{-|z|^2} = {\mathscr U}m _{n \in {\mathbb Z}} C_{n,|s|}(\varepsilon) |a_n|^2
\]
where
\[
C_{j,|s|}(\varepsilon) = {\mathbb P}i \int _{|s|/\varepsilon} ^{\varepsilon}r^{2j}(1+|s|^2r^{-4}) e^{-(r^2 + |s|^2 r^{-2})}r dr.
\]
Since $C_{j,|s|}(\varepsilon)$ is increasing and bounded in $\varepsilon$, we can apply the dominated convergence theorem to see that
\[
\lim _{|s| \to 0} C_{j,|s|}(\varepsilon) = C_{j,0}(\varepsilon) := {\mathbb P}i e^{-\varepsilon^2} {\mathscr U}m _{k=0} ^j {\mathfrak r}ac{j!}{k!} \varepsilon ^{2k}.
\]
We therefore have the lower bound
\[
C_{j,|s|}(\varepsilon) \ge \tfrac{1}{2}\, C_{j,0}(\varepsilon)
\]
for all sufficiently small $|s|$, and we calculate that the square of the $L^2$-norm of the extension
\[
F(x,y) = a_0 + {\mathscr U}m _{n > 0} (a_n x^n +s^{-n}a_{-n} y^n)
\]
of $f$ to ${\mathbb D}elta ^2_0(\varepsilon)$ is
\[
||F||^2 = \int _{{\mathbb D}elta ^2_0(\varepsilon)}|F(z)|^2 e^{-|z|^2} dV(z) = {\mathbb P}i \left ( {\mathscr U}m _{n\ge 0} |a_n|^2 C_{n,0}(\varepsilon) + {\mathscr U}m _{n < 0} |a_n|^2 |s|^{2n}C_{n,0}(\varepsilon) \right ) \le 2{\mathbb P}i ||f||^2
\]
The proof is complete.
\end{proof}
{\mathscr U}bsubsection{\sf The local picture near the approximate singularities of $\mathsf{W}$}
Since $W$ is the zero set of the holomorphic function $g(x,y) = xy{\mathscr I}n (y) - 1$, we need to examine $W$ near the points $(0,n{\mathbb P}i)$, $n \in {\mathbb Z}$, where, at least for $|n|$ large, it greatly resembles a translate of the model $W_s$ for $s = {\mathfrak r}ac{(-1)^{n}}{n{\mathbb P}i}$.
\begin{lem}\label{perturb-the-model}
There are a positive constant $\varepsilon > 0$ and injective holomorphic maps ${\mathbb P}si _n$ defined on a small disk centered at the origin and satisfying ${\mathscr U}p _{|\zeta| < \varepsilon} |d\zeta -(-1)^{n}d{\mathbb P}si _n(\zeta)| = O(\varepsilon)$ uniformly in $n$ for $|n|$ sufficiently large, such that
\[
W {\mathcal A}p {\mathbb D}elta ^2 _{(0,n{\mathbb P}i)}(\varepsilon) = \left \{ (x,y)\ ;\ x {\mathbb P}si _n (y- n{\mathbb P}i) = \tfrac{1}{n{\mathbb P}i} \right \} {\mathcal A}p {\mathbb D}elta ^2 _{(0,n{\mathbb P}i)}(\varepsilon).
\]
\end{lem}
\begin{proof}
Let
\[
{\mathbb P}si _n (z) = {\mathfrak r}ac{(z+n{\mathbb P}i){\mathscr I}n (z+n{\mathbb P}i)}{n{\mathbb P}i} = {\mathfrak r}ac{(z+n{\mathbb P}i)(-1) ^{n}{\mathscr I}n z}{n{\mathbb P}i}.
\]
Then
\[
\left | (-1)^{n}\tfrac{d{\mathbb P}si_n}{dz} -1 \right |\le {\mathfrak r}ac{|{\mathscr I}n z|}{|n|{\mathbb P}i} + {\mathfrak r}ac{|z {\mathcal O}s z|}{|n|{\mathbb P}i} + |{\mathcal O}s z -1|.
\]
The right side of the estimate is $O(\varepsilon)$ on the disk $|z| < \varepsilon$, uniformly in $n$ for $|n|$ sufficiently large. It follows from the proof of the implicit function theorem that, for some $\varepsilon > 0$, ${\mathbb P}si _n$ is injective on the disk $|z| < \varepsilon$, and we can choose $\varepsilon$ independent of $n$ provided $|n|$ is sufficiently large. The proof is finished.
\end{proof}
{\mathscr U}bsubsection{\sf End of the proof of Theorem {\rm Re\ }f{non-necess-smooth}}
Let $\varepsilon > 0$ be as in Lemma {\rm Re\ }f{perturb-the-model}. Outside the open set
\[
U := \bigcup _{n \in {\mathbb Z}} {\mathbb D}elta ^2_{(0,n{\mathbb P}i)}(\varepsilon)
\]
$W$ is uniformly flat in the sense of Definition {\rm Re\ }f{u-flat-defn}. We may assume, perhaps after shrinking $\varepsilon > 0$ slightly, that there is an open cover of some neighborhood of $W$ in ${\mathbb C} ^2$ by open sets $U_j$ that are either of the form ${\mathbb D}elta _{(0,n{\mathbb P}i)}^2(\varepsilon)$ or are balls $B_{p_j}(\varepsilon)$ of radius $\varepsilon$ and center $p_j$, and in the latter case $W {\mathcal A}p B_{p_j}(\varepsilon)$ is the graph, over its tangent space at $p_j$, of a function bounded by a small quadratic as in Property (F1) of Lemma {\rm Re\ }f{flat-properties}. Moreover, we can assume that the number of elements of this cover containing any one point is bounded independent of the point.
For the neighborhoods ${\mathbb D}elta ^2 _{(0,n{\mathbb P}i)}(\varepsilon)$ we have local extension of our datum $f$ with $L^2$-bounds independent of $n$. Indeed, by using Lemma {\rm Re\ }f{perturb-the-model} we may reduce to the model case, in which the claim is a consequence of Lemma {\rm Re\ }f{clipped-model-est}.
On the other hand, for those neighborhoods $B_p(\varepsilon)$ we have such uniform $L^2$ extensions for the same reason as in the proof of Theorem {\rm Re\ }f{suff-analytic}.
Finally we would like to apply the patching technique to extend $f$ to a function $F \in {\mathcal O} ({\mathbb C} ^2)$ with the estimate
\[
\int _{{\mathbb C} ^2} |F|^2 e^{-|z|^2} \omega ^2 \lesssim \int _W |f|^2 e^{-|z|^2} \omega.
\]
This extension is done in exactly the same way as in Subsection {\rm Re\ }f{patching}, as soon as we prove the following proposition.
\begin{prop}
For the curve $W$ of Theorem {\rm Re\ }f{non-necess-smooth}, $D^+_{|{\mathcal D}ot|^2}(W) = 0$.
\end{prop}
\begin{proof}
Geometrically, the density of $W$ is comparable to the maximum, over all directions $v \in S^{3}$, of the average number of intersection points of $W {\mathcal A}p B_p(r)$ with a straight line $\ell$ parallel to $v$, divided by the area of the disk $\ell {\mathcal A}p B_p(r)$, the average being taken over the lines $\ell$. We find that ${\mathbb U}psilon ^W _r(z) = O(r^{-1})$, which gives the claim.
\end{proof}
\noindent Thus the proof of Theorem {\rm Re\ }f{non-necess-smooth} is complete.\qed
{\mathscr U}bsection{Something must take the place of uniform flatness}
Uniform flatness cannot be completely done away with even if the hypersurface in question is smooth. For example, in dimension $1$ it is known to be necessary, and by taking a product, we can easily find examples of hypersurfaces with density less than $1$ such that the restriction map ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^n, \varphi) \to {\mathfrak H} (W,\varphi)$ is not surjective.
\begin{ex}
Consider the sequence $\{z_{j}\}_{j \ge 2} {\mathscr U}bset {\mathbb C}$ defined by
\[
z_{2k} = k^2, z_{2k+1}= k^2- k^{-1}.
\]
Let
\[
W = \{ (z_j, w) \in {\mathbb C} ^2\ ;\ w \in {\mathbb C},\ j=2,3,4,...\}.
\]
Then the area of $W {\mathcal A}p B(x,r)$ is $o(r^3)$, so the density of $W$ is zero. But we claim that the restriction map ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^2, |{\mathcal D}ot|^2) \to {\mathfrak H} (W,|{\mathcal D}ot|^2)$ is not boundedly surjective.
To prove this, we argue by contradiction. Suppose, then, that ${\mathscr R} _W : {\mathscr H} ({\mathbb C} ^2, |{\mathcal D}ot|^2) \to {\mathfrak H} (W,|{\mathcal D}ot|^2)$ is bounded and surjective. Define the locally constant function $f_k \in {\mathcal O} (W)$ by
\[
f_k(z_{2k},w)= e^{{\mathfrak r}ac{1}{2}|z_{2k}|^2} \quad \text{and} \quad f_k(z_{j},w) = 0, \qquad j \neq 2k, \ w\in {\mathbb C}.
\]
Then
\[
\int _W|f_k(z,w)|^2 e^{-(|z|^2 +|w|^2)} \omega = \int _{{\mathbb C}} e^{-|w|^2} {\mathfrak r}ac{\sqrt{-1}}{2} dw \wedge d\bar w = {\mathbb P}i < +\infty.
\]
Since the restriction map is bounded surjective, there exists $F_k \in {\mathcal O} ({\mathbb C} ^2)$ such that
\[
F_k|_W= f_k \quad \text{and} \quad \int _{{\mathbb C} ^2}|F_k|^2e^{-|z|^2-|w|^2} dV(z,w) \le {\mathbb P}i C
\]
for some constant $C>0$ independent of $k$. By the sub-mean value property,
\[
\int _{{\mathbb C}} |F_k(z,0)|^2e^{-|z|^2} dA(z) \le {\mathfrak r}ac{1}{{\mathbb P}i} \int _{{\mathbb C}} \left (\int _{{\mathbb C}} |F_k(z,w)|^2e^{-|z|^2}dA(z) \right ) e^{-|w|^2}dA(w) \le C.
\]
It follows from Cauchy's formula that, for $r \ge 2$ and $k$ sufficiently large,
\begin{eqnarray*}
k^2 &=& \left |{\mathfrak r}ac{F(z_{2k},0)}{z_{2k}-z_{2k+1}} \right |^2e^{-|z_{2k}|^2} = {\mathfrak r}ac{1}{4{\mathbb P}i^2} \left | \int _{|\zeta - z_{2k}|= r} {\mathfrak r}ac{F(\zeta,0)e^{(z_{2k}^2-\zeta z_{2k})}}{(\zeta - z_{2k})(\zeta - z_{2k+1})} d\zeta \right |^2 e^{-|z_{2k}|^2}\\
&\le & {\mathfrak r}ac{e^{r^2}}{4{\mathbb P}i^2 r^2(r-k^{-1})^2} \left (\int _{|\zeta-z_{2k}| = r}|F(\zeta,0)|e^{{\mathfrak r}ac{1}{2} |z_{2k}|^2-{\rm Re\ } (\zeta z_{2k}) -\tfrac{1}{2} |\zeta - z_{2k}|^2} d\theta \right )^2\\
&=& {\mathfrak r}ac{e^{r^2}}{4{\mathbb P}i^2 r^2(r-k^{-1})^2} \left (\int _{|\zeta-z_{2k}| = r}|F(\zeta,0)|e^{-{\mathfrak r}ac{1}{2} |\zeta |^2} d\theta \right )^2\\
&\le & {\mathfrak r}ac{e^{r^2}}{2{\mathbb P}i r^2(r-k^{-1})^2} \int _{|\zeta-z_{2k}| = r}|F(\zeta,0)|^2e^{-|\zeta|^2} d\theta,
\end{eqnarray*}
where $\zeta = z_{2k} +re^{\sqrt{-1} \theta}$, in the second step we have used $|\zeta - z_{2k+1}| \ge |\zeta - z_{2k}|- k^{-1}$, and in the last step we have used the Cauchy-Schwarz Inequality. Multiplying both sides by $2{\mathbb P}i (r-k^{-1})^2r^3e^{-r^2} dr$, using the inequality $r-k^{-1} \ge 1$, and integrating over $r \in [2,\infty)$, we have
\[
2{\mathbb P}i k \int _2^{\infty} r^3 e^{-r^2} dr \le \int _{|\zeta - z_{2k}| \ge 2} |F(\zeta, 0)|^2e^{-|\zeta|^2} dA(\zeta) \le \int _{{\mathbb C}} |F_k(\zeta , 0)|^2e^{-|\zeta|^2} dA(\zeta) \le C.
\]
The desired contradiction is obtained as soon as $k$ is large enough.
$\partialamond$
\end{ex}
It would be interesting to find the right necessary geometric condition on hypersurfaces $W$ for the surjectivity of ${\mathscr R} _W$. It seems that the conditions should not be too local in nature, as uniform flatness seems to be.
\end{document}
\betagin{document}
\setlength{\baselineskip}{16pt}
\tildetle
{A $W^n_2$-Theory of Elliptic and Parabolic Partial Differential
Systems in $C^1$ domains}
\author{Kijung Lee\footnote{Department of Mathematics, Ajou University,
Suwon, South Korea 443-749, \,\, [email protected].}
\qquad \hbox{\rm and} \qquad Kyeong-Hun Kim\footnote{Department of
Mathematics, Korea University, 1 Anam-dong, Sungbuk-gu, Seoul, South
Korea 136-701, \,\, [email protected]. The research of this
author is supported by the Korean Research Foundation Grant funded
by the Korean Government 20090087117}}
\date{}
\maketitle
\betagin{abstract}
In this paper second-order elliptic and parabolic partial
differential systems are considered on $C^1$ domains. Existence and uniqueness results
are obtained in terms of Sobolev spaces with weights so that we
allow the derivatives of the solutions to blow up near the boundary.
The coefficients of the systems are allowed to substantially
oscillate or blow up near the boundary.
\vspace*{.125in}
\noindent {\it Keywords: Elliptic
systems, Parabolic systems, Weighted Sobolev spaces.}
\vspace*{.125in}
\noindent {\it AMS 2000 subject classifications:}
35K45, 35J57.
\end{abstract}
\mysection{Introduction}
In this article we are dealing with the Sobolev space theory of
second-order parabolic and elliptic systems:
\betagin{equation}
\lambdabel{eqn main system}
u^k_t=a^{ij}_{kr}u^r_{x^ix^j}+b^i_{kr}u^r_{x^i}+c_{kr}u^r+f^k, \quad t>0, x\in \cO
\end{equation}
\betagin{equation}
\lambdabel{elliptic system}
a^{ij}_{kr}u^r_{x^ix^j}+b^{i}_{kr}u^r_{x^i}+c_{kr}u^r+f^k=0, \quad x\in \cO,
\end{equation}
where $\cO$ is a $C^1$ domain in $\bR^d$, $i,j=1,2,\ldots,d$ and
$k,r=1,2,\ldots,d_1$. We use the summation convention on the repeated indices
$i,j,r$.
Since the boundary is not assumed to be sufficiently regular, we have to
look for solutions in function spaces with weights, allowing the
derivatives of our solutions to blow up near the boundary. In the
framework of H\"older spaces such a setting leads to investigating
so-called intermediate (or interior) Schauder estimates, which
originated in \cite{DN}. For results about these estimates the
reader is referred to \cite{DN}, \cite{GH}, \cite{GT} (elliptic
case) and \cite{Fr}, \cite{Li} (parabolic case).
Various Sobolev spaces with weights
and their
applications to partial differential equations have been
investigated for a long time; we do not even attempt to collect
all relevant references (some of them can be found in \cite{Ch}).
The reader can find a part of references related to the subject of
this article in the papers \cite{KK2}, \cite{kr99}
and \cite{Lo2}, the results of which are extensively
used in what follows.
The main source of our interest in the Sobolev space theory of systems (\ref{eqn main system}) and (\ref{elliptic system})
comes from \cite{KK2}, \cite{kr99}, \cite{KL2} and \cite{Lo2}, where weighted Sobolev space theory is constructed
for single equations. The goal of this article is to extend the results for
{\bf{single equations}} in \cite{KK2}, \cite{kr99}, \cite{KL2} and \cite{Lo2} to the
case of {\bf{the systems}}. We
prove the uniqueness and existence results of systems (\ref{eqn main system}) and (\ref{elliptic system}) in weighted Sobolev spaces under minimal regularity conditions on the coefficients.
As in the articles referred to above, our coefficients $a^{ij}_{kr}$
are allowed to substantially oscillate near the boundary, and the
coefficients $b^{i}_{kr},c_{kr}$ are allowed to be unbounded and blow up
near the boundary. For instance, if $d=d_1=1$ and $\cO=(0,\infty)$, then we allow $a:=a^{11}_{11}$
to behave near $x=0$ like $2+\cos|\ln x|^{\alphapha}$, where $\alphapha\in
(0,1)$ (see Remark \ref{05.18.01}).
However, unlike in those articles, we were able to obtain only
$L_2$-estimates, instead of $L_p$-estimates. This is due to the
difficulty caused by considering systems instead of single
equations. For an $L_p$-theory with $p>2$, one must overcome tremendous mathematical difficulties arising in this
general setting; one of the main difficulties is
that the arguments we use in the proofs of Lemma \ref{a priori 1} and Lemma \ref{a priori 2} below do not
work when $p>2$, since in that case we get extra terms which we
simply cannot control.
The organization of the article is as follows. Section \ref{Cauchy}
handles the Cauchy problem. In section \ref{main section} we present
our main results, Theorem \ref{main theorem on domain} and Theorem \ref{theorem elliptic}. In section \ref{section auxiliary}
we develop some auxiliary results. Theorem \ref{main theorem on domain} and Theorem \ref{theorem elliptic} are proved in section \ref{section 5} and section \ref{section 6}, respectively.
As usual $\bR^{d}$
stands for the Euclidean space of points $x=(x^{1},...,x^{d})$,
$B_{r}(x)=\{y\in\bR^{d}:|x-y|<r\}$, $B_{r}=B_{r}(0)$,
$\bR^{d}_{+}=\{x\in\bR^{d}:x^{1}>0\}$.
For $i=1,...,d$, multi-indices $\alphapha=(\alphapha_{1},...,\alphapha_{d})$,
$\alphapha_{i}\in\{0,1,2,...\}$, and functions $u(x)$ we set
$$
u_{x^{i}}=\frac{\partialrtial u}{\partialrtial x^{i}}=D_{i}u,\quad
D^{\alphapha}u=D_{1}^{\alphapha_{1}}\cdot...\cdot D^{\alphapha_{d}}_{d}u,
\quad|\alphapha|=\alphapha_{1}+...+\alphapha_{d}.
$$
If we write $c=c(\cdots)$, this means that the constant $c$ depends
only on the quantities inside the parentheses.
\mysection{The system on $\bR^d$}\lambdabel{Cauchy}
First we introduce some solvability results of linear systems
defined on $\bR^d$. These results will be used later for systems
defined on the half space and bounded $C^1$ domains.
Let
$C^{\infty}_0=C^{\infty}_0(\mathbb{R}^d;\mathbb{R}^{d_1})$ denote
the set of all $\mathbb{R}^{d_1}$-valued infinitely differentiable
functions with compact support in $\mathbb{R}^d$. By $\mathcal{D}$
we denote the space of $\mathbb{R}^{d_1}$-valued distributions on
$C^{\infty}_0$; precisely, for $u\in \mathcal{D}$ and $\phi\in
C^{\infty}_0$ we define $(u,\phi)\in \mathbb{R}^{d_1}$ with components
$(u,\phi)^k=(u^k,\phi^k)$, $k=1,\ldots,d_1$. Each $u^k$ is a usual
$\mathbb{R}$-valued distribution defined on
$C^{\infty}_0(\mathbb{R}^d;\mathbb{R})$.
We define $L_p=L_p(\mathbb{R}^d;\mathbb{R}^{d_1})$ as the space of
all $\mathbb{R}^{d_1}$-valued functions $u=(u^1,\ldots,u^{d_1})$
satisfying
\[
\|u\|^p_{L_p}:=\sum^{d_1}_{k=1}\|u^k\|^p_{L_p}<\infty.
\]
Let $p \in[2,\infty)$ and $\gammamma\in(-\infty,\infty)$. We define the
space of Bessel potential
$H^{\gammamma}_p=H^{\gammamma}_p(\mathbb{R}^d;\mathbb{R}^{d_1})$ as the
space of all distributions $u$ such that $(1-\Deltalta)^{\gammamma/2}u\in L_p$
where we define each component by
\[
((1-\Deltalta)^{\gammamma/2}u)^k=(1-\Deltalta)^{\gamma/2}u^k
\]
and the norm is given by
\[
\|u\|_{H^{\gammamma}_p}:=\|(1-\Deltalta)^{\gamma/2}u\|_{L_p}.
\]
Then, $H^{\gammamma}_p$ is a Banach space with the given norm and
$C^{\infty}_0$ is dense in $H^{\gammamma}_p$. Note that $H^{\gamma}_p$ are
usual Sobolev spaces for $\gamma=0,1,2,\ldots$. It is well known that
the first order differentiation operators,
$\partialrtial_i:H^{\gammamma}_{p}(\mathbb{R}^d;\mathbb{R})\to
H^{\gammamma-1}_p(\mathbb{R}^d;\mathbb{R})$ given by $u\to u_{x^i}$
$(i=1,2,\ldots,d)$, are bounded. On the other hand, for $u\in
H^{\gammamma}_{p}(\mathbb{R}^d;\mathbb{R})$, if $\thetaxt{supp}\, (u)
\subset (a,b)\tildemes \mathbb{R}^{d-1}$ with $-\infty<a<b<\infty$,
we have
\betagin{equation}
\lambdabel{eqn 5.1.1}
\|u\|_{H^{\gammamma}_{p}(\mathbb{R}^d;\mathbb{R})}\leq
c(d,a,b)\|u_{x^1}\|_{H^{\gammamma-1}_{p}(\mathbb{R}^d;\mathbb{R})}
\end{equation}
(see, for instance, Remark 1.13 in \cite{kr99}).
For a fixed time $T$, we define
$$
\bH^{\gamma}_p(T):=L_p((0,T],H^{\gamma}_p), \quad
\mathbb{L}_p(T):=\bH^0_p(T)
$$
with the norm given by
\[
\|u\|^p_{\mathbb{H}^{\gamma}_p(T)}=\int^{T}_0\|u(t)\|^p_{H^{\gammamma}_p}dt.
\]
Finally, we set
$U^{\gammamma}_p=H^{\gammamma-2/p}_p$.
\betagin{definition}\lambdabel{md}
For a $\mathcal{D}$-valued function $u\in\mathbb{H}^{\gamma+2}_p(T)$,
we write $u\in\mathcal{H}^{\gamma+2}_p(T)$ if
$u\in\mathbb{H}^{\gamma+2}_p(T)$, $u(0,\cdot)\in U^{\gammamma+2}_p$
and there exists $f\in \mathbb{H}^{\gamma}_p(T)$ such that, for any $\phi\in
C^{\infty}_0$, the equality
\betagin{equation}\lambdabel{e}
(u(t,\cdot),\phi)= (u(0,\cdot),\phi)+ \int^t_0( f(s,\cdot),\phi)ds
\end{equation}
holds for all $t\leq T$. In this case, we say that $u_t=f$ \emph{in the sense of distributions}.
The norm in
$\cH^{\gammamma+2}_{p}(T)$ is defined by
\[
\|u\|_{\mathcal{H}^{\gamma+2}_p(T)}= \|u\|_{\mathbb{H}^{\gamma+2}_p(T)}+\|u_t\|_{\mathbb{H}^{\gamma}_p(T)}+
\|u(0)\|_{U^{\gamma+2}_p}.
\]
\end{definition}
For any $d_1\tildemes d_1$ matrix $C=(c_{kr})$ we let
$$
|C|:=\sqrt{\sum_{k,r}(c_{kr})^2}.
$$
Set $A^{ij}=(a^{ij}_{kr})$.
Throughout the
article we assume the following.
\betagin{assumption}
\lambdabel{main assumptions}
There exist constants $\deltalta,K^j, L>0$ so that
(i)
\betagin{equation}\lambdabel{assumption 1}
\delta|\xi|^2\le\xi^*_{i} A^{ij}\xi_{j}
\end{equation}
holds for any $t,x$, where $\xi$ is any
(real) $d_1\tildemes d$ matrix, $\xi_i$ is the $i$th column of $\xi$,
and again the summations on $i,j$ are understood.
(ii)
\betagin{equation}\lambdabel{assumption 2}
\left|A^{1j}\right|\le K^j, \quad
j=1,2,\ldots,d.
\end{equation}
\end{assumption}
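As a trivial illustration of Assumption \ref{main assumptions}, included only for orientation, the decoupled system with $A^{ij}=\delta^{ij}I$, that is $u^k_t=\Delta u^k+f^k$ for each $k$, satisfies (\ref{assumption 1}) with $\delta=1$, since in that case
$$
\xi^*_{i}A^{ij}\xi_{j}=\sum_{i=1}^{d}|\xi_i|^2=|\xi|^2,
$$
and (\ref{assumption 2}) holds with $K^j=\sqrt{d_1}$.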
Before we study system (\ref{eqn main system}), we consider the following system of equations with constant coefficients:
\betagin{equation}
\lambdabel{eqn system}
u^k_t=a^{ij}_{kr}u^r_{x^ix^j}+f^k,
\quad u^k(0)=u^k_0,
\end{equation}
where $i,j=1,2,\cdots,d$ and $k,r=1,2,\cdots,d_1$;
recall that we are using summation notation on $i,j,r$.
The following $L_2$-theory (even $L_p$-theory) is not new and can be found, for instance, in \cite{Lee}. However we give a short and independent proof for the sake of completeness.
\betagin{theorem}
\lambdabel{thm 1}
Let $a^{ij}_{kr}=a^{ij}_{kr}(t)$, independent of $x$. Then for any $f\in
\bH^{\gammamma}_2(T)$ and $u_0\in
U^{\gammamma+2}_2$, system (\ref{eqn system}) has a unique solution
$u\in \mathcal{H}^{\gammamma+2}_2(T)$, and for this solution
\betagin{equation}
\lambdabel{e 6.5.2}
\|u_{xx}\|_{\bH^{\gammamma}_{2}(T)}\leq
c\|f\|_{\bH^{\gammamma}_2(T)}+c\|u_0\|_{U^{\gammamma+2}_2},
\end{equation}
\betagin{equation}
\lambdabel{e 6.5.3}
\|u\|_{\bH^{\gammamma+2}_{2}(T)}\leq
ce^{cT}(\|f\|_{\bH^{\gammamma}_2(T)}+\|u_0\|_{U^{\gammamma+2}_2}),
\end{equation}
where $c=c(d,d_1,\gammamma,\deltalta,K^j)$.
\end{theorem}
\betagin{proof}
By Theorem 5.1 in \cite{Kr99}, for each $k$, the
equation
$$
u^k_t=\deltalta \Deltalta u^k+f^k, \quad u^k(0)=u^k_0
$$
has a solution $u^k\in \mathcal{H}^{\gammamma+2}_{2}(T)$. For
$\lambdambda\in [0,1]$ define
$A^{ij}_{\lambdambda}:=(1-\lambdambda)A^{ij}+\deltalta_{ij}\lambdambda\deltalta I$. Then
$$
|A^{ij}_{\lambdambda}|\le |A^{ij}|,\quad \deltalta|\xi|^2\leq
\sum_{i,j}\xi_i^*A^{ij}_{\lambdambda}\xi_j
$$
for any $d_1\tildemes d$ matrix $\xi$. Thus, having the method of
continuity in mind, we need only prove that (\ref{e 6.5.2}) and (\ref{e
6.5.3}) hold given that a solution $u$ already exists.
{\bf{Step 1}}.
Assume $\gammamma=0$. Applying the chain rule
$d|u^k|^2=2u^k\,du^k$ for each $k$, we obtain
\betagin{equation}
|u^k(t)|^2=|u^k_0|^2+\int^t_0
2u^k(a^{ij}_{kr}u^r_{x^ix^j}+f^k)\,ds,\quad
t>0.\lambdabel{square}
\end{equation}
Integrating with respect to $x$ and integrating by parts, we obtain
\betagin{eqnarray}
&&\int_{\bR^d}|u(t)|^2dx+2\int^t_0\int_{\bR^d}\sum_{i,j}(u_{x^i})^*A^{ij}u_{x^j}dxds\nonumber\\
&=&\int_{\bR^d}|u_0|^2dx
+\int^t_0\int_{\bR^d}2u^*fdxds.
\lambdabel{eqn 7.8}
\end{eqnarray}
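To pass from (\ref{square}) to (\ref{eqn 7.8}) we integrated by parts in $x^i$ and used that the coefficients do not depend on $x$; schematically (with summation over the repeated indices, and with no boundary terms because elements of $H^{2}_2$ can be approximated by functions in $C^{\infty}_0$),
$$
\int_{\bR^d} 2u^k a^{ij}_{kr}u^r_{x^ix^j}\,dx = -\int_{\bR^d} 2u^k_{x^i} a^{ij}_{kr}u^r_{x^j}\,dx
= -2\sum_{i,j}\int_{\bR^d}(u_{x^i})^*A^{ij}u_{x^j}\,dx .
$$
This elementary step is spelled out only for the reader's convenience.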
Hence, it follows that
\betagin{eqnarray}
&&\int_{\bR^d}|u(t)|^2dx+2\deltalta\;
\int^t_0\int_{\bR^d}|u_x|^2dxds\nonumber\\
&\leq&\int_{\bR^d}|u_{0}|^2dx + \int^t_0\int_{\bR^d}|u|^2dxds
+\int^t_0\int_{\bR^d}|f|^2dxds.\lambdabel{eqn
6.5.5}
\end{eqnarray}
Similarly, for $v=u_{x^n}$ with any $n=1,2,\ldots,d$, we get (see
(\ref{eqn 7.8}))
\betagin{eqnarray}
&&\int_{\bR^d}|v(t)|^2dx+2\deltalta \int^t_0\int_{\bR^d}|v_x|^2dxds\nonumber\\
&\leq&\int_{\bR^d}|(u_0)_{x^n}|^2dx
+\int^t_0\int_{\bR^d}-2v_{x^n}^*f\,dxds\nonumber\\
&\leq& \|u_0\|^2_{U^2_2}+\varphirepsilon
\|u_{xx}\|^2_{\bL_2(t)}+c\|f\|^2_{\bL_2(t)}.\lambdabel{v_x}
\end{eqnarray}
Choosing $\varphirepsilon$ small and summing over all $n$, we obtain
(\ref{e 6.5.2}). Now, (\ref{v_x}), (\ref{eqn 6.5.5}) and Gronwall's
inequality easily lead to (\ref{e 6.5.3}).
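For the reader's convenience we spell out the last step. Set $g(t)=\int_{\bR^d}|u(t)|^2dx$. Dropping the nonnegative gradient term on the left of (\ref{eqn 6.5.5}) gives
$$
g(t)\leq \int_{\bR^d}|u_0|^2dx+\|f\|^2_{\bL_2(T)}+\int^t_0 g(s)\,ds,\quad t\leq T,
$$
so Gronwall's inequality yields $g(t)\leq e^{T}\big(\int_{\bR^d}|u_0|^2dx+\|f\|^2_{\bL_2(T)}\big)$. Integrating in $t$ bounds $\|u\|^2_{\bL_2(T)}$, and together with (\ref{eqn 6.5.5}) (for $u_x$) and (\ref{e 6.5.2}) (for $u_{xx}$) this gives (\ref{e 6.5.3}) for $\gamma=0$.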
{\bf{Step 2}}. Let $\gammamma\neq 0$. The results of this case easily
follow from the fact that
$(1-\Deltalta)^{\mu/2}:H^{\gammamma}_p\to H^{\gammamma-\mu}_p$ is an isometry
for any $\gammamma,\mu\in \bR$ when $p\in (1,\infty)$; indeed, $u\in
\cH^{\gammamma+2}_2(T)$ is a solution of (\ref{eqn system}) if and only
if $v:=(1-\Deltalta)^{\gammamma/2}u\in \cH^2_2(T)$ is a solution of
(\ref{eqn system}) with $(1-\Deltalta)^{\gammamma/2}f, (1-\Deltalta)^{\gammamma/2}u_0$ in place of $f,
u_0$ respectively. Moreover, for instance,
$$
\|u\|_{\bH^{\gammamma+2}_{2}(T)}=\|v\|_{\bH^{2}_{2}(T)}\leq
ce^{cT}\left(\|(1-\Deltalta)^{\gammamma/2}f\|_{\bL_2(T)}
+\|(1-\Deltalta)^{\gammamma/2}u_0\|_{U^2_2}\right)
$$
$$
=ce^{cT}\left(\|f\|_{\bH^{\gammamma}_2(T)}+\|u_0\|_{U^{\gammamma+2}_2}\right).
$$
The theorem is proved.
\end{proof}
Theorem \ref{thm 1} extends to systems with variable
coefficients as follows.
Fix $\mu>0$. For $\gammamma \in \bR$ define
$|\gammamma|_+=|\gammamma|$ if $|\gammamma|=0,1,2,\cdots$ and
$|\gammamma|_+=|\gammamma|+\mu$ otherwise. Also define
$$
B^{|\gammamma|_+}=\betagin{cases} B(\bR) &: \quad \gammamma=0\\
C^{|\gammamma|-1,1}(\bR) &: \quad |\gammamma|=1,2,... \\
C^{|\gammamma|+\mu}(\bR) &: \quad \thetaxt{otherwise},
\end{cases}
$$
where $B$ is the space of bounded functions, and $C^{|\gammamma|-1,1}$
and $C^{|\gammamma|+\mu}$ are usual H\"older spaces.
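For instance, $|\gamma|_+=3$ and $B^{|\gamma|_+}=C^{2,1}(\bR)$ when $|\gamma|=3$, while $|\gamma|_+=\tfrac12+\mu$ and $B^{|\gamma|_+}=C^{1/2+\mu}(\bR)$ when $\gamma=1/2$.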
Consider the system with variable coefficients:
\betagin{equation}
\lambdabel{eqn system2}
u^k_t=a^{ij}_{kr}u^r_{x^ix^j}+b^i_{kr}u^r_{x^i}+c_{kr}u^r+f^k, \quad
u^k(0)=u^k_0.
\end{equation}
\betagin{theorem}
\lambdabel{thm 2}
Assume that the coefficients $a^{ij}_{kr}$ are
uniformly continuous in $x$, that is, for any $\varphirepsilon>0$ there
exists $\deltalta=\deltalta(\varphirepsilon)>0$ so that for any $t>0$,
$i,j,k,r$,
$$
|a^{ij}_{kr}(t,x)-a^{ij}_{kr}(t,y)|<\varphirepsilon,
\quad \thetaxt{if}\quad |x-y|<\deltalta.
$$
Also, assume for any $t>0$, $i,j,k,r$,
$$
|a^{ij}_{kr}(t,\cdot)|_{|\gammamma|_+}+|b^i_{kr}(t,\cdot)|_{|\gammamma|_+}
+|c_{kr}(t,\cdot)|_{|\gammamma|_+}<L.
$$
Then for any $f\in \bH^{\gammamma}_2(T)$ and $u_0\in U^{\gammamma+2}_2$, system
(\ref{eqn system2}) has a unique solution $u\in
\mathcal{H}^{\gammamma+2}_2(T)$, and for this solution we have
$$
\|u\|_{\bH^{\gammamma+2}_{2}(T)}\leq
ce^{cT}(\|f\|_{\bH^{\gammamma}_2(T)}+\|u_0\|_{U^{\gammamma+2}_2}),
$$
where $c=c(d,d_1,\gammamma,\deltalta,K^i,L)$.
\end{theorem}
\betagin{proof}
This is an easy extension of Theorem \ref{thm 1} and can be proved
by repeating the proof of Theorem 5.1 in \cite{Kr99}, where the
theorem is proved when $d_1=1$. We leave the details to the reader.
\end{proof}
\mysection{The system on $\mathcal{O} \subset \bR^d$}\lambdabel{main
section}
\betagin{assumption}
\lambdabel{assumption domain}
The domain $\cO$ is of class $C^{1}_{u}$. In other words, for any
$x_0 \in \partialrtial \cO$, there exist constants $r_0, K_0\in(0,\infty)
$ and a one-to-one continuously differentiable mapping $\Psi$ of
$B_{r_0}(x_0)$ onto a domain $J\subset\bR^d$ such that
(i) $J_+:=\Psi(B_{r_0}(x_0) \cap \cO) \subset \bR^d_+$ and
$\Psi(x_0)=0$;
(ii) $\Psi(B_{r_0}(x_0) \cap \partialrtial \cO)= J \cap \{y\in
\bR^d:y^1=0 \}$;
(iii) $\|\Psi\|_{C^{1}(B_{r_0}(x_0))} \leq K_0 $ and
$|\Psi^{-1}(y_1)-\Psi^{-1}(y_2)| \leq K_0 |y_1 -y_2|$ for any $y_i
\in J$;
(iv) $\Psi_{x}$ is uniformly continuous in $B_{r_{0}}(x_{0})$.
\end{assumption}
To proceed further we introduce some well known results from
\cite{GH} and \cite{KK2} (also, see \cite{La} for details).
\betagin{lemma}
\lambdabel{lemma 10.3.1}
Let the domain $\cO$ be of class $C^{1}_{u}$. Then
(i) there is a bounded real-valued function $\psi$ defined in
$\bar{\cO} $ such that the functions $\psi(x)$ and
$\rho(x):=\thetaxt{dist}(x,\partialrtial \cO)$ are comparable in the part
of
a neighborhood of $\partialrtial \cO$ lying in $\cO$. In other words, if $\rho(x)$ is
sufficiently small, say $\rho(x)\leq 1$, then $N^{-1}\rho(x) \leq
\psi(x) \leq N\rho(x)$ with some constant
$N$ independent of $x$,
(ii) for any multi-index $\alphapha$,
\betagin{equation}
\lambdabel{03.04.01}
\sup_{\cO} \psi ^{|\alphapha|}(x)|D^{\alphapha}\psi_{x}(x)| <\infty.
\end{equation}
\end{lemma}
To describe the assumptions of $f$ we use the Banach spaces
introduced in \cite{KK2} and \cite{Lo2}.
Let $\zeta\in C^{\infty}_{0}(\bR_{+})$
be a function satisfying
\betagin{equation}
\lambdabel{eqn 5.6.5}
\sum_{n=-\infty}^{\infty}\zeta(e^{n+x})>c>0, \quad \forall x\in \bR,
\end{equation}
where $c$ is a constant. Note that any nonnegative function
$\zeta$, $\zeta>0$ on $[1,e]$, satisfies (\ref{eqn 5.6.5}). For $x\in \cO$ and $n\in\bZ=\{0,\pm1,...\}$
define
$$
\zeta_{n}(x)=\zeta(e^{n}\psi(x)).
$$
Then we have $\sum_{n}\zeta_{n}\geq c$ in $\cO$ and
\betagin{equation*}
\zeta_n \in C^{\infty}_0(\cO), \quad |D^m \zeta_n(x)|\leq
N(m)e^{mn}.
\end{equation*}
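For instance, (\ref{eqn 5.6.5}) holds with $c=1/2$ for any $\zeta\in C^{\infty}_{0}((1/2,2e))$ satisfying $0\leq\zeta\leq1$ and $\zeta=1$ on $[1,e]$: for every $x\in\bR$ there is $n\in\bZ$ with $n+x\in[0,1]$, hence $e^{n+x}\in[1,e]$ and the sum in (\ref{eqn 5.6.5}) is at least $1$.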
For $\theta,\gammamma \in \bR$, let $H^{\gammamma}_{p,\theta}(\cO)$ be the
set of all distributions $u=(u^1,u^2,\cdots u^{d_1})$ on $\cO$ such
that
\betagin{equation}
\lambdabel{10.10.03}
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}^{p}:= \sum_{n\in\bZ} e^{n\theta}
\|\zeta_{-n}(e^{n} \cdot)u(e^{n} \cdot)\|^p_{H^{\gammamma}_p} < \infty.
\end{equation}
It is known (see, for instance, \cite{Lo2}) that up to equivalent
norms the space $H^{\gammamma}_{p,\theta}(\cO)$ is independent of the
choice of $\zeta$ and $\psi$. Moreover if $\gammamma=n$ is a
non-negative integer then
\betagin{equation}
\lambdabel{eqn 02.09.1}
\|u\|^p_{H^{\gammamma}_{p,\theta}(\cO)} \sigmam \sum_{k=0}^n
\sum_{|\alphapha|=k}\int_{\cO} |\psi^kD^{\alphapha}u(x)|^p
\psi^{\theta-d}(x) \,dx.
\end{equation}
Denote $\rho(x,y)=\rho(x)\wedge \rho(y)$ and
$\psi(x,y)=\psi(x)\wedge \psi(y)$. For
$n \in\bZ$, $\mu \in(0,1]$
and $k=0,1,2,...$, define
$$
|u|_{C}=\sup_{\cO}|u(x)|, \quad [u]_{C^{\mu}}=\sup_{x\neq
y}\frac{|u(x)-u(y)|}{|x-y|^{\mu}}.
$$
\betagin{equation}
\lambdabel{eqn 5.6.2}
[u]^{(n)}_{k}=[u]^{(n)}_{k,\cO} =\sup_{\substack{x\in \cO\\
|\betata|=k}}\psi^{k+n}(x)|D^{\betata}u(x)|,
\end{equation}
\betagin{equation}
\lambdabel{eqn 5.6.3}
[u]^{(n)}_{k+\mu}=[u]^{(n)}_{k+\mu,\cO} =\sup_{\substack{x,y\in \cO
\\ |\betata|=k}}
\psi^{k+\mu+n}(x,y)\frac{|D^{\betata}u(x)-D^{\betata}u(y)|}
{|x-y|^{\mu}},
\end{equation}
$$
|u|^{(n)}_{k}=|u|^{(n)}_{k,\cO}=\sum_{j=0}^{k}[u]^{(n)}_{j,\cO},
\quad |u|^{(n)}_{k+\mu}=
|u|^{(n)}_{k+\mu,\cO}=|u|^{(n)}_{k, \cO}+
[u]^{(n)}_{k+\mu,\cO}.
$$
In case $\cO=\bR_+$, we also define the norm
$|u|^{(n)*}_{k}=|u|^{(n)*}_{k,\bR_+}$ by using
$\rho(x)(=x^1)$ and $\rho(x)\wedge \rho(y)$ in place of $\psi(x)$
and $\psi(x,y)$ respectively in (\ref{eqn 5.6.2}) and (\ref{eqn
5.6.3}).
Below we collect some other properties of spaces
$H^{\gammamma}_{p,\theta}(\cO)$.
\betagin{lemma} $(\cite{kr99})$ Let $d-1<\theta<d-1+p$.
\lambdabel{lemma 1}
(i) Assume that $\gammamma-d/p=m+\nu$ for some $m=0,1,\cdots$ and
$\nu\in (0,1]$. Then for any $u\in H^{\gammamma}_{p,\theta}(\cO)$ and $i\in
\{0,1,\cdots,m\}$, we have
$$
|\psi^{i+\theta/p}D^iu|_{C}+[\psi^{m+\nu+\theta/p}D^m
u]_{C^{\nu}}\leq c \|u\|_{ H^{\gammamma}_{p,\theta}(\cO)}.
$$
(ii) Let $\alphapha\in \bR$, then
$\psi^{\alphapha}H^{\gammamma}_{p,\theta+\alphapha
p}(\cO)=H^{\gammamma}_{p,\theta}(\cO)$,
$$
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}\leq c
\|\psi^{-\alphapha}u\|_{H^{\gammamma}_{p,\theta+\alphapha p}(\cO)}\leq
c\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}.
$$
(iii) There is a constant $c=c(d,p,\gammamma,\theta)$ so that
$$
\|a f\|_{H^{\gammamma}_{p,\theta}(\cO)}\leq
c|a|^{(0)}_{|\gammamma|_+}\|f\|_{H^{\gammamma}_{p,\theta}(\cO)}.
$$
(iv) $\psi D, D\psi: H^{\gammamma}_{p,\theta}(\cO)\to
H^{\gammamma-1}_{p,\theta}(\cO)$ are bounded linear operators, and
$$
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}\leq
c\|u\|_{H^{\gammamma-1}_{p,\theta}(\cO)}+c \|\psi
Du\|_{H^{\gammamma-1}_{p,\theta}(\cO)}\leq c
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)},
$$
$$
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}\leq
c\|u\|_{H^{\gammamma-1}_{p,\theta}(\cO)}+c \|D\psi
u\|_{H^{\gammamma-1}_{p,\theta}(\cO)}\leq c
\|u\|_{H^{\gammamma}_{p,\theta}(\cO)}.
$$
\end{lemma}
Denote
$$
\bH^{\gammamma}_{p,\theta}(\cO,T)=L_p(
[0,T],H^{\gammamma}_{p,\theta}(\cO)), \quad \bL_{p,\theta}(\cO,T)=\bH^{0}_{p,\theta}(\cO,T)
$$
$$
U^{\gammamma}_{p,\theta}(\cO)=
\psi^{1-2/p}H^{\gammamma-2/p}_{p,\theta}(\cO).
$$
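Note that for $p=2$ the weight in the last definition disappears, so that $U^{\gamma}_{2,\theta}(\cO)=H^{\gamma-1}_{2,\theta}(\cO)$ and $\|u_0\|_{U^{\gamma+2}_{2,\theta}(\cO)}=\|u_0\|_{H^{\gamma+1}_{2,\theta}(\cO)}$; this is the form used in (\ref{eqn 6.24}) below.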
\betagin{definition}
We write $u\in \frH^{\gammamma+2}_{p,\theta}(\cO,T)$ if
$u=(u^1,\cdots, u^{d_1})\in \psi\bH^{\gammamma+2}_{p,\theta}(\cO,T)$,
$u(0,\cdot) \in U^{\gammamma+2}_{p,\theta}(\cO)$ and for some $f \in
\psi^{-1}\bH^{\gammamma}_{p,\theta}(\cO,T)$, it holds that $u_t=f$
in the sense of distributions. The norm in $
\frH^{\gammamma+2}_{p,\theta}(\cO,T)$ is introduced by
$$
\|u\|_{\frH^{\gammamma+2}_{p,\theta}(\cO,T)}=
\|\psi^{-1}u\|_{\bH^{\gammamma+2}_{p,\theta}(\cO,T)} + \|\psi
u_t\|_{\bH^{\gammamma}_{p,\theta}(\cO,T)} +
\|u(0,\cdot)\|_{U^{\gammamma+2}_{p,\theta}(\cO)}.
$$
\end{definition}
The following result is due to N.V. Krylov (see \cite{Kr01} and
\cite{Kim04-1}).
\betagin{lemma}
\lambdabel{lemma 15.05}
Let $p\geq 2$. Then there exists a constant
$c=c(d,p,\theta,\gammamma,T)$ such that
$$
\sup_{t\leq T}\|u(t)\|_{H^{\gammamma+1}_{p,\theta}(\cO)}\leq c
\|u\|_{\frH^{\gammamma+2}_{p,\theta}(\cO,T)}.
$$
In particular, for any $t\leq T$,
$$
\|u\|^p_{\bH^{\gammamma+1}_{p,\theta}(\cO,t)}\leq c \int^t_0
\|u\|^p_{\frH^{\gammamma+2}_{p,\theta}(\cO,s)}ds.
$$
\end{lemma}
\betagin{assumption}
\lambdabel{assumption regularity}
(i) The functions $a^{ij}_{kr}(t,\cdot)$
are point-wise continuous in $\cO$, that is, for any $\varphirepsilon >0, x\in \cO$ there exists $\deltalta=\deltalta(\varphirepsilon,x)$ so that
$$
|a^{ij}_{kr}(t,x)-a^{ij}_{kr}(t,y)|<\varphirepsilon
$$
whenever $y\in \cO$ and $|x-y|<\deltalta$.
(ii) There is control on the behavior of $a^{ij}_{kr}$,
$b^i_{kr}$ and $c_{kr}$ near
$\partialrtial \cO$, namely,
\betagin{equation}
\lambdabel{12.10.1}
\lim_{\substack{\rho(x)\to 0\\
x\in \cO}}\sup_{\substack{y\in \cO \\|x-y|\leq\rho(x,y)}} \sup_{t} |a^{ij}_{kr}(t,x)-a^{ij}_{kr}(t,y) | =0.
\end{equation}
\betagin{equation}
\lambdabel{05.04.01}
\lim_{\substack{\rho(x)\to0\\
x\in \cO}} \sup_{t}[\rho(x)|b^i_{kr}(t,x)|+\rho^{2}(x)|c_{kr}(t,x)|]=0.
\end{equation}
(iii) For any $t>0$,
$$
|a^{ij}_{kr}(t,\cdot)|^{(0)}_{|\gammamma|_+}
+|b^{i}_{kr}(t,\cdot)|^{(1)}_{|\gammamma|_+}
+|c_{kr}(t,\cdot)|^{(2)}_{|\gammamma|_+} \leq L.
$$
\end{assumption}
\betagin{remark}
\lambdabel{05.18.01}
It is easy to see
that (\ref{12.10.1}) is much weaker than the uniform continuity condition.
For instance,
if $\deltalta\in(0,1)$, $d=d_1=1$, and $\cO=\bR_{+}$, then the function
$a(x)$ equal to $2+\sin (|\ln x|^{\deltalta})$ for $0<x\leq1/2$
satisfies (\ref{12.10.1}). Indeed, if $x,y>0$ and $ |x-y|\leq
x\wedge y $, then
$$
|a(x)-a(y)|=|x-y||a'(\xi)|,
$$
where $\xi$ lies between $x$ and $y$. In addition, $|x-y|\leq
x\wedge y \leq\xi\leq2(x\wedge y)$, and $ \xi|a'(\xi)|\leq
|\ln[2(x\wedge y)]|^{\deltalta-1}\to0$ as $x\wedge y\to0$.
Also observe that (\ref{05.04.01}) allows the coefficients
$b^i_{kr}$ and $c_{kr}$ to blow up near the boundary at a
certain rate.
\end{remark}
Now, for each $i,j$,
we define the symmetric part ($S^{ij}$) and the diagonal part ($S^{ij}_d$) of $A^{ij}$ as follows:
$$
S^{ij}=(s^{ij}_{kr}):=(A^{ij}+(A^{ij})^*)/2, \quad \quad S^{ij}_d=(s^{ij}_{d,kr}):=(\deltalta_{kr}a^{ij}_{kr})=(\deltalta_{kr}s^{ij}_{kr}).
$$
Also define
$$
H^{ij}:=A^{ij}-(A^{ij})^*, \quad
S^{ij}_o=S^{ij}-S^{ij}_d.
$$
Assume there exist constants $\alphapha,\bar{\alphapha},\betata^1,\cdots,\betata^d\in [0,\infty)$ so that
\betagin{equation}
\lambdabel{eqn 01.26.1}
|H^{1j}|\leq \betata^j \quad \forall j=1,2,\ldots,d, \quad\quad |S^{11}_o|\leq \alphapha,
\end{equation}
\betagin{equation}
\lambdabel{eqn 01.26.2}
\xi^*_iS^{ij}_o\xi_j \leq \bar{\alphapha}|\xi|^2,
\end{equation}
for any (real) $d_1\tildemes d$ matrix $\xi$. Here $\xi_i$ is the $i$th column of $\xi$,
and again the summations on $i,j$ are understood. Denote
$$
K:=\sqrt{\sum_j (K^j)^2}, \quad \betata=\sqrt{\sum_j (\betata^j)^2}.
$$
\betagin{assumption}
\lambdabel{assumption theta}
One of the following four conditions is satisfied:
\betagin{equation}
\lambdabel{theta 11}
\theta\in \left(d-\frac{\deltalta}{2K-\deltalta},\,\,
d+\frac{\deltalta}{2K+\deltalta}\right);
\end{equation}
\betagin{equation}
\lambdabel{con 11}
\theta\in (d-1,d], \quad
2\deltalta(d+1-\theta)^2-2(d+1-\theta)(d-\theta)\betata-4(d-\theta)(d+1-\theta)K^1>0;
\end{equation}
\betagin{equation}
\lambdabel{con 44}
\theta\in (d-1,d], \quad (\deltalta-\bar{\alphapha})-\frac{(d-\theta)}{(d+1-\theta)}(2\deltalta-\betata-2\alphapha)>0;
\end{equation}
\betagin{equation}
\lambdabel{con 22}
\theta\in [d,d+1), \quad 8(d+1-\theta)\deltalta^2-(\theta-d)\betata^2>0.
\end{equation}
\end{assumption}
\betagin{remark}
(i) If $A^{1j}$ are symmetric, i.e., $\betata=0$, then
(\ref{con 11}) combined with (\ref{con 22}) is $\theta\in
(d-\frac{\deltalta}{2K^1-\deltalta},d+1)$ which is weaker than (\ref{theta
11}).
(ii) If $A^{ij}$ are diagonal matrices, that is if $\alphapha=\betata^i=0$, then (\ref{con 44}) combined with (\ref{con 22})
is $\theta\in (d-1,d+1)$. This is the case when the equations in the system are not coupled in the leading-order terms.
(iii)
We also mention that if $\theta\not\in (d-1,d+1)$ then Theorem
\ref{main theorem on domain} is false even for the heat equation
$u_t=\Deltalta u+f$ (see \cite{kr99}).
\end{remark}
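A short computation behind part (i): if $\betata=0$, then for $\theta\in(d-1,d]$ condition (\ref{con 11}) factors as
$$
2(d+1-\theta)\left[\delta(d+1-\theta)-2(d-\theta)K^1\right]>0,
$$
which, writing $s=d-\theta\in[0,1)$, is $\delta(1+s)>2sK^1$, i.e. $s<\delta/(2K^1-\delta)$, i.e. $\theta>d-\delta/(2K^1-\delta)$ (note that $K^1\geq\delta$, since taking $\xi$ with a single nonzero column in (\ref{assumption 1}) gives $a^{11}_{11}\geq\delta$). For $\theta\in[d,d+1)$ condition (\ref{con 22}) with $\betata=0$ holds automatically, which gives the stated range $\theta\in(d-\frac{\delta}{2K^1-\delta},d+1)$.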
Here are the main results of this article. The proofs of the theorems will be given in section \ref{section 5}
and section \ref{section 6} after we develop some auxiliary results on $\bR^d_+$ in section \ref{section auxiliary}.
\betagin{theorem}
\lambdabel{main theorem on domain}
Let $\gammamma \geq 0$ and $\cO$ be bounded. Also let Assumptions
\ref{main assumptions}, \ref{assumption domain}, \ref{assumption
regularity} and \ref{assumption theta} hold. Then for any $f\in
\psi^{-1}\bH^{\gammamma}_{2,\theta}(\cO,T),\, u_0\in
U^{\gammamma+2}_{2,\theta}(\cO)$, system (\ref{eqn system2}) admits a
unique solution $u\in \frH^{\gammamma+2}_{2,\theta}(\cO,T)$, and for
this solution
\betagin{equation}
\lambdabel{a priori domain}
\|\psi^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,T)}\leq ce^{cT}\left(\|\psi
f\|_{\bH^{\gammamma}_{2,\theta}(\cO,T)}
+\|u_0\|_{U^{\gammamma+2}_{2,\theta}(\cO)}\right),
\end{equation}
where $c=c(d,\deltalta,\theta,K,L)$.
\end{theorem}
\betagin{theorem}
\lambdabel{theorem elliptic}
Let $\gammamma\geq 0$ and $\cO$ be bounded. Assume $a^{ij}_{kr},b^{i}_{kr},c_{kr}$ are independent of $t$ and $\lambdambda^k$ are
sufficiently large constants (actually, any constants bigger than
$c$ from (\ref{a priori domain})). Under the assumptions of Theorem
\ref{main theorem on domain}, for any $f\in
\psi^{-1}H^{\gammamma}_{2,\theta}(\cO)$ there is a unique $u\in \psi
H^{\gammamma+2}_{2,\theta}(\cO)$ such that in $\cO$,
\betagin{equation}
\lambdabel{eqn elliptic}
a^{ij}_{kr}u^r_{x^ix^j}+b^{i}_{kr}u^r_{x^i}+c_{kr}u^r-\lambdambda^ku^k+f^k=0.
\end{equation}
Furthermore,
\betagin{equation}
\lambdabel{a priori elliptic}
\|\psi^{-1}u\|_{H^{\gammamma+2}_{2,\theta}(\cO)}\leq N\|\psi
f\|_{H^{\gammamma}_{2,\theta}(\cO)},
\end{equation}
where the constant $N$ is independent of $f$.
\end{theorem}
\betagin{remark}
Actually Theorem \ref{main theorem on domain} and Theorem \ref{theorem elliptic} hold even for
$\gammamma<0$. Using the results for the case $\gammamma\geq 0$, one can repeat the arguments in the proof of Theorem 2.10 in \cite{KK2},
where the theorems are proved when $d_1=1$. We leave the details to the reader. Also
by inspecting the proofs carefully one can check that the above two theorems hold true even if $\cO$ is not bounded.
\end{remark}
\mysection{Auxiliary results: some results on $\bR^d_+$}
\lambdabel{section auxiliary}
In this section we develop some results for the systems defined on
$\bR^d_+$. Here we use the Banach spaces $H^{\gammamma}_{p,\theta}$,
$\bH^{\gammamma}_{p,\theta}(T)$ and $\frH^{\gammamma}_{p,\theta}(T)$
defined on $\bR^d_+$.
They are defined on the basis of (\ref{10.10.03})
by formally taking $\psi(x)=x^{1}$, so that $\zeta_{-n}(e^{n}
x)=\zeta(x)$ and
\betagin{equation*}
\|u\|_{H^{\gammamma}_{p,\theta} }^{p}:= \sum_{n\in\bZ} e^{n\theta}
\|u(e^{n} \cdot)\zeta(\cdot) \|^p_{H^{\gammamma}_p} < \infty.
\end{equation*}
Observe that the spaces $H^{\gammamma}_{p,\theta}(\bR^d_+)$ and
$H^{\gammamma}_{p,\theta}$ are different since $\psi$ is bounded.
Actually for any nonnegative function $\xi=\xi(x^1)\in
C^{\infty}_0(\bR^1)$ so that $\xi=1$ near $x^1=0$ we have
\betagin{equation}
\lambdabel{eqn 9.9}
\|u\|_{H^{\gammamma}_{p,\theta}(\bR^d_+)}\sigmam \left(\|\xi
u\|_{H^{\gammamma}_{p,\theta}}+\|(1-\xi)u\|_{H^{\gammamma}_p}\right).
\end{equation}
Also, it is known (see \cite{kr99}) that for any $\eta\in
C^{\infty}_0(\bR^d_+)$,
\betagin{equation}
\lambdabel{eqn 5.6.1}
\sum_{n=-\infty}^{\infty}
e^{n\theta}\|u(e^n\cdot)\eta\|^p_{H^{\gamma}_p} \leq c
\sum_{n=-\infty}^{\infty}
e^{n\theta}\|u(e^n\cdot)\zeta\|^p_{H^{\gamma}_p},
\end{equation}
where $c$ depends only on $d,d_1,\gammamma,\theta,p,\eta,\zeta$.
Furthermore,
if $\gammamma=n$ is a nonnegative integer then (see (\ref{eqn 02.09.1}))
\betagin{equation}
\lambdabel{eqn 02.09.2}
\|u\|^p_{H^{\gammamma}_{p,\theta}} \sigmam \sum_{k=0}^n
\sum_{|\alphapha|=k}\int_{\bR^d_+} |\psi^kD^{\alphapha}u(x)|^p
(x^1)^{\theta-d} \,dx.
\end{equation}
Let $M^{\alphapha}$ be the operator of multiplication by $(x^1)^{\alphapha}$, and set
$M=M^1$.
\betagin{lemma}
\lambdabel{collection half}
The assertions (i)-(iv) in Lemma \ref{lemma 1} hold true if one
formally replaces $H^{\gammamma}_{p,\theta}(\cO)$ and
$\psi$ by $H^{\gammamma}_{p,\theta}$ and $M$, respectively.
\end{lemma}
We need the following three lemmas to prove the main result of this section.
\betagin{lemma}
\lambdabel{lemma 2.05.10}
Let $a^{ij}_{kr}=a^{ij}_{kr}(t)$, independent of $x$. Assume that $f\in M^{-1}\bH^{\gammamma}_{2,\theta}(T)$, $u(0)\in
U^{\gammamma+2}_{2,\theta}$, and that $u\in M\bH^{\gammamma+1}_{2,\theta}(T)$ is
a solution of system (\ref{eqn system}) on $[0,T]\tildemes
\bR^d_+$. Then $u\in M\bH^{\gammamma+2}_{2,\theta}(T)$ and
\betagin{equation}
\lambdabel{eqn 5.1}
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}\leq
c\|M^{-1}u\|_{\bH^{\gammamma+1}_{2,\theta}(T)}+c\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}
+c\|u(0)\|_{U^{\gammamma+2}_{2,\theta}},
\end{equation}
where $c=c(d,d_1,\gammamma,\theta,\deltalta,K,L)$.
\end{lemma}
\betagin{proof}
By Lemma \ref{collection half} and (\ref{eqn 5.1.1}),
\betagin{eqnarray*}
\|M^{-1}u\|^2_{\bH^{\gammamma+2}_{2,\theta}(T)}
&\leq&
c\sum_{n}e^{n(\theta-2)}\|u(t,e^nx)\zeta(x)\|^2_{\bH^{\gammamma+2}_2(T)}\\
&=&
c\sum_{n}e^{n\theta}\|u(e^{2n}t,e^nx)\zeta(x)\|^2_{\bH^{\gammamma+2}_2(e^{-2n}T)}\\
&\leq&
c\sum_{n}e^{n\theta}\|(u(e^{2n}t,e^nx)\zeta(x))_{x^1x^1}\|^2_{\bH^{\gammamma}_2(e^{-2n}T)}.
\end{eqnarray*}
Denote
$$
v_n(t,x)=u(e^{2n}t,e^nx)\zeta(x),\quad
a^{ij}_{n,kr}(t)=a^{ij}_{kr}(e^{2n}t).
$$
Then since $v_n$ has compact support in $\bR^d_+$, $v_n$ is in
$\bH^{\gammamma+1}_2(e^{-2n}T)$ and satisfies
$$
(v^k_n)_t= a^{ij}_{n,kr}(v^r_n)_{x^ix^j}+f^k_n,
\quad v^k_n(0,x)=\zeta(x)u^k_0(e^nx),
$$
where
$$
f^k_n=-2e^n a^{ij}_{n,kr} u^r_{x^i}(e^{2n}t,e^nx)\zeta_{x^j}(x) -a^{ij}_{n,kr}
u^r(e^{2n}t,e^nx)\zeta_{x^ix^j}(x)+e^{2n}f^k(e^{2n}t,e^nx)\zeta(x).
$$
By Theorem \ref{thm 1}, $v_n$ is in $\bH^{\gammamma+2}_2(e^{-2n}T)$ and
$$
\|(v_{n})_{xx}\|^2_{\bH^{\gammamma}_2(e^{-2n}T)}\leq
c(d,d_1,\gammamma,\deltalta,K,L)(\|f_n\|^2_{\bH^{\gammamma}_2(e^{-2n}T)}
+\|\zeta(x)u_0(e^nx)\|^2_{U^{\gammamma+2}_2}).
$$
Thus by (\ref{eqn 5.6.1})
and Lemma \ref{collection half},
\betagin{eqnarray*}
&&\sum_{n}e^{n\theta}\|(u(e^{2n}t,e^nx)\zeta(x))_{xx}\|^2_{\bH^{\gammamma}_2(e^{-2n}T)}\\
&\leq&
c\sum_{n}e^{n\theta}\|u_{x}(t,e^n\cdot)\zeta_{x}\|^2_{\bH^{\gammamma}_2(T)}
+c\sum_{n}e^{n(\theta-2)}\|u(t,e^n\cdot)\zeta_{xx}\|^2_{\bH^{\gammamma}_2(T)}\\&&+
c\sum_{n}e^{n(\theta+2)}\|f(t,e^n\cdot)\zeta\|^2_{\bH^{\gammamma}_2(T)}
+c\sum_{n}e^{n\theta}\|u_0(e^nx)\zeta\|^2_{U^{\gammamma+2}_2}\\
&\leq & c \|u_x\|^2_{\bH^{\gammamma}_{2,\theta}(T)}+c\|M^{-1}u\|^2_{\bH^{\gammamma}_{2,\theta}(T)}
+c\|Mf\|^2_{\bH^{\gammamma}_{2,\theta}(T)}+c\|u_0\|^2_{U^{\gammamma+2}_{2,\theta}}\\
&\leq &c\|M^{-1}u\|^2_{\bH^{\gammamma+1}_{2,\theta}(T)}+c\|M
f\|^2_{\bH^{\gammamma}_{2,\theta}(T)}+c\|u_0\|^2_{U^{\gammamma+2}_{2,\theta}}.
\end{eqnarray*}
The lemma is proved.
\end{proof}
It follows from the above lemma that if $\gammamma \geq 0$, then
$$
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}\leq c\|M^{-1}u\|_{\bL_{2,\theta}(T)}
+c\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}+c\|u_0\|_{U^{\gammamma+2}_{2,\theta}}.
$$
Thus to get a priori estimate, we only need to estimate $\|M^{-1}u\|_{\bL_{2,\theta}(T)}$ in terms of $f$ and $u_0$.
\betagin{lemma}
\lambdabel{a priori 1}
Let $a^{ij}_{kr}=a^{ij}_{kr}(t)$, independent of $x$. Assume
\betagin{equation}
\lambdabel{theta 1}
\theta\in \left(d-\frac{\deltalta}{2K-\deltalta},\,\,
d+\frac{\deltalta}{2K+\deltalta}\right)
\end{equation}
and $u\in M\bH^{1}_{2,\theta}(T)$ is a solution of $($\ref{eqn
system}$)$ so that $u\in C([0,T],C^2_0((1/N,N)\tildemes
\{x':|x'|<N\}))$ for some $N>0$. Then we have
\betagin{equation}
\lambdabel{eqn main}
\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\leq
c_0(\|Mf\|^2_{\bL_{2,\theta}(T)}+\|u_0\|^2_{U^1_{2,\theta}}),
\end{equation}
where $c_0=c_0(d,\deltalta,\theta,K,L)$.
\end{lemma}
\betagin{proof}
As in the proof of Theorem \ref{thm 1}, applying the chain rule $d|u^k|^2=2u^kdu^k$ for each $k$, we have
$$
|u^k(t)|^2=|u^k_0|^2+\int^t_0 2u^k(a^{ij}_{kr}u^r_{x^ix^j}+f^k)\,ds
$$
where the summations on $i,j,r$ are understood. Denote $c=\theta-d$.
For each $k$, we have
\betagin{eqnarray}
0&\leq& \int_{\bR^d_+}|u^k(T,x)|^2(x^1)^c
dx\nonumber\\
&=&\int_{\bR^d_+}|u^k(0,x)|^2(x^1)^cdx\nonumber\\
&& + 2\int^T_0\int_{\bR^d_+} a^{ij}_{kr}u^ku^r_{x^ix^j}(x^1)^c
dxds+2\int^T_0\int_{\bR^d_+} (M^{-1}u^k)(Mf^k)(x^1)^c
dxds .\lambdabel{2009.06.02 04:27 PM}
\end{eqnarray}
Note that, by integration by parts, the second term in
(\ref{2009.06.02 04:27 PM}) is
\betagin{eqnarray}
\int^T_0\int_{\bR^d_+}\left[-2a^{ij}_{kr}u^k_{x^i}u^r_{x^j}-2
c(a^{1j}_{kr}u^r_{x^j})(M^{-1}u^k)\right](x^1)^cdxds \lambdabel{2009.09.02.6:18PM}
\end{eqnarray}
$$
\leq \int^T_0\int_{\bR^d_+}-2a^{ij}_{kr}u^k_{x^i}u^r_{x^j}(x^1)^c\,dxds+|c|\left(\kappappa\|u_x\|^2_{\bL_{2,\theta}(T)}+
K^2\kappappa^{-1}\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\right),
$$
for each $\kappappa>0$, because for any vectors $v,w\in \bR^{d_1}$ and $\kappappa>0$,
$$
|<A^{1j}v,w>|\leq |A^{1j}v||w|\leq K^j|v||w|\leq
\frac12(\kappappa|v|^2+\kappappa^{-1}(K^j)^2|w|^2).
$$
By summing up the terms in (\ref{2009.06.02 04:27 PM}) over $k$ and
rearranging the terms, we get
\betagin{eqnarray}
&&2\int^T_0\int_{\bR^d_+}u^*_{x^i}A^{ij}u_{x^j}\;(x^1)^c
dxds\nonumber\\
&\leq&|c|\left(\kappappa\|u_x\|^2_{\bL_{2,\theta}(T)}+
K^2\kappappa^{-1}\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\right)
+\varphirepsilon \|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\\
&+&c(\varphirepsilon)\|Mf\|^2_{\bL_{2,\theta}(T)}
+\|u(0)\|^2_{U^1_{2,\theta}},\lambdabel{eqn
2}
\end{eqnarray}
where
$\kappappa,\varphirepsilon>0$ will be decided below. Assumption
\ref{main assumptions}(i), inequality (\ref{eqn 2}) and the inequality
\betagin{equation}
\lambdabel{eqn 3}
\|M^{-1}u\|^2_{L_{2,\theta}}\leq
\frac{4}{(d+1-\theta)^2}\|u_x\|^2_{L_{2,\theta}}
\end{equation}
(see Corollary 6.2 in \cite{kr99}) lead us to
\betagin{eqnarray}
&&2\deltalta\|u_x\|^2_{\bL_{2,\theta}(T)}-
|c|\left(\kappappa
+\frac{4K^2}{\kappappa(d+1-\theta)^2}\right)\|u_x\|^2_{\bL_{2,\theta}(T)}\nonumber\\
&\le& c\varphirepsilon\|u_x\|^2_{\bL_{2,\theta}(T)}
+c(\varphirepsilon)\|Mf\|^2_{\bL_{2,\theta}(T)}+\|u(0)\|^2_{U^1_{2,\theta}}.\nonumber
\end{eqnarray}
Now it is enough to take $\kappappa=2K/(d+1-\theta)$ and observe that
(\ref{theta 1}) is equivalent to the condition
$$
2\deltalta-|c|\left(\kappappa
+\frac{4K^2}{\kappappa(d+1-\theta)^2}\right)=2\deltalta-\frac{4|c|K}{d+1-\theta}>0.
$$
Choosing a small $\varphirepsilon=\varphirepsilon(d,\deltalta,\theta,K,L)$, the lemma is proved.
\end{proof}
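For the reader's convenience, here is the one-dimensional computation behind (\ref{eqn 3}) for $u$ smooth and vanishing near $x^1=0$ and for large $x^1$ (the general case is Corollary 6.2 in \cite{kr99}). With $c=\theta-d<1$, integration by parts in $x^1$ and the Cauchy-Schwarz inequality give
$$
\int^{\infty}_0|u|^2(x^1)^{c-2}dx^1=\frac{2}{1-c}\int^{\infty}_0 u\cdot u_{x^1}\,(x^1)^{c-1}dx^1
\leq\frac{2}{1-c}\left(\int^{\infty}_0|u|^2(x^1)^{c-2}dx^1\right)^{1/2}\left(\int^{\infty}_0|u_{x^1}|^2(x^1)^{c}dx^1\right)^{1/2},
$$
and dividing through and integrating over the remaining variables yields (\ref{eqn 3}) with the constant $4/(1-c)^2=4/(d+1-\theta)^2$.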
\betagin{lemma}
\lambdabel{a priori 2}
Let $a^{ij}_{kr}=a^{ij}_{kr}(t)$. Suppose either
\betagin{equation}
\lambdabel{con 1}
\theta\in (d-1,d], \quad
2\deltalta(d+1-\theta)^2-2(d+1-\theta)(d-\theta)\betata-4(d-\theta)(d+1-\theta)K^1>0
\end{equation}
or
\betagin{equation}
\lambdabel{con 3}
\theta\in (d-1,d], \quad (\deltalta-\bar{\alphapha})-\frac{(d-\theta)}{(d+1-\theta)}(2\deltalta-\betata-2\alphapha)>0;
\end{equation}
or
\betagin{equation}
\lambdabel{con 2}
\theta\in [d,d+1), \quad 8(d+1-\theta)\deltalta^2-(\theta-d)\betata^2>0.
\end{equation}
Let $u\in M\bH^{1}_{2,\theta}(T)$ be a solution of $($\ref{eqn
system}$)$ so that $u\in C([0,T],C^2_0((1/N,N)\tildemes
\{x':|x'|<N\}))$ for some $N>0$. Then the assertion of Lemma \ref{a
priori 1} holds.
\end{lemma}
\betagin{proof}
1. Denote $S^{1j}=(s^{1j}_{kr})=\frac12(A^{1j}+(A^{1j})^*)$ as the
symmetric part of $A^{1j}$. Then $A^{1j}=S^{1j}+\frac12 H^{1j}$, and
for any $\xi\in \bR^{d_1}$ we notice that
$\xi^*A^{1j}\xi=\xi^*S^{1j}\xi$. Let $c:=\theta-d$. Note that, by
integration by parts,
$$
\int_{\bR^d_+}u^*S^{11}u_{x^1}(x^1)^{c-1}dx=
-\frac{c-1}{2}\int_{\bR^d_+}u^* S^{11}u
(x^1)^{c-2}dx=-\frac{c-1}{2}\int_{\bR^d_+}u^* A^{11}u (x^1)^{c-2}dx
$$
and hence
\betagin{eqnarray*}
-2c\int_{\bR^d_+}u^*A^{11}u_{x^1}(x^1)^{c-1}dx&=&-2c\int_{\bR^d_+}u^*S^{11}u_{x^1}(x^1)^{c-1}dx
-c\int_{\bR^d_+}u^*H^{11}u_{x^1}(x^1)^{c-1}dx\\
&=&c(c-1)\int_{\bR^d_+}u^* A^{11}u (x^1)^{c-2}dx
-c\int_{\bR^d_+}u^*H^{11}u_{x^1}(x^1)^{c-1}dx.
\end{eqnarray*}
Moreover, another usage of integration by parts gives us
\betagin{eqnarray}
\int_{\bR^d_+}u^*S^{1j}u_{x^j}(x^1)^{c-1}dx=
-\int_{\bR^d_+}u_{x^j}^* S^{1j}u (x^1)^{c-1}dx=-\int_{\bR^d_+}
u^*(S^{1j})^*u_{x^j} (x^1)^{c-1}dx\nonumber
\end{eqnarray}
for $j\ne 1$, meaning that
$\int_{\bR^d_+}u^*S^{1j}u_{x^j}(x^1)^{c-1}dx=0$ and
\betagin{eqnarray}
-2c\int_{\bR^d_+}u^*A^{1j}u_{x^j}(x^1)^{c-1}dx=-c\int_{\bR^d_+}u^*H^{1j}u_{x^j}(x^1)^{c-1}dx.\nonumber
\end{eqnarray}
We gather the above terms to get
\betagin{eqnarray}
-2c\int_{\bR^d_+}(a^{1j}_{kr}u^r_{x^j})u^k(x^1)^{c-1}dx
=c(c-1)\int_{\bR^d_+}u^* A^{11}u (x^1)^{c-2}dx
-c\int_{\bR^d_+}u^*H^{1j}u_{x^j}(x^1)^{c-1}dx,\nonumber
\end{eqnarray}
where the summation on $j$ includes $j=1$.
Now, as in the proof of Lemma \ref{a priori 1}, we have
\betagin{eqnarray}
&&2\deltalta\|u_x\|^2_{\bL_{2,\theta}(T)}\nonumber\\
&\le&2\int^T_0\int_{\bR^d_+}u^*_{x^i}A^{ij}u_{x^j}\;(x^1)^c
dxds\nonumber\\
&\leq&\int_{\bR^d_+}|u^k(0,x)|^2(x^1)^cdx\nonumber\\
&+&c(c-1)\int^T_0\int_{\bR^d_+}a^{11}_{kr}(M^{-1}u^k)(M^{-1}u^r)(x^1)^{c}dxds
-c\int^T_0\int_{\bR^d_+}(h^{1j}_{kr}u^r_{x^j})(M^{-1}u^k) (x^1)^{c} dxds\nonumber\\
&+&2\int^T_0\int_{\bR^d_+} (M^{-1}u^k)(Mf^k)(x^1)^c
dxds.
\lambdabel{2009.06.03 06:21 PM}
\end{eqnarray}
Note that the first and last terms in
the right hand side of (\ref{2009.06.03 06:21 PM}) are bounded by
$$
\varphirepsilon \|M^{-1}u\|^2_{\bL_{2,\theta}(T)} +c(\varphirepsilon)\|Mf\|^2_{\bL_{2,\theta}(T)}
+\|u(0)\|^2_{U^1_{2,\theta}}.
$$
2. If $c(c-1)\geq 0$, hence $\theta\in (d-1,d]$, then
\betagin{eqnarray*}
&&c(c-1)\int^T_0\int_{\bR^d_+}a^{11}_{kr}(M^{-1}u^k)(M^{-1}u^r)(x^1)^{c}dxds\\
&\leq& c(c-1)K^1\|M^{-1}u\|^2_{\bL_{2,\theta}(T)} \leq
\frac{4}{(d+1-\theta)^2}c(c-1)K^1\|u_x\|^2_{\bL_{2,\theta}(T)}.
\end{eqnarray*}
Also,
\betagin{eqnarray*}
\left|-c\int^T_0\int_{\bR^d_+}(h^{1j}_{kr}u^r_{x^j})(M^{-1}u^k)
(x^1)^{c} dxds\right|&\leq&
\frac{1}{2}|c|\left(\kappappa\|u_x\|^2_{\bL_{2,\theta}(T)}
+\kappappa^{-1}\betata^2\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\right)\\
&\leq& \frac{1}{2}|c|\left(\kappappa
+\frac{4\betata^2}{\kappappa(d+1-\theta)^2}\right)\|u_x\|^2_{\bL_{2,\theta}(T)}
\end{eqnarray*}
for any $\kappappa>0$. To minimize this we take $\kappappa=2\betata/(d+1-\theta)$, then
\betagin{equation}
\lambdabel{eqn 1.28.1}
\left|-c\int^T_0\int_{\bR^d_+}(h^{1j}_{kr}u^r_{x^j})(M^{-1}u^k)
(x^1)^{c} dxds\right| \leq \frac{2\betata(d-\theta)}{(d+1-\theta)}\|u_x\|^2_{\bL_{2,\theta}(T)}.
\end{equation}
Thus we deduce
$$
\left(2\deltalta-\frac{2\betata(d-\theta)}{(d+1-\theta)}-
\frac{4}{(d+1-\theta)^2}c(c-1)K^1\right)\|u_x\|^2_{\bL_{2,\theta}(T)}
\leq c\varphirepsilon \|u_x\|^2_{\bL_{2,\theta}(T)}
+c(\varphirepsilon)\|Mf\|^2_{\bL_{2,\theta}(T)}+\|u(0)\|^2_{U^1_{2,\theta}}.
$$
This and (\ref{eqn 3}) yield a priori
(\ref{eqn main}), since (\ref{con 1}) is equivalent to
$$
2\deltalta-\frac{2\betata(d-\theta)}{(d+1-\theta)}-
\frac{4}{(d+1-\theta)^2}c(c-1)K^1 >0.
$$
3. Again assume $c(c-1)\geq 0$. By (\ref{2009.06.03 06:21 PM}) and (\ref{eqn 1.28.1}),
\betagin{eqnarray*}
&&2\int^T_0\int_{\bR^d_+}u^*_{x^i}\left(S^{ij}_d+S^{ij}_o\right)u_{x^j}\;(x^1)^c
dxds\\
&\leq&\int_{\bR^d_+}|u^k(0,x)|^2(x^1)^cdx\\
&+&c(c-1)\int^T_0\int_{\bR^d_+}\left(s^{11}_{d,kr}+s^{11}_{o,kr}\right)(M^{-1}u^k)(M^{-1}u^r)(x^1)^{c}dxds\\
&+&\frac{2\betata(d-\theta)}{(d+1-\theta)}\|u_x\|^2_{\bL_{2,\theta}(T)}
+\varphirepsilon\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}+c\|Mf\|^2_{\bL_{2,\theta}(T)}.
\end{eqnarray*}
By Corollary 6.2 of \cite{kr99}, for each $t$,
$$
c(c-1)
\int s^{11}_{d,kr}(M^{-1}u^k)(M^{-1}u^r)(x^1)^{c}\,dx\leq \frac{4(d-\theta)}{(d+1-\theta)}\int_{\bR^d_+}u^*_{x^i} S^{ij}_{d}u_{x^j}\;(x^1)^c\,dx.
$$
By assumptions,
$$
2\int_{\bR^d_+} u^*_{x^i} S^{ij}_{o}u_{x^j}\;(x^1)^c\,dx\leq 2\bar{\alphapha}\int_{\bR^d_+}|u_x|^2\;(x^1)^c\,dx,
$$
\betagin{eqnarray*}
c(c-1)\int_{\bR^d_+} s^{11}_{o,kr}|M^{-1}u^k||M^{-1}u^r|(x^1)^c\,dx &\leq& \alphapha c(c-1)\int_{\bR^d_+}|M^{-1}u|^2(x^1)^c\,dx \\
&\leq &\frac{4\alphapha(d-\theta)}{(d+1-\theta)}\int_{\bR^d_+}|u_x|^2\;(x^1)^c\,dx.
\end{eqnarray*}
It follows
$$
\left[(\deltalta-\bar{\alphapha})-\frac{(d-\theta)}{(d+1-\theta)}(2\deltalta-\betata-2\alphapha)\right]
\|u_x\|^2_{\bL_{2,\theta}(T)}\leq \varphirepsilon \|u_x\|^2_{\bL_{2,\theta}(T)}+c\|Mf\|^2_{\bL_{2,\theta}(T)}
+\|u_0\|^2_{U^1_{2,\theta}}.
$$
This, (\ref{con 3}) and (\ref{eqn 3}) lead to the a priori estimate.
4. If $c(c-1)\leq 0$, hence $\theta\in [d,d+1)$, then
$$
c(c-1)\int^T_0\int_{\bR^d_+}a^{11}_{kr}(M^{-1}u^k)(M^{-1}u^r)(x^1)^{c}dxds\le
\deltalta c(c-1)\|M^{-1}u\|^2_{\bL_{2,\theta}(T)};
$$
for this we consider a $d_1\tildemes d$ matrix consisting of $M^{-1}u$
as the first column and zeros for the rest, and apply
(\ref{assumption 1}). Next, as before, we have
\betagin{eqnarray*}
\left|-c\int^T_0\int_{\bR^d_+}(h^{1j}_{kr}u^r_{x^j})(M^{-1}u^k)
(x^1)^{c} dxds\right|&\leq&
\frac{1}{2}c\left(\kappappa\|u_x\|^2_{\bL_{2,\theta}(T)}
+\kappappa^{-1}\betata^2\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\right)
\end{eqnarray*}
and hence from (\ref{2009.06.03 06:21 PM}) it follows
\betagin{eqnarray}
&&2\deltalta\|u_x\|^2_{\bL_{2,\theta}(T)}-
\frac12 c\;\left(\kappappa\|u_x\|^2_{\bL_{2,\theta}(T)}
+\kappappa^{-1}\betata^2\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\right)-\deltalta c(c-1)\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}\nonumber\\
&\le&{\varphirepsilon}\|u_x\|^2_{\bL_{2,\theta}(T)}
+c(\varphirepsilon) \|Mf\|^2_{\bL_{2,\theta}(T)}+\|u(0)\|^2_{U^1_{2,\theta}}.\lambdabel{2009.06.03
07:56 PM}
\end{eqnarray}
As we take
$$
\kappappa=\frac{\betata^2}{2\deltalta(1-c)},$$ the terms with
$\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}$ in the left hand side of
(\ref{2009.06.03 07:56 PM}) are canceled. Now (\ref{con 2}), which
is equivalent to $2\deltalta-\frac{c\betata^2 }{4\deltalta(1-c)}>0$, gives us the
a priori estimate (\ref{eqn main}). The lemma is proved.
\end{proof}
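For clarity, the choice of $\kappa$ in the last step can be checked directly: with $\kappa=\beta^2/(2\delta(1-c))$ one has $\tfrac12 c\kappa^{-1}\beta^2=c\delta(1-c)=-\delta c(c-1)$, so the two terms containing $\|M^{-1}u\|^2_{\bL_{2,\theta}(T)}$ on the left of (\ref{2009.06.03 07:56 PM}) indeed cancel, while the remaining coefficient of $\|u_x\|^2_{\bL_{2,\theta}(T)}$ is $2\delta-\tfrac12 c\kappa=2\delta-\frac{c\beta^2}{4\delta(1-c)}$, which is positive precisely under (\ref{con 2}).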
\betagin{theorem}
\lambdabel{theorem half-constant}
Let $\gammamma \geq 0$ and $a^{ij}_{kr}=a^{ij}_{kr}(t)$. Assume that one of (\ref{theta 1}), (\ref{con 1}), (\ref{con 3}) and (\ref{con 2}) holds.
Then for any $f\in M^{-1}\bH^{\gammamma}_{2,\theta}(T)$ and $ u_0\in
U^{\gammamma+2}_{2,\theta}$, system (\ref{eqn system}) admits a unique
solution $u\in \frH^{\gammamma+2}_{2,\theta}(T)$, and for this solution
\betagin{equation}
\lambdabel{a priori}
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}\leq
c\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}+c\|u_0\|_{U^{\gammamma+2}_{2,\theta}},
\end{equation}
where $c=c(d,\deltalta,\theta,K,L)$.
\end{theorem}
\betagin{proof}
1. By Theorem 3.3 in \cite{KL2}, for each $k$, the equation
$$
u^k_t=\deltalta \Deltalta u^k+f^k, \quad u^k(0)=u^k_0
$$
has a solution $u^k\in \frH^{\gammamma+2}_{2,\theta}(T)$. As in the
proof of Theorem \ref{thm 1} we only need to show that estimate
(\ref{a priori}) holds given that a solution already exists.
2. By Theorem 2.9 in \cite{KL2}, for any nonnegative integer $n\geq
\gammamma+2$, the set
$$
\frH^{n}_{2,\theta}(T) \cap
\bigcup_{N=1}^{\infty}C([0,T],C^n_0((1/N,N)\tildemes
\{x':|x'|<N\}))
$$
is everywhere dense in $\frH^{\gammamma+2}_{2,\theta}(T)$ and we may
assume that $u$ is sufficiently smooth in $x$ and vanishes near the
boundary. Thus a priori estimate (\ref{a
priori}) follows from Lemma \ref{lemma 2.05.10}, Lemma \ref{a priori 1} and Lemma \ref{a priori
2}. The theorem is proved.
\end{proof}
Here is the main result of this section.
\betagin{theorem}
\lambdabel{theorem half}
Let $\gammamma \geq 0$ and Assumption \ref{assumption theta} hold. Assume that for each $t$
$$
|a^{ij}_{kr}(t,\cdot)|^{(0)*}_{\gammamma_+}
+|b^{i}_{kr}(t,\cdot)|^{(1)*}_{\gammamma_+}
+|c_{kr}(t,\cdot)|^{(2)*}_{\gammamma_+} \leq L
$$
and
$$
|a^{ij}_{kr}(t,x)-a^{ij}_{kr}(t,y)| +
|Mb^i_{kr}(t,x)|+|M^2c_{kr}(t,x)|<\kappappa
$$
for all $x,y\in \bR^d_+$ with $|x-y|\leq x^1\wedge y^1$. Then there
exists $\kappappa_0=\kappappa_0(d,\theta,\deltalta,K,L)$ so that if $\kappappa\leq
\kappappa_0$, then for any $f\in M^{-1}\bH^{\gammamma}_{2,\theta}(T)$,
and $u_0\in
U^{\gammamma+2}_{2,\theta}$, system (\ref{eqn system2}) admits a unique solution $u\in
\frH^{\gammamma+2}_{2,\theta}(T)$, and furthermore
\betagin{equation}
\lambdabel{eqn 9.9.2}
\|u\|_{\frH^{\gammamma+2}_{2,\theta}(T)}\leq
c\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}
+c\|u_0\|_{U^{\gammamma+2}_{2,\theta}}
\end{equation}
where $c=c(d,d_1,\deltalta,\theta,K,L)$.
\end{theorem}
To prove Theorem \ref{theorem half} we use
the following lemmas taken from \cite{KK2}.
\betagin{lemma}
\lambdabel{lemma 8.26.10}
Let $C,\deltalta\in(0,\infty)$ be constants, let
$u\in H^{\gammamma}_{p,\theta}$ be a function, and let $q$ be the smallest integer such
that $|\gammamma|+2\leq q$.
(i) Let $\eta_{n}\in C^{\infty}(\bR^{d}_{+})$, $n=1,2,...$, satisfy
\betagin{equation}
\lambdabel{8.26.11}
\sum_{n}M^{|\alphapha|} |D^{\alphapha} \eta_n |\leq C
\quad\thetaxt{in}\quad\bR^{d}_{+}
\end{equation}
for any multi-index $\alphapha$ such that $ 0\leq |\alphapha| \leq q$.
Then
$$
\sum_{n} \|\eta_{n}u\|^{p}_{H^{\gammamma}_{p,\theta} } \leq
NC^{p}\|u\|^{p}_{H^{\gammamma}_{p,\theta} },
$$
where the constant $N$ is independent of $u$, $\theta$, and $C$.
(ii) If in addition to the condition in (i)
\betagin{equation}
\lambdabel{1.5.2}
\sum_{n} \eta_{n} ^{2}\geq\deltalta\quad\thetaxt{on}\quad\bR^{d}_{+},
\end{equation}
then
\betagin{equation}
\lambdabel{11.25.1}
\|u\|^{p}_{H^{\gammamma}_{p,\theta} }\leq N\sum_{n}
\|\eta_{n}u\|^{p}_{H^{\gammamma}_{p,\theta} },
\end{equation}
where the constant $N$ is independent of $u$ and $\theta$.
\end{lemma}
The reason the first inequality in (\ref{11.14.1}) below is written
for $\eta_n^4$, rather than for $\eta_n^2$ as in the above lemma, is to make it
possible to apply Lemma \ref{lemma 8.26.10} to $\eta_n^2$.
Also observe that obviously $\sum a^2 \leq (\sum |a|)^2$.
\betagin{lemma}
\lambdabel{lemma 11.14.1}
For each $\varphirepsilon>0$ and $q=1,2,...$ there exist non-negative
functions $\eta_{n}\in C^{\infty}_{0}(\bR^{d}_{+})$, $n=1,2,...$
such that (i) on $\bR^{d}_{+}$ for each multi-index $\alphapha$ with
$1\leq|\alphapha|\leq q$ we have
\betagin{equation}
\lambdabel{11.14.1}
\sum_{n}\eta^{4}_{n}\geq1,\quad \sum_{n} \eta _{n} \leq N(d),
\quad\sum_{n}M^{|\alphapha|}|D^{\alphapha}\eta_{n}|\leq\varphirepsilon;
\end{equation}
(ii) for any $n$ and $x,y\in\thetaxt{\rm supp}\,\eta_{n}$ we have $
|x-y|\leq N ( x^{1}\wedge y^{1})$, where
$N=N(d,q,\varphirepsilon)\in[1,\infty)$.
\end{lemma}
\betagin{lemma}
\lambdabel{lemma 1.2.2}
Let $p\in(1,\infty)$, $\gammamma,\theta\in\bR$. Then there exists a
constant $N=N(\gammamma,|\gammamma|_+,p,d)$ such that if $f\in
H^{\gammamma}_{p,\theta}$ and $a$ is a function with finite norm
$|a|^{(0)*}_{|\gammamma|_+,\bR^{d}_{+}}$, then
\betagin{equation}
\lambdabel{8.19.05}
\|af\|_{H^{\gammamma}_{p,\theta}} \leq N
|a|^{(0)*}_{|\gammamma|_+,\bR^{d}_{+}} \|f\|_{H^{\gammamma }_{p,\theta}}.
\end{equation}
In addition,
(i) if $\gammamma=0,1,2,...$, then
\betagin{equation}
\lambdabel{1.24.06}
\|af\|_{H^{\gammamma}_{p,\theta}} \leq N \sup_{\bR^{d}_{+}}|a|\,
\|f\|_{H^{\gammamma }_{p,\theta}}+ N_0\|f\|_{H^{\gammamma-1}_{p,\theta}}
\sup_{\bR^{d}_{+}}\sup_{1\leq|\alphapha|\leq\gammamma}
|M^{|\alphapha|}D^{\alphapha}a |,
\end{equation}
where $N_0=0$ if $\gammamma=0$, and $N_0=N_0(\gammamma,d)>0$ otherwise.
(ii) if $\gammamma$ is not an integer, then
\betagin{equation}
\lambdabel{1.24.07}
\|af\|_{H^{\gammamma}_{p,\theta}} \leq N (\sup_{\bR^{d}_{+}}|a|)^{s}
(|a|^{(0)*}_{|\gammamma|_+})^{1-s} \|f\|_{H^{\gammamma }_{p,\theta}},
\end{equation}
where $s:=1-\frac{|\gammamma|}{{|\gammamma|_+}} > 0$.
\end{lemma}
{\bf{Proof of Theorem \ref{theorem half}}}
We closely follow the proof of Theorem 2.16 of \cite{KK}.
As usual, for simplicity, we assume $u_0 =0$.
Also having the method of continuity in mind, we convince
ourselves that
to prove the theorem it suffices to
show that there exist $\kappappa_{0}$ such that
the a priori estimate (\ref{eqn 9.9.2}) holds
given that the solution already exists and $\kappappa
\leq\kappappa_{0}$.
We divide the
proof into two cases. This is because if $\gammamma$ is an
integer we use (\ref{1.24.06}), and otherwise we use (\ref{1.24.07}).
{\bf Case 1}: $\gammamma=0$ or $\gammamma$ is not an integer.
Take the least integer $q\geq|\gammamma|+4$. Also take an
$\varphirepsilon\in(0,1)$ to be specified later and take a sequence of
functions $\eta_{n}$, $n=1,2,...$, from Lemma \ref{lemma 11.14.1}
corresponding to $\varphirepsilon,q$.
Then by Lemma \ref{lemma 8.26.10}, we have
\betagin{equation}
\lambdabel{8.28.15}
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} \leq
N\sum_{n=1}^{\infty}
\|M^{-1}u\eta^{2}_{n}\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2}.
\end{equation}
For any $n$ let $x_{n}$ be a point in $\thetaxt{supp}\,\eta_{n}$
and $a^{ij}_{n,kr}(t)=a^{ij}_{kr}(t,x_{n})$.
From (\ref{eqn system2}), we easily have
$$
(u^k\eta^{2}_{n})_t=
a^{ij}_{n,kr}(u^r\eta^{2}_{n})_{ x^ix^j}+M^{-1}f^k_{n},
$$
where
$$
f^k_{n}=(a^{ij}_{kr}-a^{ij}_{n,kr}) \eta^{2}_{n} Mu^r_{x^ix^j}
-2a^{ij}_{n,kr}M(\eta^{2}_{n})_{ x^i}u^r_{x^j}
-a^{ij}_{n,kr}M^{-1}u^rM^{2}(\eta^{2}_{n})_{ x^ix^j}
$$
$$
+\eta_{n}^{2}Mb^{i}_{kr}u^r_{x^{i}}
+\eta_{n}^{2}M^{2}c_{kr}M^{-1}u^r +Mf^k\eta^{2}_{n}.
$$
By Theorem \ref{theorem half-constant}, for each n,
\betagin{equation}
\lambdabel{8.28.20}
\|M^{-1}u\eta^{2}_{n}\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} \leq N
\|f_{n}\|^{2}_{\bH^{\gammamma}_{2,\theta}(T)}
\end{equation}
and by (\ref{1.24.07}),
\betagin{equation}
\lambdabel{1.24.01}
\|(a^{ij}_{kr}-a^{ij}_{n,kr}) \eta^{2}_{n} Mu^r_{x^ix^j}\|
_{\bH^{\gammamma}_{2,\theta}(T)} \leq N\| \eta _{n} Mu_{xx}\|
_{\bH^{\gammamma}_{2,\theta}(T)} \sup_{t,x}|(a^{ij}_{kr}-
a^{ij}_{n,kr})\eta_{n}|^{s},
\end{equation}
where $s=1$ if $\gammamma=0$, and $s=1-\frac{\gammamma}{\gammamma_+} >0$ otherwise.
By Lemma \ref{lemma 11.14.1}(ii), for each $n$ and
$x,y\in\thetaxt{supp}\,\eta_{n}$
we have
$|x-y|\leq N(\varphirepsilon)(x^{1}\wedge y^{1})$, where
$N(\varphirepsilon)=N(d,q,\varphirepsilon)$, and we can easily find not more
than $N(\varphirepsilon)+2 \leq 3N(\varphirepsilon)$ points $x_i$ lying on
the straight segment
connecting $x$ and $y$ and including $x$ and $y$, such that $|x_i-x_{i+1}|\leq
x^1_{i} \wedge x^1_{i+1}$. It follows from our assumptions
$$
\sup_{t,x}|(a^{ij}_{kr}-a^{ij}_{n,kr})\eta_{n}|
\leq 3N(\varphirepsilon)\kappappa.
$$
We substitute this to (\ref{1.24.01}) and get
$$
\|(a^{ij}_{kr}-a^{ij}_{n,kr}) \eta^{2}_{n}
Mu^r_{x^ix^j}\|_{\bH^{\gammamma}_{2,\theta}(T)} \leq N
N(\varphirepsilon)\kappappa^{s}\| \eta _{n}
Mu_{xx}\|_{\bH^{\gammamma}_{2,\theta}(T)}.
$$
Similarly,
$$
\| \eta^{2}_{n} Mb^{i}_{kr}u^r_{x^i}\|
_{\bH^{\gammamma}_{2,\theta}(T)}+ \| \eta^{2}_{n}
M^{2}c_{kr}M^{-1}u^r\| _{\bH^{\gammamma}_{2,\theta}(T)} \leq N
N(\varphirepsilon)
\kappappa^{s}(\|\eta_{n}u_{x}\|_{\bH^{\gammamma}_{2,\theta}(T)}
+\|\eta_{n}M^{-1}u\| _{\bH^{\gammamma}_{2,\theta}(T)}).
$$
Coming back to (\ref{8.28.20}) and (\ref{8.28.15}) and using Lemma
\ref{lemma 8.26.10}, we conclude
$$
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} \leq NN(\varphirepsilon)
\kappappa^{2s}(\|M u_{xx}\|_{\bH^{\gammamma}_{2,\theta}(T)}^{2} +
\|u_{x}\|_{\bH^{\gammamma}_{2,\theta}(T)}^{2} +
\|M^{-1}u\|_{\bH^{\gammamma}_{2,\theta}(T)}^{2})
$$
\betagin{equation}
\lambdabel{11.19.2}
+NC^{2}\left(\|u_{x}\|_{\bH^{\gammamma }_{2,\theta}(T)}^{2}
+\|M^{-1}u\|_{\bH^{\gammamma+1}_{2,\theta}}^{2}\right)+N
\|Mf\|_{\bH^{\gammamma }_{2,\theta}}^{2},
\end{equation}
where
$$
C=\sup_{\bR^{d}_{+}}\sup_{|\alphapha|\leq q-2}
\sum_{n=1}^{\infty}M^{|\alphapha|}(|D^{\alphapha}(M(\eta_{n}^{2})_{x})|
+|D^{\alphapha}(M^{2}(\eta_{n}^{2})_{xx})|).
$$
By construction, we have $C\leq N\varphirepsilon$. Furthermore (see, Lemma \ref{collection half})
\betagin{equation}
\lambdabel{11.19.1}
\|u_{x}\|_{H^{\gammamma+1}_{2,\theta}} \leq
N\|M^{-1}u\|_{H^{\gammamma+2}_{2,\theta}},\quad
\|Mu_{xx}\|_{H^{\gammamma}_{2,\theta}} \leq
N\|M^{-1}u\|_{H^{\gammamma+2}_{2,\theta}}.
\end{equation}
Hence (\ref{11.19.2}) yields
$$
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} \leq
N_{1}(N(\varphirepsilon)\kappappa^{2s}+\varphirepsilon^{2})
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} +N ( \|Mf\|_{\bH^{\gammamma
}_{2,\theta}(T)}^{2}).
$$
Finally, to get the a priori estimate, it is enough to
choose first
$\varphirepsilon$ and then $\kappappa_{0}$,
so that $N_{1}(N(\varphirepsilon)\kappappa^{2s} +\varphirepsilon^{2})\leq1/2$
for $\kappappa\leq\kappappa_{0}$.
{\bf Case 2}: $\gammamma \in\{1,2,...\}$.
Proceed as in Case 1 with $\varphirepsilon=1$ and
arrive at (\ref{8.28.20}) which is
$$
\|M^{-1}u\eta^{2}_{n}\|_{\bH^{\gammamma+2}_{2,\theta}(T)}^{2} \leq
N\|f_{n}\|^{2}_{\bH^{\gammamma}_{2,\theta}(T)}.
$$
Now we use (\ref{1.24.06}) to get
$$
\|(a^{ij}_{kr}-a^{ij}_{n,kr}) \eta^{2}_{n}
Mu^r_{x^ix^j}\|_{\bH^{\gammamma}_{2,\theta}(T)} \leq N
\kappappa\| \eta _{n}
Mu_{xx}\|_{\bH^{\gammamma}_{2,\theta}(T)}+N\| \eta _{n}
Mu_{xx}\|_{\bH^{\gammamma-1}_{2,\theta}(T)}.
$$
From this point by following the arguments in case 1, one easily gets
\betagin{equation}
\lambdabel{eqn 2.06.2}
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)} \leq N_{1} \kappappa
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}
+ N_2 \|M^{-1}u\|_{\bH^{\gammamma+1}_{2,\theta}(T)} +
N\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}.
\end{equation}
This and the embedding inequality
$$
\|M^{-1}u\|_{H^{\gammamma+1}_{2,\theta}}\leq \frac{1}{2N_2} \|M^{-1}u\|_{H^{\gammamma+2}_{2,\theta}}+
N(N_2,\gammamma)\|M^{-1}u\|_{H^2_{2,\theta}}
$$
yield
\betagin{equation}
\lambdabel{1.24.03}
\|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)} \leq 2N_1 \kappappa \|M^{-1}u\|_{\bH^{\gammamma+2}_{2,\theta}(T)}+N
\|M^{-1}u\|_{\bH^2_{2,\theta}(T)} + N\|Mf\|_{\bH^{\gammamma}_{2,\theta}(T)}.
\end{equation}
Now take $\kappappa_0$ from Case 1 (for $\gammamma=0$); then it is enough to assume
$\kappappa \leq \kappappa_0\wedge 1/{(4N_1)}$, because
by the result of Case 1,
$$
\|M^{-1}u\|_{\bH^2_{2,\theta}(T)}\leq N\|Mf\|_{\bL_{2,\theta}(T)}.
$$
The theorem is proved.
\mysection{Proof of Theorem \ref{main theorem on domain}}
\lambdabel{section 5}
By Theorem 2.10 in \cite{KK2}, for each $k$ and any $f^k \in \psi^{-1}\bH^{\gammamma}_{2,\theta}(\cO,T)$ and
$u^k_0\in U^{\gammamma+2}_{2,\theta}(\cO)$, the equation
$$
u^k_t=\Deltalta u^k +f^k, \quad u^k(0)=u^k_0
$$
has a unique solution $u^k\in \frH^{\gammamma+2}_{2,\theta}(\cO,T)$, and furthermore
$$
\|\psi^{-1}u^k\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,T)}\leq c\|\psi f^k\|_{\bH^{\gammamma}_{2,\theta}(\cO,T)}
+c\|u^k_0\|_{U^{\gammamma+2}_{2,\theta}(\cO)}.
$$
Thus to prove the theorem we only
need to prove that (\ref{a priori domain}) holds given that a
solution $u\in \frH^{\gammamma+2}_{2,\theta}(\cO,T)$ already exists. As
usual we assume $u_0=0$.
Let $x_0 \in \partialrtial \cO$ and $\Psi$ be a function from
Assumption \ref{assumption domain}. In \cite{KK2} it is shown that
$\Psi$ can be chosen in such a way that for any non-negative integer $n$
\betagin{equation}
\lambdabel{2.25.03}
|\Psi_{x}|^{(0)}_{n,B_{r_0}(x_0)\cap \cO} +
|\Psi^{-1}_{x}|^{(0)}_{n,J_{+}} < N(n)< \infty
\end{equation}
and
\betagin{equation}
\lambdabel{2.25.02}
\rho(x)\Psi_{xx}(x) \to 0 \quad \thetaxt{as}\quad \rho(x) \to 0, \quad x\in
B_{r_0}(x_0)\cap \cO,
\end{equation}
where the constants $N(n)$ and the
convergence in (\ref{2.25.02}) are independent of $x_0$.
Define $r=r_{0}/K_{0}$
and fix smooth functions $\eta \in C^{\infty}_{0}(B_r ), \varphirphi\in
C^{\infty}(\bR)$ such that $ 0 \leq \eta, \varphirphi \leq 1$, and
$\eta=1$ in $B_{r/2} $, $\varphirphi(t)=1$ for $t\leq -3$, and
$\varphirphi(t)=0$
for $t\geq-1$. Observe that
$\Psi(B_{r_0}(x_0))$ contains $B_r $.
For $n=1,2,... $, $t>0$, $x\in\bR^{d}_{+}$ introduce
$\varphirphi_{n}(x)=\varphirphi(n^{-1}\ln x^1)$,
$$
\hat{a}^{ij,n}(t,x):= \eta(x)\varphirphi_n(x)\left(\sum_{l,m=1}^d
a^{lm}(t,\Psi^{-1}(x))\cdot\partialrtial_l\Psi^{i}(\Psi^{-1}(x))\cdot\partialrtial_m\Psi^{j}(\Psi^{-1}(x))\right) +
\deltalta^{ij}(1- \eta(x)\varphirphi_n(x) )I,
$$
\betagin{eqnarray*}
\hat{b}^{i,n}(t,x) &:=&\eta(x) \varphirphi_{n}(x)\Big[
\sum_{l,m}a^{lm}(t,\Psi^{-1}(x))\cdot
\partialrtial_{lm}\Psi^{i}(\Psi^{-1}(x))+\sum_{l}b^{l}(t,\Psi^{-1}(x))\cdot\partialrtial_l\Psi^{i}(\Psi^{-1}(x))\Big],
\end{eqnarray*}
$$
\hat{c}^{n}(t,x) :=\eta(x) \varphirphi_{n}(x)c(t,\Psi^{-1}(x)).
$$
Then by Assumption \ref{assumption regularity}(iii) and (\ref{2.25.03}),
one can show that there is a constant $L'$ independent of $n$ and $x_0$ such that
$$
|\hat{a}^{ij,n}(t,\cdot)|^{(0)*}_{\gammamma_+}
+|\hat{b}^{i,n}_{kr}(t,\cdot)|^{(1)*}_{\gammamma_+}
+|\hat{c}^{n}(t,\cdot)|^{(2)*}_{\gammamma_+} \leq L'.
$$
Take $\kappappa_0$ from Theorem \ref{theorem
half} corresponding to $d,d_1,\theta,\deltalta, K$ and $L'$.
Observe that $\varphirphi_n(x)=0$ for $x^1 \geq e^{-n}$. Also
(\ref{2.25.02}) implies $x^{1}\Psi_{xx}(\Psi^{-1}(x)) \to 0$ as $x^1\to 0$.
Using these facts and
Assumption \ref{assumption regularity}(ii), one can find
$n>0$ independent of $x_0$ such that
$$
|\hat{a}^{ij,n}_{kr}(t,x)-\hat{a}^{ij,n}_{kr}(t,y)|
+ x^1|\hat{b}^{i,n}_{kr}(t,x)|+ (x^1)^2|\hat{c}^n_{kr}(t,x)| \leq \kappappa_0,
$$
whenever $t>0, x,y\in \bR^d_+$ and $|x-y|\leq x^1 \wedge y^1$.
Now we fix a $\rho_0 <r_0 $ such that
$$
\Psi(B_{\rho_0}(x_0)) \subset B_{r/2} \cap \{x:x^1 \leq e^{-3n }\}.
$$
Let $\zeta$ be a smooth function
with support in $B_{\rho_0}(x_0)$ and denote
$v:=(u\zeta)(\Psi^{-1})$ and continue $v$ as zero in
$\bR^{d}_{+}\setminus\Psi(B_{\rho_0}(x_0))$. Since
$\eta\varphirphi_{n}=1$ on $\Psi(B_{\rho_0}(x_0))$, the function $v$
satisfies
$$
v^k_t = \hat{a}^{ij,n}_{kr}v^r_{x^i x^j} + \hat{b}^{i,n}_{kr}v^r_{x^i} +
\hat{c}^{n}_{kr}v^r + \hat{f}^k
$$
where
$$\hat{f}^k =\tildelde{f}^k(\Psi^{-1}), \quad \tildelde{f}^k=
-2a^{ij}_{kr}u^r_{x^{i}}\zeta_{x^{j}}
-a^{ij}_{kr}u^r\zeta_{x^{i}x^{j}}-b^{i}_{kr}u^r\zeta_{x^{i}} +\zeta f^k.
$$
Next we observe that by Lemma \ref{lemma 10.3.1} and
Theorem 3.2 in \cite{Lo2} (or see \cite{KK2})
for any $\nu,\alphapha \in \bR $ and $h \in
\psi^{-\alphapha}H^{\nu}_{p,\theta}(\cO)$ with support in
$B_{\rho_0}(x_0)$
\betagin{equation}
\lambdabel{1.28.01}
\|\psi^{\alphapha}h\|_{H^{\nu}_{p,\theta}(\cO)} \sigmam
\|M^{\alphapha}h(\Psi^{-1})\|_{H^{\nu}_{p,\theta}}.
\end{equation}
Therefore we conclude that
$v\in \frH^{\gammamma+2}_{2,\theta}(T)$, and by Theorem \ref{theorem half} we have, for any $t\leq T$,
$$
\|M^{-1}v\|_{\bH^{\gammamma+2}_{2,\theta}(t)} \leq N \|M\hat{f}
\|_{\bH^{\gammamma}_{2,\theta}(t)}.
$$
By using (\ref{1.28.01}) again we obtain
\betagin{eqnarray*}
\|\psi^{-1}u\zeta\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)} &\leq& N
\|a\zeta_x \psi u_x\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)} + N
\|a\zeta_{xx}\psi u\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}\\
&+& N \|\zeta_x \psi b u\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)} + N \|\zeta \psi
f\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}.
\end{eqnarray*}
Next, we easily check that
$$
|\zeta_x a(t,\cdot)|^{(0)}_{|\gammamma|_+},\,\, |\zeta_{xx}\psi
a(t,\cdot)|^{(0)}_{|\gammamma|_+},\,\, |\zeta_x \psi
b(t,\cdot)|^{(0)}_{|\gammamma|_+}
$$
are bounded on $[0,T]$, and conclude
$$
\|\psi^{-1}u\zeta\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)} \leq N \|\psi
u_x\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)} + N
\|u\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}
+N \|\psi f\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}.
$$
Finally, to estimate the norm
$\|\psi^{-1} u\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)}$,
we introduce a partition of unity $\zeta_{(i)}, i=0,1,2,...,M$ such
that $\zeta_{(0)} \in C^{\infty}_0(\cO)$ and
$\zeta_{(i)} \in C^{\infty}_0(B_{\rho_0}(x_i))$,
$ x_i \in \partialrtial \cO$ for $i\geq1$.
Observe that since
$u\zeta_{(0)}$ has compact support in $\cO$, we get
$$
\|\psi^{-1}u\zeta_{(0)}\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)}\sigmam
\|u\zeta_{(0)}\|_{\bH^{\gammamma+2}_{2}(t)}.
$$
Thus we can estimate
$\|\psi^{-1} u\zeta_{(0)}\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)}$ using
Theorem \ref{thm 2} and the other norms as above.
By summing
up those estimates we get
$$
\|\psi^{-1} u\|_{\bH^{\gammamma+2}_{2,\theta}(\cO,t)} \leq N
\|\psi u_x\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}+
N\|u\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}
+ N \|\psi f\|_{\bH^{\gammamma}_{2,\theta}(\cO,t)}.
$$
Furthermore, we know that
$$
\|\psi u_x\|_{ H^{\gammamma}_{2,\theta}(\cO)} \leq N \|u\|_{
H^{\gammamma+1}_{2,\theta}(\cO)}.
$$
Therefore it follows
\betagin{eqnarray*}
\|u\|^2_{\frH^{\gammamma+2}_{2,\theta}(\cO,t)} &\leq& N
\|u\|^2_{\bH^{\gammamma+1}_{2,\theta}(\cO,t)} + N \|\psi
f\|^2_{\bH^{\gammamma}_{2,\theta}(\cO,t)}\\
&\leq & N \int^t_0 \|u\|^2_{\frH^{\gammamma+2}_{2,\theta}(\cO,s)}\,ds+N \|\psi
f\|^2_{\bH^{\gammamma}_{2,\theta}(\cO,t)}
\end{eqnarray*}
where Lemma \ref{lemma 15.05} is used for the second inequality. Now (\ref{a priori domain}) follows from
Gronwall's inequality. The theorem is proved.
\mysection{Proof of Theorem \ref{theorem elliptic}}
\lambdabel{section 6}
Again we only show that a priori estimate (\ref{a priori elliptic}) holds
given that a solution $u\in \psi H^{\gammamma+2}_{2,\theta}(\cO)$
already exists. By (\ref{03.04.01}) it follows that $\psi$ is a point-wise multiplier in $H^{\nu}_{p,\theta}(\cO)$ for any $\nu$ and $p$. Thus
\betagin{equation}
\lambdabel{eqn 6.24}
\|u\|_{U^{\gammamma+2}_{2,\theta}(\cO)}:=\|u\|_{H^{\gammamma+1}_{2,\theta}(\cO)}\leq
c(\theta,\gammamma)\|\psi^{-1}u\|_{H^{\gammamma+1}_{2,\theta}(\cO)}.
\end{equation}
Note that $v^k:=u^ke^{\lambdambda^k t}$ satisfies
$$
v^k_t=a^{ij}_{kr}v^r_{x^ix^j}+b^{i}_{kr}v^r_{x^i}+c_{kr}v^r+f^ke^{\lambdambda^k t}.
$$
By (\ref{a priori domain}) and (\ref{eqn 6.24}),
$$
g_1(T)\|\psi^{-1}u\|_{H^{\gammamma+2}_{2,\theta}(\cO)}\leq
ce^{cT}\left(\|\psi^{-1}u\|_{H^{\gammamma+2}_{2,\theta}(\cO)}+g_2(T)\|\psi
f\|_{H^{\gammamma}_{2,\theta}(\cO)}\right),
$$
where
$$
g_1(T)=\left(\int^T_0 e^{2t\min\{\lambdambda^k\}}dt\right)^{1/2}, \quad
g_2(T)=\left(\int^T_0 e^{2t\max\{\lambdambda^k\}}dt\right)^{1/2}.
$$
If $\min\{\lambdambda^k\}>c$, then the ratio $ce^{cT}/{g_1(T)}$ tends to
zero as $T\to \infty$. Then after finding a $T$ such that this ratio
is less than $1/2$ one gets (\ref{a priori elliptic}). The theorem is
proved.
\betagin{thebibliography}{mm}
{\small
\bibitem{Ch} J. Chabrowski, ``The Dirichlet problem with $L^2$
boundary data for elliptic equations'',
Lecture Notes in Math., {\bf{1482}} (1991),
Springer Verlag.
\bibitem{DN} A. Douglis and L. Nirenberg, {\em Interior estimates
for elliptic systems of partial differential equations\/},
Comm. Pure Appl. Math., {\bf{8}}(1955), 503-538.
\bibitem{Fr} A. Friedman, ``Partial differential equations of parabolic
type'', Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964.
\bibitem{GH} D. Gilbarg and L. H\"ormander,
{\em Intermediate Schauder estimates\/}, Archive Rational Mech.
Anal., {\bf{74}} (1980), 297-318.
\bibitem{GT} D.~Gilbarg and N.S.~Trudinger,
{\em Elliptic partial differential equations of second order}, 2d
ed., Springer Verlag, Berlin, 1983.
\bibitem{Kim04-1} K. Kim, {\em $L_q(L_p)$ theory and H\"older estimates for
parabolic SPDE\/}, Stochastic processes and their applications,
{\bf{114}} (2004), no. 2, 313-330.
\bibitem{KK} K. Kim and N.V. Krylov, {\em On SPDEs with variable
coefficients in one space dimension\/}, Potential Anal, {\bf{21}}
(2004), no. 3, 203-239.
\bibitem{KK2} K. Kim and N.V. Krylov, {\em On the Sobolev space theory
of parabolic and elliptic equations in $C^{1}$ domains\/}, SIAM J.
Math. Anal. {\bf{36}} (2004), 618-642.
\bibitem{Kr01} N.V. Krylov, {\em Some properties of traces for stochastic
and deterministic parabolic weighted Sobolev spaces}, Journal of
Functional Analysis {\bf{183}} (2001), 1-41.
\bibitem{Kr99} N.V. Krylov, {\em An analytic approach to SPDEs\/},
pp. 185-242 in
Stochastic Partial Differential Equations: Six Perspectives,
Mathematical Surveys and Monographs, {\bf{64}} (1999), AMS,
Providence, RI.
\bibitem{kr99} N.V. Krylov, {\em Weighted Sobolev spaces and Laplace equations
and the heat equations in a half space\/}, Comm. in PDEs, {\bf{23}}
(1999), no. 9-10, 1611-1653.
\bibitem{KL2} N.V. Krylov and S.V. Lototsky, {\em A Sobolev space
theory of SPDEs with constant coefficients in a half space\/}, SIAM
J. on Math. Anal., {\bf{31}} (1999), no. 1, 19-33.
\bibitem{La} S.K. Lapic, {\em On the first-initial boundary
value problem for stochastic partial differential equations\/},
Ph.D. thesis, University of Minnesota, Minneapolis, MN, 1994.
\bibitem{Lee} Kijung Lee, {\em On a Deterministic Linear Partial
Differential System\/}, Journal of Mathematical Analysis and
Applications, {\bf{353}}, (2009), no. 1, 24-42.
\bibitem{Li} G. Lieberman, ``Second order parabolic differential
equations'', World Scientific, Singapore-New Jersey-London-Hong Kong,
1996.
\bibitem{Lo2} S.V. Lototsky,
{\em Sobolev spaces with weights in domains and boundary
value problems for degenerate elliptic equations\/}, Methods
and Applications of
Analysis, {\bf{1}} (2000), no.1, 195-204.
}
\end{thebibliography}
\end{document}
\begin{document}
\title{Multi-dueling Bandits with Dependent Arms}
\begin{abstract}
The dueling bandits problem is an online learning framework for learning from pairwise preference feedback, and is particularly well-suited for modeling settings that elicit subjective or implicit human feedback.
In this paper, we study the problem of \textit{multi-dueling bandits with dependent arms}, which extends the original dueling bandits setting by simultaneously dueling multiple arms as well as modeling dependencies between arms. These extensions capture key characteristics found in many real-world applications, and allow for the opportunity to develop significantly more efficient algorithms than were possible in the original setting.
We propose the {\sc\textsf{SelfSparring}}\xspace algorithm, which reduces the multi-dueling bandits problem to a conventional bandit setting that can be solved using a stochastic bandit algorithm such as Thompson Sampling, and can naturally model dependencies using a Gaussian process prior. We present a no-regret analysis for the multi-dueling setting, and demonstrate the effectiveness of our algorithm empirically on a wide range of simulation settings.
\end{abstract}
\section{Introduction}
In many online learning settings, particularly those that involve human feedback, reliable feedback is often limited to pairwise preferences (e.g., ``is A better than B?''). Examples include implicit or subjective feedback for information retrieval and various recommender systems \citep{chapelle2012large,sui2014clinical}. This setup motivates the dueling bandits problem \citep{yue2012k}, which formalizes the problem of online regret minimization via preference feedback.
The original dueling bandits setting ignores many real world considerations. For instance, in personalized clinical recommendation settings \citep{sui2014clinical}, it is often more practical for subjects to provide preference feedback on several actions (or treatments) simultaneously rather than just two. Furthermore, the action space can be very large, possibly infinite, but often has a low-dimensional dependency structure.
In this paper, we address both of these challenges in a unified framework, which we call \textit{multi-dueling bandits with dependent arms}. We extend the original dueling bandits problem by simultaneously dueling multiple arms as well as modeling dependencies between arms using a kernel. Explicitly formalizing these real-world characteristics provides an opportunity to develop principled algorithms that are much more efficient than algorithms designed for the original setting. For instance, most dueling bandits algorithms suffer regret that scales linearly with the number of arms, which is not practical when the number of arms is very large or infinite.
For this setting, we propose the {\sc\textsf{SelfSparring}}\xspace algorithm, inspired by the Sparring algorithm from \cite{ailon2014reducing}, which algorithmically reduces the multi-dueling bandits problem to a conventional multi-armed bandit problem that can be solved using a stochastic bandit algorithm such as Thompson Sampling \citep{chapelle2011empirical,russo2014learning}. Our approach can naturally incorporate dependencies using a Gaussian process prior with an appropriate kernel.
While there has been some prior work on multi-dueling \citep{brost2016multi} and learning from pairwise preferences over kernels \citep{gonzalez2016bayesian}, to the best of our knowledge, our approach is the first to address both in a unified framework. We are also the first to provide a regret analysis of the multi-dueling setting. We further demonstrate the effectiveness of our approach over conventional dueling bandits approaches in a wide range of simulation experiments.
\section{Background}
\subsection{Dueling Bandits}
\label{sec:db}
The original dueling bandits problem is a sequential optimization problem with relative feedback.
Let $\mathcal{B} = \{b_1,\ldots,b_K\}$ be the set of $K$ bandits (or arms). At each iteration, the algorithm duels or compares a single pair of arms $b_i, b_j$ from the set of $K$ arms ($b_i$ and $b_j$ can be identical). The outcome of each duel between $b_i$ and $b_j$ is an independent sample of a Bernoulli random variable. We define the probability that arm $b_i$ beats $b_j$ as: $$P(b_i \succ b_j) = \phi(b_i,b_j) + 1/2,$$
where $\phi(b_i,b_j)\in [-1/2,1/2]$ denotes the stochastic preference between $b_i$ and $b_j$, thus $b_i \succ b_j \Leftrightarrow \phi(b_i,b_j) > 0$.
We assume there is a total ordering, and WLOG that $b_i \succ b_j \Leftrightarrow i < j$.
The setting proceeds in a sequence of iterations or rounds. At each iteration $t$, the decision maker must choose a pair of bandits $b_t^{(1)}$ and $b_t^{(2)}$ to compare, and observes the outcome of that comparison. The quality of the decision making is then quantified using a notion of cumulative regret of $T$ iterations:
\begin{eqnarray}
R_T = \sum_{t=1}^T \left[ \phi(b_1,b_t^{(1)}) + \phi(b_1, b_t^{(2)})\right].\label{eqn:regret}
\end{eqnarray}
When the algorithm has converged to the best arm $b_1$, then it can simply duel $b_1$ against itself, thus incurring no additional regret.
In the recommender systems setting, one can interpret \eqref{eqn:regret} as how much the user(s) would have preferred the best bandit over the ones presented by the algorithm.
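To make the interaction loop and the regret in \eqref{eqn:regret} concrete, the following minimal Python sketch (ours, not taken from any cited algorithm; the preference matrix, the uniformly random pair selection, and the horizon are purely illustrative) simulates duels and accumulates regret against the best arm $b_1$ (index 0).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Stochastic preferences phi[i, j] in [-1/2, 1/2]; arm 0 is the best arm here.
phi = np.array([[ 0.0,  0.2,  0.3],
                [-0.2,  0.0,  0.1],
                [-0.3, -0.1,  0.0]])

def duel(i, j):
    """Return 1 if arm i beats arm j, sampled with P = phi[i, j] + 1/2."""
    return int(rng.random() < phi[i, j] + 0.5)

regret = 0.0
T = 1000
for t in range(T):
    # Placeholder policy: pick a uniformly random pair (a real algorithm goes here).
    i, j = rng.integers(0, 3, size=2)
    _outcome = duel(i, j)
    regret += phi[0, i] + phi[0, j]   # regret of the pair w.r.t. the best arm 0
print("cumulative regret:", regret)
\end{verbatim}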
To date, there have been several algorithms proposed for the stochastic dueling bandits problem, including Interleaved Filter \citep{yue2012k}, Beat the Mean \citep{yue2011beat}, SAVAGE \citep{urvoy2013generic}, RUCB \citep{zoghi2014relative,zoghi2015mergerucb}, Sparring \citep{ailon2014reducing,dudik2015contextual}, RMED \citep{komiyama2015regret}, and DTS \citep{wu2016doublets}. Our proposed approach, {\sc\textsf{SelfSparring}}\xspace, is inspired by Sparring, which along with RUCB-style algorithms are the best performing methods. In contrast to Sparring, which has no theoretical guarantees, we provide no-regret guarantees for {\sc\textsf{SelfSparring}}\xspace, and demonstrate significantly better performance in the multi-dueling setting.
Previous work on extending the original dueling bandits setting have been largely restricted to settings that duel a single pair of arms at a time. These include continuous-armed convex dueling bandits \citep{yue2009interactively}, contextual dueling bandits which also introduces the von Neumann winner solution concept \citep{dudik2015contextual}, sparse dueling bandits that focuses on the Borda winner solution concept \citep{jamieson2015sparse}, Copeland dueling bandits that focuses on the Copeland winner solution concept \citep{zoghi2015copeland}, and adversarial dueling bandits \citep{gajane2015relative}.
In contrast, our work studies the complementary directions of how to formalize multiple duels simultaneously, as well as how to reduce the dimensionality of modeling the action space using a low-dimensional similarity kernel.
Recently, there has been increasing interest in studying personalization settings that simultaneously elicit multiple pairwise comparisons. Example settings include information retrieval \citep{hofmann2011probabilistic,schuth2014multileaved,schuth2016multileave} and clinical treatment \citep{sui2014clinical}. There has also been some previous work on multi-dueling bandits settings \citep{brost2016multi,sui2014clinical,schuth2016multileave}, however those approaches are limited in scope and lack rigorous theoretical guarantees. In contrast, our approach can handle a wide range of multi-dueling mechanisms, has near-optimal regret guarantees, and can be easily composed with kernels to model dependent arms.
\subsection{Multi-armed Bandits}
Our proposed algorithm,
{\sc\textsf{SelfSparring}}\xspace, utilizes a multi-armed bandit (MAB) algorithm as a subroutine, and so we provide here a brief formal description of the conventional MAB problem for completeness.
The stochastic MAB problem \citep{robbins52} refers to an iterative decision making problem where the algorithm repeatedly chooses among K actions (or bandits or arms). In contrast to the dueling bandits setting, where the feedback is relative between two arms, here, we receive an absolute reward that depends on the arm selected. We assume WLOG that every reward is bounded between $[0,1]$.\footnote{So long as the rewards are bounded, one can shift and re-scale them to fit within $[0,1]$.} The goal then is to minimize the cumulative regret compared to the best arm:
\begin{eqnarray}
R_T^{\text{MAB}} = \sum_{t=1}^T \left[\mu^1 - \mu(b_t)\right],
\label{eqn:mab_regret}
\end{eqnarray}
where $b_t$ denotes the arm chosen at time $t$, $\mu(b)$ denotes the expected reward of arm $b$, and $\mu^1 = \max_b \mu(b)$ denotes the expected reward of the best arm.
Popular algorithms for the stochastic setting include UCB (upper confidence bound) algorithms \citep{auer2002finite}, and Thompson Sampling
\citep{chapelle2011empirical,russo2014learning}.
In the adversarial setting, the rewards are chosen in an adversarial fashion, rather than sampled independently from some underlying distribution. In this case, regret \eqref{eqn:mab_regret} is rephrased as the difference in the sum of rewards. The predominant algorithm for the adversarial setting is EXP3 \citep{auer2002nonstochastic}.
\subsection{Thompson Sampling}
\label{sec:ts}
The specific MAB algorithm used by our {\sc\textsf{SelfSparring}}\xspace approach is Thompson Sampling.
Thompson Sampling is a stochastic algorithm that maintains a distribution over the arms, and chooses arms by sampling \citep{chapelle2011empirical}. This distribution is updated using reward feedback. The entropy of the distribution thus corresponds to uncertainty regarding which is the best arm, and flatter distributions lead to more exploration.
\newcommand{\hat{\mu}}{\hat{\mu}}
\begin{algorithm}[t]
\caption{Thompson Sampling for Bernoulli Bandits}
\label{alg:ts}
{
\begin{algorithmic}[1]
\STATE For each arm $i=1,2,\cdots, K$, set $S_i=0$, $F_i=0$.
\FOR{$t=1,2,\ldots $}
\STATE For each arm $i=1,2,\cdots, K$, sample $\theta_{i}(t)$ from $Beta(S_i+1,F_i+1)$
\STATE Play arm $i(t) := \operatornamewithlimits{argmax}_i{\theta_i(t)}$, observe reward $r_t$
\STATE $S_{i(t)} \leftarrow S_{i(t)} + r_t$, $F_{i(t)} \leftarrow F_{i(t)} + 1 - r_t$
\ENDFOR
\end{algorithmic}
}
\end{algorithm}
Consider the Bernoulli bandits setting where observed rewards are either 1 (win) or 0 (loss). Let $S_i$ and $F_i$ denote the historical number of wins and losses of arm $i$, and let $D_t$ denote the set of all parameters at round $t$:
$$D_t= \{ S_1, \cdots, S_K; F_1, \cdots, F_K\}_t.$$
For brevity, we often represent $D_t$ by $D$, since only the current iteration matters at run-time. The sampling process of Beta-Bernoulli Thompson Sampling given $D$ is:
\begin{itemize}
\item For each arm $i$, sample $\theta_i \sim Beta(S_i+1,F_i+1)$.
\item Choose the arm with maximal $\theta_i$.
\end{itemize}
In other words, we model the average utility of each arm using a Beta prior, and rewards for arm $i$ as Bernoulli distributed according to latent mean utility $\theta_i$. As we observe more rewards, we can compute the posterior, which is also Beta distributed by conjugacy between the Beta and Bernoulli distributions. The sampling process above can be shown to sample from the following distribution:
\begin{eqnarray}
P(i|D) = P(i =\operatornamewithlimits{argmax}_b \theta_b|D).
\label{eqn:ts}
\end{eqnarray}
Thus, any arm $i$ is chosen with probability that it has maximal reward under the Beta posterior. Algorithm~\ref{alg:ts} describes the Beta-Bernoulli Thompson Sampling algorithm, which we use as a subroutine for our approach.
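For concreteness, the following is a minimal Python sketch of Algorithm~\ref{alg:ts} (ours; the true Bernoulli reward means and the horizon are illustrative assumptions, not part of the algorithm).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.2, 0.5, 0.8])   # illustrative true Bernoulli reward means
K = len(mu)
S = np.zeros(K)                  # per-arm win counts
F = np.zeros(K)                  # per-arm loss counts

for t in range(2000):
    theta = rng.beta(S + 1, F + 1)      # sample from each arm's Beta posterior
    i = int(np.argmax(theta))           # play the arm with the largest sample
    r = int(rng.random() < mu[i])       # observe a Bernoulli reward
    S[i] += r                           # posterior update for the played arm only
    F[i] += 1 - r
print("posterior means:", (S + 1) / (S + F + 2))
\end{verbatim}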
Thompson Sampling enjoys near-optimal regret guarantees in the stochastic MAB setting, as given by the lemma below (which is a direct consequence of main theorems in \citet{agrawal2012,kaufmann2012}).
\begin{lemma}
\label{lem:ts}
For the K-armed stochastic MAB problem, Thompson Sampling has expected regret:
$\mathbb{E}[R_T^{\text{MAB}}] = \mathcal{O}\left(\frac{K}{\Delta}\ln T \right)$, where $\Delta$ is the difference between expected rewards of the best two arms.
\end{lemma}
\subsection{Gaussian Processes \& Kernels}
\label{sec:gp}
Normally, when one observes measurements about one arm (in both dueling bandits and conventional multi-armed bandits), one cannot use that measurement to infer anything about other arms -- i.e., the arms are independent. This limitation necessarily implies that regret scales linearly w.r.t. the number of arms $K$, since each arm must be explored at least once to collect at least one measurement about it. We will use Gaussian processes and kernels to model dependencies between arms.
For simplicity, we present Gaussian processes in the context of multi-armed bandits. We will describe how to apply them to multi-dueling bandits in Section \ref{sec:problem}.
A Gaussian process (GP) is a probability measure over functions such that any finite restriction is multivariate Gaussian. A GP is fully determined by its mean and a positive definite covariance operator, also known as a kernel.
A $GP(\mu(b),k(b, b'))$ is a probability distribution across a class of ``smooth'' functions $f$, which is parameterized by a kernel function $k(b, b')$ that characterizes the smoothness of $f$. One can think of $f$ as corresponding to the reward function in the standard MAB setting.
We assume WLOG~that $\mu(b)=0$, and that our observations are perturbed by i.i.d.~Gaussian noise, i.e., for samples at points $A_T=[b_1 \dots b_T]$, we have $y_t = f(b_t) + n_t$ where $n_t \sim \mathcal{N}(0, \sigma^2)$ (we will relax this later). The posterior over $f$ is then also Gaussian with mean $\mu_T(b)$, covariance $k_T(b, b')$ and variance $\sigma_T^2(b)$ that satisfy:
\begin{align*}
\mu_T(b) &= k_T(b)^T(\mathcal{K}_T + \sigma^2I)^{-1}y_T\\
k_T(b, b') &= k(b, b') - k_T(b)^T(\mathcal{K}_T + \sigma^2I)^{-1}k_T(b')\\
\sigma_T^2(b) &= k_T(b, b),
\end{align*}
where $k_T(b) = [k(b_1, b) \dots k(b_T, b)]^T$ and $\mathcal{K}_T$ is the positive definite kernel matrix $[k(b, b')]_{b, b' \in A_T}$.
Posterior inference updates the mean reward estimates for all the arms that share dependencies (as specified by the kernel) with the arms selected for measurement. Thus one can show that MAB algorithms using Gaussian processes have regret that scale linearly w.r.t. the dimensionality of the kernel rather than the number of arms (which can now be infinite) \citep{srinivas10}.
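As an illustration of the posterior formulas above, here is a minimal Python sketch (ours; the squared exponential kernel, its lengthscale, and the noise level are illustrative choices rather than prescriptions from the cited work).
\begin{verbatim}
import numpy as np

def sq_exp_kernel(X, Y, lengthscale=0.2):
    """Squared exponential kernel k(x, y) = exp(-|x - y|^2 / (2 l^2))."""
    d = X[:, None] - Y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise_var=0.025):
    """Posterior mean/variance of a zero-mean GP given noisy observations."""
    K_TT = sq_exp_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    K_qT = sq_exp_kernel(X_query, X_obs)
    alpha = np.linalg.solve(K_TT, y_obs)
    mean = K_qT @ alpha
    # Diagonal of the posterior covariance k_T(b, b)
    v = np.linalg.solve(K_TT, K_qT.T)
    var = sq_exp_kernel(X_query, X_query).diagonal() - np.sum(K_qT * v.T, axis=1)
    return mean, var

# Illustrative usage on a 1-D arm space
X_obs = np.array([0.1, 0.4, 0.9])
y_obs = np.array([0.3, 0.7, 0.2])
X_query = np.linspace(0.0, 1.0, 5)
mu_q, var_q = gp_posterior(X_obs, y_obs, X_query)
print(mu_q, var_q)
\end{verbatim}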
\section{Multi-dueling Bandits}
\label{sec:problem}
We now formalize the multi-dueling bandits problem. We inherit all notation from original dueling bandits setting (Section \ref{sec:db}).
The key difference is that the algorithm now selects a (multi-)set $S_t$ of arms at each iteration $t$, and observes outcomes of duels between some pairs of arms in $S_t$. For example, in information retrieval this can be implemented via multi-leaving \citep{schuth2014multileaved} the ranked lists of the subset, $S_t$, of rankers and then inferring the relative quality of the lists (and the corresponding rankers) from user feedback.
In general, we assume the number of arms being dueled at each iteration is some fixed constant $m = |S_t|$. When $m=2$, the problem reduces to the original dueling bandits setting. Extending the regret formulation from the original setting \eqref{eqn:regret}, we can write the regret as:
\begin{eqnarray}
R_T = \sum_{t=1}^T \sum_{b\in S_t} \phi(b_1,b).\label{eqn:regret2}
\end{eqnarray}
The goal then is to select subsets of arms ${S_t}$ so that the cumulative regret \eqref{eqn:regret2} is minimized. Intuitively, all arms have to be selected a small number of times in order to be explored, but the goal of the algorithm is to minimize the number of times when suboptimal arms are selected.
When the algorithm has converged to the best arm $b_1$, then it can simply choose $S_t$ to only contain $b_1$, thus incurring no additional regret.
Our setting differs from \citet{brost2016multi} in two ways. First, we play a fixed, rather than variable, number of arms at each iteration. Furthermore, we focus on total regret, rather than the instantaneous average regret in a single iteration; in many applications (e.g., \citet{sui2014clinical}), playing each arm incurs its own regret.
\textbf{Feedback Mechanisms.} Simultaneously dueling multiple arms opens up multiple options for collecting feedback. For example, in some applications it may be viable to collect all pairwise feedback for all chosen arms $S_t$. In other applications, it is more realistic to only observe the ``winner'' of $S_t$, in which we observe feedback that one $b\in S_t$ wins against all other arms in $S_t$, but nothing about pairwise preferences between the other arms.
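As a concrete illustration of the ``all pairs'' mechanism, the sketch below (ours; the preference matrix and the encoding of missing entries are illustrative) generates the pairwise feedback matrix $R$ for a chosen multiset $S_t$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def all_pairs_feedback(S_t, phi):
    """Duel every pair of arms in S_t; R[j, k] = 1 iff arm S_t[j] beats arm S_t[k]."""
    m = len(S_t)
    R = np.full((m, m), np.nan)              # nan marks "no observation"
    for j in range(m):
        for k in range(j + 1, m):
            win = int(rng.random() < phi[S_t[j], S_t[k]] + 0.5)
            R[j, k], R[k, j] = win, 1 - win
    return R

phi = np.array([[ 0.0,  0.2,  0.3],          # illustrative preference matrix
                [-0.2,  0.0,  0.1],
                [-0.3, -0.1,  0.0]])
print(all_pairs_feedback([0, 2, 1], phi))
\end{verbatim}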
\textbf{Approximate Linearity.}
One assumption that we leverage in developing our approach is \textit{approximate linearity}, which fully generalizes the linear utility-based dueling bandits setting studied in \citet{ailon2014reducing}.
We assume there exists a constant $\gamma > 0$ such that, for any triplet of bandits $b_i \succ b_j \succ b_k$:
\begin{eqnarray}
\phi(b_i,b_k) - \phi(b_j,b_k) \geq \gamma\phi(b_i,b_j).
\label{eqn:al}
\end{eqnarray}
To understand Approximate Linearity, consider the special case when the preference function follows the form $\phi(b_i, b_j) = \Phi(u_i - u_j)$, where $u_i$ is a bounded utility measure of $b_i$. Approximate linearity of $\phi(\cdot, \cdot)$ is equivalent to having $\Phi(\cdot)$ be not far from some linear function on its bounded support (see Figure \ref{fig:al}), and is satisfied by any continuous monotonic increasing function. When $\Phi$ is linear, then our setting reduces to the utility-based dueling bandits setting of \citet{ailon2014reducing}.\footnote{Compared to the assumptions of \cite{yue2012k}, Approximate Linearity is a stricter requirement than strong stochastic transitivity, and is a complementary requirement to stochastic triangle inequality. In particular, stochastic triangle inequality requires that the curve in Figure \ref{fig:al} exhibits diminishing returns in the top-right quadrant (i.e., is sub-linear), whereas Approximate Linearity requires that the curve be not too far from linear.}
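For a concrete check of \eqref{eqn:al}, the following sketch (ours; the logistic link and the utility grid are illustrative) numerically estimates the largest constant $\gamma$ satisfied over a finite set of arms.
\begin{verbatim}
import numpy as np
from itertools import combinations

def phi_logit(u_i, u_j):
    """phi(b_i, b_j) = P(b_i beats b_j) - 1/2 under a logistic link on utilities."""
    return 1.0 / (1.0 + np.exp(u_j - u_i)) - 0.5

utils = np.linspace(0.2, 0.8, 7)        # illustrative utilities, ascending order
gamma = np.inf
for u_k, u_j, u_i in combinations(utils, 3):   # ensures u_i > u_j > u_k
    lhs = phi_logit(u_i, u_k) - phi_logit(u_j, u_k)
    rhs = phi_logit(u_i, u_j)
    gamma = min(gamma, lhs / rhs)       # largest gamma with lhs >= gamma * rhs
print("empirical approximate-linearity constant:", gamma)
\end{verbatim}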
\begin{figure}
\caption{Illustration of Approximate Linearity. The curve represents $\Phi(\cdot)$ with support on $[-1, 1]$. Monotonicity guarantees Approximate Linearity for some $\gamma$.}
\label{fig:al}
\end{figure}
\section{Algorithms \& Results}
\label{sec:algorithm}
We start with a high-level description of our general framework, called {\sc\textsf{SelfSparring}}\xspace, which is inspired by the Sparring algorithm from \citet{ailon2014reducing}. The high-level strategy is to reduce the multi-dueling bandits problem to a multi-armed bandit (MAB) problem that can be solved using a MAB algorithm, and ideally lift existing MAB guarantees to the multi-dueling setting.
Algorithm \ref{alg:ss} describes the {\sc\textsf{SelfSparring}}\xspace approach.
{\sc\textsf{SelfSparring}}\xspace uses a stochastic MAB algorithm such as Thompson sampling as a subroutine to independently sample the set of $m$ arms $S_t$ to duel. The distribution of $S_t$ is generally not degenerate (e.g., all the same arm) unless the algorithm has converged. In contrast, the Sparring algorithm uses $m$ MAB algorithms to control the choice of each arm, which essentially reduces the conventional dueling bandits problem to two multi-armed bandit problems ``sparring'' against each other.
\begin{algorithm}[tb]
\caption{{\sc\textsf{SelfSparring}}\xspace}
\label{alg:ss}
\begin{algorithmic}[1]
\INPUT arms $1, \ldots, K$ in space $S$, $m$ the number of arms drawn at each iteration, $\eta$ the learning rate
\STATE Set prior $D_0$ over $S$
\FOR{$t=1,2,\ldots $}
\FOR{$j = 1, \ldots, m$}
\STATE select arm $i_j(t)$ using $D_{t-1}$ \label{lin:sample}
\ENDFOR
\STATE Play $m$ arms $\{i_j(t)\}_j$ and observe $m\times m$ pairwise feedback matrix $R = \{r_{ij} \in \{0,1,\emptyset\}\}_{m \times m}$
\STATE update $D_{t-1}$ using $R$ to obtain $D_t$
\ENDFOR
\end{algorithmic}
\end{algorithm}
{\sc\textsf{SelfSparring}}\xspace takes as input $S$ the total set of arms, $m$ the number of arms to be dueled at each iteration, and $\eta$ the learning rate for posterior updates. $S$ can be a finite set of $K$ arms for the independent setting, or a continuous action space of arms for the kernelized setting. A prior distribution $D_0$ is used to initialize the sampling process over $S$. In the $t$-th iteration, {\sc\textsf{SelfSparring}}\xspace selects $m$ arms by sampling over the distribution $D_{t-1}$ as shown in line~\ref{lin:sample} of Algorithm~\ref{alg:ss}. The preference feedback can be any type of comparison, ranging from a full comparison over the $m$ arms (a full matrix for $R$, aka ``all pairs'') to a single comparison of one pair (just two valid entries in $R$). The posterior distribution over arms $D_t$ then gets updated by $R$ and the prior $D_{t-1}$.
We specialize {\sc\textsf{SelfSparring}}\xspace in two ways. The first, {\sc\textsf{IndSelfSparring}}\xspace (Algorithm~\ref{alg:ms}), is the independent-armed version of {\sc\textsf{SelfSparring}}\xspace. The second, {\sc\textsf{KernelSelfSparring}}\xspace (Algorithm~\ref{alg:ks}), uses Gaussian processes to make predictions about the preference function $f$ based on noisy evaluations over comparisons. We emphasize here that {\sc\textsf{SelfSparring}}\xspace is a very modular approach, and is thus easy to implement and extend.
\subsection{Independent Arms Case}
\label{sec:multisparring}
{\sc\textsf{IndSelfSparring}}\xspace (Algorithm \ref{alg:ms}) instantiates {\sc\textsf{SelfSparring}}\xspace using Beta-Bernoulli Thompson sampling.
The posterior Beta distributions $D_t$ over the arms are updated by the preference feedback within the iteration and the prior Beta distributions $D_{t-1}$.
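As an illustration of how simple the resulting algorithm is, here is a minimal Python sketch of the loop in Algorithm~\ref{alg:ms} (ours; the preference matrix, the all-pairs feedback, and the parameter values are illustrative assumptions).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([[ 0.0,  0.2,  0.3],     # illustrative stochastic preferences
                [-0.2,  0.0,  0.1],
                [-0.3, -0.1,  0.0]])
K, m, eta = 3, 2, 1.0                   # number of arms, duel size, learning rate
S, F = np.zeros(K), np.zeros(K)         # per-arm win / loss counters

for t in range(2000):
    # Draw m arms independently from the current Beta posteriors.
    chosen = [int(np.argmax(rng.beta(S + 1, F + 1))) for _ in range(m)]
    # Observe all pairwise outcomes within the chosen multiset ("all pairs" feedback).
    for a in range(m):
        for b in range(a + 1, m):
            i, j = chosen[a], chosen[b]
            win = int(rng.random() < phi[i, j] + 0.5)
            S[i] += eta * win;       F[i] += eta * (1 - win)
            S[j] += eta * (1 - win); F[j] += eta * win
print("posterior means:", (S + 1) / (S + F + 2))
\end{verbatim}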
We present a no-regret guarantee of {\sc\textsf{IndSelfSparring}}\xspace in Theorem \ref{thm:ms} below. We now provide a high-level outline of the main components leading to the result. Detailed proofs are deferred to the supplementary material.
Our first step is to prove that {\sc\textsf{IndSelfSparring}}\xspace is asymptotically consistent, i.e., it is guaranteed (with high probability) to converge to the best bandit. In order to guarantee consistency, we first show that all arms are sampled infinitely often in the limit.
\begin{lemma}
\label{lem:io}
Running {\sc\textsf{IndSelfSparring}}\xspace with infinite time horizon will sample each arm infinitely often.
\end{lemma}
In other words, Thompson sampling style algorithms do not eliminate any arms.
Lemma~\ref{lem:io} also guarantees concentration of any statistical estimates for each arm as $t\rightarrow \infty$.
We next show that the sampling of {\sc\textsf{IndSelfSparring}}\xspace will concentrate around the optimal arm.
\begin{theorem}
\label{thm:conv}
Under Approximate Linearity, {\sc\textsf{IndSelfSparring}}\xspace converges to the optimal arm $b_1$ as running time $t\rightarrow \infty$: $\lim_{t\rightarrow \infty} \mathbb{P}(b_t = b_1) = 1$.
\end{theorem}
\begin{algorithm}[tb]
\caption{{\sc\textsf{IndSelfSparring}}\xspace}
\label{alg:ms}
\begin{algorithmic}[1]
\INPUT $m$ the number of arms drawn at each iteration, $\eta$ the learning rate
\STATE For each arm $i=1,2,\cdots, K$, set $S_i=0$, $F_i=0$.
\FOR{$t=1,2,\ldots $}
\FOR{$j = 1, \ldots, m$}
\STATE For each arm $i=1,2,\cdots, K$, sample $\theta_{i}$ from $Beta(S_i+1,F_i+1)$
\STATE Select $i_j(t) := \operatornamewithlimits{argmax}_i{\theta_i(t)}$
\ENDFOR
\STATE Play $m$ arms $\{i_j(t)\}_j$, observe pairwise feedback matrix $R = \{r_{jk} \in \{0,1,\emptyset\}\}_{m \times m}$
\FOR{$j,k = 1, \ldots, m$}
\IF{$r_{jk} \neq \emptyset$}
\STATE $S_j \leftarrow S_j + \eta\cdot r_{jk}$,
$F_j \leftarrow F_j + \eta(1 - r_{jk})$
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{figure*}
\caption{Evolution of a GP preference function in {\sc\textsf{KernelSelfSparring}}\xspace}
\label{fig:ks}
\end{figure*}
Theorem~\ref{thm:conv} implies that {\sc\textsf{IndSelfSparring}}\xspace is asymptotically no-regret. As $t\rightarrow \infty$, the Beta distribution for each arm $i$ concentrates around $P(b_i \succ b_1)$, which implies that the algorithm converges to only choosing the optimal arm.
Most existing dueling bandits algorithms choose one arm as a ``reference'' arm and the other as a competing arm for exploration/exploitation (in the $m=2$ setting). If the distribution over reference arms never changes, then the competing arm is playing against a fixed ``environment'', i.e., it is a standard MAB problem. For general $m$, we can analogously consider choosing only one arm against a fixed distribution over all the other arms. Using Thompson sampling, the following lemma holds.
\begin{lemma}
\label{lem:fixed}
Under Approximate Linearity, selecting only one arm via Thompson sampling against a fixed distribution over the remaining arms leads to optimal regret w.r.t. choosing that arm.
\end{lemma}
Lemma~\ref{lem:fixed} and Theorem \ref{thm:conv} motivate the idea of analyzing the regret of each individual arm against near-fixed (i.e., converging) environments.
\begin{theorem}
\label{thm:ms}
Under Approximate Linearity, {\sc\textsf{IndSelfSparring}}\xspace converges to the optimal arm with asymptotically optimal no-regret rate of $\mathcal{O}(K\ln(T)/\Delta)$.
\end{theorem}
Theorem \ref{thm:ms} shows a no-regret guarantee for {\sc\textsf{IndSelfSparring}}\xspace that asymptotically matches the optimal rate of $\mathcal{O}(K\ln(T)/\Delta)$ up to constant factors. In other words, once $t>C$ for some problem-dependent constant $C$, the regret of {\sc\textsf{IndSelfSparring}}\xspace matches information-theoretic bounds up to constant factors (see \citet{yue2012k} for lower bound analysis).\footnote{A finite-time guarantee requires a more refined analysis of $C$, and is an interesting direction for future work.}
The proof technique follows two major steps:
(1) prove the convergence of {\sc\textsf{IndSelfSparring}}\xspace as shown in Theorem \ref{thm:conv}; and
(2) bound the expected total regret for sufficiently large $T$.
\begin{algorithm}[tb]
\caption{{\sc\textsf{KernelSelfSparring}}\xspace}
\label{alg:ks}
\begin{algorithmic}[1]
\INPUT Input space $S$, GP prior $(\mu_0, \sigma_0)$, $m$ the number of arms drawn at each iteration
\FOR{$t=1,2,\ldots $}
\FOR{$j = 1, \ldots, m$}
\STATE Sample $f_j$ from $(\mu_{t-1}, \sigma_{t-1})$
\STATE Select $i_j(t) := \operatornamewithlimits{argmax}_x{f_j(x)}$
\ENDFOR
\STATE Play $m$ arms $\{i_j(t)\}_j$, observe pairwise feedback matrix $R = \{r_{jk} \in \{0,1,\emptyset\}\}_{m \times m}$
\FOR{$j, k = 1, \ldots, m$}
\IF{$r_{jk}\neq\emptyset$}
\STATE apply Bayesian update using $(i_j(t), r_{jk})$ to obtain $(\mu_t, \sigma_t)$
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Dependent Arms Case}
We use Gaussian processes (see Section \ref{sec:gp}) to model dependencies among arms.
Applying Gaussian processes is not straightforward, since the underlying utility function is not directly observable and may not even exist.
We instead use a Gaussian process to model a specific preference function. In Gaussian process notation, the preference function $f(b)$ represents the preference of choosing $b$ over the perfect ``environment'' of competing arms. Like in the independent arms case (Section \ref{sec:multisparring}), the perfect environment corresponds to having all the remaining arms be deterministically selected as the best arm $b_1$, yielding $f(b) = P(b \succ b_1)$. We model $f(b)$ as a sample from a Gaussian process $GP(\mu(b),k(b, b'))$. Note that this setup is analogous to the independent arms case, which uses a Beta prior to estimate the probability of each arm defeating the environment (and converges to competing against the best environment).
Algorithm \ref{alg:ks} describes {\sc\textsf{KernelSelfSparring}}\xspace, which instantiates {\sc\textsf{SelfSparring}}\xspace using a Gaussian process Thompson sampling algorithm.
The input space $S$ can be continuous.
At each iteration $t$, $m$ arms are sampled using the Gaussian process prior $D_{t-1}$. The posterior $D_t$ is then updated by the responses $R$ and the prior.
Figure~\ref{fig:ks} illustrates the optimization process in a one-dimensional example. The underlying preference function against the best environment is shown in blue. Dashed lines are the mean function of the GP. Shaded areas are $\pm 2$ standard deviation regions (high-confidence regions). Figures \ref{fig:ks}(a)(b)(c) show the {\sc\textsf{KernelSelfSparring}}\xspace algorithm after 5, 20, and 100 iterations. The GP model can be observed to be converging to the preference function against the best environment.
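For concreteness, the following minimal Python sketch (ours; the kernel, the discretization, the synthetic preference function, and the $m=2$ choice are all illustrative) implements the sampling loop of Algorithm~\ref{alg:ks} over a discretized $[0,1]$ arm space.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def kernel(X, Y, ls=0.2):
    """Squared exponential kernel over 1-D inputs."""
    return np.exp(-0.5 * ((X[:, None] - Y[None, :]) / ls) ** 2)

grid = np.linspace(0.0, 1.0, 30)                       # discretized arm space
pref = lambda x, y: 1.0 / (1.0 + np.exp(4 * (y - x)))  # illustrative P(x beats y)
X_obs, y_obs, noise = [], [], 0.025

for t in range(100):
    if X_obs:
        Xo, yo = np.array(X_obs), np.array(y_obs)
        Ko = kernel(Xo, Xo) + noise * np.eye(len(Xo))
        Kg = kernel(grid, Xo)
        mu = Kg @ np.linalg.solve(Ko, yo)
        cov = kernel(grid, grid) - Kg @ np.linalg.solve(Ko, Kg.T)
    else:
        mu, cov = np.zeros(len(grid)), kernel(grid, grid)
    # Thompson step: draw m = 2 posterior samples and play their argmax arms.
    f1, f2 = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)),
                                     size=2, check_valid="ignore")
    b1, b2 = grid[np.argmax(f1)], grid[np.argmax(f2)]
    win = int(rng.random() < pref(b1, b2))              # duel the two selected arms
    X_obs += [b1, b2]; y_obs += [win, 1 - win]          # one-sided binary feedback
print("estimated best arm:", grid[np.argmax(mu)])
\end{verbatim}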
We conjecture that it is possible to prove no-regret guarantees that scale w.r.t. the dimensionality of the kernel. However, there does not yet exist suitable regret analyses for Gaussian Process Thompson Sampling in the kernelized MAB setting to leverage.
\begin{table*}[t]
\centering
\begin{small}
\begin{tabular}{|l|l|}
\hline
Name & Distribution of Utilities of arms \\ \hline
\hline
1good & 1 arm with utility 0.8, 15 arms with utility 0.2 \\ \hline
arith & 1 arm with utility 0.8, 15 arms forming an arithmetic sequence between 0.7 and 0.2 \\ \hline
\end{tabular}
\end{small}
\caption{16-arm synthetic datasets used for experiments.}
\label{tab:arms}
\end{table*}
\section{Experiments}\label{sec:experiments}
\subsection{Simulation Settings \& Datasets}
\label{sec:data}
\textbf{Synthetic Functions.} We evaluated on a range of 16-arm synthetic settings derived from the utility-based dueling bandits setting of \citet{ailon2014reducing}. For the multi-dueling setting, we used the following preference functions:
\begin{table}[H]
\centering
\begin{tabular}{lc}
linear: & $\phi(x, y) + 1/2 = (1+x-y)/2$ \\
logit: & $\phi(x, y) + 1/2 = (1+\exp{(y-x)})^{-1}$
\end{tabular}
\end{table}
and the utility functions shown in Table \ref{tab:arms} (generalized from those in \citet{ailon2014reducing}).
Note that although these preference functions do not satisfy approximate linearity over their entire domains, they do for the utility samples (over a finite subset of arms).
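As a concrete restatement of the setup, the sketch below (ours; written to mirror the link functions above and the utility configurations of Table~\ref{tab:arms}) shows how the synthetic preference probabilities are generated from arm utilities.
\begin{verbatim}
import numpy as np

# Utility configurations from Table 1 (16 arms each).
one_good = np.array([0.8] + [0.2] * 15)
arith    = np.array([0.8] + list(np.linspace(0.7, 0.2, 15)))

def p_linear(x, y):
    """P(arm with utility x beats arm with utility y) under the linear link."""
    return (1 + x - y) / 2

def p_logit(x, y):
    """P(arm with utility x beats arm with utility y) under the logit link."""
    return 1.0 / (1.0 + np.exp(y - x))

# Example: probability that the single good arm beats a bad arm in each setting.
print(p_linear(one_good[0], one_good[1]), p_logit(arith[0], arith[-1]))
\end{verbatim}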
\textbf{MSLR Dataset.}
Following the evaluation setup of \citet{brost2016multi}, we also used the Microsoft Learning to Rank (MSLR) WEB30k dataset, which consists of over 3 million query-document pairs labeled with relevance scores \citep{liu2007letor}. Each pair is scored along 136 features, which can be treated as rankers (arms). For any subset of arms, we can estimate a preference matrix using the expected probability over the entire dataset of one arm beating another using top-10 interleaving and a perfect-click model.
We simulate user feedback by using team-draft multileaving \citep{schuth2014multileaved}.
\subsection{Vanilla Dueling Bandits Experiments}
We first compare against the vanilla dueling bandits setting of dueling a single pair of arms at a time. These experiments are included as a sanity check to confirm that {\sc\textsf{SelfSparring}}\xspace (with $m=2$) is a competitive algorithm in the original dueling bandits setting, and are not the main focus of our empirical analysis.
We empirically evaluate against a range of conventional dueling bandit algorithms, including:
\begin{itemize}
\item \textbf{Interleaved Filter (IF)} \citep{yue2012k}
\item \textbf{Beat the Mean (BTM)} \citep{yue2011beat}
\item \textbf{RUCB} \citep{zoghi2014relative}
\item \textbf{MergeRUCB} \citep{zoghi2015mergerucb}
\item \textbf{Sparring + UCB1} \citep{ailon2014reducing}
\item \textbf{Sparring + EXP3} \citep{dudik2015contextual}
\item \textbf{RMED1} \citep{komiyama2015regret}
\item \textbf{Double Thompson Sampling} \citep{wu2016doublets}
\end{itemize}
For Double Thompson Sampling and {\sc\textsf{IndSelfSparring}}\xspace, we set the learning rates to be 2.5 and 3.5 as optimized over a separate dataset of uniformly sampled utility functions. We use $\alpha=0.51$ for RUCB/MergeRUCB, $\gamma=1$ for BTM, and $f(K) = 0.3K^{1.01}$ for RMED1.
\textbf{Results.}
For each scenario, we run each algorithm 100 times for 20000 iterations. For brevity, we show in Figure \ref{fig:best9} the average regret of one synthetic simulation along with shaded one standard-deviation areas. We observe that {\sc\textsf{SelfSparring}}\xspace is competitive with the best performing methods in the original dueling bandits setting. More complete experiments that replicate \citet{ailon2014reducing} are provided in the supplementary material, and demonstrate the consistency of this result.
Double Thompson Sampling (DTS) is the best performing approach in Figure \ref{fig:best9}, which is a fairly consistent result in the extended results in the supplementary material.
However, given the high variances, DTS is essentially comparable to the other top-performing algorithms.
Furthermore, {\sc\textsf{IndSelfSparring}}\xspace has the advantage of being easily extensible to the more realistic multi-dueling and kernelized settings, which is not true of DTS.
\begin{figure}
\caption{Vanilla dueling bandits setting. Average regret for top nine algorithms on logit/arith. Shaded regions correspond to one standard deviation.}
\label{fig:best9}
\end{figure}
\subsection{Multi-Dueling Bandits Experiments}
We next evaluate the multi-dueling setting with independent arms. We compare against the main existing approaches that are applicable to the multi-dueling setting, including the MDB algorithm \citep{brost2016multi}, and the multi-dueling extension of Sparring, which we refer to as MultiSparring \citep{ailon2014reducing}.
Following \cite{brost2016multi}, we use $\alpha=0.5$ and $\beta=1.5$ for the MDB algorithm. For {\sc\textsf{IndSelfSparring}}\xspace, we set the learning rate to the default value of 1. Note that the vast majority of dueling bandits algorithms are not easily applicable to the multi-dueling setting. For instance, RUCB-style algorithms treat the two arms asymmetrically, which is not easily generalized to multi-dueling.
\textbf{Results on Synthetic Experiments.}
We test $m=4$ on the linear 1good and arith datasets in Figure \ref{fig:multi1good} and Figure \ref{fig:multiarith}, respectively. We observe that {\sc\textsf{IndSelfSparring}}\xspace significantly outperforms competing approaches.
\textbf{Results on MSLR Dataset.} Following the simulation setting of \cite{brost2016multi} on the MSLR dataset (see Section \ref{sec:data}), we compared against the MDB algorithm over the same collection of 50 randomly sampled 16-arm subsets. We ensured that each 16-arm subset had a Condorcet winner; in general it is likely for any random subset of arms in the MSLR dataset to have a Condorcet winner \citep{zoghi2015copeland}.
Figure \ref{fig:multimslr} shows the results, where we again see that {\sc\textsf{IndSelfSparring}}\xspace enjoys significantly better performance.
\begin{figure}
\caption{Multi-dueling regret for linear/1good setting}
\label{fig:multi1good}
\end{figure}
\begin{figure}
\caption{Multi-dueling regret for linear/arith setting}
\label{fig:multiarith}
\end{figure}
\begin{figure}
\caption{Multi-dueling regret for MSLR-30K experiments}
\label{fig:multimslr}
\end{figure}
\begin{figure}
\caption{2-dueling regret for kernelized setting with synthetic preferences}
\label{fig:gpsynthetic}
\end{figure}
\begin{figure}
\caption{2-dueling regret for kernelized setting with Forrester objective function}
\label{fig:gpforrester}
\end{figure}
\begin{figure}
\caption{2-dueling regret for kernelized setting with Six-Hump Camel objective function}
\label{fig:gpsixhump}
\end{figure}
\begin{figure}
\caption{Multi-dueling regret for kernelized setting with Forrester objective function}
\label{fig:multiforrester}
\end{figure}
\begin{figure}
\caption{Multi-dueling regret for kernelized setting with Six-Hump Camel objective function}
\label{fig:multisixhump}
\end{figure}
\subsection{Kernelized (Multi-)Dueling Experiments}
We finally evaluate the kernelized setting for both the 2-dueling and the multi-dueling case.
We evaluate {\sc\textsf{KernelSelfSparring}}\xspace against BOPPER \citep{gonzalez2016bayesian} and Sparring \citep{ailon2014reducing} with GP-UCB \citep{srinivas10}. BOPPER is a Bayesian optimization method that can be applied to the kernelized 2-dueling setting (but not multi-dueling). Sparring with GP-UCB, which we refer to as GP-Sparring, is essentially a variant of our {\sc\textsf{KernelSelfSparring}}\xspace approach, but it maintains $m$ GP-UCB bandit algorithms (one controlling each choice of arm to be dueled) rather than just a single one.
{\sc\textsf{KernelSelfSparring}}\xspace and GP-Sparring use GPs that model the preference function, i.e. are one-sided, whereas BOPPER uses a GP to model the entire preference matrix. Following \citet{srinivas10}, we use a squared exponential kernel with lengthscale parameter 0.2 for both GP-Sparring and {\sc\textsf{KernelSelfSparring}}\xspace, and use a squared exponential kernel with parameter 1 for BOPPER. We initialize all GPs with a zero-mean prior, and use sampling noise variance $\sigma^2=0.025$. For GP-Sparring, we use the scaled-down version of $\beta_t$ as suggested by \citet{srinivas10}.
We use the Forrester and Six-Hump Camel functions as utility functions on $[0,1]$ and $[0,1]^2$, respectively, as in \citet{gonzalez2016bayesian}. Similarly, we use the same uniform discretizations of 30 and 64 points for the Forrester and Six-Hump Camel settings respectively, and use the logit link function to generate preferences.
Since the BOPPER algorithm is computationally expensive, we only include it in the Forrester setting, and run each algorithm 20 times for 100 iterations. In the Six-Hump Camel setting, we run {\sc\textsf{KernelSelfSparring}}\xspace and GP-Sparring for 500 iterations 100 times each. Results are presented in Figures \ref{fig:gpforrester} and \ref{fig:gpsixhump}, where we observe much better performance from {\sc\textsf{KernelSelfSparring}}\xspace against both BOPPER and GP-Sparring.
In the kernelized multi-dueling setting, we compare against GP-Sparring. We run each algorithm for 100 iterations 50 times on the Forrester and Six-Hump Camel functions, and plot their regrets in Figures \ref{fig:multiforrester} and \ref{fig:multisixhump} respectively. We use $m=4$ for both algorithms, and the same discretization as in the standard dueling case. We again observe significant performance gains of our {\sc\textsf{KernelSelfSparring}}\xspace approach.
\section{Conclusions}
We studied multi-dueling bandits with dependent arms. This setting extends the original dueling bandits setting by dueling multiple arms per iteration rather than just two, and modeling low-dimensional dependencies between arms rather than treating each arm independently. Both extensions are motivated by practical real-world considerations such as in personalized clinical treatment \citep{sui2014clinical}.
We proposed {\sc\textsf{SelfSparring}}\xspace, which is simple and easy to extend, e.g., by integrating with kernels to model dependencies across arms.
Our experimental results demonstrated significant reduction in regret compared to state-of-the-art dueling bandit algorithms. Generally, relative benefits compared to dueling bandits increased with the number of arms being compared. For {\sc\textsf{SelfSparring}}\xspace, the incurred regret did not increase substantially as the number of arms increased.
Our approach can be extended in several important directions. Most notably, the theoretical analysis could be improved. For instance, it would be more desirable to provide explicit finite-time regret guarantees rather than asymptotic ones. Furthermore, an analysis of the kernelized multi-dueling setting is also lacking. From a more practical perspective, we assumed that the choice of arms does not impact the feedback mechanism (e.g., all pairs), which is not true in practice (e.g., humans can have a hard time distinguishing very different arms).
\begin{small}
\end{small}
\appendix
\section{Proofs}
\label{app:pf}
This section provides the proof sketch of Lemmas and Theorems mentioned in the main paper.
\textbf{Lemma~\ref{lem:ts}.}
For the K-armed stochastic MAB problem, Thompson Sampling has expected regret:
$\mathbb{E}[R_T^{\text{MAB}}] = \mathcal{O}\left(\frac{K}{\Delta}\ln T \right)$, where $\Delta$ is the difference between expected rewards of the best two arms.
\begin{proof}
This lemma is a direct result from Theorem 2 of \citet{agrawal2012} and Theorem 1 of \citet{kaufmann2012}.
\end{proof}
\textbf{Lemma~\ref{lem:io}.}
Running {\sc\textsf{IndSelfSparring}}\xspace with infinite time horizon will sample each arm infinitely often.
\begin{proof}
Proof by contradiction. \\
Let $B(x;\alpha,\beta) = \int_0^x t^{\alpha-1}(1-t)^{\beta-1} dt$.
Then the CDF of Beta distribution with parameters $(\alpha,\beta)$ is $$F(x;\alpha,\beta) = \frac{B(x;\alpha,\beta)}{B(1;\alpha,\beta)}.$$
Suppose arm $b$ can only be sampled in a finite number of iterations. Then there exists a finite upper bound $T_b$ on $\alpha_b + \beta_b$. For any given $x \in (0,1)$, the probability that the sampled value $\theta_b$ for arm $b$ exceeds $x$ is $$P(\theta_b > x) = 1-F(x;\alpha_b,\beta_b) $$
$$\geq 1-F(x;1,T_b-1) = (1 - x)^{T_b-1} >0$$
Then, under {\sc\textsf{IndSelfSparring}}\xspace, the probability of choosing arm $b$ after it has been chosen $T_b$ times,
$$P(\theta_b \geq \max_i\{\theta_{b_i}\}) \geq \prod_i P(\theta_b \geq \theta_{b_i}),$$
is strictly positive. Hence arm $b$ is chosen again with non-zero probability at every subsequent round, contradicting the existence of the finite bound $T_b$.
\end{proof}
\textbf{Theorem~\ref{thm:conv}.}
Under Approximate Linearity, {\sc\textsf{IndSelfSparring}}\xspace converges to the optimal arm $b_1$ as running time $t\rightarrow \infty$: $\lim_{t\rightarrow \infty} \mathbb{P}(b_t = b_1) = 1$.
\begin{proof}
{\sc\textsf{IndSelfSparring}}\xspace keeps one Beta distribution $ Beta(\alpha_i(t),\beta_i(t))$ for each arm $b_i$ at time step $t$. Let $\hat{\mu}_i(t)= \frac{\alpha_i(t)}{\alpha_i(t) + \beta_i(t)}$, $\hat{\sigma}^2_i(t) = \frac{\alpha_i(t)\beta_i(t)}{(\alpha_i(t) + \beta_i(t))^2(\alpha_i(t) + \beta_i(t) + 1)}$ be the empirical mean and variance for arm $b_i$. \\
Obviously, $\hat{\sigma}^2_i(t) \rightarrow 0$ as $(\alpha_i(t) + \beta_i(t)) = (S_i(t)+F_i(t)) \rightarrow \infty$. By Lemma~\ref{lem:io} we have $(S_i(t)+F_i(t)) \rightarrow \infty$ as $t\rightarrow \infty$. This shows that every Beta distribution concentrates to a Dirac function at $\hat{\mu}_i(t)$ as $t\rightarrow \infty$. Define $\hat{\mu}(t) = [\hat{\mu}_1(t), \cdots, \hat{\mu}_K(t)]^T \in [0,1]^K$ to be the vector of means of all arms. Then $\mu = \{\mu_i = P(b_i \succ b_1)\}_{i = 1, \cdots, K}$ is a stable point for {\sc\textsf{IndSelfSparring}}\xspace in the $K$-dimensional mean space.
Suppose there exists another stable point $\nu \in [0,1]^K$ ($\nu \neq \mu$) for {\sc\textsf{IndSelfSparring}}\xspace, and consider the following two possibilities: (1) $\nu_1 = \max_i\{\nu_i\}$ and (2) $\nu_1 < \max_i\{\nu_i\} = \nu_j$.
Since the Beta distribution for each arm $b_i$ concentrates to a Dirac function at $\nu_i$, $P(\theta_i > \theta_j) \in [\mathbb{I}(\nu_i > \nu_j) -\delta, \mathbb{I}(\nu_i > \nu_j) + \delta]$ for any fixed $\delta > 0$ with high probability.
If (1) holds, then $\nu_1$ will converge to $\frac{1}{2} = \mu_1$ and $\nu_i$ will converge to $P(b_i \succ b_1) = \mu_i$. Thus $\nu = \mu$, contradicting $\nu \neq \mu$.
If (2) holds, then $\nu_j$ will converge to $\frac{1}{2} = \mu_1$ and $\nu_1 \in [P(b_1 \succ b_j) - \delta, P(b_1 \succ b_j) + \delta]$ for any fixed $\delta>0$ with high probability. Since $P(b_1 \succ b_j) \geq \frac{1}{2}+ \Delta$, it follows that $\nu_1 \geq \frac{1}{2} + \Delta - \delta$. Since $\delta$ can be arbitrarily small, we have $\nu_1 \geq \frac{1}{2} + \Delta-\delta > \frac{1}{2}+\delta > \nu_j$. This contradicts $\nu_1 < \nu_j$.
In summary, $\mu = \{ \mu_i =P(b_i \succ b_1)\}_{i = 1, \cdots, K}$ is the only stable point in the mean space. As $\hat{\mu}(t) \rightarrow \mu$, $\mathbb{P}(b_t = b_1) \rightarrow 1$.
Define $\mathbb{P}_t = [P_1(t), P_2(t), ..., P_K(t)]$ as the probabilities of picking each arm at time $t$. Let $\mathbb{P} = \{\mathbb{P}_t\}_{t = 1, 2, ...}$ be the sequence of probabilities w.r.t. time.
Assume {\sc\textsf{IndSelfSparring}}\xspace is non-convergent. It is equivalent to say that $\mathbb{P}$ is not converging to a fixed distribution. Then $\exists \delta > 0$ and arm $i$ s.t. the sequence of probabilities $\{P_i(t)\}_t$ satisfies:
$$\limsup_{t \rightarrow \infty} P_i(t) - \liminf_{t \rightarrow \infty} P_i(t) > \delta$$
w.h.p. which is equivalent of having:
$$\limsup_{t \rightarrow \infty} \hat{\mu}_i(t) - \liminf_{t \rightarrow \infty} \hat{\mu}_i(t) > \epsilon$$
w.h.p. for some fixed $\epsilon > 0$. This violates the stability of {\sc\textsf{IndSelfSparring}}\xspace in the $K$ dimensional mean space as shown above. So as $t\rightarrow \infty$, $\hat{\mu}(t) \rightarrow \mu$, $\mathbb{P}(b_t = b_1) \rightarrow 1$.
\end{proof}
\textbf{Lemma~\ref{lem:fixed}.}
Under Approximate Linearity, selecting only one arm via Thompson sampling against a fixed distribution over the remaining arms leads to optimal regret w.r.t. choosing that arm.
\begin{proof}
We first prove the results for $m = 2$. Results for any $m > 2$ can be proved in a similar way.
Consider Player 1 drawing arms from a fixed distribution $L$. Player 2's drawing strategy is an MAB algorithm $\mathcal{A}$.
Let $R_A(T)$ be the regret of algorithm $\mathcal{A}$ within horizon $T$. $B(T) = \sup \mathbb{E}[R_A(T)]$ is the supremum of the expected regret of $\mathcal{A}$.
The reward of Player 2 at iteration $t$ is $\phi (b_{2t}, b_{1t})$. The reward obtained by always playing the optimal arm is $\phi (b_1, b_{1t})$. So the total regret after $T$ rounds is
$$R_A(T) = \sum_{t=1}^{T} [\phi (b_1, b_{1t}) - \phi (b_{2t}, b_{1t})]$$
Since Approximate Linearity yields
$$\phi (b_1, b_{1t}) - \phi (b_{2t}, b_{1t}) \geq \gamma \cdot \phi(b_1, b_{2t})$$
we have
$$\mathbb{E}[R_A(T)] = \mathbb{E}\,\mathbb{E}_{b_{1t}\sim L}\left[\sum_{t=1}^{T} [\phi (b_1, b_{1t}) - \phi (b_{2t}, b_{1t})]\right]$$
$$\geq \mathbb{E}\mathbb{E}_{b_{1t}\sim L} \left[\sum_{t=1}^{T} \gamma \cdot \phi(b_1, b_{2t})\right] $$
$$= \gamma \cdot \mathbb{E} \left[ \sum_{t=1}^{T}\phi(b_1, b_{2t})\right]
= \gamma \cdot \mathbb{E}[R(T)]$$
So the total regret of Player 2 is bounded by
$$\mathbb{E}[R(T)] \leq \frac{1}{\gamma} \mathbb{E}[R_A(T)] \leq \frac{1}{\gamma} \sup \mathbb{E}[R_A(T)] = \frac{1}{\gamma} B(T)$$
\end{proof}
\begin{cor}
\label{cor:conv}
If approximate linearity holds, competing with a drifting but converging distribution of arms guarantees one-sided convergence for Thompson Sampling.
\end{cor}
\begin{proof}
Let $D_t$ be the drifting but converging distribution and $D_t \rightarrow D$ as $t \rightarrow \infty$. Let $b_T$ be the drifting mean bandit of $D_T$ after $T$ iterations. Since $D_t$ is convergent, $\exists T > K$ such that
$$\phi(\sup_{t > T} b_T, \inf_{t > T} b_T) < \phi(b_1, b_2)$$
where $\phi(b_1, b_2)$ is the preference between the best two arms.
The mean value of feedback by playing arm $i$ is $\phi(b_i, b_T)$. If $b_T$ is fixed, by Lemma~\ref{lem:fixed}, Thompson sampling converges to the arm:
$i^* = \operatornamewithlimits{argmax}_i \phi(b_i, b_T)$. For drifting $b_T$, define $b^+ = \sup_{t > T} b_T$ and $b^- = \inf_{t > T} b_T$.
Thompson sampling converges to the optimal arm provided that:
$$\phi(b_1, b^+) > \phi(b_i, b^-)$$
for all $i \neq 1$. Consider:
$$\phi(b_1, b^+) - \phi(b_2, b^-)$$
$$ = \phi(b_1, b^+) - \phi(b_2, b^-) + \phi(b_1, b^-) - \phi(b_1, b^-)$$
$$ = \phi(b_1, b^-) - \phi(b_2, b^-) + \phi(b_2, b^+) - \phi(b_1, b^-)$$
$$ \geq \gamma\cdot[\phi(b_1, b_2) - \phi(b^+, b^-)] > 0$$
by approximate linearity.
So we have $\phi(b_1, b^+) > \phi(b_2, b^-)$. Since $\phi(b_2, b^-) > \phi(b_i, b^-)$ for $i > 2$, it follows that $$\phi(b_1, b^+) > \phi(b_i, b^-)$$
holds for all $i \neq 1$. So Thompson sampling converges to the optimal arm.
\end{proof}
\textbf{Theorem~\ref{thm:ms}.}
Under Approximate Linearity, {\sc\textsf{IndSelfSparring}}\xspace converges to the optimal arm with asymptotically optimal no-regret rate of $\mathcal{O}(K\ln(T)/\Delta)$, where $\Delta$ is the difference between the expected rewards of the best two arms.
\begin{proof}
Theorem~\ref{thm:conv} provides the convergence guarantee of {\sc\textsf{IndSelfSparring}}\xspace. Corollary~\ref{cor:conv} shows one-side convergence for playing against a converging distribution.
Since {\sc\textsf{IndSelfSparring}}\xspace converges to the optimal arm $b_1$ as $t\rightarrow \infty$, i.e., $\lim_{t\rightarrow \infty} \mathbb{P}(b_t = b_1) = 1$, for every $\delta>0$ there exists $C(\delta) > 0$ such that for any $t > C(\delta)$, the following condition holds w.h.p.:
$P(b_t = b_1) \ge 1-\delta$.
For the triple of bandits $b_1 \succ b_i \succ b_K$, Approximate Linearity guarantees:
$$\phi(b_i,b_K) < \phi(b_1,b_K) \leq \omega$$
holds for some fixed $\omega > 0$ and all $i \in \{2, \cdots, K-1\}$. With small $\delta$, the competing environment of any Player $p$ is bounded. If $\delta < \frac{\Delta}{\Delta + \omega}$, then $(1-\delta)\cdot(-\Delta) + \delta \cdot \phi(b_2,b_K) < 0 = 1\cdot \phi(b_1,b_1)$. The competing environment can be considered as unbiased, and the theoretical guarantees for Thompson sampling for the stochastic multi-armed bandit problem are valid (up to a constant factor).
Then {\sc\textsf{IndSelfSparring}}\xspace has a no-regret guarantee that asymptotically matches the optimal rate of $\mathcal{O}(K\ln(T)/\Delta)$ up to constant factors, which proves Theorem~\ref{thm:ms}.
\end{proof}
\section{Further Experiments}
\label{app:exp}
\begin{figure*}
\caption{Average regret vs iterations for each of 8 algorithms and 15 scenarios.}
\label{fig:grid}
\end{figure*}
\begin{table*}[t]
\centering
\begin{tabular}{|l|l|}
\hline
Name & Distribution of Utilities of arms \\ \hline
\hline
1good & 1 arm with utility 0.8, 15 arms with utility 0.2 \\ \hline
2good & 1 arm with utility 0.8, 1 arms with utility 0.7, 14 arms with utility 0.2 \\ \hline
6good & 1 arm with utility 0.8, 5 arms with utility 0.7, 10 arms with utility 0.2 \\ \hline
arith & 1 arm with utility 0.8, 15 arms forming an arithmetic sequence between 0.7 and 0.2 \\ \hline
geom & 1 arm with utility 0.8, 15 arms forming a geometric sequence between 0.7 and 0.2 \\ \hline
\end{tabular}
\caption{16-arm synthetic datasets used for experiments.}
\label{tab:arms2}
\end{table*}
\end{document}
\begin{document}
\title{Common Permutation Problem}
\begin{abstract}
In this paper we show that the following problem is $NP$-complete: Given an alphabet $\Sigma$ and
two strings over $\Sigma$, the question is whether there exists a permutation
of $\Sigma$ which is a~subsequence of both of the given strings.
\end{abstract}
\section{Introduction}
\label{intro}
In computer science, efficient algorithms for various string problems are
studied. One such problem is the well-known \prob{Longest Common Subsequence}
problem. For two given strings, the problem is to find
the longest string which is a subsequence of both strings. A~survey of
efficient algorithms for this problem can be found in \cite{Survey}.
Let us consider a modification of the \prob{Longest Common Subsequence} problem. Instead of finding any
longest common subsequence, we restrict ourselves to subsequences in which
symbols do not repeat, i.e., every symbol occurs at most once. We call this problem
\prob{Longest Restricted Common Subsequence}.\footnote{A more general version of this
problem (with the same name) appeared in \cite{Andrejkova} together with its
efficient solution. Unfortunately, that solution is incorrect. Our result in
this paper indeed shows that an efficient (polynomial) solution for this
problem does not exist unless $P=NP$.}
\begin{example}
For strings ``$\symb{bcaba}$'' and ``$\symb{babcca}$'', the longest
common subsequence is ``$\symb{baba}$'' while the longest restricted common
subsequence is ``$\symb{bca}$''.
\end{example}
\prob{Longest Restricted Common Subsequence} is an optimization problem. In
this paper we consider its special case which is the following decision
problem: Suppose that the two strings are formed over an alphabet $\Sigma$. The
question is, do the two strings contain a restricted common subsequence of the
maximal possible length, i.e., a string that contains \textsl{every} symbol of $\Sigma$
exactly once? Such a string is a permutation of $\Sigma$. Therefore, we call
this problem the \prob{Common Permutation} problem.
\begin{quote}
\prob{Common Permutation}\\
\textsl{Instance}: An alphabet $\Sigma$ and two strings $a,b$ over $\Sigma$.\\
\textsl{Question}: Is there a permutation of $\Sigma$ which is a common
subsequence of $a$ and $b$?
\end{quote}
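To make the decision problem concrete, the following brute-force Python sketch (ours; it enumerates all $|\Sigma|!$ permutations and is only meant for tiny instances) decides \prob{Common Permutation} directly from the definition.
\begin{verbatim}
from itertools import permutations

def is_subsequence(p, s):
    """True if string p is a subsequence of string s."""
    it = iter(s)
    return all(c in it for c in p)

def common_permutation(sigma, a, b):
    """Brute force: try every permutation of the alphabet sigma."""
    return any(is_subsequence(p, a) and is_subsequence(p, b)
               for p in map("".join, permutations(sigma)))

print(common_permutation("abc", "bcaba", "babcca"))   # True: "bca" works
\end{verbatim}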
We will show that \prob{Common Permutation} is $NP$-complete. Moreover, we
will show that \prob{Common Permutation} is $NP$-complete even if the input
strings contain every symbol of $\Sigma$ at most twice.
\prob{Common Permutation} can be reduced to \prob{Longest Restricted
Common Subsequence} by asking whether the longest restricted common
subsequence of the two strings is equal to the size of the alphabet. Since
\prob{Common Permutation} will be shown to be $NP$-complete, it follows that \prob{Longest
Restricted Common Subsequence} is $NP$-hard.
In the next section we define the terms used in this paper. Section
\ref{alignments} introduces \textsl{alignments} as a way to visualize the \prob{Common Permutation}
problem. Finally, Section \ref{proof} presents the proof of
$NP$-completeness by reducing \prob{3SAT} to \prob{Common Permutation}.
\section{Preliminaries}
An \textsl{alphabet} is a finite set of \textsl{symbols}. A \textsl{string}
over an alphabet $\Sigma$ is a finite sequence $a=a_1a_2\dots a_N$ where
$N$ is a \textsl{length} of the string and $a_i\in\Sigma$ for all
$i\in\{1,\dots,N\}$. We say that $a_i$ is a
symbol on a \textsl{position} $i$ in the string $a$. For a given symbol $x\in
\Sigma$, \textsl{occurrences of $x$ in $a$} are all positions $i$ such that $a_i=x$.
A \textsl{subsequence} of a string $a=a_1a_2\dots a_N$ over $\Sigma$
is a string $b=a_{i_1}a_{i_2}\dots a_{i_n}$ where
$n \in \{0,1,\dots, N\}$ and $1\leq i_1 < i_2 < \cdots < i_n\leq N$. A
\textsl{common subsequence} of two strings $a$ and $b$ is a string which is a
subsequence of both $a$ and $b$.
A \textsl{permutation} of a finite set $A=\{x_1,\dots,x_n\}$ is a string
$x_{i_1}x_{i_2}\dots x_{i_n}$ (note that the length of the string is the same as
the number of elements in $A$) where $i_j\in \{1,\dots,n\}$ for
$j\in\{1,\dots,n\}$ and for all $k,l\in \{1,\dots,n\}$ if $k\neq l$ then
$i_k\neq i_l$.
The above definitions give a formal basis for the statement of the problem
from Section \ref{intro}.
For the proof of $NP$-completeness in Section \ref{proof} we use the
reduction from \prob{3-Satisfiability} (\prob{3SAT} for short). The following
definitions are from \cite{GareyJohnson}.
\subsection{3-Satisfiability}
Let $U=\{u_1,u_2,\dots,u_n\}$ be a set of Boolean variables. A \textsl{truth
assignment}
for $U$ is a function $t:U\to\{T,F\}$. If $t(u) = T$ we say that $u$ is true
under $t$; if $t(u) = F$ we say that $u$ is false. If $u$ is a variable in $U$,
then $u$ and $\varneg{u}$ are \textsl{literals} over $U$. The literal $u$ is true under $t$
if and only if the variable $u$ is true under $t$; the literal $\varneg{u}$ is
true if and only if the variable $u$ is false.
A \textsl{clause} over $U$ is a set of literals over $U$, for example
$\{u_1,\varneg{u_3},u_8\}$. It represents the disjunction of those literals and is
satisfied by a truth assignment if and only if at least one of its members is
true under that assignment. In other words, the clause is not satisfied if and
only if all its literals are false. The clause above will be satisfied by $t$ unless
$t(u_1)=F$, $t(u_3) = T$, $t(u_8)=F$. A~collection $C$ of clauses over $U$ is
satisfiable if and only if there exists some truth assignment for $U$ that
simultaneously satisfies all the clauses in $C$. Such a truth assignment is
called a satisfying truth assignment for $C$.
\begin{quote}
\prob{3SAT}\\
\textsl{Instance}: A set $U$ of variables and a collection $C$ of clauses over $U$ with
exactly three literals per clause.\\
\textsl{Question}: Is there a satisfying truth assignment for $C$?
\end{quote}
\begin{theorem}
\prob{3SAT} is $NP$-complete.
\end{theorem}
See \cite{GareyJohnson} for the definition of $NP$-completeness and for the
proof of this theorem.
\section{Alignments}
\label{alignments}
In Section \ref{proof} we will use a notion of
\textsl{alignments}. Imagine the two input strings of \prob{Common Permutation}
written in two rows, one string per row. For every symbol of the alphabet
$\Sigma$ we want to find exactly one occurrence of that symbol in both
strings, such that we can ``align'' those occurrences.
\begin{example}
For two strings ``$\symb{bcaba}$'' and ``$\symb{babcca}$'', one of the possible
alignments is depicted below (the aligned occurrences are bold)
\[\begin{array}{cccccc}
\mathbf{b}& &\mathbf{c}&\symb{ab}&&\mathbf{a}\\
\mathbf{b}&\symb{ab}&\mathbf{c}&&\symb{c} &\mathbf{a}
\end{array}\]
\end{example}
Formally, let $a$ and $b$ be strings over an alphabet $\Sigma$. Let $n$ be the
number of symbols in $\Sigma$. An \textsl{alignment}
(denoted $A$) of $a$ and $b$ is a sequence of ordered pairs $A=\langle i_1, j_1 \rangle,
\langle i_2, j_2 \rangle, \dots, \langle i_n, j_n \rangle$ such that for all $k$, the value of $i_k$ is a position in the
string $a$, $j_k$ is a position in the string $b$, and $a_{i_k} = b_{j_k}$.
Moreover, $i_1<\cdots<i_n$, $j_1<\cdots<j_n$, and
$a_{i_1}a_{i_2}\dots a_{i_n} (= b_{j_1}b_{j_2}\dots b_{j_n})$ is a~permutation of $\Sigma$.
For all $k$ we say that, in the alignment $A$, the position $i_k$ in the string $a$ \textsl{is
aligned with} the position $j_k$ in the string $b$. We also say that the symbol
$a_{i_k} (=b_{j_k})$ is \textsl{aligned} at the position $i_k$ in $a$, and at
the position $j_k$ in $b$. Positions $i_k$ and $j_k$ are \textsl{aligned
occurrences} of $a_{i_k}$.
Notice that once a position $i$ (in $a$) is aligned with a position $j$ (in
$b$), positions
less than $i$ (in $a$) cannot be aligned with positions greater than $j$ (in $b$)
and vice versa. In
other words, the aligned occurrences of different symbols cannot ``cross''.
\begin{lemma}Let $a$ and $b$ be two strings over $\Sigma$. A
permutation of $\Sigma$ which is a common subsequence of $a$ and
$b$ exists if and only if there exists one or more alignments of $a$
and $b$.
\end{lemma}
\begin{proof}
An alignment corresponds to subsequences in $a$ and $b$ which comprise a
(common) permutation of $\Sigma$.
\end{proof}
According to this lemma, an alignment of two strings is an existence proof (of
a polynomial size with respect to lengths of the strings) for an instance of
\prob{Common Permutation}. Therefore, \prob{Common Permutation} is in $NP$. The
proof of $NP$-\textsl{completeness} follows in the next section.
\section{Reduction}
\label{proof}
In this section we will reduce \prob{3SAT} to \prob{Common Permutation}.
\begin{theorem}
\label{CPisNPC}
\prob{Common Permutation} is $NP$-complete.
\end{theorem}
\begin{proof}
Let $U$ be a finite set of variables and $C=\{c_1,c_2,\dots,c_n\}$
be a set of clauses over $U$. We have to construct an alphabet $\Sigma$ and two strings
$a,b$ over $\Sigma$ such that there exists a permutation of
$\Sigma$ which is a common subsequence of both $a$ and $b$ if and only if $C$ is
satisfiable.
The proof consists of two parts. The first part presents the construction of
$\Sigma$ and the strings. The second part proves that the construction
is correct in the sense that it satisfies the property described above.
\paragraph{Construction}
The alphabet $\Sigma$ consists of a pair of symbols $\symbvar{u}{i}$ and
$\symbvarneg{u}{i}$ for every variable $u\in U$ and every clause $c_i$ for
which either $u\in c_i$ or $\varneg{u} \in c_i$.
Additionally, $\Sigma$ contains a special ``boundary'' symbol $\bullet$.
The strings $a$ and $b$ consist of two parts: a ``truth-setting'' part and a ``satisfaction-testing''
part. The parts are separated by the boundary symbol, which ensures that
occurrences from one part cannot be aligned with occurrences from the other part of the
strings.
The ``truth-setting'' part consists of a concatenation of blocks, one for each
variable. Let $u$ be a variable from $U$ and let $\{i_1,\dots,i_m\}$ be the indexes of
clauses in which it appears. The strings contain the following block for
variable $u$:
\[\begin{array}{ccll}
a &=&\dots
\symbvar{u}{i_1}\symbvar{u}{i_2}\dots\symbvar{u}{i_m}
&\symbvarneg{u}{i_1}\symbvarneg{u}{i_2}\dots\symbvarneg{u}{i_m}
\dots\\
b &=&\dots
\symbvarneg{u}{i_1}\symbvarneg{u}{i_2}\dots\symbvarneg{u}{i_m}
&\symbvar{u}{i_1}\symbvar{u}{i_2}\dots\symbvar{u}{i_m}
\dots
\end{array}\]
This block is constructed in such a way that it is possible to simultaneously
align all the symbols $\{\symbvar{u}{i_1},\symbvar{u}{i_2},\dots,\symbvar{u}{i_m}\}$
inside this block, or all the symbols
$\{\symbvarneg{u}{i_1},\symbvarneg{u}{i_2},\dots,\symbvarneg{u}{i_m}\}$.
It is, however, not possible to simultaneously align both $\symbvar{u}{i}$ and
$\symbvarneg{u}{j}$ for some $i$ and $j$ inside this block.
The ``satisfaction-testing'' part consists of a concatenation of blocks, one
for each clause. For a clause $c_i\in C$, let $x$, $y$, and $z$ be the
literals in the clause $c_i$, i.e., $c_i=\{x,y,z\}$. We use the
following notation:
\begin{itemize}
\item if $x = u$ for $u\in U$, then $\symbvar{x}{i} = \symbvar{u}{i}$ and
$\symbvarneg{x}{i} = \symbvarneg{u}{i}$
\item if $x = \varneg{u}$ for $u\in U$, then
$\symbvar{x}{i} = \symbvarneg{u}{i}$ and
$\symbvarneg{x}{i} = \symbvar{u}{i}$
\end{itemize}
The strings contain the following block for the clause $c_i$:
\[\begin{array}{cclc}
a &=& \dots
\symbvar{x}{i}\symbvar{y}{i}\symbvar{z}{i}
&\symbvarneg{x}{i}\symbvarneg{y}{i}\symbvarneg{z}{i}
\dots\\
b &=& \dots
\symbvar{x}{i}\symbvar{y}{i}\symbvar{z}{i}
&\symbvarneg{y}{i}\symbvarneg{x}{i}\symbvarneg{z}{i}\symbvarneg{y}{i}
\dots
\end{array}\]
The block has two parts. The left part is the same for both strings. The
right part is constructed in such a way that the symbols
$\{\symbvarneg{x}{i},\symbvarneg{y}{i},\symbvarneg{z}{i}\}$ cannot be
simultaneously aligned in this block. Notice that these are the symbols
corresponding to the truth assignment for which the clause is false.
The alphabet $\Sigma$ contains $6n + 1$ symbols. The length of the string $a$ is
$6n + 1 + 6n=12n + 1$; the length of the string $b$ is $6n + 1 + 7n = 13n +
1$. Therefore, the size of the constructed \prob{Common Permutation} instance is polynomial
with respect to the original \prob{3SAT} instance. The construction can be
carried out in polynomial time.
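For concreteness, the construction can be written out in a few lines of code. The sketch below (a hypothetical illustration, not part of the proof; the tuple encoding of symbols and the assumption that each variable occurs at most once per clause are ours) produces the two strings from a given set of clauses:
\begin{verbatim}
# Hypothetical sketch of the reduction from 3SAT to Common Permutation.
# A literal is a pair (variable, negated); a symbol is a triple (variable, clause_index, bar).

def reduce_3sat(clauses):
    occurrences = {}                      # variable -> clause indexes in which it appears
    for i, clause in enumerate(clauses, start=1):
        for (u, neg) in clause:           # each variable assumed to occur at most once per clause
            occurrences.setdefault(u, []).append(i)
    a, b = [], []
    # "truth-setting" part: one block per variable
    for u, idxs in occurrences.items():
        a += [(u, i, False) for i in idxs] + [(u, i, True) for i in idxs]
        b += [(u, i, True) for i in idxs] + [(u, i, False) for i in idxs]
    boundary = ('*', 0, False)
    a.append(boundary); b.append(boundary)
    # "satisfaction-testing" part: one block per clause
    for i, clause in enumerate(clauses, start=1):
        (x, nx), (y, ny), (z, nz) = clause
        pos = [(x, i, nx), (y, i, ny), (z, i, nz)]            # symbols x_i, y_i, z_i
        neg = [(x, i, not nx), (y, i, not ny), (z, i, not nz)]  # their "barred" versions
        a += pos + neg
        b += pos + [neg[1], neg[0], neg[2], neg[1]]
    return a, b

# clauses of the example: (w or not x or y) and (not z or x or not y)
clauses = [[('w', False), ('x', True), ('y', False)],
           [('z', True), ('x', False), ('y', True)]]
a, b = reduce_3sat(clauses)
print(len(a), len(b))    # 12n+1 = 25 and 13n+1 = 27 for n = 2
\end{verbatim}
For the instance encoded in the sketch (the same as in the example below) it produces strings of lengths $25$ and $27$, in agreement with the counts $12n+1$ and $13n+1$.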
\begin{example}
For a set of variables $\{w,x,y,z\}$ and
clauses $\{\{w,\varneg{x},y\},\{\varneg{z},x,\varneg{y}\}\}$ which represent
the logical function $$(w\lor \bar{x} \lor y)\land(\bar{z}\lor x\lor \bar{y})$$
we get the alphabet $$\Sigma=\{
\symbvar{w}{1},\symbvarneg{w}{1},
\symbvar{x}{1},\symbvarneg{x}{1},\symbvar{x}{2},\symbvarneg{x}{2},
\symbvar{y}{1},\symbvarneg{y}{1},\symbvar{y}{2},\symbvarneg{y}{2},
\symbvar{z}{2},\symbvarneg{z}{2}\}$$
and the following strings:
\[
\begin{array}{llllcll}
a = \symbvar{w}{1}\symbvarneg{w}{1}
&\symbvar{x}{1}\symbvar{x}{2}\symbvarneg{x}{1}\symbvarneg{x}{2}
&\symbvar{y}{1}\symbvar{y}{2}\symbvarneg{y}{1}\symbvarneg{y}{2}
&\symbvar{z}{2}\symbvarneg{z}{2}
&\bullet
&\symbvar{w}{1}\symbvarneg{x}{1}\symbvar{y}{1}\symbvarneg{w}{1}\symbvar{x}{1}\symbvarneg{y}{1}
&\symbvarneg{z}{2}\symbvar{x}{2}\symbvarneg{y}{2}\symbvar{z}{2}\symbvarneg{x}{2}\symbvar{y}{2}
\\
b = \underbrace{\symbvarneg{w}{1}\symbvar{w}{1}}_{\mbox{for }w}
&\underbrace{\symbvarneg{x}{1}\symbvarneg{x}{2}\symbvar{x}{1}\symbvar{x}{2}}_{\mbox{for }x}
&\underbrace{\symbvarneg{y}{1}\symbvarneg{y}{2}\symbvar{y}{1}\symbvar{y}{2}}_{\mbox{for }y}
&\underbrace{\symbvarneg{z}{2}\symbvar{z}{2}}_{\mbox{for }z}
&\bullet
&\underbrace{\symbvar{w}{1}\symbvarneg{x}{1}\symbvar{y}{1}\symbvar{x}{1}\symbvarneg{w}{1}\symbvarneg{y}{1}\symbvar{x}{1}}_{\mbox{for clause 1}}
&\underbrace{\symbvarneg{z}{2}\symbvar{x}{2}\symbvarneg{y}{2}\symbvarneg{x}{2}\symbvar{z}{2}\symbvar{y}{2}\symbvarneg{x}{2}}_{\mbox{for clause 2}}
\end{array}
\]
\end{example}
\paragraph{Correctness} Now we verify that the constructed strings $a$ and
$b$ contain a common permutation of $\Sigma$ if and only if $C$ is satisfiable.
Let $t:U\to\{T,F\}$ be any satisfying truth assignment for $C$. We will show that
there exists a permutation of $\Sigma$ which is a common subsequence of both $a$
and $b$, i.e., that it is possible to align all symbols from
$\Sigma$ in the strings.
There is only one choice of how to align the boundary symbol. For a variable $u\in U$,
if $t(u)=T$, we align the $\symbvarneg{u}{i}$ symbols for all $i$ in the
``truth-setting'' part and
the $\symbvar{u}{i}$ symbols in the ``satisfaction-testing'' part. If $t(u) = F$ we
conversely align the $\symbvar{u}{i}$ symbols in the ``truth-setting''
part and the $\symbvarneg{u}{i}$ symbols in the ``satisfaction-testing'' part.
As we noted during the construction, the desired alignment in the ``truth-setting'' part
can always be found. To show that we can align the remaining symbols in the
``satisfaction-testing'' part, we use the fact that $t$ satisfies $C$.
Notice that the symbols which we have to align
in the ``satisfaction-testing'' part correspond to the truth values of the variables. For example, if
we have to align symbol $\symbvarneg{u}{i}$ in this part, we know that $t(u)=F$.
For every clause $c_i\in C$, $c_i=\{x,y,z\}$, we align the
remaining symbols corresponding to clause $c_i$ in the block for $c_i$.
The symbols $\symbvar{x}{i}$, $\symbvar{y}{i}$, and $\symbvar{z}{i}$ can be
aligned in the first part of the block. It is easy to see that any pair of
symbols from $\{\symbvarneg{x}{i},\symbvarneg{y}{i},\symbvarneg{z}{i}\}$ can be aligned in
the second part. Therefore, for all seven possibilities how $c_i$ can be satisfied,
we can align the corresponding symbols in the block for the clause $c_i$.
For the proof in the opposite direction, suppose now that $a$ and $b$
have a common permutation of the symbols in
$\Sigma$. We will construct a satisfying truth assignment for $C$. For that
we look at ``truth-setting'' part of the strings. For a
variable $u\in U$,
\begin{itemize}
\item if the symbol $\symbvarneg{u}{i}$ for some $i$ is aligned in the
``truth-setting'' part of the strings, we set $t(u)=T$,
\item if the symbol $\symbvar{u}{i}$ for some $i$ is aligned in the
``truth-setting'' part of the strings, we set $t(u)=F$,
\item if none of the symbols $\{\symbvar{u}{i}, \symbvarneg{u}{i}\}$ are aligned in the
``truth-setting'' part, we set $t(u)$ arbitrarily, say $t(u)=T$.
\end{itemize}
Notice that, according to the construction of the
``truth-setting'' part, this is a valid definition of the assignment, i.e., it
cannot happen that we would want to assign $t(u)$ to both $T$ and $F$.
We now have to prove that $t$ is a satisfying truth assignment for $C$. For
any clause $c_i\in C$, let $x$, $y$, $z$ be its literals, so $c_i = \{x,y,z\}$.
We know that not all the symbols $\{\symbvarneg{x}{i}, \symbvarneg{y}{i},
\symbvarneg{z}{i}\}$ can be aligned in ``satisfaction-testing'' part of the
strings, so at least one of them must be aligned in the ``truth-setting'' part.
Without loss of generality say it is $\symbvarneg{x}{i}$. Therefore, by the
definition of $t$, we know that the literal $x$ is true, and therefore $c_i$ is true.
\end{proof}
Our construction in the proof used every symbol at most twice in the string
$a$, but used some of the symbols three times in the string $b$. The following
corollary shows that we can use a slightly different construction which
uses every symbol at most twice in both the strings.
\begin{corollary}
\label{twosymbols}
\prob{Common Permutation} is $NP$-complete even if every symbol occurs at most
twice in the given strings.
\end{corollary}
\begin{proof}
We will use the same construction as in the proof of Theorem
\ref{CPisNPC}, except for the definition of $\Sigma$, and blocks for clauses.
For every clause $c_i\in C$ we will add three additional symbols $\symba{i}$, $\symbb{i}$, and $\symbc{i}$ to the
alphabet $\Sigma$. The strings contain the following block for the clause $c_i$:
\[\begin{array}{ccl}
a &=& \dots \symbvarneg{x}{i}\symbvar{x}{i}\symba{i}\symbb{i}\symbvar{y}{i}\symbvarneg{y}{i}\symba{i}\symbc{i}\symbb{i} \symbvarneg{z}{i} \symbc{i} \symbvar{z}{i}\dots\\
b &=& \dots \symbvar{x}{i} \symba{i}\symbvarneg{x}{i}\symbvarneg{y}{i}\symbb{i}\symbvar{y}{i}\symbc{i}\symba{i}\symbb{i}\symbc{i} \symbvar{z}{i}\symbvarneg{z}{i}\dots
\end{array}\]
One can verify that inside this block we can
align symbols corresponding to the satisfying assignment for $c_i$, but we cannot
align simultaneously $\symbvarneg{x}{i}$, $\symbvarneg{y}{i}$, and
$\symbvarneg{z}{i}$.
The constructed strings contain every symbol exactly twice with the exception
of $\bullet$ which they contain once.
\end{proof}
\end{document}
|
\begin{document}
\title{Anti-adiabatic evolution in quantum-classical hybrid system}
\author{J. Shen}
\affiliation{Fundamental Education Department, Dalian Neusoft University of Information, Dalian 116023, China}
\author{W. Wang}
\affiliation{
School of Physics, Northeast Normal University, Changchun 130024, China}
\author{C. M. Dai}
\affiliation{
Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China}
\author{X. X. Yi}
\altaffiliation{[email protected]}
\affiliation{
Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China}
\date{\today}
\begin{abstract}
The adiabatic theorem is an important concept in quantum mechanics:
it states that a quantum system subjected to gradually changing
external conditions remains in the instantaneous eigenstate of
its Hamiltonian in which it started. In this paper, we study the
opposite extreme circumstance, where the external conditions vary
so rapidly that the quantum system cannot follow the change and
remains in its initial state (or wavefunction). We call this type of
evolution anti-adiabatic evolution. We examine the matter-wave
pressure in this situation and derive the condition for such an
evolution. The study is conducted by considering a quantum particle
in an infinitely deep potential whose width $Q$ is assumed
to change rapidly. We show that the total energy of the quantum
subsystem decreases as $Q$ increases, and this rapid change
exerts a force on the wall which plays the role of the boundary of the
potential. For $Q<Q_{0}$ ($Q_0$ is the initial width of the
potential), the force is repulsive, while for $Q>Q_{0}$ the force fluctuates
around zero and may be either attractive or repulsive. The condition for the anti-adiabatic evolution is given via a spin-$\frac 1 2$ in a rotating magnetic field.
\end{abstract}
\pacs{73.40.Gk, 03.65.Ud, 42.50.Pq} \maketitle
\section{introduction}
A quantum system remains in the instantaneous eigenstate of
its Hamiltonian if the Hamiltonian changes slowly enough with
respect to the energy gaps among the instantaneous eigenstates
\cite{P. Ehrenfest16,M. Born28,J. Schwinger37, T. Kato50}. This is
the so-called adiabatic theorem, an important and intuitive
concept in quantum mechanics which has proved insightful and rich in
applications, for example the Landau-Zener transition, the
Berry phase \cite{L. D. Landau32, C. Zener32, M. Gell-mann51, M. V. Berry84},
quantum control, and quantum adiabatic
computation \cite{T.Corbitt07, M.Bhattacharya08}. Although progress
has been made, many issues concerning adiabatic evolution remain open,
for instance the adiabatic condition \cite{K. P. Marzlin04, D. M. Tong05}
and its extension to open systems
\cite{yijpb07}.
Recent years have witnessed a series of developments at the
intersection of optical cavities and mechanical
resonators\cite{craighead00,aspelmeyer14}. The opto-mechanical coupling between a
moving mirror and the radiation pressure of light first appeared
in the context of interferometric gravitational wave experiments.
Owing to the discrete nature of photons, the quantum fluctuations of
the radiation pressure forces give rise to the so-called standard
quantum limit\cite{caves81}. The experimental manifestations of
opto-mechanical coupling by radiation pressure have been observable
for some time. For instance, radiation pressure forces were observed
in\cite{dorsel83}, while even earlier work in the microwave domain
had been carried out by Braginsky \cite{braginsky77}. Moreover, the
modification of the mechanical oscillator stiffness caused by radiation
pressure, the \textbf{optical spring}, has also recently been
observed \cite{sheard04}.
It is the similarity between light and matter waves that motivates
the concept of the so-called matter-wave pressure \cite{J.Shen10}. By
examining the dynamics of an adiabatic quantum-classical system,
the authors of Ref. \cite{J.Shen10} calculated the force exerted on the classical subsystem
by the quantum subsystem. In that analysis, the classical system is
assumed to move slowly, which leads to adiabatic evolution of the
quantum subsystem.
On the contrary, quantum quenching refers to a sudden change of some parameter of the Hamiltonian. A variety of processes can result in quenching, such as a sudden displacement of the mirror in optomechanics or a spin rotation driven by a magnetic field. Recently, the concept of phase transitions has been extended to the non-equilibrium dynamics of time-independent systems induced by a quantum
quench \cite{arkadiusz17}. It has been shown that a quantum quench in a discrete time crystal leads to dynamical quantum phase transitions, and that the return probability of a periodically driven system to a Floquet eigenstate before the quench reveals singularities in time.
Based on random quenches, random unitaries in atomic Hubbard and spin models can be generated \cite{elben17}; this proposal works for a broad class of atomic and spin lattice models \cite{vermersch18}.
It is believed that in the case of rapidly varying conditions (to
which the quantum system is subjected), the quantum system may have no
time to change its state \cite{citro05,pellegrini11,eidelstein13}. If this is the case, what is the condition
for such an evolution? Is it that the quantum system has no time to
change its state, or that it cannot follow the rapidly changing
conditions? How does the matter-wave pressure behave in this
circumstance? In this paper, we focus on these questions.
The paper is organized as follows. In Sec.{\rm II}, we study the
dynamics of a quantum particle in an infinitely deep potential with
varying potential width. The matter-wave pressure force is
calculated and discussed by assuming that the system remains in its
initial state. In Sec.{\rm III}, we derive the condition for
anti-adiabatic evolution through a simple example. Finally, we
conclude our results in Sec.{\rm IV}.
\section{a quantum particle in an infinite one-dimensional potential with varying width}
Consider a quantum system in a one-dimensional infinitely deep
well whose boundary at $Q$ is a moving wall (see Fig.\ref{fig1}).
\begin{figure}
\caption{An infinitely deep potential well with a moving wall as its right boundary at $Q$.}
\label{fig1}
\end{figure}
The Hamiltonian $\hat{H}$ of this system can be written as,
\begin{eqnarray}
\hat{H}=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}+V(x),\label{Hamiltonian}
\end{eqnarray}
where the potential $V(x)$ takes,
\begin{eqnarray}
V(x)=\left\{\begin{array}{cc}
0 & 0\leq x \leq Q \\
+\infty & x<0 ~\text{or} ~x>Q
\end{array}\right. .
\end{eqnarray}
Supposing that the moving wall at $Q$ only changes the boundary
condition of the quantum system, the eigenvalues and the
corresponding eigenstates of the quantum system with fixed $Q$ are
\begin{eqnarray}
&&\Psi_n(Q)=\langle q|\psi_{n}(q, Q)\rangle=\sqrt{\frac{2}{Q}}\sin\frac{n\pi q}{Q}, \nonumber\\
&&E_{n}(Q)=\frac{\hbar^{2}\pi^{2}n^{2}}{2mQ^{2}}, ~~~~~n=1,2,3\cdots, \label{eigenvalue}
\end{eqnarray}
where $q$ denotes the coordinate of the quantum particle.
In the following, we assume that the wall moves so fast that
the quantum system does not evolve. The condition for this
assumption to hold will be discussed in the next section. Suppose
that the quantum particle is initially prepared in the ground state
with the boundary at $Q_0$, i.e.,
\begin{eqnarray}
&&\Psi_{1}(Q_0)=\sqrt{\frac{2}{Q_{0}}}\sin\frac{\pi q}{Q_{0}}, \nonumber\\
&&E_1(Q_0)=\frac{\hbar^{2}\pi^{2}}{2mQ_{0}^{2}}.
\end{eqnarray}
At the next instant of time, the wall moves to $Q$. Since we assume that the
wall moves very fast, the particle does not evolve and stays in
$\Psi_1(Q_0)$, which can be expanded in terms of the $\Psi_{n}(Q)$,
\begin{eqnarray}
\Psi_{1}(Q_0)=\sum_{n} b_{n}\Psi_{n}(Q),
~~~~~n=1,2,3,\cdots,\label{assumption}
\end{eqnarray}
where the $b_{n}$ are the expansion coefficients, and we define
$\rho_{n}=|b_{n}|^{2}$, the probability of finding the
particle in the $n$th energy level with the wall at $Q$. A simple
calculation shows that
\begin{eqnarray}
b_{n} =\left\{
\begin{array}{ll}
\frac{(-1)^{n}2n\sqrt{\gamma}\sin(\gamma\pi)}{\pi(\gamma^2-n^2)}, &~~ \gamma < 1; \\
\frac{2\gamma^\frac{3}{2}}{\pi(\gamma^2-n^2)}
\sin\frac{n\pi}{\gamma}, & ~~\gamma > 1 \text{ and } n \neq \gamma; \\
\frac{1}{\sqrt{\gamma}}, & ~~n = \gamma.
\end{array}
\right.
\label{population}
\end{eqnarray}
where we have defined $\gamma=\frac{Q}{Q_{0}}$. $\rho_{n}$ follows
from Eq.(\ref{population}),
\begin{eqnarray}
\rho_{n}=\left\{
\begin{array}{ll}
\frac{4n^2\gamma\sin^2(\gamma\pi)}{\pi^2(\gamma^2-n^2)^2}, &~~ \gamma < 1; \\
\frac{4\gamma^3}{\pi^2(\gamma^2-n^2)^2}
\sin^2(\frac{n\pi}{\gamma}), & ~~\gamma > 1 \text{ and } n \neq \gamma; \\
\frac{1}{\gamma}, & ~~n = \gamma.
\end{array}
\right.
\label{probability}
\end{eqnarray}
\begin{figure}
\caption{(Color online)~The population $\rho_{n}$ of the levels $\Psi_n(Q)$ for $\gamma=0.5$, $1.5$, $4.9$, and $10.1$.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{(Color online)~The total probability of the first ten energy levels.}
\label{fig3}
\end{figure}
From Eq.(\ref{probability}), we find that the probability $\rho_{n}$
is a function of $\gamma$ and $n$ only, indicating that $Q$ and
$Q_0$ affect $\rho_{n}$ only through their ratio. Furthermore, the expressions for
$0<\gamma<1$ and $\gamma>1$ are different. In Fig.\ref{fig2} we plot
the probability distribution over the energy levels $\Psi_n(Q)$,
with $\gamma$ chosen to be 0.5, 1.5, 4.9 and 10.1, respectively.
From this figure, we find that there is population transfer from
the ground state to higher energy levels, which is different from
the results in Ref.\cite{J.Shen10}. We also find that the
probability distribution depends sharply on $\gamma$. For example,
when $\gamma=4.9$, the particle mainly occupies the 5th level,
while the occupation of the other levels, especially those far from
the 5th level, is almost zero. From Eq.(\ref{eigenvalue}), we
observe that the $n$th eigenfunction with boundary at $Q$ is similar
to the initial state when $n \approx \gamma$. This may be the reason
why the energy levels with index $n$ close
to $\gamma$ are preferentially occupied.
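As an independent numerical check (added here for illustration; it is not part of the original analysis), the closed-form coefficients of Eq.(\ref{population}) can be compared with a direct evaluation of the overlap integral $b_{n}=\int_{0}^{\min(Q,Q_{0})}\Psi_{1}(q;Q_{0})\Psi_{n}(q;Q)\,dq$. A minimal Python sketch, with an arbitrarily chosen grid size, reads:
\begin{verbatim}
# Hypothetical numerical check of Eq. (population): compare the closed form for b_n
# with the overlap integral b_n = int_0^{min(Q,Q0)} Psi_1(q;Q0) Psi_n(q;Q) dq.
import numpy as np

def b_closed(n, gamma):
    if np.isclose(gamma, n):
        return 1.0 / np.sqrt(gamma)
    if gamma < 1:
        return (-1)**n * 2*n*np.sqrt(gamma)*np.sin(gamma*np.pi) / (np.pi*(gamma**2 - n**2))
    return 2*gamma**1.5 * np.sin(n*np.pi/gamma) / (np.pi*(gamma**2 - n**2))

def b_numeric(n, gamma, Q0=1.0, npts=20001):
    Q = gamma * Q0
    q = np.linspace(0.0, min(Q, Q0), npts)    # both wavefunctions vanish outside this range
    psi1 = np.sqrt(2/Q0) * np.sin(np.pi*q/Q0)
    psin = np.sqrt(2/Q) * np.sin(n*np.pi*q/Q)
    return np.sum(psi1*psin) * (q[1] - q[0])  # simple quadrature; the endpoints vanish

for gamma in (0.5, 1.5, 4.9):
    for n in (1, 2, 5):
        print(gamma, n, round(b_closed(n, gamma), 6), round(b_numeric(n, gamma), 6))
\end{verbatim}
The two evaluations agree to the printed precision for the parameters shown.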
To calculate the energy change due to the boundary moving, we have to
calculate the population distribution over all eigenstates of the
Hamiltonian with the new boundary. This is a time-consuming task.
Fortunately, our calculations show that only the first 10 levels are
appreciably populated when the boundary changes from $0$ to
$5Q_{0}$ (see Fig.\ref{fig3}). We can therefore take only the first ten
energy levels into account, which is a good approximation for
calculating the energy in the parameter range under discussion.
Since $\Psi_{1}$ can be expanded as in Eq.(\ref{assumption}), we can
easily obtain the total energy after the boundary moves to the new
position,
\begin{eqnarray}
E^{'}&=&\sum_{n=1}^{10} \rho_{n}^{'}E_{n}^{'}\nonumber\\
&=&\sum_{n=1}^{10} \frac{\rho_{n}}{\sum_{n=1}^{10}\rho_{n}}E_{n}^{'}\nonumber\\
&=&\sum_{n=1}^{10} \frac{\rho_{n}}{\sum_{n=1}^{10}\rho_{n}}
\frac{\hbar^2\pi^2n^2}{2m(\gamma Q_{0})^2}\nonumber\\
&=&\sum_{n=1}^{10} \frac{\rho_{n}}{\sum_{n=1}^{10}\rho_{n}}
\frac{n^2}{\gamma^2}\cdot E_{1}, \label{energy'}
\end{eqnarray}
where $\rho_{n}^{'}$ is the re-normalized probability distribution over
the first ten energy levels.
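The re-normalized sum in Eq.(\ref{energy'}) is straightforward to evaluate numerically. The following sketch (an illustration written for this text, not the code used to produce the figures) computes $E^{'}$ in units of $E_{1}$ over the first ten levels and estimates the force $F=-dE/dQ$ by finite differences:
\begin{verbatim}
# Hypothetical sketch of Eq. (energy'): E'(gamma) in units of E_1, re-normalized over
# the first ten levels, and the force F = -dE/dQ (in units of E_1/Q_0) by finite differences.
import numpy as np

def rho(n, gamma):
    if np.isclose(gamma, n):
        return 1.0 / gamma
    if gamma < 1:
        return 4*n**2*gamma*np.sin(gamma*np.pi)**2 / (np.pi**2*(gamma**2 - n**2)**2)
    return 4*gamma**3*np.sin(n*np.pi/gamma)**2 / (np.pi**2*(gamma**2 - n**2)**2)

def energy(gamma, N=10):
    p = np.array([rho(n, gamma) for n in range(1, N + 1)])
    p /= p.sum()                               # re-normalized distribution rho'_n
    n = np.arange(1, N + 1)
    return float(np.sum(p * n**2 / gamma**2))  # E'/E_1

gammas = np.linspace(0.2, 5.0, 481)
E = np.array([energy(g) for g in gammas])
F = -np.gradient(E, gammas)                    # in units of E_1/Q_0, since Q = gamma*Q_0
print(energy(0.5), energy(2.0))                # the energy rises sharply for gamma < 1
\end{verbatim}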
\begin{figure}
\caption{(Color online)~The energy (in units of $E_{1}$) as a function of $\gamma$.}
\label{fig4}
\end{figure}
\begin{figure}
\caption{(Color online)~The force (in units of $\frac{E_{1}}{Q_{0}}$) exerted on the moving wall as a function of $\gamma$.}
\label{fig5}
\end{figure}
\begin{figure}
\caption{(Color online)~An enlarged view of Fig.\ref{fig4} and Fig.\ref{fig5} for $\gamma$ between 2.5 and 3.5.}
\label{fig6}
\end{figure}
From Eq.(\ref{probability}) and Eq.(\ref{energy'}), we find that
$E^{'}$ (in units of $E_1$) depends only on $\gamma$; in other words, $E^{'}$
does not depend on $Q$ and $Q_0$ separately. So it is interesting
to see how the ratio $\gamma$ affects the total energy when the
boundary changes rapidly.
Fig.\ref{fig4} shows the dependence of the energy on $\gamma$. We
find that the total energy increases rapidly as $\gamma$ decreases
in the regime $0<\gamma<1$. This observation can be easily
understood by examining Eq.(\ref{eigenvalue}). It is obvious that
$E_{n}$ is inversely proportional to the square of the width of the
well $Q$, which means that the energy of each energy level increases
as $Q$ decreases below $Q_{0}$. Moreover, because we choose the
ground state as the initial state, no matter in what state the
particle ends up after the boundary change, the total energy
certainly increases.
For $\gamma>1$, the situation is different. We find that the total
energy decreases on the whole as $\gamma$ increases. The total energy
change is almost zero when the change of $Q$ is not very large.
Meanwhile, we find that the energy does not change monotonically with
$\gamma$. In other words, the energy may increase as $\gamma$
increases. This is interesting. In Ref.\cite{J.Shen10}, the total
energy decreases as $\gamma$ increases. This is because the
evolution of the system there is adiabatic and the particle is always in
the ground state of the Hamiltonian, no matter how the boundary
changes; in other words, the width of the well $Q$ is the only
parameter determining the energy of the quantum subsystem. However,
this is not the case for the anti-adiabatic evolution in our model.
Indeed, there is population transfer among the eigenstates when the
boundary changes from $Q_{0}$ to $Q$, see Eq.(\ref{assumption}).
Namely, the particle will not always stay in the ground state when
the boundary changes. This affects the total energy of the
system. From Eq.(\ref{energy'}), we can see that the total energy is
related to the $b_{n}$, which vary as the boundary changes. This
analysis suggests that the energy depends on two factors: one is the
eigenvalue $E_{n}$ and the other is the population distribution
$\rho_{n}$. From Eq.(\ref{eigenvalue}), we can see that when $Q$
increases, the eigenvalue $E_{n}(Q)$ decreases. On the other hand,
from Eqs.(\ref{population}) and (\ref{probability}), we see that a
change of $\gamma$ results in a change of the population
$\rho_{n}$. Specifically, we find that for $\gamma>1$ the probability
distribution is concentrated in a few energy levels with $n$ near $\gamma$.
Together, these two factors explain why the total energy may increase as $Q$
increases; see Fig.~\ref{fig4}.
Now we discuss this issue from the point of view of the matter-wave force
exerted on the boundary wall. It is given by $F=-\frac{dE}{dQ}$
\cite{J.Shen10}. We show this force in Fig.~\ref{fig5}. From
Fig.\ref{fig5}, we find that the force becomes very large and
repulsive as $Q$ decreases in the regime $Q<Q_{0}$. This is similar
to the result in Ref.\cite{J.Shen10}. In addition, for the case of
$Q>Q_{0}$, the force fluctuates slightly around zero as $Q$
increases. This means that the force between the particle
and the moving wall may be repulsive or attractive. Fig.\ref{fig6}
is an enlarged version of Fig.\ref{fig4} and Fig.\ref{fig5} for
$\gamma$ ranging from $\gamma=2.5$ to $\gamma=3.5$. From the two
figures, we observe that when the total energy increases, the force
is attractive. On the contrary, there is a repulsive force when the
total energy decreases. Since the change of the total energy due to
the boundary moving is so weak for $Q>Q_{0}$, the force in this case
is negligibly small.
\section{a spin-$\frac 1 2$ in a rotating magnetic field}
In the last section, we studied the matter-wave pressure under the
assumption that the boundary moves so fast that the quantum system
does not evolve. One may wonder whether this situation exists, and what
the condition for such an evolution is. Does the system have no time to
evolve? Or is the change so fast that the system cannot follow it? To
simplify the discussion, we here adopt a simple model, a
spin-$\frac 1 2$ in a rotating magnetic field, to formulate the
problem.
The system Hamiltonian reads $\hat{H}=-\vec{\mu} \cdot \vec{B}(t)$.
We choose $\vec{B}(t)=B_{0}\widehat{n}(t)$ with the unit vector
$\widehat{n}(t)=(\sin \alpha \cos \omega t,\sin \alpha \sin \omega
t,\cos \alpha)$ as the magnetic field, where $B_{0}$ is the strength of
the field. The eigenvalues and the corresponding eigenstates of the
system are
\begin{eqnarray}
\mathbf{\psi_{1}}(t)&=&(\cos(\alpha/2), e^{i\omega t}\sin(\alpha/2))^{T} , \nonumber\\
E_{1}&=&+\frac{\hbar\omega_{0}}{2} \label{stateequation1}
\end{eqnarray}
and
\begin{eqnarray}
\mathbf{\psi_{2}}(t)&=&(e^{-i\omega t}\sin(\alpha/2),-\cos(\alpha/2))^{T} , \nonumber\\
E_{2}&=&-\frac{\hbar\omega_{0}}{2}
\end{eqnarray}
where $E_{1}$ and $E_{2}$ are the eigenvalues corresponding to $\mathbf{\psi_{1}}$
and $\mathbf{\psi_{2}}$, respectively, $\omega$ is the frequency
of the rotating magnetic field, and $\alpha$ denotes the angle between the
rotation axis and the magnetic field. $\omega_{0}\equiv\frac{eB_{0}}{m}$, where $e$
is the charge and $m$ the mass of the particle.
Starting with $\mathbf{\psi_{1}}(t=0)$, the particle will evolve to
\begin{equation}
\mathbf{\psi}(t)=\left(
\begin{array}{c}
( \cos(\frac{\lambda t}{2})-i(\frac{\omega_{0}-\omega}{\lambda})
\sin(\frac{\lambda t}{2}) )\cos(\frac{\alpha}{2})
e^{-\frac{i\omega t}{2}} \\
(\cos(\frac{\lambda t}{2})-i(\frac{\omega_{0}+\omega}{\lambda})
\sin(\frac{\lambda t}{2}))\sin(\frac{\alpha}{2})
e^{\frac{i\omega t}{2}} \\
\end{array}
\right)
\end{equation}
where $\lambda$ is defined by,
\begin{equation}
\lambda=\sqrt{\omega^{2}+\omega_{0}^{2}-2\omega\omega_{0}\cos(\alpha)}.
\end{equation}
We now examine in which circumstances the system remains
in $\mathbf{\psi_{1}(0)}$. This can be done by calculating the
probability of finding the particle in $\mathbf{\psi_{1}(0)}$,
\begin{eqnarray}
\rho_{1}(t)&=&|\langle\mathbf{\psi}(t)|\mathbf{\psi_{1}(0)}\rangle|^{2} \nonumber\\
&=&[\cos(\frac{\lambda t}{2}) \sin(\frac{\omega t}{2}) \cos\alpha \nonumber\\
&&+ (\frac{\omega_0 - \omega \cos\alpha}{\lambda})\sin(\frac{\lambda t}{2}) \cos(\frac{\omega t}{2})]^{2} \nonumber\\
&&+ [\cos(\frac{\lambda t}{2}) \cos(\frac{\omega t}{2}) \nonumber\\
&&+ (\frac{\omega - \omega_0 \cos\alpha}{\lambda})\sin(\frac{\lambda t}{2}) \sin(\frac{\omega t}{2})]^{2}.
\label{pro1}
\end{eqnarray}
We are interested in the probability of finding the particle in its initial
state when the magnetic field completes a circle. Eq.(\ref{pro1})
would give the results if we assume that the evolution time $t$ and
the magnetic frequency $\omega$ satisfy $\omega t=2\pi$. With
$\omega t=2\pi$, Eq.(\ref{pro1}) can be rewritten as
\begin{eqnarray}
\rho_1(\omega)&=&\cos^2(\frac{\lambda\pi}{\omega})
+(\frac{\omega_0-\omega \cos\alpha}{\lambda})^2\sin^2(\frac{\lambda\pi}{\omega}).
\label{rho1}
\end{eqnarray}
On the contrary, when starting with $\mathbf{\psi_{2}}(t=0)$, the particle will evolve to the state $\mathbf{\psi}^{'}(t)$ given below,
\begin{equation}
\mathbf{\psi}^{'}(t)=\left(
\begin{array}{c}
(\cos(\frac{\lambda t}{2})+i(\frac{\omega_{0}+\omega}{\lambda})
\sin(\frac{\lambda t}{2}))\sin(\frac{\alpha}{2})
e^{-\frac{i\omega t}{2}} \\
-(\cos(\frac{\lambda t}{2})+i(\frac{\omega_{0}-\omega}{\lambda})
\sin(\frac{\lambda t}{2}))\cos(\frac{\alpha}{2})
e^{\frac{i\omega t}{2}} \\
\end{array}
\right)
\end{equation}
\begin{eqnarray}
\rho_{2}^{'}(t)&=&|\langle\mathbf{\psi}^{'}(t)|\mathbf{\psi_{2}(0)}\rangle|^{2} \nonumber\\
&=&[\cos(\frac{\lambda t}{2}) \sin(\frac{\omega t}{2}) \cos\alpha \nonumber\\
&&+ (\frac{\omega_0 - \omega \cos\alpha}{\lambda})\sin(\frac{\lambda t}{2}) \cos(\frac{\omega t}{2})]^{2} \nonumber\\
&&+ [\cos(\frac{\lambda t}{2}) \cos(\frac{\omega t}{2}) \nonumber\\
&&+ (\frac{\omega - \omega_0 \cos\alpha}{\lambda})\sin(\frac{\lambda t}{2}) \sin(\frac{\omega t}{2})]^{2}.
\label{pro2}
\end{eqnarray}
\begin{eqnarray}
\rho_{2}^{'}(\omega)&=&\cos^2(\frac{\lambda\pi}{\omega})
+(\frac{\omega_0-\omega \cos\alpha}{\lambda})^2\sin^2(\frac{\lambda\pi}{\omega}).
\label{rho2}
\end{eqnarray}
From Eqs.~(\ref{pro2}) and (\ref{rho2}), we observe that the expressions for $\rho_{2}^{'}(t)$ and $\rho_{2}^{'}(\omega)$ are the same as those for $\rho_{1}(t)$ and $\rho_{1}(\omega)$. Hence, we only discuss the evolution of $\psi(t)$ in the following.
\begin{figure}
\caption{(Color online)~The relationship between $\rho_1$ and $\omega$ for $\alpha=\frac{\pi}{4}$.}
\label{omega1}
\end{figure}
In Fig.\ref{omega1}, we plot $\rho_{1}$ as a function of $\omega$
for $\alpha=\frac{\pi}{4}$. From this figure, we find that when
$\frac{\omega}{\omega_{0}}$ is very small, the behavior of
$\rho_{1}$ is irregular; for this reason we cannot find a suitable
frequency $\omega$ which ensures that the electron stays in the
initial state. Fortunately, $\rho_1$ increases with increasing
$\omega$ when $\frac{\omega}{\omega_{0}}>1.442$, and we find that
when $\omega$ is 15 times larger than $\omega_0$, the probability
$\rho_1$ is almost one, so we may claim that the electron stays in
the state $\mathbf{\psi_1}(0)$ at all times. This suggests that the
quantum system will stay in its initial state if the external
conditions change much faster than the typical frequency of the
system.
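Eq.(\ref{rho1}) is also easy to evaluate directly. The short sketch below (added for illustration, with $\omega_{0}=1$ chosen as the unit of frequency) reproduces the behaviour just described for $\alpha=\pi/4$:
\begin{verbatim}
# Hypothetical numerical illustration of Eq. (rho1): the return probability after one
# field cycle, rho_1(omega), for alpha = pi/4.  Units: omega_0 = 1.
import numpy as np

def rho1(omega, alpha, omega0=1.0):
    lam = np.sqrt(omega**2 + omega0**2 - 2*omega*omega0*np.cos(alpha))
    return (np.cos(lam*np.pi/omega)**2
            + ((omega0 - omega*np.cos(alpha))/lam)**2 * np.sin(lam*np.pi/omega)**2)

alpha = np.pi/4
for w in (0.5, 1.442, 5.0, 15.0):
    print(w, rho1(w, alpha))
# rho_1 approaches 1 as omega/omega_0 grows: the spin has no time to follow the field.
\end{verbatim}
In particular, it gives $\rho_{1}\approx 0.99$ at $\omega=15\,\omega_{0}$.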
\begin{figure}
\caption{(Color online)~The relationship between $\rho_1$ and $\omega$ for different values of $\alpha$.}
\label{omegaa}
\end{figure}
In Fig.\ref{omegaa}, we plot $\rho_{1}$ as a function of
$\omega$ for different $\alpha$. From this figure, we find that the
minimum value of $\rho_{1}$ changes with $\alpha$. However,
there always exists an $\omega$ which keeps the system in the state $\mathbf{\psi_1}(0)$ at all times, no matter
what $\alpha$ is. Moreover, this $\omega$ is nearly the same for
different $\alpha$.
\section{Conclusion and discussions}
The adiabatic theorem states that a quantum mechanical system
subjected to gradually changing external conditions can adapt its
functional form. In this paper, we explored the opposite extreme of
rapidly varying conditions. We call the evolution of the system
under such conditions anti-adiabatic evolution. We have examined
the condition for such evolutions and calculated the matter-wave
pressure for the quantum system. Specifically, we have considered a
quantum particle in a one-dimensional infinitely deep potential, one
boundary of which is assumed to move rapidly, such that the
particle inside does not evolve with time; however, as the potential
width varies, the energy of the particle changes. This change
leads to a force on the moving wall. We calculated the force and
found that as the width increases the force is attractive, while it
is repulsive as the width decreases. By considering a spin-$\frac 1
2 $ in a rotating magnetic field, we explored the condition for the
anti-adiabatic evolution. Discussions and remarks on this condition
were given.
\begin{references}
\bibitem{P. Ehrenfest16} P. Ehrenfest, Ann. Phys (Berlin) \textbf{51},327
(1916).
\bibitem{M. Born28} M. Born and V. Fock, Z.Phys \textbf{51},165
(1928).
\bibitem{J. Schwinger37} J. Schwinger, Phys. Rev \textbf{51},648
(1937).
\bibitem{T. Kato50} T. Kato, J. Phys .Soc. Jpn \textbf{5},435
(1950).
\bibitem{L. D. Landau32} L. D. Landau, Phys. Z. Sowjetunion \textbf{2}, 46
(1932).
\bibitem{C. Zener32} C. Zener, Proc. R. Soc. London A \textbf{137},696
(1932).
\bibitem{M. Gell-mann51} M. Gell-mann and F. Low, Phys. Rev \textbf{84},350
(1951).
\bibitem{M. V. Berry84} M. V. Berry, Proc. R. Soc. A \textbf{392},45
(1984).
\bibitem{T.Corbitt07} T. Corbitt, Y. Chen, E. Innerhofer,
H. M\"{u}ller-Ebhardt, D. Ottaway, H. Rehbein, D. Sigg,
S. Whitcomb, C. Wipf, and N. Mavalvala, Phys. Rev. Lett. \textbf{98}, 150802 (2007).
\bibitem{M.Bhattacharya08} M. Bhattacharya, H. Uys and
P. Meystre, Phys. Rev. A \textbf{77}, 033819 (2008).
\bibitem{K. P. Marzlin04} K. P. Marzlin and B. C. Sanders,
Phys. Rev. Lett. \textbf{93},160408 (2004).
\bibitem{D. M. Tong05} D. M. Tong, K. Singh, L. C. Kwek, and
C. H. Oh, Phys. Rev. Lett. \textbf{95}, 110407 (2005).
\bibitem{yijpb07} X. X. Yi, D. M. Tong, L. C. Kwek, and C. H. Oh, J. Phys. B 40,
281 (2007).
\bibitem{craighead00} H. G. Craighead, Science
290, 1532 (2000).
\bibitem{aspelmeyer14}Markus Aspelmeyer, Tobias J. Kippenberg, and Florian Marquardt, Cavity optomechanics, Rev. Mod. Phys. \textbf{86}, 1391 (2014), and references therein.
\bibitem{caves81} C. M. Caves,
Physical Review D 23, 1693(1981); K. Jacobs, I. Tittonen, H. M.
Wiseman, and S. Schiller, Physical Review A 60, 538(1999); I.
Tittonen, G. Breitenbach, T. Kalkbrenner, T. Muller, R. Conradt, S.
Schiller, E. Steinsland, N. Blanc, and N. F. de Rooij, Physical
Review A 59, 1038(1999).
\bibitem{dorsel83} A. Dorsel, J. D. McCullen, P. Meystre, E. Vignes, and H.
Walther, Physical Review Letters 51, 1550(1983).
\bibitem{braginsky77} V. B. Braginsky, Measurement of Weak Forces in Physics
Experiments (University of Chicago Press, Chicago, 1977).
\bibitem{sheard04} B. S. Sheard, M. B. Gray, C. M. Mow-Lowry, D. E. McClelland, and
S. E. Whitcomb, Observation and characterization of an optical spring, Phys. Rev. A \textbf{69}, 051801(R) (2004).
\bibitem{J.Shen10} J. Shen, X. L. Huang, X. X. Yi, Chunfeng Wu,
and C. H. Oh, Phys. Rev. A \textbf{82}, 062107 (2010).
\bibitem{arkadiusz17} Arkadiusz Kosior and Krzysztof Sacha, Dynamical quantum phase transitions in discrete time crystals, arXiv:1712.05588v1.
\bibitem{elben17} A. Elben, B. Vermersch, M. Dalmonte, J. I. Cirac, and
P. Zoller, arXiv:1709.05060.
\bibitem{vermersch18} B. Vermersch, A. Elben, M. Dalmonte, J. I. Cirac, and P. Zoller, Unitary n-designs via random quenches in atomic Hubbard and Spin models: Application to the measurement of Rényi entropies, arXiv:1801.00999v1.
\bibitem{citro05}R. Citro, E. Orignac, T. Giamarchi, Adiabatic-antiadiabatic crossover in a spin-Peierls chain, Phys. Rev. B 72, 024434 (2005).
\bibitem{pellegrini11}Franco Pellegrini, Carlotta Negri, Fabio Pistolesi, Nicola Manini, Giuseppe E. Santoro, Erio Tosatti, Crossover from adiabatic to antiadiabatic quantum pumping with dissipation, Phys. Rev. Lett. 107, 060401 (2011).
\bibitem{eidelstein13}Eitan Eidelstein, Dotan Goberman, Avraham Schiller, Crossover from adiabatic to antiadiabatic phonon-assisted tunneling in single-molecule transistors, Phys. Rev. B 87, 075319 (2013).
\end{references}
\end{document}
|
\begin{document}
\title{Extremal covariant POVM's}
\author{Giulio Chiribella}
\email{[email protected]}
\address{{\em QUIT} Group, http://www.qubit.it, Istituto Nazionale di Fisica della Materia,
Unit\`a di Pavia, Dipartimento di Fisica "A. Volta", via Bassi 6,
I-27100 Pavia, Italy}
\author{Giacomo Mauro D'Ariano}
\email{[email protected]}
\address{{\em QUIT} Group, http://www.qubit.it, Istituto Nazionale di Fisica della Materia,
Unit\`a di Pavia, Dipartimento di Fisica "A. Volta", via Bassi 6,
I-27100 Pavia, Italy, and \\ Department of Electrical and Computer
Engineering, Northwestern University, Evanston, IL 60208}
\date{\today}
\maketitle
\begin{abstract}
We consider the convex set of positive operator valued measures (POVM) which are covariant under a
finite dimensional unitary projective representation of a group. We derive a general
characterization for the extremal points, and provide bounds for the ranks of the corresponding POVM
densities, also relating extremality to uniqueness and stability of optimized measurements.
Examples of applications are given.
\end{abstract}
\section{introduction}
An essential step in the design of the new quantum information technology\cite{Nielsen2000} is to
assess the ultimate precision limits achievable by quantum measurements in extracting information
from physical systems. For example, the security analysis of a quantum cryptographic
protocol\cite{gisirev} is based on the evaluation of the limits posed in principle by the quantum
laws to any possible eavesdropping strategy. A general method to establish such limits is to
optimize a quantum measurement according to a suitable criterion, and this is the general objective
of the so-called {\em quantum estimation theory}\cite{helstrom, holevo}. Different criteria can
be adopted for optimizing the measurement, the choice of a particular one depending on the
particular problem at hand. Moreover, many different optimization problems often share the same
form, e. g. they resort to the maximization of a concave function on the set of the possible
measurements. We remind that measurements form a convex set, the convex combination corresponding
to the random choice between two different apparatuses. Since a concave function attains its
maximum in an extremal point, it is clear that the optimization problem is strictly connected to the
problem of characterizing the extremal points of the convex set.
The quantum measurements interesting in most applications are {\em covariant}\cite{holevo}
with respect to a group of physical transformations. In a purely statistical description of a
quantum measurement in terms of the outcome probability only---i. e. without considering the
state-reduction---the measurement is completely described by a positive operator valued measure
(POVM) on its probability space. In terms of POVM's, "group-covariant" means that there is an action of the
transformation group on the probability space which maps events into events, in such
a way that when the measured system is transformed according to a group transformation, the
probability of a given event becomes the probability of the transformed event. Such scenario
naturally occurs in the estimation of an unknown group transformation performed on a known input
state, e. g. in the estimation of the unknown unitary transformation\cite{entang_meas,ajv}, in the
measurement of a phase-shift in the radiation field \cite{holevo, DMS}, or in the estimation of
rotations on a system of spins \cite{refframe}. A first technique for characterizing extremal
covariant POVM's and quantum operations has been presented in Ref. \cite{extPOVMandQO}
inspired by the method for characterizing extremal correlation matrices of Ref. \cite{LiTam}; in particular, a
classification of extremal POVM's has been presented for the case of a trivial stability group, i.e.
when the only transformation which leaves the input state unchanged is the identity. Here we solve
the characterization problem for extremal covariant POVM's in the general case of nontrivial
stability group, providing a simple criterion for extremality in Theorem \ref{supporti} in terms of
minimality of the support of the {\em seed} of the POVM, presenting iff conditions for extremality
in Theorem \ref{th:iff}, and providing bounds for the rank of extremal POVM's (in the following we
will define the rank of a POVM as the rank of its respective density: see Eq. (\ref{density}) for
its definition).
We show that, contrary to the usual credo, the optimal covariant POVM can have rank larger
than one. Indeed, there are group representations for which a covariant POVM cannot
have unit rank, since this would violate a general bound for the rank of the POVM in
relation to dimensions and multiplicity of the invariant subspaces of the group.
In the present paper we adopt the maximum likelihood optimality criterion, which, however, as we
will show, is formally equivalent to the solution of the optimization problem in a very large class of
optimality criteria. Other issues of practical interest that we address are the uniqueness and the
stability of the optimal covariant POVM. The whole derivation is given for finite dimensional
Hilbert spaces: as we will show in a simple example, it can be generalized to infinite dimensions,
however, at the price of making the theory much more technical.
\par The paper is organized as follows. After introducing covariant POVM's and their convex
structure in Section \ref{ConvStruct}, the main group theoretical tools that will be used for the
characterization of covariant POVM's are presented in Section \ref{GrpTools}. In Section
\ref{ExtrPovms} we give a characterization of extremal covariant POVM's in finite dimension with
a general stability group, deriving an algebraic extremality criterion, along with a general bound for the rank of the extremal POVM's in
terms of the dimensions of the invariant subspaces of the group and of the stability subgroup.
Properties of extremal POVM's in relation with optimization problems are analyzed in
Section \ref{OptExtr}, where also the issues of uniqueness and stability of the optimal covariant
POVM's are addressed. Finally, examples of application of the theory to estimation of rotation,
state, phase-shift, etc. are given in Section \ref{s:examples}, providing extremal POVM's with a non
trivial stability group and giving examples of optimization problems with solution consisting of
extremal POVM with rank greater than one.
\section{Convex structure of covariant POVM's}\label{ConvStruct}
The general description of the statistics of a measurement is given in
terms of a probability space ${\mathfrak X}$---the set of all possible
measurement {\em outcomes}---equipped with a $\sigma$-algebra
$\sigma({\mathfrak X})$ of subsets $\set{B}\subseteq{\mathfrak X}$ and with a probability
measure $p$ on $\sigma({\mathfrak X})$. Each subset $\set{B}\in \sigma({\mathfrak X})$
describes the event "the outcome $x$ belongs to $\set{B}$" and the
statistics of the measurement is fully specified by the probability
measure $p$, which associates to any event $\set{B}$ its probability
$p(\set{B})$.
In quantum mechanics the probability $p(\set{B})$ is given by the Born rule
\begin{equation}
p(\set{B}) \doteq \operatorname{Tr}[\rho P(\set{B})]
\end{equation}
where $\rho$ is a density operator (i.e. a positive semidefinite operator with unit trace) on the Hilbert space $\spc{H}$ of the
measured system, representing its state, whereas $P$ is the POVM of
the apparatus, giving the probability measure $p$ for every given
state $\rho$ of the quantum system. Mathematically a POVM $P:
\sigma({\mathfrak X}) \to \Bnd H$ is a \emph{positive operator valued measure}
on $\sigma({\mathfrak X})$, namely it satisfies the following defining
properties
\begin{eqnarray}
&&0 \leq P(\set{B}) \leq I \qquad \forall \set{B}\in \sigma({\mathfrak X})\\
&&P(\cup_{i=1}^{\infty} \set{B}_i)= \sum_{i=1}^{\infty} P(\set{B}_i)\quad \forall
\{\set{B}_i\}~~\text{disjoint}\\ &&P({\mathfrak X})=I.
\end{eqnarray}
Notice that the set of POVM's for $\sigma({\mathfrak X})$ is a convex set, namely, if $P_1$ and $P_2$ are
POVM's for $\sigma({\mathfrak X})$, then also $\lambda P_1+(1-\lambda)P_2$ is a POVM for
$\sigma({\mathfrak X})$ for any $0\leq\lambda\leq 1$. The measurement described by the POVM $\lambda
P_1+(1-\lambda)P_2$ corresponds to randomly choosing between two different measuring apparatuses
described by the POVM's $P_1$ and $P_2$ respectively. The extremal points of such a convex set of
POVM's---the so-called \emph{extremal POVM's}---correspond to measurements that cannot result
from a random choice between different measuring apparatuses.
In the following we will focus attention on the case of a probability space ${\mathfrak X}$ given by the
quotient $\grp{G}/\grp{G}_0$ of a compact Lie group $\grp{G}$ with respect to a subgroup
$\grp{G}_0$. Physically, this situation arises when the POVM is designed to estimate a state
of the group-orbit $\{U_g \rho U_g^{\dag}~|~ g \in \grp{G}\}$ of a given state $\rho$, with the group
$\grp{G}$ acting on the Hilbert space $\spc{H}$ of a quantum system via the unitary projective
representation $\set{R}(\grp{G})\doteq\{U_g
~|~ g \in \grp{G}\}$. In such a case, in fact, the probability space of the POVM is exactly ${\mathfrak X}
=\grp{G}/\grp{G}_\rho$, where $\grp{G}_0=\{h\in \grp{G}~|~U_h \rho U_h^{\dag}=\rho\}$ is the stability
group of $\rho$, whence the points of the orbit are in one-to-one
correspondence with the elements of ${\mathfrak X}=\grp{G}/\grp{G}_0$. Notice that in the following the fact that
the representation is projective is inconsequential, whence there will be no need of reminding it.
An important class of measurements with ${\mathfrak X} = \grp{G} /\grp{G}_0$ is described by the \emph{covariant}
POVM's \cite{holevo}, namely those POVM's which enjoy the property
\begin{equation}
P(\set{gB})= U_g P(\set{B}) U_g^\dag \qquad \forall \set{B}\in \sigma({\mathfrak X}),~ \forall g\in\grp{G},
\end{equation}
where $g\set{B}\doteq \{gx ~|~ x\in \set{B}\}$. Any POVM $P$ in this class is absolutely
continuous with respect to the measure $\operatorname{d} x$ induced on ${\mathfrak X}$ by the normalized Haar
measure $\operatorname{d} g$ on the group $\grp{G}$, and admits an operator density $M$, namely
\begin{equation}
M:~ {\mathfrak X} \to \Bnd{H},\qquad P(\set{B}) =\int_\set{B} \operatorname{d} x\, M(x).\label{density}
\end{equation}
For a covariant POVM, the operator density has the form \cite{holevo}
\begin{equation}\label{COVdens}
M(x)=U_{g(x)}\Xi U_{g(x)}^{\dag},
\end{equation}
where $g(x) \in \grp{G}$ is any element in the equivalence class $x\in{\mathfrak X}=\grp{G}/\grp{G}_0$, and $\Xi$ is an
Hermitian operator satisfying the constraints
\begin{eqnarray}
\label{XIposnorm} &&\Xi \geq 0,\qquad \int_{\grp{G}} \operatorname{d} g~ U_g \Xi U_g^{\dag}~~ =I \\
\label{XIcomm} &&\left[\Xi, U_h \right] =0 \quad \forall h \in\grp{G}_0.
\end{eqnarray}
The operator $\Xi$ is usually referred to as the \emph{seed} of the covariant POVM\cite{note}.
Notice that the constraints (\ref{XIposnorm}) are needed for positivity and normalization of the
probability density, whereas identity (\ref{XIcomm}) guarantees that $M(x)= U_{g(x)} \Xi
U_{g(x)}^{\dag}$ does not depend on the particular element $g(x)$ in the equivalence class $x$. It
is easy to see that the constraints (\ref{XIposnorm}) and (\ref{XIcomm}) still define a convex set
$\Conv$, namely,
for any $\Xi_1, \Xi_2 \in \Conv$ and for any $0\leq\lambda\leq 1$ one has $\lambda\Xi_1+
(1-\lambda) \Xi_2 \in \Conv$. Precisely, the convex set $\Conv$ is the intersection of the cone of
positive semidefinite operators with the two affine hyperplanes given by identity (\ref{XIcomm}) and by the
normalization condition in Eq. (\ref{XIposnorm}). Since a covariant POVM is completely specified by
its seed $\Xi$ as in Eq. (\ref{COVdens}), the classification of the extremal covariant POVM's
reduces to the classification of the extremal points of the convex set $\Conv$.
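As a simple numerical illustration of the normalization constraint in Eq. (\ref{XIposnorm}) (this example is added here and is not taken from the text), consider the phase group $U(1)$ represented on a $d$-dimensional Hilbert space by $U_{\theta}=\mathrm{diag}(e^{in\theta})$, $n=0,\dots,d-1$, with trivial stability group; the seed with all matrix elements equal to one averages to the identity:
\begin{verbatim}
# Hypothetical illustration of the normalization in Eq. (XIposnorm) for the phase group U(1)
# acting on a d-dimensional space as U_theta = diag(exp(i n theta)), n = 0,...,d-1.
import numpy as np

d = 4
e = np.ones(d, dtype=complex)
Xi = np.outer(e, e.conj())          # canonical seed: Xi >= 0, all diagonal entries equal to 1

thetas = np.linspace(0.0, 2*np.pi, 2000, endpoint=False)
avg = np.zeros((d, d), dtype=complex)
for th in thetas:
    U = np.diag(np.exp(1j*np.arange(d)*th))
    avg += U @ Xi @ U.conj().T
avg /= len(thetas)                  # approximates  int dg  U_g Xi U_g^dag

print(np.allclose(avg, np.eye(d)))  # True: the covariant density integrates to I
\end{verbatim}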
\section{Group theoretic tools}\label{GrpTools}
Let $\grp{G}$ be a compact Lie group, with invariant Haar measure
$\operatorname{d} g$ normalized as $\int_{\grp{G}} \operatorname{d} g=1$, and consider a unitary representation
$\set{R}(\grp{G})=\{U_g~|~ g\in\grp{G}\}$ on a finite dimensional Hilbert space $\spc{H}$.
Then $\spc{H}$ is decomposed as a direct sum of orthogonal irreducible subspaces as follows
\begin{equation}
\spc{H} = \bigoplus_{\mu \in S} \bigoplus_{i=1}^{m_{\mu}}\spc{H}_i^{(\mu)},\label{decomp1}
\end{equation}
$\set{S}$ denoting the collection of equivalence classes of irreducible components of the
representation, the classes being labeled by the Greek index $\mu$, whereas the Latin index $i$
numbers equivalent representations in the same class. Let $T_{ij}^{(\mu)}: \spc{H}_j^{(\mu)} \to
\spc{H}_i^{(\mu)}$ denote invariant isomorphisms connecting the irreducible representations of the
equivalence class $\mu$ of dimension $d_{\mu}$, namely for any $i,j=1, \dots, m_{\mu}$,
$T_{ij}^{(\mu)}: \spc{H}_j^{(\mu)} \to \spc{H}_i^{(\mu)}$ is an invertible operator satisfying the identity
\begin{equation}
U_g T_{ij}^{(\mu)} U_g^{\dag}=T_{ij}^{(\mu)}, \quad \forall g \in \grp{G}.
\end{equation}
Consistently with this notation, $T^{(\mu)}_{ii}$ will denote the projection operator on
$\spc{H}_i^{(\mu)}$. Since all subspaces $\spc{H}_i^{(\mu)}$ are isomorphic, we can equivalently write
\begin{equation}
\bigoplus_{i=1}^{m_{\mu}} \spc{H}_i^{(\mu)} \equiv \spc{H}_{\mu} \bigotimes \sM_{\mu},
\end{equation}
where $\spc{H}_{\mu}$ denotes the \emph{representation space}, i.e. an abstract $d_{\mu}$-dimensional subspace where a representation
of the class $\mu$ acts, while $\sM_{\mu}$ denotes the \emph{multiplicity space}, i.e. a $m_{\mu}$-dimensional space which is unaffected
by the action of the group.
In this way, the decomposition (\ref{decomp1}) can be
written in Wedderburn's form \cite{Zhelobenko}
\begin{equation}
\spc{H} = \bigoplus_{\mu \in S} \spc{H}_{\mu} \otimes \sM_{\mu}.\label{decomp2}
\end{equation}
Due to Schur lemmas, an operator $O$ in the commutant of the representation $\set{R}(\grp{G})$ can be
decomposed as follows \cite{MLpovms}
\begin{equation}\label{commutingO}
O=\sum_{\mu} \sum_{i,j=1}^{m_{\mu}}~~ \frac{\operatorname{Tr}[T_{ji}^{(\mu)} O]}{d_{\mu}}~ T_{ij}^{(\mu)},
\end{equation}
whereas, in terms of the decomposition (\ref{decomp2}) one has
\begin{equation}
O=\oplus_{\mu \in S} \left( I_{\mu} \otimes
O_{\mu}\right),
\end{equation}
$I_{\mu}$ denoting the identity on the representation space
$\spc{H}_{\mu}$, and $O_\mu\in \Bnd{M_{\mu}}$ being a suitable set of operators on the
multiplicity spaces $\sM_{\mu}$.
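A minimal numerical check of Eq. (\ref{commutingO}) (added here as an illustration) can be carried out for a $U(1)$ representation on a four-dimensional space with two inequivalent one-dimensional irreducible components, each of multiplicity two; in this case the isomorphisms $T_{ij}^{(\mu)}$ reduce to matrix units inside each multiplicity block:
\begin{verbatim}
# Hypothetical check of Eq. (commutingO) for U(1) represented on C^4 as
# U_theta = diag(1, 1, e^{i theta}, e^{i theta}): two classes mu with d_mu = 1, m_mu = 2.
import numpy as np
rng = np.random.default_rng(1)

blocks = {0: [0, 1], 1: [2, 3]}          # irrep label -> positions of its equivalent copies
O = np.zeros((4, 4), dtype=complex)      # a random operator in the commutant (block diagonal)
for pos in blocks.values():
    B = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
    O[np.ix_(pos, pos)] = B

recon = np.zeros_like(O)
for pos in blocks.values():
    for i in pos:
        for j in pos:
            T_ij = np.zeros((4, 4), dtype=complex)
            T_ij[i, j] = 1.0                             # T_ij^(mu) as a matrix unit
            recon += np.trace(T_ij.conj().T @ O) * T_ij  # Tr[T_ji O]/d_mu with d_mu = 1

print(np.allclose(recon, O))             # True
\end{verbatim}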
In this paper we will consider covariant POVM's with ${\mathfrak X} = \grp{G}/\grp{G}_0$
where both $\grp{G}$ and $\grp{G}_0$ are compact Lie groups, represented
on the Hilbert space $\spc{H}$ by the unitary representations
$\set{R}(\grp{G})=\{U_g~|~ g \in \grp{G}\}$ and $\set{R}(\grp{G}_0)=\{U_h~|~h \in
\grp{G}_0\}$. We will denote with $\set{S}$ and $\set{S}_0$ the equivalence classes of irreducible
representations of $\set{R}(\grp{G})$ and $\set{R}(\grp{G}_0)$ respectively. The constraints
(\ref{XIposnorm},\ref{XIcomm})
can be rewritten in a remarkably simple form using the decompositions
of $\spc{H}$ into irreducible subspaces under the action of $\set{R}(\grp{G})$ and
$\set{R}(\grp{G}_0)$. In fact, due to the invariance of the Haar measure
$\operatorname{d} g$, the integral in (\ref{XIposnorm}) belongs to the commutant of
$\set{R}(\grp{G})$. Rewriting the constraint (\ref{XIposnorm}) by using
(\ref{commutingO}), one easily gets:
\begin{equation}\label{TRnorm}
\operatorname{Tr}[T_{ij}^{(\mu)} \Xi]= d_{\mu} ~\delta_{ij},\qquad \forall \mu \in\set{S},\quad \forall
i,j=1,\dots,m_{\mu}.
\end{equation}
Moreover, according to (\ref{XIposnorm}) and (\ref{XIcomm}), the operator $\Xi$ must be a positive semidefinite operator in the commutant of $\set{R}(\grp{G}_0)$; then we have
\begin{equation}\label{XIpos-comm}
\Xi =\oplus_{\nu \in S_0}(I_{\nu} \otimes X_{\nu}^{\dag}X_{\nu}),
\end{equation}
where $X_{\nu}$ is an operator on the multiplicity subspace $\sM_{\nu}$.
\section{Extremal covariant POVM's with a nontrivial stability group}\label{ExtrPovms}
In this section we will classify the extremal points of the convex set $\Conv$ of covariant seeds,
namely the convex set of operators that satisfy both conditions (\ref{XIposnorm}) and (\ref{XIcomm}).
For the characterization of the extremal points of a convex set we will use the well known method of
perturbations. We will say that the operator $\Theta$ is a "perturbation" of a given $\Xi\in\Conv$
if and only if there exists an $\epsilon>0$ such that $~\Xi +t \Theta \in \Conv$ for any $t \in
[-\epsilon,\epsilon]$. With such a definition, an operator $\Xi$ is extremal if and only
if its unique perturbation is the trivial one, namely if $\Theta$ is a perturbation of $\Xi$ then
$\Theta=0$.
Let's start with a simple lemma which is useful for the characterization of the perturbations of a given seed $\Xi$.
\begin{lemma}\label{PosAndSupp}
Let $\Xi\in \Bnd{H}$ be a positive semidefinite operator. Then, for any Hermitian $\Theta \in
\Bnd{H}$ the condition
\begin{equation}\label{epsilon}
\exists \epsilon > 0: \qquad \forall t \in [-\epsilon,\epsilon] \quad \Xi +t\Theta\geq0
\end{equation}
is equivalent to
\begin{equation}\label{supports}
\Supp(\Theta) \subseteq \Supp(\Xi).
\end{equation}
\end{lemma}
\par\noindent{\bf Proof. } Suppose that the condition (\ref{epsilon}) holds. Then for any $|\phi\>\in\Ker(\Xi)$ one
necessarily has $\<\phi|\Theta|\phi\>=0$. Therefore, for any vector $|\psi\>\in\spc{H}$ one
has:
\begin{equation*}
|\<\psi|\Theta|\phi\>|=\frac{1}{t}|\<\psi|(\Xi+t\Theta)|\phi\>|\leq
\frac{1}{t}\sqrt{\<\psi|(\Xi+t\Theta)|\psi\>~\<\phi|(\Xi+t\Theta)|\phi\>}=0.\end{equation*}
Hence $\Ker{(\Xi)}\subseteq \Ker{(\Theta)}$, implying that
$\Supp{(\Theta)}\subseteq \Supp{(\Xi)}$. Conversely, suppose that
(\ref{supports}) holds. Let's denote by $\lambda$ the smallest nonzero
eigenvalue of $\Xi$ and by $||\Theta||$ the norm of $\Theta$, then
condition (\ref{epsilon}) holds with
$\epsilon=\frac{\lambda}{||\Theta||}$.$\,\blacksquare$\par
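As a side illustration (our own minimal numerical sketch, not part of the original argument; all names are ours), the content of Lemma \ref{PosAndSupp} can be checked directly with randomly generated matrices, using the value of $\epsilon$ appearing in the proof:
\begin{verbatim}
# Minimal numerical sketch of the lemma with random matrices (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# A rank-2 positive semidefinite Xi on C^4.
V = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
Xi = V @ V.conj().T

# A Hermitian Theta supported on the range of Xi.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Theta = V @ (A + A.conj().T) @ V.conj().T

# epsilon = (smallest nonzero eigenvalue of Xi) / ||Theta||, as in the proof.
eigs = np.linalg.eigvalsh(Xi)
lam = min(e for e in eigs if e > 1e-10)
eps = lam / np.linalg.norm(Theta, 2)

# Xi + t*Theta stays positive semidefinite on the whole interval [-eps, eps].
for t in np.linspace(-eps, eps, 11):
    assert np.linalg.eigvalsh(Xi + t * Theta).min() > -1e-9
\end{verbatim}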
Using the previous lemma, a Hermitian operator $\Theta$ is a perturbation of a
given seed $\Xi$ if and only if the following conditions are satisfied:
\begin{eqnarray}
\label{THETAsupp} &\Supp{(\Theta)}\subseteq \Supp{(\Xi)}&\\
\label{THETAnorm} &\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\Theta T_{ij}^{(\mu)}]=0 \qquad&\forall \mu \in S,~ \forall i,j=1,\operatorname{d}}\def\<{\langle}\def\>{\rangleots,m_{\mu}\\
\label{THETAcomm} &[\Theta,U_h]=0 \qquad&\forall h\in \grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0
\end{eqnarray}
(conditions (\ref{THETAnorm}) and (\ref{THETAcomm}) follow directly from the constraints
(\ref{TRnorm}) and (\ref{XIpos-comm}), respectively).
\par This set of conditions leads to an interesting property of extremal seeds:
\begin{theorem}
\label{supporti} $\Xi$ is an extremal point of $\Conv$ if and only if for any $\zeta \in \Conv$ one has
\begin{equation}
\Supp(\zeta) \subseteq \Supp(\Xi) ~~ \Longrightarrow~~ \zeta = \Xi.\label{suppz}
\end{equation}
\end{theorem}
\par\noindent{\bf Proof. }
To prove necessity it is sufficient to define $\Theta\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \Xi-\zeta$ and note that
it is a perturbation of $\Xi$. In fact, $\Theta$ is in the commutant
of $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$, $\Supp(\Theta) \subseteq \Supp(\Xi)$, and
$\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\Theta T_{ij}^{\mu}]=0 \quad \forall \mu \in S, \forall i,j=1,
\operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}$. Since $\Xi$ is extremal, $\Theta$ must be zero, namely $\zeta=\Xi$.
\par Vice versa, assume (\ref{suppz}). If $\Theta$ is a perturbation of $\Xi$, then there exists some $t\not =0$ such that $\zeta\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \Xi +t \Theta \in \Conv$. But a perturbation must satisfy (\ref{supports}), whence $\Supp(\zeta)\subseteq \Supp(\Xi)$. Using (\ref{suppz}) it is then clear that $\Theta=t^{-1}(\zeta-\Xi)=0$.$\,\blacksquare$\par
The theorem tells us that extremal seeds have ``minimal support'', in the sense that there is no element $\zeta\in \Conv$ with $\Supp{(\zeta)}\subseteq \Supp(\Xi)$ which is different from $\Xi$.
\begin{theorem}
\label{XIpert} Let $\Xi \in \Conv$.
Write $\Xi$ in the form (\ref{XIpos-comm}). Then an operator $\Theta$
is a perturbation of $\Xi$ if and only if
\begin{equation}\label{pert1}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\Theta T_{ij}^{(\mu)}]=0 \qquad \forall \mu \in S,~\forall i,j =1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}
\end{equation}
and $\Theta$ can be written as follows
\begin{equation}\label{pert2}
\Theta = \oplus_{\nu \in S_0} \left(I_{\nu} \otimes X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}A_{\nu}X_{\nu}\right),
\end{equation}
with $X_\nu\in\Bnd{M_\nu}$ and $A_{\nu}\in \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$ Hermitian
$\forall\nu\in\set{S}_0$.
\end{theorem}
\par\noindent{\bf Proof. } Suppose $\Theta$ is a perturbation. Condition (\ref{THETAnorm}) is the same as
(\ref{pert1}). Due to condition
(\ref{THETAcomm}), $\Theta$ must be a Hermitian operator in the commutant
of $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$, then we can write it in the block form $\Theta=
\oplus_{\nu \in\set{S}_0} (I_{\nu} \otimes O_{\nu})$, with each $O_{\nu}\in\Bnd{M_{\nu}}$
Hermitian. Moreover, condition (\ref{THETAsupp}) along with (\ref{XIpos-comm}) imply that
each operator $O_{\nu}$ must have $\Supp(O_{\nu}) \subseteq
\Supp(X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}X_{\nu})= \Supp(X_{\nu})$. Using the singular value decomposition
$X_{\nu}=\sum_{i=1}^{r_{\nu}} \lambda_{i}^{(\nu)}
|w_i^{(\nu)}\>\<v_i^{(\nu)}|$ ($\{|v_i^{(\nu)}\>\}$ and
$\{|w_i^{(\nu)}\>\}$ are orthonormal bases for $\Supp(X_{\nu})$ and
$\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})$ respectively) one can see that any Hermitian operator $O_{\nu}$ with
$\Supp(O_{\nu})\subseteq\Supp(X_{\nu})$ admits the decomposition
$O_{\nu}=X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}A_{\nu}X_{\nu}$, with $A_{\nu}$ a Hermitian operator in
$\Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$. Conversely, if both conditions (\ref{pert1}) and (\ref{pert2}) hold, then
conditions (\ref{THETAsupp}--\ref{THETAcomm}) are obviously fulfilled. $\,\blacksquare$\par
\begin{theorem}\label{th:iff}
Let $P_{\nu}$ be the projection operator onto the subspace $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{\nu} \otimes \sM_{\nu}\subseteq
\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ corresponding to the class $\nu \in S_0$. An operator $\Xi \in \Conv$ written in the form
$\Xi =\oplus_{\nu \in S_0}(I_{\nu} \otimes X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}X_{\nu})$
is extremal if and only if
\begin{equation}\label{iff} \oplus_{\nu \in S_0}\Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}= \set{Span} \{F_{ij}^{(\mu)}
~|~\mu\in S,~ i,j=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}\}, \end{equation}
where
\begin{equation*} F_{ij}^{(\mu)} \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \oplus_{\nu \in S_0}~ X_{\nu}~\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{\nu}}\left[P_{\nu} T_{ij}^{(\mu)} P_{\nu}\right]~X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}.
\end{equation*}
\end{theorem}
\par\noindent{\bf Proof. } Using the characterization of Theorem \ref{XIpert}, we know that
$\Xi$ is extremal if and only if for any operator $\Theta$
satisfying $(\ref{pert1})$ and $(\ref{pert2})$ one has
$\Theta=0$. Let's take $\Theta$ in the form (\ref{pert2}), and
rewrite the direct sum as an ordinary sum
\begin{equation}\Theta=\sum_{\nu \in
S_0} P_{\nu} \left( I_{\nu} \otimes
X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}A_{\nu}X_{\nu}\right) P_{\nu},
\end{equation} using the projectors $P_{\nu}$ onto $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{\nu} \otimes M_{\nu}$. Using invariance
of trace under cyclic permutations, we can write
\begin{equation}
\begin{split}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}\left[ \Theta~T^{(\mu)}_{ij}\right] =& \sum_{\nu \in S_0} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}\left[ (I_{\nu} \otimes A_{\nu}) (I_{\nu}
\otimes X_{\nu}) P_{\nu} T^{(\mu)}_{ij} P_{\nu}(I_{\nu} \otimes
X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag})\right]\\=&\sum_{\nu \in S_0}\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :} \left[
A_{\nu}~~ X_{\nu} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{\nu}}
[P_{\nu} T^{(\mu)}_{ij} P_{\nu}]X^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}_{\nu}\right].
\end{split}
\end{equation}
Define the space $\sR\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \oplus_{\nu \in S_0}\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})$ and denote as
$\oplus_{\nu \in S_0} \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$ the linear space of
operators acting on $\sR$ which are block diagonal on the subspaces $\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})$,
$\nu\in\set{S}_0$. Then, the extremality condition for $\Xi$
becomes: for any Hermitian operator $A \in \oplus_{\nu \in S_0} \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$ one has
\begin{equation}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}\left[ A F^{(\mu)}_{ij}\right]=0 \quad \forall \mu \in\set{S},~\forall i,j=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots,m_{\mu} \quad \Longrightarrow \quad A=0.
\end{equation}
In terms of the Hilbert-Schmidt product $(A,B)\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[A^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}B]$ this condition says that the
unique Hermitian operator $A\in
\oplus_{\nu \in S_0} \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$ which is orthogonal to the
whole set of operators $\set{F}\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq\{F^{(\mu)}_{ij}~~|~~ \mu \in\set{S},~
i,j=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}\}$ is the null operator. Orthogonality to
the set $\set{F}$ is equivalent to orthogonality to the set of
Hermitian operators
$\set{F}'=\{(F^{(\mu)}_{ij}+F_{ji}^{(\mu)})~,~i(F_{ij}^{(\mu)}-~F_{ji}^{(\mu)})~|~\mu
\in\set{S},~i,j=1,\operatorname{d}}\def\<{\langle}\def\>{\rangleots m_{\mu}\}$. The null operator is the unique Hermitian operator orthogonal to $\set{F}'$ if and only if
$\set{F}'$ is a spanning set for the real space of Hermitian
operators in $\oplus_{\nu \in S_0}\Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$. Moreover, using the
Cartesian decomposition we see that any complex block operator
$O\in\oplus_{\nu \in S_0} \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$ can be written as sum of
two Hermitian ones, whence the extremality condition is equivalent to
$\set{Span}(\set{F}')=\oplus_{\nu \in S_0}
\Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$. Finally, the observation
$\set{Span}(\set{F}')=\set{Span}(\set{F})$ completes the proof.
$\,\blacksquare$\par
Notice that for trivial stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0=\{e\}$ ($e$ denotes the identity element), we recover
the characterization of \cite{extPOVMandQO}: there, one has indeed a single equivalence class
$\bar\nu$ in $\set{S}_0$ with one-dimensional representation space $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{\bar\nu}$, so that the
whole Hilbert space $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ is isomorphic to the multiplicity space $\sM_{\bar\nu}$ and the
extremality condition (\ref{iff}) reduces to $\set{Span}\{X T^{(\mu)}_{ij}X^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}~|~ \mu
\in\set{S},~i,j=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}\}= \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X)}$.
\begin{corollary}\label{rankone-extr} Any rank-one seed is extremal.\end{corollary}
\par\noindent{\bf Proof. } Let $\Xi$ be a rank-one seed. In this case there is only one class $\nu_0$ in the
decomposition (\ref{XIpos-comm}) of $\Xi$ (otherwise $\Xi$ could not
have unit rank), and the space $\Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu_0})}$ to be spanned
is one dimensional, whence the condition (\ref{iff}) is always
satisfied. $\,\blacksquare$\par An alternative proof of Corollary \ref{rankone-extr}
follows by observing that any rank-one element of the cone $\set{D}$ of positive
semidefinite operators is necessarily extremal for such cone: since the convex set
$\Conv$ is a subset of $\set{D}$, a rank-one seed $\Xi\in\Conv$ is necessarily an extreme point of
$\Conv$.
\begin{corollary}\label{RankBound}
Let $\Xi\in \Conv$ be an extremal seed and write it in the form $\Xi =\oplus_{\nu \in S_0}(I_{\nu}
\otimes X_{\nu}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}X_{\nu})$. Define $r_{\nu}\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \operatorname{rank}}\def\sign{\operatorname{sign}(X_{\nu})$. Then
\begin{equation}\label{ranks}
\sum_{\nu \in\set{S}_0} r_{\nu}^2 \leq \sum_{\mu \in\set{S}} m_{\mu}^2.
\end{equation}
\end{corollary}
\par\noindent{\bf Proof. } This relation follows directly from the extremality condition by noting
that the left hand side is the dimension of the complex linear space
of block operators $\oplus_{\nu \in\set{S}_0} \Bndd{\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(X_{\nu})}$, while
the right hand side is the cardinality of the spanning set
$\set{F}=\{F_{ij}^{(\mu)}~|~\mu \in\set{S},~i,j=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, m_{\mu}\}$.$\,\blacksquare$\par
In Section \ref{s:examples} we will see an explicit example of an extremal POVM which achieves this bound.
\section{Extremal POVM's and optimization problems}\label{OptExtr}
A crucial step in a quantum estimation approach is the optimization of the estimation strategy for a
given figure of merit. This consists in finding the POVM which maximizes some linear (more generally
concave) functional $\mathcal{F}$---e.g., the average fidelity of the estimated
state with the true one. Then, the convex structure of the set of POVM's
plays a fundamental role in this problem, since, due to concavity of $\mathcal{F}$, one can restrict
the optimization procedure to the extremal POVM's only.
In the covariant case, the problem reduces to optimizing the state estimation in the orbit $\{U_g\rho
U_g^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}~|~ g\in \grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}\}\simeq\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}/\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$
of a given state $\rho$ under the action of a group
$\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$, $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ being the stability group of $\rho$. The optimization typically is
the maximization of a linear functional corresponding to the average value of a positive function
$f(x,x_*)$, where the average is taken over all the couples $(x,x_*)$ of measured and true
values $x,x_*\in{\mathfrak X}}\def\dim{\operatorname{dim}\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}/\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$, respectively. The joint probability density
$p(x,x_*)$ is connected to the conditional density $p(x|x_*)$ given
by the Born rule via Bayes' rule, assuming an \emph{a priori} probability distribution
for the true value $x_*$. In the covariant problem the function $f$ enjoys the invariance
property $f(gx,gx_*)=f(x,x_*)$ $\forall g \in \grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$, and is taken as a decreasing function of the
distance $|x-x_*|$ of the measured value $x$ from the true one $x_*$. In the case of compact $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$
one can assume a uniform \emph{a priori} distribution for $x_*$ values, so that the functional
corresponding to the average can be written as follows
\begin{eqnarray}
\label{average}\mathcal{F}_{\rho}[\Xi] &=& \int_{\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}} \operatorname{d}}\def\<{\langle}\def\>{\rangle g \int_{\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}} \operatorname{d}}\def\<{\langle}\def\>{\rangle g_*~~ f(gx_0,g_*x_0)~ \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[U_{g_*} \rho U_{g_*}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag} U_g \Xi U_g^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}]\\
&=& \int_{\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}}\operatorname{d}}\def\<{\langle}\def\>{\rangle g ~~ f(x_0,gx_0)~ \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[U_g \rho U_g^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag} \Xi],
\end{eqnarray}
where $x_0$ is the equivalence class containing the identity. In the
following, we will consider as the prototype optimization problem
the maximization of the likelihood functional\cite{helstrom,holevo}
\begin{equation}
\mathcal{L}_{\rho}[\Xi]\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\rho \Xi],
\end{equation}
corresponding to the choice $f(x,x_*)=\operatorname{d}}\def\<{\langle}\def\>{\rangleelta(x-x_*)$ in Eq.(\ref{average}). Maximizing
$\mathcal{L}_{\rho}[\Xi]$ means maximizing the probability density that the measured value $x$
coincides with the true value $x_*$. For such estimation strategy the optimization problem
has a remarkably simple form, enabling a general treatment for a large class of group
representations \cite{MLpovms}. Moreover, the solution of the maximum likelihood is formally
equivalent to the solution of any optimization problem with a positive (which, apart from an
additive constant, means bounded from below) summable function $f(x,x_*)$. Indeed, we can define the
map
\begin{equation}
\map{M}(\rho) = k^{-1} \int_{\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}}\operatorname{d}}\def\<{\langle}\def\>{\rangle g~~ f(x_0,gx_0)~ U_g \rho U_g^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag},\label{maplike}
\end{equation}
where $k= \int_{\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}} \operatorname{d}}\def\<{\langle}\def\>{\rangle g~~f(x_0,gx_0)$. This map is completely positive, unital and trace preserving, and, in particular, $\map{M}[\rho]$ is a state. With this definition, we have
\begin{equation}\mathcal{F}_{\rho}[\Xi] = k~ \mathcal{L}_{\map{M}(\rho)}[\Xi],
\end{equation}
whence the maximization of $\mathcal{F}_{\rho}$ is equivalent to the maximization of the likelihood for the transformed state $\map{M}(\rho)$.
\par Essentially all optimal covariant measurements known in the literature are represented by
rank-one operators. The rank-one assumption often provides a useful instrument for simplifying
calculations. Nevertheless, as we will show in the following, the occurrence of
POVM's with rank greater than one is unavoidable in some relevant situations.
\begin{proposition}
For any $\Xi \in \Conv$,
\begin{equation}
\operatorname{rank}}\def\sign{\operatorname{sign}[\Xi] \geq
\max_{\mu \in\set{S}}\left(\frac{m_{\mu}}{d_{\mu}}\right).
\end{equation}
\end{proposition}
\par\noindent{\bf Proof. } Let's decompose $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ into irreducible subspaces for the representation $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})$ of
$\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$ as follows
\begin{equation}\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}=\oplus_{\mu \in S}\oplus_{i=1}^{m_{\mu}}
\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_i^{(\mu)}.
\end{equation}
Take an orthonormal basis
$\set{B}^{(\mu)}_i=\{|(\mu,i), n\>~|~n=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, d_{\mu}\}$ for each
subspace $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_i^{(\mu)}$ in such a way that $|(\mu,i), n\>= T^{(\mu)}_{ij}|(\mu,j), n\>$
for any $n$, $T^{(\mu)}_{ij}: \spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_j^{(\mu)} \to \spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_i^{(\mu)}$ being the invariant isomorphism which intertwines the
equivalent representations $(\mu,i)$ and $(\mu,j)$. Diagonalize $\Xi$ as
\begin{equation}
\Xi =\sum_{k=1}^{\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)} |\eta_k\>\<\eta_k|
\end{equation}
and write
\begin{equation}
|\eta_k\>=\sum_{\mu \in\set{S}}\sum_{i=1}^{m_{\mu}} \sum_{n=1}^{d_{\mu}} c^k_{(\mu,i),n}~~ |(\mu,i),n\>.
\end{equation}
Since $\<\eta_k| T^{(\mu)}_{ij}|\eta_k\>=\sum_{n=1}^{d_{\mu}} c^{k*}_{(\mu,i),n} c^{k}_{(\mu,j),n}$, the normalization constraints (\ref{TRnorm}) become
\begin{equation}
\sum_{k=1}^{\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)} \sum_{n=1}^{d_{\mu}}~ c^{k*}_{(\mu,i),n}c^{k}_{(\mu,j),n} = d_{\mu}~ \operatorname{d}}\def\<{\langle}\def\>{\rangleelta_{ij}.
\end{equation}
This relation implies that for any $\mu \in\set{S}$ the vectors
$\{\vec{c}_{(\mu,i)}~|~i=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots,m_{\mu}\}$ defined by
$ (\vec{c}_{(\mu,i)})_{k,n} \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq c^k_{(\mu,i),n}$ are orthogonal:
since they are $m_{\mu}$ orthogonal vectors in a linear space whose
dimension is $d_{\mu}\times\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)$, it follows that $m_{\mu} \leq d_{\mu} \times\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)$,
hence $\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)\geq \frac{m_{\mu}}{d_{\mu}}\quad \forall \mu \in\set{S}$. $\,\blacksquare$\par
Summarizing, whenever $m_{\mu}>d_{\mu}$ for some class $\mu \in\set{S}$, a covariant POVM
cannot be represented by a rank-one seed.
\par The previous proposition exhibits a structural reason for which, in
the presence of equivalent representations, the set $\Conv$ of
covariant seeds may contain only elements with rank greater than one. On
the other hand, in the following we will discuss the occurrence of
covariant POVM's with rank greater than one in explicit optimization
problems, independently of the presence of equivalent representations.
\begin{proposition}
\label{uniqueOptPOVM} Let $\Xi$ be an extremal point of $\Conv$. Denote by $P$ the projector onto $\Supp(\Xi)$, and let $r\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq\operatorname{rank}}\def\sign{\operatorname{sign}(P)$. Then $\Xi$ is the unique seed which maximizes the likelihood for the state $\rho=\frac{P}{r}$.
\end{proposition}
\par\noindent{\bf Proof. }
First, we need to prove that $\Xi$ commutes with the representation
$\set{R}(\gH_0)\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \{U_k ~|~ k\in\gH_0\}$, where $\gH_0$ is the
stability group of $\rho$, defined by $[\rho,U_k]=0 \quad \forall k \in \gH_0$. Define the group average
\begin{equation}
\xi \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \frac{\int_{\gH_0}\operatorname{d}}\def\<{\langle}\def\>{\rangle h\, U_h\Xi U_h^\operatorname{d}}\def\<{\langle}\def\>{\rangleag}{\int_{\gH_0}\operatorname{d}}\def\<{\langle}\def\>{\rangle h}~.
\end{equation}
Since $\gH_0$ is the stability group of the projector $P$ onto $\Supp(\Xi)$, clearly $\Supp(\Xi)$ is invariant under $\set{R}(\gH_0)$, whence $\xi$ satisfies
$\Supp(\xi) \subseteq
\Supp(\Xi)$. Moreover, using the invariance of the Haar measure it is easy to see that $\xi$ commutes with $\set{R}(\gH_0)$. Finally, $\xi$ is an element of $\Conv$. In fact, it is positive semidefinite, satisfies $(\ref{TRnorm})$ and commutes with $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$---the stability group of $\Xi$---which is by definition a subset of $\set{R}(\gH_0)$. Since $\Xi$ is extremal, using Theorem \ref{supporti} we can conclude that $\Xi=\xi$, whence $\Xi$ commutes with $\set{R}(\gH_0)$.
\par Let's prove now optimality. For any arbitrary seed $\zeta\in \Conv$, the following bound holds:
\begin{equation}
\mathcal{L}_{\rho}[\zeta]= \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\rho \zeta]= \frac{\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[P \zeta]}{r} \leq \frac{\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta]}{r}= \frac{\operatorname{d}}\def\<{\langle}\def\>{\rangleim(\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R})}{r},
\end{equation}
where the last equality follows from the normalization constraints
(\ref{TRnorm}). Clearly $\Xi$ achieves the bound, whence it is
optimal. Notice that the inequality $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[P
\zeta] \leq \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta]$ becomes an equality if and only if $\Supp(\zeta)
\subseteq \Supp(\Xi)$, then using Theorem \ref{supporti} we can see
that $\Xi$ represents the unique optimal POVM. $\,\blacksquare$\par
Consider now a density matrix $\sigma$ with support in the orthogonal complement of $\Supp(\Xi)$,
and consider the randomization
\begin{equation}\label{randomized}
\rho=(1-\alpha) \frac{P}{r} + \alpha
\sigma,
\end{equation}
with $0\leq \alpha \leq 1$. In the following we prove that, for sufficiently small $\alpha>0$, $\Xi$
is still optimal for the maximum likelihood strategy. In other words, the extremal POVM represented
by $\Xi$ is stable under randomization, and the same measuring apparatus can be used for a larger
class of mixed states.
\begin{proposition}
\label{stabilityPOVM} Consider the randomized state $\rho$ in (\ref{randomized}) and denote by $\bar{q}$ the maximum eigenvalue of $\sigma$. If $\alpha < \frac{1}{1+ r \bar{q}}$, then $\Xi$ is the unique seed which maximizes the likelihood for the state $\rho$.
\end{proposition}
\par\noindent{\bf Proof. } First, notice that $\Xi$ commutes with the representation $\set{R}(\gH_0)$ of the stability
group of $\rho$. This follows from the observation that the condition $\alpha <
\frac{1}{1+r\bar{q}}$ implies that $\frac{1-\alpha}{r}$ is strictly the
largest eigenvalue of $\rho$. Then, $P$ is the projector on the
eigenspace with maximum eigenvalue of $\rho$, while, for any $h\in \grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$, $P_h \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq U_h P
U_h^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}$ is the projector on the eigenspace with maximum eigenvalue
of $\rho_h \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq U_h \rho U_h^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}$. If $h \in \gH_0$ then it must
be $\rho_h =\rho$, and, necessarily, $P_h=P$. Therefore $\gH_0$ is a
subgroup of the stability group of $P$. But $\Xi$ commutes with the
representation of the stability group of $P$, as proven in Proposition
\ref{uniqueOptPOVM}, then it commutes also with $\set{R}(\gH_0)$.
\par Now we prove optimality of $\Xi$. Let's
denote by $Q$ the projection onto $\Supp(\sigma)$. The following bound
holds
for any $\zeta \in
\Conv$:
\begin{eqnarray}
\mathcal{L}_{\rho}[\zeta]&=& \frac{(1-\alpha)}{r} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[P\zeta] + \alpha \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\sigma \zeta]\\
&\leq& \frac{(1-\alpha)}{r} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[P\zeta]+ \alpha\bar{q} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[Q
\zeta]\\ \label{ineq1} &\leq& \frac{(1-\alpha)}{r} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[(P+Q)\zeta]\\
\label{ineq2}&\leq& \frac{(1-\alpha)}{r} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta]
= \frac{(1-\alpha)}{r} \operatorname{d}}\def\<{\langle}\def\>{\rangleim(\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}).
\end{eqnarray}
This bound is achieved by $\Xi$, proving its optimality. Notice that $\Xi$ is the unique optimal
seed. In fact, equality in (\ref{ineq1}) is attained if and only if $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[Q\zeta]=0$, namely when $\Supp(Q)\subseteq
\Ker(\zeta)$, while in (\ref{ineq2}) equality is
attained if and only if
$\Supp(\zeta)\subseteq \Supp(P)\oplus \Supp(Q)$. Therefore the bound is
achieved if and only if $\Supp(\zeta) \subseteq \Supp(P)=\Supp(\Xi)$,
implying $\zeta=\Xi$.$\,\blacksquare$\par
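As a concrete numerical instance (ours, for illustration only): if the extremal seed $\Xi$ has rank $r=2$ and the orthogonal state $\sigma$ is pure, so that $\bar{q}=1$, Proposition \ref{stabilityPOVM} guarantees that $\Xi$ remains the unique optimal seed for every randomization (\ref{randomized}) with $\alpha<\frac{1}{1+2\cdot 1}=\frac{1}{3}$.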
\section{Examples}\label{s:examples}\subsection{Extremal POVM's with a non trivial stability group}
\subsubsection{} Consider the group of rotations, represented in a
$(2j+1)$-dimensional Hilbert space $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_j$ by the irreducible representation $R_{{\bf n},
\varphi}\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq e^{i\varphi {\bf n}\cdot{\bf j}}$, where $\varphi$ is an angle, ${\bf n}$ is a unit-vector, and ${\bf j}
\operatorname{d}}\def\<{\langle}\def\>{\rangleoteq (j_x,j_y,j_z)$ is the angular momentum operator. In this case
a covariant estimation in the orbit of a pure state $|\psi\>$ generally
may involve a nontrivial stability group. This is actually the case when
$|\psi\> \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq |j m\>_{\bf n_0}$ is an eigenvector
of ${\bf n_0 \cdot j}$ for some unit vector ${\bf n_0}$. Clearly in
such case the stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ consists of rotations around ${\bf n_0}$, and the state
estimation in the orbit reduces to the estimation of a rotated direction
${\bf n'}$. The same situation arises for any state $\rho$ that is a mixture of eigenvectors of ${\bf n_0
\cdot j}$. Without loss of generality, let's take ${\bf n_0}$ as the direction of the $z$-axis, and
write $\rho=\sum_{m=-j}^{j} p_m |jm\>\<jm|$ with $p_m \geq 0 \quad \forall
m$. Let's denote by $P$ the projector onto $\Supp(\rho)$, and take
$\bar{m}$ such that $p_{\bar{m}}=\max_{m}\{p_m\}$. Then, since
\begin{equation*}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\rho \zeta] \leq p_{\bar{m}} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[P \zeta] \leq p_{\bar{m}}\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta] =p_{\bar{m}} (2j+1),
\end{equation*}
one has that $\Xi = (2j+1) |j\bar{m}\>\<j\bar{m}|$ is the
optimal POVM. Notice that such a POVM commutes with the representation
$\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$ of the stability group and is extremal, as a consequence of Corollary \ref{rankone-extr}.
\subsubsection{}
Consider the group $\mathbb{SU}(d)$ of unitary $d
\times d$ matrices with unit determinant, acting on the space $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R} \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \Cmplx^d$. It is easy to
see that each vector $|\psi\> \in \spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ has a nontrivial stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0\equiv
\mathbb{U}(d-1)$. In fact, by
introducing an orthonormal basis $\set{B}_{\perp} \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq
\{|n\>~|~n=1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots ,d-1\}$ for the orthogonal complement $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp}$ of the line $\set{Span}\{|\psi\>\}$,
and the basis $\set{B} \operatorname{d}}\def\<{\langle}\def\>{\rangleoteq \{|\psi\>\} \cup \set{B}_{\perp}$
for $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$, the stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ consists of matrices of the form
\begin{equation}\label{block}
U_{h} = \left( \begin{array}{l|lll}
\omega_h &&{\bf 0}\\
\hline
{\bf 0} && V_h\\
\end{array}~
\right),
\end{equation}
where $\omega_h\in \Cmplx, \ |\omega_h|=1$, and $V_h$ is a unitary $(d-1)\times (d-1)$
matrix with $\operatorname{Det}(V_h)=\omega_h^*$.
Let's consider now the tensor
representation $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})=\{U_g^{\otimes 2}~|~ U_g \in\mathbb{SU}(d) \}$ on the space
$\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\otimes 2}$. This representation has two irreducible subspaces, the symmetric and the
antisymmetric ones $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_+$ and $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_-$, with dimensions $d_+=\frac{d(d+1)}{2}$ and $d_-=\frac{d(d-1)}{2}$ respectively. Denote by $P_+$ and $P_-$ the projectors on $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_+$ and
$\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_-$. Let's apply the representation $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})$ on the state $|\psi\>^{\otimes
2} \in \spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\otimes 2}$. Clearly the stability group is the
same $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ as before, and it is represented by
$\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)=\{U_h^{\otimes 2} ~|~h\in\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0\}$. It is easy
to see that $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$ contains five irreducible
components, carried by the subspaces $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_1=
\set{Span}\{|\psi\>^{\otimes 2}\}$, $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_2= \set{Span}\{|\psi\>\} \otimes
\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp}$ , $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_3 = \spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp} \otimes \set{Span}\{|\psi\>\}$,
$\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_4= P_+(\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp~\otimes 2})$, and $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_5=P_-(\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp~
\otimes 2})$. Notice that $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_2$ and $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_3$ carry equivalent
representations, corresponding to a two dimensional multiplicity
space. An example of extremal POVM is given by
\begin{equation*}
\Xi = \frac{d(d+1)}{2}~
|\psi\>\<\psi|^{\otimes 2} \oplus \frac{d}{d-2}~ P_- Q P_-,
\end{equation*}
where $Q$ is the projection on $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\perp~ \otimes 2}$. Since the two
summands are proportional to $|\psi\>\<\psi|^{\otimes 2}$ and $P_- Q P_-$, which are the projectors on $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_1$ and
$\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_5$ respectively, then $\Xi$ belongs to the commutant of $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)=\{U_h^{\otimes
2}~|~h\in\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0\}$. Notice that the subspaces $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_1$ and $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_5$ have multiplicities
$m_1=m_5=1$, corresponding to one-dimensional multiplicity spaces
$\sM_1 \equiv \sM_5 \equiv \Cmplx$ (whence the partial traces over $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_{1,5}$ will be
$c$-numbers). Moreover, using the fact that $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_1}[P_+]=1$,
$\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_1}[P_-]=0$, $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_5}[P_+]=0$,
$\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}_{\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_5}[P_-]=\frac{(d-1)(d-2)}{2}$ one can check extremality using
the condition (\ref{iff}). Let's observe that in this example we have $r_1=r_5=1$ and $m_+=m_-=1$,
where $r_1$ and $r_5$ are defined as in Corollary \ref{RankBound}, while $m_+$ and $m_-$ are the
multiplicities of the two irreducible representations of $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})$. Then the bound of
(\ref{ranks}) is saturated. Finally, we remark that this POVM is
optimal for discriminating states in the orbit of $|\psi\>^{\otimes
2}$ \cite{MLpovms}, in the orbit of $\rho= \frac{1}{r}
\left(|\psi\>\<\psi|^{\otimes 2} + P_- Q P_- \right)$ where $r = 1+
\frac{(d-1)(d-2)}{2}$ because of Proposition \ref{uniqueOptPOVM}, and also in the orbit of any
randomization $\rho'=(1-\alpha)\rho + \alpha \sigma$ where $\sigma$ is a density matrix with
$\Supp(\sigma)\subseteq \Ker(P)$, and $\alpha < \frac{1}{1+r}$,
because of Proposition \ref{stabilityPOVM}.
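The normalization of this seed can also be verified numerically. The following is a rough sketch of ours (not part of the paper), checking $\operatorname{Tr}[P_+\Xi]=\frac{d(d+1)}{2}$ and $\operatorname{Tr}[P_-\Xi]=\frac{d(d-1)}{2}$ for $d=3$ and $|\psi\>=|0\>$:
\begin{verbatim}
# Our own numerical check of the normalization of the extremal seed of this
# example, with d = 3 and |psi> = |0> (illustrative only).
import numpy as np
d = 3
I = np.eye(d)
psi = I[:, 0]

# Swap operator on C^d (x) C^d and the (anti)symmetric projectors.
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[i * d + j, j * d + i] = 1.0
P_plus = (np.eye(d * d) + S) / 2
P_minus = (np.eye(d * d) - S) / 2

# Q projects onto (H_perp)^{otimes 2}, H_perp = span{|1>,...,|d-1>}.
Pperp = I - np.outer(psi, psi)
Q = np.kron(Pperp, Pperp)

psi2 = np.kron(psi, psi)
Xi = d * (d + 1) / 2 * np.outer(psi2, psi2) + d / (d - 2) * (P_minus @ Q @ P_minus)

print(np.trace(P_plus @ Xi))   # expected d(d+1)/2 = 6
print(np.trace(P_minus @ Xi))  # expected d(d-1)/2 = 3
\end{verbatim}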
\subsection{Extremal POVM's with rank greater than one}
\subsubsection{} Consider the Abelian group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}=\mathbb{U}(1)$ of phase shifts, acting in the space
$\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}= \Cmplx^{d}$ by the representation $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})=\{U(\varphi) = \exp(i\varphi
N)~|~ \varphi \in [-\pi, \pi]\}$, where the generator $N$ is given by $N= \sum_{n=0}^{d-1}
n~|n\>\<n|$ for some orthonormal basis $\{|n\>~|~n=0,1, \operatorname{d}}\def\<{\langle}\def\>{\rangleots, d-1\}$.
The stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ may be either the whole $\mathbb{U}(1)$ (for $\rho$ diagonal on the
eigenstates of the generator), or a discrete subgroup $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0 =
\mathbb{Z}_k$ for some integer $k$, including the case $k=1$ of
trivial stability group. We exclude the degenerate case $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0=\mathbb{U}(1)$ of shift invariant
states. The parameter space ${\mathfrak X}}\def\dim{\operatorname{dim}=\mathbb{U}(1)/\mathbb{Z}_k$ will be a circle, parametrized by an
angle $\theta \in [-\pi,\pi]$, and the action of a group element
$g(\varphi)\in \grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}$ on an element $\theta\in{\mathfrak X}}\def\dim{\operatorname{dim}$
will be given by $g(\varphi)~\theta= \theta + k \varphi$.
\par Due to constraint (\ref{TRnorm}), a seed $\Xi$ is represented in the eigenbasis of the generator by a correlation matrix, namely by a positive semidefinite matrix with unit diagonal entries. Vice versa, any correlation matrix corresponds to a seed in the case of trivial stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$. In \cite{LiTam} one can find a constructive method which provides extremal correlation matrices with rank $r>1$: here we show that any such matrix can be viewed as the optimal seed for the estimation problem in the orbit of
a particular state. Let us choose as optimality criterion the
maximization of the average value of a positive summable function
$f: {\mathfrak X}}\def\dim{\operatorname{dim} \times {\mathfrak X}}\def\dim{\operatorname{dim} \to \mathbb{R}_+$ depending only on the difference
$\theta-\theta_*$ between the measured and the true value. Suppose $\rho$ is a state with stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0=\mathbb{Z}_k$. As we noted at the beginning of Section \ref{OptExtr}, the
maximization of
$\set{F}_{\rho}[\Xi]$---the average value of $f(\theta-\theta_*)$---corresponds to the maximization of the
likelihood $\mathcal{L}_{\map{M}(\rho)}[\Xi]$ for the transformed state
$\map{M}(\rho)= f_0^{-1}\int_{-\pi}^{\pi}\frac{\operatorname{d}}\def\<{\langle}\def\>{\rangle \varphi}{2\pi} f(-k\varphi)
U_{\varphi}\rho U_{\varphi}^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}$ (from Eq. (\ref{maplike})). Notice that the map $\map{M}$
is trivially covariant---i.e. $\map{M}(U_\phi\rho U_\phi^\operatorname{d}}\def\<{\langle}\def\>{\rangleag)=U_\phi\map{M}(\rho)
U_\phi^\operatorname{d}}\def\<{\langle}\def\>{\rangleag$---since the group is abelian. For simplicity here we require that the map $\map{M}$ is
invertible, whence also $\map{M}^{-1}$ is covariant and trace-preserving (but generally not
positive). Covariance of $\map{M}$ implies that the stability group of $\map{M}(\rho)$
contains the stability group of $\rho$, and covariance of $\map{M}^{-1}$ implies the reverse
inclusion, whence the stability group is not changed by the maps.
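As a quick illustration of the correspondence between seeds and correlation matrices in this abelian setting, the following is a small numerical sketch of ours (with an arbitrarily chosen matrix): any positive semidefinite $\Xi$ with unit diagonal yields a normalized covariant POVM, $\int_{-\pi}^{\pi}\frac{\operatorname{d}\varphi}{2\pi}\,U(\varphi)\Xi U(\varphi)^{\dag}=I$.
\begin{verbatim}
# Our own numerical sketch: a positive semidefinite matrix with unit diagonal
# (a correlation matrix) averages to the identity under the U(1) action
# generated by N = diag(0, 1, ..., d-1), i.e. it is a valid covariant seed.
import numpy as np
d = 4
rng = np.random.default_rng(1)

# Random correlation matrix: Gram matrix of unit vectors.
V = rng.standard_normal((d, 2)) + 1j * rng.standard_normal((d, 2))
V /= np.linalg.norm(V, axis=1, keepdims=True)
Xi = V @ V.conj().T                            # PSD with unit diagonal

phis = np.linspace(-np.pi, np.pi, 2001)[:-1]   # uniform grid over the circle
avg = sum(np.diag(np.exp(1j * phi * np.arange(d))) @ Xi
          @ np.diag(np.exp(-1j * phi * np.arange(d))) for phi in phis) / len(phis)
print(np.allclose(avg, np.eye(d), atol=1e-6))  # True: the POVM is normalized
\end{verbatim}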
\par Let's take now an extremal correlation matrix $\Xi$
with $\operatorname{rank}}\def\sign{\operatorname{sign}(\Xi)= r\geq 1$ and denote by $P$ the projector onto
$\set{Rng}}\def\Ker{\set{Ker}}\def\Supp{\set{Supp}(\Xi)$. Using Proposition \ref{uniqueOptPOVM}, we can see that $\Xi$ commutes with the representation $\set{R}(\gH_0)$, where $\gH_0$ is the stability group of $P$. Call $\lambda$ the modulus of the minimum eigenvalue of
$\mathcal{M}^{-1}(\frac{P}{r})$, then
\begin{equation*}
\rho= \frac{\lambda}{1+d\lambda }~
I+ \frac{1}{1+d\lambda} \mathcal{M}^{-1}(\frac{P}{r})
\end{equation*} is a density operator. Notice that the stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$ of $\rho$ is the same stability group of $\mathcal{M}^{-1}(P)$, which coincides with $\gH_0$, the stability group of $P$. Therefore $\Xi$ commutes with the representation $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$. It is
easy to show that $\Xi$ is the unique seed commuting with $\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0)$ which is also optimal for the estimation
of states in the orbit of $\rho$. In fact, for any $\zeta$ in the convex set $\Conv$ of the seeds with stability group $\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I}_0$, we have
\begin{eqnarray*}
\set{F}_{\rho}[\zeta]&=& f_0 \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta\mathcal{M}(\rho)]=f_0\left( \frac{\lambda}{1+d\lambda} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta] + \frac{1}{r(1+d\lambda)} \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta P]\right)\\
&\leq& f_0 \left(\frac{d}{r}\right)~\left(\frac{1+r\lambda}{1+d\lambda}\right)
\end{eqnarray*}
This bound is achieved by choosing $\zeta=\Xi$.
Moreover, as in Proposition \ref{uniqueOptPOVM}, we
can observe that the functional $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[\zeta P]$ with $\zeta \in\Conv$ is maximum if and only if $\zeta=\Xi$, whence
the maximum is unique.
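For completeness, here is a small numerical sketch of ours (not taken from \cite{LiTam}) of the extremality test: in this abelian case with trivial stability group, condition (\ref{iff}) amounts to requiring that the outer products of the columns of $X$ (where $\Xi=X^{\dag}X$) span the full matrix space on $\set{Rng}(X)$.
\begin{verbatim}
# Our own sketch of the extremality test for correlation-matrix seeds:
# Xi = X^dag X with columns v_n is extremal iff the outer products v_n v_n^dag
# span the full r x r matrix space on the range of X (complex dimension r^2).
import numpy as np

def is_extremal_correlation_seed(Xi, tol=1e-9):
    eigvals, eigvecs = np.linalg.eigh(Xi)
    keep = eigvals > tol
    r = int(np.sum(keep))
    # Factor Xi = X^dag X with X of shape (r, d).
    X = np.sqrt(eigvals[keep])[:, None] * eigvecs[:, keep].conj().T
    outer = [np.outer(X[:, n], X[:, n].conj()).ravel() for n in range(Xi.shape[0])]
    return np.linalg.matrix_rank(np.array(outer), tol=tol) == r * r

# Rank-one seeds are always extremal (Corollary of Section 4):
v = np.exp(1j * np.array([0.0, 0.3, 1.1, 2.0]))
print(is_extremal_correlation_seed(np.outer(v, v.conj())))   # True
# The identity seed is not extremal for d > 1:
print(is_extremal_correlation_seed(np.eye(4)))               # False
\end{verbatim}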
\subsubsection{} We provide now an example with a
non-compact group represented in an infinite dimensional Hilbert
space. This example is out of the general treatment of the present paper---which considers only
finite dimensions---and is given only with the purpose of showing that our results could be
generalized to infinite dimensions, however at the price of much more technical proofs.
Take $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ as the Fock space, and consider the projective representation on $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$ of the
group of translations on the complex plane $\Cmplx$ in terms of the Weyl-Heisenberg operators
$\set{R}(\grp{G}}\def\gH{\grp{H}}\def\gI{\grp{I})=\{D(\alpha)=e^{\alpha a^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}-\bar{\alpha}a}~|~ \alpha \in \Cmplx\}$, where $[a,a^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}]=1$. Here we will
consider the 2-fold tensor representation $\{D(\alpha)^{\otimes 2}~|~
\alpha \in \Cmplx\}$ on $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\otimes 2}$. Using the unitary
operator $V=e^{\frac{\pi}{4}(a_1a_2^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}-a_1^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}a_2)}$, one can
write $D(\alpha)^{\otimes 2}= V (D(\sqrt{2}\alpha) \otimes I)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}$ and
see that the irreducible subspaces of this representation are $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_n =
V (\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}
\otimes \set{Span}(|\phi_n\>))$, where $\{|\phi_n\>~|~ n=1,2, \operatorname{d}}\def\<{\langle}\def\>{\rangleots \infty \}$ is any
orthonormal basis for $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}$. All these subspaces carry equivalent
representations, the isomorphism between $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_m$ and $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_n$ being
\begin{equation}
T_{mn} =V(I \otimes |\phi_m\>\<\phi_n|)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}.\label{Tmn}
\end{equation}
In terms of these isomorphisms, the normalization constraints
(\ref{TRnorm}) for a seed operator become \cite{MLpovms}
\begin{equation}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[T_{mn} \zeta] = 2 \operatorname{d}}\def\<{\langle}\def\>{\rangleelta_{mn}
\end{equation}
Notice that the number 2 in this formula has nothing to do with the
dimension of $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}_n$ which is infinite: in the non-compact case the
dimensions are replaced by positive numbers depending only on the equivalence class of
representations. In principle, since the space $\spc{H}}\def\sM{\spc{M}}\def\sR{\spc{R}^{\otimes 2}$ is
infinite dimensional, there is the possibility of extremal covariant
POVM's with an infinite rank. Actually we can provide the remarkable
example
\begin{equation}
\Xi= 2~V (|0\>\<0|\otimes I)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag},
\end{equation}
where $|0\>$ is the vacuum state of the Fock basis $\{|m\>~|~ a^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}a|m\>=m|m\>\}$. The corresponding POVM can be realized by averaging the outcomes of two independent measurements with
$\Xi_1= |0\>\<0|\otimes I$ and $\Xi_2= I \otimes |0\>\<0|$ \cite{MLpovms}, which in
quantum optics correspond to two heterodyne measurements \cite{bilkent}.
We can observe that $\Xi$ maximizes the likelihood functional for any state of the form $\rho= V
(|0\>\<0| \otimes \sigma) V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleagger}$, where
$\sigma= \sum_{n=0}^{\infty} p_n |\phi_n\>\<\phi_n|$ is a mixed state
with $p_n >0~\forall n$. In fact, for any seed $\zeta$, one has the bound
\begin{equation}
\begin{split}
\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[V(|0\>\<0|\otimes \sigma)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag} \zeta] =&\sum_{n=0}^{\infty} p_n \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[V~ (|0\>\<0| \otimes |\phi_n\>\<\phi_n|)~ V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}~~ \zeta]\\
\leq& \sum_{n=0}^\infty p_n\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[V(I\otimes|\phi_n\>\<\phi_n|)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}\zeta]=
\sum_{n=0}^{\infty} p_n \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[T_{nn} \zeta]\label{bnd}= 2,
\end{split}
\end{equation}
and since $\Xi$ achieves the
bound (\ref{bnd}), it is optimal. Moreover $\Xi$ is the
unique optimal seed. In fact, the equality in (\ref{bnd}) is
achieved if and only if $\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[V (|0\>\<0| \otimes
|\phi_n\>\<\phi_n|)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag} \zeta] = \operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[V(I\otimes |\phi_n\>\<\phi_n|)V^{\operatorname{d}}\def\<{\langle}\def\>{\rangleag}
\zeta]$ for any $n$: by expanding the identity on the Fock basis, the positivity of $\zeta$ implies $\<m|\<\phi_n|V^\operatorname{d}}\def\<{\langle}\def\>{\rangleag \zeta V|m\>|\phi_n\>=0$ for any $m \not =0$. Hence the unique nonzero diagonal elements of $\zeta$ are on the vectors $V|0\>|\phi_n\>$.
On the other hand, the positivity of $\zeta$ along with the normalization constraint
$\operatorname{Tr}}\def\atanh{\operatorname{atanh}}\def\:{\hbox{\bf :}[T_{mn} \zeta]=0 \quad \forall m \not =n$ imply that all the off diagonal elements of $\zeta$
are zero. Hence $\zeta=2V\sum_{n=1}^\infty (|0\>\< 0|\otimes|\phi_n\>\<\phi_n|)V^\operatorname{d}}\def\<{\langle}\def\>{\rangleag
=2V(|0\>\< 0|\otimes I)V^\operatorname{d}}\def\<{\langle}\def\>{\rangleag=\Xi$. The fact that $\Xi$ is the unique optimal seed ensures that it
is also extremal, otherwise there would be two different seeds which are equally optimal. Notice that
$\Xi$ is extremal also according to our characterization (\ref{iff}).
\end{document}
\begin{document}
\vskip6cm
\title{Spectral radius and signless Laplacian spectral radius of strongly connected digraphs
\footnote{Research supported by
the Zhujiang Technology New Star Foundation of
Guangzhou (No. 2011J2200090) and Program on International Cooperation and Innovation,
Department of Education, Guangdong Province (No. 2012gjhz0007).}}
\author{Wenxi Hong, Lihua You\footnote{{\it{Corresponding author:\;}}[email protected].}}
\vskip.2cm
\date{{\small
School of Mathematical Sciences, South China Normal University,\\
Guangzhou, 510631, P.R. China\\
}} \maketitle
\begin{abstract}
\vskip.3cm
Let $D$ be a strongly connected digraph and $A(D)$ be the adjacency matrix of $D$. Let $diag(D)$ be the diagonal matrix with the outdegrees of the vertices of $D$ and $Q(D)=diag(D)+A(D)$ be the signless Laplacian matrix of $D$. The spectral radius of $Q(D)$ is called the signless Laplacian spectral radius of $D$, denoted by $q(D)$. In this paper, we give a sharp bound on $q(D)$ in terms of the outdegree sequence and compare it with some known bounds, establish some sharp upper or lower bounds on $q(D)$ for given parameters such as the clique number, the girth or the vertex connectivity, and characterize the corresponding extremal digraphs or propose open problems. In addition, we also determine the unique digraphs which achieve the minimum (or maximum), the second minimum (or maximum), the third minimum and the fourth minimum spectral radius and signless Laplacian spectral radius among all strongly connected digraphs, and answer the open problem proposed by Lin-Shu [H.Q. Lin, J.L. Shu, A note on the spectral characterization of strongly connected bicyclic digraphs, Linear Algebra Appl. 436 (2012) 2524--2530].
\vskip.2cm \noindent{\it{AMS classification:}} 05C20; 05C50; 15A18
\vskip.2cm \noindent{\it{Keywords:}} Digraph; Signless Laplacian; Spectral radius; Clique number; Girth; Vertex connectivity.
\end{abstract}
\section{ Introduction}
\hskip.6cm
Let $D=(V(D), E(D))$ be a digraph, where $V(D)=\{1, 2, \ldots, n \}$ and $E(D)$ are the vertex set and arc set of $D$, respectively.
A digraph $D$ is simple if it has no loops and multiple arcs.
A digraph $D$ is strongly connected if for every pair of vertices $i, j\in V(D)$, there are a directed path from $i$ to $j$ and a directed path from $j$ to $i$.
In this paper, we consider finite, simple strongly connected digraphs, simply, strongly connected digraphs. We follow \cite{2001, 1979, 1976} for terminology and notations.
Let $D$ be a digraph. If two vertices are connected by an arc, then they are called adjacent.
For $e=(i, j)\in E(D)$, $i$ is the initial vertex of $e$, $j$ is the terminal vertex of $e$ and vertex $i$ is the tail of vertex $j$.
Let $N^-_D(i)=\{j\in V(D)|(j, i)\in E(D)\}$ and $N^+_D(i)=\{j\in V(D)|(i, j)\in E(D)\}$ denote the in-neighbors and out-neighbors of $i$, respectively.
Let $d^-_i=|N^-_D(i)|$ denote the indegree of the vertex $i$ and $d^+_i=|N^+_D(i)|$ denote the outdegree of the vertex $i$ in $D$.
If $d^+_1=d^+_2=\cdots=d^+_n$, then $D$ is a regular digraph. Let $t^+_i=\sum\limits_{j\in N^+_D(i)} d^+_j$ be the 2--outdegree of the vertex $i$, $m^+_i=\frac{t^+_i}{d^+_i}$ be the average 2--outdegree of the vertex $i$.
Let $\overrightarrow{P_n}$ and $\overrightarrow{C_n}$ denote the directed path and the directed cycle on $n$ vertices, respectively. Let $\overset{\longleftrightarrow}{K_n}$ denote the complete digraph on $n$ vertices in which two arbitrary vertices $i, j\in V(\overset{\longleftrightarrow}{K_n})$, there are arcs $(i, j), (j, i)\in E(\overset{\longleftrightarrow}{K_n})$.
Let $F$ be a subdigraph of $D$. If $D[V(F)]$ is a complete subdigraph of $D$, then $F$ is called a clique of $D$.
The clique number of a digraph $D$, denoted by $\omega(D)$, is the maximum value of the numbers of the vertices of the cliques in $D$.
The girth of $D$ is the length of the shortest directed cycle of $D$.
The vertex connectivity of $D$, denoted by $\kappa(D)$, is the minimum number of vertices whose removal destroys the strong connectivity of $D$.
Let $M$ be an $n\times n$ nonnegative matrix, $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $M$.
It is obvious that the eigenvalues can be complex numbers since $M$ is not symmetric in general.
We usually assume that $|\lambda_1|\geq |\lambda_2|\geq\ldots \geq |\lambda_n|$.
The spectral radius of $M$ is defined as $\rho(M)=|\lambda_1|$, i.e., it is the largest modulus
of the eigenvalues of $M$. Since $M$ is a nonnegative matrix, it follows from the Perron--Frobenius Theorem that $\rho(M)$ is a real number.
For a digraph $D$, let $A(D)=(a_{ij})$ denote the adjacency matrix of $D$, where $a_{ij}$ is the number of arcs from $i$ to $j$.
The spectral radius of $A(D)$, denoted by $\rho(D)$, is called the spectral radius of $D$.
Let $diag(D)=diag(d^+_1, d^+_2, \ldots, d^+_n)$ be the diagonal matrix with the outdegrees of the vertices of $D$ and
$Q(D)=diag(D)+A(D)=(q_{ij})$ be the signless Laplacian matrix of $D$.
The spectral radius of $Q(D)$, $\rho(Q(D))$, denoted by $q(D)$, is called the signless Laplacian spectral radius of $D$.
Since $D$ is a strongly connected digraph, $A(D)$ and $Q(D)$ are nonnegative irreducible matrices. It follows from the Perron--Frobenius Theorem that
$\rho(D)$ and $\rho(Q(D))=q(D)$ are positive real numbers and there is a positive unit eigenvector corresponding to $\rho(D)$ and $q(D)$, respectively.
The spectral radius and the signless Laplacian spectral radius of undirected graphs are well treated in the literature,
see \cite{2009PIM, 2010LAA, 2013, 2010LAA1, 2013LAA, 2013LAA2, 2014LAA} and so on,
but there is not much known about digraphs.
Recently, R.A. Brualdi wrote a stimulating survey on the spectra of digraphs\cite{2010}. Furthermore,
some upper or lower bounds on the spectral radius or the signless Laplacian spectral radius of digraphs were obtained
by the outdegrees and the average 2-outdegrees \cite{2013ARS, 2013}.
Some extremal digraphs which attain the maximum or minimum spectral radius
and the distance spectral radius of digraphs with given parameters, such as given connectivity, given arc connectivity,
given dichromatic number, given clique number, given girth and so on, were characterized,
see e.g. \cite{2013DM, 2011LAA, 2013DAM, 2012DM, 2012DM2}.
In \cite{2013ARS}, S. Burcu Bozkurt and Durmus Bozkurt gave some sharp upper and lower bounds for the signless Laplacian spectral radius as follows:
\begin{equation}\label{eq11}
\min\{d^+_i+d^+_j:(i, j)\in E(D)\}\leq q(D) \leq\max\{d^+_i+d^+_j:(i, j)\in E(D)\},
\end{equation}
\begin{equation}\label{eq12}
\min\{d^+_i+m^+_i:i\in V(D)\}\leq q(D) \leq\max\{d^+_i+m^+_i:i\in V(D)\},
\end{equation}
\begin{equation}\label{eq13}
q(D)\leq\max\{\frac{d^+_i+d^+_j+\sqrt{(d^+_i-d^+_j)^2+4m^+_im^+_j}}{2}:(i, j)\in E(D)\},
\end{equation}
\begin{equation}\label{eq14}
q(D)\leq\max\{d^+_i+\sqrt{t_i^+}:i\in V(D)\}.
\end{equation}
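To fix ideas, the quantity $q(D)$ and the bounds (1.1)--(1.4) can be computed directly from the adjacency matrix; the following small sketch (ours, with an arbitrarily chosen strongly connected digraph) illustrates this.
\begin{verbatim}
# Our own numerical sketch: compute q(D) and the bounds (1.1)-(1.4) for a
# small strongly connected digraph given by its adjacency matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]])            # arcs (i, j) with A[i, j] = 1
n = A.shape[0]
dplus = A.sum(axis=1)                   # outdegrees d_i^+
Q = np.diag(dplus) + A                  # signless Laplacian Q(D)
q = max(abs(np.linalg.eigvals(Q)))      # q(D), real by Perron-Frobenius

t = A @ dplus                           # 2-outdegrees t_i^+
m = t / dplus                           # average 2-outdegrees m_i^+
arcs = [(i, j) for i in range(n) for j in range(n) if A[i, j]]

ub11 = max(dplus[i] + dplus[j] for i, j in arcs)                    # (1.1)
ub12 = max(dplus[i] + m[i] for i in range(n))                       # (1.2)
ub13 = max((dplus[i] + dplus[j]
            + np.sqrt((dplus[i] - dplus[j])**2 + 4 * m[i] * m[j])) / 2
           for i, j in arcs)                                        # (1.3)
ub14 = max(dplus[i] + np.sqrt(t[i]) for i in range(n))              # (1.4)
print(q, ub11, ub12, ub13, ub14)        # q(D) lies below each upper bound
\end{verbatim}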
The paper is organized as follows: we give a new sharp upper bound on the signless Laplacian spectral radius in terms of the outdegree sequence among strongly connected digraphs and compare it with some known bounds in Section 2;
In Section 3,
we give some graph transformations on digraphs which are useful to the proofs of our main results;
In Section 4, we characterize the digraph which minimizes the signless Laplacian spectral radius with given clique number;
In Section 5, we characterize the digraph which minimizes the signless Laplacian spectral radius with given girth
and determine the unique digraph with the second, the third and the fourth minimum signless Laplacian spectral radius among all strongly connected digraphs;
In Section 6, we study the maximum signless Laplacian spectral radius with given vertex connectivity among all strongly connected digraphs,
describe the extremal digraph set and the value of the signless Laplacian spectral radius of such digraphs,
determine the unique digraph with the second maximum signless Laplacian spectral radius among all strongly connected digraphs,
and conjecture the digraph which maximizes the signless Laplacian spectral radius with given vertex connectivity.
In Section 7, we determine the unique digraph which achieves the second minimum (or maximum), the third minimum, the fourth minimum spectral radius among all strongly connected digraphs and answer the open problem proposed by Lin-Shu [H.Q. Lin, J.L. Shu, A note on the spectral characterization of strongly connected bicyclic digraphs, Linear Algebra Appl. 436 (2012) 2524--2530].
\section{Bounds on the signless Laplacian spectral radius of digraphs}
\hskip.6cm
In this section, we first give the maximal and minimal signless Laplacian spectral radius of digraphs. Then we give a new sharp upper bound on the signless Laplacian spectral radius among all simple digraphs and compare it with the upper bounds given in \cite{2013ARS} as inequalities (1.1)--(1.4).
The technique used in the proof is motivated by \cite{2013, 2013LAA2} and related works.
\begin{lem}\label{lem21}{\rm(\cite{1979})}
If $A$ is an $n\times n$ nonnegative matrix with the spectral radius $\rho(A)$ and row sums $r_1, r_2, \ldots, r_n$, then
$\min\limits_{1\leq i\leq n}r_i\leq \rho(A)\leq \max\limits_{1\leq i\leq n}r_i$. Moreover, if $A$ is irreducible, then one of the equalities holds if and only if the row sums of $A$ are all equal.
\end{lem}
By the definition of $Q(D)$, the $i$-th row sum of $Q(D)$ is $2d_i^+$. It then follows immediately from Lemma \ref{lem21} that $q(\overrightarrow{C_n})=2$ and $q(\overset{\longleftrightarrow}{K_n})=2n-2$.
\begin{defn}\label{defn22}{\rm(\cite{1979}, Chapter 2)}
Let $A=(a_{ij}), B=(b_{ij})$ be $n\times n$ matrices. If $a_{ij}\leq b_{ij}$ for all $i$ and $j$, then $A\leq B$.
If $A\leq B$ and $A\neq B$, then $A< B$. If $a_{ij} < b_{ij}$ for all $i$ and $j$, then $A \ll B$.
\end{defn}
\begin{lem}\label{lem23}{\rm(\cite{1979}, Chapter 2)}
Let $A, B$ be $n\times n$ matrices with the spectral radius $\rho(A)$ and $\rho(B)$. If $0\leq A\leq B$, then $\rho(A) \leq \rho(B)$.
Furthermore, if $0\leq A<B$ and $B$ is irreducible, then $\rho(A) < \rho(B)$.
\end{lem}
\begin{lem}\label{lem24}{\rm(\cite{1979}, Chapter 2; \cite{1988}, Chapter 1)}
Let $m< n$, $A, B$ be $n\times n$, $m\times m$ nonnegative matrices with the spectral radius $\rho(A)$ and $\rho(B)$, respectively.
If $B$ is a principal submatrix of $A$, then $\rho(B) \leq \rho(A)$.
Furthermore, if $A$ is irreducible, then $\rho(B) < \rho(A)$.
\end{lem}
By Lemmas \ref{lem23}--\ref{lem24} and the definitions of $Q(D)$ and $q(D)$, we have the following results in terms of digraphs.
\begin{cor}\label{cor25}
Let $D$ be a digraph and $H$ be a subdigraph of $D$. Then $q(H)\leq q(D)$. If $D$ is strongly connected, and $H$ is a proper subdigraph of $D$, then $q(H)<q(D)$.
\end{cor}
From Lemma \ref{lem21} and Corollary \ref{cor25}, we easily get the following results.
\begin{cor}\label{cor26}
Let $D$ be a strongly connected digraph. Then $2\leq q(D)\leq 2(n-1)$,
$q(D)=2n-2$ if and only if $D\cong \overset{\longleftrightarrow}{K_n}$,
and $q(D)= 2$ if and only if $D\cong \overrightarrow{C_n}$.
\end{cor}
\begin{theo}\label{theo27}
Let $D=(V(D),E(D))$ be a simple digraph on $n\geq 2$ vertices with $V(D)=\{1,2,\ldots, n\}$, outdegree sequence $d^+_1, d^+_2, \ldots, d^+_n$, where $d^+_1\geq d^+_2\geq\cdots\geq d^+_n$.
Let $\phi_1=2d_1^+$ and for $2\leq l\leq n$,
\begin{equation}\label{eq21}
\phi_l=\frac{d^+_1+2d^+_l-1+\sqrt{(2d^+_l-d^+_1+1)^2+8\sum\limits_{i=1}^{l-1}(d^+_i-d^+_l)}}{2}.
\end{equation}
\noindent and $\phi_s=\min\limits_{1\leq l\leq n}\{\phi_l\}$ for some $s\in \{1,2, \ldots, n\}$. Then $q(D)\leq \phi_s.$
Furthermore, if $D$ is a strongly connected digraph, then $q(D)=\phi_s$ if and only if $D$ is regular or
there exists an integer $t$ with $2\leq t\leq s$ such that $d^+_1=\cdots=d^+_{t-1}>d^+_t=\cdots=d^+_n$ and the indegrees $d^-_1=\cdots=d^-_{t-1}=n-1$.
\end{theo}
\begin{proof}
Firstly, we show $q(D)\leq \phi_l$ for all $1\leq l\leq n$.
{\bf Case 1: } $l=1$.
It is obvious that $q(D)\leq \phi_1=2d_1^+$ by Lemma \ref{lem21} and the definition of $Q(D)$.
{\bf Case 2: } $2\leq l\leq n$.
By (\ref{eq21}), it is obvious that $\phi_l\geq d_1^+-1$, and $(\phi_l-2d^+_l)(\phi_l-d^+_1+1)=2\sum\limits_{i=1}^{l-1}(d^+_i-d^+_l)$.
Let $U=diag(x_1, \ldots, x_{l-1}, 1, \ldots,1)$ be an $n\times n$ diagonal matrix,
where $x_i=1+\frac{2(d^+_i-d^+_l)}{\phi_l-d^+_1+1}$ for $i\in\{1, 2, \ldots, l-1\}$.
Then
$x_1\geq x_2\geq \dots\geq x_{l-1}\geq 1$, $\sum\limits_{k=1}^{l-1}(x_k-1)=\phi_l-2d_l^+$,
and $U^{-1}=diag(x^{-1}_1, \ldots, x^{-1}_{l-1}, 1, \ldots,1)$.
Let $Q(D)=(q_{ij})_{n\times n}=diag(d_1^+, \ldots, d_n^+)+A(D)$ be the signless Laplacian matrix of $D$ and $B=U^{-1}Q(D)U$.
Obviously, $B$ and $Q(D)$ have the same eigenvalues, thus $q(D)=\rho(B)$. Let $r_i(B)$ $(i=1, 2, \ldots, n)$ be the row sums of $B$.
Now we show $r_i(B)\leq \phi_l$ for any $1\leq i\leq n$.
{\bf Subcase 2.1: } $1\leq i\leq l-1$.
\hskip.6cm $r_i(B)=\sum\limits_{k=1}^{l-1}\frac{x_k}{x_i}q_{ik}+\sum\limits_{k=l}^{n}\frac{1}{x_i}q_{ik}$
\hskip1.65cm$=\frac{1}{x_i}\sum\limits_{k=1}^{l-1}x_kq_{ik}+\frac{1}{x_i}\sum\limits_{k=l}^{n}q_{ik}$
\hskip1.65cm$=\frac{2d_i^+}{x_i}+\frac{1}{x_i}\sum\limits_{k=1}^{l-1}(x_k-1)q_{ik}$
\hskip1.65cm $=\frac{2d_i^+}{x_i}+\frac{1}{x_i}\left((x_i-1)q_{ii}+\sum\limits_{k=1,k\neq i}^{l-1}(x_k-1)q_{ik}\right)$
\hskip1.65cm $\leq \frac{2d_i^+}{x_i}+\frac{1}{x_i}\left(d^+_1(x_i-1)+\sum\limits_{k=1,k\neq i}^{l-1}(x_k-1)\right)$
\hskip1.65cm $=\frac{1}{x_i}\left(2d_i^++(d^+_1-1)(x_i-1)+\sum\limits_{k=1}^{l-1}(x_k-1)\right)$
\hskip1.65cm $=\frac{1}{x_i}\left(2d^+_i+(d^+_1-1)\frac{2(d^+_i-d^+_l)}{\phi_l-d^+_1+1}+\frac{2\sum\limits_{k=1}^{l-1}(d^+_k-d^+_l)}{\phi_l-d^+_1+1}\right)$
\hskip1.65cm $=\frac{1}{x_i}\left(2d^+_i+(d^+_1-1)\frac{2(d^+_i-d^+_l)}{\phi_l-d^+_1+1}+\frac{(\phi_l-2d^+_l)(\phi_l-d^+_1+1)}{\phi_l-d^+_1+1}\right)$
\hskip1.65cm $=\phi_l$,
\noindent with equality if and only if (1) and (2) hold: (1) $x_i=1$ or $q_{ii}=d^+_1$ for $x_i>1$, (2) $x_k=1$ or $q_{ik}=1$ for $x_k>1$ if $1\leq k\leq l-1$ with $k\neq i$.
{\bf Subcase 2.2: } $l\leq i\leq n$.
\hskip.6cm
$r_i(B)=\sum\limits_{k=1}^{l-1}x_kq_{ik}+\sum\limits_{k=l}^{n}q_{ik}$
$=2d^+_i+\sum\limits_{k=1}^{l-1}(x_k-1)q_{ik}$
$\leq 2d^+_l+\sum\limits_{k=1}^{l-1}(x_k-1)$$=\phi_l$,
\noindent with equality if and only if (3) and (4) hold: (3) $d^+_i=d^+_l$, (4) $x_k=1$ or $q_{ik}=1$ for $x_k>1$
if $1\leq k\leq l-1$.
Hence by Lemma \ref{lem21}, $q(D)=\rho(B)\leq \max\limits_{1\leq i\leq n}\{r_i(B)\}\leq \phi_l$ for any $l\in \{2,3,\ldots, n\}$.
Thus $q(D)=\rho(B)\leq \max\limits_{1\leq i\leq n}\{r_i(B)\}\leq \min\limits_{2\leq l\leq n}\{\phi_l\}$.
\vskip.1cm
Combining the above two cases, $q(D)\leq \min\limits_{1\leq l\leq n}\{\phi_l\}$.
Let $D$ be a strongly connected digraph, and $\phi_s=\min\limits_{1\leq l\leq n}\{\phi_l\}$ for some $s\in\{1, 2, \ldots, n\}$.
{\bf Case 1: } $s=1$.
It is obvious that $\rho(Q(D))=q(D)=\phi_1=2d_1^+$ if and only if $D$ is regular by Lemma \ref{lem21} and the fact $d^+_1\geq d^+_2\geq\cdots\geq d^+_n$.
{\bf Case 2: } $2\leq s\leq n$.
Clearly, $Q(D)$ and $B$ are irreducible nonnegative matrices because $D$ is a strongly connected digraph.
Then $q(D)=\phi_s$ if and only if $\phi_1\geq \phi_s$, $\rho(B)=\max\limits_{1\leq i\leq n}\{r_i(B)\}$ and
$\max\limits_{1\leq i\leq n}\{r_i(B)\}=\phi_s$.
Note that $\rho(B)=\max\limits_{1\leq i\leq n}\{r_i(B)\}$ if and only if the row sums of $B$, $r_1(B), \ldots, r_n(B)$ are all equal by Lemma \ref{lem21},
we have $q(D)=\phi_s$ if and only if $\phi_1\geq \phi_s$ and $r_1(B)=\cdots=r_n(B)=\phi_s.$
Note that $r_1(B)=\cdots=r_n(B)=\phi_s$ if and only if $B$ satisfies the following four conditions:
(a) $x_i=1$ or $q_{ii}(=d_i^+)=d^+_1$ for $x_i>1$ holds for all $1\leq i\leq s-1$;
(b) $x_k=1$ or $q_{ik}=1$ for $x_k>1$ if $1\leq k\leq s-1$ with $k\neq i$ holds for all $1\leq i\leq s-1$,
(c) $d_s^+=d_{s+1}^+=\cdots=d_n^+$;
(d) $x_k=1$ or $q_{ik}=1$ for $x_k>1$ if $1\leq k\leq s-1$ holds for all $s\leq i\leq n$.
Thus we only need to show (a)--(d) hold if and only if $D$ is regular or
there exists an integer $t$ with $2\leq t\leq s$ such that $d^+_1=\cdots=d^+_{t-1}>d^+_t=\cdots=d^+_n$, and $d^-_1=\cdots=d^-_{t-1}=n-1.$
If (a)-(d) hold, we consider the following cases.
{\bf Subcase 2.1: } $x_1=1$.
Then $x_1=x_2=\cdots=x_{s-1}=1$ by $x_1\geq x_2\geq \cdots\geq x_{s-1}\geq 1$, and thus $d_1^+=d_{2}^+=\cdots=d_{s-1}^+=d_s^+$. Together with (c), this implies that $D$ is a regular digraph.
{\bf Subcase 2.2: } $x_1\geq \cdots\geq x_{t-1}>1$ and $x_t=\cdots=x_{s-1}=1$ for some $t\in \{2,\ldots, s\}.$
Then $q_{ii}=d_i^+=d_1^+$ for $1\leq i\leq t-1$ by (a) and $d_{t}^+=\cdots=d_{s-1}^+=d_s^+=\cdots=d_n^+$ by (c).
Thus $d_1^+=\cdots=d_{t-1}^+> d_{t}^+=\cdots=d_n^+$.
By (b) and (d), $q_{ik}=1$ (that is, $(i, k)\in E(D)$) for all $i\in\{1, 2, \ldots, n\}$ and all $k\in \{1, 2, \ldots, t-1\}\backslash\{i\}$, which implies $d^-_1=\cdots=d^-_{t-1}=n-1$.
Conversely, if $D$ is a regular digraph, then $d_1^+=d_{2}^+=\cdots=d_n^+$ and $\phi_1=\cdots=\phi_n=2d_1^+$, and the result follows.
If there exists some $t$ with $2\leq t\leq s$ such that $d_1^+=\cdots=d_{t-1}^+> d_{t}^+=\cdots=d_n^+$,
and $d^-_1=\cdots=d^-_{t-1}=n-1$,
then $x_1\geq \cdots\geq x_{t-1}>1=x_t=\cdots=x_{s-1}$,
and $(i,k)\in E(D)$ for all $i\in\{1, 2, \ldots, n\}$ and all $k\in \{1, 2, \ldots, t-1\}\backslash\{i\}$,
thus (a), (b), (c), and (d) hold.
Therefore, $r_i(B)=\phi_s$ for all $i\in\{1, 2, \ldots, n\}$
and thus by Lemma \ref{lem21}, $q(D)=\rho(B)= \max\limits_{1\leq i\leq n}r_i(B)=\phi_s$.
\end{proof}
\begin{exam}\label{exam28}
For the digraph $\overset{\longleftrightarrow}{K_n}-(u, v)$ where $u,v\in V(\overset{\longleftrightarrow}{K_n})$, by direct calculation we see that
the bound of (\ref{eq21}) is better than the bounds of (\ref{eq11})--(\ref{eq14}) in \cite{2013ARS} (see Table 1) because
$$\frac{3n-6+\sqrt{n^2+4n-4}}{2}<\min\{2n-2, \frac{2n^2-4n+1}{n-1}, n-1+\sqrt{n(n-2)} \}.$$
\end{exam}
\begin{exam}\label{exam29}
Let $D_1$ be as shown in Fig.~1. For $D_1$, the outdegree sequence is $3=d^+_1>d^+_2=d^+_3=\cdots=d^+_n=2$ and the indegree $d^-_1=n-1$, so $q(D_1)=3+\sqrt{3}$ by Theorem \ref{theo27}. We can see from Table 1 that the bound of (\ref{eq21}) is better than the bounds of (\ref{eq11})--(\ref{eq14}) in \cite{2013ARS} because
$q(D_1)=3+\sqrt{3}<\min\{5, \frac{5}{2}+\frac{\sqrt{21}}{2}, 3+\sqrt{6}\}.$
\end{exam}
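The bound (\ref{eq21}) depends only on the outdegree sequence, so it can be checked mechanically. The following minimal sketch (assuming Python with the standard \texttt{math} module; $n=6$ is chosen here for concreteness) recovers the value $3+\sqrt{3}$ of Example \ref{exam29}:
\begin{verbatim}
import math

def phi_bound(outdeg):
    # min over l of phi_l in (2.1), for a nonincreasing outdegree sequence
    d = sorted(outdeg, reverse=True)
    best = 2 * d[0]                                   # phi_1
    for l in range(2, len(d) + 1):
        d1, dl = d[0], d[l - 1]
        s = sum(d[i] - dl for i in range(l - 1))
        best = min(best, (d1 + 2 * dl - 1
                          + math.sqrt((2 * dl - d1 + 1) ** 2 + 8 * s)) / 2)
    return best

print(phi_bound([3, 2, 2, 2, 2, 2]), 3 + math.sqrt(3))   # both are about 4.7320508
\end{verbatim}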
However, the bound of (\ref{eq21}) is not better than the bounds of (\ref{eq12})--(\ref{eq13}) in \cite{2013ARS} for every digraph.
For example, let $D_2$ and $D_3$ be as shown in Fig.~1. For the digraph $D_3$, the upper bounds of (\ref{eq12})--(\ref{eq13}) and (\ref{eq21}) are all equal to 3,
while for the digraph $D_2$, the upper bounds of (\ref{eq12})--(\ref{eq13}) are less than the upper bound of (\ref{eq21}).
\vskip0.25cm
$
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS (3.01, .9) *@{*}*+!R{u_n}="n";
\POS (4.3, .9) *@{*}*+!U{u_1}="a";
\POS(5.0,.9) *@{*}*+!L{u_2}="b";
\POS(4.8,1.6) *@{*}*+!L{u_3}="c";
\POS(4.4,1.9) *@{*}*+!D{u_4}="d";
\POS(4,2) *@{*}*+!D{u_5}="e";
\POS(3.5,1.86) \ar@{->}(3.5,1.86);(3.6,1.91);
\POS(3.3,1.72) *@{*}*+!R{u_6}="f";
\POS(3.4,1.3) *@{}*+!L{\vdots};
\POS(2.8,1.4) *@{}*+!L{\vdots};
\POS "b" \ar@{-} "n"; \POS "a" \ar@{-} "c"; \POS "a" \ar@{-} "d";
\POS "e" \ar@{->} (4.15, 1.45); "e"; \POS (4.15, 1.45) \ar@{-} "a";
\POS "f" \ar@{->} (3.8, 1.3); "f"; \POS (3.8, 1.3) \ar@{-} "a";
\POS(3.8,.9) \ar@{->}(3.8,.9);(3.76,.9);
\POS(4.99,1.1) \ar@{->}(4.98,1.25);(4.99,1.1);
\POS(3,1.1) \ar@{->}(3,1.1);(3.01,1.15);
\POS(4.5,1.85) \ar@{->}(4.5,1.85);(4.6,1.8);
\POS(4.3,.06) \ar@{->}(4.3,.06);(4.24,.038);
\POS(4.23,1.98) \ar@{->}(4.23,1.98);(4.27,1.965);
\endxy
\hskip0.5cm
\xy 0;/r3pc/: \POS (3.5,1) *\xycircle<3pc,3pc>{};
\POS(3.2,1.3) *@{}*+!D{\overset{\longleftrightarrow}{K_d}};
\POS(3.5,0.3) *@{}*+!D{\overrightarrow{P_{n-d+2}}};
\POS(3.7,-0.2) *@{}*+!D{\ldots};
\POS(4.3,1.6) *@{*}*+!L{\hspace*{3pt}{\hspace*{-3pt}\hspace*{-3pt}}}="c";
\POS(5.4,1.6) *@{}*+!R{u_{n-d+2}};
\POS(2.95,.16) *@{*}*+!R{u_2}="d";
\POS(2.7,.4) \ar@{->}(2.7,.4) ;(2.69,.415) ="e";
\POS(4.3,.4) \ar@{->}(4.3,.4);(4.29,.385)="f";
\POS (2.515, .9) *@{*}*+!R{u_1}="g";
\POS "c" \ar @{-} "g";
\POS(4.5,.9) *@{*}*+!L{\hspace*{3pt}{\hspace*{-3pt}\hspace*{-3pt}}}="h";
\POS(5.6,.9) *@{}*+!R{u_{n-d+1}};
\POS(4.49,1.1) \ar@{->}(4.48,1.25);(4.49,1.1)="i";
\endxy
\hskip0.5cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(4.8,1.6) *@{*}*+!L{u_2}="c";
\POS(4.75,.3) *@{*}*+!L{u_n}="d";
\POS (3.015, .9) *@{*}*+!R{u_{n-g+2}}="g";
\POS(5.0,.9) *@{*}*+!L{u_1}="h";
\POS(4.99,1.1) \ar@{->}(4.98,1.25);(4.99,1.1)="i";
\POS(3.5,1.86) \ar@{->}(3.5,1.86);(3.6,1.91)="b";
\POS(3.5,.145) \ar@{->}(3.5,.145);(3.49,.151) ="e";
\POS(3.2,0.15) *@{}*+!L{\ddots};
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585)="f";
\POS "h" \ar @{->} (3.8,.9) \ar @{-} "g";
\POS (4.1, .1) *@{}*+!D{\overrightarrow{C_g}};
\POS (4, 1.3) *@{}*+!D{\overrightarrow{P_{n-g+2}}};
\POS (4, 1.9) *@{}*+!D{\cdots};
\endxy
$
\hskip1.5cm $D_1$ \hskip3.8cm$D_2$ \hskip5.3cm$D_3$
\vskip0.1mm
\hskip5cm Fig.1. \hskip.1cm The digraphs $D_1, D_2$ and $D_3$.
\vskip.2cm
\vskip.3cm
\hskip-0.8cm
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\mbox{digraph} & $(1.1) $ & $(1.2)$ & $(1.3)$ & $(1.4)$ & $(2.1)$ \\ \hline
$\overset{\longleftrightarrow}{K_n}-(u, v)$ & $2n-2$ & $\frac{2n^2-4n+1}{n-1}$ & $\frac{2n^2-4n+1}{n-1}$ & $n-1+\sqrt{n(n-2)}$ & $\frac{3n-6+\sqrt{n^2+4n-4}}{2}$ \\ \hline
$D_1$ & $5$ & $5$ & $\frac{5}{2}+\frac{\sqrt{21}}{2}$ & $3+\sqrt{6}$ & $3+\sqrt{3}$ \\ \hline
$D_2$ & $2d-1$ & $2d-2+\frac{2}{d}$ & $\frac{2d-1+\sqrt{1+\frac{4(d^2-2d+2)^2}{d(d-1)}}}{2}$ & $d+\sqrt{d^2-2d+2}$ & $\frac{3d-3+\sqrt{(d-1)^2+8}}{2}$ \\ \hline
$D_3$ & $3$ & $3$ & $3$ & $2+\sqrt{3}$ & $3$ \\ \hline
\end{tabular}
\vskip0.1mm
\hskip1.5cm Table 1. \hskip.1cm The upper bounds for digraphs $\overset{\longleftrightarrow}{K_n}-(u, v)$, $D_1, D_2$ and $D_3$.
\section{Some graph transformations on digraphs}
\hskip.6cm
In this section, we present some graph transformations on digraphs which are useful for the proof of the main results.
\begin{lem}\label{lem31}{\rm(\cite{1979}, Chapter 2)}
Let $A$ be an $n\times n$ nonnegative matrix with the spectral radius $\rho(A)$, $x=(x_1, x_2, \ldots, x_n)^T>0$ be any positive vector.
If $\alpha\geq 0$ and $\alpha x\leq Ax$, then $\alpha\leq \rho(A)$.
Furthermore, if $A$ is irreducible and $\alpha x< Ax$, then $\alpha< \rho(A)$.
\end{lem}
In the rest of this section, let $x=(x_1, x_2, \ldots, x_n)^T$ be the unique positive unit eigenvector corresponding to $q(D)$,
while $x_i$ corresponds to the vertex $i$.
\begin{theo}\label{theo32}
Let $D=(V(D), E(D))$ be a simple digraph on $n$ vertices, $u, v, w\in V(D)$, and $(u, v) \in E(D)$.
Let $H=D-\{(u, v) \}+\{(u, w) \}$ (Note that if $(u, w)\in E(D)$, then $H$ has multiple arc $(u, w)$.)
If $x_w\geq x_v$, then $q(H)\geq q(D)$. Furthermore, if $H$ is strongly connected and $x_w>x_v$, then $q(H)>q(D)$.
\end{theo}
\begin{proof}
Now we show $(Q(H)x)_s\geq (Q(D)x)_s$ for any $s\in V(D)=V(H)$.
When $s\neq u$, then $(Q(H)x)_s=\sum\limits_{t=1}^{n}q_{st}x_t=(Q(D)x)_s=q(D)x_s$ where $Q(D)=(q_{ij})$;
when $s= u$, then $(Q(H)x)_s-(Q(D)x)_s=x_w-x_v\geq 0.$
Thus $Q(H)x\geq Q(D)x=q(D)x$. By Lemma \ref{lem31}, $\rho(Q(H))=q(H)\geq q(D)$.
Similarly, if $H$ is strongly connected and $x_w>x_v$, then $q(H)>q(D)$ by Lemma \ref{lem31} immediately.
\end{proof}
\begin{lem}\label{lem33}{\rm(\cite{1979}, Chapter 2; \cite{1988}, Chapter 1)}
Let $A$ be an $n\times n$ nonnegative matrix with the spectral radius $\rho(A)$.
Then $A$ is reducible if and only if $\rho(A)$ is the spectral radius of some proper principal submatrix of $A$.
\end{lem}
The following result follows from Lemma \ref{lem33} in terms of digraph.
\begin{cor}\label{cor34}
Let $D$ be a digraph and $D_1, D_2, \ldots, D_s$ be the strongly connected components of
$D$. Then $q(D)=\max\{q(D_1), q(D_2), \ldots, q(D_s)\}$.
\end{cor}
\begin{lem}\label{lem35}
Let $D$ $(\neq \overrightarrow{C_n})$ be a strongly connected digraph with $V(D)=\{u_1, u_2, \ldots, u_n\}$,
$\overrightarrow{P}=u_1u_2\cdots u_l$ $(l\geq3)$ be a directed path of $D$ with $d^+_{u_i}=1$ $(i=2, 3, \ldots, l-1)$.
Then we have $x_2<x_3<\cdots<x_{l-1}<x_l$.
\end{lem}
\begin{proof}
Since $D$ is a strongly connected digraph and $D\neq \overrightarrow{C_n}$, $D$ contains a directed cycle, denoted by
$\overrightarrow{C_g}$ $(g\geq 2)$, as a proper subdigraph of $D$.
Thus $q(D)>q(\overrightarrow{C_g})=2$ by Corollary \ref{cor25}.
Therefore, for any $i\in \{2,3,\ldots, l-1\}$, we have
\hskip2cm $2x_i<q(D)x_i=(Q(D)x)_i=d^+_{u_i}x_i+x_{i+1}=x_i+x_{i+1}.$
\noindent Then $x_i<x_{i+1}$ and thus $x_2<x_3<\cdots<x_{l-1}<x_l$.
\end{proof}
Let $D_{uv}$ denote the simple digraph obtained from $D$ by deleting arc $(u, v)$, identifying $u$ with $v$ of $D$ and deleting the multiple arcs.
\begin{theo}\label{theo36}
Let $D$ $(\neq \overrightarrow{C_n})$ be a strongly connected digraph with $V(D)=\{u_1, u_2, \ldots, u_n\}$,
and $\overrightarrow{P}=u_1u_2\cdots u_l$ $(l\geq3)$ be a directed path of $D$ with $d^+_{u_i}=d^-_{u_i}=1$ $(i=2, 3, \ldots, l-1)$.
Then for any $i\in \{2, 3, \ldots, l-1\}$, $q(D_{u_{i-1}u_i})\geq q(D)$.
\end{theo}
\begin{proof}
For any $i\in \{2, 3, \ldots, l-1\}$, let $H=D-\{(u_{i-1}, u_i)\}+\{(u_{i-1}, u_{i+1})\}$.
Then, since $d^-_{u_i}=1$, $H$ has exactly two strongly connected components: the isolated vertex $u_i$ and $D_{u_{i-1}u_i}$;
thus $q(H)=q(D_{u_{i-1}u_i})$ by Corollary \ref{cor34}.
On the other hand, for any $i\in \{2, 3, \ldots, l-1\}$, by Lemma \ref{lem35} and $d^+_{u_i}=1$, we have $x_{i+1}>x_i$. Then $q(H) \geq q(D)$ by Theorem \ref{theo32},
and thus $q(D_{u_{i-1}u_i})=q(H)\geq q(D)$.
\end{proof}
Let $D=(V(D), E(D))$ be a digraph with $(u, v)\in E(D)$ and $w\notin V(D)$,
$D^w=(V(D^w), E(D^w))$ with $V(D^w)=V(D)\cup \{w\}$, $E(D^w)=E(D)-\{(u, v)\}+\{(u, w), (w, v)\}$.
Then the following result follows from Theorem \ref{theo36}.
\begin{cor}\label{cor37}
Let $D$ $(\neq \overrightarrow{C_n})$ be a strongly connected digraph, $w\notin V(D)$, and $D^w$ defined as before. Then $q(D)\geq q(D^w)$.
\end{cor}
\begin{proof}
Clearly $D=D^w_{uw}$, $D^w$ $(\neq \overrightarrow{C_n})$ is a strongly connected digraph, $\overrightarrow{P}=uwv$ is a directed path of $D^w$, and
the outdegree and the indegree of $w$ in $D^w$ are both equal to 1; hence $q(D)=q(D^w_{uw})\geq q(D^w)$ by Theorem \ref{theo36}.
\end{proof}
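As a small numerical sanity check of Corollary \ref{cor37} (a sketch only, assuming Python with \texttt{numpy}; the example digraph is chosen for illustration), subdividing one arc of $\overset{\longleftrightarrow}{K_3}$ by a new vertex $w$ does not increase the signless Laplacian spectral radius:
\begin{verbatim}
import numpy as np

def q(A):
    A = np.asarray(A, dtype=float)
    return max(np.linalg.eigvals(np.diag(A.sum(axis=1)) + A).real)

D = np.ones((3, 3)) - np.eye(3)      # complete digraph on 3 vertices
Dw = np.zeros((4, 4))                # D^w: subdivide the arc (0, 1) by w = vertex 3
Dw[:3, :3] = D
Dw[0, 1] = 0                         # delete (0, 1)
Dw[0, 3] = Dw[3, 1] = 1              # add (0, w) and (w, 1)
print(q(D) >= q(Dw))                 # True, as guaranteed by the corollary above
\end{verbatim}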
\section{The minimum signless Laplacian spectral radius of digraphs with given clique number}
\hskip.6cm
Let $\mathcal{C}_{n,d}$ denote the set of strongly connected digraphs on $n$ vertices with clique number $\omega(D)=d$. As we know, if $d=n$, then $\mathcal{C}_{n,n}=\{\overset{\longleftrightarrow}{K_n}\}$ and $q(\overset{\longleftrightarrow}{K_n})=2n-2$. If $d=1$, then $\overrightarrow{C_n}\in\mathcal{C}_{n,1}$ and $q(\overrightarrow{C_n})=2$. By Corollary \ref{cor26}, for any $D\in \mathcal{C}_{n,1}$, $q(D)\geq q(\overrightarrow{C_n})=2$ with equality if and only if $D\cong \overrightarrow{C_n}$. Thus we only discuss the cases $2\leq d\leq n-1$.
Let $2\leq d\leq n-1$, $B_{n,d}=(V(B_{n,d}), E(B_{n,d}))$ be a digraph obtained by adding a directed
\vskip.1cm
\noindent path $\overrightarrow{P_{n-d+2}}=u_1u_2\cdots u_{n-d+2}$ to a clique $\overset{\longleftrightarrow}{K_d}$ such that $V(\overset{\longleftrightarrow}{K_d})\cap V(\overrightarrow{P_{n-d+2}})=\{u_1, u_{n-d+2}\}$ (as shown in Fig.2), where $V(B_{n,d})=\{u_1, u_2, \ldots, u_n\}$. Clearly, $B_{n,d}\in \mathcal{C}_{n,d}$. In this section, we will show that $B_{n, d}$ is the unique digraph with the minimum signless Laplacian spectral radius among all digraphs in $\mathcal{C}_{n,d}$ where $2\leq d\leq n-1$.
\vskip.2cm
$
\hskip3cm
\xy 0;/r3pc/: \POS (3.5,1) *\xycircle<3pc,3pc>{};
\POS(3.2,1.3) *@{}*+!D{\overset{\longleftrightarrow}{K_d}};
\POS(3.5,0.4) *@{}*+!D{\overrightarrow{P_{n-d+2}}};
\POS(4.3,1.6) *@{*}*+!L{\hspace*{3pt}{\hspace*{-3pt}\hspace*{-3pt}}}="c";
\POS(5.4,1.6) *@{}*+!R{u_{n-d+2}};
\POS(2.95,.16) *@{*}*+!R{u_2}="d";
\POS(2.7,.4) \ar@{->}(2.7,.4) ;(2.69,.415) ="e";
\POS(4.3,.4) \ar@{->}(4.3,.4);(4.29,.385)="f";
\POS (2.515, .9) *@{*}*+!R{u_1}="g";
\POS "c" \ar @{-} "g";
\POS(4.5,.9) *@{*}*+!L{\hspace*{3pt}{\hspace*{-3pt}\hspace*{-3pt}}}="h";
\POS(5.6,.9) *@{}*+!R{u_{n-d+1}};
\POS(4.49,1.1) \ar@{->}(4.48,1.25);(4.49,1.1)="i";
\POS(3.7,-0.2) *@{}*+!D{\ldots};
\endxy
\hskip1cm
\xy 0;/r3pc/: \POS (1,1) *\xycircle<2pc,2pc>{}; \POS (2.33,1)*\xycircle<2pc,2pc>{}
\POS(1.665,1) *@{*}*+!R{u_1}="n";
\POS(1.28,0.38) *@{*}*+!U{u_{n-d+2}}="k";
\POS(0.8,0.8) *@{}*+!D{\overset{\longleftrightarrow}{K_d}};
\POS(2.1,1.6) *@{*}*+!D{u_{n-d+1}}="d";
\POS(1.8,.58) \ar@{->}(1.8,.58) ;(1.79,.6) ="e";
\POS (2.33, .8) *@{}*+!D{\overrightarrow{C_{n-d+1}}};
\POS (3.1, .8) *@{}*+!D{\vdots};
\POS(2.0,.4) *@{*}*+!U{u_2}="h";
\POS(2.98,0.9) \ar@{->}(2.98,1.15);(2.99,1)="j";
\POS(1.78,1.37) \ar@{->}(1.78,1.37);(1.79,1.385)="i";
\endxy
$
\vskip0.00001mm
\hskip4.5cm $B_{n,d}$ \hskip4.3cm $B'_{n,d}$
\vskip0.1mm
\hskip4.5cm Fig.2. \hskip.1cm The digraphs $B_{n,d}$ and $B'_{n,d}$.
\begin{lem}\label{lem41}
Let $B'_{n,d}=B_{n,d}-\{(u_{n-d+1},u_{n-d+2})\}+\{(u_{n-d+1},u_1)\}$ (as shown in Fig.2). Then $q(B'_{n,d})>q(B_{n,d})$.
\end{lem}
\begin{proof}
Clearly, $B'_{n,d}$ is strongly connected. Let $x=(x_1, x_2, \ldots, x_n)^T$ be the unique positive unit eigenvector corresponding to $q=q(B_{n,d})$, while $x_i$ corresponds to the vertex $u_i$. By Theorem \ref{theo32}, we only need to show $x_1>x_{n-d+2}$.
Since $\overset{\longleftrightarrow}{K_d}$ is a proper subdigraph of $B_{n,d}$,
then $q=q(B_{n,d})>q(\overset{\longleftrightarrow}{K_d})=2d-2$ by Corollary \ref{cor25}.
Let $V_1=V(\overset{\longleftrightarrow}{K_d})\backslash\{u_1, u_{n-d+2}\}$. We have
$(Q(B_{n,d})x)_1=qx_1=dx_1+\sum\limits_{v\in V_1}x_v+x_2+x_{n-d+2}, $
\noindent and
$(Q(B_{n,d})x)_{n-d+2}=qx_{n-d+2}=(d-1)x_{n-d+2}+\sum\limits_{v\in V_1}x_v+x_1$.
Then $(q-d+1)(x_1-x_{n-d+2})=x_2+x_{n-d+2}>0$. Thus $x_1>x_{n-d+2}$ by $q>2d-2$.
\end{proof}
\begin{theo}\label{theo42}
Let $2\leq d\leq n-1$ and $D\in\mathcal{C}_{n,d}$ be a digraph. Then $q(D)\geq q(B_{n,d})$, with equality if and only if $D\cong B_{n,d}$.
\end{theo}
\begin{proof}
Clearly, $\overset{\longleftrightarrow}{K_d}$ is a proper subdigraph of $D$ because of $D\in \mathcal{C}_{n,d}$.
Since $D$ is strongly connected, we can delete vertices or arcs of $D$ so that the resulting digraph, denoted by $H$, satisfies
$H\cong B_{d+l-2, d}$ $(l\geq 3)$ or $H\cong B'_{d+l-1, d}$ $(l\geq 2)$.
By Corollary \ref{cor25}, $q(H)\leq q(D)$ with equality if and only if $H\cong D$.
{\bf Case 1: } $H\cong B_{d+l-2, d}$ $(l\geq 3)$.
Insert $n-d-l+2$ vertices into $\overrightarrow{P_l}$ such that the resulting digraph is $B_{n, d}$.
Then $q(B_{n,d})\leq q(H)$ by using Corollary \ref{cor37} $n-d-l+2$ times.
{\bf Case 2: } $H\cong B'_{d+l-1, d}$ $(l\geq 2)$.
Insert $n-d-l+1$ vertices into the directed cycle $\overrightarrow{C_l}$ such that the resulting digraph is $B'_{n, d}$.
Then $q(B'_{n,d})\leq q(H)$ by using Corollary \ref{cor37} $n-d-l+1$ times, and thus $q(B_{n,d})<q(B'_{n,d})\leq q(H)$ by Lemma \ref{lem41}.
Combining the above two cases, we have $q(D)\geq q(B_{n,d})$ with equality if and only if $D\cong B_{n,d}$.
\end{proof}
Now we estimate the signless Laplacian spectral radius of $B_{n,d}$ and show $2=q(\overrightarrow{C_n})<q(B_{n,2})<q(B_{n,3})<\cdots<q(B_{n,n-1})<q(\overset{\longleftrightarrow}{K_n})=2n-2$.
\begin{lem}\label{lem43}
Let $2\leq d\leq n-1$ and $B_{n,d}$ be defined as above. Then
$2d-2<q(B_{n,d})\leq\frac{3d-3+\sqrt{(d-1)^2+8}}{2}$.
\end{lem}
\begin{proof}
Clearly, $q(B_{n,d})>q(\overset{\longleftrightarrow}{K_d})=2d-2$ by Corollary \ref{cor25}, since $\overset{\longleftrightarrow}{K_d}$ is a proper subdigraph of $B_{n,d}$.
Let $x=(x_1, x_2, \ldots, x_n)^T$ be the unique positive unit eigenvector corresponding to $q(B_{n,d})$,
while $x_i$ corresponds to the vertex $u_i$. Then $x_{2}<x_{3}<\cdots<x_{n-d+2}$ by Lemma \ref{lem35}.
Let $H=B_{n, d}-\{(u_1, u_2)\}+\{(u_1, u_{n-d+2})\}$. Then $q(H)\geq q(B_{n, d})$ by Theorem \ref{theo32}.
It is easy to check that $H$ has $n-d+1$ strongly connected components: one is $H_1=\overset{\longleftrightarrow}{K_d}\cup\{(u_1, u_{n-d+2})\}$, which has the multiple arc $(u_1, u_{n-d+2})$, and the others are the isolated vertices $u_2, u_3, \ldots, u_{n-d+1}$.
Then $q(H)=q(H_1)$ by Corollary \ref{cor34}.
Let $q=q(H_1)$ and $y=(y_1, y_2, \ldots, y_d)^T$ be the unique positive unit eigenvector corresponding to $q$.
Then for any two vertices $u, v\in V(\overset{\longleftrightarrow}{K_d})\backslash\{u_1\}$,
let $V_1=V(\overset{\longleftrightarrow}{K_d})\backslash\{u, v\}$, we have
$(Q(H_1)y)_u=qy_u=(d-1)y_u+\sum\limits_{w\in V_1}y_w+y_v$,
$(Q(H_1)y)_v=qy_v=(d-1)y_v+\sum\limits_{w\in V_1}y_w+y_u$.
Thus $y_u=y_v$, which implies $y_s=y_u$ for any $s\in V(\overset{\longleftrightarrow}{K_d})\backslash\{u_1\}$ by the choice of $u$ and $v$. Then
\vskip.2cm
$\left\{
\begin{array}{c}
qy_u=(2d-3)y_u+y_{u_1}, \\[0.2cm]
qy_{u_1}=dy_{u_1}+dy_u.
\end{array}
\right.$
\vskip.2cm
Then we have $q^2-(3d-3)q+(2d^2-4d)=0$, and $q=\frac{3d-3+\sqrt{(d-1)^2+8}}{2}$ by $q=q(H_1)=q(H)\geq q(B_{n, d})>2d-2$.
Thus $q(B_{n,d})\leq\frac{3d-3+\sqrt{(d-1)^2+8}}{2}$.
\end{proof}
\begin{rem}
Since the outdegree sequence of $B_{n,d}$ is $d^+_1=d, d^+_2=d^+_3=\cdots=d^+_{d}=d-1$, $d^+_{d+1}=d^+_{d+2}=\cdots=d^+_{n}=1$, we can also prove that $q(B_{n,d})\leq\frac{3d-3+\sqrt{(d-1)^2+8}}{2}=\phi_2=\ldots=\phi_d$ by Theorem \ref{theo27}.
\end{rem}
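A direct numerical check of Lemma \ref{lem43} is also straightforward (a sketch only, assuming Python with \texttt{numpy}; the vertex labeling of $B_{n,d}$ below is an arbitrary convenient choice, with the clique placed on the first $d$ vertices):
\begin{verbatim}
import math
import numpy as np

def q(A):
    A = np.asarray(A, dtype=float)
    return max(np.linalg.eigvals(np.diag(A.sum(axis=1)) + A).real)

def B(n, d):
    # clique on vertices 0..d-1 (vertex 0 plays the role of u_1, vertex 1 of u_{n-d+2}),
    # plus the directed path u_1 -> u_2 -> ... -> u_{n-d+1} -> u_{n-d+2}
    A = np.zeros((n, n))
    A[:d, :d] = 1 - np.eye(d)
    path = [0] + list(range(d, n)) + [1]
    for a, b in zip(path, path[1:]):
        A[a, b] = 1
    return A

for n, d in [(6, 3), (8, 4), (10, 5)]:
    qb = q(B(n, d))
    upper = (3 * d - 3 + math.sqrt((d - 1) ** 2 + 8)) / 2
    print(2 * d - 2 < qb <= upper)   # True in all three cases
\end{verbatim}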
From Lemma \ref{lem43}, we immediately get the following corollary.
\begin{cor}\label{cor44}Let $n\geq 4$. Then
$2=q(\overrightarrow{C_n})<q(B_{n,2})<4<q(B_{n,3})<6<\cdots<2n-4<q(B_{n,n-1})<q(\overset{\longleftrightarrow}{K_n})=2n-2$.
\end{cor}
\begin{proof}
Since $2d-2<\frac{3d-3+\sqrt{(d-1)^2+8}}{2}< 2d$ for $d\geq 2$, Lemma \ref{lem43} gives $2d-2<q(B_{n,d})<2d$ for each $2\leq d\leq n-1$, and the claimed chain of inequalities follows.
\end{proof}
\section{The minimum signless Laplacian spectral radius of digraphs with given girth}
\hskip.6cm
Let $g\geq 2$ and $\mathcal{G}_{n, g}$ denote the set of strongly connected digraphs on $n$ vertices with girth $g$.
If $g=n$, then $\mathcal{G}_{n, n}=\{\overrightarrow{C_n}\}$ and $q(\overrightarrow{C_n})=2$.
Thus we only need to discuss the cases $2\leq g\leq n-1$.
Let $2\leq g\leq n-1$ and $C_{n, g}=(V(C_{n,g}), E(C_{n,g}))$ be a digraph obtained by adding a directed path $\overrightarrow{P_{n-g+2}}=u_gu_{g+1}\cdots u_{n}u_1$ on the directed cycle $\overrightarrow{C_g}=u_1u_2\cdots u_gu_1$ such that $V(\overrightarrow{C_g})\cap V(\overrightarrow{P_{n-g+2}})=\{u_1, u_{g}\}$ (as shown in Fig.3), where $V(C_{n,g})=\{u_1, u_2, \cdots, u_n\}$, and $E(C_{n, g})=\{(u_i,u_{i+1}), 1\leq i\leq n-1\}\cup\{(u_g, u_1), (u_n, u_1)\}$.
Clearly, $C_{n, g}\in \mathcal{G}_{n, g}$. In the rest of this section, we will show that $C_{n, g}$ achieves the minimum signless Laplacian spectral radius among all digraphs in $\mathcal{G}_{n, g}$. We also determine the digraphs which achieve the second, the third and the fourth minimum signless Laplacian spectral radius among all strongly connected digraphs.
\vskip0.25cm
$
\hskip3cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(3.2,1.6) *@{*}*+!L{u_{n}}="j";
\POS(4.8,1.6) *@{*}*+!L{u_{g+1}}="c";
\POS(4.75,.32) *@{*}*+!L{u_{g-1}}="d";
\POS (3.015, .9) *@{*}*+!R{u_{1}}="g";
\POS(5.0,.9) *@{*}*+!L{u_g}="h";
\POS(4.99,1.1) \ar@{->}(4.98,1.25);(4.99,1.1)="i";
\POS(3,1.1) \ar@{->}(3,1.1);(3.01,1.15);
\POS(3.5,1.86) \ar@{->}(3.5,1.86);(3.6,1.91)="b";
\POS(3.5,.145) \ar@{->}(3.5,.145) ;(3.49,.151) ="e";
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585)="f";
\POS "h" \ar @{->} (3.8,.9) \ar @{-} "g";
\POS (4.1, .1) *@{}*+!D{\overrightarrow{C_g}};
\POS (4, -0.25) *@{}*+!D{\cdots};
\POS (4.1, 1.1) *@{}*+!D{\overrightarrow{P_{n-g+2}}};
\POS (4, 1.9) *@{}*+!D{\cdots};
\endxy
\hskip0.2cm
\hskip1cm
\xy 0;/r3pc/: \POS (1,1) *\xycircle<2pc,2pc>{}; \POS (2.33,1)*\xycircle<2pc,2pc>{}
\POS(1.665,1) *@{*}*+!R{u_g}="n";
\POS(1.28,0.38) *@{*}*+!U{u_{1}}="k";
\POS(0.8,0.8) *@{}*+!D{\overrightarrow{C_{g}}};
\POS(0.22,0.8) *@{}*+!D{\vdots};
\POS(1.6,1.3) \ar@{->}(1.6,1.3);(1.55,1.37)="a";
\POS(1.45,0.5) \ar@{->}(1.45,0.5);(1.5,0.55);
\POS(0.33,0.9) \ar@{->}(0.33,1);(0.34,0.85);
\POS(2.1,1.6) *@{*}*+!D{u_{n}}="d";
\POS(1.8,.58) \ar@{->}(1.8,.58) ;(1.79,.6) ="e";
\POS (2.33, .8) *@{}*+!D{\overrightarrow{C_{n-g+1}}};
\POS(2.0,.4) *@{*}*+!U{u_{g+1}}="h";
\POS(2.98,0.9) \ar@{->}(2.98,1.15);(2.99,1)="j";
\POS(1.78,1.37) \ar@{->}(1.78,1.37);(1.79,1.385)="i";
\POS (3.1, .8) *@{}*+!D{\vdots};
\endxy
$
\vskip0.00001mm
\hskip4.8cm $C_{n,g}$ \hskip4cm $C'_{n,g}$
\vskip0.1mm
\hskip5cm Fig.3. \hskip.1cm The digraphs $C_{n,g}$ and $C'_{n,g}$.
\begin{lem}\label{lem51}
Let $2\leq g\leq n-1$, $C'_{n,g}=C_{n, g}-\{(u_{n},u_1)\}+\{(u_{n}, u_g)\}$ (as shown in Fig.3) and
$x=(x_1, x_2, \ldots, x_n)^T$ be the unique positive unit eigenvector corresponding to $q(C_{n,g})$,
while $x_i$ corresponds to the vertex $u_i$. Then
{\rm (1) } $x_{g+1}<x_{g+2}<\cdots <x_n<x_1<x_2<\cdots<x_g$;
{\rm (2) } $q(C'_{n, g})>q(C_{n, g})$.
\end{lem}
\begin{proof}
Since $\overrightarrow{Q}=u_{g+1}u_{g+2}\cdots u_{n}u_1\cdots u_g$ and $\overrightarrow{R}=u_{g}u_{g+1}u_{g+2}$ are directed paths of $C_{n,g}$
with $d^+_{u_i}=1$ where $i\in \{1,2, \ldots, g-1, g+1, \ldots, n\}$, then by Lemma \ref{lem35}, we have
$x_{g+2}<\cdots <x_{n}<x_1<\cdots<x_g$ and $x_{g+1}<x_{g+2}$, thus $x_{g+1}<x_{g+2}<\cdots <x_n<x_1<x_2<\cdots<x_g$.
Since $C'_{n,g}$ is strongly connected and $x_1<x_g$, then $q(C'_{n, g})>q(C_{n, g})$ by Theorem \ref{theo32}.
\end{proof}
\begin{theo}\label{theo52}
Let $2\leq g\leq n-1$ and $D\in\mathcal{G}_{n, g}$ be a digraph. Then $q(D)\geq q(C_{n, g})>2$, with equality if and only if $D\cong C_{n, g}$.
\end{theo}
\begin{proof}
Since $D\in \mathcal{G}_{n, g}$, $\overrightarrow{C_g}$ is a proper subdigraph of $D$ and thus $q(D)>2=q(\overrightarrow{C_g})$. Without loss of generality,
we let $\overrightarrow{C_g}=u_1u_2\cdots u_gu_1$. Since $D\in\mathcal{G}_{n, g}$ is strongly connected, we can
delete vertices or arcs of $D$ so that the resulting subdigraph, denoted by $D_1$, satisfies $D_1\cong C'_{g+l-1, g}$ $(l\geq g)$
or $D_1\cong H$, where $H=(V(H), E(H))$, $V(H)=\{u_1, \ldots, u_g, u_{g+1}, \ldots, u_{g+l-2}\}$, and
$E(H)=\{(u_i, u_{i+1})\mid i\in \{1, \ldots, g+l-3\}\}\cup\{(u_g, u_1), (u_{g+l-2}, u_t)\}$ with $1\leq t\leq g$ and $1+t\leq l$ (see Fig.~4).
By Corollary \ref{cor25}, we have $q(D)\geq q(D_1)$ with equality if and only if $D\cong D_1$.
\vskip0.25cm
$
\hskip3cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(3.2,1.6) *@{*}*+!R{u_{g+l-2}}="c";
\POS (3.01, .9) *@{*}*+!R{u_{t}}="g";
\POS (4.3, .9) *@{*}*+!D{u_{1}};
\POS(5.0,.9) *@{*}*+!L{u_g}="h";
\POS(4.9,1.4) *@{*}*+!L{u_{g+1}};
\POS(4.5,.9) \ar@{->}(4.5,.9);(4.504,.9);
\POS(3,1.1) \ar@{->}(3,1.1);(3.01,1.15)="i";
\POS(4.5,1.85) \ar@{->}(4.5,1.85);(4.6,1.8)="b";
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585)="f";
\POS(4.3,.06) \ar@{->}(4.3,.06);(4.24,.038);
\POS "h" \ar@{-} "g";
\POS(3.8,.9) \ar@{->}(3.8,.9);(3.76,.9)
\POS (4.1, .08) *@{}*+!D{\overrightarrow{C_g}};
\POS (4, 1.3) *@{}*+!D{\overrightarrow{P_{l}}};
\POS(4.99,1.1) \ar@{->}(4.98,1.25);(4.99,1.1);
\POS (4, 1.9) *@{}*+!D{\cdots};
\POS(2.9,1) *@{}*+!D{\vdots};
\POS (4, -0.25) *@{}*+!D{\cdots};
\POS (3.6, .85) *@{}*+!D{\cdots};
\endxy
\hskip0.2cm
\hskip1cm
\xy 0;/r3pc/: \POS (1,1) *\xycircle<2pc,2pc>{}; \POS (2.33,1)*\xycircle<2pc,2pc>{}
\POS(1.665,1) *@{*}*+!R{u_g}="n";
\POS(1.28,0.38) *@{*}*+!U{u_{1}}="k";
\POS(0.8,0.8) *@{}*+!D{\overrightarrow{C_{g}}};
\POS(1.6,1.3) \ar@{->}(1.6,1.3);(1.55,1.37)="a";
\POS(1.45,0.5) \ar@{->}(1.45,0.5);(1.5,0.55);
\POS(0.33,0.9) \ar@{->}(0.33,1);(0.34,0.85);
\POS(2.1,1.6) *@{*}*+!D{u_{g+l-1}}="d";
\POS(1.8,.58) \ar@{->}(1.8,.58) ;(1.79,.6) ="e";
\POS (2.33, .8) *@{}*+!D{\overrightarrow{C_{l}}};
\POS(2.0,.4) *@{*}*+!U{u_{g+1}}="h";
\POS(2.98,0.9) \ar@{->}(2.98,1.15);(2.99,1)="j";
\POS(1.78,1.37) \ar@{->}(1.78,1.37);(1.79,1.385)="i";
\POS(0.22,0.8) *@{}*+!D{\vdots};
\POS (3.1, .8) *@{}*+!D{\vdots};
\endxy
$
\vskip0.00001mm
\hskip4.8cm $H$ \hskip4cm $C'_{g+l-1,g}$
\vskip0.1mm
\hskip5cm Fig.4. \hskip.1cm The digraphs $H$ and $C'_{g+l-1,g}$.
{\bf Case 1: } $D_1\cong C'_{g+l-1, g}$ where $l\geq g$.
Insert $n-g-l+1$ vertices into $\overrightarrow{C_l}$ such that the resulting digraph is $C'_{n, g}$.
By using Corollary \ref{cor37} $n-g-l+1$ times, we have $q(D_1)\geq q(C'_{n, g})$,
and thus $q(D)\geq q(D_1)\geq q(C'_{n, g})>q(C_{n, g})$ by Lemma \ref{lem51}.
{\bf Case 2: } $D_1\cong H$.
Insert $n-g-l+2$ vertices into $\overrightarrow{P_l}$, and denote the resulting digraph by $H'$.
Clearly, $H^{\prime}$ is strongly connected. By using Corollary \ref{cor37} $n-g-l+2$ times,
we have $q(H)\geq q(H^{\prime})$ with equality if and only if $H\cong H^{\prime}$.
{\bf Subcase 2.1: } $t=1$.
In this case, $H^{\prime}\cong C_{n,g}$, then $q(D)\geq q(D_1)=q(H)\geq q(H^{\prime})=q(C_{n,g})$,
with equality if and only if $D\cong C_{n, g}$.
{\bf Subcase 2.2: } $2\leq t\leq g$.
Note that $H^{\prime}\cong C_{n,g}-\{(u_n, u_1)\}+\{(u_n, u_t)\}$.
By Lemma \ref{lem51}, we have $x_1<x_t$, and then $q(C_{n, g})<q(H^{\prime})$ by Theorem \ref{theo32}.
Thus $q(D)\geq q(D_1)=q(H)\geq q(H^{\prime})>q(C_{n,g})$.
Combining the above arguments, $q(D)\geq q(C_{n, g})$ with equality if and only if $D\cong C_{n, g}$.
\end{proof}
\begin{theo}\label{theo53} Let $n\geq 4$. Then
$2=q(\overrightarrow{C_n})<q(C_{n,n-1})<q(C_{n,n-2})<\cdots<q(C_{n,2})<3$.
\end{theo}
\begin{proof}
It is clear that $q(\overrightarrow{C_n})=2$ and $q(C_{n,n-1})>2$.
For any $g$ with $2\leq g\leq n-1$, note that the outdegree sequence of $C_{n, g}$ is $d^+_1=2, d^+_2=d^+_3=\cdots=d^+_{n}=1$,
then $q(C_{n,g})<3$ by Theorem \ref{theo27}.
Now we only need to show that $q(C_{n,g+1})<q(C_{n,g})$ for $2\leq g\leq n-2$.
Let $x=(x_1, x_2, \ldots, x_n)^T$ be the unique positive unit eigenvector corresponding to $q(C_{n,g+1})$,
while $x_i$ corresponds to the vertex $u_i$. Clearly, $C_{n,g}\cong C_{n,g+1}-\{(u_{g+1}, u_{1})\}+\{(u_{g+1}, u_{2})\}$.
Similarly to the proof of Lemma \ref{lem51}, we have $x_1<x_2$, and then $q(C_{n,g+1})<q(C_{n,g})$ by Theorem \ref{theo32}.
\end{proof}
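The monotonicity in Theorem \ref{theo53} is easy to observe numerically (a sketch only, assuming Python with \texttt{numpy}; $n=8$ is chosen for concreteness):
\begin{verbatim}
import numpy as np

def q(A):
    A = np.asarray(A, dtype=float)
    return max(np.linalg.eigvals(np.diag(A.sum(axis=1)) + A).real)

def C(n, g):
    # C_{n,g}: arcs (u_i, u_{i+1}) for 1 <= i <= n-1 together with (u_g, u_1) and (u_n, u_1)
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = 1
    A[g - 1, 0] = 1
    A[n - 1, 0] = 1
    return A

n = 8
vals = [q(C(n, g)) for g in range(2, n)]                 # g = 2, ..., n-1
print(all(a > b for a, b in zip(vals, vals[1:])))        # True: q(C_{n,g}) decreases in g
print(2 < min(vals) and max(vals) < 3)                   # True
\end{verbatim}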
In \cite{2012LAA}, the authors defined $\theta$-digraph as follows. The $\theta$-digraph consists of three directed paths $P_{a+2}$, $P_{b+2}$ and $P_{c+2}$ such that the initial vertex of $P_{a+2}$ and $P_{b+2}$ is the terminal vertex of $P_{c+2}$, and the initial vertex of $P_{c+2}$ is the terminal vertex of $P_{a+2}$ and $P_{b+2}$, denoted by $\theta(a,b,c)$. Clearly, $C_{n,g}\cong \theta(0,n-g,g-2)$, and $H\cong\theta(t-1, l-2, g-t-1)$ where $H$ defined in the proof of Theorem \ref{theo52}.
Let $\theta(1,1,n-4)$, $\widehat{\theta}=\theta(1,1,n-4)\cup\{(u_2, u_3)\}$, and $\theta(0,2,n-4)$ be as shown in Fig.~5. By Corollary \ref{cor26},
we know that $\overrightarrow{C_n}$ is the digraph which achieves the minimum signless Laplacian spectral radius
among all strongly connected digraphs on $n\geq 4$ vertices.
Now we will show $\theta(0,1,n-3)$ (that is, $C_{n,n-1}$), $\theta(1,1,n-4) $, $\theta(0,2,n-4)$ (that is, $C_{n,n-2}$) are the digraphs which achieve the second,
the third and the fourth minimum signless Laplacian spectral radius among all strongly connected digraphs on $n\geq 4$ vertices, respectively.
\vskip0.2cm
$
\hskip1cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(4.1,2) *@{*}*+!D{u_2}="c";
\POS (3.01, .9) *@{*}*+!R{u_{4}}="g";
\POS (4.3, .9) *@{*}*+!D{u_3};
\POS(5.0,.9) *@{*}*+!L{u_1}="h";
\POS(4.5,.9) \ar@{->}(4.5,.9);(4.504,.9);
\POS(3.25, 1.65) \ar@{->}(3.25, 1.65);(3.2, 1.58)
\POS(4.8,1.57) \ar@{->}(4.8,1.57);(4.9,1.44);
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585);
\POS(4.3,.06) \ar@{->}(4.3,.06);(4.24,.038);
\POS "h" \ar@{-} "g";
\POS(3.8,.9) \ar@{->}(3.8,.9);(3.76,.9)
\POS (4, -0.25) *@{}*+!D{\cdots};
\POS(3.2,.4) \ar@{->}(3.2,.4) ;(3.19,.415) ;
\POS (3.4, .2) *@{*}*+!R{u_5};
\POS (4.6, .2) *@{*}*+!L{u_n};
\endxy
\hskip1.2cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(4.1,2) *@{*}*+!D{u_3}="c";
\POS (3.15, 1.5) *@{*}*+!R{u_{4}}="d";
\POS (4.85, 1.5) *@{*}*+!L{u_2}="b";
\POS(5.0,.9) *@{*}*+!L{u_1}="a";
\POS(3.5,1.86) \ar@{->}(3.5,1.86);(3.6,1.91);
\POS(4.99,1.1) \ar@{->}(4.98,1.25);(4.99,1.1);
\POS(4.5,1.85) \ar@{->}(4.5,1.85);(4.6,1.8);
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585);
\POS(4.3,.06) \ar@{->}(4.3,.06);(4.24,.038);
\POS "a" \ar@{-} "c"; \POS "b" \ar@{-} "d";
\POS(3.8,1.5) \ar@{->}(3.8,1.5);(3.76,1.5)
\POS(3,1.1) \ar@{->}(3,1.1);(3,1)
\POS(4.75,1.2) \ar@{->}(4.75,1.2);(4.7,1.25)
\POS (4, -0.25) *@{}*+!D{\cdots};
\POS(3.2,.4) \ar@{->}(3.2,.4) ;(3.19,.415) ;
\POS (3, .9) *@{*}*+!R{u_5};
\POS (4.6, .2) *@{*}*+!L{u_n};
\endxy
\hskip1.2cm
\xy 0;/r3pc/: \POS (4,1) *\xycircle<3pc,3pc>{};
\POS(3.4,1.8) *@{*}*+!D{u_3}="c";
\POS (3.01, .9) *@{*}*+!R{u_4}="g";
\POS (4.3, 1.95) *@{*}*+!D{u_2};
\POS(5.0,.9) *@{*}*+!L{u_1}="h";
\POS(3.8, 1.98) \ar@{->}(3.8, 1.98);(3.75, 1.97)
\POS(3.07, 1.35) \ar@{->}(3.07, 1.35);(3.04, 1.298)
\POS(4.8,1.57) \ar@{->}(4.8,1.57);(4.9,1.44);
\POS(4.94,.6) \ar@{->}(4.94,.6);(4.93,.585);
\POS(4.3,.06) \ar@{->}(4.3,.06);(4.24,.038);
\POS "h" \ar@{-} "g";
\POS(3.8,.9) \ar@{->}(3.8,.9);(3.76,.9)
\POS (4, -0.25) *@{}*+!D{\cdots};
\POS(3.2,.4) \ar@{->}(3.2,.4) ;(3.19,.415) ;
\POS (3.4, .2) *@{*}*+!R{u_5};
\POS (4.6, .2) *@{*}*+!L{u_n};
\endxy
$
\vskip0.00001mm
\hskip1.7cm $\theta(1,1,n-4)$ \hskip3.4cm $\widehat{\theta}$ \hskip3.7cm $\theta(0,2,n-4)$
\vskip0.1mm
\hskip2.5cm Fig.5. \hskip.1cm The digraphs $\theta(1,1,n-4)$, $\widehat{\theta}$, and $\theta(0,2,n-4)$.
\begin{theo}\label{theo54}
Let $n\geq 4$. Then $\theta(0,1,n-3)$, $\theta(1,1,n-4)$, $\theta(0,2,n-4)$ are the digraphs which achieve the second,
the third and the fourth minimum signless Laplacian spectral radius among all strongly connected digraphs on $n$ vertices,
respectively.
\end{theo}
\begin{proof}
By $\mathcal{G}_{n, n}=\{\overrightarrow{C_n}\}$ and Theorems \ref{theo52}$\sim$\ref{theo53}, it is clear that $C_{n,n-1}\cong \theta(0,1,n-3)$ is the unique digraph with the second minimum signless Laplacian spectral radius.
Note that $\mathcal{G}_{n, n-1}=\{\theta(0,1,n-3), \theta(1,1,n-4), \widehat{\theta}\}$ and
$2=q(\overrightarrow{C_n})<q(\theta(0,1,n-3))<\min \{q(\theta(1,1,n-4)), q(\widehat{\theta}), q(\theta(0,2,n-4))\}$,
we only need to show that $q(\theta(1,1,n-4))<q(\theta(0,2,n-4))<q(\widehat{\theta})$ by Theorems \ref{theo52}$\sim$\ref{theo53}.
Let $P_{\theta(1,1,n-4)}(x)$, $P_{\theta(0,2,n-4)}(x)$, $P_{\widehat{\theta}}(x)$ be the signless Laplacian characteristic polynomials of
$\theta(1,1,n-4), \theta(0,2,n-4)$ and $\widehat{\theta}$, respectively. By direct calculation, we have
\vskip2mm
\noindent
$P_{\theta(1,1,n-4)}(x)=\left|\begin{array}{cccccccc}
x-2 & -1 & -1 & 0 & 0 & 0 & \cdots & 0 \\
0 & x-1 & 0 & -1 & 0 & 0 & \cdots & 0 \\
0 & 0 & x-1 & -1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & x-1 & -1 & 0 & \cdots &0 \\
0 & 0 & 0 & 0 & x-1 & -1 & \cdots &0 \\
\vdots & \vdots & \vdots &\vdots &\ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 &\cdots & 0 & x-1 & -1 \\
-1 & 0 & 0 & 0 & 0 & \cdots & 0 & x-1
\end{array}\right|
$
\vskip2mm
\hskip1.8cm $=(x-1)[(x-2)(x-1)^{n-2}-2]$,
\vskip2.4mm
\noindent$P_{\theta(0,2,n-4)}(x)=\left|\begin{array}{cccccccc}
x-2 & -1 & 0 & -1 & 0 & 0 & \cdots & 0 \\
0 & x-1 & -1 & 0 & 0 & 0 & \cdots & 0 \\
0 & 0 & x-1 & -1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & x-1 & -1 & 0 & \cdots &0 \\
0 & 0 & 0 & 0 & x-1 & -1 & \cdots &0 \\
\vdots & \vdots & \vdots &\vdots &\ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 &\cdots & 0 & x-1 & -1 \\
-1 & 0 & 0 & 0 & 0 & \cdots & 0 & x-1
\end{array}\right|
$
\vskip2mm
\hskip1.8cm $=(x-1)^2[(x-2)(x-1)^{n-3}-1]-1$,
\vskip2.4mm
\noindent$P_{\widehat{\theta}}(x)=\left|\begin{array}{cccccccc}
x-2 & -1 & -1 & 0 & 0 & 0 & \cdots & 0 \\
0 & x-2 & -1 & -1 & 0 & 0 & \cdots & 0 \\
0 & 0 & x-1 & -1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & x-1 & -1 & 0 & \cdots &0 \\
0 & 0 & 0 & 0 & x-1 & -1 & \cdots &0 \\
\vdots & \vdots & \vdots &\vdots &\ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 &\cdots & 0 & x-1 & -1 \\
-1 & 0 & 0 & 0 & 0 & \cdots & 0 & x-1
\end{array}\right|
$
\vskip2mm
\hskip0.5cm $=(x-1)[(x-2)^2(x-1)^{n-3}-2]$.
When $x>2$, we note that
\hskip3.5cm $P_{\theta(1,1,n-4)}(x)-P_{\widehat{\theta}}(x)=(x-1)^{n-2}(x-2)>0,$
\hskip3cm$ P_{\theta(0,2,n-4)}(x)-P_{\widehat{\theta}}(x)=(x-2)[(x-1)^{n-2}-(x-2)]>0,$
\hskip3.7cm $P_{\theta(1,1,n-4)}(x)-P_{\theta(0,2,n-4)}(x)=(x-2)^2>0.$
\noindent Then $q(\theta(1,1,n-4))<q(\theta(0,2,n-4))<q(\widehat{\theta})$.
\end{proof}
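The ordering just established can also be confirmed numerically for a small case (a sketch only, assuming Python with \texttt{numpy}; here $n=7$, with the vertex labels of Fig.~5):
\begin{verbatim}
import numpy as np

def q_arcs(n, arcs):
    A = np.zeros((n, n))
    for u, v in arcs:
        A[u - 1, v - 1] = 1
    return max(np.linalg.eigvals(np.diag(A.sum(axis=1)) + A).real)

n = 7
back = [(i, i + 1) for i in range(4, n)] + [(n, 1)]       # u_4 -> ... -> u_n -> u_1
t11 = [(1, 2), (1, 3), (2, 4), (3, 4)] + back             # theta(1,1,n-4)
t02 = [(1, 2), (1, 4), (2, 3), (3, 4)] + back             # theta(0,2,n-4)
that = t11 + [(2, 3)]                                     # widehat{theta}
print(q_arcs(n, t11) < q_arcs(n, t02) < q_arcs(n, that))  # True
\end{verbatim}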
\section{The maximum signless Laplacian spectral radius of strongly connected digraph with given vertex connectivity}
\hskip.6cm
In this section, we will discuss the maximum signless Laplacian spectral radius of strongly connected digraph with given vertex connectivity, and propose some open problem.
Let $\mathcal{D}_{n, k}$ denote the set of strongly connected digraphs on $n$ vertices with vertex connectivity $\kappa(D)=k$. If $k=n-1$, then $\mathcal{D}_{n, n-1}=\{\overset{\longleftrightarrow}{K_n}\}$.
Now we only need to discuss the cases $1\leq k\leq n-2$.
\vskip0.2cm
$
\hskip6cm
\xy 0;/r3pc/: \POS (4,3) *\xycircle<3pc,1pc>{}; \POS (4,2) *\xycircle<3pc,1pc>{}; \POS (4,1) *\xycircle<3pc,1pc>{};
\POS (3.5, 1) *@{*}="a"; \POS(4.5,1) *@{*}="b";\POS(5.8,0.7) *@{}*+!D{\overset{\longleftrightarrow\quad\quad}{K_{n-m-k}}};
\POS (3.5, 2) *@{*}="c"; \POS(4.5,2) *@{*}="d";\POS(5.5,1.7) *@{}*+!D{\overset{\longleftrightarrow}{K_k}};
\POS (3.5, 3) *@{*}="e"; \POS(4.5,3) *@{*}="f"\POS(5.5,2.7) *@{}*+!D{\overset{\longleftrightarrow}{K_m}};
\POS(4.28,1) *@{}*+!R{\cdots};\POS(4.28,2) *@{}*+!R{\cdots}; \POS(4.28,3) *@{}*+!R{\cdots};
\ar@{-}"a";"c"; \ar@{-}"a";"d"; \ar@{-}"b";"c"; \ar@{-}"b";"d"; \ar@{-}"c";"e"; \ar@{-}"c";"f"; \ar@{-}"d";"e"; \ar@{-}"d";"f";
\ar@{-}\ar@/_{1.7pc}/ "a";"e";
\ar@{-}\ar@/_{-1.7pc}/ "b";"f";
\endxy
$
\vskip0.00001mm
\hskip5cm Fig.6. \hskip.1cm The digraph $\overrightarrow{K}(n, k, m)$.
\vskip0.2cm
Let $D_1\bigtriangledown D_2$ denote the digraph obtained from two disjoint digraphs $D_1$, $D_2$ with vertex set $V=V(D_1)\cup V(D_2)$ and
arc set $E=E(D_1)\cup E(D_2)\cup \{(u,v), (v,u) | u\in V(D_1), v\in V(D_2)\}$.
Let $1\leq k\leq n-2$, $1\leq m\leq n-k-1$, and $\overrightarrow{K}(n, k, m)$ denote the digraph $\overset{\longleftrightarrow}{K_k}\bigtriangledown(\overset{\longleftrightarrow}{K_m}\cup \overset{\longleftrightarrow\quad\quad}{K_{n-m-k}})\cup E_1$,
where $E_1=\{(u,v) |u\in V(\overset{\longleftrightarrow}{K_m}), v\in V(\overset{\longleftrightarrow\quad\quad}{K_{n-m-k}})\}$
(see Fig.6). Clearly, $\overrightarrow{K}(n, k, m)\in \mathcal{D}_{n, k}$.
Let $\overrightarrow{\mathcal{K}}(n, k)=\{\overrightarrow{K}(n, k, m)|1\leq m\leq n-k-1\}$ where $1\leq k\leq n-2$.
Clearly, $\overrightarrow{\mathcal{K}}(n, k)\subseteq\mathcal{D}_{n, k}$.
\begin{prop}\label{prop61}{\rm(\cite{1976})}
Let $D$ be a strongly connected digraph with $\kappa(D)=k$. Suppose that $S$ is a $k$--vertex cut of $D$ and $D_1$, $D_2$, \ldots, $D_t$ are the strongly connected components of $D-S$. Then there exists an ordering of $D_1$, $D_2$, \ldots, $D_t$ such that for $1\leq i\leq t$ and $v\in V(D_i)$, every tail of $v$ is in $\bigcup\limits^{i-1}_{j=1} D_j$.
\end{prop}
\begin{rem}\label{rem62}
By Proposition \ref{prop61}, we know that there is a strongly connected component $D_1$ of $D-S$, with $|V(D_1)|=m$, whose vertices have no in-neighbors in $D-S$ outside $D_1$. Let $D_2=D-S-D_1$. We add arcs to $D$ until both the induced subdigraph on $V(D_1)\cup S$ and the induced subdigraph on $V(D_2)\cup S$ become complete digraphs, and we add the arc $(u,v)$ for every $u\in V(D_1)$ and every $v\in V(D_2)$. Denote the new digraph by $H$. Since $\kappa(D)=k$, we have $H=\overrightarrow{K}(n,k,m)\in \overrightarrow{\mathcal{K}}(n, k)\subseteq \mathcal{D}_{n, k}$.
Since $D$ is a subdigraph of $H$, we have $q(D)\leq q(H)$, with equality if and only if $D\cong H$, by Corollary \ref{cor25}. Therefore, the digraph which achieves the maximum signless Laplacian spectral radius in $\mathcal{D}_{n, k}$ must be some digraph in $\overrightarrow{\mathcal{K}}(n, k)$.
\end{rem}
\begin{theo}\label{theo63}
Let $n,k,m$ be positive integers with $1\leq k\leq n-2$ and $1\leq m\leq n-k-1$.
Then $q(\overrightarrow{K}(n, k, m))=\frac{3n-m-4+\sqrt{(n-3m)^2+8mk}}{2}$.
\end{theo}
\begin{proof}
Let $D=\overrightarrow{K}(n, k, m),$ $S$ be a $k$--vertex cut of $D$. Suppose that $D_1$ with $|V(D_1)|=m$ and $D_2$ with $|V(D_2)|=n-m-k=t$ are two strongly connected components, i.e. two complete subdigraphs of $D-S$ with arcs $E_1=\{(u, v) | u\in V(D_1), v\in V(D_2)\}$. Then
$Q(D)=\begin{bmatrix}
J_{m\times m}+(n-2)I_{m\times m} & J_{m \times k} & J_{m\times t}\\
J_{k\times m} & J_{k \times k}+(n-2)I_{k\times k} & J_{k\times t}\\
O_{t\times m} & J_{t\times k} & J_{t\times t}+(k+t-2)I_{t\times t}
\end{bmatrix}$
\vskip.2cm
\noindent and the signless Laplacian characteristic polynomial of $D$ is
$P_{D}(x)=|xI_{n\times n}-Q(D)|$
\hskip1.2cm$=\begin{vmatrix}
(x-n+2)I_{m\times m}-J_{m\times m} & -J_{m \times k} & -J_{m\times t}\\
-J_{k\times m} & (x-n+2)I_{k\times k}-J_{k \times k} & -J_{k\times t}\\
O_{t\times m} & -J_{t\times k} & (x-t-k+2)I_{t\times t}-J_{t\times t}
\end{vmatrix}$
\vskip.2cm
\hskip1.2cm$=(x-n+2)^{m+k-1}(x-k-t+2)^{t-1}(x-n-m+2)\left[x-2(k+t-1)-\frac{2mk}{x-n-m+2}\right]$
\vskip.2cm
\hskip1.2cm$=(x-n+2)^{m+k-1}(x-n+m+2)^{t-1}[x^2-(3n-m-4)x+2(n-m-1)(n+m-2)-2mk]$.
Let $D'$ be the proper subdigraph of $D$ and
\vskip.2cm
$Q(D')=\begin{bmatrix}
J_{(m+k)\times (m+k)}+(n-2)I_{(m+k)\times (m+k)} & O_{(m+k)\times t}\\
O_{t\times (m+k)} & J_{t\times t}+(k+t-2)I_{t\times t}
\end{bmatrix}$,
\vskip.2cm
\noindent
then $q(D)>q(D')=\max\{n+m+k-2, 2n-2m-k-2\}$ by $\rho(aJ_{n\times n}+bI_{n\times n})=na+b$.
Note that $\max\{n+m+k-2, 2n-2m-k-2\}>\max\{n-2,n-m-2\}=n-2$, so $q(D)$ is a root of the quadratic equation $x^2-(3n-m-4)x+2(n-m-1)(n+m-2)-2mk=0$; thus $q(D)=\frac{3n-m-4+\sqrt{(n-3m)^2+8mk}}{2}$ or $q(D)=\frac{3n-m-4-\sqrt{(n-3m)^2+8mk}}{2}$.
\vskip.2cm
If $q(D)=\frac{3n-m-4-\sqrt{(n-3m)^2+8mk}}{2}$, then
\hskip1cm $q(D')<q(D)<\frac{3n-m-4-|n-3m|}{2}=\left\{\begin{array}{ll}
n+m-2,& \mbox{if } n\geq 3m;\\
2n-2m-2, & \mbox{if } 0<n<3m.
\end{array}
\right.$
We now show that this leads to a contradiction, since $n+m+k-2\leq q(D')<q(D)$.
When $n\geq 3m$, we get a contradiction from $n+m+k-2<q(D)<n+m-2$; when $0<n<3m$, we also get a contradiction from $n+m+k-2<q(D)<2n-2m-2$.
Combining the above arguments, we have $q(D)=\frac{3n-m-4+\sqrt{(n-3m)^2+8mk}}{2}$.
\end{proof}
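The closed form in Theorem \ref{theo63} can also be verified numerically (a sketch only, assuming Python with \texttt{numpy}; the parameter triples below are arbitrary choices satisfying $1\leq k\leq n-2$ and $1\leq m\leq n-k-1$):
\begin{verbatim}
import math
import numpy as np

def q(A):
    A = np.asarray(A, dtype=float)
    return max(np.linalg.eigvals(np.diag(A.sum(axis=1)) + A).real)

def K(n, k, m):
    # K(n,k,m): K_k joined in both directions to K_m and K_{n-m-k},
    # plus one-way arcs from every vertex of K_m to every vertex of K_{n-m-k}
    M, S, T = range(m), range(m, m + k), range(m + k, n)
    A = np.zeros((n, n))
    for block in (M, S, T):
        for i in block:
            for j in block:
                if i != j:
                    A[i, j] = 1
    for i in S:
        for j in list(M) + list(T):
            A[i, j] = A[j, i] = 1
    for i in M:
        for j in T:
            A[i, j] = 1
    return A

for n, k, m in [(7, 2, 1), (9, 3, 2), (10, 4, 3)]:
    formula = (3 * n - m - 4 + math.sqrt((n - 3 * m) ** 2 + 8 * m * k)) / 2
    print(abs(q(K(n, k, m)) - formula) < 1e-8)   # True
\end{verbatim}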
\begin{rem}\label{rem65}
Note that $\overset{\longleftrightarrow}{K_n}$ is the unique digraph which achieves the maximum signless Laplacian spectral radius $2n-2$ among all strongly connected digraphs, and that $\overrightarrow{K}(n, n-2, 1)\cong \overset{\longleftrightarrow}{K_n}-\{(u, v)\}$ where $u,v\in V(\overset{\longleftrightarrow}{K_n})$.
By Lemma \ref{lem23} and Theorem \ref{theo63}, it follows that $\overrightarrow{K}(n, n-2, 1)$ is the unique digraph which achieves the second maximum signless Laplacian spectral radius $\frac{3n-5+\sqrt{n^2+2n-7}}{2}$ among all strongly connected digraphs.
\end{rem}
\begin{theo}\label{theo66}
Let $D$ be a strongly connected digraph, $D\not\cong \overset{\longleftrightarrow}{K_n},$ and $D\not\cong \overrightarrow{K}(n, n-2, 1)$.
Then $q(D)< \frac{3n-5+\sqrt{n^2+2n-7}}{2}$.
\end{theo}
Since $\mathcal{D}_{n, n-1}=\{\overset{\longleftrightarrow}{K_n}\}$, we know that $\overrightarrow{K}(n, n-2, 1)$
is the unique digraph which achieves the maximum signless Laplacian spectral radius in $\mathcal{D}_{n, n-2}$, by Remark \ref{rem65} or Theorem \ref{theo66}.
Thus we propose the following conjecture, which, as seen above, is true when $k=n-2$.
\begin{con}\label{con67}
Let $n,k$ be given positive integers with $1\leq k\leq n-2$, $D\in\mathcal{D}_{n, k}$.
Then $q(D)\leq \frac{3n-5+\sqrt{(n-3)^2+8k}}{2}$ with equality if and only if $D\cong \overrightarrow{K}(n, k, 1)$.
\end{con}
\section{Some notes on the spectral radius of strongly connected digraphs}
\hskip.6cm
In Sections 3$\sim$6, we used techniques similar to those applied in \cite{2012DM}.
Although Sections 2$\sim$3 of \cite{2012DM} contain some flaws, they can be corrected by arguments similar to the proofs in this paper,
and the results and techniques there remain useful for the study of strongly connected digraphs.
In this section, we show that further results on the spectral radius of strongly connected digraphs can be obtained.
Note that $\overrightarrow{C_n}$ is the unique digraph with the minimum signless Laplacian spectral radius among all strongly connected digraphs on $n$ vertices. In \cite{2012LAA}, the authors characterized the extremal digraphs which achieve the maximum and minimum spectral radius among all strongly connected bicyclic digraphs, and they proposed the following open problem.
\begin{prob}\label{prob71}
Does the digraph $\theta (0, 1, n-3)$ achieve the second minimum spectral radius among all strongly connected digraphs?
\end{prob}
We answer Problem \ref{prob71} affirmatively,
and we can obtain more by arguments similar to the proofs of Theorems \ref{theo53}$\sim$\ref{theo54}.
\begin{theo}\label{theo72}
Let $n\geq 4$. Then $\rho(\overrightarrow{C_n})<\rho(C_{n,n-1})<\rho(C_{n,n-2})<\cdots<\rho(C_{n,2})$.
\end{theo}
\begin{theo}\label{theo73}
Let $n\geq 4$. Then $\theta(0,1,n-3)$, $\theta(1,1,n-4)$, $\theta(0,2,n-4)$ are the digraphs which achieve the second,
the third and the fourth minimum spectral radius among all strongly connected digraphs on $n$ vertices, respectively.
\end{theo}
Note that $\overset{\longleftrightarrow}{K_n}$ is the unique digraph which achieves the maximum spectral radius $n-1$ among all strongly connected digraphs;
thus we have the following result by Theorem 4.2 in \cite{2012DM} and Lemma \ref{lem23}.
\begin{theo}\label{theo74}
Let $n\geq 4$. Then $\overrightarrow{K}(n, n-2, 1)$ is the unique digraph which achieves the second maximum spectral radius $\frac{n-2+\sqrt{n^2-4}}{2}$ among all strongly connected digraphs.
\end{theo}
\end{document}
\begin{document}
\title{ {\bf Entanglement-assisted quantum feedback control}
\thanks{
This work was supported in part by JSPS Grant-in-Aid No. 15K06151
and JST PRESTO No. JPMJPR166A.
The authors acknowledge helpful discussions with M. R. Hush,
A. R. R. Carvalho, and S. S. Szigeti. }
}
\author{Naoki Yamamoto ~ and ~ Tomoaki Mikami
\\
\\
Department of Applied Physics and Physico-Informatics, Keio University, \\
Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan
}
\maketitle
\begin{abstract}
The main advantage of quantum metrology relies on the effective use of
entanglement, which indeed allows us to achieve strictly better estimation
performance over the standard quantum limit.
In this paper, we propose an analogous method utilizing entanglement for the
purpose of feedback control.
The system considered is a general linear dynamical quantum system, where
the control goal can be systematically formulated as a linear quadratic Gaussian
control problem based on the quantum Kalman filtering method;
in this setting, an entangled input probe field is effectively used to reduce
the estimation error and accordingly the control cost function.
In particular, we show that, in the problem of cooling an opto-mechanical
oscillator, the entanglement-assisted feedback control can lower the stationary
occupation number of the oscillator below the limit attainable by the controller
with a coherent probe field and furthermore beats the controller with an optimized
squeezed probe field.
\end{abstract}
\section{Introduction}
Entanglement is a special notion that had long been regarded as a ``spooky''
correlation \cite{Einstein}.
Over recent decades, however, it has gained a positive reputation, mainly thanks
to its central role in quantum information science \cite{Nielsen,Dowling&Milburn}.
A particularly important application of entanglement in our context is the
{\it quantum metrology} \cite{Braunstein,Giovanetti}.
The basic configuration is depicted in Fig. \ref{Q metrology}.
The goal is to estimate an unknown parameter $\vartheta$ of the system.
A standard estimation method is first to send a known input state and
then measure the output state containing the information about $\vartheta$
(Fig.~\ref{Q metrology}~(a));
in this case the estimation error has a strict lower bound called the
{\it standard quantum limit (SQL)} with respect to the input energy.
In the quantum metrology schematic depicted in Fig.~\ref{Q metrology}~(b),
on the other hand, an entangled state is chosen as an input so that one
portion passes through the system while the other portion does not;
then by measuring the combined output, we obtain more information
about $\vartheta$ than the standard case (Fig.~\ref{Q metrology} (a))
and thus can beat the SQL in the estimation error.
This schematic has been experimentally demonstrated in several settings,
e.g., \cite{Steinberg,Takeuchi,Polzik 2009,Polzik 2010}.
Note that the entanglement-assisted method is not a unique approach for beating
the SQL; particularly in the case where $\vartheta$ is the parameter of a force
applied to a mechanical oscillator (e.g., a gravitational wave force), several
alternative estimation schemes beating the SQL have been developed, such as
the squeezed-probe scheme
\cite{Kimble 2001,Aasi 2013,Furusawa 2013,Andersen 2016} and
the variational measurement technique \cite{Vyatchanin 1995} for back-action
evasion.
\begin{figure}
\caption{Parameter estimation schemes: (a) the standard scheme and (b) the entanglement-assisted scheme (quantum metrology).}
\label{Q metrology}
\end{figure}
What we learn from the theory of quantum metrology is the fact that, in
a broad sense, a quantum estimator could have better performance if
assisted by entanglement.
Therefore it is a reasonable idea to employ an entanglement-assisted
estimation strategy for the measurement-based quantum feedback
control \cite{WisemanBook,KurtBook}, which is now well established
based on the quantum filtering theory \cite{Belavkin1999,Bouten}.
Actually, in a similar configuration depicted in Fig.~\ref{Q metrology}~(b),
it is expected that the quantum filter (i.e., the best continuous-time estimator)
brings us more information, and as a consequence we will have a chance to
construct a better controller than in the standard case without entanglement.
The idea of entanglement-assisted feedback control is briefly mentioned
in \cite{Genoni}, but there has been no quantitative analysis of this control
strategy.
That is, we are interested in the following questions:
(i) {\it How much does the entanglement-assisted strategy improve the control
performance in a realistic setup?}
(ii) {\it In what situation is the entanglement-assisted feedback control really
beneficial?}
Note that the answers to these questions are non-trivial, because, for a realistic
noisy system, the entanglement-assisted feedback control will not always
outperform the standard one without entanglement, due to the fragile nature
of entangled states.
\begin{figure}
\caption{General setup of the entanglement-assisted feedback control scheme considered in this paper.}
\label{General setup}
\end{figure}
In this paper, we consider the setup illustrated in Fig.~2.
The system to be controlled is a general linear quantum system such as an
optical amplifier and an opto-mechanical oscillator
\cite{BachorBook,James2008,Nurdin 2009,Hamerly 2012,Yamamoto2014,Yamamoto2016}.
The probe input is given by an optical entangled state generated by combining
a squeezed field and a coherent field \cite{Furusawa2011} at a beam splitter;
one portion of this entangled field couples to the system while the other portion
does not, as in the scheme shown in Fig.~1 (b), and then the combined output
field is continuously measured by Homodyne detectors.
Finally, based on the measurement signal, we construct the quantum filter and
then apply a feedback control to the system.
The point of this setting is that the system state is always Gaussian, and as a
result the quantum filter is simplified to the {\it quantum Kalman filter}
\cite{Doherty1999a}, which enables us to compute the exact real-time estimate
of the system variables.
Furthermore, in this paper, we consider the quantum {\it Linear Quadratic
Gaussian (LQG)} optimal control problem, meaning that the control goal is
to minimize a quadratic-type cost function.
Fortunately, again thanks to the Gaussianity of the system state, this problem
can be analytically solved by almost the same way as in the classical case
\cite{Doherty1999b,Belavkin2008}.
An important fact in this formulation is that a strict lower bound of the
cost, which is ideally achievable by employing the so-called {\it cheap control}
\cite{Sivan,Seron Book,Seron}, is represented by a function of only the
estimation error.
Therefore this ultimate limit of the LQG control cost has the meaning of
SQL, when the input is given by a coherent or a vacuum field.
In the above described framework, first, this paper proves that, thanks to
the entangled probe input, the filter certainly gains additional information
which may improve the control performance.
Next we study a feedback cooling problem of an opto-mechanical oscillator
\cite{Mancini 1998,Hopkins,Hamerly,Hofer 2015,Kippenberg 2015,Aspelmeyer 2015}
and provide answers to the above-posed questions by conducting detailed
numerical simulations.
In particular, it is shown that, by carefully choosing the system parameters
($\theta, \beta_1, \beta_2, \phi_1, \phi_2$; see Fig.~2), the
entanglement-assisted feedback control can lower the stationary occupation
number of the oscillator below the SQL in the sense of cheap control mentioned
above, and moreover, it outperforms the control with an optimized squeezed
probe field \cite{Andersen 2016}.
Finally, we note that the scheme presented in this paper differs from that
studied in \cite{Hofer 2015}, which considers the use of system's {\it internal}
entanglement to enhance cooling for an opto-mechanical system.
{\it Notation:}
$\Re$ and $\Im$ denote the real and imaginary parts, respectively.
$I_n$: $n\times n$ identity matrix.
$O_n$: $n\times n$ zero matrix.
$0_{n\times m}$: $n\times m$ zero matrix.
\section{Quantum Kalman filtering, LQG control, and cheap control}
We here review the general theory of quantum Kalman filtering, LQG
optimal control, and the cheap control.
\subsection{Quantum linear systems}
In this paper we consider a linear quantum system, whose general form is
described as follows (see \cite{WisemanBook,KurtBook,James2008,Yamamoto2014}
for more details).
The system variables are collected in a vector of operators
$\hat x :=[\hat q_1, \hat p_1, \ldots, \hat q_n, \hat p_n]^\top$,
where $\hat q_i$ and $\hat p_i$ are position and momentum operators.
They satisfy the canonical commutation relation
$\hat q_i\hat p_j - \hat p_j \hat q_i=i\delta_{ij}$
(we set $\hbar =1$), which are summarized as
\begin{equation}
\label{CCR}
\hat x \hat x ^\top -(\hat x \hat x^\top )^\top
= i \Sigma_n,
\end{equation}
where $\Sigma_n$ is the following $2n\times 2n$ block diagonal matrix:
\[
\Sigma_n={\rm diag}\{\sigma, \ldots, \sigma\},~~
\sigma = \left[\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}\right].
\]
The system variables are governed by the linear dynamics
\begin{align}
\label{dynamics}
d\hat x_t
=A\hat x_t dt+Fu_tdt + B d\hat W_t.
\end{align}
Note that, in order to preserve Eq.~\eqref{CCR} for all $t$, the matrix $A$
must be of the following form:
\begin{equation}
\label{general A matrix}
A=\Sigma_n(G+\Sigma_n^\top B \Sigma_m B^\top \Sigma_n/2),
\end{equation}
where $G$ is a $2n\times 2n$ real symmetric matrix determining the system
Hamiltonian by $\hat H=\hat x^\top G\hat x/2$.
Also $B$ is a $2n \times 2m$ real matrix determined from the system-field
coupling.
$F$ is a real matrix and $u_t$ is the vector of classical (i.e., non-quantum)
signal representing the control input.
The system couples with $m$ probe or environment bosonic fields, with
vector of noise operators
$\hat W_t :=[\hat Q_1, \hat P_1, \ldots, \hat Q_m, \hat P_m]^\top$.
This satisfies the quantum Ito rule $d\hat W_t d\hat W_t^\top=\Theta dt$,
with zero mean: $\mean{d\hat W_t}=0$.
The correlation matrix $\Theta$ is $2m\times 2m$ block diagonal
Hermitian, and its $j$th block matrix (i.e., the correlation matrix of
$\hat Q_j$ and $\hat P_j$) is in general written as
\begin{equation}
\label{quantum Ito rule}
\Theta_j =
\left[\begin{array}{cc}
N_j+\Re(M_j)+1/2 & \Im(M_j)+i/2 \\
\Im(M_j)-i/2 & N_j-\Re(M_j)+1/2 \\
\end{array}\right].
\end{equation}
The parameters $N_j\in{\mathbb R}$ and $M_j\in{\mathbb C}$ satisfy
$N_j(N_j+1)\geq |M_j|^2$.
Note that $N_j$ represents the average excitation number of the probe
quanta, and $M_j$ is related to squeezing of the field;
if $N_j(N_j+1)=|M_j|^2$ is satisfied, the probe field state is a pure squeezed state
\footnote{
The field annihilation operator $\hat A_1=(\hat Q_1 + i\hat P_1)/\sqrt{2}$ for
a pure squeezed state is modeled by the Bogoliubov transformation
$\hat A_1=\hat A_1^{(0)} \cosh(r/2)
- \hat A_1^{(0)}\mbox{}^\dagger e^{i\theta/2}\sinh(r/2)$, where
$\hat A_1^{(0)}$ is the vacuum field operator.
The corresponding parameters $N_1$ and $M_1$ are obtained from the
definition $d\hat A_1 d\hat A_1^\dagger =(N_1+1)dt$,
$d\hat A_1^\dagger d\hat A_1=N_1dt$,
$d\hat A_1^2=M_1dt$, and $d\hat A_1^\dagger\mbox{}^2=M_1^*dt$, which lead to
$N_1=\sinh^2(r/2)$ and $M_1=-e^{i\theta/2}\sinh(r/2)\cosh(r/2)$.
Hence certainly $N_1(N_1+1)=|M_1|^2$ is satisfied.
}
, while if $M_j = 0$ it is not squeezed.
Also note that $d\hat Q_j d\hat P_j - d\hat P_j d\hat Q_j=idt$.
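As a concrete illustration of Eq.~\eqref{quantum Ito rule}, the following minimal Python sketch (with placeholder squeezing values) builds the block $\Theta_j$ from given $(N_j, M_j)$ and checks the pure-state condition $N_j(N_j+1)=|M_j|^2$ for the squeezed-field parametrization of the footnote.
\begin{verbatim}
import numpy as np

def theta_block(N, M):
    # Correlation block Theta_j of Eq. (quantum Ito rule)
    return np.array([[N + M.real + 0.5, M.imag + 0.5j],
                     [M.imag - 0.5j,    N - M.real + 0.5]])

# Pure squeezed field with squeezing level r and phase theta (see the footnote);
# the numerical values are placeholders.
r, theta = 2.3, np.pi / 2
N = np.sinh(r / 2)**2
M = -np.exp(1j * theta / 2) * np.sinh(r / 2) * np.cosh(r / 2)
assert np.isclose(N * (N + 1), abs(M)**2)   # pure-state condition

print(theta_block(N, M).real)               # Re(Theta_j), which enters the filter below
\end{verbatim}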
For this system we perform a (joint) Homodyne measurement on $\ell~(\leq m)$
output probe fields, which generates the classical measurement signal
\begin{align}
\label{output}
d y_t = C\hat x_t dt + D d\hat W_t.
\end{align}
Note that, due to the unitarity of the system-field coupling, the $\ell\times 2n$
real matrix $C$ and the $\ell\times 2m$ real matrix $D$ have the following
specific structure:
\begin{equation}
\label{general C and D matrices}
C=D\Sigma_m B^\top \Sigma_n, ~~~D\Sigma_m D^\top=0.
\end{equation}
In this paper we assume that $A$ is {\it Hurwitz}, meaning that the real
parts of all the eigenvalues of $A$ are negative;
hence in this case the mean of the system variables, $\mean{\hat x_t}$,
which obeys the dynamics $d\mean{\hat x_t}/dt=A\mean{\hat x_t}$,
converges to zero in the long time limit, i.e., $\mean{\hat x_t}\rightarrow 0$.
Note that the opto-mechanical system studied in Section 4 is Hurwitz.
\subsection{Quantum Kalman filter}
Let us consider the situation where we want to perform a real-time estimate
of the system variable $\hat x_t$ based on the measurement signal $y_t$.
The solution is provided by the quantum filtering theory;
that is, we can rigorously define the quantum conditional expectation
$\pi(\hat x_t):={\mathbb E}(\hat x_t\,|\,{\cal Y}_t)$, where ${\cal Y}_t$ is
the $\sigma$-algebra composed of the measurement signal
$\{ y_s\,|\,0\leq s\leq t\}$.
In fact the classical random variable $\pi(\hat x_t)$ is the least mean squared
estimate of $\hat x_t$.
The recursive equation updating $\pi(\hat x_t)$ is given by the quantum
Kalman filter \cite{Doherty1999a,Doherty1999b,Belavkin2008}:
\begin{eqnarray}
\label{linear-filter}
& & \hspace*{-2em}
d\pi(\hat x_t) = A\pi(\hat x_t)dt + Fu_tdt
+ K_t(dy_t -C\pi(\hat x_t)dt),
\nonumber \\ & & \hspace*{-2em}
K_t =(V_t C^\top + B\Re(\Theta)D^\top)(D\Re(\Theta) D^\top)^{-1},
\end{eqnarray}
where the initial condition is $\pi(\hat x_0)=\mean{\hat x_0}$ with
$\mean{\bullet}$ the unconditional expectation.
$V_t$ is the estimation error covariance matrix defined as
\[
V_t:=\mean{
\Delta\hat{x}_t\Delta\hat{x}_t^\top
+(\Delta\hat{x}_t\Delta\hat{x}_t^\top)^\top}/2,~~
\Delta\hat{x}_t:=\hat{x}_t-\pi(\hat x_t),
\]
which evolves in time according to the following Riccati differential equation:
\begin{equation}
\label{riccati}
\hspace{0.5 em}
\dot V_t = AV_t + V_tA^\top + B\Re(\Theta)B^\top
-K_t D\Re(\Theta) D^\top K_t^\top.
\end{equation}
Under the assumption $A$ being Hurwitz
\footnote{
Note that the Hurwitz property is a sufficient condition for Eq.~\eqref{riccati}
to have a unique steady solution.
A useful necessary and sufficient condition is that $(A^\top, D)$ is stabilizable
and $(A^\top, B)$ is detectable \cite{Kucera}.
}
, this equation has a unique steady solution $V_\infty>0$.
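In practice $V_\infty$ is obtained numerically. The following sketch (an illustrative addition with placeholder parameter values) simply integrates Eq.~\eqref{riccati} forward in time with an Euler step, for a single cavity mode probed by a vacuum field and monitored by homodyne detection; the system matrices are assembled from Eqs.~\eqref{general A matrix} and \eqref{general C and D matrices}. A dedicated algebraic Riccati solver can of course be used instead.
\begin{verbatim}
import numpy as np

def kalman_gain(V, B, C, D, Theta_re):
    # Kalman gain K_t of Eq. (linear-filter)
    R = D @ Theta_re @ D.T
    return (V @ C.T + B @ Theta_re @ D.T) @ np.linalg.inv(R)

def riccati_rhs(V, A, B, C, D, Theta_re):
    # Right-hand side of the Riccati differential equation (riccati)
    K = kalman_gain(V, B, C, D, Theta_re)
    return A @ V + V @ A.T + B @ Theta_re @ B.T - K @ (D @ Theta_re @ D.T) @ K.T

def steady_V(A, B, C, D, Theta_re, dt=1e-3, T=50.0):
    # Forward-Euler integration until the covariance settles to V_inf
    V = np.zeros_like(A)
    for _ in range(int(T / dt)):
        V = V + dt * riccati_rhs(V, A, B, C, D, Theta_re)
    return V

# Illustrative single-mode example (placeholder values): a cavity with
# detuning Delta and linewidth kappa, vacuum probe, homodyne phase phi.
Delta, kappa, phi = 0.0, 2.0, np.pi / 2
sg = np.array([[0.0, 1.0], [-1.0, 0.0]])        # Sigma_1
Cbar = np.sqrt(kappa) * np.eye(2)               # from the coupling L = sqrt(kappa) a
B = sg @ Cbar.T @ sg                            # single-field version of Eq. (B matrix)
G = Delta * np.eye(2)
A = sg @ (G + sg.T @ B @ sg @ B.T @ sg / 2)     # Eq. (general A matrix)
D = np.array([[np.cos(phi), np.sin(phi)]])
C = D @ sg @ B.T @ sg                           # Eq. (general C and D matrices)
Theta_re = 0.5 * np.eye(2)                      # vacuum field: N = 0, M = 0

print("V_inf =\n", steady_V(A, B, C, D, Theta_re))
\end{verbatim}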
\subsection{Quantum LQG control}
In the infinite horizon quantum LQG control problem, we consider the following
cost function:
\begin{equation}
\label{cost function}
J[u]
= \lim_{T\rightarrow \infty}
\frac{1}{T}
\biggl \langle
\int_0^T ({\hat x}_t^\top Q {\hat x}_t
+u_t^{\top}Ru_t )dt \biggr \rangle,
\end{equation}
where $Q\geq 0$ and $R>0$ are real weighting matrices.
The goal is to design the feedback control law $u_t$ as a function of $y_t$,
that minimizes the cost \eqref{cost function} under the condition \eqref{dynamics},
i.e., $u_t^*={\rm arg}\min_u J[u]$.
The point is that, due to the tower property
$\mean{\hat x_t}=\mean{\pi(\hat{x}_t)}
=\mean{{\mathbb E}(\hat x_t\,|\,{\cal Y}_t)}$, the cost can be represented
in terms of only the filter variable as follows.
That is, due to the relation
${\mathbb E}(\hat{x}_t\hat{x}_t^\top \,|\, {\cal Y}_t)
=V_t + i\Sigma_n/2 + \pi(\hat{x}_t)\pi(\hat{x}_t)^\top$, we have
\begin{eqnarray*}
& & \hspace*{0em}
\biggl \langle
\int_0^T (\hat{x}_t^\top Q \hat{x}_t + u_t^\top R u_t )dt
\biggr \rangle
= \biggl \langle
\int_0^T
\Big( {\rm Tr}\hspace{0.07cm}[Q{\mathbb E}(\hat{x}_t\hat{x}_t^\top\,|\,{\cal Y}_t)]
+ u_t^\top R u_t \Big)dt
\biggr \rangle
\nonumber \\ & & \hspace*{3em}
= \biggl \langle
\int_0^T \Big( \pi(\hat{x}_t)^\top Q \pi(\hat{x}_t)
+ u_t^\top Ru_t \Big)dt \biggr \rangle
+ \int_0^T{\rm Tr}\hspace{0.07cm}\Big[Q\Big(V_t+\frac{i}{2}\Sigma_n\Big)\Big]dt,
\end{eqnarray*}
where we have used the fact that $V_t$ obeys the deterministic time evolution
\eqref{riccati}.
Hence this equation leads to
\begin{equation}
\label{cost function rewritten}
J[u]
= \lim_{T\rightarrow \infty}
\frac{1}{T}
\biggl \langle
\int_0^T \Big( \pi(\hat{x}_t)^\top Q \pi(\hat{x}_t)
+ u_t^\top Ru_t \Big)dt \biggr \rangle
+ {\rm Tr}\hspace{0.07cm}(QV_\infty).
\end{equation}
Note that the second term is constant.
As a result, our problem is to find $u_t$ minimizing the first term of
Eq.~\eqref{cost function rewritten} under the condition \eqref{linear-filter}.
This is exactly the {\it classical} LQG control problem and can be analytically
solved as follows (see \cite{Bensoussan} or Appendix~A);
the optimal control input is given by
\begin{equation}
\label{optimal control}
u^*_t=-R^{-1}F^\top P_t\pi(\hat{x}_t),
\end{equation}
where $P_t$ is the solution of the following Riccati equation:
\begin{equation}
\label{control Riccati}
\dot{P}_t+P_tA + A^\top P_t - P_tFR^{-1}F^\top P_t + Q = 0.
\end{equation}
As in the case of Eq.~\eqref{riccati}, because $A$ is Hurwitz
\footnote{
As mentioned in footnote 2, this condition is stronger than the condition that
$(A,F)$ is stabilizable and $(A, \sqrt{Q})$ is detectable, which is a
necessary and sufficient condition for Eq.~\eqref{control Riccati} to have a
unique steady solution $P_\infty\geq 0$.
Note that, in this case, $A-FR^{-1}F^\top P_\infty$ is Hurwitz, meaning that
the controlled filter equation is stable.
}, this equation has a unique steady solution $P_\infty\geq 0$.
The minimum value of the cost, which is reached by the optimal
control \eqref{optimal control}, is given by
\begin{equation}
\label{min cost}
J[u^*]
= {\rm Tr}\hspace{0.07cm}(K_\infty D\Re(\Theta) D^\top K_\infty^\top P_\infty)
+ {\rm Tr}\hspace{0.07cm}(QV_\infty).
\end{equation}
Note that $u^*_t$ is a function of the optimal estimate $\pi(\hat x_t)$.
Thus, we can design the optimal estimate and control separately;
this is called the {\it separation principle} as in the classical case
\cite{Bouten 2008}.
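Both steady-state Riccati equations can also be handled with standard control-theory tools. The sketch below (an illustrative addition; the single-mode system, the assumed direct drive on the $\hat p$ quadrature, and the weights are arbitrary) uses SciPy's algebraic Riccati solver for Eq.~\eqref{riccati} in its dual filtering form and for Eq.~\eqref{control Riccati}, and then evaluates the minimum cost of Eq.~\eqref{min cost}.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_steady_state(A, B, C, D, F, Q, R, Theta_re):
    # Steady-state V_inf, P_inf, Kalman gain K_inf, feedback gain, and J[u*]
    Rm = D @ Theta_re @ D.T                    # measurement noise covariance
    S = B @ Theta_re @ D.T                     # process-measurement cross term
    # Filtering ARE, i.e. the steady state of Eq. (riccati), in dual form
    V = solve_continuous_are(A.T, C.T, B @ Theta_re @ B.T, Rm, s=S)
    K = (V @ C.T + S) @ np.linalg.inv(Rm)      # K_inf of Eq. (linear-filter)
    # Control ARE, i.e. the steady state of Eq. (control Riccati)
    P = solve_continuous_are(A, F, Q, R)
    Kc = np.linalg.inv(R) @ F.T @ P            # optimal input: u*_t = -Kc pi(x_t)
    J = np.trace(K @ Rm @ K.T @ P) + np.trace(Q @ V)
    return V, P, K, Kc, J

# Single-mode cavity example (kappa = 2, vacuum probe, homodyne phase pi/2),
# with a hypothetical direct drive on the p quadrature and arbitrary weights.
kappa = 2.0
A = -0.5 * kappa * np.eye(2)
B = -np.sqrt(kappa) * np.eye(2)
C = np.sqrt(kappa) * np.array([[0.0, 1.0]])
D = np.array([[0.0, 1.0]])
F = np.array([[0.0], [1.0]])
Q, R = np.eye(2), 1e-2 * np.eye(1)
Theta_re = 0.5 * np.eye(2)
V, P, K, Kc, J = lqg_steady_state(A, B, C, D, F, Q, R, Theta_re)
print("minimum LQG cost J[u*] =", J)
\end{verbatim}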
\subsection{Lower bound of the minimum cost: The cheap control}
Let us set $R=\epsilon^2 I$ for the cost function \eqref{cost function}, where
$\epsilon>0$ is a positive scalar, and use $P_\epsilon$ to denote the solution
of the Riccati equation \eqref{control Riccati}.
Then, it was proven in \cite{Sivan} that $P_\epsilon$ monotonically
decreases as $\epsilon$ goes to zero.
Moreover, we have that $\lim_{\epsilon\rightarrow 0}P_\epsilon=0$, if and
only if the system characterized by $(A, F, \bar{Q})$ is {\it minimum phase}
and {\it right invertible}; see Appendix~B for the definitions of these conditions
($\bar{Q}$ is a real matrix satisfying $Q=\bar{Q}^\top \bar{Q}$).
In this case, the steady solution of the Riccati equation \eqref{control Riccati}
takes the form $P_\infty=\epsilon \bar{P}+O(\epsilon^2)$.
Then, the minimum cost \eqref{min cost} becomes
\begin{equation}
\label{cheap cost}
J[u^*]
= \epsilon {\rm Tr}\hspace{0.07cm}(K_\infty D\Re(\Theta) D^\top K_\infty^\top \bar{P})
+ {\rm Tr}\hspace{0.07cm}(QV_\infty) + O(\epsilon^2),
\end{equation}
and the optimal control input \eqref{optimal control} is given by
$u^*_t=-\epsilon^{-1}F^\top \bar{P}\pi(\hat{x}_t)$ at steady state.
Now we consider the situation where the actuator is allowed to have a large
control gain (e.g., the case $\epsilon\approx 0$);
in particular, the control input in the ideal limit $\epsilon \rightarrow +0$,
meaning that no penalty is imposed on it, is called the cheap control
\cite{Sivan,Seron Book,Seron}.
We then see that this ultimate feedback control perfectly suppresses the
fluctuation of the estimated variables $\pi(\hat{x}_t)$, i.e., the first term
of Eq.~\eqref{cheap cost}, and as a result the total cost is limited only by the
optimal estimation error.
Therefore, we can take $J^*[u^*] = {\rm Tr}\hspace{0.07cm}(QV_\infty)$ as a fundamental
quantity that is reasonably used for evaluating the performance of feedback
control, because it cannot be further decreased by any control.
In particular, we define the SQL as the value of $J^*[u^*]$ when the probe
input is given by a coherent or a vacuum field.
Finally note that, when $\epsilon\rightarrow +0$, Eq.~\eqref{cost function}
equals the stationary energy $\mean{\hat x_\infty^\top Q \hat x_\infty}$,
and this can be ultimately reduced, by the ideal cheap control, to
$J^*[u^*] = {\rm Tr}\hspace{0.07cm}(QV_\infty)$.
\section{Configuration of the entanglement-assisted feedback control}
\subsection{Model}
The entanglement-assisted feedback control configuration considered in
this paper is depicted in Fig.~\ref{General setup}.
This model has the following four features.
{\bf (i)} The system is linear and couples with a single probe field.
{\bf (ii)}
The entangled optical field is produced by combining a fixed squeezed
field $\hat W_1$ and a fixed coherent field $\hat W_2$, at a beam splitter (BS1).
The correlation matrices \eqref{quantum Ito rule} of these fields are respectively
given by
\[
\Theta_1 = \frac{1}{2}
\left[ \begin{array}{cc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta \\
\end{array} \right]
\left[ \begin{array}{cc}
e^{-r} & i \\
-i & e^r \\
\end{array} \right]
\left[ \begin{array}{cc}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta \\
\end{array} \right],~~~
\Theta_2 = \frac{1}{2}
\left[ \begin{array}{cc}
1 & i \\
-i & 1 \\
\end{array} \right],
\]
where $r$ is the squeezing level and $\theta$ represents the phase of the
squeezed state in the phase space; see Fig.~\ref{General setup} and the footnote
on page 5.
The reflectivity of BS1 is, for simplicity, set to $\beta_1^2$ with
$0\leq \beta_1\leq 1$.
Note that, for all $\beta_1\in(0,1)$, the output fields of BS1 are entangled
(see Appendix~C).
As shown in the figure, one portion of this entangled field couples with
the system, while the other portion does not.
The degree of entanglement can be changed by tuning $\beta_1$, while
maintaining the total amount of energy for producing this entangled field.
Hence we can conduct a fair comparison of the entanglement-assisted
control method (the case $0< \beta_1< 1$) and the standard method without
entanglement (the case $\beta_1=0, 1$), given the same amount of resources.
In particular, note that the SQL corresponds to the case $\beta_1=1$.
{\bf (iii)} We assume that the system's output field is subjected to an optical loss,
which can be modeled by introducing a fictitious beam splitter with reflectivity
$\delta^2$; if $\delta=0$, then there is no optical loss.
$\hat W_3$ denotes the vacuum noise field coming into this fictitious beam
splitter, whose correlation matrix is the same as that of $\hat W_2$, i.e.,
$\Theta_3 = \Theta_2$.
As a consequence, the overall system contains three input fields $\hat W_1$,
$\hat W_2$, and $\hat W_3$, implying that $m=3$ in the system equations
\eqref{dynamics} and \eqref{output}.
The total correlation matrix is thus given by
\[
\Theta={\rm diag}\{\Theta_1, \Theta_2, \Theta_3\}.
\]
Note again that $\hat W_1$ and $\hat W_2$ represent the probe fields,
while $\hat W_3$ denotes the unwanted noise field.
{\bf (iv)} The system's output field after being subjected to the optical loss meets
the other portion of the entangled input field, at the second beam splitter
(BS2) with reflectivity $\beta_2^2$; again for simplicity we assume
$0\leq \beta_2 \leq 1$.
Then the final output fields are measured by two Homodyne detectors with
phase $\phi_1$ and $\phi_2$, which generate two output signals
$y_1$ and $y_2$.
Note that, if $\beta_2=0$ or $\beta_2=1$, the two optical fields are not
combined at BS2 and are measured independently; this type of measurement
is called the local measurement.
The other case with $0<\beta_2<1$ is called the global measurement.
The overall system dynamics realizing the above setup is given as follows.
First we use the fact that, for a general open linear system interacting with
a single probe field, the system-field coupling is represented by an operator
(the so-called Lindblad operator) of the form $\hat L=c^\top\hat x$ with
$c$ the $2n$-dimensional complex column vector, and this determines the
$B$ matrix in Eq.~\eqref{dynamics} as follows
(see e.g., \cite{WisemanBook,KurtBook,Yamamoto2014}).
That is, by defining the $2\times 2n$ real matrix
\begin{equation}
\label{def of Cs}
\bar{C}=
\sqrt{2}\left[ \begin{array}{c}
\Re(c)^\top \\
\Im(c)^\top \\
\end{array} \right],
\end{equation}
we can specify the $B$ matrix in the following form:
\begin{equation}
\label{B matrix}
B = [ \alpha_1\Sigma_n\bar{C}^\top\Sigma_1,
\beta_1\Sigma_n\bar{C}^\top\Sigma_1,
0_{2n\times 2} ],
\end{equation}
where $\alpha_1=\sqrt{1-\beta_1^2}$.
Then the $A$ matrix is determined by Eq.~\eqref{general A matrix}
with $G$ specified by the system Hamiltonian $\hat H=\hat x^\top G \hat x/2$.
The $C$ matrix is also specified by Eq.~\eqref{general C and D matrices} and
is now given by
\begin{equation}
\label{C matrix}
C = D\Sigma_3 B^\top \Sigma_n
= D \left[ \begin{array}{c}
\alpha_1 \bar{C} \\
\beta_1 \bar{C} \\
0_{2\times 2n} \\
\end{array} \right].
\end{equation}
Here $D$ is a $2\times 6$ real matrix of the form
\[
D = [D_1, D_2, O_2]T_2 T_L T_1,
\]
where
\[
D_1 = \left[ \begin{array}{cc}
\cos\phi_1 & \sin\phi_1 \\
0 & 0 \\
\end{array} \right],~~~
D_2 = \left[ \begin{array}{cc}
0 & 0 \\
\cos\phi_2 & \sin\phi_2 \\
\end{array} \right]
\]
and
\[
T_k=\left[ \begin{array}{ccc}
\alpha_k I_2& \beta_k I_2 & O_2 \\
-\beta_k I_2& \alpha_k I_2 & O_2 \\
O_2 & O_2 & I_2 \\
\end{array} \right]~~(k=1,2),~~~
T_L=\left[ \begin{array}{ccc}
\sqrt{1-\delta^2} I_2 & O_2 & \delta I_2 \\
O_2 & I_2 & O_2 \\
-\delta I_2 & O_2 & \sqrt{1-\delta^2} I_2 \\
\end{array} \right],
\]
with $\alpha_2=\sqrt{1-\beta_2^2}$.
Note that $T_1$ and $T_2$ represent the scattering process at BS1 and BS2,
respectively.
Also $T_L$ corresponds to the optical loss in the system's output field.
$D_1$ and $D_2$ represent the Homodyne measurements with phase
$\phi_1$ and $\phi_2$, respectively.
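As a concrete companion to the above description, the following Python sketch assembles $B$, $C$, and $D$ from Eqs.~\eqref{B matrix} and \eqref{C matrix} together with the chain $D=[D_1, D_2, O_2]T_2T_LT_1$, for a given coupling matrix $\bar{C}$; the beam-splitter reflectivities and homodyne phases in the example call are arbitrary placeholders.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def interferometer_matrices(beta1, beta2, delta, phi1, phi2, Cbar):
    # B, C, D of Eqs. (B matrix) and (C matrix) for the setup of Fig. 2 (m = 3)
    a1, a2 = np.sqrt(1 - beta1**2), np.sqrt(1 - beta2**2)
    dd = np.sqrt(1 - delta**2)
    I2, O2 = np.eye(2), np.zeros((2, 2))
    sg = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Sn = block_diag(*([sg] * (Cbar.shape[1] // 2)))      # Sigma_n
    S3 = block_diag(sg, sg, sg)                          # Sigma_m, m = 3

    # Homodyne projections and the scattering matrices of BS1, loss, BS2
    D1 = np.array([[np.cos(phi1), np.sin(phi1)], [0, 0]])
    D2 = np.array([[0, 0], [np.cos(phi2), np.sin(phi2)]])
    T = lambda a, b: np.block([[a * I2, b * I2, O2],
                               [-b * I2, a * I2, O2],
                               [O2, O2, I2]])
    TL = np.block([[dd * I2, O2, delta * I2],
                   [O2, I2, O2],
                   [-delta * I2, O2, dd * I2]])

    Dmat = np.hstack([D1, D2, O2]) @ T(a2, beta2) @ TL @ T(a1, beta1)
    Bs = Sn @ Cbar.T @ sg                                # Sigma_n Cbar^T Sigma_1
    B = np.hstack([a1 * Bs, beta1 * Bs, np.zeros_like(Bs)])
    C = Dmat @ S3 @ B.T @ Sn
    return B, C, Dmat

# Example call with the opto-mechanical coupling of Section 4 (kappa = 2);
# reflectivities and phases here are arbitrary illustrative values.
Cbar = np.sqrt(2.0) * np.hstack([np.zeros((2, 2)), np.eye(2)])
B, C, D = interferometer_matrices(np.sqrt(0.4), np.sqrt(0.8), 0.0,
                                  np.pi / 2, np.pi / 2, Cbar)
S3 = block_diag(*([np.array([[0., 1.], [-1., 0.]])] * 3))
print(np.round(D @ S3 @ D.T, 12))   # D Sigma_m D^T = 0, Eq. (general C and D matrices)
\end{verbatim}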
\subsection{Information gain via entanglement}
This subsection is devoted to showing that, in a special setup, additional
information about the system is indeed obtained through the second path
of the interferometer, which may improve the control performance.
That is, we consider the case where the system's output field is completely
lost, i.e., $\delta=1$.
In this case, the $D$ matrix is given by
\[
D=[-\beta_1(\beta_2 D_1+\alpha_2 D_2), ~
\alpha_1(\beta_2 D_1+\alpha_2 D_2), ~ \alpha_2 D_1-\beta_2 D_2],
\]
and as a result $C=0$ for any choice of $\bar{C}$.
Then the system equations \eqref{dynamics} and \eqref{output} are given by
\begin{align}
\label{no knowledge dynamics}
d\hat x_t
=A\hat x_t dt+Fu_tdt + B d\hat W_t, ~~
d y_t = D d\hat W_t.
\end{align}
Hence, as expected, the measurement output $y_t$ does not {\it explicitly}
contain any information about the system.
However, interestingly, the observer can {\it implicitly} gain information;
intuitively, this is because the observer knows that the {\it same} noise
$\hat{W}_t$ enters into the system and the detector;
in other words, the observer exactly knows the noise that drives the system and
thus can track the estimate of the system's time-evolution.
Actually, the quantum Kalman filter equation \eqref{linear-filter} is now
given by
\begin{equation}
\label{no knowledge filter}
d\pi(\hat x_t) = A\pi(\hat x_t)dt + Fu_tdt
+ B\Re(\Theta)D^\top (D\Re(\Theta) D^\top)^{-1}dy_t,
\end{equation}
which means that the observer can update the estimate $\pi(\hat x_t)$
using the measurement result $y_t$.
Also the estimation error covariance matrix follows
\begin{equation}
\label{no knowledge riccati}
\dot V_t = AV_t + V_tA^\top + B\Re(\Theta)B^\top
-B\Re(\Theta)D^\top (D\Re(\Theta) D^\top)^{-1} D\Re(\Theta)B^\top.
\end{equation}
Now, because $A$ is Hurwitz, $V_t$ has a steady solution, meaning that
the estimation error is bounded.
The above two equations indicate that the important term bringing information
to the filter equations \eqref{no knowledge filter} and \eqref{no knowledge riccati}
is $B\Re(\Theta)D^\top$, which is now calculated as
\[
B\Re(\Theta)D^\top
= \frac{\alpha_1\beta_1}{2}\Sigma_n \bar{C}^\top \Sigma_1
[I_2-2\Re(\Theta_1)]
\Big( \beta_2 D_1^\top + \alpha_2 D_2^\top \Big).
\]
If there is no entanglement (i.e., $r=0$ or $\alpha_1\beta_1=0$), then
$B\Re(\Theta)D^\top=0$, and the filter equations are reduced to
\[
d\pi(\hat x_t) = A\pi(\hat x_t)dt + Fu_tdt, ~~~
\dot V_t = AV_t + V_tA^\top + B\Re(\Theta)B^\top.
\]
These are simply the dynamics of unconditional expectation
$\pi(\hat x_t)=\mean{\hat x_t}$ and the error covariance matrix,
which correspond to the master equation describing the statistical
time-evolution of the system without measurement.
Therefore it is now clear that the entanglement-assisted filter gains additional
information about the system through the entangled input field.
However, note that additional information does not always improve the
control performance: as demonstrated in
Section \ref{The case of lossy output field}, an entangled probe field is generally
fragile, and as a result the system's output field becomes noisier than in
the case of a coherent input.
{\it Remark:}
The measurement output $dy_t=Dd\hat{W}_t$ in
Eq.~\eqref{no knowledge dynamics} has the form of
{\it no-knowledge measurement} \cite{Szigeti}, which can be used to cancel
decoherence.
Interestingly, unlike the measurement scheme presented here, the no-knowledge
one does not provide any information to the observer;
actually for the setup of \cite{Szigeti} it can be proven that $A$ is not
Hurwitz and the estimation error diverges.
\section{Entanglement-assisted feedback for opto-mechanical oscillator}
In this section, we conduct detailed numerical simulations to evaluate
how much the proposed entanglement-assisted feedback control scheme
is effective in a practical setup.
\subsection{System model}
\begin{figure}
\caption{\label{Setup}
The opto-mechanical oscillator setup.}
\end{figure}
The system of interest is an opto-mechanical oscillator shown in Fig.~\ref{Setup}.
Let $(\hat q_1, \hat p_1)$ be the position and momentum operators of the
mechanical oscillator, and $\hat a_2$ be the annihilation operator of the optical
cavity.
The system Hamiltonian is given by
\begin{equation}
\label{Osci Hamiltonian}
\hat H = \frac{\omega}{2}(\hat q_1^2 + \hat p_1^2)
+ \frac{\Delta}{2}(\hat q_2^2 + \hat p_2^2)
- \lambda \hat q_1 \hat q_2,
\end{equation}
where $\omega$ is the resonant frequency of the oscillator and $\Delta$ is the
frequency detuning of the cavity mode in the rotating frame of the driving laser
frequency; see \cite{Mancini 1998,Hofer 2015,Milburn} for a more
detailed description.
Note $\hat q_2=(\hat a_2+\hat a_2^\dagger)/\sqrt{2}$ and
$\hat p_2=(\hat a_2-\hat a_2^\dagger)/\sqrt{2}i$.
The third term is the linearized radiation pressure force with strength
$|\lambda|$, representing the interaction between the oscillator and
the cavity field.
From the relation $\hat H = \hat x^\top G\hat x/2$ with
$\hat x = [\hat q_1, \hat p_1, \hat q_2, \hat p_2]^\top$, we have
\[
G = \left[ \begin{array}{cc|cc}
\omega & 0 & -\lambda & 0 \\
0 & \omega & 0 & 0 \\ \hline
-\lambda & 0 & \Delta & 0 \\
0 & 0 & 0 & \Delta \\
\end{array} \right].
\]
The system couples to the driving laser field at the partially reflective end-mirror
of the cavity, with strength $\kappa$;
this coupling is represented by the following operator:
\[
\hat L = \sqrt{\kappa} \hat a_2
= \sqrt{\frac{\kappa}{2}}(\hat q_2 + i \hat p_2)
= \sqrt{\frac{\kappa}{2}}[0, 0, 1, i] \hat x.
\]
Thus, Eq.~\eqref{def of Cs} yields
\[
\bar{C}
= \sqrt{\kappa}
\left[ \begin{array}{cc|cc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right].
\]
This determines the system's $B$ and $C$ matrices from Eqs. \eqref{B matrix}
and \eqref{C matrix}, respectively, and the $A$ matrix from
Eq.~\eqref{general A matrix}.
In addition, we assume that the oscillator is subjected to a thermal environment
with mean photon number $\bar{n}_{\rm th}$.
Then the system matrices are modified as follows;
we need to change the $A$ matrix to $A-\gamma \Gamma/2$ and the constant
term in the Riccati equation~\eqref{riccati}, $B\Re(\Theta)B^\top$, to
$B\Re(\Theta)B^\top + \gamma(\bar{n}_{\rm th}+1/2)\Gamma$, where
$\gamma$ represents the system-environment coupling strength and
$\Gamma={\rm diag}\{1, 1, 0, 0\}$.
Note that $A-\gamma \Gamma/2$ is Hurwitz, meaning that both the Riccati
equations \eqref{riccati} and \eqref{control Riccati} have a unique steady
solution.
The oscillator can be directly controlled by implementing a piezo-actuator
\cite{Rugar 2008} (the case shown in Fig.~\ref{Setup}) or indirectly
controlled by modulating the input probe field.
In both cases, it can be shown that the system satisfies the conditions for
the cheap control described in Section 2.4; see Appendix~B.
\subsection{Control goal}
The control goal is to cool the oscillator toward its motional ground state;
i.e., we want to minimize the stationary mechanical occupation number
\[
\bar{n} = \mean{\hat a_{1,\infty}^\dagger \hat a_{1,\infty}}
= (\mean{\hat q_{1,\infty}^2}+\mean{\hat p_{1,\infty}^2}-1)/2.
\]
As described in Section 2.4, this can be ultimately reduced, by the ideal cheap
control, to
\begin{equation}
\label{min energy}
\bar{n}^*=({\rm Tr}\hspace{0.07cm}(\Gamma V_\infty)-1)/2,
\end{equation}
where again $\Gamma={\rm diag}\{1, 1, 0, 0\}$ and $V_\infty$ is the
steady solution of the Riccati equation \eqref{riccati}.
The system parameters are set to the following typical values (in units where
$\omega=1$) in the feedback cooling setup (e.g., \cite{Hofer 2015}).
First we assume the resonant driving $\Delta=0$, meaning that the oscillator's
position and momentum can be best estimated by the filter and accordingly
controlled efficiently.
Also the cavity linewidth is set to $\kappa=2$ (bad-cavity regime), so that
the intracavity field immediately leaks out and, as a consequence,
the oscillator dynamics can be well observed by the filter.
The oscillator is subjected to a thermal noise with mean photon number
$\bar{n}_{\rm th}=1\times 10^5$ with coupling strength
$\gamma=1\times 10^{-7}$.
The squeezing level of the probe field is set to $r=2.3$ (10 dB squeezing),
which is accessible with the current technology.
In this setting, the task is to optimize the parameters
$(\beta_1, \beta_2, \phi_1, \phi_2, \theta)$ so that Eq.~\eqref{min energy}
is minimized.
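For reference, the sketch below assembles the full opto-mechanical model with the parameter values above and evaluates Eq.~\eqref{min energy} for one fixed choice of $(\beta_1, \beta_2, \theta, \phi_1, \phi_2)$; the beam-splitter and homodyne settings used here are illustrative rather than the optimized values behind the figures, and a simple sweep over them yields the kind of optimization described next.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag, solve_continuous_are

# Parameter values of Section 4.2; the beam-splitter settings and homodyne
# phases below are illustrative placeholders, not optimized values.
omega, Delta, kappa, lam = 1.0, 0.0, 2.0, 0.3
gamma, n_th, r = 1e-7, 1e5, 2.3
theta, phi1, phi2 = np.pi / 2, np.pi / 2, np.pi / 2
beta1, beta2, delta = np.sqrt(0.4), np.sqrt(0.8), 0.0
a1, a2, dd = np.sqrt(1 - beta1**2), np.sqrt(1 - beta2**2), np.sqrt(1 - delta**2)

I2, O2 = np.eye(2), np.zeros((2, 2))
sg = np.array([[0.0, 1.0], [-1.0, 0.0]])
Sn, S3 = block_diag(sg, sg), block_diag(sg, sg, sg)

# Hamiltonian matrix G and coupling Cbar (Section 4.1)
G = np.array([[omega, 0, -lam, 0], [0, omega, 0, 0],
              [-lam, 0, Delta, 0], [0, 0, 0, Delta]])
Cbar = np.sqrt(kappa) * np.hstack([O2, I2])

# Probe correlations: squeezed W_1, vacuum W_2 and W_3
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
Theta_re = block_diag(0.5 * Rt @ np.diag([np.exp(-r), np.exp(r)]) @ Rt.T,
                      0.5 * I2, 0.5 * I2)

# B, C, D from Eqs. (B matrix), (C matrix) and D = [D1, D2, O2] T2 TL T1
D1 = np.array([[np.cos(phi1), np.sin(phi1)], [0, 0]])
D2 = np.array([[0, 0], [np.cos(phi2), np.sin(phi2)]])
T = lambda a, b: np.block([[a*I2, b*I2, O2], [-b*I2, a*I2, O2], [O2, O2, I2]])
TL = np.block([[dd*I2, O2, delta*I2], [O2, I2, O2], [-delta*I2, O2, dd*I2]])
Dm = np.hstack([D1, D2, O2]) @ T(a2, beta2) @ TL @ T(a1, beta1)
Bs = Sn @ Cbar.T @ sg
B = np.hstack([a1 * Bs, beta1 * Bs, 0 * Bs])
C = Dm @ S3 @ B.T @ Sn

# A matrix (Eq. general A matrix) with the thermal-bath modifications
Gam = np.diag([1.0, 1.0, 0.0, 0.0])
A = Sn @ (G + Sn.T @ B @ S3 @ B.T @ Sn / 2) - 0.5 * gamma * Gam
Qnoise = B @ Theta_re @ B.T + gamma * (n_th + 0.5) * Gam

# Steady-state estimation error (Eq. riccati) and the bound of Eq. (min energy)
Rm, S = Dm @ Theta_re @ Dm.T, B @ Theta_re @ Dm.T
V_inf = solve_continuous_are(A.T, C.T, Qnoise, Rm, s=S)
print("cheap-control occupation number n* =", (np.trace(Gam @ V_inf) - 1) / 2)
\end{verbatim}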
To see how to find those optimal parameters, let us assume the lossless setup
(i.e., $\delta=0$) and focus on only the oscillator mode, where the cavity mode is adiabatically eliminated.
The reduced dynamics is obtained by setting $d\hat q_2=0$ and
$d\hat p_2=0$ (valid because $\kappa\gg \gamma$) and then eliminating
$(\hat q_2, \hat p_2)$ from the full dynamical equation:
\begin{eqnarray}
& & \hspace*{-1em}
\label{q1 eq}
d\hat{q}_1 = -\frac{\gamma}{2} \hat q_1 dt + \omega \hat p_1 dt
-\sqrt{\gamma}d\hat Q_{\rm th},
\\ & & \hspace*{-1em}
\label{p1 eq}
d\hat{p}_1 = -\omega\hat q_1 dt -\frac{\gamma}{2} \hat p_1 dt + udt
-\frac{2\lambda}{\sqrt{\kappa}}
(\alpha_1 d\hat Q_1 + \beta_1 d\hat Q_2)
-\sqrt{\gamma}d\hat P_{\rm th},
\\ & & \hspace*{-1em}
\label{y1 eq}
dy_1 = \frac{2\alpha_2\lambda \sin\phi_1}{\sqrt{\kappa}} \hat q_1 dt
- (\alpha_1 \alpha_2 + \beta_1\beta_2 )
( \cos\phi_1 d\hat Q_1 + \sin\phi_1 d\hat P_1)
\nonumber \\ & & \hspace*{8.6em}
\mbox{}
+ (\alpha_1 \beta_2 - \beta_1\alpha_2 )
( \cos\phi_1 d\hat Q_2 + \sin\phi_1 d\hat P_2 ),
\\ & & \hspace*{-1em}
\label{y2 eq}
dy_2 = -\frac{2\beta_2\lambda \sin\phi_2}{\sqrt{\kappa}} \hat q_1 dt
+ (\alpha_1 \beta_2 - \beta_1\alpha_2 )
( \cos\phi_2 d\hat Q_1 + \sin\phi_2 d\hat P_1 )
\nonumber \\ & & \hspace*{9.3em}
\mbox{}
+ (\alpha_1 \alpha_2 + \beta_1\beta_2 )
(\cos\phi_2 d\hat Q_2 + \sin\phi_2 d\hat P_2 ),
\end{eqnarray}
where $u$ represents the magnitude of the force applied to a piezo-actuator
mounted on the oscillator, and $(\hat Q_{\rm th}, \hat P_{\rm th})$ are the
quadratures of the thermal field.
These equations lead to a rough guide for choosing the parameters, as follows.
\begin{enumerate}
\item
First, the bigger the first terms in Eqs.~\eqref{y1 eq} and \eqref{y2 eq}
become, the more information the observer gains.
Hence, it would be reasonable to make the terms $\sin\phi_1$ and $\sin\phi_2$
bigger, or equivalently take $\phi_1\approx \pi/2$ and $\phi_2\approx \pi/2$.
\item
The above choice of $(\phi_1, \phi_2)$ implies that the measurement outputs
are dominantly affected by the phase-quadrature noise $(\hat P_1, \hat P_2)$
rather than the amplitude quadrature noise $(\hat Q_1, \hat Q_2)$.
Then by squeezing $\hat P_1$, we can improve the signal to noise ratio both
in $y_1$ and $y_2$.
This means that $\theta\approx \pi/2$ would be a proper choice.
\item
The coefficients of the noise terms related to $\hat W_1=[\hat Q_1, \hat P_1]^\top$
and $\hat W_2=[\hat Q_2, \hat P_2]^\top$ in $y_1$ and $y_2$ cannot be
simultaneously reduced, because they satisfy
$(\alpha_1 \alpha_2 + \beta_1\beta_2 )^2 +
(\alpha_1 \beta_2 - \beta_1\alpha_2 )^2=1$.
Then, because $\hat W_2$ is not a tunable noise field, meaning that
$\mean{(\cos\phi_1 d\hat Q_2 + \sin\phi_1 d\hat P_2)^2}=dt$ and
$\mean{(\cos\phi_2 d\hat Q_2 + \sin\phi_2 d\hat P_2)^2}=dt$, it would
be reasonable to choose the BS parameters so that the coefficient of $\hat W_2$
is reduced;
more precisely, $\alpha_1 \beta_2 - \beta_1\alpha_2\approx 0$ if $y_1$ is
mainly used (i.e., $\alpha_2\approx 1$), or
$\alpha_1 \alpha_2 + \beta_1\beta_2\approx 0$ if $y_2$ is mainly used
(i.e., $\beta_2\approx 1$).
This leads to $(\alpha_1, \beta_1, \alpha_2, \beta_2)\approx (1, 0, 1, 0)$ or
$(\alpha_1, \beta_1, \alpha_2, \beta_2)\approx (1, 0, 0, 1)$.
\end{enumerate}
Of course, the above intuitive observations, particularly the last one, are not
necessarily true.
In fact, we have now arrived at $(\alpha_1, \beta_1)\approx (1, 0)$, but this
means that the input probe field is nearly a separable state where the squeezed
component is injected to the system, and the entanglement property is not
effectively used.
Moreover, when $(\alpha_1, \beta_1)\approx (1, 0)$, the back-action noise on
$\hat p_1$ (i.e., the fourth term in the right-hand side of Eq.~\eqref{p1 eq}) is
dominated by $d\hat Q_1$;
however, because now $\hat Q_1$ is nearly {\it anti-squeezed}
(due to $\theta\approx \pi/2$), this parameter choice induces a bigger
back-action noise.
Therefore, the parameters have to be carefully chosen, via detailed numerical
simulations taking into account the tradeoff between the back-action noise
and the signal to noise ratio for the measurement outputs $y_1$ and $y_2$.
\subsection{Effectiveness of the entanglement-assisted feedback control}
\begin{figure}
\caption{\label{Cooling optim}
(a) $\bar{n}^*$ as a function of the reflectivity of BS1, $\beta_1^2$, for several values of $\beta_2^2$ ($\lambda=0.3$, $\delta=0$). (b) The optimal $\theta$ at each $\beta_1$ for $\beta_2^2=0.8$.}
\end{figure}
First let us see if the entanglement would actually bring any advantage
to the feedback control.
Figure~\ref{Cooling optim} (a) shows $\bar{n}^*$ as a function of the
reflectivity of BS1, $\beta_1^2$, in the case $\lambda=0.3$ (weak coupling
regime) and $\delta=0$ (the system's output field has no loss).
$\bar{n}^*$ is calculated from Eq.~\eqref{min energy} together with the
steady solution $V_\infty$ of the Riccati equation \eqref{riccati}.
Furthermore, it is minimized with respect to the phase of the probe squeezed
field, $\theta$, and the phases of the two Homodyne detectors, $(\phi_1,\phi_2)$,
at each $\beta_1$;
Figure~\ref{Cooling optim} (b) illustrates the optimal $\theta$ at each $\beta_1$
for the case $\beta_2^2=0.8$.
The reflectivity $\beta_1^2$ represents how much the squeezed field $\hat W_1$
is split into two arms, which determines the amount of entanglement.
Note that when $\beta_1=0$ or $\beta_1=1$, the input fields are not entangled.
In particular, $\beta_1=1$ corresponds to the standard case where only
the coherent field is injected to the system;
hence the value of $\bar{n}^*$ in this case has the meaning of SQL, which is
now $\bar{n}^*_{\rm SQL}\approx 0.119$ as indicated in the figure (a).
The five solid curves in the figure (a) show $\bar{n}^*$ for several values of
$\beta_2^2$, the reflectivity of BS2;
recall that $\beta_2=0$ means the case of local measurement, while the
cases $\beta_2\neq 0$ correspond to the global measurement (see {\bf (iv)}
in Section 3.1).
Importantly, in all cases the minimum of $\bar{n}^*$, which is indicated by the
black box, is smaller than the SQL and is attained at a certain point of
$\beta_1\in(0,1)$, where the input field is entangled.
In particular, the most effective feedback cooling is carried out when we use
the highly entangled probe field with $\beta_1^2=0.4$ and perform the
global measurement with $\beta_2^2=0.8$, in which case the minimum of
$\bar{n}^*$ is about 0.06.
As a conclusion, the entanglement-assisted feedback control is in fact
effective and realizes further cooling of the oscillator below the SQL.
Other notable features of this system are listed below.
\begin{itemize}
\item
For the cases $\beta_1^2=0.4, 0.6, 0.8$, the optimal phase of the squeezed
field is $\theta=\pi/2$.
The optimality of $\theta=\pi/2$ was indeed expected in the second observation
in Section~4.2 (page 13), but $\beta_1$ is not nearly zero, which is not consistent
with the third observation in Section~4.2.
Therefore, the numerical solver has actually chosen a nontrivial set of parameters
that balances the back-action noise on $\hat p_1$ and the measurement noise on
$(y_1, y_2)$.
\item
The minimum of $\bar{n}^*$ is reached at $\beta_1^2\neq 0.5$ and
$\theta=\pi/2$.
This means that the maximally entangled field is not the best probe for the
estimation and feedback control; see Appendix~C.
\item
The entanglement-assisted method outperforms the control with the optimized
squeezed probe field \cite{Andersen 2016}, which corresponds to the case
$\beta_1=0$.
\end{itemize}
\subsection{Coupling strength and optimal probe}
\begin{figure}
\caption{\label{Cooling optim 3D}
$\bar{n}^*$ as a function of $\theta$ and $\beta_2$ for $\delta=0$ and several values of $\lambda$; in each panel (a)--(d), $\bar{n}^*$ is minimized with respect to $(\beta_1, \phi_1, \phi_2)$.}
\end{figure}
Here we study how much the minimum occupation number $\bar{n}^*_{\rm min}$
changes with respect to the coupling strength $\lambda$.
Figure \ref{Cooling optim 3D} shows $\bar{n}^*$ as a function of $\theta$
and $\beta_2$, for $\delta=0$ and several values of $\lambda$.
In each figure (a)-(d), $\bar{n}^*$ is already minimized with respect to
$(\beta_1, \phi_1, \phi_2)$, and the achieved $\bar{n}^*_{\rm min}$ is
shown together with $\bar{n}^*_{\rm SQL}$.
In particular, in each figure, the optimal value of $\beta_1$ has been chosen
as: (a) $\beta_1^2=0.40$, (b) $\beta_1^2=0.65$, (c) $\beta_1^2=0.55$,
and (d) $\beta_1^2=0.50$, implying that the input probe field is highly
entangled in all cases.
Hence, we end up with the same conclusion that the entanglement-assisted
feedback control cools the oscillator below the SQL and even performs better
than the case with optimized squeezed probe field.
Note here that, as implied by Eqs.~\eqref{p1 eq}, \eqref{y1 eq}, and \eqref{y2 eq},
making $\lambda$ bigger improves the signal to noise ratio in the measurement
output, but at the same time this induces a bigger back-action noise on $\hat p_1$.
Hence, $\bar{n}^*_{\rm SQL}$ does not monotonically change with respect to
$\lambda$;
interestingly, $\bar{n}^*_{\rm min}$ takes almost the same value for all
$\lambda$, which suggests that there would exist a fundamental lower
bound of $\bar{n}^*_{\rm min}$ that is independent of $\lambda$.
Another remarkable fact is that, for small values of $\lambda$, the optimal
phase of the squeezed field is $\theta=\pi/2$, as seen in the previous
subsection; however, this does not hold when $\lambda$ becomes large.
This is because, for a large $\lambda$, it is more important to reduce
the back-action noise $(2\lambda/\sqrt{\kappa})\alpha_1 d\hat Q_1$
than to improve the signal to noise ratio in the measurement process, and
thus the squeezed field with $\mean{d\hat Q_1^2}<\mean{d\hat P_1^2}$
is chosen.
\subsection{The case of lossy output field}
\label{The case of lossy output field}
\begin{figure}
\caption{\label{loss curves}
$\bar{n}^*$ as a function of $\beta_1^2$ with the same setting as Fig.~\ref{Cooling optim}, except that the loss parameter is set to $\delta^2=0.1$.}
\end{figure}
Next let us consider the case where the system's output field is subjected to
the optical loss.
Figure~\ref{loss curves} is the plot of $\bar{n}^*$ with the same setting as
in Fig.~\ref{Cooling optim} (i.e., $\lambda=0.3$ and $(\theta, \phi_1, \phi_2)$
are optimized), except that the loss parameter is now set to $\delta^2=0.1$.
As in the lossless case $\delta=0$, we find that the minimum of $\bar{n}^*$
is reached when the input probe field is entangled ($\beta_1^2\approx 0.8$)
and the global measurement ($\beta_2^2=0.8$) is performed.
However, notably, the difference between the minimum value of $\bar{n}^*$
and the SQL given at $\beta_1=1$ (i.e., how much the control performance is
improved by entanglement) is smaller than the case when $\delta=0$.
That is, the entanglement-assisted feedback is less effective if the system's
output field is lossy.
Another notable feature is that there is a regime where the entanglement-assisted
feedback control performs worse than the SQL.
This happens when $\beta_1$ takes a small value, in which case
the portion injected into the system is nearly a pure squeezed field.
This result makes sense, because, as is well known, a squeezed field is fragile
to noise and the system's output field loses more information than the
standard case, which cannot be compensated by the additional information
gained from the second path of the interferometer.
Finally Fig.~\ref{Cooling loss 3D} shows the plot of $\bar{n}^*$ as a function of
$\theta$ and $\beta_2$, for $\lambda=0.3$ and several values of $\delta$.
As in the case of Fig.~\ref{Cooling optim 3D}, $\bar{n}^*$ is already minimized
with respect to $(\beta_1, \phi_1, \phi_2)$.
Note that Fig.~\ref{Cooling loss 3D} (a) is the same as Fig.~\ref{Cooling optim 3D}
(a).
A notable point is that the optimal values of $\theta$ and $\beta_2$ in the case
$\delta^2=0.1$ are the same as those for $\delta=0$.
This means that the optimal input probe field and measurement are independent
of the system's output loss $\delta$.
This is a desirable fact, because the exact value of $\delta$ is hard to estimate
in practice, yet the same input probe field and measurement can be used
regardless of $\delta$ as long as the system's output loss is sufficiently
suppressed.
However, Figs.~\ref{Cooling loss 3D} (c) and (d) show that the probe and
measurement have to be changed when $\delta$ becomes larger.
In this sense, the optimal probe field is not robust against large output losses.
\begin{figure}
\caption{\label{Cooling loss 3D}
$\bar{n}^*$ as a function of $\theta$ and $\beta_2$ for $\lambda=0.3$ and several values of $\delta$; $\bar{n}^*$ is minimized with respect to $(\beta_1, \phi_1, \phi_2)$.}
\end{figure}
\section{Conclusion}
In this paper, we have formulated the entanglement-assisted feedback control
method for general linear quantum systems, which involves optimization of the
amount of entanglement, the phase of the probe squeezed field, and the
Homodyne measurement.
Thanks to the linear setting, the strict lower bound of the LQG cost function, which
is achievable by the ideal cheap control, can be explicitly obtained, and it is
used to evaluate the control performance.
In the detailed numerical simulation studying the cooling problem of an
opto-mechanical oscillator, it was shown that the entanglement-assisted controller
works better than the standard method without entanglement, i.e., the control
with a coherent probe field, and even outperforms the control with an optimized
squeezed probe field.
Although the improvement is not so drastic especially when the system's output
field is lossy, we expect that a significant advantage of the entanglement-assisted
method would appear for some nonlinear systems.
In fact, it was shown in \cite{Clerk 2015} that, in a different measurement
configuration, an entangled probe field can be used to significantly improve
the detection efficiency for a qubit system;
an extension of this study to the feedback control problem is an interesting
future work.
\section*{Appendix A: Solution of LQG control problem}
Here we briefly explain how to derive the solution of the LQG control problem;
see \cite{Bensoussan} for a more detailed derivation.
The essential idea is to use the {\it dynamic programming} method based on
the following {\it expected cost-to-go}:
\[
J_t[u, z]
={\mathbb E}\Big[
\int_t^T \Big( \pi(\hat{x}_s)^\top Q \pi(\hat{x}_s)
+ u_s^\top Ru_s \Big)ds ~\Big|~ \pi(\hat{x}_t)=z \Big].
\]
The goal is to obtain the minimum of this function, i.e.,
$J_t^*(z)=\min_{u[t,T]}J_t[u,z]$, with respect to the input in the time
interval $[t, T]$, denoted by $u[t,T]$.
Now we rewrite $J_t^*(z)$ in the following form:
\begin{eqnarray}
& & \hspace*{-1em}
J_t^*(z)
= \min_{u[t,T]}{\mathbb E}\Big[
\int_t^{t+dt} \Big( \pi(\hat{x}_s)^\top Q \pi(\hat{x}_s)
+ u_s^\top Ru_s \Big)ds
\nonumber \\ & & \hspace*{6em}
\mbox{}
+ \int_{t+dt}^T \Big( \pi(\hat{x}_s)^\top Q \pi(\hat{x}_s)
+ u_s^\top Ru_s \Big)ds
~\Big|~ \pi(\hat{x}_t)=z \Big]
\nonumber \\ & & \hspace*{1.6em}
= \min_{u_t}\Big\{
(z^\top Q z
+ u_t^\top Ru_t )dt + J_{t+dt}^*(z+dz) \Big\}.
\nonumber
\end{eqnarray}
Then, noting that $\pi(\hat{x}_t)$ obeys Eq.~\eqref{linear-filter} and
$d\bar{w}_t=dy_t -C\pi(\hat x_t)dt$ is the standard classical Wiener
process satisfying $d\bar{w}_td\bar{w}_t^\top =D\Theta D^\top dt$,
we see that the optimal value function $J_t^*(z)$ satisfies the
{\it Bellman equation}
\begin{eqnarray}
& & \hspace*{-1em}
\min_{u_t}\Big\{
\Big\|u_t
+\half R^{-1}F^\top\frac{\partial J_t^*(z)}{\partial z}\Big\|_R^2
-\frac{1}{4}\Big(\frac{\partial J_t^*(z)}{\partial z}\Big)^\top
FR^{-1}F^\top \frac{\partial J_t^*(z)}{\partial z}
+ \frac{\partial J_t^*(z)}{\partial t}
\nonumber \\ & & \hspace*{2em}
\mbox{}
+ z^\top Q z
+\Big(\frac{\partial J_t^*(z)}{\partial z}\Big)^\top Az
+ \half {\rm Tr}\hspace{0.07cm}\Big[ \frac{\partial^2 J_t^*(z)}{\partial z\mbox{}^2}
K_t D \Re(\Theta) D^\top K_t^\top\Big] \Big\}=0,
\nonumber
\end{eqnarray}
with the terminal condition $J_T^*(z)=0$.
Here we have defined $\|x\|_R^2=x^\top Rx$.
The optimal control input is thus given by
\begin{equation}
\label{optimal-u-V}
u_t^*=-\half R^{-1}F^\top\frac{\partial J_t^*(z)}{\partial z}.
\end{equation}
The optimal value function $J_t^*(z)$ is then determined from the
following partial differential equation:
\begin{eqnarray}
& & \hspace*{-1em}
\frac{\partial J_t^*(z)}{\partial t}
+z^\top Q z
-\frac{1}{4}\Big(\frac{\partial J_t^*(z)}{\partial z}\Big)^\top
FR^{-1}F^\top \frac{\partial J_t^*(z)}{\partial z}
+\Big(\frac{\partial J_t^*(z)}{\partial z}\Big)^\top Az
\nonumber \\ & & \hspace*{2.24em}
\mbox{}
+ \half {\rm Tr}\hspace{0.07cm}\Big[ \frac{\partial^2 J_t^*(z)}{\partial z\mbox{}^2}
K_t D \Re(\Theta) D^\top K_t^\top\Big]=0.
\nonumber
\end{eqnarray}
Now we assume that the solution is of the quadratic form
$J_t^*(z)=z^\top P_t z + \nu_t$ with $P_t\in{\mathbb R}^{2n\times 2n}$
and $\nu_t\in{\mathbb R}$;
then the above partial differential equation is reduced to
\begin{eqnarray}
& & \hspace*{-1em}
z^{{\mathsf T}}\Big(
\dot{P}_t+P_tA+A^\top P_t-P_t FR^{-1}F^\top P_t+Q \Big)z
\nonumber \\ & & \hspace*{2.24em}
\mbox{}
+ \dot{\nu}_t
+ {\rm Tr}\hspace{0.07cm}[ P_t K_t D \Re(\Theta) D^\top K_t^\top]=0.
\nonumber
\end{eqnarray}
This equality must hold for any $z\in{\mathbb R}^{2n}$, and we thus obtain
the following set of ordinary differential equations:
\[
\dot{P}_t+P_tA+A^\top P_t-P_t FR^{-1}F^\top P_t+Q=0, ~~
\dot{\nu}_t + {\rm Tr}\hspace{0.07cm}[ P_t K_t D \Re(\Theta) D^\top K_t^\top] = 0.
\]
It follows from $J_T^*(z)=0$ that the terminal conditions are $P_T=0$ and
$\nu_T=0$.
Under the assumption that the above set of equations have solutions,
the optimal controller (\ref{optimal-u-V}) is given by
\begin{equation}
\label{optimal-u}
u^*_t=-R^{-1}F^\top P_t\pi(\hat{x}_t).
\end{equation}
Moreover, we now have
\[
\mean{J_0^*(\hat{x}_0)}
= \mean{ \hat{x}_0^\top P_0 \hat{x}_0 + \nu_0}
= \mean{\hat{x}_0^\top P_0 \hat{x}_0}
+ \int_0^T {\rm Tr}\hspace{0.07cm}[ P_t K_t D \Re(\Theta) D^\top K_t^\top]dt,
\]
hence the minimum of the original cost function \eqref{min cost}
is given by
\[
J[u^*] = \lim_{T\rightarrow \infty} \frac{1}{T}\mean{J_0^*(\hat{x}_0)}
+ {\rm Tr}\hspace{0.07cm}(QV_\infty)
= {\rm Tr}\hspace{0.07cm}[ P_\infty K_\infty D \Re(\Theta) D^\top K_\infty^\top]
+ {\rm Tr}\hspace{0.07cm}(QV_\infty).
\]
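As a quick numerical sanity check of this derivation, one can integrate the finite-horizon Riccati equation for $P_t$ backward from the terminal condition $P_T=0$ and compare the result, far from the horizon, with the steady solution of the algebraic equation; the matrices in the sketch below are arbitrary toy values.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy matrices (placeholders): a stable 2x2 system with a single input.
A = np.array([[-1.0, 0.5], [-0.5, -1.0]])
F = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.01]])
Rinv = np.linalg.inv(R)

# Backward Euler integration of dot(P) + P A + A^T P - P F R^{-1} F^T P + Q = 0
# from the terminal condition P_T = 0.
T, dt = 20.0, 1e-3
P = np.zeros((2, 2))
for _ in range(int(T / dt)):
    dP = -(P @ A + A.T @ P - P @ F @ Rinv @ F.T @ P + Q)
    P = P - dt * dP            # step backward in time

P_inf = solve_continuous_are(A, F, Q, R)
print(np.max(np.abs(P - P_inf)))   # should be small far from the horizon
\end{verbatim}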
\section*{Appendix B: Condition for the cheap control \cite{Sivan,Seron Book,Seron}}
Let us consider the system whose transfer function matrix is given by
$\Xi(s)=\bar{Q}(sI-A)^{-1}F$, which we simply call the system $(A, F, \bar{Q})$.
First, if there exist a complex number $z\in{\mathbb C}$ and a nonzero vector
$u$ such that $u^\top \Xi(z)=0$ or $\Xi(z)u=0$, then $z$ is called a {\it zero}
(more precisely, it is called a transmission zero).
Then the system $(A, F, \bar{Q})$ is called {\it minimum phase}, if all the
zeros of $\Xi(s)$ have negative real part.
Next the system $(A, F, \bar{Q})$ is called {\it right invertible}, if $\Xi(s)$
has full row rank for at least one $s\in{\mathbb C}$.
In general, such a minimum phase and right invertible system is regarded
as a system easy to control;
an intuitive understanding of this fact is that there exists an ``inverse" and
``stable" system (i.e., there exists $\Xi(s)^{-1}$ and all its poles have negative
real part), and this system completely compensates $\Xi(s)$.
In fact, as mentioned in Section 2.4, for such a system there exists a
stabilizing controller that well suppresses the dynamical fluctuation of the
(estimated) system variables.
Here we prove that the opto-mechanical system examined in Section~4
actually satisfies the above condition for cheap control.
Note that the LQG problem is now formulated with the choice
\[
\bar{Q}
= \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\end{array} \right],
\]
which actually yields $Q=\bar{Q}^\top \bar{Q}={\rm diag}\{1, 1, 0, 0\}$;
see Eq.~\eqref{min energy}.
The $F$ matrix representing the actuator mechanism of the controller
can be typically chosen as follows.
First, if the oscillator can be directly manipulated via a piezoelectric
device, then $F_1=[0, 1, 0, 0]^\top$, meaning that the momentum of the
oscillator can be driven by an external force.
Another typical setup for actuation is that the control is carried out
by modulating the input probe field, in which case $F_2=[0,0,1,0]^\top$,
where especially only the $\hat q_2$ quadrature is assumed to be modulated
(it can be proven that modulating $\hat p_2$ does not affect the
condition to be fulfilled).
Then we have
\begin{eqnarray*}
& & \hspace*{-1em}
\Xi_1(s)
=\bar{Q}(sI-A)^{-1}F_1
= \frac{1}{(s+\kappa/2)^2+\omega^2}
\left[ \begin{array}{c}
\omega \\
s+\gamma/2 \\
\end{array} \right],
\\ & & \hspace*{-1em}
\Xi_2(s)
=\bar{Q}(sI-A)^{-1}F_2
= \frac{\lambda}{[(s+\kappa/2)^2+\omega^2](s+\kappa/2)}
\left[ \begin{array}{c}
\omega \\
s+\gamma/2 \\
\end{array} \right].
\nonumber
\end{eqnarray*}
Therefore, in both cases, the system $(A, F, \bar{Q})$ is minimum phase and
right invertible.
\section*{Appendix C: Logarithmic negativity}
For a two-mode Gaussian state with mean zero, its correlation property can
be completely characterized by the covariance matrix
\begin{equation}
\label{V deconposition}
V=\left[ \begin{array}{cc}
V_1 & V_2 \\
V_2^\top & V_3 \\
\end{array} \right],
\end{equation}
where $V_i$ are $2\times 2$ matrices.
In particular, the following {\it logarithmic negativity} \cite{Vidal,Plenio}
can be used as a reasonable measure of entanglement of this Gaussian state:
\begin{equation*}
E_{\cal N}={\rm max}\big\{0,~-\log(2\nu)\big\},
\end{equation*}
where $\log x$ denotes the natural logarithm of $x$, and
\[
\nu=\frac{1}{\sqrt{2}}
\sqrt{ \tilde{\Delta}
-\sqrt{ \tilde{\Delta}^2-4{\rm det}(V)} },~~~
\tilde{\Delta}={\rm det}(V_1)+{\rm det}(V_3)-2{\rm det}(V_2).
\]
Actually the state is entangled if and only if $E_{\cal N} > 0$.
In our case, the output of BS1 is an entangled Gaussian field;
particularly when $\theta=\pi/2$, the covariance (more precisely the
spectral density) matrix is given by Eq.~\eqref{V deconposition} with
\begin{eqnarray*}
& & \hspace*{-1em}
V_1 = {\rm diag}\{ \alpha_1^2 e^{r} + \beta_1^2,
\alpha_1^2 e^{-r} + \beta_1^2 \} /2,
\\ & & \hspace*{-1em}
V_2 = {\rm diag}\{ \alpha_1\beta_1(1-e^{r}), \alpha_1\beta_1(1-e^{-r}) \} /2,
\\ & & \hspace*{-1em}
V_3 = {\rm diag}\{ \beta_1^2 e^{r} + \alpha_1^2,
\beta_1^2 e^{-r} + \alpha_1^2 \} /2.
\nonumber
\end{eqnarray*}
This yields $\nu=\sqrt{d-\sqrt{d^2-1}}/2$ with
$d=2\alpha_1^2\beta_1^2(e^r+e^{-r}-2)+1$, and thus $E_{\cal N} > 0$
for all $\beta_1\in(0,1)$ and $r\neq 0$.
Hence, note that the maximally entangled field for $\theta=\pi/2$ is produced
when $\alpha_1^2=\beta_1^2=1/2$.
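The following short sketch evaluates $E_{\cal N}$ from the expressions above for several values of $\beta_1^2$ at $r=2.3$, illustrating that the BS1 output field is entangled for all $\beta_1\in(0,1)$ and most strongly so at $\beta_1^2=1/2$.
\begin{verbatim}
import numpy as np

def log_negativity(V1, V2, V3):
    # Logarithmic negativity E_N of a two-mode Gaussian state (Eqs. above)
    Dt = np.linalg.det(V1) + np.linalg.det(V3) - 2 * np.linalg.det(V2)
    V = np.block([[V1, V2], [V2.T, V3]])
    nu = np.sqrt((Dt - np.sqrt(Dt**2 - 4 * np.linalg.det(V))) / 2)
    return max(0.0, -np.log(2 * nu))

r = 2.3                                   # 10 dB squeezing, theta = pi/2
for b1sq in [0.1, 0.3, 0.5, 0.7, 0.9]:
    a1sq = 1 - b1sq
    V1 = np.diag([a1sq*np.exp(r) + b1sq, a1sq*np.exp(-r) + b1sq]) / 2
    V2 = np.diag([np.sqrt(a1sq*b1sq)*(1 - np.exp(r)),
                  np.sqrt(a1sq*b1sq)*(1 - np.exp(-r))]) / 2
    V3 = np.diag([b1sq*np.exp(r) + a1sq, b1sq*np.exp(-r) + a1sq]) / 2
    print(b1sq, log_negativity(V1, V2, V3))
\end{verbatim}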
\end{document}
\begin{document}
\title{Robust two-qubit trapped ions gates using spin-dependent squeezing}
\author{Yotam Shapira$^1$}
\author{Sapir Cohen$^2$}
\author{Nitzan Akerman$^1$}
\author{Ady Stern$^2$}
\author{Roee Ozeri$^1$}
\affiliation{\small{$^1$Department of Physics of Complex Systems\\
$^2$Department of Condensed Matter Physics\\
Weizmann Institute of Science, Rehovot 7610001, Israel}}
\begin{abstract}
Entangling gates are an essential component of quantum computers. However, generating high-fidelity gates, in a scalable manner, remains a major challenge in all quantum information processing platforms. Accordingly, improving the fidelity and robustness of these gates has been a research focus in recent years. In trapped ions quantum computers, entangling gates are performed by driving the normal modes of motion of the ion chain, generating a spin-dependent force. Even though there has been significant progress in increasing the robustness and modularity of these gates, they are still sensitive to noise in the intensity of the driving field. Here we supplement the conventional spin-dependent displacement with spin-dependent squeezing, which enables a gate that is robust to deviations in the amplitude of the driving field. We solve the general Hamiltonian and engineer its spectrum analytically. We also endow our gate with other, more conventional, robustness properties, making it resilient to many practical sources of noise and inaccuracies.
\end{abstract}
\maketitle
Two qubit entanglement gates are a crucial component of quantum computing, as they are an essential part of a universal gate set. Moreover, fault-tolerant quantum computing requires gates with fidelities above the fault-tolerance threshold \cite{aharonov2008fault}. Generating high-fidelity two-qubit gates in a robust and scalable manner remains an open challenge, and a research focus, in all current quantum computing platforms \cite{bruzewicz2019trapped,kjaergaard2020superconducting,alexeev2021quantum}.
Trapped-ion quantum computers are a leading quantum computation platform, due to their high controllability, long coherence times and all-to-all qubit connectivity \cite{postler2021demonstration,pino2021demonstration,egan2021fault}. Entanglement gates are typically generated by driving the ions with electromagnetic fields that create phonon-mediated qubit-qubit interactions. Such gates have been demonstrated with outstanding fidelities \cite{ballance2016high,gaebler2016high,clark2021high,srinivas2021high}. Moreover, in recent years there have been many theoretical proposals and experimental demonstrations \cite{roos2008ion,green2015phase,haddadfarshi2016high,manovitz2017fast,palmero2017fast,wong2017demonstration,shapira2018robust,zarantonello2018robust,leung2018robust,leung2018entangling,schafer2018fast,webb2018resilient,sutherland2019versatile,figgatt2019parallel,lu2019global,shapira2020theory,sutherland2020laser,lishman2020trapped,wang2020noise,milne2020phase,bentley2020numeric,sameti2021strong,duwe2021numerical,kang2021batch,blumel2021power,blumel2021efficient,dong2021phase,valahu2021robust,wang2022ultra,manovitz2022trapped,fang2022crosstalk,valahu2022quantum} aimed at improving the fidelity, rate, connectivity and resilience of such gates. These schemes are largely based on generating spin-dependent displacement forces on the ions which, depending on the realization, are linear or quadratic in the driving field. This results in gates which are sensitive to the field amplitude and exhibit a degradation of fidelity that is linear in field intensity noise. A widely used scheme of this type is the M\o lmer-S\o rensen (MS) gate \cite{sorensen1999quantum,sorensen2000entanglement}. Driving-field amplitude deviations arise naturally in trapped-ion systems and may come about due to intensity noise in the drive source, as well as beam pointing noise and polarization noise \cite{brown2016co,mount2016scalable}.
Here we propose a gate scheme which is resilient to deviations in the driving field's amplitude. We combine the conventional spin-dependent displacement with spin-dependent squeezing, by driving the first and second motional sidebands of the ion-crystal normal modes. We solve the resulting interaction analytically and formulate constraints on the drive which generate a resilient gate. Crucially, most constraints can be easily satisfied without any numerical optimization. We also incorporate other well-known robustness methods, resulting in a two-qubit entanglement gate which is resilient to deviations in many experimental parameters and is independent of the initial motional state, within the Lamb-Dicke regime. Our gates may be implemented using conventional spectral shaping of the driving waveform, which is straightforward to implement and common in trapped-ion systems. Our method is compatible with laser-driven gates as well as laser-free entangling gates \cite{srinivas2021high}.
\begin{figure}
\caption{Robust gate performance. Left: Fidelity of our robust gate (blue) and the conventional MS gate (red), in the presence of a deviation of the laser's Rabi frequency, $\delta\Omega$. Our gate shows a flat response that scales as $\delta\Omega^4$, yielding a high-fidelity operation even in the presence of $10\%$ errors. The MS gate exhibits a quadratic response and a fast deterioration in fidelity. The inset shows the infidelity, $1-F$, in log scale. Our method typically provides more than two orders of magnitude of improvement throughout the $10\%$ error range. Right: Population dynamics of the initial state $\ket{00}$.}
\label{figMain}
\end{figure}
Figure \ref{figMain} showcases our main results, with the fidelity (left) of our gate (blue) and the conventional MS gate (red), in the presence of deviations in the field's Rabi frequency, $\delta\Omega$. As seen, our gate shows a robust response which scales as $\delta\Omega^4$, and exhibits a high-fidelity entangling operation even with $10\%$ Rabi frequency errors. This is contrasted by the quadratic error of the MS gate. The population dynamics of the initial state $\ket{00}$ are shown (right) for the ideal case, $\delta\Omega=0$ (solid), and in the presence of a deviation with $\delta\Omega/\Omega=0.05$ (dashed). While the two scenarios exhibit different dynamics, at the gate time, $t=T$, they both converge and result in a high-fidelity Bell state (green).
Utilizing spin-dependent squeezing for entangling gates has been suggested in other contexts, such as in order to generate gates in the strong-coupling regime \cite{sameti2021strong}, or in order to generate 3-body \cite{andrade2022engineering} and $n$-body \cite{or2022nbody} interaction terms.
Below we present the Hamiltonian of interest, its solution, the formulation of constraints, and their resolution using spectral shaping. Finally, we analyze our gate's performance and feasibility.
We start with the non-interacting Hamiltonian of two trapped ions, given by,
\begin{equation}
H_0=\hbar\omega_0 J_z+\hbar\nu a^\dagger a,\label{eqH0}
\end{equation}
with $\omega_0$ the frequency separation of the relevant qubit levels of a single ion, $J_z=\left(\sigma^z_1+\sigma^z_2\right)/2$ the global Pauli-$z$ operator, such that $\sigma^z_n$ is the $z$-Pauli operator acting on the $n$'th ion, and $\nu$ the frequency of the center-of-mass normal mode of motion of the ion chain, with $a^\dagger$ its phonon creation operator. All other modes of motion are assumed to be decoupled from the ions' evolution, yet this assumption can be relaxed \cite{shapira2020theory}. The Hamiltonian in Eq. \eqref{eqH0} can trivially be used for a larger ion chain, by assuming only two ions are illuminated \cite{wang2020high,manovitz2022trapped} and that they participate equally in the coupled normal mode.
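For readers who prefer an explicit matrix representation, the following minimal sketch builds Eq. \eqref{eqH0} on a truncated Hilbert space; the phonon-number cutoff and the frequency values are placeholders.
\begin{verbatim}
import numpy as np

# Minimal sketch of Eq. (eqH0): two qubits and the center-of-mass mode,
# truncated to n_max phonons (truncation and frequencies are placeholders).
n_max = 10
omega0, nu = 2 * np.pi * 10.0e6, 2 * np.pi * 1.0e6
sz, I2, Im = np.diag([1.0, -1.0]), np.eye(2), np.eye(n_max)
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)      # phonon annihilation operator
Jz = 0.5 * (np.kron(np.kron(sz, I2), Im) + np.kron(np.kron(I2, sz), Im))
H0 = omega0 * Jz + nu * np.kron(np.kron(I2, I2), a.conj().T @ a)
print(H0.shape)                                     # (4 * n_max, 4 * n_max)
\end{verbatim}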
Without loss of generality and for concreteness we assume the ion qubit levels are coupled by a direct optical transition. The ions are driven by a multi-tone global laser field with a spectral content in the vicinity of the first and second motional sidebands. This yields the interaction Hamiltonian,
\begin{equation}
V_I=J_x \left[w_1\left(t\right) a^\dagger+ i w_2\left(t\right) \left(a^\dagger\right)^2\right]+H.c,\label{eqVI}
\end{equation}
with $w_n\left(t\right)=\sum_m\rho_{n,m}e^{i\delta_{n,m}t}$. Here $\rho_{n,m}$ and $\delta_{n,m}$ are amplitudes and frequencies determined below. Equation \eqref{eqVI} is obtained in a frame rotating with respect to $H_0$, and by driving the ion chain with the global time-dependent drive,
\begin{equation}
\begin{split}
W\left(t\right)=&-\frac{4}{\eta}\sin\left(\omega_0 t\right)\sum_m \rho_{1,m}\cos\left(\left(\nu-\delta_{1,m}\right)t\right) \\ & -\frac{8}{\eta^2} \cos\left(\omega_0 t\right)\sum_m \rho_{2,m}\sin\left(\left(2\nu-\delta_{2,m}\right)t\right),\label{eqW}
\end{split}
\end{equation}
with $\eta$ the Lamb-Dicke parameter, quantifying the coupling between qubit and motional states \cite{wineland1998experimental}. The structure of Eq. \eqref{eqW} implies that the $w_n$'s are proportional to $\Omega$, the driving field's Rabi frequency. The resulting interaction in Eq. \eqref{eqVI} is valid in terms of a rotating wave approximation (RWA) in $\Omega/\omega_0$ and a second order expansion in $\eta$. Furthermore we make use of a RWA in $\Omega/\nu$ allowing us to omit off-resonance carrier coupling terms and counter-rotating terms. Below we incorporate methods that eliminate carrier coupling terms even further \cite{shapira2018robust}. We note that counter-rotating terms still allow for an analytic solution \cite{shapira2020theory}, but are omitted here in favor of a more concise presentation. Note that the $w_n\left(t\right)$'s can be arbitrary complex time-dependent functions.
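The drive of Eq. \eqref{eqW} is fully specified by the tone lists $\{\rho_{n,m},\delta_{n,m}\}$, so it can be synthesized directly on a time grid, e.g. before being loaded onto an arbitrary-waveform generator; the sketch below does this with placeholder amplitudes, detunings, and frequencies (not to scale for an optical transition).
\begin{verbatim}
import numpy as np

def drive_waveform(t, omega0, nu, eta, tones1, tones2):
    # Evaluate W(t) of Eq. (eqW); tones_n is a list of (rho_{n,m}, delta_{n,m})
    s1 = sum(rho * np.cos((nu - d) * t) for rho, d in tones1)
    s2 = sum(rho * np.sin((2 * nu - d) * t) for rho, d in tones2)
    return -(4 / eta) * np.sin(omega0 * t) * s1 \
           - (8 / eta**2) * np.cos(omega0 * t) * s2

# Placeholder parameters (illustrative values only)
omega0, nu, eta = 2 * np.pi * 5.0e6, 2 * np.pi * 1.0e6, 0.1
t = np.linspace(0.0, 100e-6, 200001)
tones1 = [(2*np.pi*10e3,  2*np.pi*20e3),
          (2*np.pi*10e3, -2*np.pi*20e3)]     # tones near the first sideband
tones2 = [(2*np.pi*2e3,   2*np.pi*10e3)]     # tone near the second sideband
W = drive_waveform(t, omega0, nu, eta, tones1, tones2)
\end{verbatim}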
For the oscillator, the Hamiltonian in Eq. \eqref{eqVI} generates both a spin-dependent displacing term, modulated by $w_1$, and a spin-dependent squeezing term, modulated by $w_2$. In the special case of $w_2=0$ the interaction $V_I$ reduces to the MS Hamiltonian and is exactly solvable. We show below that we may still solve it for non-vanishing second sideband modulations.
There exists a known solution to general time-dependent quantum harmonic oscillators \cite{harari2011propagator}. However here the appearance of spin-dependence requires special care. We move to a frame rotating with respect to a spin-dependent squeezing by applying a unitary transformation $S\left(J_x r\left(t\right)\right)=\exp\left[\frac{J_x r}{2}\left(a^2-\left(a^\dag\right)^2\right)\right]$, with the time-dependent parameter $r\left(t\right)$, for which we assume $r\left(t=0\right)=0$. This transforms $V_I$ to $V_S=S^\dagger V_I S-i S^\dagger \partial_t S$ (see full expression in section I of the supplemental material). Choosing $w_2\in\mathbb{R}$, i.e. the spectrum of $w_2$ is symmetric around the second sideband, the term in $V_S$ that is proportional to $J_x a^2$ is,
\begin{equation}
V_{S}^{\left(J_{x}a^{2}\right)}=-i J_{x}a^{2}\left(w_{2}+\frac{1}{2}\partial_{t}r\right).\label{V_Sa2}
\end{equation}
To simplify $V_S$ we are interested in eliminating this term. This yields the trivial differential constraint, $\partial_{t}r=-2w_{2}$, solved by
\begin{equation}
r=-2\int_0^t dt^\prime w_2\left(t^\prime\right),\label{eqr}
\end{equation}
With these choices, we are left with
\begin{equation}
V_{S}=J_{x}a\left[w_{1}^{\ast}\cosh\left(J_x r\right)-w_{1}J_{x}\sinh\left(J_x r\right)\right]+H.c\label{eqVS2}
\end{equation}
Since $V_{S}$ in Eq. \eqref{eqVS2} is linear in the mode operators it is analytically solvable. Rotating back to the original frame, the resulting unitary evolution operator due to $V_I$ is,
\begin{equation}
U_{I}\left(t\right)=S\left(J_x r\left(t\right)\right)D\left(J_x\alpha\left(t\right)\right)e^{-i\left(J_{x}^{2}\left(\Phi_{2}\left(t\right)+\Phi_{4}\left(t\right)\right)+J_{x}\Phi_{3}\left(t\right)\right)}.\label{eqU}
\end{equation}
On the spin side, the evolution in Eq. \eqref{eqU} is composed of a global $J_x$ rotation with angle $\Phi_3$, and the desired qubit entangling operation, $J_x^2$ with phase $\Phi_2+\Phi_4$ (expressions for all $\Phi$'s are given below). On the oscillator side, it is composed of a spin-dependent squeezing, $S$, and a spin-dependent displacement, $D$, with $D(\alpha)=\exp\left(\alpha a^\dagger-\alpha^\ast a\right)$.
Adopting the useful conventions, $\left\{ f\right\} =\int_{0}^{t}dt_{1}f\left(t_{1}\right)$ and $\left\{ f\left\{ g\right\} \right\} =\int\limits _{0}^{t}dt_{1}\int\limits _{0}^{t_{1}}dt_{2}f\left(t_{1}\right)g\left(t_{2}\right)$, introduced in \cite{sameti2021strong}, $\alpha$ and the $\Phi$'s are given by,
\begin{align}
\alpha=&\left\{ -i\left(w_{1}\cosh\left(r\right)-J_x w_{1}^{\ast}\sinh\left(r\right)\right)\right\},
\\
\Phi_{2}=&\im\left[\left\{ w_{1}^{\ast}\cosh\left(r\right)\left\{ w_{1}\cosh\left(r\right)\right\} \right\} \right],
\\
\begin{split}
\Phi_{3}=&\im\left[\left\{ w_{1}\cosh\left(r\right)\left\{ w_{1}\sinh\left(r\right)\right\} \right\} \right] \\
+ & \im\left[\left\{ w_{1}^{\ast}\sinh\left(r\right)\left\{ w_{1}^{\ast}\cosh\left(r\right)\right\} \right\} \right],
\end{split}
\\
\Phi_{4}=&\im\left[\left\{ w_{1}\sinh\left(r\right)\left\{ w_{1}^{\ast}\sinh\left(r\right)\right\} \right\} \right],\label{eqPhi}
\end{align}
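The nested brackets above are ordinary iterated time integrals, so they are straightforward to evaluate numerically. A minimal sketch, assuming $w_1(t)$ and $r(t)$ are sampled on a uniform grid, is:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def bracket(f, t):
    # {f}(t) = int_0^t f(t') dt', returned on the same time grid
    return cumulative_trapezoid(f, t, initial=0.0)

def nested(f, g, t):
    # {f{g}} evaluated at the final grid time
    return np.trapz(f * bracket(g, t), t)

def phi2(w1, r, t):
    # Phi_2 = Im {w1* cosh(r) {w1 cosh(r)}} for sampled w1(t), r(t)
    a = w1 * np.cosh(r)
    return np.imag(nested(np.conj(a), a, t))
\end{verbatim}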
Before analyzing the results in full we note that for a small $w_2$, the leading-order contribution to the entangling phase is, $\Phi_2+\Phi_4=\im\left[\left\{ w_{1}^{\ast}\left\{ w_{1}\right\} \right\} +4\left\{ w_{1}\left\{w_{2}\right\}\left\{ w_{1}^{\ast}\left\{w_{2}\right\}\right\} \right\} \right]$, such that $\Phi_2$ scales as $\Omega^2$ and $\Phi_4$ as $\Omega^4$. This dependence is different from that of the MS scheme and its generalizations, and provides the opportunity to mitigate deviations in $\Omega$.
The form of Eq. \eqref{eqU} allows us to formulate constraints for the generation of two-qubit entangling gates, which are robust to deviations in $\Omega$, and to then choose the proper $w$'s that will satisfy these constraints. We first require that at the gate time $t=T$ there will be no residual displacement or squeezing, i.e., that $r(T)=\alpha(T)=0$, and no rotation of $J_x$, i.e., $\Phi_3(T)=0$. Explicitly, this requires,
\begin{align}
\left\{w_{1}\cosh\left(r\right)\right\}=0,\tag{C1}\label{C1}\\
\left\{w_{1}^{\ast}\sinh\left(r\right)\right\}=0\tag{C2}\label{C2},\\
r\left(t=T\right)=\left\{w_2\right\}=0,\tag{C3}\label{C3}\\
\Phi_3\left(T\right)=0\tag{C4}\label{C4}.
\end{align}
Crucially, \eqref{C1} and \eqref{C2} are required to render the gate operation independent of the initial state of the motional mode, i.e. independent of temperature.
Next, without loss of generality we choose the entanglement phase to be $\varphi=-\pi/2$, a value that rotates the computational basis to fully entangled states,
\begin{equation}\tag{C5}
\Phi_2\left(T\right)+\Phi_4\left(T\right)=\varphi=-\pi/2.\label{C5}
\end{equation}
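As a quick check that $\varphi=-\pi/2$ indeed produces a fully entangling operation, one can apply the spin part of Eq. \eqref{eqU} at $t=T$ (where $r=\alpha=\Phi_3=0$) to the computational state $\ket{00}$; the sketch below does this explicitly and returns an equal-weight superposition of $\ket{00}$ and $\ket{11}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]])
Jx = 0.5 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))
varphi = -np.pi / 2
U = expm(-1j * varphi * (Jx @ Jx))    # spin part of the evolution at t = T
psi = U @ np.array([1., 0., 0., 0.])  # acting on |00>
print(np.round(psi, 3))               # amplitudes only on |00> and |11>
\end{verbatim}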
Then, robustness to errors in $\Omega$ is provided by,
\begin{equation}\tag{C6}
\partial_\Omega\left(\Phi_2\left(T\right)+\Phi_4\left(T\right)\right)=0.\label{C6}
\end{equation}
That is, we assume a small error, $\Omega\rightarrow\Omega+\delta\Omega$, and eliminate the leading order contribution of this error to the entanglement phase. This can be generalized to next order terms. In principle similar constraints are required also for other quantities. However, we show below that they are unnecessary by construction.
Our compiled list of six constraints does not uniquely define the drives $w_1,w_2$. We analyze these constraints in terms of frequencies. All the constraints are expressed as integrals from $t=0$ to $t=T$. For these integrals to vanish, the integrands must be composed only of harmonics at non-zero integer multiples of the gate rate $\xi=2\pi/T$. The choice,
\begin{align}
w_1\left(t\right)= & \sum_{n} a_{2n+1}e^{i\xi\left(2n+1\right)t},\\
r\left(t\right)= & \sum_{n} s_{2n}\sin\left(2\xi n t\right),\label{eqSol}
\end{align}
in which $w_1$ is made of odd harmonics of the gate rate and $r$ of a sine series of even harmonics, guarantees that products of the form $w_1\cosh(r)$ and $w_1\sinh(r)$ will not have components at zero frequency, and will therefore integrate to zero. This choice guarantees, then, compatibility with the constraints \eqref{C1}--\eqref{C3}. Furthermore the choice to expand $r$ in a sine series (and not cosine) satisfies \eqref{C4} (see details in section II of the supplemental material). These considerations are independent of $\Omega$ and are therefore resilient to its possible deviations.
We are left with only two constraints, \eqref{C5}, which sets the entangling phase and \eqref{C6}, which makes this phase robust to deviation in $\Omega$. Appropriately, these can be satisfied with only two degrees of freedom. There are infinitely many solutions to these constraints. The simplest uses only $a_3$ and $s_2$ (setting all other $a$'s and $s$'s to zero). This minimal gate scheme is presented in section III of the supplemental material.
We employ a more elaborate solution, making use of $a_3$, $a_5$, $a_7$, $s_2$ and $s_4$, in order to combine this new result with previously demonstrated robustness properties: mitigation of unwanted off-resonant carrier and sideband couplings, robustness to deviations in the gate time, resilience to phonon mode heating and robustness to motional mode errors \cite{haddadfarshi2016high,shapira2018robust,webb2018resilient}. These all correspond to constraints which are linear in the $a_n$'s and $s_n$'s and are therefore straightforward to implement (see section IV of the supplemental material for further details). This yields the drive,
\begin{align}
w_1\left(t\right)= & \frac{a}{3}\left(3e^{3i\xi t}-10e^{5i\xi t}+7e^{7i\xi t}\right),\\
r\left(t\right)= & s\left(\sin\left(2\xi t\right)-\frac{1}{2}\sin\left(4\xi t\right)\right).\label{eqSolRobust}
\end{align}
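As a numerical sanity check of the spectral argument above, for the drive in Eq. \eqref{eqSolRobust} the integrals in constraints \eqref{C1}--\eqref{C3} vanish for any amplitudes $a$ and $s$; the following sketch (with placeholder amplitudes) verifies this on a time grid.
\begin{verbatim}
import numpy as np

T = 1.0
xi = 2 * np.pi / T
t = np.linspace(0.0, T, 20001)
a, s = 1.0, 1.0    # placeholder amplitudes; C1-C3 hold for any a, s

w1 = (a / 3) * (3 * np.exp(3j * xi * t)
                - 10 * np.exp(5j * xi * t)
                + 7 * np.exp(7j * xi * t))
r = s * (np.sin(2 * xi * t) - 0.5 * np.sin(4 * xi * t))

C1 = np.trapz(w1 * np.cosh(r), t)           # {w1 cosh(r)} at t = T
C2 = np.trapz(np.conj(w1) * np.sinh(r), t)  # {w1* sinh(r)} at t = T
print(abs(C1), abs(C2), abs(r[-1]))         # all approximately zero
\end{verbatim}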
\begin{figure}
\caption{Spectrum and resulting modulation used to generate our robust gate. Top: Spectrum of the first (blue) and second (green) sidebands. The amplitude is given in units of $\xi/\eta^n$, with $n$ the sideband order. Bottom: Resulting time-domain modulation of the first (blue) and second (green) sidebands. Both modulations vanish continuously at $t=0$ and $t=T$ thus mitigating off-resonance coupling to unwanted transitions.}
\label{figSpect}
\end{figure}
Thus we still have only two undetermined degrees of freedom, $a$ and $s$, in order to satisfy \eqref{C5} and \eqref{C6}. We progress by Taylor expanding the hyperbolic functions in these constraints to sixth order, yielding polynomial equations for $a$ and $s$, which are solved analytically. We then optimize these solutions numerically by directly evaluating \eqref{C5} and \eqref{C6} with a straightforward gradient descent. This yields $a=0.3608\cdot\xi$ and $s=0.7820$, which constitute a $3\%$ correction to the analytical solution (see further information in section V of the supplemental material).
The Rabi frequency required by our scheme is $\Omega_\text{robust}\approx\left(3/\eta+6/\eta^2\right)\xi$. It is more demanding than the MS Rabi frequency, $\Omega_\text{MS}=\frac{\xi}{2\eta}$, showing that robustness is afforded at the price of additional drive power. In the case of two ions, with COM mode $\eta=0.144$, a gate time of $50\,\mu$s requires total laser power of $1.6\text{ mW}$ in the usual MS gate and a power of $150\text{ mW}$ for our fully robust gate (see further details in section VI of the supplemental material). Nevertheless, in most implementations the limit on two-qubit entangling gates is fidelity and not driving power. More sophisticated solutions, using additional driving tones, can divert field amplitudes from $w_2$ to $w_1$, which reduces the required power but increases the drive complexity. We note that the second sideband drive is significant and cannot be naively treated perturbatively \cite{sameti2021strong}.
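For concreteness, the two Rabi frequencies quoted above can be compared directly; the snippet below evaluates them for $\eta=0.144$ and a $50\,\mu$s gate (it makes no attempt to estimate laser power, which depends on further experimental details).
\begin{verbatim}
import numpy as np

eta, T = 0.144, 50e-6
xi = 2 * np.pi / T
Omega_MS = xi / (2 * eta)
Omega_robust = (3 / eta + 6 / eta**2) * xi
print(Omega_MS / (2 * np.pi), Omega_robust / (2 * np.pi),
      Omega_robust / Omega_MS)   # ~69 kHz, ~6.2 MHz, ratio ~90
\end{verbatim}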
The resulting spectrum is presented in Fig. \ref{figSpect} (top) showing the spectral components modulating the first (blue) and second (green) sidebands. The amplitude of the spectrum is normalized by the Lamb-Dicke parameter to the power of the sideband order, i.e. the second sideband modulation is $\eta$ times stronger than the first sideband modulation. The corresponding time-domain modulation of the sidebands due to our drive is shown in Fig. \ref{figSpect} (bottom). We note that both modulations continuously vanish at the start and the end of the gate, which acts to reduce off-resonance coupling to unwanted transitions.
The phase space trajectories generated by our scheme are determined by the squeezing, $S\left(J_x r\right)$, and displacement, $D\left(J_x \alpha\right)$, operators in Eq. \eqref{eqU}. We note that due to the appearance of $J_x$ in $D$ and $S$, the states $\ket{++}$ and $\ket{--}$ follow different phase space trajectories. This also occurs in the MS gate; however, here we also have $J_x$ operators in $\alpha$, hence the trajectories are not simply reflected about the origin, as in the MS case. We calculate the phase space trajectories by using the identity $S\left(r\right)D\left(\alpha\right)=D\left(\gamma\right)S\left(r\right)$, with $\gamma=\alpha\cosh\left(r\right)-\alpha^\ast\sinh\left(r\right)$, such that the state is first squeezed by $r$ and then displaced by $\gamma$. We note that the MS phase-space intuition for the spin-dependent displacement gate is not valid here, i.e. the entangling phase is not proportional to the area enclosed by the phase space trajectory.
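A minimal sketch of this construction for the $J_x=+1$ (i.e. $\ket{++}$) branch, using the robust drive and the amplitudes quoted above, is given below; the conventions follow Eqs. \eqref{eqPhi} and \eqref{eqSolRobust}, and the result is intended only as an illustration of the squeeze-then-displace picture.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

T = 1.0
xi = 2 * np.pi / T
t = np.linspace(0.0, T, 20001)
a, s = 0.3608 * xi, 0.7820     # amplitudes quoted in the text
w1 = (a / 3) * (3 * np.exp(3j * xi * t)
                - 10 * np.exp(5j * xi * t)
                + 7 * np.exp(7j * xi * t))
r = s * (np.sin(2 * xi * t) - 0.5 * np.sin(4 * xi * t))

# alpha(t) on the J_x = +1 branch: { -i (w1 cosh r - w1* sinh r) }
integrand = -1j * (w1 * np.cosh(r) - np.conj(w1) * np.sinh(r))
alpha = cumulative_trapezoid(integrand, t, initial=0.0)
gamma = alpha * np.cosh(r) - np.conj(alpha) * np.sinh(r)  # squeeze, then displace
# Re(gamma), Im(gamma): phase-space path; exp(r)/2, exp(-r)/2: x and p spreads
\end{verbatim}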
\begin{figure}
\caption{Motion in phase space. Left: phase-space displacement of the $\ket{++}$ state (solid) and the $\ket{--}$ state (dashed). Right: standard deviations of $x$ (solid) and $p$ (dashed) for the $\ket{++}$ state.}
\label{figPhase}
\end{figure}
The phase space displacement of $\ket{++}$ is presented in Fig. \ref{figPhase} (left, solid). The same evolution is shown for $\ket{--}$ (dashed). The trajectories are reflected around the $x$ axis and time reversed. Indeed, using $r\left(T-t\right)=r\left(t\right)$ and $w_1\left(T-t\right)=w_1^\ast\left(t\right)$, this is readily confirmed. Squeezing by $r$ changes the standard deviations of position and momentum, $\Delta x$ and $\Delta p$, to $e^{r}/2$ and $e^{-r}/2$, respectively. Indeed, the figure also shows the standard deviations (right) of $x$ (solid) and $p$ (dashed) for the $\ket{++}$ state, exhibiting non-trivial dynamics. Since $r$ is real, the displacement and standard deviations along both axes completely define the phase-space motion.
The form of the evolution operator in Eq. \eqref{eqU}, together with known phase-space identities \cite{gerry2004introductory}, allow us to calculate the gate fidelity. Specifically, we calculate the overlap of the state generated by our gate with the ideal case, assuming the initial state is $\ket{00}$, at the motional ground state (see details in section VII of the supplemental material). This is used to calculate the gate fidelity in presence of Rabi frequency deviations, $\delta\Omega$ shown in Fig. \ref{figMain} (left).
\begin{figure}
\caption{Additional robustness properties of our gate (blue) compared to the MS gate (red). The insets show the same data recast as infidelity in log scale. Left: Fidelity in presence of gate timing errors, $\delta T/T$. Our gate shows a fourth order, wide response enabling high-fidelity operation even in presence of $5\%$ errors. Right: Fidelity in presence of motional mode frequency errors, $\delta\nu/\xi$. Similarly, our gate exhibits a wide high fidelity region. As expected, the fidelity is not symmetric around the peak.}
\label{figFid2}
\end{figure}
Moreover, as previously demonstrated, the form of the drive in Eq. \eqref{eqSolRobust} ensures that our gate is robust to additional errors and noise. Indeed, Fig. \ref{figFid2} shows our gate fidelity in the presence of gate time deviations, $\delta T$ (left), and motional mode frequency errors, $\delta\nu$ (right). For both of these errors our gate exhibits high fidelity (blue) which scales favorably compared to the MS gate (red).
In conclusion, we have used spin-dependent squeezing in order to propose a two-qubit entangling gate for trapped-ion qubits, which is resilient to deviations in the driving field intensity. We have also supplemented our gate with more conventional, previously demonstrated, robustness properties, making it also robust to gate timing errors, motional mode heating and secular frequency drifts, as well as mitigating coupling to unwanted transitions such as the carrier transition and off-resonance sidebands. We do so by generating constraints, which can then be satisfied, almost entirely, with spectral considerations in an analytic fashion. Our new gate can be readily incorporated into the trapped-ion quantum toolbox.
We thank David Schwerdt for helpful discussions. This work was supported by the Israeli Science Foundation, the Israeli Ministry of Science Technology and Space, the Minerva Stiftung, the European Union’s Horizon 2020 research and innovation programme (Grant Agreement LEGOTOP No. 788715), the DFG (CRC/Transregio 183, EI 519/7-1), ISF Quantum Science and Technology (2074/19).
\end{document}
\begin{document}
\title[Inverse problem MGT]{An inverse problem for Moore--Gibson--Thompson equation arising in high intensity ultrasound}
\author{R. Arancibia}
\address{R. Arancibia, Universidad T\'ecnica Federico Santa Mar\'ia, Departamento de Matem\'atica, Casilla 110-V, Valpara\'iso, Chile}
\email{[email protected]}
\author{R. Lecaros}
\address{R. Lecaros, Universidad T\'ecnica Federico Santa Mar\'ia, Departamento de Matem\'atica, Casilla 110-V, Valpara\'iso, Chile}
\email{[email protected]}
\author{A. Mercado}
\address{A. Mercado, Universidad T\'ecnica Federico Santa Mar\'ia, Departamento de Matem\'atica, Casilla 110-V, Valpara\'iso, Chile}
\email{[email protected]}
\author{S. Zamorano}
\address{S. Zamorano, University of Santiago of Chile (USACH), Faculty of Science, Mathematics and Computer Science Department, Casilla 307, Correo 2, Santiago, Chile.}
\email{[email protected]}
\thanks{R. Arancibia was partially supported by the Direcci\'on de Postgrados y Programas (DPP) of the U. T\'ecnica Federico Santa Mar\'ia. R. Lecaros was partially supported by FONDECYT (Chile) Grant NO: 11180874. The work of A. Mercado was partially supported by FONDECYT (Chile) Grant NO: 1171712. The second and third authors were partially supported by BASAL Project, CMM - U. de Chile. S. Zamorano was supported by the FONDECYT (Chile) Postdoctoral Grant NO: 3180322 and by ANID PAI Convocatoria Nacional Subvenci\'on a la Instalaci\'on en la Academia Convocatoria 2019 PAI 77190106.}
\begin{abstract}
In this article we study the inverse problem of recovering a space-dependent coefficient of the Moore--Gibson--Thompson (MGT) equation from knowledge of the trace of the solution on some open subset of the boundary. We obtain the Lipschitz stability for this inverse problem, and we design a convergent algorithm for the reconstruction of the unknown coefficient. The techniques used are based on Carleman inequalities for wave equations and properties of the MGT equation.
\end{abstract}
\maketitle
\section{Introduction}
Let $\Omega\subseteq\mathbb R^{N}$ be a nonempty bounded open set (for $N=2$ or $N=3$), with a smooth boundary $\Gamma$, and let $T>0$. We consider the MGT equation
\begin{align}\label{eqmain2}
\left\{
\begin{array}{ll}
\tau u_{ttt}+ \alpha u_{tt}-c^2\Delta u -b\Delta u_{t}=f, &\Omega\times(0,T),\\
u=0, & \Gamma \times (0,T), \\
u(\cdot,0) = u_0, \medspace u_t(\cdot,0) = u_1, \medspace u_{tt}(\cdot,0) = u_2, &\Omega,
\end{array}
\right.
\end{align}
where $\alpha \in L^\infty(\Omega)$, $c \in \mathbb R$ and $\tau,b>0$.
This equation arises as a linearization of a model for wave propagation in viscous thermally relaxing fluids. In those cases, the space-dependent coefficient $\alpha$ depends on the viscosity of the fluid \cite{kaltenbacher2012exponential}. This third-order-in-time equation has been studied by several authors from various points of view. We can mention, among others, the works \cite{ kaltenbacher2015mathematics, kaltenbacher2011wellposedness, kaltenbacher2012well, liu2014inverse, marchand2012abstract, pellicer2019optimal, lizama2019controllability} for a variety of problems related to this equation.
In particular, one interesting characteristic of this equation is that the structural damping $b$ plays a crucial role for the well-posedness, in contrast with second-order equations with damping ($\tau=0$ and $\alpha>0$ in \eqref{eqmain2}). For instance, in \cite{kaltenbacher2011wellposedness} it is proved that, if $b=0$ and $\alpha$ is a positive constant, the associated evolution does not admit an infinitesimal generator of a semigroup, in contrast with second-order equations, where the structural damping does not affect the well-posedness of the equation.
The parameter $\gamma:=\alpha-\frac{\tau c^2}{b}$ gives relevant information regarding the stability of the system. If $\gamma>0$, the group associated to the equation is exponentially stable, and for $\gamma=0$, the group is conservative; see for instance \cite{marchand2012abstract}. On the other hand, Conejero, Lizama and Rodenas \cite{conejero2015chaotic} proved that the one-dimensional equation exhibits a chaotic behavior if $\gamma<0$.
Also, for the case in which $\alpha$ is given by a function depending on space and time, the well-posedness and the exponential decay were proved by Kaltenbacher and Lasiecka in \cite{kaltenbacher2012exponential}.
Concerning the well-posedness of the system \eqref{eqmain2}, it is known (see \cite[Theorem 2.2]{kaltenbacher2012exponential}) that, given a coefficient $\alpha\in L^{\infty}(\Omega)$ and data satisfying
\begin{align}\label{dat}
(u_0,u_1,u_2)\in (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega),\;
f\in L^2(0,T;L^2(\Omega)),
\end{align}
the system \eqref{eqmain2} admits a unique weak solution $(u,u_t,u_{tt})$ satisfying
$$
(u,u_{t},u_{tt})\in C([0,T]; (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega)).
$$
In this article, we study the inverse problem of recovering the unknown space-dependent coefficient $\alpha = \alpha(x)$, the frictional damping term, from partial knowledge of some trace of the solution $u$ at the boundary, namely,
\begin{align*}
\frac{\partial u}{\partial n}\mbox{ on }\Gamma_0\times(0,T),
\end{align*}
where $\Gamma_0 \subset \Gamma$ is a relatively open subset of the boundary, called the observation region, and $n$ is the outward unit normal vector on $\Gamma$. We will often write $u(\alpha)$ to denote the dependence of $u$ on the coefficient $\alpha$.
More precisely, in this paper we study the following properties of the stated inverse problem:
\begin{itemize}
\item {\bf Uniqueness:}
\begin{align*}
\frac{\partial u(\alpha_1)}{\partial n} = \frac{\partial u(\alpha_2)}{\partial n}
\mbox{ on }\Gamma_0\times(0,T)\mbox{ implies }\alpha_1=\alpha_2\mbox{ in }\Omega.
\end{align*}
\item {\bf Stability:}
\begin{align*}
\|\alpha_1-\alpha_2\|_{X(\Omega)}\leq C\left\|\frac{\partial u(\alpha_1)}{\partial n} - \frac{\partial u(\alpha_2)}{\partial n}\right\|_{Y(\Gamma_0)},
\end{align*}
for some appropriate spaces $X(\Omega)$ and $Y(\Gamma_0)$.
\item {\bf Reconstruction:} Design an algorithm to recover the coefficient $\alpha$ from the knowledge of $\displaystyle \frac{\partial u(\alpha)}{\partial n}$ on $\Gamma_0$.
\end{itemize}
The first part of this work is concerned with the uniqueness and stability issues of the inverse problem. We obtain a stability result, which directly implies a uniqueness one, under certain conditions on $\alpha$, $\Gamma_0$ and the time $T$. We use the Bukhgeim--Klibanov method, which is based on the so-called Carleman estimates. We prove a Carleman estimate for the MGT equation, which is based on the Carleman inequality for the wave operator given in \cite{baudouin2013global}.
The second part of this work is focused on giving a constructive and iterative algorithm which allows us to find the coefficient $\alpha$ from the knowledge of the additional data $\frac{\partial u}{\partial n}$ on the observation domain $\Gamma_0$. For that, we study an appropriate functional, and we show that this functional admits a unique minimizer on a suitable domain. Using these results, we prove the convergence of an iterative algorithm. We refer to Section \ref{s4} for details. This algorithm is adapted from \cite{baudouin2013global}, where an algorithm for recovering zero-order terms in the wave equation was introduced. We can also mention the works of Beilina and Klibanov \cite{beilina2012approximate, beilina2015globally}, where the authors studied the reconstruction of a coefficient in a hyperbolic equation using the Carleman weight.
The remainder of this paper is organized as follows. In Section \ref{s2-1} we present our main results: Theorem \ref{mainresult}, which establishes the stability property of our inverse problem, and a Carleman-type estimate, which is contained in Theorem \ref{observability}. In Section \ref{s2} we present some auxiliary results on the MGT equation which are needed for the inverse problem; in particular, we prove a hidden regularity property for the MGT equation. In Section \ref{s3} we prove the main results of our work, namely Theorems \ref{mainresult} and \ref{observability}. Finally, in Section \ref{s4} we focus on the algorithm for the reconstruction of the coefficient $\alpha$ and we prove its convergence.
\section{Statement of the main results}\label{s2-1}
In this section we state our main results concerning the inverse problem proposed in the Introduction. In order to state the precise result that we obtain, we consider the following set of admissible coefficients:
\begin{align}\label{asuma}
{\mathcal A}_M = \left\{ \alpha \in L^\infty(\Omega), \quad \frac{c^2}{b} \leq \alpha(x) \leq M \quad \forall x\in\overline{\Omega} \right\},
\end{align}
and the geometrical assumptions, sometimes referred to as the Gamma-condition of Lions or the multiplier condition:
\begin{align}\label{asumb}
\exists x_0\notin\Omega\mbox{ such that }\Gamma_0\supset\{x\in\Gamma: \ (x-x_0)\cdot n\geq 0\},
\end{align}
and
\begin{align}\label{asumc}
T>\sup_{x\in\Omega}|x-x_0|.
\end{align}
Henceforth we will set $\tau=1$ for simplicity.
Our main result concerns the stability of the inverse problem:
\begin{theorem}\label{mainresult}
For $\Gamma_0 \subset \Gamma$, $M>0$ and $T >0$ satisfying \eqref{asumb}--\eqref{asumc}, suppose there exists $\eta >0$ such that
\begin{align}\label{mineta}
|u_2| \geq \eta > 0 \quad \mbox{a.e. in } \Omega,
\end{align}
and $\alpha_2 \in \mathcal{A}_M$ is such that the unique solution $u(\alpha_2)$ of \eqref{eqmain2} satisfies
\begin{align}\label{regsol}
u(\alpha_2) \in H^3(0,T;L^{\infty}(\Omega)).
\end{align}
Then there exists a constant $C>0$ such that
\begin{align}\label{stabi}
C^{-1}\|\alpha_1-\alpha_2\|_{L^2(\Omega)}^2
\leq
\left\| \frac{\partial u(\alpha_1)}{\partial n} - \frac{\partial u(\alpha_2)}{\partial n} \right\|_{H^2(0,T;L^2(\Gamma_0))}^2
\leq C\|\alpha_1-\alpha_2\|_{L^2(\Omega)}^2
\end{align}
for all $\alpha_1 \in \mathcal{A}_M$.
\end{theorem}
Let us mention some comments about Theorem \ref{mainresult}.
\begin{remark} \label{reg}
The hypothesis $u(\alpha_2)\in H^3(0,T;L^{\infty}(\Omega))$ in Theorem \ref{mainresult} is satisfied if more regularity is imposed on the data. For instance, taking $m>\frac{N}{2}+1$, it is enough to take $(u_0,u_1,u_2)\in (H^{m+2}(\Omega)\times H^{m+1}(\Omega)\times H^m(\Omega))$, $\alpha_2\in H^{m-1}(\Omega)$, $f\equiv 0$ and appropriate boundary compatibility conditions. Indeed, by Theorem 2.2 in \cite{kaltenbacher2012exponential}, we obtain
$$u=u(\alpha_2)\in C([0,T]; H^{m+2}(\Omega))\cap C^1([0,T]; H^{m+1}(\Omega))\cap C^2([0,T]; H^{m}(\Omega)).$$
Then, from equation \eqref{eqmain2} and taking into account that, for $s>\frac{N}{2}$, the Sobolev space $H^{s}(\Omega)$ is an algebra, we have that $(u_{tt}(\cdot,0), u_{ttt}(\cdot,0), u_{tttt}(\cdot,0))\in (H^{m}(\Omega)\times H^{m-1}(\Omega)\times H^{m-2}(\Omega))$. Therefore, using again Theorem 2.2 in \cite{kaltenbacher2012exponential}, we deduce that $u_{tt}\in C([0,T]; H^{m}(\Omega))\cap C^1([0,T]; H^{m-1}(\Omega))\cap C^2([0,T]; H^{m-2}(\Omega))$. Hence
\begin{align*}
u_{ttt}\in C([0,T];H^{m-1}(\Omega))\cap C^1([0,T];H^{m-2}(\Omega)),
\end{align*}
and using Sobolev's embedding theorem, we get that $u_{ttt} \in L^2(0,T;L^{\infty}(\Omega))$.
\end{remark}
\begin{remark}
The inverse problem studied in this paper was previously considered by Liu and Triggiani \cite[Theorem 15.5]{liu2014inverse}. They considered $\alpha\in H^{m}(\Omega)$ and initial data $(u_0,u_1,u_2)\in (H^{m+2}(\Omega)\times H^{m+1}(\Omega)\times H^m(\Omega))$ with $m>\frac{N}{2}+2$. By using Carleman estimates for a general hyperbolic equation, the authors proved global uniqueness of any damping coefficient $\alpha$ with boundary measurement given by
\begin{align*}
\frac{c^2}{b}\frac{\partial u}{\partial n}+\frac{\partial u_t}{\partial n},\quad \mbox{on }\Gamma_0\times [0,T],
\end{align*}
and the initial data is supposed to satisfy \eqref{mineta} and
\begin{align}\label{extracondition}
\frac{c^2}{b}u_0(x)+u_1(x)=0, \quad x\in\Omega.
\end{align}
In this paper, using an appropriate Carleman inequality and the method of Bukhgeim--Klibanov, we obtain stability around any regular state, under the hypothesis $m>\frac{N}{2}+1$ and without the additional assumption \eqref{extracondition}.
\end{remark}
\begin{remark}
The hypotheses \eqref{asumb} and \eqref{asumc} on $\Gamma_0$ and $T$ typically arise in the study of stability or observability inequalities for the wave equation; see \cite{ho}, where the multiplier method is used, or \cite{fursikov1996controllability, zhang2000explicit}, where some observability inequalities are obtained from Carleman estimates. These hypotheses provide a particular case of the geometric control condition stated in \cite{bardos1992sharp}.
\end{remark}
\begin{remark}
The positivity assumption on $|u_2|$ appearing in Theorem \ref{mainresult} is classical when applying the Bukhgeim--Klibanov method and Carleman estimates for inverse problems with only one boundary measurement; see \cite{baudouin2010lipschitz, liu2011global, yamamoto1999uniqueness}.
\end{remark}
As we mentioned before, in order to study the stated inverse problem, we use global Carleman estimates and the method of Bukhgeim--Klibanov, introduced in \cite{BK}. To state our Carleman estimates precisely, we shall need the following notations. Assume that $\Gamma_0$ satisfies \eqref{asumb} for some $x_0\in \mathbb R^{N}\setminus\overline{\Omega}$. For $\lambda>0$, we define the weight function
\begin{align}\label{4-SZ}
\varphi_{\lambda}(x,t)=e^{\lambda\phi(x,t)}, \quad (x,t)\in\Omega\times (-T,T),
\end{align}
where
\begin{align}\label{5-SZ}
\phi(x,t)=|x-x_0|^2-\beta t^2+M_0
\end{align}
for some $\beta \in (0,1)$ to be chosen later, and for some $M_0$ such that $\phi \geq1$, for example any constant satisfying $M_0 \geq \beta T^2 +1$.
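For intuition, note that $\phi$, and hence $\varphi_{\lambda}$, attains its maximum in time at $t=0$ and decreases as $|t|$ grows; this is the feature exploited by the Bukhgeim--Klibanov method below. A small numerical illustration (all parameter values are placeholders) is the following.
\begin{verbatim}
import numpy as np

# Placeholder parameters: beta in (0,1), M0 >= beta*T^2 + 1, x0 outside Omega.
lam, beta, M0, T = 1.0, 0.5, 2.0, 1.0
x0 = np.array([2.0, 0.0])

def phi(x, t):
    return np.sum((x - x0)**2) - beta * t**2 + M0

def varphi(x, t):
    return np.exp(lam * phi(x, t))

x = np.array([0.3, -0.1])
print(varphi(x, 0.0), varphi(x, 0.9 * T))  # largest at t = 0
\end{verbatim}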
To prove Theorem \ref{mainresult}, we shall use the following Carleman estimate.
\begin{theorem}\label{observability}
Suppose that $\Gamma_0$ and $T$ satisfy \eqref{asumb}, \eqref{asumc}. Let $M>0$ and $\alpha\in \mathcal{A}_{M}$. Let $\beta\in (0,1)$ be such that
\begin{align}\label{TC22}
\beta T>\sup_{x\in\Omega} |x-x_0 |.
\end{align}
Then, there exist $s_0>0$, $\lambda>0$ and a positive constant $C$ such that
\begin{multline}\label{1.133}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|y_{tt}(\cdot,0)|^2dx+s\lambda c^4\int_0^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}(|y_{t}|^2+|\nabla y|^2)dxdt\\ +s^3\lambda^3c^4\int_0^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}^3|y|^2dxdt + s\lambda \int_0^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}(|y_{tt}|^2+|\nabla y_{t}|^2)dxdt\\ +s^3\lambda^3\int_0^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}^3|y_{t}|^2dxdt
\leq
C \int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda} }|Ly|^2dxdt
\\+Cs\lambda\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(|\nabla y_{t}\cdot n|^2+c^4|\nabla y\cdot n|^2\right)d\sigma dt,
\end{multline}
for all $s\geq s_0$ and for all $y\in L^2(0,T;H_0^1(\Omega))$ satisfying $Ly:=y_{ttt}+ \alpha y_{tt}-c^2\Delta y -b\Delta y_{t} \in L^2(\Omega\times(0,T))$, $y(\cdot,0)=y_{t}(\cdot,0)=0$ in $\Omega$, and $y_{tt}(\cdot,0)\in L^2(\Omega)$.
\end{theorem}
Let us mention that, in order to obtain estimate \eqref{1.133}, we do not follow the classical procedure of decomposing the differential operator $Ly$ of the MGT system. Instead, we use in an appropriate way the well-known Carleman estimate for the wave operator, from which we are able to obtain \eqref{1.133} thanks to the fact that the initial conditions $y(\cdot, 0)$ and $y_t(\cdot, 0)$ are required to vanish. For instance, this result is not enough to obtain controllability, but this is coherent with the fact that the MGT equation has poor control properties: in \cite{lizama2019controllability} it is proved that interior null controllability of this system does not hold, and hence boundary null controllability fails as well. A similar idea was considered in \cite{yamamoto2003one}, where a Carleman estimate for the Laplace operator was used to prove the unique continuation property for a linearized Benjamin--Bona--Mahony equation.
The Bukhgeim--Klibanov method and Carleman estimates have been widely used for obtaining stability of coefficients with one-measurement observations. Concerning inverse problems for wave equations with boundary observations, in \cite{PuelYama} the problem of recovering a source term of the equation is studied, \cite{Yama99} deals with the problem of recovering a coefficient of the zero-order term, and \cite{Bella04} concerns the recovery of the main coefficient. In addition, we can mention the works \cite{MR1964256, MR3774702} related to coefficient inverse problems for hyperbolic equations. We refer to \cite{BellaYama} for an account of classic and recent results concerning the use of Carleman estimates in the study of inverse problems for hyperbolic equations.
\section{Auxiliary results}\label{s2}
In this section, we state and prove some auxiliary results concerning estimates for the Laplacian of a solution of \eqref{eqmain2} and a hidden regularity estimate for the solution of the MGT equation.
\subsection{Bound of the Laplacian of the solutions}
From now on, throughout the article, we define
\begin{align} \label{gamma}
\gamma(x):=\alpha(x)-\frac{c^2}{b}.
\end{align}
Let us note that $\alpha\in \mathcal{A}_{M}$ if and only if $0 \leq \gamma \leq M$ in $\overline{\Omega}$. We also define the energy
\begin{align}\label{1.3}
E_e(y):=\frac{b}{2}\|\nabla y\|_{L^2(\Omega)}^2+\frac{1}{2}\|y_t\|_{L^2(\Omega)}^2.
\end{align}
In order to prove our main results, some technical estimates are necessary. One of them is the following:
\begin{lemma}\label{Energy}
Let $b>0$ and $M>0$ be such that $\alpha\in\mathcal{A}_{M}$. Then there exists $C>0$ such that the total energy
\begin{align} \label{defener}
\overline{E}(t):=E_e(u_t(t))+E_e(u(t)),
\end{align}
satisfies
\begin{align*}
\overline{E}(t)\leq C\left(\overline{E}(0)+\|f\|^2_{L^2(0,T;L^2(\Omega))}\right),\quad t\in [0,T],
\end{align*}
for every $(u_0,u_1,u_2)\in (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega)$ and $f\in L^2(0,T;L^2(\Omega))$, where $u$ is the unique solution of \eqref{eqmain2}.
\end{lemma}
\begin{proof}
Without loss of generality, we assume that $b=1$. Then, the equation
$$u_{ttt}+\alpha(x) u_{tt}-c^2 \Delta u-b\Delta u_t=f$$
can be written as follows (recall the definition of $\gamma$ in \eqref{gamma}):
\begin{align}\label{1.6}
Lu:=L_0u_t+c^2L_0u+\gamma(x)u_{tt}=f,
\end{align}
where $L_0$ is the wave operator given by
\begin{align}\label{1.7}
L_0:=\partial_{t}^2-\Delta.
\end{align}
Let us multiply equation \eqref{1.6} by $u_{tt}(t)+c^2 u_t(t) \in L^2(\Omega)$; after integrating on $\Omega$, we deduce that
\begin{align*}
\frac{d}{dt}E_e(u_t+c^2u)+\int_\Omega\gamma u_{tt}(u_{tt}+c^2u_t)=\int_\Omega f(u_{tt}+c^2u_t),
\end{align*}
thus we have
\begin{align*}
\frac{d}{dt}E_e(u_t+c^2u)+\frac{c^2}{2}\frac{d}{dt}\|\gamma^{1/2}u_{t}\|^2_{L^2(\Omega)} \leq \frac{1}{2}\|f\|^2_{L^2(\Omega)}+E_e(u_t+c^2 u).
\end{align*}
Using Gronwall's inequality, there exists a constant $C>0$ such that
\begin{align}\label{eqEner01}
E_e(u_t+c^2u)+\frac{c^2}{2}\|\gamma^{1/2}u_{t}\|^2_{L^2(\Omega)} \leq C (\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)),\;\forall t\in [0,T].
\end{align}
On the other hand, a direct computation gives us
\begin{align}\label{1-SZ}
E_e(u_t+c^2u)=E_e(u_t)+c^4E_e(u)+c^2\frac{d}{dt}E_e(u),
\end{align}
and replacing \eqref{1-SZ} in \eqref{eqEner01}, we have
\begin{align*}
c^2\frac{d}{dt}E_e(u)\leq C (\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)).
\end{align*}
Hence, integrating, we obtain that there exists a constant $C>0$ such that
\begin{align}\label{eqEner02}
E_e(u)\leq C (\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)),\;\;\;\forall t\in [0,T].
\end{align}
Finally, if we take $\varepsilon<1$, we observe that
\begin{align}\label{2-SZ}
c^2\frac{d}{dt}E_e(u)=c^2\int_\Omega (u_{tt}u_t+\nabla u\cdot\nabla u_t)\geq -\varepsilon^2 E_e(u_t)-\frac{c^4}{\varepsilon^2}E_e(u);
\end{align}
replacing \eqref{2-SZ} in \eqref{eqEner01} and using \eqref{eqEner02}, we obtain that there exists a constant $C>0$ such that
\begin{align}\label{eqEnerFinal}
E_e(u_t) \leq C (\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)),\;\forall t\in [0,T],
\end{align}
which, together with \eqref{eqEner02}, concludes the proof.
\end{proof}
\begin{lemma}\label{PropLaplaciano}
Let $b=1$ and $M>0$ be such that $\alpha\in\mathcal{A}_{M}$. Let $(u,u_{t},u_{tt})$ be the unique solution of \eqref{eqmain2} with data $(u_0,u_1,u_2)\in (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega)$ and $f\in L^2(0,T; L^2(\Omega))$. Then, the term $\Delta u(t)$ can be bounded as follows:
\begin{align*}
\|\Delta u(t)\|_{L^2(\Omega)}^2\leq C \left(\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)+\|\Delta u_0\|_{L^2(\Omega)}^2\right),\;\forall t\in[0,T].
\end{align*}
\end{lemma}
\begin{proof}
Since $u_{tt}(t),\Delta u(t) \in L^2(\Omega)$, we may multiply equation \eqref{1.6} by $L_0u$; after integrating on $\Omega$, we deduce that
\begin{align}\label{1.61}
\frac{d}{dt}\|L_0u(t)\|_{L^2(\Omega)}^2+2c^2\|L_0u(t)\|_{L^2(\Omega)}^2=2\langle f(t)-\gamma u_{tt}(t),L_0u(t)\rangle_{L^2(\Omega)}.
\end{align}
By a standard argument, from \eqref{1.61} we immediately obtain
\begin{align}\label{1.62}
\frac{d}{dt}\|L_0u(t)\|_{L^2(\Omega)}^2\leq \frac{1}{c^2}\|f(t)-\gamma u_{tt}(t)\|^2_{L^2(\Omega)}.
\end{align}
Integrating \eqref{1.62} from $0$ to $t>0$, we obtain that
\begin{align*}
\|L_0u(t)\|_{L^2(\Omega)}^2\Big|_{0}^{t}\leq \frac{1}{c^2}\int_0^{t}\|f(\tau)-\gamma u_{tt}(\tau)\|^2_{L^2(\Omega)} d\tau.
\end{align*}
Then, we have
\begin{multline*}
\|L_0u(t)\|_{L^2(\Omega)}^2-\|L_0u(t)\|_{L^2(\Omega)}^2\Big|_{t=0}\leq
\frac{2}{c^2}\int_0^{t}\|f(\tau)\|^2_{L^2(\Omega)}d\tau+\frac{2}{c^2}\int_0^{t}\|\gamma u_{tt}(\tau)\|^2_{L^2(\Omega)}d\tau,
\end{multline*}
and then, using Lemma \ref{Energy}, we obtain the desired estimate.
\end{proof}
\subsection{Hidden regularity}
We observe that the inverse problem considered in this paper requires the normal derivative of the solution to be well defined on the boundary. It is well known that the wave equation satisfies certain extra regularity, called \emph{hidden regularity} \cite{lions1988controlabilite}. It is natural to expect an analogous result for the Moore--Gibson--Thompson equation, due to its hyperbolic nature \cite{kaltenbacher2011wellposedness}. In the following result, using the multiplier method, we obtain a hidden regularity property for the solutions of this equation.
\begin{proposition}\label{Regularity}
The unique solution $(u,u_{t},u_{tt})\in C([0,T];(H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega))$ of \eqref{eqmain2} satisfies
\begin{align}\label{1.5}
\frac{\partial u}{\partial n}\in H^1(0,T;L^2(\Gamma)).
\end{align}
Moreover, the normal derivative satisfies
\begin{multline}\label{1.51}
\left\|\frac{\partial u}{\partial n}\right\|_{H^1(0,T;L^2(\Gamma))}^2\leq C\Big(\|u_0\|_{H^2(\Omega)\cap H_0^1(\Omega)}^2+\|u_1\|_{H_0^1(\Omega)}^2+\|u_2\|_{L^2(\Omega)}^2\\+\|f\|_{L^2(0,T;L^2(\Omega))}^2\Big).
\end{multline}
Consequently, the mapping
\begin{align*}
(f,u_0,u_1,u_2)\mapsto \frac{\partial u}{\partial n}
\end{align*}
is linear and continuous from $L^2(0,T;L^2(\Omega))\times (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega)$ into $H^1(0,T;L^2(\Gamma))$.
\end{proposition}
\begin{proof}
We use the multiplier method. Let $m\in W^{1,\infty}(\Omega;\mathbb R^{N})$, and let us multiply $L_0u$ by $m\nabla u$ and $L_0(u_t)$ by $m\nabla u_{t}$. Using the summation convention for repeated indices, we obtain, respectively,
\begin{multline}
\int_{0}^{T}\int_{\Omega}L_0(u_{t})m\nabla u_{t}\, dxdt=\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)| u_{tt}|^2dxdt +\int_{\Omega}u_{tt}m\nabla u_{t}\Big|_0^{T}dx\\
+\int_0^{T}\int_{\Omega}\frac{\partial u_{t}}{\partial x_{i}}\frac{\partial m_{j}}{\partial x_{i}}\frac{\partial u_{t}}{\partial x_{j}}dxdt
-\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)|\nabla u_{t}|^2dxdt
-\frac{1}{2}\int_0^{T}\int_{\partial\Omega}|\nabla u_{t}\cdot n|^2(m\cdot n)d\sigma dt,
\end{multline}
and
\begin{multline}
\int_{0}^{T}\int_{\Omega}L_0u\, m\nabla u\, dxdt=\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)| u_{t}|^2dxdt+\int_{\Omega}u_{t}m\nabla u\Big|_0^{T}dx\\
+\int_0^{T}\int_{\Omega}\frac{\partial u}{\partial x_{i}}\frac{\partial m_{j}}{\partial x_{i}}\frac{\partial u}{\partial x_{j}}dxdt
-\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)|\nabla u|^2dxdt
-\frac{1}{2}\int_0^{T}\int_{\partial\Omega}|\nabla u\cdot n|^2(m\cdot n)d\sigma dt.
\end{multline}
Now, taking the multiplier $m$ as a lifting of the outward unit normal $n$, so that $m\cdot n=1$ on $\Gamma$, and using that $(u,u_{t},u_{tt})\in C([0,T];(H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega))$, we obtain
\begin{multline}\label{3-SZ}
\dfrac{1}{2} \int_0^{T}\int_{\partial\Omega}|\nabla u\cdot n|^2d\sigma dt+ \dfrac{1}{2} \int_0^{T}\int_{\partial\Omega}|\nabla u_{t}\cdot n|^2d\sigma dt\\
= -\int_{0}^{T}\int_{\Omega}(f-c^2L_0u-\gamma u_{tt})m\nabla u_{t}\, dxdt+\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)| u_{tt}|^2dxdt \\
+\int_{\Omega}u_{tt}m\nabla u_{t}\Big|_0^{T}dx+\int_0^{T}\int_{\Omega}\frac{\partial u_{t}}{\partial x_{i}}\frac{\partial m_{j}}{\partial x_{i}}\frac{\partial u_{t}}{\partial x_{j}}dxdt
-\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)|\nabla u_{t}|^2dxdt\\
-\int_{0}^{T}\int_{\Omega}mL_0u\nabla u\, dxdt+\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)| u_{t}|^2dxdt+\int_{\Omega}u_{t}m\nabla u\Big|_0^{T}dx\\
+\int_0^{T}\int_{\Omega}\frac{\partial u}{\partial x_{i}}\frac{\partial m_{j}}{\partial x_{i}}\frac{\partial u}{\partial x_{j}}dxdt
-\frac{1}{2}\int_0^{T}\int_{\Omega}\mbox{div}(m)|\nabla u|^2dxdt \\
\leq
C \left(\|f\|^2_{L^2(0,T;L^2(\Omega))}+\overline{E}(0)+\|\Delta u_0\|_{L^2(\Omega)}^2\right).
\end{multline}
From \eqref{3-SZ}, using the continuous dependence of the solution with respect to the data, we obtain the desired estimate \eqref{1.51}, and the proof is finished.
\end{proof}
\section{Proof of Main Results}\label{s3}
In this section we prove our main results, that is, Theorem \ref{mainresult} and Theorem \ref{observability}. First, we obtain the Carleman estimate given in Theorem \ref{observability} and then we apply this inequality to solve our inverse problem.
We use the following notation for the weighted energy of the wave operator $L_0$
\begin{align}\label{WEner}
W(y) :=
s\lambda \int_{0}^T\int_\Omega e^{2s\varphi_\lambda}\varphi_\lambda ( |y_t|^2 + |\nabla y|^2)dxdt
+s^3\lambda^3 \int_{0}^T\int_\Omega e^{2s\varphi_\lambda}\varphi^3_\lambda |y|^2 dxdt,
\end{align}
where $\varphi_{\lambda}$ is given by \eqref{4-SZ}. We also recall the operator $L$ defined in Section \ref{s2}:
\begin{align*}
Ly:=L_0 y_{t}+c^2L_0 y +\gamma y_{tt}.
\end{align*}
\begin{proof}[{\bf Proof of Theorem \ref{observability}}]
Let $y\in L^2(0,T;H_0^1(\Omega))$ satisfy $Ly = f \in L^2(\Omega\times(0,T))$, $y(\cdot,0)=y_{t}(\cdot,0)=0$ in $\Omega$, and $y_{tt}(\cdot,0) = y_2 \in L^2(\Omega)$. Then, by \cite[Theorem 2.10]{kaltenbacher2012exponential},
$(y,y_{t},y_{tt})\in C([0,T];(H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega))$
and $y$ satisfies the boundary value problem
\begin{align}
\left\{
\begin{array}{ll}
L_0y_t+c^2L_0y+ \gamma y_{tt}= f , &\Omega \times (0,T), \\
y=0, & \Gamma \times (0,T), \\
y(\cdot,0) = 0, \medspace y_t(\cdot,0) =0 , \medspace y_{tt}(\cdot,0) = y_2, & \Omega.
\end{array}
\right.
\end{align}
For a given function $F$ defined in $[0,T]$, we will denote by $\widetilde F$ its even extension and by $\widehat F$ its odd extension to $[-T,T]$. Then $w = \widetilde{y}$ satisfies
\begin{align} \label{c21}
\left\{
\begin{array}{ll}
L_0w_t+\widehat{c^2}L_0w+\widehat{ \gamma} w_{tt}= \widehat{f} , & \Omega \times (-T,T), \\
w=0, &\Gamma \times (-T,T), \\
w(\cdot,0) = 0, \medspace w_t(\cdot,0) =0 ,\medspace w_{tt}(\cdot,0) = y_2, & \Omega.
\end{array}
\right.
\end{align}
We denote by $P$ the operator
\begin{align*}
P:=\partial_{t}L_0+\widehat{c^2}L_0+\widehat{\gamma}\partial_{t}^2,
\end{align*}
and by $\|\cdot\|_{s}$ the weighted norm
\begin{align*}
\|w\|_{s}^2:=\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|w|^2dxdt,
\end{align*}
where $\varphi_{\lambda}$ is given by \eqref{4-SZ}. Then,
\begin{align}\label{c4}
\|Pw- \widehat{\gamma}w_{tt}\|_{s}^2
=
\|L_0w_{t}\|_{s}^2+c^4\|L_0w\|_{s}^2 +\int_{-T}^{T}\int_{\Omega}\widehat{c^2}e^{s\varphi_{\lambda}}\partial_{t}|L_0w|^2dxdt,
\end{align}
and, subsequently,
\begin{multline*}
\int_{-T}^{T}\int_{\Omega}\widehat{c^2}e^{s\varphi_{\lambda}}\partial_{t}|L_0w|^2dxdt=
\int_0^{T}\int_{\Omega}c^2e^{s\varphi_{\lambda}}\partial_{t}|L_0 w|^2dxdt
-\int_{-T}^{0}\int_{\Omega}c^2e^{s\varphi_{\lambda}}\partial_{t}|L_0w|^2dxdt\\
\geq
-2c^2\int_{\Omega}|L_0w(\cdot,0)|^2e^{s\varphi_{\lambda}(\cdot,0)}dx
-\int_0^{T}\int_{\Omega}sc^2|L_0w|^2(\partial_{t}\varphi_{\lambda})e^{s\varphi_{\lambda}}dxdt
\\+\int_{-T}^{0}\int_{\Omega}sc^2|L_0w|^2(\partial_{t}\varphi_{\lambda})e^{s\varphi_{\lambda}}dxdt.
\end{multline*}
Also, from the definition of the weight function, we have
\begin{align*}
\left\{
\begin{array}{ll}
\partial_{t}\varphi_{\lambda}<0, &\quad t\in(0,T),\\
\partial_{t}\varphi_{\lambda}>0, &\quad t\in(-T,0),
\end{array}
\right.
\end{align*}
and then
\begin{align}\label{c2}
\int_{-T}^{T}\int_{\Omega}\widehat{c^2}e^{s\varphi_{\lambda}}\partial_{t}|L_0w|^2dxdt
\geq
-2c^2\int_{\Omega}|L_0w(\cdot,0)|^2e^{s\varphi_{\lambda}(\cdot,0)}dx.
\end{align}
From \eqref{c4} and \eqref{c2}, using that $w(\cdot,0)=0$, we deduce that
\begin{align}\label{c5}
\|L_0w_{t}\|_{s}^2+c^4\|L_0w\|_{s}^2-2c^2\int_{\Omega}|y_2(x)|^2e^{s\varphi_{\lambda}(\cdot,0)}dx
\leq
\|Pw\|_{s}^2+ \|\widehat{\gamma}w_{tt}\|_{s}^2.
\end{align}
Hence, taking into account that $\phi(x,t) \leq \phi(x,0)$ for all $x \in \Omega$, and Lemma \ref{Energy}, we get
\begin{multline}
\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Pw|^2dxdt
\leq
C \int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|f|^2dxdt
+ C \int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0) }|y_2|^2dx,
\end{multline}
which together with \eqref{c5} gives
\begin{align}
\|L_0w_{t}\|_{s}^2+c^4\|L_0w\|_{s}^2
\leq C \int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda} }|f|^2dxdt
+ C \int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0) }|y_2|^2dx
+ \|\widehat{\gamma}w_{tt}\|_{s}^2.
\label{acotPv}
\end{align}
Since $\widehat{\gamma} \in L^{\infty}(\Omega \times (-T,T))$, from \eqref{acotPv} we obtain that $L_0w$ and $L_0w_{t}$ belong to $L^2(\Omega\times(-T,T))$. Therefore, using the hidden regularity for the wave equation, we have that $\frac{\partial w}{\partial n}\in H^1(-T,T;L^2(\Gamma_0))$. Then, we can apply the Carleman estimate given by Theorem 2.10 in \cite{baudouin2013global} for the wave equation to each term $L_0w$ and $L_0w_{t}$. Namely, we have
\begin{multline}\label{c6}
s\lambda\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}(|w_{t}|^2+|\nabla w|^2)dxdt+s^3\lambda^3\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}^3|w|^2 dxdt\\\leq C \int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|L_0 w|^2dxdt +Cs\lambda\int_{-T}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left|\frac{\partial w}{\partial n}\right|^2d\sigma dt,
\end{multline}
where we use the fact that $w_{t}(\cdot,0)=0$, and
\begin{multline}\label{c7}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|y_2|^2dx+s\lambda\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}(|w_{tt}|^2+|\nabla w_{t}|^2)dxdt\\+s^3\lambda^3\int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}\varphi_{\lambda}^3|w_{t}|^2dxdt \leq C \int_{-T}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|L_0 w_{t}|^2dxdt \\+Cs\lambda\int_{-T}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left|\frac{\partial w_{t}}{\partial n}\right|^2d\sigma dt.
\end{multline}
Thus, from \eqref{acotPv}, \eqref{c6} and \eqref{c7} we obtain
\begin{multline}\label{c9}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}| y_2 |^2dx+c^4W(y)+W(y_{t})
\leq
C \int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0) }|f|^2dxdt
\\+ \|\widehat{\gamma}w_{tt}\|_{s}^2
+ C \int_{\Omega}|y_2|^2e^{s\varphi_{\lambda}(\cdot,0)}dx
+s\lambda C \int_{-T}^T\int_{\Gamma_0} e^{2s\varphi_\lambda}\left(\left|\frac{\partial y_{t}}{\partial n}\right|^2
+ c^4\left|\frac{\partial y}{\partial n}\right|^2\right)d\sigma dt.
\end{multline}
Then, there exist $s_0>0$ and $\lambda$ such that for every $s\geq s_0$ we can absorb the second and third terms on the right-hand side of \eqref{c9}, which implies
\begin{multline*}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|y_2|^2dx+c^4W(y)+W(y_{t})
\leq
C \int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0) }|f|^2dxdt
\\+s\lambda C \int_{0}^T\int_{\Gamma_0} e^{2s\varphi_\lambda}\left(\left|\frac{\partial y_{t}}{\partial n}\right|^2
+ c^4\left|\frac{\partial y}{\partial n}\right|^2\right)d\sigma dt.
\end{multline*}
Finally, without loss of generality, we can take $M_0 > 0$ and $C > 1$ in definition \eqref{5-SZ} such that $\phi(x,0) \leq C \phi(x,t)$ for all $x \in \Omega$ and $t \in [0,T]$. Then we have $\varphi_{\lambda}(x,0) \leq C_1 \varphi_{\lambda}(x,t)$ for some $C_1 = C_1(\lambda)$ independent of $(x, t) \in \Omega \times [0,T]$, from which we conclude the desired estimate \eqref{1.133}.
\end{proof}
With the previous Carleman inequality, we can prove the main result of this article.
\begin{proof}[{\bf Proof of Theorem \ref{mainresult}}]
Using the notation settled in the previous section (see \eqref{gamma} and \eqref{1.6}), we write the MGT equation in the following way:
\begin{align}\label{NE}
\left\{
\begin{array}{ll}
L_0u_{t}+c^2L_0u+\gamma u_{tt}=f, &\Omega\times(0,T), \\
u=h, &\Gamma \times (0,T), \\
u(\cdot,0) = u_0, \medspace u_t(\cdot,0) = u_1, \medspace u_{tt}(\cdot,0) = u_2, & \Omega.
\end{array}
\right.
\end{align}
Hence, we will prove a stability estimate for the coefficient $\gamma = \gamma(x)$ in equation \eqref{NE}. Let us denote by $u^k$ the solution of equation \eqref{NE} with coefficient $\gamma_k$, for $k=1, 2$, whose existence is guaranteed by Theorem 2.10 in \cite{kaltenbacher2012exponential}. Hence $z:= u^1 - u^2$ solves the following system:
\begin{align} \label{eqDif}
\left\{
\begin{array}{ll}
L_0z_t+c^2L_0z+\gamma_1(x)z_{tt}= (\gamma_2 - \gamma_1) R(x,t) , & \Omega \times (0,T), \\
z=0, & \Gamma \times (0,T), \\
z(\cdot,0) = z_t(\cdot,0) = z_{tt}(\cdot,0) = 0, & \Omega,
\end{array}
\right.
\end{align}
where $R = \partial_t^2 u^2$. Then $y := \partial_t z$ satisfies
\begin{align} \label{eqDer}
\left\{
\begin{array}{ll}
L_0y_t+c^2L_0y+\gamma_1(x)y_{tt}= (\gamma_2 - \gamma_1) \partial_t R(x,t) , & \Omega \times (0,T), \\
y=0, & \Gamma \times (0,T), \\
y(\cdot,0) = y_{t}(\cdot,0) = 0, \medspace y_{tt}(\cdot,0) = (\gamma_2 - \gamma_1) R(x,0), & \Omega.
\end{array}
\right.
\end{align}
Since $\gamma_2-\gamma_1$ belongs, in particular, to $L^2(\Omega)$ and $R\in H^1(0,T;L^{\infty}(\Omega))$, by Theorem 2.10 in \cite{kaltenbacher2012exponential}, we obtain that the Cauchy problem \eqref{eqDer} is well-posed and admits a unique solution
\begin{align*}
(y,y_{t},y_{tt})\in C([0,T]; (H^2(\Omega)\cap H_0^1(\Omega))\times H_0^1(\Omega)\times L^2(\Omega)).
\end{align*}
Moreover, from Proposition \ref{Regularity} the normal derivative $\frac{\partial y}{\partial n}$ belongs to $H^1(0,T;L^2(\Gamma))$ and satisfies
\begin{align*}
\left\|\frac{\partial y}{\partial n}\right\|_{H^1(0,T;L^2(\Gamma))}^2\leq C\|\gamma_2-\gamma_1\|_{L^2(\Omega)}^2(\|R(\cdot,0)\|_{L^{\infty}(\Omega)}^2+\|\partial_t R\|_{L^2(0,T;L^{\infty}(\Omega))}).
\end{align*}
This last estimate shows that $\frac{\partial z}{\partial n} \in H^2(0,T;L^2(\Gamma_0))$ and proves the second inequality of \eqref{stabi}.
Next, we apply Theorem \ref{observability} to $y$. From system \eqref{eqDer} we have
\begin{align*}
\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda} }|Ly|^2dxdt
\leq C(\|\gamma_1\|_{L^{\infty}(\Omega)},\|\partial_{t}R\|_{L^2(0,T;L^{\infty}(\Omega))})\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2dx.
\end{align*}
Thus, from \eqref{1.133},
\begin{multline*}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2|R(x,0)|^2dx\leq
C\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2dx\\ +Cs\lambda\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y_{t}}{\partial n}\right|^2+c^4\left|\frac{\partial y}{\partial n}\right|^2\right)d\sigma dt,
\end{multline*}
which implies, using that $|R(x,0)| = |u_2| \geq \eta > 0$ a.e. in $\Omega$,
\begin{multline*}
\eta^2\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2dx\leq
C\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2dx\\ +Cs\lambda\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y_{t}}{\partial n}\right|^2+c^4\left|\frac{\partial y}{\partial n}\right|^2\right)d\sigma dt.
\end{multline*}
Therefore, taking $s$ large enough, we absorb the first term on the right-hand side and obtain
\begin{align*}
\eta^2\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|\gamma_2-\gamma_1|^2dx\leq
C\sqrt{s}\lambda\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y_{t}}{\partial n}\right|^2+c^4\left|\frac{\partial y}{\partial n}\right|^2\right)d\sigma dt,
\end{align*}
which is the first inequality of \eqref{stabi}, and the proof is finished.
\end{proof}
\section{Reconstruction of the coefficient}\label{s4}
In this section we shall propose an reconstruction algorithm for the unknown parameter $\gammamma$, from measurements of the normal derivative of the solution $u(\gammamma)$ of the MGT equation eqref{NE}.
This algorithm is an extension of the work of Baudouin, Buhan and Ervedoza \cite{baudouin2013global}, in which they propose a reconstruction algorithm for the potential of the wave equation.
By Theorem \ref{mainresult}, we known that the knowledge of $\frac{\partial u}{\partial n}$ on $\Gammamma_0\times (0,T)$
is enough to
identify the parameter $\gammamma$. Then ${{\bf b}f a}lphapha \in {\mathcal A}_M$ is equivalent to ask that $\gammamma$ belongs to
{\bf b}egin{equation}gin{align}\label{asumad}
\mathcal{B}_{M}:=\{\gammamma \in L^{\infty}((-1,1)ega),\quad 0\leq \gammamma(x)\leq M, \quad \forall x\in\overline{(-1,1)ega}\}.
end{align}
Let $\gamma\in \mathcal{B}_{M}$. Let $g\in L^2(\Omega\times(0,T))$ and $\mu\in H^1(0,T;L^2(\Gamma_0))$.
Given $\varphi_\lambda$ defined in \eqref{4-SZ} with $\lambda >0$ given by Theorem \ref{observability}, we define the functional
\begin{multline}\label{5.1}
J[\mu,g](y)=\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly-g|^2dxdt\\+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}-\mu\right|^2+\left|\frac{\partial y_{t}}{\partial n}-\mu_{t}\right|^2\right)d\sigma dt,
\end{multline}
defined in the space
\begin{align}\label{5.2}
\mathcal{V}=\{y\in L^2(0,T;H_0^1(\Omega))\mbox{ with }Ly\in L^2(\Omega\times(0,T)), y(\cdot,0)=y_{t}(\cdot,0)=0 \nonumber\\\mbox{ and } y_{tt}(\cdot,0)\in L^2(\Omega)\},
\end{align}
with the family of semi--norms
\begin{align}\label{5.3}
\|y\|_{\mathcal{V}, s}^2
:=
\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly|^2dxdt+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}\right|^2+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt.
\end{align}
A few remarks about these semi--norms (for more details see \cite[Section 4]{baudouin2013global}):
\begin{remark}
\begin{enumerate}
\item\label{prue} Since the weight functions $e^{s\varphi_{\lambda}}$ are bounded from below and from above by positive constants depending on $s$, the semi--norms \eqref{5.3} are equivalent to
\begin{align*}
\|y\|_{\mathcal{V}}^2:=\int_{0}^{T}\int_{\Omega}|Ly|^2dxdt+\int_{0}^{T}\int_{\Gamma_0}\left(\left|\frac{\partial y}{\partial n}\right|^2+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt,
\end{align*}
in the sense that there exists a constant $C=C(s)$, such that for all $y\in\mathcal{V}$
\begin{align*}
\frac{1}{C}\|y\|_{\mathcal{V}}^2\leq
\|y\|_{\mathcal{V}, s}^2 \leq
C\|y\|_{\mathcal{V}}^2.
\end{align*}
\item By Theorem \ref{observability}, there exists $s_0>0$ such that for every $s\geq s_0$ the semi--norm \eqref{5.3} is actually a norm. Hence, from item \ref{prue}, we have that $\|\cdot\|_{\mathcal{V},s}$ is a norm for all $s>0$.
In the rest of the paper, we will omit the subscript $s$ in the notation.
\end{enumerate}
\end{remark}
The first result concerning the reconstruction of $\gamma$ is to guarantee that the functional $J[\mu,g]$ attains its minimum. Moreover, we have the following uniqueness result.
\begin{theorem}\label{minimo}
Assume the same hypotheses of Theorem \ref{observability} and assume that $g\in L^2(\Omega\times(0,T))$ and $\mu\in H^1(0,T;L^2(\Gamma_0))$. Then, for all $s>0$ and $\gamma\in \mathcal{B}_{M}$, the functional $J[\mu,g]$ defined by \eqref{5.1} is continuous, strictly convex and coercive on $\mathcal{V}$. Besides, the minimizer $y^{*}$ satisfies
\begin{align*}
\|y^{*}\|_{\mathcal{V}}^2\leq\frac{4}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt+4\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt.
\end{align*}
\end{theorem}
\begin{proof}
The continuity and strict convexity are immediate. Let us check the coercivity:
\begin{multline*}
J[\mu,g](y)= \frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly|^2dxdt+\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt
-\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}gLydxdt\\+
\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}\right|^2
+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt
-\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\mu\frac{\partial y}{\partial n}+\mu_t\frac{\partial y_t}{\partial n}\right)d\sigma dt\\+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt.
\end{multline*}
Using the fact that $2ab\leq 2a^2+\frac{b^2}{2}$, we deduce
\begin{multline*}
J[\mu,g](y)\geq \frac{1}{4s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly|^2dxdt -\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt\\
\quad +\frac{1}{4}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}\right|^2
+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt
-\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt\\
\geq\frac{1}{4}\|y\|_{\mathcal{V}}^2-\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt-\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt.
\end{multline*}
Therefore, the functional $J[\mu,g]$ admits a unique minimizer $y^{*}$ in $\mathcal{V}$.
Now, let us prove the estimate on the minimizer. First, we expand the functional $J[\mu,g](y^{*})$:
\begin{multline*}
J[\mu,g](y^{*})=\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly^{*}|^2dxdt+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y^{*}}{\partial n}\right|^2+\left|\frac{\partial y_{t}^{*}}{\partial n}\right|^2\right)d\sigma dt\\
\quad +\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt\\
\quad -\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}gLy^{*} dxdt-\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\mu\frac{\partial y^{*}}{\partial n}+\mu_{t}\frac{\partial y_{t}^{*}}{\partial n}\right)d\sigma dt.
\end{multline*}
Next, since $y^{*}$ is the minimizer, we have that $J[\mu,g](y^{*})\leq J[\mu,g](0)$, which implies in particular
\begin{multline*}
\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly^{*}|^2dxdt+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y^{*}}{\partial n}\right|^2+\left|\frac{\partial y_{t}^{*}}{\partial n}\right|^2\right)d\sigma dt\\\leq \frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}gLy^{*}dxdt+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\mu\frac{\partial y^{*}}{\partial n}+\mu_{t}\frac{\partial y_{t}^{*}}{\partial n}\right)d\sigma dt.
\end{multline*}
Therefore, using that $2ab\leq 2a^2+\frac{b^2}{2}$ and the definition of the norm $\|\cdot\|_{\mathcal{V}}$, we deduce
\begin{align*}
\frac{1}{4}\|y^{*}\|_{\mathcal{V}}^2\leq \frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g|^2dxdt+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}(|\mu|^2+|\mu_{t}|^2)d\sigma dt.
\end{align*}
\end{proof}
Secondly, the following theorem gives a relationship between the unique minimizer of $J[\mu,g]$ and $g$. Together with Theorem \ref{observability}, this is an essential result for the proof of convergence of our reconstruction algorithm.
\begin{theorem}\label{minimo2}
Assume the same hypotheses of Theorem \ref{observability} and assume that $\mu\in H^1(0,T;L^2(\Gamma_0))$ and $g^1,g^2\in L^2(\Omega\times(0,T))$. Let $y^{*,i}$ be the unique minimizer of the functional $J[\mu,g^{i}]$, for $i=1,2$. Then, there exist $s_0>0$ and a constant $C>0$ such that for all $s\geq s_0$
\begin{align}\label{5.4}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|y_{tt}^{*,1}(\cdot,0)-y_{tt}^{*,2}(\cdot,0)|
^2dx\leq C\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g^1-g^2|^2dxdt.
\end{align}
\end{theorem}
\begin{proof}
Since $y^{*,i}$ is the unique minimizer of $J[\mu,g^{i}]$, for $i=1,2$, we have that for all $y\in\mathcal{V}$
\begin{multline}\label{5.5}
\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}(Ly^{*,1}-g^1)Lydxdt\\+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left[\left(\frac{\partial y^{*,1}}{\partial n}-\mu\right)\frac{\partial y}{\partial n}+\left(\frac{\partial y_{t}^{*,1}}{\partial n}-\mu_{t}\right)\frac{\partial y_{t}}{\partial n}\right]d\sigma dt=0,
\end{multline}
and
\begin{multline}\label{5.6}
\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}(Ly^{*,2}-g^2)Lydxdt\\+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left[\left(\frac{\partial y^{*,2}}{\partial n}-\mu\right)\frac{\partial y}{\partial n}+\left(\frac{\partial y_{t}^{*,2}}{\partial n}-\mu_{t}\right)\frac{\partial y_{t}}{\partial n}\right]d\sigma dt=0.
\end{multline}
Subtracting \eqref{5.6} from \eqref{5.5} and taking $y=y^{*,1}-y^{*,2}$, we deduce that
\begin{multline*}
\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly|^2dxdt+\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}\right|^2+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt\\=\frac{1}{s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}(g^1-g^2)Ly dxdt.
\end{multline*}
Then, applying again $2ab\leq 2a^2+\frac{b^2}{2}$, we obtain
\begin{multline}\label{5.7}
\frac{1}{2}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|Ly|^2dxdt+s\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}\right|^2+\left|\frac{\partial y_{t}}{\partial n}\right|^2\right)d\sigma dt\\\leq2\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|g^1-g^2|^2 dxdt.
\end{multline}
Finally, by the estimate \eqref{1.133} of Theorem \ref{observability} we obtain the desired result.
\end{proof}
Finally, we present our algorithm and its convergence result; a schematic numerical sketch of one iteration is given right after the algorithm.\\
\hrulefill
{\bf Algorithm:}
\hrulefill
\begin{enumerate}
\item {\bf Initialization:} $\gamma^0=0$.
\item {\bf Iteration:} From $k$ to $k+1$
\begin{enumerate}
\item[Step 1] - Given $\gamma^{k}$ we consider $\mu^{k}=\partial_{t}\left(\frac{\partial u(\gamma^{k})}{\partial n}-\frac{\partial u(\gamma)}{\partial n}\right)$ and \\$\mu_{t}^{k}=\partial_{t}\left(\frac{\partial u_{t}(\gamma^{k})}{\partial n}-\frac{\partial u_{t}(\gamma)}{\partial n}\right)$ on $\Gamma_0\times(0,T)$, where $u(\gamma^{k})$ and $u(\gamma)$ are the solutions of the problems
\begin{align}
\left\{
\begin{array}{ll}
L_0u_t+c^2L_0u+\gamma^k(x)u_{tt}=f, & \Omega \times (0,T) \\
u=g, & \Gamma \times (0,T) \\
u(\cdot,0) = u_0, \medspace u_t(\cdot,0) = u_1, \medspace u_{tt}(\cdot,0) = u_2, & \Omega
\end{array}
\right.
\end{align}
and
\begin{align}
\left\{
\begin{array}{ll}
L_0u_t+c^2L_0u+\gamma(x)u_{tt}=f, & \Omega \times (0,T) \\
u=g, & \Gamma \times (0,T) \\
u(\cdot,0) = u_0, \medspace u_t(\cdot,0) = u_1, \medspace u_{tt}(\cdot,0) = u_2, & \Omega,
\end{array}
\right.
\end{align}
respectively.
\item[Step 2] - Minimize the functional $J[\mu^{k},0]$ over the admissible trajectories $y\in\mathcal{V}$:
\begin{multline*}
J[\mu^{k},0](y)=\frac{1}{2s}\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|L_0y_t+c^2L_0y+\gamma^k(x)y_{tt}|^2dxdt\\+\frac{1}{2}\int_{0}^{T}\int_{\Gamma_0}e^{2s\varphi_{\lambda}}\left(\left|\frac{\partial y}{\partial n}-\mu^{k}\right|^2+\left|\frac{\partial y_{t}}{\partial n}-\mu_{t}^{k}\right|^2\right)d\sigma dt.
\end{multline*}
\item[Step 3] - Let $y^{*,k}$ be the minimizer of $J[\mu^{k},0]$ and set
\begin{align}\label{5.10}
\widetilde{\gamma}^{k+1}=\gamma^{k}+\frac{y_{tt}^{*,k}(\cdot,0)}{u_2}.
\end{align}
\item[Step 4] - Finally, consider $\gamma^{k+1}=T(\widetilde{\gamma}^{k+1})$, where
\begin{align}\label{5.11}
T(\gamma)=
\left\{
\begin{array}{ll}
M\quad &\mbox{ if }\gamma(x)>M\\
\\
\gamma\quad &\mbox{ if } 0\leq \gamma(x)\leq M\\
\\
0\quad &\mbox{ if }\gamma(x)<0.
\end{array}
\right.
\end{align}
\end{enumerate}
\end{enumerate}
\hrulefill\\
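For completeness, we include a minimal numerical sketch of one iteration of the algorithm. It is only schematic: the routines \texttt{solve\_mgt} (a forward solver for the MGT systems of Step 1) and \texttt{minimize\_J} (a minimizer of $J[\mu^{k},0]$ over $\mathcal{V}$, for instance a conjugate--gradient or CasADi-based routine) are hypothetical placeholders and are not specified in this article.
\begin{verbatim}
import numpy as np

def reconstruct_gamma(solve_mgt, minimize_J, dn_u_meas, u2, M, n_iter=10):
    # Schematic sketch of the reconstruction algorithm of Section 4.
    # solve_mgt(gamma): hypothetical forward solver returning the normal
    #   derivative of u(gamma) on Gamma_0 x (0,T), sampled on a (t, x) grid.
    # minimize_J(gamma, mu, mu_t): hypothetical routine minimizing J[mu^k, 0]
    #   over V and returning y_tt^{*,k}(., 0).
    # dn_u_meas: measured normal derivative of u(gamma) (the data).
    # u2: initial datum u_2, assumed to satisfy |u2| >= eta > 0.
    gamma = np.zeros_like(u2)                 # Initialization: gamma^0 = 0
    for _ in range(n_iter):
        dn_u_k = solve_mgt(gamma)             # Step 1: solve with gamma^k
        diff = dn_u_k - dn_u_meas
        mu = np.gradient(diff, axis=0)        # mu^k: time derivative (up to dt)
        mu_t = np.gradient(mu, axis=0)        # mu_t^k
        y_tt0 = minimize_J(gamma, mu, mu_t)   # Steps 2-3: minimize J[mu^k, 0]
        gamma = np.clip(gamma + y_tt0 / u2, 0.0, M)   # (5.10) and Step 4
    return gamma
\end{verbatim}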
Therefore, using Theorem \ref{minimo2}, we can prove the convergence of this algorithm:
\begin{theorem}\label{convergencia}
Assume the same hypotheses of Theorem \ref{observability}, and the following assumption on $u(\gamma)$:
\begin{align}
u(\gamma)\in H^3(0,T;L^{\infty}(\Omega)) \mbox{ and } |u_2|\geq\eta>0.
\end{align}
Then, there exist a constant $C>0$ and $s_0>0$ such that for all $s\geq s_0$ and $k\in{\mathbb{N}}$
\begin{align}\label{4.13}
\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}(\gamma^{k+1}-\gamma)^2dx\leq \frac{C}{\sqrt{s}}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}(\gamma^{k}-\gamma)^2dx.
\end{align}
\end{theorem}
\begin{proof}
We consider $y^{k}=\partial_{t}(u(\gamma^{k})-u(\gamma))$, which is the solution of
\begin{align}\label{5.12}
\left\{
\begin{array}{ll}
L_0y_t^{k}+c^2L_0y^{k}+\gamma^{k}(x)y_{tt}^{k}=(\gamma-\gamma^{k})\partial_{t}R(x,t), & \Omega \times (0,T) \\
y^{k}=0, & \Gamma \times (0,T) \\
y^{k}(\cdot,0) =0,\medspace y_t^k(\cdot,0) = 0, \medspace y_{tt}^{k}(\cdot,0) = (\gamma-\gamma^{k})R(x,0), & \Omega
\end{array}
\right.
\end{align}
where $R(x,t)=\partial_{t}^2u(\gamma)$. Thus,
\begin{align}\label{mu}
\mu^{k}=\frac{\partial y^{k}}{\partial n},\qquad \mu_{t}^{k}=\frac{\partial y_{t}^{k}}{\partial n}.
\end{align}
We observe that $y^{k}$ belongs to $\mathcal{V}$. Therefore, by \eqref{mu}, the solution $y^{k}$ of \eqref{5.12} satisfies the Euler--Lagrange equations associated to the functional $J[\mu^{k},g^{k}]$, where $g^{k}=(\gamma-\gamma^{k})\partial_{t}R(x,t)$. Since $J[\mu^{k},g^{k}]$ admits a unique minimizer, $y^{k}$ corresponds to the minimum of $J[\mu^{k},g^{k}]$.
Let $y^{*,k}$ be the minimizer of $J[\mu^{k},0].$ From Theorem \ref{minimo2} we obtain that
\begin{align}
\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}|y_{tt}^{*,k}(\cdot,0)-y_{tt}^{k}(\cdot,0)|
^2dx\leq C\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|(\gamma-\gamma^{k})\partial_{t}R(x,t)|^2dxdt.
\end{align}
From \eqref{5.10} and \eqref{5.12},
\begin{align*}
y_{tt}^{*,k}(\cdot,0)=(\widetilde{\gamma}^{k+1}-\gamma^{k})u_2, \qquad y_{tt}^{k}(\cdot,0)=(\gamma-\gamma^{k})u_2.
\end{align*}
This implies, using that $|u_2|\geq\eta>0$,
\begin{align}
\eta^2\sqrt{s}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}(\widetilde{\gamma}^{k+1}-\gamma)^2
dx\leq C\int_{0}^{T}\int_{\Omega}e^{2s\varphi_{\lambda}}|(\gamma-\gamma^{k})\partial_{t}R(x,t)|^2dxdt.
\end{align}
Since the function $T$ defined in \eqref{5.11} is Lipschitz continuous with constant $1$ and satisfies $T(\gamma)=\gamma$, we obtain
\begin{align}
|\widetilde{\gamma}^{k+1}-\gamma|\geq|T(\widetilde{\gamma}^{k+1})-T(\gamma)|=|\gamma^{k+1}-\gamma|.
\end{align}
On the other hand, since $\phi(\cdot,t)$ is decreasing in $t\in(0,T)$ and $\partial_{t}R(x,t)\in L^2(0,T;L^{\infty}(\Omega))$, we conclude
\begin{align*}
\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}(\gamma^{k+1}-\gamma)^2
dx\leq \frac{C}{\sqrt{s}}\frac{\|\partial_{t}R\|_{L^2(0,T;L^{\infty}(\Omega))}^{2}}{\eta^2}\int_{\Omega}e^{2s\varphi_{\lambda}(\cdot,0)}(\gamma-\gamma^{k})^2dx.
\end{align*}
\end{proof}
Let us finish this section with the following observation.
\begin{remark}
We can observe that this algorithm, from a theoretical point of view, is based on the minimization of a convex and coercive functional. Therefore, we can expect that numerical simulations can be carried out using, for instance, the CasADi open-source tool for nonlinear optimization and algorithmic differentiation \cite{Andersson2019}. However, some drawbacks appear in its numerical simulation. This can be seen from the definition of the functional $J[\mu,g]$, which involves two exponentials,
\begin{align*}
e^{2s\varphi_\lambda}=e^{2se^{\lambda\phi}}.
\end{align*}
Both parameters $\lambda$ and $s$ are chosen large enough in order to use the Carleman estimate given in Theorem \ref{observability}. This immediately raises a problem from a numerical point of view. For example, if we consider $s=\lambda=3$, $\Omega=(0,1)$, $x_0=0$, $T=1$ and $\beta=1$, the following ratio
\begin{align*}
\frac{\max_{\Omega\times (0,T)}e^{2s\varphi_\lambda}}{\min_{\Omega\times (0,T)}e^{2s\varphi_\lambda}}
\end{align*}
is of order $10^{340}$ (see for instance \cite{MR3670259}); a small numerical sketch illustrating this blow-up is given right after this remark.
It seems reasonable to modify the algorithm presented here in order to obtain a numerical implementation that validates, at least on an example, the coefficient inverse problem studied in this article. Such a modified algorithm is part of our forthcoming work.
\end{remark}
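The following small script is the numerical sketch announced in the previous remark. The Carleman weight is taken here in the model form $\phi(x,t)=|x-x_0|^2-\beta t^2$, which is an assumption made only for illustration; the actual weight is the one fixed in \eqref{4-SZ}, and the precise order of magnitude of the ratio (such as the value $10^{340}$ quoted from \cite{MR3670259}) depends on its exact normalization. Even with this simple unnormalized weight the ratio is already astronomically large.
\begin{verbatim}
import numpy as np

# Ratio max/min of exp(2 s exp(lambda*phi)) over Omega x (0,T), with the
# *assumed* model weight phi(x,t) = |x - x0|^2 - beta*t^2 (illustration only).
s, lam = 3.0, 3.0
x0, T, beta = 0.0, 1.0, 1.0
x = np.linspace(0.0, 1.0, 201)      # Omega = (0,1)
t = np.linspace(0.0, T, 201)
X, Tt = np.meshgrid(x, t)
phi = (X - x0) ** 2 - beta * Tt ** 2
log_weight = 2.0 * s * np.exp(lam * phi)   # logarithm of exp(2 s varphi_lambda)
ratio_log10 = (log_weight.max() - log_weight.min()) / np.log(10.0)
print("max/min of the Carleman weight is about 10^%.0f" % ratio_log10)
\end{verbatim}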
\bibliographystyle{abbrv}
\bibliography{References1}
\end{document}
\begin{document}
\title[]{Kodaira dimension of universal holomorphic symplectic varieties}
\author[]{Shouhei Ma}
\thanks{Supported by JSPS KAKENHI 15H05738 and 17K14158.}
\address{Department~of~Mathematics, Tokyo~Institute~of~Technology, Tokyo 152-8551, Japan}
\email{[email protected]}
\subjclass[2010]{}
\keywords{}
\begin{abstract}
We prove that the Kodaira dimension of the $n$-fold universal family
of lattice-polarized holomorphic symplectic varieties with
dominant and generically finite period map
stabilizes to the moduli number when $n$ is sufficiently large.
Then we study the transition of Kodaira dimension explicitly,
from negative to nonnegative,
for known explicit families of polarized symplectic varieties.
In particular, we determine the exact transition point
in the cases of Beauville-Donagi and Debarre-Voisin,
where the Borcherds $\Phi_{12}$ form plays a crucial role.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
The discovery of Beauville-Donagi \cite{BD} that
the Fano variety of lines on a smooth cubic fourfold
is a holomorphic symplectic variety deformation equivalent to
the Hilbert squares of $K3$ surfaces of genus $8$
was the first example of explicit geometric construction of polarized holomorphic symplectic varieties.
Gradually further examples,
all deformation equivalent to Hilbert schemes of $K3$ surfaces,
have been found by
\begin{itemize}
\item Iliev-Ranestad \cite{IR} as the varieties of power sums of cubic fourfolds,
\item O'Grady \cite{OG} as the double EPW sextics,
\item Debarre-Voisin \cite{DV} as the zero loci of sections of a vector bundle on the Grassmannian $G(6, 10)$,
\item Lehn-Lehn-Sorger-van~Straten \cite{LLSS} using the spaces of
twisted cubics on cubic fourfolds, and more recently
\item Iliev-Kapustka-Kapustka-Ranestad \cite{IKKR} as the double EPW cubes.
\end{itemize}
The moduli spaces $\mathcal{M}$ of these polarized symplectic varieties are
unirational by construction.
However, if we consider
the $n$-fold fiber product ${\mathcal{F}_{n}}\to\mathcal{M}$ of the universal family $\mathcal{F}\to\mathcal{M}$
(more or less the moduli space of the varieties with $n$ marked points or its double cover),
its Kodaira dimension $\kappa({\mathcal{F}_{n}})$ is nondecreasing with respect to $n$ (\cite{Ka}),
and bounded by $\dim \mathcal{M}=20$ (\cite{Ii}).
The main purpose of this paper is to study the transition of $\kappa({\mathcal{F}_{n}})$ as $n$ grows,
especially from $\kappa=-\infty$ to $\kappa\geq 0$,
by using modular forms on the period domain.
Moreover, we prove that $\kappa({\mathcal{F}_{n}})$ stabilizes to $\dim \mathcal{M}$
at large $n$ for more general families of lattice-polarized holomorphic symplectic varieties.
Our main result is summarized as follows.
\begin{theorem}\label{thm:main}
Let ${\mathcal{F}_{n}}$ be the $n$-fold universal family of
polarized holomorphic symplectic varieties of
Beauville-Donagi or Debarre-Voisin or
Lehn-Lehn-Sorger-van~Straten or
Iliev-Ranestad or O'Grady or Iliev-Kapustka-Kapustka-Ranestad type.
Then ${\mathcal{F}_{n}}$ is
unirational / $\kappa({\mathcal{F}_{n}})\geq 0$ / $\kappa({\mathcal{F}_{n}})>0$
for the following bounds of $n$.
\begin{center}
\begin{tabular}{cccccccccccc}
\toprule
& BD & DV & LLSS & IR & OG & IKKR
\\ \midrule
unirational & 13 & 5 & 5 & 1 & 0 & 0
\\ \midrule
$\kappa \geq 0$ & 14 & 6 & 7 & 6 & 11 & 16
\\ \midrule
$\kappa > 0$ & 23 & 13 & 12 & 12 & 19 & 20
\\ \bottomrule
\end{tabular}
\end{center}
In all cases, $\kappa(\mathcal{F}_{n})=20$ when $n$ is sufficiently large.
The stabilization $\kappa(\mathcal{F}_{n})=\dim \mathcal{M}$ at large $n$
holds more generally for families $\mathcal{F}\to \mathcal{M}$
of lattice-polarized holomorphic symplectic varieties
whose period map is dominant and generically finite.
\end{theorem}
This table means, for example in the Beauville-Donagi (BD) case,
that ${\mathcal{F}_{n}}$, the moduli space of Fano varieties of cubic fourfolds with $n$ marked points
(or equivalently cubic fourfolds with $n$ marked lines),
is unirational when $n\leq 13$,
has $\kappa(\mathcal{F}_{n})\geq 0$ when $n\geq 14$,
and $\kappa(\mathcal{F}_{n}) > 0$ when $n\geq 23$.
In particular, we find the exact transition point from $\kappa=-\infty$ to $\kappa\geq0$
in the Beauville-Donagi and Debarre-Voisin cases,
and nearly exact in the Lehn-Lehn-Sorger-van Straten case.
On the other hand, it would not be easy to explicitly calculate a bound for $\kappa=20$;
in fact, we expect that the transition of Kodaira dimension would be sudden,
so the actual bound for $\kappa=20$
would be quite near to the (actual) bound for $\kappa\geq 0$.
(In this sense, the above bound for $\kappa>0$ should be temporary.)
Markman \cite{Mark2} gave an analytic construction of
general marked universal families over (non-Hausdorff, unpolarized) period domains.
Here we take a more ad hoc construction.
The space ${\mathcal{F}_{n}}$ (birationally) parametrizes the isomorphism classes of
the $n$-pointed polarized symplectic varieties except for the two double EPW cases,
while in those cases it is a double cover of the moduli space.
Theorem \ref{thm:main} in the direction of $\kappa\geq 0$
is proved by using modular forms on the period domain.
For a family $\mathcal{F}\to \mathcal{M}$ of lattice-polarized holomorphic symplectic varieties
of dimension $2d$ whose period map is dominant and generically finite,
we construct an injective map (Theorem \ref{thm:cusp-canonical})
\begin{equation}\label{eqn:cusp canonical intro}
S_{b+dn}(\Gamma, \det) \hookrightarrow H^{0}(K_{\bar{\mathcal{F}}_{n}}),
\end{equation}
where $\bar{\mathcal{F}}_{n}$ is a smooth projective model of ${\mathcal{F}_{n}}$,
$\Gamma$ is an arithmetic group containing the monodromy group,
$S_{k}(\Gamma, \det)$ is the space of $\Gamma$-cusp forms of weight $k$ and character $\det$,
and $b=\dim \mathcal{M}$.
For the above six cases, we construct cusp forms explicitly
by using quasi-pullback of the Borcherds $\Phi_{12}$ form (\cite{Bo}, \cite{BKPSB})
and its product with modular forms obtained by the Gritsenko lifting (\cite{Gr}).
The same technique of construction should be also applicable to lattice-polarized families,
of which more examples would be available.
The proof of unirationality is done by geometric argument,
but in the Beauville-Donagi and Debarre-Voisin cases,
we also make use of the ``transcendental" results
$\kappa(\mathcal{F}^{BD}_{14})\geq 0$ and $\kappa(\mathcal{F}^{DV}_{6})\geq 0$
when checking nondegeneracy of certain maps in the argument (Claim \ref{lem:BD dominant}).
A similar result has been obtained for $K3$ surfaces of low genus $g$ (\cite{Ma}),
where the quasi-pullback $\Phi_{K3,g}$ of $\Phi_{12}$ was crucial too.
Moreover, when $3\leq g\leq 10$, the weight of $\Phi_{K3,g}$ minus $19$
coincided with the dimension of a representation space appearing in the projective model of the $K3$ surfaces.
In the present paper we find no such direct identity,
but rather a ``switched'' identity between $K3$ surfaces of genus $2$ and cubic fourfolds
(Remark \ref{remark: g=2 K3 and cubic4}).
This paper is organized as follows.
\S \ref{sec:recall} is a recollection of holomorphic symplectic manifolds and modular forms.
In \S \ref{ssec:cusp-canonical} we construct
the map \eqref{eqn:cusp canonical intro} (Theorem \ref{thm:cusp-canonical})
and prove the latter half of Theorem \ref{thm:main} (Corollary \ref{cor:stabilize}).
The first half of Theorem \ref{thm:main} is proved in
\S \ref{sec:BD} -- \S \ref{sec:EPW}.
Throughout this paper,
a \textit{lattice} means a free abelian group of finite rank
endowed with a nondegenerate integral symmetric bilinear form.
$A_{k}$, $D_{l}$, $E_{m}$ stand for the \textit{negative-definite} root lattices
of respective types.
The even unimodular lattice of signature $(1, 1)$ is denoted by $U$.
No confusion is likely to occur when $U$ is also used for an open set of a variety.
The Grassmannian parametrizing $r$-dimensional linear subspaces of
${\mathbb{C}}^{N}$ is denoted by $G(r, N)=\mathbb{G}(r-1, N-1)$.
We freely use the fact (\cite{GIT}) that
if $G={\rm PGL}_{N}$ acts on a projective variety $X$ and
$U$ is a $G$-invariant Zariski open set of $X$ contained in the stable locus,
then a geometric quotient $U/G$ exists.
If no point of $U$ has nontrivial stabilizer, $U\to U/G$ is a principal $G$-bundle in the \'etale topology.
In that case, every $G$-linearized vector bundle on $U$ descends to a vector bundle on $U/G$.
Similarly, if $V$ is a representation of ${\rm SL}_{N}$,
a geometric quotient $({{\mathbb P}}V\times U)/G$ exists
as a Brauer-Severi variety over $U/G$.
If $Y$ is a normal $G$-invariant subvariety of ${{\mathbb P}}V\times U$,
its geometric quotient $Y/G$ exists as the image of $Y$ in $({{\mathbb P}}V\times U)/G$.
I would like to thank Kieran O'Grady for
valuable advice on double EPW sextics.
\section{Preliminaries}\label{sec:recall}
In this section we recall basic facts about
holomorphic symplectic manifolds (\S \ref{ssec:HSV})
and orthogonal modular forms (\S \ref{ssec:modular}).
\subsection{Holomorphic symplectic manifolds}\label{ssec:HSV}
A compact K\"ahler manifold $X$ of dimension $2d$ is called a
\textit{holomorphic symplectic manifold} if
it is simply connected and
$H^{0}(\Omega_{X}^{2})={\mathbb{C}}\omega$
for a nowhere degenerate $2$-form $\omega$.
There exists a non-divisible integral symmetric bilinear form $q_{X}$
of signature $(3, b_{2}(X)-3)$ on $H^{2}(X, {\mathbb{Z}})$,
called the \textit{Beauville form} (\cite{Be}),
and a constant $c_{X}$ called the \textit{Fujiki constant},
such that
$\int_{X}v^{2d}=c_{X}\cdot q_{X}(v, v)^{d}$
for every $v\in H^{2}(X, {\mathbb{Z}})$.
In particular, for $\omega\in H^{0}(\Omega_{X}^{2})$,
we have $q_{X}(\omega, \omega)=0$ and
\begin{equation}\label{eqn:Beauville paring 2-form}
q_{X}(\omega, \bar{\omega})^{d} = C \int_{X}(\omega \wedge \bar{\omega})^{d}
\end{equation}
for a suitable constant $C$.
A holomorphic symplectic manifold $X$ is said to be of $K3^{[n]}$ \textit{type}
if it is deformation equivalent to the Hilbert scheme of $n$ points on a $K3$ surface.
The Beauville lattice of such $X$ is isometric to
$L_{2t}=3U\oplus 2E_{8} \oplus \langle -2t \rangle$
where $t=n-1$ (\cite{Be}).
Let $h\in L_{2t}$ be a primitive vector of norm $2D>0$.
The orthogonal complement $h^{\perp}\cap L_{2t}$ is described as follows
(\cite{GHS10} \S 3).
For simplicity we assume $(t, D)=1$, which holds in later sections except \S \ref{ssec:IKKR}.
We have either $(h, L_{2t})={\mathbb{Z}}$ or $2{\mathbb{Z}}$.
In the former case, $h$ is called of \textit{split type}, and
$h^{\perp}\cap L_{2t}$ is isometric to
$2U \oplus 2E_{8} \oplus \langle -2t \rangle \oplus \langle -2D \rangle$.
In the latter case, $h$ is called of \textit{non-split type}, and
\begin{equation}\label{eqn:lattice non-split}
h^{\perp}\cap L_{2t} \simeq
2U \oplus 2E_{8} \oplus
\begin{pmatrix} -2t & t \\ t & -(D+t)/2 \end{pmatrix},
\end{equation}
which has determinant $tD$.
In \S \ref{sec:BD} -- \S \ref{sec:IR},
$h$ will be of non-split type
and the determinant $tD$ will be a prime number of class number $1$.
\subsection{Modular forms}\label{ssec:modular}
Let $L$ be a lattice of signature $(2, b)$ with $b\geq 3$.
The dual lattice of $L$ is denoted by $L^{\vee}$.
We write $A_{L}=L^{\vee}/L$ for the discriminant group of $L$.
$A_{L}$ is equipped with a natural ${\mathbb{Q}}/{\mathbb{Z}}$-valued bilinear form,
which when $L$ is even is induced from a natural ${\mathbb{Q}}/2{\mathbb{Z}}$-valued quadratic form.
The Hermitian symmetric domain $\mathcal{D}=\mathcal{D}_{L}$ attached to $L$
is defined as either of the two connected components of the space
\begin{equation*}
\{ \: {\mathbb{C}}\omega \in {{\mathbb P}}L_{{\mathbb{C}}} \: | \:
(\omega, \omega)=0, (\omega, \bar{\omega})>0 \: \}.
\end{equation*}
Let ${{\rm O}^{+}(L)}$ be the subgroup of the orthogonal group ${\rm O}(L)$
preserving the component $\mathcal{D}$.
We write ${\tilde{{\rm O}}^{+}(L)}$ for the kernel of ${{\rm O}^{+}(L)}\to {\rm O}(A_{L})$.
When $A_{L} \simeq {\mathbb{Z}}/p$ for a prime $p$,
which holds in \S \ref{sec:BD} -- \S \ref{sec:IR},
we have ${\rm O}(A_{L})=\{ \pm {\rm id} \}$
and so ${{\rm O}^{+}(L)}=\langle {\tilde{{\rm O}}^{+}(L)}, -{\rm id} \rangle$.
Let $\mathcal{L}$ be the restriction of the tautological line bundle
$\mathcal{O}_{{{\mathbb P}}L_{{\mathbb{C}}}}(-1)$ over $\mathcal{D}$.
$\mathcal{L}$ is naturally ${\rm O}^{+}(L_{{\mathbb{R}}})$-linearized.
Let $\Gamma$ be a finite-index subgroup of ${\rm O}^{+}(L)$ and
$\chi$ be a unitary character of $\Gamma$.
A $\Gamma$-invariant holomorphic section of
$\mathcal{L}^{\otimes k}\otimes \chi$
over $\mathcal{D}$ is called a \textit{modular form} of weight $k$
and character $\chi$ with respect to $\Gamma$.
When it vanishes at the cusps, it is called a \textit{cusp form}
(see, e.g., \cite{GHS}, \cite{Ma} for the precise definition).
We write $M_{k}(\Gamma, \chi)$ for the space of $\Gamma$-modular forms of weight $k$ and character $\chi$,
and $S_{k}(\Gamma, \chi)$ for the subspace of cusp forms.
We especially write $M_{k}(\Gamma)=M_{k}(\Gamma, 1)$.
If $\Gamma' \lhd \Gamma$ is a normal subgroup of finite index,
the quotient group $\Gamma/\Gamma'$ acts on $M_{k}(\Gamma', \chi)$
by translating $\Gamma'$-invariant sections by elements of $\Gamma$.
We also remark that when $\chi=\det$ and $k \equiv b$ mod $2$,
$-{\rm id}$ acts trivially on $\mathcal{L}^{\otimes k}\otimes \det$,
so that
\begin{equation}\label{eqn:-id effect}
M_{k}(\langle \Gamma, -{\rm id} \rangle, \det) = M_{k}(\Gamma, \det).
\end{equation}
When $k \not\equiv b$ mod $2$,
$M_{k}(\langle \Gamma, -{\rm id} \rangle, \det)$ is zero.
The Hermitian form $(\cdot, \bar{\cdot})$ on $L_{{\mathbb{C}}}$
defines an ${\rm O}^{+}(L_{{\mathbb{R}}})$-invariant Hermitian metric
on the line bundle $\mathcal{L}$.
This defines a $\Gamma$-invariant Hermitian metric on $\mathcal{L}^{\otimes k}\otimes \chi$
which we denote by $( \: , \: )_{k,\chi}$.
We especially write $( \: , \: )_{k}=( \: , \: )_{k,1}$.
Let ${\rm vol}$ be the ${\rm O}^{+}(L_{{\mathbb{R}}})$-invariant volume form on $\mathcal{D}$,
which exists and is unique up to constant.
\begin{lemma}\label{lem:Petersson cusp}
Let $\mathcal{M}'$ be a Zariski open set of ${\Gamma \backslash \mathcal{D}}$
and $\mathcal{D}'\subset \mathcal{D}$ be its inverse image.
Let $\Phi$ be a $\Gamma$-invariant holomorphic section of
$\mathcal{L}^{\otimes k}\otimes \chi$
defined over $\mathcal{D}'$ with $k\geq b$.
Then
$\Phi\in S_{k}(\Gamma, \chi)$
if and only if
$\int_{\mathcal{M}'}(\Phi, \Phi)_{k,\chi}{\rm vol} < \infty$.
\end{lemma}
\begin{proof}
In \cite{Ma} Proposition 3.5, this is proved when $\mathcal{D}'=\mathcal{D}$,
i.e., $\Phi\in M_{k}(\Gamma, \chi)$.
Hence it suffices here to show that
$\int_{\mathcal{M}'}(\Phi, \Phi)_{k,\chi}{\rm vol} < \infty$
implies holomorphicity of $\Phi$ over $\mathcal{D}$.
Let $H$ be an irreducible component of $\mathcal{D}-\mathcal{D}'$.
We may assume that $H$ is of codimension $1$.
If $\Phi$ has a pole along $H$, say of order $a>0$,
a local calculation shows that
in a neighborhood of a general point of $H$,
with $H$ locally defined by $z=0$,
the integral
\begin{eqnarray*}
\int_{\varepsilon \leq |z| \leq 1} (\Phi, \Phi)_{k,\chi}{\rm vol}
& \geq &
C \int_{\varepsilon \leq |z| \leq 1} |z|^{-2a}dz\wedge d\bar{z} \\
& = &
C \int_{0}^{2\pi} d\theta \int_{\varepsilon}^{1}r^{-2a+1}dr
\qquad
(z=re^{i\theta})
\end{eqnarray*}
must diverge as $\varepsilon \to 0$.
\end{proof}
Let $II_{2,26}=2U\oplus 3E_{8}$ be the even unimodular lattice of signature $(2, 26)$.
Borcherds \cite{Bo} discovered a modular form $\Phi_{12}$ of weight $12$ and character $\det$
for ${\rm O}^{+}(II_{2,26})$.
The \textit{quasi-pullback} of $\Phi_{12}$ is defined as follows (\cite{Bo}, \cite{BKPSB}).
Let $L$ be a sublattice of $II_{2,26}$ of signature $(2, b)$ and $N=L^{\perp}\cap II_{2,26}$.
Let $r(N)$ be the number of $(-2)$-vectors in $N$.
Then
\begin{equation*}
\Phi_{12}|_{L} :=
\left. \frac{\Phi_{12}}{\prod_{\delta}(\delta, \cdot)} \: \right|_{{\mathcal{D}_{L}}}
\end{equation*}
where $\delta$ runs over all $(-2)$-vectors in $N$ up to $\pm 1$,
is a nonzero modular form on ${\mathcal{D}_{L}}$ of weight $12+r(N)/2$ and character $\det$ for ${\tilde{{\rm O}}^{+}(L)}$.
Moreover, when $r(N)>0$, $\Phi_{12}|_{L}$ is a cusp form (\cite{GHS}).
In later sections, we will embed $h^{\perp}\cap L_{2t}$ into $II_{2,26}$
by embedding the last rank $2$ component of \eqref{eqn:lattice non-split} into $E_{8}$.
The following model of $E_{8}$ will be used:
\begin{equation}\label{eqn:E8}
E_{8} =
\{ \: (x_{i})\in {\mathbb{Q}}^{8} \: | \:
\forall x_{i}\in {\mathbb{Z}} \: \textrm{or} \: \forall x_{i}\in {\mathbb{Z}}+1/2, \;
x_{1}+ \cdots +x_{8}\in 2{\mathbb{Z}} \: \}.
\end{equation}
Here we take the standard (negative) quadratic form on ${\mathbb{Q}}^{8}$.
The $(-2)$-vectors in $E_{8}$ are as follows.
For $j\ne k$ we define
$\delta_{\pm j, \pm k}=(x_{i})$ by
$x_{j}=\pm 1$, $x_{k}=\pm 1$ and $x_{i}=0$ for $i\ne j, k$.
For a subset $S$ of $\{ 1, \cdots, 8 \}$ with an even number of elements,
we define
$\delta'_{S}=(x_{i})$ by
$x_{i}=1/2$ if $i\in S$ and $x_{i}=-1/2$ if $i\not\in S$.
These are the $240$ roots of $E_{8}$.
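As a sanity check (not needed for the argument), the $240$ roots in this model can be enumerated mechanically; the following short script, included only as a verification aid, lists the $112$ integral vectors $\delta_{\pm j,\pm k}$ and the $128$ half-integral vectors $\delta'_{S}$ and confirms the count and the norm $-2$.
\begin{verbatim}
from itertools import combinations, product
from fractions import Fraction

half = Fraction(1, 2)
roots = []
# integral roots delta_{+-j,+-k}: two entries +-1, the others 0
for j, k in combinations(range(8), 2):
    for sj, sk in product((1, -1), repeat=2):
        v = [0] * 8
        v[j], v[k] = sj, sk
        roots.append(tuple(v))
# half-integral roots delta'_S: entries +-1/2 with even coordinate sum
for S in range(1 << 8):
    v = tuple(half if (S >> i) & 1 else -half for i in range(8))
    if sum(v) % 2 == 0:
        roots.append(v)
assert len(roots) == 240                               # 112 + 128 roots
assert all(sum(c * c for c in v) == 2 for v in roots)  # norm -2 in the negative form
print(len(roots), "roots of E8")
\end{verbatim}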
We will also use the Gritsenko lifting \cite{Gr}.
Assume that $L$ is even and contains $2U$.
We shall specialize to the case $b=20$ for later use.
For an odd number $k$,
let $M_{k}(\rho_{L})$ be the space of modular forms for ${\rm SL}_{2}({\mathbb{Z}})$
of weight $k$ with values in the Weil representation $\rho_{L}$ on ${\mathbb{C}}A_{L}$.
The Gritsenko lifting with $b=20$ is an injective, ${\rm O}^{+}(L)$-equivariant map
\begin{equation*}
M_{k}(\rho_{L}) \hookrightarrow M_{k+9}({\tilde{{\rm O}}^{+}(L)}).
\end{equation*}
The dimension of $M_{k}(\rho_{L})$ for $k>2$ can be explicitly computed
by using the formula in \cite{Br}.
A similar formula for the ${\rm O}(A_{L})$-invariant part
$M_{k}(\rho_{L})^{{\rm O}(A_{L})}$
is given in \cite{Ma2}.
\section{Cusp forms and canonical forms}\label{ssec:cusp-canonical}
In this section we establish, in a general setting,
a correspondence between
canonical forms on $n$-fold universal family of holomorphic symplectic varieties
and modular forms on the period domain.
This is the basis of this paper.
As a consequence we deduce in Corollary \ref{cor:stabilize} the latter half of Theorem \ref{thm:main}.
The first half of Theorem \ref{thm:main} will be proved case-by-case in later sections.
Let $M$ be a hyperbolic lattice and
$L$ be a lattice of signature $(2, b)$.
We say that a smooth algebraic family
$\pi:\mathcal{F}\to \mathcal{M}$
of holomorphic symplectic manifolds is
\textit{$M$-polarized with polarized Beauville lattice $L$}
if $R^{2}\pi_{\ast}{\mathbb{Z}}$
contains a sub local system $\Lambda_{pol}$ in its $(1, 1)$-part
whose fiber is isometric to $M$
with the orthogonal complement isometric to $L$.
Let $\Lambda_{per}=(\Lambda_{pol})^{\perp}\cap R^{2}\pi_{\ast}{\mathbb{Z}}$
and we choose an isometry $(\Lambda_{per})_{x_{0}}\simeq L$ at some base point $x_{0}\in \mathcal{M}$.
If a finite-index subgroup $\Gamma$ of ${{\rm O}^{+}(L)}$
contains the monodromy group of $\Lambda_{per}$,
we can define the period map
\begin{equation*}
\mathcal{P} :
\mathcal{M} \to \Gamma \backslash {\mathcal{D}_{L}}, \qquad
x \mapsto [H^{2,0}(\mathcal{F}_{x}) \subset (\Lambda_{per})_{x}\otimes {\mathbb{C}}].
\end{equation*}
By Borel's extension theorem, $\mathcal{P}$ is a morphism of algebraic varieties.
Our interest will be in the case ${\rm rk}(M)=1$,
but the proof of the following theorem works in the general lattice-polarized setting as well.
\begin{theorem}\label{thm:cusp-canonical}
Let $L$ be a lattice of signature $(2, b)$
and $\Gamma$ be a finite-index subgroup of ${\rm O}^{+}(L)$.
Let $\mathcal{F}\to \mathcal{M}$
be a smooth algebraic family of
lattice-polarized holomorphic symplectic manifolds of dimension $2d$
with polarized Beauville lattice $L$
whose monodromy group is contained in $\Gamma$.
Assume that the period map
$\mathcal{P}:\mathcal{M}\to {\Gamma \backslash \mathcal{D}}$
is dominant and generically finite.
If
${\mathcal{F}_{n}}=\mathcal{F}\times_{\mathcal{M}}\cdots \times_{\mathcal{M}}\mathcal{F}$
($n$ times) and
$\bar{\mathcal{F}}_{n}$ is a smooth projective model of ${\mathcal{F}_{n}}$,
we have a natural injective map
\begin{equation}\label{eqn:general correspondence}
S_{b+dn}(\Gamma, \det) \hookrightarrow H^{0}(K_{\bar{\mathcal{F}}_{n}})
\end{equation}
which makes the following diagram commutative:
\begin{equation}\label{eqn:CD canonical map}
\xymatrix{
\bar{\mathcal{F}}_{n} \ar@{-->}[r]^{\phi_{K}} \ar@{-->}[d] &
|K_{\bar{\mathcal{F}}_{n}}|^{\vee} \ar@{-->}[d]^{\eqref{eqn:general correspondence}^{\vee}} \\
{\Gamma \backslash \mathcal{D}} \ar@{-->}[r]_{\phi} & {{\mathbb P}}S_{b+dn}(\Gamma, \det)^{\vee}.
}
\end{equation}
Here
$\phi_{K}$ is the canonical map of $\bar{\mathcal{F}}_{n}$ and
$\phi$ is the rational map defined by the sections in $S_{b+dn}(\Gamma, \det)$.
Furthermore, if the period map $\mathcal{P}$ is birational and $\Gamma$ does not contain $-{\rm id}$,
\eqref{eqn:general correspondence} is an isomorphism.
\end{theorem}
\begin{proof}
Let $\mathcal{M}'=\mathcal{P}(\mathcal{M})\subset{\Gamma \backslash \mathcal{D}}$
and $\mathcal{D}'\subset \mathcal{D}$ be the inverse image of $\mathcal{M}'$.
Shrinking $\mathcal{M}$ as necessary,
we may assume that
both $\mathcal{M}\to \mathcal{M}'$ and $\mathcal{D}'\to \mathcal{M}'$ are unramified.
We take the universal cover
$\tilde{\mathcal{M}}\to \mathcal{M}$ of $\mathcal{M}$
and pullback the family:
write
$\tilde{\mathcal{F}}=\mathcal{F}\times_{\mathcal{M}}\tilde{\mathcal{M}}$
with the projection
$\pi\colon \tilde{\mathcal{F}} \to \tilde{\mathcal{M}}$.
We obtain a lift
$\tilde{\mathcal{P}} : \tilde{\mathcal{M}} \to \mathcal{D}'\subset \mathcal{D}$
of the period map $\mathcal{P}$
which is equivariant with respect to the monodromy representation
$\pi_{1}(\mathcal{M})\to \Gamma$.
Since $\mathcal{P}$ is unramified,
$\tilde{\mathcal{P}}$ is unramified too.
We first construct an injective map
\begin{equation}\label{eqn:correspondence interior}
H^{0}(\mathcal{D}', \mathcal{L}^{\otimes b+dn}\otimes \det)^{\Gamma}
\hookrightarrow
H^{0}({\mathcal{F}_{n}}, K_{{\mathcal{F}_{n}}}),
\end{equation}
where $H^{0}({\mathcal{F}_{n}}, K_{{\mathcal{F}_{n}}})$ means the space of
holomorphic (rather than regular) canonical forms on ${\mathcal{F}_{n}}$.
We have a natural ${\rm O}^{+}(L_{{\mathbb{R}}})$-equivariant isomorphism
$K_{\mathcal{D}} \simeq \mathcal{L}^{\otimes b}\otimes \det$
of line bundles over $\mathcal{D}$ (see, e.g., \cite{GHS}, \cite{Ma}),
and hence a $\pi_{1}(\mathcal{M})$-equivariant isomorphism
\begin{equation}\label{eqn:KC}
K_{\tilde{\mathcal{M}}}
\simeq \tilde{\mathcal{P}}^{\ast}K_{\mathcal{D}}
\simeq \tilde{\mathcal{P}}^{\ast}(\mathcal{L}^{\otimes b}\otimes \det)
\end{equation}
over $\tilde{\mathcal{M}}$.
Here $\pi_{1}(\mathcal{M})$ acts on
$\tilde{\mathcal{P}}^{\ast}\mathcal{L}$, $\tilde{\mathcal{P}}^{\ast}\det$
through the $\Gamma$-action on $\mathcal{L}$, $\det$
and the monodromy representation $\pi_{1}(\mathcal{M})\to \Gamma$.
On the other hand,
by the definition of the period map,
we have a canonical isomorphism
$\pi_{\ast}\Omega_{\pi}^{2} \simeq \tilde{\mathcal{P}}^{\ast}\mathcal{L}$
sending a symplectic form to its cohomology class.
Since $\pi\colon \tilde{\mathcal{F}}\to \tilde{\mathcal{M}}$
is a family of holomorphic symplectic manifolds,
both $\pi_{\ast}\Omega_{\pi}^{2}$ and
$\pi_{\ast}K_{\pi}$ are invertible sheaves,
and the homomorphism
$(\pi_{\ast}\Omega_{\pi}^{2})^{\otimes d} \to \pi_{\ast}K_{\pi}$
defined by the wedge product is an isomorphism.
Therefore we have a natural isomorphism
$\pi_{\ast}K_{\pi} \simeq \tilde{\mathcal{P}}^{\ast}\mathcal{L}^{\otimes d}$.
Since the natural homomorphism
$\pi^{\ast}\pi_{\ast}K_{\pi}\to K_{\pi}$ is an isomorphism,
we find that
$K_{\pi} \simeq \pi^{\ast}\tilde{\mathcal{P}}^{\ast}\mathcal{L}^{\otimes d}$.
By construction this is $\pi_{1}(\mathcal{M})$-equivariant.
If we write
$\tilde{\mathcal{F}}_{n}=\mathcal{F}_{n}\times_{\mathcal{M}}\tilde{\mathcal{M}}$
with the projection
$\pi_{n}\colon \tilde{\mathcal{F}}_{n} \to \tilde{\mathcal{M}}$,
this shows that
\begin{equation}\label{eqn:isom Kpin univ cover}
K_{\pi_{n}} \simeq \pi_{n}^{\ast}\tilde{\mathcal{P}}^{\ast} \mathcal{L}^{\otimes dn}
\end{equation}
as $\pi_{1}(\mathcal{M})$-linearized line bundles on $\tilde{\mathcal{F}}_{n}$.
Combining \eqref{eqn:KC} and \eqref{eqn:isom Kpin univ cover},
we obtain a $\pi_{1}(\mathcal{M})$-equivariant isomorphism
\begin{equation*}
K_{\tilde{\mathcal{F}}_{n}} \simeq
\pi_{n}^{\ast}\tilde{\mathcal{P}}^{\ast}(\mathcal{L}^{\otimes b+dn}\otimes \det)
\end{equation*}
over $\tilde{\mathcal{F}}_{n}$.
Hence pullback of
sections of $\mathcal{L}^{\otimes b+dn}\otimes \det$ over $\mathcal{D}'$
by $\tilde{\mathcal{P}}\circ \pi_{n}$ defines
a $\pi_{1}(\mathcal{M})$-equivariant injective map
\begin{equation}\label{eqn:isom univ cover}
H^{0}(\mathcal{D}', \mathcal{L}^{\otimes b+dn}\otimes \det)
\hookrightarrow
H^{0}(\tilde{\mathcal{F}}_{n}, K_{\tilde{\mathcal{F}}_{n}}).
\end{equation}
Taking the invariant parts by $\Gamma$ and $\pi_{1}(\mathcal{M})$ respectively,
we obtain \eqref{eqn:correspondence interior}.
Next we prove that restriction of \eqref{eqn:correspondence interior}
gives the desired map \eqref{eqn:general correspondence}.
Let $\Phi$ be a $\Gamma$-invariant section of
$\mathcal{L}^{\otimes b+dn}\otimes \det$ over $\mathcal{D}'$ and
$\omega \in H^{0}(K_{{\mathcal{F}_{n}}})$ be the image of $\Phi$ by \eqref{eqn:correspondence interior}.
We shall show that
\begin{equation*}
\int_{{\mathcal{F}_{n}}}\omega\wedge\bar{\omega} =
C \int_{\mathcal{M}'}(\Phi, \Phi)_{b+dn, \det} {\rm vol}
\end{equation*}
for some constant $C$.
Our assertion then follows from Lemma \ref{lem:Petersson cusp}
and the standard fact that
$\omega$ extends over a smooth projective model of ${\mathcal{F}_{n}}$
if and only if
$\int_{{\mathcal{F}_{n}}}\omega\wedge\bar{\omega}<\infty$.
Since the problem is local,
it suffices to take an arbitrary small open set
$U\subset \tilde{\mathcal{M}}$ and prove
\begin{equation}\label{eqn:Petersson=L2}
\int_{\pi_{n}^{-1}(U)}\omega\wedge\bar{\omega} =
C \int_{\tilde{\mathcal{P}}(U)}(\Phi, \Phi)_{b+dn, \det}{\rm vol}
\end{equation}
for some constant $C$ independent of $U$.
In what follows, $C$ stands for any such unspecified constant.
Since $U$ is small, we may decompose $\Phi$ as
$\Phi=\Phi_{1}\otimes \Phi_{2}^{\otimes dn}$
with $\Phi_{1}$ a local section of $\mathcal{L}^{\otimes b}\otimes \det$
and $\Phi_{2}$ a local section of $\mathcal{L}$.
Let $\omega_{1}$ be the canonical form on
$U\simeq \tilde{\mathcal{P}}(U)$ corresponding to $\Phi_{1}$, and
$\omega_{2}$ be the relative symplectic form on
$\tilde{\mathcal{F}}|_{U} \to U$ corresponding to $\tilde{\mathcal{P}}^{\ast}\Phi_{2}$.
On the one hand, we have
\begin{equation}\label{eqn:Petersson=L2 base}
\omega_{1}\wedge \bar{\omega}_{1} =
C (\Phi_{1}, \Phi_{1})_{b,\det}{\rm vol}
\end{equation}
(see, e.g., \cite{Ma} \S 3.1).
On the other hand, at each fiber $X$ of $\tilde{\mathcal{F}}|_{U}$,
the pointwise Petersson norm $(\Phi_{2}, \Phi_{2})_{1}=(\Phi_{2}, \bar{\Phi}_{2})$
is nothing but the pairing $q_{X}(\omega_{2}, \bar{\omega}_{2})$ in the Beauville form of $X$.
Since
\begin{equation*}
q_{X}(\omega_{2}, \bar{\omega}_{2})^{d} =
C \int_{X}(\omega_{2} \wedge \bar{\omega}_{2})^{d}
\end{equation*}
by \eqref{eqn:Beauville paring 2-form},
we find that
\begin{equation}\label{eqn:Petersson=L2 fiber}
(\Phi_{2}^{\otimes dn}, \Phi_{2}^{\otimes dn})_{dn} =
C \int_{X^{n}} (p_{1}^{\ast}\omega_{2}\wedge \cdots \wedge p_{n}^{\ast}\omega_{2})^{d} \wedge
(p_{1}^{\ast}\bar{\omega}_{2} \wedge \cdots \wedge p_{n}^{\ast}\bar{\omega}_{2})^{d},
\end{equation}
where $p_{i}\colon X^{n}\to X$ is the $i$-th projection.
Since $(p_{1}^{\ast}\omega_{2}\wedge \cdots \wedge p_{n}^{\ast}\omega_{2})^{d}$
is the canonical form on $X^{n}$ corresponding to
the value of $\Phi_{2}^{\otimes dn}$ at $[X]\in U$,
the equalities \eqref{eqn:Petersson=L2 base} and \eqref{eqn:Petersson=L2 fiber}
imply \eqref{eqn:Petersson=L2}.
Thus we obtain the map \eqref{eqn:general correspondence}.
Since this map is defined by pullback of sections of line bundle,
the diagram \eqref{eqn:CD canonical map} is commutative.
Finally, when $\mathcal{P}$ is birational,
we may assume as before that it is an open immersion.
If $\Gamma$ does not contain $-{\rm id}$,
$\Gamma$ acts on $\mathcal{D}$ effectively,
and the monodromy group coincides with $\Gamma$.
We can kill the monodromy by pulling back the family $\mathcal{F}\to \mathcal{M}$
to $\mathcal{D}'$ instead of to $\tilde{\mathcal{M}}$.
Rewriting $\tilde{\mathcal{F}}_{n}={\mathcal{F}_{n}}\times_{\mathcal{M}}\mathcal{D}'$,
this shows that \eqref{eqn:isom univ cover} is an isomorphism.
Taking the $\Gamma$-invariant part,
we see that \eqref{eqn:correspondence interior} is an isomorphism.
Finally, taking the subspace of finite norm,
we see that \eqref{eqn:general correspondence} is an isomorphism.
This completes the proof of Theorem \ref{thm:cusp-canonical}.
\end{proof}
\begin{remark}
The last statement of Theorem \ref{thm:cusp-canonical} can also be proved more directly
by using descent of the $\Gamma$-linearized line bundles $\mathcal{L}$, $\det$
to line bundles on $\mathcal{M}\subset {\Gamma \backslash \mathcal{D}}$.
\end{remark}
\begin{corollary}\label{cor:stabilize}
If $n$ is sufficiently large, then $\kappa({\mathcal{F}_{n}})=b$.
\end{corollary}
\begin{proof}
Since ${\mathcal{F}_{n}}\to\mathcal{F}_{n-1}$ is
a smooth family of holomorphic symplectic varieties,
$\kappa({\mathcal{F}_{n}})$ is nondecreasing with respect to $n$
by Iitaka's subadditivity conjecture known in this case \cite{Ka}.
We also have the bound
$\kappa({\mathcal{F}_{n}})\leq \dim \mathcal{M} = b$
by Iitaka's addition formula \cite{Ii}.
We take a weight $k_{0}$ such that
$S_{k_{0}}(\Gamma, \det)\ne \{ 0 \}$.
Then we take a weight $k_{1}$ such that $k_{1}\equiv b-k_{0}$ mod $d$ and that
${\Gamma \backslash \mathcal{D}}\dashrightarrow {{\mathbb P}}M_{k_{1}}(\Gamma)^{\vee}$
is generically finite onto its image.
(When $\Gamma$ contains $-{\rm id}$, we must have $k_{0}\equiv b$ mod $2$,
so $b-k_{0}+d{\mathbb{Z}}$ contains sufficiently large even $k_{1}$.)
Since
\begin{equation*}
S_{k_{0}}(\Gamma, \det)\cdot M_{k_{1}}(\Gamma) \subset S_{k_{0}+k_{1}}(\Gamma, \det),
\end{equation*}
Theorem \ref{thm:cusp-canonical} implies that
for $n_{0}=(k_{0}+k_{1}-b)/d$,
the image of the canonical map of $\bar{\mathcal{F}}_{n_{0}}$ has dimension $\geq b$.
Hence $\kappa(\mathcal{F}_{n_{0}})\geq b$
and so
$\kappa({\mathcal{F}_{n}})=b$ for all $n\geq n_{0}$.
\end{proof}
This proves the latter half of Theorem \ref{thm:main}.
In the following sections, we apply Theorem \ref{thm:cusp-canonical}
to the six explicit families in Theorem \ref{thm:main}.
In practice, one needs to identify the group $\Gamma$.
For example, according to \cite{Mark} Remark 8.5 and \cite{GHS} Remark 3.15,
the monodromy group of a family of
polarized symplectic manifolds of $K3^{[2]}$ type with polarization vector $h$
is contained in $\tilde{{\rm O}}^{+}(h^{\perp}\cap L_{2})$.
\section{Fano varieties of cubic fourfolds}\label{sec:BD}
In this section we prove Theorem \ref{thm:main}
for the case of Fano varieties of cubic fourfolds \cite{BD}.
Let $Y\subset {{\mathbb P}}^{5}$ be a smooth cubic fourfold.
The Fano variety $F(Y)\subset {\mathbb{G}(1, 5)}$ of $Y$ is the variety parametrizing lines on $Y$,
which is smooth of dimension $4$.
Beauville-Donagi \cite{BD} proved that
$F(Y)$ is a holomorphic symplectic manifold of $K3^{[2]}$ type
polarized by the Pl\"ucker class,
and its polarized Beauville lattice is isometric to $L_{cub}=2U\oplus 2E_{8}\oplus A_{2}$.
In fact, the polarized Beauville lattice of $F(Y)$
is isomorphic to the primitive part of $H^{4}(Y, {\mathbb{Z}})$ as polarized Hodge structures,
where the intersection form on $H^{4}(Y, {\mathbb{Z}})$ is $(-1)$-scaled.
Let $U\subset |\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$
be the parameter space of smooth cubic fourfolds.
By GIT (\cite{GIT}), the geometric quotient $U/{\rm PGL}_{6}$
exists as an affine variety of dimension $20$.
Let $\Gamma=\tilde{{\rm O}}^{+}(L_{cub})$.
The period map
$U/{\rm PGL}_{6} \to {\Gamma \backslash \mathcal{D}}$
is an open immersion by Voisin \cite{Vo},
and the complement of its image was determined by
Looijenga \cite{Lo} and Laza \cite{La}.
\begin{lemma}[cf.~\cite{La}]\label{lem:cusp form BD}
The cusp form $\Phi_{12}|_{L_{cub}}$ has weight $48$.
Moreover, $S_{66}(\Gamma, \det)$ and $S_{68}(\Gamma, \det)$
have dimension $\geq 2$.
\end{lemma}
\begin{proof}
Write $L=L_{cub}$.
The weight of $\Phi_{12}|_{L}$ is computed in \cite{La}.
($A_{2}^{\perp}\simeq E_{6}$ has $72$ roots.)
We have
$\dim M_{k}(\rho_{L})=[(k+3)/6]$
by computing the formula in \cite{Br}.
Product of $\Phi_{12}|_{L}$ with the Gritsenko lift of
$M_{9}(\rho_{L})$ and $M_{11}(\rho_{L})$
proves the second assertion.
\end{proof}
We consider the parameter space of
smooth cubic fourfolds with $n$ marked lines:
\begin{equation*}
F_{n}
=
\{ \: (Y, l_{1}, \cdots , l_{n}) \: | \:
Y\in U, \: l_{1}, \cdots, l_{n} \in F(Y) \: \} \\
\subset
U\times {\mathbb{G}(1, 5)}^{n},
\end{equation*}
and let
$\mathcal{F}_{n} = F_{n}/{\rm PGL}_{6}$.
Then $\mathcal{F}_{n}$ is smooth over the open locus of $U/{\rm PGL}_{6}$
where cubic fourfolds have no nontrivial stabilizer.
By Lemma \ref{lem:cusp form BD},
with $48=20+2\cdot 14$ and $66=20+2\cdot 23$,
we see that $\mathcal{F}_{14}$ has positive geometric genus
and $\kappa(\mathcal{F}_{23})>0$.
(Cusp forms of weight $68$ will be used in \S \ref{sec:LLSS}.)
It remains to prove that $\mathcal{F}_{13}$ is unirational.
We prove
\begin{proposition}\label{prop:F5 BD rational}
$F_{13}$ is rational.
\end{proposition}
\begin{proof}
Consider the second projection $\pi\colon F_{13}\to {\mathbb{G}(1, 5)}^{13}$.
If $(l_{1}, \cdots, l_{13})\in \pi(F_{13})$,
the fiber $\pi^{-1}(l_{1}, \cdots, l_{13})$ is a non-empty open set of
the linear system of cubics containing $l_{1}, \cdots, l_{13}$,
which we denote by
\begin{equation*}
{{\mathbb P}}V(l_{1},\cdots,l_{13}) =
{{\mathbb P}}\operatorname{Ker} (H^{0}(\mathcal{O}_{{{\mathbb P}}^{5}}(3)) \to
\oplus_{i=1}^{13}H^{0}(\mathcal{O}_{l_{i}}(3))).
\end{equation*}
This shows that $F_{13}$ is birationally a ${{\mathbb P}}^{N}$-bundle over $\pi(F_{13})$ with
\begin{equation*}
N= \dim F_{13} - \dim \pi(F_{13}) \geq \dim F_{13} - \dim {\mathbb{G}(1, 5)}^{13} = 3.
\end{equation*}
Hence we are reduced to the following assertion.
\begin{claim}\label{lem:BD dominant}
$\pi\colon F_{13} \to {\mathbb{G}(1, 5)}^{13}$ is dominant.
\end{claim}
Assume to the contrary that $\pi$ is not dominant.
Then we have
$\dim V(l_{1},\cdots,l_{13})\geq 5$
for a general point $(l_{1}, \cdots, l_{13})$ of $\pi(F_{13})$.
Consider the similar projection
$\pi'\colon F_{14}\to {\mathbb{G}(1, 5)}^{14}$ in $n=14$.
Since $\mathcal{F}_{14}=F_{14}/{\rm PGL}_{6}$ cannot be uniruled as just proved,
we must have
$\dim V(l_{1}, \cdots, l_{14})=1$
for general $(l_{1}, \cdots, l_{14})\in \pi'(F_{14})$.
On the other hand, $V(l_{1}, \cdots, l_{14})$ can be written as
\begin{equation*}
V(l_{1},\cdots,l_{14}) =
\operatorname{Ker} (V(l_{1},\cdots,l_{13}) \stackrel{\rho}{\to} H^{0}(\mathcal{O}_{l_{14}}(3))),
\end{equation*}
where $\rho$ is the restriction map.
Hence
for general $(l_{1}, \cdots, l_{13})\in \pi(F_{13})$,
we have
$\dim V(l_{1}, \cdots, l_{13})=5$,
$\rho$ is surjective,
and $\pi(F_{13})$ is of codimension $1$ in ${\mathbb{G}(1, 5)}^{13}$.
The last property implies that the similar projection
$\pi''\colon F_{12}\to {\mathbb{G}(1, 5)}^{12}$ in $n=12$ must be dominant,
because otherwise $\pi(F_{13})$ would be dense in the inverse image of
$\pi''(F_{12})\subset {\mathbb{G}(1, 5)}^{12}$
by the projection ${\mathbb{G}(1, 5)}^{13}\to {\mathbb{G}(1, 5)}^{12}$,
which contradicts the $\frak{S}_{13}$-invariance of $\pi(F_{13})$.
This in turn shows that
\begin{equation*}
\dim V(l_{1},\cdots,l_{12}) = \dim F_{12} - \dim {\mathbb{G}(1, 5)}^{12} + 1 = 8
\end{equation*}
for a general point $(l_{1},\cdots,l_{12})$ of ${\mathbb{G}(1, 5)}^{12}$.
However, since
$V(l_{1},\cdots,l_{13})\to H^{0}(\mathcal{O}_{l_{14}}(3))$
is surjective,
$V(l_{1},\cdots,l_{12})\to H^{0}(\mathcal{O}_{l_{14}}(3))$
is surjective too.
Hence
$\dim V(l_{1},\cdots,l_{12},l_{14})=4$.
But since $(l_{1}, \cdots, l_{12}, l_{14})$ is a general point of $\pi(F_{13})$,
this is absurd.
This proves Claim \ref{lem:BD dominant}
and so finishes the proof of Proposition \ref{prop:F5 BD rational}.
\end{proof}
\begin{remark}\label{remark: g=2 K3 and cubic4}
In the analogous case of $K3$ surfaces of genus $g$ (\cite{Ma}),
when $3\leq g \leq 10$,
the weight of the quasi-pullback $\Phi_{K3,g}$ of $\Phi_{12}$ coincided with
\begin{equation*}
{\rm weight}(\Phi_{K3,g}) = \dim V_{g} + 19 = \dim V_{g} + \dim({\rm moduli})
\end{equation*}
for a representation space $V_{g}$ related to the projective model of the $K3$ surfaces.
Here,
for $\Phi_{K3,2}$ and $\Phi_{cubic}=\Phi_{12}|_{L_{cub}}$,
the ``switched'' equalities
\begin{eqnarray*}
& & {\rm weight}(\Phi_{K3,2}) -19 = 56 = h^{0}(\mathcal{O}_{{{\mathbb P}}^{5}}(3)) \\
& & {\rm weight}(\Phi_{cubic}) - 20 = 28 = h^{0}(\mathcal{O}_{{{\mathbb P}}^{2}}(6))
\end{eqnarray*}
hold.
Is this accidental?
\end{remark}
\section{Debarre-Voisin fourfolds}\label{sec:DV}
In this section we prove Theorem \ref{thm:main}
for the case of Debarre-Voisin fourfolds \cite{DV}.
Let $\mathcal{E}$ be the dual of the rank $6$ universal sub vector bundle
over the Grassmannian $G(6, 10)$.
The space $H^{0}(\bigwedge^{3}\mathcal{E})$ is naturally isomorphic to $\bigwedge^{3}({\mathbb{C}}^{10})^{\vee}$.
Debarre-Voisin \cite{DV} proved that the zero locus
$X_{\sigma}\subset G(6, 10)$ of a general section $\sigma$ of $\bigwedge^{3}\mathcal{E}$
is a holomorphic symplectic manifold of $K3^{[2]}$ type,
and the polarization given by the Pl\"ucker class has Beauville norm $22$ and is of non-split type.
The polarized Beauville lattice is hence isometric to
\begin{equation*}\label{eqn:LDV}
L_{DV} = 2U \oplus 2E_{8} \oplus K, \qquad
K=\begin{pmatrix} -2 & 1 \\ 1 & -6 \end{pmatrix}.
\end{equation*}
Let $\Gamma=\tilde{{\rm O}}^{+}(L_{DV})$.
\begin{lemma}\label{lem:cusp form DV}
There exists an embedding $K\hookrightarrow E_{8}$ with $r(K^{\perp})=40$.
The resulting cusp form $\Phi_{12}|_{L_{DV}}$ has weight $32$.
Moreover, $S_{46}(\Gamma, \det)$ has dimension $\geq 2$.
\end{lemma}
\begin{proof}
Let $v_{1}, v_{2}$ be the basis of $K$ in the above matrix expression.
We embed $K$ into $E_{8}$, in the model \eqref{eqn:E8} of $E_{8}$, by
\begin{equation*}
v_{1}\mapsto (1, -1, 0, \cdots, 0), \quad
v_{2}\mapsto (0, 1, 1, 2, 0, \cdots, 0).
\end{equation*}
The roots of $E_{8}$ orthogonal to these two vectors are
$\delta_{\pm i, \pm j}$ with $i, j\geq 5$ and
$\pm\delta'_{S}$ with $1, 2, 3\in S$ and $4\not\in S$.
The total number is $24+16=40$.
Hence the weight of $\Phi_{12}|_{L_{DV}}$ is $12+20=32$.
Working out the formula in \cite{Br},
we also see that $\dim M_{k}(\rho_{L_{DV}})=(k-1)/2$.
Taking the product of $\Phi_{12}|_{L_{DV}}$ with
the Gritsenko lift of $M_{5}(\rho_{L_{DV}})$,
we obtain the last assertion.
\end{proof}
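The root count above is a finite check and can be verified by brute force.
The following sketch is purely illustrative (it is not part of the proof) and
assumes the standard coordinate model of $E_{8}$, whose $240$ roots are
$\pm e_{i}\pm e_{j}$ ($i<j$) together with the vectors
$(\pm\tfrac{1}{2},\cdots,\pm\tfrac{1}{2})$ having an even number of minus signs;
we expect this to agree with the model \eqref{eqn:E8} used above.
\begin{verbatim}
# Illustrative brute-force check of the root count (not part of the proof).
# Assumes the standard coordinate model of E8 recalled in the text above.
from itertools import combinations, product
from fractions import Fraction

def e8_roots():
    roots = []
    for i, j in combinations(range(8), 2):        # integer roots +-e_i +- e_j
        for si, sj in product([1, -1], repeat=2):
            v = [Fraction(0)] * 8
            v[i], v[j] = Fraction(si), Fraction(sj)
            roots.append(tuple(v))
    for signs in product([1, -1], repeat=8):      # half-integer roots
        if signs.count(-1) % 2 == 0:
            roots.append(tuple(Fraction(s, 2) for s in signs))
    return roots                                  # 240 roots in total

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = (1, -1, 0, 0, 0, 0, 0, 0)
v2 = (0, 1, 1, 2, 0, 0, 0, 0)                     # the embedding used for L_DV
print(len([r for r in e8_roots()
           if dot(r, v1) == 0 and dot(r, v2) == 0]))   # prints 40
\end{verbatim}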
Let $U$ be the open locus of ${{\mathbb P}}(\bigwedge^{3}{\mathbb{C}}^{10})^{\vee}$
where $X_{\sigma}$ is smooth of dimension $4$
and $[\sigma]$ is ${\rm PGL}_{10}$-stable with no nontrivial stabilizer.
The period map
$U/{\rm PGL}_{10}\to {\Gamma \backslash \mathcal{D}}$
is generically finite and dominant (\cite{DV}).
Consider the incidence
\begin{equation*}
F_{n} =
\{ \: ([\sigma], p_{1}, \cdots, p_{n}) \in U \times G(6, 10)^{n} \: | \: p_{i}\in X_{\sigma} \: \}
\subset U \times G(6, 10)^{n}
\end{equation*}
and let
$\mathcal{F}_{n}=F_{n}/{\rm PGL}_{10}$.
By Lemma \ref{lem:cusp form DV},
with $32=20+2\cdot 6$ and $46=20+2\cdot 13$,
we see that $\mathcal{F}_{6}$ has positive geometric genus and $\kappa(\mathcal{F}_{13})>0$.
It remains to show that $\mathcal{F}_{5}$ is unirational.
We prove
\begin{proposition}
$F_{5}$ is rational.
\end{proposition}
\begin{proof}
Consider the second projection
$\pi\colon F_{n}\to G(6, 10)^{n}$.
The fiber $\pi^{-1}(p_{1}, \cdots, p_{n})$ over
$(p_{1}, \cdots, p_{n})\in \pi(F_{n})$
is a non-empty open set of
the linear system
${{\mathbb P}}V(p_{1},\cdots,p_{n})\subset {{\mathbb P}}H^{0}(\bigwedge^{3}\mathcal{E})$
of sections vanishing at $p_{1}, \cdots, p_{n}$.
When $n=5$, we have
\begin{equation*}
\dim V(p_{1},\cdots,p_{5}) \geq
h^{0}(\wedge^{3}\mathcal{E}) - 5 \cdot {\rm rk}(\wedge^{3}\mathcal{E}) = 120 - 5\cdot 20 = 20,
\end{equation*}
so
$F_{5}\to \pi(F_{5})$ is birationally a ${{\mathbb P}}^{N}$-bundle with $N\geq 19$.
Furthermore, by the same argument as Claim \ref{lem:BD dominant},
the above result $\kappa(\mathcal{F}_{6})\geq 0$ enables us to conclude
that $F_{5}\to G(6, 10)^{5}$ is dominant.
Therefore $F_{5}$ is rational.
\end{proof}
\section{Lehn-Lehn-Sorger-van Straten eightfolds}\label{sec:LLSS}
In this section we prove Theorem \ref{thm:main}
for the case of Lehn-Lehn-Sorger-van~Straten eightfolds \cite{LLSS}.
They have the same parameter space and period space as the Beauville-Donagi case.
Let $Y\subset {{\mathbb P}}^{5}$ be a smooth cubic fourfold which does not contain a plane.
The space $M^{gtc}(Y)$ of \textit{generalized twisted cubics} on $Y$ is defined as
the closure of the locus of twisted cubics on $Y$ in the Hilbert scheme ${\rm Hilb}_{3m+1}(Y)$.
By Lehn-Lehn-Sorger-van Straten \cite{LLSS},
$M^{gtc}(Y)$ is smooth and irreducible of dimension $10$, and
there exists a natural contraction $M^{gtc}(Y)\to X(Y)$
to a holomorphic symplectic manifold $X(Y)$
with general fibers ${{\mathbb P}}^{2}$.
The variety $X(Y)$ is of $K3^{[4]}$ type (\cite{AL})
and has a polarization of Beauville norm $2$ and non-split type
(see \cite{De} footnote 22).
Hence its polarized Beauville lattice is isometric to the lattice
$L_{cub}=2U \oplus 2E_{8}\oplus A_{2}$
considered in \S \ref{sec:BD},
and the monodromy group is evidently contained in ${\rm O}^{+}(L_{cub})$.
We can reuse Lemma \ref{lem:cusp form BD}:
since
${\rm O}^{+}(L_{cub})= \langle \tilde{{\rm O}}^{+}(L_{cub}), -{\rm id} \rangle$
and the weights in Lemma \ref{lem:cusp form BD} are even,
the cusp forms there are
not just $\tilde{{\rm O}}^{+}(L_{cub})$-invariant
but also ${\rm O}^{+}(L_{cub})$-invariant
as remarked in \eqref{eqn:-id effect}.
Let $H={\rm Hilb}^{gtc}({{\mathbb P}}^{5})$ be the irreducible component of
the Hilbert scheme ${\rm Hilb}_{3m+1}({{\mathbb P}}^{5})$
that contains the locus of twisted cubics in ${{\mathbb P}}^{5}$.
Then $H$ is smooth of dimension $20$,
and we have $M^{gtc}(Y)=H\cap {\rm Hilb}_{3m+1}(Y)$
for $Y$ as above (\cite{LLSS}).
Let $U\subset |\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$
be the parameter space of smooth cubic fourfolds
which do not contain a plane and have no nontrivial stabilizer in ${\rm PGL}_{6}$.
The period map $U/{\rm PGL}_{6} \to {\Gamma \backslash \mathcal{D}}$,
where $\Gamma={\rm O}^{+}(L_{cub})$,
is generically finite and dominant (\cite{LLSS}, \cite{AL}).
We consider the incidence
\begin{equation*}
M^{gtc}_{n} =
\{ \: (Y, C_{1}, \cdots, C_{n})\in U\times H^{n} \: | \: C_{i}\in M^{gtc}(Y) \: \}
\subset U\times H^{n}.
\end{equation*}
As noticed in \cite{LLSS},
the construction of $X(Y)$ can be done in family.
This produces a smooth family $X\to U$ of symplectic eightfolds
and a contraction $M^{gtc}_{1}\to X$ over $U$
with general fibers ${{\mathbb P}}^{2}$.
Taking the $n$-fold fiber product
$X_{n} = X\times_{U} \cdots \times_{U} X$,
we obtain a morphism
$M^{gtc}_{n}\to X_{n}$ over $U$
with general fibers $({{\mathbb P}}^{2})^{n}$.
Let
$\mathcal{F}_{n}=X_{n}/{\rm PGL}_{6}$.
By Lemma \ref{lem:cusp form BD},
now with $48=20+4\cdot 7$ and $68=20+4\cdot 12$
($d=4$ in place of $d=2$)
and with $\Gamma={\rm O}^{+}(L_{cub})$ in place of $\tilde{{\rm O}}^{+}(L_{cub})$,
we see that
$\mathcal{F}_{7}$ has positive geometric genus
and $\kappa(\mathcal{F}_{12})>0$.
It remains to show that $\mathcal{F}_{5}$ is unirational.
It suffices to prove
\begin{proposition}
$M^{gtc}_{5}$ is unirational.
\end{proposition}
\begin{proof}
We enlarge $M^{gtc}_{n}$ to the complete incidence over $|\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$:
\begin{equation*}
(M^{gtc}_{n})^{\ast} =
\{ \: (Y, C_{1}, \cdots, C_{n})\in |\mathcal{O}_{{{\mathbb P}}^{5}}(3)|\times H^{n} \: | \: C_{i}\subset Y \: \}.
\end{equation*}
The fiber of the projection
$\pi\colon (M^{gtc}_{n})^{\ast}\to H^{n}$ over $(C_{1}, \cdots, C_{n})\in H^{n}$
is the linear system
${{\mathbb P}}V(C_{1},\cdots,C_{n})\subset |\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$
of cubics containing $C_{1}, \cdots, C_{n}$.
When $n=5$,
we have $\dim V(C_{1}, \cdots, C_{5})\geq 6$ for any $(C_{1}, \cdots, C_{5})\in H^{5}$,
so $\pi$ is surjective, and
there is a unique irreducible component of $(M^{gtc}_{5})^{\ast}$
of dimension $\geq 105$ that is birationally a ${{\mathbb P}}^{N}$-bundle over $H^{5}$ with $N\geq 5$.
On the other hand,
$M^{gtc}_{5}$ is an open set of the unique irreducible component of $(M^{gtc}_{5})^{\ast}$
of dimension $105$ that dominates $|\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$.
We want to show that these two irreducible components coincide:
then $M^{gtc}_{5}\to H^{5}$ is dominant,
and $M^{gtc}_{5}$ is birationally a ${{\mathbb P}}^{5}$-bundle over $H^{5}$
and hence unirational.
Let $(C_{1}, \cdots, C_{5})$ be a general point of $H^{5}$.
By genericity we may assume that
each $C_{i}$ is smooth and spans a $3$-plane $P_{i}\subset {{\mathbb P}}^{5}$,
$P_{i}\cap P_{j}$ is a line, and $C_{i}\cap P_{j}=\emptyset$.
Let $(Y, C_{1}, \cdots, C_{5})$ be a general point of
$\pi^{-1}(C_{1}, \cdots, C_{5})={{\mathbb P}}V(C_{1}, \cdots, C_{5})$.
It suffices to show that
the generalizations of $(Y, C_{1}, \cdots, C_{5})$,
i.e., its small perturbations inside $(M^{gtc}_{5})^{\ast}$,
contain a point $(Y', C_{1}', \cdots, C_{5}')$ with $Y'\in U$.
We may assume that $Y$ is irreducible and contains no $3$-plane,
because the locus of $(Y, C_{1}, \cdots, C_{5})$ with $Y$ reducible or containing a $3$-plane
has dimension $<105$.
Since each $C_{i}$ is smooth,
the results of \cite{LLSS} \S 2 tell us that
the cubic surface $S_{i}=Y\cap P_{i}$ is either
(A) with at most ADE singularities or
(B) integral but non-normal (singular along a line) or
(C) reducible.
By comparison of dimension again,
we may assume that at least one, say $S_{1}$, is of type (A).
Now $(C_{2}, \cdots, C_{5})$ is a general point of $H^{4}$.
The projection $M^{gtc}_{4}\to H^{4}$ is dominant
as can be checked similarly in an inductive way.
Therefore there exists a cubic fourfold $Y''\in U$ containing $C_{2}, \cdots, C_{5}$.
Let $Y'$ be a general member of the pencil $\langle Y, Y'' \rangle$.
Since $Y''\in U$, we have $Y'\in U$.
Since both $Y$ and $Y''$ contain $C_{2}, \cdots, C_{5}$,
$Y'$ contains $C_{2}, \cdots, C_{5}$ too.
In the fixed $3$-plane $P_{1}$,
the cubic surface $S'=Y'\cap P_{1}$ degenerates to
the cubic surface $S_{1}=Y\cap P_{1}$ with at most ADE singularities,
so $S'$ has at most ADE singularities too.
By \cite{LLSS} Theorem 2.1,
the nets of twisted cubics on cubic surfaces degenerate flatly in such a family.
Therefore we have a twisted cubic $C'\subset S'$
which specializes to $C_{1}\subset S_{1}$ as $Y'$ specializes to $Y$.
Therefore $(Y', C', C_{2}, \cdots, C_{5})\in M_{5}^{gtc}$ specializes to $(Y, C_{1}, C_{2}, \cdots, C_{5})$.
This proves our assertion.
\end{proof}
\section{Varieties of power sums of cubic fourfolds}\label{sec:IR}
In this section we prove Theorem \ref{thm:main} for the case of Iliev-Ranestad fourfolds \cite{IR}.
Let $H$ be the irreducible component of the Hilbert scheme
${\rm Hilb}_{10}|\mathcal{O}_{{{\mathbb P}}^{5}}(1)|$
of length $10$ subschemes of $|\mathcal{O}_{{{\mathbb P}}^{5}}(1)|$
that contains the locus of $10$ distinct points.
For a cubic fourfold $Y\subset {{\mathbb P}}^{5}$
with defining equation $f\in H^{0}(\mathcal{O}_{{{\mathbb P}}^{5}}(3))$,
its variety of sums of $10$ powers
$VSP(Y)=VSP(Y, 10)$
is defined as the closure in $H$
of the locus of distinct
$([l_{1}], \cdots, [l_{10}])$ such that
$f=\sum_{i}\lambda_{i}l_{i}^{3}$
for some $\lambda_{i}\in {\mathbb{C}}$.
Iliev-Ranestad \cite{IR}, \cite{IR2} proved that
when $Y$ is general,
$VSP(Y)$ is a holomorphic symplectic manifold of $K3^{[2]}$ type,
with polarization of Beauville norm $38$ and non-split type.
(See also \cite{Mo} for the computation of the polarization.)
Hence its polarized Beauville lattice is isometric to
\begin{equation*}\label{eqn:LIR}
L_{IR} = 2U \oplus 2E_{8} \oplus K, \qquad
K=\begin{pmatrix} -2 & 1 \\ 1 & -10 \end{pmatrix}.
\end{equation*}
Let $\Gamma=\tilde{{\rm O}}^{+}(L_{IR})$.
\begin{lemma}\label{lem:cusp form IR}
There exists an embedding $K\hookrightarrow E_{8}$ with $r(K^{\perp})=40$.
The resulting cusp form $\Phi_{12}|_{L_{IR}}$ has weight $32$.
Moreover, $S_{44}(\Gamma, \det)$ has dimension $\geq 2$.
\end{lemma}
\begin{proof}
Let $v_{1}, v_{2}$ be the basis of $K$ in the above expression.
We embed $K\hookrightarrow E_{8}$ by sending, in the model \eqref{eqn:E8} of $E_{8}$,
\begin{equation*}
v_{1}\mapsto (1, -1, 0, \cdots, 0), \quad
v_{2}\mapsto (0, 1, 3, 0, \cdots, 0).
\end{equation*}
The roots of $E_{8}$ orthogonal to these two vectors are
$\delta_{\pm i, \pm j}$ with $i, j\geq 4$,
whose number is $2\cdot 5 \cdot 4=40$.
Hence $\Phi_{12}|_{L_{IR}}$ has weight $12+20=32$.
Furthermore, computing the formula in \cite{Br}, we see that
$\dim M_{k}(\rho_{L_{IR}})=[(5k-3)/6]$.
Product of $\Phi_{12}|_{L_{IR}}$ with the Gritsenko lift of $M_{3}(\rho_{L_{IR}})$
implies the last assertion.
\end{proof}
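The count of $40$ roots here is again a finite check: the brute-force sketch given after Lemma \ref{lem:cusp form DV}, with $v_{2}$ replaced by $(0, 1, 3, 0, \cdots, 0)$, returns the same number.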
Let $U$ be the open locus of $|\mathcal{O}_{{{\mathbb P}}^{5}}(3)|$
where $VSP(Y)$ is smooth of dimension $4$ and $Y$ is smooth with no nontrivial stabilizer.
The period map $U/{\rm PGL}_{6}\to {\Gamma \backslash \mathcal{D}}$
is generically finite and dominant (\cite{IR}, \cite{IR2}).
Consider the incidence
\begin{equation*}
VSP_{n} =
\{ \: (Y, \Gamma_{1}, \cdots, \Gamma_{n}) \in U \times H^{n} \: | \: \Gamma_{i}\in VSP(Y) \: \}
\subset U \times H^{n}
\end{equation*}
and let
$\mathcal{F}_{n}=VSP_{n}/{\rm PGL}_{6}$.
By Lemma \ref{lem:cusp form IR},
with $32=20+2\cdot 6$ and $44=20+2\cdot 12$,
we see that $\mathcal{F}_{6}$ has positive geometric genus
and $\kappa(\mathcal{F}_{12})>0$.
On the other hand,
as observed in \cite{IR},
$VSP_{1}$ is birationally a ${{\mathbb P}}^{9}$-bundle over $H$
and hence rational.
Therefore $\mathcal{F}_{1}$ is unirational.
This proves Theorem \ref{thm:main} in the present case.
\begin{remark}
There also exist embeddings $K\hookrightarrow E_{8}$ with $r(K^{\perp})=30$
(send $v_{2}$ to $(0, 1, 1, 2, 2, 0, 0, 0)$ or to $(0, 1, 1, 1, 1, 1, 1, 2)$),
but the resulting cusp form has weight $27$,
which is not of the form $20+2n$.
This, however, suggests that $\kappa\geq 0$ would actually start at least from $n=4$.
\end{remark}
\section{Double EPW series}\label{sec:EPW}
In this section we prove Theorem \ref{thm:main} for
the cases of double EPW sextics by O'Grady \cite{OG}
and of double EPW cubes by Iliev-Kapustka-Kapustka-Ranestad \cite{IKKR}.
They share some common features:
both are parametrized by the Lagrangian Grassmannian $LG=LG(\bigwedge^{3}{\mathbb{C}}^{6})$,
where $\bigwedge^{3}{\mathbb{C}}^{6}$ is equipped with the canonical symplectic form
$\bigwedge^{3}{\mathbb{C}}^{6} \times \bigwedge^{3}{\mathbb{C}}^{6} \to \bigwedge^{6}{\mathbb{C}}^{6}$.
Both are constructed as double covers of degeneracy loci related to $\bigwedge^{3}{\mathbb{C}}^{6}$.
And both have $L_{EPW}=2U\oplus 2E_{8}\oplus 2A_{1}$
as the polarized Beauville lattices.
Thus they share the same parameter space and essentially the same period space.
The presence of covering involution requires extra care
in the construction of the universal (or perhaps we should say rather ``tautological'') family
over a Zariski open set of the moduli space.
\subsection{Double EPW sextics}\label{ssec:OG}
We recall the construction of double EPW sextics following \cite{OG}, \cite{OG2}.
Let $F$ be the vector bundle over ${{\mathbb P}}^{5}$
whose fiber over $[v]\in {{\mathbb P}}^{5}$ is the image of
${\mathbb{C}}v \wedge (\bigwedge^{2}{\mathbb{C}}^{6})\to \bigwedge^{3}{\mathbb{C}}^{6}$.
For $[A]\in LG$ we write
$Y_{A}[k]\subset {{\mathbb P}}^{5}$
for the locus of those $[v]\in {{\mathbb P}}^{5}$ such that $\dim (A\cap F_{v})\geq k$.
We say that $A$ is generic if
$Y_{A}[3]= \emptyset$ and
${{\mathbb P}}A \cap G(3, 6) = \emptyset$ in ${{\mathbb P}}(\bigwedge^{3}{\mathbb{C}}^{6})$.
In that case,
$Y_{A}=Y_{A}[1]$ is a sextic hypersurface in ${{\mathbb P}}^{5}$ singular along $Y_{A}[2]$,
$Y_{A}[2]$ is a smooth surface, and
$Y_{A}$ has a transversal family of $A_{1}$-singularities along $Y_{A}[2]$.
Let $\lambda_{A}\colon F\to (\bigwedge^{3}{\mathbb{C}}^{6}/A) \otimes \mathcal{O}_{{{\mathbb P}}^{5}}$
be the composition of the inclusion
$F\hookrightarrow \bigwedge^{3}{\mathbb{C}}^{6} \otimes \mathcal{O}_{{{\mathbb P}}^{5}}$
and the projection
$\bigwedge^{3}{\mathbb{C}}^{6}\otimes \mathcal{O}_{{{\mathbb P}}^{5}} \to
(\bigwedge^{3}{\mathbb{C}}^{6}/A) \otimes \mathcal{O}_{{{\mathbb P}}^{5}}$.
Then ${\rm coker}(\lambda_{A})=i_{\ast}\zeta_{A}$
for a coherent sheaf $\zeta_{A}$ on $Y_{A}$
where $i:Y_{A}\hookrightarrow {{\mathbb P}}^{5}$ is the inclusion.
Let $\xi_{A}=\zeta_{A}\otimes \mathcal{O}_{Y_{A}}(-3)$.
If we choose a Lagrangian subspace $B$ of $\bigwedge^{3}{\mathbb{C}}^{6}$ transverse to $A$,
we can define a multiplication $\xi_{A}\times \xi_{A}\to \mathcal{O}_{Y_{A}}$.
Although $B$ is necessary for the construction,
the resulting multiplication does not depend on the choice of $B$ (\cite{OG2} p.152).
Then let
$X_{A}={\rm Spec}(\mathcal{O}_{Y_{A}}\oplus \xi_{A})$.
This is a double cover of $Y_{A}$.
If $A$ is generic in the above sense,
$X_{A}$ is a holomorphic symplectic manifold of $K3^{[2]}$ type.
The polarization (pullback of $\mathcal{O}_{{{\mathbb P}}^{5}}(1)$)
has Beauville norm $2$ and is of split type,
and the polarized Beauville lattice is isometric to $L_{EPW}$.
If $LG^{\circ}\subset LG$ is the open locus of generic $A$,
the period map
$LG^{\circ}/{\rm PGL}_{6}\to {\Gamma \backslash \mathcal{D}}$,
where $\Gamma=\tilde{{\rm O}}^{+}(L_{EPW})$,
is birational (\cite{OG} \S 6 and \cite{Mark} \S 8).
\begin{lemma}\label{lem:cusp form EPW6}
The cusp form $\Phi_{12}|_{L_{EPW}}$ has weight $42$.
Moreover, $S_{58}(\Gamma, \det)$ has dimension $\geq 2$.
\end{lemma}
\begin{proof}
We embed $2A_{1}$ in $E_{8}$ in any natural way.
Then $(2A_{1})^{\perp}\simeq D_{6}$ has $60$ roots,
so $\Phi_{12}|_{L_{EPW}}$ has weight $42$.
Working out the formula in \cite{Br}, we see that
$\dim M_{k}(\rho_{L_{EPW}})=[k/3]$.
Product of $\Phi_{12}|_{L_{EPW}}$ with the Gritsenko lift of $M_{7}(\rho_{L_{EPW}})$
implies the second assertion.
\end{proof}
The construction of double EPW sextics can be done
over a Zariski open set of the moduli space as follows (cf.~\cite{OG2}).
Let $LG'\subset LG^{\circ}$ be the open locus where
$A$ has no nontrivial stabilizer,
and
$\pi_{1}\colon LG'\times {{\mathbb P}}^{5}\to LG'$,
$\pi_{2}\colon LG'\times {{\mathbb P}}^{5}\to {{\mathbb P}}^{5}$
be the projections.
Let
$Y=\cup_{A}Y_{A}\subset LG'\times {{\mathbb P}}^{5}$
be the universal family of EPW sextics over $LG'$.
\begin{lemma}\label{lem:shrink}
There exists a ${\rm PGL}_{6}$-invariant Zariski open set $LG''$ of $LG'$ such that
$\mathcal{O}_{LG''\times {{\mathbb P}}^{5}}(Y) \simeq \pi_{2}^{\ast}\mathcal{O}_{{{\mathbb P}}^{5}}(6)$
as ${\rm PGL}_{6}$-linearized line bundles over $LG''\times {{\mathbb P}}^{5}$.
\end{lemma}
\begin{proof}
Consider the quotient
$\mathcal{Y}=Y/{\rm PGL}_{6}$,
which is a divisor of the Brauer-Severi variety
$\mathcal{P}=(LG'\times{{\mathbb P}}^{5})/{\rm PGL}_{6}$
over $\mathcal{M}=LG'/{\rm PGL}_{6}$.
Each fiber of $\mathcal{Y}\to \mathcal{M}$ is a sextic hypersurface in
the corresponding fiber of $\pi:\mathcal{P}\to\mathcal{M}$,
hence an anti-canonical divisor of that fiber.
This implies that
$\mathcal{O}_{\mathcal{P}}(\mathcal{Y}) \simeq
K_{\pi}^{-1}\otimes \pi^{\ast}\mathcal{O}_{\mathcal{M}}(D)$
for some divisor $D$ of $\mathcal{M}$.
Removing the support of $D$ from $\mathcal{M}$,
we obtain $\mathcal{O}_{\mathcal{P}}(\mathcal{Y}) \simeq K_{\pi}^{-1}$
over its complement.
Pulling back this isomorphism to $LG'\times {{\mathbb P}}^{5}\to LG'$,
we obtain the desired ${\rm PGL}_{6}$-equivariant isomorphism.
\end{proof}
We rewrite $LG''=LG'$ and $Y|_{LG''}=Y$.
Let $E$ be the universal quotient vector bundle of rank $10$ over $LG'$.
We have a natural homomorphism
$\lambda\colon \pi_{2}^{\ast}F\to \pi_{1}^{\ast}E$
over $LG'\times {{\mathbb P}}^{5}$
whose restriction to $\{ A \} \times {{\mathbb P}}^{5}$ is $\lambda_{A}$,
and ${\rm coker}(\lambda)=i_{\ast}\zeta$
for a coherent sheaf $\zeta$ on $Y$
where $i\colon Y\to LG'\times {{\mathbb P}}^{5}$ is the inclusion.
As was done in \cite{OG2},
if we choose $B\in LG$ and let
$U_{B}\subset LG'$ be the open locus of those $A$ transverse to $B$,
we have a multiplication $\zeta\times \zeta \to \mathcal{O}_{Y}(Y)$
over $Y|_{U_{B}}$.
Since the multiplication does not depend on the choice of $B$ at each fiber,
we obtain an ${\rm SL}_{6}$-equivariant multiplication
$\zeta\times \zeta \to \mathcal{O}_{Y}(Y)$
over the whole $Y$.
If we put
$\xi=\zeta\otimes \mathcal{O}_{Y}(-3)$,
Lemma \ref{lem:shrink} enables us to pass to an ${\rm SL}_{6}$-equivariant multiplication
$\xi\times \xi \to \mathcal{O}_{Y}$.
Since the scalar matrices in ${\rm SL}_{6}$ act trivially on $\xi$,
$\xi$ is actually ${\rm PGL}_{6}$-linearized and
this multiplication is ${\rm PGL}_{6}$-equivariant.
Now taking $X={\rm Spec}(\mathcal{O}_{Y}\oplus \xi)$,
we obtain a universal family of double EPW sextics over $LG'$
acted on by ${\rm PGL}_{6}$.
Let
$\mathcal{M}=LG'/{\rm PGL}_{6}$,
$\mathcal{F}=X/{\rm PGL}_{6}$ and
$\mathcal{F}_{n}=\mathcal{F}\times_{\mathcal{M}}\cdots \times_{\mathcal{M}} \mathcal{F}$
($n$ times).
Note that this is not a moduli space even birationally,
as it is not mod out by the covering involution.
By Lemma \ref{lem:cusp form EPW6},
with $42=20+2\cdot 11$ and $58=20+2\cdot 19$,
we see that
$\mathcal{F}_{11}$ has positive geometric genus and
$\kappa(\mathcal{F}_{19})>0$.
This proves Theorem \ref{thm:main} in the case of double EPW sextics.
\subsection{Double EPW cubes}\label{ssec:IKKR}
We recall the construction of double EPW cubes following \cite{IKKR}.
For $[U]\in G(3, 6)$, we write
$T_{U}=(\bigwedge^{2}U)\wedge {\mathbb{C}}^{6} \subset \bigwedge^{3}{\mathbb{C}}^{6}$.
For $[A]\in LG$, let
$D_{k}^{A}\subset G(3, 6)$ be the locus of those $[U]$ with $\dim (A\cap T_{U})\geq k$.
We say that $A$ is generic if
$D^{A}_{4}=\emptyset$ and
${{\mathbb P}}A \cap G(3, 6)=\emptyset$ in ${{\mathbb P}}(\bigwedge^{3}{\mathbb{C}}^{6})$.
In that case,
$D^{A}_{2}$ is a sixfold singular along $D^{A}_{3}$,
$D^{A}_{3}$ is a smooth threefold, and
the singularities of $D^{A}_{2}$ form a transversal family of
$\frac{1}{2}(1, 1, 1)$ quotient singularities along $D^{A}_{3}$.
Let $\tilde{D}_{2}^{A}\to D^{A}_{2}$ be the blow-up at $D^{A}_{3}$
and $E\subset \tilde{D}_{2}^{A}$ be the exceptional divisor.
Then $\tilde{D}_{2}^{A}$ is smooth, and
$E$ is a smooth bi-canonical divisor of $\tilde{D}_{2}^{A}$ (\cite{IKKR} p.~254).
Take the double cover $\tilde{Y}_{A}\to \tilde{D}_{2}^{A}$ branched over $E$ and
contract the ${{\mathbb P}}^{2}$-ruling of the ramification divisor
by using pullback of some multiple of $\mathcal{O}_{D^{A}_{2}}(1)$.
This produces a holomorphic symplectic manifold $Y_{A}$
of $K3^{[3]}$ type (\cite{IKKR} Theorem 1.1).
The polarization has Beauville norm $4$ and divisibility $2$,
so the polarized Beauville lattice is isometric to $L_{EPW}$ by \cite{GHS10}.
The monodromy group is evidently contained in ${\rm O}^{+}(L_{EPW})$
(but whether it is smaller seems unclear to me).
The quotient ${\rm O}^{+}(L_{EPW})/\tilde{{\rm O}}^{+}(L_{EPW})$
is $\frak{S}_{2}$ generated by the switch of the two copies of $A_{1}$,
say $\iota\in {\rm O}^{+}(L_{EPW})$.
Construction of cusp forms becomes more delicate than the previous cases,
as $\Phi_{12}|_{L_{EPW}}$ is \textit{anti}-invariant under $\iota$.
\begin{lemma}\label{lem:cusp form EPW cube}
Let $\Gamma={\rm O}^{+}(L_{EPW})$.
Then
$S_{68}(\Gamma, \det) \ne \{ 0 \}$ and
$S_{80}(\Gamma, \det)$ has dimension $\geq 2$.
\end{lemma}
\begin{proof}
We abbreviate $L=L_{EPW}$.
We first verify that $\Phi_{12}|_{L}$ is $\iota$-anti-invariant.
Let $\iota'$ be the involution of the $D_{6}$ lattice induced by
the involution of its Dynkin diagram.
Then $\iota \oplus \iota'$ extends to an involution $\tilde{\iota}$ of $II_{2,26}$.
The modular form $\Phi_{12}$ is $\tilde{\iota}$-invariant.
If we run $\delta$ over the positive roots of $D_{6}$,
the product $\prod_{\delta}(\delta, \cdot)$ is also $\tilde{\iota}$-invariant
because $\iota'$ permutes the positive roots of $D_{6}$.
Therefore $\Phi_{12}/\prod_{\delta}(\delta, \cdot )$
as a section of $\mathcal{L}^{\otimes 42}\otimes \det$
over $\mathcal{D}_{II_{2,26}}$ is $\tilde{\iota}$-invariant.
Since $\det(\tilde{\iota})=1$ while $\det(\iota)=-1$,
this shows that
$\Phi_{12}|_{L}$ as a section of $\mathcal{L}^{\otimes 42}\otimes \det$ over $\mathcal{D}_{L}$
is anti-invariant under $\iota$.
In order to construct $\iota$-invariant cusp forms of character $\det$,
we take product of $\Phi_{12}|_{L}$ with
the Gritsenko lift of the $\iota$-anti-invariant part of $M_{k}(\rho_{L})$.
By the formulae in \cite{Br} and \cite{Ma2},
we see that
$\dim M_{k}(\rho_{L})=[k/3]$ and
$\dim M_{k}(\rho_{L})^{\iota} = [(k+2)/4]$
for $k>2$ odd.
We also require the congruence condition
$42+k+9 \equiv 20$ mod $3$, namely $k\equiv 2$ mod $3$.
Now, when $k=17$ (resp.~$k=29$),
the $\iota$-anti-invariant part has dimension $[17/3]-[19/4]=1$ (resp.~$[29/3]-[31/4]=2$),
and the resulting products have weight $42+17+9=68$ (resp.~$42+29+9=80$).
This proves our claim.
\end{proof}
We can do the double cover construction over a Zariski open set of the moduli space.
Let $LG^{\circ}\subset LG$ be the open set of generic $[A]$
which is ${\rm PGL}_{6}$-stable and has no nontrivial stabilizer.
Let $D_{2}=\cup_{A}D_{2}^{A} \subset LG^{\circ}\times G(3, 6)$
be the universal family of $D^{A}_{2}$'s.
We have the geometric quotients
$\mathcal{M}=LG^{\circ}/{\rm PGL}_{6}$,
$\mathcal{Z}=D_{2}/{\rm PGL}_{6}$
with projection $\mathcal{Z}\to \mathcal{M}$.
The relative $\mathcal{O}(2)$ descends.
Let $\tilde{\mathcal{Z}}\to \mathcal{Z}$ be the blow-up at ${\rm Sing}(\mathcal{Z})$,
$\mathcal{B}\subset \tilde{\mathcal{Z}}$ be the exceptional divisor,
and $\pi\colon \tilde{\mathcal{Z}}\to \mathcal{M}$ be the projection.
As in the proof of Lemma \ref{lem:shrink},
we may shrink $\mathcal{M}$ to a Zariski open set $\mathcal{M}'\subset \mathcal{M}$
so that $\mathcal{B}|_{\mathcal{M}'}\sim 2K_{\pi}$.
Then we can take the double cover of
$\tilde{\mathcal{Z}}|_{\mathcal{M}'}$ branched over $\mathcal{B}|_{\mathcal{M}'}$.
Contracting the ramification divisor relatively by using
pullback of a multiple of the relative $\mathcal{O}(2)$,
we obtain a universal family $\mathcal{F}\to \mathcal{M}'$
of double EPW cubes over $\mathcal{M}'$.
Then let
$\mathcal{F}_{n}=\mathcal{F}\times_{\mathcal{M}'} \cdots \times_{\mathcal{M}'} \mathcal{F}$
($n$ times).
The period map $\mathcal{M}\to {\Gamma \backslash \mathcal{D}}$ is generically finite and dominant (\cite{IKKR} Proposition 5.1).
By Lemma \ref{lem:cusp form EPW cube},
with $68=20+3\cdot 16$ and $80=20+3\cdot 20$,
we see that $\mathcal{F}_{16}$ has positive geometric genus
and $\kappa(\mathcal{F}_{20})>0$.
This proves Theorem \ref{thm:main} in the case of double EPW cubes.
\end{document}
\begin{document}
\title{Multi-Structural Games and Number of Quantifiers}
\titlecomment{This paper is an expanded and reorganized version of the LICS (Logic in Computer Science) 2021 paper \cite{Fagin21}, which points to the Computer Science arXiv, version 1, entry \cite{Fagin21a} for full proofs of its theorems.}
\author[]{Ronald Fagin}
\address{IBM Research - Almaden, San Jose, CA}
\email{[email protected]}
\author[]{Jonathan Lenchner}
\address{IBM T.J. Watson Research Center, Yorktown Heights, NY}
\email{[email protected]}
\author[]{Kenneth W. Regan}
\address{Department of CSE, University at Buffalo, Amherst, NY}
\email{[email protected]}
\author[]{$\text{Nikhil Vyas}^\dagger$}
\address{Department of EECS, MIT, Cambridge, MA}
\email{[email protected]}
\thanks{$^\dagger$Supported by NSF CCF-1909429. Work partially done while visiting IBM Research - Almaden}
\begin{abstract}
We study multi-structural games, played on two sets $\mathcal{A}$ and $\mathcal{B}$ of structures. These games generalize Ehrenfeucht-\fraisse games. Whereas Ehrenfeucht-\fraisse games capture the quantifier rank of a first-order sentence, multi-structural games capture the number of quantifiers, in the sense that Spoiler wins the $r$-round game if and only if there is a first-order sentence $\phi$ with at most $r$ quantifiers, where every structure in $\mathcal{A}$ satisfies $\phi$ and no structure in $\mathcal{B}$ satisfies $\phi$. We use these games to give a complete characterization of the number of quantifiers required to distinguish linear orders of different sizes and we develop machinery for analyzing structures beyond linear orders.
\end{abstract}
\maketitle
\section{Introduction}
Model theory has a number of techniques for proving inexpressibility results.
However,
as noted in \cite{Fagin93},
almost none of the key theorems and tools of model
theory, such as the compactness theorem and the
L\"owenheim-Skolem theorems, apply to
finite structures. Among the few tools of model
theory that yield inexpressibility results for finite structures are Ehrenfeucht-\fraisse games \cite{Ehr61,Fra54}, henceforth E-F games.
The standard E-F game is played by ``Spoiler'' and ``Duplicator'' on a pair $(A,B)$ of structures over the same first-order vocabulary $\tau$, for a specified number $r$ of \emph{rounds}. In each round, Spoiler chooses an element from $A$ or from $B$, and Duplicator replies by choosing an element from the other structure. In this way, they determine sequences of elements $a_1,\dots,a_r \in A$ and $b_1,\dots,b_r \in B$, repetitions allowed, which, in addition to any possible constants defined on $A$ and $B$, define substructures $A'$ of $A$ and $B'$ of $B$. The analysis can be phrased as attempting to define a sequence of functions $f$, $g$, and $f\cup g$. If any of these functions is not well-defined then Spoiler wins. The function $f{:}\{a_1,....,a_r\} \rightarrow \{b_1,....,b_r\}$ is defined by $f(a_i) = b_i$ for $i = 1,...,r$. If there are $i$ and $j$ with $1 \leq i < j \leq r$ such that $a_i = a_j$ but $b_i \neq b_j$, then $f$ is not well-defined. The partial function $g{:} A \rightarrow B$ is defined so that $g(a) = b$ if $a$ and $b$ are associated with the same constant symbol in $\tau$. If after a round of play there are two constant symbols $c_i, c_j$ with $a \in A$ associated with both $c_i$ and $c_j$, but different elements of $B$ associated with $c_i$ and $c_j$, then $g$ is not well-defined. The joint partial function $f \cup g$ stipulates that $(f \cup g)(a) = f(a)$ if $f$ is defined on $a$ and $(f \cup g)(a) = g(a)$ if $g$ is defined on $a$. If $f$ and $g$ \emph{are} well-defined \emph{and} agree on common elements, then $f \cup g$ is well-defined. If it is not well-defined, or if $f \cup g$ is not an isomorphism from its domain $A'$ to its image $B'$, then Spoiler wins. Finally, if $f \cup g$ is well-defined, and $f \cup g$ is an isomorphism from $A'$ to $B'$, then Duplicator wins. In this latter case of a Duplicator win, we say that the sequences $\langle a_1,...,a_r \rangle$ and $\langle b_1,...,b_r \rangle$ of selected elements ``give rise to a partial isomorphism'' between $A$ and $B$.
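For the linear orders that are the focus of this paper, the winning condition is easy to state concretely. The following sketch is purely illustrative: for finite linear orders whose elements are encoded as integers, and a vocabulary with no constants, it checks whether the selected sequences give rise to a partial isomorphism.
\begin{verbatim}
# Illustrative sketch (linear orders only, no constants in the vocabulary):
# check whether the selections a_1,...,a_r and b_1,...,b_r give rise to a
# partial isomorphism in the sense described above.
def partial_isomorphism(a, b):
    assert len(a) == len(b)
    for i in range(len(a)):
        for j in range(len(a)):
            if (a[i] == a[j]) != (b[i] == b[j]):
                return False   # the map a_i -> b_i is not well-defined/injective
            if (a[i] < a[j]) != (b[i] < b[j]):
                return False   # the 'less than' relation is not preserved
    return True

print(partial_isomorphism([2, 1], [1, 2]))   # False: '<' is flipped
print(partial_isomorphism([2, 3], [1, 2]))   # True
\end{verbatim}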
The equivalence theorem for E-F games \cite{Ehr61,Fra54}
characterizes the minimum \emph{quantifier rank} of a sentence $\phi$ over $\tau$ that is true for $A$ but false for $B$.
The quantifier rank $\mathrm{qr}(\phi)$ is defined as zero for a quantifier-free sentence $\phi$, and inductively:
\begin{eqnarray*}
\mathrm{qr}(\neg\phi) &=& \mathrm{qr}(\phi),\\
\mathrm{qr}(\phi \vee \psi) = \mathrm{qr}(\phi \land \psi)&=& \max\set{\mathrm{qr}(\phi),\mathrm{qr}(\psi)},\\
\mathrm{qr}(\forall x \phi(x)) = \mathrm{qr}(\exists x \phi(x)) &=& \mathrm{qr}(\phi) + 1.
\end{eqnarray*}
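For example, the sentence $\exists x(\exists y(y < x) \wedge \exists y(x < y))$, which reappears in the example below, has quantifier rank $2$ but contains $3$ quantifiers; the two measures can thus differ already on very small sentences.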
\begin{thm} \textbf{Equivalence Theorem for E-F Games:}\label{thm:ef}
Spoiler wins the $r$-round E-F game on $(A,B)$ if and only if there is a first-order sentence $\phi$ of quantifier rank at most $r$ such that $A \models \phi$ while $B \models \neg\phi$.
\end{thm}
The ``if'' direction of this theorem is fairly easy to prove by induction on $r$. This is the ``useful'' direction, which is used to prove inexpressibility results. The ``only if'' direction is somewhat tricky to prove; intuitively, it tells us that
any technique for proving
that a certain property cannot be defined by a first-order sentence with a certain quantifier rank
can, in principle, be replaced by a proof via E-F games. See \cite{Imm99, Lib12} for
a proof and extended discussion.
We investigate a variant of E-F games that we call \emph{multi-structural games}. These games make Duplicator more powerful and characterize the \emph{number} of quantifiers rather than quantifier \emph{rank}.
It is straightforward to see that the minimum number of quantifiers needed to define a property $P$ is the same as the minimum size of the quantifier prefix of a sentence in prenex normal form that is needed to define property $P$.
This is because converting a sentence into prenex normal form does not increase the number of quantifiers.
As we discovered during review of this paper's conference version [FLRV21a] and acknowledged there, an equivalent of our multi-structural game was described in the journal version of Neil Immerman’s paper ``Number of Quantifiers Is Better Than Number of Tape Cells'' \cite{Immerman81}. Its conference version \cite{Immerman79} did not mention the game.\footnote{We reference two personal communications with Immerman as \cite{Immerman21}, one beforehand where he raised the related size game \cite{AdlImm03} discussed below in Section 1.3, and one after our discovery that informed the present discussion.} In \cite{Immerman81}, Immerman called it the ``separability game'' and showed that it characterized the number of quantifiers, without providing further results.
Just prior to the conclusion of \cite{Immerman81}, Immerman remarked,
\begin{small}
\begin{quote}
``Little is known about how to play the separability game. We leave it here as a
jumping off point for further research. We urge others to study it, hoping that the
separability game may become a viable tool for ascertaining some of the lower
bounds which are `well believed' but have so far escaped proof.''
\end{quote}
\end{small}
Indeed, as our paper shows, analysis of the multi-structural games is often quite confounding, with many delicate issues.
We now define the rules of the multi-structural game. There are again two players, Spoiler and Duplicator, and there is a fixed number $r$ of rounds. Instead of being played on a pair
$(A,B)$
of structures with the same vocabulary (as in an E-F game), it is played on a pair $({\mathcal A}, {\mathcal B})$ of sets of structures, all with the same vocabulary.
For $k$ with $0 \leq k \leq r$, by a
\textit{labeled structure} after $k$ rounds, we mean a structure along with a labeling of which elements were selected from it in each of the first $k$ rounds.
Let ${\mathcal{A}}_0 = \mathcal{A}$ and ${\mathcal{B}}_0 = \mathcal{B}$.
Thus, ${\mathcal{A}}_0$ represents the labeled structures from $\mathcal A$ after 0 rounds, and similarly for ${\mathcal{B}}_0$.
If $1 \leq k <r$, let ${\mathcal{A}}_k$ be the labeled structures originating from $\mathcal A$ after $k$ rounds, and similarly for ${\mathcal{B}}_k$.
In round $k+1$,
Spoiler either chooses an element from each member of ${\mathcal{A}}_k$, thereby creating ${\mathcal{A}}_{k+1}$,
or chooses an element from each member of ${\mathcal{B}}_k$, thereby creating ${\mathcal{B}}_{k+1}$.
Duplicator responds as follows.
Suppose that Spoiler chose an element from each member of ${\mathcal{A}}_k$, thereby creating ${\mathcal{A}}_{k+1}$. Duplicator can then make multiple copies of each labeled structure of ${\mathcal{B}}_k$, and choose an element from each copy,
thereby creating ${\mathcal{B}}_{k+1}$.
Similarly, if Spoiler chose an element from each member of ${\mathcal{B}}_k$, thereby creating ${\mathcal{B}}_{k+1}$, Duplicator can then make multiple copies of each labeled structure of ${\mathcal{A}}_k$, and choose an element from each copy, thereby creating ${\mathcal{A}}_{k+1}$.
Duplicator wins if there is some labeled $A$ in ${\mathcal{A}}_{r}$ and some labeled $B$ in ${\mathcal{B}}_{r}$ where the labelings (in addition to any constants) give rise to a partial isomorphism in the same sense as in an E-F game.
Otherwise, Spoiler wins.
Note that on each of Duplicator's moves, Duplicator can make ``every possible choice,'' via the multiple copies.
Making every possible choice creates what we call the \emph{oblivious strategy}.
It is easy to see that Duplicator has a winning strategy if and only if the oblivious strategy is a winning strategy.
We shall prove the following theorem.
It is analogous to Theorem~\ref{thm:ef} for ordinary E-F games.
\begin{thm} \textbf{Equivalence Theorem for Multi-Structural Games:} \label{thm:main1}
Spoiler wins the $r$-round multi-structural game on
$(\mathcal{A}, \mathcal{B})$
if and only if there is a first-order sentence $\phi$ with at most $r$ quantifiers such that $A \models \phi$ for every $A \in {\mathcal A}$ while $B \models \neg\phi$ for every $B \in {\mathcal B}$.
\end{thm}
We now give an interesting refinement of the Equivalence Theorem (although, as we shall discuss, it does not seem to directly imply the Equivalence Theorem).
Let $Q_1 \cdots Q_r$ be a sequence of quantifiers.
We now define the ``$Q_1 \cdots Q_r$ multi-structural game".
It is an $r$-round multi-structural game, with the following restrictions on Spoiler.
If $Q_k$ is an existential quantifier, then Spoiler's $k$th move
must be in $\mathcal A$, and otherwise must be in $\mathcal B$.
We then have the following result.
\begin{thm} \textbf{Fixed Prefix Equivalence Theorem for Multi-Structural Games:} \label{thm:main1a}
Spoiler wins the $Q_1 \cdots Q_r$ multi-structural game on
$(\mathcal{A}, \mathcal{B})$
if and only if there is a first-order sentence $\phi$
in prenex normal form with exactly $r$ quantifiers, in the order $Q_1 \cdots Q_r$,
such that $A \models \phi$ for every $A \in {\mathcal A}$ while $B \models \neg\phi$ for every $B \in {\mathcal B}$.
\end{thm}
On the face of it, Theorem~\ref{thm:main1a} does not seem to directly imply Theorem~\ref{thm:main1}, for the following reason.
It is a priori conceivable that Spoiler's winning strategy in an $r$-round game is to move first in $\mathcal A$, and then, depending on Duplicator's response, to move in either $\mathcal A$ or $\mathcal B$.
So there is then no prefix $Q_1 \cdots Q_r$ dictating where Spoiler must move.
In fact, in an E-F game (but not in a multi-structural game), it can indeed happen that Spoiler can win in 2 rounds, and where Spoiler plays in the second round depends on how Duplicator played in the first round.
An example of where this phenomenon can take place in a 2-round E-F game is via the sentence
$\exists x (\forall y B(x,y) \land \exists y R(x,y))$.
The proof of Theorem~\ref{thm:main1} appears in Section~\ref{sec:fundamental}.
The proof of Theorem~\ref{thm:main1a}
is almost the same as the proof of Theorem~\ref{thm:main1}, with very minor changes.
There is an interesting and non-obvious difference between E-F games and multi-structural games.
Let us say that a player makes a move ``on top of'' a previous move if the player selects an element $c$ of a structure, and the same element $c$ had been selected by either player in an earlier round.
It is easy to see that in an E-F game, it never helps Spoiler to make a move on top of a previous move (it only wastes a round). On the other hand, in multi-structural games the issue of playing on top of a previous move is a frequent consideration for us. In fact, a detailed analysis shows that playing a move on top of a previous move may be a necessary part of a winning Spoiler strategy (see Observation \ref{obs:play-on-top}).
We now give an example (Figure \ref{fig:3_vs_2_singleton-intro}) that shows differences between the E-F game and the multi-structural game, and what they say about quantifier rank vs. number of quantifiers.
Consider the following two structures $B$ (for ``Big'') and $L$ (for ``Little''), over $\tau = \{<\}$, where $<$ is the binary ``less than'' relation. The vertex labels are not part of the structures. Elements that appear to the left, within the same structure, are considered to be less than elements to the right. $B$ is a linear order on $3$ elements and $L$ is a linear order on two elements. In the text of this paper we write $B(i)$ (or $L(i)$) to denote the $i$th element in the linear order $B$ (respectively, $L$), while in the figures, for economy of space, we label the $i$th vertex instead by $Bi$ (respectively, $Li$).
\begin{figure} [ht]
\centerline{\scalebox{0.4}{\includegraphics{images/3_vs_2_singleton.png}}}
\caption{An example showing the difference between multi-structural and E-F games.}
\label{fig:3_vs_2_singleton-intro}
\end{figure}
Further, rather than use the notation ${<}(x,y)$
we shall use the customary $x < y$ notation.
Suppose first that the number $r$ of rounds is $2$.
We show that Spoiler wins the E-F game.
On Spoiler's first move, Spoiler selects
vertex $B(2)$ in $B$. Duplicator must select either $L(1)$ or $L(2)$ in $L$. If Duplicator chooses $L(1)$, then Spoiler selects $B(1)$ in $B$.
After Duplicator selects $L(2)$ in $L$, Spoiler wins since the mapping given by $B(2) \mapsto L(1)$ and $B(1) \mapsto L(2)$ fails to be a partial isomorphism because the ``less than'' relationship is flipped. If Duplicator had instead selected $L(2)$ in the first round, then Spoiler would have won, by a similar argument, by selecting $B(3)$ in $B$ in the second round.
The fact that Spoiler wins the 2-round game over $(B,L)$ tells us (by Theorem~\ref{thm:ef}) that there is a sentence
$\phi$ of quantifier rank at most 2 such that $B \models \phi$ while $L \models \neg\phi$.
Such a sentence is: $\exists x(\exists y(y < x) \; \wedge \; \exists y(x < y))$.
Now let us consider the 2-round multi-structural game over $({\mathcal B}, {\mathcal L})$, where ${\mathcal B} = \set{B}$ and ${\mathcal L} = \set{L}$.
We show
that unlike the 2-round E-F game, in the 2-round multi-structural game, Duplicator wins.
It is easy to see that Duplicator wins if Spoiler's first-round move is anything other than $B(2)$.
Let us see what happens if Spoiler's first move is $B(2)$, which was a winning move for Spoiler in the 2-round E-F game.
Then on Duplicator's first move, Duplicator makes a second copy of $L$ and in one copy, call it $L_1$, Duplicator selects $L(1)$, and in the other copy, call it $L_2$, Duplicator selects $L(2)$.
Let us now consider Spoiler's possible second round responses.
Suppose first that Spoiler's second round move is in $B$.
If Spoiler selects $B(3)$, then Duplicator selects $L(2)$ in $L_1$ and the mapping $B \rightarrow L_1$ such that $B(2)\mapsto L(1), B(3) \mapsto L(2)$ yields a partial isomorphism. On the other hand, if Spoiler selects $B(1)$, then Duplicator selects $L(1)$ in $L_2$ and $B \rightarrow L_2$ such that $B(1) \mapsto L(1), B(2) \mapsto L(2)$ yields a partial isomorphism. Section \ref{sec:upperbds} will complete the analysis of a Duplicator win.
Since Duplicator wins the 2-round game, it follows
by Theorem \ref{thm:main1} that there is no sentence with just two quantifiers that distinguishes
$\mathcal B$ from $\mathcal L$.
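By contrast, the three-quantifier sentence $\exists x_1 \exists x_2 \exists x_3(x_1 < x_2 \wedge x_2 < x_3)$, an instance of sentence (\ref{eqn:basic}) below with $r = 3$, does distinguish $\mathcal B$ from $\mathcal L$: $B$ contains three elements in increasing order while $L$ does not.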
The focus of our analysis of multi-structural games in this paper is finite linear orders. Henceforth, all linear orders are assumed to be finite. In the case of E-F games, one has the following:
\begin{thmC}[\cite{Ros82}]\label{thm:f}
Let $f(r) = 2^r -1$.
In an $r$-round E-F game played on two linear orders of different sizes, Duplicator wins if and only if the size of the smaller linear order is at least $f(r)$.
\end{thmC}
Since part of the proof of Theorem~\ref{thm:f} is left to an exercise in \cite{Ros82}, we give a proof
in Appendix \ref{app:f}.
Further,
the proof illustrates a simple recursive idea that is surprisingly not available to us in the analysis of linear orders from the vantage point of multi-structural games.
Theorem \ref{thm:f} together with Theorem \ref{thm:ef} imply that $f(r)$ is the maximum value $k$ such that a sentence of quantifier rank $r$ can distinguish linear orders of size $k$ and above from those of size smaller than $k$.
In analogy to this function $f$, and in an effort to arrive at a parallel theorem to Theorem \ref{thm:f}, we make the following definition.
\begin{defi} \label{def:g}
Define the function $g:\mathbb{N} \rightarrow \mathbb{N}$ such that $g(r)$ is the maximum number $k$ such that there is a sentence with $r$ quantifiers that can distinguish linear orders of size $k$ or larger, from linear orders of size less than $k$.
\end{defi}
To see that $g$ is well-defined, observe that the sentence
\begin{equation} \label{eqn:basic}
\exists x_1 \cdots \exists x_r \bigwedge\limits_{1 \leq i < r} x_i < x_{i+1},
\end{equation}
distinguishes linear orders of size $r$ or larger from linear orders of size less than $r$. Furthermore,
there are only finitely many inequivalent sentences with up to $r$ quantifiers that include only the relation symbols $<$ and $=$, some fraction of which distinguish linear orders of some size $k$ or greater from linear orders of size less than $k$. There is therefore a maximum such $k \geq r$, which is then $g(r)$.
After building up quite a bit of machinery we eventually arrive at the following:
\begin{thm}\label{thm:g}
The function $g$ takes on the following values:
$g(1) = 1, g(2) = 2, g(3) = 4, g(4) = 10$, and for $r > 4$,
\begin{equation*}
g(r) = \begin{cases} 2g(r-1) & \textrm{if $r$ is even,}\\ 2g(r-1) + 1 & \textrm{if $r$ is odd.} \end{cases}
\end{equation*}
\end{thm}
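The recursion in Theorem~\ref{thm:g} is easy to tabulate; the following sketch (illustrative only, not used in any proof) lists the first few values of $g$.
\begin{verbatim}
# Illustrative sketch: tabulate g(r) from the base cases and the recursion
# stated in the theorem above.  Not used in any proof.
def g(r):
    base = {1: 1, 2: 2, 3: 4, 4: 10}
    if r in base:
        return base[r]
    return 2 * g(r - 1) + (1 if r % 2 == 1 else 0)

print([g(r) for r in range(1, 9)])   # [1, 2, 4, 10, 21, 42, 85, 170]
\end{verbatim}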
The value $g(3) = 4$ is a curious anomaly. If it had turned out that $g(3) = 5$, then the entire induction could be founded on $r = 1$.
The proof of Theorem~\ref{thm:g} is a careful mathematical journey to restore this induction founded instead at $r = 4$.
The following theorem for multi-structural games is the analog of Theorem \ref{thm:f} for E-F games, and describes precisely when Duplicator (alternatively, Spoiler) wins $r$-round multi-structural games on two linear orders of different sizes.
\begin{thm} \label{thm:g_for_game_play}
In an $r$-round multi-structural game played on two linear orders of different sizes, Duplicator has a winning strategy if and only if the size of the smaller linear order is at least $g(r)$.
\end{thm}
It is important to note that neither the \textit{if} nor \textit{only if} portion of this theorem is implied by the definition of $g$.
Both E-F games and multi-structural games are used to prove inexpressibility results by showing there is a winning strategy for Duplicator.
It is typically easier to demonstrate a winning strategy for Duplicator in E-F games than in multi-structural games, for several reasons.
First, it is easier to reason about only two structures at a time rather than about many structures at a time.
Second, in multi-structural games, there is a tactic available to Spoiler (that is of no use in E-F games) to make one move on top of an earlier move in one of the structures, and this can greatly complicate the analysis.
On the other hand, in multi-structural games,
Duplicator has the advantage of being able to make multiple copies of structures and make different moves on the various copies.
This feature can be very useful in proving a winning strategy for Duplicator.
A similar phenomenon of modifying the rules of the game to make it easier for Duplicator to win arose in defining and making use of Ajtai-Fagin games \cite{AjtaiFagin90} rather than making use of the originally defined Fagin games \cite{Fagin75} for proving inexpressibility results in monadic existential second-order logic (called ``monadic NP" in \cite{FaginSV95}).
In Fagin games, there is a coloring round (a choice of the combinations of existentially-quantified monadic predicates) where Spoiler colors $A$ then Duplicator colors $B$, and then an ordinary E-F game is played on the colored structures.
In Ajtai-Fagin games, Spoiler must commit to a coloring of $A$ without knowing what the other structure $B$ is.
Fagin, Stockmeyer, and Vardi \cite{FaginSV95} use Ajtai-Fagin games to give a much simpler proof that connectivity is not in monadic NP than Fagin's original proof in \cite{Fagin75}.
In extending multi-structural games to second-order logic, which we think is an interesting and important future step (and which is straightforward to define), these games can easily simulate Ajtai-Fagin games.
This is because we can replace structure $A$ by the singleton set ${\mathcal A} = \{A\}$, and replace the structure $B$ by a set ${\mathcal B}$ that contains all possible choices for $B$ that Duplicator might choose in the Ajtai-Fagin game after Spoiler colors $A$.
\subsection{Related work}
Since our results can be viewed as giving information about the size of prefixes of sentences in prenex normal form, we begin by discussing some other papers that focus on such prefixes.
Rosen \cite{Rosen2005} shows that there is a strict prefix hierarchy, based on the prefixes of sentences written in prenex normal form. The proof involves standard E-F games.
Dawar and Sankaran \cite{DaSa21} consider E-F games, each of which focuses on a fixed prenex prefix. For example, there is one game that deals with the prenex prefix $\exists \forall \exists$.
For each of these prefixes, they define an E-F game on a pair $(A,B)$ of structures. For example, in the $\exists \forall \exists$ game, Spoiler must move first in $A$, then in $B$, and then in $A$.
Their Theorem 2.3 says that Spoiler has a winning strategy in a prefix game if and only if there is a sentence in prenex normal form with exactly that prefix that is true about $A$ but not about $B$. Unfortunately, the ``only if'' direction of their Theorem 2.3 is false \cite{Dawar21pc}. This is because if $A$ is a linear order of size 5 and $B$ is a linear order of size 4, and the prefix is $\exists \forall \exists$, then it turns out that Spoiler wins that 3-round prefix game, but it follows from our Theorem~\ref{thm:g} that $A$ and $B$ agree on all sentences with at most three quantifiers, and in particular on all sentences $\exists x \forall y \exists z \phi(x,y,z)$, where $\phi$ is quantifier-free.
Fortunately, in their paper, Dawar and Sankaran just make use of the ``if'' direction of Theorem 2.3, which is correct \cite{Dawar21pc}.
We now discuss some papers that, like ours, modify E-F games by allowing a pair of sets of structures, rather than simply a pair of structures.
Adler and Immerman \cite{AdlImm03} use a type of E-F game that involves a pair of sets of structures, where, as in our multi-structural games, Duplicator can make multiple copies of structures and make different moves on them.
Adler and Immerman’s concern is to obtain results about the size of sentences (rather than the number of quantifiers) in transitive closure logic (first-order logic with the transitive closure operator). The rules of the game are rather complicated, since it must deal with transitive closure logic and capture the size of sentences.
Hella and Vilander \cite{HelVil19} build on Adler and Immerman’s game, and their goal is also to determine sentence size (but in modal logic). The rules of their game are also fairly complicated.
Grohe and Schweikardt \cite{GroheS05}
introduce a method (extended syntax trees) that corresponds to a game tree that is constructed by the two players in the Adler-Immerman game. They use these to study the size of sentences in the $2$-, $3$-, and $4$-variable fragments of first-order logic on linear orders.
Lotfallah \cite{Lotfallah04} has a class of E-F games played on a pair
of sets of structures rather than on a pair of single structures.
In Lotfallah’s games, Duplicator cannot make multiple copies of structures.
A follow-up paper by Lotfallah and Youssef \cite{LoYo05} characterizes certain first and second order
prefix types but does not involve sets of structures.
Hella and V{\"{a}}{\"{a}}n{\"{a}}nen \cite{HellaV15}, like Lotfallah,
have a class of E-F games played on a pair
of sets of structures rather than on a pair of single structures, where Duplicator cannot make multiple copies of structures. Hella and V{\"{a}}{\"{a}}n{\"{a}}nen
characterize
the size of sentences needed for separating sets of structures, with one game for propositional logic and a second one for first-order logic. The first-order game is used for proving an exact bound on the size of existential sentences needed to define the length of linear orders.
\subsection{Overview of the Sections}
In Section~\ref{sec:fundamental} we prove the Equivalence Theorem~\ref{thm:main1}.
In Section~\ref{sec:prelims} we establish certain preliminary terminology and notation and prove that the property of Duplicator having a winning strategy on two sets of structures gives rise to an equivalence relation over sets of structures with the same signature. The ensuing sections establish upper and lower bounds on the function $g(r)$ associated with Theorem~\ref{thm:g} until we are able to observe that we have tight bounds in all cases.
In Section~\ref{sec:upperbds} we establish upper bounds on $g(r)$, for $r = 2$ and $3$. (The trivial tight bound $g(1) = 1$ is established in Section \ref{sec:prelims}.) The natural next step would be to proceed to higher values of $r$ using a type of recursive argument, but in Section~\ref{sec:interlude} we show why the natural recursive argument for multi-structural games (henceforth ``MS games'') does not work.
For pedagogical reasons, in Section~\ref{sec:lower_bounds}, we jump to establishing lower bounds for $g(r)$. We then jump back to establishing upper bounds
in Section~\ref{sec:atoms}, where we introduce a new type of game, an MS game with ``atoms'', which allows us to recurse and prove upper bounds for all $r$.
The upper bounds are then seen to be tight with respect to our lower bounds and hence, in Section \ref{sec:final}, we are able to prove Theorem \ref{thm:g}. The machinery that we have built up then enables a quick proof of Theorem \ref{thm:fund_thm_for_los}, which is just the syntactic equivalent of Theorem \ref{thm:g_for_game_play} from the Introduction (there stated game theoretically). We note that while games with ``atoms'' are an important part of our upper bound proofs, the final sentences guaranteed by
Definition \ref{def:g} and Theorem \ref{thm:g}
do not contain atoms.
We present our conclusions in Section \ref{sec:conc}.
\section{Proof of the Equivalence Theorem}\label{sec:fundamental}
To prove Theorem~\ref{thm:main1} we are going to add a special constant just prior to the play of each round that will help us maintain the induction. These constants, which we denote by $c_1,...,c_r$, are in addition to whatever constants may exist in the vocabulary $\tau$. We write $(A;\dots;c_1 \!\leftarrow\! a_1)$ to mean the structure obtained after assigning $c_1$ to the element $a_1$ in $A$, and so on.
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
Both directions are proved by induction on the number $r$ of rounds. For $r = 0$, Spoiler winning in $0$ rounds means that for every $A \in \mathcal{A}$ and $B \in \mathcal{B}$, the restrictions $A',B'$ of $A$ and $B$ (respectively) to their constants must be non-isomorphic. For every $A \in \mathcal{A}$, we can write a quantifier-free sentence $\phi_{A}$ that characterizes $A'$ up to isomorphism, using equality to identify any coinciding constants. Then take $\phi$ to be the disjunction of all $\phi_{A}$ -- note that even if $\mathcal{A}$ is infinite, there are only finitely many distinct $\phi_{A}$. Then $A \models \phi$ for all $A \in \mathcal{A}$. Now consider any $B \in \mathcal{B}$. We claim that $B \models \neg\phi$. As $\neg\phi$ is a conjunction, this is equivalent to $B \models \neg\phi_{A}$ for every $A$. Suppose not, then we would have $B \nvDash \neg\phi_{A}$, i.e., $B \models \phi_{A}$. But because $\phi_{A}$ characterizes $A'$ up to isomorphism, this would make $B'$ isomorphic to $A'$, contradicting that Spoiler wins. Hence $\phi$ is a quantifier-free sentence that distinguishes $\mathcal{A}$ and $\mathcal{B}$.
Conversely, if there is a quantifier-free sentence $\phi$ that distinguishes $\mathcal{A}$ and $\mathcal{B}$, then there cannot exist $A \in \mathcal{A}$ and $B \in \mathcal{B}$ such that the restrictions $A'$ and $B'$ to the constants appearing in $\phi$ are isomorphic. Thus, Duplicator loses without further play.
Now suppose $r \geq 1$ and the equivalence is true for $r-1$. We induct on the first move rather than the last move of the games. For the forward direction, suppose Spoiler can win---say by playing in $\mathcal{B}$. For each $B \in \mathcal{B}$, Spoiler selects an element $b \in B$. Duplicator replies by replicating every $A \in \mathcal{A}$ and playing every possible $a \in A$. Now let us assign the constant $c_1$ to the element played on each of the structures in $\mathcal{A}$ and $\mathcal{B}$. The resulting game position $(\mathcal{A}^1,\mathcal{B}^1)$, which includes the new constant $c_1$ in each structure, is winnable in $r-1$ rounds by Spoiler. By the induction hypothesis, there is a sentence $\psi$ with $r-1$ quantifiers that distinguishes $\mathcal{B}^1$ from $\mathcal{A}^1$. Now define $\phi = (\exists x_1)\psi'$ where $\psi'$ replaces all occurrences of $c_1$ in $\psi$ by $x_1$.
(This is alright even in degenerate cases where $c_1$ does not occur in $\psi$.)
For every $B \in \mathcal{B}$, we have $B \models \phi$ because there is a $b \in B$ such that $(B;\dots;c_1 \!\leftarrow\! b) \models \psi$, namely the $b$ that Spoiler played in $B$. Hence it suffices to show that $A \models \neg\phi = (\forall x_1)\neg\psi'$ for every $A \in \mathcal{A}$. After Duplicator's play, $A$ was replaced by
\[
\{(A;\dots;c_1 \!\leftarrow\! a_1),(A;\dots;c_1 \!\leftarrow\! a_2),\dots,(A;\dots;c_1 \!\leftarrow\! a_m)\},
\]
where $A = \{a_1,\dots,a_m\}$. Since $\psi$ distinguishes $\mathcal{B}^1$ from $\mathcal{A}^1$, we have $(A;\dots,c_1 \!\leftarrow\! a_j) \models \neg\psi$ for each $j = 1,\dots,m$. It follows that $A \models (\forall x_1)\neg\psi'$. The case where Spoiler wins by playing in $\mathcal{A}$ is handled symmetrically.
Going the other way, suppose $\phi$ is a prenex sentence with $r$ quantifiers that distinguishes $\mathcal{A}$ from $\mathcal{B}$. If the leading quantifier is $\forall$ then $\neg\phi$ has leading quantifier $\exists$ and distinguishes $\mathcal{B}$ from $\mathcal{A}$, so we can reason by symmetry. So let $\phi = (\exists x_1)\psi'$ for some $\psi'$ and take $\psi$ to be the sentence with $x_1$ replaced everywhere by the special constant symbol $c_1$. For every $A \in \mathcal{A}$, $A \models \phi$, so there exists $a_1 \in A$ such that $(A;\dots;c_1 \!\leftarrow\! a_1) \models \psi$. Spoiler can play such an element $a_1$ in every $A$. Now every $B \in \mathcal{B}$ models $\neg\phi = (\forall x_1)\neg\psi'$. For every $b \in B$, Duplicator creates the structure $(B;\dots;c_1 \!\leftarrow\! b)$, but regardless of $b$, it models $\neg\psi$. Thus, $\psi$ distinguishes the resulting set $\mathcal{A}^1$ from Duplicator's $\mathcal{B}^1$, has $r-1$ quantifiers, and includes $c_1$ along with any previous constants. By the induction hypothesis, Spoiler wins from $(\mathcal{A}^1,\mathcal{B}^1)$ in $r-1$ rounds, so Spoiler wins from $(\mathcal{A},\mathcal{B})$ in $r$ rounds.
\end{proof}
\section{Preliminaries} \label{sec:prelims}
\begin{defi}
Let $\mathcal{A}$ and $\mathcal{B}$ be two sets of structures. Write $\mathcal{A} \equiv_r \mathcal{B}$ iff Duplicator has a winning strategy for MS games of $r$ rounds on $\mathcal{A}$ and $\mathcal{B}$. If $\mathcal{A} = \{A\}$ and $\mathcal{B} = \{B\}$ we also write $A \equiv_r B$.
\end{defi}
An important consequence of Theorem \ref{thm:main1} is the following.
\begin{lem} \label{lemma:prenex_equiv} The relation $\equiv_r$ is an equivalence relation between sets of structures.
\end{lem}
\begin{proof} That the relation $\equiv_r$ is reflexive and symmetric follows immediately from the definition. For transitivity, suppose there are three sets of structures, $\mathcal{A}, \mathcal{B}$ and $\mathcal{C}$ such that $\mathcal{A} \equiv_r \mathcal{B}$ and $\mathcal{B} \equiv_r \mathcal{C}$. By the Equivalence Theorem \ref{thm:main1}, $\mathcal{A} \equiv_r \mathcal{B}$ implies that $\mathcal{A}$ and $\mathcal{B}$ agree on the same sentences with at most $r$ quantifiers. Similarly, $\mathcal{B} \equiv_r \mathcal{C}$ implies that $\mathcal{B}$ and $\mathcal{C}$ agree on the same set of sentences with at most $r$ quantifiers. Hence, $\mathcal{A}$ and $\mathcal{C}$ agree on this same set of sentences. By the Equivalence Theorem again, it follows that $\mathcal{A} \equiv_r \mathcal{C}$.
\end{proof}
Before proceeding further let us establish some terminology that is intended to make the reading smoother. In cases where there are multiple linear orders on one side or another we often refer to the different linear orders as different ``boards'' that Spoiler and Duplicator play on.
When we say a ``$K$ versus $K'$ game'' or a ``$K$ vs. $K'$ game'', we mean an MS game played on $(\mathcal{A}, \mathcal{B})$ where $\mathcal{A}$ consists of a single linear order of size $K$, and $\mathcal{B}$ consists of a single linear order of size $K'$.
We will typically play games where $\mathcal{A}$ and $\mathcal{B}$ each consist of a single linear order as above. In this context, as we did in the Introduction, we will use $B$ to denote the \textit{big} linear order and $L$ to denote the \textit{little} linear order.
As is standard in model theory, we assume a non-empty universe so all linear orders are of size at least $1$.
\begin{lem} \label{lemma:g_upper_of_1}
$g(1) = 1$.
\end{lem}
\begin{proof}
The sentence $\exists x(x=x)$ is true for all linear orders so that $g(1) \geq 1$. Moreover, Duplicator can win $1$-round MS games whenever both linear orders are of size $1$ or greater, which implies that $g(1) \leq 1$.
\end{proof}
\section{Towards Establishing Upper Bounds on $g(r)$}\label{sec:upperbds}
A potent tool for finding an upper bound $k$ on the value of $g(r)$ will be to find strategies such that Duplicator can win $r$-round games on two linear orders whenever the sizes of the linear orders are at least $k$. All of our upper bounds are established in this manner.
Since we have established that $g(1) = 1$,
we start establishing upper bounds at $g(2)$.
\begin{lem} \label{lemma:g_upper_of_2}
Duplicator can win $2$-round MS games whenever both linear orders are of size $2$ or greater, and hence $g(2) \leq 2$.
\end{lem}
\begin{proof}
In the Introduction we considered a $2$-round MS game on two linear orders of sizes $|B| = 3, |L| = 2$.
Figure \ref{fig:3_vs_2_singleton-intro} is given again here for ease of reference.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/3_vs_2_singleton.png}}}
\caption{The case $|B| = 3, |L| = 2$.}
\label{fig:3_vs_2_singleton}
\end{figure}
The Introduction covered the case where Spoiler selects the middle $B$ element, $B(2)$, in the first round. The case where Spoiler picks an end element from either the $L$ or $B$ boards in the first round is easier -- Duplicator just picks the corresponding end element from the other linear order and she does not even need to make a second copy of the board to win. For example, in response to $B(1)$, Duplicator will play $L(1)$, or in response to $L(2)$, Duplicator will play $B(3)$, in either case leading to simple wins.
In the example given in the Introduction, where Spoiler played $B(2)$, once Duplicator makes copies and plays different moves in each copy, we render the game after round 1, as in Figure \ref{fig:3_vs_2_rd1}.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/3_vs_2_rd1.png}}}
\caption{In response to Spoiler playing $B(2)$, Duplicator makes a second copy of the $L$ board and plays $L(1)$ on one board and $L(2)$ on the other board. (The diagrams omit parentheses.)}
\label{fig:3_vs_2_rd1}
\end{figure}
To complete the analysis, we must show that Duplicator wins
whenever $2 \leq |L| < |B|$. Let us begin with the case $|L| = 2$ and $|B| > 3.$
If Spoiler picks an end element from $B$ or from $L$, then Duplicator picks the corresponding end element from the opposite side and wins. On the other hand, if Spoiler picks a non-end element on $B$, Duplicator picks $L(1)$ on top and $L(2)$ on bottom, winning just like in the introduction. We are left to consider the case when $3 \leq |L| < |B|$. Playing end moves from either $B$ or $L$ has the same effect as before. If instead Spoiler picks a non-end element from $B$ (or $L$), Duplicator wins by playing \textit{any} non-end element from, respectively, $L$ (or $B$), guaranteeing a $2$-round win.
\end{proof}
\begin{lem} \label{lemma:g_upper_of_3}
Duplicator can win $3$-round MS games on linear orders whenever both linear orders are of size $4$ or greater, and hence $g(3) \leq 4$.
\end{lem}
\begin{proof}
Let us start with the base case $|B|=5, |L|=4$. See Figure \ref{fig:5_vs_4_base}.
\begin{figure} [ht]
\centerline{\scalebox{0.30}{\includegraphics{images/5_vs_4_base.png}}}
\caption{The case $|B| = 5, |L| = 4$.}
\label{fig:5_vs_4_base}
\end{figure}
Duplicator-winning outcomes associated with all Spoiler 1st round plays, \textit{other than} $B(3)$, are easy to analyze and are described in the table below. Further explanation of the notation used in the table is given in the text that follows it.
\begin{small}
\begin{center}
\begin{tabular}
{|cc|cc|cc|}
\hline
\multicolumn{2}{|c|}{\textbf{Round 1}} & \multicolumn{2}{|c|}{\textbf{Round 2}} & \multicolumn{2}{|c|}{\textbf{Round 3}} \\
\hline
\textbf{S} & \textbf{D} & \textbf{S} & \textbf{D} & \multicolumn{2}{|l|}{}\\
\hline
$B(1)$ & $L(1)$ & \multicolumn{2}{|l|}{D wins by reduction to $f(2)$} & \multicolumn{2}{|l|}{} \\
$B(2)$ & $L(2)$ & $B(1)$ & $L(1)$ & \multicolumn{2}{|l|}{D wins} \\
& & $B(2)$ & $L(2)$ & \multicolumn{2}{|l|}{D wins} \\
& & $B(3)$ & $L(3)$ & \multicolumn{2}{|l|}{D wins} \\
& & $B(4)$ & $L(3)$ & \multicolumn{2}{|l|}{D wins on this board or board below} \\
& & & $L(4)$ & \multicolumn{2}{|l|}{} \\
& & $B(5)$ & $L(4)$ & \multicolumn{2}{|l|}{D wins} \\
& & $L(1)$ & $B(1)$ & \multicolumn{2}{|l|}{D wins by transposition to Rd2. $B(1)$,$L(1)$} \\
& & $L(2)$ & $B(2)$ & \multicolumn{2}{|l|}{D wins by transposition to Rd2. $B(2)$,$L(2)$} \\
& & $L(3)$ & $B(3)$ & \multicolumn{2}{|l|}{D wins by transposition to Rd2. $B(3)$,$L(3)$} \\
& & $L(4)$ & $B(5)$ & \multicolumn{2}{|l|}{D wins by transposition to Rd2. $B(5)$,$L(4)$} \\
$B(3)$ & $L(1)$ & \multicolumn{2}{|l|}{\textbf{Described in the text}} & \multicolumn{2}{|l|}{} \\
& $L(2)$ & \texttt{"} & \texttt{"} & \multicolumn{2}{|l|}{} \\
& $L(3)$ & \texttt{"} & \texttt{"} & \multicolumn{2}{|l|}{} \\
& $L(4)$ & \texttt{"} & \texttt{"} & \multicolumn{2}{|l|}{} \\
$B(4)$ & $L(3)$ & \multicolumn{2}{|l|}{Symmetrical to Rd1. $B(2)$,$L(2)$} & \multicolumn{2}{|l|}{} \\
$B(5)$ & $L(4)$ & \multicolumn{2}{|l|}{Symmetrical to Rd1. $B(1)$,$L(1)$} & \multicolumn{2}{|l|}{} \\
$L(1)$ & $B(1)$ & \multicolumn{2}{|l|}{Transposition to Rd1. $B(1)$,$L(1)$} & \multicolumn{2}{|l|}{} \\
$L(2)$ & $B(2)$ & \multicolumn{2}{|l|}{Transposition to Rd1. $B(2)$,$L(2)$} & \multicolumn{2}{|l|}{} \\
$L(3)$ & $B(4)$ & \multicolumn{2}{|l|}{Transposition to Rd1. $B(4)$,$L(3)$} & \multicolumn{2}{|l|}{} \\
$L(4)$ & $B(5)$ & \multicolumn{2}{|l|}{Transposition to Rd1. $B(5)$,$L(4)$} & \multicolumn{2}{|l|}{} \\
\hline
\end{tabular}
\end{center}
\end{small}
A few notes on the table:
Moves given in the respective \textbf{S} columns are Spoiler plays while those given in the \textbf{D} columns are Duplicator plays. When we say that a game is winnable for Duplicator ``by reduction to $f(k)$,'' for some $k$, we mean that the game from this point on can be won by Duplicator simply as a $1$-board standard Ehrenfeucht-\fraisse game. Recall the definition of $f$ from Theorem \ref{thm:f}. In these cases, typically an end element has been played, and from an Ehrenfeucht-\fraisse point of view, it is of no benefit to Spoiler to subsequently play on top of the 1st element, so we may remove these elements from both $L$ and $B$ and consider the game, in subsequent rounds, to be played solely on the remaining elements. The game reduces to $f(k)$ if there are $k$ rounds yet to be played and both sets of remaining elements are at least of size $f(k)$.
When we say that a given sequence of moves $X,Y$ is ``symmetrical'' to another, already analyzed, sequence of moves $X', Y'$, we mean that one sequence can be obtained from the other by taking its mirror image with respect to the center of the boards.
We say that a sequence of moves is the transposition of another sequence of moves if the end board positions are the same but the sequence of moves leading to that board position is different.
Note that if Duplicator is able to win in a single additional move by focusing just on a single board on the $B$ and $L$ sides without any special strategy, we do not describe every possible move and response sequence (there are just too many), but rather just indicate that ``D wins.''
In response to the one tricky Spoiler 1st round move of $B(3)$, Duplicator creates additional copies of the $L$ board and makes every possible move, as depicted in Figure \ref{fig:5_vs_4_rd1}.
\begin{figure} [ht]
\centerline{\scalebox{0.35}{\includegraphics{images/5_vs_4_rd1.png}}}
\caption{After Spoiler plays $B(3)$ from the $B$ side, Duplicator plays $L(1)$, $L(2)$, $L(3)$ and $L(4)$ on different boards on the $L$ side.}
\label{fig:5_vs_4_rd1}
\end{figure}
Let's now analyze the possible Spoiler 2nd round responses.
\noindent \underline{Case where Spoiler makes 2nd round move on $B$}:
\begin{small}
\begin{center}
\begin{tabular}
{|cc|}
\hline
\multicolumn{2}{|c|}{\textbf{Round 2}} \\
\hline
\textbf{Spoiler} & \textbf{Duplicator} \\
\hline
$B(1)$ & $L(1)$ on 3rd $L$ board insuring a win on that board \\
$B(2)$ & $L(2)$ on 3rd $L$ board insuring a win on that board \\
$B(3)$ & $L(3)$ on 3rd $L$ board insuring a win on that board \\
$B(4)$ & $L(3)$ on 2nd $L$ board insuring a win on that board \\
$B(5)$ & $L(4)$ on 2nd $L$ board insuring a win on that board \\
\hline
\end{tabular}
\end{center}
\end{small}
\bigskip
\noindent \underline{Case where Spoiler makes 2nd round moves on $L$}:
First consider the possibility of Spoiler playing atop an existing move. If he plays atop $L(2)$ on board 2, or $L(3)$ on board 3, then Duplicator will play atop $B(3)$, ensuring victory on the associated boards in another move.
Analogously, if Spoiler plays on top of \textit{both} $L(1)$ on the top board and $L(4)$ on the bottom board, then Duplicator can play atop $B(3)$ guaranteeing a win on one of these $L$ boards or the other. Thus, we may assume Spoiler plays atop \textit{at most} one of the existing moves, with that one move being atop either $L(1)$ or $L(4)$.
Of the at least three boards in which Spoiler does \textit{not} play on top of existing moves, the moves are either to the right of the existing moves, or to the left of the existing moves, and hence, there must be at least two played to one side or the other of the existing moves.
Without loss of generality, assume that at least two of these moves are to the right of the existing moves.
Suppose that one of the moves to the right is on the 2nd board. An $L(3)$ move would be met by a $B(4)$ response by Duplicator, while an $L(4)$ move would be met by a $B(5)$ response, in either case leading to a single board win for Duplicator. Thus, we can assume that the two 2nd round moves to the right of the existing moves are on the 1st and 3rd boards -- the only two boards we need to consider to conclude this bit of the analysis. The element $L(4)$ must be selected from board 3. If $L(2)$ is selected from board 1 then Duplicator wins by selecting $B(4)$: any 3rd round Spoiler move on $B$ is parried on one of the two $L$ boards, while if Spoiler plays his 3rd round from $L$, Duplicator can maintain an isomorphism with any move played on either the 1st or 3rd board. We are left to consider just the possibilities that Spoiler plays $L(3)$ or $L(4)$ for his 2nd round move on board 1. Suppose he plays $L(3)$. Duplicator can then win by playing $B(5)$: if Spoiler plays his 3rd move from $B$ then $B(4)$ is met with $L(2)$ on the 1st board and any other $B$ move is easily parried on the 3rd $L$ board. On the other hand, if Spoiler plays his 3rd move from $L$ then whatever he does on the 3rd $L$ board can be matched with an isomorphism-preserving move on $B$. In the final case where Spoiler plays $L(4)$ for his 2nd round move on the 1st $L$ board, Duplicator responds with $B(5)$, guaranteeing an isomorphism. It follows that Duplicator can always win the $|B| = 5, |L| = 4$, $3$-round game.
To complete the argument we must show that Duplicator can also win when one or both of $|B| > 5$ and $|L| > 4$. Let us start by considering the case of $|B| > 5, |L| = 4$. The analysis for any initial Spoiler move where he plays either an end move or a next-to-end move is precisely as earlier. Consider all other moves on $B$ to be ``middle'' moves. Any such middle move is parried just like in Figure \ref{fig:5_vs_4_rd1}, by playing $L(1)$, $L(2)$, $L(3)$, and $L(4)$ on the different boards on the $L$ side. The only new wrinkle in this analysis occurs in the case where Spoiler first plays a middle move on $B$. We illustrate for $|B| = 6$ and where Spoiler's first play is $B(3)$. He can now play $B(5)$ and Duplicator must take new precautions because she cannot win by playing on a single board. She can, however, play $L(3)$ on the first $L$ board, covering further play on the right of $B(3)$ by Spoiler, while also playing, say, $L(3)$ on board 2, to cover potential play by Spoiler on the left of $B(3)$. The rest of the analysis is precisely as in the smaller $|B|$ case.
Finally, consider the case $|B| > |L| > 4$. If Spoiler plays an end element or next-to-end element, Duplicator follows suit, playing, respectively, an end element or next-to-end element from the same side on the opposite board, leading to an easy victory: if Spoiler plays on both sides of the 1st round move in subsequent rounds he will clearly lose, while playing just on one side reduces to a losing $2$ round game for him. Similarly, if Spoiler plays a ``middle'' element, Duplicator can respond playing any middle element from the opposite side. Again, if Spoiler plays on both sides of the 1st round move in subsequent rounds he will clearly lose, but playing just on one side reduces to a losing $2$ round game for him.
That completes the argument that Duplicator can win an MS game of $3$ rounds whenever the size of all boards is at least $4$ and hence establishes the lemma.
\end{proof}
At this point it is natural to suspect that one can build up upper bounds recursively in a relatively simple manner. However, such an approach runs into unexpected difficulties, as we describe in the next section.
\section{Interlude: Why Naive Recursion cannot be used to Build up Duplicator Winning Strategies in MS Games} \label{sec:interlude}
It is worth pausing to understand why a simple idea to use recursion to build up Duplicator-winning strategies, and hence upper bounds on $g(r)$, fails. Via Lemma \ref{lemma:g_upper_of_3}, we have established the fact that $g(3) \leq 4$ by showing that Duplicator wins $3$-round MS games if the sizes of both linear orders are $4$ or larger. It is tempting to try to use this fact to produce a Duplicator strategy for winning $4$-round games using recursion. To understand the problematic logic, it suffices to consider a $4$-round game on boards of sizes $9$ and $10$. The erroneous argument runs as follows: Suppose Spoiler plays $L(5)$ on his 1st move. Duplicator can then simply reply with the single move $B(5)$ (so the erroneous reasoning goes), as in Figure \ref{fig:10_vs_9_recursion_attempt}.
\begin{figure} [ht]
\centerline{\scalebox{0.33}{\includegraphics{images/10_vs_9_recursion_attempt_v2.png}}}
\caption{A simple attempt to arrive at a Duplicator-winning strategy for the $10$ versus $9$ game. The first round moves are given in red.}
\label{fig:10_vs_9_recursion_attempt}
\end{figure}
Since there are five unplayed elements to the right of $B(5)$ and four unplayed elements to the right of $L(5)$, Duplicator should now just be able to mimic Spoiler's moves at, or to the left of, $B(5)$/$L(5)$ and otherwise play moves to the right of $B(5)$/$L(5)$ as if it were a $3$-round, $5$ versus $4$ game, which we know is winnable by Duplicator. In fact, as we will see in Lemma~\ref{lemma:g_underline_of_4}, the $10$ vs. $9$ game is winnable by \textit{Spoiler}. Hence this strategy does not work.
The reason the strategy doesn't work is that there is interaction between play on the two sides and there are moves to the left of the $5$ vs. $4$ sub-game that are more powerful for Spoiler (in the sense of breaking more to-that-point maintained partial isomorphisms) than any moves available in the $5$ vs. $4$ game. This additional power is achieved by Spoiler playing atop an already played move that is not part of the $5$ vs. $4$ sub-game at a critical juncture.
Indeed, for his 2nd round move, Spoiler will select $B(8)$, and as we know from the analysis of the $5$ vs. $4$ games, this will require Duplicator to make copies of the $L$ board and play each of the possible moves to the right of $L(5)$, as depicted in Figure \ref{fig:10_vs_9_recursion_attempt2}.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/10_vs_9_recursion_attempt2_v2.png}}}
\caption{The 2nd round plays of the simple Duplicator-winning strategy for the $10$ versus $9$ game. The first round moves are given in red, 2nd round moves in blue.}
\label{fig:10_vs_9_recursion_attempt2}
\end{figure}
At this point Spoiler will play on top of $L(5)$ on the 1st $L$ board (a move that \textit{wasn't available} in the $5$ vs. $4$ game), on $L(6)$ on the 2nd $L$ board, on $L(9)$ on the 3rd $L$ board, and on top of $L(9)$ on the 4th $L$ board. See Figure \ref{fig:10_vs_9_recursion_attempt3a}.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/10_vs_9_recursion_attempt3a_v2.png}}}
\caption{3rd round plays (in green) that foil the simple Duplicator-winning strategy for the $10$ versus $9$ game. The first round moves are given in red, 2nd round moves in blue.}
\label{fig:10_vs_9_recursion_attempt3a}
\end{figure}
Spoiler is going to play his 4th and final round moves from $B$, but first let's see what happens in response to the various Duplicator 3rd round moves (we assume the oblivious strategy). The only move that would keep an isomorphism with the 1st $L$ board is a play atop $B(5)$ -- breaking isomorphisms with any $L$ board but the top one. But then Spoiler will play $B(6)$ on his 4th move on the same board, and Duplicator will then not be able to keep the isomorphism going with the top $L$ board. On the other hand, to maintain an isomorphism with the 2nd $L$ board, Duplicator must play either $B(6)$ or $B(7)$, and in so doing, break an isomorphism with any other $L$ board. However, Spoiler will respond with $B(7)$ if $B(6)$ was played, or $B(6)$ if $B(7)$ was played, and Duplicator will have no retort. To maintain an isomorphism with the 3rd $L$ board, Duplicator must play $B(9)$ or $B(10)$, again breaking the isomorphism with all other $L$ boards, and Spoiler will respond by playing $B(10)$ if $B(9)$ was played and vice versa. Finally, to keep an isomorphism going with the bottom $L$ board, Duplicator must play on top of $B(8)$, again breaking the isomorphisms with other $L$ boards. But then Spoiler plays $B(9)$ and Duplicator cannot respond.
Thus, trying to replicate the $5$ vs. $4$ strategy to the right of $B(5)$/$L(5)$ and mimicking play on top of or to the left of $B(5)$/$L(5)$ does not work for Duplicator.
When Spoiler played on top of $L(5)$ for one of his 3rd round moves, he was utilizing a move that was \textit{not} available to him in the $5$ vs. $4$ game. He \textit{did} have the option to play on top of $L(6)$ (which was labeled $L(1)$ in the $5$ vs. $4$ game) -- but playing on top of $L(5)$ has a subtly stronger effect than playing on top of $L(6)$ -- since it forces Duplicator to break an additional isomorphism if she plays a move that keeps an isomorphism with the board in which $L(6)$ was played.
Before moving on, it is worth noting that it was not strictly necessary for Spoiler to play on top of $L(5)$ on the top $L$ board on his 3rd move -- any move on $L(1)$--$L(4)$ would have worked just as well. See Figure \ref{fig:10_vs_9_recursion_attempt3b}.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/10_vs_9_recursion_attempt3b_v2.png}}}
\caption{A second example of 3rd round plays (in green) that foil the simple Duplicator-winning strategy for the $10$ versus $9$ game. The first round moves are given in red, 2nd round moves in blue.}
\label{fig:10_vs_9_recursion_attempt3b}
\end{figure}
For Duplicator to maintain an isomorphism with the top $L$ board she will have to move on $B(1)$--$B(4)$, again breaking the isomorphisms with any other $L$ boards, and Spoiler can then pick $B(6)$ on his 4th move,
breaking any hope for an isomorphism with the 1st board.
\section{Lower Bounds on $g(r)$} \label{sec:lower_bounds}
\begin{lem} \label{lemma:g_lower_of_2}
$g(2) \geq 2$.
\end{lem}
\begin{proof}
The sentence $\Phi_2 = \exists x\exists y(x < y)$ distinguishes linear orders of size $2$ and above, from the linear order of size $1$.
\end{proof}
\begin{lem} \label{lemma:g_lower_of_3}
$g(3) \geq 4$.
\end{lem}
\begin{proof}
The following sentence, with $3$ quantifiers,
distinguishes linear orders of size at least $4$ from those of size
at most $3$:
\begin{equation}
\Phi_3 = \forall x\exists y \exists z(x < y < z \vee y < z < x)\qedhere \label{g3_forall}
\end{equation}
\end{proof}
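To see concretely why (\ref{g3_forall}) has this distinguishing power: in a linear order of size at least $4$, every element $x$ has at least two elements strictly above it or at least two elements strictly below it, so witnesses $y,z$ for one of the two disjuncts always exist; in the linear order $1 < 2 < 3$, by contrast, the choice $x = 2$ leaves only a single element on each side of $x$, so neither disjunct can be satisfied.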
\begin{lem} \label{lemma:g_underline_of_4}
$g(4) \geq 10$.
\end{lem}
\begin{proof}
The following sentence with $4$ quantifiers distinguishes linear orders of size at least $10$ from those of size at most $9$.
\begin{small}
\begin{align}
\Phi_4 = \forall x \exists y \forall z \exists w(& \notag \\
&x < z < y \rightarrow (w \neq z \land x < w < y) ~~~\land \label{cond1} \\
&x < y < z \rightarrow (w \neq z \land x < y < w) ~~~\land \label{cond2}\\
&y < z < x \rightarrow (w \neq z \land y < w < x)~~~ \land\label{cond3} \\
&z < y < x \rightarrow (w \neq z \land w < y < x)~~~\land \label{cond4}\\
&z = x \rightarrow (x < w < y \vee y < w < x)~~~\land \label{cond5}\\
&z = y \rightarrow (x < y < w \vee w < y < x)). \label{cond6}
\end{align}
\end{small}
This sentence captures the fact that ``for every $x$ there is a $y$ with two or more elements on each side of $y$, both of which are on the same side of $x$ as $y$'', a fact that is true for linear orders of size $10$ or greater, but not for linear orders of size less than $10$. For example, in a linear order of size $9$, the middle element will not have an element to either side of it having these properties. More specifically, the first four implications (\ref{cond1})--(\ref{cond4}) say that for every $x$ there is a $y$ such that for any $z \neq x,y$, with $z$ on the same side of $x$ as $y$, there is an additional element besides $z$ on the same side of $y$ and also on the same side of $x$ as $y$. The last two implications (\ref{cond5})--(\ref{cond6}) are critical and insure that there are elements (i) between $x$ and $y$ and (ii) less than $y$ if $y < x$, and greater than $y$ if $y > x$.
\end{proof}
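To make the threshold of $10$ explicit: the element $y$ guaranteed by $\Phi_4$ needs at least two elements on each of its sides, all lying on the same side of $x$ as $y$, so that side of $x$ must contain at least $5$ elements. In a linear order of size $n$, the larger side of an element $x$ contains at least $\lceil (n-1)/2 \rceil$ elements, with equality for a middle element, so such a $y$ exists for every $x$ precisely when $\lceil (n-1)/2 \rceil \geq 5$, that is, when $n \geq 10$; for $n = 9$ the middle element has only $4$ elements on each side.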
For a game-based proof of Lemma \ref{lemma:g_underline_of_4} see Appendix \ref{sec:app-g_underline_of_4}.
With initial values $g(1) = 1, g(2) \geq 2$ (Lemma \ref{lemma:g_lower_of_2}), $g(3) \geq 4$ (Lemma \ref{lemma:g_lower_of_3}) and $g(4) \geq 10$ (Lemma \ref{lemma:g_underline_of_4}), we establish all remaining lower bounds via a game argument:
\begin{thm} \label{thm:main_lower}
For $r > 4$,
\begin{equation} \label{eqn:main_recursion_lower}
g(r) \geq \begin{cases} 2g(r-1)~~~~~~\textrm{if $r$ is even,}\\ 2g(r-1) + 1\textrm{ if $r$ is odd.} \end{cases}
\end{equation}
\end{thm}
\begin{proof}
Let us establish (\ref{eqn:main_recursion_lower}) first for odd $r$. To establish the theorem (for odd $r$) we need to provide a Spoiler-winning strategy for a game with arbitrary linear orders of sizes at least $2g(r-1)+1$ on one side and arbitrary linear orders of sizes at most $2g(r-1)$ on the other side. For simplicity, we will show the strategy for the game in which we have a single linear order of size at least $2g(r-1)+1$ on one side and a single linear order of size at most $2g(r-1)$ on the other side. The strategy for the case of multiple linear orders on each side is exactly the same, as we shall see. Analogous remarks hold for the even $r$ case.
We start with the special, though pivotal, case where we have linear orders of size $2g(r-1) + 1$ and $2g(r-1)$, as in Figure \ref{fig:lower_bound_odd_case}.
\begin{figure} [ht]
\centerline{\scalebox{0.38}{\includegraphics{images/lower_bound_odd_case_v2.png}}}
\caption{The odd $r$ case: linear orders of sizes $2g(r-1) + 1$ and $2g(r-1)$. Spoiler plays first on $B$ (the left hand side). First round moves are indicated in red. Duplicator responds playing every possible move. Five exemplary boards with different moves played on each are shown.}
\label{fig:lower_bound_odd_case}
\end{figure}
We describe a winning strategy for Spoiler. Spoiler begins by playing the middle element on $B$, i.e. $B(g(r-1) + 1)$. Duplicator responds playing every possible move leaving short sides that all have fewer than $g(r-1)$ elements. Spoiler next makes his 2nd round move on each board of $L$ on the short sides, while playing on top of any end moves (moves without clearly defined short sides), such as those shown on the bottom two boards in Figure \ref{fig:lower_bound_odd_case}. With the exception of the two end moves, which we will handle independently, Spoiler is going to play all subsequent moves entirely on the short sides of the boards of $L$ and the corresponding sides of the boards of $B$ -- in other words if the short side of $L$ is on the left, he will play on the left side of $B$, and vice versa -- in accordance with the known Spoiler-winning strategy on boards of size $g(r-1)$ or greater vs. boards of size less than $g(r-1)$.
In order to see how he is able to do this, we need to use a strong inductive assumption, namely that in games of $r$ rounds, when $r$ is even, and there are linear orders of sizes $g(r)$ or greater on one side, and less than $g(r)$ on the other side, Spoiler can then always win by playing first on the $L$ side. The game-based proof of Lemma \ref{lemma:g_underline_of_4} demonstrated such a strategy for the case $r=4$ and we will have to keep this commitment when we cover additional even $r$ cases next. For now though, we are assuming that $r$ is odd, so $r-1$ is even and we have such a strategy.
Duplicator will respond with the oblivious strategy, making many copies of $B$ and playing all possible moves. From this point forward the boards on both sides that have had their 2nd moves played to the right of the 1st moves conceptually constitute one game, and the boards on both sides that have had their 2nd moves played to the left of the 1st moves conceptually constitute a second game. The conceptual game played entirely on the left side of the boards can be won by Spoiler, as can the conceptual game played on the right, both times using the strong induction hypothesis. Note that since both games have the same larger size boards, Spoiler can win by picking the same side to play on ($L$ or $B$) on all boards in each of the conceptual games, in each subsequent round, and hence the two conceptual games can be played round-by-round in tandem. In so doing, all partial isomorphisms are broken and so Spoiler wins the combined game.
However, we have not yet described how to take care of the case where Duplicator played end moves on $L$ in the 1st round and Spoiler reciprocated by playing on top of these end moves. In order to maintain an isomorphism with either of these boards Duplicator will have to play on top of the 1st move on $B$ -- which will break any potential isomorphism with any other $L$ boards. Now we use a key observation from the Spoiler-winning strategy in the game on linear orders of sizes at least $10$ versus at most $9$ that established $g(4) \geq 10$ (see the proof of Lemma \ref{lemma:g_underline_of_4}): in the next-to-last round Spoiler played on $L$, and in the last round he played on $B$. Note that the $r$-round Spoiler strategy recursively uses an $(r-1)$-round Spoiler strategy, and so on, eventually using the $4$-round Spoiler strategy, since the base case of this theorem is $r=5$, which is defined in terms of $r=4$. Thus, for boards that remain isomorphic long enough, Spoiler will play the last two rounds consecutively on $L$ and then $B$.
In the next-to-last round in which Spoiler plays on $L$, Spoiler will select any element to the right of $L(1)$ on the boards where $L(1)$ was played as a first move, and Spoiler will select any element to the left of $L(2g(r-1))$ on the boards where $L(2g(r-1))$ was played as a 1st move. As a result, to maintain partial isomorphisms with the boards in which, respectively, $L(1)$ and $L(2g(r-1))$ were played, Duplicator will have to play so that the $B$ boards that remain isomorphic with the board in which $L(1)$ was played are not isomorphic to the boards in which $L(2g(r-1))$ was played, and vice versa. Thus, in the final round, Spoiler will play to the right of the middle element on the $B$ boards that maintained a partial isomorphism with the boards where $L(2g(r-1))$ was played first, thus killing all surviving partial isomorphisms, and will play to the left of the middle element on the $B$ boards that maintained a partial isomorphism with boards where $L(1)$ was played first, killing all of its remaining partial isomorphisms.
For the more general case where $|B| > 2 g(r-1) + 1$, Spoiler simply picks any element having at least $g(r-1)$ elements on each side of his 1st move and the play proceeds via the same induction. Analogously, if $|L| < 2g(r-1)$ it still follows that Duplicator's 1st round moves leave a short side of size less than $g(r-1)$, and hence the argument remains the same with respect to such smaller size $L$.
Next let us tackle the case where $r$ is even. We begin as usual with the base case of boards of sizes $2g(r-1)$ and $2g(r-1) - 1$. See Figure \ref{fig:lower_bound_even_case}.
\begin{figure} [ht]
\centerline{\scalebox{0.38}{\includegraphics{images/lower_bound_even_case_v2.png}}}
\caption{The even $r$ case: linear orders of sizes $2g(r-1)$ and $2g(r-1) - 1$. Spoiler plays first on $L$ (the right hand side). First round moves are indicated in red. Duplicator responds playing every possible move. Five exemplary boards with different moves played on each are shown.}
\label{fig:lower_bound_even_case}
\end{figure}
As Spoiler, we must keep our inductive commitment to play first on $L$ and so play the middle element, as indicated in the figure. Now any move by Duplicator on $B$ leaves a long side of size at least $g(r-1)$ versus the same side (left or right of the 1st move) on $L$, of size $g(r-1)-1$. Thus we adapt the argument from the odd $r$ case to this case, but where we play on the long side now rather than on the short side. The strong induction hypothesis we need this time is that smaller odd $r$ cases can be won by playing the 1st round from $B$ -- but we know this to be the case for $r=5$, the very first case covered by this theorem, and all subsequent cases of odd $r$ by how we argued the odd $r$ case. The theorem therefore follows.
\end{proof}
For $r > 4$, the sentences associated with the strategies described in the proof of the above theorem say, respectively, for $r$ even, that for every $x$ there is a linear order of size at least $g(r-1)$ either to the left or right of $x$, and for $r$ odd, that there is an element $x$ with a linear order of size at least $g(r-1)$ both to the left and right of $x$.
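Unwinding the recursion (\ref{eqn:main_recursion_lower}) from $g(4) \geq 10$ gives the first few lower bounds explicitly:
\[
g(5) \geq 2\cdot 10 + 1 = 21, \qquad g(6) \geq 2\cdot 21 = 42, \qquad g(7) \geq 2\cdot 42 + 1 = 85,
\]
matching, in particular, the sizes distinguished by the sentences $\Phi_5$ and $\Phi_6$ constructed below.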
We give a sketch of how this works for $r = 5$ and $r = 6$. To get started we rewrite the previous expression for $\Phi_4$, given by (\ref{cond1}) -- (\ref{cond6}), replacing the variables $x,y,z,w$, respectively with $x_2, x_3, x_4, x_5$:
\begin{small}
\begin{align}
\Phi_4 = \forall x_2 \exists x_3 \forall x_4 \exists x_5(& \notag \\
&x_2 < x_4 < x_3 \rightarrow (x_5 \neq x_4 \land x_2 < x_5 < x_3) ~~~\land \label{cond1a} \\
&x_2 < x_3 < x_4 \rightarrow (x_5 \neq x_4 \land x_2 < x_3 < x_5) ~~~\land \label{cond2a}\\
&x_3 < x_4 < x_2 \rightarrow (x_5 \neq x_4 \land x_3 < x_5 < x_2)~~~ \land\label{cond3a} \\
&x_4 < x_3 < x_2 \rightarrow (x_5 \neq x_4 \land x_5 < x_3 < x_2)~~~\land \label{cond4a}\\
&x_4 = x_2 \rightarrow (x_2 < x_5 < x_3 \vee x_3 < x_5 < x_2)~~~\land \label{cond5a}\\
&x_4 = x_3 \rightarrow (x_2 < x_3 < x_5 \vee x_5 < x_3 < x_2)). \label{cond6a}
\end{align}
\end{small}
With this translation of variable names, the sentence says that ``for every $x_2$ there is a $x_3$ with two or more elements on each side of $x_3$, both of which are on the same side of $x_2$ as $x_3$''. More specifically, as noted before, the first four implications (\ref{cond1a})--(\ref{cond4a}) say that for every $x_2$ there is an $x_3$ such that for any $x_4 \neq x_2,x_3$, with $x_4$ on the same side of $x_2$ as $x_3$, there is an additional element besides $x_4$ on the same side of $x_3$ and also on the same side of $x_2$ as $x_3$. The last two implications (\ref{cond5a})--(\ref{cond6a}) insure that there are elements (i) between $x_2$ and $x_3$ and (ii) less than $x_3$ if $x_3 < x_2$, and greater than $x_3$ if $x_3 > x_2$.
To this we now add that there exists an element $x_1$ such that there is a linear order of size $10$ both to its right and its left; in other words, there exists an element $x_1$ such that the above sentence is true for elements less than $x_1$ and for elements greater than $x_1$, specifically:
\begin{small}
\begin{align*}
\Phi_5 = \exists x_1 \forall x_2 \exists x_3 \forall x_4 \exists x_5\Big(& \notag \\
x_1 < x_2 \rightarrow~~~~~&\Big( x_1 < x_2 < x_4 < x_3 \rightarrow (x_5 \neq x_4 \land x_1 < x_2 < x_5 < x_3) ~~~\land \label{cond1b} \\
&x_1 < x_2 < x_3 < x_4 \rightarrow (x_5 \neq x_4 \land x_1 < x_2 < x_3 < x_5) ~~~\land \\
&x_1 < x_3 < x_4 < x_2 \rightarrow (x_5 \neq x_4 \land x_1 < x_3 < x_5 < x_2)~~~ \land \\
&x_1 < x_4 < x_3 < x_2 \rightarrow (x_5 \neq x_4 \land x_1 < x_5 < x_3 < x_2)~~~\land \\
&x_4 = x_2 \rightarrow (x_1 < x_2 < x_5 < x_3 \vee x_1 < x_3 < x_5 < x_2)~~~\land \\
&x_4 = x_3 \rightarrow (x_1 < x_2 < x_3 < x_5 \vee x_1 < x_5 < x_3 < x_2)\Big)~~\land \\
x_2 < x_1 \rightarrow~~~~~&\Big( x_2 < x_4 < x_3 < x_1 \rightarrow (x_5 \neq x_4 \land x_2 < x_5 < x_3 < x_1) ~~~\land \\
&x_2 < x_3 < x_4 < x_1 \rightarrow (x_5 \neq x_4 \land x_2 < x_3 < x_5 < x_1) ~~~\land \\
&x_3 < x_4 < x_2 < x_1 \rightarrow (x_5 \neq x_4 \land x_3 < x_5 < x_2 < x_1)~~~ \land \\%\label{cond3c}
&x_4 < x_3 < x_2 < x_1 \rightarrow (x_5 \neq x_4 \land x_5 < x_3 < x_2 < x_1)~~~\land \\
&x_4 = x_2 \rightarrow (x_2 < x_5 < x_3 < x_1 \vee x_3 < x_5 < x_2 < x_1)~~~\land \\
&x_4 = x_3 \rightarrow (x_2 < x_3 < x_5 < x_1 \vee x_5 < x_3 < x_2 < x_1)\Big)~~\land \\
x_1 = x_2 \rightarrow~~~~~&(x_3 < x_1 \land x_1 < x_5)\Big).
\end{align*}
\end{small}
$\Phi_5$ distinguishes linear orders of size $21$ and above from those of size less than $21$.
Next, to form the expression for $\Phi_6$, which distinguishes linear orders of size $42$ and above from those of size less than $42$, we say that for every element $x_0$ there is either a linear order of size $21$ to the left or right of $x_0$. The construction is analogous:
\begin{footnotesize}
\begin{flalign*}
\Phi_6 = \forall x_0 \exists x_1& \forall x_2 \exists x_3 \forall x_4 \exists x_5\Bigg(& \notag \\
x_0 < x_1 \rightarrow \Bigg(&x_0 < x_1 < x_2 \rightarrow&~~\Big( x_0 < x_1 < x_2 < x_4 < x_3 \rightarrow (x_5 \neq x_4 \land x_0 < x_1 < x_2 < x_5 < x_3) ~~~\land \\
&&x_0 < x_1 < x_2 < x_3 < x_4 \rightarrow (x_5 \neq x_4 \land x_0 < x_1 < x_2 < x_3 < x_5) ~~~\land \\
&&x_0 < x_1 < x_3 < x_4 < x_2 \rightarrow (x_5 \neq x_4 \land x_0 < x_1 < x_3 < x_5 < x_2)~~~ \land \\
&&x_0 < x_1 < x_4 < x_3 < x_2 \rightarrow (x_5 \neq x_4 \land x_0 < x_1 < x_5 < x_3 < x_2)~~~\land \\
&&x_4 = x_2 \rightarrow (x_0 < x_1 < x_2 < x_5 < x_3 \vee x_0 < x_1 < x_3 < x_5 < x_2)~~~\land \\
&&x_4 = x_3 \rightarrow (x_0 < x_1 < x_2 < x_3 < x_5 \vee x_0 < x_1 < x_5 < x_3 < x_2)\Big)~~\land \\
&x_0 < x_2 < x_1 \rightarrow&~~\Big( x_0 < x_2 < x_4 < x_3 < x_1 \rightarrow (x_5 \neq x_4 \land x_0 < x_2 < x_5 < x_3 < x_1) ~~~\land \\
&&x_0 < x_2 < x_3 < x_4 < x_1 \rightarrow (x_5 \neq x_4 \land x_0 < x_2 < x_3 < x_5 < x_1) ~~~\land \\
&&x_0 < x_3 < x_4 < x_2 < x_1 \rightarrow (x_5 \neq x_4 \land x_0 < x_3 < x_5 < x_2 < x_1)~~~ \land \\
&&x_0 < x_4 < x_3 < x_2 < x_1 \rightarrow (x_5 \neq x_4 \land x_0 < x_5 < x_3 < x_2 < x_1)~~~\land \\
&&x_4 = x_2 \rightarrow (x_0 < x_2 < x_5 < x_3 < x_1 \vee x_0 < x_3 < x_5 < x_2 < x_1)~~~\land \\
&&x_4 = x_3 \rightarrow (x_0 < x_2 < x_3 < x_5 < x_1 \vee x_0 < x_5 < x_3 < x_2 < x_1)\Big)~~\land \\
&(x_0 < x_1 \land x_1 = x_2) \rightarrow&~~(x_0 < x_3 < x_1 \land x_0 < x_1 < x_5)\Bigg) \\
& \land & \\
x_1 < x_0 \rightarrow \Bigg(&x_1 < x_2 < x_0 \rightarrow&~~\Big( x_1 < x_2 < x_4 < x_3 < x_0 \rightarrow (x_5 \neq x_4 \land x_1 < x_2 < x_5 < x_3 < x_0) ~~~\land \\
&&x_1 < x_2 < x_3 < x_4 < x_0 \rightarrow (x_5 \neq x_4 \land x_1 < x_2 < x_3 < x_5 < x_0) ~~~\land \\
&&x_1 < x_3 < x_4 < x_2 < x_0 \rightarrow (x_5 \neq x_4 \land x_1 < x_3 < x_5 < x_2 < x_0)~~~ \land \\
&&x_1 < x_4 < x_3 < x_2 < x_0 \rightarrow (x_5 \neq x_4 \land x_1 < x_5 < x_3 < x_2 < x_0)~~~\land \\
&&x_4 = x_2 \rightarrow (x_1 < x_2 < x_5 < x_3 < x_0 \vee x_1 < x_3 < x_5 < x_2 < x_0)~~~\land \\
&&x_4 = x_3 \rightarrow (x_1 < x_2 < x_3 < x_5 < x_0 \vee x_1 < x_5 < x_3 < x_2 < x_0)\Big)~~\land \\
&x_2 < x_1 < x_0 \rightarrow&~~\Big( x_2 < x_4 < x_3 < x_1 < x_0 \rightarrow (x_5 \neq x_4 \land x_2 < x_5 < x_3 < x_1 < x_0) ~~~\land \\
&&x_2 < x_3 < x_4 < x_1 < x_0 \rightarrow (x_5 \neq x_4 \land x_2 < x_3 < x_5 < x_1 < x_0) ~~~\land \\
&&x_3 < x_4 < x_2 < x_1 < x_0 \rightarrow (x_5 \neq x_4 \land x_3 < x_5 < x_2 < x_1 < x_0)~~~ \land \\
&&x_4 < x_3 < x_2 < x_1 < x_0 \rightarrow (x_5 \neq x_4 \land x_5 < x_3 < x_2 < x_1 < x_0)~~~\land \\
&&x_4 = x_2 \rightarrow (x_2 < x_5 < x_3 < x_1 < x_0 \vee x_3 < x_5 < x_2 < x_1 < x_0)~~~\land \\
&&x_4 = x_3 \rightarrow (x_2 < x_3 < x_5 < x_1 < x_0 \vee x_5 < x_3 < x_2 < x_1 < x_0)\Big)~~\land \\
&(x_1 = x_2 \land x_1 < x_0) \rightarrow~~&(x_3 < x_1 < x_0 \land x_1 < x_5 < x_0)\Bigg) \\
& \land & \\
x_0 = x_1 \rightarrow &~x_3 \neq x_0\Bigg).&
\end{flalign*}
\end{footnotesize}
It is worth noting that we could have begun this process at $g(4)$. Starting with the expression for $\Phi_3$, given by (\ref{g3_forall}), establishing $g(3) \geq 4$ with prenex signature $\forall\exists\exists$, we could have ``relativized'' $\Phi_3$ to form an expression with prenex signature $\exists\forall\exists\exists$ saying that there exists an element with a linear order of size at least $4$ both to the left and right. This would have established that $g(4) \geq 9$. From there we could have obtained an expression with prenex signature $\forall\exists\forall\exists\exists$ establishing that $g(5) \geq 18$, and so on, at each juncture obtaining slightly worse bounds than we have already obtained. The magic is that a stronger lower bound for $g(4)$ can be expressed with the sentence given by (\ref{cond1}) -- (\ref{cond6}) [alternatively, (\ref{cond1a}) -- (\ref{cond6a})].
\section{Games with Atoms: A different Approach to Obtaining Upper Bounds on $g(r)$}\label{sec:atoms}
Let us define a new type of game. These games are similar to the MS games but with a twist. They will make it harder for Duplicator to win any particular game of $r$ rounds. Since Duplicator-winning strategies for a game of $r$ rounds provide upper bounds on $g(r)$, we will obtain upper bounds that are potentially weaker, but will later match the lower bounds for $r \geq 4$, proving the upper bounds in the range $r \geq 4$ to be tight. The reason for considering these games is that they will allow us to recurse, getting around the issue we got stuck on in Section~\ref{sec:interlude}.
In our new game of $r$ rounds, we again have sets $\mathcal{A}$ and $\mathcal{B}$ of structures. However, each board on each of the sides, in addition to containing a structure $S$, contains a collection of unrelated, but labeled elements, $\{a_1,...,a_s\}$, where $s$ can be as large as Spoiler wants.
We shall refer to the collection of unrelated labeled elements as \textit{atoms}.
By a slight abuse of notation, we will treat atoms as if they are constant symbols, so that they can appear in sentences. However, these symbols will not appear in the sentences
guaranteed by Theorems~\ref{thm:main1} and \ref{thm:main1a}.
These atoms are both unrelated to elements of the structure and unrelated to each other.
On their respective turns, Spoiler and Duplicator play, as in the MS games, on all boards on their chosen side for that turn, picking from each board either an element of the structure or one of the atoms. If atom $a_k$ is selected by Spoiler on a given board, in a given round $j$, the only way Duplicator can maintain a partial isomorphism between that board and an opposite board is to select $a_k$ in round $j$ on the opposite board.
When Duplicator makes copies of boards, the atoms are copied as well as the structures labeled with the moves made thus far.
In our figures, rather than showing all the atoms, and distinguishing those that are selected, we show only the atoms that have thus far been selected for a given board adjacent to the structures, with appropriate labeling.
Let us refer to these new games as \textit{MS games with atoms}.
\begin{observ}\label{obs:eq}
MS games with atoms are equivalent to MS games with structures that are a union of the prior structures and the set of atoms.
Hence, by Theorem~\ref{thm:main1}, we have that Spoiler wins $r$-round MS games with atoms on two sets of structures iff there exists a sentence $\Phi$ with up to $r$ quantifiers that distinguishes the two sets of structures. The sentence can contain constants and utilize an additional unary relation $A(x)$, which is true iff $x$ is an atom.
\end{observ}
Since a Spoiler-winning strategy for an MS game is still a winning strategy for the analogous game with atoms (where atoms are never selected by Spoiler) we have:
\begin{lem} \label{lemma:atoms}
If Spoiler can win an $r$-round MS game on a given pair of sets of structures then he can also win an $r$-round MS game with atoms on the same pair of sets of structures.
\end{lem}
\begin{cor} \label{cor:atoms}
If Duplicator can win an $r$-round MS game with atoms on a given pair of sets of structures, then she can also win an $r$-round MS game on the same pair of sets of structures.
\end{cor}
To understand the additional power provided to Spoiler by the atoms, let us revisit how Duplicator survives in one critical juncture of the $3$-round,
$5$ versus $4$ MS game from Lemma~\ref{lemma:g_upper_of_3}.
Figure~\ref{fig:B5L4}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=3.4in]{images/G5v4example_v2.png}
\caption{Multi-structural game play on $\mathcal{B}$ versus $\mathcal{L}$. 1st round moves are in red, 2nd round moves in blue.}\label{fig:B5L4}
\end{center}
\end{figure}
shows the boards after two rounds in the critical sequence. In the 1st round, Spoiler played $B(3)$ and Duplicator played all possible moves on $L$. We consider the case where Spoiler then replies with 2nd round moves on $L$ as indicated in blue in the figure, and in particular playing on top of $L(1)$ on the board labeled $L_1$ and on top of $L(4)$ on the board labeled $L_4$. Duplicator then replies with all possible responses on $B$ (again in blue). To the left of each $B$ board we indicate which $L$ boards the board has managed to keep an isomorphism with.
The crux of the matter is that because the third $B$ board is consistent with both $L_1$ and $L_4$, Spoiler cannot break both isomorphisms in the one remaining move.
With atoms, however, investing a turn to play an atom can usefully separate these two $L$ boards. Figure~\ref{fig:B5L4Atoms}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=3.4in]{images/G5v4exampleAtoms_v2.png}
\caption{Multi-structural game play with atoms.
}\label{fig:B5L4Atoms}
\end{center}
\end{figure}
illustrates Spoiler changing his second move on board $L_4$ from $L(4)$ to $a_1$. Duplicator now has an additional potentially viable 2nd round move, playing the newly played atom, $a_1$, as indicated in the additional board added on the left. However, note in the left hand column of the figure how no board is any longer simultaneously partially isomorphic with boards $L_1$ and $L_4$. Spoiler can now break each of the isolated isomorphisms by playing on $B$, in order, from top to bottom, as follows: $B(2), B(1), B(2), B(5), B(4), B(4)$.
The following follows from the discussion above and Lemma \ref{lemma:g_upper_of_3}.
\begin{prop} \label{lemma:atoms_vs_no_atoms_example}
While Duplicator can always win $3$-round MS games on linear orders of sizes $5$ vs. $4$, Spoiler can always win $3$-round MS games with atoms on linear orders of these sizes.
\end{prop}
We are now going to establish Duplicator-winning strategies for MS games with atoms, for various numbers of rounds. By Lemma \ref{lemma:atoms}, Duplicator has a harder time winning these games than standard MS games and so these strategies will provide weaker upper bounds than the upper bounds obtained by providing Duplicator-winning strategies in MS games.
\begin{defi}
Let $g'(r)$ denote the largest $k$ such that Spoiler wins every $r$-round MS game with atoms on two sets $\mathcal{B}$ and $\mathcal{L}$ of linear orders where each $B \in \mathcal{B}$ is of size at least $k$ and each $L \in \mathcal{L}$ is of size less than $k$.
\end{defi}
To see that $g'(r)$ is well-defined, note that if Duplicator can win an $r$-round E-F game on two linear orders $B, L$ then she can win an $r$-round MS game, as well as an $r$-round MS game with atoms, on sets $\mathcal{B},\mathcal{L}$ of structures with $B \in \mathcal{B}$ and $L \in \mathcal{L}$ -- this is the case because she can focus on the one pair of structures, $L$ and $B$, when playing the MS game, and further, because Spoiler gains no advantage from playing atoms when play is constrained to a single set of structures. Hence $g'(r) \leq f(r)$. In our considerations of $g'$ we will exclusively be focusing on game-based approaches. We will, further, just be concerned with establishing upper bounds on $g'(r)$, and in so doing will always be taking $\mathcal{B}$ and $\mathcal{L}$ to be singletons, $\mathcal{B} = \{B\}, \mathcal{L} = \{L\}$.
The following is an immediate consequence of Lemma \ref{lemma:atoms}.
\begin{lem} \label{lemma:g'_doms_g}
For all values of $r$, we have $g(r) \leq g'(r)$.
\end{lem}
The following lemma describes why it is possible to recursively prove upper bounds on $g'$ (and hence $g$) in MS games with atoms and get around the issue described in Section~\ref{sec:interlude}.
\begin{lem} \label{lemma:reduction}
\textbf{Reduction Lemma:} Suppose we have an $r$-round MS game with atoms on linear orders of sizes $K$ and $K'$. For some integer $h$, with $1 \leq h \leq \min(K, K')$, suppose there are boards on the $L$ and $B$ sides in which first moves of $L(h)$ and $B(h)$ have been played, or in which moves of $L(K-h+1)$ and $B(K'-h+1)$ have been played. Then Duplicator wins the $K$ vs. $K'$ game on those boards iff she wins the $(r-1)$-round MS game with atoms on linear orders of sizes $K-h$ and $K'-h$.
\end{lem}
Figure \ref{fig:6_vs_5_reduction} illustrates the reduction of a $3$-round, $6$ versus $5$ MS game with atoms, to a $2$-round, $3$ versus $2$ MS game with atoms, per the conclusion.
\begin{figure} [ht]
\centerline{\scalebox{0.30}{\includegraphics{images/6_vs_5_reduction_v2.png}}}
\caption{An illustration of the reduction in Lemma \ref{lemma:reduction} where a $3$-round, $6$ versus $5$ MS game with atoms gets reduced to a $2$-round, $3$ versus $2$ MS game with atoms.}
\label{fig:6_vs_5_reduction}
\end{figure}
\begin{proof}
Without loss of generality assume the 1st round moves are $L(h)$ and $B(h)$.
Duplicator now makes a pact with Spoiler, saying that the only way she will maintain an isomorphism with moves on the left hand side of $L(h)$/$B(h)$ is by mirroring (in other words by responding to $L(i)$ with $B(i)$ when $1 \leq i \leq h$, and vice versa). She tells Spoiler that if ever such a move is \textit{not} mirrored, Spoiler can count it as a break in the partial isomorphism between the two boards. By agreeing to these stricter isomorphism rules, Duplicator makes it harder for herself to maintain partial isomorphisms. However, the effect is that we can remove the elements $B(1),...,B(h)$ and $L(1),...,L(h)$ and set aside the first $h$ atoms on each side, $a_1,...,a_h$, only to be played as follows: if Spoiler ever would have wanted to play $B(i)$ or $L(i)$, with $1 \leq i \leq h$, on a given board, he can instead play $a_i$ with the same effect. This reduces the game to an $(r-1)$-round MS game with atoms on linear orders of size $K-h$ and $K'-h$ with $h$ additional atoms. Since in the definition of MS games with atoms, Spoiler already has as many atoms as he wants, the additional $h$ atoms have no effect on the game.
We are left with just playing an $(r-1)$-round MS game with atoms on linear orders of size $K-h$ and $K'-h$ and if Duplicator can win such a game, she can win the original game.
On the other hand, if Duplicator does \textit{not} have a winning strategy in the $K-h$ vs. $K'-h$ MS game with atoms, then Spoiler has a winning strategy and can force play to be entirely on the side of the initial move with these many unplayed elements, and
thus win. The lemma follows.
\end{proof}
\begin{lem} \textbf{Laddering Up Lemma for MS Games and MS Games with Atoms:} \label{lemma:stepping-up}
Suppose Duplicator can win MS games (respectively MS games with atoms) on boards of sizes $K, K+1$ for all $K \geq N$. Then Duplicator can also win MS games (respectively MS games with atoms)
on boards of sizes $K, K'$ whenever $K \geq N$ and $K' \geq N$.
\end{lem}
\begin{proof}
Suppose both $K \geq N$ and $K' \geq N$. Let l.o.($K$) denote the linear order of size $K$. Then we have that l.o.($K$) $\equiv_r$ l.o.($K+1$) $\equiv_r \cdots \equiv_r$ l.o.($K'$). By repeated application of Lemma \ref{lemma:prenex_equiv} the lemma follows for MS games.
The same argument works for MS games with atoms by replacing each linear order with the union of the linear order and the corresponding atoms, whereby MS games with atoms reduce to MS games, per Observation~\ref{obs:eq}.
\end{proof}
\begin{lem} \label{lemma:g_prime_of_2}
Duplicator can win $2$-round multi-structural games with atoms on linear orders of sizes $2$ or greater and hence $g'(2) \leq 2$.
\end{lem}
\begin{proof}
Immediate from Lemma \ref{lemma:g_upper_of_2} coupled with the observation that it never helps Spoiler to play an atom in the first or last round.
\end{proof}
\begin{lem} \label{lemma:g_prime_of_3}
Duplicator can win $3$-round MS games with atoms on linear orders of sizes $5$ or greater and hence $g'(3) \leq 5$.
\end{lem}
\begin{proof}
Suppose we have linear orders of sizes $K,K+1$ with $K \geq 5$. Any Spoiler 1st round move leaves a short side of no more than $\lfloor\frac{K}{2}\rfloor$ unplayed elements on that side. Duplicator can then reply with a move leaving an identical short side. Without loss of generality assume this common short side is on the left. Then to the right of the played moves, each board has at least $K - \lfloor\frac{K}{2}\rfloor - 1 \geq 2$ unplayed elements, and by virtue of the fact that $g'(2) \leq 2$ (Lemma \ref{lemma:g_prime_of_2}), Lemma \ref{lemma:reduction} guarantees that Duplicator has a winning strategy. The Laddering Up Lemma \ref{lemma:stepping-up} then guarantees that Duplicator has a winning strategy for any boards of sizes $K \geq 5$ and $K' \geq 5$. The lemma follows.
\end{proof}
Although this section is concerned with establishing upper bounds on $g$, we shall actually need to establish precise values for $g'$ in order to get the upper bounds on $g$ to go through. The discussion we gave to show that Spoiler can win a $5$ vs. $4$ $3$-round MS game with atoms (Proposition \ref{lemma:atoms_vs_no_atoms_example}) can easily be extended to show that Spoiler can win such a $3$-round game on linear orders of sizes $5$ or greater versus $4$ or smaller. Hence we have that $g'(3) \geq 5$, so that together with the prior lemma we have:
\begin{lem} \label{lemma:g_prime_of_3_hard} $g'(3) = 5$.
\end{lem}
Any Spoiler-winning strategy in an ordinary MS game of $r$ rounds corresponds to a sentence with $r$ quantifiers that is valid for $B$ but not for $L$. If Spoiler's strategy starts on $B$ then the corresponding sentence starts with $\exists$, while if Spoiler's strategy starts on $L$, the sentence starts with $\forall$. If we force Spoiler to play first on $L$, then we are giving Duplicator an advantage so that she may be able to win games that she would not be able to win otherwise.
\begin{defi} \label{def:g_forall}
Let $g'_\forall(r)$ denote the smallest $k$ such that Duplicator can win $r$-round MS games with atoms on a pair of linear orders, each of size $k$ or greater, if Spoiler is constrained to play his first move on $L$.
\end{defi}
\begin{lem} \label{lemma:for_all}
If Spoiler is constrained to play his first move from $L$, Duplicator can win $r$-round MS games with atoms on linear orders of sizes $2g'(r-1)$ or greater. Hence,
$g'_\forall(r) \leq 2g'(r-1)$.
\end{lem}
\begin{proof}
Consider an $r$-round, $2g'(r-1) + 1$ vs. $2g'(r-1)$ MS game with atoms. Refer to Figure \ref{fig:forall_lemma}.
\begin{figure} [ht]
\centerline{\scalebox{0.38}{\includegraphics{images/forall_lemma_v3.png}}}
\caption{The $r$-round, $2g'(r-1) + 1$ vs. $2g'(r-1)$ MS game with atoms. Only one board on each side is shown.}
\label{fig:forall_lemma}
\end{figure}
If Spoiler is constrained to play on the $L$ side, his play will necessarily leave a short side of size at most $g'(r-1)-1$, which can be matched by a move that leaves the same short side on $B$, and with long sides that are each of size at least $g'(r-1)$. Hence the game is winnable by Duplicator via the Reduction Lemma (Lemma \ref{lemma:reduction}). For $r$ round games on boards of sizes $K+1$ vs. $K$ with $K > 2g'(r-1)$ again the short sides can be matched up, leaving long sides of sizes still at least $g'(r-1)$. The lemma follows by the Laddering Up Lemma (Lemma \ref{lemma:stepping-up}).
\end{proof}
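For instance, with $r = 4$ we have $g'(3) = 5$ (Lemma \ref{lemma:g_prime_of_3_hard}), and the base case concerns boards of sizes $11$ and $10$. Since Spoiler must open on the $L$ board of size $10$, his first move leaves a short side of at most $4 = g'(3) - 1$ elements; Duplicator matches that short side on the $B$ board of size $11$, and both long sides then contain at least $5 = g'(3)$ unplayed elements, so the Reduction Lemma (Lemma \ref{lemma:reduction}) together with Lemma \ref{lemma:g_prime_of_3} gives Duplicator a win in the remaining $3$ rounds.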
\begin{thm} \label{thm:main_upper}
For $r \geq 2$,
\begin{equation} \label{eqn:main_recursion}
g'(r) = \begin{cases} 2g'(r-1)~~~~~~\textrm{if $r$ is even,}\\ 2g'(r-1) + 1\textrm{ if $r$ is odd.} \end{cases}
\end{equation}
Moreover, Duplicator can win $r$-round MS games with atoms on linear orders of sizes $2g'(r-1)$ or greater if $r$ is even, and on linear orders of sizes $2g'(r-1) + 1$ or greater if $r$ is odd.
\end{thm}
\begin{proof}
We establish the equality asserted in the theorem via first establishing that the $\geq$ inequality holds, and then establishing that the $\leq$ inequality holds. Since for all $r \geq 1$ we have $g'(r) \geq g(r)$ (lemma \ref{lemma:g'_doms_g}), the $\geq$ portion of the theorem follows from the lemmas establishing that $g(1) = 1, g(2) \geq 2, g'(3) = 5$ and $g(4) \geq 10$, together with Theorem \ref{thm:main_lower}.
Now let us establish that the $\leq$ inequality holds. It is trivial to verify that $g'(1) = 1$, while $g'(2) \leq 2$ is Lemma \ref{lemma:g_prime_of_2} and $g'(3) = 5$ is Lemma \ref{lemma:g_prime_of_3_hard}. Hence, we have already established the $\leq$ part of the theorem for $r=2$ and $r = 3$. For larger values of $r$ we proceed inductively, assuming the truth of the theorem for values up to $r-1$ and proving it for the value $r$. The inductive step for the case of odd $r$ is easy, so let's dispose of that case first -- it is essentially the same argument we gave to establish $g'(3) \leq 5$ in Lemma \ref{lemma:g_prime_of_3}. Suppose we have linear orders of sizes $K, K+1$ with $K \geq 2g'(r-1) + 1$. Any Spoiler 1st round move leaves a short side of no more than $\lfloor\frac{K}{2}\rfloor$ unplayed elements on that side. Duplicator can then reply with a move leaving an identical short side. Without loss of generality assume this common short side is on the left. Then, to the right of the played moves, each board has at least $K - \lfloor\frac{K}{2}\rfloor - 1$ unplayed elements. But, using the inductive assumption, $K - \lfloor\frac{K}{2}\rfloor - 1 \geq K - \frac{K}{2} - 1 \geq (g'(r-1) + \frac{1}{2}) - 1$, whence $K - \lfloor\frac{K}{2}\rfloor - 1 \geq g'(r-1) - \frac{1}{2}$. Since both $K - \lfloor\frac{K}{2}\rfloor - 1$ and $g'(r-1)$ are integers, it follows that $K - \lfloor\frac{K}{2}\rfloor - 1 \geq g'(r-1)$. Thus, each board has at least $g'(r-1)$
unplayed elements on their long sides. The Reduction Lemma (Lemma \ref{lemma:reduction}) in conjunction with the induction hypothesis therefore guarantees that Duplicator has a winning strategy. The Laddering Up Lemma (Lemma \ref{lemma:stepping-up}) then guarantees that Duplicator has a winning strategy for any boards of sizes $K,K' \geq 2g'(r-1) + 1$.
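For a concrete instance of this arithmetic, take $r = 5$ (so that, by induction, $g'(4) = 2g'(3) = 10$) and boards of sizes $21$ and $22$: any first move leaves a short side of at most $\lfloor 21/2 \rfloor = 10$ unplayed elements, and after Duplicator matches it each long side contains at least $21 - 10 - 1 = 10 = g'(4)$ unplayed elements, which is exactly what the Reduction Lemma requires.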
Next consider the case of even $r$. Suppose first that we have linear orders of sizes $2g'(r-1)$ and $2g'(r-1) + 1$. Let us first dispose of any first move by Spoiler other than $B(g'(r-1) + 1)$, the middle element on the $B$ side. Any move other than this one on the $B$ side can be responded to with a move on $L$ that matches the short side while leaving at least $g'(r-1)$ unplayed elements on both long sides, and therefore, again by the Reduction Lemma and induction, yielding a winning strategy for Duplicator. On the other hand, any Spoiler 1st round move on $L$ is analogously met by matching the short side with a move on $B$, transposing to the just-prior analysis. Thus we may assume that Spoiler plays the element $B(g'(r-1) + 1)$. In response, Duplicator uses two boards and plays $L(g'(r-1))$ on one of the boards and $L(g'(r-1) + 1)$ on the other one. See Figure \ref{fig:split_board}.
\begin{figure} [ht]
\centerline{\scalebox{0.38}{\includegraphics{images/split_board_v3.png}}}
\caption{A $2g'(r-1) + 1$ vs. $2g'(r-1)$ game where Spoiler plays first on $B(g'(r-1) + 1)$ and Duplicator responds by playing $L(g'(r-1))$ on one board and $L(g'(r-1) + 1)$ on another.}
\label{fig:split_board}
\end{figure}
Consider the possible Spoiler 2nd round responses. Suppose Spoiler plays on $B$. A play on one of the left-hand unplayed $g'(r-1)$ elements, i.e., on some $B(i)$ such that $1 \leq i \leq g'(r-1)$ will be met with a play of $L(i)$ on the 2nd $L$ board. As we argued in the proof of the Reduction Lemma, we can now regard the remaining $g'(r-1) - 1$ unplayed elements to the left of the 1st round selections on the bottom $L$ board and the $B$ boards as additional atoms and just consider the game on the right side of these boards, which is now an $r-2$ round game on boards of sizes $g'(r-1)$ and $g'(r-1) - 1$. Inductively it is easy to see that $g'(r-1) - 1 > g'(r-2)$ and so such a sequence leads inductively to a Duplicator win. A 2nd round Spoiler play on $B$, on one of the right hand set of unplayed $g'(r-1)$ elements, is handled with a symmetrical argument. Further, playing an atom on $B$ is met by playing the same atom on both $L$ boards (in fact playing on just one board and ignoring the other is sufficient), and again leads to an inductive win for Duplicator by virtue of the fact that $g'(r-1) - 1 > g'(r-2)$.
Thus we may assume that Spoiler makes his 2nd round moves on $L$. A play on the long side of either board is met with a symmetrical play by Duplicator on $B$, transposing to our prior analysis for when Spoiler played his 2nd move on $B$. Playing an atom on $L$ is met with the same atom being played on $B$, again with a transposition. Thus we may suppose that Spoiler plays on the short side of both $L$ boards, and, in particular, plays on the left on the top $L$ board. Now the left hand (short side) of the top board is of size $g'(r-1) - 1$ and the left hand side of $B$ is of size $g'(r-1)$. Further, we are assuming that $r$ is even, and so $r-1$ is odd and hence by (\ref{eqn:main_recursion}) we have
\begin{equation} \label{eqn:r-1_to_r-2}
g'(r-1) = 2g'(r-2) + 1.
\end{equation}
Since we've reduced the analysis to Spoiler next playing on the $L$ side of this $(r-1)$-round, $g'(r-1)$ vs. $g'(r-1)-1$ game, Lemma \ref{lemma:for_all} applies and says that $g'_\forall(r-1) \leq 2g'(r-2)$. Taken together with (\ref{eqn:r-1_to_r-2}), we have that $g'_\forall(r-1) \leq g'(r-1) - 1$.
Thus, we are left with boards of sizes $g'(r-1)$ and $g'(r-1)-1$, both of which are at least of the size of $g'_\forall(r-1)$. Hence, by the definition of $g'_\forall$ (Definition \ref{def:g_forall}), Duplicator has a winning strategy, over the remaining $r-1$ rounds, playing just on the left hand sides of these two boards and hence, by the Reduction Lemma, has a winning strategy playing on the entire board. Thus Duplicator has a winning strategy in the original $r$-round game for boards of sizes $2g'(r-1)$ and $2g'(r-1) + 1$.
For $K, K+1$ with $K > 2g'(r-1)$ the argument is easier since Duplicator can just mimic the short side play of any 1st round Spoiler play and immediately apply the Reduction Lemma. As usual, the argument is completed by applying the Laddering Up Lemma.
\end{proof}
We have therefore established the following upper bounds:
\begin{cor} \label{cor:main_upper}
We have
$g(2) \leq 2, g(3) \leq 4, g(4) \leq 10$, and for $r > 4$,
\begin{equation*}
g(r) \leq \begin{cases} 2g(r-1)~~~~~~\textrm{if $r$ is even,}\\ 2g(r-1) + 1\textrm{ if $r$ is odd.} \end{cases}
\end{equation*}
Moreover, Duplicator can win $r$-round MS games on linear orders of sizes that are at least as large as these upper bound (right hand side) values in all the inequalities.
\end{cor}
\begin{proof}
The inequalities $g(2) \leq 2, g(3) \leq 4$ are Lemmas \ref{lemma:g_upper_of_2} and \ref{lemma:g_upper_of_3}. The chain of inequalities $g(4) \leq g'(4) \leq 2g'(3) \leq 10$, follows from Lemma \ref{lemma:g'_doms_g}, Lemma \ref{lemma:g_prime_of_3} and Theorem \ref{thm:main_upper}. The inductively defined inequality for $r > 4$ follows by Lemma \ref{lemma:g'_doms_g} and Theorem \ref{thm:main_upper}. Finally, the upper bounds associated with all these lemmas and one theorem were established and stated by observing that Duplicator could win $r$ round games when both linear orders were at least as large as the upper bounds. The same therefore follows for this corollary.
\end{proof}
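For quick reference, the upper-bound recursion of Corollary \ref{cor:main_upper} is easy to tabulate; the sketch below (ours, with the purely illustrative helper name \texttt{g\_upper}) lists the first few values.
\begin{verbatim}
# Sketch: tabulate the upper bounds on g(r) from Corollary cor:main_upper.
# Base values g(1)=1, g(2)<=2, g(3)<=4, g(4)<=10; beyond that the bound
# doubles, with an extra +1 when r is odd.
def g_upper(r):
    base = {1: 1, 2: 2, 3: 4, 4: 10}
    if r in base:
        return base[r]
    return 2 * g_upper(r - 1) + (1 if r % 2 == 1 else 0)

print([g_upper(r) for r in range(1, 9)])  # -> [1, 2, 4, 10, 21, 42, 85, 170]
\end{verbatim}
As Section \ref{sec:final} records, these upper bounds match the lower bounds obtained earlier, so they are in fact the exact values of $g(r)$.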
\section{Tight Bounds on $g(r)$ and Final Theorems} \label{sec:final}
At last,
we pull together the identical upper and lower bounds we have obtained for $g(r)$,
to yield Theorem \ref{thm:g}. Further, since the upper and lower bounds on $g(r)$ turned out to be tight, the very last paragraph at the end of Section \ref{sec:lower_bounds} immediately implies the following:
\begin{thm}
For each $r \geq 1$ there is a sentence with $r$ quantifiers distinguishing linear orders of size $g(r)$ or greater from linear orders of size less than $g(r)$. The prenex signatures of such sentences are as follows:
\begin{eqnarray*}
r = 1:& \exists \\
r = 2:& \exists\exists \\
r = 3:& \forall\exists\exists \\
r \geq 4, r~\textrm{even}:& \forall\exists\cdot\cdot\cdot\forall\exists \\
r \geq 4, r~\textrm{odd}:& \exists\forall\exists\cdot\cdot\cdot\forall\exists.
\end{eqnarray*}
\end{thm}
\noindent Here the $\cdot\cdot\cdot$ signifies a sequence of repeating quantifier alternations $\forall\exists$ of the length needed to give rise to $r$ quantifiers in total.
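For instance, for $r = 6$ the signature is $\forall\exists\forall\exists\forall\exists$, while for $r = 7$ it is $\exists\forall\exists\forall\exists\forall\exists$.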
\begin{cor}
Let $\mathcal{A}$ be any collection of linear orders, each of size at least $g(r)$, and $\mathcal{B}$ any collection of linear orders, each of size less than $g(r)$. Then Spoiler can win an $r$-round MS game on $\mathcal{A}$ and $\mathcal{B}$.
\end{cor}
Although we have a completely specified $g(r)$, we have not completely answered the question of the minimum number of quantifiers needed to distinguish one linear order, $B$, from another, $L$ in one special case. Specifically, if $g(r-1) \leq |L| < |B| < g(r)$ for some value of $r$, we know there is no sentence with $r-1$ quantifiers that distinguishes $B$ from $L$, but nothing more. The following lemma closes this gap.
\begin{lem} \label{lemma:gap_closer} Given two linear orders $B, L$ with $|L| < |B| < g(r)$ for some $r > 0$, there is a sentence with $r$ quantifiers that distinguishes $B$ from $L$.
\end{lem}
\begin{proof}
If there is a sentence with $r-1$ quantifiers that distinguishes $B$ from $L$, there is obviously a sentence with $r$ quantifiers that does the same. Hence, we only need to verify the lemma for the case $g(r-1) \leq |L| < |B| < g(r)$. This is the range of values of $|L|$ and $|B|$ for which we have not resolved how many quantifiers suffice to separate $B$ from $L$, and hence why we think of this lemma as ``filling in the gaps.''
We will demonstrate the conclusion of the lemma using a combination of explicit sentences and Spoiler winning strategies. It is not possible to have $|L| < |B| < g(1)$, so that case is handled. For $g(2)$ the only possibility is $|L| = 0, |B| = 1$, which is covered by $g(1) = 1$. Next, for $g(3)$, the cases covered by the lemma are those given in the table below.
\begin{center}
\begin{tabular}{ |c|c|c|}
\hline
$\bm{|L|}$ & $\bm{|B|}$ & \textbf{Why Spoiler Wins in $\bm{3}$ Rounds} \\
\hline
0 & 1 & $g(1) = 1$ \\
0 & 2 & $g(1) = 1$ \\
0 & 3 & $g(1) = 1$ \\
1 & 2 & $g(2) = 2$ \\
1 & 3 & $g(2) = 2$ \\
2 & 3 & $3$ rounds; Spoiler wins by playing 3 distinct elements in $B$ \\
\hline
\end{tabular}
\end{center}
Next consider $g(4)$, which we shall treat as a special case.
Everything else will follow via an induction argument based on $g(r)$ for $r > 4$. Since $g(4) = 10,$ we have $ |B| \leq 9$. The following expression with $4$ quantifiers distinguishes linear orders of size $9$ and above from those of size $8$ and below:
\begin{small}
\begin{align*}
\Phi_{4, 9} = \forall x \exists y \forall z \exists w(& \notag \\
&x < z < y \rightarrow (w \neq z \land x < w < y) ~~~\land \\
&x < y < z \rightarrow (w \neq z \land x < y < w) ~~~\land \\
&y < z < x \rightarrow (w \neq z \land y < w < x) ~~~\land \\
&z = x \rightarrow (x < w < y \vee y < w < x) ~~~\land \\
&z = y \rightarrow (x < y < w \vee w < y < x)).
\end{align*}
\end{small}
The sentence $\Phi_{4, 9}$ is constructed from the sentence $\Phi_{4}$ used in the proof of Lemma \ref{lemma:g_underline_of_4} by removing Condition~(\ref{cond4}) of $\Phi_4$. In words, $\Phi_{4, 9}$ states that ``for every $x$ either there are 5 elements after it or 4 elements before it''. In more detail, ``for every $x$ there is a $y$ such that if $y > x$ then there are two or more elements on each side of $y$, both of which are greater than $x$ and if $y < x$ then there are two or more elements between $y$ and $x$ and one element smaller than $y$''.
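As a sanity check, $\Phi_{4,9}$ can be evaluated on small linear orders by brute force. The sketch below (ours; the name \texttt{phi\_4\_9} is just illustrative) transcribes the five conjuncts literally; per the discussion above it should report precisely the sizes $9$ and larger within the tested range.
\begin{verbatim}
# Sketch: brute-force check of Phi_{4,9} on the linear order {0,...,n-1}.
# Each implication below transcribes one conjunct of the displayed sentence.
def phi_4_9(n):
    elems = range(n)
    def body(x, y, z, w):
        return (((not (x < z < y)) or (w != z and x < w < y)) and
                ((not (x < y < z)) or (w != z and x < y < w)) and
                ((not (y < z < x)) or (w != z and y < w < x)) and
                (z != x or (x < w < y or y < w < x)) and
                (z != y or (x < y < w or w < y < x)))
    return all(any(all(any(body(x, y, z, w) for w in elems)
                       for z in elems)
                   for y in elems)
               for x in elems)

print([n for n in range(1, 13) if phi_4_9(n)])
\end{verbatim}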
It is similarly easy to construct $\Phi_{4, 8}$, $\Phi_{4, 7}$, $\Phi_{4, 6}$ and $\Phi_{4, 5}$ just by removing more conditions from $\Phi_{4}$. We present these sentences below for completeness.
\begin{small}
\begin{align*}
\Phi_{4, 8} = \forall x \exists y \forall z \exists w(& \notag \\
&x < y < z \rightarrow (w \neq z \land x < y < w) ~~~\land \\
&y < z < x \rightarrow (w \neq z \land y < w < x) ~~~\land \\
&z = x \rightarrow (x < w < y \vee y < w < x) ~~~\land \\
&z = y \rightarrow (x < y < w \vee w < y < x)).
\end{align*}
\end{small}
\begin{small}
\begin{align*}
\Phi_{4, 7} = \forall x \exists y \forall z \exists w(& \notag \\
&x < y < z \rightarrow (w \neq z \land x < y < w) ~~~\land \\
&z = x \rightarrow (x < w < y \vee y < w < x) ~~~\land \\
&z = y \rightarrow (x < y < w \vee w < y < x)).
\end{align*}
\end{small}
\begin{small}
\begin{align*}
\Phi_{4, 6} = \forall x \exists y \forall z \exists w(& \notag \\
&z = x \rightarrow (x < w < y \vee y < w < x) ~~~\land \\
&z = y \rightarrow (x < y < w \vee w < y < x)).
\end{align*}
\end{small}
\begin{small}
\begin{align*}
\Phi_{4, 5} = \forall x \exists y \forall z \exists w(& \notag \\
&z = x \rightarrow (x < w < y \vee y < w < x) ~~~\land \\
&z = y \rightarrow (x < y < w \vee y < x)).
\end{align*}
\end{small}
We have ``filled in the gaps'' for our base case of $r=4$ and we will now proceed by induction as we did in Theorem \ref{thm:main_lower}. A critical point is that all of these sentences for $\Phi_{4,k}$ for $4 < k \leq 9$ end with a universal and then an existential quantifier, meaning that Spoiler plays the next-to-last round on $L$ and the last round on $B$.
Suppose then that $g(r-1) \leq |L| < |B| < g(r)$ for $r > 4$. Spoiler adopts the same strategy as that described in the proof of Theorem \ref{thm:main_lower} for the $g(r)$ vs $g(r)-1$ game. One very minor nuance is that in the even $r$ case there is not always a central element for Spoiler to select for his 1st round move in $L$ since $|L|$ could be even. Similarly in the odd $r$ case there is not always a central element for Spoiler to select for his 1st round move in $B$ since $|B|$ could be even. It suffices, however, for Spoiler to play as close to the center as possible. As an example, the sentence $\Phi_{4, 9}$ corresponds to playing the left point among the two middle points in $L$ in the 9 versus 8 game. All other details of the argument are unchanged, including the fact that the last two rounds are played on $L$ and then $B$.
\end{proof}
Finally, we are able to prove the precise game theoretic analog of Theorem \ref{thm:g_for_game_play}.
\begin{thm} \label{thm:fund_thm_for_los}
Two linear orders, $B$ and $L$, with $|L| < |B|$, can be distinguished by a sentence with $r$ quantifiers iff $|L| < g(r)$.
\end{thm}
\begin{proof}
If direction: If $|L| < |B| < g(r)$ then $B$ and $L$ can be distinguished by Lemma \ref{lemma:gap_closer}. If $|L| < g(r) \leq |B|$ then $B$ and $L$ can be distinguished by the definition of $g$ (Definition \ref{def:g}).
Only if direction: If $g(r) \leq |L| < |B|$ then, by Corollary \ref{cor:main_upper}, $B$ and $L$ \textit{cannot} be distinguished.
\end{proof}
\section{Conclusions} \label{sec:conc}
We have studied multi-structural games, which generalize E-F games by being played over sets $\mathcal A$, $\mathcal B$ of structures rather than over a pair $A$, $B$ of individual structures.
Whereas E-F games can capture exactly the quantifier rank needed to describe a property, we showed that multi-structural games can capture exactly the number of quantifiers needed to describe a property.
As a first application, we used them to determine the number of quantifiers needed to distinguish between linear orders of different sizes.
The quantifier count is a natural complexity measure but has received scant attention compared to the quantifier rank, number of distinct variable names, and measures of size and depth of the sentence body.
We expect complexity differences to be magnified in studying other structures beyond linear orders, such as higher-dimensional lattices, rooted trees, and other classes of graphs. As with the related ideas of Lotfallah \cite{Lotfallah04}, MS games extend readily to second-order logic where they may bear on higher-order problems in descriptive complexity. For example, MS games when adapted to second order logic can easily simulate Ajtai-Fagin games.
\section*{Acknowledgments}
We thank Neil Immerman and Leonid Libkin for helpful discussions.
\bibliographystyle{alphaurl}
\bibliography{ref}
\appendix
\section{A Proof of Theorem \ref{thm:f}} \label{app:f}
\begin{proof}
Note for future use that $f(1) = 1$ and $f(r) = 2f(r-1) +1$ for $r>1$.
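(For orientation: solving this recurrence gives the closed form $f(r) = 2^{r} - 1$, i.e., the values $1, 3, 7, 15, \ldots$; we do not use this explicitly below.)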
The statement is clearly true for $r=1$. Assume inductively that it is true for $r-1$. Let us refer to the bigger linear order as $B$ (for ``big'') and the smaller linear order as $L$ (for ``little'').
We first show that if the size of $L$ is less than $f(r)$, then Spoiler wins the $r$-round game.
There are two possibilities, depending on whether the size of $B$ is odd or even. Assume first that it is odd, say of size $2k+1$. So the size of $L$ is at most $2k$, and since the size of $L$ is less than $f(r)$, it follows that $2k < f(r) = 2f(r-1) +1$, so $k \leq f(r-1)$. In the first round, Spoiler selects the middle point of $B$, call it $s_1$. There are then $k$ points to the left and $k$ to the right of $s_1$ in $B$. After Duplicator selects a point $d_{1}$ in $L$, there will be either fewer than $k$ points to the left of $d_{1}$ in $L$ or fewer than $k$ points to the right of $d_{1}$ in $L$.
In the former case, Spoiler now makes all of his moves to the left in either structure (forcing Duplicator to do the same in the other structure), and in the latter case (when there are fewer than $k$ points to the right of $d_{1}$ in $L$), Spoiler makes all of his moves to the right in either structure (forcing Duplicator to do the same in the other structure). Spoiler now wins by the inductive assumption, since $k \leq f(r-1)$, and Spoiler has turned this into a game with fewer than $k$ elements in the smaller linear order.
Assume now that the size of $B$ is even, say of size $2k$. So the size of $L$ is at most $2k-1$, and since the size of $L$ is less than $f(r)$, we have
\[
2k-1 < f(r) = 2f(r-1) + 1,
\]
so $2k - 2 < 2f(r-1)$,
and so $k-1 < f(r-1)$.
In the first round, Spoiler selects the
$k$th point of $B$, call it $s_1$. There are
then $k-1$ points to the left and $k$ to the right of $s_1$ in $B$. Duplicator now selects a point $d_1$ in $L$. If $d_1$ is within the first $k-1$ points in $L$, then
Spoiler makes all remaining moves to the left in both structures, since there are $k-1$ points to the left of $s_1$, fewer than $k-1$ points to the left of $d_1$, and $k-1 < f(r-1)$, so Spoiler wins by inductive hypothesis. If $d_1$ is not within the first $k-1$ points in $L$, then there are at most $k-1$ points to the right of $d_1$. But there are $k$ points to the right of $s_1$. Spoiler makes all remaining moves to the right in both structures, and wins by inductive hypothesis since $k-1< f(r-1)$.
We now show that if the size of $L$ is at least $f(r)$, then Duplicator wins the $r$-round game.
If Spoiler selects $s_1$ within the first $f(r-1)+1$ points in $B$ or $L$, then Duplicator selects $d_1$ in the other structure with $d_1$ = $s_1$. There are now at least $f(r-1)$ points to the right of the first move in $L$, and more points than that to the right of the first move in $B$. Since Duplicator can simply mimic Spoiler’s choices when Spoiler moves to the left of the first point chosen, Duplicator wins by the inductive hypothesis in considering moves on the right.
Now assume that Spoiler selects $s_1$ beyond the first $f(r-1)+1$ points in $B$ or $L$. There are two cases, depending on whether Spoiler moves in $B$ or $L$. Assume first that Spoiler moves in $B$, and selects point $s_1$, which is the $k$th point from the right of $B$. This splits into two subcases, depending on whether or not $k \leq f(r-1) + 1$. If $k \leq f(r-1) + 1$, then Duplicator selects $d_1$ in $L$ that is $k$ points from the right of $L$. There are then at least $f(r-1)$ points in $L$ to the left of $d_1$. Duplicator simply mimics the moves of Spoiler on points to the right of $s_1$ or $d_1$, and uses her winning strategy for points to the left of $s_1$ and $d_1$, where there is a winning strategy by inductive assumption, since there at least $f(r-1)$ to the left of $d_1$ and more than that to the left of $s_1$.
We now consider subcase 2, where $k > f(r-1) + 1$. Let $I$ (respectively, $I'$) be the closed interval in $B$ (respectively, $L$)
where the left endpoint is at position $f(r-1)+1$ from the left of $B$ (respectively, $L$), and whose right endpoint is at position
$f(r-1)+1$ from the right of $B$ (respectively, $L$). By assumption, the point $s_1$ in $B$ is inside of $I$. The interval $I'$ contains the point that is $f(r-1)+1$ from the left of $L$, and so is nonempty. Assume that the point $s_1$ in $B$ is $m$ from the left side of $I$ and $n$ from the right side of $I$. Since $L$ is smaller in size than $B$, it follows that $I'$ is smaller in size than $I$. So there is a point $d_1$ in $I'$ that is at most $m$ from the left side of $I'$, and at most $n$ from the right side of $I'$. Since there are at least $f(r-1)$ points to the left of $d_1$ in $L$ and at least $f(r-1)$ points to the right of $d_1$ in $L$, and since the number of points to the left (respectively, to the right) of $d_1$ in $L$ is at most the number of points to the left (respectively, to the right) of $s_1$ in $B$, Duplicator can win by making use of her winning strategy on moves to the left of $s_1$ or $d_1$, and of her winning strategy on moves to the right of $s_1$ or $d_1$.
Assume now that Spoiler selects point $s_1$ in $L$. By assumption, $s_1$ is beyond the first $f(r-1)+1$ points in $L$. Assume $s_1$ is the $k$th point from the right of $L$. Then Duplicator selects $d_1$ as the $k$th point from the right of $B$. On moves to the right of $d_1$ or $s_1$, Duplicator simply mimics Spoiler's moves, and on moves to the left of $d_1$ or $s_1$, Duplicator has a winning strategy by inductive assumption, since there are more than $f(r-1)$ points to the left of $s_1$ in $L$, and more than that to the left of $d_1$ in $B$. So again, Duplicator wins. This concludes the proof of the theorem.
\end{proof}
\section{Game based proof of Lemma~\ref{lemma:g_underline_of_4}}\label{sec:app-g_underline_of_4}
\begin{proof} [Proof of Lemma~\ref{lemma:g_underline_of_4} via Games]
We first consider the $10$ vs. $9$ base case. Spoiler will play first by playing the central element, $L(5)$, on the $L$ side. We presume WLOG that Duplicator plays all possible moves on the $B$ side. See Figure \ref{fig:10_vs_9_counter_move1} for an illustration.
\begin{figure} [ht]
\centerline{\scalebox{0.35}{\includegraphics{images/10_vs_9_counter_move1_v2.png}}}
\caption{First Spoiler plays $L(5)$ on the $L$ side. Duplicator responds by playing all possible moves on the $B$ side. When discussing a given $B$ board, given that there are $10$ elements, there will either be more elements to the right or more elements to the left of the 1st move played. We refer to the side that has more elements as the ``long side'' and the side with fewer elements as the ``short side.'' Thus, in the figure, on the board in which Duplicator has played $B(5)$, the long side is to the right and the short side is to the left.}
\label{fig:10_vs_9_counter_move1}
\end{figure}
Since there are $10$ elements on each board on the $B$ side, there will necessarily be more elements on one side or the other of any move on any particular board. Refer to the side that has more elements as the ``long side'' and the side with fewer elements as the ``short side.'' Spoiler will now make his 2nd round moves on $B$, playing as close as he can to the middle of the long side of every board. Since there are at least $5$ elements on the long side of every board, there will necessarily be at least two unplayed elements to either side of Spoiler's 2nd round move on the long side of each board. See Figure \ref{fig:10_vs_9_counter_move2_spoiler}.
\begin{figure} [ht]
\centerline{\scalebox{0.35}{\includegraphics{images/10_vs_9_counter_move2_spoiler_v2.png}}}
\caption{Spoiler's 2nd moves on the $B$ side (in blue), associated with each possible 1st round move on this side by Duplicator. The moves are all in the ``middle'' of the ``long sides.'' See the text for a description of these terms.}
\label{fig:10_vs_9_counter_move2_spoiler}
\end{figure}
Refer to these moves that have two unplayed elements to either side of them as ``middle moves.'' Duplicator then follows up, playing every possible move on the $L$ side.
The possible moves are depicted in blue in Figure \ref{fig:10_vs_9_counter_move2}. There is plainly no value to playing on top of the 1st round move here so we omit that option.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/10_vs_9_counter_move2_v2.png}}}
\caption{Possible 2nd moves by Duplicator (in blue) in response to Spoiler playing middle moves from the long side of all $B$ boards. The play-on-top move is omitted, as discussed in the text.}
\label{fig:10_vs_9_counter_move2}
\end{figure}
For his 3rd move, Spoiler will now play on $L$, replying to each possible play of Duplicator using the green moves in Figure \ref{fig:10_vs_9_counter_move3}.
\begin{figure} [ht]
\centerline{\scalebox{0.40}{\includegraphics{images/10_vs_9_counter_move3_v2.png}}}
\caption{3rd moves of Spoiler (in green) in response to each possible 2nd move of Duplicator (in blue).}
\label{fig:10_vs_9_counter_move3}
\end{figure}
In order to keep an isomorphism with the elements in boards 1 or 5 of Figure \ref{fig:10_vs_9_counter_move3}, Duplicator will have to play on top of her 1st move on $B$. Note that this move breaks the isomorphism with all other $L$ boards, except those like board 1 if the $B$ board had its long side on the right of the first move, and except those like board 5 if the $B$ board had its long side on the left of the first move. Spoiler is going to play his 4th moves from the $B$ boards. In case the second move on $B$ was right of the first move (in other words the long side was to the right), Spoiler will play to the right between the 1st and 2nd moves. Since there is no corresponding move on the 1st board, this move breaks the isomorphism with all $L$ boards. Analogously, if the 2nd move on $B$ was to the left of the 1st move, Spoiler will play to the left between the 1st and 2nd moves, again breaking the isomorphism with all boards. Since the arguments continue to be completely parallel, with plays on the top four boards of Figure \ref{fig:10_vs_9_counter_move3} corresponding to cases where the long side of $B$ was to the right, and play on boards 5 through 8 of Figure \ref{fig:10_vs_9_counter_move3} corresponding to cases where the long side of $B$ was to the left, we shall focus just on the top four boards. We have ruled out the case where Duplicator makes a 3rd round play that is on top of her 1st round play. To try to keep an isomorphism going with boards like board 4 in the Figure, Duplicator must play on top of her 2nd move. But in this case Spoiler plays to the right of his 2nd move and there is no analogous move on boards like board 4. Hence Duplicator cannot keep an isomorphism going with boards like board 4. Next, to keep an isomorphism going with boards like board 2 in Figure \ref{fig:10_vs_9_counter_move3}, Duplicator must play to the right between her 1st and 2nd moves. But then Spoiler plays a second time to the right between his 1st and 2nd moves, in the at least one additional unplayed element and Duplicator cannot reply in kind on the 2nd board. An attempt to maintain the isomorphism on boards like board 3 in the Figure is seen to be fruitless with an analogous argument. Duplicator must play to the right of the 2nd move on $B$ boards that have their long side to the right. Spoiler then picks a second unplayed element to the right of the 2nd move played on these boards and there is no analogous move on boards like the 3rd board on the $L$ side. It is thus impossible to maintain an isomorphism with any of the boards in Figure \ref{fig:10_vs_9_counter_move3} and hence Spoiler wins this 10 versus 9 game.
Suppose now that $|L| < 9$. On his first move Spoiler will play a move that is as close to the center of $L$ as possible. Duplicator will then just have fewer possible moves in the 2nd round when she plays on $L$ because $|L|$ is smaller than in the prior analysis. Spoiler still exploits the fact that there are two unplayed elements both to the left and right of the 2nd round moves on the same side of the 1st round moves on $B$ but not on $L$. If, for a 2nd round move, Duplicator plays immediately to the right or immediately to the left of Spoiler's round $1$ move, Spoiler will, on his 3rd round move, play on $L$ on top of his 1st move (just as before). If Duplicator plays an end move, which is not simultaneously an immediate neighbor of Spoiler's 1st move, then Spoiler will play on top of that move (again, as before). On the other hand, if Duplicator plays a move that is two to the right (alternatively, two to the left) of the 1st round move, and the move is not also an end move, then Spoiler will play immediately to the right (alternatively, immediately to the left) of center. Analogously, a Duplicator move that is a 2nd-from-end element and not covered in any other case, will be responded to with an end move on the same side. In all cases the analysis is exactly as before and results in a Spoiler victory. Finally, if $|B| > 10$, the analysis is not particularly different than what we've seen; there is just a bit more ``room'' when picking Spoiler's 2nd round moves, which again must leave at least two unplayed elements both to the right and left. All other aspects of the analysis are unchanged. The lemma is therefore established.
\end{proof}
It is worth noting that Spoiler's ability to play on top of existing moves was essential to the above Spoiler-winning strategy. Suppose such a move were prohibited. Consider the situation after Duplicator's 2nd round moves -- see Figure \ref{fig:10_vs_9_counter_move2}. Consider just the top $4$ boards in this figure and suppose for the moment that our 3rd round moves were constrained to be on $L(6)$--$L(9)$. By the analysis showing that the $5$ vs. $4$ game is Duplicator-winning (proof of Lemma \ref{lemma:g_upper_of_3}), recall that playing two 3rd round moves with both moves either to the left of the 2nd round (blue) moves or to the right of the 2nd round moves would lead to a Duplicator victory. Thus, under the assumption that Spoiler does not play on top of an existing move, he would have to play moves on two boards in the $L(1)$--$L(4)$ range. But then, for a 3rd round move, Duplicator could mimic any one of these Spoiler moves on the $4$th $B$ board (e.g., playing $B(i)$ if Spoiler played $L(i)$). It is then evident that any 4th round move on this 4th $B$ board would be fruitless, while if Spoiler makes his 4th round moves on $L$, one of the moves on the two boards in which Spoiler previously played on $L(1)$--$L(4)$ will allow for a Duplicator victory with respect to the same $4$th $B$ board.
It is worth calling out this last fact explicitly:
\begin{observation} \label{obs:play-on-top} If Spoiler were not able to play on top of existing moves, the MS game on linear orders of size $10$ vs. linear orders of size $9$ would be winnable by Duplicator.
\end{observation}
\end{document}
\begin{document}
\title{Single- and two-mode quantumness at a beam splitter}
\author{Matteo Brunelli}\email{[email protected]}
\affiliation{Centre for Theoretical Atomic, Molecular and Optical Physics,
School of Mathematics and Physics, Queen's University Belfast,
Belfast BT7\,1NN, United Kingdom }
\author{Claudia Benedetti}\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di
Milano, I-20133 Milano, Italy}
\author{Stefano Olivares}\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di
Milano, I-20133 Milano, Italy }
\affiliation{CNISM UdR Milano Statale, I-20133 Milano, Italy}
\author{Alessandro Ferraro}\email{[email protected]}
\affiliation{Centre for Theoretical Atomic, Molecular and Optical Physics,
School of Mathematics and Physics, Queen's University Belfast,
Belfast BT7\,1NN, United Kingdom}
\author{Matteo G.~A.~Paris}\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di
Milano, I-20133 Milano, Italy}
\affiliation{CNISM UdR Milano Statale, I-20133 Milano, Italy}
\date{\today}
\begin{abstract}
In the context of bipartite bosonic systems, two notions of classicality
of correlations can be defined: $P$-classicality, based on the
properties of the Glauber-Sudarshan $P$-function; and $C$-classicality,
based on the entropic quantum discord. It has been shown that these two
notions are maximally inequivalent in a static (metric) sense ---
as they coincide only on a set of states of zero measure. We extend and
reinforce quantitatively this inequivalence by addressing the dynamical
relation between these types of non-classicality in a paradigmatic
quantum-optical setting: the linear mixing at a beam splitter of a
single-mode Gaussian state with a thermal reference state. Specifically,
we show that almost all $P$-classical input states generate outputs that
are not $C$-classical. Indeed, for the case of zero thermal reference
photons, the more $P$-classical resources at the input the less
$C$-classicality at the output. In addition, we show that the
$P$-classicality at the input --- as quantified by the non-classical
depth --- does instead determine quantitatively the potential of
generating output entanglement. This endows the non-classical depth with
a new operational interpretation: it gives the maximum number of thermal
reference photons that can be mixed at a beam splitter without
destroying the output entanglement.
\end{abstract}
\pacs{03.67.Mn, 42.50.Dv}
\maketitle
\section{Introduction}\label{s:intro}
Since the early days of quantum mechanics considerable efforts have
been spent in establishing whether a given physical system possesses
genuinely quantum features.
As far as bosonic systems are
concerned, Wigner first attacked this problem introducing a quantum
analogue to the classical phase-space \cite{Wig}. Later on, a systematic
approach was finally developed in the framework of quantum optics, with
the introduction of various classes of quasi-probability distributions
defined over the quantum phase-space. Specifically, the analytical
features of such distributions unveil physical constraints: whenever the
normally-ordered distribution function --- called Glauber-Sudarshan
$P$-function \cite{gl1,sud1}--- behaves like a regular probability
distribution, the corresponding state can be described as a statistical
ensemble of classical fields and, in this sense, it cannot show any
non-classical feature \cite{mandel}. In the following, these states will
be referred to as $P$-classical states.
\par
On the other hand, the more recent development of quantum information
theory promoted a reconsideration of the quantumness of physical systems
from an information-theoretical perspective. Since quantum systems can
be correlated in ways unaccessible to classical ones, the discrimination
between classical and nonclassical states of a given system is pursued
by studying the nature of the correlations among its subparts. In
particular, quantum entanglement accounts for quantum correlations that
may lead to the violation of local realism \cite{HHHH}. Moreover, even
separable (\textit{i.e.}, non-entangled) states have been recognized to
retain non-classical features, leading to the introduction of an
entropic measure, called quantum discord, to capture the quantum
features of correlations beyond entanglement \cite{Modi}. Following this
criterion, classical states can be defined as states with vanishing
discord, and we will refer to this notion as $C$-classicality.
\par
Although both acceptable and well-grounded, the two foregoing notions of
classicality have been shown to be radically different, indeed maximally
inequivalent, in the following sense: only a zero-measure set of states
is classical according to both criteria \cite{PvsI}. Besides embodying a
matter of fundamental interest, this conclusion is also relevant for
practical purposes, since it enlightens different resources in quantum
information processing \cite{C+:14}. However, such a characterization is based on
purely geometrical considerations and, as a consequence, it is
intrinsically static. In particular, the relation between $C$- and
$P$-classicality in common physical processes remains unclear. In
addition, a quantitative comparison between these two notions in terms
of their respective figures of merit is still lacking.
\par
In this work, we address the above issues in the context of quantum
optics, whose description in terms of the phase-space offers a natural
framework to develop a quantitative analysis \cite{cah69}. A
paradigmatic setting in quantum optics is constituted by Gaussian states
and operations, due to their relevance for quantum technologies and
their thorough theoretical characterization \cite{
oli:rev,GS3,FOP:05}. Specifically, we address
the dynamical relation of $P$- and $C$-nonclassical states arising from
the linear mixing of Gaussian states at a beam splitter. In this
setting, the availability of analytical expressions to quantify Gaussian
$P$-classicality --- in terms of the \textit{non-classical depth} ---
and Gaussian discord and entanglement, is crucial to work out a
quantitative comparison between the various notions of non-classicality.
\par
In particular, we consider the mixing of a generic Gaussian state with a
reference thermal state, and explore the relationships between the
classicality of the input state and the $P$- and $C$-classicality at the
output. Specifically, we show that almost all $P$-classical input states
generate output states that are not $C$-classical. Indeed, for the case
of zero thermal reference photons, the more $P$-classical resources at
the input the less $C$-classicality at the output. These findings
strengthen the inequivalence between $P$- and $C$-classicality by
quantitatively extending it to a process in which correlations are
dynamically generated, rather than statically analyzed as in
Ref.\cite{PvsI}. In addition, we show that the $P$-classicality at the
input does instead determine quantitatively the potential of generating
output entanglement. This endows the non-classical depth with a new
operational interpretation: it gives the maximum number of thermal
reference photons that can be mixed at a beam splitter without
destroying the output entanglement.
\par
The paper is structured as follows. In Section \ref{s:gauss} we give a
brief account on Gaussian states and their phase-space representation,
focusing on their bilinear interaction in linear optical devices. In
Section \ref{s:noncl} we review the two notions of nonclassicality and
establish the notation for Gaussian discord, non-classical depth and
entanglement used in the following. The reader familiar with the
foregoing topic can skip the respective sections. In Section
\ref{ss:vac} we analyze in detail the generation of $P$- and
$C$-nonclassicality by mixing of a Gaussian state with the vacuum,
whereas in Section \ref{ss:the} we focus attention on the mixing with a
thermal state, also introducing the concept of effective
nonclassicality. Section \ref{s:out} closes the paper with some
concluding remarks.
\section{Linear mixing of Gaussian states}\label{s:gauss}
\STErev{The simplest bilinear interaction involving two bosonic field
modes described by the annihilation operators $\hat{a}_1$ and $\hat{a}_2$
(with $[\hat{a}_k, \hat{a}_k^{\dagger}]=\hat{\mathbb{I}}$)
corresponds to the mode mixing and it is described by an effective
Hamiltonian of the form $H_I \propto (\hat{a}_1^{\dagger}\hat{a}_2+ \hat{a}_1 \hat{a}_2^{\dagger})$.
This kind of interaction is very common in different quantum systems,
ranging from optical modes in linear optical devices \cite{WM}
to collective modes in ultracold atoms \cite{meystre}, opto-
and nano-mechanical oscillators \cite{woolley:08,xiang:10,PZ:12,AKM:14} and
superconducting resonators \cite{W+:04,chirolli:10}. For the sake of clarity,
in this paper we focus on the quantum optics realm and we address the
correlation properties of the two optical modes emerging from a beam splitter
(BS) when the input ones are excited in Gaussian states.}
\par
Gaussian states (GSs) are states with Gaussian Wigner functions
\cite{schum:86} and exhaustive information about them is provided by
the knowledge of the first and second statistical moments of the
quadrature operators. Information about correlations is
contained in the second moment and from now on we set the first moments
to zero, without loss of generality.
\STErev{Upon introducing the quadrature operators
$\hat{q}=(\hat{a}+\hat{a}^{\dagger})/\sqrt2$ and
$\hat{p}=(\hat{a}-\hat{a}^{\dagger})/(i\sqrt2)$,} the covariance
matrix (CM) of a single-mode GS of $\hat{a}$ is defined as $\left[
\boldsymbol{\sigma}\right]_{k l}=\frac12 \langle \{R_k,R_l\}
\rangle-\langle R_k \rangle\langle R_l \rangle$, with
$k,l=1,2$, being $\mathbf{R}^T=(R_1,R_2) \equiv (\hat{q},\hat{p})$ the
vector of the quadratures and $\{ \cdot,\cdot \} $ the anticommutator.
Now, the canonical commutation relations take the form $[R_k,R_l]=i\,
\omega_{kl}$, where $\omega_{kl} =(1- \delta_{kl})(-1)^{l}$ are the
entries of the $2 \times 2$ symplectic form
$\boldsymbol{\omega}$.
The set of the eigenvalues $(q,p)\in\mathbb{R}^2$ of the position and
momentum-like operators, endowed with the symplectic form
$\boldsymbol{\omega}$, spans the real symplectic space
$\Gamma=(\mathbb{R}^2,\boldsymbol{\omega})$,
which is referred to as the phase space of the mode $\hat{a}_1$.
A single-mode GS may always be written as
\begin{equation}
\label{gen1G}
\varrho_1(n_t,n_s)=S(r) \nu(n_t) S^\dagger(r)\,,
\end{equation}
where
$S(r)=\exp \left[\frac12 \left(ra^{\dagger 2}
-r^*a^2\right)\right]$, $r \in \mathbb{C}$,
is the squeezing operator and
$$\nu(n_t)=(n_t + 1)^{-1}\left[ n_t/(n_t+1) \right]^{a^{\dagger} a}\,,$$
is a thermal state with average photon number $n_t$;
the quantity $n_s=\sinh^2 |r|$ will be referred to as the
number of squeezed photons.
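As a side remark (our own rewriting, for real $r\ge 0$), the covariance matrix of the state (\ref{gen1G}) is
\begin{equation*}
\boldsymbol{\sigma}_{\varrho_1}=\left(n_t+\frac12\right)\mathrm{diag}\left(e^{2r},e^{-2r}\right),
\end{equation*}
whose two eigenvalues reproduce the entries $\frac12+n_1\pm\Delta$ of Eq.~(\ref{sigmarho}) below once $n_s=\sinh^2 r$ is used.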
\par
\STErev{By choosing a suitable rotating frame, the lossless
BS exchange interaction is described by the unitary evolution
$U_\tau=\exp [ \theta (\hat{a}^{\dagger}_1\hat{a}_2
- \hat{a}_1 \hat{a}^{\dagger}_2)]$,
with $\theta \in \mathbb{R}$ and where $\tau=\cos^2\theta$ denotes
the transmissivity of the BS. If $\tau=1/2$, the BS is said to be balanced.
Being a bilinear interaction of modes,} this evolution preserves the
Gaussian character of the state, and in turn induces a symplectic
transformation $\mathsf{S}_{\tau}$ in the quantum phase space of the
composite system, namely
\begin{equation}\label{sympl}
\mathsf{S}_{\tau}=
\left(
\begin{array}{cc}
\sqrt{\tau } \, \mathbb{I} & \sqrt{1-\tau } \, \mathbb{I} \\[1ex]
-\sqrt{1-\tau } \, \mathbb{I} & \sqrt{\tau} \, \mathbb{I}
\end{array}
\right) \, ,
\end{equation}
where $\mathbb{I}=\hbox{diag}(1,1)$.
Given two uncorrelated single-mode GSs, with CMs $\boldsymbol{\sigma}_1$ and
$\boldsymbol{\sigma}_2$, respectively,
the symplectic transformation $\mathsf{S}_{\tau}$, acting by congruence
on the initial CM $\boldsymbol{\Sigma_0}= \boldsymbol{\sigma}_1 \oplus
\boldsymbol{\sigma}_2$, leads to the
evolved CM $\boldsymbol{\Sigma}=\mathsf{S}_{\tau}\, \boldsymbol{\Sigma_0}\,
\mathsf{S}^T_{\tau}$.
As mentioned above and schematically depicted in Fig. \ref{f:BS}, in this work
we will consider a bipartite quantum system of modes $\hat{a}_1$ and $\hat{a}_2$. The mode
$\hat{a}_1$ is initially in the zero-mean GS $\varrho=\varrho(n_s,n_t)$ while mode $\hat{a}_2$ is
in a thermal state $\nu=\nu(n_2)$. Without loss of generality we will assume a real
squeezing parameter $r \in \mathbb{R}$. Using this parametrization, the average
number of photons in the first mode $\langle\hat{a}_1^{\dagger}\hat{a}_1\rangle_{\varrho} \equiv
\hbox{Tr}[\hat{a}_1^{\dag}\hat{a}_1\,\varrho(n_s,n_t)]$ explicitly
reads:
\begin{equation}\label{toten}
\langle\hat{a}_1^{\dagger}\hat{a}_1\rangle_{\varrho} =
n_t + (1+ 2 n_t) n_s \equiv n_1.
\end{equation}
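For orientation, with e.g. $n_t=n_s=1$, Eq.~(\ref{toten}) gives $n_1 = 1 + 3\cdot 1 = 4$: the squeezing contribution to the energy is enhanced by the thermal factor $(1+2n_t)$.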
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{f1_bsfig.pdf}
\caption{(Color online)
Linear mixing of Gaussian states. The two input modes $\hat{a}_1$ and
$\hat{a}_2$, initially excited in the zero-mean Gaussian state
$\varrho=\varrho(n_s,n_t)$ and in the thermal state $\nu=\nu(n_2)$
respectively, enter a beam splitter of transmissivity $\tau$, after
which quantum correlations are eventually established.\label{f:BS}}
\end{figure}
\par
In the phase space, the GSs $\varrho$ and $\nu$ are represented
by the $2\times 2$ CMs:
\begin{equation}\label{sigmarho}
\boldsymbol{\sigma}_{\varrho}=
\mathrm{diag}{\left(\frac12 + n_1 + \Delta, \frac12 + n_1 - \Delta
\right)} \, , \end{equation}
and
\begin{equation}\label{sigmanu}
\boldsymbol{\sigma}_{\nu}=
\left(\frac12 + n_2 \right) \mathbb{I} \, ,
\end{equation}
respectively, where $\Delta=(1+2 n_t) \sqrt{n_s (1+ n_s)}$. Since the
initial state $R_0$ of the bipartite system is chosen to be factorized,
namely $R_0=\varrho\otimes\nu$, the total number of excitations is given
by $\langle\hat{a}^{\dagger}_1\hat{a}_1
+\hat{a}^{\dagger}_2\hat{a}_2\rangle_{R_0}=n_1+n_2\equiv
N \, .$ \STErev{Now we let the state $R_0$ evolve through a lossless BS
of transmissivity $\tau$ using the symplectic transformation
$\mathsf{S}_{\tau}$ given by Eq.~(\ref{sympl}). The $4 \times 4$ CM of
the two-mode output state $R = U_\tau R_0 U_\tau^{\dag}$ is
$\boldsymbol{\Sigma}=\mathsf{S}_{\tau} (\boldsymbol{\sigma}_{\varrho}
\oplus
\boldsymbol{\sigma}_{\nu}) \mathsf{S}^T_{\tau}$ and it reads:}
\begin{equation}\label{sigmamatrix}
\boldsymbol{\Sigma}=
\left(
\begin{array}{cccc}
a_+ & 0 & c_+ & 0 \\
0 & a_- & 0 & c_- \\
c_+ & 0 & b_+ & 0 \\
0 & c_- & 0 & b_-
\end{array}
\right) \, ,
\end{equation}
where:
\STErev{
\begin{subequations}\label{sigmaeq}
\begin{align}
a_{\pm} &=\left(\frac{1}{2}+n_2\right)
(1-\tau )+\left(\frac{1}{2}+ n_1\pm \Delta \right)\tau \, ,
\\[1ex]
b_{\pm} &=\left(\frac{1}{2}+n_2\right)
\tau+\left(\frac{1}{2}+n_1\pm \Delta\right) (1-\tau ) \, ,
\\[1ex]
c_{\pm} &=\left[\left(\frac{1}{2}+n_2\right)-
\left(\frac{1}{2}+n_1\pm \Delta \right)\right]
\sqrt{\tau(1-\tau)} \,.
\end{align}
\end{subequations}
}
It is useful to introduce the following local symplectic invariants,
which in this case are given by $I_1=a_+a_-, I_2=b_+b_-, I_3=c_+c_-$ and
$I_4=\det\boldsymbol{\Sigma}$. Via symplectic
diagonalization $\boldsymbol{\Sigma}$ can be cast into the diagonal
form $\mathrm{diag}(\lambda_+,\lambda_+,\lambda_-,\lambda_-)$ where the
expression of the
symplectic eigenvalues is given by \cite{ser:04}:
\begin{equation}
\lambda_{\pm}=\sqrt{\frac{I_1+I_2+2I_3\pm\sqrt{(I_1+I_2+2I_3)^2-4I_4}}{2}}\, .
\end{equation}
\STErev{Positivity of $\varrho_{AB}$ requires $\lambda_{-}\ge 1/2$.}
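The following short numerical sketch (ours; all names are merely illustrative) assembles the entries of Eqs.~(\ref{sigmaeq}), the local invariants and the symplectic eigenvalues, and can be used to check the condition $\lambda_-\ge 1/2$ for any choice of $n_s$, $n_t$, $n_2$ and $\tau$.
\begin{verbatim}
# Sketch: CM entries of Eqs. (sigmaeq), local invariants and symplectic
# eigenvalues of the two-mode output state (illustrative code only).
from math import sqrt

def output_cm_data(n_s, n_t, n_2, tau):
    n_1 = n_t + (1 + 2 * n_t) * n_s                    # Eq. (toten)
    delta = (1 + 2 * n_t) * sqrt(n_s * (1 + n_s))
    a = {s: (0.5 + n_2) * (1 - tau) + (0.5 + n_1 + s * delta) * tau
         for s in (1, -1)}
    b = {s: (0.5 + n_2) * tau + (0.5 + n_1 + s * delta) * (1 - tau)
         for s in (1, -1)}
    c = {s: ((0.5 + n_2) - (0.5 + n_1 + s * delta)) * sqrt(tau * (1 - tau))
         for s in (1, -1)}
    I1, I2, I3 = a[1] * a[-1], b[1] * b[-1], c[1] * c[-1]
    I4 = (a[1] * b[1] - c[1] ** 2) * (a[-1] * b[-1] - c[-1] ** 2)  # det Sigma
    lam = [sqrt((I1 + I2 + 2 * I3
                 + sgn * sqrt((I1 + I2 + 2 * I3) ** 2 - 4 * I4)) / 2)
           for sgn in (1, -1)]
    return (I1, I2, I3, I4), lam    # lam = [lambda_plus, lambda_minus]

invariants, (lam_p, lam_m) = output_cm_data(n_s=1.0, n_t=0.2, n_2=0.5, tau=0.5)
print(invariants, lam_p, lam_m)     # physical states satisfy lam_m >= 0.5
\end{verbatim}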
\section{Nonclassicality for bosonic systems}\label{s:noncl}
Let us now review in some details the concepts of non-classicality we
are going to consider, together with their respective figures of merit.
The reader familiar with those concepts can skip this Section.
\subsection{Nonclassicality in the phase space: P-classicality}
Any bipartite bosonic
state described by the density matrix $\varrho_{AB}$ can always be
expanded in terms of coherent states as follows:
\begin{equation}\label{pfunction}
\varrho_{AB}=
\int_{\mathbb{C}}d^2\alpha\int_{\mathbb{C}}
d^2\beta\,P(\alpha,\beta)\ket{\alpha}\bra{\alpha}
\otimes\ket{\beta}\bra{\beta} \, ,
\end{equation}
where $\ket{\alpha}$ and $\ket{\beta}$ are coherent states of the two
modes and $P(\alpha,\beta)$ is the Glauber-Sudarshan $P$-representation
of the state. $P(\alpha,\beta)$ provides a complete characterization of
the state. Equation (\ref{pfunction}) suggests that the state of the
electromagnetic field can be regarded as a mixture of coherent states,
weighted by $P(\alpha,\beta)$. However, in general the $P$-function
cannot be regarded as a probability density function. On the other hand,
when all the conditions for the $P$-function to be a probability density
are
satisfied, one can conclude that it describes a classical state of the
bosonic field, motivating the following definition:
\begin{Def}\label{P:def}
(\textit{$P$-classicality}) A state $\varrho_{AB}$
of a two-mode bosonic field is called $P$-classical if
$P(\alpha,\beta)$ is a regular and normalized positive function.
\end{Def}
\STErev{In the case of a single-mode state $\varrho$, we can introduce a
generalized $s$-ordered Wigner function which encompasses all the
quasi-probability distributions:}
\begin{equation}\label{sordered}
W_s(\alpha)=\int_{\mathbb{C}}{\frac{d^2
\lambda}{\pi^2}e^{\alpha \lambda^* - \alpha^*
\lambda+(s/2)|\lambda|^2}\tr{D(\lambda)\varrho}},
\end{equation}
where $D(\alpha)\equiv \exp(\alpha \hat{a}^{\dagger}- \alpha^*\hat{a})$
is the displacement operator. In the case of $s=-1,0,1$ one recovers the
Husimi, Wigner, and Glauber-Sudarshan functions, respectively. The
latter, more than any other representation, can depart from being a
well-behaved probability density. In order to understand this fact, let
us observe that the $s$-ordered Wigner function of a state is related to
the $P$-function ($s=1$) of the same state through a Gaussian
convolution, namely:
\begin{equation}\label{convP}
W_s(\alpha)=\frac{2}{\pi(1-s)}\int_{\mathbb{C}}d^2\beta
\, \exp\left\{ -\frac{2\vert\alpha-\beta\vert^2}{1-s}\right\}P(\beta) \, ,
\end{equation}
that can be seen as a smoothing operation. Given the $P$-function of the
state of interest, as the parameter $s$ moves towards $-1$, the
resulting distributions $W_s(\alpha)$ get smoother and smoother. Since
for $s=-1$ the Husimi $Q$-function is recovered, we obtain a continuous
interpolation between $P$- and $Q$-function, and we are guaranteed that
this smoothing operation always succeeds in giving a true probability
distribution.
\par
Based on this, it is possible to define a quantitative measure of
$P$-nonclassicality, that is the non-classical depth of a quantum state
\cite{lee,lee2}. To this aim it is useful to introduce the parameter
${\rm T}=(1-s)/2$. For ${\rm T}$ large enough, Eq.~(\ref{convP}) leads to a
$P$-classical state and the smoothing operation is referred to as
complete. If $\Omega$ denotes the set of all ${\rm T}$ which give a
complete smoothing of the
initial $P$-function, the non-classical depth is defined as
\begin{equation}\label{ncldepth}
{\rm T}_m=\inf_{{\rm T}\,\in\, \Omega}({\rm T})\,.
\end{equation}
The non-classical depth ranges from 0 for coherent states to 1 for
Fock states \cite{tak02}, whereas for a
single-mode GS $\varrho$ we have \cite{seraf:05}:
\begin{equation}
{\rm T}_m= \max\left[\frac{1}{2}\left(1-\frac{e^{-2r}}{\mu} \right),0\right] ,
\end{equation}
where $\mu=\hbox{Tr}[\varrho^2]$ is the purity of the state and $r$ is
the squeezing parameter introduced in Eq. (\ref{gen1G}).
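As a side remark (our own rewriting, for the real-squeezing parametrization of Eq.~(\ref{gen1G}) with $r\ge 0$): since the purity of a squeezed thermal state is $\mu=(1+2n_t)^{-1}$, the non-classical depth can equivalently be written as
\begin{equation*}
{\rm T}_m=\max\left[\frac12\left(1-(1+2n_t)\,e^{-2r}\right),0\right],
\end{equation*}
which is nonzero exactly when the squeezing overcomes the thermal broadening, $e^{2r}>1+2n_t$.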
\subsection{Nonclassicality of correlations: C-classicality}
The
total amount of correlations between two classical systems $A$ and $B$ is
quantified by the mutual information $I(A:B)=H(A)+H(B)-H(A,B)$ where
$H(X)=-\sum_ x p_X(x)\log p_X(x)$ is the Shannon entropy of the random
variable $X$. By exploiting the relation $p_{AB}(a,b)=p_A(a\vert b)p_B(b)$
one also gets the equivalent expression of the mutual information $I(A:B)=
H(A)-H(A\vert B)$ in terms of the conditional entropy $H(A\vert B)=-\sum_{a,b}
p_B(b)\, p_A(a\vert b)\log p_A(a\vert b)$.
The first expression of the mutual information
has an immediate extension to quantum systems, simply by replacing the
Shannon entropy with the Von Neumann entropy $S[\varrho]=-\tr{\varrho
\log \varrho}$, namely
$I_M(\varrho_{AB})=S(\varrho_A)+S(\varrho_B)-S(\varrho_{AB})$; if we
address GSs, it can be expressed as \cite{ser:04}
\begin{equation}
I_M(\varrho_{AB})=f\left(\sqrt{I_1}\right)+f\left(\sqrt{I_2}\right)
-f\left(\lambda_+\right)-f\left(\lambda_-\right),
\end{equation}
where $f(x)=(x+\frac12)\log(x+\frac12)-(x-\frac12)\log(x-\frac12)$. On
the other hand, the extension of the second expression to the quantum
realm involves a measurement on one of the two parties, say $B$,
described by the positive operation-valued measure (POVM)
$\{\Pi_k\}, \; \Pi_k\ge0,\; \sum_k \Pi_k = \hat{\mathbb{I}}$. The probability to
obtain the outcome $k$ is given, according to the Born rule, by
$p_k=\hbox{Tr}[\varrho_{AB}\, \hat{\mathbb{I}}\otimes\Pi_k]$ and the conditional
state of $A$ with respect to the outcome $k$ is
$\varrho^{\Pi_k}_{A|B}=(p_k)^{-1} \hbox{Tr}_{B}[\varrho_{AB}\,
\hat{\mathbb{I}}\otimes\Pi_k]$. The maximum amount of information we can gain on
the part $A$ by locally measuring the other part thus has the
non-trivial expression
\begin{equation}
\mathcal{C}_{A|B}(\varrho_{AB})=
\max_{\{\Pi_k\}}\left\{S\left(\varrho_A\right)
-\sum_k p_k S\left(\varrho^{\Pi_k}_{A|B}\right) \right\} \,,
\label{maxx}
\end{equation}
and involves an optimization procedure over the set of all measurements.
The quantum discord is properly defined as the difference between these
two quantities \cite{zurek}
\begin{equation}
\mathcal{D}_{A|B}(\varrho_{AB})=I_M(\varrho_{AB})-\mathcal{C}_{A|B}(\varrho_{AB})\,.
\end{equation}
We can thus conclude that a system shows some quantumness as soon as the
discord is different from zero, providing us with the following
criterion:
\begin{Def}\label{C:def}
\textit{($C$-classicality)} A state
$\varrho_{AB}$ of a two-mode bosonic field is called $C$-classical if
$\mathcal{D}_{A|B}(\varrho_{AB})=\mathcal{D}_{B|A}(\varrho_{AB})\equiv 0$.
\end{Def}
If we restrict to the subclass of GSs and Gaussian
measurements, an analytical expression of the quantum discord can be
derived, which is called Gaussian discord, and is given by
\begin{equation}\label{discordgauss}
\mathcal{D}_{A|B}(\varrho_{AB})=
f\left(\sqrt{E^{\mathrm{min}}_{A|B}}\right)
+f\left(\sqrt{I_2}\right)-f\left(\lambda_+\right)-
f\left(\lambda_-\right) \, ,
\end{equation}
where $E^{\mathrm{min}}_{A|B}$ has an analytical expression as a
function of the local symplectic invariants $I_k$
\cite{gio10,ade10,gu12,mas12,bla12}. It is worth noting that
for a large class of Gaussian states \cite{gea14} the Gaussian
discord of Eq. (\ref{discordgauss}) coincides with the quantum discord,
i.e. the maximum in Eq. (\ref{maxx}) is achieved by a Gaussian
measurement.
\par
A stronger form of quantum correlations with respect to discord is given
by quantum entanglement. In the case of two-mode GSs
a necessary and sufficient condition can be derived to assess the
presence of entanglement \cite{sim:00}. It is essentially based on the
positivity of $\varrho_{AB}$ under partial transposition (PPT), that is
the positivity of the density matrix obtained by the transposition
applied only to one part of a system \cite{per:96}. One can show in fact
that the symplectic eigenvalues of the partially transposed states are
given by:
\begin{equation}
\tilde{\lambda}_{\pm}=\sqrt{\frac{I_1+I_2-2I_3\pm
\sqrt{(I_1+I_2-2I_3)^2-4I_4}}{2}}\,,
\end{equation}
and a Gaussian state $\varrho_{AB}$ is entangled if and only if $\tilde{\lambda}_- < 1/2$. In fact,
a measure of entanglement is given by the logarithmic negativity \cite{vid:02}, that is
$E(\boldsymbol{\sigma})=\mathrm{max}\left[-\log(2\tilde{\lambda}_-),0\right]$.
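For instance, for a two-mode squeezed vacuum state with squeezing parameter $r$ (in the present convention, where the vacuum CM is $\frac12\mathbb{I}$) one has $\tilde{\lambda}_-=\frac12 e^{-2r}$, so that the state is entangled for every $r>0$ and, assuming natural logarithms, $E(\boldsymbol{\sigma})=2r$.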
\section{Non-classicality arising from mixing a Gaussian state
with the vacuum}
\label{ss:vac}
We start considering the case in which the reference input state
of mode $\hat{a}_2$ is the vacuum, i.e. $n_2=0$ and, in particular
$\boldsymbol{\sigma}_{\nu}\rightarrow \boldsymbol{\sigma}_0\equiv
\frac12 \mathbb{I}$ and $n_1 = N$.
\subsection{P-classicality}
The initial state of the system $R_0$ is clearly $C$-classical. On the
contrary, $P$-nonclassicality has to be addressed both in the input and
output channels, in order to see whether differences arise. The
Glauber-Sudarshan $P$-representation of $R_0$ is given by
\begin{equation}\label{R0}
R_0=\int_{\mathbb{C}}d^2\alpha\int_{\mathbb{C}}d^2\beta\,
P_{\varrho}(\alpha)\,P_{0}(\beta)
\ket{\alpha}\bra{\alpha}\otimes\ket{\beta}\bra{\beta} \, ,
\end{equation}
where the $P$-function $P_{R_0}$ factorizes into the product of
$P_{\varrho}$ and $P_{0}$, the latter being the Glauber-Sudarshan
P-function of the vacuum. Moreover, since $P_{0}$ is a well-behaved
probability density, any possible $P$-nonclassical feature of the input
state is due to the pathological behavior of $P_{\varrho}$ alone, and is
quantified by the nonclassical depth of Eq. (\ref{ncldepth}). Since the
action of the BS evolution on the two-mode displacement operator is
$U_\tau\,D_a(\alpha)\otimes D_b(\beta)\,
U_\tau^{\dagger}=D_a(\sqrt{\tau}\alpha+\sqrt{1-\tau}\beta)
D_b(\sqrt{\tau}\beta-\sqrt{1-\tau}\alpha)$, which amounts to a rotation
of the arguments, one obtains the following $P$-representation of $R$:
\begin{align}\label{Rrotate}
P_{R} (\alpha,\beta)=
P_{\varrho}(\sqrt{\tau}\alpha-\sqrt{1-\tau}\beta)\,
P_{0}(\sqrt{1-\tau}\alpha+\sqrt{\tau}\beta)\, .
\end{align}
It is apparent that the effect of the evolution only amounts to a
re-parametrization of the argument, that does not affect the functional
form of both $P_{\varrho}$ and $P_{0}$. Thus we can conclude that the
output state $R$ is $P$-nonclassical if and only if $\varrho$ is
$P$-nonclassical, that is, for our configuration \textit{the two-mode
$P$-nonclassicality of the output equals single-mode $P-$nonclassicality
of the input}. Let us remark, for the sake of
Sec.\ref{ss:the}, that the foregoing argument applies as well for the
case of mixing with a reference thermal state with positive temperature,
given that the P-function of a general thermal state is a well-behaved
probability density.
The non-classical depth in Eq.~(\ref{ncldepth})
relative to the mode $\hat{a}_1$, can be expressed as:
\begin{equation}\label{NCdepthmax}
{\rm T}_m=\mathrm{max}\left[\frac{1-2u}{2},0\right] \, ,
\end{equation}
where $u=\frac{1}{2}+n_1-\Delta$ is the minimum eigenvalue of the CM
$\boldsymbol{\sigma}_{\varrho}$, as is apparent from
Eq.~(\ref{sigmarho}). The condition ${\rm T}_m=0$ singles out a
$P$-classicality threshold, which can be made explicit either as a
function of the thermal component $n_t$, hence having
\begin{equation}\label{NsNCl}
n_s^{\mathsf{P}}=\frac{n_t^2}{1+2 n_t} \, ,
\end{equation}
or of the squeezed ones
\begin{equation}\label{NthNCl}
n_t^{\mathsf{P}}=n_s+\sqrt{n_s(1+ n_s)} \, .
\end{equation}
Whenever the average number of squeezed photons exceeds
$n_s^{\mathsf{P}}$, or the thermal component falls below
$n_t^{\mathsf{P}}$, the state $\varrho$ turns out to be $P$-nonclassical.
From now on, unless otherwise stated, $n_s^{\mathsf{P}}$ shall be
employed as the \textit{$P$-classicality threshold}.
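Equation (\ref{NsNCl}) can be recovered directly from the condition ${\rm T}_m=0$, i.e. $u=\frac12$, assuming the standard parametrization of the squeezed thermal state in terms of the squeezing parameter $r$, with $n_s=\sinh^2 r$: the minimum eigenvalue of the CM then reads $u=(n_t+\frac12)\,e^{-2r}$, so that $u=\frac12$ gives $e^{2r}=1+2n_t$ and hence $n_s=\frac{\cosh 2r-1}{2}=\frac{n_t^2}{1+2n_t}$. As a numerical illustration, for $n_t=1$ one finds $n_s^{\mathsf{P}}=1/3$, consistently with $n_t^{\mathsf{P}}=1$ obtained from Eq.~(\ref{NthNCl}) at $n_s=1/3$.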
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{f2_ncthr.pdf}
\caption{(Color online)
Nonclassicality by mixing with the vacuum:
Plot of the implicit function T$_m=0$, which defines the
$P$-classicality threshold. It takes the explicit expression of Eq.
(\ref{NsNCl}) as a function of $n_t$, while as a function of $n_s$ it is
given by Eq. (\ref{NthNCl}). In this case the non-classicality threshold
$n_s^{\mathsf{P}}$ and the separability threshold $n_s^{\mathsf{sep}}$
coincide. The black lines are curves of fixed total energy $N=n_s
+n_t+2n_sn_t$ : dashed for odd values $N=2k+1\, ,\, k=0,1,\ldots$ and
dot-dashed for even values $N=2k ,\, k=1,2,\ldots$ of the total
energy.\label{f:NCthr}}
\end{figure}
\subsection{C-classicality and generation of Gaussian discord}
We now address general quantum correlations, and investigate the
generation of Gaussian discord. Since the parties involved are the output
modes $\hat{b}_1=U^{\dagger}\,\hat{a}_1\,U\,$ and
$\hat{b}_2=U^{\dagger}\,\hat{a}_2\,U\,$, we will refer to
$\mathcal{D}_{1|2}$ as the $b_1$-discord and to $\mathcal{D}_{2|1}$ as the
$b_2$-discord; when they coincide the symbol $\mathcal{D}$ will be
employed. In fact, this is the case for a balanced BS, namely
$\mathcal{D}_{1|2}(n_s,n_t,1/2)=\mathcal{D}_{2|1}(n_s,n_t,1/2) \quad
\forall \; n_s, \, n_t$.
\par
In Fig. \ref{f:Discord3D} we show a plot of the Gaussian discord as a
function of the squeezed and thermal component of the input state
$\varrho$, for the balanced case $\tau=1/2$. Except for the trivial
case of a vacuum input state $\varrho=\ket{0}\bra{0}$, it is apparent
that the discord is always positive and therefore, \textit{contrary to
the case of $P$-classicality, there is no $C$-classicality threshold}:
whatever the input state, a balanced BS is capable of generating quantum
correlations. This is also in agreement with the fact that typically
almost all states possess positive discord \cite{almostall}.
\begin{figure}[h!]
\includegraphics[width=0.9\columnwidth]{f3a_d3d.pdf}
\includegraphics[width=0.95\columnwidth]{f3b_dcvac.pdf}
\caption{(Color online)
Nonclassicality by mixing with the vacuum: In the upper panel
we show the discord $\mathcal{D}$ as a function of the number of
squeezed and thermal photons $n_s$, $n_t$ for a balanced BS
$\tau=1/2$. The dotted red line corresponds to the case of
a squeezed vacuum state entering the beam splitter. The solid
blue line corresponds to a thermal input
state, while the dashed black curve is the discord at the
$P$-classicality threshold. Finally the dashed magenta curve points out
the minimum value of the discord, obtained via numerical minimization.
The lower left panel shows the discord as a
function of the total energy $N$; solid blue for thermal state, dashed
red for squeezed vacuum, black dot-dashed for $P$-classicality
threshold. The right panel shows the classical correlations for
the same input states and with the same color codes.
\label{f:Discord3D}}
\end{figure}
\par
From Fig.~\ref{f:Discord3D} a quantitatively relevant feature emerges.
Considering input states below the $P$-classicality threshold (denoted
by the dashed black curve), one can observe that the output discord and
hence the \textit{$C$-nonclassicality increases as the input
non-classical resources decrease}. This is true regardless of the
constraints that one considers: either moving along the curves at
constant $n_s$, $n_t$, or total energy $N$, the discord increases as
$n_s$ decreases or $n_t$ increases. This is a quantitative feature that
strikingly confirms --- together with the absence of a $C$-classicality
threshold --- the inequivalence between the two notions of classicality
considered here.
\par
In Fig. \ref{f:Discord3D} the Gaussian discord corresponding to three
families of input states has been highlighted: the discord generated by
a thermal input state $\mathcal{D}^{\,\mathrm{th}}=\mathcal{D}(0,n_t)$
corresponds to the solid blue curve; the dotted red line is obtained
when the input state is the squeezed vacuum state, i.e.
$\mathcal{D}^{\,\mathrm{sq}} =\mathcal{D}(n_s,0)$; and the black dashed
line represents the value of the discord at the $P$-classicality
threshold, i.e. $\mathcal{D}^{\,\mathsf{P}}=
\mathcal{D}(n_s^{\mathsf{P}},n_t)$. For
these limiting cases, analytical expressions of the discord in terms of
the total energy $N$ are available, even if quite cumbersome, and
hence not reported. Being functions of a single quantity, they are
suitable for comparison and have been plotted in the lower panel of
Fig.~\ref{f:Discord3D}, together with the relative values of the
classical correlations $C=I_M-\mathcal{D}$. Moreover, in Fig.
\ref{f:Discord3D} we also show, by a dashed magenta line, the curve
corresponding to the minimum value attained by the discord (for fixed $n_t$)
obtained via numerical minimization.
\par
From the left bottom panel in Fig.~\ref{f:Discord3D}, we can see that
the discord is a monotonically increasing function of the total energy
$N$. The discord saturates to a finite value both for a thermal
input state, for which we find
$$\lim_{N\rightarrow\infty}\mathcal{D}^{\,\mathrm{th}}=\log 2\,,$$
and at the $P$-classicality threshold \cite{caz13}, where
\begin{align}
\lim_{N\rightarrow\infty}\mathcal{D}^{\,\mathsf{P}}= \frac12 \log\left(3+2
\sqrt{2}\right)-\frac32 \log\sqrt{2}\approx 0.2067\,.\label{dinf}\end{align}
Again we see that, for a fixed value
of the total energy, a thermal input state results in more quantum
correlations than a state lying on the non-separability boundary,
although in the latter case squeezing is involved. Actually, as we can
see from Fig. \ref{f:Discord3D}, the states corresponding to the
$P$-classicality threshold do not correspond to the states with minimum
output discord (dashed magenta curve), confirming again the
inequivalence between $P$- and $C$-classicality. The minimum output
discord curve has been obtained numerically and we have not found
any clear physical picture of the class of states for which this minimum
is achieved. On the other hand, for a squeezed vacuum state the discord
grows logarithmically for large $N$ values. As a final remark, from the
right plot in the lower panel of Fig.~\ref{f:Discord3D}, it is apparent
that as the energy increases the classical correlations always increase
indefinitely, whereas as said, below the $P$-classicality threshold
discord is bounded. This behavior will become clear in the next
section, where entanglement will be considered.
\par
If we now release the restriction of a balanced BS and investigate the
behavior of the discord with respect to $\tau$, we see that
$\mathcal{D}_{1|2}(R)$ and $\mathcal{D}_{2|1}(R)$ differ from each
other; for a generic value $\tau$ of the transmissivity, the $b_1$-discord
and the $b_2$-discord are simply related by an exchange of the symplectic
invariants $I_1$ and $I_2$, which amounts to a swap of the BS
transmissivity from $\tau$ to $1-\tau$. In Fig. \ref{f:Disctau}, the
$b_1$-discord has been plotted as a function of $\tau$, for the relevant
cases already mentioned. Being obtained by the exchange of
transmissivity and reflectivity, the $b_2$-discord is simply given by a
reflection about the axis $\tau=\frac12$. Of course, in the limiting
cases of transmissivity 0 and 1, the discord falls to zero. Furthermore,
apart from a squeezed vacuum input state, the behavior of
$\mathcal{D}_{1|2}(R)$ is not symmetric with respect to $\tau=\frac12$.
By increasing the incoming energy, the maximum of
$\mathcal{D}_{1|2}(R)\; (\;\mathcal{D}_{2|1}(R)\;)$, and hence the
optimal transmissivity, shifts towards $\tau=1\, (\,0\,)$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{f4_dsctau.pdf}
\caption{(Color online)
Nonclassicality by mixing with the vacuum: plot of the
discord $\mathcal{D}_{1|2}(R)$ as a function of
the transmissivity $\tau$ for a given total energy $N=10$. Colors as in
Fig. \ref{f:Discord3D} . \label{f:Disctau}}
\end{figure}
\par
\subsection{Generation of Gaussian entanglement}
Let us now focus on the generation of Gaussian entanglement.
The explicit expressions of the symplectic
invariants are given by
\STErev{
\begin{subequations}\label{symplinv}
\begin{align}
I_1&= \frac{1}{4}+ n_t(1+n_t)\tau + N (1-\tau) \tau \,,\\[1ex]
I_2&= \frac{1}{4}+ n_t(1+n_t) (1-\tau) + N (1-\tau)\tau \,,\\[1ex]
I_3&=-\left[(1-n_t)n_t + N \right](1-\tau)\tau\,,\\[1ex]
I_4&=\frac{1}{16}\left(1+2 n_t \right)^2 \, .
\end{align}
\end{subequations}
By solving the equation $\tilde{\lambda}_-(n_s,n_t,\tau)=\frac12$ with
respect to $n_s$, one finds an analytic expression for the number of
squeezed photons at the separability threshold:}
\begin{equation}\label{NsSep}
n_s^{\mathsf{sep}}=\frac{n_t^2}{1+2 n_t} \, ,
\end{equation}
which does not depend on $\tau$ and, most importantly, equals the
$P$-classicality threshold Eq.~(\ref{NsNCl}). This is in agreement with
the general fact that $P$-nonclassicality is necessary and sufficient
for the generation of entanglement at a BS, regardless of the
Gaussian nature of the input state \cite{msk02,xia02,WEP:03,asb05,oli09,
oli11,JLC:13,VS:14}.
\par
We are now going to analyze in more detail the relationship between the
generation of Gaussian discord and entanglement. Since, although
analytical, the expression of the Gaussian discord is far too involved,
being in particular non invertible, we proceed in our analysis by
randomly sampling a large number of input Gaussian states and making
them evolve through a BS of random transmissivity $\tau\in [0,1]$; for
each of them, the minimum symplectic eigenvalue
of the partially transposed CM and the Gaussian discord are then computed.
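For the reader's convenience, the numerical procedure can be sketched as follows; this is a minimal illustration written for this purpose and not the code used to produce the figures. The output CM is built directly from the input squeezed thermal state and the BS symplectic transformation, and $\tilde{\lambda}_-$ is obtained from the local invariants as in the formula above; the Gaussian discord would additionally require the expression of $E^{\mathrm{min}}_{A|B}$ of Refs.~\cite{gio10,ade10}, which is omitted here.
\begin{verbatim}
import numpy as np

def output_cm(n_s, n_t, tau):
    # Squeezed thermal state (n_s, n_t) in mode a1, vacuum in a2;
    # vacuum variance 1/2, n_s = sinh^2(r).
    r = np.arcsinh(np.sqrt(n_s))
    a = n_t + 0.5
    sigma_in = np.diag([a*np.exp(2*r), a*np.exp(-2*r), 0.5, 0.5])
    t, s = np.sqrt(tau), np.sqrt(1.0 - tau)
    S = np.block([[ t*np.eye(2), s*np.eye(2)],
                  [-s*np.eye(2), t*np.eye(2)]])  # BS symplectic matrix
    return S @ sigma_in @ S.T

def lambda_minus_pt(sigma):
    # Minimum symplectic eigenvalue of the partially transposed CM.
    I1 = np.linalg.det(sigma[:2, :2])
    I2 = np.linalg.det(sigma[2:, 2:])
    I3 = np.linalg.det(sigma[:2, 2:])
    I4 = np.linalg.det(sigma)
    delta = I1 + I2 - 2.0*I3
    return np.sqrt((delta - np.sqrt(delta**2 - 4.0*I4)) / 2.0)

rng = np.random.default_rng(1)
for _ in range(5):
    ns, nt, tau = rng.uniform(0, 5), rng.uniform(0, 5), rng.uniform(0, 1)
    lam = lambda_minus_pt(output_cm(ns, nt, tau))
    print(ns, nt, tau, lam, "entangled" if lam < 0.5 else "separable")
\end{verbatim}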
\begin{figure}[h!]
\centering
\includegraphics[width=.9\columnwidth]{f5_rndvac.pdf}
\caption{(Color online)
Nonclassicality by mixing with the vacuum:
plot of the minimum symplectic eigenvalue of the partial
transpose CM $\tilde{\lambda}_-$ versus the discord
$\mathcal{D}_{1|2}(R)$ for randomly generated input states
$\varrho(n_s,n_t)$ evolving through a balanced BS (dark gray points),
and for random values of the transmissivity $\tau$ (light gray points).
The vertical dashed lines correspond to $1-\log2\approx 0.3069$,
$\log2\approx0.6931$, which is an asymptotic value for thermal states
entering a balanced BS, and $1$, beyond which only entangled states
($\tilde{\lambda}_-<$ 1/2) can be found. The black circle highlights the
portion of the plane occupied by states attaining the maximum value of
the discord while still being separable; the black arrow points to the maximum
value of the discord achieved at the separability threshold
$\tilde{\lambda}_-$ = 1/2 in the case of a balanced BS.
\label{f:RndDiscVac}}
\end{figure}
\par
The results are shown in Fig.~\ref{f:RndDiscVac} (light gray points),
together with the plot obtained for evolutions through a balanced BS
(dark gray points). Inspecting the latter distribution it is easy to
recover all the features already addressed in Fig. \ref{f:Discord3D}. In
particular, since for input thermal states of large energy the discord
has been found to reach the limiting value $\log2\approx0.6931$, the
distribution displays an asymptote, so that we can conclude that the
region of high $\tilde{\lambda}_-$ corresponds to highly excited input
thermal states. Moreover, as it can be seen following the dashed black
line in Fig. \ref{f:Discord3D}, by moving on the separability threshold,
i.e. considering the points lying on the line $\tilde{\lambda}_- = 1/2$, we
move from zero discord to the asymptotic value of Eq. (\ref{dinf}),
which is obtained for infinite input energy and pointed out by
the black arrow.
It is also possible to note that the minimum value of the discord is
attained slightly below $\tilde{\lambda}_- = 1/2$, as shown by the
dashed magenta line of Fig. \ref{f:Discord3D}.
\par
More generally, considering arbitrary transmissivity (light gray points
in Fig.~\ref{f:RndDiscVac}), if the evolved state has a discord
$\mathcal{D}_{1|2}(R)>1$, it will necessarily be entangled
\cite{gio10,ade10}: the avoided region of the plane
$\{(\mathcal{D},\tilde{\lambda}_-)\; \vert\; \mathcal{D}>1\;,
\tilde{\lambda}_- > 1/2 \}$ shows that the discord for separable states
is always smaller than one. It means that for separable---and hence
$P$-classical---input states, whatever the transmissivity, the discord
between the output modes cannot grow indefinitely~\cite{ade10}, by simply pumping
more energy. On the other hand, in the region
$0\le\mathcal{D}_{1|2}(R)\le1$, both entangled and separable states are
present. Another region of interest is the entangled region, namely
$\{(\mathcal{D},\tilde{\lambda}_-)\; \vert\; \tilde{\lambda}_- \le
1/2 \}$, where the randomly generated points get ``horn-shaped''. One
remarkable feature is that, when $\tilde{\lambda}_-$ approaches zero,
i.e. entanglement is high, the discord becomes, loosely speaking, nearly
a function of $\tilde{\lambda}_-$, and hence of entanglement itself.
\STErev{ In this case, we note that for $\mathcal{D}\gtrsim1$ the extent
of the region in Fig.~\ref{f:RndDiscVac} is bounded by two convergent
quantities. To better clarify this point we set $\tau=1/2$ and consider
an input squeezed vacuum state. For large $N\gg 1$ the analytic
expression of the discord $\mathcal{D}^{\,\mathrm{sq}}$ reads (at the leading
order):
\begin{equation}
\mathcal{D}^{\,\mathrm{sq}} \approx \log\left( \frac{\sqrt{N}}{2}\right)+1 \, ,
\end{equation}
while the minimum symplectic eigenvalue of the partial transpose
$\tilde{\lambda}_-^{\,\mathrm{sq}}$ is
\begin{equation}
\tilde{\lambda}_-^{\,\mathrm{sq}} \approx \frac{1}{4\sqrt{N}} \,
\end{equation}
respectively. From the previous equations it follows that in this limit
(inverting the first relation as $\sqrt{N}\approx 2e^{\mathcal{D}^{\,\mathrm{sq}}-1}$)
$\tilde{\lambda}_-^{\,\mathrm{sq}} \approx e^{1-\mathcal{D}^{\,\mathrm{sq}}}/8$, valid
for high discord values.} It turns out to be an upper bound for the
random distribution of symplectic eigenvalues in the entangled region, and
hence will be denoted as $\tilde{\lambda}_-^M$. Upon omitting the superscript
$\mathrm{sq}$, we may write
\begin{equation}\label{lambdaM}
\tilde{\lambda}_-\,\le\,\tilde{\lambda}_-^{M}\,
\approx\, \frac{e^{1-\mathcal{D}}}{8} \qquad
\mathrm{for}\quad \mathcal{D}\gtrsim1
\end{equation}
\STErev{Analogously, our numerical analysis in the same region shows
that} the randomly generated points are always bounded from below
by $\tilde{\lambda}_-^m$, whose expression is
\begin{equation}\label{lambdam}
\tilde{\lambda}_-\,\ge\,\tilde{\lambda}_-^m\,
\approx \,\frac{e^{-\mathcal{D}}}{4} \qquad\mathrm{for}
\quad \mathcal{D}\gtrsim1\, .
\end{equation}
Putting together Eqs.~(\ref{lambdaM}) and (\ref{lambdam}) we conclude
that, for a fixed value of the discord $\mathcal{D}\gtrsim1$, the
distribution of minimum symplectic eigenvalues of the partially
transposed CM is constrained in the range
\begin{equation}
\tilde{\lambda}_-^m \le \tilde{\lambda}_- \le \tilde{\lambda}_-^M \, .
\end{equation}
Therefore, for $\mathcal{D}\gg1$, $\,\tilde{\lambda}_-$ is an
exponentially decreasing function of the Gaussian discord.
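As a concrete illustration, for $\mathcal{D}=2$ the two bounds give $\tilde{\lambda}_-^m\approx e^{-2}/4\approx 0.034$ and $\tilde{\lambda}_-^M\approx e^{-1}/8\approx 0.046$: the admissible symplectic eigenvalues are confined to a narrow window deep inside the entangled region $\tilde{\lambda}_-<1/2$.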
\par
Particularly interesting is finally the region corresponding to highly
discordant---yet separable---states, stressed by a circle in
Fig.~\ref{f:RndDiscVac}. These are states sharing the maximum amount of
quantum correlations without invoking entanglement. We found that these
states are obtained for high input energies, whose value can also be due
uniquely to thermal photons, entering a BS of extremely high
transmissivity, namely $\tau=1-\varepsilon$. Having unlimited thermal
resources at our disposal, we can still
generate quantum correlated output states up to a value
$\mathcal{D}_{1|2}=1$, by sending a very excited input state in an
unbalanced BS of very high transmissivity. If, still keeping the BS
transmissivity close to one, the fraction of squeezed photons is such as to
render the input state $P$-nonclassical, the corresponding points in
the plane will lie just below the separability threshold, but the value
of the discord can increase only up to the value $1-\log2\approx 0.3069$
(indicated by a red dashed line). By further increasing the amount of
squeezing, the resulting states will eventually occupy more entangled
and more discordant regions of the lower branch.
\section{Nonclassicality arising from mixing a Gaussian state
with a thermal state}
\label{ss:the}
We now consider the general case, allowing for thermal photons to enter
the second port of the BS. In fact, in practical scenarios a certain
amount of thermal noise (\textit{e.g.}, in the form of black body
radiation or scattered light) unavoidably participates in the
interference phenomenon and affects the statistics of the outgoing
fields.
\par
Though $P$-classicality, as already discussed, retains the expression
(\ref{NsNCl}) for the threshold value, the presence of another source of
photons affects the properties of the output state. The relevant
changes both to quantum discord and entanglement can be again evaluated
via the symplectic invariants, which now read
\begin{align}\label{symplinvth}
I_1=&\frac{1}{4}+n_2^2 (1-\tau )^2 +\tau\left[n_t
+n_s\left(1+2n_t\right)\left(1-\tau\right)+\tau \,n_t^2 \right] \nonumber
\\ & \nonumber \\ & +n_2 (1-\tau ) \left[1+2n_1\,\tau \right] \, , \nonumber \\ & \nonumber \\
I_2=&\frac{1}{4}+n_t^2 (1-\tau )^2 +\tau\left[n_2
+n_s\left(1+2n_2\right)\left(1-\tau\right)+\tau \,n_2^2 \right] \nonumber
\\ & \nonumber \\ & +n_t (1-\tau ) \left[1+2\left(n_s+n_2+2n_sn_2
\right)\tau \right] \, , \nonumber \\ & \\ I_3=&\left[n_2^2 + n_t^2 -
n_s(1+2 n_t)-2n_1n_2\right] (1-\tau) \tau \, ,\nonumber \\ & \nonumber
\\ I_4=&\frac{1}{16}\left(1+2 n_t \right)^2 \left(1+2 n_2 \right)^2 \, ,
\nonumber
\end{align}
whence, recalling the expression for the total energy in mode $\hat{a}_1$, Eq. (\ref{toten}), we can see that
$I_1$ and $I_2$ are related to each other via the exchange of the number of thermal photons $n_t$
and $n_2$.
\subsection{Generation of Gaussian discord}
\begin{figure}
\subfigure{\label{N2_0}
\includegraphics[width=3.9cm]{f6a_N2_0.pdf}}
\subfigure{\label{N2_01}
\includegraphics[width=3.9cm]{f6b_N2_01.pdf}}
\begin{picture}(3,3)(117,52)
\put(-125,145){$n_s$}
\put(-100,70){\white (a)}
\put(17,70){\white (b)}
\end{picture}\\
\vskip -0.1cm
\subfigure{\label{N2_1}
\includegraphics[width=3.9cm]{f6c_N2_1.pdf}}
\subfigure{\label{N2_3}
\includegraphics[width=3.9cm]{f6d_N2_3.pdf}}
\begin{picture}(3,3)(117,52)
\put(-100,70){\white (c)}
\put(100,45){$n_t$}
\put(17,70){\white (d)}
\end{picture}
\caption{(Color online)
Nonclassicality by mixing with a thermal state:
contour plots of Gaussian discord as a function of
squeezed and thermal photons $n_s$ and $n_t$ in mode $a_1$. From panel
(a) to (d) the number of thermal photons in mode $a_2$ is given by
$n_2=0,0.1,1,3$ [panel (a) is in fact the contour plot of
Fig.~\ref{f:Discord3D}]. Colors go from black (0.) to white (0.7).}\label{f_ThermalD} \end{figure}
\par
As in the previous Section, we first focus on the case of a balanced
BS ($\tau=1/2$). In Fig.~\ref{f_ThermalD} we show the
contours of quantum discord as a function of $n_s$ and $n_t$ for
different thermal-photon number $n_2$ of mode $a_2$. We can see that,
although the $P$-classicality of the output state remains unchanged, the
$C$-classicality is much affected by the presence of an additional
source of thermal photons. In particular, for a low number of thermal
photons $n_2$ the region with minimal discord (darker areas in
Fig.~\ref{f_ThermalD}) localizes close to the $P$-classicality
threshold, whereas it tends to get closer to the zero squeezed-photon
axis for larger $n_2$. Also in this case, the inequivalence of the two
notions of non-classicality is apparent.
\par
Since there is no threshold for the production of discord, it is
legitimate to enquire about the generation of quantum correlations at a BS for
the ``cheapest'' conceivable scenario, namely having at our disposal only
\textit{thermal resources} in input. Given a certain amount of total thermal
photons $N=n_1+n_2$, which is the most convenient redistribution of the
total energy between the two modes, in order to maximize the Gaussian
discord at the output? The answer to this question is shown in Fig.
\ref{f:DiscComparison}, where the $b_1$-discord as a function of the photon
imbalance $d=n_1-n_2$ between the two input modes has been plotted. For
each transmissivity, the $b_1$-discord is a monotonically increasing
function of the imbalance $d$, so we can conclude that the optimal
configuration is the most asymmetric one, where all the thermal photons
are sent into one channel, leaving the other in the vacuum state.
Moreover, an even distribution of photons ($d=0$) between the input
modes always leads to zero output discord. This fact is apparent by
looking at Eq. (\ref{sigmaeq}) where for equal input states, no matter
the transmissivity, the correlation terms $c_{\pm}$ of the CM
identically vanish: the phenomenon is referred to as transparency, since
the evolution through the BS does not leave any imprint on the input
states. Since the optimal configuration is the one with a thermal input in
one port of the BS and the vacuum in the other, we already know that,
for a given amount of energy $N$, there will be an optimal value of the
transmissivity maximizing the $b_1$-discord (as shown by the blue curve
of Fig. \ref{f:Disctau}); this fact is also manifest in the crossing of
the blue curve and the red dot-dashed one in Fig.
\ref{f:DiscComparison}, corresponding to $\tau=0.5$ and $\tau=0.8$
respectively, when approaching the maximum imbalance. Finally, the
dashed curve corresponds to an extremely unbalanced BS, namely
$\tau=0.99$, and, provided that high-enough thermal energy is available,
the corresponding value of the $b_1$-discord close to the maximum imbalance
would be the greatest, eventually achieving the limiting value of
$\mathcal{D}_{1|2}=1$, as discussed above for the circled points in Fig.
\ref{f:RndDiscVac}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{f7_dsccmp.pdf}
\caption{(Color online)
Nonclassicality by mixing with a thermal state:
logarithmic plot of the $b_1$-discord $\mathcal{D}_{1|2}(R)$ as a
function of the imbalance $d$ for different values of the transmissivity
$\tau$ and fixed total energy $N=5$. The solid blue curve is for a
balanced BS $\tau=0.5$, the dot-dashed red line is for $\tau=0.8$ while
the black dashed line is for $\tau=0.99$.}\label{f:DiscComparison}
\end{figure}
\subsection{Generation of Gaussian entanglement}
As before, the equation $\tilde{\lambda}_-(n_s,n_t,n_2,\tau)=1/2$, if
solved with respect to the number of squeezed photons $n_s$, gives a
threshold on the generation of entanglement when $n_2$ thermal photons
enter the BS.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{f8_septh.pdf}
\caption{(Color online)
Nonclassicality by mixing with a thermal state:
plot of the separability thresholds $n_s^{\mathsf{sep}}$ for
fixed $\tau=\frac12$ and different values of $n_2$. The solid blue line
corresponds to $n_2=0$, and indeed coincides with the non-classicality
threshold $n_s^{\mathsf{nc}}$. The red dot-dashed line represents
$n_s^{\mathsf{sep}}$ for $n_2=0.1$, and finally the black dashed
one is for $n_2=1$. The black lines are curves of fixed energy in the
mode $\hat{a}_1$ $n_1=n_s +n_t+2n_sn_t$ : dashed for odd values $n_1=2k+1\, ,\,
k=0,1,\ldots$ and dot-dashed for even values $n_1=2k ,\, k=1,2,\ldots$
of the total energy.\label{f:SepTh}} \end{figure}
The explicit expression of $n_s^{\mathsf{sep}}$ is given by
\begin{equation}\label{NsSepth}
n_s^{\mathsf{sep}}=\frac{\mu_1\, \mu_2}{\tau(1-\tau )} \Theta_{t,2}\,\Theta_{2,t} \; ,
\end{equation}
where $\Theta_{k,l}=n_k n_l +n_k-( n_k-n_l)\tau$ and $\mu_{1,2}$ are the purities
of the two input states.
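As a consistency check, for $n_2=0$ one has $\mu_2=1$ and $\mu_1=(1+2n_t)^{-1}$ (the determinant of the CM of $\varrho$ being $(n_t+\frac12)^2$), while $\Theta_{t,2}=n_t(1-\tau)$ and $\Theta_{2,t}=n_t\tau$, so that Eq.~(\ref{NsSepth}) reduces, for any $\tau$, to $n_s^{\mathsf{sep}}=n_t^2/(1+2n_t)$, i.e. to the vacuum-case threshold of Eq.~(\ref{NsSep}).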
\par
Contrary to the vacuum case, it is apparent that \STErev{in the presence
of a thermal state} the separability threshold $n_s^{\mathsf{sep}}$ and
the non-classicality threshold $n_s^{\mathsf{nc}}$ are no longer
coincident. In Fig.~\ref{f:SepTh} several separability thresholds are
shown, for different values of $n_2$. As soon as $n_2$ differs from
zero, the number of squeezed photons $n_s$ required to have entanglement
increases --- as shown both in Fig.~\ref{f:SepTh} and
Fig.~\ref{f:FracNsNthTh}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{f9_fracth.pdf}
\caption{(Color online)
Nonclassicality by mixing with a thermal state:
plot of the squeezed fraction of photons $n_s/n_1$ at the
separability threshold, as a function of the total number of photons
$n_1=n_s+n_t+2n_sn_t$ entering the first port of the beam splitter, for
different values of $n_2$. The solid blue line corresponds to $n_2=0$,
the red dot-dashed line corresponds to $n_2=0.1$, while the black dashed
one to $n_2=1$.
\label{f:FracNsNthTh}}
\end{figure}
The previous symmetry between the notions of non-classicality in the
phase space and non-separability no longer holds: there exist
$P$-singular input states of the electromagnetic field, and hence
$P$-singular output states, which nevertheless are not entangled. We
can conclude that a hierarchy of non-classicality has emerged:
non-separability at the output imposes a stricter notion of quantumness
than the one put forward by $P$-singular distributions. Injecting into
the BS \textit{a non-classical state is no longer a sufficient
condition to get entanglement between the output modes}.
\par
\STErev{In order to better investigate this point,} we express the
separability threshold relative to $n_2$ thermal photons as a function
of the $P$-nonclassicality threshold $n_s^{\mathsf{P}}$. We focus on
the optimal case of a balanced BS, obtaining:
\begin{equation}
n_s^{\mathsf{sep}}=\frac{\left[\,n_2+h(n_s^{
\mathsf{P}})(1+2n_2)\,\right]^2}{(1+2n_2)[1+2\,h(n_s^{\mathsf{P}})\,]} \, ,
\end{equation}
where
$h(n_s^{\mathsf{P}})=n_s^{\mathsf{P}}+
\sqrt{n_s^{\mathsf{P}}(1+n_s^{\mathsf{P}})}$
is a monotonically increasing function of the non-classicality
threshold. Even if the two thresholds $n_s^{\mathsf{sep}}$ and
$n_s^{\mathsf{P}}$
now differ, their knowledge enables one, given a known amount of thermal
noise in $\hat{a}_2$, to estimate the effective $P$-nonclassicality required in
$\hat{a}_1$, i.e. how much squeezing to pump into the BS, in order to get
entanglement. When we have a squeezed vacuum state entering the BS in
the mode $\hat{a}_1$, namely $n_t=0$, and $n_2$ thermal photons in $\hat{a}_2$,
the separability threshold in Eq.~(\ref{NsSepth}) reduces to
$n_2^2/(1+2n_2)$, independently of $\tau$. It is the value of the curves
$n_s^{\mathsf{sep}}$ at $n_t=0$, as can be seen from
Fig.~\ref{f:SepTh}, and moreover it is the same expression as
$n_s^{\mathsf{nc}}$ with $n_t$ replaced by $n_2$. Thus, having a
squeezed vacuum state in mode $\hat{a}_1$ and a thermal state in mode $\hat{a}_2$
(characterized by $n_s$ squeezed and $n_2$ thermal photons,
respectively) is equivalent to having a single-mode Gaussian state
$\varrho(n_s,n_2)$ in $\hat{a}_1$ and the vacuum in $\hat{a}_2$.
\par
Finally, in Fig.~\ref{f:RndDiscTh} we propose the same random plot
as in Fig.~\ref{f:RndDiscVac}, with the difference that a random number of
thermal photons is added in the second mode. We can see that the lower
branch is substantially unchanged by the presence of thermal noise; even
if the entanglement sets in later, i.e. for a higher amount of squeezing, the
relationship with Gaussian discord remains the same. On the other hand,
in the remaining accessible region of the plane, compared with
Fig.~\ref{f:RndDiscVac}, the points are scattered and the sharp pattern
is now washed out. On average the distribution drops towards lower
values of the discord.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{f10_rndth.pdf}
\caption{(Color online)
Nonclassicality by mixing with a thermal state:
symplectic eigenvalue of the partial transpose
$\tilde{\lambda}_-$ versus discord for randomly generated
input states $\varrho(n_s,n_t)$, random values of the transmissivity
$\tau$ and randomly generated thermal states $\nu(n_2)$ at the second
port of the beam splitter. \label{f:RndDiscTh}}
\end{figure}
\subsection{Effective non-classicality and non-classical depth}
As said above, in the case in which thermal photons are injected in the
second port of the BS, the $P$-non-classicality is no longer a necessary
and sufficient condition to obtain output entanglement. However,
remarkably, a quantitative relation between these two notions can still
be worked out. In particular, we will now see that the non-classical
depth at the input determines the potential of generating entanglement
at the output.
\par
Let us first consider the implicit equation defining the separability
threshold $\tilde{\lambda}_-(n_s,n_t,n_2,\tau)=1/2$, and let us call
$\mathcal{E}_{\varrho}(\tau)$ its solution with respect to the number of
thermal photons in the second port \STErev{as a function of $\tau$}. It
expresses (as a function of the input parameters $n_s, n_t$) the number
of thermal photons that can enter a BS of transmissivity $\tau$ in the
$\hat{a}_2$ mode and yield an output entangled state. The explicit form of
$\mathcal{E}_{\varrho}(\tau)$, although analytical, is quite cumbersome
and hence has not been reported. If we now perform a maximization over
the transmissivity $\tau$ we obtain the following quantity:
\begin{equation}\label{effective}
\mathcal{E}_{\varrho}=\max_{\tau}\,\mathcal{E}_{\varrho}(\tau) \, .
\end{equation}
We shall refer to this quantity as the \textit{effective
non-classicality} of the state $\varrho$; it embodies the maximum
allowed number of thermal photons that can be mixed with $\varrho$ at a
BS and still get an entangled output state.
\par
While the non-classical depth is a property of a single-mode state of
the field, the effective non-classicality is a property of the two-mode
configuration that we are considering. In other words, the effective
non-classicality $\mathcal{E}_{\varrho}$ should be understood as an attempt
to characterize \textit{operationally} the non-classical feature of a
state. Thus, the relation between $\mathcal{E}_{\varrho}$ and ${\rm T}_m$,
if any, is a priori unclear. Let us stress that the operational
interpretation commonly associated with the ${\rm T}_m$ of a single-mode
state $\varrho$ is that it gives the number of thermal photons that have
to be statistically mixed with the state $\varrho$ in order to obtain a
classical state. In this sense, this operational interpretation of
${\rm T}_m$ exclusively refers to single-mode states.
\par
In order to clarify the relation between $\mathcal{E}_{\varrho}$ and
${\rm T}_m$ we notice first that, by means of a numerical maximization, it
is possible to show that $\mathcal{E}_{\varrho}$ is always obtained for
$\tau=1/2$. Thus the balanced BS represents the overall optimal
configuration, and in this case the effective non-classicality reads
\begin{equation}
\mathcal{E}_{\varrho}=\frac{n_s - n_t+ \sqrt{n_s (1 + n_s)} }{1 + 2 n_t}
\, .
\end{equation}
If we now look at the expression of the
non-classical depth Eq. (\ref{NCdepthmax}) and insert it in
$\mathcal{E}_{\varrho}$, after some manipulation we find the following
relation
\begin{equation}\label{enc_ncd}
\mathcal{E}_{\varrho}=\frac{{\rm T}_m}{1-2{\rm T}_m} \, .
\end{equation}
Thus, the two quantities $\mathcal{E}_{\varrho}$ and ${\rm T}_m$ which, as
said, are defined in reference to different systems, are in fact related
via a simple expression. In other words, this endows the non-classical
depth with a new operational interpretation: \textit{the non-classical
depth of a state determines, via Eq.~(\ref{enc_ncd}), the maximum number
of thermal photons that can be mixed with it at a beam
splitter without destroying the output entanglement}.
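For instance, a state with ${\rm T}_m=1/4$ can tolerate up to $\mathcal{E}_{\varrho}=1/2$ thermal photons in the second port and still yield an entangled output, whereas for ${\rm T}_m\rightarrow 1/2$, the supremum attainable by Gaussian states in the limit of infinite squeezing, the tolerable thermal noise diverges.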
\section{Conclusions}
\label{s:out}
The quantum-to-classical transition for a single-mode bosonic system may
be fully characterized by the properties of its Glauber-Sudarshan
$P$-function in the phase space. On the other hand, for two-mode states,
quantumness may be recognized either by the presence of quantum
correlations ($C$-nonclassicality) {\em or} in terms of its phase space
distribution ($P$-nonclassicality). In this paper we have addressed the
generation of both types of nonclassicality by the linear mixing of a
single-mode Gaussian state with a thermal state at a beam splitter, and
have explored in detail the relationships between the nonclassical
features of the single-mode input and the $P$- and $C$-nonclassicality
of the two-mode outputs.
\par
We have shown that, for mixing with vacuum, a balanced BS is capable of
generating $C$-nonclassicality for any input state, contrary to the case
of $P$-nonclassicality. In addition, the $C$-nonclassicality increases
as the input $P$-nonclassicality decreases. These findings clearly
confirm in a dynamical setting the inequivalence between these two
notions of nonclassicality that was highlighted in Ref.~\cite{PvsI} in a
geometrical context. We confirm this inequivalence also for mixing with a
thermal state, even if more complex behaviors emerge.
\par
In addition, we have shown that input $P$-classicality and output
separability single out two thresholds which coincide only for the case
of linear mixing with the vacuum, whereas they are connected in a non
trivial way for linear mixing with a thermal state. In fact,
$P$-classicality at the input, as quantified by the non-classical depth,
does determine quantitatively the potential of generating output
entanglement. This allows us to provide a new operational
interpretation for the non-classical depth: it gives the maximum number
of thermal reference photons that can be mixed at a beam splitter
without destroying the
output entanglement.
\par
By reinforcing quantitatively the inequivalence between $P$- and
$C$-classicality, our results pave the way for analyzing the dynamical
relationship between different types of non-classicality in more
general contexts.
\acknowledgments
This work has been supported by MIUR through the FIRB project
``LiCHIS'' (grant RBFR10YQ3H), by EU through the Collaborative
Projects TherMiQ (Grant Agreement 618074) and QuProCS (Grant
Agreement 641277) and by UniMI through the H2020 Transition
Grant 14-6-3008000-625.
\begin{thebibliography}{99}
\bibitem{Wig} E. P. Wigner, Phys. Rev. \textbf{40}, 749 (1932).
\bibitem{gl1} R. J. Glauber, Phys. Rev. {\bf 131}, 2766 (1963).
\bibitem{sud1} E. C. G. Sudarshan, Phys. Rev. Lett. {\bf 10}, 277 (1963).
\bibitem{mandel} L. Mandel, E. Wolf, {\it Optical Coherence and Quantum Optics} (Cambridge University Press, 1995).
\bibitem{HHHH} R.~Horodecki, P.~Horodecki, M.~Horodecki, and K.~Horodecki, Rev.~Mod.~Phys. \textbf{81}, 865 (2009).
\bibitem{Modi} K.~Modi, A.~Brodutch, H.~Cable, T.~Paterek, and V.~Vedral, Rev.~Mod.~Phys. \textbf{84}, 1655 (2012).
\bibitem{PvsI} A.~Ferraro, M. G. A.~Paris, Phys. Rev. Lett. {\bf 108}, 260403 (2012).
\bibitem{C+:14} V.~Chille, N. Quinn, C. Peuntinger, C. Croal, L. Mista Jr., C. Marquardt, G. Leuchs, N. Korolkova, arXiv:1411.6922 [quant-ph].
\bibitem{cah69} K. E. Cahill, R. J. Glauber, Phys. Rev. {\bf 177}, 1857 (1969); {\bf 177}, 1882 (1969).
\bibitem{oli:rev} S. Olivares, Eur. Phys. J. Special Topics {\bf 203}, 3 (2012).
\bibitem{GS3} C.~Weedbrook, S.~Pirandola, R.~Garc\'ia-Patr\'on, N.~J.~Cerf, T.~C.~Ralph, J.~H.~Shapiro and S.~Lloyd, Rev. Mod. Phys. {\bf 84}, 621 (2012).
\bibitem{FOP:05} A.~Ferraro, S.~Olivares, M.~G.~A.~Paris, {\it Gaussian States in Quantum Information} (Bibliopolis, Napoli, 2005).
\bibitem{WM} R. A. Campos, B. E. A. Saleh, M. C. Teich, Phys. Rev. A {\bf 40}, 1371 (1989); D. Walls and G. Milburn, {\it Quantum Optics} (Springer Verlag, Berlin, 1994).
\bibitem{meystre} P.~Meystre, {\it Atom Optics} (Springer-Verlag, New York, 2001).
\bibitem{woolley:08} M.~J.~Woolley, G.~J.~Milburn, and C.~M.~Caves, New J. Phys. {\bf 10}, 125018 (2008).
\bibitem{xiang:10} S.-H.~Xiang, W.~Wen, Z.-G.~Shi, and K.-H.~Song, Phys. Rev. A {\bf 81}, 054301 (2010).
\bibitem{PZ:12} M.~Poot, H. S. J.~van der Zant, Phys. Rep. \textbf{511}, 273 (2012).
\bibitem{AKM:14} M.~Aspelmeyer, T. J.~Kippenberg, F.~Marquardt, Rev. Mod. Phys. {\bf 86}, 1391 (2014).
\bibitem{W+:04} A.~Wallraff \textit{et al.}, Nature \textbf{431}, 162 (2004).
\bibitem{chirolli:10} L.~Chirolli, G.~Burkard, S.~Kumar, and D.~P.~Di Vincenzo, Phys. Rev. Lett. {\bf 104}, 230502 (2010).
\bibitem{schum:86} B. L. Schumaker, Phys. Rep. {\bf 135}, 317 (1986).
\bibitem{ser:04} A. Serafini, F. Illuminati, S. De Siena, J. Phys. B {\bf 37}, L21 (2004).
\bibitem{lee} C. T. Lee, Phys. Rev. A {\bf 44}, R2775 (1991).
\bibitem{lee2} C. T. Lee, Phys. Rev. A {\bf 52}, 3374 (1995).
\bibitem{tak02} M. Takeoka, M. Ban, M. Sasaki, J. Opt. B {\bf 4}, 114 (2002).
\bibitem{seraf:05} A. Serafini, M. G. A. Paris, F. Illuminati, and S. De Siena, J. Opt. B {\bf 7}, R1 (2005).
\bibitem{zurek} H. Ollivier and W. H. Zurek, Phys. Rev. Lett. {\bf 88}, 017901 (2001).
\bibitem{gio10} P. Giorda, M. G. A. Paris, Phys. Rev. Lett. {\bf 105}, 020503 (2010).
\bibitem{ade10} G. Adesso, A. Datta, Phys. Rev. Lett. {\bf 105}, 030501 (2010).
\bibitem{gu12} M. Gu, H. M. Chrzanowski, S. M. Assad, T. Symul, K. Modi, T. C. Ralph, V. Vedral, and P. K. Lam, Nat. Phys. {\bf 8}, 671 (2012).
\bibitem{mas12} L. S. Madsen, A. Berni, M. Lassen, and U. L. Andersen, Phys. Rev. Lett. {\bf 109}, 030402 (2012).
\bibitem{bla12} R. Blandino, M. G. Genoni, J. Etesse, M. Barbieri, M. G. A. Paris, P. Grangier, R. Tualle-Brouri, Phys. Rev. Lett. {\bf 109}, 180402 (2012).
\bibitem{gea14} S. Pirandola, G. Spedalieri, S. L. Braunstein, N. J. Cerf, and S. Lloyd, Phys. Rev. Lett. {\bf 113}, 140405 (2014).
\bibitem{sim:00} R. Simon, Phys. Rev. Lett. {\bf 84}, 2726 (2000).
\bibitem{per:96} A. Peres, Phys. Rev. Lett. {\bf 77}, 1413 (1996).
\bibitem{vid:02} G. Vidal, R. F. Werner, Phys. Rev. A {\bf 65}, 032314 (2002).
\bibitem{msk02} M. S. Kim, W. Son, V. Buzek, and P. L. Knight, Phys. Rev. A {\bf 65}, 032323 (2002).
\bibitem{xia02} W. Xiang-bin, Phys. Rev. A {\bf 66}, 024303 (2002).
\bibitem{WEP:03} M. M.~Wolf, J.~Eisert, and M. B.~Plenio, Phys.~Rev.~Lett. \textbf{90}, 047904 (2003).
\bibitem{asb05} J. K. Asb\'oth, J.~Calsamiglia, and H.~Ritsch, Phys.~Rev.~Lett. \textbf{94}, 173602 (2005).
\bibitem{oli09} S. Olivares and M. G. A. Paris, Phys. Rev. A {\bf 80}, 032329 (2009).
\bibitem{oli11} S. Olivares and M. G. A. Paris, Phys. Rev. Lett. {\bf 107}, 170505 (2011).
\bibitem{JLC:13} Z.~Jiang, M. D.~Lang, and C. M.~Caves, Phys. Rev. A \textbf{88}, 044301 (2013).
\bibitem{VS:14} W.~Vogel and J.~Sperling, Phys. Rev. A \textbf{89}, 052302 (2014).
\bibitem{almostall} A.~Ferraro, L.~Aolita, D.~Cavalcanti, F.~M.~Cucchietti, and A.~Ac\'in, Phys.~Rev.~A \textbf{81}, 052318 (2010).
\bibitem{caz13} A. Cazzaniga, S. Maniscalco, S. Olivares, M. G. A. Paris, Phys. Rev. A {\bf 88}, 032121 (2013).
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We consider the motion of compressible Navier-Stokes fluids with the hard sphere pressure law around a rigid obstacle when the velocity and the density at infinity are nonzero. This kind of pressure model is widely employed in various physical and industrial applications. We prove the existence of a weak solution to the system in the exterior domain.
\end{abstract}
\maketitle
{\bf The article was finished shortly after the death of A. Novotn\' y. We will never forget him.}
\tableofcontents
\section{Introduction} \label{intro}
We consider a bounded domain $\mathcal{ S}\subset \mathbb{R}^d$, $d=2,3$ of class $C^2$ with
boundary $\partial\mathcal{S}$. Let us denote the open ball with radius $R$ with the center at the origin by $B_R$ and without loss of generality let us assume that $\overline{\mathcal{S}}\subset B_{1/2}$. Let $\Omega$ be an exterior domain given by
\begin{equation*}
\Omega := \mathbb{R}^d \setminus \overline{\mathcal{S}},\quad d=2,3.
\end{equation*}
We consider the motion of viscous compressible fluid in the exterior domain $\Omega$ around the obstacle $\mathcal S$. Precisely, the mass density $\rho=\rho(t,x)$ and the velocity $\mathbf{u}=\mathbf{u}(t,x)$ of the fluid satisfy the following system:
\begin{align}
{\partial _t \rho }+ \mbox {div }(\rho \mathbf u) & = 0 \quad \mbox{in}\quad (0,T)\times {\Omega}\label{eq:mass}\\
\partial _{t}(\rho\mathbf u)+\mbox { div } \left( \rho\mathbf u \otimes \mathbf u\right) +\nabla p(\rho)-
\mbox { div } {\mathbb S(\nabla \mathbf u)} & = 0
\quad \mbox{in}\quad (0,T)\times {\Omega}\label{eq:mom}\\
\mathbb {S}(\nabla \mathbf{u}) &=
\mu(\nabla \mathbf{u} + \nabla^{\top} \mathbf{u}) + \lambda\operatorname{div}\mathbf{u} \mathbb{I},\quad
\mu>0, \lambda\geqslant 0,\label{eq:tensor}\\
\mathbf u & = 0
\quad \mbox{on}\quad (0,T)\times \partial{\mathcal{S}},\label{eq:bdary}
\end{align}
where $p=p(\rho)$ is the hard sphere pressure. The system is
endowed with the initial conditions
\begin{equation}\label{ini}
\rho(0,x)=\rho_0(x),\quad \rho\mathbf{u} (0,x)=\mathbf{q}_0 (x),\quad x\in\Omega .
\end{equation}
Since $\Omega$ is an exterior domain, we need to prescribe the behaviour of $(\rho,\mathbf{u})$ at infinity:
\begin{equation}\label{behaviour}
\rho(t,x) \rightarrow \rho_{\infty}, \quad \mathbf u(t,x) \rightarrow \mathbf a_{\infty}\quad \mbox{as} \quad |x|\rightarrow\infty,\ (t,x)\in (0,T)\times\Omega,
\end{equation}
where $\mathbf a_{\infty}\in \mathbb R^3$ is a nonzero constant vector and $\rho_{\infty}>0$ is a given positive constant. We assume that the fluid density cannot exceed a limit value $\overline{\rho}>0$ and the hard sphere pressure $p=p(\rho)$, as a function of the density, becomes infinite when the density approaches a finite critical value $\overline{\rho}>0$:
\begin{equation*}
\lim_{\rho\rightarrow \overline{\rho}} p(\rho)=\infty.
\end{equation*}
The above condition of the hard sphere model excludes the standard pressure law for isentropic gases. Specifically, we consider a pressure $p\in C^1[0,\overline{\rho})\cap C^2(0,\overline{\rho})$ satisfying
\begin{equation}\label{p-law}
p(0)=0,\quad p'(\rho)>0\ \forall\ \rho>0,\quad \liminf_{\rho\rightarrow 0}\frac{p'(\rho)}{\rho}>0,\quad p(\rho)\sim _{\rho\rightarrow \overline{\rho}-} |\overline{\rho}-\rho|^{-\beta},\mbox{ for some }\beta > 5/2,
\end{equation}
where without loss of generality, we assume that $1< \overline{\rho}<\infty$ and $a(s)\sim_{s\rightarrow s_0\pm} b(s)$ stands for
\begin{equation*}
c_1a(s)\leqslant b(s) \leqslant c_2 a(s),\mbox{ in a right}(+),\mbox{ left}(-)\mbox{ neighbourhood of }s_0.
\end{equation*}
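A prototypical example of a pressure satisfying \eqref{p-law} is, for instance,
\begin{equation*}
p(\rho)=a\,\frac{\rho}{(\overline{\rho}-\rho)^{\beta}},\qquad a>0,\ \beta>5/2,
\end{equation*}
for which $p(0)=0$, $p'(\rho)>0$ for $\rho>0$, $\lim_{\rho\rightarrow 0}p'(\rho)=a\,\overline{\rho}^{-\beta}>0$ (so that $\liminf_{\rho\rightarrow 0}p'(\rho)/\rho=+\infty$), and $p(\rho)\sim _{\rho\rightarrow \overline{\rho}-} |\overline{\rho}-\rho|^{-\beta}$.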
We study the well-accepted Carnahan-Starling equation of state (\ref{p-law}). It
is an approximate but quite good (as explained in \cite{Song}) equation of state for the fluid phase of the hard sphere model. Such a model was derived from a quadratic relation between the integer portions of the virial coefficients and their orders. This model is convenient for the initial study of the behavior of dense gases and liquids. For more details regarding this model and several corrections (Percus-Yevick equation, Kolafa correction, Liu correction), we refer to \cite{Carnahan-Starling, Hongqin,KLM04AEHS,KVB62CPES}.
Similar types of singular pressure laws are considered in many physical models. Let us mention the work by Degond and Hu \cite{MR3020033} and by Degond et al. \cite{MR2835410} for collective motion, the work of Berthelin et al. \cite{MR2438216, MR2366138} for traffic flow, and the paper by Maury \cite{maury2012}
concerning crowd motion models. Similar types of models can also be found in the works of Bresch et al. \cite{bresch2014, bresch2017}, Perrin and Zatorska \cite{peza2015}, and Bresch, Ne\v casov\' a and Perrin \cite{MR3974475}.
The existence of weak solutions in the barotropic case goes back to the
seminal work by Lions \cite{MR1637634}, later improved by Feireisl et al. \cite{MR1867887}. The question of the existence of weak solutions in the hard sphere pressure case in a bounded domain with no-slip boundary conditions was studied recently by Feireisl and Zhang \cite[Section 3]{MR2646821}. The case of general inflow/outflow was investigated by Choe, Novotn\' y and Yang \cite{MR3912678}. Weak--strong uniqueness in the case of the hard sphere pressure in periodic spatial domains was shown by Feireisl, Lu and Novotn\' y \cite{FLN}. To our knowledge, there is no available existence result for weak solutions with the hard sphere pressure law in the case of an unbounded domain. In the case of a barotropic compressible fluid,
the existence of weak solutions in an unbounded domain when the velocity and the density at infinity are nonzero was established by Novotn\' y and Stra\v skraba \cite[Section 7.12.6]{MR2084891} and by Lions \cite[Section 7]{MR1637634}. The case of the motion of compressible fluids in $\mathbb{R}^3$ around a rotating obstacle, where the velocity at infinity is nonzero and parallel to the axis of rotation, was treated in the paper by Kra\v cmar, Ne\v casov\' a and Novotn\' y \cite{MR3208793}.
In this work, our aim is to establish the existence of weak solutions to the compressible Navier--Stokes system with the hard sphere pressure in the context of an exterior domain. The main idea is to use the method of ``invading domains''. The exterior domain $\Omega$ is approximated by invading
domains $\Omega_R=\Omega \cap B_R$ and, to begin with, we have to show the existence of a solution in these bounded domains. Then we need to find estimates independent of the domains $\Omega_R$ so that we can identify the weak limits on the growing invading domains (as the radius $R$ of $B_R$ goes to infinity) via
local weak compactness results \cite[Lemma 6.6]{MR2084891}. We frequently use the div-curl lemma, the effective viscous flux, commutator lemmas and renormalized solutions of the transport equation in our analysis. The complete methodology has been explained in Novo \cite{MR2189672} and in Novotn\' y, Stra\v skraba \cite[Section 7.12.6]{MR2084891} for the case of a compressible barotropic fluid, and we have adapted it in this paper to a compressible fluid with the hard sphere pressure law.
The outline of the paper is as follows. Section \ref{intro} deals with the description of the problem, the notion of weak solution to the problem and the statement of the main result of the paper. The approximate problem on large balls via a suitable penalization is introduced in Section \ref{Approx}. The penalization uses an auxiliary vector field $\mathbf{u}_{\infty}$ (defined in \eqref{d2}) which is crucial to achieve the required behaviour of the velocity at infinity in the limiting process. Moreover, the existence of a solution to this problem is shown and the limit is performed with the penalization parameter tending to infinity, where equi-integrability of the pressure is important to pass to the limit in the pressure term.
Section \ref{proof} is devoted to the proof of \cref{thm:main} where the limit with the radius of large balls tending to infinity is achieved and the method of ``invading domains'' is used. In this step, the special choice of the test functions is crucial to identify the limit of the pressure.
\subsection{Weak formulation and Main result}\label{weak}
We want to define the notion of weak solutions to system \eqref{eq:mass}--\eqref{behaviour} together with the pressure $p(\rho)$ satisfying \eqref{p-law}. Let us denote the open ball with radius $R$ with the center at the origin by $B_R.$
To start with,
without loss of generality, assume that $
\overline { \mathcal S}\subset B_{1/2}$
and $B_1\subset
B_R$. We set
\begin{align}\label{d1}
\mathbf{U}_{\infty} := \begin{cases}
\mathbf{u}_{\infty}&\mbox{ in }B_1,\\
\mathbf{a}_{\infty}&\mbox{ in }\mathbb{R}^3\setminus B_1,
\end{cases}
\end{align}
where $\mathbf{u}_{\infty}\in C^{1}_c(\mathbb R^3)$ is such that:
\begin{equation}\label{d2}
\mathbf{u}_{\infty}= \left \{ \begin{array}{l}
0 \mbox { in } B_{1/2},\\
\mathbf{a}_{\infty } \mbox { in } B_{(3/2)R}\setminus B_1,
\\
\mbox { supp } \mathbf{u}_{\infty } \subset B_{2R}.
\end{array}
\right. \quad \mbox { div } \mathbf{u}_{\infty} = 0 \mbox { in }\mathbb R^3.
\end{equation}
The construction of such vector field $\mathbf{u}_{\infty}$ follows from the explanations \cite[Section 3, page 195]{MR3208793}, \cite[Section 1, page 487--488]{MR2189672} and the result \cite[Exercise III.3.5, page 176]{MR2808162}.
\begin{Definition}\label{def:bddenergy}
We say that a couple $(\rho,\mathbf{u})$ is a bounded energy weak solution of the problem \eqref{eq:mass}--\eqref{behaviour} with the pressure law \eqref{p-law} if the following conditions are satisfied:
\begin{itemize}
\item Functions $(\rho,\mathbf{u})$ are such that
\begin{equation*}
0\leqslant \rho < \overline{\rho},\quad E(\rho|\rho_{\infty})\in
L^\infty(0,T;L^1(\Omega)),
\end{equation*}
\begin{equation*}
\rho|\mathbf{u}-\mathbf{U}_\infty|^2\in L^\infty(0,T;L^1(\Omega)),\quad (\mathbf{u}-
\mathbf{U}_\infty)\in L^2(0,T;W^{1,2}(\Omega)).
\end{equation*}
In the above
\begin{equation*}
E(\rho|\rho_{\infty}) =
H(\rho)-H'(\rho_{\infty})(\rho-\rho_{\infty})-H(\rho_{\infty}),
\end{equation*}
where
\begin{equation}\label{def:H}
H(\rho) = \rho \int _1^{\rho} \frac{p(s)}{s^2}ds,
\end{equation}
\item The function $\rho\in C_{\rm{weak}}([0,T]; L^1(K))$ for any compact $K\subset\overline\Omega$ and the
equation of continuity \eqref{eq:mass} is satisfied in the weak sense,
\begin{equation}\label{eqf:weakmass}
\int_\Omega \rho(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_\Omega \rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x=\int_0^\tau \int_{\Omega}\left(\rho
\partial_t \varphi + \rho\mathbf{u} \cdot \nabla \varphi\right)
\ dx\ dt,
\end{equation}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{\Omega})$.
\item The linear momentum $\rho\mathbf{u}\in C_{\rm weak}([0,T], L^{1}(K))$ for
any compact $K\subset\overline \Omega$ and the momentum equation \eqref{eq:mom}
is satisfied in the weak sense
\begin{multline}\label{eqf:weakmom}
\int_\Omega\rho\mathbf{u}(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_\Omega \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega}\Big(
\rho \mathbf{u} \cdot \partial_t \varphi + \rho \mathbf{u}
\otimes \mathbf{u} : \nabla \varphi +
p(\rho)\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}) : \nabla \varphi \Big)\ dx \ dt,
\end{multline}
for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega)$.
\item The following energy inequality holds: for a.e. $\tau\in (0,T)$,
\begin{multline}\label{eqf:ee}
\int_{\Omega}\Big(\frac{1}{2}\rho|\mathbf{u}-\mathbf {U}_{\infty}|^2 +
E(\rho|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{\Omega} \mathbb{S}(\nabla (\mathbf{u}-\mathbf{U}_{\infty})):\nabla (\mathbf{u}-\mathbf{U}_{\infty})\ dx\ dt\\
\leqslant
\int_{\Omega}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{U}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx - \int_0^\tau
\int_{B_1\setminus\mathcal{S}} \rho\mathbf{u}\cdot\nabla\mathbf {U}_{\infty}
\cdot(\mathbf{u} - \mathbf{U}_{\infty})\,{ d}x\,{ d}t - \int _0^{ \tau}\int _{B_1\setminus\mathcal{S}}\mathbb{S}(\nabla\mathbf{U}_{\infty}):\nabla(\mathbf{u}-\mathbf{U}_\infty)\,{ d} x\, { d}t.
\end{multline}
\end{itemize}
\end{Definition}
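\begin{Remark}
Note that, by \eqref{def:H}, $H''(\rho)=p'(\rho)/\rho>0$ on $(0,\overline{\rho})$, so that $H$ is strictly convex there. Consequently, the relative energy $E(\rho|\rho_{\infty})$ is non-negative and vanishes if and only if $\rho=\rho_{\infty}$, and the bound $E(\rho|\rho_{\infty})\in L^\infty(0,T;L^1(\Omega))$ controls the deviation of the density from its value at infinity.
\end{Remark}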
\begin{Remark}
We can use the regularization procedure in the transport theory by DiPerna and Lions \cite{DiPerna1989} to show that if $(\rho,\mathbf{u})$ is a bounded energy weak solution of the problem \eqref{eq:mass}--\eqref{behaviour} according to \cref{def:bddenergy}, then $(\rho,\mathbf{u})$ also satisfies a renormalized continuity equation in the weak sense, i.e.,
\begin{equation}\label{eq:renorm}
\partial_t b(\rho) + \operatorname{div}(b(\rho)\mathbf{u}) + (b'(\rho)\rho-b(\rho))\operatorname{div}\mathbf{u}=0 \mbox{ in }\, \mathcal{D}'([0,T)\times {\Omega}) ,
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$.
\end{Remark}
We are now in a position to state the main result of the present paper.
\begin{Theorem}\label{thm:main}
Assume that
$0< \rho_{\infty} < \overline{\rho}$, $\mathbf{a}_{\infty}(\neq 0)\in \mathbb{R}^d$, $\mathbf{U}_{\infty}\in C_c^1(\mathbb{R}^d)$ is defined by \eqref{d1} and $\Omega =\mathbb R^d\setminus
\overline{\mathcal S}$, where $\mathcal{S}\subset \mathbb{R}^d$, $d = 2, 3$ is a bounded domain of class $C^2$. Assume that the pressure satisfies the hypothesis \eqref{p-law}, the initial data have finite energy
\begin{equation}\label{init}
E(\rho_0|\rho_{\infty}) \in L^1(\Omega),\quad 0\leqslant \rho_{0} < \overline{\rho},\quad
\frac{|\mathbf{q}_0-\rho_{0}\mathbf{a}_{\infty}|^2}{\rho_0}\mathds{1}_{\{\rho_0 > 0\}}\in L^1(\Omega).
\end{equation}
Then the problem \eqref{eq:mass}--\eqref{behaviour} admits at least one renormalized bounded energy weak solution on $(0,T)\times\Omega$.
\end{Theorem}
\section{Approximate problems in bounded domain}\label{Approx}
In this section, in order to solve the system \eqref{eq:mass}--\eqref{behaviour}, we propose approximate problems posed in a bounded domain and analyze their solvability.
\subsection{Existence of a penalized problem}
Let us denote $V:=B_{2R}$. In order to construct solutions to \cref{thm:main}, we start with the following penalized problem:
\begin{align}
{\partial _t \rho }+ \mbox {div }(\rho \mathbf u) & = 0 \quad \mbox{in}\quad (0,T)\times {V}\label{eq:penmass},\\
\partial _{t}(\rho\mathbf u)+\mbox { div } \left( \rho\mathbf u \otimes \mathbf u\right) +\nabla p(\rho)-
\mbox { div } {\mathbb S(\nabla \mathbf u)} + m\mathds{1}_{\{(V\setminus B_R)\cup \mathcal{S}\}}(\mathbf{u}-\mathbf{u}_{\infty}) & = 0
\quad \mbox{in}\quad (0,T)\times {V}\label{eq:penmom},\\
\mathbf u & = 0
\quad \mbox{on}\quad (0,T)\times \partial{V},\label{eq:penbdary}
\end{align}
\begin{equation}\label{penini}
\rho(0,x)=\rho_0(x),\quad \rho\mathbf{u} (0,x)=\mathbf{q}_0 (x),\quad x\in V .
\end{equation}
In the above, the initial data $\rho_0$ and $\mathbf{q}_0$ have been extended by zero in $\mathcal{S}$.
\begin{Definition}\label{def:penbddenergy}
We say that a couple $(\rho_m,\mathbf{u}_m)$ is a bounded energy weak solution of the problem \eqref{eq:penmass}--\eqref{penini} with \eqref{p-law} if the following conditions are satisfied:
\begin{itemize}
\item Functions $(\rho_m,\mathbf{u}_m)$ are such that
\begin{equation*}
0\leqslant \rho_m < \overline{\rho},\quad E(\rho_m|\rho_{\infty})\in
L^\infty(0,T;L^1(V)),
\end{equation*}
\begin{equation*}
\rho_m|\mathbf{u}_m-\mathbf{u}_\infty|^2\in L^\infty(0,T;L^1(V)),\quad (\mathbf{u}_m-
\mathbf{u}_\infty)\in L^2(0,T;W_0^{1,2}(V)).
\end{equation*}
\item The function $\rho_m\in C_{\mbox{weak}}([0,T]; L^1(V))$ and the
equation of continuity \eqref{eq:penmass} is satisfied in the weak sense,
\begin{equation}\label{eq:weakpenmass}
\int_V \rho_m(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_V \rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x=\int_0^\tau \int_{V}\left(\rho_m
\partial_t \varphi + \rho_m\mathbf{u}_m \cdot \nabla \varphi\right)
\ dx\ dt,
\end{equation}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{V})$.
\item The linear momentum $\rho_m\mathbf{u}_m\in C_{\rm weak}([0,T], L^{1}(V))$ and the momentum equation \eqref{eq:penmom}
is satisfied in the weak sense
\begin{multline}\label{eq:weakpenmom}
\int_V\rho_m\mathbf{u}_m(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_V \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{V}\Big(
\rho_m \mathbf{u}_m \cdot \partial_t \varphi + \rho_m \mathbf{u}_m
\otimes \mathbf{u}_m : \nabla \varphi +
p(\rho_m)\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}_m) : \nabla \varphi - m\mathds{1}_{\{(V\setminus B_R)\cup \mathcal{S}\}}(\mathbf{u}_m-\mathbf{u}_{\infty})\cdot \varphi \Big)\ dx \ dt,
\end{multline}
for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times V)$.
\item The following energy inequality holds: for a.e. $\tau\in (0,T)$,
\begin{multline}\label{pen:energy}
\int_{V}\Big(\frac{1}{2}\rho_m|\mathbf{u}_m-\mathbf {u}_{\infty}|^2 +
E(\rho_m|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{V}\Big( \mathbb{S}(\nabla (\mathbf{u}_m-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_m-\mathbf{u}_{\infty}) +m\mathds{1}_{\{(V\setminus B_R)\cup \mathcal{S}\}} |\mathbf{u}_m-\mathbf{u}_{\infty}|^2\Big)\ dx\ dt\\
\leqslant
\int_{V}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx + \int_0^\tau
\int_{V} \rho_m\mathbf{u}_m\cdot\nabla\mathbf {u}_{\infty}
\cdot(\mathbf{u}_{\infty}- \mathbf{u}_m)\,{ d}x\,{ d}t - \int _0^{ \tau}\int _V\mathbb{S}(\nabla\mathbf{u}_{\infty}):\nabla(\mathbf{u}_m-\mathbf{u}_\infty)\,{ d} x\, { d}t.
\end{multline}
\end{itemize}
\end{Definition}
\begin{Remark}
We can use the regularization procedure in the transport theory by DiPerna and Lions \cite{DiPerna1989} to show that if $(\rho_m,\mathbf{u}_m)$ is a bounded energy weak solution of the problem \eqref{eq:penmass}--\eqref{penini} according to \cref{def:penbddenergy}, then $(\rho_m,\mathbf{u}_m)$ also satisfies a renormalized continuity equation in the weak sense, i.e.,
\begin{equation}\label{eq:penrenorm}
\partial_t b(\rho_m) + \operatorname{div}(b(\rho_m)\mathbf{u}_m) + \left(\rho_m b'(\rho_m)-b(\rho_m)\right)\operatorname{div}\mathbf{u}_m=0 \mbox{ in }\, \mathcal{D}'([0,T)\times \overline{V}),
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$.
\end{Remark}
We can prove an existence result for the penalized problem \eqref{eq:penmass}--\eqref{penini} by following the idea of Feireisl and Zhang \cite[Theorem 3.1]{MR2646821}:
\begin{Theorem}\label{thm:pen}
Assume that
$0<\rho_{\infty}<\overline{\rho}$, $\mathbf{u}_{\infty}\in C_c^1(\mathbb{R}^d)$ is defined by \eqref{d2}. Assume that the pressure satisfies the hypothesis \eqref{p-law}, the initial data satisfy
\begin{equation}\label{ini:m}
E(\rho_0|\rho_{\infty}) \in L^1(V),\quad 0\leqslant \rho_{0} < \overline{\rho},\quad
\frac{|\mathbf{q}_0-\rho_{0}\mathbf{a}_{\infty}|^2}{\rho_0}\mathds{1}_{\{\rho_0 > 0\}}\in L^1(V).
\end{equation}
Then the problem \eqref{eq:penmass}--\eqref{penini} admits at least one renormalized bounded energy weak solution $(\rho_m,\mathbf{u}_m)$ on $(0,T)\times V$.
\end{Theorem}
\begin{proof}
We consider a family of solutions $(\rho_{\varepsilon},\mathbf{u}_{\varepsilon})$ of an approximate problem with regularized pressure $p_{\varepsilon}(\rho)$:
\begin{align}
{\partial _t \rho_{\varepsilon} }+ \mbox {div }(\rho_{\varepsilon} \mathbf u_{\varepsilon}) & = 0 \quad \mbox{in}\quad (0,T)\times {V}\label{eq:epmass},\\
\partial _{t}(\rho_{\varepsilon}\mathbf u_{\varepsilon})+\mbox { div } \left( \rho_{\varepsilon}\mathbf u_{\varepsilon} \otimes \mathbf u_{\varepsilon}\right) +\nabla p_{\varepsilon}(\rho_{\varepsilon})-
\mbox { div } {\mathbb S(\nabla \mathbf u_{\varepsilon})} + m\mathds{1}_{\{(V\setminus B_R)\cup \mathcal{S}\}}(\mathbf{u}_{\varepsilon}-\mathbf{u}_{\infty}) & = 0
\quad \mbox{in}\quad (0,T)\times {V}\label{eq:epmom},\\
\mathbf u_{\varepsilon} & = 0
\quad \mbox{on}\quad (0,T)\times \partial{V},\label{eq:epbdary}
\end{align}
\begin{equation}\label{epini}
\rho_{\varepsilon}(0,x)=\rho_0(x),\quad \rho_{\varepsilon}\mathbf{u}_{\varepsilon} (0,x)=\mathbf{q}_0 (x),\quad x\in V ,
\end{equation}
where regularized pressure $p_{\varepsilon}$ is given by
\begin{align}\label{reg:pre}
p_{\varepsilon}(\rho)=\begin{cases}
p(\rho)&\mbox{ for }\rho \in [0,\overline{\rho}-\varepsilon]\\
p(\overline{\rho}-\varepsilon) + |(\rho - \overline{\rho}+\varepsilon)^{+}|^{\gamma}&\mbox{ for }\rho \in (\overline{\rho}-\varepsilon,\infty),
\end{cases}
\end{align}
for a certain exponent $\gamma>d$ (which is chosen sufficiently large).
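Let us note in passing (an elementary observation about the construction \eqref{reg:pre}, recorded here only for clarity) that $p_{\varepsilon}$ is continuous and nondecreasing on $[0,\infty)$ and grows like $\rho^{\gamma}$ at infinity, which is what makes the isentropic-type existence theory applicable: the two branches match at $\rho=\overline{\rho}-\varepsilon$ and
\begin{equation*}
p_{\varepsilon}\big((\overline{\rho}-\varepsilon)^{+}\big)=p(\overline{\rho}-\varepsilon)+0=p_{\varepsilon}\big((\overline{\rho}-\varepsilon)^{-}\big),
\qquad
p_{\varepsilon}(\rho)=p(\overline{\rho}-\varepsilon)+(\rho-\overline{\rho}+\varepsilon)^{\gamma}\sim\rho^{\gamma}\ \mbox{ as }\rho\rightarrow\infty.
\end{equation*}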
The idea is to obtain a solution of the problem \eqref{eq:penmass}--\eqref{penini} as an asymptotic limit of the family $(\rho_{\varepsilon},\mathbf{u}_{\varepsilon})$ as $\varepsilon\rightarrow 0$. Under the assumptions \eqref{ini:m} on the initial data and \eqref{d2} on $\mathbf{u}_{\infty}$, the problem \eqref{eq:epmass}--\eqref{epini} with the regularized pressure law \eqref{reg:pre} admits at least one weak solution $(\rho_{\varepsilon},\mathbf{u}_{\varepsilon})$, following the idea of \cite[Theorem 7.79, Page 425]{MR2084891}. Then we can follow \cite[Section 3]{MR2646821} to obtain bounds on the density $\{{\rho}_{\varepsilon}\}$, the velocity $\{\mathbf{u}_{\varepsilon}\}$ and the pressure $\{p_{\varepsilon}(\rho_{\varepsilon})\}$ that are uniform with respect to $\varepsilon$, together with the equi-integrability of the pressure family, so that we can pass to the limit $\varepsilon \rightarrow 0$ and, most importantly, conclude
\begin{equation*}
p_{\varepsilon}({\rho}_{\varepsilon})\rightarrow p(\rho)\mbox{ in }L^1((0,T)\times V).
\end{equation*}
\end{proof}
\subsection{Limit \texorpdfstring{$m\rightarrow \infty$}{}}
Let us denote $\Omega_R:= B_R \setminus \overline{\mathcal{S}}$ and we consider the following system:
\begin{align}
{\partial _t \rho }+ \mbox {div }(\rho \mathbf u) & = 0 \quad \mbox{in}\quad (0,T)\times {V}\label{eq:mmass},\\
\partial _{t}(\rho\mathbf u)+\mbox { div } \leqslantft( \rho\mathbf u \otimes \mathbf u\right) +\nabla p(\rho)-
\mbox { div } {\mathbb S(\nabla \mathbf u)} & = 0
\quad \mbox{in}\quad (0,T)\times \Omega_R \label{eq:mmom},\\
\mathbf u & = 0
\quad \mbox{on}\quad (0,T)\times \partial\mathcal{S},\label{eq:mbdary}\\
\mathbf{u} &=\mathbf{u}_{\infty} \mbox{ a.e. in }(0,T)\times [(V\setminus B_R)\cup \mathcal{S}],
\end{align}
\begin{equation}\label{mini}
\rho(0,x)=\rho_0(x),\quad \rho\mathbf{u} (0,x)=\mathbf{q}_0 (x),\quad x\in V .
\end{equation}
\begin{Definition}\label{def:mbddenergy}
We say that a couple $(\rho_R,\mathbf{u}_R)$ is a bounded energy weak solution of the problem \eqref{eq:mmass}--\eqref{mini} with \eqref{p-law} if the following conditions are satisfied:
\begin{itemize}
\item Functions $(\rho_R,\mathbf{u}_R)$ are such that
\begin{equation*}
0\leqslant \rho_R < \overline{\rho},\quad E(\rho_R|\rho_{\infty})\in
L^\infty(0,T;L^1(\Omega_R)),
\end{equation*}
\begin{equation*}
\rho_R|\mathbf{u}_R-\mathbf{u}_\infty|^2\in L^\infty(0,T;L^1(\Omega_R)),\quad (\mathbf{u}_R-
\mathbf{u}_\infty)\in L^2(0,T;W_0^{1,2}(\Omega_R)).
\end{equation*}
\item The function $\rho_R\in C_{\mbox{weak}}([0,T]; L^1(V))$ and the
equation of continuity \eqref{eq:mmass} is satisfied in the weak sense,
\begin{equation}\label{eq:weakmmass}
\int_V \rho_R(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_V \rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x=\int_0^\tau \int_{V}\left(\rho_R
\partial_t \varphi + \rho_R\mathbf{u}_R \cdot \nabla \varphi\right)
\ dx\ dt,
\end{equation}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{V})$.
\item The linear momentum $\rho_R\mathbf{u}_R\in C_{\rm weak}([0,T], L^{2}(V))$ and the momentum equation \eqref{eq:mmom}
is satisfied in the weak sense
\begin{multline}\label{eq:weakmmom}
\int_{\Omega_R}\rho_R\mathbf{u}_R(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_{\Omega_R} \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega_R}\Big(
\rho_R \mathbf{u}_R \cdot \partial_t \varphi + \rho_R \mathbf{u}_R
\otimes \mathbf{u}_R : \nabla \varphi +
p(\rho_R)\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}_R) : \nabla \varphi \Big)\ dx \ dt,
\end{multline}
for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega_R)$. Moreover,
\begin{equation}\label{umequinf}
\mathbf{u}_R=\mathbf{u}_{\infty} \mbox{ a.e. in }(0,T)\times [(V\setminus B_R)\cup \mathcal{S}].
\end{equation}
\item The following energy inequality holds: for a.e. $\tau\in (0,T)$,
\begin{multline}\label{m:energy}
\int_{\Omega_R}\Big(\frac{1}{2}\rho_R|\mathbf{u}_R-\mathbf {u}_{\infty}|^2 +
E(\rho_R|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{\Omega_R} \mathbb{S}(\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})\ dx\ dt\\
\leqslant
\int_{\Omega_R}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx + \int_0^\tau
\int_{B_1\setminus\mathcal{S}} \rho_R\mathbf{u}_R\cdot\nabla\mathbf {u}_{\infty}
\cdot(\mathbf{u}_{\infty}- \mathbf{u}_R)\,{ d}x\,{ d}t - \int _0^{ \tau}\int _{B_1\setminus\mathcal{S}}\mathbb{S}(\nabla\mathbf{u}_{\infty}):\nabla(\mathbf{u}_R-\mathbf{u}_\infty)\,{ d} x\, { d}t.
\end{multline}
\end{itemize}
\end{Definition}
We want to show the existence of the solution $(\rho_R,\mathbf{u}_R)$ according to \cref{def:mbddenergy} to the system \eqref{eq:mmass}--\eqref{mini} as a limit of the solution $(\rho_m,\mathbf{u}_m)$ to the system \eqref{eq:penmass}--\eqref{penini} as $m\rightarrow \infty$.
\begin{Theorem}\label{thm:m}
Assume that
$0<\rho_{\infty}<\overline{\rho}$, $\mathbf{u}_{\infty}\in C_c^1(\mathbb{R}^d)$ is defined by \eqref{d2}. Assume that the pressure satisfies the hypothesis \eqref{p-law}, the initial data satisfy
\begin{equation}\label{ini:R}
E(\rho_0|\rho_{\infty}) \in L^1(\Omega_R),\quad 0\leqslant \rho_{0} < \overline{\rho},\quad
\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0}\mathds{1}_{\{\rho_0 > 0\}}\in L^1(\Omega_R).
\end{equation}
Then the problem \eqref{eq:mmass}--\eqref{mini} admits at least one renormalized bounded energy weak solution $(\rho_R,\mathbf{u}_R)$ according to \cref{def:mbddenergy}.
\end{Theorem}
\begin{proof}
We denote by $c=c(R)$ a generic constant that may depend on $R$
but is independent of $m$. Since $(\rho_m,\mathbf{u}_m)$ satisfies the energy inequality \eqref{pen:energy}, we can deduce the following estimates: \begin{equation}\label{E1}
\displaystyle
\sup_{t\in (0,T)}\int _V \rho _m|\mathbf u_m-\mathbf u_\infty|^2
\mbox{d}x \leqslant c(R),
\end{equation}
\begin{equation}\label{E2}
\sup_{t\in (0,T)}\int _V E(\rho _m|\rho_{\infty})
\mbox{d}x \leqslant c(R), \end{equation}
\begin{equation}\label{E3}
\|\mathbf{u}_m -\mathbf{u}_{\infty}\|_{L^2((0,T)\times((V\setminus B_R)\cup\mathcal S ))}
\leqslant \frac{c(R)}{\sqrt{m}},
\end{equation}
\begin{equation}\label{E4}
\|\mathbf{u}_m \|_{L^2(0,T;W^{1,2}(V))}\leqslant c(R),
\end{equation}
where the estimates \eqref{E1}--\eqref{E3} are direct consequences of \eqref{pen:energy}, while the derivation of \eqref{E4} uses a Korn--Poincar\'{e} type inequality together with \eqref{pen:energy} (a sketch of this step is given after \eqref{bd:rhouu}). Moreover, by virtue of \eqref{E1}, \eqref{E4} and the boundedness of $\rho_m$ in $(0,T)\times V$, we have
\begin{equation}\label{bd:rhou}
\|\rho_m \mathbf{u}_m\|_{L^{\infty}(0,T;L^{2}(V))} + \|\rho_m \mathbf{u}_m\|_{L^2(0,T; L^{\frac{6q}{6+q}}(V))} \leqslant c(R),\mbox{ for any }1\leqslant q <\infty.
\end{equation}
\begin{equation}\label{bd:rhouu}
\|\rho_m |\mathbf{u}_m|^2\|_{L^2(0,T; L^{\frac{3}{2}}(V))} \leqslant c(R).
\end{equation}
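For the reader's convenience, here is a minimal sketch of the Korn--Poincar\'{e} step behind \eqref{E4}, assuming (as is standard for the Newtonian stress tensor $\mathbb S$ with shear viscosity $\mu>0$) a Korn-type inequality $\int_V \mathbb{S}(\nabla \mathbf{v}):\nabla \mathbf{v}\ dx \geqslant c(\mu)\|\nabla \mathbf{v}\|^2_{L^2(V)}$ for $\mathbf{v}\in W_0^{1,2}(V)$: taking $\mathbf{v}=\mathbf{u}_m-\mathbf{u}_{\infty}\in W_0^{1,2}(V)$ and using \eqref{pen:energy},
\begin{equation*}
c(\mu)\int_0^{T}\!\!\int_V |\nabla (\mathbf{u}_m-\mathbf{u}_{\infty})|^2\ dx\ dt \leqslant \int_0^{T}\!\!\int_V \mathbb{S}(\nabla (\mathbf{u}_m-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_m-\mathbf{u}_{\infty})\ dx\ dt \leqslant c(R),
\end{equation*}
and \eqref{E4} then follows from the Poincar\'{e} inequality in $W_0^{1,2}(V)$ together with $\|\mathbf{u}_{\infty}\|_{W^{1,2}(V)}\leqslant c(R)$.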
\underline{Step 1: Limit in the continuity equation and boundedness of the density.}
It follows from \cite[Section 3]{MR2646821} that
\begin{equation}\label{rhoCw}
\rho_m \mbox{ is bounded in }(0,T)\times V,\quad
\rho_m\in C_{\rm weak}([0,T]; L^q(V))\mbox{ for any }1\leqslant q <\infty,
\end{equation}
and
\begin{equation}\label{rhouCw}
\rho_m \mathbf{u}_m\in C_{\mbox{weak}}([0,T]; L^2(V)).
\end{equation}
Using the renormalized continuity equation \eqref{eq:penrenorm}, we conclude that
\begin{equation}\label{rhoC}
\rho_m\in C([0,T]; L^q(V))\mbox{ for any }1\leqslant q <\infty.
\end{equation}
Consequently, we infer from relations \eqref{rhoC} and \eqref{E4} that
\begin{align}
\rho_m\rightarrow \rho_R &\mbox{ weakly--}* \mbox{ in }L^{\infty}((0,T); L^q(V)),\label{rhom}\\
\mathbf{u}_m \rightarrow \mathbf{u}_R &\mbox{ weakly in }L^2(0,T;W^{1,2}(V))\label{um}.
\end{align}
We obtain from the continuity equation \eqref{eq:weakpenmass} and the estimate \eqref{bd:rhou} that $\{\rho_m\}$ is uniformly continuous in $W^{-1,2}(V)$ on $[0,T]$. Since it is also uniformly bounded in $L^q(V)$, we can apply the Arzel\`{a}--Ascoli theorem \cite[Lemma 6.2, page 301]{MR2084891} to conclude
\begin{equation}\label{m:rhoCw}
\rho_m \rightarrow \rho_R \mbox{ in } C_{\rm weak}([0,T]; L^q(V))\mbox{ for any }1\leqslant q <\infty.
\end{equation}
Furthermore, due to the compact imbedding $L^2(V)\hookrightarrow \hookrightarrow W^{-1,2}(V)$, we obtain
\begin{equation}\label{m:rhos}
\rho_m \rightarrow \rho_R \mbox{ strongly in } L^2(0,T; W^{-1,2}(V)).
\end{equation}
Moreover, the bound
\eqref{bd:rhou} and the convergences \eqref{rhom}--\eqref{m:rhos} imply
\begin{equation}\label{con:rhou}
\rho_m \mathbf{u}_m \rightarrow \rho_R\mathbf{u}_R \mbox{ weakly in }L^2(0,T; L^{\frac{6q}{6+q}}(V)) \mbox{ and weakly}-* \mbox{ in }L^{\infty}(0,T;L^{2}(V)).
\end{equation}
Consequently, the convergences \eqref{rhom}--\eqref{con:rhou} enable us to pass the limit $m\rightarrow \infty$ in the equation \eqref{eq:weakpenmass} and we obtain:
\begin{equation*}
\int_V \rho_R(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_V \rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x=\int_0^\tau \int_{V}\left(\rho_R
\partial_t \varphi + \rho_R\mathbf{u}_R \cdot \nabla \varphi\right)
\ dx\ dt,
\end{equation*}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{V})$.
Now we need to prove that $\rho_R$ is bounded. We already know from \cref{thm:pen} that
\begin{equation}\label{bdd:rhom}
0\leqslant \rho_m < \overline{\rho}\quad\mbox{ in }\quad (0,T)\times V.
\end{equation}
Moreover, we know from \cite[Section 4.2]{MR3912678} that
\begin{equation}\label{rhom:ant}
\int _V |\overline{\rho}-\rho_m|^{-\beta+1} \ dx \leqslant c(R)\mbox{ for all }t\in [0,T]. \end{equation}
We can use \eqref{m:rhoCw}--\eqref{m:rhos} to conclude
\begin{equation}\label{bd:rhoR1}
0\leqslant \rho_R \leqslant \overline{\rho}.
\end{equation}
Using \eqref{E2} and taking limit $m\rightarrow\infty$, we have
\begin{equation}\label{ER}
\sup_{t\in (0,T)}\int _V E(\rho _R|\rho_{\infty})
\mbox{d}x \leqslant c(R). \end{equation}
Observe that
\begin{equation*}
\lim_{\rho_R\rightarrow \overline{\rho}} \frac{E(\rho_R|\rho_{\infty})}{H(\rho_R)} = 1.
\end{equation*}
Thus, there exists $\delta>0$ such that
\begin{equation}\label{rel:HR}
\frac{1}{2}H(\rho_R)\leqslant E(\rho_R|\rho_{\infty})\leqslant \frac{3}{2}H(\rho_R),\quad\forall\ \rho_R\in [\overline{\rho}-\delta,\overline{\rho}].
\end{equation}
Moreover, the behaviour of pressure \eqref{p-law} and definition of the function $H$ in \eqref{def:H} imply that
\begin{equation}\label{HR:beh}
H(\rho_R)\sim _{\rho_R\rightarrow \overline{\rho}^-} |\overline{\rho}-\rho_R|^{-\beta+1}\mbox{ for some }\beta > 5/2.
\end{equation}
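This is a short verification of \eqref{HR:beh}, carried out under the model behaviour $p(s)\approx c\,(\overline{\rho}-s)^{-\beta}$ near $\overline{\rho}$ suggested by \eqref{p-law} (the only feature of \eqref{p-law} used here): from \eqref{def:H},
\begin{equation*}
H(\rho_R)=\rho_R\int_1^{\rho_R}\frac{p(s)}{s^2}\,ds\;\sim\;\frac{c\,\rho_R}{\overline{\rho}^{\,2}}\int^{\rho_R}\frac{ds}{(\overline{\rho}-s)^{\beta}}\;\sim\;\frac{c}{\overline{\rho}\,(\beta-1)}\,|\overline{\rho}-\rho_R|^{-\beta+1}\quad\mbox{as }\rho_R\rightarrow\overline{\rho}^-.
\end{equation*}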
The relations \eqref{ER}, \eqref{rel:HR} and \eqref{HR:beh} exclude the possibility of the equality $\rho_R=\overline{\rho}$ on a set of positive measure in \eqref{bd:rhoR1}: on such a set $E(\rho_R|\rho_{\infty})$ would be infinite (since $\beta>1$), contradicting \eqref{ER}. Hence we can conclude
\begin{equation}\label{bd:rhoR2}
0\leqslant \rho_R < \overline{\rho}.
\end{equation}
\underline{Step 2: Uniform integrability of the pressure.}
In order to pass to the limit $m\rightarrow\infty$ in the weak formulation of the momentum equation \eqref{eq:weakpenmom}, we need a bound on the pressure that is uniform with respect to the parameter $m$; this is the aim of this step. We choose cut-off functions $\eta\in C_c^{\infty}(0,T)$ with $0\leqslant \eta \leqslant 1$ and $\psi\in C_c^1(B_R\setminus \mathcal{S})$ with $0\leqslant \psi \leqslant 1$, and consider the test function
\begin{equation}\label{testm}
\varphi=\eta(t)\mathcal{B}\left(\psi\rho_m -\psi \alpha_m\right),\quad\mbox{ where }\quad\alpha_m=\frac{\int_{B_R\setminus \mathcal{S}} \psi\rho_m}{\int_{B_R\setminus \mathcal{S}} \psi},
\end{equation}
in the weak formulation of the momentum equation \eqref{eq:weakpenmom}, where $\mathcal{B}$ is the Bogovskii operator, which assigns to each $g\in L^p(B_R\setminus \mathcal{S})$ with $\int\limits_{B_R\setminus \mathcal{S}} g\ dx=0$ a solution of the problem
\begin{equation*}
\operatorname{div} \mathcal{B}[g] = g \mbox{ in }B_R\setminus \mathcal{S},\quad \mathcal{B}[g] =0 \mbox{ on }\partial B_R \cup \partial\mathcal{S}.
\end{equation*}
Here $\mathcal{B}$ is a bounded linear operator from $L^p(B_R\setminus \mathcal{S})$ to $W^{1,p}_0(B_R\setminus \mathcal{S})$, for any $1< p < \infty$ and it can be extended as a bounded linear operator on $[W^{1,p}(B_R\setminus \mathcal{S})]'$ with values in $L^{p'}(B_R\setminus \mathcal{S})$ for any $1< p < \infty$.
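Note (an elementary check, recorded here only to justify applying $\mathcal{B}$ to the argument in \eqref{testm}) that this argument indeed has zero mean:
\begin{equation*}
\int_{B_R\setminus \mathcal{S}}\left(\psi\rho_m-\psi\alpha_m\right)dx
=\int_{B_R\setminus \mathcal{S}}\psi\rho_m\,dx-\frac{\int_{B_R\setminus \mathcal{S}}\psi\rho_m\,dx}{\int_{B_R\setminus \mathcal{S}}\psi\,dx}\int_{B_R\setminus \mathcal{S}}\psi\,dx=0.
\end{equation*}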
We test the momentum equation \eqref{eq:weakpenmom} with $\varphi$ defined in \eqref{testm} to obtain the following identity:
\begin{equation*}
\int\limits_0^T\int\limits_{V} \eta p(\rho_m) (\psi\rho_m - \psi\alpha_m)\ dx\ dt = \sum\limits_{i=1}^{5} I_i,
\end{equation*}
where
\begin{equation*}
I_1= -\int\limits_0^T \partial_t\eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}(\psi\rho_m - \psi\alpha_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_2= \int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}(\operatorname{div}(\rho_m\mathbf{u}_m\psi))\ dx\ dt,
\end{equation*}
\begin{equation*}
I_3= -\int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}\left(\rho_m\mathbf{u}_m\cdot \nabla \psi - \frac{\psi}{\int_V \psi \ dx}\int_V \rho_m\mathbf{u}_m \cdot \nabla\psi \ dx\right)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_4= -\int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m \otimes \mathbf{u}_m : \nabla \mathcal{B}(\psi\rho_m - \psi\alpha_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_5= \int\limits_0^T \eta \int\limits_{V} \mathbb{S}(\nabla\mathbf{u}_m) : \nabla \mathcal{B}(\psi\rho_m - \psi\alpha_m)\ dx\ dt.
\end{equation*}
Using the uniform bounds obtained in \eqref{E1}, \eqref{E2}, and \eqref{E4}--\eqref{bd:rhou}, along with the boundedness of the operator $\mathcal{B}$ from $L^p(B_R\setminus \mathcal{S})$ to $W^{1,p}_0(B_R\setminus \mathcal{S})$ (see \cite[Chapter 3]{MR1284205}), we find that the integrals $I_i$, $i=1,\ldots , 5$, are uniformly bounded with respect to $m$. Consequently, we have
\begin{equation}\label{12:43}
\left|\int\limits_0^T\int\limits_{V} \eta p(\rho_m) (\psi\rho_m - \psi\alpha_m)\ dx\ dt \right| \leqslant c ,
\end{equation}
where $c$ is independent of parameter $m$. We also know from \eqref{ini:m} that
\begin{equation*}
\frac{1}{|V|}\int\limits_V \rho_m \ dx = \frac{1}{|V|}\int\limits_V \rho_0 \ dx = \mathcal{M}_{0,V} < \overline{\rho}.
\end{equation*}
We can write
\begin{equation}\label{decom:J1J2}
\int\limits_0^T\int\limits_{V} \eta p(\rho_m) (\psi\rho_m - \psi\alpha_m)\ dx\ dt = J_1 + J_2,
\end{equation}
where
\begin{equation*}
J_1= \int\limits_0^T\int\limits_{\{\rho_m< (\mathcal{M}_{0,V}+\overline{\rho})/2\}} \eta p(\rho_m) (\psi\rho_m - \psi\alpha_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
J_2= \int\limits_0^T\int\limits_{\{\rho_m\geqslant (\mathcal{M}_{0,V}+\overline{\rho})/2\}} \eta p(\rho_m) (\psi\rho_m - \psi\alpha_m)\ dx\ dt.
\end{equation*}
Since $0\leqslant\eta,\psi\leqslant 1$, $|\psi\rho_m-\psi\alpha_m|\leqslant\overline{\rho}$, and the increasing function $p$ is bounded on $[0,(\mathcal{M}_{0,V}+\overline{\rho})/2]$, the integral $J_1$ is bounded, $|J_1|\leqslant T\,|V|\,\overline{\rho}\,p\big((\mathcal{M}_{0,V}+\overline{\rho})/2\big)$, while $J_2$ can be bounded from below in the following way:
\begin{equation*}
J_2 \geqslant \frac{\overline{\rho}-\mathcal{M}_{0,V}}{2}\int\limits_0^T \int\limits_{\{\rho_m\geqslant (\mathcal{M}_{0,V}+\overline{\rho})/2\}}\eta\psi p(\rho_m)\ dx\ dt.
\end{equation*}
Thus, we conclude from estimate \eqref{12:43}, decomposition \eqref{decom:J1J2} and bounds of $J_1$, $J_2$ that
for any compact set $K\subset B_R\setminus \mathcal{S}$:
\begin{equation}\label{bdd:pm}
\|p(\rho_m)\|_{L^1(0,T;L^1(K))} \leqslant c(K).
\end{equation}
Since the pressure satisfies the hypothesis \eqref{p-law}, in particular, for $\delta>0$ we also have \begin{equation}\label{rhom:beta}
\int\limits_0^T\int\limits_{K \cap \{0\leqslant \rho_m\leqslant \overline{\rho}-\delta\}} |\overline{\rho}-\rho_m|^{-\beta}\ dx\ dt \leqslant c(K).
\end{equation}
\underline{Step 3: Equi-integrability of the pressure.} The $L^1(0,T;L^1(K))$ boundedness of the pressure family $\{p(\rho_m)\}$ obtained in \eqref{bdd:pm} is not sufficient to pass to the limit in the pressure term. We know from the Dunford--Pettis theorem that we need to establish equi-integrability of the pressure family $\{p(\rho_m)\}$ to obtain the weak convergence of the pressure. As in the previous step, we fix cut-off functions $\eta\in C_c^{\infty}(0,T)$ with $0\leqslant \eta \leqslant 1$ and $\psi\in C_c^1(B_R\setminus \mathcal{S})$ with $0\leqslant \psi \leqslant 1$. We consider the following test functions
\begin{equation}\label{test1m}
\varphi_m=\eta(t)\mathcal{B}\left(\psi b(\rho_m) - \alpha_m\right),\quad\mbox{ where }\quad\alpha_m=\frac{\int_{B_R\setminus \mathcal{S}} \psi b(\rho_m)}{|{B_R\setminus \mathcal{S}}|}
\end{equation}
with
\begin{align}\label{def:b}
b(\rho)=
\begin{cases}
\log (\overline{\rho}-\rho)&\mbox{ if }\rho \in [0, \overline{\rho}-\delta),\\
\log \delta &\mbox{ if }\rho \geqslant \overline{\rho}-\delta,
\end{cases}
\end{align}
for some $\delta >0$. Observe that
\begin{equation*}
b'(\rho)=-\frac{1}{\overline{\rho}-\rho}\,\mathds{1}_{[0, \overline{\rho}-\delta)}(\rho).
\end{equation*}
We can use estimate \eqref{rhom:ant}, the boundedness \eqref{bdd:rhom} of $\rho_m$, estimate \eqref{rhom:beta} and the expressions of $b(\rho)$, $b'(\rho)$ to obtain: for any $1\leqslantqslant p< \infty$ and any compact set $K\subset B_R\setminus \mathcal{S}$,
\begin{align}
\|b(\rho_m)\|_{L^{\infty}(0,T;L^p(B_R\setminus \mathcal{S}))} &\leqslant c(p)\label{b1},\\
\|\rho_m b'(\rho_m)-b(\rho_m)\|_{L^{\beta}((0,T)\times K)} &\leqslant c(K)\label{b2},\\
\|\rho_m b'(\rho_m)-b(\rho_m)\|_{L^{\infty}(0,T; L^{\beta-1}(B_R\setminus \mathcal{S}))} &\leqslant c\label{b3}.
\end{align}
We consider the momentum equation \eqref{eq:weakpenmom} with $\varphi_m$ (defined in \eqref{test1m}) as a test function and use renormalized equation \eqref{eq:penrenorm} to obtain the following identity:
\begin{equation*}
\int\limits_0^T \eta\int\limits_{B_R\setminus \mathcal{S}}\psi p(\rho_m) b(\rho_m)\ dx\ dt = \sum\limits_{i=1}^{7} I_i,
\end{equation*}
where
\begin{equation*}
I_1= \frac{1}{|B_R\setminus \mathcal{S}|}\int\limits_0^T \eta (t) \int\limits_{B_R\setminus \mathcal{S}} \psi b(\rho_m)\ dx \int\limits_V p( \rho_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_2= \int\limits_0^T \partial_t\eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}(\psi b(\rho_m) - \alpha_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_3= \int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}(\operatorname{div}(\rho_m\mathbf{u}_m\psi))\ dx\ dt,
\end{equation*}
\begin{equation*}
I_4= -\int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}\left(b(\rho_m)\mathbf{u}_m\cdot \nabla \psi - \frac{1}{|B_R \setminus \mathcal{S}|}\int_V b(\rho_m)\mathbf{u}_m \cdot \nabla\psi \ dx\right)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_5= -\int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m\cdot \mathcal{B}\left[\psi(\rho_mb'(\rho_m)-b(\rho_m))\operatorname{div}\mathbf{u}_m -\frac{1}{|B_R \setminus \mathcal{S}|}\int\limits_{B_R\setminus \mathcal{S}} \psi(\rho_mb'(\rho_m)-b(\rho_m))\operatorname{div}\mathbf{u}_m \ dx\right]\ dx\ dt,
\end{equation*}
\begin{equation*}
I_6= -\int\limits_0^T \eta \int\limits_{V} \rho_m\mathbf{u}_m \otimes \mathbf{u}_m : \nabla \mathcal{B}(\psi b(\rho_m) - \alpha_m)\ dx\ dt,
\end{equation*}
\begin{equation*}
I_7= \int\limits_0^T \eta \int\limits_{V} \mathbb{S}(\nabla\mathbf{u}_m) : \nabla \mathcal{B}(\psi b(\rho_m) - \alpha_m)\ dx\ dt.
\end{equation*}
We follow \cite[Section 4.4]{MR3912678}, \cite[Section 3.5]{MR2646821} and use the bounds \eqref{E1}, \eqref{E2}, \eqref{E4}--\eqref{bd:rhou}, \eqref{b1}--\eqref{b3} along with the boundedness of the operator $\mathcal{B}$ from $[W^{1,q}(B_R\setminus \mathcal{S})]^{*}$ to $L^{q'}(B_R\setminus \mathcal{S})$ (see \cite{MR2240056}) to conclude that for any compact set $K\subset B_R\setminus \mathcal{S}$:
\begin{equation}\label{eqint:pm}
\|p(\rho_m)b(\rho_m)\|_{L^1(0,T;L^1(K))} \leqslant c(K).
\end{equation}
The estimate \eqref{eqint:pm}
implies that the sequence $\{p(\rho_m)\}$ is equi-integrable in $L^1((0,T)\times K)$: indeed, $|b(\rho_m)|=|\log(\overline{\rho}-\rho_m)|\rightarrow\infty$ as $\rho_m\rightarrow\overline{\rho}^-$, while $p(\rho_m)$ is large only where $\rho_m$ is close to $\overline{\rho}$. Consequently, by the Dunford--Pettis theorem, we also have
\begin{equation}\label{prholim}
p(\rho_m) \rightarrow \overline{p(\rho_R)} \quad \mbox{weakly in }L^1((0,T)\times K)
\end{equation}
for any compact set $K\subset B_R\setminus \mathcal{S}$ up to a subsequence.
\underline{Step 4: Limit in the momentum equation.}
We already have from \eqref{con:rhou} that \begin{equation*}
\rho_m \mathbf{u}_m \rightarrow \rho_R\mathbf{u}_R \mbox{ weakly in }L^2(0,T; L^{\frac{6q}{6+q}}(V)) \mbox{ and weakly}-* \mbox{ in }L^{\infty}(0,T;L^{2}(V)).
\end{equation*}
We use the bounds \eqref{bd:rhou}--\eqref{bd:rhouu} and \eqref{prholim} in the momentum equation \eqref{eq:weakpenmom} to obtain the equicontinuity of the sequence $t\mapsto \int\limits_{V} \rho_m\mathbf{u}_m \phi$, $\phi\in C_c^1(V)$ in $C[0,T]$. Moreover, $\rho_m\mathbf{u}_m$ is uniformly bounded in $L^2(V)$. We apply Arzela-Ascoli theorem \cite[Lemma 6.2, page 301]{MR2084891} to have
\begin{equation}\label{m:rhouCw}
\rho_m\mathbf{u}_m\rightarrow \rho_R\mathbf{u}_R\quad \mbox{ in }\quad C_{\mbox{weak}}([0,T]; L^2(V)).
\end{equation}
Moreover, the compact embedding of $L^2(V)\hookrightarrow\hookrightarrow W^{-1,2}(V)$ ensures that we can apply \cite[Lemma 6.4, page 302]{MR2084891} to have
\begin{equation}\label{m:rhoustrl}
\rho_m\mathbf{u}_m\rightarrow \rho_R\mathbf{u}_R\quad \mbox{ strongly in }L^{2}(0,T; W^{-1,2}(V)).
\end{equation}
The estimates \eqref{bd:rhouu}, \eqref{um} and weak compactness result \cite[Lemma 6.6, page 304]{MR2084891} yield
\begin{equation}\label{con:rhouum}
\rho_m \mathbf{u}_m \otimes \mathbf{u}_m \rightarrow \rho_R\mathbf{u}_R \otimes \mathbf{u}_R \mbox{ weakly in }L^2(0,T; L^{\frac{3}{2}}(V)).
\end{equation}
Let us recall the notation: $$\Omega_R:= B_R \setminus \overline{\mathcal{S}}.$$
We take limit $m \rightarrow \infty$ in
the momentum equation \eqref{eq:weakpenmom} and use \eqref{rhom}, \eqref{con:rhou}, \eqref{m:rhouCw}--\eqref{con:rhouum} to conclude that: for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega_R)$,
\begin{multline}\label{limit-m}
\int_{\Omega_R}\rho_R\mathbf{u}_R(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_{\Omega_R} \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega_R}\Big(
\rho_R \mathbf{u}_R \cdot \partial_t \varphi + \rho_R \mathbf{u}_R
\otimes \mathbf{u}_R : \nabla \varphi +
\overline{p(\rho_R)}\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}_R) : \nabla \varphi \Big)\ dx \ dt.
\end{multline}
Moreover, we know from the estimate \eqref{E3} that
\begin{equation}\label{uRuinf}
\mathbf{u}_R=\mathbf{u}_{\infty} \mbox{ a.e. in }(0,T)\times [(V\setminus B_R)\cup \mathcal{S}].
\end{equation}
We want to show that
\begin{equation*}
\overline{p(\rho_R)}=p(\rho_R).
\end{equation*}
It is enough to establish that the family of densities $\{\rho_m\}$ converges almost everywhere in $B_R \setminus \mathcal{S}$. In order to show this, we need to use the idea of \textit{effective viscous flux} which is developed in \cite{MR1637634, MR1867887} in the context of isentropic compressible fluid (see also \cite[Proposition 7.36, Page 338]{MR2084891}). Let us denote by $\nabla \Delta^{-1}$ the pseudodifferential operator of the Fourier symbol $\frac{i\xi}{|\xi|^2}$. We use the test function
\begin{equation*}
\varphi(t,x)=\eta (t)\psi (x) \nabla \Delta^{-1} [\rho_m \psi],\quad \eta\in C_c^{\infty}(0,T),\ 0\leqslant \eta \leqslant 1,\ \psi\in C_c^1(B_R\setminus \mathcal{S}),\ 0\leqslant \psi \leqslant 1
\end{equation*}
in the $m$-th level approximating momentum equation \eqref{eq:weakpenmom} and the test function
\begin{equation*}
\varphi(t,x)=\eta (t)\psi (x) \nabla \Delta^{-1} [\rho_R \psi],\quad \eta\in C_c^{\infty}(0,T),\ 0\leqslant \eta \leqslant 1,\ \psi\in C_c^1(B_R\setminus \mathcal{S}),\ 0\leqslant \psi \leqslant 1
\end{equation*}
in the limiting momentum equation \eqref{limit-m}. We subtract the two resulting identities and take the limit $m\rightarrow \infty$. This limiting procedure is the main step in the barotropic Navier--Stokes case (see \cite{MR1637634}, \cite[Lemma 3.2]{MR1867887}). It has been adapted in \cite[Section 3.6]{MR2646821}, \cite[Section 4.6]{MR3912678} to the context of the hard pressure law, and we follow it here to obtain:
\begin{equation}\label{eq:EVF}
(2\mu+\lambda) \left(\overline{\rho_R\operatorname{div}\mathbf{u}_R}-\rho_R\operatorname{div}\mathbf{u}_R \right) = \left(\overline{p(\rho_R) \rho_R}-\overline{p(\rho_R)}\rho_R\right),
\end{equation}
where the quantity $\left(p(\rho)-(2\mu+\lambda)\operatorname{div}\mathbf{u}\right)$ is referred to as the \textit{effective viscous flux}. In the above relation and in the sequel, the overlined quantities denote the weak $L^1$-limits of the corresponding sequences.
We already established in Step 1 that the bounded density $\rho_R\in C_{\mbox{weak}}([0,T]; L^1(V))$ satisfies the continuity equation
\eqref{eq:weakmmass}. We can use the regularization procedure in the transport theory by DiPerna and Lions \cite{DiPerna1989} to show that $(\rho_R,\mathbf{u}_R)$ also satisfies a renormalized continuity equation in the weak sense, i.e.,
\begin{equation}\label{eq:renormR}
\partial_t b(\rho_R) + \operatorname{div}(b(\rho_R)\mathbf{u}_R) + \left(\rho_R b'(\rho_R)-b(\rho_R)\right)\operatorname{div}\mathbf{u}_R=0 \mbox{ in }\, \mathcal{D}'([0,T)\times \overline{V}),
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$. Now we use $b(\rho_R)=\rho_R\log\rho_R$ in \eqref{eq:renormR}, subtract the resulting equation from the equation \eqref{eq:penrenorm} with $b(\rho_m)=\rho_m\log\rho_m$, and let $m\rightarrow \infty$ to get
\begin{equation}\label{renorm:diff}
\int\limits_{V} \left(\overline{\rho_R \log \rho_R}-\rho_R \log \rho_R\right)(\tau,\cdot)\ dx = \int\limits_0^{\tau}\int\limits_{V} \left(\rho_R\operatorname{div}\mathbf{u}_R - \overline{\rho_R\operatorname{div}\mathbf{u}_R}\right)\ dx\ dt .
\end{equation}
Now we use the relation \eqref{eq:EVF} in the identity \eqref{renorm:diff} to obtain
\begin{equation}\label{13:44}
\int\limits_{V} \left(\overline{\rho_R \log \rho_R}-\rho_R \log \rho_R\right)(\tau,\cdot)\ dx = \frac{1}{2\mu+\lambda} \int\limits_0^{\tau}\int\limits_{V} \left(\overline{p(\rho_R)}\rho_R-\overline{p(\rho_R) \rho_R}\right)\ dx\ dt .
\end{equation}
Since the function $\rho \mapsto \rho\log\rho$ is a convex lower semi-continuous function on $[0,\infty)$ with
\begin{equation*}
\rho_m \rightarrow \rho_R\quad\mbox{ weakly in }\quad L^1(V),\quad
\rho_m\log\rho_m \rightarrow \overline{\rho_R\log\rho_R}\quad\mbox{ weakly in }\quad L^1(V),
\end{equation*}
we can apply the result \cite[Corollary 3.33, Page 184]{MR2084891} from convex analysis to conclude
\begin{equation}\label{c1}
\rho_R\log\rho_R \leqslant \overline{\rho_R\log\rho_R}\quad\mbox{ a.e. in }V.
\end{equation}
Moreover, we know from the assumption \eqref{p-law} on pressure that $p(\rho)$ is strictly increasing on $(0,\infty)$, that is $p'(\rho)>0\ \forall\ \rho>0$. We can use the result \cite[Lemma 3.35, Page 186]{MR2084891} on weak convergence and monotonicity to get:
\begin{equation}\label{c2}
\overline{p(\rho_R)\rho_R} \geqslant \overline{p(\rho_R)}\rho_R\quad\mbox{ a.e. in }V.
\end{equation}
Thus, the relations \eqref{c1}--\eqref{c2} and the identity \eqref{13:44}, whose left-hand side is nonnegative by \eqref{c1} while its right-hand side is nonpositive by \eqref{c2}, yield
\begin{equation*}
\rho_R\log\rho_R = \overline{\rho_R\log\rho_R}\quad\mbox{ a.e. in }V.
\end{equation*}
Since the function $\rho \mapsto \rho\log\rho$ is strictly convex on $[0,\infty)$, we obtain that
\begin{equation}\label{lim:rho}
\rho_m \rightarrow {\rho_R}\quad\mbox{ a.e. in }(0,T)\times V \mbox{ and in }L^p((0,T)\times V)\mbox{ for }1\leqslant p< \infty.
\end{equation}
We use \eqref{lim:rho} together with \eqref{prholim} to conclude that for any compact set $K\subset B_R\setminus \mathcal{S}$:
\begin{equation*}
p(\rho_m) \rightarrow {p(\rho_R)} \quad \mbox{ a.e. in }(0,T)\times V \mbox{ and in }L^1((0,T)\times K).
\end{equation*}
In particular, we have $\overline{p(\rho_R)}=p(\rho_R)$. The substitution of this relation in \eqref{limit-m} yields the limiting momentum equation: for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega_R)$,
\begin{multline*}
\int_{\Omega_R}\rho_R\mathbf{u}_R(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_{\Omega_R} \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega_R}\Big(
\rho_R \mathbf{u}_R \cdot \partial_t \varphi + \rho_R \mathbf{u}_R
\otimes \mathbf{u}_R : \nabla \varphi +
{p(\rho_R)}\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}_R) : \nabla \varphi \Big)\ dx \ dt.
\end{multline*}
Hence, we have verified the momentum equation \eqref{eq:weakmmom} and it only remains to establish the energy inequality \eqref{m:energy}.
\underline{Step 5: Energy inequality.} Let us recall the energy inequality \eqref{pen:energy} for the $m$-th level penalized problem: for a.e. $\tau\in (0,T)$,
\begin{multline}\label{pen:energyag}
\int_{V}\Big(\frac{1}{2}\rho_m|\mathbf{u}_m-\mathbf {u}_{\infty}|^2 +
E(\rho_m|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{V}\Big( \mathbb{S}(\nabla (\mathbf{u}_m-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_m-\mathbf{u}_{\infty}) +m\mathds{1}_{\{(V\setminus B_R)\cup \mathcal{S}\}} |\mathbf{u}_m-\mathbf{u}_{\infty}|^2\Big)\ dx\ dt\\
\leqslant
\int_{V}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx + \int_0^\tau
\int_{V} \rho_m\mathbf{u}_m\cdot\nabla\mathbf {u}_{\infty}
\cdot(\mathbf{u}_{\infty}- \mathbf{u}_m)\,{ d}x\,{ d}t - \int _0^{ \tau}\int _V\mathbb{S}(\nabla\mathbf{u}_{\infty}):\nabla(\mathbf{u}_m-\mathbf{u}_\infty)\,{ d} x\, { d}t.
\end{multline}
We use
\begin{itemize}
\item the properties \eqref{d2} of $\mathbf{u}_{\infty}$,
\item the relation \eqref{uRuinf}, that is $\mathbf{u}_R=\mathbf{u}_{\infty}$ on the set $[(V\setminus B_R)\cup \mathcal{S}]$,
\item the convergences obtained for $\rho_m$, $\mathbf{u}_m$, $\rho_m\mathbf{u}_m$, \item the lower semi-continuity of the convex functionals at the left-hand side of \eqref{pen:energyag},
\end{itemize}
and take the limit $m \rightarrow \infty$ in \eqref{pen:energyag} to obtain
\begin{multline*}
\int_{\Omega_R}\Big(\frac{1}{2}\rho_R|\mathbf{u}_R-\mathbf {u}_{\infty}|^2 +
E(\rho_R|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{\Omega_R} \mathbb{S}(\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})\ dx\ dt\\
\leqslant
\int_{\Omega_R}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx + \int_0^\tau
\int_{B_1\setminus\mathcal{S}} \rho_R\mathbf{u}_R\cdot\nabla\mathbf {u}_{\infty}
\cdot(\mathbf{u}_{\infty}- \mathbf{u}_R)\,{ d}x\,{ d}t - \int _0^{ \tau}\int _{B_1\setminus\mathcal{S}}\mathbb{S}(\nabla\mathbf{u}_{\infty}):\nabla(\mathbf{u}_R-\mathbf{u}_\infty)\,{ d} x\, { d}t,
\end{multline*}
for a.e. $\tau\in (0,T)$. Thus we have established the existence of at least one renormalized bounded energy weak solution $(\rho_R,\mathbf{u}_R)$ of the problem \eqref{eq:mmass}--\eqref{mini} according to \cref{def:mbddenergy}.
\end{proof}
\section{Proof of the main result}\label{proof}
In this section, we want to show the existence of a solution $(\rho,\mathbf{u})$ according to \cref{def:bddenergy} to the system \eqref{eq:mass}--\eqref{behaviour} as a limit of the solutions $(\rho_R,\mathbf{u}_R)$ to the system \eqref{eq:mmass}--\eqref{mini} as $R\rightarrow \infty$. The existence of such a sequence $\{(\rho_R,\mathbf{u}_R)\}$ for the problem \eqref{eq:mmass}--\eqref{mini} on $\Omega_R=B_R\setminus \overline{\mathcal{S}}$ is guaranteed by \cref{thm:m}. In order to prove \cref{thm:main}, we follow the idea of \cite[Section 7.11]{MR2084891} concerning the method of ``invading domains'' to obtain existence in an unbounded domain.
\begin{proof}[Proof of \cref{thm:main}]
Let us define
\begin{align}\label{def:Uinf}
\mathbf{U}_{\infty} := \begin{cases}
\mathbf{u}_{\infty}&\mbox{ in }B_1,\\
\mathbf{a}_{\infty}&\mbox{ in }\mathbb{R}^3\setminus B_1,
\end{cases}
\end{align}
where $\mathbf{u}_{\infty}$ is given by \eqref{d2}.
Now we extend $\rho_R$ by $\rho_{\infty}$ and $\mathbf{u}_R$ by $\mathbf{U}_{\infty}$ outside $\Omega_{R}$ and, with an abuse of notation, we still denote the new functions by $\rho_R$, $\mathbf{u}_R$ respectively.
\underline{Step 1: Some convergences.}
We denote by $C$ a generic constant that is independent of $R$ but may depend on $\mathbf{U}_{\infty}$ and the initial data $(\rho_0,\mathbf{q}_0)$. Since $(\rho_R,\mathbf{u}_R)$ satisfies the energy inequality \eqref{m:energy}, we can deduce the following estimates: \begin{equation}\label{est:R1}
\displaystyle
\sup_{t\in (0,T)}\int _{\mathbb{R}^3} \rho _R|\mathbf u_R-\mathbf{U}_{\infty}|^2
\mbox{d}x \leqslant C,
\end{equation}
\begin{equation}\label{est:R2}
\sup_{t\in (0,T)}\int _{\mathbb{R}^3} E(\rho _R|\rho_{\infty})
\mbox{d}x \leqslant C, \end{equation}
\begin{equation}\label{est:R3}
\|\mathbf{u}_R - \mathbf{U}_{\infty} \|_{L^2(0,T;W^{1,2}(\mathbb{R}^3))}\leqslant C.
\end{equation}
Recall that
\begin{equation*}
E(\rho_R|\rho_{\infty}) =
H(\rho_R)-H'(\rho_{\infty})(\rho_R-\rho_{\infty})-H(\rho_{\infty}),
\end{equation*}
where
\begin{equation*}
H(\rho) = \rho \int _1^{\rho} \frac{p(s)}{s^2}ds.
\end{equation*}
Then, by a second-order Taylor expansion of $H$ around $\rho_{\infty}$, we have
\begin{equation}\label{est:ER}
E(\rho_R|\rho_{\infty}) =\frac{1}{2}H''(\xi)|\rho_R-\rho_{\infty}|^2=\frac{p'(\xi)}{2\xi}|\rho_R-\rho_{\infty}|^2, \mbox{ for some }\xi\in (0,\overline{\rho}).
\end{equation}
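For completeness, the identity $H''(\rho)=p'(\rho)/\rho$ used above follows by differentiating \eqref{def:H} twice (a short computation):
\begin{equation*}
H'(\rho)=\int_1^{\rho}\frac{p(s)}{s^{2}}\,ds+\frac{p(\rho)}{\rho},\qquad
H''(\rho)=\frac{p(\rho)}{\rho^{2}}+\frac{p'(\rho)\rho-p(\rho)}{\rho^{2}}=\frac{p'(\rho)}{\rho}.
\end{equation*}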
Now, the assumption \eqref{p-law} on pressure \begin{equation*}
p'(\rho)>0\ \forall\ \rho>0,\quad \liminf_{\rho\rightarrow 0}\frac{p'(\rho)}{\rho}>0,
\end{equation*}
along with the relations \eqref{est:R2},
\eqref{est:ER} yield
\begin{equation}\label{beh:rhoR}
\sup_{t\in (0,T)}\int_{\mathbb{R}^3} |\rho_R-\rho_{\infty}|^2\mathds{1}_{\{0\leqslant \rho_R< \overline{\rho}\}}\,\mbox{d}x \leqslant C. \end{equation}
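In other words (a reformulation of the preceding argument), \eqref{est:ER} gives the pointwise lower bound
\begin{equation*}
E(\rho_R|\rho_{\infty})\;\geqslant\; c_0\,|\rho_R-\rho_{\infty}|^{2},\qquad c_0:=\frac{1}{2}\inf_{0<\xi<\overline{\rho}}\frac{p'(\xi)}{\xi},
\end{equation*}
and the quoted properties of $p'$ in \eqref{p-law} are precisely what guarantee $c_0>0$, so that \eqref{beh:rhoR} follows from \eqref{est:R2}.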
Thus, we obtain from \eqref{est:R3} and \eqref{beh:rhoR} that
\begin{equation}\label{diff:rhoR}
\rho_R-\rho_{\infty}
\rightarrow \rho -\rho_{\infty}\quad\mbox{weakly in }L^{\infty}(0,T;L^2(\mathbb{R}^3)),
\end{equation}
\begin{equation}\label{diff:uR}
\mathbf{u}_R-\mathbf{U}_{\infty}\rightarrow \mathbf{u} -\mathbf{U}_{\infty}\quad\mbox{weakly in }L^{2}(0,T;W^{1,2}(\mathbb{R}^3)) .
\end{equation}
Let us fix $n\in \mathbb{N}$ and define:
\begin{equation*}
\Omega_n:= B_n \setminus \overline{\mathcal{S}}.
\end{equation*}
We consider the analogue for $(\rho_R,\mathbf{u}_R)$ of the test function in \eqref{test1m}--\eqref{def:b} and use it in the momentum equation \eqref{eq:weakmmom}. We use the renormalized equation \eqref{eq:renormR} and follow the same procedure as in Step 3 of the proof of \cref{thm:m} to obtain equi-integrability of the sequence $\{p(\rho_R)\}$ and conclude that
\begin{equation}\label{fl:pressure}
p(\rho_R)\rightarrow \overline{p(\rho)}\quad\mbox{weakly in }\quad L^1((0,T)\times \Omega_n).
\end{equation}
\underline{Step 2: Limit in the continuity equation.}
Using the estimates \eqref{est:R1}, \eqref{est:R3} and the boundedness of $\rho_R$ in $(0,T)\times \mathbb{R}^3$, we have
\begin{equation}\label{bd:rhouR}
\|\rho_R \mathbf{u}_R\|_{L^{\infty}(0,T;L^{2}(\Omega_n))} + \|\rho_R \mathbf{u}_R\|_{L^2(0,T; L^{\frac{6q}{6+q}}(\Omega_n))} \leqslant C,\mbox{ for any }1\leqslant q <\infty,
\end{equation}
\begin{equation}\label{bd:rhouuR}
\|\rho_R |\mathbf{u}_R|^2\|_{L^2(0,T; L^{\frac{3}{2}}(\Omega_n))} \leqslant C.
\end{equation}
Furthermore, for any fixed $n\in \mathbb{N}$, we deduce from the continuity equation \eqref{eq:weakmmass} and the estimate \eqref{bd:rhouR} that the sequence of functions $t\mapsto \int\limits_{\Omega_n} \rho_R \varphi$, $\varphi \in C_c^1(\Omega_n)$, is equi-continuous. Since the density $\rho_R$ is uniformly bounded, we can apply the Arzel\`{a}--Ascoli theorem \cite[Lemma 6.2, page 301]{MR2084891} to obtain $\rho_R\rightarrow \rho|_{\Omega_n}$ in $C_{\rm weak}([0,T]; L^q(\Omega_n))\mbox{ for any }1\leqslant q <\infty$. Using a diagonalization argument, we thus obtain
\begin{equation}\label{R:rhoCw}
\rho_R\rightarrow \rho\quad \mbox{ in }\quad C_{\rm weak}([0,T]; L^q(\Omega_n)),\ n\in \mathbb{N},\mbox{ for any }1\leqslant q <\infty.
\end{equation}
Moreover, the compact embedding of $L^q(\Omega_n)\hookrightarrow\hookrightarrow W^{-1,2}(\Omega_n)$, for $q>6/5$ ensures that we can apply \cite[Lemma 6.4, page 302]{MR2084891} to have
\begin{equation}\label{R:rhostr}
\rho_R\rightarrow \rho\quad \mbox{ strongly in }L^{2}(0,T; W^{-1,2}(\Omega_n)),\ n\in \mathbb{N}.
\end{equation}
We combine the convergence \eqref{R:rhoCw}--\eqref{R:rhostr} of density $\rho_R$, convergence \eqref{diff:uR} of $\mathbf{u}_R$, boundedness \eqref{bd:rhouR} of $\rho_R\mathbf{u}_R$ and local weak compactness result \cite[Lemma 6.6, page 304]{MR2084891} in unbounded domains to obtain
\begin{equation}\label{conv:rhouR}
\rho_R \mathbf{u}_R \rightarrow \rho\mathbf{u} \mbox{ weakly-}*\mbox{ in }{L^{\infty}(0,T;L^{2}(\Omega_n))}\mbox{ and weakly in } {L^2(0,T; L^{\frac{6q}{6+q}}(\Omega_n))} ,\mbox{ for any }1\leqslant q <\infty,\ n\in \mathbb{N}.
\end{equation}
Consequently, the convergences \eqref{R:rhoCw}--\eqref{conv:rhouR} enable us to pass the limit $R\rightarrow \infty$ in the equation \eqref{eq:weakmmass} and we obtain:
\begin{equation*}
\int_\Omega \rho(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_\Omega \rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x=\int_0^\tau \int_{\Omega}\left(\rho
\partial_t \varphi + \rho\mathbf{u} \cdot \nabla \varphi\right)
\ dx\ dt,
\end{equation*}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{\Omega})$.
We can establish as in Step 1 of the proof of \cref{thm:m} that the density $\rho$ is bounded and we have already proved that it satisfies the continuity equation
\eqref{eqf:weakmass}. We can use the regularization procedure in the transport theory by DiPerna and Lions \cite{DiPerna1989} to show that $(\rho,\mathbf{u})$ also satisfies a renormalized continuity equation in the weak sense, i.e.,
\begin{equation}\label{eqf:renorm}
\partial_t b(\rho) + \operatorname{div}(b(\rho)\mathbf{u}) + \left(\rho b'(\rho)-b(\rho)\right)\operatorname{div}\mathbf{u}=0 \mbox{ in }\, \mathcal{D}'([0,T)\times \overline{\Omega}),
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$.
\underline{Step 3: Limit in the momentum equation.}
For any fixed $n\in \mathbb{N}$, we use the bounds \eqref{bd:rhouR}--\eqref{bd:rhouuR} and \eqref{fl:pressure} in the momentum equation \eqref{eq:weakmmom} to obtain the equicontinuity in $C[0,T]$ of the sequence $t\mapsto \int\limits_{\Omega_n} \rho_R\mathbf{u}_R \phi$, $\phi\in C_c^1(\Omega_n)$. Moreover, $\rho_R\mathbf{u}_R$ is uniformly bounded in $L^2(\Omega_n)$. Again, we apply the Arzel\`{a}--Ascoli theorem \cite[Lemma 6.2, page 301]{MR2084891} to obtain $\rho_R\mathbf{u}_R\rightarrow \rho\mathbf{u}|_{\Omega_n}$ in $C_{\rm weak}([0,T]; L^2(\Omega_n))$. Using a diagonalization argument, we arrive at
\begin{equation}\label{R:rhouCw}
\rho_R\mathbf{u}_R\rightarrow \rho\mathbf{u}\quad \mbox{ in }\quad C_{\mbox{weak}}([0,T]; L^2(\Omega_n)),\ n\in \mathbb{N}.
\end{equation}
Moreover, the compact embedding of $L^2(\Omega_n)\hookrightarrow\hookrightarrow W^{-1,2}(\Omega_n)$ ensures that we can apply \cite[Lemma 6.4, page 302]{MR2084891} to have
\begin{equation}\label{R:rhoustr}
\rho_R\mathbf{u}_R\rightarrow \rho\mathbf{u}\quad \mbox{ strongly in }L^{2}(0,T; W^{-1,2}(\Omega_n)),\ n\in \mathbb{N}.
\end{equation}
The estimates \eqref{bd:rhouuR}, \eqref{diff:uR} and weak compactness result \cite[Lemma 6.6, page 304]{MR2084891} yield
\begin{equation}\label{con:rhouuR}
\rho_R \mathbf{u}_R \otimes \mathbf{u}_R \rightarrow \rho\mathbf{u} \otimes \mathbf{u} \mbox{ weakly in }L^2(0,T; L^{\frac{3}{2}}(B_n)), \ n\in \mathbb{N}.
\end{equation}
We take the limit $R \rightarrow \infty$ in
the momentum equation \eqref{eq:weakmmom} and use the convergences \eqref{R:rhoCw}--\eqref{R:rhostr} of $\rho_R$, the convergences \eqref{R:rhouCw}--\eqref{R:rhoustr} of $\rho_R\mathbf{u}_R$, the convergence \eqref{con:rhouuR} of $\rho_R \mathbf{u}_R \otimes \mathbf{u}_R$ and the convergence \eqref{fl:pressure} of the pressure to conclude that: \begin{multline}\label{limitmom-R}
\int_\Omega\rho\mathbf{u}(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_\Omega \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega}\Big(
\rho \mathbf{u} \cdot \partial_t \varphi + \rho \mathbf{u}
\otimes \mathbf{u} : \nabla \varphi +
\overline{p(\rho)}\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}) : \nabla \varphi \Big)\ dx \ dt,
\end{multline}
for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega)$.
\underline{Step 4: Identify the limit of the pressure.}
In this step, we want to show that
\begin{equation*}
\overline{p(\rho)}=p(\rho).
\end{equation*}
In order to show this, we need to use the idea of \textit{effective viscous flux} as explained in Step 4 of the proof of \cref{thm:m}. We can establish the identity
\begin{equation}\label{eqf:EVF}
(2\mu+\lambda) \left(\overline{\rho\operatorname{div}\mathbf{u}}-\rho\operatorname{div}\mathbf{u} \right) = \left(\overline{p(\rho) \rho}-\overline{p(\rho)}\rho\right),
\end{equation}
where the quantity $\left(p(\rho)-(2\mu+\lambda)\operatorname{div}\mathbf{u}\right)$ is referred to as the \textit{effective viscous flux}. Now we consider $b(\rho_R)=\rho_R\log\rho_R$ in \eqref{eq:renormR} and let $R\rightarrow \infty$ to get
\begin{align}
\int_{\Omega} \overline{\rho \log \rho}(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x - \int_{\Omega} \rho_0\log\rho_0(\cdot)\varphi(0,\cdot)\ {\rm
d}x\notag\\
=\int_0^\tau \int_{\Omega}\left(\overline{\rho \log \rho}\
\partial_t \varphi + \overline{\rho \log \rho}\ \mathbf{u} \cdot \nabla \varphi - \overline{\rho\operatorname{div}\mathbf{u}}\, \varphi\right)
\ dx\ dt,\label{renorm:Rinf}
\end{align}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{\Omega})$. We take $b(\rho)=\rho\log\rho$ in the equation \eqref{eqf:renorm} and subtract it from \eqref{renorm:Rinf} to obtain
\begin{multline}\label{renormf:diff}
\int_{\Omega} \left(\overline{\rho \log \rho}-{\rho \log \rho}\right)(\tau,\cdot)\varphi(\tau,\cdot)\ {\rm
d}x \\
=\int_0^\tau \int_{\Omega}\left[(\overline{\rho \log \rho}-{\rho \log \rho})\
\partial_t \varphi + (\overline{\rho \log \rho}-{\rho \log \rho})\ \mathbf{u} \cdot \nabla \varphi - (\overline{\rho\operatorname{div}\mathbf{u}}-{\rho\operatorname{div}\mathbf{u}})\, \varphi\right]
\ dx\ dt,
\end{multline}
for all $\tau\in [0,T]$ and any test function $\varphi
\in C^1_{c}([0,T] \times \overline{\Omega})$. Without loss of generality, we suppose that $\mathbf{a}_{\infty}$, the prescribed behaviour of $\mathbf{u}$ at infinity (see \eqref{behaviour}) is of the following form:
\begin{equation*}
\mathbf{a}_{\infty}=(a_{\infty},0,0),\quad a_{\infty}>0.
\end{equation*}
We define
\begin{equation}\label{def:PhiR1}
\Phi_R (x):= \eta\left(\frac{x_1}{R}\right)\zeta\left(\frac{x'}{R^{\alpha}}\right),\quad x'=(x_2,x_3),\quad R>1,\quad \alpha>0,
\end{equation}
where
\begin{equation}\label{def:PhiR2}
\eta \in C_c^{\infty}(\mathbb{R}),\quad 0\leqslant \eta \leqslant 1,\quad \eta(s)=\begin{cases}
1\mbox{ if }|s|\leqslant 1,\\ 0 \mbox{ if }|s|\geqslant 2,
\end{cases}
\quad \zeta \in C_c^{\infty}(\mathbb{R}^2),\quad 0\leqslant \zeta \leqslant 1,\quad \zeta(s)=\begin{cases}
1\mbox{ if }|s|\leqslant 1,\\ 0 \mbox{ if }|s|\geqslant 2.
\end{cases}
\end{equation}
We take $\Phi_R$ as the test function in \eqref{renormf:diff} and use the \textit{effective viscous flux} identity
\eqref{eqf:EVF} to get \begin{equation}\label{renormf:diffR}
\int_{\Omega} \left(\overline{\rho \log \rho}-{\rho \log \rho}\right)(\tau,\cdot)\Phi_R(\cdot)\ {
d}x + \frac{1}{2\mu+\lambda}\int_0^\tau \int_{\Omega}\left(\overline{p(\rho) \rho}-\overline{p(\rho)}\rho\right) \Phi_R\ dx \ dt
=\int_0^\tau \int_{\Omega} (\overline{\rho \log \rho}-{\rho \log \rho})\ \mathbf{u} \cdot \nabla \Phi_R
\ dx\ dt.
\end{equation}
Since $p(\rho)$ is strictly increasing on $(0,\infty)$, we can use the result \cite[Lemma 3.35, Page 186]{MR2084891} on weak convergence and monotonicity to get:
\begin{equation*}
\overline{p(\rho)\rho} \geqslant \overline{p(\rho)}\rho\quad\mbox{ a.e. in }\Omega.
\end{equation*}
The above relation and \eqref{renormf:diffR} yield
\begin{equation}\label{rhofinal1}
\int_{\Omega} \left(\overline{\rho \log \rho}-{\rho \log \rho}\right)(\tau,\cdot)\Phi_R(\cdot)\ {
d}x
\leqslant \int_0^\tau \int_{\Omega} (\overline{\rho \log \rho}-{\rho \log \rho})\ \mathbf{u} \cdot \nabla \Phi_R
\ dx\ dt.
\end{equation}
Since the function $\rho \mapsto \rho\log\rho$ is a convex lower semi-continuous function on $[0,\infty)$ with
\begin{equation*}
\rho_R\log\rho_R \rightarrow \overline{\rho\log\rho}\quad\mbox{ weakly in }\quad L^1(\Omega),
\end{equation*}
we can apply the result \cite[Corollary 3.33, Page 184]{MR2084891} from convex analysis to conclude
\begin{equation}\label{11:51}
\rho\log\rho \leqslant \overline{\rho\log\rho}\quad\mbox{ a.e. in }\Omega.
\end{equation}
Furthermore, we estimate the right-hand side of \eqref{rhofinal1} in the following way
\begin{multline}\label{rediff1}
\int_0^\tau \int_{\Omega} (\overline{\rho \log \rho}-{\rho \log \rho})\ \mathbf{u} \cdot \nabla \Phi_R
\ dx\ dt\\ \leqslant \int_0^\tau \int_{\Omega} |\overline{\rho \log \rho}-{\rho_{\infty} \log \rho_{\infty}}|\ \mathbf{u} \cdot \nabla \Phi_R
\ dx\ dt + \int_0^\tau \int_{\Omega} |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\ \mathbf{u} \cdot \nabla \Phi_R
\ dx\ dt\\ \leqslant
\int_0^\tau \int_{\Omega} |\overline{\rho \log \rho}-{\rho_{\infty} \log \rho_{\infty}}|\ (\mathbf{u}-\mathbf{U}_{\infty}) \cdot \nabla \Phi_R
\ dx\ dt + \int_0^\tau \int_{\Omega} |\overline{\rho \log \rho}-{\rho_{\infty} \log \rho_{\infty}}|\ \mathbf{U}_{\infty} \cdot \nabla \Phi_R
\ dx\ dt \\
+\int_0^\tau \int_{\Omega} |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\ (\mathbf{u}-\mathbf{U}_{\infty}) \cdot \nabla \Phi_R
\ dx\ dt + \int_0^\tau \int_{\Omega} |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\ \mathbf{U}_{\infty} \cdot \nabla \Phi_R
\ dx\ dt.
\end{multline}
Observe that the construction of the test function $\Phi_R$ in \eqref{def:PhiR1}--\eqref{def:PhiR2} yields
\begin{equation}\label{prop:PhiR1}
|\operatorname{supp}\nabla \Phi_R|\leqslant CR^{1+2\alpha},\quad \|\nabla \Phi_R\|_{L^p(\mathbb{R}^3)}\leqslant CR^{\frac{1+2\alpha-\alpha p}{p}},
\end{equation}
\begin{equation}\label{prop:PhiR2}
|\partial_{x_1} \Phi_R|\leqslant \frac{C}{R},\quad |\nabla_{x'} \Phi_R|\leqslant \frac{C}{R^{\alpha}}.
\end{equation}
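These bounds are immediate from \eqref{def:PhiR1}--\eqref{def:PhiR2}; we record the computation for the reader's convenience (with $C$ depending only on $\eta$ and $\zeta$): since
\begin{equation*}
\nabla \Phi_R(x)=\Big(\tfrac{1}{R}\,\eta'\big(\tfrac{x_1}{R}\big)\zeta\big(\tfrac{x'}{R^{\alpha}}\big),\ \tfrac{1}{R^{\alpha}}\,\eta\big(\tfrac{x_1}{R}\big)\nabla\zeta\big(\tfrac{x'}{R^{\alpha}}\big)\Big),
\qquad
\operatorname{supp}\nabla \Phi_R\subset\{|x_1|\leqslant 2R\}\times\{|x'|\leqslant 2R^{\alpha}\},
\end{equation*}
we get $|\operatorname{supp}\nabla \Phi_R|\leqslant CR^{1+2\alpha}$ and $|\nabla\Phi_R|\leqslant CR^{-\alpha}$ for $0<\alpha<1$, $R>1$, whence $\|\nabla \Phi_R\|^p_{L^p(\mathbb{R}^3)}\leqslant CR^{1+2\alpha}R^{-\alpha p}$, which is \eqref{prop:PhiR1}.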
Moreover,
\begin{equation*}
\lim_{s\rightarrow \rho_{\infty}} \frac{s\log s - \rho_{\infty}\log \rho_{\infty}}{s-\rho_{\infty}}= 1 +\log\rho_{\infty},\quad \lim_{s\rightarrow {\infty}} \frac{s\log s - \rho_{\infty}\log \rho_{\infty}}{|s-\rho_{\infty}|^q}=0 \mbox{ for any }1< q < \infty,
\end{equation*}
imply
\begin{align}
|{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|&= |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\mathds{1}_{\{|\rho-\rho_{\infty}|\leqslant 1\}} + |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\mathds{1}_{\{|\rho-\rho_{\infty}|\geqslant 1\}}\notag\\
&\leqslant
C\left(|\rho-\rho_{\infty}|\mathds{1}_{\{|\rho-\rho_{\infty}|\leqslant 1\}} + |\rho-\rho_{\infty}|^q\mathds{1}_{\{|\rho-\rho_{\infty}|\geqslant 1\}}\right).\label{13:16}
\end{align}
Then, we can estimate the terms on the right-hand side of \eqref{rediff1} by using the H\"{o}lder inequality, the properties \eqref{prop:PhiR1}--\eqref{prop:PhiR2} of the test function $\Phi_R$ and the estimates \eqref{13:16}, \eqref{est:R3} and \eqref{beh:rhoR} to obtain
\begin{multline}\label{RHS1}
\int_{\Omega} |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\ (\mathbf{u}-\mathbf{U}_{\infty}) \cdot \nabla \Phi_R
\ dx \\
\leqslant \|\nabla \Phi_R\|_{L^6(\Omega)}\|\mathbf{u}-\mathbf{U}_{\infty}\|_{L^6(\Omega)}\|{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}\|_{L^{3/2}(\Omega)}\mathds{1}_{\{|\rho-\rho_{\infty}|\geqslant 1\}} \\+ \|\nabla \Phi_R\|_{L^{\infty}(\Omega)}\|\mathbf{u}-\mathbf{U}_{\infty}\|_{L^2(\Omega)}\|{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}\|_{L^{2}(\Omega)}\mathds{1}_{\{|\rho-\rho_{\infty}|\leqslant 1\}} \leqslant C\left(R^{\frac{1-4\alpha}{6}} + \frac{1}{R^{\alpha}}\right),
\end{multline}
\begin{multline}\label{RHS2}
\int_{\Omega} |{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}|\ \mathbf{U}_{\infty} \cdot \nabla \Phi_R
\ dx \\
\leqslant \|\partial_{x_1} \Phi_R\|_{L^{\infty}(\Omega)}\|\mathbf{U}_{\infty}\|_{L^{\infty}(\Omega)}\|{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}\|_{L^{1}(\Omega)}\mathds{1}_{\{|\rho-\rho_{\infty}|\geqslant 1\}} \\+ \|\partial_{x_1} \Phi_R\|_{L^{\infty}(\Omega)}\|\mathbf{U}_{\infty}\|_{L^{\infty}(\Omega)}\|{\rho_{\infty} \log \rho_{\infty}}-{\rho \log \rho}\|_{L^{2}(\Omega)}\mathds{1}_{\{|\rho-\rho_{\infty}|\leqslant 1\}}|\operatorname{supp}\nabla \Phi_R|^{1/2} \leqslant CR^{\frac{2\alpha-1}{2}}.
\end{multline}
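For orientation, the powers of $R$ appearing in \eqref{RHS1}--\eqref{RHS2} are negative exactly when
\begin{equation*}
\frac{1-4\alpha}{6}<0 \iff \alpha>\frac14,\qquad -\alpha<0 \iff \alpha>0,\qquad \frac{2\alpha-1}{2}<0 \iff \alpha<\frac12.
\end{equation*}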
Thus, if we choose $\alpha\in \left(\frac{1}{4},\frac{1}{2}\right)$ and take $R\rightarrow \infty$, both terms in \eqref{RHS1}--\eqref{RHS2} converge to zero. We can treat the other two terms on the right-hand side of \eqref{rediff1} in the same way and, combining with the inequality \eqref{11:51}, we conclude that
\begin{equation*}
\rho\log\rho = \overline{\rho\log\rho}\quad\mbox{ a.e. in }\Omega.
\end{equation*}
Since the function $\rho \mapsto \rho\log\rho$ is strictly convex on $[0,\infty)$, we obtain that
\begin{equation}\label{lim:rhoag}
\rho_R \rightarrow {\rho}\quad\mbox{ a.e. in }(0,T)\times \Omega.
\end{equation}
In particular, we have $\overline{p(\rho)}=p(\rho)$. The substitution of this relation in \eqref{limitmom-R} yields the limiting momentum equation: for all $\tau\in [0,T]$ and
for any test
function $\varphi \in C^1_c([0,T] \times \Omega)$,
\begin{multline*}
\int_\Omega\rho\mathbf{u}(\tau,\cdot)\cdot\varphi(\tau,\cdot){\rm d} x - \int_\Omega \mathbf{q}_0(\cdot)\cdot\varphi(0,\cdot){\rm d} x \\
=\int_0^\tau \int_{\Omega}\Big(
\rho \mathbf{u} \cdot \partial_t \varphi + \rho \mathbf{u}
\otimes \mathbf{u} : \nabla \varphi +
p(\rho)\operatorname{div}\varphi - \mathbb {S}(\nabla \mathbf{u}) : \nabla \varphi \Big)\ dx \ dt.
\end{multline*}
Hence, we have verified the momentum equation \eqref{eqf:weakmom} and it only remains to establish the energy inequality \eqref{eqf:ee}.
\underline{Step 5: Energy inequality.} Let us recall the energy inequality \eqref{m:energy} for the $R$-th level approximate problem: for a.e. $\tau\in (0,T)$,
\begin{multline*}
\int_{\Omega_R}\Big(\frac{1}{2}\rho_R|\mathbf{u}_R-\mathbf {u}_{\infty}|^2 +
E(\rho_R|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{\Omega_R} \mathbb{S}(\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})):\nabla (\mathbf{u}_R-\mathbf{u}_{\infty})\ dx\ dt\\
\leqslant
\int_{\Omega_R}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{u}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx + \int_0^\tau
\int_{B_1\setminus\mathcal{S}} \rho\mathbf{u}\cdot\nabla\mathbf {u}_{\infty}
\cdot(\mathbf{u}_{\infty}- \mathbf{u})\,{ d}x\,{ d}t - \int _0^{ \tau}\int _{B_1\setminus\mathcal{S}}\mathbb{S}(\nabla\mathbf{u}_{\infty}):\nabla(\mathbf{u}-\mathbf{u}_\infty)\,{ d} x\, { d}t.
\end{multline*}
We use
\begin{itemize}
\item the definition \eqref{def:Uinf} of $\mathbf{U}_{\infty}$,
\item the convergences obtained for $\rho_R$, $\mathbf{u}_R$, $\rho_R\mathbf{u}_R$,
\item the lower semi-continuity of the convex functionals on the left-hand side of \eqref{m:energy},
\end{itemize}
and take the limit $R \rightarrow \infty$ in \eqref{m:energy} to obtain
\begin{multline*}
\int_{\Omega}\Big(\frac{1}{2}\rho|\mathbf{u}-\mathbf {U}_{\infty}|^2 +
E(\rho|\rho_{\infty})\Big)(\tau)\ dx
+ \int_0^\tau\int_{\Omega} \mathbb{S}(\nabla (\mathbf{u}-\mathbf{U}_{\infty})):\nabla (\mathbf{u}-\mathbf{U}_{\infty})\ dx\ dt\\
\leqslant
\int_{\Omega}\Big(\frac{1}{2}\frac{|\mathbf{q}_0-\rho_{0}\mathbf{U}_{\infty}|^2}{\rho_0} +
E(\rho_0|\rho_{\infty})\Big)\ dx - \int_0^\tau
\int_{B_1\setminus\mathcal{S}} \rho\mathbf{u}\cdot\nabla\mathbf {U}_{\infty}
\cdot(\mathbf{u} - \mathbf{U}_{\infty})\,{ d}x\,{ d}t - \int _0^{ \tau}\int _{B_1\setminus\mathcal{S}}\mathbb{S}(\nabla\mathbf{U}_{\infty}):\nabla(\mathbf{u}-\mathbf{U}_\infty)\,{ d} x\, { d}t
\end{multline*}
for a.e. $\tau\in (0,T)$. Thus we have established the energy inequality \eqref{eqf:ee} and hence the existence of at least one renormalized bounded energy weak solution $(\rho,\mathbf{u})$ of the problem \eqref{eq:mass}--\eqref{p-law} according to \cref{def:bddenergy}.
\end{proof}
\section*{Acknowledgment}
{\it \v S. N. and A. R. have been supported by the Czech Science Foundation (GA\v CR) project GA19-04243S. The Institute of Mathematics, CAS is supported by RVO:67985840. The work of A.N. was partially supported by the distinguished Eduard \v Cech visiting program at the
Institute of Mathematics of the Academy of Sciences of the Czech Republic.}
\end{document}
\begin{document}
\title{The mixed quantum Rabi model}
\author{Liwei Duan$^{1}$, You-Fei Xie$^{1}$, and Qing-Hu Chen$^{1,2,*}$}
\address{
$^{1}$ Department of Physics and Zhejiang Province Key Laboratory of Quantum Technology and Device, Zhejiang University, Hangzhou 310027, China \\
$^{2}$ Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China}
\date{\today }
\begin{abstract}
The analytical exact solutions to the mixed quantum Rabi model (QRM) including
both one- and two-photon terms are found by using Bogoliubov operators.
Transcendental functions in terms of $4 \times 4$ determinants responsible
for the exact solutions are derived. These so-called $G$-functions with pole
structures can be reduced to the previous ones in the unmixed QRMs. The
zeros of $G$-functions reproduce completely the regular spectra. The
exceptional eigenvalues can also be obtained by another transcendental
function. From the pole structure, we can derive two energy limits when the
two-photon coupling strength tends to the collapse point.
All energy levels only collapse to the lower one, which
diverges negatively. The level crossings in the unmixed QRMs are
relaxed to avoided crossings in the present mixed QRM due to absence of
parity symmetry. In the weak two-photon coupling regime, the mixed QRM is
equivalent to a one-photon QRM with an effective positive bias, suppressed
photon frequency and enhanced one-photon coupling, which may pave a highly
efficient and economical way to access the deep-strong one-photon coupling
regime.
\end{abstract}
\pacs{42.50.-p, 42.50.Pq, 71.27.+a, 03.65.Ge}
\maketitle
\section{Introduction}
The quantum Rabi model (QRM) describes the simplest and at the same time
most important interaction between a two-level system (or qubit) and a
single-mode bosonic cavity which is linear in the quadrature operators~\cite
{braak2}. This model has been a paradigmatic one in quantum optics for a long
time. It has been reactivated in the past decade due to the progress in
many solid-state devices, such as circuit quantum electrodynamics (QED)
\cite{Niemczyk,exp}, trapped ions \cite{ion1,ion2}, and quantum dots \cite{dot},
where strong and even ultrastrong coupling has been realized. Here
we study a natural generalization of the QRM which exhibits both linear and
non-linear couplings between the qubit and the cavity, \emph{i.e.} the mixed QRM
having both one- and two-photon terms, with Hamiltonian
\begin{eqnarray}
H&=&-\frac{\Delta }{2}\sigma _{x}+\omega a^{\dagger }a \nonumber\\
&&+\sigma _{z}\left(
g_{1}\left( a^{\dagger }+a\right) +g_{2}\left[ \left( a^{\dagger }\right)
^{2}+a^{2}\right] \right) , \label{12p-rabimodel}
\end{eqnarray}
where $\Delta $\ is the tunneling amplitude of the qubit, $\omega $ is the
frequency of cavity, $\sigma _{x,z}$ are Pauli matrices describing the
two-level system, $a$ ($a^{\dagger }$) is the annihilation (creation)
bosonic operator of the cavity mode, and $g_{1}$ ($g_{2}$) is the linear
(nonlinear) qubit-cavity coupling constant.
The nonlinear coupling appears naturally as an effective model for a
three-level system when the third (off-resonant) state can be eliminated.
The two-photon model has been proposed to apply to certain Rydberg atoms in
superconducting microwave cavities~\cite{Bertet,Brune}. Recently, a
realistic implementation of the two-photon QRM using trapped ions has been
proposed~\cite{Felicetti}. In the trapped ions, the atom-cavity coupling
could be tuned to the collapse regime.
The mixed QRM described by Eq.~(\ref{12p-rabimodel}) can also be implemented
in the circuit QED proposal of Ref.~\cite{Felicetti1} if non-zero DC current
biases are applied. Using alternative methods, both linear and nonlinear
interaction terms can be present in a different circuit QED setup by Bertet
\textit{et al.}~\cite{Bertet2, Bertet3}. Besides the one-photon process, the
two-photon process was also detected in a coupled superconducting qubit and
oscillator system \cite{tiefu}. It was shown
recently that a general Hamiltonian realized in microwave-driven ions
can be used to simulate the QRM with nonlinear coupling \cite{Jorge} by
choosing properly the time-dependent phase and a suitable interaction
picture. The combined linear and nonlinear couplings can also be attained.
More recently, Pedernales \textit{et al.} proposed that a background of a ($
1+1$)-dimensional black hole requires a QRM with both one- and two-photon terms
that can be implemented in a trapped ion for the quantum simulation of Dirac
particles in curved spacetime~\cite{Pedernales}. So the QRM with both one-
and two-photon couplings is not only a generic model in circuit QED and
trapped ions, but also has applications in other realms of physics.
The unmixed QRMs, where either linear or nonlinear coupling is
present, have been extensively studied for a few decades (for a review,
please refer to Refs.~\cite{treview,yuxi,ReviewF}). The solution based on
the well-defined $G$-function with pole structures was only found for
one-photon model by Braak \cite{Braak} in the Bargmann representation and
two-photon model by Chen \textit{et al.}~\cite{Chen2012} using Bogoliubov
transformations. These solutions have stimulated extensive research
interests in the exact solutions to the unmixed QRMs and their variants with
either one-photon~\cite{Zhong,Maciejewski,Chilingaryan,Peng,Wang, Fanheng}
or two-photon term~\cite{Trav,Trav1,Trav2,Maciejewski2,duan2016,Zhangyz,Zhangyz1,Lupo}. In the
literature, many analytical approximate but still very accurate results have
also been given~\cite
{Feranchuk,Irish,chenqh,zheng,chen2,chen3,yunbo,luo2,zhang,Zhiguo,cong,Casanova,PengJS}. In some
limits of model parameters, the dynamics and quantum criticality have
been studied exactly as well~\cite{plenio,hgluo,peng2}. In the unmixed QRMs, the
parity symmetry is very crucial to get the analytical solution in the
closed system. Recently, the role of the parity symmetry has been
characterized in the excitation-relaxation dynamics of the system as a
function of light-matter coupling in open systems~\cite{Malekakhlagh}.
In the mixed QRM with both linear and nonlinear couplings, the parity
symmetry is however broken naturally, and the analytical solution thus
becomes more difficult \cite{xieyf}, compared to the unmixed models. In this
paper, we propose analytical exact solutions to this mixed QRM. We derive a $G$-function by Bogoliubov transformations, which reduces to
the previous $G$-functions for the one- and two-photon QRMs when only one of the
couplings is present. We demonstrate that the derived $G$-function indeed yields the regular spectra by checking against
numerics. The exceptional eigenvalues are also given with the help of the
non-degeneracy property in this mixed model due to the absence of any
symmetry. Two kinds of formulae for the collapse points are derived.
The avoided crossings are confirmed. The level collapse in the strong
two-photon coupling regime is also discussed. Finally, we study the
influence of mixed coupling by constructing an equivalent one-photon QRM with
an effective positive bias where the photon frequency is suppressed and the
one-photon coupling is enhanced.
\section{Solutions within the Bogoliubov operators approach}
In the basis of spin-up and spin-down states, the Hamiltonian (\ref
{12p-rabimodel}) can be transformed to the following matrix form in units of
$\omega =1$
\begin{widetext}
\begin{equation}
H=\left(
\begin{array}{ll}
a^{\dagger }a+g_{1}\left( a^{\dagger }+a\right) +g_{2}\left[ \left(
a^{\dagger }\right) ^{2}+a^{2}\right] & ~~~~~~~~-\frac{\Delta }{2} \\
~~~~~~~~-\frac{\Delta }{2} & a^{\dagger }a-g_{1}\left( a^{\dagger }+a\right)
-g_{2}\left[ \left( a^{\dagger }\right) ^{2}+a^{2}\right]
\end{array}
\right) . \label{Hamiltonian}
\end{equation}
\end{widetext}
First, we perform Bogoliubov transformation
\begin{eqnarray}
A&=&S(r) D^{\dagger}(w) a D(w)S^{\dagger}(r) =ua+va^{\dagger }+w,\\
A^{\dagger }&=&S(r) D^{\dagger}(w) a^{\dagger} D(w)S^{\dagger}(r) =ua^{\dagger }+va+w,\nonumber
\end{eqnarray}
to generate a new bosonic operator, where $S(r)$ is the squeezing operator and $D(w)$ is the displacement operator
\begin{equation*}
S(r)=e^{\frac{r}{2}(a^{2}-a^{\dag 2})}, ~~ D(w )=e^{w (a^{\dag }-a)},
\end{equation*}
with $r=\operatorname{arccosh} u$. If we set
\begin{equation}
u=\sqrt{\frac{1+\beta }{2\beta }}, ~~ v=\sqrt{\frac{1-\beta }{2\beta }}, ~~ w=
\frac{u^{2}+v^{2}}{u+v}g_{1},
\end{equation}
with $\beta =\sqrt{1-4g_{2}^{2}}$, we have a simple quadratic form of one
diagonal Hamiltonian matrix element
\begin{equation}
H_{11}=\frac{A^{\dagger }A-v^{2}-w^{2}}{u^{2}+v^{2}}. \label{H11_A}
\end{equation}
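This form can be checked directly: expanding $A^{\dagger }A$ with $A=ua+va^{\dagger }+w$ gives
\begin{equation*}
A^{\dagger }A=\left( u^{2}+v^{2}\right) a^{\dagger }a+uv\left[ \left( a^{\dagger }\right) ^{2}+a^{2}\right] +\left( u+v\right) w\left( a^{\dagger }+a\right) +v^{2}+w^{2},
\end{equation*}
so that Eq.~(\ref{H11_A}) reproduces $H_{11}=a^{\dagger }a+g_{1}\left( a^{\dagger }+a\right) +g_{2}\left[ \left( a^{\dagger }\right) ^{2}+a^{2}\right] $ provided that $uv/(u^{2}+v^{2})=g_{2}$ and $(u+v)w/(u^{2}+v^{2})=g_{1}$, both of which hold for the above choice since $u^{2}+v^{2}=1/\beta $ and $uv=g_{2}/\beta $.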
The eigenstates of $H_{11}$ are the number states $\left\vert n\right\rangle _{A}$, which can be written in terms of the Fock states $\left\vert
n\right\rangle $ of the original bosonic operator $a$ as
\begin{eqnarray}
\left\vert n\right\rangle _{A} =S(r)D^{\dag }(w)\left\vert n\right\rangle .
\end{eqnarray}
Similarly, we can introduce another operator
\begin{eqnarray}
B&=&S^{\dagger}(r) D^{\dagger}(w^{\prime}) a D(w^{\prime})S(r) =ua-va^{\dagger }+w^{\prime},\\
B^{\dagger }&=&S^{\dagger}(r) D^{\dagger}(w^{\prime}) a^{\dagger} D(w^{\prime})S(r) =ua^{\dagger }-va+w^{\prime},\nonumber
\end{eqnarray}
with
\begin{equation*}
w^{\prime }=\frac{u^{2}+v^{2}}{v-u}g_{1},
\end{equation*}
which yields a simple quadratic form of the other diagonal Hamiltonian
matrix element
\begin{equation}
H_{22}=\frac{B^{\dagger }B-v^{2}-w^{\prime 2}}{u^{2}+v^{2}}. \label{H22_B}
\end{equation}
Note that if $g_{2}=0$, we have $w=g_{1}$ and $w^{\prime}=-g_{1}$, which are exactly the same as those in the one-photon QRM~\cite{Chen2012}.
Similarly, the eigenstates of $H_{22}$ are the number states $\left\vert
n\right\rangle _{B}$, which can be written in terms of the Fock states $\left\vert
n\right\rangle $ of the original bosonic operator $a$ as
\begin{eqnarray}
\left\vert n\right\rangle _{B} = S^{\dag }(r)D^{\dag }(w^{\prime
})\left\vert n\right\rangle .
\end{eqnarray}
In terms of the Bogoliubov operator $A$, the Hamiltonian can be written as
\begin{equation}
H=\left(
\begin{array}{ll}
\frac{A^{\dagger }A-v^{2}-w^{2}}{u^{2}+v^{2}} & ~-\frac{\Delta }{2} \\
~~-\frac{\Delta }{2} & \;\;\;H_{22}^{\prime }
\end{array}
\right) ,
\end{equation}
where
\begin{eqnarray}
H_{22}^{\prime } &=&\left( \left( u^{2}+v^{2}\right) +4g_{2}uv\right)
A^{\dag }A-2uv\left( \left( A^{\dag }\right) ^{2}+A^{2}\right)\nonumber\\
&& -2\left( u-v\right) ^{2}w\left( A^{\dag }+A\right) +h_{A},
\end{eqnarray}
with
\begin{equation*}
h_{A}=v^{2}+\left( u-v\right) ^{2}w^{2}\left( 1-2g_{2}\right) +2g_{1}\left(
u-v\right) w+2g_{2}uv.
\end{equation*}
In principle, the eigenfunctions of the Hamiltonian can be expanded in terms of the number states
of operator $A$
\begin{equation}
\left\vert \psi \right\rangle _{A}=\sum_{n=0}^{\infty }\sqrt{n!}\left( \
\begin{array}{l}
e_{n}\left\vert n\right\rangle _{A} \\
f_{n}\left\vert n\right\rangle _{A}
\end{array}
\right) . \label{wave_A}
\end{equation}
Projecting the Schr\"{o}dinger equation onto $\left\vert
n\right\rangle _{A}$ gives
\begin{equation}
e_{n}=\frac{\Delta /2}{\frac{n-v^{2}-w^{2}}{u^{2}+v^{2}}-E}f_{n},
\label{coef1A}
\end{equation}
\begin{widetext}
\begin{equation}
f_{n+2}=\frac{-\frac{\Delta }{2}e_{n}+\left[ \Omega \left( n,E\right) +h_{A}
\right] f_{n}-2\left( u-v\right) ^{2}w\left( f_{n-1}+\left( n+1\right)
f_{n+1}\right) -2uvf_{n-2}}{2uv\left( n+1\right) \left( n+2\right) },
\label{coef2A}
\end{equation}
\end{widetext}
with
\begin{eqnarray*}
\Omega \left( n,E\right) &=&\left( u^{2}+v^{2}+4g_{2}uv\right) n-E \\
&=&\frac{\left( 1+4g_{2}^{2}\right) }{\beta }n-E.
\end{eqnarray*}
This is actually a five-term recurrence relation for $f_{n}$. All
coefficients $e_{n}$ and $f_{n}$ for $n>1$ are determined linearly in terms of $f_{0}$
and $f_{1}$.
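As an illustration only, a minimal numerical sketch of this recursion (not taken from the original derivation; the truncation $n_{\max}$ and the variable names are our own choices) could read:
\begin{verbatim}
import numpy as np

def coefficients_A(E, Delta, g1, g2, nmax=40, f0=1.0, f1=0.0):
    # Iterate the relation for e_n and the five-term recurrence
    # for f_{n+2} given above, starting from the free inputs f_0, f_1.
    beta = np.sqrt(1.0 - 4.0 * g2 ** 2)
    u = np.sqrt((1 + beta) / (2 * beta))
    v = np.sqrt((1 - beta) / (2 * beta))
    w = (u ** 2 + v ** 2) / (u + v) * g1
    hA = (v ** 2 + (u - v) ** 2 * w ** 2 * (1 - 2 * g2)
          + 2 * g1 * (u - v) * w + 2 * g2 * u * v)
    f = np.zeros(nmax + 2)
    e = np.zeros(nmax)
    f[0], f[1] = f0, f1
    for n in range(nmax):
        e[n] = 0.5 * Delta * f[n] / ((n - v**2 - w**2) / (u**2 + v**2) - E)
        Omega = (1 + 4 * g2 ** 2) / beta * n - E
        fm1 = f[n - 1] if n >= 1 else 0.0
        fm2 = f[n - 2] if n >= 2 else 0.0
        f[n + 2] = (-0.5 * Delta * e[n] + (Omega + hA) * f[n]
                    - 2 * (u - v) ** 2 * w * (fm1 + (n + 1) * f[n + 1])
                    - 2 * u * v * fm2) / (2 * u * v * (n + 1) * (n + 2))
    return e, f
\end{verbatim}
Given a trial energy $E$ and the two free inputs $f_{0}$ and $f_{1}$, all remaining coefficients then follow linearly, as stated above.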
In terms of the Bogoliubov operator $B$, the Hamiltonian can be written as
\begin{equation}
H=\left(
\begin{array}{ll}
\;\;\;H_{11}^{\prime } & ~-\frac{\Delta }{2} \\
~~-\frac{\Delta }{2} & \frac{B^{\dagger }B-v^{2}-w^{\prime 2}}{u^{2}+v^{2}}
\end{array}
\right) ,\nonumber
\end{equation}
where
\begin{eqnarray}
H_{11}^{\prime } &=&\left( \left( u^{2}+v^{2}\right) +4g_{2}uv\right)
B^{\dag }B +2uv\left(
\left( B^{\dag }\right) ^{2}+B^{2}\right)\nonumber\\
&&-2\left( u+v\right) ^{2}w^{\prime }\left( B^{\dag }+B\right)+h_{B},\nonumber
\end{eqnarray}
with
\begin{equation*}
h_{B}=v^{2}+\left( u+v\right) ^{2}w^{\prime 2} (1 + 2 g_2) -2g_{1}\left( u+v\right)
w^{\prime }+2g_{2}uv.\nonumber
\end{equation*}
We can express the eigenfunctions as
\begin{equation}
\left\vert {\psi}\right\rangle _{B}=\sum_{n=0}^{\infty }\sqrt{n!}\left( \
\begin{array}{l}
f_{n}^{\prime }\left\vert n\right\rangle _{B} \\
e_{n}^{\prime }\left\vert n\right\rangle _{B}
\end{array}
\right) . \label{wave_B}
\end{equation}
Similarly, we can get
\begin{equation}
e_{n}^{\prime }=\frac{\frac{\Delta }{2}}{\frac{n-v^{2}-w^{\prime 2}}{
u^{2}+v^{2}}-E}f_{n}^{\prime }, \label{coef1B}
\end{equation}
and the similar five terms recurrence relation for $f_{n}^{\prime }$.
Analogously, all coefficients $e_{n}^{\prime }$ and $f_{n}^{\prime }$ for $
n>1$ are determined through $f_{0}^{\prime }$ and $f_{1}^{\prime }\ $
linearly.
Except for the crossing points in
the energy spectra, the eigenstates are nondegenerate. The two wavefunctions in terms of the operators $A$ and $B$ correspond to the same eigenstate. Therefore, they should be proportional to each other, with a proportionality constant $r$,
\begin{equation}
\sum_{n=0}^{\infty }\sqrt{n!}\left( \
\begin{array}{l}
e_{n}\left\vert n\right\rangle _{A} \\
f_{n}\left\vert n\right\rangle _{A}
\end{array}
\right) =r\sum_{n=0}^{\infty }\sqrt{n!}\left( \
\begin{array}{l}
f_{n}^{\prime }\left\vert n\right\rangle _{B} \\
e_{n}^{\prime }\left\vert n\right\rangle _{B}
\end{array}
\right) . \label{twowave}
\end{equation}
We will set $r=1$, because only ratios among $f_{0},f_{1},rf_{0}^{\prime }$
and $rf_{1}^{\prime }$ are relevant. In this case we can absorb $r$ into new
$f_{0}^{\prime }$ and $f_{1}^{\prime }$. Then we have
\begin{eqnarray}
\sum_{n=0}^{\infty }\sqrt{n!}e_{n}|n\rangle _{A} &=&\sum_{n=0}^{\infty }
\sqrt{n!}f_{n}^{\prime }|n\rangle _{B}, \label{identity1} \\
\sum_{n=0}^{\infty }\sqrt{n!}f_{n}|n\rangle _{A} &=&\sum_{n=0}^{\infty }
\sqrt{n!}e_{n}^{\prime }|n\rangle _{B}. \label{identity2}
\end{eqnarray}
In the unmixed QRM, the well-defined $G$-functions can be derived by using
the lowest number state $\left\vert 0\right\rangle $ in the original Fock
basis for the one-photon model \cite{Braak,Chen2012}, and two lowest number
states $\left\vert 0\right\rangle $ and $\left\vert 1\right\rangle $ for the
two-photon model \cite{Chen2012,duan2016}. Here, we also project Eqs. (\ref
{identity1}) and (\ref{identity2}) onto two original number states $
\left\vert 0\right\rangle $ and $\left\vert 1\right\rangle $, and then
obtain the following 4 equations
\begin{eqnarray}
G^{(0,0)} &=&\sum_{n=0}^{\infty }\sqrt{n!}\left[ f_{n}\langle 0|n\rangle
_{A}-e_{n}^{\prime }\langle 0|n\rangle _{B}\right] =0, \label{G1} \\
G^{(0,1)} &=&\sum_{n=0}^{\infty }\sqrt{n!}\left[ f_{n}\langle 1|n\rangle
_{A}-e_{n}^{\prime }\langle 1|n\rangle _{B}\right] =0, \label{G2} \\
G^{(1,0)} &=&\sum_{n=0}^{\infty }\sqrt{n!}\left[ e_{n}\langle 0|n\rangle
_{A}-f_{n}^{\prime }\langle 0|n\rangle _{B}\right] =0, \label{G3} \\
G^{(1,1)} &=&\sum_{n=0}^{\infty }\sqrt{n!}\left[ e_{n}\langle 1 |n\rangle
_{A}-f_{n}^{\prime }\langle 1|n\rangle _{B}\right] =0. \label{G4}
\end{eqnarray}
They form a set of $4$ linear homogeneous equations in the $4$ unknown
variables $f_{0}$, $f_{1}$, $f_{0}^{\prime }$, and $f_{1}^{\prime }$. Nonzero
solutions require the vanishing of the following $4\times 4$ determinant
\begin{equation}
G(E)=\left\vert G_{i,j}\right\vert =0, \label{G-func}
\end{equation}
where the elements $G_{i,j}$ are just the coefficients of $f_{0}$, $f_{1}$, $f_{0}^{\prime }$, and $f_{1}^{\prime }$ in Eqs. (\ref{G1})-(\ref{G4}).
Eq.~(\ref{G-func}) is just the $G$-function of the present mixed QRM. Its zeros thus give all regular eigenvalues of
the mixed QRM, which in turn give the eigenstates using Eq.~(\ref{wave_A})
or Eq.~(\ref{wave_B}). Note from the coefficients in Eqs.~(\ref{coef1A}) and
(\ref{coef2A}) that this $G$-function is a well-defined transcendental
function. Thus analytical exact solutions have been formally found. In the
next section, we will employ it to analyze the characteristics of the
spectra.
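In practice, the regular eigenvalues can be located by scanning $G(E)$ for sign changes between the poles and refining each bracket numerically; a rough sketch (treating $G$ as a user-supplied callable and omitting the construction of the determinant itself) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def regular_levels(G, Emin, Emax, npts=4000):
    # Bracket sign changes of G(E) on a grid and refine them with
    # Brent's method.  Brackets containing one of the pole energies
    # must be discarded beforehand, since G also changes sign there.
    Es = np.linspace(Emin, Emax, npts)
    vals = np.array([G(E) for E in Es])
    levels = []
    for i in range(npts - 1):
        if (np.isfinite(vals[i]) and np.isfinite(vals[i + 1])
                and vals[i] * vals[i + 1] < 0):
            levels.append(brentq(G, Es[i], Es[i + 1]))
    return sorted(levels)
\end{verbatim}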
Note that for the one-photon QRM ($g_2=0$), the parity symmetry leads to
\begin{eqnarray*}
e_{n}^{\prime } =\pm \left( -1\right) ^{n}e_{n}, ~~
f_{n}^{\prime } =\pm \left( -1\right) ^{n}f_{n}.
\end{eqnarray*}
Then Eq.~(\ref{G1}) becomes
\begin{equation}
G^{(0,0)}\left( E\right) = \sum_{n=0}^{\infty }\sqrt{n!}\left[ f_{n}\langle
0|n\rangle _{A}\mp \left( -1\right) ^{n}e_{n}\langle 0|n\rangle _{B}\right],
\label{G_1p}
\end{equation}
which is just the $G$-function of one-photon QRM ($g_2=0$)~\cite{Braak}.
Similarly, Eq.~(\ref{G1}) (Eq.~(\ref{G2})) can be reduced to the previous
ones~\cite{Chen2012,duan2016} of the two-photon QRM ($g_1=0$) in the subspace
with even (odd) bosonic number.
In various unmixed QRMs, such as one-photon~\cite{Braak,Chen2012}, two-photon~\cite{duan2016} or two-mode~\cite{liwei2015} QRM, the coefficients of the eigenstates satisfy a three-term recurrence relation which can be achieved by performing the Bogoliubov transformation.
All these $G$-functions with explicit pole structures have been summarized in Eq. (27) of Ref.~\cite{liwei2015}. But in the mixed model,
the Hilbert space cannot be separated into invariant subspaces due to the lack of parity symmetry, so the
recurrence relation of the coefficients $\{f_n\}$ is of higher order, as seen in Eq. (\ref{coef2A}). A
similar behavior also happens in the Dicke model \cite{Braak2013, Heshu}.
The possible reason is that the symmetry does not suffice to label each state
uniquely, indicating that the mixed QRM is non-integrable
according to Braak's criterion for quantum integrability \cite{Braak}.
\section{Exact spectra}
\subsection{Regular spectra}
\begin{figure}
\caption{$G$-curves as a function of $E$ for $\Delta =0.5$, $g_{1}=0.1$, $g_{2}=0.2$ and $0.47$.}
\label{G-function}
\end{figure}
To show the validity of the $G$-function (\ref{G-func}), we first check it against
independent numerics. The $G$-curves as a function of $E$ for $\Delta =0.5$, $g_{1}=0.1$, $g_{2}=0.2$ and $0.47$
are depicted in Fig. \ref{G-function}. We find that the zeros of the $G$-function indeed yield the true eigenvalues by comparing with numerical
diagonalization in truncated Hilbert spaces of sufficiently high dimension.
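Such independent numerics can be reproduced, for instance, by a straightforward diagonalization in a truncated Fock basis; a minimal sketch (the truncation size and the number of returned levels are our own illustrative choices) is:
\begin{verbatim}
import numpy as np

def mixed_qrm_levels(Delta, g1, g2, omega=1.0, nmax=200, nlev=12):
    # Build the mixed QRM Hamiltonian in the product basis
    # {|up>, |down>} x {|0>, ..., |nmax-1>} and return the lowest
    # nlev eigenvalues.
    n = np.arange(nmax)
    a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
    ad = a.T
    coup = g1 * (ad + a) + g2 * (ad @ ad + a @ a)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.diag([1.0, -1.0])
    H = (-0.5 * Delta * np.kron(sx, np.eye(nmax))
         + omega * np.kron(np.eye(2), ad @ a)
         + np.kron(sz, coup))
    return np.linalg.eigvalsh(H)[:nlev]

print(mixed_qrm_levels(Delta=0.5, g1=0.1, g2=0.2))
\end{verbatim}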
\begin{figure}
\caption{ Energy spectra as a function of $g_2$. The solid black lines denote the energy spectra obtained from the $G$-function. The blue dash (red dot) lines denote poles associated with $A$ ($B$) operator.}
\label{spectra}
\end{figure}
Then we plot the energy spectra calculated by the $G$-function (\ref{G-func}) in Fig. \ref
{spectra}. Checking with numerics, our $G$-function reproduces completely
all eigenvalues of the present mixed model.
When $g_{2}$ is close to $1/2$, the energy spectra collapse to negative infinity.
The parity symmetry in this mixed QRM is lacking, so in principle the energy
degeneracy should be lifted and level crossings should be absent. However, as shown in Fig. \ref{spectra} (a)-(c), some crossings seem to occur for
small $g_{1}$. It will be shown later that these ``crossings'' can
actually be discerned as avoided crossings.
\subsection{Pole structure and collapse}
From Eqs. (\ref{coef1A}) and (\ref{coef1B}), we can find two kinds of poles
associated with the $A$ and $B$ operators respectively, which lead to the divergence
of the recurrence relations:
\begin{eqnarray}
E_{n}^{\mathrm{(pole\_A)}} &=&\beta n-\frac{1-\beta }{2}-\frac{g_{1}^{2}}{1+2g_{2}},
\label{poleA} \\
E_{n}^{\mathrm{(pole\_B)}} &=&\beta n-\frac{1-\beta }{2}-\frac{g_{1}^{2}}{1-2g_{2}}.
\label{poleB}
\end{eqnarray}
For the same $n$, the difference between the two poles is independent of $n$:
\begin{equation*}
\Delta E^{\mathrm{(p)}}=\frac{g_{1}^{2}}{1-2g_{2}}-\frac{g_{1}^{2}}{1+2g_{2}}
=g_{1}^{2}\frac{4g_{2}}{\beta ^{2}}.
\end{equation*}
In the limit of $g_{2}\rightarrow 1/2$, $\beta \rightarrow 0$, all $
E_{n}^{\mathrm{(pole\_A)}}$ are squeezed into the single finite value $-\frac{1}{2}\left( 1+g_{1}^{2}\right) $, while
all $E_{n}^{\mathrm{(pole\_B)}}$ diverge to $-\infty $. It seems that there are two kinds
of collapse energies. But actually, all energy levels tend to the $B$-poles only, namely $-\infty $, as
$g_{2} \rightarrow 1/2$, as shown in Fig.~\ref{spectra}. This spectral characteristic is quite different from the two-photon QRM, where the energy spectra
collapse to a finite value~\cite{Ng,Felicetti,duan2016}. For the mixed QRM, the divergence of the eigenenergies to negative infinity for $g_2 \rightarrow 1/2$ suggests some underlying unphysicality, which deserves further study.
The energies of the highly excited states cross
the $A$-pole curves and then asymptotically converge to the $B$-poles,
which leads to exceptional solutions.
\subsection{Exceptional solutions}
As shown in Fig. \ref{exceptional} (a), most energy level curves pass through the pole curves on the
way to $g_{2}=1/2$, which results in so-called exceptional solutions. They can
be located in the following way.
At the intersecting point of the energy levels and the $m$-th pole line
associated with the $A$-operator (\ref{poleA}), the coefficient $f_{m}$ must
vanish so that the pole is lifted. Otherwise, the coefficient $e_{m}$ would
diverge due to the zero denominator in Eq. (\ref{coef1A}). In the unmixed QRM, $
f_{m}=0$ can uniquely yield the necessary and sufficient condition for the
occurrence of the exceptional solution. But that is not the case here,
because $f_{m}$ depends on the two initial variables $f_{0},f_{1}$ and cannot be
determined uniquely. The corresponding coefficient $e_{m}$ should be finite
and can be regarded as an unknown variable. In
all summations in Eqs.~(\ref{G1})$-$(\ref{G4}), the $m$-th terms should be
treated specially, i.e. let $f_{m}$ be $0$ and $e_{m}$ be a new variable. By
the recurrence relation (\ref{coef2A}), we can add a new equation for this
case
\begin{widetext}
\begin{equation}
f_{m}=\frac{-\frac{\Delta }{2}e_{m-2}+\left[ \Omega \left( m-2,E\right) +h_{A}
\right] f_{m-2}-2\left( u-v\right) ^{2}w\left( f_{m-3}+\left( m-1\right)
f_{m-1}\right) -2uvf_{m-4}}{2uv m\left( m-1\right) }=0. \label{new_eq}
\end{equation}
\end{widetext}
So for the exceptional solution with $m \ge 2$, we have a set of linear homogeneous
equations (Eqs. (\ref{G1})-(\ref{G4}) and (\ref{new_eq})) with $5$ unknown variables $f_{0}$, $f_{1}$, $f_{0}^{\prime}$, $f_{1}^{\prime }$ and $e_{m}$, while for $0 \le m<2$, where $f_m=0$,
we have only $4$ unknown variables $f_{1-m}$, $f_{0}^{\prime }$, $f_{1}^{\prime }$ and $
e_{m}$, which are determined by another set of linear homogeneous
equations (Eqs. (\ref{G1})-(\ref{G4})). Nonzero solutions require the vanishing of the $
5\times 5$ ($4\times 4$) determinant whose elements are just the coefficients of $f_{0}$, $f_{1}$, $f_{0}^{\prime}$, $f_{1}^{\prime }$ and $e_{m}$ in Eqs. (\ref{G1})-(\ref{G4}) and (\ref{new_eq}) ($f_{1-m}$, $f_{0}^{\prime }$, $f_{1}^{\prime }$ and $
e_{m}$ in Eqs. (\ref{G1})-(\ref{G4})) for $m \ge 2$ ($0 \le m<2$),
\begin{equation}
G_{m-A}^{\mathrm{exc}}\left( \Delta ,g_{1},g_{2}\right) =0. \label{except_A}
\end{equation}
We call this function the exceptional $G$-function. Here the energy is not an
explicit variable but is determined by Eq. (\ref{poleA}). The $m$-th exceptional solution associated with the $B$ operator can be
detected in the same way by the zeros of an exceptional $G$-function
\begin{equation}
G_{m-B}^{\mathrm{exc}}\left( \Delta ,g_{1},g_{2}\right) =0. \label{except_B}
\end{equation}
\begin{figure}
\caption{(a) Enlarged view of the energy spectra in Fig. \ref{spectra} for $\Delta =0.5$, $g_{1}=0.1$. (b) $G^{\mathrm{exc}}$ curve associated with the $B$-pole for $m=1$. (c)-(d) $G^{\mathrm{exc}}$ curves associated with the $A$-pole for $m=0$ and $1$.}
\label{exceptional}
\end{figure}
With the help of the exceptional $G$-function, we can determine the intersecting points of the energy levels and the pole curves, as shown in Fig. \ref{exceptional} (a) for $
\Delta =0.5,g_{1}=0.1$. The $G^{\mathrm{exc}}$ curves associated with the $A$-pole as a function of $g_{2}$ for $m=0$ and $1$ are shown in Fig. \ref{exceptional} (c) and (d) respectively, and the $G^{\mathrm{exc}}$ curve associated with
the $B$-pole for $m=1$ is shown in Fig. \ref{exceptional} (b). The detected
exceptional solutions in the $G^{\mathrm{exc}}$ curves are marked by
the same symbols as those in the enlarged spectra graph. One can find many zeros of the $G^{
\mathrm{exc}}$ curves associated with the $A$-pole in Fig. \ref{exceptional} (c) and (d), which correspond to
intersecting points of the energy levels and the $m$-th $A$-pole curves as displayed in Fig. \ref{exceptional} (a). For the $G^{\mathrm{exc}}$ curve associated with the $B$-pole, there is only one zero
for $m=1$, as exhibited in Fig. \ref{exceptional} (b), also consistent with the single intersecting point shown in Fig. \ref{exceptional} (a). No exceptional
solution exists for the $B$-pole with $m=0$. Every $g_{2}$ obtained from $G^{\mathrm{exc}}=0$ matches exactly an intersecting point in the energy spectra.
Now we can judge whether the features in Fig. \ref{spectra} (a)-(c) are true level crossings or avoided crossings.
Around this regime, we have not found any exceptional solutions, indicating
that the energy levels cannot intersect the pole curves. So although two
energy levels come very close, they are blocked off by two pole curves with
difference $\Delta E^{\mathrm{(p)}}\propto g_{1}^{2}$ and neither collide nor
cross each other. It is actually an avoided crossing. For small $g_{1}$, $
\Delta E^{\mathrm{(p)}}$ is very small, so it looks like a ``level crossing'' as depicted
in Fig. \ref{spectra} (a)-(c). For large $g_{1}$, the avoided
crossing is quite clear, as shown in Fig. \ref{spectra} (d)-(f).
Actually, these avoided crossings are just remnants of the
double degeneracy in the unmixed model, which is lifted in the mixed model.
\section{Effect of the mixed couplings}
In the mixed QRM, if we combine Eqs.~(\ref{H11_A}) and (\ref{H22_B}),
the Hamiltonian (Eq.~(\ref{Hamiltonian})) can be written as
\begin{equation}
H=\left(
\begin{array}{ll}
\frac{A^{\dagger }A-v^{2}-w^{2}}{u^{2}+v^{2}} & ~~~~~-\frac{\Delta }{2} \\
~~-\frac{\Delta }{2} & \;\;\;\frac{B^{\dagger }B-v^{2}-w^{\prime 2}}{
u^{2}+v^{2}}
\end{array}
\right) ,
\end{equation}
which can be reorganized and separated into three terms:
\begin{eqnarray}
H &=&H_{0}+\frac{\epsilon ^{(\mathrm{eff})}}{2}\sigma _{z} - \frac{1 - \beta}{2} \mathbf{I},
\end{eqnarray}
where $\mathbf{I}$ is a unit matrix, and
\begin{eqnarray}
H_{0} &=&\left(
\begin{array}{ll}
\beta A^{\dagger }A-\frac{g_{1}^{2}}{\beta ^{2}} &
~~~~~~-\frac{\Delta }{2} \\
~~~~~-\frac{\Delta }{2} & \;\;\;\beta B^{\dagger }B-
\frac{g_{1}^{2}}{\beta ^{2}}
\end{array}
\right) , \label{H_0} \\
\epsilon ^{(\mathrm{eff})} &=&\frac{4g_{2}}{1-4g_{2}^{2}}g_{1}^{2}.
\end{eqnarray}
An effective bias $\epsilon^{\mathrm{(eff)}}$ appears naturally, as well as an
overall energy shift $-(1 - \beta)/2$.
Recalling the one-photon QRM~\cite{Chen2012},
\begin{eqnarray}
H_{\mathrm{1P}} = -\frac{\Delta}{2} \sigma_x + \omega a^{\dagger} a + g_1 \sigma_z (a^{\dagger} + a),\label{H_1p}
\end{eqnarray}
this Hamiltonian can be expressed with a new set of bosonic operators $P = D^{\dagger}(g_1) a D(g_1) = a + g_1$
and $Q = D(g_1) a D^{\dagger}(g_1) = a - g_1$ as
\begin{eqnarray}
H_{\mathrm{1P}} = \left(
\begin{array}{ll}
\omega P^{\dagger }P-\frac{g_{1}^{2}}{\omega} &
~~~~~~-\frac{\Delta }{2} \\
~~~~~-\frac{\Delta }{2} & \;\;\;\omega Q^{\dagger }Q-
\frac{g_{1}^{2}}{\omega}
\end{array}
\right) .\label{H_1p_AB}
\end{eqnarray}
Comparing Eqs.~(\ref{H_0}) and (\ref{H_1p_AB}), we can introduce an effective photon frequency
$\omega^{\mathrm{(eff)}}$ and an effective one-photon coupling strength $g_1^{\mathrm{(eff)}}$,
\begin{eqnarray}
\omega^{\mathrm{(eff)}} &=& \beta,\nonumber\\
g_1^{\mathrm{(eff)}} &=& \frac{g_1}{\sqrt{\beta}},\nonumber
\end{eqnarray}
and rewrite $H_0$ as
\begin{eqnarray*}
H_{0} = -\frac{\Delta}{2} \sigma_x + \omega^{\mathrm{(eff)}} a^{\dagger} a + g_1^{\mathrm{(eff)}} \sigma_z (a^{\dagger} + a).
\end{eqnarray*}
Therefore, we construct an effective one-photon QRM to describe the mixed one, which provides
a more intuitional description of the influence of the mixed coupling,
\begin{eqnarray}
H^{\mathrm{(eff)}} &=& \frac{\epsilon ^{(\mathrm{eff})}}{2}\sigma _{z}
-\frac{\Delta}{2} \sigma_x - \frac{1 - \beta}{2}\nonumber\\
&&+ \omega^{\mathrm{(eff)}} a^{\dagger} a + g_1^{\mathrm{(eff)}} \sigma_z (a^{\dagger} + a).\label{Heff}
\end{eqnarray}
Comparing $P$ ($Q$) with $A$ ($B$), the main difference is the absence of the squeezing operator $S(r)$. The two-photon interaction leads to a squeezed field state~\cite{PengJS,Felicetti1}, which can be well captured by the squeezing operator~\cite{Chen2012,duan2016,Lupo}. Therefore, the squeezing operator is introduced explicitly to deal with the mixed QRM, but it is not necessary for the one-photon model.
The definition of $A$ ($B$) in Eq.~(\ref{H_0})
is equivalent to that of $P$ ($Q$) in Eq.~(\ref{H_1p_AB}) only if $g_{2}=0$, so it is hard to use the
effective Hamiltonian to deal with the strong two-photon coupling regime, especially for the bosonic part, due to the intense squeezing effect. We calculate the Wigner function $W(\alpha, \alpha^*)$ of the ground state~\cite{qutip1,qutip2} in Fig.~\ref{wigner}, which describes the quasiprobability distribution of the bosonic field in phase space. When $g_2=0.1$, the differences between the Wigner functions calculated from $H$ and $H^{(\mathrm{eff})}$ are negligible. However, it is shown in Fig.~\ref{wigner} (c) that the squeezing effect becomes apparent for $H$ when $g_2=0.3$. The effective Hamiltonian can hardly describe this squeezing effect, as demonstrated in Fig.~\ref{wigner} (d).
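A minimal sketch of such a Wigner-function computation, using the QuTiP package cited above (the truncation $N$ and the phase-space grid are our own illustrative choices), could look like:
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, sigmax, sigmaz, tensor, wigner

Delta, g1, g2, N = 1.0, 1.0, 0.3, 80
a = tensor(qeye(2), destroy(N))
sx = tensor(sigmax(), qeye(N))
sz = tensor(sigmaz(), qeye(N))
H = (-0.5 * Delta * sx + a.dag() * a
     + sz * (g1 * (a + a.dag()) + g2 * (a ** 2 + a.dag() ** 2)))

gs = H.groundstate()[1]            # ground state of the mixed QRM
rho_field = gs.ptrace(1)           # reduced state of the cavity mode
xvec = np.linspace(-5, 5, 201)
W = wigner(rho_field, xvec, xvec)  # W(alpha, alpha*) on a grid
\end{verbatim}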
Nevertheless, the effective Hamiltonian sheds light on the analysis of the strong-coupling case, especially the properties of the two-level system. The ground-state magnetization $M=\langle \psi_{GS} |\sigma_z |\psi_{GS}\rangle$ calculated from $H^{(\mathrm{eff})}$ is in good agreement with that calculated from $H$ even in the strong two-photon coupling regime, as shown in Fig.~\ref{sz_g2}. When $g_{2}$
tends to $1/2$, the effective bias $\epsilon ^{(\mathrm{eff})}$ tends to
infinity, and the two-level system prefers to stay in the lower level, as indicated by the ground-state magnetization $M \rightarrow -1$ in Fig.~\ref{sz_g2}. Therefore, the energy contributed by $\epsilon^{(\mathrm{eff})} \sigma_z / 2$ becomes negatively infinite, which is one of the reasons for the negative divergence of the eigenenergies of the mixed QRM in the limit of $g_2 \rightarrow 1/2$, as observed
in Fig.~\ref{spectra}. What is more, with the
increase of $g_{2}$, the effective photon frequency $\omega ^{(\mathrm{eff})}
$ decreases while the effective coupling $g_{1}^{(\mathrm{eff})}$
increases, which might provide a novel and economical way to reach the deep-strong
one-photon coupling regime.
\begin{figure}
\caption{The Wigner function of the ground state at $\Delta=1$, $g_1=1$, with (a)-(b) $g_2=0.1$ and (c)-(d) $g_2=0.3$. The left column is calculated from the original Hamiltonian $H$, and the right column from the effective Hamiltonian $H^{(\mathrm{eff})}$.}
\label{wigner}
\end{figure}
\begin{figure}
\caption{The ground-state magnetization $M$ as a function of $g_2$ for $\Delta=1$, $g_1=1$. Results for the full mixed model (\ref{12p-rabimodel}) and the effective Hamiltonian (\ref{Heff}) are compared.}
\label{sz_g2}
\end{figure}
\begin{figure}
\caption{The fidelity of $H^{(\mathrm{eff})}$ and of the unbiased one-photon QRM $H_{\mathrm{1P}}$ with respect to the original Hamiltonian $H$ as a function of time.}
\label{dynamics}
\end{figure}
To further demonstrate the accuracy of the effective Hamiltonian, we calculate the dynamics of fidelity as shown in Fig.~\ref{dynamics}. The fidelity is defined as the overlap of the wavefunctions $|\psi^{(\mathrm{eff})}(t)\rangle$ obtained from $H^{(\mathrm{eff})}$ (Eq. (\ref{Heff})) and $|\psi(t)\rangle$ obtained from $H$ (Eq. (\ref{12p-rabimodel})), namely $F^{(\mathrm{eff})}(t)=|\langle \psi^{(\mathrm{eff})}(t) |\psi(t)\rangle|$, which can be used to judge how accurately the state of the effective Hamiltonian reproduces that of the original Hamiltonian. The initial states are $|\uparrow\rangle |0\rangle$ for both of them. The
fidelity of the unbiased one-photon QRM, namely $H_{\mathrm{1P}}$ (Eq. (\ref{H_1p})), is also
presented for comparison, \textit{i.e.} $F_{1P}(t)=|\langle \psi_{1P}(t) |\psi(t)\rangle|$. When $g_2$ is as small as $0.05$, the corresponding effective bias reaches $
\epsilon^{(\mathrm{eff})}\simeq 0.202$. This effective bias
is large enough to play a significant
role in the evolution of the fidelity, as clearly seen in Fig.~\ref{dynamics} (a). The fidelity of $H^{(\mathrm{eff})}$ tends to one, which is strong evidence of the equivalence between $|\psi^{(\mathrm{eff})}(t)\rangle$ and $|\psi(t)\rangle$. The fidelity of $H_{\mathrm{1P}}$ is much smaller, indicating that it deviates from the original Hamiltonian significantly.
When we further increase $g_2$, the effective Hamiltonian still gives considerably good results, while the deviation of $H_{\mathrm{1P}}$ becomes more obvious. The fidelity of $H^{(\mathrm{eff})}$ drops slightly at long times, which is mainly due to error accumulation. The fidelity results confirm the limitations of the effective Hamiltonian in the strong two-photon coupling and long-time limits.
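A schematic QuTiP script for such a fidelity comparison (the evolution times, the truncation $N$, and the explicit identity shift are our own choices) might read:
\begin{verbatim}
import numpy as np
from qutip import basis, destroy, qeye, sesolve, sigmax, sigmaz, tensor

Delta, omega, g1, g2, N = 1.0, 1.0, 1.0, 0.05, 60
a = tensor(qeye(2), destroy(N))
sx = tensor(sigmax(), qeye(N))
sz = tensor(sigmaz(), qeye(N))
one = tensor(qeye(2), qeye(N))

H = (-0.5 * Delta * sx + omega * a.dag() * a
     + sz * (g1 * (a + a.dag()) + g2 * (a ** 2 + a.dag() ** 2)))

beta = np.sqrt(1 - 4 * g2 ** 2)
eps_eff = 4 * g2 * g1 ** 2 / (1 - 4 * g2 ** 2)
Heff = (0.5 * eps_eff * sz - 0.5 * Delta * sx - 0.5 * (1 - beta) * one
        + beta * a.dag() * a + g1 / np.sqrt(beta) * sz * (a + a.dag()))

psi0 = tensor(basis(2, 0), basis(N, 0))        # initial state |up>|0>
times = np.linspace(0.0, 20.0, 400)
states = sesolve(H, psi0, times).states
states_eff = sesolve(Heff, psi0, times).states
F_eff = [abs(pe.overlap(p)) for pe, p in zip(states_eff, states)]
\end{verbatim}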
\begin{figure}
\caption{The energy difference $\protect\delta E_{n}=E_{n}-E_{0}$ as a function of the bias $\epsilon$ for the full model (\ref{H_e}) and the effective model (\ref{H_e_eff}).}
\label{dE_tot}
\end{figure}
The mixed QRM can be realized experimentally by coupling a
flux qubit to the plasma mode of its DC-SQUID detector~\cite{Bertet2}.
We expect that the effective Hamiltonian can be employed to explain the
experimental results. One of the most widely measured quantities in experiments
is the transmission spectrum~\cite{exp, Yoshihara,Forn2}.
The transmission spectrum, \textit{i.e.} $\delta E_{n}=E_{n}-E_{0}$,
from both the original full model (\ref{12p-rabimodel}) and the effective model (\ref{Heff}) is shown in Fig.~\ref{dE_tot}.
We introduce an additional bias $\epsilon
\sigma _{z}/2$, which originates from an externally applied
magnetic flux in the circuit QED system, and Eqs.~(\ref{12p-rabimodel}) and (\ref{Heff}) become
\begin{eqnarray}
H_{\epsilon } &=&\frac{\epsilon }{2}\sigma _{z}-\frac{\Delta }{2}\sigma
_{x}+a^{\dagger }a \notag \\
&&+\sigma _{z}\left( g_{1}\left( a^{\dagger }+a\right) +g_{2}\left[ \left(
a^{\dagger }\right) ^{2}+a^{2}\right] \right) , \label{H_e} \\
H_{\epsilon }^{(\mathrm{eff})} &=&\frac{\epsilon +\epsilon ^{(\mathrm{eff})}
}{2}\sigma _{z}-\frac{\Delta }{2}\sigma _{x} \nonumber\\
&&+\omega ^{\mathrm{(eff)}}a^{\dagger }a+g_{1}^{\mathrm{(eff)}}\sigma
_{z}(a^{\dagger }+a).\label{H_e_eff}
\end{eqnarray}
Therefore, the total bias in the effective Hamiltonian is ($\epsilon ^{(
\mathrm{eff})}+\epsilon $). It is obvious from Fig.~\ref{dE_tot} (a) that the effective Hamiltonian can capture
the effects of two-photon coupling in the weak coupling regime very well. Even for strong
two-photon coupling $g_{2}=0.45$, the effective Hamiltonian (\ref{H_e_eff})
still provides a quite accurate energy structure and captures almost all
features of the mixed QRM, as shown in Fig.~
\ref{dE_tot} (b)-(c). With the decrease of $g_1$, the deviation appears gradually
because the two-photon interaction becomes dominant. One should also note that the
energy differences of the effective Hamiltonian (Eq.~(\ref{H_e_eff})) are symmetric about $\epsilon
=-\epsilon ^{(\mathrm{eff})}$, which is different from those of the original one
(Eq.~(\ref{H_e})), as shown in Fig.~\ref{dE_tot} (c). For the effective
Hamiltonian with an additional bias, we can easily confirm that only the
absolute value of $(\epsilon +\epsilon ^{(\mathrm{eff})})$ affects the
eigenenergies, as well as the energy differences. For the mixed QRM, this
symmetry is broken due to the two-photon interaction term. Therefore,
the asymmetry in the transmission spectrum can be regarded as a signature of
the mixed one- and two-photon couplings. Far from the symmetry point $
\epsilon =-\epsilon ^{(\mathrm{eff})}$, the energy difference tends to be a
multiple of the photon frequency. The energy difference decreases with the
increase of $g_{2}$, which can be explained by the suppressed effective
photon frequency $\omega^{(\mathrm{eff})}$.
\begin{figure}
\caption{(a) The ground-state magnetization $M$ and (b) the photon number $N_{\mathrm{ph}}$ as functions of the scaled coupling $g_1^{(\mathrm{eff})}/g_{1c}^{(\mathrm{eff})}$ for large $\Delta$.}
\label{Sz_Nph}
\end{figure}
Recently, the quantum phase transition of the one-photon QRM in the limit
$\Delta/\omega\rightarrow\infty$ has drawn much attention~\cite{plenio,hgluo}.
The magnetization $M = \langle \sigma_z \rangle$ serves as an order parameter
which changes from zero in the normal phase to nonzero
in the superradiant phase when $g_1$ crosses the critical point
$g_{1c} = \sqrt{\Delta \omega} / 2$. Above $g_{1c}$,
the photon number is also strongly enhanced.
The ground-state magnetization $M$ and the photon
number $N_{\mathrm{ph}} = \langle a^{\dagger} a \rangle$ calculated
from $H$ (Eq.~(\ref{12p-rabimodel}))
as a function of the scaled coupling $g_1^{(\mathrm{eff})}/g_{1c}^{(\mathrm{eff})}$
for large values of $\Delta$ are shown in Fig.~\ref{Sz_Nph},
where $g_{1c}^{(\mathrm{eff})} = \sqrt{\Delta \omega^{(\mathrm{eff})}}/2$.
A negative $M$ emerges in the mixed QRM due to the positive effective bias.
Clearly, the photon number $N_{\mathrm{ph}}$ increases with the increase of $
g_2$, which also indicates that the qubit-cavity interactions are enhanced
and more photons are excited. For different
two-photon couplings in the mixed QRM, the photon number is considerably enhanced at
almost the same scaled coupling, around $1$. Whether this is a signature of a
quantum phase transition in the mixed QRM, as observed in Refs.~\cite{peng2,ying}, deserves further study.
\section{Summary}
In this paper, by using Bogoliubov operators, we exactly solve the mixed QRM
with both one- and two-photon terms analytically. The $G$-functions with the
pole structures are derived, which reproduce completely the regular spectra.
They can also be reduced to the unmixed ones. It is found that there are two
sets of poles associated with two Bogoliubov operators. Two types of
exceptional eigenvalues are then derived, which cannot be obtained solely by
requiring that the corresponding coefficients vanish, as in the unmixed
models. When the two-photon coupling strength $g_2$ is close to $1/2$, two
collapse energies are derived. One is finite, while the other diverges
negatively. All energy levels collapse to the lower one,
and therefore also diverge, in sharp contrast to the unmixed two-photon model.
The level degeneracy in the unmixed model is lifted due to the absence of
parity symmetry. The avoided crossings are strictly discerned from the very
close levels in the mixed model by the absence of exceptional eigenvalues
around the "crossings".
We construct an effective one-photon Hamiltonian to describe the mixed QRM, which is valid in the weak two-photon coupling regime. The mixed QRM is equivalent to a single-photon
one with an effective positive bias, suppressed photon frequency and
enhanced one-photon coupling. This feature of the mixed system is very
helpful for recent circuit QED experiments, where many groups are competing
intensely to increase the one-photon coupling~\cite{Niemczyk,exp,Yoshihara,Forn2}. We suggest that the simultaneous presence of
both one- and two-photon couplings would cooperate to provide richer physics.
\textbf{ACKNOWLEDGEMENTS} This work is supported by the National Key Research and Development Program of China (No. 2017YFA0303002), the National Science Foundation of China (Grants No. 11674285 and No. 11834005).
$^{\ast }$ Corresponding author. Email: [email protected]
\end{document}
\begin{document}
\title[Bounds for the ratio of two gamma functions]
{Bounds for the ratio of two gamma functions---From Gautschi's and Kershaw's inequalities to completely monotonic functions}
\author[F. Qi]{Feng Qi}
\address[F. Qi]{Research Institute of Mathematical Inequality Theory, Henan Polytechnic University, Jiaozuo City, Henan Province, 454010, China}
\email{\href{mailto: F. Qi <[email protected]>}{[email protected]}, \href{mailto: F. Qi <[email protected]>}{[email protected]}, \href{mailto: F. Qi <[email protected]>}{[email protected]}}
\urladdr{\url{http://qifeng618.spaces.live.com}}
\begin{abstract}
In this expository and survey paper, along one of main lines of bounding the ratio of two gamma functions, we look back and analyse some inequalities, the complete monotonicity of several functions involving ratios of two gamma or $q$-gamma functions, the logarithmically complete monotonicity of a function involving the ratio of two gamma functions, some new bounds for the ratio of two gamma functions and divided differences of polygamma functions, and related monotonicity results.
\end{abstract}
\keywords{Bound, ratio of two gamma functions, inequality, completely monotonic function, logarithmically completely monotonic function, divided difference, gamma function, $q$-gamma function, psi function, polygamma function}
\subjclass[2000]{26A48, 26A51, 26D07, 26D20, 33B15, 33D05, 65R10}
\thanks{The author was partially supported by the China Scholarship Council}
\thanks{This paper was typeset using \AmS-\LaTeX}
\maketitle
\tableofcontents
\section{Introduction}
For the sake of proceeding smoothly, we briefly introduce some necessary concepts and notation.
\subsection{The gamma and $q$-gamma functions}
It is well-known that the classical Euler gamma function may be defined by
\begin{equation}\label{egamma}
\Gamma(x)=\int^\infty_0t^{x-1} e^{-t}\td t
\end{equation}
for $x>0$. The logarithmic derivative of $\Gamma(x)$, denoted by $\psi(x)=\frac{\Gamma'(x)}{\Gamma(x)}$, is called the psi or digamma function, and $\psi^{(k)}(x)$ for $k\in \mathbb{N}$ are called the polygamma functions. It is common knowledge that the special functions $\Gamma(x)$, $\psi(x)$ and $\psi^{(k)}(x)$ for $k\in\mathbb{N}$ are fundamental and important and have extensive applications in the mathematical sciences.
\par
The $q$-analogues of $\Gamma$ and $\psi$ are defined~\cite[pp.~493\nobreakdash--496]{andrews} for $x>0$ by
\begin{gather}\label{q-gamma-dfn}
\Gamma_q(x)=(1-q)^{1-x}\prod_{i=0}^\infty\frac{1-q^{i+1}}{1-q^{i+x}},\quad 0<q<1,\\
\label{q-gamma-dfn-q>1}
\Gamma_q(x)=(q-1)^{1-x}q^{\binom{x}2}\prod_{i=0}^\infty\frac{1-q^{-(i+1)}}{1-q^{-(i+x)}}, \quad q>1,
\end{gather}
and
\begin{align}\label{q-gamma-1.4}
\psi_q(x)=\frac{\Gamma_q'(x)}{\Gamma_q(x)}&=-\ln(1-q)+\ln q \sum_{k=0}^\infty\frac{q^{k+x}}{1-q^{k+x}}\\
&=-\ln(1-q)-\int_0^\infty\frac{e^{-xt}}{1-e^{-t}}\td\gamma_q(t) \label{q-gamma-1.5}
\end{align}
for $0<q<1$, where $\td\gamma_q(t)$ is a discrete measure with positive masses $-\ln q$ at the positive points $-k\ln q$ for $k\in\mathbb{N}$, more accurately,
\begin{equation}
\gamma_q(t)=
\begin{cases}
-\ln q\sum\limits_{k=1}^\infty\delta(t+k\ln q),&0<q<1,\\ t,&q=1.
\end{cases}
\end{equation}
See~\cite[p.~311]{Ismail-Muldoon-119}.
\par
The $q$-gamma function $\Gamma_q(z)$ has the following basic properties:
\begin{equation}
\lim_{q\to1^+}\Gamma_q(z)=\lim_{q\to1^-}\Gamma_q(z)=\Gamma(z)\quad \text{and}\quad \Gamma_q(x)=q^{\binom{x-1}2}\Gamma_{1/q}(x).
\end{equation}
\subsection{The generalized logarithmic mean}
The generalized logarithmic mean $L_p(a,b)$ of order $p\in\mathbb{R}$ for positive numbers $a$ and $b$ with $a\ne b$ may be defined~\cite[p.~385]{bullenmean} by
\begin{equation}
L_p(a,b)=
\begin{cases}
\left[\dfrac{b^{p+1}-a^{p+1}}{(p+1)(b-a)}\right]^{1/p},&p\ne-1,0;\\[1em]
\dfrac{b-a}{\ln b-\ln a},&p=-1;\\[1em]
\dfrac1e\left(\dfrac{b^b}{a^a}\right)^{1/(b-a)},&p=0.
\end{cases}
\end{equation}
It is well-known that
\begin{gather}
L_{-2}(a,b) =\sqrt{ab}\,=G(a,b),\quad L_{-1}(a,b)=L(a,b),\\
L_0(a,b)=I(a,b)\quad \text{and} \quad L_1(a,b)=\frac{a+b}2=A(a,b)
\end{gather}
are called respectively the geometric mean, the logarithmic mean, the identric or exponential mean, and the arithmetic mean. It is also known~\cite[pp.~386\nobreakdash--387, Theorem~3]{bullenmean} that the generalized logarithmic mean $L_p(a,b)$ of order $p$ is increasing in $p$ for $a\ne b$. Therefore, inequalities
\begin{equation}\label{mean-ineq}
G(a,b)<L(a,b)<I(a,b)<A(a,b)
\end{equation}
are valid for $a>0$ and $b>0$ with $a\ne b$. See also~\cite{abstract-jipam, abstract-rgmia, qi1}. Moreover, the generalized logarithmic mean $L_p(a,b)$ is a special case of $E(r,s;x,y)$ defined by~\eqref{emv-dfn}, that is, $L_p(a,b)=E(1,p+1;a,b)$.
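A small numerical illustration of the chain~\eqref{mean-ineq} (the values of $a$ and $b$ are arbitrary; the routine below is only a sketch of the definition of $L_p$) is:
\begin{verbatim}
import math

def L(p, a, b):
    # Generalized logarithmic mean L_p(a, b) for a != b.
    if p == -1:
        return (b - a) / (math.log(b) - math.log(a))
    if p == 0:
        return (b ** b / a ** a) ** (1.0 / (b - a)) / math.e
    return ((b ** (p + 1) - a ** (p + 1))
            / ((p + 1) * (b - a))) ** (1.0 / p)

a, b = 2.0, 5.0
# G(a,b) < L(a,b) < I(a,b) < A(a,b) correspond to p = -2, -1, 0, 1.
print([round(L(p, a, b), 6) for p in (-2, -1, 0, 1)])
\end{verbatim}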
\subsection{Logarithmically completely monotonic functions}
A function $f$ is said to be completely monotonic on an interval $I$ if $f$ has derivatives of all orders on $I$ and
\begin{equation}\label{cmf-dfn-ineq}
(-1)^{n}f^{(n)}(x)\ge0
\end{equation}
for $x \in I$ and $n \ge0$.
\begin{thm}[{\cite[p.~161]{widder}}]\label{p.161-widder}
A necessary and sufficient condition that $f(x)$ should be completely monotonic for $0<x<\infty$ is that
\begin{equation}
f(x)=\int_0^\infty e^{-xt}\td\alpha(t),
\end{equation}
where $\alpha(t)$ is nondecreasing and the integral converges for $0<x<\infty$.
\end{thm}
\begin{thm}[{\cite[p.~83]{bochner}}]\label{p.83-bochner}
If $f(x)$ is completely monotonic on $I$, $g(x)\in I$, and $g'(x)$ is completely monotonic on $(0,\infty)$, then $f(g(x))$ is completely monotonic on $(0,\infty)$.
\end{thm}
A positive function $f(x)$ is said to be logarithmically completely monotonic on an interval $I\subseteq\mathbb{R}$ if it has derivatives of all orders on $I$ and its logarithm $\ln f(x)$ satisfies
$$
(-1)^k[\ln f(x)]^{(k)}\ge0
$$
for $k\in\mathbb{N}$ on $I$.
\par
The notion ``logarithmically completely monotonic function'' was first put forward in~\cite{Atanassov} without an explicit definition. This terminology was explicitly recovered in~\cite{minus-one} whose revised and expanded version was formally published as~\cite{minus-one.tex-rev}.
\par
It has been proved once and again in~\cite{CBerg, clark-ismail-nonlinear, clark-ismail-rgmia, compmon2, absolute-mon.tex, minus-one, minus-one.tex-rev, schur-complete} that a logarithmically completely monotonic function on an interval $I$ must also be completely monotonic on $I$. C. Berg points out in~\cite{CBerg} that these functions are the same as those studied by Horn~\cite{horn} under the name infinitely divisible completely monotonic functions. For more information, please refer to~\cite{CBerg, e-gam-rat-comp-mon, auscm-rgmia} and related references therein.
\subsection{Outline of this paper}
In this expository and survey paper, along one of main lines of bounding the ratio of two gamma functions, we look back and analyse Gautschi's double inequality and Kershaw's second double inequality, the complete monotonicity of several functions involving ratios of two gamma or $q$-gamma functions by Alzer, Bustoz-Ismail, Elezovi\'c-Giordano-Pe\v{c}ari\'c and Ismail-Muldoon, the logarithmically complete monotonicity of a function involving the ratio of two gamma functions, some new bounds for the ratio of two gamma functions and the divided differences of polygamma functions, and related monotonicity results by Batir, Elezovi\'c-Pe\v{c}ari\'c, Qi and others.
\section{Gautschi's and Kershaw's double inequalities}
In this section, we begin with the papers~\cite{gaut, kershaw} to introduce a kind of inequalities for bounding the ratio of two gamma functions.
\subsection{Gautschi's double inequalities}
The first result of the paper~\cite{gaut} was the double inequality
\begin{equation}\label{gaut-3-ineq}
\frac{(x^p+2)^{1/p}-x}2<e^{x^p}\int_x^\infty e^{-t^p}\td t\le c_p\biggl[\biggl(x^p+\frac1{c_p}\biggr)^{1/p}-x\biggr]
\end{equation}
for $x\ge0$ and $p>1$, where
\begin{equation}
c_p=\biggl[\Gamma\biggl(1+\frac1p\biggr)\biggr]^{p/(p-1)}
\end{equation}
or $c_p=1$. By an easy transformation, the inequality~\eqref{gaut-3-ineq} was written in terms of the complementary gamma function
\begin{equation}
\Gamma(a,x)=\int_x^\infty e^{-t}t^{a-1}\td t
\end{equation}
as
\begin{equation}\label{gaut-4-ineq}
\frac{p[(x+2)^{1/p}-x^{1/p}]}2<e^x\Gamma\biggl(\frac1p,x\biggr)\le pc_p\biggl[\biggl(x+\frac1{c_p}\biggr)^{1/p}-x^{1/p}\biggr]
\end{equation}
for $x\ge0$ and $p>1$. In particular, if letting $p\to\infty$, the double inequality
\begin{equation}
\frac12\ln\biggl(1+\frac2x\biggr)\le e^xE_1(x)\le\ln\biggl(1+\frac1x\biggr)
\end{equation}
for the exponential integral $E_1(x)=\Gamma(0,x)$ for $x>0$ was derived from~\eqref{gaut-4-ineq}, in which the bounds exhibit the logarithmic singularity of $E_1(x)$ at $x=0$. As a direct consequence of the inequality~\eqref{gaut-4-ineq} for $p=\frac1s$, $x=0$ and $c_p=1$, the following simple inequality for the gamma function was deduced:
\begin{equation}\label{gaut-none-ineq}
2^{s-1}\le\Gamma(1+s)\le1,\quad 0\le s\le 1.
\end{equation}
\par
The second result of the paper~\cite{gaut} was the double inequality
\begin{equation}\label{gaut-6-ineq}
e^{(s-1)\psi(n+1)}\le\frac{\Gamma(n+s)}{\Gamma(n+1)}\le n^{s-1}
\end{equation}
for $0\le s\le1$ and $n\in\mathbb{N}$, which is sharper and more general than~\eqref{gaut-none-ineq}. It was obtained by proving that the function
\begin{equation}
f(s)=\frac1{1-s}\ln\frac{\Gamma(n+s)}{\Gamma(n+1)}
\end{equation}
is monotonically decreasing for $0\le s<1$ and that
\begin{equation*}
\lim_{s\to1^-}f(s)=-\lim_{s\to1^-}\psi(n+s)=-\psi(n+1).
\end{equation*}
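A quick numerical spot-check of the double inequality~\eqref{gaut-6-ineq} (a sketch only, using standard special-function routines) is:
\begin{verbatim}
import math
from scipy.special import digamma, gamma

def gautschi_holds(n, s):
    ratio = gamma(n + s) / gamma(n + 1)
    lower = math.exp((s - 1) * digamma(n + 1))
    upper = n ** (s - 1)
    return lower <= ratio <= upper

print(all(gautschi_holds(n, s)
          for n in range(1, 30)
          for s in (0.1, 0.25, 0.5, 0.75, 0.9)))
\end{verbatim}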
\begin{rem}
For more information on refining the inequality~\eqref{gaut-3-ineq}, please refer to~\cite{incom-gamma-L-N, qi-senlin-mia, Qi-Mei-99-gamma} and related references therein.
\end{rem}
\begin{rem}
The left-hand side inequality in~\eqref{gaut-6-ineq} can be rearranged as
\begin{equation}\label{gaut-ineq-1}
\frac{\Gamma(n+s)}{\Gamma(n+1)}\exp((1-s)\psi(n+1))\ge1
\end{equation}
or
\begin{equation}\label{gaut-ineq-2-exp}
\biggl[\frac{\Gamma(n+s)}{\Gamma(n+1)}\biggr]^{1/(s-1)}e^{-\psi(n+1)}\le1
\end{equation}
for $n\in\mathbb{N}$ and $0\le s\le 1$. Since the limit
\begin{equation}
\lim_{n\to\infty}\biggl\{\biggl[\frac{\Gamma(n+s)}{\Gamma(n+1)}\biggr]^{1/(s-1)} e^{-\psi(n+1)}\biggr\}=1
\end{equation}
can be verified by using Stirling's formula in~\cite[p.~257, 6.1.38]{abram}: For $x>0$, there exists $0<\theta<1$ such that
\begin{equation}\label{Stirling-formula}
\Gamma(x+1)=\sqrt{2\pi}\,x^{x+1/2}\exp\biggl(-x+\frac{\theta}{12x}\biggr),
\end{equation}
it is natural to guess that the function
\begin{equation}\label{gaut-funct}
\biggl[\frac{\Gamma(x+s)}{\Gamma(x+1)}\biggr]^{1/(s-1)}e^{-\psi(x+1)}
\end{equation}
for $0\le s<1$ is possibly increasing with respect to $x$ on $(-s,\infty)$.
\end{rem}
\begin{rem}
For information on the study of the right-hand side inequality in~\eqref{gaut-6-ineq}, please refer to~\cite{bounds-two-gammas.tex, Wendel-Gautschi-type-ineq.tex, Wendel2Elezovic.tex} and a great amount of related references therein.
\end{rem}
\subsection{Kershaw's second double inequality and its proof}\label{kershaw-sec}
In 1983, over twenty years later after the paper~\cite{gaut}, among other things, D.~Kershaw was motivated by the left-hand side inequality~\eqref{gaut-6-ineq} in~\cite{gaut} and presented in~\cite{kershaw} the following double inequality for $0<s<1$ and $x>0$:
\begin{gather}\label{gki2}
\exp\big[(1-s)\psi\big(x+\sqrt{s}\,\big)\big] <\frac{\Gamma(x+1)}{\Gamma(x+s)}
<\exp\biggl[(1-s)\psi\biggl(x+\frac{s+1}2\biggr)\biggr].
\end{gather}
It is called in the literature Kershaw's second double inequality.
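Before recalling Kershaw's proof, we note that~\eqref{gki2} is easy to spot-check numerically; a short sketch (not part of the original argument) is:
\begin{verbatim}
import math
from scipy.special import digamma, gammaln

def kershaw_holds(x, s):
    ratio = math.exp(gammaln(x + 1) - gammaln(x + s))
    lower = math.exp((1 - s) * digamma(x + math.sqrt(s)))
    upper = math.exp((1 - s) * digamma(x + (s + 1) / 2))
    return lower < ratio < upper

print(all(kershaw_holds(x, s)
          for x in (0.1, 0.5, 1.0, 2.0, 10.0)
          for s in (0.1, 0.3, 0.5, 0.7, 0.9)))
\end{verbatim}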
\begin{proof}[Kershaw's proof for~\eqref{gki2}]
Define the function $f_\alpha$ by
\begin{equation}\label{kershaw-f-dfn}
f_\alpha(x)=\frac{\Gamma(x+1)}{\Gamma(x+s)}\exp((s-1)\psi(x+\alpha))
\end{equation}
for $x>0$ and $0<s<1$, where the parameter $\alpha$ is to be determined.
\par
It is not difficult to show, with the aid of Stirling's formula, that
\begin{equation}\label{kershaw-2.3}
\lim_{x\to\infty}f_\alpha(x)=1.
\end{equation}
\par
Now let
\begin{equation}\label{kershaw-F-dfn}
F(x)=\frac{f_\alpha(x)}{f_\alpha(x+1)}=\frac{x+s}{x+1}\exp\frac{1-s}{x+\alpha}.
\end{equation}
Then
\begin{equation*}
\frac{F'(x)}{F(x)}=(1-s)\frac{(\alpha^2-s)+(2\alpha-s-1)x}{(x+1)(x+s)(x+\alpha)^2}.
\end{equation*}
It is easy to show that
\begin{enumerate}
\item
if $\alpha=s^{1/2}$, then $F'(x)<0$ for $x>0$;
\item
if $\alpha=\frac{s+1}2$, then $F'(x)>0$ for $x>0$.
\end{enumerate}
Consequently if $\alpha=s^{1/2}$ then $F$ strictly decreases, and since $F(x)\to1$ as $x\to\infty$ it follows that $F(x)>1$ for $x>0$. But, from~\eqref{kershaw-2.3}, this implies that $f_\alpha(x)>f_\alpha(x+1)$ for $x>0$, and so $f_\alpha(x)>f_\alpha(x+n)$. Take the limit as $n\to\infty$ to give the result that $f_\alpha(x)>1$, which can be rewritten as the left-hand side inequality in~\eqref{gki2}. The corresponding upper bound can be verified by a similar argument when $\alpha=\frac{s+1}2$, the only difference being that in this case $f_\alpha$ strictly increases to unity.
\end{proof}
\begin{rem}
The idea contained in the above stated proof of~\eqref{gki2} was also utilized by other mathematicians. For detailed information, please refer to related contents and references in~\cite{bounds-two-gammas.tex}.
\end{rem}
\begin{rem}
The inequality~\eqref{gki2} can be rearranged as
\begin{equation}\label{gki2-rew-1}
\frac{\Gamma(x+s)}{\Gamma(x+1)}\exp\big[(1-s)\psi\big(x+\sqrt{s}\,\big)\big] <1 <\frac{\Gamma(x+s)}{\Gamma(x+1)}\exp\biggl[(1-s)\psi\biggl(x+\frac{s+1}2\biggr)\biggr]
\end{equation}
or
\begin{multline}\label{gki2-rew-2}
\biggl[\frac{\Gamma(x+s)}{\Gamma(x+1)}\biggr]^{1/(s-1)}\exp\big[-\psi\big(x+\sqrt{s}\,\big)\big] >1\\
>\biggl[\frac{\Gamma(x+s)}{\Gamma(x+1)}\biggr]^{1/(s-1)}\exp\biggl[-\psi\biggl(x+\frac{s+1}2\biggr)\biggr]. \end{multline}
By Stirling's formula~\eqref{Stirling-formula}, we can prove that
\begin{equation}
\lim_{x\to\infty}\biggl\{\biggl[\frac{\Gamma(x+s)}{\Gamma(x+1)}\biggr]^{1/(s-1)} \exp\big[-\psi\big(x+\sqrt{s}\,\big)\big]\biggr\}=1
\end{equation}
and
\begin{equation}
\lim_{x\to\infty}\biggl\{\biggl[\frac{\Gamma(x+s)}{\Gamma(x+1)}\biggr]^{1/(s-1)} \exp\biggl[-\psi\biggl(x+\frac{s+1}2\biggr)\biggr]\biggr\}=1.
\end{equation}
These clues lead us to conjecture that the functions at the very ends of inequalities~\eqref{gki2-rew-1} and~\eqref{gki2-rew-2} are perhaps monotonic with respect to $x$ on $(0,\infty)$.
\end{rem}
\section{Several complete monotonicity results}
The complete monotonicity of the functions at the very ends of inequality~\eqref{gki2-rew-1} was first demonstrated in~\cite{Bustoz-and-Ismail}, and then several related functions were also proved in~\cite{Alzer1, egp, laj-7.pdf} to be (logarithmically) completely monotonic.
\subsection{Bustoz-Ismail's complete monotonicity results}\label{Bustoz-Ismail-sec}
In 1986, motivated by the double inequality~\eqref{gki2} and other related inequalities, J. Bustoz and M.~E.~H. Ismail revealed in~\cite[Theorem~7 and Theorem~8]{Bustoz-and-Ismail} that
\begin{enumerate}
\item
the function
\begin{equation}\label{bustol-ismail-AM}
\frac{\Gamma(x+s)}{\Gamma(x+1)}\exp\biggl[(1-s)\psi\biggl(x+\frac{s+1}2\biggr)\biggr]
\end{equation}
for $0\le s\le1$ is completely monotonic on $(0,\infty)$; When $0<s<1$, the function~\eqref{bustol-ismail-AM} satisfies $(-1)^nf^{(n)}(x)>0$ for $x>0$;
\item
the function
\begin{equation}\label{bustol-ismail-AMM}
\frac{\Gamma(x+1)}{\Gamma(x+s)}\exp\bigl[(s-1)\psi\bigl(x+s^{1/2}\bigr)\bigr]
\end{equation}
for $0<s<1$ is strictly decreasing on $(0,\infty)$.
\end{enumerate}
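\begin{rem}
It may be worth pointing out that, since a completely monotonic function is in particular decreasing and since the function~\eqref{bustol-ismail-AM} tends to $1$ as $x\to\infty$ (which can be verified by Stirling's formula~\eqref{Stirling-formula}), the function~\eqref{bustol-ismail-AM} is bounded below by $1$ on $(0,\infty)$. In other words, the complete monotonicity above recovers the upper bound in Kershaw's double inequality~\eqref{gki2}:
\begin{equation*}
\frac{\Gamma(x+1)}{\Gamma(x+s)}\le\exp\biggl[(1-s)\psi\biggl(x+\frac{s+1}2\biggr)\biggr],\quad x>0.
\end{equation*}
\end{rem}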
\begin{rem}
The proof of the complete monotonicity of the function~\eqref{bustol-ismail-AM} in~\cite[Theorem~7]{Bustoz-and-Ismail} relies on the inequality
\begin{equation}\label{lemma3.1-ism}
(y+a)^{-n}-(y+b)^{-n}>(b-a)n\biggl(y+\frac{a+b}2\biggr)^{-n-1},\quad n>0
\end{equation}
for $y>0$ and $0<a<b$, the series representation
\begin{equation}\label{series-repr}
\psi(x)=-\gamma-\frac1x+\sum_{n=1}^\infty\biggl(\frac1n-\frac1{x+n}\biggr)
\end{equation}
in~\cite[p.~15]{er}, and the above Theorem~\ref{p.83-bochner} applied to $f(x)=e^{-x}$.
\end{rem}
\begin{rem}
The inequality~\eqref{lemma3.1-ism} verified in~\cite[Lemma~3.1]{Bustoz-and-Ismail} can be rewritten as
\begin{equation}\label{lemma3.1-ism-rew}
\biggl[\frac1{-n}\cdot\frac{(y+a)^{-n}-(y+b)^{-n}}{(y+a)-(y+b)}\biggr]^{1/[(-n)-1]}
<\frac{(y+a)+(y+b)}2,\quad n>0
\end{equation}
for $y>0$ and $0<a<b$, which is equivalent to
\begin{equation}\label{E-E(1,2)}
E(1,-n;y+a,y+b)<E(1,2;y+a,y+b),
\end{equation}
where $E(r,s;x,y)$ stands for extended mean values and is defined for two positive numbers $x$ and $y$ and two real numbers $r$ and $s$ by
\begin{equation}
\begin{aligned}\label{emv-dfn}
E(r,s;x,y)&=\biggl(\frac{r}{s}\cdot\frac{y^s-x^s}
{ y^r-x^r}\biggr)^{{1/(s-r)}}, & rs(r-s)(x-y)&\ne 0; \\
E(r,0;x,y)&=\biggl(\frac{1}{r}\cdot\frac{y^r-x^r}
{\ln y-\ln x}\biggr)^{{1/r}}, & r(x-y)&\ne 0; \\
E(r,r;x,y)&=\frac1{e^{1/r}}\biggl(\frac{x^{x^r}}{y^{y^r}}\biggr)^{ {1/(x^r-y^r)}},& r(x-y)&\ne 0; \\
E(0,0;x,y)&=\sqrt{xy}, & x&\ne y; \\
E(r,s;x,x)&=x, & x&=y.
\end{aligned}
\end{equation}
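In particular, a direct calculation from~\eqref{emv-dfn} with $r=1$ and $s=2$ recovers the arithmetic mean,
\begin{equation*}
E(1,2;y+a,y+b)=\frac12\cdot\frac{(y+b)^2-(y+a)^2}{(y+b)-(y+a)}=\frac{(y+a)+(y+b)}2,
\end{equation*}
which is exactly the right-hand side of~\eqref{lemma3.1-ism-rew}.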
Actually, the inequality~\eqref{E-E(1,2)} is an immediate consequence of monotonicity of $E(r,s;x,y)$, see~\cite{ls2}. For more information, please refer to~\cite{bullenmean, ajmaa-mean-chen-qi, emv-log-convex-simple.tex, pqsx, schext-rgmia, schext-rocky, qi1, pams-62, cubo, cubo-rgmia, exp-funct-appl-means-simp.tex, ql, Qi-Luo-1999-Sig, qx1, (b^x-a^x)/x, qx3, zhang-chen-qi-emv} and related references therein.
\end{rem}
\begin{rem}
The proof of the decreasing monotonicity of the function~\eqref{bustol-ismail-AMM} just used the formula~\eqref{series-repr} and the above Theorem~\ref{p.83-bochner} applied to $f(x)=e^{-x}$.
\end{rem}
\begin{rem}
Indeed, J. Bustoz and M. E. H. Ismail had proved in~\cite[Theorem~7]{Bustoz-and-Ismail} that the function~\eqref{bustol-ismail-AM} is logarithmically completely monotonic on $(0,\infty)$ for $0\le s\le1$. However, because the inequality~\eqref{cmf-dfn-ineq} strictly holds for a completely monotonic function $f$ on $(a,\infty)$ unless $f(x)$ is constant (see~\cite[p.~98]{Dubourdieu}, \cite[p.~82]{e-gam-rat-comp-mon} and~\cite{haerc1}), distinguishing between the cases $0\le s\le1$ and $0<s<1$ is not necessary.
\end{rem}
\subsection{Alzer's and related complete monotonicity results}\label{alzer-comp-sec}
Stimulated by the complete monotonicity obtained in~\cite{Bustoz-and-Ismail}, including those mentioned above, H. Alzer obtained in~\cite[Theorem~1]{Alzer1} that the function
\begin{equation}\label{alzer-func}
\frac{\Gamma(x+s)}{\Gamma(x+1)}\cdot\frac{(x+1)^{x+1/2}}{(x+s)^{x+s-1/2}}
\exp\biggl[s-1+\frac{\psi'(x+1+\alpha)-\psi'(x+s+\alpha)}{12}\biggr]
\end{equation}
for $\alpha>0$ and $s\in(0,1)$ is completely monotonic on $(0,\infty)$ if and only if $\alpha\ge\frac12$; the reciprocal of~\eqref{alzer-func} for $\alpha\ge0$ and $s\in(0,1)$ is completely monotonic on $(0,\infty)$ if and only if $\alpha=0$.
\par
As consequences of the monotonicity of the function~\eqref{alzer-func}, the following inequalities are deduced in~\cite[Corollary~1 and Corollary~2]{Alzer1}:
\begin{enumerate}
\item
The inequalities
\begin{equation}\label{alzer-fun-ineq}
\begin{gathered}
\exp\biggl[s-1+\frac{\psi'(x+1+\beta) -\psi'(x+s+\beta)}{12}\biggr] \le\frac{(x+s)^{x+s-1/2}}{(x+1)^{x+1/2}}\cdot\frac{\Gamma(x+1)}{\Gamma(x+s)}\\
\le \exp\biggl[s-1+\frac{\psi'(x+1+\alpha) -\psi'(x+s+\alpha)}{12}\biggr],\quad \alpha>\beta\ge0
\end{gathered}
\end{equation}
are valid for all $s\in(0,1)$ and $x\in(0,\infty)$ if and only if $\beta=0$ and $\alpha\ge\frac12$.
\item
If
\begin{equation}
a_n=\frac32\biggl\{1+\ln\biggl[\frac{2[\Gamma((n+1)/2)]^2} {[\Gamma(n/2)]^2}\cdot\frac{n^{n-1}}{(n+1)^n}\biggr]\biggr\},
\end{equation}
then
\begin{equation}\label{sum-alzer-ineq}
a_n<(-1)^{n+1}\Biggl[\frac{\pi^2}{12}-\sum_{k=1}^n(-1)^{k+1}\frac1{k^2}\Biggr]<a_{n+1},\quad n\in\mathbb{N}.
\end{equation}
\end{enumerate}
\begin{rem}
The inequality~\eqref{sum-alzer-ineq} follows from the formula
\begin{equation}
\frac14\biggl[\psi'\biggl(\frac{n}2+1\biggr)-\psi'\biggl(\frac{n+1}2\biggr)\biggr] =\sum_{k=1}^\infty\frac{(-1)^k}{(n+k)^2} =(-1)^{n}\Biggl[\sum_{k=1}^n\frac{(-1)^{k+1}}{k^2}-\frac{\pi^2}{12}\Biggr]
\end{equation}
and the inequality~\eqref{alzer-fun-ineq} applied to $s=\frac12$, $\alpha=\frac12$ and $\beta=0$.
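For completeness, the first equality in the above formula may be checked via the series representation $\psi'(z)=\sum_{j=0}^\infty\frac1{(z+j)^2}$, which gives
\begin{equation*}
\frac14\biggl[\psi'\biggl(\frac n2+1\biggr)-\psi'\biggl(\frac{n+1}2\biggr)\biggr] =\sum_{j=0}^\infty\biggl[\frac1{(n+2+2j)^2}-\frac1{(n+1+2j)^2}\biggr] =\sum_{k=1}^\infty\frac{(-1)^k}{(n+k)^2}.
\end{equation*}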
\end{rem}
\begin{rem}
The proof of the complete monotonicity of the function~\eqref{alzer-func} in~\cite{Alzer1} is based on Theorem~\ref{p.83-bochner} applied to $f(x)=e^{-x}$, the formulas
\begin{equation}
\frac1x=\int_0^\infty e^{-xt}\td t,\quad \ln\frac{y}x=\int_0^\infty\frac{e^{-xt}-e^{-yt}}t\td t
\end{equation}
and
\begin{equation}
\psi(x)=-\gamma+\int_0^\infty\frac{e^{-t}-e^{-xt}}{1-e^{-t}}\td t
\end{equation}
for $x,y>0$, and discussing the positivity of the functions
\begin{equation}
\frac{12-t^2e^{-\alpha t}}{12(1-e^{-t})}-\frac12-\frac1t\quad\text{and}\quad
\frac12+\frac1t-\frac{12-t^2}{12(1-e^{-t})}
\end{equation}
for $t\in(0,\infty)$ and $\alpha\ge\frac12$. Therefore, H. Alzer essentially gave in~\cite[Theorem~1]{Alzer1} necessary and sufficient conditions for the function~\eqref{alzer-func} to be logarithmically completely monotonic on $(0,\infty)$.
\end{rem}
\begin{rem}
In~\cite[Theorem~3]{laj-7.pdf}, a slight extension of~\cite[Theorem~1]{Alzer1} was presented: The function
\begin{equation}\label{li-ext-fun}
\frac{\Gamma(x+s)}{\Gamma(x+t)}\cdot\frac{(x+t)^{x+t-1/2}}{(x+s)^{x+s-1/2}}
\exp\biggl[s-t+\frac{\psi'(x+t+\alpha) -\psi'(x+s+\alpha)}{12}\biggr]
\end{equation}
for $0<s<t$ and $x\in(0,\infty)$ is logarithmically completely monotonic if and only if $\alpha\ge\frac12$, so is the reciprocal of~\eqref{li-ext-fun} if and only if $\alpha=0$.
\par
The decreasing monotonicity of~\eqref{li-ext-fun} and its reciprocal imply that the double inequality
\begin{multline}\label{li-ext-fun-ineq}
\exp\biggl[t-s+\frac{\psi'(x+s+\beta) -\psi'(x+t+\beta)}{12}\biggr] \le\frac{(x+t)^{x+t-1/2}}{(x+s)^{x+s-1/2}}\cdot\frac{\Gamma(x+s)}{\Gamma(x+t)}\\
\le \exp\biggl[t-s+\frac{\psi'(x+s+\alpha) -\psi'(x+t+\alpha)}{12}\biggr]
\end{multline}
for $\alpha>\beta\ge0$ are valid for $0<s<t$ and $x\in(0,\infty)$ if and only if $\beta=0$ and $\alpha\ge\frac12$.
\par
It is obvious that the inequality~\eqref{li-ext-fun-ineq} is a slight extension of the double inequality~\eqref{alzer-fun-ineq} obtained in~\cite[Corollary~2]{Alzer1}.
\end{rem}
\begin{rem}
In~\cite[Theorem~3.4]{Ismail-Muldoon-119}, the following complete monotonicity results were established: Let $0<q<1$ and
\begin{equation}\label{g{alpha,q}(x)}
g_{\alpha,q}(x)=(1-q)^x(1-q^x)^{1/2}\Gamma_q(x) \exp\biggl[\frac{F(q^x)}{\ln q}-\frac{\psi_q'(x+\alpha)}{12}\biggr],
\end{equation}
where
\begin{equation}
F(x)=\sum_{n=1}^\infty\frac{x^n}{n^2}=-\int_0^x\frac{\ln(1-t)}{t}\td t.
\end{equation}
Then $[\ln g_{\alpha,q}(x)]'$ is completely monotonic on $(0,\infty)$ for $\alpha\ge\frac12$, $-[\ln g_{\alpha,q}(x)]'$ is completely monotonic on $(0,\infty)$ for $\alpha\le0$, and neither is completely monotonic on $(0,\infty)$ for $0<\alpha<\frac12$.
\par
As a consequence of~\cite[Theorem~3.4]{Ismail-Muldoon-119}, the following result was deduced in~\cite[Corollary~3.5]{Ismail-Muldoon-119}: Let $0<q<1$, $0<s<1$ and
\begin{equation}
\begin{split}\label{f-alpha-q}
f_{\alpha,q}(x)&=\frac{g_\alpha(x+s)}{g_\alpha(x+1)}\\
&=\frac{(1-q)^{s-1}(1-q^{x+s})^{1/2}\Gamma_q(x+s)}{(1-q^{x+1})^{1/2}\Gamma_q(x+1)}\\ &\quad\times\exp\biggl[\frac{F(q^{x+s})-F(q^{x+1})}{\ln q}+\frac{\psi_q'(x+1+\alpha)-\psi_q'(x+s+\alpha)}{12}\biggr].
\end{split}
\end{equation}
Then $[\ln f_{\alpha,q}(x)]'$ is completely monotonic on $(0,\infty)$ for $\alpha\ge\frac12$, $-[\ln f_{\alpha,q}(x)]'$ is completely monotonic on $(0,\infty)$ for $\alpha\le0$, and neither is completely monotonic on $(0,\infty)$ for $0<\alpha<\frac12$.
\par
Taking the limit $q\to1^-$ in~\eqref{f-alpha-q} yields~\cite[Corollary~3.6]{Ismail-Muldoon-119}, a recovery of~\cite[Theorem~1]{Alzer1} mentioned above.
\end{rem}
\begin{rem}
It is clear that~\cite[Theorem~3]{laj-7.pdf} can be derived by taking the limit
\begin{equation}
\lim_{q\to1^-}\frac{g_\alpha(x+s)}{g_\alpha(x+t)}
\end{equation}
for $0<s<t$, where $g_\alpha(x)$ is defined by~\eqref{g{alpha,q}(x)}.
\end{rem}
\subsection{Ismail-Muldoon's complete monotonicity results}
Inspired by inequalities~\eqref{gaut-6-ineq} and~\eqref{gki2}, Ismail and Muldoon proved in~\cite[Theorem~3.2]{Ismail-Muldoon-119} the following conclusions: For $0<a<b$ and $0<q<1$, let
\begin{equation}\label{Gamma-q(x+a)}
h(x)=\ln\biggl\{\frac{\Gamma_q(x+a)}{\Gamma_q(x+b)}\exp[(b-a)\psi_q(x+c)]\biggr\}.
\end{equation}
If $c\ge\frac{a+b}2$, then $-h'(x)$ is completely monotonic on $(-a,\infty)$; if $c\le a$, then $h'(x)$ is completely monotonic on $(-c,\infty)$; neither $h'(x)$ nor $-h'(x)$ is completely monotonic for $a<c<\frac{a+b}2$. Consequently, the following inequality was deduced in~\cite[Theorem~3.3]{Ismail-Muldoon-119}: If $0<q<1$, the inequality
\begin{equation}\label{q(x+1)}
\frac{\Gamma_q(x+1)}{\Gamma_q(x+s)} <\exp\biggl[(1-s)\psi_q\biggl(x+\frac{s+1}2\biggr)\biggr],\quad 0<s<1
\end{equation}
holds for $x>-s$.
\par
Influenced by~\eqref{q(x+1)}, H. Alzer posed at the end of the paper~\cite[p.~13]{Alzer-Math-Nachr-2001} the following open problem: For real numbers $0<q\ne1$ and $s\in(0,1)$, determine the best possible values $a(q,s)$ and $b(q,s)$ such that the inequalities
\begin{equation}
\exp[(1-s)\psi_q(x+a(q,s))]<\frac{\Gamma_q(x+1)}{\Gamma_q(x+s)} <\exp[(1-s)\psi_q(x+b(q,s))]
\end{equation}
hold for all $x>0$.
\begin{rem}
Since the paper~\cite{Ismail-Muldoon-119} was published in a conference proceedings, it is not easy to acquire, so the completely monotonic properties of the function $h(x)$ obtained in~\cite[Theorem~3.2]{Ismail-Muldoon-119} have been overlooked in most circumstances.
\end{rem}
\subsection{Elezovi\'c-Giordano-Pe\v{c}ari\'c's inequality and monotonicity results}
Inspired by the double inequality~\eqref{gki2}, the following problem was posed in~\cite[p.~247]{egp}: What are the best constants $\alpha$ and $\beta$ such that the double inequality
\begin{equation}
\psi(x+\alpha)\le\frac1{t-s}\int_s^t\psi(x+u)\td u\le\psi(x+\beta)
\end{equation}
holds for $x>-\min\{s,t,\alpha,\beta\}$?
\par
An answer to the above problem was procured in~\cite[Theorem~4]{egp}: The double inequality
\begin{equation}\label{second-egp-thm4}
\psi\biggl(x+\psi^{-1}\biggl(\frac1{t-s}\int_s^t\psi(u)\td u\biggr)\biggr)
<\frac1{t-s}\int_s^t\psi(x+u)\td u<\psi\biggl(x+\frac{s+t}2\biggr)
\end{equation}
is valid for every $x\ge0$ and positive numbers $s$ and $t$.
\par
Moreover, the function
\begin{equation}\label{gamma-arithmetic-funct}
\psi\biggl(x+\frac{s+t}2\biggr)-\frac1{t-s}\ln\frac{\Gamma(x+t)}{\Gamma(x+s)}
\end{equation}
for $s,t>0$ and $r=\min\{s,t\}$ was proved in~\cite[Theorem~5]{egp} to be completely monotonic on $(-r,\infty)$.
\begin{rem}
It is clear that~\cite[Theorem~5]{egp} stated above extends or generalizes the complete monotonicity of the function~\eqref{bustol-ismail-AM}.
\end{rem}
\begin{rem}
By the way, the complete monotonicity in~\cite[Theorem~5]{egp} was extended and iterated in~\cite[Proposition~5]{notes-best.tex-mia} and~\cite[Proposition~5]{notes-best.tex-rgmia} as follows: The function
\begin{equation}\label{cmf-lcmf}
\bigg[\dfrac{\Gamma(x+t)}{\Gamma(x+s)}\bigg]^{1/(s-t)} \exp\biggl[\psi\biggl(x+\frac{s+t}2\biggr)\biggr]
\end{equation}
is logarithmically completely monotonic with respect to $x$ on $(-\alpha,\infty)$, where $s$ and $t$ are real numbers and $\alpha=\min\{s,t\}$.
\end{rem}
\begin{rem}
Along the same line as proving the inequality~\eqref{second-egp-thm4} in~\cite{egp}, the inequality~\eqref{second-egp-thm4} was generalized in \cite[Theorem~2]{Chen-Ai-Jun-rgmia-07} as
\begin{multline}\label{chen-ai-jun-rgmia-07-ineq}
(-1)^n\psi^{(n)}\biggl(x+\bigl(\psi^{(n)}\bigr)^{-1}\biggl(\frac1{t-s}\int_s^t\psi^{(n)}(u)\td u\biggr)\biggr)<\\ \frac{(-1)^n\bigl[\psi^{(n-1)}(x+t)-\psi^{(n-1)}(x+s)\bigr]}{t-s} <(-1)^n\psi^{(n)}\biggl(x+\frac{s+t}2\biggr)
\end{multline}
for $x>0$, $n\ge0$, and $s,t>0$, where $\bigl(\psi^{(n)}\bigr)^{-1}$ denotes the inverse function of $\psi^{(n)}$.
\end{rem}
\begin{rem}
Since the inverse functions of the psi and polygamma functions are involved, it is rather difficult to calculate the lower bounds in~\eqref{second-egp-thm4} and~\eqref{chen-ai-jun-rgmia-07-ineq}.
\end{rem}
\begin{rem}
In~\cite{kershaw-anal.appl}, by the method used in~\cite{kershaw}, it was proved that the double inequality
\begin{equation}\label{kershaw-singapore-ineq}
\psi\bigl(x+\sqrt{st}\,\bigr)<\frac{\ln\Gamma(x+t)-\ln\Gamma(x+s)}{t-s}<\psi\biggl(x+\frac{s+t}2\biggr)
\end{equation}
holds for $s,t>0$.
It is clear that the upper bound in~\eqref{kershaw-singapore-ineq} is a recovery of the upper bound in~\eqref{second-egp-thm4} and an immediate consequence of the complete monotonicity of the function~\eqref{gamma-arithmetic-funct}.
\end{rem}
\section{Two logarithmically complete monotonicity results}
Suggested by the double inequality~\eqref{gki2}, it is natural to put forward the following problem: What are the best constants $\delta_1(s,t)$ and $\delta_2(s,t)$ such that
\begin{equation}\label{gki2-gen}
\exp[\psi(x+\delta_1(s,t))] \le\biggl[\frac{\Gamma(x+t)}{\Gamma(x+s)}\biggr]^{1/(t-s)} \le\exp[\psi(x+\delta_2(s,t))]
\end{equation}
is valid for $x>-\min\{s,t,\delta_1(s,t),\delta_2(s,t)\}$, where $s$ and $t$ are real numbers?
\par
It is clear that the inequality~\eqref{gki2-gen} can also be rewritten as
\begin{equation}\label{gki2-gen-rew-1}
\biggl[\frac{\Gamma(x+t)}{\Gamma(x+s)}\biggr]^{1/(s-t)}\exp[\psi(x+\delta_1)] \le1 \le\biggl[\frac{\Gamma(x+t)}{\Gamma(x+s)}\biggr]^{1/(s-t)} \exp[\psi(x+\delta_2)]
\end{equation}
which suggests some monotonic properties of the function
\begin{equation}\label{gamma-delta-ratio}
\biggl[\frac{\Gamma(x+t)}{\Gamma(x+s)}\biggr]^{1/(t-s)}\exp[-\psi(x+\delta(s,t))],
\end{equation}
since the limit of the function~\eqref{gamma-delta-ratio} as $x\to\infty$ is $1$ by using~\eqref{Stirling-formula}.
\par
This problem was considered in~\cite{ratio-gamma-polynomial.tex-jcam, ratio-gamma-polynomial.tex-rgmia, gamma-batir.tex-jcam, gamma-batir.tex-rgmia} along two different approaches and the following results of different forms were established.
\begin{thm}[{\cite[Theorem~1]{ratio-gamma-polynomial.tex-jcam} and~\cite[Theorem~1]{ratio-gamma-polynomial.tex-rgmia}}]\label{gamma-ratio-multply}
Let $a$, $b$, $c$ be real numbers and $\rho=\min\{a,b,c\}$. Define
\begin{equation}\label{gamma-multply}
F_{a,b;c}(x)=
\begin{cases}
\biggl[\dfrac{\Gamma(x+b)}{\Gamma(x+a)}\biggr]^{1/(a-b)}\exp[\psi(x+c)], &a\ne
b\\
\exp[\psi(x+c)-\psi(x+a)],&a=b\ne c
\end{cases}
\end{equation}
for $x\in(-\rho,\infty)$. Furthermore, let $\theta(t)$ be an implicit function
defined by equation
\begin{equation}\label{implicit}
e^t-t =e^{\theta(t)}-\theta(t)
\end{equation}
on $(-\infty,\infty)$. Then $\theta(t)$ is decreasing and $t\theta(t)<0$ for
$\theta(t)\ne t$, and
\begin{enumerate}
\item
$F_{a,b;c}(x)$ is logarithmically completely monotonic on $(-\rho,\infty)$ if
\begin{equation}\label{d1}
\begin{split}
(a,b;c)&\in \{c\ge a,c\ge b\}\cup\{c\ge a,0\ge c-b\ge\theta(c-a)\}\\*
&\quad\cup\{c\le a,c-b\ge\theta(c-a)\}\setminus\{a=b=c\};
\end{split}
\end{equation}
\item
$[F_{a,b;c}(x)]^{-1}$ is logarithmically completely monotonic on $(-\rho,\infty)$ if
\begin{equation}\label{d2}
\begin{split}
(a,b;c)&\in \{c\le a,c\le b\}\cup\{c\ge a,c-b\le\theta(c-a)\}\\*
&\quad\cup\{c\le a,0\le c-b\le\theta(c-a)\}\setminus\{a=b=c\}.
\end{split}
\end{equation}
\end{enumerate}
\end{thm}
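\begin{rem}
For the reader's convenience, we record an elementary observation behind the stated properties of the implicit function $\theta(t)$. Since
\begin{equation*}
\frac{\td}{\td u}\bigl(e^u-u\bigr)=e^u-1
\begin{cases}
<0, & u<0,\\
>0, & u>0,
\end{cases}
\end{equation*}
the function $u\mapsto e^u-u$ is strictly decreasing on $(-\infty,0]$ and strictly increasing on $[0,\infty)$, with minimum value $1$ attained at $u=0$. Hence, for every $t\ne0$, the equation~\eqref{implicit} has exactly one solution $\theta(t)\ne t$, and this solution lies on the opposite side of the origin; this explains why $\theta(t)$ is decreasing and $t\theta(t)<0$ for $\theta(t)\ne t$.
\end{rem}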
\begin{thm}[{\cite[Theorem~1]{gamma-batir.tex-jcam} and~\cite[Theorem~1]{gamma-batir.tex-rgmia}}]\label{nu-log-mon}
For real numbers $s$ and $t$ with $s\ne t$ and $\theta(s,t)$ a constant depending on $s$ and $t$, define
\begin{equation}\label{nudef}
\nu_{s,t}(x)=\frac1{\exp\big[\psi\bigl(x+\theta(s,t)\bigr)\big]}
\biggl[\frac{\Gamma(x+t)}{\Gamma(x+s)}\biggr]^{1/(t-s)}.
\end{equation}
\begin{enumerate}
\item
The function $\nu_{s,t}(x)$ is logarithmically completely monotonic on the interval $(-\theta(s,t),\infty)$ if and only if $\theta(s,t)\le\min\{s,t\}$;
\item
The function $[\nu_{s,t}(x)]^{-1}$ is logarithmically completely monotonic on the interval $(-\min\{s,t\},\infty)$ if and only if $\theta(s,t)\ge\frac{s+t}2$.
\end{enumerate}
\end{thm}
\begin{rem}
In~\cite{ratio-gamma-polynomial.tex-jcam, ratio-gamma-polynomial.tex-rgmia}, it was deduced by a standard argument that
\begin{gather*}
(-1)^i[\ln F_{a,b;c}(x)]^{(i)}
=\int_0^\infty\biggl[\frac{e^{(c-a)u}-e^{(c-b)u}}{u(b-a)}-1\biggr]
\frac{u^ie^{-(x+c)u}}{1-e^{-u}}\td u\\
=\int_0^\infty\biggl[\frac{[e^{(c-a)u}-(c-a)u]-[e^{(c-b)u}-(c-b)u]} {[(c-a)-(c-b)]u}\biggr] \frac{u^ie^{-(x+c)u}}{1-e^{-u}}\td u
\end{gather*}
for $i\in\mathbb{N}$ and $a\ne b$. Therefore, the sufficient conditions in~\cite[Theorem~1]{ratio-gamma-polynomial.tex-jcam} and~\cite[Theorem~1]{ratio-gamma-polynomial.tex-rgmia} are stated in terms of the implicit function $\theta(t)$ defined by~\eqref{implicit}.
\end{rem}
\begin{rem}
In~\cite{gamma-batir.tex-jcam, gamma-batir.tex-rgmia}, the logarithm of $\nu_{s,t}(x)$ was rearranged as
\begin{equation}
\ln\nu_{s,t}(x)=\int_0^\infty\frac{e^{-[x+\theta(s,t)]u}}{1-e^{-u}} \Bigl\{1-e^{u[\theta(s,t)+\ln p_{s,t}(u)]}\Bigr\}\td u,
\end{equation}
where
\begin{equation}
p_{s,t}(u)=\biggl(\frac1{t-s}\int_s^t e^{-uv}\td v\biggr)^{1/u}.
\end{equation}
Since the function $p_{s,t}(u)$ is increasing on $[0,\infty)$ with
\begin{equation}
\lim_{u\to0}p_{s,t}(u)=e^{-(s+t)/2}\quad \text{and}\quad \lim_{u\to\infty}p_{s,t}(u)=e^{-\min\{s,t\}},
\end{equation}
the necessary and sufficient conditions in~\cite[Theorem~1]{gamma-batir.tex-jcam} and~\cite[Theorem~1]{gamma-batir.tex-rgmia} may be derived immediately by considering Theorem~\ref{p.161-widder}.
\par
However, the necessary conditions in~\cite[Theorem~1]{gamma-batir.tex-jcam} and~\cite[Theorem~1]{gamma-batir.tex-rgmia} were proved by establishing the following inequalities involving the polygamma functions and their inverse functions in~\cite[Proposition~1]{gamma-batir.tex-jcam} and~\cite[Proposition~1]{gamma-batir.tex-rgmia}:
\begin{enumerate}
\item
If $m>n\ge0$ are two integers, then
\begin{equation}\label{m>n>0}
\left(\psi^{(m)}\right)^{-1}\left(\frac1{t-s} \int_s^t\psi^{(m)}(v)\td
v\right) \le\left(\psi^{(n)}\right)^{-1}\left(\frac1{t-s}
\int_s^t\psi^{(n)}(v)\td v\right),
\end{equation}
where $\left(\psi^{(k)}\right)^{-1}$ stands for the inverse function of $\psi^{(k)}$ for $k\ge0$;
\item
The inequality
\begin{equation}\label{log-mean-ineq}
\psi^{(i)}(L(s,t))\le\frac1{t-s}\int^t_s\psi^{(i)}(u)\td u
\end{equation}
is valid for $i$ being a positive odd number or zero and reversed for $i$ being a positive even number;
\item
The function
\begin{equation}\label{increas-conc}
\left(\psi^{(\ell)}\right)^{-1}\left(\frac1{t-s} \int_s^t\psi^{(\ell)}(x+v)\td
v\right)-x
\end{equation}
for $\ell\ge0$ is increasing and concave in $x>-\min\{s,t\}$ and has a sharp
upper bound $\frac{s+t}{2}$.
\end{enumerate}
\par
Note that, taking $m=1$, $n=0$, $i=0$ and $\ell=0$ in~\eqref{m>n>0}, \eqref{log-mean-ineq} and~\eqref{increas-conc}, one may derive \cite[Lemma~1]{f-mean} and~\cite[Theorem~6]{f-mean} straightforwardly.
\end{rem}
\section{New bounds and monotonicity results}
\subsection{Elezovi\'c-Pe\v{c}ari\'c's lower bound}
The inequality~\eqref{log-mean-ineq} for $i=0$, that is, \cite[Lemma~1]{f-mean}, may be rewritten as
\begin{equation}
\frac{\ln\Gamma(t)-\ln\Gamma(s)}{t-s}\ge\psi(L(s,t))
\end{equation}
or
\begin{equation}\label{Elezovic-Pecaric-ineq-lower}
\biggl[\frac{\Gamma(t)}{\Gamma(s)}\biggr]^{1/(t-s)}\ge e^{\psi(L(s,t))}
\end{equation}
for positive numbers $s$ and $t$.
\begin{rem}
From the left-hand side inequality in~\eqref{mean-ineq}, it is easy to see that the inequality~\eqref{Elezovic-Pecaric-ineq-lower} refines the traditional lower bound $e^{\psi(G(s,t))}$.
\end{rem}
\begin{rem}
In~\cite[Theorem~2.4]{gamma-fun-ineq-batir}, the following incorrect double inequality
was obtained:
\begin{equation}\label{kershaw-batir}
e^{(x-y)\psi(L(x+1,y+1)-1)} \le\frac{\Gamma(x)}{\Gamma(y)} \le e^{(x-y)\psi(A(x,y))},
\end{equation}
where $x$ and $y$ are positive real numbers.
\end{rem}
\subsection{Allasia-Giordano-Pe\v{c}ari\'c's inequalities}
In Section~4 of \cite{Allasia-Gior-Pecaric-MIA-02}, as straightforward consequences of Hadamard type inequalities obtained in~\cite{agpit}, the following double inequalities for bounding $\ln\frac{\Gamma(y)}{\Gamma(x)}$ were listed: For $y>x>0$, $n\in\mathbb{N}$ and $h=\frac{y-x}n$, we have
\begin{gather*}
\frac{h}2[\psi(x)+\psi(y)]+h\sum_{k=1}^{n-1}\psi(x+kh)<\ln\frac{\Gamma(y)}{\Gamma(x)} <h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr),\\
0<h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr) -\ln\frac{\Gamma(y)}{\Gamma(x)}
<\ln\frac{\Gamma(y)}{\Gamma(x)} -\frac{h}2[\psi(x)+\psi(y)]-h\sum_{k=1}^{n-1}\psi(x+kh),\\
\frac{h}2[\psi(x)+\psi(y)]+h\sum_{k=1}^{n-1}\psi(x+kh) -\sum_{i=1}^{m-1}\frac{B_{2i}h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr]< \\
\ln\frac{\Gamma(y)}{\Gamma(x)}
<h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr) -\sum_{i=1}^{m-1}\frac{B_{2i}h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr],\\
0<h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr) -\sum_{i=1}^{m-1}\frac{B_{2i}(1/2)h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr]
-\ln\frac{\Gamma(y)}{\Gamma(x)}\\
<\ln\frac{\Gamma(y)}{\Gamma(x)}-\frac{h}2[\psi(x)+\psi(y)]-h\sum_{k=1}^{n-1}\psi(x+kh)
+\sum_{i=1}^{m-1}\frac{B_{2i}h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr],\\
h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr) -\sum_{i=1}^{m-2}\frac{B_{2i}(1/2)h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr]
<\ln\frac{\Gamma(y)}{\Gamma(x)}\\
<h\sum_{k=0}^{n-1}\psi\biggl(x+\biggl(k+\frac12\biggr)h\biggr) -\sum_{i=1}^{m-1}\frac{B_{2i}(1/2)h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr],\\
\frac{h}2[\psi(x)+\psi(y)]+h\sum_{k=1}^{n-1}\psi(x+kh) -\sum_{i=1}^{m-1}\frac{B_{2i}h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr]
<\ln\frac{\Gamma(y)}{\Gamma(x)}\\
<\frac{h}2[\psi(x)+\psi(y)]+h\sum_{k=1}^{n-1}\psi(x+kh) -\sum_{i=1}^{m-2}\frac{B_{2i}h^{2i}}{(2i)!}\bigl[\psi^{(2i-1)}(y)-\psi^{(2i-1)}(x)\bigr],
\end{gather*}
where $m$ is an odd and positive integer,
\begin{equation}
B_k\biggl(\frac12\biggr)=\biggl(\frac1{2^{k-1}}-1\biggr)B_k,\quad k\ge0
\end{equation}
and $B_i$ for $i\ge0$ are Bernoulli numbers defined by
\begin{equation}
\frac{t}{e^t-1}=\sum_{i=0}^\infty B_i\frac{t^i}{i!} =1-\frac{t}2+\sum_{j=1}^\infty B_{2j}\frac{t^{2j}}{(2j)!}, \quad\vert t\vert<2\pi.
\end{equation}
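For the reader's convenience, we recall that the first few Bernoulli numbers are $B_0=1$, $B_1=-\frac12$, $B_2=\frac16$, $B_4=-\frac1{30}$ and $B_6=\frac1{42}$, while $B_{2j+1}=0$ for all $j\in\mathbb{N}$.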
If $m$ is replaced by an even positive integer, then the last four double inequalities are reversed.
\subsection{Batir's double inequality for polygamma functions}
It is clear that the double inequality~\eqref{gki2} can be rearranged as
\begin{equation}\label{kershaw-rearr}
\psi\bigl(x+\sqrt{s}\,\bigr)<\frac{\ln\Gamma(x+1)-\ln\Gamma(x+s)}{1-s} <\psi\biggl(x+\frac{s+1}2\biggr)
\end{equation}
for $0<s<1$ and $x>0$. The middle term in~\eqref{kershaw-rearr} can be regarded as a divided difference of the function $\ln\Gamma(t)$ on $(x+s,x+1)$. Stimulated by this, N.~Batir extended and generalized in~\cite[Theorem~2.7]{batir-jmaa-06-05-065} the double inequality~\eqref{kershaw-rearr} as
\begin{equation}\label{batir-psi-ineq}
-\bigl\vert \psi^{(n+1)}(L_{-(n+2)}(x,y))\bigr\vert <\frac{\bigl\vert \psi^{(n)}(x)\bigr\vert -\bigl\vert \psi^{(n)}(y)\bigr\vert }{x-y}<-\bigl\vert \psi^{(n+1)}(A(x,y))\bigr\vert
\end{equation}
where $x,y$ are positive numbers and $n\in\mathbb{N}$.
\subsection{Chen's double inequality in terms of polygamma functions}
In~\cite[Theorem~2]{chen-mean-GK-ineq}, by virtue of the composite Simpson rule
\begin{equation}
\int_a^bf(t)\td t=\frac{b-a}6\biggl[f(a)+4f\biggl(\frac{a+b}2\biggr)+f(b)\biggr] -\frac{(b-a)^5}{2880}f^{(4)}(\xi),\quad \xi\in(a,b)
\end{equation}
in~\cite{Hammerlin-Hoffmann-bbok-91} and the formula
\begin{equation}
\frac1{y-x}\int_x^yf(t)\td t=\sum_{k=0}^\infty\frac1{(2k+1)!} \biggl(\frac{y-x}2\biggr)^{2k}f^{(2k)}\biggl(\frac{x+y}2\biggr)
\end{equation}
in~\cite{Neuman-Sandor-05-Aus}, the following double inequalities and series representations were trivially shown: For $n\in\mathbb{N}$ and positive numbers $x$ and $y$ with $x\ne y$,
\begin{gather*}
\frac13A(\psi(x),\psi(y))+\frac23\psi(A(x,y))-\frac{(y-x)^4}{2880}\psi^{(4)}(\max\{x,y\}) <\frac{\ln\Gamma(y)-\ln\Gamma(x)}{y-x} \\
<\frac13A(\psi(x),\psi(y))+\frac23\psi(A(x,y))-\frac{(y-x)^4}{2880}\psi^{(4)}(\min\{x,y\}),\\
(-1)^{n-1}\biggl[\frac{A\bigl(\psi^{(n)}(x),\psi^{(n)}(y)\bigr)}3 +\frac{2\psi^{(n)}(A(x,y))}3-\frac{(y-x)^4\psi^{(n+4)}(\min\{x,y\})}{2880}\biggr]\\
<\frac{(-1)^{n-1}\bigl[\psi^{(n-1)}(y)-\psi^{(n-1)}(x)\bigr]}{y-x} \\
<(-1)^{n-1}\biggl[\frac{A\bigl(\psi^{(n)}(x),\psi^{(n)}(y)\bigr)}3 +\frac{2\psi^{(n)}(A(x,y))}3-\frac{(y-x)^4\psi^{(n+4)}(\max\{x,y\})}{2880}\biggr],\\
\frac{\ln\Gamma(y)-\ln\Gamma(x)}{y-x}=\sum_{k=0}^\infty\frac1{(2k+1)!} \biggl(\frac{y-x}2\biggr)^{2k}\psi^{(2k)}\biggl(\frac{x+y}2\biggr),\\
\frac{\psi^{(n-1)}(y)-\psi^{(n-1)}(x)}{y-x}=\sum_{k=0}^\infty\frac1{(2k+1)!} \biggl(\frac{y-x}2\biggr)^{2k}\psi^{(2k+n)}\biggl(\frac{x+y}2\biggr).
\end{gather*}
\subsection{Recent monotonicity results by Qi and his coauthors}
Motivated by the left-hand side inequality in~\eqref{kershaw-batir}, even though it is not correct, Qi and his coauthors have established in recent years several refinements and generalizations of inequalities~\eqref{Elezovic-Pecaric-ineq-lower} and~\eqref{batir-psi-ineq}.
\subsubsection{}
In~\cite[Theorem~1]{gamma-psi-batir.tex-jcam} and~\cite[Theorem~1]{gamma-psi-batir.tex-rgmia}, by virtue of the method used in~\cite[Theorem~2.4]{gamma-fun-ineq-batir} and the inequality~\eqref{log-mean-ineq} for $i=0$, the inequality~\eqref{Elezovic-Pecaric-ineq-lower} and the right-hand side inequality in~\eqref{kershaw-batir} were recovered.
\subsubsection{}
In~\cite[Theorem~2]{gamma-psi-batir.tex-jcam} and~\cite[Theorem~2]{gamma-psi-batir.tex-rgmia}, the decreasing monotonicity of the function~\eqref{bustol-ismail-AMM} and the right-hand side inequality in~\eqref{second-egp-thm4} were extended and generalized to the logarithmically complete monotonicity, and the inequality~\eqref{Elezovic-Pecaric-ineq-lower} was generalized to a decreasing monotonicity.
\begin{thm}[{\cite[Theorem~2]{gamma-psi-batir.tex-jcam} and~\cite[Theorem~2]{gamma-psi-batir.tex-rgmia}}]\label{log-complete-fcn}
For $s,t\in\mathbb{R}$ with $s\ne t$, the function
\begin{equation}\label{f_s,t}
\biggl[\frac{\Gamma(x+s)}{\Gamma(x+t)}\biggr]^{1/(s-t)}
\frac1{e^{\psi(L(s,t;x))}}
\end{equation}
is decreasing and
\begin{equation}\label{g_s,t}
\biggl[\frac{\Gamma(x+s)}{\Gamma(x+t)}\biggr]^{1/(t-s)} e^{\psi(A(s,t;x))}
\end{equation}
is logarithmically completely monotonic on $(-\min\{s,t\},\infty)$, where
$$
L(s,t;x)=L(x+s,x+t)\quad \text{and}\quad A(s,t;x)=A(x+s,x+t).
$$
\end{thm}
\subsubsection{}
In \cite{new-upper-kershaw.tex, new-upper-kershaw-JCAM.tex}, the upper bounds in~\eqref{gki2}, \eqref{second-egp-thm4}, \eqref{kershaw-batir}, \eqref{batir-psi-ineq} and related inequalities in~\cite{ratio-gamma-polynomial.tex-jcam, ratio-gamma-polynomial.tex-rgmia, gamma-batir.tex-jcam, gamma-batir.tex-rgmia} were refined and extended as follows.
\begin{thm}[\cite{new-upper-kershaw.tex, new-upper-kershaw-JCAM.tex}]\label{identric-kershaw-thm}
The inequalities
\begin{equation}\label{identric-kershaw-ineq-equiv}
\biggl[\frac{\Gamma(a)}{\Gamma(b)}\biggr]^{1/(a-b)}\le e^{\psi(I(a,b))}
\end{equation}
and
\begin{equation}\label{batir-psi-ineq-ref-equiv}
\frac{(-1)^{n}\bigl[\psi^{(n-1)}(a) -\psi^{(n-1)}(b)\bigr]}{a-b} \le(-1)^n\psi^{(n)}(I(a,b))
\end{equation}
for $a>0$ and $b>0$ hold true.
\end{thm}
\begin{rem}
The basic tools to prove~\eqref{identric-kershaw-ineq-equiv} and~\eqref{batir-psi-ineq-ref-equiv} are an inequality in~\cite{cargo} and a complete monotonicity result in~\cite{subadditive-qi-3.tex}, respectively. They may be restated as follows:
\begin{enumerate}
\item
If $g$ is strictly monotonic, $f$ is strictly increasing, and $f\circ g^{-1}$ is convex (or concave, respectively) on an interval $I$, then
\begin{equation}\label{carton-ineq}
g^{-1}\left(\frac1{t-s}\int_s^tg(u)\td u\right) \le
f^{-1}\left(\frac1{t-s}\int_s^tf(u)\td u\right)
\end{equation}
holds (or reverses, respectively) for $s,t\in I$. See also \cite[p.~274, Lemma~2]{bullenmean} and \cite[p.~190, Theorem~A]{f-mean}.
\item
The function
\begin{equation}\label{psi-abs-minus-cm-1}
x\bigl|\psi^{(i+1)}(x)\bigr|-\alpha\bigl|\psi^{(i)}(x)\bigr|,\quad i\in\mathbb{N}
\end{equation}
is completely monotonic on $(0,\infty)$ if and only if $0\le\alpha\le i$. See also~\cite{subadditive-qi-guo.tex}.
\end{enumerate}
\end{rem}
\begin{rem}
By the so-called G-A convex approach, the inequality~\eqref{identric-kershaw-ineq-equiv} was recovered in~\cite{Zhang-Morden}: For $b>a>0$,
\begin{equation}
[b-L(a,b)]\psi(b)+[L(a,b)-a]\psi(a)<\ln\frac{\Gamma(b)}{\Gamma(a)}<(b-a)\psi(I(a,b)).
\end{equation}
See also \href{http://www.ams.org/mathscinet-getitem?mr=2413632}{MR2413632}. Moreover, by the so-called geometrically convex method, the following double inequality was shown in~\cite[Theorem~1.2]{Xiao-Ming-JIPAM}: For positive numbers $x$ and $y$,
\begin{equation}
\frac{x^x}{y^y}\biggl(\frac{x}y\biggr)^{y[\psi(y)-\ln y]}e^{y-x}\le \frac{\Gamma(x)}{\Gamma(y)} \le\frac{x^x}{y^y}\biggl(\frac{x}y\biggr)^{x[\psi(x)-\ln x]}e^{y-x}.
\end{equation}
\end{rem}
\subsubsection{}
In~\cite{subadditive-qi-guo.tex, subadditive-qi-3.tex}, the function
\begin{equation}\label{psi-abs-minus-cm-2}
\alpha\bigl|\psi^{(i)}(x)\bigr|-x\bigl|\psi^{(i+1)}(x)\bigr|
\end{equation}
was proved to be completely monotonic on $(0,\infty)$ if and only if $\alpha\ge i+1$. Utilizing the inequality~\eqref{carton-ineq} and the completely monotonic properties of the functions~\eqref{psi-abs-minus-cm-1} and~\eqref{psi-abs-minus-cm-2} yields the following double inequality.
\begin{thm}[{\cite[Theorem~1]{new-upper-kershaw-2.tex} and~\cite[Theorem~1]{new-upper-kershaw-2.tex-mia}}] \label{new-upper-2-thm-1}
For real numbers $s>0$ and $t>0$ with $s\ne t$ and an integer $i\ge0$, the inequality
\begin{equation}\label{new-upper-main-ineq}
(-1)^i\psi^{(i)}(L_p(s,t))\le \frac{(-1)^i}{t-s}\int_s^t\psi^{(i)}(u)\td u \le(-1)^i\psi^{(i)}(L_q(s,t))
\end{equation}
holds if $p\le-i-1$ and $q\ge-i$.
\end{thm}
\begin{rem}
The double inequality~\eqref{new-upper-main-ineq} recovers, extends and refines inequalities~\eqref{Elezovic-Pecaric-ineq-lower}, \eqref{batir-psi-ineq}, \eqref{identric-kershaw-ineq-equiv} and~\eqref{batir-psi-ineq-ref-equiv}.
\end{rem}
\begin{rem}
A natural question is whether the above sufficient conditions $p\le-i-1$ and $q\ge-i$ are also necessary for the inequality~\eqref{new-upper-main-ineq} to be valid.
\end{rem}
\subsubsection{}
As generalizations of the inequalities~\eqref{Elezovic-Pecaric-ineq-lower}, \eqref{batir-psi-ineq}, the decreasing monotonicity of the function~\eqref{f_s,t}, and the left-hand side inequality in~\eqref{new-upper-main-ineq}, the following monotonic properties were presented.
\begin{thm}[{\cite[Theorem~3]{new-upper-kershaw-2.tex} and~\cite[Theorem~3]{new-upper-kershaw-2.tex-mia}}] \label{new-upper-2-thm-3}
If $i\ge0$ is an integer, $s,t\in\mathbb{R}$ with $s\ne t$, and $x>-\min\{s,t\}$, then the function
\begin{equation}\label{psi-minus-mon}
(-1)^i\left[\psi^{(i)}(L_p(s,t;x)) -\frac{1}{t-s}\int_s^t\psi^{(i)}(x+u)\td u\right]
\end{equation}
is increasing with respect to $x$ for either $p\le-(i+2)$ or $p=-(i+1)$ and decreasing with respect to $x$ for $p\ge1$, where $L_p(s,t;x)=L_p(x+s,x+t)$.
\end{thm}
\begin{rem}
It is not difficult to see that the ideal monotonic results of the function~\eqref{psi-minus-mon} should be as follows.
\end{rem}
\begin{conj}\label{new-upper-2-thm-3-conj}
Let $i\ge0$ be an integer, $s,t\in\mathbb{R}$ with $s\ne t$, and $x>-\min\{s,t\}$. Then the function~\eqref{psi-minus-mon} is increasing with respect to $x$ if and only if $p\le-(i+1)$ and decreasing with respect to $x$ if and only if $p\ge-i$.
\end{conj}
\begin{rem}
Corresponding to Conjecture~\ref{new-upper-2-thm-3-conj}, the complete monotonicity of the function~\eqref{psi-minus-mon} and its negative may also be discussed.
\end{rem}
\end{document}
\begin{document}
\title[Complementary inequalities to Davis-Choi-Jensen's inequality]
{Complementary inequalities to Davis-Choi-Jensen's inequality and operator power means}
\author[A. G. Ghazanfari]{A. G. Ghazanfari}
\address{Department of Mathematics, Lorestan University, P. O. Box 465, Khoramabad, Iran.}
\email{[email protected]}
\setcounter{page}{1}
\date{}
\subjclass[2010]{47A63, 47A64, 47B65}
\keywords{Karcher mean; Geometric mean; Positive linear map}
\begin{abstract}
Let $f$ be an operator convex function on $(0,\infty)$ and $\Phi$ be a unital positive linear map on $B(H)$.
We give a complementary inequality to
Davis-Choi-Jensen's inequality as follows
\begin{equation*}
f(\Phi(A))\geq \frac{4R(A,B)}{(1+R(A,B))^2}\Phi(f(A)),
\end{equation*}
where $R(A, B)=\max\{r(A^{-1}B) ,r(B^{-1}A)\}$ and $r(A)$ is the spectral radius of $A$.
We investigate the complementary inequalities related to the operator power means and the Karcher means via unital positive linear maps,
and obtain the following result:
If $A_{1}, A_{2},\dots, A_{n}$ are positive definite operators in $B(H)$ and $0<m_i\leq A_i\leq M_i$, then
\begin{equation*}
\Lambda( \omega;\Phi(\mathbb{A}))\geq\Phi(\Lambda( \omega; \mathbb{A}))\geq \frac{4\hbar}{(1+\hbar)^2}~\Lambda( \omega;\Phi(\mathbb{A})),
\end{equation*}
where $\hbar= \max\limits_{1\leq i\leq n} \frac{M_i}{m_i}$.
Finally, we prove that if $G(A_1,\dots,A_n)$ is the generalized geometric mean defined by Ando-Li-Mathias for $n$ positive definite operators, then
\begin{align*}
\Phi(G(A_1,\dots,A_n))\geq\left(\frac{2h^\frac{1}{2}}{1+h}\right)^{n-1}G(\Phi(A_1),\dots,\Phi(A_n)),
\end{align*}
where $h=\max\limits_{1\leq i,j\leq n} R(A_i, A_j)$.
\end{abstract}
\maketitle
\section{\bf Introduction}\vskip 2mm
Let $B(H)$ denote the set of all bounded linear operators on a complex Hilbert
space $H$.
An operator $A \in B(H)$ is positive definite (resp. positive semi-definite)
if $\langle Ax, x\rangle > 0$ (resp. $\langle Ax, x\rangle \geq 0$) holds for all non-zero $x \in H$. If $A$ is
positive semi-definite, we write $A\geq 0$. Let $\mathcal{PS}, \mathcal{P}\subset B(H)$ be the sets of all positive
semi-definite operators and positive definite operators, respectively.
To reach inequalities for bounded self-adjoint operators on Hilbert space, we shall use
the following monotonicity property for operator functions:\\
If $X\in B(H)$ is self-adjoint with spectrum $Sp(X)$, and $f,g$ are continuous real-valued functions
on an interval containing $Sp(X)$, then
\begin{equation}\label{1.1}
f(t)\geq g(t),~t\in Sp(X)\Rightarrow ~f(X)\geq g(X).
\end{equation}
For more details about this property, the reader is referred to \cite{pec}.
For $A,B\in\mathcal{P}$ the geometric mean $A\sharp B$ of $A$ and $B$ is defined by
$A\sharp B =A^{\frac{1}{2}}(A^{\frac{-1}{2}}BA^{\frac{-1}{2}})^{\frac{1}{2}}A^{\frac{1}{2}}.$
For $A,B\in\mathcal{P}$, let
\[
R(A, B)=\max\{r(A^{-1}B) ,r(B^{-1}A)\}
\]
where $r(A)$ means the spectral radius of $A$ and we have
$$r(B^{-1}A)= \inf\{ \lambda >0 : A\leq \lambda B \}=\|B^{\frac{-1}{2}}AB^{\frac{-1}{2}}\|.$$
$R(A, B)$ was defined in \cite{and}, and many nice properties
of $R(A,B)$ were shown as follows: If $A,B, C\in\mathcal{P}$, then
\begin{align*}
&(i)~ R(A,C) \leq R(A,B)R(B,C) \text{ (triangle inequality) }\\
&(ii)~ R(A,B) \geq 1,\text{ and }R(A,B) = 1 \text{ iff } A = B\\
&(iii)~\|A-B\|\leq (R(A,B)-1)\|A\|.
\end{align*}
The Thompson distance $d(A, B)$
on the convex cone $\Omega$ of positive definite operators is defined by
$$d(A,B ) = \log R(A, B)=\max\{\log r(A^{-1}B) , \log r(B^{-1}A) \},$$
see \cite{and, cor, nus}. We know that $\Omega$
is a complete metric space with respect to this metric and the corresponding metric topology
on $\Omega$ agree with the relative norm topology.
As a basic inequality with respect to the metric, the following inequality for a weighted geometric mean of two operators holds \cite{and, cor}:
$$ d(A_{1} \sharp_{\nu} A_{2},B_{1}\sharp_{\nu} B_{2})\leq (1-\nu)d(A_{1},B_{1})+\nu d(A_{2},B_{2})$$
for $A_{1},A_{2},B_{1},B_{2} \in \Omega$ and $\nu \in (0,1)$.
Let $A$ and $B$ be two positive definite operators on a Hilbert space $H$ and $\Phi$ be a unital positive linear map on $B(H)$.
Ando \cite[Theorem 3]{ando1} showed the
following property of a positive linear map in
connection with the operator geometric mean.
\begin{equation}\label{2.1}
\Phi(A\sharp B)\leq \Phi(A)\sharp \Phi(B).
\end{equation}
Inequality \eqref{2.1} is extended to an operator mean $\sigma$ in Kubo-Ando theory as follows:
\begin{equation*}
\Phi(A\sigma B)\leq \Phi(A)\sigma \Phi(B),
\end{equation*}
In particular for the weighted geometric mean, we have
\begin{equation}\label{2.2}
\Phi(A\sharp_\nu B)\leq \Phi(A)\sharp_\nu \Phi(B),
\end{equation}
where $\nu$ is a real number in $(0,1]$.
A complementary inequality to \eqref{2.2} is the following important inequality~\cite{mic}:\\
Let $0<m_1 I\leq A\leq M_1 I$ and $0<m_2 I\leq B\leq M_2 I$
and $0<\nu\leq 1$, then
\begin{equation}\label{2.3}
K(h,\nu)\Phi(A)\sharp_\nu \Phi(B)\leq \Phi(A\sharp_\nu B),
\end{equation}
where $h=\frac{M_1M_2}{m_1m_2}$ and $K(h,\nu)$ is the generalized Kantorovich constant defined by
\[
K(h,\nu)=\frac{h^\nu-h}{(\nu-1)(h-1)}\left(\frac{(\nu-1)(h^\nu-1)}{\nu(h^\nu-h)}\right)^\nu.
\]
The special case $K(h,2)=K(h,-1)=\frac{(1+h)^2}{4h}$ is called the Kantorovich constant.
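Indeed, substituting $\nu=2$ into the definition of $K(h,\nu)$ gives
\begin{equation*}
K(h,2)=\frac{h^2-h}{h-1}\biggl(\frac{h^2-1}{2(h^2-h)}\biggr)^{2} =h\biggl(\frac{h+1}{2h}\biggr)^{2}=\frac{(1+h)^2}{4h}.
\end{equation*}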
The generalized Kantorovich constant $K(h,\nu)$ has the following properties:
\begin{align*}
&K(h,\nu)=K(h,1-\nu)\\
0&<K(h,\nu)\leq 1 \text{ for } 0<\nu\leq 1,
\end{align*}
and $K(h,\nu)$ is decreasing for $\nu\leq \frac{1}{2}$ and increasing for $\nu> \frac{1}{2}$; therefore, for all
$\nu\in \mathbb{R}$, $K(h)=K(h,\frac{1}{2})=\frac{2h^{\frac{1}{4}}}{1+h^{\frac{1}{2}}}\leq K(h,\nu)$.
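Similarly, a direct substitution of $\nu=\frac12$ yields
\begin{equation*}
K\Bigl(h,\frac12\Bigr)=\frac{h^{1/2}-h}{-\frac12(h-1)} \biggl(\frac{-\frac12\bigl(h^{1/2}-1\bigr)}{\frac12\bigl(h^{1/2}-h\bigr)}\biggr)^{1/2} =\frac{2h^{1/2}}{1+h^{1/2}}\cdot h^{-1/4}=\frac{2h^{1/4}}{1+h^{1/2}},
\end{equation*}
which, with $h=R(A,B)^2$, is the constant appearing in~\eqref{2.9} below; we record this small computation here because the constant $K(h,\frac12)$ is used repeatedly in the sequel.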
For some fundamental results on complementary inequalities to famed inequalities, we would like to refer
the readers to \cite{mic, pec}. Afterwards, several complementary inequalities to
celebrated inequalities were discussed by many mathematicians.
For more information and some recent results on this topic; see \cite{and, fuj, tom, yam, yam1}.
\section{A complementary inequality to weighted geometric mean}\vskip 2mm
First, we state another complementary inequality to \eqref{2.2} with respect to $R(A,B)$ as follows:
\begin{theorem}\label{t1}
Let $A$ and $B$ be two positive definite operators in $\mathcal{P}$ and let $\Phi$ be a unital positive linear map on $B(H)$. Then
\begin{equation}\label{2.4}
K\left(R(A,B)^2,\nu\right)\Phi(A)\sharp_\nu\Phi(B)\leq \Phi(A\sharp_\nu B).
\end{equation}
\end{theorem}
\begin{proof}
We know that
\begin{equation}\label{2.5}
\dfrac{1}{R(A,B)}A\leq B \leq R(A,B)A.
\end{equation}
Define a linear map $\Psi$ on $B(H)$ by
$$ \Psi(X)=\Phi(A)^{-\frac{1}{2}}\Phi(A^{\frac{1}{2}}XA^{\frac{1}{2}})\Phi(A)^{-\frac{1}{2}}.$$
Then $\Psi$ is a unital positive linear map. From \eqref{2.5}
\begin{align}\label{2.6}
m=\dfrac{1}{R(A,B)}\leq A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\leq R(A,B)=M.
\end{align}
Using \eqref{2.3} for $\Psi $, and \eqref{2.6}, we get
\begin{equation}\label{2.8}
K(h,\nu)\Psi(I)\sharp_\nu\Psi(X)\leq\Psi(I\sharp_\nu X)
\end{equation}
where $X=A^{-\frac{1}{2}}BA^{-\frac{1}{2}}$ and $h=\dfrac{M}{m}=R^{2}(A,B)$.
From \eqref{2.8}, we have
$$K(h,\nu)\Psi(X)^\nu\leq\Psi(X^\nu)$$ or
\begin{align*}
&K(h,\nu)\Big(\Phi(A)^{-\frac{1}{2}}\Phi(B)\Phi(A)^{-\frac{1}{2}}\Big)^\nu\leq
\Phi(A)^{-\frac{1}{2}}\Phi\Big(A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^\nu A^{\frac{1}{2}}\Big)\Phi(A)^{-\frac{1}{2}}\\
&K(h,\nu)\Phi(A)^{\frac{1}{2}}\Big(\Phi(A)^{-\frac{1}{2}}\Phi(B)\Phi(A)^{-\frac{1}{2}}\Big)^\nu\Phi(A)^{\frac{1}{2}}\leq
\Phi\Big(A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^\nu A^{\frac{1}{2}}\Big).
\end{align*}
Therefore, we obtain the desired inequality \eqref{2.4}.
\end{proof}
The inequality \eqref{2.4} with $\nu=\frac{1}{2}$ becomes the following inequality.
\begin{equation}\label{2.9}
\frac{2R(A,B)^\frac{1}{2}}{1+R(A,B)}\Phi(A)\sharp\Phi(B)\leq \Phi(A\sharp B).
\end{equation}
Next, we recall Kadison's Schwarz inequalities
\begin{equation}\label{2.9.1}
\Phi(A^2)\geq \Phi(A)^2,~~\Phi(A^{-1})\geq \Phi(A)^{-1}
\end{equation}
and two complementary inequalities to them, whenever $0<m\leq A\leq M$:
\begin{align}\label{2.9.2}
\frac{(m+M)^2}{4mM}\Phi(A)^2\geq \Phi(A^2),~~\frac{(m+M)^2}{4mM}\Phi(A)^{-1}\geq \Phi(A^{-1}).
\end{align}
The following inequality unifies Kadison's Schwarz inequalities into a single form.
\begin{equation}\label{2.10}
\Phi(BA^{-1}B)\geq\Phi(B)\Phi(A)^{-1}\Phi(B).
\end{equation}
If $0<m\leq A, B\leq M$, then the following inequality is a complementary inequality to \eqref{2.10}
\begin{equation}\label{2.11}
\frac{(m+M)^2}{4mM}\Phi(B)\Phi(A)^{-1}\Phi(B)\geq \Phi(BA^{-1}B).
\end{equation}
Similarly to the proof of Theorem~\ref{t1}, we obtain another complementary inequality to \eqref{2.10} with respect to $R(A,B)$:
\begin{equation}\label{2.12}
\frac{(1+R(A,B)^2)^2}{4R(A,B)^2}\Phi(B)\Phi(A)^{-1}\Phi(B)\geq \Phi(BA^{-1}B).
\end{equation}
To compare the inequalities \eqref{2.3} with \eqref{2.9} and \eqref{2.11} with \eqref{2.12}, the following examples
show that neither Kantorovich constants in \eqref{2.3} and \eqref{2.11} nor Kantorovich constants in \eqref{2.9} and \eqref{2.12} are uniformly
better than the other.
\begin{example}
Let $A=
\begin{bmatrix}
2 & 0 \\
0 & \frac{1}{3} \\
\end{bmatrix}$
and $B=
\begin{bmatrix}
4 & 0 \\
0 & \frac{1}{2} \\
\end{bmatrix}$.
Clearly, $m_1I=\frac{1}{3}I\leq A\leq 2I=M_1I$ and $m_2I=\frac{1}{2}I\leq B\leq 4I=M_2I$. Then
$A^{-1}B=
\begin{bmatrix}
2 & 0 \\
0 & \frac{3}{2} \\
\end{bmatrix}$
and
$B^{-1}A=
\begin{bmatrix}
\frac{1}{2} & 0 \\
0 & \frac{2}{3} \\
\end{bmatrix}$,
therefore $R(A,B)^2=4\leq h=\frac{M_1M_2}{m_1m_2}=48$.
Consequently, $K(R(A,B)^2, \frac{1}{2})\geq K(h, \frac{1}{2})$ and $K(R(A,B)^2, 2)\leq K(h, 2)$.
\end{example}
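Numerically (approximate values, recorded only for illustration), in this example one has
\begin{equation*}
K\bigl(R(A,B)^2,\tfrac12\bigr)=K\bigl(4,\tfrac12\bigr)=\frac{2\sqrt2}{3}\approx0.943 \qquad\text{and}\qquad K\bigl(48,\tfrac12\bigr)=\frac{2\cdot48^{1/4}}{1+\sqrt{48}}\approx0.664,
\end{equation*}
so for this pair of matrices the constant supplied by \eqref{2.4} is visibly larger, and hence the bound sharper, than the one supplied by \eqref{2.3}.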
\begin{example}
Let $C=
\begin{bmatrix}
2 & 1 \\
1 & 1 \\
\end{bmatrix}$
and $D=
\begin{bmatrix}
1 & 0 \\
0 & 2 \\
\end{bmatrix}$.
Clearly, $m_1I=\frac{3-\sqrt{5}}{2}I\leq C\leq \frac{3+\sqrt{5}}{2}I=M_1I$ and $m_2I=I\leq D\leq 2I=M_2I$. Then
$C^{-1}D=
\begin{bmatrix}
1 & -2 \\
-1 & 4 \\
\end{bmatrix}$
and
$D^{-1}C=
\begin{bmatrix}
2 & 1 \\
\frac{1}{2} & \frac{1}{2} \\
\end{bmatrix}$,
therefore $R(C,D)^2=\left(\frac{5+\sqrt{17}}{2}\right)^2\geq h=\frac{M_1M_2}{m_1m_2}=\frac{(3+\sqrt{5})^2}{2}$.
Consequently, $K(R(C,D)^2, \frac{1}{2})\leq K(h, \frac{1}{2})$ and $K(R(C,D)^2, 2)\geq K(h, 2)$.
\end{example}
\begin{theorem}\label{t2}
Let $f$ be an operator convex function on $(0,\infty)$ and $\Phi$ be a unital positive linear map on $B(H)$. Then
\begin{align}\label{2.13}
K(h,2)f(\Phi(A))\geq \Phi(f(A))\geq f(\Phi(A))
\end{align}
for every positive definite operator $A$, where $h=R(A,B)^2$, or $h=\frac{M}{m}$ whenever $0<m\leq A\leq M$.
\end{theorem}
\begin{proof}
It is known that every operator convex function
$f$ on $(0,\infty)$ has a special integral representation as follows:
\begin{align}\label{2.14}
f(t)=\alpha+\beta t+\gamma t^2+\int_0^\infty\frac{\lambda t^2}{\lambda+t}d\mu(\lambda),
\end{align}
where $\alpha, \beta$ are real numbers, $\gamma \geq 0$, and $\mu$ is
a positive finite measure. Thus
\begin{align*}
\Phi(f(A))=\alpha1_K+\beta\Phi(A)+\gamma \Phi(A^2)+\int_0^\infty\Phi(\lambda A^2(\lambda+A)^{-1})d\mu(\lambda),
\end{align*}
and
\begin{align*}
f(\Phi(A))=\alpha1_K+\beta\Phi(A)+\gamma \Phi(A)^2+\int_0^\infty\lambda \Phi(A)^2(\lambda+\Phi(A))^{-1}d\mu(\lambda).
\end{align*}
By \eqref{2.10}, we have
\begin{align*}
\Phi(\lambda A^2(\lambda+A)^{-1})&=\lambda\Phi( A^2(\lambda+A)^{-1})=\lambda\Phi( A(\lambda+A)^{-1}A)\\
&\geq \lambda\Phi(A)(\Phi(\lambda+A))^{-1}\Phi(A)=\lambda\Phi(A)(\lambda+\Phi(A))^{-1}\Phi(A)\\
&=\lambda \Phi(A)^2(\lambda+\Phi(A))^{-1}.
\end{align*}
Therefore $\Phi(f(A))\geq f(\Phi(A))$, since $\Phi(A^2)\geq\Phi(A)^2$.
On the other hand, from \eqref{2.11} or \eqref{2.12}, we get
\begin{align*}
&\Phi(\lambda A^2(\lambda+A)^{-1})=\lambda\Phi( A^2(\lambda+A)^{-1})=\lambda\Phi( A(\lambda+A)^{-1}A)\\
&\leq K(h,2) \lambda\Phi(A)(\Phi(\lambda+A))^{-1}\Phi(A)=K(h,2)\lambda\Phi(A)(\lambda+\Phi(A))^{-1}\Phi(A)\\
&=K(h,2)\lambda \Phi(A)^2(\lambda+\Phi(A))^{-1}.
\end{align*}
Therefore $\Phi(f(A))\leq K(h,2) f(\Phi(A))$, since $\Phi(A^2)\leq K(h,2)\Phi(A)^2$.
\end{proof}
\section{The power means and the Karcher means}
The Karcher mean, also called the Riemannian mean, has long
been of interest in the field of differential geometry. Recently it has been used in a diverse variety of settings:
diffusion tensors in medical imaging and radar, covariance matrices in statistics, kernels in machine learning and
elasticity. Power means for positive definite matrices and operators have been introduced in \cite{law2, lim1}.
It is shown in \cite{bha, law3} that the Karcher mean and power means satisfy all ten properties stated in \cite{ando}.
The Karcher mean and power means have recently become an important tool
for the study of positive definite operators and an interesting subject for matrix analysts and operator theorists.
We would like to refer the reader to \cite{law3,law2,lim1,lim2,moa,pal,yam1} and references therein for more information.
Geometric and power means of two operators can be extended to $n$ operators via the solutions of operator
equations as follows. Let $n$ be a natural number, and let $\triangle_{n}$ be the set of all probability
vectors, i.e.,
\begin{equation*}
\triangle_{n}=\{ \omega= (w_{1}, \dots, w_{n})\in(0, 1)^{n}| \sum_{i=1}^{n}w_{i}=1\}.
\end{equation*}
Let $\mathbb{A}= (A_{1},\dots, A_{n}) \in \mathcal{P}^{n}$ and $\omega=(\mathrm{w}_{1},\dots, w_{n}) \in \triangle_{n} $.
Then the weighted Karcher mean $\Lambda( \omega; \mathbb{A})$ is defined by a unique
positive solution of the following operator equation;
\begin{equation}\label{3.1}
\sum_{i=1}^{n}\mathrm{w}_{i}\log(X^\frac{-1}{2}A_{i}X^\frac{-1}{2})=0.
\end{equation}
The weighted power mean $P_t( \omega; \mathbb{A})$ is defined by a unique
positive solution of the following operator equation;
\begin{equation*}
I=\sum_{i=1}^{n}\mathrm{w}_{i}(X^\frac{-1}{2}A_{i}X^\frac{-1}{2})^t \text{ for } t\in (0,1],
\end{equation*}
or equivalently
\begin{equation}\label{3.2}
X=\sum_{i=1}^{n}\mathrm{w}_{i}(X\sharp_t A_i) \text{ for } t\in (0,1].
\end{equation}
For $t\in [-1,0)$, it is defined by
\[
P_t( \omega; \mathbb{A})=(P_{-t}( \omega; \mathbb{A}^{-1}))^{-1},
\]
where $\mathbb{A}^{-1}= (A_{1}^{-1},\dots, A_{n}^{-1})$.
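To illustrate the definition in the scalar (hence commutative) case only, let $A_i=a_i>0$ be positive numbers; then $x\sharp_t a=x^{1-t}a^t$ and the equation~\eqref{3.2} becomes
\begin{equation*}
x=\sum_{i=1}^{n}w_i\,x^{1-t}a_i^{t}, \qquad\text{that is,}\qquad x=\biggl(\sum_{i=1}^{n}w_ia_i^{t}\biggr)^{1/t},
\end{equation*}
the classical weighted power mean; letting $t\to0$ recovers the weighted geometric mean $\prod_{i=1}^{n}a_i^{w_i}$, in accordance with~\eqref{3.3}.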
Lawson and Lim in \cite[Corollary 6.7]{law2} gave an important connection between the weighted Karcher means and
the weighted power means in the strong operator topology, as follows:
\begin{equation}\label{3.3}
\Lambda(\omega, \mathbb{A})=\lim_{t\rightarrow 0}P_t(\omega, \mathbb{A}).
\end{equation}
Moreover, they also showed that the following property holds for these means:
\begin{equation}\label{3.4}
P_{-t}( \omega; \mathbb{A})\leq \Lambda( \omega; \mathbb{A})\leq P_{t}( \omega; \mathbb{A})\text{ for all }t\in(0,1].
\end{equation}
Let $\Phi$ be a unital positive linear map on $B(H)$. In \cite{lim1} it is proved that if $t\in (0,1]$,
then
\begin{equation}\label{3.5}
\Phi(P_t( \omega; \mathbb{A}))\leq P_t( \omega; \Phi(\mathbb{A})),
\end{equation}
where $\Phi(\mathbb{A})=(\Phi(A_{1}),\dots, \Phi(A_{n}))$.
In the following theorem, we give a reverse inequality to the inequality \eqref{3.5}.
\begin{theorem}\label{t3}
Let $\Phi$ be a unital positive linear map on $B(H)$ and let $n\geq2$ be a positive integer. If $0< t\leq 1$ and $(A_1, \dots , A_n)\in \mathcal{P}^n$, then
\begin{equation}\label{3.6}
\Phi(P_t( \omega; \mathbb{A}))\geq K\left(h_0, \frac{1}{2}\right)^\frac{1}{t}P_t( \omega;\Phi(\mathbb{A})),
\end{equation}
where $h_0= \max\limits_{1\leq i,j\leq n} R^2(A_i,A_j)$.
\end{theorem}
\begin{proof}
Let $t\in (0,1]$ and $X_t=P_t( \omega; \mathbb{A})$. Then
$X_t=\sum_{i=1}^n \omega_i(X_t\sharp_t A_i)$, by the operator equation \eqref{3.2}.
From inequality \eqref{2.4}, we have
\begin{equation}\label{3.6.1}
K\left(R^2(X_t,A_i), t\right)(\Phi(X_t)\sharp_t \Phi(A_i))\leq \Phi(X_t\sharp_t A_i).
\end{equation}
In the proof of Proposition 3.5 of \cite{law2}, it was shown that
\begin{equation*}
d(P_t( \omega; \mathbb{A}), A_j)\leq \max\limits_{1\leq i,j\leq n} d(A_i,A_j),
\end{equation*}
where $d$ is the Thompson metric. This implies that
\begin{equation*}
R^2(X_t, A_j)=R^2(P_t( \omega; \mathbb{A}), A_j)\leq \max\limits_{1\leq i,j\leq n} R^2(A_i,A_j)=h_0.
\end{equation*}
Since the function $K(h, \frac{1}{2})=\frac{2h^{\frac{1}{4}}}{1+h^{\frac{1}{2}}}$ is decreasing for $h\geq1$ and $K(h,\frac{1}{2})\leq K(h,t)$ for $t\in(0,1]$, we have
\begin{equation}\label{3.6.2}
K\left(h_0, \frac{1}{2}\right)\leq K\left(R^2(X_t,A_i), \frac{1}{2}\right)\leq K\left(R^2(X_t,A_i), t\right)
\end{equation}
Define
$f(X)=\sum_{i=1}^n \omega_i(X\sharp_t \Phi(A_i))$. Then $\lim_{n\rightarrow\infty}f^n(X)=P_t( \omega; \Phi(\mathbb{A}))$ for any $X>0$.
Let $\Phi$ be a unital positive linear map on $B(H)$. From relations \eqref{3.6.1} and \eqref{3.6.2}, we get
\begin{align*}
\Phi(X_t)&=\sum_{i=1}^n \omega_i\Phi(X_t\sharp_t A_i)\geq \sum_{i=1}^n \omega_i K\left(R^2(X_t,A_i), t\right)(\Phi(X_t)\sharp_t \Phi(A_i))\\
&\geq \sum_{i=1}^n \omega_i K\left(R^2(X_t,A_i), \frac{1}{2}\right)(\Phi(X_t)\sharp_t \Phi(A_i))\\
&\geq\sum_{i=1}^n \omega_i K\left(h_0, \frac{1}{2}\right)(\Phi(X_t)\sharp_t \Phi(A_i))=K\left(h_0, \frac{1}{2}\right)f(\Phi(X_t)).
\end{align*}
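In the next computation we also use the homogeneity of the weighted geometric mean: for a scalar $c>0$,
\begin{equation*}
(cX)\sharp_t Y=(cX)^{\frac12}\bigl((cX)^{-\frac12}Y(cX)^{-\frac12}\bigr)^{t}(cX)^{\frac12}=c^{1-t}\,(X\sharp_t Y).
\end{equation*}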
Since $f$ is an increasing function, we have
\begin{align*}
f\left(K\left(h_0, \frac{1}{2}\right)^{-1}\Phi(X_t)\right)&=\sum_{i=1}^n \omega_i \left(K\left(h_0, \frac{1}{2}\right)^{-1}\Phi(X_t)\sharp_t \Phi(A_i)\right)\\
&=K\left(h_0, \frac{1}{2}\right)^{t-1}\sum_{i=1}^n \omega_i (\Phi(X_t)\sharp_t \Phi(A_i))\\
&=K\left(h_0, \frac{1}{2}\right)^{t-1}f(\Phi(X_t))\geq f^2(\Phi(X_t)).
\end{align*}
Therefore
\begin{align*}
\Phi(X_t)\geq K\left(h_0, \frac{1}{2}\right)f(\Phi(X_t))\geq K\left(h_0, \frac{1}{2}\right)^{1+(1-t)}f^2(\Phi(X_t)).
\end{align*}
Consequently
\begin{align*}
\Phi(X_t)\geq K\left(h_0, \frac{1}{2}\right)f(\Phi(X_t))\geq K\left(h_0, \frac{1}{2}\right)^{1+(1-t)+(1-t)^2+\dots+(1-t)^{n-1}}f^n(\Phi(X_t)).
\end{align*}
This implies that
\begin{align*}
\Phi(P_t(\omega; \mathbb{A}))=\Phi(X_t)\geq K\left(h_0, \frac{1}{2}\right)^\frac{1}{t}\lim_{n\rightarrow \infty}f^n(\Phi(X_t))=K\left(h_0, \frac{1}{2}\right)^\frac{1}{t}P_t(\omega; \Phi(\mathbb{A})).
\end{align*}
\end{proof}
Let $X$ be a positive definite operator on a Hilbert space $H$.
Then $m=\lambda_{min}(X)\leq X\leq \lambda_{max}(X)=M$, where $\lambda_{min}(X)$ (resp. $\lambda_{max}(X)$) is the minimum (resp. maximum) of the spectrum of $X$.
\begin{theorem}\label{t4}
Let $\Phi$ be a unital positive linear map on $B(H)$ and let $n\geq2$ be a positive integer. If $(A_1, \dots , A_n)\in \mathcal{P}^n$, then
\begin{equation}\label{3.7}
\Lambda( \omega;\Phi(\mathbb{A}))\geq\Phi(\Lambda( \omega; \mathbb{A}))\geq \frac{4\hbar}{(1+\hbar)^2}~\Lambda( \omega;\Phi(\mathbb{A})),
\end{equation}
where
$m_i=\lambda_{min}(A_i),~M_i=\lambda_{max}(A_i)$ and $\hbar= \max\limits_{1\leq i\leq n} \frac{M_i}{m_i}$.
\end{theorem}
\begin{proof}
From relations \eqref{3.3} and \eqref{3.5}, we get
\begin{equation}\label{3.8}
\Phi(\Lambda( \omega; \mathbb{A}))\leq\Phi(P_t( \omega; \mathbb{A}))\leq P_t(\omega;\Phi(\mathbb{A})).
\end{equation}
If $t\rightarrow 0$ in \eqref{3.8}, then
\begin{equation}\label{3.9}
\Phi(\Lambda( \omega; \mathbb{A}))\leq \Lambda(\omega;\Phi(\mathbb{A})).
\end{equation}
Using \eqref{2.9.2}, we obtain the following inequalities for $\mathbb{A}=(A_1,A_2,\dots,A_n)$
\begin{equation}\label{3.9.1}
\Phi(\mathbb{A})^{-1}\leq\Phi(\mathbb{A}^{-1})\leq K(\hbar,2)\Phi(\mathbb{A})^{-1}=\Phi(K^{-1}(\hbar,2)\mathbb{A})^{-1}.
\end{equation}
Let $-1\leq t<0$. Utilizing \eqref{3.5} and \eqref{3.9.1}, we have
\begin{align}\label{3.10}
\Phi(P_t( \omega; \mathbb{A}))&=\Phi\bigl(P_{-t}( \omega; \mathbb{A}^{-1})^{-1}\bigr)\geq\bigl(\Phi(P_{-t}( \omega; \mathbb{A}^{-1}))\bigr)^{-1}\notag\\
&\geq\bigl(P_{-t}( \omega; \Phi(\mathbb{A}^{-1}))\bigr)^{-1}\geq\Bigl(P_{-t}\bigl( \omega; \Phi(K^{-1}(\hbar, 2)\mathbb{A})^{-1}\bigr)\Bigr)^{-1}\notag\\
&\geq P_{t}\bigl( \omega; \Phi(K^{-1}(\hbar, 2)\mathbb{A})\bigr)=K^{-1}(\hbar, 2)P_{t}( \omega; \Phi(\mathbb{A})).
\end{align}
From relations \eqref{3.4}, \eqref{3.9} and \eqref{3.10}, we obtain, for $t\in(0,1]$,
\begin{equation}\label{3.11}
K^{-1}(\hbar, 2)P_{-t}( \omega; \Phi(\mathbb{A}))\leq \Phi(P_{-t}( \omega; \mathbb{A}))\leq\Phi(\Lambda( \omega; \mathbb{A}))\leq \Lambda(\omega;\Phi(\mathbb{A})).
\end{equation}
If $t\rightarrow 0$ in \eqref{3.11}, then
\begin{equation*}
K^{-1}(\hbar, 2)\Lambda(\omega;\Phi(\mathbb{A}))\leq\Phi(\Lambda( \omega; \mathbb{A}))\leq\Lambda(\omega;\Phi(\mathbb{A})).
\end{equation*}
\end{proof}
\section{The geometric mean due to Ando-Li-Mathias}\vskip 2mm
Arithmetic and harmonic means of $n$ operators can be defined easily, but the geometric mean causes difficulties
because the product of operators is non-commutative.
Although several geometric means of $n$ operators have been defined, some of them lack important properties, for example, permutation
invariance or monotonicity.
Mathematicians have long been interested in extending the geometric mean of two operators to the case of $n$ operators.
Ando, Li and Mathias in \cite{ando} suggested a good definition of the geometric mean of an arbitrary number of positive semi-definite matrices. It is defined by a symmetric procedure and has many good properties.
They listed ten properties that a geometric mean of $n$ matrices
should satisfy, and showed that their mean possesses all of them.
Lawson and Lim in \cite{law3} have shown that $G$ has all the ten properties. Other ideas of geometric
mean with all the ten properties have been suggested in \cite{bha1,bin, izu}.
In \cite{yam} Yamazaki pointed out that definition of the geometric mean by Ando. Li and Mathias can
be extended to Hilbert space operators.
The geometric mean $G(A_{1},A_{2},\dots ,A_{n})$ of any $n$-tuple
of positive definite operators $\mathbb{A}=(A_1,\dots,A_n)\in\mathcal{P}^n$ is defined by induction.
(i) $G(A_{1},A_{2})=A_{1}\sharp A_{2}$;
(ii) Assume that the geometric mean of any $(n-1)$-tuple of operators is defined. Let
\[
G ((A_j)_{j\neq i })=G(A_{1},A_{2},\dots,A_{i-1},A_{i+1},\dots,A_{n}),
\]
and define sequences $\{\mathbb{A}_{i}^{(r)}\} _{r=1}^{\infty}$ by $\mathbb{A}_{i}^{(1)}= A_{i}$ and
$ \mathbb{A}_{i}^{(r+1)}=G((\mathbb{A}_{j}^{(r)})_{j\neq i }) $.
The limit $ \lim_{r\rightarrow\infty}{\mathbb{A}_{i}^{(r)}} $ exists and does not depend on $i$ (see below).
Hence the geometric mean of $n$ operators is defined by
\begin{equation}\label{4.1}
\lim_{r\rightarrow\infty}{\mathbb{A}_{i}^{(r)}} = G((\mathbb{A}))=G(A_{1},A_{2},\dots,A_{n}) \text{ for } i= 1,\dots,n.
\end{equation}
In \cite{ando}, Ando et al. showed that this limit exists and that the convergence in \eqref{4.1} is uniform.
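The following elementary special case is included only for orientation and is not used in the sequel. If the operators commute, say $n=3$ and $A_i=a_iI$ with positive scalars $a_1,a_2,a_3$, then $A\sharp B=\sqrt{AB}$ and the recursion reads $a_i^{(r+1)}=\big(a_j^{(r)}a_k^{(r)}\big)^{1/2}$ for $\{i,j,k\}=\{1,2,3\}$. In logarithmic coordinates $x_i^{(r)}=\log a_i^{(r)}$ this is the linear averaging scheme
\[
x_i^{(r+1)}=\tfrac{1}{2}\big(x_j^{(r)}+x_k^{(r)}\big),
\]
which preserves $\bar x=\frac{1}{3}(x_1+x_2+x_3)$ and multiplies each deviation $x_i^{(r)}-\bar x$ by $-\tfrac{1}{2}$. Hence $a_i^{(r)}\rightarrow(a_1a_2a_3)^{1/3}$ for every $i$, so in the commutative case the construction recovers the ordinary geometric mean.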
Furthermore, for $\mathbb{A}=(A_1,\dots,A_n)$ and $\mathbb{B}=(B_1,\dots,B_n)\in\mathcal{P}^n$, the following important inequality holds
\begin{equation}\label{4.2}
R(G(\mathbb{A}), G(\mathbb{B}))\leq \left(\prod_{i=1}^n R(A_i,B_i)\right)^\frac{1}{n}.
\end{equation}
In particular,
\begin{equation}\label{4.3}
R(\mathbb{A}_i^{(2)}, \mathbb{A}_k^{(2)})=R(G((A_j)_{j\neq i}), G((A_j)_{j\neq k}))\leq R(A_i, A_k)^\frac{1}{n-1}
\end{equation}
holds.
Yamazaki in \cite{yam} also obtained a converse of the arithmetic-geometric mean inequality
of $n$ operators via the Kantorovich constant. Soon after, Fujii et al. \cite{fuj} proved a stronger
reverse inequality, via the Kantorovich inequality, for the weighted arithmetic and geometric means of $n$ operators due to Lawson and Lim.
Let $\mathbb{A}=(A_1,\dots,A_n)\in\mathcal{P}^n$ and let $\Phi$ be a unital positive linear map on $B(H)$. Then $\Phi(A_{1}),\Phi(A_{2}),\dots,\Phi(A_{n})$ are $n$
positive definite operators in $\mathcal{P}$.
We consider the geometric mean $G(\Phi(\mathbb{A}))$, where $\Phi(\mathbb{A})=(\Phi(A_{1}),\Phi(A_{2}),\dots,\Phi(A_{n}))$,
as follows:
\begin{align*}
\Phi^{(1)}(\mathbb{A}_{i})&=\Phi(A_i) \text{ for } i=1,\dots,n,
\end{align*}
and for $r\geq 1, ~ i=1,\dots,n$
\begin{align*}
\Phi^{(r+1)}(\mathbb{A}_{i})&=G((\Phi^{(r)}(\mathbb{A}_{j}))_{j\neq i})\\
&=G(\Phi^{(r)}(\mathbb{A}_{1}),\dots,\Phi^{(r)}(\mathbb{A}_{i-1}), \Phi^{(r)}(\mathbb{A}_{i+1}),\dots, \Phi^{(r)}(\mathbb{A}_{n})).
\end{align*}
Finally,
\begin{align*}
G(\Phi(\mathbb{A}))=G((\Phi(A_{1}),\Phi(A_{2}),\dots,\Phi(A_{n}))) =\lim _{r\rightarrow\infty} \Phi^{(r)}(\mathbb{A}_{i}).
\end{align*}
\begin{theorem}\label{t5}
Let $\Phi$ be a unital positive linear map on $B(H)$, let $n\geq2$ be a positive integer, and let $(A_1, \dots , A_n)\in \mathcal{P}^n$.
Then
\begin{align}\label{4.4}
\Phi(G(\mathbb{A}))\geq\left(\frac{2h_1^{\frac{1}{2}}}{1+h_1}\right)^{n-1}G(\Phi(\mathbb{A})),
\end{align}
where $h_1=\max\limits_{1\leq i,j\leq n} R(A_i, A_j)$.
\end{theorem}
\begin{proof}
First, for $r\geq1$, we put $h_r=\max\limits_{1\leq i,j\leq n} R(\mathbb{A}_i^{(r)}, \mathbb{A}_j^{(r)})$ and $K_{r}=K(h^2_{r},\frac{1}{2})$.
By \eqref{4.3}, we have
\[
1\leq h_{r} \leq h_{r-1}^{\frac{1}{n-1}} \leq\dots\leq h_{1}^{(\frac{1}{n-1})^{r-1}}.
\]
Concavity of the function $ f(t)=t^{\alpha} $ for $0< \alpha\leq 1$ implies that
\begin{equation*}
\dfrac {1+t^{\alpha}}{2}\leq \left(\dfrac{1+t}{2}\right)^{\alpha},
\text{ or } \dfrac{2t^{\frac{\alpha}{2}}}{1+t^{\alpha}}\geq\left(\dfrac{2t^\frac{1}{2}}{1+t}\right)^{\alpha}.
\end{equation*}
Since the function $K(t)=\dfrac{2t^\frac{1}{2}}{1+t}$ is decreasing for $t\geq 1$, we obtain
\begin{equation}\label{4.5}
K_{r}=\dfrac{2h_{r}^\frac{1}{2}}{1+h_{r}}\geq\dfrac{2h_{1}^{\frac{1}{2}(\frac{1}{n-1})^{r-1}}}{1+h_{1}^{(\frac{1}{n-1})^{r-1}}}
\geq\left(\dfrac{2h_{1}^\frac{1}{2}}{1+h_{1}}\right)^{(\frac{1}{n-1})^{r-1}}=K_{1}^{(\frac{1}{n-1})^{r-1}}.
\end{equation}
Now, we will prove \eqref{4.4} by induction on $n$.
For $n=2$, from inequality \eqref{2.9}, we have
\begin{equation*}
\Phi(A_{1}\sharp A_{2}) \geq K_{1}(\Phi(A_{1}) \sharp \Phi( A_{2})).
\end{equation*}
For $n=3$, a simple calculation shows that
\begin{align*}
\Phi\Big(G(A_{1}\sharp A_{3},A_{1}\sharp A_{2})\Big)&=
\Phi\Big((A_{1}\sharp A_{3})\sharp(A_{1}\sharp A_{2} )\Big)
\geq K_{2}\Big[\Phi\Big(A_{1}\sharp A_{3}\Big)\sharp\Phi\Big(A_{1}\sharp A_{2}\Big)\Big]\\
&\geq K_{2}\Big[K_{1}\Big(\Phi(A_{1})\sharp \Phi(A_{3})\Big)\sharp K_{1}\Big(\Phi(A_{1})\sharp\Phi(A_{2})\Big)\Big]\\
&=K_{2}K_{1}\Big[\Big(\Phi(A_{1})\sharp\Phi(A_{3})\Big)\sharp\Big(\Phi(A_{1})\sharp\Phi(A_{2})\Big)\Big]\\
&=K_{2}K_{1}\, G\Big(\Phi(A_{1})\sharp\Phi(A_{3}),\Phi(A_{1})\sharp\Phi(A_{2})\Big).
\end{align*}
Therefore $\Phi(A_{1}^{(3)})\geq K_{2}K_{1} \Phi^{(3)}(A_{1})$.
Hence for $r\geq1$, we obtain
\begin{equation*}
\Phi(A_{i}^{(r)}) \geq (K_{r-1}K_{r-2}\dots K_{1}) \Phi^{(r)}(A_{i}), \text{ for } i=1,2,3.
\end{equation*}
Assume now that inequality \eqref{4.4} holds for any $(n-1)$-tuple of operators in $\mathcal{P}^{n-1}$; we prove it for $n$ operators.
Applying this induction hypothesis at each level of the iteration, together with the monotonicity and positive homogeneity of the geometric mean, we have
\begin{align*}
\Phi(A_i^{(r)})&=\Phi\big(G ((A_j^{(r-1)})_{j\neq i })\big)=\Phi\left(G\big(A_{1}^{(r-1)},\dots,A_{i-1}^{(r-1)},A_{i+1}^{(r-1)},\dots,A_{n}^{(r-1)}\big)\right)\\
&\geq K_{r-1}^{n-2}\,G\big(\Phi(A_1^{(r-1)}),\dots,\Phi(A_{i-1}^{(r-1)}),\Phi(A_{i+1}^{(r-1)}),\dots,\Phi(A_n^{(r-1)})\big)\\
&\geq K_{r-1}^{n-2}\,G\Big(\big(K_{r-2}^{n-2}\,G((\Phi(A_k^{(r-2)}))_{k\neq j})\big)_{j\neq i}\Big)\\
&= K_{r-1}^{n-2}K_{r-2}^{n-2}\,G\Big(\big(G((\Phi(A_k^{(r-2)}))_{k\neq j})\big)_{j\neq i}\Big)\\
&\vdots\\
&\geq(K_{r-1}K_{r-2}\dots K_{1})^{n-2}\,G\big((\Phi^{(r-1)}(A_j))_{j\neq i}\big)\\
&=(K_{r-1}K_{r-2}\dots K_{1})^{n-2}\,\Phi^{(r)}(A_i).
\end{align*}
Therefore, for each $i=1,2,\dots,n$, we deduce
\begin{equation}\label{4.6}
\Phi(A_i^{(r)})\geq(K_{r-1}K_{r-2}\dots K_{1})^{n-2}\,\Phi^{(r)}(A_i).
\end{equation}
From \eqref{4.5}, we get
$$K_{r-1}K_{r-2}\dots K_{1}\geq K_{1}K_{1}^{\frac{1}{n-1}}\dots K_{1}^{(\frac{1}{n-1})^{r-2}}
=K_{1}^{1+\frac{1}{n-1}+\dots+(\frac{1}{n-1})^{r-2}}.$$
Consequently
\begin{equation}\label{4.7}
\liminf_{r\rightarrow \infty}K_{r-1}K_{r-2}\dots K_{1}\geq K_{1}^{\frac{n-1}{n-2}},
\end{equation}
since $0<K_{1}\leq1$ and $K_{1}^{1+\frac{1}{n-1}+\dots+(\frac{1}{n-1})^{r-2}}\rightarrow K_{1}^{\frac{n-1}{n-2}}$ as $r\rightarrow\infty$.
We know that
\begin{equation}\label{4.8}
\lim_{r\rightarrow\infty}\Phi(A_{i}^{(r)})=\Phi(\lim_{r\rightarrow\infty}A_{i}^{(r)})=\Phi(G(A_1, A_2,\dots,A_n))
\end{equation}
and
\begin{align}\label{4.9}
\lim _{r\rightarrow\infty} \Phi^{(r)}(A_{i})=G(\Phi(A_{1}),\Phi(A_{2}),\dots,\Phi(A_{n})).
\end{align}
From inequalities \eqref{4.6}, \eqref{4.7}, \eqref{4.8} and \eqref{4.9}, we obtain the desired inequality \eqref{4.4}.
\end{proof}
\begin{corollary}\label{c2}
Let $n\geq2$ be a positive integer and let $(A_1, \dots , A_n)$ be an $n$-tuple in $\mathcal{P}^n$
such that $0<m_iI\leq A_i\leq M_iI$ for some scalars $0<m_i<M_i~(i=1,2,\dots,n)$.
Then
\begin{equation}\label{4.10}
\left(\frac{2\sqrt{M_0}}{1+M_0}\right)^{n-1}\left(\prod_{j=1}^{n}\langle A_{j}x,x\rangle\right)^{\frac{1}{n}}
\leq\langle G (A_{1},\dots,A_{n})x,x\rangle\leq \left(\prod_{j=1}^{n}\langle A_{j}x,x\rangle\right)^{\frac{1}{n}},
\end{equation}
where $M_0=\max\limits_{1\leq i,j\leq n}\{\frac{M_i}{m_j}\}$.
\end{corollary}
\begin{proof}
Let $(A_1, \dots , A_n)\in \mathcal{P}^n$. The relation $0<m_iI\leq A_i\leq M_iI$ implies that
\begin{align*}
R(A_i,A_j)=\max\{r(A_i^{-1}A_j), r(A_j^{-1}A_i)\}\leq \max\left\{\frac{M_j}{m_i}, \frac{M_i}{m_j}\right\}.
\end{align*}
Therefore
\[
h_0=\max_{1\leq i,j\leq n} R(A_i,A_j)\leq \max_{1\leq i,j\leq n}\left\{\frac{M_i}{m_j}\right\}=M_0.
\]
This implies that
\begin{equation}\label{4.11}
K(h^2_0,\frac{1}{2})\geq K(M^2_0, \frac{1}{2})=\frac{2\sqrt{M_0}}{1+M_0}.
\end{equation}
It can easily be verified by the iteration
argument from the two variable case that the following inequality holds
\begin{equation}\label{4.12}
\Phi(G(A_1, A_2,\dots,A_n))\leq G(\Phi(A_1), \Phi(A_2),\dots,\Phi(A_n)).
\end{equation}
Applying \eqref{4.4} and \eqref{4.12} to the positive linear functional $\Phi(A)=\langle Ax,x\rangle$, where $x$ is a unit vector in $H$,
we get the inequalities in \eqref{4.10} with the Kantorovich constant $K(h^2_0,\frac{1}{2})$. The desired inequalities in \eqref{4.10} are then obtained from \eqref{4.11}.
\end{proof}
In particular, if $0<mI\leq A_i\leq MI$, then $M_0=\frac{M}{m}$ and the relation \eqref{4.10} becomes
\begin{equation*}
\left(\frac{2\sqrt{mM}}{m+M}\right)^{n-1}\left(\prod_{j=1}^{n}\langle A_{j}x,x\rangle\right)^{\frac{1}{n}}
\leq\langle G (A_{1},\dots,A_{n})x,x\rangle\leq \left(\prod_{j=1}^{n}\langle A_{j}x,x\rangle\right)^{\frac{1}{n}}.
\end{equation*}
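A simple two-dimensional example, included here only as an illustration, shows that the constant on the left-hand side can be attained. Take $n=2$, $A_1=\mathrm{diag}(1,4)$, $A_2=\mathrm{diag}(4,1)$, so that $m=1$, $M=4$, and let $x=\frac{1}{\sqrt{2}}(1,1)^{T}$. Since $A_1$ and $A_2$ commute, $G(A_1,A_2)=A_1\sharp A_2=(A_1A_2)^{1/2}=2I$, while $\langle A_1x,x\rangle=\langle A_2x,x\rangle=\frac{5}{2}$. Therefore
\[
\frac{2\sqrt{mM}}{m+M}\,\big(\langle A_1x,x\rangle\langle A_2x,x\rangle\big)^{\frac{1}{2}}=\frac{4}{5}\cdot\frac{5}{2}=2=\langle G(A_1,A_2)x,x\rangle\leq\frac{5}{2}=\big(\langle A_1x,x\rangle\langle A_2x,x\rangle\big)^{\frac{1}{2}},
\]
so the left-hand inequality holds with equality for this choice of $A_1$, $A_2$ and $x$.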
From inequality \eqref{4.4} applied to $\Phi(A)=\langle Ax,x\rangle$, and using the fact that for positive operators
$\|A\|=\sup\{\langle Ax,x\rangle :~\|x\|\leq 1\}$,
we obtain the following result:
\begin{corollary}\label{c3}
Let $n\geq2$ be a positive integer, $(A_1, \dots , A_n)\in \mathcal{P}^n$ and let $0<m_iI\leq A_i\leq M_iI$ for $i=1,2,\dots,n$ and for some scalars $0<m_i<M_i$.
Then
\begin{equation}\label{2.20}
\left(\frac{2\sqrt{M_0}}{1+M_0}\right)^{n-1}\prod_{j=1}^{n}\| A_{j}\|^{\frac{1}{n}}
\leq\| G (A_{1},\dots,A_{n})\|\leq \prod_{j=1}^{n}\| A_{j}\|^{\frac{1}{n}},
\end{equation}
where $M_0=\max\limits_{1\leq i,j\leq n}\{\frac{M_i}{m_j}\}$.
\end{corollary}
\end{document}
\begin{document}
\title{An integral formula for multiple summing norms of operators}
\author{Daniel Carando \and Ver\'onica Dimant \and Santiago Muro \and Dami\'an Pinasco}
\thanks{This work was partially supported by CONICET PIP 0624, ANPCyT PICT 2011-1456, ANPCyT PICT 11-0738,
UBACyT 1-746 and UBACyT 20020130300052BA}
\address{Daniel Carando. Departamento de Matem\'{a}tica - Pab I,
Facultad de Cs. Exactas y Naturales, Universidad de Buenos Aires,
(1428) Buenos Aires, Argentina and IMAS-CONICET} \email{[email protected]}
\address{Ver\'onica Dimant. Departamento de Matem\'{a}tica, Universidad de San
Andr\'{e}s, Vito Dumas 284, (B1644BID) Victoria, Buenos Aires,
Argentina and CONICET} \email{[email protected]}
\address{Santiago Muro. Departamento de Matem\'{a}tica - Pab I,
Facultad de Cs. Exactas y Naturales, Universidad de Buenos Aires,
(1428) Buenos Aires, Argentina and CONICET} \email{[email protected]}
\address{Dami\'an Pinasco. Departamento de Matem\'aticas y Estad\'{\i}stica,
Universidad Torcuato di Tella, Av. F. Alcorta 7350, (1428), Ciudad Aut\'onoma de Buenos Aires, ARGENTINA and
CONICET}
\email{{\tt [email protected]}}
\keywords{absolutely summing operators, multilinear operators, multiple summing operators, stable measures}
\subjclass[2010]{15A69,15A60,47B10,47H60,46G25}
\maketitle
\begin{abstract}
We prove that the multiple summing norm of multilinear operators defined on some $n$-dimensional real or
complex vector spaces with the $p$-norm may be
written as an integral with respect to stable measures. As an application we show inclusion and
coincidence results for multiple summing mappings. We also present some contraction properties and compute or
estimate the limit orders of this class of operators.
\end{abstract}
\section*{Introduction}
The rotation invariance of the Gaussian measure on $\mathbb K^N$, which we will denote by $\mu_2^N$, allows us
to
show the Khintchine equality. It asserts that if $c_{2,q}$ denotes the $q$-th moment of the one dimensional
Gaussian measure, and $\ell_2^N$ denotes $\mathbb K^N$ with the euclidean norm, then for
any $\alpha\in\mathbb K^N$, $1\le q<\infty$,
\begin{equation}\label{khintchine gaussiano}
c_{2,q}\|\alpha\|_{\ell_2^N}=
\Big(\int_{\mathbb K^N}|\langle\alpha,z\rangle|^qd\mu^N_2(z)\Big)^{1/q}.
\end{equation}
We may interpret this formula as follows: the norm of a linear functional $\alpha$ on $\ell_2^N$ is a multiple
of the $L^q$-norm of the linear functional with respect to the Gaussian measure on $\ell_2^N$. One may ask if
there is a formula like (\ref{khintchine gaussiano}) for linear functionals on some other space, or even for
linear or multilinear operators. For linear functionals, an answer is provided by the $s$-stable L\'evy measure
(see for example \cite[24.4]{DefFlo93}): for $s< 2$ there exists a measure on $\mathbb K^N$, called the {\it
$s$-stable L\'evy measure} and denoted by $\mu_{s}$, which satisfies that for any $0<q<s$, $\alpha\in\mathbb
K^N$,
\begin{equation}\label{estable}
c_{s,q}\|\alpha\|_{\ell_s^N}=\Big(\int_{\mathbb K^N}|\langle\alpha,z\rangle|^qd\mu^N_{s}(z)\Big)^{1/q},
\end{equation}
where
$$c_{s,q}=\Big(\int_{\mathbb K}|z|^qd\mu_{s}^1(z)\Big)^{1/q}.$$
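For orientation, in the Gaussian case $s=2$ this constant has a closed form. For instance, in the complex setting and with the normalization implicit in the polar-coordinate computation of Section 1 (density $\pi^{-1}e^{-|z|^2}$ on $\mathbb C$, which we assume here),
$$c_{2,q}^q=\int_{\mathbb C}|z|^q\,\frac{e^{-|z|^2}}{\pi}\,dA(z)=\int_0^\infty 2\rho^{q+1}e^{-\rho^2}\,d\rho=\Gamma\Big(\frac{q}{2}+1\Big),
\qquad\text{so that}\quad c_{2,q}=\Gamma\Big(\frac{q}{2}+1\Big)^{1/q}.$$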
The question for linear operators is more subtle because there are many norms which are natural to consider on $\mathcal
L(\ell_2^N)$. The first result in this direction is due to Gordon
\cite{Gor69} (see also \cite[11.10]{DefFlo93}), who showed that the formula holds for the identity operator on
$\ell_2^N$, considering the absolutely $p$-summing norm of $id_{\ell_2^N}$, that is
$$\pi_p(id_{\ell_2^N})=c_{2,p}^{-1}\Big(\int_{\mathbb K^N}\|z\|_{\ell_2^N}^p \, d\mu^N_2(z)\Big)^{1/p}.$$
Pietsch \cite{Pie72} extended this formula for arbitrary linear operators from $\ell_{s'}^N\to\ell_s^N$,
$s\ge2$ and used it to compute some limit orders (see also \cite[22.4.11]{Pie80}).
To generalize the formula to the multilinear setting there is again a new issue, because there are many
natural candidates of classes of multilinear operators that extend the ideal of absolutely $p$-summing linear
operators (for instance the articles \cite{PelSan11,Per05} are devoted to their comparison).
Among those candidates, the ideal of multiple summing multilinear operators is considered by many authors the
most important of these extensions and is also the most studied one. Some of the reasons are its
connections with the Bohnenblust-Hille inequality \cite{PerVil04}, or the results on the unconditional
structure of the space of multiple summing operators \cite{DefPer08}. Multiple summing operators were
introduced by Bombal, P\'erez-Garc\'ia and Villanueva \cite{BomPerVil04} and independently by Matos
\cite{Mat03}.
In this note we show that multiple summing operators constitute the correct framework for a multilinear
generalization of formula \eqref{khintchine gaussiano}. For this we present integral formulas for the exact
value of the multiple summing norm of multilinear forms and operators defined on $\ell_p^N$ for some values
of $p$. Moreover, we prove that for some other finite dimensional Banach spaces these formulas hold up to
some constant independent of the dimension.
One particularity of the class of multiple summing operators on Banach spaces is that, unlike the linear
situation,
there is no general inclusion result. In \cite{BotMicPel10,Per04,Pop09} the authors investigate this problem
and prove several results showing that inclusion results hold on some Banach spaces but not on others. The integral formula for the multiple summing norm, together with Khintchine/Kahane type inequalities
will allow us to show some new coincidence and inclusion results for multiple summing operators.
Another application of these formulas deals with unconditionality in tensor products. Defant and
P\'erez-Garc\'ia showed in \cite{DefPer08} that the tensor norm associated to the ideal of multiple 1-summing
multilinear forms preserves unconditionality on $\mathcal L_r$ spaces. As a consequence of our formulas, we
give a simple proof of this fact for $\ell_r$ with $ r \ge 2$. Moreover, we show that
vector-valued multiple 1-summing
operators also satisfy a kind of unconditionality property in the appropriate range of Banach spaces.
Finally, we compute limit orders for the ideal of multiple summing operators.
Our main results are stated in Theorems~\ref{formulita escalar} and \ref{formulita}, which give an exact
formula for the multiple summing norm, and Proposition~\ref{normas equivalentes}, which gives integral
formulas for
estimating these norms in a wider range of spaces.
\section{Main results and their applications}
Let $E_1,\dots,E_m,F$ be real or complex Banach spaces. Recall that an $m$-linear operator
$T\in\mathcal L(E_1,\dots,E_m;F)$ is {\it multiple $p$-summing} if there exists $C>0$ such that for all finite sequences of vectors $(x^1_{j_1})_{j_1=1}^{J_1}\subset E_1,\dots,(x^m_{j_m})_{j_m=1}^{J_m}\subset E_m$
$$
\left(\sum_{j_1,\dots,j_m}\|T(x^1_{j_1},\dots,x^m_{j_m})\|_F^p\right)^{\frac1{p}}\le C
w_p((x_{j_1}^1)_{j_1})\dots w_p((x_{j_m}^m)_{j_m}),
$$
where $$w_p((y_j)_j)=\sup\left\{\Big(\sum_j|\gamma(y_j)|^p\Big)^{1/p}\,:\,\gamma\in B_{E'}\right\}.$$ The
infimum of all those constants $C$ is the multiple $p$-summing norm of $T$ and is denoted by $\pi_p(T)$. The
space of multiple $p$-summing multilinear operators is denoted by $\Pi_p(E_1,\dots,E_m;F)$. When
$E_1=\dots=E_m=E$, the spaces of continuous and of multiple $p$-summing multilinear operators are denoted by $\mathcal
L(^mE;F)$ and $\Pi_p(^mE;F)$ respectively.
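As a side computation, included only for later orientation, note the value of the weak norm on the canonical basis: for $(e_j)_{j=1}^N$ in $\ell_r^N$ and $p\le r'$,
$$w_p\big((e_j)_{j=1}^N\big)=\sup_{\gamma\in B_{\ell_{r'}^N}}\Big(\sum_{j=1}^N|\gamma_j|^p\Big)^{1/p}=N^{\frac1p-\frac1{r'}},$$
since H\"older's inequality gives $\sum_j|\gamma_j|^p\le N^{1-p/r'}\big(\sum_j|\gamma_j|^{r'}\big)^{p/r'}$, with equality at $\gamma_j=N^{-1/r'}$. In particular, $w_p\big((e_j)_{j=1}^N\big)=N^{\frac1p-\frac12}$ in $\ell_2^N$ for $p\le 2$, which is the value used after \eqref{desigualdad entre normas} below.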
The following theorems are our main results. Their proofs will be given in Section~ \ref{sec-proofs}.
\begin{theorem}\label{formulita escalar}
Let $\phi$ be a multilinear form in $\mathcal L(^m\ell_r^N;\mathbb K)$, and suppose that either $p<r'<2$ or $r=2$. Then
$$
\pi_p(\phi)=\frac{1}{c_{r',p}^m}\ \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|\phi(z^{(1)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}.
$$
\end{theorem}
Before we state our second theorem, let us recall some necessary definitions and facts.
For $1\leq q \leq \infty$ and $1 \leq \lambda < \infty$ a normed space $X$ is called an
$\mathcal{L}_{q,\lambda}^g$\emph{-space}, if for each finite dimensional subspace $M \subset X$ and
$\varepsilon >0$ there are $R \in \mathcal{L}(M,\ell_q^m)$ and $S \in \mathcal{L}(\ell_q^m, X)$ for some $m \in
\mathbb{N}$ factoring the inclusion map $I_M^X:M\to X$ such that $\|S\| \|R\| \leq \lambda + \varepsilon$:
\begin{equation} \label{facto}
\xymatrix{ M \ar@{^{(}->}[rr]^{I_M^X} \ar[rd]^{R} & & {X} \\
& {\ell_q^m} \ar[ur]^{S} & }.
\end{equation}
$X$ is called an $\mathcal{L}_{q}^g$\emph{-space} if it is an $\mathcal{L}_{q,\lambda}^g$-space for some $\lambda \geq 1$.
Loosely speaking, $\mathcal{L}_{q}^g$-spaces share many properties of $\ell_q$, since they \emph{locally look like $\ell_q^m$}. The spaces $L_q(\mu)$ are $\mathcal{L}_{q,1}^g$-spaces.
For more information and properties of $\mathcal{L}_{q}^g$-spaces see \cite[Section 23]{DefFlo93}.
\begin{theorem}\label{formulita}
Let $T$ be a multilinear map in $\mathcal L(^m\ell_r^N;X)$, where $X$ is an $\mathcal L_{q,1}^g$-space and suppose $r$, $q$ and $p>0$ satisfy one of the following conditions
\begin{itemize}
\item[a)] $r=q=2$;
\item[b)] $r=2$ and either $p<q<2$ or $p=q$;
\item[c)] $p<r'<2$ and either $p<q\le 2$ or $p=q$.
\end{itemize}
Then
$$
\pi_p(T)= \frac{1}{c_{r',p}^m}\ \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^p \, d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}.
$$
\end{theorem}
It is clear that Theorem~\ref{formulita escalar} follows from Theorem~\ref{formulita}, but in fact, the proof of Theorem~\ref{formulita} uses the scalar result, which is much simpler and is interesting on its own.
We remark that the formula also holds for any multilinear map in $\mathcal
L(\ell_{r_1}^N,\dots,\ell_{r_m}^N;X)$, where $X$ is an $\mathcal L_{q,1}^g$-space and
$r_1,\dots,r_m$, $q$ and $p$ satisfy conditions analogous to those of Theorem~\ref{formulita}. Moreover, the
formula turns into an
equivalence between the $\pi_p$ norm and the integral if we take general $\mathcal L_q^g$-spaces.
On the other hand, if we put $\ell_r$ in the domain, since multiple summing operators form a maximal ideal,
the formula holds with a limit over $N$ in the right hand side (here we consider $\mathbb K^N$ as a subset of
$\ell_r$).
There are situations not covered by the previous theorem where we have an equivalence or, at least, an
inequality between the $\pi_p$ and the $L_p(\mu_{s})$ norms.
\begin{proposition} \label{normas equivalentes} Let $T\in\mathcal L(^m\ell_r^N;X).$
($i$) Suppose either $r=2$ and $p,q<2;$ or $r=2$ and $q\le p$; or $p<r'<2$ and $q\le 2$. If $X$ is an $\mathcal L_{q}^g$-space, then we have
$$
\pi_p(T)\asymp \left(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^p \, d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\right)^{1/p},
$$
that is, the multiple $p$-summing norm and the $L_{p}(\mathbb K^N\times\dots\times\mathbb
K^N,\mu_{r'}^N\times\dots\times\mu_{r'}^N)$ norm are equivalent in $\mathcal
L(^m\ell_r^N;X)$, with constants which are independent of $N$.
($ii$) If $r=2$ or $p<r'<2$ then we have, for any Banach space $X$,
$$
\pi_p(T)\succeq \left(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^p \, d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\right)^{1/p}.
$$
\end{proposition}
Now we describe some applications of these results. The most direct one is an asymptotically correct
relationship between the multiple summing norm of a multilinear operator and the usual (supremum) norm.
Cobos, K\"uhn and Peetre \cite{CobKuhPee99} compared the Hilbert-Schmidt norm, $\pi_2$, with the usual norm of
multilinear forms. They showed that if $T$ is any $m$-linear form in $\mathcal L(^m\ell_2^N,\mathbb K)$ then
$$
\pi_2(T)\le N^{\frac{m-1}{2}}\|T\|.
$$
Moreover, the asymptotic bound is optimal in the sense that there exist constants $c_m$ and $m$-linear forms
$T$ on $\ell_2^N$ with $\|T\|=1$ and $\pi_2(T)\ge c_mN^{\frac{m-1}{2}}$. It is easy to see from this that the
correct exponent for the asymptotic bound for the Hilbert-valued case is $\frac{m}{2}$. The same holds for the
multiple $p$-summing norm for any $p$ because all those norms are equivalent to the Hilbert-Schmidt norm in
$\mathcal L(^m\ell_2;\ell_2)$, see \cite{Mat03,Per04}. We see now that the same optimal exponent holds for multiple $p$-summing operators with values on $\mathcal L_q^g$-spaces.
First, note that passing to polar coordinates we have, in the complex case (the real case follows similarly)
\begin{eqnarray*}
& &\hspace{-5pt} \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^p \, d\mu^N_2(z^{(1)})\dots
d\mu^N_2(z^{(m)}) \\
& =&\hspace{-5pt} \frac{1}{\Gamma(N)^m}\int_{(
S^{2N-1})^m} \hspace{-5pt}\|T(\omega^{(1)},\dots,\omega^{(m)})\|_X^p \, d\sigma_{2N-1}(\omega^{(1)})\dots
d\sigma_{2N-1}(\omega^{(m)})\, \Big(\int_{0}^\infty 2\rho^{2N+p-1}e^{-\rho^2}d\rho\Big)^m\\
&\le &\hspace{-5pt} \|T\|^p\Big(\frac{\Gamma(N+p/2)}{\Gamma(N)}\Big)^{m},
\end{eqnarray*}
where $S^{2N-1}$ denotes the unit sphere in $\mathbb R^{2N}$ and $\sigma_{2N-1}$ the normalized Lebesgue
measure defined on it.
As a consequence of Proposition~\ref{normas equivalentes}, we obtain
\begin{equation}\label{desigualdad entre normas}
\pi_p(T) \preceq \left(\frac{\Gamma(N+p/2)}{\Gamma(N)}\right)^{m/p} \|T\| \preceq
N^{\frac{m}{2}}\|T\|
\end{equation}
for $X$ a $\mathcal L_{q,\lambda}^g$-space and $p\ge q$ or $p,q<2$.
Let us see that for $p,q\le 2$, the exponents are optimal. Since for any $T\in\mathcal
L(^m\ell_2^N;\ell_q)$ we have
$$
\Big(\sum_{j_1,\dots,j_m=1}^N\|T(e_{j_1},\dots,e_{j_m})\|_{\ell_q}
^p\Big) ^ { \frac1 {p}}\le \pi_p(T)N^{\frac{m}{p}-\frac{m}{2}}\preceq N^{\frac{m}{p}}\|T\|,
$$
it suffices to show that the inequality
\begin{equation}\label{triangular}
\Big(\sum_{j_1,\dots,j_m=1}^N\|T(e_{j_1},\dots,e_{j_m})\|_{\ell_q}
^p\Big) ^ { \frac1 {p}}\preceq N^{\frac{m}{p}}\|T\|
\end{equation}
is optimal.
By \cite[Theorem 4]{Boa00}, there exist symmetric multilinear operators $\tilde T_N\in\mathcal
L(^m\ell_2^N,\ell_2^N)=\mathcal
L(^{m+1}\ell_2^N)$, such that,
$\displaystyle \tilde
T_N=\sum_{j_1,\dots,j_{m+1}=1}^N\varepsilon_{j_1,\dots,j_{m+1}}e_{j_1}\otimes\dots\otimes e_{j_{m+1}}$,
with $\varepsilon_{j_1,\dots,j_{m+1}}=\pm 1$ and $\|\tilde T_N\|\asymp \sqrt{N}$.
Let $T_N=i_{2q}\circ \tilde T_N$, where $i_{2q}:\ell_2^N\to\ell_q^N$ is the inclusion.
Then, $\displaystyle \|T_N\|\preceq N^\frac1{q}$ and
$$
\displaystyle\Big(\sum_{j_1,\dots,j_m=1}^N\|T_N(e_{j_1},\dots,e_{j_m})\|_{\ell_q}
^p\Big)^ { \frac1 {p}}=N^{\frac1{q}+\frac{m}{p}}\succeq N^{\frac{m}{p}}\|T_N\|.
$$
This implies that inequality (\ref{triangular}) is optimal and, hence, so is \eqref{desigualdad entre normas}.
\subsection{Inclusion theorems}
The well-known inclusion theorem for absolutely summing linear operators states that for any Banach spaces
$E,F$ we have
$$
\Pi_s(E,F)\subset\Pi_t(E,F),\quad \textrm{ when }s\le t.
$$
Although multiple summing mappings share several properties of linear summing operators, there is no
general inclusion theorem in the multilinear case (see \cite{PerVil04}).
It is therefore interesting to investigate in which situations we do have inclusion type theorems.
The following theorem summarizes some of the most important known results on this topic.
\begin{theorem}[\cite{BotMicPel10,Per04,Pop09}]
($i$) If $E$ has cotype $r\ge2$ then
$$
\Pi_s(^mE,F)=\Pi_1(^mE,F),\quad \textrm{ for }1\le s<r^*.
$$
($ii$) If $F$ has cotype $2$ then
$$
\Pi_s(^mE,F)\subset\Pi_2(^mE,F),\quad \textrm{ for }2\le s <\infty.
$$
\end{theorem}
The following picture illustrates the above theorem in the particular case where $E=\ell_2$ and $F=\ell_q$,
\[
\begin{pspicture}(3,3)
\pspolygon[linecolor=green!60!red!60,
fillstyle=hlines](1.5,0)(3,0)(3,3)(1.5,3)
\pspolygon[linecolor=black!60!pink!60,fillcolor=black!60!pink!60,
fillstyle=solid](1.5,3)(0,3)(0,1.5)(1.5,1.5)
\psline(0,-0.2)(0,3.2)\psline(-0.2,0)(3.3,0)
\rput[l](3.4,0){$\frac1{p}$}
\rput[d](0.2,3.2){$\frac1{q}$}
\rput[u](3,-0.2){$1$}
\rput[r](0,3){$1$}
\rput[u](1.5,-0.3){$\frac1{2}$}
\rput[r](-0.1,1.5){$\frac1{2}$}
\psline[linestyle=dashed,linewidth=0.7pt](1.5,0)(1.5,3)
\psline[linewidth=0.7pt,linecolor=green!60!red!60](0.05,1.5)(2.95,1.5)
\end{pspicture}
\]
$
$
In the ruled area we have $\Pi_{p_1}(^m\ell_2;\ell_q)=\Pi_{p_2}(^m\ell_2;\ell_q)$ and in the shaded area
we have the reverse inclusion $\Pi_{p_1}(^m\ell_2;\ell_q)\subset\Pi_{2}(^m\ell_2;\ell_q)$ for
$p_1\ge 2$.
As a consequence of our integral formula, we obtain the following improvement to the previous result, which will be proved in Section \ref{sec-proofs}.
\begin{proposition}\label{prop inclusion} Let $Y$ be a $\mathcal L_2^g$-space and $X$ a $\mathcal L_q^g$-space.
If $p\ge q$, then $\Pi_p(^mY;X)=\Pi_q(^mY;X)$.
If $p\le q$, then $\Pi_p(^mY;X)\subset \Pi_q(^mY;X)$.
\end{proposition}
With the information given by the above proposition, we have the following new picture.
\[
\begin{pspicture}(3,3)
\pspolygon[linecolor=green!60!red!60,
fillstyle=hlines](1.5,0)(3,0)(3,3)(0,3)(0,0)(1.5,1.5)
\pspolygon[linecolor=black!60!pink!60,fillcolor=black!60!pink!60,
fillstyle=solid](1.5,1.5)(0,0)(1.5,0)
\psline(0,-0.2)(0,3.2)\psline(-0.2,0)(3.2,0)
\rput[l](3.4,0){$\frac1{p}$}
\rput[d](0.2,3.2){$\frac1{q}$}
\rput[u](3,-0.2){$1$}
\rput[r](0,3){$1$}
\rput[u](1.5,-0.3){$\frac1{2}$}
\rput[r](-0.1,1.5){$\frac1{2}$}
\psline[linestyle=dashed,linewidth=0.7pt](1.5,0)(1.5,1.5)
\end{pspicture}
\]
$
$
\textsl{}
In the ruled area we have $\Pi_{p_1}(^m\ell_2;\ell_q)=\Pi_{p_2}(^m\ell_2;\ell_q)$ and in the shaded
area
we have the (direct) inclusion $\Pi_{p}(^m\ell_2;\ell_q)\subset\Pi_{q}(^m\ell_2;\ell_q)$ for
$p\le q$.
\subsection{A contraction result and unconditionality}
Let us begin with this contraction result for the $p$-summing norm of multilinear operators.
\begin{theorem}\label{contraction}
Suppose $X$ is a $\mathcal L_q^g$-space and let $r$, $q$ and $p>0$ satisfy one of the conditions in
Proposition~\ref{normas equivalentes} ($i$). Then, there is a constant $K$ (depending on $r$, $q$ and $p$),
such that for any finite matrix $(x_{i_1,\dots,i_m})_{i_1,\dots,i_m}\subset X$ and any choice of scalars
$\alpha_{i_1,\dots,i_m}$ we have,
$$
\pi_p\left( \sum _{i_1,\dots,i_m} \alpha_{i_1,\dots,i_m}\ e_{i_1}'\otimes\cdots\otimes e_{i_m}' \
x_{i_1,\dots,i_m} \right) \le K \|(\alpha_{i_1,\dots,i_m})\|_\infty\ \pi_p\left( \sum _{i_1,\dots,i_m}
e_{i_1}' \otimes\cdots\otimes e_{i_m}' \ x_{i_1,\dots,i_m} \right),
$$
where the $\pi_p$ norms are taken in $\Pi_p(^m\ell_r;X)$.
\end{theorem}
\begin{proof} If we show the inequality for $\alpha_{i_1,\dots,i_m}=\pm 1$, standard procedures lead to the
desired inequality for general scalars, possibly with different constants (see, for example, Section 1.6 in
\cite{DieJarTon95}).
We set
$$T=\sum _{i_1,\dots,i_m} e_{i_1}' \otimes\cdots\otimes e_{i_m}' \ x_{i_1,\dots,i_m} \quad \text{and}\quad T_\alpha=\sum _{i_1,\dots,i_m} \alpha_{i_1,\dots,i_m} e_{i_1}' \otimes\cdots\otimes e_{i_m}' \ x_{i_1,\dots,i_m}$$
and let $(r_k)_k$ be the sequence of Rademacher functions on $[0,1]$. For any choice of $t_1\dots,t_m\in [0,1]$, we have
\begin{eqnarray*}
& &
\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T_\alpha(r_1(t_1)z^{(1)},\dots,r_m(t_m)z^{(m)})\|_X^p
d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)}) \\
&=& \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T_\alpha(z^{(1)},\dots,z^{(m)})\|_X^p d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)}).\end{eqnarray*}
We integrate on $t_j\in [0,1]$, $j=1,\dots,m$ and use Fubini's theorem to obtain
\begin{eqnarray}
& & \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T_\alpha(z^{(1)},\dots,z^{(m)})\|_X^p d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)}) \label{conalfa}\\
&=& \hspace{-8pt} \int_{\mathbb K^N}\dots\int_{\mathbb K^N} \int_0^1 \dots \int_0^1
\big\|T_\alpha(r_1(t_1)z^{(1)},\dots,r_m(t_m)z^{(m)})\big\|_X^p dt_1\dots dt_m d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})\nonumber \\
&=& \hspace{-8pt}
\int_{[0;1]^m}\int_{(\mathbb {K^N})^m} \hspace{-3pt}
\big\|\hspace{-8pt} \sum_{i_1,\dots,i_m}\hspace{-6pt} r_{i_1}(t_1)\dots r_{i_m}(t_m) \alpha_{i_1,\dots,i_m} z^{(1)}_{i_1} \cdots
z^{(m)}_{i_m} x_{i_1,\dots,i_m} \big\|_X^p \hspace{-1pt} dt_1\ldots dt_m d\mu^N_{r'}(z^{(1)}\hspace{-1pt})\dots
d\mu^N_{r'}(z^{(m)}\hspace{-1pt}).\nonumber
\end{eqnarray}
Since $X$ has nontrivial cotype and local unconditional structure, we can apply a multilinear version of Pisier's deep result \cite[Proposition 2.1]{Pis78} (which follows the same lines as the bilinear result) to show that, for any $z^{(1)},\dots,z^{(m)} \in \mathbb K^N$, we have
\begin{eqnarray*}
& &
\int_0^1 \dots \int_0^1
\big\| \sum _{i_1,\dots,i_m} r_{i_1}(t_1)\dots r_{i_m}(t_m)\ \alpha_{i_1,\dots,i_m}\ z^{(1)}_{i_1} \cdots z^{(m)}_{i_m} \ x_{i_1,\dots,i_m} \big\|_X^2 dt_1\dots dt_m \\
&\le & K_X \ \int_0^1 \dots \int_0^1
\big\| \sum _{i_1,\dots,i_m} r_{i_1}(t_1)\dots r_{i_m}(t_m)\ z^{(1)}_{i_1} \cdots z^{(m)}_{i_m} \ x_{i_1,\dots,i_m} \big\|_X^2 dt_1\dots dt_m
\end{eqnarray*}
Using a multilinear Kahane inequality (which may be proved by induction on $m$), the same holds, with
a different constant, if we consider the power $p$ in the integrals.
This means that we can take the $\alpha_{i_1,\dots,i_m}$ out of \eqref{conalfa}, paying the price of a constant $K$. Now, we can go all the way back as before to obtain
\begin{eqnarray*} & &
\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T_\alpha(z^{(1)},\dots,z^{(m)})\|_X^p d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})\\ &\le & K \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^p
d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)}).
\end{eqnarray*}
The integral formula in Proposition~\ref{normas equivalentes} gives the result.
\end{proof}
Note that, in the scalar valued case, the previous theorem asserts that the monomials form an unconditional
basic sequence in $\Pi_1(^m\ell_r)$ for $r\ge 2$. This is a particular case of the result of Defant and
Pérez-García in \cite{DefPer08}. It should be noted that the analogous scalar valued result is much easier to
prove: after introducing the Rademacher functions as in the previous proof, we just have to use a multilinear
Khintchine inequality and the integral formula from Theorem~\ref{formulita escalar} to obtain the result
(Pisier's result is, of course, not needed in this case).
\subsection{Limit orders}
As a consequence of the integral formula for the $p$-summing norm, we are able to compute limit orders of
multiple summing operators (see definitions below). Limit orders of the ideal of scalar valued multiple 1-summing forms were computed
in \cite{DefPer08} for the bilinear case. In the multilinear case, they were computed in \cite{CarDimSev07}
for $\ell_r$ with $1\le r\le 2$ and in \cite{Paltesis} for $\ell_r$ with $r\ge 2$. This latter case can be
easily obtained from our integral formula for the multiple summing norm.
We will actually use the integral formula to compute some limit orders for the vector
valued case, the mentioned scalar case being very similar.
A subclass $\mathfrak{A}$ of the class $\mathcal{L}$
of all $m$-linear continuous mappings between Banach spaces is called an {\it ideal
of $m$-linear mappings} if
\begin{enumerate}
\item For all Banach spaces $E_1,\dots,E_m,F$, the component set $\mathfrak{A}(E_1,\dots,E_m;F):=\mathfrak{A}\cap\mathcal L(E_1,\dots,E_m;F)$ is a linear subspace of $\mathcal L(E_1,\dots,E_m;F)$.
\item If $T_j\in\mathcal L(E_j;G_j)$, $\phi\in\mathfrak{A}(G_1,\dots,G_m;G)$ and $S\in\mathcal L(G,F)$, then $S\circ\phi\circ(T_1,\dots,T_m)$ belongs to $\mathfrak{A}(E_1,\dots,E_m;F)$.
\item The application $\mathbb K^m\ni(\lambda_1,\dots,\lambda_m)\mapsto \lambda_1\cdot\ldots\cdot\lambda_m\in\mathbb K$ is in $\mathfrak{A}(\mathbb K,\dots,\mathbb K;\mathbb K)$.
\end{enumerate}
A {\it normed ideal of $m$-linear operators}
$(\mathfrak A,\|\cdot\|_{\mathfrak A})$ is an ideal $\mathfrak A$ of $m$-linear operators together with an ideal norm $\|\cdot\|_{\mathfrak A}$, that is,
\begin{enumerate}
\item $\|\cdot\|_{\mathfrak A}$ restricted to each component is a norm.
\item If $T_j\in\mathcal L(E_j;G_j)$, $\phi\in\mathfrak{A}(G_1,\dots,G_m;G)$ and $S\in\mathcal L(G,F)$, then $\|S\circ\phi\circ(T_1,\dots,T_m)\|_{\mathfrak A}\le\|S\|\|\phi\|_{\mathfrak A}\|T_1\|\cdot\dots\cdot\|T_m\|$.
\item $\|\mathbb K^m\ni(\lambda_1,\dots,\lambda_m)\mapsto \lambda_1\cdot\ldots\cdot\lambda_m\in\mathbb K\|_{\mathfrak{A}}=1$.
\end{enumerate}
Given a normed ideal of $m$-linear operators
$(\mathfrak A,\|\cdot\|_{\mathfrak A})$, the {\it limit order}
$\lambda_m(\mathfrak A,r,q)$ is defined as the infimum of all $\lambda\ge0$ such that there is a constant $C>0$
satisfying
$$
\|\Phi_N\|_{\mathfrak A}\le CN^\lambda,
$$
for every $N\ge1$, where $\Phi_N:\ell_r^N\times\dots\times\ell_r^N\to\ell_q^N$ is the $m$-linear operator,
$\Phi_N(x^1,\dots,x^m)=\sum_{j=1}^Nx^1_j\dots x^m_je_j$.
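To get a feeling for this quantity (an elementary side remark, not needed below), note that in the Hilbertian case $r=q=2$ the operator $\Phi_N$ has norm one: for $x^1,\dots,x^m$ in the unit ball of $\ell_2^N$,
$$\|\Phi_N(x^1,\dots,x^m)\|_{\ell_2^N}=\Big(\sum_{j=1}^N|x^1_j\cdots x^m_j|^2\Big)^{1/2}\le\Big(\prod_{i=2}^m\|x^i\|_{\ell_\infty^N}\Big)\|x^1\|_{\ell_2^N}\le 1,$$
with equality for $x^1=\dots=x^m=e_1$. So in this case any growth of $\|\Phi_N\|_{\mathfrak A}$ comes from the ideal norm itself and not from the uniform norm.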
\begin{proposition}
$$
\lambda_m(\Pi_1,r,q)=\left\{\begin{array}{lll}
\frac1{q} &\textrm{ if }&q\le r'\le 2\\
\frac1{r'}&\textrm{ if }&r'\le q\le 2\\
\frac1{q}+\frac{m}{2}-\frac{m}{r}&\textrm{ if }& \frac{2mq}{2+mq}< r\le 2\textrm{
and }q\le 2\\
0&\textrm{ if }&1\le r\le \frac{2mq}{2+mq}
\end{array}\right.
$$
\end{proposition}
These values can be represented by the following picture:
$$
\begin{pspicture}(3,3)
\pspolygon[linecolor=green!60!red!60,linewidth=0.2pt,
fillstyle=hlines,hatchangle=71,hatchwidth=0.1pt](1.5,1.5)(2.1,1.5)(2.6,3)(1.5,3)
\pspolygon[linecolor=green!60!red!60,linewidth=0.2pt,fillcolor=gray!40!white!60,
fillstyle=solid ](2.1,0)(3,0)(3,3)(2.585,3)(2.1,1.5)
\pspolygon[linecolor=green!60!red!60,linewidth=0.2pt,
fillstyle=hlines,hatchangle=90,hatchwidth=0.1pt](1.5,1.5)(0,3)(0,1.5)
\pspolygon[linecolor=green!60!red!60,linewidth=0.2pt,
fillstyle=hlines,hatchangle=0,hatchwidth=0.1pt](1.5,1.5)(0,3)(1.5,3)
\psline(0,-0.2)(0,3.2)\psline(-0.2,0)(3.2,0)
\rput[d](0,3.4){$_{1/{q}}$}
\rput[l](3.4,0){$_{1/{r}}$}
\rput[u](3,-0.2){$_1$}
\rput[r](0,3){$_1$}
\rput[u](1.5,-0.3){$_\frac1{2}$}
\rput[r](-0.1,1.5){$_\frac1{2}$}
\psline[linestyle=dashed,linewidth=0.2pt](1.5,0)(1.5,1.5)
\rput[l](4,1.5){$\lambda_m(\Pi_1,r,q)$}
\rput[c](1.1,2.5){$_{1/q}$}
\rput[c](2.6,1.3){$_0$}
\rput[c](.5,1.9){$_{1/r'}$}
\end{pspicture}
$$
The proof will be split into several lemmas.
\begin{lemma}
Let $p\le q\le r'\le 2$. Then $\lambda_m(\Pi_p,r,q)=\frac1{q}$.
\end{lemma}
\begin{proof}
Let $p\le q< r'\le 2$. Then by Theorem \ref{formulita},
\begin{eqnarray*}
c_{r',p}^m\pi_p(\Phi_N)
&=& \Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^q\Big)^{p/q}d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}\\
&\le & \Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^qd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/q}\\
&= & c_{r',q}^{m}N^{1/q}. \\
\end{eqnarray*}
Thus, $\lambda_m(\Pi_p,r,q)\le \frac1{q}$. On the other hand,
\begin{eqnarray*}
c_{r',p}^m\pi_p(\Phi_N) &=& \Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^q\Big)^{p/q}d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}\\
&\ge & \Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}N^{p/q-1}\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}\\
&= & c_{r',p}^{m}N^{1/q}.
\end{eqnarray*}
Hence, $\lambda_m(\Pi_p, r, q)\ge \frac{1}{q}$ and the proof is done.
\end{proof}
\begin{lemma}
Let $p<r'\le q<2$. Then $\lambda_m(\Pi_p,r,q)= \frac1{r'}$.
\end{lemma}
\begin{proof}
Let $1<s<r'\le q<2$. Then, by Theorem \ref{formulita},
\begin{eqnarray*}
c_{r',1}^m\pi_1(\Phi_N)
&=& \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^q\Big)^{1/q}d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\\
&\le& \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^s\Big)^{1/s}d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\\
&\le & \Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\sum_j|z^{(1)}_j\dots
z^{(m)}_j|^sd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/s}\\
&= & c_{r',s}^{m}N^{1/s}. \\
\end{eqnarray*}
Since this is true for every $s<r'$, $\lambda_m(\Pi_1,r,q)\le \frac1{r'}$.
On the other hand, let $\Psi_N:\ell_r^N\times\dots\times\ell_r^N\times\ell_{q'}^N\to \mathbb C$ be the $(m+1)$-linear
form induced by $\Phi_N$. By \cite[Proposition 2.2]{PerVil04} or
\cite[Proposition 2.5]{Mat03}, $\pi_1(\Phi_N)\ge\pi_1(\Psi_N)$. Thus, by Theorem \ref{formulita
escalar} taking into account the comments after Theorem \ref{formulita}, we have
\begin{eqnarray*}
c_{r',1}^mc_{q,1}\pi_1(\Psi_N) &=& \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|\Psi_N(z^{(1)},\dots,z^{(m+1)})|d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})d\mu^N_{q}(z^{(m+1)})\\
&=& \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big|\sum_jz^{(1)}_j\dots
z^{(m+1)}_j\Big|d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})d\mu^N_{q}(z^{(m+1)})\\
&=& c_{r',1} \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(\sum_j|z^{(2)}_j\dots
z^{(m+1)}_j|^{r'}\Big)^{1/r'}d\mu^N_{r'}(z^{(2)})\dots d\mu^N_{r'}(z^{(m)})d\mu^N_{q}(z^{(m+1)})\\
&\ge& c_{r',1}N^{\frac1{r'}-1} \int_{\mathbb K^N}\dots\int_{\mathbb K^N}\sum_j|z^{(2)}_j\dots
z^{(m+1)}_j|d\mu^N_{r'}(z^{(2)})\dots d\mu^N_{r'}(z^{(m)})d\mu^N_{q}(z^{(m+1)})\\
&=& c_{r',1}^mc_{q,1}N^{\frac1{r'}-1}N \,=\, c_{r',1}^mc_{q,1}N^{\frac1{r'}}
\end{eqnarray*}
Therefore $\lambda_m(\Pi_1,r,q)= \frac1{r'}$.
This proves our assertions for $p=1$. By \cite[Theorem 4.7]{BotMicPel10},
$\Pi_p(\ell_r,\ell_q)=\Pi_1(\ell_r,\ell_q)$ for every $1\le p\le 2$, and the lemma follows.
\end{proof}
Since $\ell_r$ has cotype 2 for $1\le r\le 2$, given any $m$-linear form $T\in \mathcal
L(\ell_r^N,\dots,\ell_r^N; \mathbb C)$, we know from
\cite[Lemma 4.5]{DefPer08} that
\begin{equation}\label{lemadeDP}
\pi_1(T)\asymp \sup \pi_1(T\circ (D_{\sigma_1},\dots,D_{\sigma_m})),
\end{equation}
where the supremum is taken over the set of norm one diagonal operators $D_{\sigma_j}:\ell_2^N\to \ell_r^N$. The vector-valued version of this result follows the same lines, so \eqref{lemadeDP} holds for any $m$-linear map $T\in \mathcal
L(\ell_r^N,\dots,\ell_r^N; Y)$, for every Banach space $Y$.
\begin{lemma}
Let $1\le p,q,r\le2$. Then
$(i)$ $\lambda_m(\Pi_p,r,q)=0$ for $1\le r\le \frac{2mq}{2+mq}$.
$(ii)$ $\lambda_m(\Pi_p,r,q)=\frac1{q}+\frac{m}{2}-\frac{m}{r}$ for $\frac{2mq}{2+mq}< r\le 2$.
\end{lemma}
\begin{proof}
Let $\frac1{t}=\frac1{r}-\frac12$, then for any diagonal operator we have $\|D_\sigma\|_{\mathcal L(\ell_2^N;\ell_r^N)}=\|\sigma\|_{\ell_t^N}$.
Since $\Phi_N\circ (D_{\sigma_1},\dots,D_{\sigma_m})\in\mathcal L(^m\ell_2^N;\ell_q^N)$, by Theorem
\ref{formulita} we have
\begin{equation*}\label{pi_1 con diagonales}
\pi_1(\Phi_N\circ (D_{\sigma_1},\dots,D_{\sigma_m}))\asymp
\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\Big(\sum_{j=1}^N |\sigma_1(j)z_j^{(1)}\dots\sigma_m(j)z_j^{(m)}|^q\Big)^{1/q}d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)}).
\end{equation*}
$(i)$ The assumption $1\le r\le \frac{2mq}{2+mq}$ implies $t\le mq$. Then
$$
\Big(\sum_{j=1}^N |\sigma_1(j)z_j^{(1)}\dots\sigma_m(j)z_j^{(m)}|^q\Big)^{1/q}\le \|\sigma_1\|_{\ell_t^N}\dots\|\sigma_m\|_{\ell_t^N}\sup_{j} |z_j^{(1)}\dots z_j^{(m)}|.
$$
Consequently, for any $s\ge1$, we have
\begin{align*}
\pi_1(\Phi_N\circ (D_{\sigma_1},\dots,D_{\sigma_m})) &\preceq \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\sup_{j} |z_j^{(1)}\dots z_j^{(m)}|d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)}) \\
&\le \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\sum_{j=1}^N |z_j^{(1)}\dots z_j^{(m)}|^s d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)})\Big)^{1/s} = c_{2,s}^mN^\frac1{s},
\end{align*}
which implies that $\lambda_m(\Pi_1,r,q)=0$.
$(ii)$ The assumption $\frac{2mq}{2+mq}< r\le 2$ implies $t>mq$. Let $\frac1{q}=\frac{m}{t}+\frac1{s}$. Then
$$
\Big(\sum_{j=1}^N |\sigma_1(j)z_j^{(1)}\dots\sigma_m(j)z_j^{(m)}|^q\Big)^{1/q}\le \|\sigma_1\|_{\ell_t^N}\dots\|\sigma_m\|_{\ell_t^N}\Big(\sum_{j=1}^N |z_j^{(1)}\dots z_j^{(m)}|^s\Big)^{1/s}.
$$
Thus we have,
\begin{eqnarray*}
\pi_1(\Phi_N\circ (D_{\sigma_1},\dots,D_{\sigma_m})) &\preceq& \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\Big(\sum_{j=1}^N |z_j^{(1)}\dots z_j^{(m)}|^s\Big)^{1/s}d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)})\\
&\le & \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\sum_{j=1}^N |z_j^{(1)}\dots z_j^{(m)}|^s d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)})\Big)^{1/s}\\
&=& c_{2,s}^{m} N^{1/s}\quad \asymp\, N^{\frac1{q}+\frac{m}{2}-\frac{m}{r}}.
\end{eqnarray*}
On the other hand,
\begin{align*}
\pi_1(\Phi_N\circ (D_{\sigma_1},\dots,D_{\sigma_m})) &\succeq
N^{-1/q'}\int_{\mathbb K^N}\dots\int_{\mathbb K^N}
\sum_{j=1}^N |\sigma_1(j)z_j^{(1)}\dots\sigma_m(j)z_j^{(m)}|d\mu^N_{2}(z^{(1)})\dots
d\mu^N_{2}(z^{(m)})\\
&= N^{-1/q'}\,c_{2,1}^m\,
\sum_{j=1}^N |\sigma_1(j)\dots\sigma_m(j)|.
\end{align*}
Taking supremum over $\sigma_k\in B_{\ell_t^N}$, $k=1,\dots,m$, and using \eqref{lemadeDP} we get that $$\pi_1(\Phi_N) \succeq N^{-1/q'} N^{1-m/t} c_{2,1}^m \asymp\, N^{\frac1{q}+\frac{m}{2}-\frac{m}{r}}.$$
This proves our assertions for $p=1$. Since $\ell_r$ has cotype 2, by \cite[Theorem 4.6]{BotMicPel10},
$\Pi_p(\ell_r,\ell_q)$ coincides with $\Pi_1(\ell_r,\ell_q)$ for every $1\le p\le 2$, and the lemma follows.
\end{proof}
\section{Proofs of the main results}\label{sec-proofs}
The proofs will be split into a few lemmas. We will also use the following result, which is \cite[Proposition 3.1]{Per04}.
\begin{proposition}[P\'erez-Garc\'ia]\label{perez}
Let $T\in\Pi_p^m(X_1,\dots,X_m;Y)$ and let $(\Omega_j,\mu_j)$ be measure spaces for each $1\le j\le m$. We
have
\begin{multline*}
\Big(\int_{\Omega_1}\dots\int_{\Omega_m}\|T(f_1(w_1),\dots,f_m(w_m))\|_Y^p \, d\mu_1(w_1)\dots
d\mu_m(w_m)\Big)^{1/p} \\ \le \pi_p(T) \prod_{j=1}^m\sup_{x_j^*\in B_{X_j^*}} \Big(\int_{\Omega_j} |\langle
x_j^*,f_j(w_j)\rangle|^pd\mu_j(w_j)\Big)^{1/p},
\end{multline*}
for every $f_j\in L_p (\mu_j,X_j)$.
\end{proposition}
A simple consequence of this proposition is the following.
\begin{lemma}\label{formulita desig facil}
Let $T$ be a multilinear operator in $\mathcal L(^m\ell_r^N;Y)$, and $p<r'<2$ or $r=2$.
Then
$$
\Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T(z^{(1)},\dots,z^{(m)})\|_Y^p \, d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})\Big)^{1/p}\le c_{r',p}^m\pi_p(T).
$$
\end{lemma}
\begin{proof}
Let $(\Omega_j,\mu_j)=(\mathbb K^N,\mu_{r'})$, $f_j\in L_p((\mathbb K^N,\mu_{r'}),\mathbb K^N)$, $f_j(z)=z$
for all $j$ and $p<r'<2$ or $r=2$. By Proposition \ref{perez} and rotation invariance of stable measures,
\begin{multline*}
\Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\|T(z^{(1)},\dots,z^{(m)})\|_Y^p \, d\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})\Big)^{1/p} \\
\le \pi_p(T) \prod_{j=1}^m\sup_{w_j\in B_{\ell_{r'}^N}} \Big(\int_{\mathbb K^N} |\langle
z^{(j)},w_j\rangle|^pd\mu^N_{r'}(z^{(j)})\Big)^{1/p}\\
= \pi_p(T)\Big(\int_{\mathbb K^N} |e_1'(z)|^pd\mu^N_{r'}(z)\Big)^{m/p} \; = \; \pi_p(T)c_{r',p}^{m}. \qedhere
\end{multline*}
\end{proof}
Now we are ready to prove Theorem~\ref{formulita escalar}.
\begin{proof} \emph{of Theorem~\ref{formulita escalar}}
One inequality is given in the previous Lemma.
We prove the reverse inequality by induction on $m$. For $m=1$, we have $\phi\in(\ell_r^N)'=\ell_{r'}^N$ and then
\begin{eqnarray*}
\pi_p(\phi) & = & \|\phi\|_{\ell_{r'}^N} \; = \;\Big(\sum_{j=1}^N |e_j'(\phi)|^{r'}\Big)^{1/r'} = \;
c_{r',p}^{-1}\Big(\int_{\mathbb K^N}\Big|\sum_{j=1}^N e_j'(\phi) z_j\Big|^pd\mu^N_{r'}(z)\Big)^{1/p} \\
& = & c_{r',p}^{-1}\Big(\int_{\mathbb K^N}|\phi(z)|^pd\mu^N_{r'}(z)\Big)^{1/p} .
\end{eqnarray*}
Suppose that for any $k$-linear form $\psi:\ell_r^N\times\dots\times\ell_r^N\to\mathbb K$, with $k<m$,
we have,
\begin{eqnarray*}
\sum_{n_1,\dots,n_k}|\psi(u_{n_1}^{(1)},\dots,u_{n_k}^{(k)})|^p \le c_{r',p}^{-kp}\Big(\int_{\mathbb
K^N}\dots\int_{\mathbb K^N}|\psi(z^{(1)},\dots,z^{(k)})|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(k)})\Big),
\end{eqnarray*}
for all sequences $(u_{n_j}^{(j)})\subset \ell_r^N$, with $w_p(u_{n_j}^{(j)})=1$, $j=1,\dots, k$.
Let $\phi$ be an $m$-linear form, and $(u_{n_j}^{(j)})\subset \ell_r^N$, with $w_p(u_{n_j}^{(j)})=1$,
$j=1,\dots, m$. Then
\begin{align*}
\sum_{n_1,\dots,n_m} & |\phi(u_{n_1}^{(1)},\dots,u_{n_m}^{(m)})|^p =
\sum_{n_1}\sum_{n_2,\dots,n_m}|\phi(u_{n_1}^{(1)},\dots,u_{n_m}^{(m)})|^p \\
& \le c_{r',p}^{-(m-1)p}\sum_{n_1}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|\phi({u_{n_1}^{(1)}},z^{(2)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(2)})\dots d\mu^N_{r'}(z^{(m)})\Big) \\
&= c_{r',p}^{-(m-1)p}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\sum_{n_1}|\phi({u_{n_1}^{(1)}},z^{(2)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(2)})\dots
d\mu^N_{r'}(z^{(m)})\Big) \\
&\le c_{r',p}^{-(m-1)p}\int_{\mathbb K^N}\dots\int_{\mathbb K^N} \Big(c_{r',p}^{-p}\int_{\mathbb
K^N}|\phi(z^{(1)},z^{(2)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(1)})\Big) d\mu^N_{r'}(z^{(2)})\dots d\mu^N_{r'}(z^{(m)}) \\
&=c_{r',p}^{-mp}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N} |\phi(z^{(1)},\dots,z^{(m)})|^p
d\mu_{r'}^N(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big) .
\end{align*}
Therefore,
$$
c_{r',p}^m\pi_p(\phi)\le \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|\phi(z^{(1)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}. \qedhere
$$
\end{proof}
Let us continue our way to the proof of Theorem~\ref{formulita}.
\begin{lemma}
Let $T$ be an $m$-linear mapping in $\mathcal L(^mE;\ell_q^M)$, $0<p<q<2$ or $q=2$. Then
$$
c_{q,p}\pi_p(T)\le \Big(\int_{\mathbb K^M}\pi_p(z\circ T)^pd\mu^M_{q}(z)\Big)^{1/p}.
$$
In particular, if $T$ is linear,
$$
c_{q,p}\pi_p(T)\le \Big(\int_{\mathbb K^M}\|T'(z)\|_{E'}^p \, d\mu^M_{q}(z)\Big)^{1/p}.
$$
\end{lemma}
\begin{proof}
For $(u_{k_j}^j)\subset E$ with $w_p((u_{k_j}^j))=1$, $j=1, \ldots, m,$ we have
\begin{eqnarray*}
\sum_{k_1,\dots,k_m} \|T(u_{k_1}^1,\dots,u_{k_m}^m)\|_{\ell_q^M}^p & = & \sum_{k_1,\dots,k_m} \Big(\sum_{j=1}^M |e_j'\circ T(u_{k_1}^1,\dots,u_{k_m}^m)|^q\Big)^{p/q} \\
& = & \sum_{k_1,\dots,k_m} c_{q,p}^{-p}\Big(\int_{\mathbb K^M}\Big|\sum_{j=1}^M e_j'\circ T(u_{k_1}^1,\dots,u_{k_m}^m)
z_j\Big|^pd\mu^M_{q}(z)\Big) \\
& = & \sum_{k_1,\dots,k_m} c_{q,p}^{-p}\Big(\int_{\mathbb K^M}|z\circ
T(u_{k_1}^1,\dots,u_{k_m}^m)|^pd\mu^M_{q}(z)\Big)
\\
& \le & c_{q,p}^{-p}\Big(\int_{\mathbb K^M}\pi_p(z\circ T)^pd\mu^M_{q}(z)\Big) .
\end{eqnarray*}
Therefore,
$$
c_{q,p}\pi_p(T)\le \Big(\int_{\mathbb K^M}\pi_p(z\circ T)^pd\mu^M_{q}(z)\Big)^{1/p}.
$$
For $m=1$, $z\circ T$ is a linear form, and then we have $\pi_p(z\circ T)=\|z\circ T\|_{E'}=\|T'(z)\|_{E'}$.
\end{proof}
By a {\it Banach sequence space} we mean a Banach space $X\subset \mathbb K^{\mathbb N}$ of sequences in $\mathbb K$ such that $\ell_1\subset X\subset \ell_\infty$ with norm one inclusions satisfying that if
$x\in\mathbb K^{\mathbb N}$ and $y\in X$ are such that $|x_n|\le |y_n|$ for every $n\in\mathbb N$, then $x$ belongs to $X$ and $\|x\|_X\le\|y\|_X$.
We will now show that if we consider multilinear mappings whose range are certain Banach sequence spaces, then
the norm of the multilinear mapping defined by the
integral formula is equivalent to the multiple summing norm.
We will need the following remark, which may be seen as a Khintchine/Kahane type multilinear inequality for the stable
measures.
\begin{remark}\label{khintchine estable}
If $T$ is an $m$-linear form on $\mathbb K^N$ and $q\le p<s<2$, or $q\le p$ and $s=2$, then
\begin{multline*}
c_{s,p}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|T(z^{(1)},\dots,z^{(m)})|^pd\mu^N_{s}(z^{(1)})\dots d\mu^N_{s}(z^{(m)})\Big)^{1/p} \\
\le c_{s,q}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|T(z^{(1)},\dots,z^{(m)})|^qd\mu^N_{s}(z^{(1)})\dots d\mu^N_{s}(z^{(m)})\Big)^{1/q}.
\end{multline*}
For $m=1$ it follows from property \eqref{estable} of L\'evy stable measures, and then we just apply induction on $m$.
\end{remark}
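To spell out the case $m=1$ (a small verification under the conditions of the remark): for a linear form $T(z)=\langle\alpha,z\rangle$ on $\mathbb K^N$, formula \eqref{estable} (or \eqref{khintchine gaussiano} when $s=2$) gives
$$c_{s,p}^{-1}\Big(\int_{\mathbb K^N}|T(z)|^pd\mu^N_{s}(z)\Big)^{1/p}=\|\alpha\|_{\ell_s^N}=c_{s,q}^{-1}\Big(\int_{\mathbb K^N}|T(z)|^qd\mu^N_{s}(z)\Big)^{1/q},$$
so for $m=1$ the two sides of the remark actually coincide; the general case then follows by induction on $m$, as indicated.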
Recall that a Banach sequence space $X$ is called {\it $q$-concave}, $q\ge1$, if there
exists $C>0$ such that for any $x_1,\dots,x_n\in X$ we have
$$
\Big(\sum_{k=1}^n\|x_k\|_X^q\Big)^\frac1{q}\le C\Big\|\Big(\sum_{k=1}^n|x_k|^q\Big)^{1/q}\Big\|_X.
$$
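For instance, an elementary case included for the reader's convenience: $X=\ell_q$ is $q$-concave with constant $C=1$, since interchanging the two sums gives
$$\sum_{k=1}^n\|x_k\|_{\ell_q}^q=\sum_{k=1}^n\sum_{j}|x_k(j)|^q=\sum_{j}\sum_{k=1}^n|x_k(j)|^q=\Big\|\Big(\sum_{k=1}^n|x_k|^q\Big)^{1/q}\Big\|_{\ell_q}^q,$$
so the defining inequality holds with equality.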
\begin{lemma}\label{lema1}
Let $X$ be a $q$-concave Banach sequence space with constant $C$ and let
$T\in\mathcal
L(^mE;X)$ be an $m$-linear operator. Denote by $T_j$ the $j$-coordinate of
$T$ ($T_j$ is a scalar $m$-linear form). Then $\pi_q(T)\le C \|(\pi_q(T_j))_j\|_X$.
\end{lemma}
\begin{proof} Just note that for finite sequences $\big(u_{n_k}^{(k)}\big)_{n_k}\subset E$, with
$w_q\Big(\big(u_{n_k}^{(k)}\big)_{n_k}\Big)=1$ we have
$$
\Big(\sum_{n_1,\dots,n_m}\|T(u_{n_1}^{(1)},\dots,u_{n_m}^{(m)})\|_X^q\Big)^{1/q} \le C
\Big\|\Big(\big(\sum_{n_1,\dots,n_m}|T_j(u_{n_1}^{(1)},\dots,u_{n_m}^{(m)})|^q\big)^{1/q}\Big)_j\Big\|_X \le
C\big\|(\pi_q(T_j))_j\big\|_X. $$
\end{proof}
\begin{lemma}\label{lema2}
Let $X$ be a Banach sequence space and let $T\in\mathcal
L(^m\ell_r^N;X)$ be an $m$-linear operator, $r\ge2$.
If either $p,q<r'<2$ or $r=2$, then
$$\|(\pi_p(T_j))_j\|_X\le c_{r',1}^{-m} \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_Xd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\le
(c_{r',q}/c_{r',1})^{m} \pi_q(T).$$
\end{lemma}
\begin{proof}
By Theorem \ref{formulita escalar}, Remark \ref{khintchine
estable}
and Lemma \ref{formulita desig facil} we have
\begin{eqnarray*}
\|(\pi_p(T_j))_j\|_X &= & c_{r',p}^{-m}\Big\|\left(\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|T_j(z^{(1)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}\right)_j\Big\|_X
\\
&\le & c_{r',1}^{-m}\Big\|\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}|T_j(z^{(1)},\dots,z^{(m)})|d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)\Big\|_X \\
&\le & c_{r',1}^{-m} \int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_Xd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\\
&\le & c_{r',1}^{-m} \Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^qd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/q}\\
&\le & (c_{r',q}/c_{r',1})^{m} \pi_q(T).
\end{eqnarray*}
\end{proof}
As a consequence of Lemma \ref{formulita desig facil} we obtain one inequality in the following result. For the other inequality, note that if $X$ is $q$-concave, then it is also $s$-concave for any $s\ge q$ and apply the previous two lemmas.
\begin{corollary}\label{formulita q-concave}
Let $X$ be a $q$-concave Banach sequence space and let $T\in\mathcal
L(^m\ell_r^N;X)$. Then for $r=2$ and $q\le s$, or $q\le s<r'<2$, we have
\begin{eqnarray*} \pi_s(T) \asymp
\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_X^qd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/q} \\
\end{eqnarray*}
\end{corollary}
Standard localization techniques and the previous corollary readily show the coincidence of multiple
$s-$summing and multiple $q-$summing operators from $\mathcal L_r^g$-spaces to $q$-concave Banach sequence
spaces.
\begin{corollary}
Let $X$ be a $q$-concave Banach sequence space, and let $E$ be an $\mathcal L_r^g$-space.
Then
$$
\Pi_s(^mE;X)=\Pi_q(^mE;X),
$$
for $q\le s<r'<2$, or $q\le s$ and $r=2$.
\end{corollary}
Proceeding as above we may prove the following.
\begin{corollary}\label{pi_q en L_q}
Let $X$ be an $\mathcal L_{q,1}^g$-space and let $T\in\mathcal
L(^m\ell_r^N;X)$ be an $m$-linear operator. If $q<r'<2$ or,
$r=2$, then we have
$$\pi_q(T) = c_{r',q}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_{X}^qd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/q}.$$
\end{corollary}
We have almost finished the proofs of the main results.
\begin{proof}[Proof of Theorem \ref{formulita}]
It is clearly enough to show the result for operators with range in $\ell_q^M$ for some $M\in\mathbb N$.
One inequality is Lemma \ref{formulita desig facil}.
For the other inequality, if either $r=q=2$; or $r=2$ and $p<q<2$; or $p<r'<2$ and $p<q\le2$, a
combination
of the previous results gives:
\begin{eqnarray*}
\pi_p(T) & \le & c_{q,p}^{-1}\Big(\int_{\mathbb K^M}\pi_p(z\circ T)^pd\mu^M_{q}(z)\Big)^{1/p} \\
& \le & c_{r',p}^{-m}c_{q,p}^{-1}\Big(\int_{\mathbb K^M}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}|z\circ
T(z^{(1)},\dots,z^{(m)})|^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)d\mu^M_{q}(z)\Big)^{1/p} \\
& \le & c_{r',p}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb K^N}\Big(c_{q,p}^{-p}\int_{\mathbb K^M}|z\circ
T(z^{(1)},\dots,z^{(m)})|^pd\mu^M_{q}(z)\Big)d\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p} \\
& \le & c_{r',p}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\pi_p\big(T(z^{(1)},\dots,z^{(m)});\ell_{q'}^M\to\mathbb K\big)^pd\mu^N_{r'}(z^{(1)})\dots
d\mu^N_{r'}(z^{(m)})\Big)^{1/p} \\
& = & c_{r',p}^{-m}\Big(\int_{\mathbb K^N}\dots\int_{\mathbb
K^N}\|T(z^{(1)},\dots,z^{(m)})\|_{\ell_q^M}^pd\mu^N_{r'}(z^{(1)})\dots d\mu^N_{r'}(z^{(m)})\Big)^{1/p}, \\
\end{eqnarray*}
where by $\pi_p\big(T(z^{(1)},\dots,z^{(m)});\ell_{q'}^M\to\mathbb K\big)$ we denote the absolutely
$p$-summing norm of the vector $T(z^{(1)},\dots,z^{(m)})$ thought of as a linear functional on $\ell_{q'}^M$,
whose norm is just the $\ell_{q}^M$-norm of the vector.
The cases where $p=q$ follow from Corollary \ref{pi_q en L_q}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{normas equivalentes}]
($i$)
For $r=2$, the equivalence of norms is a consequence of Theorem \ref{formulita} when $p<q\le 2$ or $p=q$ and
of
Corollary \ref{formulita q-concave} for $q\le p$.
For $r>2$, the equivalence of norms is a consequence of Theorem \ref{formulita} when $p<r'$ and $p<q\le 2$, and
of
Corollary \ref{formulita q-concave} when $q\le p<r'$.
($ii$) This statement follows from Lemma \ref{formulita desig facil}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop inclusion}] The first assertion follows from Corollary~\ref{formulita q-concave} and localization. For the second assertion just combine Lemma \ref{lema1} with Lemma \ref{lema2}.
\end{proof}
\end{document}
\begin{document}
\title{Secondary Chern-Euler forms and the Law of Vector Fields}
\author{Zhaohu Nie}
\email{[email protected]}
\address{Department of Mathematics\\
Penn State Altoona\\
3000 Ivyside Park\\
Altoona, PA 16601, USA}
\date{\today}
\subjclass[2000]{57R20, 57R25}
\begin{abstract}
The Law of Vector Fields is a term coined by Gottlieb for a relative Poincar\'e-Hopf theorem. It was first proved by Morse \cite{morse} and expresses the Euler characteristic of a manifold with boundary in terms of the indices of a generic vector field and the inner part of its tangential projection on the boundary. We give two elementary differential-geometric proofs of this topological theorem, in which secondary Chern-Euler forms \cite{chern} naturally play an essential role. In the first proof, the main point is to construct a chain away from some singularities. The second proof employs a study of
the secondary Chern-Euler form on the boundary, which may be of independent interest. More precisely, we show by explicitly constructing a primitive that, away from the outward and inward unit normal vectors, the secondary Chern-Euler form is exact up to a pullback form. In either case, Stokes' theorem is used to complete the proof.
\end{abstract}
\maketitle
\section{Introduction}
Let $X$ be a smooth oriented compact Riemannian manifold with boundary $M$. Throughout the paper we fix $\dim X=n\geq 2$ and hence $\dim M=n-1$.
On $M$, we have a canonical decomposition
\begin{equation}
\label{collar}
TX|_M\cong \nu\oplus TM,
\end{equation}
where $\nu$ is the rank 1 trivial normal bundle of $M$.
Let $V$ be a smooth vector field on $X$.
We assume that $V$ has only isolated singularities, i.e., the set $\sing V:=\{x\in X|V(x)=0\}$ is finite, and that the restriction $V|_M$ is nowhere zero. Define the index $\ind_x V$ of $V$ at an isolated singularity $x$ as usual (see, e.g., \cite[p. 136]{hirsch}), and let $\ind V=\sum_{x\in \sing V} \ind_x V$
denote the sum of the local indices.
\begin{para}\label{n&N}
As an important special case, let $\vn$ be the outward unit normal vector field of $M$, and $\vec N$ a generic extension of $\vn$ to $X$. Then by definition
\begin{equation}
\label{standard0}
\ind \vec N=\chi(X),
\end{equation}
where $\chi(X)$ is the Euler characteristic of $X$ (see, e.g., \cite[p. 135]{hirsch}).
\end{para}
For a general $V$, let $\phiartial V$ be the projection of $V|_M$ to $TM$ according to \eqref{collar}, and let
$\phiartial_- V$ (resp. $\phiartial_+ V$) be the restriction of $\phiartial V$ to the subspace of $M$ where $V$ points inward (resp. outward) to $X$. Generically $\phiartial_\phim V$ have isolated singularities. (A non-generic $V$ can always be modified by adding an extension to $X$ of a normal vector field or a tangent vector field to $M$.)
Using
the flow along $-V$ and counting fixed points with multiplicities, we have the following
\emph{Law of Vector Fields}:
\begin{equation}
\label{law of v f}
\ind V+\ind \phimv=\chi(X).
\end{equation}
Naturally this is a relative Poincar\'e-Hopf theorem. It was first proved by Morse \cite{morse} and later on publicized by Gottlieb, who also coined the name ``Law of Vector Fields".
One main purpose of this paper is to give two elementary differential-geometric proofs of this theorem \eqref{law of v f}.
In his famous proof \cite{chern} of the Gauss-Bonnet theorem, Chern constructed a differential form $\Phi$ (see \eqref{phi}) of degree $n-1$
on the tangent sphere bundle $STX$, consisting of unit vectors in $TX$, satisfying the following two conditions:
\begin{equation*}
\label{dP=O}
d\Phi=-\Omega,
\end{equation*}
where $\Omega$ is the Euler curvature form of $X$ (pulled back to $STX$), which is defined to be $0$ when $\deltaim X$ is odd (see \eqref{omega}), and
\begin{equation*}
\wt{\Phi_0}=\wt{d\sigma}_{n-1},
\end{equation*}
i.e., the 0th term $\wt{\Phi_0}$ of $\Phi$ is the relative unit volume form for the fibration $S^{n-1}\to STX \to X$
(see \eqref{0th term}). We call $\Phi$ the \emph{secondary Chern-Euler form}.
Define $\alphalpha_V:M\to STX|_M$ by rescaling $V$, i.e., $\alpha_V(x)=\frac{V(x)}{|V(x)|}$ for $x\in M$. Then Chern's basic method \cite[\S 2]{chern2}\cite[\S 6]{bc}
using the above two conditions
and Stokes' theorem gives
\begin{equation}
\label{general}
\int_X \Omega=-\int_{\alphalpha_V(M)}\Phi+\ind V
\end{equation}
(see \eqref{basic method}).
Applying \eqref{general} to the $\vec n$ and $\vec N$ in \ref{n&N} and using \eqref{standard0}, one gets the following relative Gauss-Bonnet theorem in \cite{chern2}
\begin{equation}
\label{standard}
\int_X \Omega=-\int_{\vn(M)}\Phi+\ind(\vec N)=-\int_{\vn(M)}\Phi+\chi(X).
\end{equation}
Comparison of \eqref{general} and \eqref{standard} gives
\begin{equation}
\label{from chern's method}
\chi(X)=\ind V+\int_{\vn(M)}\Phi-\int_{\alphalpha_V(M)}\Phi.
\end{equation}
The following is our main result that identifies \eqref{from chern's method} with the Law of Vector Fields \eqref{law of v f}.
\begin{theorem}\label{two terms}
The following formula holds:
$$
\int_{\vn(M)}\Phi-\int_{\alphalpha_V(M)}\Phi=\ind \phimv.
$$
\end{theorem}
A first proof of the above theorem is given in Section \ref{first proof}. The main point of this first proof is to construct, away from some singularities, a chain connecting $\alpha_V(M)$ to $\vec n(M)$ and then to apply Stokes' theorem.
A second proof of Theorem \ref{two terms} to be given in Section \ref{trans} employs a study of the secondary Chern-Euler form on the boundary, i.e., when the structure group is reduced from $\so(n)$ to $1\times \so(n-1)$. This study may be of some independent interest.
In more detail, the images $\vec n(M)$ and $(-\vec n)(M)$ in $STX|_M$
are the spaces of outward and inward unit normal vectors of $M$. Define
\begin{equation}
\label{eq-def-cstm}
CSTM:=STX|_M\backslash(\vec n(M)\cup (-\vec n)(M))
\end{equation}
($C$ for cylinder) to be the complement. Also let $\phii:STX|_M\to M$ be the natural projection.
\begin{theorem}\label{new main} There exists a differential form $\Gamma$ of degree $n-2$ on $CSTM$ such that, after restricting to $CSTM$,
\begin{equation}
\label{main formula}
\Phi-\phii^*\vec n^* \Phi=d\Gamma.
\end{equation}
\end{theorem}
The form $\Gamma$ is defined in \eqref{def Gamma}, and the above theorem is proved right after that utilizing Propositions \ref{much quicker} and \ref{big Phi}.
At the end of Section \ref{trans}, we employ Stokes' theorem to give a second proof of Theorem \ref{two terms}, and hence of the Law of
Vector Fields \eqref{law of v f}, using Theorem \ref{new main}.
\begin{remark} Unlike in \cite{sha} or \cite{nie}, we do not assume that the metric on $X$ is locally product near its boundary $M$. Therefore our results in this paper deal with the general case and generalize those in \cite{nie}.
\end{remark}
\begin{remark} We would like to emphasize the elementary nature of our approaches, in the classical spirit of Chern in \cites{chern,chern2}. Transgression of Euler classes has gone through some modern development utilizing Berezin integrals.
The Thom class in a vector bundle and its transgression are studied in \cite{MQ}. This Mathai-Quillen form is further studied in \cite{BZ} and \cite{BM}.
For the modern developments, we refer the reader to the above references and two books, \cite{BGV} and \cite{Z}, on this subject.
\end{remark}
\section{Secondary Chern-Euler forms}
In this section, we review the construction, properties and usage of the secondary Chern-Euler form $\Phi$ in \cite{chern}, which plays an essential role in our approaches.
Throughout the paper, $c_{r-1}$ denotes the volume of the unit $(r-1)$-sphere $S^{r-1}$. We also agree on the following ranges of indices
\begin{equation}
\label{range}
1\leq A,B\leq n,\ 2\leq \alpha,\beta\leq n-1,\ 2\leq s,t \leq n.
\end{equation}
The secondary Chern-Euler form $\Phi$ is defined as follows.
Choose oriented local orthonormal frames $\{e_1,e_2,\cdots,e_n\}$ for the tangent bundle $TX$. Let $(\omega_{AB})$ and
$(\Omega_{AB})$ be the $\mathfrak{so}(n)$-valued connection forms and curvature forms
for the Levi-Civita connection $\nabla$ of the Riemannian metric on $X$ defined by
\begin{gather}
\nabla e_A=\sum_{B=1}^n \omega_{AB}e_B,\label{def-omega}\\
\Omega_{AB}=d\omega_{AB}-\sum_{C=1}^n \omega_{AC}\omega_{CB}.\label{def-O}
\end{gather}
(In this paper, products of differential forms always mean ``exterior products" although we omit the notation $\wedge$ for simplicity.)
Let the $u_A$ be the coordinate functions on $STX$ in terms of the frames defined by
\begin{equation}
\label{def-u}
v=\sum_{A=1}^{n} u_A(v)e_A,\quad \forall v\in STX.
\end{equation}
Let the $\theta_A$ be the 1-forms on $STX$ defined by
\begin{equation}
\label{def-theta}
\theta_A=du_A+\sum_{B=1}^{n}u_B\omega_{BA}.
\end{equation}
For $k=0,1,\cdots,[\frac{n-1}2]$ (with $[-]$ standing for the integral part), define degree $n-1$ forms on $STX$
\begin{equation}
\label{eq-phi-j}
\Phi_k=\sum_{A} \epsilon(A)u_{A_1}\theta_{A_2}\cdots\theta_{A_{n-2k}}\Omega_{A_{n-2k+1}A_{n-2k+2}}\cdots \Omega_{A_{n-1}A_{n}},
\end{equation}
where the summation runs over all permutations $A$ of $\{1,2,\cdots,n\}$, and $\epsilon(A)$ is the sign of $A$. (The index $k$ stands for the number of curvature forms involved. Hence the restriction $0\leq k\leq [\frac{n-1}2]$. This convention applies throughout the paper.) Define the secondary Chern-Euler form as
\begin{align}
\Phi=\frac{1}{(n-2)!!c_{n-1}} \sum_{k=0}^{[\frac{n-1}2]} (-1)^k \frac{1}{2^k k! (n-2k-1)!!} \Phi_k\label{phi}
=:\sum_{k=0}^{[\frac{n-1}2]} \wt{\Phi_k}.
\end{align}
The $\Phi_k$ and hence $\Phi$ are invariant under $\mathrm{SO}(n)$-transformations of the local frames and hence are intrinsically defined. Note that the 0th term
\begin{equation}
\label{0th term}
\wt{\Phi_0}=\frac{1}{(n-2)!!c_{n-1}} \frac 1{(n-1)!!}\Phi_0=\frac{1}{c_{n-1}}d\sigma_{n-1}=\wt{d\sigma}_{n-1}
\end{equation}
is the relative unit volume form of the fibration $S^{n-1}\to STX\to X$, since by \eqref{eq-phi-j}
\begin{equation}\label{factorial}
\Phi_0=\sum_A \epsilon(A)u_{A_1}\theta_{A_2}\cdots\theta_{A_n}=(n-1)!d\sigma_{n-1}
\end{equation}
(see \cite[(26)]{chern}).
Then \cite[(23)]{chern} and \cite[(11)]{chern2} prove that
\begin{equation}
\label{dphi}
d\Phi=-\Omega,
\end{equation}
where
\begin{equation}
\label{omega}
\Omega=
\begin{cases}
0, & \text{if }n\text{ is odd,}\\
(-1)^{m}\frac{1}{(2\phii)^m 2^m m!}\sum_{A} \epsilon(A)\Omega_{A_1A_2}\cdots\Omega_{A_{n-1}A_n}, & \text{if }n=2m \text{ is even}
\end{cases}
\end{equation}
is the Euler curvature form of $X$.
Now we review Chern's basic method \cite[\S 2]{chern2}\cite[\S 6]{bc} of relating indices, $\Phi$ and $\Omega$ using Stokes' theorem. Similar procedures will be employed twice later. Let $V$ be a generic vector field on $X$ with isolated singularities $\sing V$. Let $B_r^X(\sing V)$ (resp. $S_r^X(\sing V))$ denote the union of small open balls (resp. spheres) of radii $r$ in $X$ around the finite set of points $\sing V$. Define $\alphalpha_V:X\backslash B_r^X(\sing V)\to STX$ by rescaling $V$. Then using \eqref{dphi} and Stokes' theorem,
one proves \eqref{general} as
\begin{align}
\label{basic method}
&\int_{X} \Omega=\lim_{r\to 0}\int_{\alphalpha_V(X-B_r^X(\sing V))} \Omega=\lim_{r\to 0}\int_{\alphalpha_V(X-B_r^X(\sing V))} -d\Phi\\
=&-\int_{\alphalpha_V(M)}\Phi+\lim_{r\to 0} \int_{\alpha_V(S^X_r(\sing V))} \Phi=-\int_{\alphalpha_V(M)}\Phi+\ind V\nonumber,
\end{align}
where the last equality follows from the definition of index and \eqref{0th term}.
\section{First proof by constructing a chain}
\label{first proof}
In this section, we give a first proof of Theorem \ref{two terms} by constructing a chain, away from $\sing \phimv$, connecting $\alphalpha_V(M)$ to $\vec n(M)$.
\begin{proof}[First proof of Theorem \ref{two terms}]
By definition, $\sing\phimv$ consists of a finite number of points $x\in M$ such that $\alpha_V(x)= -\vn(x)$. For $x\notin \sing\phimv$, let $C_x$ be the unique directed shortest great circle segment pointing from $\alpha_V(x)$ to $\vn(x)$ in $ST_xX$. With the obvious notation from before, let $U_r=M\backslash B_r^M(\sing\phimv)$ denote the complement in $M$ of the union of open balls of radii $r$ in $M$ around $\sing\phimv$. Obviously its boundary $\phiartial U_r=-S_r^M(\sing\phimv)$.
Then
\begin{equation}
\label{boundary}
\phiartial\left({\cup_{x\in U_r}C_x}\right)=\cup_{x\in U_r}\phiartial C_x-\cup_{x\in \phiartial U_r} C_x=
\vn(U_r)-\alpha_V(U_r)+W_r,
\end{equation}
with
\begin{equation}
\label{wr}
W_r:=\cup_{x\in S_r^M(\sing\phimv)}C_x.
\end{equation}
Note the negative sign from graded differentiation in the second expression.
From \eqref{dphi} and \eqref{omega}, we have
\begin{equation}
\label{d=0}
d\Phi=0\text{ on }STX|_M,
\end{equation}
since even if $\deltaim X$ is even, $\Omega|_M=0$ for dimensional reasons. Now \eqref{boundary}, Stokes' theorem and \eqref{d=0} imply
\begin{align}
\label{start}
&\int_{\vn(M)}\Phi-\int_{\alphalpha_V(M)}\Phi=\lim_{r\to 0} \left(\int_{\vn(U_r)}\Phi-\int_{\alphalpha_V(U_r)}\Phi\right)\\
=&-\lim_{r\to 0}\int_{W_r} \Phi=-\lim_{r\to 0}\int_{W_r} \wt{\Phi_0},\nonumber
\end{align}
where the last equality follows from \eqref{phi} and $\deltaisplaystyle\lim_{r\to 0}\int_{W_r} \wt{\Phi_k}=0$ for $k\gammaeq 1$, since such $\wt{\Phi_k}$'s in \eqref{eq-phi-j} involve curvature forms and don't contribute in the limit (see \cite[\S 2]{chern2}).
By \eqref{0th term}, $\wt{\Phi_0}=\frac{1}{c_{n-1}}d\sigma_{n-1}$ is the relative unit volume form.
We then compute the RHS of \eqref{start} using spherical coordinates.
\begin{para}\label{setup}
At $TX|_M$, we choose oriented local orthonormal frames $\{e_1,e_2,\cdots,e_n\}$ such that $e_1=\vec n$ is the outward unit normal vector of $M$. Therefore $(e_2,\cdots,e_{n})$ are oriented local orthonormal frames for $TM$.
Let $\phihi$ be the angle coordinate on $STX|_M$ defined by
\begin{equation}
\label{def-phi}
\phihi(v)=\alphangle(v,e_1)=\alphangle(v,\vn),\ \forall v\in STX|_M.
\end{equation}
We have from \eqref{def-u}
\begin{equation}
\label{eq-u1}
u_1=\cos\phihi.
\end{equation}
Let
\begin{align}
p:CSTM=STX|_M\backslash(\vn(M)\cup (-\vn)(M))\to STM;\ v&\mapsto \frac{\phiartial v}{|\phiartial v|}
\label{def p}\\
\text{(in coordinates)}\ (\cos\phihi,u_2,\cdots,u_{n})&\mapsto\frac{1}{\sin\phihi}(u_2,\cdots,u_{n})\nonumber
\end{align}
be the projection to the equator $STM$.
By definition,
\begin{equation}\label{pa}
p\circ \alpha_V=\alpha_{\phiartial V}\text{ when }\phiartial V\neq 0.
\end{equation}
\end{para}
Therefore the image of $W_r$ in \eqref{wr} under the above projection is
$$
p(W_r)=\cup_{x\in S_r^M(\sing\phimv)}p(C_x)=\cup_{x\in S_r^M(\sing\phimv)}\alpha_{\phiartial V}(x)=\alpha_{\phiartial V}\bigl(S_r^M(\sing\phimv)\bigr).
$$
On $C_x$ for $x\in M$, the $\phihi$ \eqref{def-phi} ranges from $\phihi(\alphalpha_V(x))$ to 0.
The relative volume forms $d\sigma_{n-1}$ of $S^{n-1}\to STX|_M\to M$ and $d\sigma_{n-2}$ of $S^{n-2}\to STM\to M$ are related by
\begin{equation}
\label{vol relation}
d\sigma_{n-1}=\sin^{n-2}\phihi\,d\phihi\,p^*d\sigma_{n-2}+\text{terms involving }\omega_{1s}\text{ or }\Omega^M_{\alphalpha\beta}.
\end{equation}
(See \eqref{wo Ome} for the definition of the curvature forms $\Omega^M_{\alpha\beta}$. Also compare \eqref{total Phi_k} when $k=0$
in view of \eqref{factorial}. In the case of one fixed sphere and its equator, \eqref{vol relation} without the extra terms is easy and follows from using spherical coordinates.)
In the limit when $r\to 0$, the integrals of the terms involving $\omega_{1s}$ or $\Omega^M_{\alpha\beta}$ are zero by the same reason as
in the last step of \eqref{start}.
Therefore, continuing \eqref{start} and using iterated integrals, we have
\begin{align*}
&\int_{\vn(M)}\Phi-\int_{\alphalpha_V(M)}\Phi=-\lim_{r\to 0}\int_{W_r} \wt{\Phi_0}=-\frac{1}{c_{n-1}}\lim_{r\to 0}\int_{W_r} d\sigma_{n-1}\\
=&-\frac{1}{c_{n-1}}\lim_{r\to 0}\int_{\alphalpha_{\phiartial V}(S_r^M(\sing\phimv))} \left(\int_{\phihi(\alphalpha_V(x))}^0 \sin^{n-2}\phihi\,d\phihi\right) d\sigma_{n-2}\\
\overset{(1)}{=}&\frac{1}{c_{n-1}}\left(\int_{0}^\phii \sin^{n-2}\phihi\,d\phihi\right)
\lim_{r\to 0}\int_{\alphalpha_{\phiartial V}(S_r^M(\sing\phimv))} d\sigma_{n-2}\\
\overset{(2)}{=}&\frac{1}{c_{n-2}}\lim_{r\to 0}\int_{\alphalpha_{\phiartial V}(S_r^M(\sing\phimv))} d\sigma_{n-2}\overset{(3)}{=}\ind \phimv.
\end{align*}
Here equality (1) uses
\begin{equation}
\label{lim of phi}
\phihi(\alphalpha_V(x))\to \phii \text{ for }x\in S_r^M(\sing\phimv),\text{ as }r\to 0,
\end{equation}
equality (2) uses the basic knowledge
\begin{equation}\label{basic-know}
c_{n-1}= {c_{n-2}}\int_{0}^\phii \sin^{n-2}\phihi\,d\phihi,
\end{equation}
and equality (3) is by the definition of index.
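(For instance, when $n=3$, \eqref{basic-know} reads $c_{2}=c_{1}\int_{0}^{\phii}\sin\phihi\,d\phihi$, i.e., $4\phii=2\phii\cdot 2$.)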
\end{proof}
\begin{remark} The construction of the chain $\cup_{x\in U_r}C_x$ is reminiscent of the topological method \cite{morse} of attaching $M\times I$ to $X$ and extending $V|_M$ to a vector field on $M\times I$ whose value at $(x,t)\in M\times I$ is $(1-t)V(x)+t\vec n(x)$.
\end{remark}
\begin{remark} The homology group $H_{n-1}(STX|_M,\mathbb Z)\cong \mathbb Z\oplus \mathbb Z$ has two generators: the image $\vec n(M)$ and a fiber sphere $ST_xM$ for $x\in M$ (see \cite{nie2}). Our proof shows that as a homology class,
$$
\alpha_V(M)=\vec n(M)+(\ind \phimv) ST_xM.
$$
\end{remark}
\begin{remark} This first proof can also be presented very efficiently using the Mathai-Quillen form $\Psi$ \cite[III d)]{BZ} instead of our secondary Chern-Euler form $\Phi$. The author thanks a previous anonymous referee for pointing this out.
\end{remark}
\section{Second proof by transgressing $\Phi$}
\label{trans}
In this section, we present a study of the secondary Chern-Euler form $\Phi$ on $CSTM\subset STX|_M$ \eqref{eq-def-cstm}, leading to a proof of Theorem \ref{new main} and a second proof of Theorem \ref{two terms} using that.
Recall the definition of the angle coordinate $\phihi$ \eqref{def-phi}. Then $d\phihi$ and $\deltaph$ are well-defined 1-form and vector field on $CSTM$. We write $d$ for exterior differentiation on $CSTM$, and $\iota_{\deltaph}$ for interior product with $\deltaph$.
\begin{proposition}\label{much quicker} On $CSTM$, let
\begin{equation}\label{write out}
\up=\iota_{\frac{\phiartial}{\phiartial \phihi}}\Phi.
\end{equation}
Then the Lie derivative
\begin{equation}
\label{new one}
\calL_\deltaph \Phi=d\up.
\end{equation}
Therefore
\begin{equation}\label{formal one}
\Phi-\phii^*\vec n^*\Phi=d\int_0^\phihi \up \, dt.
\end{equation}
\end{proposition}
\begin{proof} \eqref{new one} follows from the Cartan homotopy formula (see, e.g., \cite[Prop. I.3.10]{KN})
$$
\calL_\deltaph \Phi=(d\,\iota_{\deltaph}+\iota_{\deltaph}d)\Phi=d\up,
$$
by \eqref{write out} and $d\Phi=0$ \eqref{d=0}.
\eqref{formal one} then follows by integration since $\phii^*\vec n^*\Phi$ corresponds to the evaluation of $\Phi$ at $\phihi=0$ by the definition of $\phihi$ \eqref{def-phi} and we have for any fixed $\phihi$
$$
\Phi-\phii^*\vec n^*\Phi=\int_0^\phihi \calL_\deltaph \Phi\, dt=\int_0^\phihi d\up \, dt=d\int_0^\phihi \up\, dt.
$$
\end{proof}
Now we calculate $\up$ explicitly. Since $\Phi$ \eqref{phi} is invariant under $\so(n)$-changes of local frames, we adapt an idea from \cite{chern2} to use a nice frame for $TX|_M$ to facilitate the calculations about $\Phi$ on $CSTM$.
Choose $e_1$ as in \ref{setup}. For $v\in CSTM$, let
\begin{equation}
\label{en}
e_n=p(v)
\end{equation}
as defined in \eqref{def p}. Choose $e_2,\cdots,e_{n-1}$ so that $\{e_1,e_2,\cdots,e_{n-1},e_n\}$ is a positively oriented frame for $TX|_M$. (Therefore we need $n\gammaeq 3$ from now on, with the $n=2$ case being simple.) Then in view of \eqref{def-phi}
\begin{equation}
\label{trig}
v=\cos\phihi\, e_1+\sin\phihi\, e_n.
\end{equation}
Let $(\Ome_{st})$ denote the curvature forms on $M$ of the induced metric from $X$. In view of \eqref{def-O},
\begin{gather}
\label{wo Ome}
\Ome_{st}=d\om_{st}-\sum_{r=2}^n \om_{sr}\om_{rt},
\end{gather}
\begin{gather}
\label{Ome}
\Om_{st}=\Ome_{st}+\om_{1s}\om_{1t}.
\end{gather}
Define the following differential forms on $STM$, regarded to be pulled back to $CSTM$ by $p$ \eqref{def p}, of degree $n-2$:
\begin{align}
\Phi^M(i,j)=\sum_\alphalpha \epsilon(\alpha) & \omega_{1\alpha_2}\cdots\omega_{1\alpha_{n-2i-j-1}}\Omega^M_{\alpha_{n-2i-j}\alpha_{n-2i-j+1}}\cdots \Omega^M_{\alpha_{n-j-2}\alpha_{n-j-1}}\label{phi ji}\\
&\omega_{\alpha_{n-j}n}\cdots\omega_{\alpha_{n-1}n}\nonumber
\end{align}
where the summations run over all permutations $\alpha$ of $\{2,\cdots,n-1\}$. It is easy to check that these $\Phi^M(i,j)$ are invariant under $\so(n-2)$-changes of the partial frames $\{e_2,\cdots,e_{n-1}\}$. Here the two parameters $i$ and $j$ stand for the numbers of curvature forms and $\om_{\alphalpha n}$'s involved. Define the following region of the indices $i,j$
\begin{align*}
D_1=\{(i,j)\in \mathbb Z\times\mathbb Z\, |\, i\gammaeq 0,j\gammaeq 0, 2i+j\leq n-2\}\label{d1}
\end{align*}
Then
$$
\Phi^M(i,j)\neq 0\Rightarrow (i,j)\in D_1.
$$
\begin{remark} Our choice of the letter $\Phi^M$
is due to the following special case when there are no $\om_{1\alpha}$'s:
\begin{gather*}
\Phi^M(i,n-2i-2)=\Phi_i^M,
\end{gather*}
where $\Phi_i^M$
are forms on $STM$ defined by Chern \cite{chern2}. Since we are considering the case of boundary, we have the extra $\om_{1\alpha}$'s in our more general forms.
Also note that the $\om_{1s}=0$ if the metric on $X$ is locally product near the boundary $M$. Therefore a lot of our forms vanish in that simpler case as considered in \cite{nie}.
\end{remark}
We also introduce the following functions of $\phihi$ \eqref{def-phi}, for non-negative integers $p$ and $q$,
\begin{gather}
T(p,q)(\phihi)=\cos^p\phihi\,\sin^q\phihi\label{def-t}\nonumber,\\
I(p,q)(\phihi)=\int_0^\phihi T(p,q)(t)\, dt\label{def-i}.
\end{gather}
\begin{proposition}\label{big Phi} We have the following concrete formulas
\begin{align}
\up=&\iota_\deltaph\Phi=\frac{1}{(n-2)!!c_{n-1}}\sum_{(i,j)\in D_1} a(i,j)(\phihi)\, \Phi^M(i,j)\label{form of up}\\
a(i,j)(\phihi)=&\sum_{k=i}^{[\frac{n-j}{2}]-1} (-1)^{n+j+k}\frac{(n-2k-2)!!}{2^k j!(n-2k-j-2)!i!(k-i)!}T(n-2k-j-2,j)(\phihi)\label{aij}
\end{align}
\end{proposition}
\begin{proof} From \eqref{def-u}, \eqref{def-theta} and \eqref{trig}, we have
\begin{gather}
u_1=\cos\phihi,\ u_n=\sin\phihi,\ u_\alphalpha=0;\label{the u's}\\
\theta_1=-\sin\phihi\, (d\phihi+\om_{1n}),\ \theta_n=\cos\phihi\,(d\phihi+\om_{1n}),\label{theta 1 n}\\
\theta_\alpha=\cos\phihi\, \om_{1\alpha}-\sin\phihi\, \om_{\alpha n}\label{theta alpha}.
\end{gather}
From \eqref{the u's}, there are only two non-zero coordinates $u_1$ and $u_n$. Hence there are four cases for the positions of the indices $1$ and $n$ in $\Phi_k$ \eqref{eq-phi-j}:
\begin{enumerate}
\item $n-2k-1$ possibilities of $u_1\theta_n$
\item $2k$ possibilities of $u_1\Omega_{\alpha n}$
\item $n-2k-1$ possibilities of $u_n\theta_1$
\item $2k$ possibilities of $u_n\Omega_{1\alpha}$
\end{enumerate}
Only cases (i) and (iii) contribute $d\phihi$ in view of \eqref{theta 1 n}, and hence we are only concerned with these two cases for the computation of $\up=\iota_\deltaph\Phi$.
Starting with \eqref{eq-phi-j}, taking signs into consideration, by \eqref{the u's} and \eqref{theta 1 n}, by $\cos^2\phihi+\sin^2\phihi=1$, \eqref{theta alpha}, \eqref{Ome} and the multinomial theorem, we have
\begin{align}
\Phi_k=& (n-2k-1) (-1)^n \cos^2\phihi\, (d\phihi+\omega_{1n})\label{total Phi_k}\\
&\quad \sum_\alpha \ep(\alpha)\theta_{\alpha_2}\cdots\theta_{\alpha_{n-2k-1}}\Omega_{\alpha_{n-2k}\alpha_{n-2k+1}}\cdots\Omega_{\alpha_{n-2}\alpha_{n-1}}\nonumber\\
&+(n-2k-1)(-1)^n \sin^2\phihi (d\phihi+\omega_{1n})\nonumber\\
&\quad \sum_\alpha \ep(\alpha)\theta_{\alpha_2}\cdots\theta_{\alpha_{n-2k-1}}\Omega_{\alpha_{n-2k}\alpha_{n-2k+1}}\cdots\Omega_{\alpha_{n-2}\alpha_{n-1}}\nonumber\\
&+\cdots\nonumber\\
=&(-1)^n (n-2k-1) (d\phihi+\omega_{1n}) \sum_\alpha \ep(\alpha)\nonumber\\
&\quad (\cos\phihi\,\om_{1\alpha_2}-\sin\phihi\,\om_{\alpha_2 n})\cdots (\cos\phihi\,\om_{1\alpha_{n-2k-1}}-\sin\phihi\,\om_{\alpha_{n-2k-1} n})\nonumber\\
&\quad (\Omega^M_{\alpha_{n-2k}\alpha_{n-2k+1}}+\om_{1\alpha_{n-2k}}\om_{1\alpha_{n-2k+1}})\cdots(\Omega^M_{\alpha_{n-2}\alpha_{n-1}}+\om_{1\alpha_{n-2}}\om_{1\alpha_{n-1}})\nonumber\\
&+\cdots\nonumber\\
=&(-1)^n (n-2k-1) (d\phihi+\omega_{1n}) \nonumber\\
&\quad \mathop{\sum_{0\leq i\leq k}}_{0\leq j\leq n-2k-2} \frac{(n-2k-2)!}{j!(n-2k-j-2)!} \cos^{n-2k-j-2}\phihi\,(-\sin \phihi)^j \frac{k!}{i!(k-i)!} \Phi^M(i,j)\nonumber\\
&+\cdots\nonumber\\
=&\mathop{\sum_{0\leq i\leq k}}_{0\leq j\leq n-2k-2} (-1)^{n+j}\frac{(n-2k-1)!k!}{j!(n-2k-j-2)!i!(k-i)!} T(n-2k-j-2,j)(\phihi)\nonumber\\
&\qquad\qquad\quad (d\phihi+\om_{1n})\Phi^M(i,j)+\cdots.\nonumber
\end{align}
From \eqref{phi} and the above, we get \eqref{form of up} and the coefficients $a(i,j)(\phihi)$ in \eqref{aij},
after some immediate cancellations.
\end{proof}
\begin{definition} For $(i,j)\in D_1$, define the following functions on $CSTM$
\begin{align}
&A(i,j)(\phihi)=\int_0^\phihi a(i,j)(t)\, dt\nonumber\\
=&\sum_{k=i}^{[\frac{n-j}{2}]-1} (-1)^{n+j+k}\frac{(n-2k-2)!!}{2^k j!(n-2k-j-2)!i!(k-i)!}I(n-2k-j-2,j)(\phihi),\label{wo A}
\end{align}
in view of \eqref{aij} and \eqref{def-i}. Also define the differential form of degree $n-2$ on $CSTM$
\begin{equation}
\label{def Gamma}
\Gamma=\frac{1}{(n-2)!!c_{n-1}}\sum_{(i,j)\in D_1}A(i,j)(\phihi)\Phi^M(i,j).
\end{equation}
\end{definition}
\begin{proof}[Proof of Theorem \ref{new main}] We just need to notice that $\Gamma=\int_0^\phihi \up\, dt$ by Proposition \ref{big Phi} and use \eqref{formal one} in Proposition \ref{much quicker}.
\end{proof}
\begin{remark} Our first proof of Theorem \ref{new main} was through very explicit differentiations. Write
$\Phi=d\phihi\, \Upsilon+\Xi$ in view of \eqref{write out}. We can compute $\Xi$ explicitly. After correctly guessing the $\Gamma$ in \eqref{def Gamma}, we prove Theorem \ref{new main} by some differentiation formulas of differential forms in the spirit of \cite{chern2}, and some induction formulas for the functions $I(p,q)(\phihi)$ in \eqref{def-i} through integration by parts.
\end{remark}
We finally arrive at
\begin{proof}[Second proof of Theorem \ref{two terms}] Let $B_r^M(\sing \phiartial V)$ (resp. $S_r^M(\sing \phiartial V))$ denote the union of small open balls (resp. spheres) of radii $r$ in $M$ around the finite set of points $\sing \phiartial V$.
Then by $\phiartial V(x)=0\Lambdaeftrightarrow \alpha_V(x)=\phim \vec n(x)$,
$$\alpha_V(M\backslash B^M_r(\sing \phiartial V))\subset CSTM.$$
By Theorem \ref{new main} and Stokes' theorem,
\begin{align}
\label{stokes 1}
&\int_{\alpha_V(M)}\Phi-\int_{\vec n(M)}\Phi=\int_{\alphalpha_V(M)}\Phi-\phii^*\vec n^*\Phi=\lim_{r\to 0} \int_{\alpha_V(M\backslash B_r^M(\sing\phiartial V))}\Phi-\phii^*\vec n^*\Phi\\
=& \lim_{r\to 0} \int_{\alpha_V(M\backslash B_r^M(\sing\phiartial V))} d\Gamma
=-\lim_{r\to 0} \int_{\alpha_V(S_r^M(\sing \phiartial V))} \Gamma\nonumber\\
=&-\lim_{r\to 0} \int_{\alpha_V(S_r^M(\sing\phiartial V))} \frac{1}{(n-2)!!c_{n-1}} A(0,n-2)(\phihi)\Phi^M(0,n-2),\nonumber
\end{align}
since all the other $A(i,j)(\phihi)\Phi^M(i,j)$ in \eqref{def Gamma}, for $(i,j)\in D_1$ and not equal to $(0,n-2)$,
involve either curvature forms $\Omega^M_{\alpha\beta}$ or connection forms $\om_{1\alpha}$ and hence don't contribute in the limit when integrated over small spheres.
We have by \eqref{wo A} and \eqref{phi ji}
\begin{align}
&\frac{1}{(n-2)!!c_{n-1}} A(0,n-2)(\phihi)\Phi^M(0,n-2)\label{vol again}\\
=&\frac{1}{(n-2)!!c_{n-1}}\frac{(n-2)!!}{(n-2)!} I(0,n-2)(\phihi)\sum_\alpha \epsilon(\alpha) \om_{\alpha_2 n}\cdots\om_{\alpha_{n-1} n}\nonumber\\
=&\frac 1 {c_{n-1}} I(0,n-2)(\phihi)\,p^*d\sigma_{n-2}\nonumber
\end{align}
with $d\sigma_{n-2}$ being the relative volume form of $S^{n-2}\to STM\to M$, since
$$\sum_\alpha \epsilon(\alpha) \om_{\alpha_2 n}\cdots\om_{\alpha_{n-1} n}=(n-2)!\,p^*d\sigma_{n-2}$$
in view of \eqref{en} and by comparison with \eqref{factorial}.
Continuing \eqref{stokes 1} and using \eqref{vol again}, we have
{\alphallowdisplaybreaks
\begin{align*}
& \int_{\alphalpha_V(M)}\Phi-\int_{\vec n(M)}\Phi\\
=&-\frac{1}{c_{n-1}} \lim_{r\to 0} \int_{\alpha_V(S_r^M(\sing\phipv)\cup S_r^M(\sing \phimv))} I(0,n-2)(\phihi)\,p^*d\sigma_{n-2}\\
\overset{(1)}=& -\frac{1}{c_{n-1}} \bigl[I(0,n-2)(0)\lim_{r\to 0} \int_{\alpha_{\phiartial V}(S_r^M(\sing\phipv))} d\sigma_{n-2}\\
&\qquad\ \,+ I(0,n-2)(\phii)\lim_{r\to 0} \int_{\alpha_{\phiartial V}(S_r^M(\sing\phimv))} d\sigma_{n-2}\bigr]\\
\overset{(2)}=&\frac{1}{c_{n-2}} \lim_{r\to 0} \int_{\alpha_{\phiartial V}(S_r^M(\sing\phimv))} d\sigma_{n-2}\\
\overset{(3)}=&-\ind\phimv
\end{align*}
}
Here equality (1) uses \eqref{pa}, \eqref{lim of phi} and the similar
\begin{gather*}
\phihi(\alphalpha_V(x))\to 0 \text{ for }x\in S_r^M(\sing\phipv),\text{ as }r\to 0.
\end{gather*}
In view of \eqref{def-i}, we have
\begin{gather*}
I(0,n-2)(0)=0,\ I(0,n-2)(\phii)=\int_0^\phii \sin^{n-2}\phihi\,d\phihi.
\end{gather*}
Then equality (2) follows from \eqref{basic-know}.
Equality (3) is by the definition of index.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{BGV}{book}{
author={Berline, Nicole},
author={Getzler, Ezra},
author={Vergne, Mich{\`e}le},
title={Heat kernels and Dirac operators},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={298},
publisher={Springer-Verlag},
place={Berlin},
date={1992},
pages={viii+369},
isbn={3-540-53340-0},
}
\bib{BZ}{article}{
author={Bismut, Jean-Michel},
author={Zhang, Weiping},
title={An extension of a theorem by Cheeger and M\"uller},
language={English, with French summary},
note={With an appendix by Fran\c cois Laudenbach},
journal={Ast\'erisque},
number={205},
date={1992},
pages={235},
issn={0303-1179},
}
\bib{bc}
{article}{
author={Bott, Raoul},
author={Chern, S. S.},
title={Hermitian vector bundles and the equidistribution of the zeroes of
their holomorphic sections},
journal={Acta Math.},
volume={114},
date={1965},
pages={71--112},
issn={0001-5962},
}
\bib{BM}{article}{
author={Br{\"u}ning, J.},
author={Ma, Xiaonan},
title={An anomaly formula for Ray-Singer metrics on manifolds with
boundary},
journal={Geom. Funct. Anal.},
volume={16},
date={2006},
number={4},
pages={767--837},
issn={1016-443X},
}
\bib{chern}
{article}{
author={Chern, Shiing-shen},
title={A simple intrinsic proof of the Gauss-Bonnet formula for closed
Riemannian manifolds},
journal={Ann. of Math. (2)},
volume={45},
date={1944},
pages={747--752},
issn={0003-486X},
}
\bib{chern2}{article}{
author={Chern, Shiing-shen},
title={On the curvatura integra in a Riemannian manifold},
journal={Ann. of Math. (2)},
volume={46},
date={1945},
pages={674--684},
issn={0003-486X},
}
\bib{hirsch}{book}{
author={Hirsch, Morris W.},
title={Differential topology},
note={Graduate Texts in Mathematics, No. 33},
publisher={Springer-Verlag},
place={New York},
date={1976},
pages={x+221},
}
\bib{KN}{book}{
author={Kobayashi, Shoshichi},
author={Nomizu, Katsumi},
title={Foundations of differential geometry. Vol I},
publisher={Interscience Publishers, a division of John Wiley \& Sons, New York-London},
date={1963},
pages={xi+329},
}
\bib{MQ}{article}{
author={Mathai, Varghese},
author={Quillen, Daniel},
title={Superconnections, Thom classes, and equivariant differential
forms},
journal={Topology},
volume={25},
date={1986},
number={1},
pages={85--110},
issn={0040-9383},
}
\bib{morse}{article}{
author={Morse, Marston},
title={Singular Points of Vector Fields Under General Boundary
Conditions},
journal={Amer. J. Math.},
volume={51},
date={1929},
number={2},
pages={165--178},
issn={0002-9327},
}
\bib{nie2}{article}{
author = {Nie, Zhaohu},
title = {Secondary Chern-Euler class for general submanifold},
journal={arXiv:0906.3908, to appear in Canadian Mathematical Bulletin},
year = {2009}
}
\bib{nie}{article}{
author = {Nie, Zhaohu},
title = {On Sha's secondary Chern-Euler class},
journal={arXiv:0901.2611, to appear in Canadian Mathematical Bulletin},
year = {2009}
}
\bib{sha}
{article}{
author={Sha, Ji-Ping},
title={A secondary Chern-Euler class},
journal={Ann. of Math. (2)},
volume={150},
date={1999},
number={3},
pages={1151--1158},
issn={0003-486X},
}
\bib{Z}{book}{
author={Zhang, Weiping},
title={Lectures on Chern-Weil theory and Witten deformations},
series={Nankai Tracts in Mathematics},
volume={4},
publisher={World Scientific Publishing Co. Inc.},
place={River Edge, NJ},
date={2001},
pages={xii+117},
isbn={981-02-4686-2},
}
\end{biblist}
\end{bibdiv}
\end{document}
|
\begin{document}
\title{Complex Semidefinite Programming and {\sc Max-$k$-Cut}}
\author{\textsc{Alantha
Newman}\thanks{CNRS and Universit\'e Grenoble Alpes.
Supported in part by LabEx
PERSYVAL-Lab (ANR-11-LABX-0025).}}
\maketitle
\def{\sc Max}-$k$-{\sc Cut}{{\sc Max}-$k$-{\sc Cut}}
\def{\sc Max}-$k$-{\sc Cut}s{{\sc Max}-$k$-{\sc Cut} }
\begin{abstract}
In a second seminal paper on the application of semidefinite
programming to graph partitioning problems, Goemans and Williamson
showed how to formulate and round a {\em complex semidefinite program}
to give what is to date still the best-known approximation guarantee
of .836008 for {\sc Max-$3$-Cut}. (This approximation
ratio was also achieved independently by De Klerk et
al.) Goemans and Williamson left
open the problem of how to apply their techniques to {\sc Max-$k$-Cut}
for general $k$. They point out that it does not seem straightforward
or even possible to formulate a good quality complex semidefinite
program for the general {\sc Max-$k$-Cut} problem, which presents a
barrier for the further application of their techniques.
We present a simple rounding algorithm for the standard semidefinite
programming relaxation of {\sc Max-$k$-Cut} and show that it is
equivalent to the rounding of Goemans and Williamson in the case of
{\sc Max-$3$-Cut}. This allows us to transfer the elegant analysis of
Goemans and Williamson for {\sc Max-3-Cut} to {\sc Max-$k$-Cut}. For
$k \geq 4$, the resulting approximation ratios are about $.01$ worse
than the best known guarantees. Finally, we present a generalization
of our rounding algorithm and conjecture (based on computational
observations) that it matches the best-known guarantees of De Klerk et
al.
\end{abstract}
\section{Introduction}
In the {\sc Max-$k$-Cut} problem, we are given an undirected graph,
$G=(V,E)$, with non-negative edge weights. Our objective is to divide the
vertices into at most $k$ disjoint sets, for some given positive
integer $k$, so as to maximize the weight of the edges whose endpoints
lie in different sets. When $k=2$, this problem is known simply as
the {\sc Max-Cut} problem. The approximation guarantee of $1-1/k$
can be achieved for all $k$ by placing each vertex uniformly at
random in one of $k$ sets. For all values of $k \geq 2$, this
simple algorithm yielded the best-known approximation ratio until
1994. In that year, Goemans and Williamson gave a
.87856-approximation algorithm for the {\sc Max-Cut} problem based on semidefinite
programming (SDP), thereby introducing this method as a successful new
technique for designing approximation algorithms~\cite{GW}.
Frieze and Jerrum subsequently developed an algorithm for the {\sc
Max-$k$-Cut} problem that can be viewed as a generalization of
Goemans and Williamson's algorithm for {\sc Max-Cut} in the sense that
it is the same algorithm when $k=2$~\cite{FJ}. Although the rounding
algorithm of Frieze and Jerrum is arguably simple and natural, the
analysis is quite involved. Their approximation ratios improved upon
the previously best-known guarantees of $1-1/k$ for $k \geq 3$ and are
shown in Table \ref{tbl:chart}. A few years later, Andersson,
Engebretsen and H\aa stad also used semidefinite programming to design
an algorithm for the more general problem of {\sc Max-E2-Lin mod $k$},
in which the input is a set of equations or inequations mod $k$ on two
variables (e.g., $x-y \equiv c ~\bmod k$) and the objective is to
assign an integer from the range $[0,k-1]$ to each variable so that
the maximum number of equations are satisfied~\cite{AEH}. They proved
that the approximation guarantee of their algorithm is at least $f(k)$
more than that of the simple randomized algorithm, where $f(k)$ is a
(small) linear function of $k$. In the special case of {\sc
Max-$k$-Cut}, they showed that the performance ratio of their
algorithm is no better than that of Frieze and Jerrum. Although they
did not show the equivalence of these two algorithms, they stated that
numerical evidence suggested that the two algorithms have the same
approximation ratio. Shortly thereafter, De Klerk, Pasechnik and
Warners presented an algorithm for {\sc Max-$k$-Cut} with improved
approximation guarantees for all $k\geq3$, shown in Table
\ref{tbl:chart}. Additionally, they showed that their algorithm has
the same worst-case performance guarantee as that of Frieze and
Jerrum~\cite{DBLP:journals/jco/KlerkPW04}.
Around the same time, Goemans and Williamson independently presented
another algorithm for {\sc Max-3-Cut} based on {\it complex
semidefinite programming} (CSDP)~\cite{GW2}. For this problem, they
improved the best-known approximation guarantee of $.832718$ due to
Frieze and Jerrum to $.836008$, the same approximation ratio proven by
De Klerk, Pasechnik and Warners. Goemans and Williamson showed that
their algorithm is equivalent to that of Andersson, Engebretsen and
H\aa stad and to that of Frieze and Jerrum (and therefore to that of
De Klerk, Pasechnik and Warners) in the case of {\sc
Max-3-Cut}~\cite{GW2}. However, they argued that their decision to
use complex semidefinite programming and, specifically, their choice
to represent each vertex by a single complex vector resulted in
``cleaner models, algorithms, and analysis than the equivalent models
using standard semidefinite programming.''
One issue noted by Goemans and Williamson with respect to their elegant
new model was that it is not clear how to apply their techniques to
{\sc Max-$k$-Cut} for $k \geq 4$. Their approach
seemed to be tailored specifically to the {\sc Max-3-Cut} problem. This is
because one cannot model, say, the {\sc Max-$4$-Cut} problem directly
using a complex semidefinite program. This limitation is discussed in Section 8
of \cite{GW2}. In fact, as they point out, a direct attempt to model
{\sc Max-$k$-Cut} with a complex semidefinite program would only
result in a $(1-1/k)$-approximation for $k\geq 4$.
De Klerk et al. also state that there is no obvious way
to extend the approach based on CSDP to {\sc Max-$k$-Cut} for $k >
3$. (See page 269 in \cite{DBLP:journals/jco/KlerkPW04}.)
\subsection{Our Contribution}
In this paper, we make the following contributions.
\begin{enumerate}
\item {We present a simple rounding algorithm based on the standard
semidefinite programming relaxation of {\sc Max-$k$-Cut} and show that
it can be analyzed using the tools from \cite{GW2}.
\begin{itemize}
\item For $k=3$, this results in an implementation of the Goemans-Williamson
algorithm that avoids complex semidefinite
programming.
\item For $k \geq 4$,
the resulting approximation ratios are slightly worse than the
best-known guarantees.
\end{itemize}
}
\item We present a simple generalization of this rounding algorithm and
conjecture that it yields the best-known approximation ratios.
\end{enumerate}
Thus, the main contribution of this paper is to show that, despite its limited
modeling power, we can still apply the tools from complex semidefinite
programming developed by Goemans and Williamson to {\sc Max-$k$-Cut}.
In fact, we obtain the following worst-case approximation guarantee
for the {\sc Max-$k$-Cut} problem for all $k$, which is the same bound
they achieve for $k=3$:
\begin{eqnarray}
\phi_k & = & \frac{k-1}{k}
+ \frac{k}{4\pi^2} \left[\arccos^2\left(\left(\frac{1}{k-1}\right)\cos\left( \frac{2\pi}{k} \right) \right)
- \arccos^2\left(\frac{1}{k-1}\right) \right].
\end{eqnarray}
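The guarantee $\phi_k$ is easy to evaluate numerically; the following short script (ours, written in Python with NumPy and not part of the analysis) simply evaluates the closed form above and can be used to reproduce the entries in the last column of Table \ref{tbl:chart}.
\begin{verbatim}
import numpy as np

def phi_k(k):
    """Evaluate the closed-form guarantee phi_k displayed above."""
    a = np.arccos(np.cos(2 * np.pi / k) / (k - 1)) ** 2
    b = np.arccos(1.0 / (k - 1)) ** 2
    return (k - 1) / k + k / (4 * np.pi ** 2) * (a - b)

# phi_k(3) ~ 0.836008,  phi_k(4) ~ 0.846478,  phi_k(10) ~ 0.915885
\end{verbatim}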
We note that for $k \geq 4$, the approximation ratio $\phi_k$ is about
$.01$ worse than the approximation ratio proved by Frieze and Jerrum.
See Table \ref{tbl:chart} for a comparison.
However, given the
technical difficulty of Frieze and Jerrum's analysis, we believe that
it is beneficial to present an alternative algorithm and analysis that
yields a similar approximation guarantee. Moreover, we wish to take
a closer look at the techniques used by Goemans and Williamson for
{\sc Max-3-Cut} since these tools have not been widely applied in the
area of approximation algorithms, in sharp contrast to the tools used
to solve the {\sc Max-Cut} problem. In fact, we are aware of only two
papers that use the main tools of \cite{GW2}: The first is for a
generalization of the {\sc Max-3-Cut}
problem~\cite{ling2009approximation} and the second is for an
optimization problem in which the variables are to be assigned complex
vectors~\cite{zhang2006complex}.
While Goemans and Williamson's framework of complex semidefinite
programming does result in an elegant formulation and analysis for
{\sc Max-3-Cut}, it also to some extent obscures the geometric
structure that is apparent when one views the same algorithm from the
viewpoint of standard semidefinite programming. Specifically, in the
latter framework, their complex semidefinite program is equivalent to
modeling each vertex with a 2-dimensional circle or disc of vectors.
In our opinion, their main technical contribution is a formula for the
exact distribution of the difference of the angles resulting when a
normal vector is projected onto two of these discs that are correlated
in a particular way. (See Lemma 8 in \cite{GW2}.) Thus, while the
limitation in modeling {\sc Max-$k$-Cut} with complex semidefinite
programming comes from the fact that we cannot model the general
problem with these 2-dimensional discs, we can circumvent this barrier
in the following way. We construct 2-dimensional discs using the
vectors obtained from a solution to the standard semidefinite program.
We then show that a pair of these 2-dimensional discs (i.e., one disc
for each vertex) are correlated in the same way as those produced in
the case of {\sc Max-3-Cut}. Then we can apply and analyze the same
algorithm used for {\sc Max-3-Cut}.
In some cases (e.g., {\sc Max-3-Cut}), using the
distribution of the angle between two elements is stronger than using
the expected angle, which is what is used for {\sc Max-Cut}. It
therefore seems that this tool has unexplored potential applications
for other optimization problems, for which it may also be possible to
overcome the modeling limitations of complex semidefinite programming
in a similar manner as we do here. On a high level, the idea of
constructing the ``complex'' vectors from a solution to a standard
semidefinite program was used for a circular arrangement
problem~\cite{DBLP:conf/innovations/MakarychevN11}.
Finally, we remark that the approach used in Section \ref{sec:mkcut}
to create a disc from a vector is reminiscent of Zwick's method of
outward rotations in which he combines hyperplane rounding and
independent random assignment~\cite{Zwick}. For each unit vector
$v_i$ from an SDP solution, he computes a disc in the plane spanned by
$v_i$ and $u_i$, where the $u_i$'s form a set of pairwise orthogonal
vectors that are also orthogonal to the $v_i$'s, and chooses a new
vector from this disc based on a predetermined angle. Thus, the goal
is to rotate each vector $v_i$ to obtain a new set of unit vectors, which are
then given as input to a now standard rounding algorithm, such as
random-hyperplane rounding. In contrast, our goal is to use the actual disc in
the rounding, as done originally by Goemans and Williamson in the case
of {\sc Max-3-Cut}.
\begin{table}
\begin{center}
\fbox{\parbox{11cm}{
\begin{tabular}
{c|c|c|c|c|c}
$k$ & \cite{GW} & \cite{FJ} & \cite{GW2} & \cite{DBLP:journals/jco/KlerkPW04} & This paper\\
$k=2$ & .87856 & - & - & - & - \\
$k=3$ & - & .832718 & .836008 & .836008 & -\\
$k=4$ & - & .850304 & - & .857487 & .846478\\
$k=5$ & - & .874243 & - & .876610 & .862440\\
$k=10$ & - & .926642 & - & .926788 & .915885
\end{tabular}}}
\caption{Approximation guarantees for {\sc Max-$k$-Cut}.}\label{tbl:chart}
\end{center}
\end{table}
\subsection{Organization}
We give some background on the (standard) semidefinite programming
relaxation used by Frieze and Jerrum and discuss their algorithm for
{\sc Max-$k$-Cut} in Section \ref{sec:FJ}. In Section \ref{sec:GW},
we present Goemans and Williamson's algorithm for {\sc Max-3-Cut} from
the viewpoint of standard semidefinite programming. In Section
\ref{sec:mkcut}, we show how to create a 2-dimensional disc for each
vertex given a solution to the standard semidefinite program for {\sc
Max-$k$-Cut}. We do not wish to formally prove the relationship
between these discs and the complex vectors. Thus, in Section
\ref{sec:analysis}, we simply prove that if two discs are correlated
in a specified way, then the distribution of the angle is equivalent
to a distribution already computed exactly by Goemans and Williamson
in \cite{GW2}. We can then easily prove that the 2-dimensional discs
we create for the vertices have the required pairwise correlation.
This results in a closed form approximation ratio for general $k$,
Theorem \ref{thm:main}.
\section{Frieze and Jerrum's Algorithm}\label{sec:FJ}
Consider
the following integer program for {{\sc Max}-$k$-{\sc Cut}}:
\begin{eqnarray*}
& \max & \sum_{ij \in E} (1 - v_i \cdot v_j) \frac{k-1}{k} \\
v_i \cdot v_i & = & 1, \quad \forall i \in V,\\
v_i & \in & \Sigma_k, \quad \forall i \in V. \hspace{20mm} (P)
\end{eqnarray*}
Here, $\Sigma_k$ are the vertices of the equilateral simplex, where
each vertex is represented by a $k$-dimensional vector, and each pair
of vectors corresponding to a pair of vertices has dot product $-1/(k-1)$. If
we relax the dimension of the vectors, we obtain the following
semidefinite relaxation, where $n = |V|$:
\begin{eqnarray*}
& \max & \sum_{ij \in E} (1 - v_i \cdot v_j) \frac{k-1}{k}\nonumber \\
v_i \cdot v_i & = & 1, \quad \forall i \in V,\nonumber\\
v_i\cdot v_j & \geq & -\frac{1}{k-1}, \quad \forall i, j \in V,\nonumber\\
v_i & \in & \mathbb{R}^n, \quad \forall i \in V. \hspace{20mm} (Q)
\end{eqnarray*}
Frieze and Jerrum used this semidefinite relaxation to obtain an
algorithm for the {\sc Max}-$k$-{\sc Cut} problem~\cite{FJ}.
Specifically, they proposed the following rounding algorithm: Choose
$k$ random vectors, $g_1, g_2, \dots, g_k \in \mathbb{R}^n$, with each entry of each
vector chosen from the normal distribution ${\cal{N}}(0,1)$. For
each vertex $i\in V$, consider the $k$ dot products of vector $v_i$
with each of the $k$ random vectors, $v_i\cdot g_1, v_i \cdot g_2,
\dots, v_i \cdot g_k$. One of these dot products is maximum. Assign
the vertex the label of the random vector with which it has the
maximum dot product. In other words, if $v_i \cdot g_h = \max_{\ell=1}^k \{v_i
\cdot g_{\ell}\}$, then vertex $i$ is assigned to to cluster $h$.
Frieze and Jerrum were able to prove a lower bound on the
approximation guarantee of this algorithm for every $k$. See Table
\ref{tbl:chart} for some of these ratios.
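Concretely, this rounding step can be sketched as follows (a minimal illustration in Python, ours; it assumes the SDP vectors $v_i$ are given as the rows of a matrix \texttt{V}).
\begin{verbatim}
import numpy as np

def frieze_jerrum_round(V, k, rng=np.random.default_rng()):
    """Assign each vertex to the Gaussian vector maximizing the inner product.

    V: array of shape (n, d) whose rows are the unit vectors from relaxation (Q).
    Returns an array of labels in {0, ..., k-1}.
    """
    d = V.shape[1]
    G = rng.standard_normal((k, d))    # g_1, ..., g_k with i.i.d. N(0,1) entries
    return np.argmax(V @ G.T, axis=1)  # label of vertex i is argmax_h  v_i . g_h
\end{verbatim}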
\section{Goemans-Williamson Algorithm for {\sc Max-3-Cut}}\label{sec:GW}
Goemans and Williamson gave an algorithm for {\sc Max-3-Cut} in which
they first model the problem as a complex semidefinite program (i.e.,
each element is represented by a complex vector). It is not too
difficult to see that these complex vectors are equivalent to
2-dimensional discs or sets of unit vectors. For example, here is an
equivalent semidefinite program for {\sc Max-3-Cut}. The input is an
undirected graph $G=(V,E)$ with non-negative edge weights
$\{w_{ij}\}$.
\begin{eqnarray}
& \max & \sum_{ij\in E} w_{ij} (1-v_i^1 \cdot v_j^1)\frac{2}{3}\\
v_i^a \cdot v_i^b & = & -1/2, \hspace{13mm} \forall i \in V, ~ a \neq b \in [3],\\
v_i^a \cdot v_j^b & = & v_i^{a+c} \cdot v_j^{b+c}, \hspace{5mm} \forall i,j
\in V, ~ a,b,c \in [3],\\
v_i^a \cdot v_j^b & \geq & -1/2, \hspace{13mm}\forall i,j \in V, ~a,b \in [3],\\
v_i^a \cdot v_i^a & = & 1, \hspace{20mm} \forall i \in V, ~ a \in [3],\\
v_i^a & \in & \mathbb{R}^{3n}, \hspace{15mm} \forall i \in V, ~a \in [3].
\end{eqnarray}
Consider a set of $3n$ unit vectors forming a solution to this
semidefinite program.
Note that for a fixed vertex $i \in V$, the vectors $v_i^1, v_i^2$ and
$v_i^3$ are in the same 2-dimensional plane, since they are
constrained to be pairwise $120^{\circ}$ apart.
\begin{figure}
\caption{Three vectors $v_i^1, v_i^2$ and $v_i^3$ lie on a
2-dimensional plane corresponding to vertex $i$. The vector $g$
is projected onto the disc for element $i$ to obtain $\theta_i$. Angle
$\theta_{ij}$ denotes the difference $\theta_j-\theta_i$ of the projected angles.}
\label{fig:circle}
\end{figure}
In an ``integer'' solution for this semidefinite program, all these discs would be
constrained to be in the same 2-dimensional space and each angle of
rotation of the discs would be constrained to be $0, 2\pi/3$ or
$4\pi/3$, where each angle would correspond to a partition. In a
solution to the above relaxation, these discs are no longer
constrained to be in two dimensions.
In the rounding algorithm of Goemans and Williamson, we first pick a
vector $g \in \mathbb{R}^{3n}$ such that each entry is chosen
according to the normal distribution ${\cal N}(0,1)$. Then for each
vertex $i \in V$, we project this vector $g$ onto its corresponding
disc. This gives an angle $\theta_i$ in the range $[0,2\pi)$ for
each element $i$. (Note that without loss of generality, we can
assume that $\theta_i$ is the angle in the clockwise direction
between the projection of $g$ and the vector $v_i^3$.) We can
envision the angles $\{\theta_i\}$ for each $i \in V$ embedded onto
the same disc. Then we randomly partition this disc into three
equal pieces, each of length $2\pi/3$ (i.e., we choose an angle $\psi
\in [0, 2\pi)$ and let the three angles of partition be $\psi, \psi
+ 2\pi/3$ and $\psi + 4\pi/3$). These three pieces correspond to
the three sets in the partition.
The angle $\theta_{ij}$ is the angle $\theta_j - \theta_i$ modulo
$2\pi$. The probability that an edge $ij$ is cut in this
partitioning scheme is equal to $3\theta_{ij}/2\pi$ if $\theta_{ij} <
2\pi/3$, to 1 if $2\pi/3 \leq \theta_{ij} \leq 4\pi/3$, and to
$3(2\pi-\theta_{ij})/2\pi$ otherwise. In expectation, the angle $\theta_{ij}$ is
equal to $\arccos{(v_i^1 \cdot v_j^1)}$. (This can be shown using the
techniques in \cite{GW}. See Lemma 3 in
\cite{DBLP:conf/innovations/MakarychevN11}.) But using the expected
angle is not sufficient to obtain an approximation guarantee better
than $2/3$: if the angle $\theta_{ij}$ is $2\pi/3$ in expectation, then
one third of the time it could be zero (not cut) and two thirds of the
time it could be $\pi$ (cut), so the edge is cut with probability only
$2/3$, even though it contributes 1 to the objective function.
The exact probability that edge $ij$ is cut is:
\begin{eqnarray*}
\Pr[\text{edge } ij \text{ is cut}] ~ = ~ \sum^{2\pi/3}_{\theta = 0} \Pr[\theta_{ij} = \theta]
\times \frac{\theta}{2\pi/3} + \sum^{4\pi/3}_{\theta = 2\pi/3}
\Pr[\theta_{ij} = \theta] + \sum_{\theta = 4\pi/3}^{2\pi}
\Pr[\theta_{ij} = \theta] \times \frac{2\pi-\theta}{2\pi/3}.
\end{eqnarray*}
Therefore, we must compute $\Pr[\theta_{ij} = \theta]$ for all $\theta
\in [0,2\pi)$. One of the main technical contributions of Goemans and
Williamson~\cite{GW2} is that they compute the exact probability
that $\theta_{ij} < \delta$ for all $\delta \in [0, 2\pi)$. This
can be found in Lemma 8~\cite{GW2}. This enables them to compute
the probability that an edge is cut, resulting in their
approximation guarantee.
\section{Algorithm for {\sc Max-$k$-Cut}}\label{sec:mkcut}
As previously mentioned, we cannot model {\sc Max-$k$-Cut} as an
integer program directly using 2-dimensional discs as we do for {\sc
Max-3-Cut}, because any rotation corresponding to an angle of at
least $2\pi/k$ should contribute 1 to the objective function. Note
that in the case of {\sc Max-3-Cut}, there are two possible non-zero
rotations in an integer solution: $2\pi/3$ and $4\pi/3$, and both of
them contribute the same amount (i.e., 1) to the objective function.
Since it seems impossible to penalize all angles greater than $2\pi/k$
at the same cost, it seems similarly impossible to model the problem
directly with a complex semidefinite program.
We now present our approach for rounding the semidefinite programming
relaxation $(Q)$ for {{\sc Max}-$k$-{\sc Cut}}.
After solving the semidefinite program, we obtain a set of vectors
$\{v_i\}$ corresponding to each vertex $i \in V$. We can assume these
vectors to be in dimension $n$. Let ${\bf{0}}$ represent the vector
with $n$ zeros. For each vertex $i \in V$, we construct the following
two orthogonal vectors:
\begin{eqnarray}
v_i ~ := ~(v_i,{\bf 0}), \quad \quad
v_i^{\perp} ~ := ~ ({\bf 0}, v_i).
\end{eqnarray}
Each vertex $i \in V$ now corresponds to a 2-dimensional disc spanned
by vectors $v_i$ and $v_i^{\perp}$. Specifically, this 2-dimensional
disc consists of the
(continuous) set of vectors defined for $\phi \in [0,2\pi)$:
\begin{eqnarray}\label{def:phi}
v_i(\phi) & = & v_i \cos{\phi} + v_i^{\perp}\sin{\phi}.
\end{eqnarray}
Now that we have constructed a 2-dimensional disc for each element,
we can use the same rounding scheme due to Goemans and Williamson
described in the previous section: First, we choose a vector
$g \in \mathbb{R}^{2n}$ in
which each coordinate is randomly chosen according to the normal
distribution ${\cal N}(0,1)$. For each $i \in V$, we project this
vector $g$ onto the disc $\{v_i(\phi)\}$, which results in an angle
$\theta_i$, where:
\begin{eqnarray*}
g \cdot v_i(\theta_i) & = & \max_{0 \leq \phi < 2\pi} g \cdot v_i(\phi).
\end{eqnarray*}
Note that we do not have to compute infinitely many dot products,
since, for example, if $g\cdot v_i, ~g \cdot v_i^{\perp} \geq 0$, then:
\begin{eqnarray*}
\theta_i & = & \arctan{\left(\frac{g \cdot v_i^{\perp}}{g\cdot v_i}\right)},
\end{eqnarray*}
and the three other cases depending on the sign of $g\cdot v_i$ and $g \cdot
v_i^{\perp}$ can be handled accordingly.
After we find an angle $\theta_i$ for each $i \in V$, we can assign
each element to a position corresponding to its angle $\theta_i$ on a
single disc and divide this disc (randomly) into $k$ equal sections
of size $2\pi/k$. Specifically, choose a random angle $\psi$ and use
the partition $\psi + \frac{c \cdot 2\pi}{k}$ for all integers $c \in
[0,k)$, where angles are taken modulo $2\pi$. These are the $k$
partitions of the vertices in the $k$-cut.
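A minimal sketch of this rounding (ours, in Python; it assumes the SDP vectors $v_i$ are the rows of a matrix \texttt{V} and uses \texttt{arctan2} to handle the four sign cases mentioned above) is the following.
\begin{verbatim}
import numpy as np

def disc_round(V, k, rng=np.random.default_rng()):
    """Round SDP vectors (rows of V) to a k-cut via 2-dimensional discs.

    The disc of vertex i is spanned by (v_i, 0) and (0, v_i); projecting a
    random Gaussian vector g onto it gives an angle theta_i, and the circle
    is then cut into k arcs of length 2*pi/k at a random offset psi.
    """
    n, d = V.shape
    g = rng.standard_normal(2 * d)
    g1, g2 = g[:d], g[d:]
    # maximizer of g . v_i(phi) = (g . v_i) cos(phi) + (g . v_i_perp) sin(phi)
    theta = np.arctan2(V @ g2, V @ g1) % (2 * np.pi)
    psi = rng.uniform(0.0, 2 * np.pi)          # random rotation of the arcs
    arcs = ((theta - psi) % (2 * np.pi)) / (2 * np.pi / k)
    return np.floor(arcs).astype(int) % k
\end{verbatim}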
\begin{figure}
\caption{A 2-dimensional plane for vertex $i$ spanned by $v_i$ and
$v_i^{\perp}$.}
\end{figure}
\section{Analysis}\label{sec:analysis}
We prove that the distribution of the angle $\theta_{ij}$ is the
same as Lemma 8 of \cite{GW2}. This implies that we can use the
analysis that Goemans and Williamson use for {\sc Max-3-Cut} to obtain an analogous
approximation ratio for {\sc Max-$k$-Cut}.
\begin{lemma}
Let $x_i = \{x_i(\phi)\}$ and $x_j =
\{x_j(\phi)\}$ be two sets of vectors defined for $\phi \in [0, 2\pi)$ by
\begin{eqnarray*}
x_i(\phi) & = & (\cos\phi, ~\sin\phi, ~0, ~0),\\
x_j(\phi) & = & (\cos\theta\cos\phi, ~\cos\theta\sin\phi,
~\sin\theta\cos\phi, ~\sin\theta\sin\phi).
\end{eqnarray*}
Let $\gamma \in [0, 2\pi)$ denote the angle $\theta_j - \theta_i$ after the
vector $g \in {\cal N}(0,1)^{2n}$ is projected onto $x_i$ and $x_j$.
Then, with $r = \cos{\theta}$, for $\delta \in [0,2\pi)$,
\begin{eqnarray}
\Pr[0 \leq \gamma < \delta] = \frac{1}{2\pi} \left(\delta + \frac{r \sin{\delta}}{\sqrt{1-r^2\cos^2{\delta}}}
\arccos{(-r \cos{\delta})} \right).\label{form:last}
\end{eqnarray}
\end{lemma}
\begin{proof}
Note that the set of vectors $x_j$ is 2-dimensional, since
the
angle between $x_j(\phi_1)$ and $x_j(\phi_2)$ for $\phi_2 > \phi_1$ is
$\phi_2-\phi_1$. Thus, the rounding algorithm in Section
\ref{sec:mkcut} is well defined.
Recall that each coordinate of the vector $g$
is chosen according to the normal distribution ${\cal N}(0,1)$.
Even though the vector $g$ has $2n$ dimensions, we only need to
consider the first four, $g = (g_1, g_2, g_3, g_4)$.
This vector is chosen
equivalently to choosing $\alpha,\beta$ uniformly in $[0,2\pi)$ and
$p_1, p_2$ according to the distribution:
$$f(y) = y e^{-y^2/2}.$$
In other words, the vector $g$ is equivalent to:
\begin{eqnarray*}
g & = & (p_1\cos{\beta}, ~p_1 \sin{\beta}, ~p_2\cos{\alpha}, ~p_2\sin{\alpha}).
\end{eqnarray*}
Let $r = \cos{\theta}$ and let $s = \sin{\theta}$.
We will show that the probability that $\gamma \in [0,\delta)$ for
$\delta \leq \pi$ is:
\begin{eqnarray}
\Pr[0 \leq \gamma < \delta] & = & \frac{1}{2\pi} \left[ \delta +
\int^{\pi}_{\delta} \Pr\left[\frac{p_2 \cdot s}{\sin{\delta}}
\leq \frac{p_1 \cdot r}{\sin{(\alpha-\delta)}}\right]
d\alpha\right]. \label{fourteen}
\end{eqnarray}
Lemma 8 in \cite{GW2} shows this is equivalent to the probability in
\eqref{form:last}.
First, let us consider the case when $\theta \in [0,\pi/2]$, or
$\cos\theta \geq 0$.
Without loss of generality, assume that the projection of $g$ onto the
2-dimensional disc $x_i$ occurs at $\phi = 0$. Then we can see that
\begin{eqnarray*}
x_i(0) \cdot g = p_1.
\end{eqnarray*}
In other words, we can assume that $\theta_i = 0$.
As previously mentioned, $\alpha$ is chosen uniformly in the range
$[0,2\pi)$. However, if $\gamma < \delta$, then $\alpha < \pi$.
If $\alpha < \delta$, then the projection of $g$ onto
$x_j$, namely $\theta_j$ (which equals $\theta_{ij}$ in this case,
because we have assumed that $\theta_i = 0$), is less than $\delta$.
The probability
that $\gamma \leq \delta$ if $\alpha \in [\delta, \pi)$ is equal to
the probability
that:
\begin{eqnarray*}
\frac{p_2 \cdot s}{\sin{\delta}} & \leq & \frac{p_1 \cdot
r}{\sin{(\alpha-\delta)}} \quad \iff\\
p_2 \cdot s & \leq & \frac{p_1 \cdot
r}{\sin{(\alpha-\delta)}} \cdot \sin{\delta}.
\end{eqnarray*}
(See Figure 3 in \cite{GW2}.)
If $\theta \in (\pi/2, \pi)$ and $r = \cos\theta < 0$, then the
probability that $\gamma$ is in $[0, \delta)$ is the probability that
$\gamma$ is in $[\pi, \pi+\delta)$, which is $\delta/(2\pi)$. And
the probability that $\gamma$ is in $[\delta, \pi)$ is the
probability that $\gamma$ is in $[\pi+\delta, 2\pi)$ for $-r$.
This is:
\begin{eqnarray}
p_2 \cdot s & \leq & \frac{p_1 \cdot
(-r)}{\sin{(\alpha-\delta)}} \cdot \sin{(\pi + \delta)}.
\end{eqnarray}
However, since $\sin{(\pi + \delta)} = - \sin{\delta}$, we have:
\begin{eqnarray}
p_2 \cdot s & \leq & \frac{p_1 \cdot
r}{\sin{(\alpha-\delta)}} \cdot \sin{\delta}.
\end{eqnarray}
Thus for all $\delta < \pi$, we have proved the expression in
\eqref{fourteen}.
In Lemma 8 of \cite{GW2}, they show that Equation \eqref{fourteen}
is equivalent to Equation \eqref{form:last} when $\delta < \pi$. Then
they argue by symmetry that Equation \eqref{form:last} also holds when
$\pi \leq \delta < 2\pi$.\end{proof}
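As a quick numerical sanity check of the closed form in \eqref{form:last} (this check is our own addition and is not part of the analysis; the function names are ours), the following Python sketch samples $g$, computes the two projected angles, and compares the empirical distribution of $\gamma$ with the formula.
\begin{verbatim}
import numpy as np

def empirical_cdf(theta, delta, trials=200_000, seed=0):
    """Estimate Pr[0 <= gamma < delta] by sampling g ~ N(0,1)^4."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, 4))
    # theta_i maximizes x_i(phi).g = g1*cos(phi) + g2*sin(phi)
    theta_i = np.arctan2(g[:, 1], g[:, 0])
    c, s = np.cos(theta), np.sin(theta)
    # theta_j maximizes x_j(phi).g; its cos(phi)/sin(phi) coefficients are below
    theta_j = np.arctan2(c * g[:, 1] + s * g[:, 3], c * g[:, 0] + s * g[:, 2])
    gamma = np.mod(theta_j - theta_i, 2 * np.pi)
    return np.mean(gamma < delta)

def closed_form(r, delta):
    """Right-hand side of the lemma."""
    return (delta + r * np.sin(delta) / np.sqrt(1 - r**2 * np.cos(delta)**2)
            * np.arccos(-r * np.cos(delta))) / (2 * np.pi)

theta, delta = 1.1, 2.0   # arbitrary test values
print(empirical_cdf(theta, delta), closed_form(np.cos(theta), delta))
# the two numbers should agree up to sampling error if the lemma holds
\end{verbatim}
Only the first four coordinates of $g$ are sampled, since, as noted in the proof, the remaining coordinates play no role.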
\begin{lemma}
Suppose $v_i \cdot v_j = \cos\theta$ for two unit vectors $v_i$ and
$v_j$. Let $v_i(\phi)$ and $v_j(\phi)$ be defined as in equation
\eqref{def:phi}. Then, we can assume that:
\begin{eqnarray*}
v_i(\phi) & = & (\cos\phi, ~\sin\phi, ~0, ~0),\\
v_j(\phi) & = & (\cos\theta\cos\phi, ~\cos\theta\sin\phi,
~\sin\theta\cos\phi, ~\sin\theta\sin\phi).
\end{eqnarray*}
\end{lemma}
\begin{proof}
From the definition (in Equation \eqref{def:phi}) of $v_i(\phi)$, we
can see that:
\begin{eqnarray*}
v_i(\phi_1) \cdot v_j(\phi_2) & = & (v_i \cos{\phi_1} +
v_i^{\perp}\sin{\phi_1})\cdot (v_j \cos{\phi_2} + v_j^{\perp}
\sin{\phi_2})\\
& = & v_i \cdot v_j \cos\phi_1 \cos\phi_2 + v_i^{\perp} \cdot
v_j^{\perp} \sin\phi_1 \sin\phi_2 + v_i \cdot v_j^{\perp} \cos\phi_1
\sin\phi_2 + v_i^{\perp} \cdot v_j \sin\phi_1 \cos\phi_2\\
& = &
\cos\theta
(\cos{\phi_1}\cos{\phi_2} + \sin{\phi_1}\sin{\phi_2}).
\end{eqnarray*}
Note that $v_i \cdot v_j^{\perp} = v_i^{\perp}\cdot v_j = 0$, since
each $v_i$ vector has $n$ zeros in the second half of the entries and each
$v_i^{\perp}$ vector has $n$ zeros in the first half of the entries; the last
equality above also uses that $v_i^{\perp} \cdot v_j^{\perp} = v_i \cdot v_j = \cos\theta$.
If we compute $v_i(\phi_1)\cdot v_j(\phi_2)$ using the assumption in
the lemma, then we get the same dot product. Thus, the two sets are
equivalent.\end{proof}
Since the distribution of the angle is the same, we can use the
same analysis as in \cite{GW2} (generalized from $3$ to $k$) to prove the
following lemma. Although it is essentially the same proof, we
include it here for completeness.
As in Corollary 9 of \cite{GW2}, we define:
\begin{eqnarray*}
g(r,\delta) & = & \frac{1}{2\pi}\left(\delta + \frac{r \sin{\delta}}{\sqrt{1-r^2\cos^2{\delta}}}\arccos{(-r \cos{\delta})} \right).
\end{eqnarray*}
In other words, $g(r,\delta)$ is the probability that the angle
$\theta_{ij}$, obtained by projecting $g$ onto
the two discs $\{v_i(\phi)\}$ and $\{v_j(\phi)\}$ correlated
by $r = v_i \cdot v_j$, is less than $\delta$.
\begin{lemma}\label{closed_form}
Let $r = v_i\cdot v_j$ and let $y_i \in \{0,1,2, \dots, k-1\}$ be the
integer assignment of vertex $i$ to its partition. Then the probability that the equation $y_i -
y_j \equiv c ~(\bmod ~k)$ is satisfied is
\begin{eqnarray*}
\frac{1}{k} + \frac{k}{8\pi^2}\left[2
\arccos^2\left(-r\cos\left(\frac{2\pi c}{k}\right)
\right) - \arccos^2\left(-r\cos\left( \frac{2\pi(c+1)}{k} \right) \right) -
\arccos^2\left(-r \cos\left( \frac{2\pi(c-1)}{k}\right)
\right) \right].
\end{eqnarray*}
\end{lemma}
\begin{proof}
$\Pr[y_i-y_j \equiv c ~(\bmod ~k) \text{
satisfied}]$
\begin{align*}
& = \frac{k}{2\pi} \int^{\frac{2\pi}{k}}_0
\Pr_{\gamma}\left[\frac{2\pi c}{k}-\tau \leq \gamma <
\frac{2\pi(c+1)}{k} -\tau \right] d\tau\\
& = \frac{k}{2\pi} \int^{\frac{2\pi}{k}}_0 \left( g\left(r, \frac{2\pi(c+1)}{k}
- \tau\right) - g\left(r, \frac{2\pi c}{k} - \tau\right)
\right) d\tau\\
& = \frac{k}{2\pi} \int^{\frac{2\pi(c+1)}{k}}_{\frac{2\pi
c}{k}} g(r, \nu)d\nu - \frac{k}{2\pi} \int^{\frac{2\pi
c}{k}}_{\frac{2\pi(c-1)}{k}} g(r,\nu) d\nu\\
& = \frac{k}{2\pi}\frac{1}{2\pi}
\left( \int^{\frac{2\pi(c+1)}{k}}_{\frac{2\pi c}{k}}
\nu d\nu -\left[\frac{1}{2}\arccos^2{(-r\cos{\nu})}
\right]^{\frac{2\pi(c+1)}{k}}_{\frac{2\pi
c}{k}} \nonumber \right.\\
& \quad \quad \left.- \int^{\frac{2\pi c}{k}}_{\frac{2\pi(c-1)}{k}} \nu
d\nu + \left[\frac{1}{2} \arccos^2{(-r\cos{\nu})}
\right]^{\frac{2\pi c}{k}}_{\frac{2\pi(c-1)}{k}} \right) \\
& = \frac{k}{8\pi^2} \left[ \left( \frac{2\pi(c+1)}{k}\right)^2 + \left(\frac{2\pi(c-1)}{k}\right)^2 -2
\left(\frac{2\pi c}{k}\right)^2 \nonumber \right]\\
& \quad \quad + \frac{k}{8\pi^2}\left[2 \arccos^2\left(-r \cos \left(\frac{2\pi c}{k}\right) \right) \right. \nonumber\\
& \quad \quad \left.- \arccos^2\left(-r\cos \left(\frac{2\pi(c+1)}{k} \right)\right)
-\arccos^2\left(-r \cos \left(\frac{2\pi(c-1)}{k}\right)
\right) \right]\\
& = \frac{1}{k} + \frac{k}{8\pi^2}\left[2
\arccos^2\left(-r\cos\left(\frac{2\pi c}{k}\right)
\right) \right. \nonumber\\
& \quad \quad \left. - \arccos^2\left(-r\cos\left( \frac{2\pi(c+1)}{k} \right) \right) -
\arccos^2\left(-r \cos\left( \frac{2\pi(c-1)}{k}\right)
\right) \right].
\end{align*}
\end{proof}
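As a sanity check of the last steps of this computation (again our own illustration, not part of the proof; the helper names are ours), one can verify numerically, for sample values of $r$, $c$, and $k$, that the integral representation
$\frac{k}{2\pi}\int^{2\pi(c+1)/k}_{2\pi c/k} g(r,\nu)\,d\nu - \frac{k}{2\pi}\int^{2\pi c/k}_{2\pi(c-1)/k} g(r,\nu)\,d\nu$
agrees with the final closed form.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def g(r, delta):
    # g(r, delta) as defined above
    return (delta + r * np.sin(delta) / np.sqrt(1 - r**2 * np.cos(delta)**2)
            * np.arccos(-r * np.cos(delta))) / (2 * np.pi)

def prob_integral(r, c, k):
    a, b, d = 2*np.pi*(c-1)/k, 2*np.pi*c/k, 2*np.pi*(c+1)/k
    return k/(2*np.pi) * (quad(lambda v: g(r, v), b, d)[0]
                          - quad(lambda v: g(r, v), a, b)[0])

def prob_closed(r, c, k):
    acs = lambda t: np.arccos(-r * np.cos(t))**2
    return 1/k + k/(8*np.pi**2) * (2*acs(2*np.pi*c/k)
                                   - acs(2*np.pi*(c+1)/k) - acs(2*np.pi*(c-1)/k))

r, c, k = -1/3, 1, 4
print(prob_integral(r, c, k), prob_closed(r, c, k))  # the two values should coincide
\end{verbatim}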
\begin{lemma}\label{lemm:not}
Let $r = v_i\cdot v_j$.
The
probability that edge $ij$ is {\em not} cut by our algorithm is:
\begin{eqnarray*}
\frac{1}{k} + \frac{k}{4\pi^2} \left[\arccos^2\left(-r\right)-
\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)\right].
\end{eqnarray*}
\end{lemma}
\begin{proof}
In the case of {\sc Max-$k$-Cut}, we set $c=0$. By Lemma
\ref{closed_form}, we have the probability that edge $ij$ is not
cut is:
\begin{eqnarray*}
\frac{1}{k} + \frac{k}{8\pi^2} \left[2 \arccos^2\left(-r\right)- \arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right) -
\arccos^2\left( -r \cos\left( -\frac{2\pi}{k} \right)
\right)\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & \frac{1}{k} + \frac{k}{8\pi^2} \left[2 \arccos^2\left(-r\right)-
2\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)\right]\\
& = & \frac{1}{k} + \frac{k}{4\pi^2} \left[\arccos^2\left(-r\right)-
\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)\right].
\end{eqnarray*}\end{proof}
\begin{lemma}
Let $r = v_i \cdot v_j$. The probability that
edge $ij$ is cut by our algorithm is:
\begin{eqnarray}
\frac{k-1}{k}
+ \frac{k}{4\pi^2} \left[\arccos^2\left(-r\cdot \cos\left( \frac{2\pi}{k} \right) \right)
- \arccos^2{(-r)} \right].\label{edge_guar}
\end{eqnarray}
\end{lemma}
\begin{proof}
By Lemma \ref{lemm:not} and the previously stated assumption that $r = v_i
\cdot v_j = \cos{(\theta_{ij})}$, we have:
\begin{eqnarray*}
&& 1 - \left[\frac{1}{k} + \frac{k}{4\pi^2} \left[\arccos^2\left(-r\right)-
\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)\right]
\right] \\
& = & \frac{k-1}{k} - \frac{k}{4\pi^2} \left[\arccos^2\left(-r\right)-
\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)\right]\\
& = & \frac{k-1}{k}
+ \frac{k}{4\pi^2} \left[\arccos^2\left(-r\cos\left( \frac{2\pi}{k} \right) \right)
- \arccos^2\left(-r\right) \right].
\end{eqnarray*}
\end{proof}
\begin{theorem}\label{thm:main}
The worst case approximation ratio of our algorithm for {\sc
Max-$k$-Cut} is:
\begin{eqnarray*}
\phi_k & = & \frac{k-1}{k}
+ \frac{k}{4\pi^2} \left[\arccos^2\left(\left(\frac{1}{k-1}\right)\cos\left( \frac{2\pi}{k} \right) \right)
- \arccos^2\left(\frac{1}{k-1}\right) \right].
\end{eqnarray*}
\end{theorem}
\begin{proof}
As a function of $r$ in the range $[-1/(k-1),\,1]$, the
expression in Equation \eqref{edge_guar} is minimized when $r =
-1/(k-1)$. Thus, if we do an edge-by-edge analysis, the worst case
approximation ratio is obtained when $v_i \cdot v_j = -1/(k-1)$ for
all edges $ij \in E$. \end{proof}
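For concreteness, $\phi_k$ is easy to evaluate numerically; the following small Python sketch (our own, purely for illustration) computes it for a few values of $k$. For $k=3$ it evaluates to approximately $0.836$, which is consistent with the {\sc Max-3-Cut} analysis of \cite{GW2} that we generalized.
\begin{verbatim}
import numpy as np

def phi(k):
    # worst-case ratio from the theorem above
    a = np.arccos(np.cos(2 * np.pi / k) / (k - 1)) ** 2
    b = np.arccos(1.0 / (k - 1)) ** 2
    return (k - 1) / k + k / (4 * np.pi ** 2) * (a - b)

for k in (3, 4, 5, 10):
    print(k, round(phi(k), 6))   # k = 3 gives roughly 0.836
\end{verbatim}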
\section{Another Rounding Algorithm}\label{sec:another}
The algorithm presented in Section 4 can be restated as the following
rounding scheme. Let $w_1, w_2$ and $w_3$ denote unit vectors in
$\mathbb{R}^2$ with pairwise dot products $-1/2$. In other words,
$w_1, w_2$ and $w_3$ are the vertices of the simplex $\Sigma_3$. Now
take two random Gaussian vectors $g_1, g_2 \in \mathbb{R}^n$ and set $x_i =
g_1 \cdot v_i$, $y_i = g_2 \cdot v_i$. To assign the vertex $i$ to
one of the three partitions, we simply assign it to the $j$ such that $w_j
\cdot (x_i, y_i)$ is maximized.
We can generalize this approach by choosing $k-1$ random Gaussian vectors
$g_1, \dots, g_{k-1}$. For each vertex $i$, we obtain the vector
$(g_1\cdot v_i, g_2 \cdot v_i, \dots, g_{k-1} \cdot v_i)$ in
$\mathbb{R}^{k-1}$. This vector is assigned to the closest vertex of
$\Sigma_k$. Computationally, this rounding scheme seems to yield
approximation ratios that match those of De Klerk et al.
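A minimal sketch of this generalized rounding in Python is given below, purely for illustration; it assumes the SDP solution is given as a matrix \texttt{V} whose $i$-th row is the unit vector $v_i$, and the helper names are ours.
\begin{verbatim}
import numpy as np

def simplex_vertices(k):
    # Vertices of Sigma_k in R^(k-1): unit vectors with pairwise dot product -1/(k-1).
    E = np.eye(k) - 1.0 / k                 # centered standard basis vectors (rows)
    Q, _ = np.linalg.qr(E[:, : k - 1])      # orthonormal basis of their (k-1)-dim span
    W = E @ Q                               # coordinates of the vertices in that basis
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def round_to_k_cut(V, k, seed=0):
    # Assign vertex i to the simplex vertex closest to (g_1.v_i, ..., g_{k-1}.v_i).
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((k - 1, V.shape[1]))   # k-1 random Gaussian vectors
    P = V @ G.T                                    # row i is (g_1.v_i, ..., g_{k-1}.v_i)
    W = simplex_vertices(k)
    return np.argmax(P @ W.T, axis=1)              # nearest vertex = largest inner product
\end{verbatim}
Since all the vertices of $\Sigma_k$ have the same norm, the closest vertex is the one with the largest inner product, matching the assignment rule stated above for $k=3$.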
\end{document}
\begin{document}
\begin{frontmatter}
\title{Shadowing relations with structural and topological stability in iterated function systems}
\author{Fatemeh Rezaei\corref{first}}
\cortext[first]{Principal corresponding author}
\ead{f$\[email protected]}
\address{PhD student in Mathematics, Department of Mathematics, Yazd University, Iran}
\author{Mehdi Fatehi Nia\corref{fn1}}
\cortext[fn1]{Corresponding author}
\ead{[email protected]}
\address{PhD, Department of Mathematics, Yazd University, Yazd 89195-741, Iran}
\begin{abstract}
This paper formulates definitions of topological stability, structural stability, and the expansiveness property for iterated function systems (abbreviated IFSs). It shows that the shadowing property is a necessary condition for topological stability in IFSs, and then proves the converse of this statement under the additional assumption of the expansiveness property for IFSs. It also asserts that structural stability implies the shadowing property in IFSs and presents an example that refutes the converse of this assertion.
\end{abstract}
\begin{keyword}
shadowing property \sep structural stability \sep topological stability \sep expansiveness property \sep iterated function system
\end{keyword}
\end{frontmatter}
\section{Introduction}
Is there any relation between shadowing and topological stability, or between shadowing and structural stability, in iterated function systems? We know that topological stability and structural stability are important properties of dynamical systems, so how can we define these properties for iterated function systems? These concepts are related in dynamical systems, so can we find the same relationships in iterated function systems?
We are going to answer these questions in this paper. In this comprehensive introduction, we explain the concepts that we deal with in our study and, moreover, mention some of the studies that have been done on these themes.\\\\
{\emph{\textbf {Iterated function system?}}}\\
The concept of iterated function systems was introduced in 1981 by Hutchinson, who also established their mathematical foundations \cite{HUTCHINSON1981}; the term itself, briefly called IFS, was coined by Barnsley \cite{HS2012}.
An IFS consists of a set $\Lambda$ and a family of functions $f_\lambda,\lambda\in \Lambda$, on an arbitrary space $M$. Since in an IFS the nonempty set $\Lambda$ can be finite or countably infinite, and its functions can be of special types, different classes of IFSs have been investigated. Most studies on finite IFSs are due to Barnsley;\citep{BARNSLEY1985,BARNSLEY1988,BARNSLEY1993,BARNSLEY2006,ABVW2010}.
But why is studying IFSs important? The importance of IFSs lies in their attractor set, which is called a fractal; in fact, a fractal is generated by iterating the functions of an IFS on a set.
But what is a fractal? Classical geometry cannot give an accurate description of the geometric structure of many natural objects such as clouds, forests, mountains, flowers, galaxies, and so on. Mandelbrot, in 1982, changed this perspective by extending classical geometry into what is now called fractal geometry. The IFS model is a basis for different applications, such as computer graphics, image compression, learning automata, neural nets, and statistical physics \cite{EDALAT1996}. So the study of fractals is important, and therefore, from one point of view, the study of an IFS, as a way to generate a fractal, is important \cite{BARNSLEY1985}. The existence and uniqueness of the attractor of a finite IFS was proved in 1985 by Hata \cite{hata1985}; see also \cite{duvall1992}.\\\\
{\emph{\textbf{Shadowing property?}}}\\
From the numerical perspective, whenever we simulate a dynamical system (abbreviated DS) on a computer, any number is represented with finite precision, so there is a small difference between the original number and the stored number at each step of the computation. That is, errors occur, for example round-off errors and so on. As time passes, these errors grow and are amplified. Now, some questions arise:
{\emph{Question 1: Are the solutions generated by the simulation related to true mathematical solutions of the considered DS? In other words, can we find a true solution near the generated solution?}} If yes, then we say that the system has the shadowing property (abbreviated SP). That is, the shadowing property means that a true orbit can be found which remains near the generated orbit. Nowadays, shadowing is a branch of the global theory of DSs; it is growing and developing, and it is also considered a powerful tool for the analysis of chaotic DSs.
The approximate (or generated) orbit is called a pseudo orbit. The notion of a pseudo orbit was first proposed in Birkhoff's study in 1925 \cite{birkhoff1926extension}. Pseudo orbits play an important role in shadowing, and every shadowed pseudo orbit provides useful information about the dynamics of the system. Bowen in \cite{bowen1975equilibrium} and Conley in \cite{conley1978isolated} independently discovered that pseudo orbits can be used as a tool to connect the true solution with the approximate solution. The first result of classical shadowing for hyperbolic sets was presented by Anosov in 1967 in \cite{anosov1967geodesic}. Sinai, in 1972, proved the shadowing lemma for Anosov diffeomorphisms [\cite{sinai1972gibbs}, Lemma (1.5)], using the theorem on the structural stability of Anosov diffeomorphisms which Anosov had established in \cite{anosov1967geodesic}.
Using the stable manifold theorem, Bowen [\cite{bowen1975omega}, page 335] proved the shadowing lemma for diffeomorphisms satisfying Axiom A. (Bowen, on page 337, remarked that he had essentially given the proof of this lemma in \cite{bowen1970markov} and \cite{bowen1972periodic}.) The works of Anosov and Bowen are regarded as the initial results of classical shadowing.\\
The SP is a property of uniformly hyperbolic sets of DSs, but some researchers have also attempted to investigate the SP on non-hyperbolic sets, or on sets which are not uniformly hyperbolic. For example, Hammel and his co-workers studied the SP of the logistic and H\'enon maps for special parameter values in the articles \citep{hammel1987numerical,hammel1988numerical}; in \cite{grebogi1990shadowing}, the researchers provide estimates of the length of time of shadowing for non-hyperbolic, chaotic systems; and in \cite{sauer1997long}, Sauer et al.\ address two problems, namely the proximity of the solutions to the original solution and the length of time over which the shadowing remains valid, by using the shadowing distance and the shadowing time, respectively. Lately, the SP of coupled nonlinear DSs has also been investigated in \cite{soldatenko2015shadowing}.\\
Pilyugin called the shadowing theory developed on the basis of structural stability theory modern shadowing. There are relations between the notions of hyperbolicity and transversality in SS theory and in shadowing theory. Up to the end of the 20th century, modern shadowing could be summarized in two books \citep{pilyugin1999shadowing,palmer2000shadowing} published almost simultaneously. To learn about the methods and results of shadowing theory, see \cite{pilyugin1999shadowing}; see \cite{palmer2000shadowing} for the applications of shadowing and for the problems, raised by the results of numerical simulation, that can be theoretically justified.\\
The shadowing lemma has also been extended to homeomorphisms; for example, the studies \cite{anosov1970one} and \cite{robinson1977stability} were carried out near a hyperbolic set of a homeomorphism, and in \cite{shub1978global} the shadowing lemma is proved for a set similar to this one.
Ombach, using two methods, demonstrated in 1993 in \cite{ombach1993simplest} that every hyperbolic linear homeomorphism of a Banach space has the SP. In fact, this is a proof of the shadowing lemma in the simplest (but still nontrivial) situation. Studies have also been carried out on non-hyperbolic sets of homeomorphisms; for instance, in 2013, Petrov and Pilyugin gave sufficient conditions under which a homeomorphism of a compact metric space has the SP \cite{petrov2014lyapunov}.
They performed this study in terms of the existence of a pair of Lyapunov functions.\\
The SP has also been investigated in other systems, including autonomous systems \cite{coomes1995rigorous};
non-autonomous systems \citep{palmer1984exponential,Palmer1988,aoki1994topological};
singularly perturbed systems \cite{Lin1989shadowing} (nonlinear shadowing theorem);
induced set-valued systems with some expansive maps \cite{Wn2010shadowing};
and $C^1$-generic conservative systems \cite{Bessa2015shadowing}.\\
We know that hyperbolicity does not hold along the direction of a vector field. However, efforts have been made to explore the SP in vector fields. Franke and Selgrade were the first, in 1977, to extend the shadowing lemma to the hyperbolic sets of vector fields \cite{FRANKE1977}. These attempts led to the arrival of the shadowing lemma in ordinary differential equations (abbreviated ODEs); see the works of Coomes and his co-workers \citep{Coomes1994shadowing,Coomes1995ashadowing} and \cite{Brian1995}.
In \cite{Kocak2007}, they provided formulations of pseudo orbits and shadowing in ODEs.\\
{\emph{Question 2: Is there a system with no SP?}} The researchers' answer to this question is yes. In fact, Bonatti and his co-workers (2000) gave an example of a $C^1$-diffeomorphism of a three-dimensional manifold and showed that it has a neighborhood consisting of maps with no SP \cite{Bonatti2000}.
{\emph{Question 3: What are the applications of the SP?}} Some of the applications of the SP are listed below as samples, to indicate the importance of this property. Using the SP, the following cases were studied:
\begin{tabbing}
$\bullet$\, \=specification property for a space,\cite{Sigmund1974};\\
$\bullet$\, \> the problem of prediction of DS,\cite{Pearson2001};\\
$\bullet$\, \> proving the existence of and computing transversal homoclinic orbits in certain spaces,\cite{homoclinic2005};\\
$\bullet$\, \> the existence of various unstable periodic orbits, including transversal homoclinic or\\
\> heteroclinic orbits in particular systems such as Lorenz equations, \citep{homoclinic2005,Kirchgraber2004};\\
$\bullet$\, \> the determination of hyperbolicity of a system,\citep{Sakai2008stability,Tian2012diffeo};\\
$\bullet$\, \> the proof of Smale theorem,\citep{Palmer1988,Conley1975hyper,Anosov1995hyper};\\
$\bullet$\, \> the study of stability of functional equations,\cite{Lee2009stability}.
\end{tabbing}
{\emph{Question 4: Has the shadowing property in IFSs ever been studied?}}
In 1999, Bielecki investigated the SP of the attractor of an IFS \cite{Bielecki1999} and proved that if an IFS consists of continuous weak contractions, then the attractor of the IFS has the SP. In 2006, the concept of the SP was introduced for orbits of an IFS by Glavan and Gutu \cite{Glavan2006shadowing}, and in \cite{Glavan2009} they proved that a scalar affine IFS has the SP if and only if it consists of contracting or strictly expanding functions. Recently, Fatehi Nia proposed the concept of the average shadowing property (abbreviated ASP) for IFSs in \cite{Fatehi2016} and studied the properties of an IFS with this property (the average shadowing property for DSs was first presented by Blank in 1988 \cite{Blank1988metric}). Another definition of the SP and the ASP for an IFS consisting of two functions is presented in \cite{Zamani2015shadowing}.
In this paper, we use the concept of shadowing as defined in \cite{Glavan2006shadowing}.\\\\
{\emph{\textbf{Topological stability?}}}\\
Is there a neighborhood of a function (or a vector field) in which the orbits of the functions (or the flows) are essentially similar, that is, have the same topological structure? The most useful notion for this matter is topological stability (abbreviated TS); it was formerly sometimes called lower semistability. The notion of TS was first used by Walters in 1970 \cite{Walters1970topology}, who showed that Anosov diffeomorphisms are TS. There are relations between the TS and the shadowing of homeomorphisms. In fact, in 1978, Walters proved that shadowing is a necessary condition for the TS of a homeomorphism on a manifold of dimension $\geq 2$ \cite{Walters1978orbit}. In 1980, Yano proved that this condition is not sufficient \cite{Yano1980circle}: he presented a homeomorphism of the circle that has the SP but is not TS. Later, in 1999, Pilyugin proved the converse of Walters's statement by adding a condition \cite{pilyugin1999shadowing}.
In this paper, we define TS for an IFS. Now, the following questions arise:\\
{\emph{Does an IFS have the SP if it is topologically stable, when the functions of the IFS are homeomorphisms?}}\\
{\emph{Is the converse of the above statement true?}}\\\\
{\emph{\textbf{Structural stability?}}}\\
Sometimes systems that look alike may, in fact, have completely different dynamical behaviors (this gives rise to bifurcation, chaos, ...). This leads to the creation of another concept, called ``structural stability'' (abbreviated SS). According to the literature, e.g.\ \cite{bonatti2001dynamical}, the concept of structural stability under this name was introduced by M. M. Peixoto. In fact, this concept is a generalization of the concept of syst\`emes grossiers, or rough systems, introduced in 1937 by A. A. Andronov and L. S. Pontryagin. Andronov was interested in the preservation of the qualitative properties of flows under small perturbations and asked a question whose history can be seen in \cite{Anosov1985}. Indeed, Peixoto introduced the concept of structural stability in 1959, correcting the mistakes of the article \cite{Baggis1955}.\\
We know that every Anosov diffeomorphism has the SP on a hyperbolic set, but Robinson proved a more general statement in 1977 \cite{Robinson1976stability}. He showed that every structurally stable diffeomorphism of a closed manifold has the SP, and then, using this assertion, he proved the stability of a diffeomorphism near a hyperbolic set. Another proof of this statement can be found in Sawada's research in the article \cite{Sawada1980extended}, from 1980. The concept of extended f-orbits was presented by F. Takens in \cite{Takens1974tolerance}; he described some conjectures in that article, and Sawada answered his third conjecture in \cite{Sawada1980extended}. Then, in 1994, Sakai extended Robinson's result in \cite{Sakai1994orbit}: he proved that the $C^1$-interior of the set of diffeomorphisms with the SP consists of the Axiom A diffeomorphisms satisfying strong transversality. In 2006, he and Lee developed this assertion for $C^1$-vector fields: they showed that a $C^1$-vector field with no singular point belongs to the $C^1$-interior of the set of all vector fields with the SP if and only if it is structurally stable \cite{Lee2007structural}. Pilyugin proved Robinson's assertion for flows in \cite{Pilyugin1997structural}, in 1997. We know that the set of all diffeomorphisms with the SP is not equal to the set of all structurally stable diffeomorphisms; in fact, there are examples of diffeomorphisms with the SP but without SS, like the diffeomorphism of the circle $S^1$ presented in the article \cite{Pilyugin2010variational}. In 2010, Pilyugin proposed a special type of shadowing known as variational shadowing in \cite{Pilyugin2010variational} and showed an equivalence between the set of all diffeomorphisms with variational shadowing and the set of all structurally stable diffeomorphisms. In the same year, he and his co-workers proved another equivalence, between the set of all diffeomorphisms with Lipschitz shadowing and the set of all structurally stable diffeomorphisms, in \cite{Pilyugin2010lipschitz}. (The Lipschitz shadowing property was proposed by Bowen in 1975 \cite{bowen1975equilibrium}.) Again, in 2014, Pilyugin proved this equivalence in a different way in \cite{Pilyugin2014mane}. A summary of important new results in the theory of pseudo-orbit shadowing obtained in the first decade of the 21st century can be found in the survey \cite{Pilyugin2011orbit}; its main objects are the SP, SS, and some sets that are equivalent in these respects.\\
In this paper, we define the concept of SS for an IFS. The first question that comes to mind is:\\
{\emph{Can we prove a statement for IFSs similar to Robinson's assertion?}}
That is, {\emph{does an IFS consisting of diffeomorphisms of a compact manifold $M$ have the SP if it is structurally stable?}}\\
{\emph{Is the converse of this statement true?}}\\\\
{\emph{\textbf{Expansiveness property?}}}\\
We consider two distinct points and study their orbits. Do their orbits remain close to each other? If for every two arbitrary points the answer to this question is negative, we say that the DS has the expansiveness property. In other words, if a DS has the expansiveness property, then two points whose orbits remain close to each other forever have to be equal. This concept plays an important role in studies of stability, and it was first put forward by Utz in 1950; however, he introduced expansiveness under the name unstable homeomorphism \cite{Utz1950}. The problem of the existence of expansive homeomorphisms and methods for constructing them have been investigated in several studies; for example, in 1955, Williams presented an expansive homeomorphism of the dyadic solenoid, and Reddy proved the existence of an expansive homeomorphism on the torus in 1965 \cite{Reddy1965}. This concept was proposed for one-parameter flows by Bowen and Walters in 1972; they proved theorems similar to those for diffeomorphisms in \cite{Bowen1972flow}. What led us to define this concept for IFSs is the fact that
Pilyugin in \cite{pilyugin1999shadowing} showed that if a DS has the expansiveness and shadowing properties, then it is TS.\\
{\emph{Can we define a concept similar to expansiveness for IFSs?}}\\
{\emph{Is an IFS TS if it has the expansiveness and shadowing properties?}}\\\\
{\emph{\textbf{Contents?}}}\\
Here is a description of the sections of this paper.\\
$\bullet$ In Section 2, we present the basic definitions and also formulate the definition of TS for an IFS. Then we prove the fundamental Theorem \ref{Main}
by proving some lemmas:\\
{\emph{\textbf{Theorem \ref{Main}.}}} Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ is an IFS with $dim M\geq 2$. If $\mathcal{F}$ is topologically stable, then $\mathcal{F}$ has the shadowing property.\\
$\bullet$ In Section 3, we define the expansiveness and shadowing uniqueness properties for an IFS. We then prove some lemmas that lead to a proof of the following technical theorem:\\
{\emph{\textbf{Theorem \ref{secondary Main}.}}} Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}\subset Homeo(M)$ is an expansive IFS relative to
$\sigma=\Big\{\ldots,\lambda_{-1},\lambda_{0},\lambda_{1},\ldots\Big\}$ with constant expansive $\eta$.
Also $\mathcal{F}$ has the shadowing property. Then there exist $\epsilon>0$, $3\epsilon<\eta$, and $\delta>0$ with the following property:\\
If $\mathcal{G}=\Big\{g_{\lambda},\,M\,: \lambda\in\Lambda\Big\}\subset Homeo(M)$ is an IFS that ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)<\delta$ then for the above $\sigma$ there exists a continuous function $h: M\rightarrow M$ such that:\\
$\left \{\begin{array}{lll}
i) & r\Big(G_{\sigma_k}(x), F_{\sigma_k}(h(x))\Big)<\epsilon, & \forall x\in M\,\,and\,\,\forall k\in\Bbb{Z}, \\
ii) & r\Big(x, h(x)\Big)<\epsilon, & \forall x\in M.
\end{array}\right.$\\
Moreover, if $\epsilon>0$ is sufficiently small, then the function $h$ is surjective and also $F_{\sigma_k}oh= h o G_{\sigma_k}$ for every $k\in\Bbb{Z}$.\\
Furthermore, we prove Corollary \ref{corone}, which is an important consequence of the above theorem:\\
{\emph{\textbf{Corollary \ref{corone}.}}} If an IFS $\mathcal{F}$ has the shadowing property and moreover $\mathcal{F}$ is expansive relative to any sequence $\sigma$ with small constant expansive, then $\mathcal{F}$ is topologically stable.\\
$\bullet$ In Section 4, we present a formulation of the concept of SS for an IFS. Then, using the results of Sections 2 and 3, we show that: \\
{\emph{\textbf{Corollary \ref{cortwo}.}}} Let $\mathcal{F}\subset Diff^{1}(M)$ be an IFS and $dim M\geq 2$.
If $\mathcal{F}$ is structurally stable, then it has the shadowing property.\\
Finally, we show by means of an example that the converse of this statement does not hold.\\
\section{Topological stability implies the shadowing property in an IFS}
The study of the TS and of the SS of a diffeomorphism has progressed simultaneously. As we know, the TS of a diffeomorphism has been examined using various tools, for example the shadowing property \cite{Moriyasu1991TS} and Lyapunov functions \cite{Lewowicz1980Lyapunov,Tolosa2007Lyapunov}. In this paper, we study the TS of an IFS via the SP. First, we give the basic definitions.
\begin{dfn}
Let $(M,d)$ be a complete metric space and $\mathcal{F}$ be a family of continuous mappings $f_{\lambda}:M\rightarrow M$ for every $\lambda \in \Lambda$, where $\Lambda$ is a finite nonempty set; that is, $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda=\{1,2,\ldots,N\}\Big\}$. We call this family an {\emph{\textbf{Iterated Function System}}} or shortly, IFS.
\end{dfn}
\begin{dfn}
Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ is an IFS. The sequence ${\{x_k\}}_{k\in\Bbb Z}\subset M$(or sometimes ${\{x_k\}}_{k\in\Bbb N}$) is said to be a {\emph{\textbf{chain}}} for IFS $\mathcal{F}$ if for every $k\in\Bbb Z$(or $k\in\Bbb N$), there exists ${\lambda_k}\in\Lambda$ such that $x_{k+1}= f_{\lambda_k}(x_k)$.
\end{dfn}
\begin{dfn}
Let $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ be an IFS. For the given $\delta>0$, the sequence ${\{x_k\}}_{k\in\Bbb Z}$ is called a $\delta-${\emph{\textbf{chain}}} for IFS $\mathcal{F}$ if for every $k\in\Bbb Z$ there exists ${\lambda_k}\in\Lambda$ such that $d\Big(x_{k+1},f_{\lambda_k}(x_k)\Big)\leq\delta$.
\end{dfn}
\begin{dfn}
Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ is an IFS. We say that $\mathcal{F}$ has the {\emph{\textbf{shadowing property}}} if for every $\epsilon>0$ there exists $\delta>0$ such that for every $\delta$-chain ${\{x_k\}}_{k\in\Bbb Z}$ there exists a chain ${\{y_k\}}_{k\in\Bbb Z}$ with $d(x_{k}, y_{k})\leq\epsilon$ for every $k\in\Bbb Z$. It is sometimes said that the chain ${\{y_k\}}_{k\in\Bbb Z}$ ($\epsilon$)-shadows the $\delta$-chain ${\{x_k\}}_{k\in\Bbb Z}$.
\end{dfn}
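To illustrate the definition numerically, consider (as our own toy example, not used in the sequel) the affine IFS $\{x/2,\;x/2+1/2\}$ on $[0,1]$, whose maps are contractions with ratio $1/2$; it falls in the contracting case covered by \cite{Bielecki1999,Glavan2009}. For a chain indexed by $k\in\Bbb N$, the true chain started at $x_0$ with the same sequence of indices ($\epsilon$)-shadows any $\delta$-chain with $\epsilon=\delta/(1-1/2)=2\delta$. The following Python sketch checks this bound.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]   # two contractions on [0, 1]
delta, n_steps = 1e-3, 50

# Build a delta-chain: apply a randomly chosen map, then perturb by at most delta.
lam = rng.integers(0, 2, size=n_steps)                # chosen indices lambda_k
x = [0.3]
for k in range(n_steps):
    x.append(np.clip(maps[lam[k]](x[k]) + rng.uniform(-delta, delta), 0.0, 1.0))

# The true chain with the same indices, started at x_0, shadows the delta-chain.
y = [x[0]]
for k in range(n_steps):
    y.append(maps[lam[k]](y[k]))

print(max(abs(a - b) for a, b in zip(x, y)))          # stays below 2 * delta
\end{verbatim}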
Now suppose $M$ is a $C^{\infty}$ smooth m-dimensional closed (that is, compact and boundaryless) manifold, and $r$ is a Riemannian metric on $M$. We consider the space of homeomorphisms on $M$ with the metric $\rho_{0}$ defined as follows:\\
if $f$ and $g$ are homeomorphisms on $M$, we define
$$\rho_0(f, g)= Max\bigg\{r\Big(f(x), g(x)\Big),\,r\Big(f^{-1}(x),g^{-1}(x)\Big);\,\,for\,\,all\,\,x\in M\bigg\}$$
This space is denoted by {\emph{\textbf{Homeo($M$)}}}. In this paper, IFSs are subsets of $Homeo(M)$.
\begin{dfn}
Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ and $\mathcal{G}=\Big\{g_{\overline\lambda},\,M\,: {\overline\lambda}\in{\overline\Lambda}\Big\}$ are two IFSs. We define a distance between the two IFSs as follows:\\
If $\mathcal{F}=\mathcal{G}$, then put ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)=0$\\
If $\mathcal{F}\neq\mathcal{G}$, then
$${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)= Max\Big\{\rho_0(f_{\lambda}, g_{\overline\lambda}):\quad for\;all\;f_{\lambda}\in\mathcal{F}\; and\; g_{\overline{\lambda}}\in\mathcal{G}\Big\}$$
\end{dfn}
\begin{dfn}
Let $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ be an IFS. We say that the IFS $\mathcal{F}$ is {\emph{\textbf{topologically stable}}} if for every given $\epsilon>0$ there exists $\delta>0$ such that if $\mathcal{G}=\Big\{g_{\lambda},\,M\,: \lambda\in{\Lambda}\Big\}$ is an IFS with ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)<\delta$, then for each sequence $\sigma=\Big\{\ldots,\lambda_{-1},\lambda_{0},\lambda_{1},\ldots\Big\}$ $\Big($or $\sigma=\Big\{\lambda_{1},\lambda_{2},\ldots\Big\}$$\Big)$ there exists a continuous mapping $h_{\sigma}$ of $M$ onto $M$ with the following properties:\\
$\left \{\begin{array}{lll}
i) & F_{\sigma_n}oh_{\sigma}= h_{\sigma}o G_{\sigma_n}, & \forall n\in \Bbb{Z}, \\
ii) & r\Big(x, h_{\sigma}(x)\Big)<\epsilon, & \forall x\in M.
\end{array}\right.$
\end{dfn}
Now we are going to find the relation between topological stability and the shadowing property in IFSs. In \cite{pilyugin1999shadowing}, Pilyugin proved that a topologically stable homeomorphism has the SP. In this paper, we prove the analogous statement for IFSs using his methods; but since we deal with a family of functions, the proof is harder and more involved.\\
In the proof of the following lemma, the method of proving the SP on a subset $N$ of $M$ is slightly different from Pilyugin's method in Part (a) of Lemma 1.1.1 in \cite{pilyugin1999shadowing}, because of the presence of chains.
\begin{lem}\label{finite shadowing}
Consider an IFS $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$. Suppose that $\mathcal{F}$ has the finite shadowing property on $N$ ($N\subset M$); that is, for every given $\epsilon>0$ there exists $\delta>0$ such that for every set $\Big\{x_0,\ldots,x_m\Big\}\subset N$ satisfying the inequality $r\Big(x_{k+1}, f_{\lambda_{k}}(x_k)\Big)\leq \delta$ for every $k=0,\ldots,m-1$, there exists a chain $\{y_k\}\subset M$ such that $r(x_{k}, y_{k})<\epsilon$ for every $k=0,\ldots,m-1$. Then $\mathcal{F}$ has the shadowing property on $N$.
\end{lem}
\begin{pf}
For a given $\epsilon>0$, by the uniform continuity of $f_{\lambda}$, $\lambda\in \Lambda$, there exists $\delta_\lambda>0$ such that for every $x,y\in M$, if $r(x,y)<\delta_\lambda$ then $r\Big(f_\lambda(x), f_\lambda(y)\Big)<\frac{\epsilon}{4}$. Put $\delta_0=\min\{\delta_\lambda\,:\,\lambda\in \Lambda\}$. Thus, for every $x,y\in M$ and for each $\lambda\in \Lambda$, $r\Big(f_\lambda(x), f_\lambda(y)\Big)<\frac{\epsilon}{4}$ whenever $r(x,y)<\delta_0$. For the obtained $\delta_0$, due to the finite shadowing property of $\mathcal{F}$, there exists $\delta>0$ such that for every set $\Big\{x_0,\ldots,x_m\Big\}\subset N$ satisfying the inequality $r\Big(x_{k+1}, f_{\lambda_{k}}(x_k)\Big)\leq \delta$ for every $k=0,\ldots,m-1$, there exists a chain $\{y_k\}\subset M$ such that $r(x_{k}, y_{k})<\delta_0$ for every $k=0,\ldots,m-1$. We can assume that $\delta<\frac{\epsilon}{4}$. Now, suppose that
$\xi= {\{x_k\}}_{k\in\Bbb Z}\subset N$ is a $\delta$-chain for the IFS $\mathcal{F}$. Let $m>0$ be a fixed number. Consider the set $\{x_k\,:\,\,-m\leq k\leq m\}$; since $r\Big(x_{k+1}, f_{\lambda_{k}}(x_k)\Big)\leq \delta$ for every $k$ with $-m\leq k\leq m-1$, there exists a chain ${\{{y^\prime}_{m,k}\}}_{k\in\Bbb Z}\subset M$ such that
\begin{eqnarray}
r\Big(x_k, {y^\prime}_{m,k}\Big)<\frac{\epsilon}{4}\hspace{2cm} \forall k,\hspace{0.5cm} -m\leq k\leq m-1.
\end{eqnarray}
Now, let $k$ be fixed. According to what was said, for each number $m$ there exists ${y^\prime}_{m,k}\in M$. Consider the sequence ${\{{y^\prime}_{m,k}\}}_{m=1}^\infty\subset M$. Since $M$ is a compact metric space, this sequence has a limit point $y_k$ in this space; $y_k\in M$. We claim that ${\{y_k\}}_{k\in\Bbb Z}$ is a chain for the IFS $\mathcal{F}$ and that $r(x_k,y_k)<\frac{\epsilon}{4}$ for each $k\in\Bbb Z$. Since the metric $r$ and the maps $f_{\lambda}$, $\lambda\in \Lambda$, are continuous, for every $k\in\Bbb Z$ we have
\begin{eqnarray*}
\begin{array}{ll}
r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big) &= \lim_{m\rightarrow \infty} r\Big({y^\prime}_{m,k+1}, f_{\lambda_{k}}({y^\prime}_{m,k})\Big)\\
&\leq \lim_{m\rightarrow \infty} r\Big({y^\prime}_{m,k+1},x_{k+1}\Big) + \lim_{m\rightarrow \infty} r\Big(x_{k+1}, f_{\lambda_{k}}({y^\prime}_{m,k})\Big).
\end{array}
\end{eqnarray*}
Passing to the limit as $m\rightarrow \infty$ in the inequality (1), we get that
\begin{eqnarray}
r(x_k,y_k)<\frac{\epsilon}{4},\hspace{2cm} \forall k\in\Bbb Z.
\end{eqnarray}
Since $r$ is a metric, we can write
\begin{eqnarray*}
\begin{array}{ll}
\lim_{m\rightarrow \infty} r\Big(x_{k+1}, f_{\lambda_{k}}({y^\prime}_{m,k})\Big)\leq &
\lim_{m\rightarrow \infty} r\Big(x_{k+1}, f_{\lambda_{k}}(x_k)\Big)\\
& +\lim_{m\rightarrow \infty} r\Big(f_{\lambda_{k}}(x_k),f_{\lambda_{k}}(y_k)\Big)\\
& +\lim_{m\rightarrow \infty} r\Big(f_{\lambda_{k}}(y_k),f_{\lambda_{k}}({y^\prime}_{m,k})\Big).
\end{array}
\end{eqnarray*}
We know that $\xi$ is a $\delta$-chain, the function $f_{\lambda_{k}}(\lambda_{k}\in \Lambda)$ is uniformly continuous, $\delta<\frac{\epsilon}{4}$, and the sequence ${\{{y^\prime}_{m,k}\}}_{m=1}^\infty$ is convergent to $y_k$. Regarding these facts, we will get the following relation from the latter inequality:
\begin{eqnarray}
\lim_{m\rightarrow \infty} r\Big(x_{k+1}, f_{\lambda_{k}}({y^\prime}_{m,k})\Big)\leq
\frac{\epsilon}{4}+\frac{\epsilon}{4}+\frac{\epsilon}{4}=\frac{3\epsilon}{4}.
\end{eqnarray}
Also from the relation (2), we have
\begin{eqnarray}
\lim_{m\rightarrow \infty} r\Big({y^\prime}_{m,k+1}, x_{k+1}\Big)=r\Big(y_{k+1},x_{k+1}\Big)<\frac{\epsilon}{4}.
\end{eqnarray}
Thus, for each $k\in\Bbb Z$, we obtain the following relation from the relations (3) and (4):
$$r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big)\leq \epsilon.$$
From the construction of the sequence ${\{y_k\}}_{k\in\Bbb Z}$, we see that if $\epsilon>0$ is taken very small, then the obtained sequence ${\{y_k\}}_{k\in\Bbb Z}$ works for every arbitrary value $\epsilon>0$. Therefore, for this sequence ${\{y_k\}}_{k\in\Bbb Z}$, the previous relation holds for every arbitrary value $\epsilon>0$. Hence $r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big)=0$, so $y_{k+1}= f_{\lambda_{k}}(y_k)$; this means that ${\{y_k\}}_{k\in\Bbb Z}$ is a chain for the IFS $\mathcal{F}$, and, by relation (2), $r(x_k,y_k)<\epsilon$ for every $k\in\Bbb Z$, so our claim is proved. Therefore, for a given $\epsilon>0$ we have found $\delta>0$ such that for every $\delta$-chain ${\{x_k\}}_{k\in\Bbb Z}$ there exists a chain ${\{y_k\}}_{k\in\Bbb Z}$ with $r(x_k,y_k)<\epsilon$; that is, $\mathcal{F}$ has the shadowing property on $N$. $\square$
\end{pf}
\begin{lem}\label{finite}
Assume $dim(M)\geq2$. Consider a finite collection $\Big\{(p_{i},q_{i})\in M\times M\,:\hspace{0.5cm}i=1,\ldots,k\Big\}$ such that
$$\left \{\begin{array}{lll}
i) & p_{i}\neq p_{j},\, q_{i}\neq q_{j} & for\,\, 1\leq i< j\leq k, \\
ii) & r\Big(p_{i},q_{i}\Big)<\delta, & for\,\,i=1,\ldots,k,\,with\,small\,positive\,\delta.
\end{array}\right.$$
Then, there exists a diffeomorphism $f$ of $M$ with the following properties:\\
$$\left \{\begin{array}{lll}
i) & \rho_0(f, id)<2\delta, & (here\;id\;is\,the\,identity\,mapping\,of\,M), \\
ii) & f(p_{i})=q_{i}, & for\,\,i=1,\ldots,k.
\end{array}\right.$$
\end{lem}
\begin{pf}
This is proved as Lemma 2.1.1 in \cite{pilyugin1999shadowing}. $\square$
\end{pf}
\begin{lem}\label{to find a set points}
Suppose $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain for the IFS $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ with the sequence $\sigma$. Consider an integer $m\geq 0$ and a number $\eta>0$. Then there exists a set of points $\Big\{y_0,\ldots,y_m\Big\}$ satisfying the following conditions:
\begin{enumerate}
\item for every $k$, $0\leq k\leq m$, $r(x_k,y_k)<\eta$,\\
\item for every $k$, $0\leq k\leq m-1$, and $\lambda_k \in \sigma$, $r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big)<3\delta$,\\
\item for every $i,j$, $0\leq i<j\leq m$, $y_i\neq y_j$.
\end{enumerate}
\end{lem}
\begin{pf}
We prove the statement by induction on $m$. If $m=0$, then it suffices to consider the singleton set $\Big\{y_0= x_0\Big\}$; then $r(x_0,y_0)= r(x_0,x_0)=0<\eta$, so the lemma is true for $m=0$.\\
Suppose that the statement is true for $m-1$. Now we prove the lemma for $m$.\\
For a given $\eta>0$, we may assume that $\eta<\delta$: if $\eta\geq \delta$, then there exist $q\in\Bbb N$ and $p\in[0,\delta)$ such that $\eta=q\delta + p$; if $0<p<\delta$ then replace $\eta$ by $p$, and if $p=0$ then take the new $\eta$ to be less than $\eta\over q$. Since $\mathcal{F}\subset Homeo(M)$ and $M$ is compact, each function $f_\lambda$, $\lambda\in\Lambda$, is uniformly continuous. Consequently, for $\delta$ and $f_\lambda$ there exists $\delta_{\lambda}(\delta)>0$ such that for every $x,y\in M$ with $r(x,y)<\delta_{\lambda}$ we have $r\Big(f_{\lambda}(x), f_{\lambda}(y)\Big)<\delta$. Put $\delta_0 = \min\{\delta_{\lambda}\,\,:\,\, \lambda\in\Lambda\}$. We can assume $\delta_0<\eta$, that is, $\delta_{0}\in (0, \eta)$, because if $\delta_0\geq\eta$, then it suffices to take a new $\delta_0$, say $\delta^{\prime}_0$, less than $\eta$: for every $x,y\in M$ with $r(x,y)<\delta^{\prime}_0$ we have $r(x,y)<\delta^{\prime}_0<\eta\leq\delta_0$, so, by the uniform continuity assumption, $r\Big(f_{\lambda}(x), f_{\lambda}(y)\Big)<\delta$ for every $\lambda\in\Lambda$. By the induction assumption, we can find a set of points $\Big\{y_0,\ldots,y_{m-1}\Big\}$ such that
\begin{enumerate}
\item for every $k$, $0\leq k\leq m-1$, $r(x_k,y_k)<\delta_0$,\\
\item for every $k$, $0\leq k\leq m-2$, and $\lambda_k \in \sigma$, $r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big)<3\delta$,\\
\item for every $i,j$, $0\leq i<j\leq m-1$, $y_i\neq y_j$.
\end{enumerate}
Since the functions of the IFS $\mathcal{F}$ are uniformly continuous, we can choose a point $y_m$ such that $r(x_m,y_m)<\delta_0$ and also $y_m\neq y_i$ for every $i=0,\ldots,m-1$. Now, for $\lambda_{m-1}\in\sigma$, we have
\begin{eqnarray*}
\begin{array}{ll}
r\Big(f_{\lambda_{m-1}}(y_{m-1}), y_m\Big) & < r\Big(f_{\lambda_{m-1}}(y_{m-1}), x_m\Big) + r(x_m,y_m)\\
& < r\Big(f_{\lambda_{m-1}}(y_{m-1}), f_{\lambda_{m-1}}(x_{m-1})\Big)\\
& + r\Big(f_{\lambda_{m-1}}(x_{m-1}), x_m\Big) + r(x_m,y_m).
\end{array}
\end{eqnarray*}
We know that the function $f_\lambda$, $\lambda\in\Lambda$, is uniformly continuous and $r(y_{m-1},x_{m-1})\linebreak[2] <\delta_0$, so $r\Big(f_{\lambda_{m-1}}(y_{m-1}), f_{\lambda_{m-1}}(x_{m-1})\Big)<\delta$. Also, $\lambda_{m-1}\in\sigma$, $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain, $r(x_m,y_m)<\delta_0$, and $\delta_0<\eta<\delta$; thus the previous relation gives $r\Big(f_{\lambda_{m-1}}(y_{m-1}), y_m\Big)<\delta+\delta+\delta=3\delta$. Hence the induction step for $m$ is proved. $\square$
\end{pf}
The method of constructing the required IFS in the proof of the following lemma is somewhat involved.
\begin{lem}\label{existence another IFS}
Suppose $dim M\geq2$ and let $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ be an IFS that $\mathcal{F}\subset Homeo(M)$. Let $m\in\Bbb N$ and $\Delta>0$ be given. Then there exists $\delta>0$ with the following property:\\
If $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain for IFS $\mathcal{F}$ with the sequence $\sigma$, then there exists IFS $\mathcal{G}_\sigma=\Big\{g_{\lambda_{k}},\,M\,: \lambda_{k}\in\sigma\Big\}\subset Homeo(M)$ such that ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}_\sigma\Big)<\Delta$. Also there exists a chain ${\{y_k\}}_{k\in\Bbb Z}$ for IFS $\mathcal{G}_\sigma$ that $r(x_k,y_k)<\Delta$ for $k=0,\ldots,m$.
\end{lem}
\begin{pf}
Since $\mathcal{F}\subset Homeo(M)$, the function $f_{\lambda}^{-1}$, for every $\lambda\in\Lambda$, is continuous and consequently uniformly continuous on the compact space $M$. Thus, for a given $\Delta>0$ and for each $\lambda\in\Lambda$, there exists $\delta_{\lambda}(\Delta)>0$ such that for every $x, y\in M$ with
$r(x, y)<\delta_{\lambda}$ we have $r\Big(f_{\lambda}^{-1}(x), f_{\lambda}^{-1}(y)\Big)<\Delta$. Put
$\delta_0= \min\{{\Delta\over 2},\,\delta_{\lambda};\,\,\lambda\in\Lambda\}$.
Then, put $\delta= {\delta_0\over 6}$ and suppose that $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain for IFS $\mathcal{F}$ with the sequence $\sigma$. By Lemma \ref{to find a set points}, there exists a finite sequence $\Big\{y_0,\ldots,y_m\Big\}$ such that
\begin{enumerate}
\item for every $k$, $0\leq k\leq m$, $r(x_k,y_k)<\Delta$,\\
\item for every $k$, $0\leq k\leq m-1$, and $\lambda_k \in \sigma$, $r\Big(y_{k+1}, f_{\lambda_{k}}(y_k)\Big)<3\delta={\delta_0\over 2}$,\\
\item for every $i,j$, $0\leq i<j\leq m$, $y_i\neq y_j$.
\end{enumerate}
Now, for every $k$, $0\leq k\leq m-1$, and $\lambda_{k}\in\sigma$, consider the singleton set
$\bigg\{\Big(f_{\lambda_{k}}(y_{k}), y_{k+1}\Big)\bigg\}$.
By condition (2) and Lemma \ref{finite}, there exists a diffeomorphism $h_{\lambda_{k}}$ of $M$ such that $\rho_0\Big(h_{\lambda_{k}}, id\Big)<\delta_0$ and also $h_{\lambda_{k}}\Big(f_{\lambda_{k}}(y_k)\Big)= y_{k+1}$. Now, for every $k$, $k=0,\ldots,m-1$, $\lambda_{k}\in\sigma$, and $\lambda\in\Lambda$, put $g_{{\lambda_{k}},\lambda}=h_{\lambda_{k}} o f_{\lambda}$. We prove that $\rho_0\Big(g_{{\lambda_{k}},\lambda}, f_{\lambda}\Big)<\Delta$.\\
Suppose $x\in M$. We have
\begin{eqnarray*}
\begin{array}{ll}
r\Big(g_{{\lambda_{k}},\lambda}(x), f_{\lambda}(x)\Big)
& = r\Big((h_{\lambda_{k}} o f_{\lambda})(x), f_{\lambda}(x)\Big)\\
& = r\bigg(h_{\lambda_{k}}\Big(f_{\lambda}(x)\Big), f_{\lambda}(x)\bigg)\\
& = r\bigg(h_{\lambda_{k}}\Big(f_{\lambda}(x)\Big), id\Big(f_{\lambda}(x)\Big)\bigg)\\
& <\rho_0\Big(h_{\lambda_{k}}, id\Big)<\delta_0<\Delta,
\end{array}
\end{eqnarray*}
and
\begin{eqnarray*}
\begin{array}{ll}
r\Big(g_{{\lambda_{k}},\lambda}^{-1}(x), f_{\lambda}^{-1}(x)\Big)
& = r\Big((f_{\lambda}^{-1} o h_{\lambda_{k}}^{-1})(x), f_{\lambda}^{-1}(x)\Big)\\
& = r\bigg(f_{\lambda}^{-1}\Big(h_{\lambda_{k}}^{-1}(x)\Big), f_{\lambda}^{-1}(x)\bigg)<\Delta,
\end{array}
\end{eqnarray*}
because $r\Big( h_{\lambda_{k}}^{-1}(x), x\Big)= r\Big( h_{\lambda_{k}}^{-1}(x), I^{-1}(x)\Big) <\rho_0\Big(h_{\lambda_{k}}, id\Big)<\delta_0$ and the function $f_{\lambda}^{-1}$ is uniformly continuous.\\
Therefore,\\
$\rho_0\Big(g_{{\lambda_{k}},\lambda}, f_{\lambda}\Big)= Max\bigg\{r\Big(g_{{\lambda_{k}},\lambda}(x), f_{\lambda}(x)\Big),\, r\Big(g_{{\lambda_{k}},\lambda}^{-1}(x), f_{\lambda}^{-1}(x)\Big);\,\,for\,\,all\,\, x\in M\linebreak[3]\bigg\}<\Delta$\\
Consider the IFS $\mathcal{G}_\sigma$ as
$\mathcal{G}_{\sigma}= \Big\{g_{{\lambda_{k}},\lambda},\,M\,:\,\,\lambda_{k}\in\sigma,\,\lambda\in\Lambda\Big\}$.
Clearly, for every $\lambda_{k}\in\sigma$ and $\lambda\in\Lambda$, $g_{{\lambda_{k}},\lambda}\in Homeo(M)$, since it is a composition of homeomorphisms; so $\mathcal{G}_{\sigma}\subset Homeo(M)$, and, by the above, ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}_\sigma\Big)<\Delta$. Now we extend the set $\Big\{y_0,\ldots,y_m\Big\}$ so that ${\{y_k\}}_{k\in\Bbb Z}$ is a chain for the IFS $\mathcal{G}_\sigma$. It is sufficient to put $y_{k+1}=g_{{\lambda_{k}},{\lambda_{k}}}(y_k)$ for every $k\geq m$ and $y_{k}=g_{{\lambda_{k}},{\lambda_{k}}}^{-1}(y_{k+1})$ for every $k<0$, where $\lambda_{k}\in\sigma$.
Thus, ${\{y_k\}}_{k\in\Bbb Z}$ is a chain for IFS $\mathcal{G}_\sigma$ such that for every $k$, $k=0,\ldots,m$, $r(x_k,y_k)<\Delta$ by condition(1). So, the proof is completed. $\square$
\end{pf}
\begin{thm}\label{Main}
Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ is an IFS with $dim M\geq 2$. If $\mathcal{F}$ is topologically stable, then $\mathcal{F}$ has the shadowing property.
\end{thm}
\begin{pf}
Let $\epsilon>0$ be given. For $\epsilon\over 2$, since $\mathcal{F}$ is topologically stable, there exists $\Delta>0$ with the above-mentioned properties.\\
We may assume that $\Delta<{\epsilon\over 2}$, because if $\Delta\geq{\epsilon\over 2}$, then there exist $q\in\Bbb N$ and $p\in[0,{\epsilon\over 2})$ such that $\Delta=q\cdot{\epsilon\over 2}+ p$; consequently, we can replace $\Delta$ by $p$ or by a value less than ${\Delta\over q}$.
For $\Delta$ and an arbitrary natural number $m$, using Lemma \ref{existence another IFS}, there exists $\delta>0$ such that for every $\delta$-chain $\xi={\{x_k\}}_{k\in\Bbb Z}$ of the IFS $\mathcal{F}$ with the sequence $\sigma$ there exists an IFS $\mathcal{G}_{\sigma}=\Big\{g_{{\lambda_{k}},\lambda},\,M\,:\,\,\lambda_{k}\in\sigma,\,\lambda\in\Lambda\Big\}\subset Homeo(M)$ with ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}_\sigma\Big)<\Delta$, and also there exists a chain ${\{y_k\}}_{k\in\Bbb Z}$ for the IFS $\mathcal{G}_\sigma$ with $r(x_k,y_k)<\Delta$ for all $k$, $0\leq k \leq m$.
Since ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}_\sigma\Big)<\Delta$, for this $\sigma$, by the topological stability of the IFS $\mathcal{F}$, there exists a continuous mapping $h_\sigma$ of $M$ onto $M$ such that\\
$\left \{\begin{array}{lll}
i) & F_{\sigma_n}oh_{\sigma}= h_{\sigma}o G_{\sigma_n}, & \forall n\in \Bbb{Z}, \\
ii) & r\Big(x, h_{\sigma}(x)\Big)<{\epsilon\over 2}, & \forall x\in M.
\end{array}\right.$\\
Now, for every $k\in\Bbb Z$ put $z_k=h_{\sigma}(y_k)$. We prove that the sequence ${\{z_k\}}_{k\in\Bbb Z}$ is a chain for IFS $\mathcal{F}$.\\
Since ${\{y_k\}}_{k\in\Bbb Z}$ is a chain for IFS $\mathcal{G}_\sigma$ with the sequence\\ $\Big\{\ldots, (\lambda_{-1},\lambda_{-1}), (\lambda_{0},\lambda_{0}), (\lambda_{1},\lambda_{1}), \ldots\Big\}$ where $\lambda_{k}\in\sigma$, thus for every $k\in\Bbb Z$ we have $y_{k+1}= g_{{\lambda_{k}},{\lambda_{k}}}(y_k)$.
Considering this relation and the part$(i)$ of the above relation, we see that
\begin{eqnarray*}
\begin{array}{ll}
z_{1} & = h_{\sigma}(y_{1})= h_{\sigma}\Big(g_{{\lambda_{0}},{\lambda_{0}}}(y_{0})\Big)= f_{\lambda_{0}}\Big(h_{\sigma}(y_{0})\Big)= f_{\lambda_{0}}(z_{0})\\
z_{2} & = h_{\sigma}(y_{2})= h_{\sigma}\Big(g_{{\lambda_{1}},{\lambda_{1}}}(y_{1})\Big)=
h_{\sigma}\Big((g_{{\lambda_{1}},{\lambda_{1}}} o g_{{\lambda_{0}},{\lambda_{0}}})(y_{0})\Big)\\
& = (f_{\lambda_{1}} o f_{\lambda_{0}})\Big(h_{\sigma}(y_{0})\Big)
= f_{\lambda_{1}}\Big(f_{\lambda_{0}}(h_{\sigma}(y_{0}))\Big)
= f_{\lambda_{1}}\Big(f_{\lambda_{0}}(z_0)\Big)
= f_{\lambda_{1}}(z_{1})\\
\vdots & \\
z_{k+1} & = h_{\sigma}(y_{k+1})= h_{\sigma}\Big(G_{\sigma_k}(y_{0})\Big)= F_{\sigma_k}\Big(h_{\sigma}(y_{0})\Big)
= F_{\sigma_k}(z_{0})\\
& = (f_{\lambda_k}of_{\lambda_{k-1}}o\ldots of_{\lambda_1}of_{\lambda_0})(z_0)
= (f_{\lambda_k}of_{\lambda_{k-1}}o\ldots of_{\lambda_1})(f_{\lambda_0}(z_0))\\
& = (f_{\lambda_k}of_{\lambda_{k-1}}o\ldots of_{\lambda_1})(z_1)
=\ldots = f_{\lambda_k}(z_k)
\end{array}
\end{eqnarray*}
The relation $z_{k+1}= f_{\lambda_k}(z_k)$ shows that ${\{z_k\}}_{k\in\Bbb Z}$ is a chain for IFS $\mathcal{F}$.
Also, for every $k$, $k=0,\ldots,m$, we obtain that
\begin{eqnarray*}
\begin{array}{ll}
r(x_k, z_k) & = r\Big(x_k, h_{\sigma}(y_k)\Big)\\
& \leq r(x_k, y_k) + r\Big(y_k, h_{\sigma}(y_k)\Big)\\
& \leq {\epsilon\over 2}+ {\epsilon\over 2}= \epsilon
\end{array}
\end{eqnarray*}
Thus, for the given $\epsilon>0$ we have found $\delta>0$ such that if $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain for the IFS $\mathcal{F}$, then there exists a chain ${\{z_k\}}_{k\in\Bbb Z}$ for $\mathcal{F}$ with
$r(x_k, z_k) \leq \epsilon$ for every $k=0,\ldots,m$. Therefore, by Lemma \ref{finite shadowing}, $\mathcal{F}$ has the shadowing property. $\square$
\end{pf}
\section{The converse of the result of the previous section}
In the previous section, we proved that every topologically stable IFS has the SP. In \cite{pilyugin1999shadowing}, Pilyugin proved the converse of the statement ``a topologically stable homeomorphism has the SP'' by adding the expansiveness property. We likewise define the expansiveness property for an IFS and then use Pilyugin's method to prove the converse of the result of the previous section via Lemmas \ref{expansive} and \ref{uniq} and Theorem \ref{secondary Main}.
\begin{dfn}
Consider an IFS $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$. Assume that a sequence
$\sigma=\Big\{\ldots,\lambda_{-1},\lambda_{0},\lambda_{1},\ldots\Big\}$ is given. We say that $\mathcal{F}$ is {\emph{\textbf{expansive relative to $\sigma$\/}}} if there exists $\Delta>0$ such that for any two points $x$ and $y$ in $M$ with $r\Big(F_{\sigma_n}(x), F_{\sigma_n}(y)\Big)\leq\Delta$ for each $n\in\Bbb Z$, we have $x=y$. The number $\Delta$ is called the {\emph{\textbf{constant expansive relative to $\sigma$\/}}}.
\end{dfn}
\begin{lem}\label{expansive}
Suppose that IFS $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ is expansive relative to
$\sigma=\Big\{\ldots,\lambda_{-1},\lambda_{0},\lambda_{1},\ldots\Big\}$ with constant expansive $\eta$. Let $\mu>0$ be given. Then there exists $N\geq 1$ such that if $x, y\in M$ satisfy
$r\Big(F_{\sigma_n}(x), F_{\sigma_n}(y)\Big)\leq\eta$ for each $n$ with $|n|<N$, then $r(x, y)<\mu$.
\end{lem}
\begin{pf}
Let $\mu>0$ be given. Arguing by contradiction, assume that there exists no $N\geq 1$ satisfying the properties of the lemma; thus for each $N\geq 1$ there exist points $x_N$ and $y_N$ such that
$r\Big(F_{\sigma_n}(x_N), F_{\sigma_n}(y_N)\Big)\leq\eta$ for all $n$ with $|n|<N$ and also
$r\Big(x_N, y_N\Big)\geq\mu$. Since $M$ is a compact metric space, we can choose subsequences ${\Big\{x_{N_i}\Big\}}_{i=1}^{\infty}$ and ${\Big\{y_{N_i}\Big\}}_{i=1}^{\infty}$ that converge in $M$. Therefore, there exist points $x$ and $y$ such that $x_{N_i}\rightarrow x$ and $y_{N_i}\rightarrow y$ as $i\rightarrow \infty$.\\
Since $\mathcal{F}\subset Homeo(M)$, the function $F_{\sigma_n}$, for every $n$, is a composition of continuous functions and is consequently itself continuous. Therefore, we have $F_{\sigma_n}(x_{N_i})\rightarrow F_{\sigma_n}(x)$ and $F_{\sigma_n}(y_{N_i})\rightarrow F_{\sigma_n}(y)$ as $i\rightarrow \infty$. Thus, for a given $\epsilon>0$, there exist $k_1, k_2\in\Bbb N$ such that for every
$i\geq k_1$, $r\Big(F_{\sigma_n}(x_{N_i}), F_{\sigma_n}(x)\Big)<{\epsilon\over 2}$,
and for every $i\geq k_2$, $r\Big(F_{\sigma_n}(y_{N_i}), F_{\sigma_n}(y)\Big)<{\epsilon\over 2}$.
Consequently, both relations above also hold for every $i\geq k= \max\{k_1, k_2\}$.
We have
\begin{eqnarray*}
\begin{array}{ll}
r\Big(F_{\sigma_n}(x), F_{\sigma_n}(y)\Big) & \leq r\Big( F_{\sigma_n}(x), F_{\sigma_n}(x_{N_i})\Big)+
r\Big(F_{\sigma_n}(x_{N_i}), F_{\sigma_n}(y_{N_i})\Big)\\
& + r\Big(F_{\sigma_n}(y_{N_i}), F_{\sigma_n}(y)\Big).
\end{array}
\end{eqnarray*}
Now, for sufficiently large $N_i$, letting $i\rightarrow \infty$, we see that
$$r\Big(F_{\sigma_n}(x), F_{\sigma_n}(y)\Big)\leq{\epsilon\over 2}+ \eta+ {\epsilon\over 2}=\epsilon+ \eta.$$
Clearly, the previous relation is true for each $n\in\Bbb Z$. Since $\epsilon>0$ is an arbitrary small number, for each $n\in\Bbb Z$ we have $r\Big(F_{\sigma_n}(x), F_{\sigma_n}(y)\Big)\leq \eta$. Since $\mathcal{F}$ is expansive relative to $\sigma$ with constant expansive $\eta$, we obtain $x= y$ and so $r(x, y)=0$.
So
\begin{eqnarray*}
\begin{array}{ll}
r\Big(x_{N_i}, y_{N_i}\Big)& < r\Big(x_{N_i}, x\Big)+ r(x, y)+ r\Big(y, y_{N_i}\Big)\\
& = r\Big(x_{N_i}, x\Big)+ r\Big(y, y_{N_i}\Big).
\end{array}
\end{eqnarray*}
As $i\rightarrow \infty$, by the convergence of the subsequences ${\Big\{x_{N_i}\Big\}}_{i=1}^{\infty}$ and ${\Big\{y_{N_i}\Big\}}_{i=1}^{\infty}$ to $x$ and $y$ respectively, for a given $\epsilon>0$ ($\epsilon<\mu$) there exists $k_3\in\Bbb N$ such that for every $i\geq k_3$, $r\Big(x_{N_i}, x\Big)< {\epsilon\over 2}$ and
$r\Big(y_{N_i}, y\Big)< {\epsilon\over 2}$. Thus, for sufficiently large $N_i$, the previous relation becomes
$$r\Big(x_{N_i}, y_{N_i}\Big)< {\epsilon\over 2}+ {\epsilon\over 2}= \epsilon< \mu.$$
This contradicts the assumption that $r\Big(x_{N_i}, y_{N_i}\Big)\geq\mu$, and the assertion is proved. $\square$
\end{pf}
\begin{dfn}
We say that an IFS $\mathcal{F}$ has the {\emph{\textbf{shadowing uniqueness property relative to $\sigma$}}} if there exists a constant $\epsilon>0$ such that for every $\delta$-chain $\xi={\{x_k\}}_{k\in\Bbb Z}$ with the sequence $\sigma$ there exists exactly one chain ${\{y_k\}}_{k\in\Bbb Z}$ with the same sequence $\sigma$ such that
$r\Big(x_{k}, y_{k}\Big)< \epsilon$ for all $k\in\Bbb Z$.
\end{dfn}
In the following lemma, we show that if an IFS has the SP and is expansive relative to a given $\sigma$, then it has the shadowing uniqueness property relative to $\sigma$.
\begin{lem}\label{uniq}
Suppose that $\mathcal{F}$ is an expansive IFS relative to $\sigma=\Big\{\ldots,\lambda_{-1}\linebreak[2],\lambda_{0},\lambda_{1},\ldots\Big\}$ with constant expansive $\eta$. Also $\mathcal{F}$ has the shadowing property. Then $\mathcal{F}$ has the shadowing uniqueness property relative to $\sigma$.
\end{lem}
\begin{pf}
Put $\epsilon= {\eta\over 2}$. Assume that $\xi={\{x_k\}}_{k\in\Bbb Z}$ is a $\delta$-chain with the given sequence $\sigma$. By considering the proof of Theorem (3.4) in \cite{FatehiNia2016various}, we see that there exists a chain ${\{y_k\}}_{k\in\Bbb Z}$ with the same sequence $\sigma$ such that $r\Big(x_{k}, y_{k}\Big)< \epsilon$ for all $k\in\Bbb Z$.\\
Now, we prove uniqueness. Let ${\{y_k\}}_{k\in\Bbb Z}$ and ${\{z_k\}}_{k\in\Bbb Z}$ be two chains with the given sequence $\sigma$ such that for every $k\in\Bbb Z$, $r\Big(x_{k}, y_{k}\Big)< \epsilon$ and $r\Big(x_{k}, z_{k}\Big)< \epsilon$. Thus, for each $k\in\Bbb Z$, we obtain the following relation
\begin{eqnarray*}
\begin{array}{ll}
r\Big(F_{\sigma_k}(y_0), F_{\sigma_k}(z_0)\Big) & = r\Big(y_{k}, z_{k}\Big)\\
& \leq r\Big(y_{k}, x_{k}\Big)+ r\Big(x_{k}, z_{k}\Big)\\
& <\epsilon+ \epsilon= 2\epsilon= \eta.
\end{array}
\end{eqnarray*}
Since $\mathcal{F}$ is expansive relative to $\sigma$ with expansive constant $\eta$, we have $y_0=z_0$. Since $\mathcal{F}$ is an IFS, for each $k\in\Bbb Z$, $F_{\sigma_k}$ is a function and so $F_{\sigma_k}(y_0)= F_{\sigma_k}(z_0)$, that is, $y_{k}= z_{k}$. Therefore, the chain ${\{y_k\}}_{k\in\Bbb Z}$ is unique and the statement is proved. $\square$
\end{pf}
Now, we prove the following technical theorem, whose proof is rather involved and delicate. This theorem plays a critical role in proving the converse of Theorem \ref{Main} under additional conditions.
\begin{thm}\label{secondary Main}
Suppose that $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}\subset Homeo(M)$ is an expansive IFS relative to
$\sigma=\Big\{\ldots,\lambda_{-1},\lambda_{0},\lambda_{1},\ldots\Big\}$ with expansive constant $\eta$.
Also, $\mathcal{F}$ has the shadowing property. Then there exist $\epsilon>0$ with $3\epsilon<\eta$ and $\delta>0$ with the following property:\\
If $\mathcal{G}=\Big\{g_{\lambda},\,M\,: \lambda\in\Lambda\Big\}\subset Homeo(M)$ is an IFS with ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)<\delta$, then for the above $\sigma$ there exists a continuous function $h: M\rightarrow M$ such that:\\
$\left \{\begin{array}{lll}
i) & r\Big(G_{\sigma_k}(x), F_{\sigma_k}(h(x))\Big)<\epsilon, & \forall x\in M\,\,and\,\,\forall k\in\Bbb{Z}, \\
ii) & r\Big(x, h(x)\Big)<\epsilon, & \forall x\in M.
\end{array}\right.$\\
Moreover, if $\epsilon>0$ is sufficiently small, then the function $h$ is surjective and $F_{\sigma_k}\circ h= h\circ G_{\sigma_k}$ for every $k\in\Bbb{Z}$.
\end{thm}
\begin{pf}
We consider $\epsilon>0$ with the condition $3\epsilon<\eta$. Since $\mathcal{F}$ has the shadowing property, there exists $\delta>0$ such that every $\delta$-chain is $\epsilon$-shadowed by a chain. Now, we consider
$\mathcal{G}=\Big\{g_{\lambda},\,M\,: \lambda\in\Lambda\Big\}\subset Homeo(M)$ with ${\mathcal{D}}_0\Big(\mathcal{F}, \mathcal{G}\Big)<\delta$. Fix $x\in M$. Using the given $\sigma$, we form the sequence $\xi={\Big\{G_{\sigma_k}(x)\Big\}}_{k\in\Bbb Z}$. We claim that $\xi$ is a $\delta$-chain for the IFS $\mathcal{F}$. For every $k\in\Bbb Z$, we have
\begin{eqnarray*}
\begin{array}{ll}
r\bigg(G_{\sigma_{k+1}}(x), f_{\lambda_{k+1}}\Big(G_{\sigma_k}(x)\Big)\bigg)
& = r\bigg(g_{\lambda_{k+1}}\Big(G_{\sigma_k}(x)\Big), f_{\lambda_{k+1}}\Big(G_{\sigma_k}(x)\Big)\bigg)\\
& < \rho_0\Big(g_{\lambda_{k+1}}, f_{\lambda_{k+1}}\Big)< \delta,
\end{array}
\end{eqnarray*}
and thus the claim is proved. By Lemma \ref{uniq} and according to its proof, the IFS $\mathcal{F}$ has the shadowing uniqueness property relative to $\sigma$ with the constant $\epsilon= {\eta\over 2}$. Thus there is a unique chain ${\{y_k\}}_{k\in\Bbb Z}$ such that for every $k\in\Bbb Z$, we have
\begin{eqnarray}
r\Big(G_{\sigma_k}(x), y_k\Big)<\epsilon
\end{eqnarray}
Therefore, for every $x\in M$ we obtain a unique chain ${\{y_k\}}_{k\in\Bbb Z}$; it follows that the function $h: M\rightarrow M$ defined by $h(x)= y_0$ is well-defined.\\
If we take $k=0$ in relation (5), then we see that $r(x, y_0)<\epsilon$, and by substituting $h(x)= y_0$ we get $r\Big(x, h(x)\Big)<\epsilon$. Since ${\{y_k\}}_{k\in\Bbb Z}$ is a chain, we have $y_k= F_{\sigma_k}(y_0)$; thus, we can rewrite relation (5) as
$r\Big(G_{\sigma_k}(x), F_{\sigma_k}(y_0)\Big)= r\Big(G_{\sigma_k}(x), F_{\sigma_k}(h(x))\Big)<\epsilon.$\\
Now, we show that the function $h$ is continuous. Since the space $M$ is compact, continuity is equivalent to uniform continuity; thus, we prove that $h$ is uniformly continuous on $M$.
Suppose that $\mu>0$ is given. By Lemma \ref{expansive}, for this $\sigma$ there is $N\geq 1$ such that if $u,v\in M$ satisfy $r\Big(F_{\sigma_k}(u), F_{\sigma_k}(v)\Big)\leq \eta$ for every $k$ with $\mid k\mid<N$, then $r(u,v)<\mu$. We know that for every $k\in\Bbb Z$ the functions $F_{\sigma_k}$ and $G_{\sigma_k}$ are uniformly continuous on $M$, so for every $k$ with $\mid k\mid<N$ there exist positive numbers $\beta_k$ and $\alpha_k$, depending on the given values ${\eta\over 3}$ and $\epsilon$, respectively, such that if
$r(x, y)<\beta_k$, then $r\Big(F_{\sigma_k}(x), F_{\sigma_k}(y)\Big)<{\eta\over 3}$, and if
$r(x, y)<\alpha_k$, then $r\Big(G_{\sigma_k}(x), G_{\sigma_k}(y)\Big)<\epsilon$.
Put $\beta= \min \Big\{\beta_k\;|\;-N<k<N\Big\}$ and $\alpha= \min \Big\{\alpha_k\;|\;-N<k<N\Big\}$.
Now, we choose a positive number $\gamma< \min\{\beta, \alpha\}$. Consequently, for every $x,y\in M$ with $r(x, y)< \gamma$, the relations $r\Big(F_{\sigma_k}(x), F_{\sigma_k}(y)\Big)<{\eta\over 3}$ and $r\Big(G_{\sigma_k}(x), G_{\sigma_k}(y)\Big)<\epsilon$ hold for every $k$ with $\mid k\mid<N$, and thereby we see that
\begin{eqnarray*}
\begin{array}{ll}
r\bigg(F_{\sigma_{k}}\Big(h(x)\Big), F_{\sigma_{k}}\Big(h(y)\Big)\bigg)
& \leq r\bigg(F_{\sigma_{k}}\Big(h(x)\Big), G_{\sigma_k}(x)\bigg)
+ r\Big(G_{\sigma_k}(x), G_{\sigma_k}(y)\Big)\\
& +\,r\bigg(G_{\sigma_k}(y), F_{\sigma_{k}}\Big(h(y)\Big)\bigg)\\
& < \epsilon+ \epsilon+ \epsilon= 3\epsilon< \eta.
\end{array}
\end{eqnarray*}
The previous relation holds for every $k$ with $\mid k\mid<N$, so $r\Big(h(x), h(y)\Big)< \mu$.
Thus, for every $\mu> 0$ there exists $\gamma> 0$ such that for all $x,y\in M$, $r(x, y)< \gamma$ implies
$r\Big(h(x), h(y)\Big)< \mu$, and this means that the function $h$ is uniformly continuous on $M$ and consequently continuous.\\
Now, assume that $\epsilon> 0$ is sufficiently small. Since $M$ is a compact metric space, for the given $\epsilon> 0$ there exist $x_1, x_2,\ldots, x_n$ in $M$ such that
$M= \bigcup_{i=1}^{n} B_{\epsilon}(x_i)$. Choose $y\in M$. Then there is $x_j\in M$ such that
$y\in B_{\epsilon}(x_j)$. Hence,
$$r\Big(y, h(x_j)\Big)\leq r(y, x_j)+ r\Big(x_j, h(x_j)\Big)<\epsilon+ \epsilon= 2\epsilon.$$
Also, for every $x\in M$, we have
\begin{eqnarray*}
\begin{array}{ll}
r\bigg(F_{\sigma_{k}}\Big(h(x)\Big), h\Big(G_{\sigma_{k}}(x)\Big)\bigg)
& \leq r\bigg(F_{\sigma_{k}}\Big(h(x)\Big), G_{\sigma_{k}}(x)\bigg)\\
& + r\bigg(G_{\sigma_{k}}(x), h\Big(G_{\sigma_{k}}(x)\Big)\bigg)\\
& < \epsilon+ \epsilon= 2\epsilon
\end{array}
\end{eqnarray*}
Since $\epsilon> 0$ is arbitrary and sufficiently small, we may take the limit as $\epsilon\rightarrow 0$ in the two previous relations. From the former relation we obtain $r\Big(y, h(x_j)\Big)= 0$, that is, $y= h(x_j)$, and from the latter relation, for every $x\in M$, we get
$r\bigg(F_{\sigma_{k}}\Big(h(x)\Big), h\Big(G_{\sigma_{k}}(x)\Big)\bigg)= 0.$
Consequently, these relations show that the function $h$ is surjective and that $F_{\sigma_k}\circ h= h\circ G_{\sigma_k}$ for every $k\in\Bbb{Z}$. $\square$
\end{pf}
Considering the previous theorem and the definition of topological stability, we have the following corollary:
\begin{cor}\label{corone}
If an IFS $\mathcal{F}$ has the shadowing property and, moreover, $\mathcal{F}$ is expansive relative to every sequence $\sigma$ with a sufficiently small expansive constant, then $\mathcal{F}$ is topologically stable.
\end{cor}
\section{ Structural stability implies Shadowing property in an IFS}
Now, we are going to study the relation between the shadowing property and structural stability in an IFS. First, we define the space of diffeomorphisms on $M$. Let the functions $f$ and $g$ be $C^{1}$-diffeomorphisms on $M$. We define the metric $\rho_1$ as follows:
$$\rho_1(f, g)= \rho_0(f, g)+ \max\bigg\{\parallel Df(x)-Dg(x)\parallel\,:\;x\in M\bigg\},$$
where
$$\max\bigg\{\parallel Df(x)-Dg(x)\parallel\,:\;x\in M\bigg\}= \max\bigg\{\mid Df(x)u-Dg(x)u\mid\,:\;x\in M\;\text{and}\;u\in T_{x}M,\, \mid u\mid= 1\bigg\}.$$
The space of $C^{1}$-diffeomorphisms on $M$ with the metric $\rho_1$ is denoted by
{\emph{\textbf{$Diff^{1}(M)$}}}.\\
It is known that the set of all structurally stable diffeomorphisms and the set of all diffeomorphisms with the shadowing property do not coincide. In fact, structural stability (SS) is stronger than the shadowing property (SP): Robinson proved that a structurally stable diffeomorphism on a closed manifold has SP, while the converse has been rejected by counterexamples; see, for example, \cite{Pilyugin2010variational}. In the following, we show that this equivalence also fails for IFSs.
\begin{dfn}
Let the IFSs $\mathcal{F}=\Big\{f_{\lambda},\,M\,: \lambda\in\Lambda\Big\}$ and $\mathcal{G}=\Big\{g_{\overline\lambda},\,M\,: {\overline\lambda}\in{\overline\Lambda}\Big\}$ be subsets of $Diff^{1}(M)$. We denote the distance between the two IFSs by ${\mathcal{D}}_1$ and define it as follows:\\
If $\mathcal{F}=\mathcal{G}$, then put ${\mathcal{D}}_1\Big(\mathcal{F}, \mathcal{G}\Big)=0.$\\
If $\mathcal{F}\neq\mathcal{G}$, then
$${\mathcal{D}}_1\Big(\mathcal{F}, \mathcal{G}\Big)= \max\Big\{\rho_1(f_{\lambda}, g_{\overline\lambda}):\quad f_{\lambda}\in\mathcal{F}\; \text{and}\; g_{\overline{\lambda}}\in\mathcal{G}\Big\}.$$
\end{dfn}
\begin{dfn}
Assume that $\mathcal{F}=\{f_{\lambda}, M : \lambda\in\Lambda\}\subset Diff^{1}(M)$ is an IFS. We say that the IFS\,$\mathcal{F}$ is $\emph{\textbf{structurally stable}}$ if for a given $\epsilon>0$ there is $\delta>0$ such that for any IFS\,$\mathcal{G}=\{g_{\lambda}, M : \lambda\in\Lambda\}\subset Diff^{1}(M)$
with ${\mathcal{D}}_1\Big(\mathcal{F}, \mathcal{G}\Big)<\delta$, for any sequence $\sigma=\{\ldots,\lambda_{-1},\lambda_0,\lambda_1,\ldots\}$ and for every $n\in\Bbb Z$ there is a homeomorphism $h:\,M\rightarrow M$ with the following properties:\\
$\left \{\begin{array}{lll}
i) & F_{\sigma_n}\circ h= h\circ G_{\sigma_n}, & \\
ii) & r\Big(x, h(x)\Big)<\epsilon, & \forall x\in M.
\end{array}\right.$
\end{dfn}
\begin{cor}\label{cortwo}
Let $\mathcal{F}\subset Diff^{1}(M)$ be an IFS and $dim M\geq 2$.
If $\mathcal{F}$ is structurally stable then it has shadowing property.
\end{cor}
\begin{pf}
Based on the definitions of structural stability and topological stability, it is clear that if the IFS $\mathcal{F}$ is structurally stable then it is topologically stable. Hence, $\mathcal{F}$ has the shadowing property by Theorem \ref{Main}.
\end{pf}
Notice that the converse of Corollary \ref{cortwo} is not true; that is, there exists an IFS with the
shadowing property that is not structurally stable. We illustrate this in the next example.
\begin{example}
We define the function $F:T^{2}\rightarrow T^{2}$ (where $T^{2}$ is the two-dimensional torus) by the following criterion:
$$F(x, y, u, v)= \Big(2x-c(u, v)f(x)+y, x-c(u, v)f(x)+y, 2u+v, u+v\Big),$$
where $f(x)= {1\over 2\pi}\sin{2\pi x}$ and $c$ is a $C^{\infty}$ function from $T^{2}$ to $\Bbb R$ whose first-order derivatives are small and which satisfies $0< c(u, v)\leq 1$ for every $(u, v)\in T^{2}$. Moreover, $c(u, v)= 1$ if and only if $(u, v)$ belongs to a nontrivial minimal set of the function $G:T^{2}\rightarrow T^{2}$ given by $G(u, v)= (2u+v, u+v)$.\\
In the definition of $F$, we first put $c(u, v)= \cos^{2}{\pi(u+ v)}$ and call the resulting function $F_1$; then we set $c(u, v)= \cos^{2}{\pi(u- v)}$ and call the resulting function $F_2$. Now, consider
$\mathcal{F}=\Big\{F_1, F_2\; ; \; T^{2}\Big\}$. Clearly, $\mathcal{F}$ is an IFS and $T^{2}$ is a compact metric space.\\
First, we show that the IFS $\mathcal{F}$ has the shadowing property. On the basis of Corollary (3.4) in \cite{Fatehi2015IFS}, it suffices to prove that for a given $\epsilon> 0$ there is $\delta> 0$ such that
\begin{eqnarray}
B\Big(F_i(X), \epsilon+ \delta\Big)\subseteq F_i\Big(B(X, \epsilon)\Big);\;\;\;\forall X\in T^{2},\;i=1, 2.
\end{eqnarray}
The functions $F_1$ and $F_2$ are diffeomorphisms by construction, so they are uniformly continuous on the compact space $T^{2}$. Assume that $\epsilon> 0$ is given and put $\delta= \epsilon$. For arbitrary fixed $X\in T^{2}$ and $i\in\{1, 2\}$, consider $Z\in B\Big(F_i(X), 2\epsilon\Big)$. By the uniform continuity of the function $F_i$, for this value $\epsilon$ there exists $\delta_1> 0$ such that for every $Y\in T^{2}$ with $r(Y, X)< \delta_1$ we have $r\Big(F_i(Y), F_i(X)\Big)< \epsilon$. Since $F_i$ is one-to-one, we put $Z^*= F_{i}^{-1}(Z)$. As $F_i$ is uniformly continuous, there is $\delta_2>0$ such that for every $Y\in T^{2}$ with $r(Y, Z^* )< \delta_2$ we have $r\Big(F_i(Y), F_i(Z^*)\Big)< \epsilon$. We can choose the values $\delta_1$ and $\delta_2$ such that $\delta_1, \delta_2< {\epsilon\over 2}$. Put $\delta^*= \min\{\delta_1, \delta_2\}$. Assume that $Y\in T^{2}$ satisfies $r(Y, X)< \delta^*$ and $r(Y, Z^* )< \delta^*$; then we see that
$$r(Z^*, X)< r(Z^*, Y)+ r(Y, X)< 2\delta^*< \epsilon.$$
Hence, $Z^*\in B(X, \epsilon)$, and since $F_i(Z^*)= Z$, we obtain that $Z\in F_i\Big(B(X, \epsilon)\Big)$; consequently, relation (6) is proved.\\
Second, we claim that the IFS $\mathcal{F}$ is not structurally stable. By reductio ad absurdum, we assume that the IFS $\mathcal{F}$ is structurally stable. Thus, for a given $\epsilon> 0$ there exists $\delta> 0$ such that if
$\mathcal{G}=\Big\{G_1, G_2\; ; \; T^{2}\Big\}$ is an IFS consisting of $C^{1}$-diffeomorphisms with
${\mathcal{D}}_1\Big(\mathcal{F}, \mathcal{G}\Big)< \delta$, then for the sequence $\sigma= \{1, 1,\ldots\}$ and $n=1$ there is a homeomorphism $h:T^{2}\rightarrow T^{2}$ such that $F_1\circ h= h\circ G_1$. Since $G_1$ is an arbitrary function with $\rho_1\Big(F_1, G_1\Big)< \delta$, we conclude from the previous relation that the function $F_1$ is structurally stable. But it follows from \cite{Robbin1972} that the function $F_1$ is not structurally stable, and this is a contradiction. Thus, our claim is proved.
\end{example}
\section*{References}
\end{document}
\begin{document}
\title{Advanced Multilevel Node Separator Algorithms}
\author{Peter Sanders and Christian Schulz\\
\textit{Karlsruhe Institute of Technology},
\textit{Karlsruhe, Germany} \\
\small \textit{\{\url{sanders, christian.schulz}\}\url{@kit.edu}}}
\date{}
\institute{}
\maketitle
\begin{abstract}
A node separator of a graph is a subset $S$ of the nodes such that removing $S$ and its incident edges divides the graph into two disconnected components of about equal size.
In this work, we introduce novel algorithms to find small node separators in large graphs.
With focus on solution quality, we introduce novel flow-based local search algorithms which are integrated in a multilevel framework.
In addition, we transfer techniques successfully used in the graph partitioning field.
This includes the usage of edge ratings tailored to our problem to guide the graph coarsening algorithm as well as highly localized local search and iterated multilevel cycles to improve solution quality even further.
Experiments indicate that flow-based local search algorithms on their own in a multilevel framework are already \emph{highly} competitive in terms of separator quality.
Adding additional local search algorithms further improves solution quality.
Our strongest configuration almost always outperforms competing systems while on average computing 10\% and 62\% smaller separators than Metis and Scotch, respectively.
\end{abstract}
\thispagestyle{empty}
\section{Introduction}
Given a graph $G=(V,E)$, the \emph{node separator problem} asks to find three disjoint subsets $V_1, V_2$ and $S$ of the node set, such that there are no edges between $V_1$ and $V_2$ and $V=V_1\cup V_2 \cup S$.
The objective is to minimize the size of the separator $S$ or depending on the application the weight of its nodes while $V_1$ and $V_2$ are balanced.
Note that removing the set $S$ from the graph results in at least two connected components.
There are many algorithms that rely on small node separators.
For example, small balanced separators are a popular tool in divide-and-conquer strategies~\cite{lipton1980applications,leiserson1980area,BHATT1984300}, are useful to speed up the computations of shortest paths~\cite{schulz2002using,delling2009high,dibbelt2014customizable} or are necessary in scientific computing to compute fill reducing orderings with nested dissection algorithms~\cite{george1973nested}.
Finding a balanced node separator on general graphs is NP-hard even if the maximum node degree is three~\cite{bui1992finding,garey2002computers}.
Hence, one relies on heuristic and approximation algorithms to find small node separators in general graphs.
The most commonly used method to tackle the node separator problem on large graphs in practice is the multilevel approach.
During a coarsening phase, a multilevel algorithm reduces the graph size by iteratively contracting nodes and edges until the graph is small enough to compute a node separator by some other algorithm. A node separator of the input graph is then constructed by successively transferring the solution to the next finer graph and applying local search algorithms to improve the current solution.
Current solvers are typically more than fast enough for most applications (for example~\cite{leiserson1980area,BHATT1984300}) but lack high solution quality. In this work, we address this problem and focus on solution quality. The remainder of the paper is organized as follows.
We begin in Section~\ref{s:preliminaries} by introducing basic concepts and by summarizing related work.
Our main contributions are presented in Section~\ref{s:mainpartseparator} where we transfer techniques previously used for the graph partitioning problem to the node separator problem and introduce novel flow based local search algorithms for the problem that can be used in a multilevel framework. This includes edge ratings to guide a graph coarsening algorithm within a multilevel framework, highly localized local search to improve a node separator and iterated multilevel cycles to improve solution quality even further.
Experiments in Section~\ref{s:experiments} indicate that our algorithms are able to provide excellent node separators and outperform other state-of-the-art algorithms.
Finally, we conclude with Section~\ref{s:conclusion}.
All of our algorithms have been implemented in the open source graph partitioning package KaHIP~\cite{kabapeE} and will be available within this framework.
\section{Preliminaries}
\label{s:preliminaries}
\subsection{Basic concepts}
In the following we consider an undirected graph $G=(V=\{0,\ldots, n-1\},E)$ with $n = |V|$, and $m = |E|$.
$\Gamma(v)\Is \setGilt{u}{\set{v,u}\in E}$ denotes the neighbors of a node $v$.
A set $C \subset V$ of a graph is called \textit{closed node set} if there are no connections from $C$ to $V \setminus C$, i.e. for every node $u \in C$ an edge $(u,v) \in E$ implies that $v \in C$ as well.
In other words, a subset $C$ is a \emph{closed node set} if there is no edge starting in $C$ and ending in its complement $V \setminus C$.
A graph $S=(V', E')$ is said to be a \emph{subgraph} of $G=(V, E)$ if $V' \subseteq V$ and $E' \subseteq E \cap (V' \times V')$. We call $S$ an \emph{induced} subgraph when $E' = E \cap (V' \times V')$.
For a set of nodes $U\subseteq V$, $G[U]$ denotes the subgraph induced by $U$.
We define multiple partitioning problems.
The \emph{graph partitioning problem} asks for \emph{blocks} of nodes $V_1$,\ldots,$V_k$
that partition $V$, i.e., $V_1\cup\cdots\cup V_k=V$ and $V_i\cap V_j=\emptyset$
for $i\neq j$. A \emph{balancing constraint} demands that
$\forall i\in \{1..k\}: |V_i|\leq L_{\max}\Is (1+\epsilon)\lceil |V|/k \rceil$ for
some parameter $\epsilon$.
In this case, the objective is often to minimize the total \emph{cut} $\sum_{i<j}|E_{ij}|$ where
$E_{ij}\Is\setGilt{\set{u,v}\in E}{u\in V_i,v\in V_j}$.
The set of cut edges is also called \emph{edge separator}.
A node $v \in V_i$ that has a neighbor $w \in V_j, i\neq j$, is a boundary node.
An abstract view of the partitioned graph is the so called \emph{quotient graph}, where nodes represent blocks and edges are induced by connectivity between blocks.
The \emph{node separator problem} asks to find blocks, $V_1, V_2$ and a separator $S$ that partition $V$ such that there are no edges between the blocks.
Again, a balancing constraint demands $|V_i| \leq (1+\epsilon)\lceil|V|/k \rceil $. However, there is no balancing constraint on the separator $S$.
The objective is to minimize the size of the separator $|S|$.
Note that removing the set $S$ from the graph results in at least two connected components and that the blocks $V_i$ itself do not need to be connected components.
By default, our initial inputs will have unit edge and node weights. However, the results in this paper are easily transferable to node and edge weighted problems.
A matching $M\subseteq E$ is a set of edges that do not share any common nodes,
i.e.\ the graph $(V,M)$ has maximum degree one. \emph{Contracting} an edge $\set{u,v}$ means to replace the nodes $u$ and $v$ by a
new node $x$ connected
to the former neighbors of $u$ and $v$. We set $c(x)=c(u)+c(v)$.
If replacing
edges of the form $\set{u,w},\set{v,w}$ would generate two parallel edges
$\set{x,w}$, we insert a single edge with
$\omega(\set{x,w})=\omega(\set{u,w})+\omega(\set{v,w})$.
\emph{Uncontracting} an edge $e$ undoes its contraction.
In order to avoid tedious notation, $G$ will denote the current state of the graph
before and after a (un)contraction unless we explicitly want to refer to
different states.
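To make the contraction step concrete, the following minimal C++ sketch contracts a single matched edge $\set{u,v}$ in a simple adjacency-map representation, accumulating node weights and merging parallel edges by adding their weights. The data structure and function names are illustrative assumptions and do not reflect the data structures used in our implementation.
\begin{verbatim}
#include <unordered_map>
#include <vector>

// Illustrative graph: a node weight and, per node, a map neighbor -> edge weight.
struct Graph {
    std::vector<long> node_weight;
    std::vector<std::unordered_map<int, long>> adj;
};

// Contract the matched edge {u,v}: v is merged into u, c(u) becomes c(u)+c(v),
// and parallel edges {u,w},{v,w} are replaced by one edge of summed weight.
void contract_edge(Graph& g, int u, int v) {
    g.node_weight[u] += g.node_weight[v];
    for (const auto& [w, omega] : g.adj[v]) {
        if (w == u) continue;          // drop the contracted edge itself
        g.adj[u][w] += omega;          // merge parallel edges by adding weights
        g.adj[w][u] += omega;
        g.adj[w].erase(v);             // w is no longer adjacent to v
    }
    g.adj[u].erase(v);
    g.adj[v].clear();                  // v becomes an empty placeholder node
}
\end{verbatim}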
The multilevel approach consists of three main phases.
In the \emph{contraction} (coarsening) phase,
we iteratively identify matchings $M\subseteq E$
and contract the edges in $M$. Contraction should quickly reduce the size of the input and each computed level
should reflect the global structure of the input network.
Contraction is stopped when the graph is small enough so that the problem can be solved by some other potentially more expensive algorithm.
In the \emph{local search} (or uncoarsening) phase, matchings are iteratively uncontracted.
After uncontracting a matching, the local search algorithm moves nodes to decrease the size of the separator or to improve the balance of the blocks while keeping the size of the separator.
The succession of movements is based on priorities called \textit{gain}, i.e., the decrease in the size of the separator.
The intuition behind the approach is that a good solution at one level of the hierarchy will also be a good solution on the next finer level so that local search will quickly find a good solution.
\subsection{Related Work}
\label{s:related}
There has been a \emph{huge} amount of research on graph partitioning so that we refer the reader to \cite{GPOverviewBook,SPPGPOverviewPaper} for most of the material in this area.
Here, we focus on issues closely related to our main contributions and previous work on the node separator problem.
Lipton and Tarjan~\cite{lipton1979separator} provide the \textit{planar separator theorem} stating that on planar graphs one can always find a separator $S$ in linear time that satisfies $|S| \in O(\sqrt{|V|})$ and $|V_i| \leq 2 |V| / 3$.
For more balanced cases, the problem remains NP-hard~\cite{fukuyama2006np} even on planar graphs.
For general graphs there exist several heuristics to compute small node separators.
A common and simple method is to derive a node separator from an edge separator \cite{pothen1990partitioning,dissSchulz} which is usually computed by a multilevel graph partitioning algorithm.
Clearly, taking the boundary nodes of the edge separator in one block of the partition yields a node separator.
Since one is interested in a small separator, one can use the smaller set of boundary nodes.
A better method has been first described by Pothen and Fan~\cite{pothen1990partitioning}.
The method employs the set of cut edges of the partition and computes the smallest node separator that can be found by using a subset of the boundary nodes.
The main idea is to compute a subset $S$ of the boundary nodes such that each cut edge is incident to at least one of the nodes in $S$ (a vertex cover).
A problem of the method is that the graph partitioning problem with edge cut as objective has a somewhat different combinatorial structure compared to the node separator problem. This makes it unlikely to find high quality solutions with that approach.
Metis~\cite{karypis1998fast} and Scotch~\cite{scotch} use a multilevel approach to obtain a node separator.
After contraction, both tools compute a node separator on the coarsest graph using a greedy algorithm.
This separator is then transferred level-by-level, dropping non-needed nodes on each level and applying Fiduccia-Mattheyses (FM) style local search.
Previous versions of Metis and Scotch also included the capability to compute a node separator from an edge separator.
Recently, Hamann and Strasser~\cite{hamann2015graph} presented a max-flow based algorithm specialized for road networks.
Their main focus is not on node separators. They focus on a different formulation of the edge-cut version graph partitioning problem. More precisely, Hamann and Strasser find Pareto solutions in terms of edge cut versus balance instead of specifying the allowed amount of imbalance in advance and finding the best solution satisfying the constraint.
Their work also includes an algorithm to derive node separators, again in a different formulation of the problem, i.e.\ node separator size versus balance.
We cannot make meaningful comparisons since the paper contains no data on separator quality and the implementation of the algorithm is not available.
Hager et~al.\ \cite{hager2014multilevel} recently proposed a multilevel approach for medium sized graphs using continuous bilinear quadratic programs and a combination of those with local search algorithms. However, a different formulation of the problem is investigated, i.e.\ the solver enforces upper \emph{and} lower bounds to the block sizes which makes the results incomparable to our results.
LaSalle and Karypis~\cite{lasalle2015efficient} present a shared-memory parallel algorithm to compute node separators used to compute fill reducing orderings.
Within a multilevel approach they evaluate different local search algorithms indicating that a combination of greedy local search with a segmented FM algorithm can outperform serial FM algorithms. We compare the solution quality of our algorithm against the data presented there in our experimental section (see Section~\ref{s:experiments}).
\pagebreak
\section{Advanced Multilevel Algorithms for Node Separators}
\label{s:mainpartseparator}
We now present our core innovations.
In brevity, the novelties of our algorithm include edge ratings during coarsening to compute graph hierarchies that fulfill the needs of the node separator problem and a combination of localized local search with flow problems to improve the size of the separator. In addition, we transfer a concept called iterative multilevel scheme previously used in graph partitioning to further improve solution quality.
The description of our algorithm in this section follows the multilevel scheme.
We start with the description of the edge ratings that we use during coarsening, continue with the description of the algorithm used to compute an initial node separator on the coarsest level and then describe local search algorithms as well as other techniques.
\subsection{Coarsening}
Before we explain the matching algorithm that we use in our system, we present the general two-phase procedure which was already used in multiple graph partitioning frameworks \cite{kappa,kaffpa,kaspar}.
The two-phase approach makes contraction more systematic by separating two issues: A \emph{rating function} and a \emph{matching} algorithm.
A rating function indicates how much sense it makes to contract an edge based on \emph{local} information.
A matching algorithm tries to maximize the sum of the ratings of the contracted edges looking at the \emph{global} structure of the graph.
While the rating function allows a flexible characterization of what a ``good'' contracted graph is, the simple, standard definition of the matching problem allows reusing previously developed algorithms for weighted matching.
Note that we can use the same edge rating functions as in the graph partitioning case but also can define new ones since the problem structure of the node separator problem is different.
We use the \textit{Global Path Algorithm (GPA)} which runs in near linear time to compute matchings.
GPA was proposed in \cite{MauSan07} as a synthesis of the Greedy Algorithm and the Path Growing Algorithm~\cite{DH03a}.
We choose this algorithm since in \cite{kappa} it gives empirically considerably better results than Sorted Heavy Edge Matching, Heavy Edge Matching or Random Matching \cite{SchKarKum00}.
GPA scans the edges in order of decreasing weight
but rather than immediately building a matching, it first constructs a collection
of paths and even length cycles. Afterwards, \textit{optimal solutions} are computed for each
of these paths and cycles using dynamic programming.
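To illustrate the dynamic programming step, the following minimal C++ sketch computes the value of a maximum weight matching on a single path, given the weights of its consecutive edges; adjacent edges may not both be selected. The path-growing phase and the handling of even length cycles are omitted, and the sketch is our own simplification rather than the implementation of~\cite{MauSan07}. A full implementation would additionally record the selected edges, e.g.\ by backtracking.
\begin{verbatim}
#include <algorithm>
#include <vector>

// Value of a maximum weight matching on a path whose consecutive edges have
// the given weights: two neighboring edges on the path may not both be taken.
// Classic DP: take(i) = skip(i-1) + w(i), skip(i) = max(take(i-1), skip(i-1)).
double best_matching_on_path(const std::vector<double>& w) {
    double take_prev = 0.0;  // best value so far with the last edge taken
    double skip_prev = 0.0;  // best value so far with the last edge skipped
    for (double weight : w) {
        double take = skip_prev + weight;              // taking edge i forbids edge i-1
        double skip = std::max(take_prev, skip_prev);  // skipping keeps the best so far
        take_prev = take;
        skip_prev = skip;
    }
    return std::max(take_prev, skip_prev);
}
\end{verbatim}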
\paragraph{Edge Ratings for Node Separator Problems.}
We want to guide the contraction algorithm so that coarse levels in the graph hierarchy still contain small node separators if present in the input problem. This way we can provide a good starting point for the initial node separator routine.
There are a lot of possibilities that we have tried.
The most important edge rating functions for an edge $e=\{u,v\} \in E$ are the following:
\begin{align*}
\text{exp*}(e)&= \omega(e)/(d(u)d(v)) \\
\text{exp{**}}(e) &= \omega(e)^2/(d(u)d(v)) \\
\text{max}(e) &= 1/\max\{{d(u),d(v)}\} \\
\text{log}(e) &= 1/\log(d(u)d(v))
\end{align*}
The first two ratings have already been successfully used in the graph partitioning field.
To give an intuition behind these ratings, we have to characterize the properties of ``good'' matchings for the purpose of contraction in a multilevel algorithm for the node separator problem.
Our main objective is to find a small node separator on the coarsest graph.
A matching should contain a \emph{large number of edges}, e.g.\ being maximal, so that there are only few levels in the hierarchy and the algorithm can converge quickly.
In order to \emph{represent the input} on the coarser levels, we want to find matchings such that the graph after contraction has somewhat \emph{uniform node weights} and \emph{small node degrees}. In addition, we want to keep nodes having a small degree since they are potentially good separators.
Uniform node weights are also helpful to achieve a balanced node separator on coarser levels and makes local search algorithms more effective.
We also included ratings that do not contain the edge weight of the graph since intuitively a matching does not have to care about large edge weights -- they do not show up in the objective of the node separator problem.
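For concreteness, the four ratings above can be evaluated as in the following small C++ sketch, where \texttt{omega} is the weight of the edge and \texttt{du}, \texttt{dv} are the degrees of its endpoints; the function names are ours and the sketch assumes $d(u)d(v)>1$ so that the logarithm is positive.
\begin{verbatim}
#include <algorithm>
#include <cmath>

// Edge ratings for an edge e = {u,v}; omega is the weight of e,
// du and dv are the degrees of its endpoints.
double rating_exp_star(double omega, double du, double dv)  { return omega / (du * dv); }
double rating_exp_star2(double omega, double du, double dv) { return omega * omega / (du * dv); }
double rating_max(double du, double dv)                     { return 1.0 / std::max(du, dv); }
double rating_log(double du, double dv)                     { return 1.0 / std::log(du * dv); }
\end{verbatim}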
\subsection{Initial Node Separators}
We stop coarsening as soon as the graph has less than ten thousand nodes.
Our approach first computes an edge separator and then derives a node separator from that. More precisely, we partition the coarsest graph into two blocks using KaFFPa~\cite{kabapeE}.
We then look at the bipartite graph induced by set of cut edges including the given node weights.
Our goal is to select a minimum weight node separator in that graph.
As a side note, this corresponds to finding a minimum weight vertex cover in the bipartite graph.
Also note that this is similar to the approach of Pothen et~al.\ ~\cite{pothen1990partitioning}, however we integrate node weights.
To solve the problem, we put all of the nodes of the bipartite graph into the initial separator $S$ and use the \emph{flow-based technique} defined below to select the smallest separator contained in that subgraph.
Since our algorithms are randomized, we repeat the overall procedure twenty-five times and pick the best node separator that we have found.
\subsection{Local Search}
\paragraph{Localized Local Search.}
In graph partitioning it has been shown that higher localization of local search can improve solution quality~\cite{dissSchulz,kaspar}.
Hence, we develop a novel localized algorithm for the node separator problem that starts local search only from a couple of selected separator nodes.
Our localized local search procedure is based on the FM scheme.
Before we explain our approach to localization, we present a commonly used FM-variant for completeness.
For each of the two blocks $V_1$, $V_2$ under consideration, a priority queue of separator nodes eligible to move is kept.
The priority is based on the \emph{gain} concept, i.e.\ the decrease in the objective function value when the separator node is moved into that block.
More precisely, if a node $v \in S$ would be moved to $V_1$, then the neighbors of $v$ that are in $V_2$ have to be moved into the separator.
Hence, in this case the gain of the node is the weight of $v$ minus the weight of the nodes that have to be added to the separator.
The gain value in the other case (moving $v$ into $V_2$) is defined analogously.
After the algorithm has computed both gain values, it chooses the larger gain value such that moving the node does not violate the balance constraint and performs the movement.
Each node is moved at most once out of the separator within a single local search.
The queues are initialized randomly with the separator nodes.
After a node is moved, newly added separator nodes become eligible for movement (and hence are added to the priority queues).
There are different possibilities to select a block to which a node shall be moved.
The most common variant of the classical FM-algorithm alternates between both blocks.
After a stopping criterion is applied, the best feasible node separator found is reconstructed (among ties choose the node separator that has better balance).
We have two strategies to \emph{balance blocks}.
The first strategy tries to create a balanced situation without increasing the size of the separator. It always selects the queue of the heavier block and uses the same roll back mechanism as before.
The second strategy allows to increase the size of the node separator.
It also selects a node from the queue of the heavier block, but the roll back mechanism recreates the node separator having the best balance (among ties we choose the smaller node separator).
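The gain computation described above can be sketched as follows; moving a separator node $v$ into one block removes $v$ from $S$ but pulls all of $v$'s neighbors in the opposite block into the separator. The data layout and names are an illustrative assumption.
\begin{verbatim}
#include <vector>

enum Block { BLOCK_V1 = 0, BLOCK_V2 = 1, SEPARATOR = 2 };

// Gain of moving separator node v into block `target`: the weight of v minus
// the weight of v's neighbors in the opposite block, which must enter S.
long gain(int v, Block target,
          const std::vector<std::vector<int>>& adj,
          const std::vector<long>& node_weight,
          const std::vector<Block>& block) {
    Block opposite = (target == BLOCK_V1) ? BLOCK_V2 : BLOCK_V1;
    long g = node_weight[v];
    for (int w : adj[v])
        if (block[w] == opposite) g -= node_weight[w];
    return g;
}
\end{verbatim}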
Our approach to localization works as follows.
Previous local search methods were initialized with \emph{all} separator nodes, i.e.\ all separator nodes are eligible for movement at the beginning.
In contrast, our method is repeatedly initialized only with a \emph{subset} of the separator nodes (the precise amount of nodes in the subset is a tuning parameter).
Intuitively, this introduces a larger amount of diversification and boosts the algorithm's ability to climb out of local minima.
The algorithm is organized in rounds.
One round works as follows.
Instead of putting \emph{all} separator nodes directly into the priority queues, we put the current separator nodes into a todo list $T$.
Subsequently, we begin local search starting with a random \emph{subset} $\mathcal{S}$ of the todo list $T$.
We select the subset $\mathcal{S}$ by repeatedly picking a random node $v$ from $T$.
We add $v$ to $\mathcal{S}$ if it still is a separator node and has not been moved by a previous local search in that round.
Either way, $v$ is removed from the todo list.
Our localized search is restricted to the movement of nodes that have not been touched by a previous local search during the round.
This assures that each node is moved at most once out of the separator during a round of the algorithm and avoids cyclic local search. By default our local search routine first uses classic local search (including balancing) to get close to a good solution and afterwards uses localization to improve the result further.
We repeat this until no further improvement is found.
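The selection of the random subset $\mathcal{S}$ from the todo list $T$ in one round can be sketched as follows; names and data structures are illustrative only.
\begin{verbatim}
#include <cstddef>
#include <random>
#include <vector>

// Draw up to subset_size start nodes for one localized local search from the
// todo list T. A node is used only if it is still a separator node and has not
// been moved in this round; either way it is removed from the todo list.
std::vector<int> draw_start_nodes(std::vector<int>& todo,
                                  const std::vector<bool>& is_separator,
                                  const std::vector<bool>& moved_this_round,
                                  std::size_t subset_size,
                                  std::mt19937& rng) {
    std::vector<int> subset;
    while (!todo.empty() && subset.size() < subset_size) {
        std::uniform_int_distribution<std::size_t> pick(0, todo.size() - 1);
        std::size_t i = pick(rng);
        int v = todo[i];
        todo[i] = todo.back();
        todo.pop_back();               // v leaves the todo list either way
        if (is_separator[v] && !moved_this_round[v])
            subset.push_back(v);
    }
    return subset;
}
\end{verbatim}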
We now give intuition why localization of local search boosts the algorithm's ability to climb out of local minima.
Consider a situation in which a node separator is locally optimal in the sense that at least two node movements are necessary before moving a node out of the separator with positive gain becomes possible. Recall that classical local search is initialized with all separator nodes (in this case all of them have negative gain values).
It then starts to move nodes with negative gain at multiple places of the graph.
When it finally moves nodes with positive gain the separator is already much worse than the input node separator.
Hence, the movement of these positive gain nodes does not yield an improvement with respect to the given input partition.
On the other hand, a localized local search that starts close to the nodes with positive gain, can find the positive gain nodes by moving only a small number of nodes with negative gain.
Since it did not move as many negative gain nodes as the classical local search, it may still find an improvement with respect to the input.
\paragraph{Maximum Flows as Local Search.}
We define the node-capacitated flow problem $\mathcal{F}=(V_\mathcal{F}, E_\mathcal{F})$ that we solve to improve a given node separator as follows.
First we introduce a few notations.
Given a set of nodes $A \mathord{-}ubset V$, we define its \emph{border}
$\partial A := \{ u \in A \mid \exists (u,v) \in E : v \not\in A\}$.
The set $\partial_1 A := \partial A \cap V_1$ is called \emph{left border} of $A$ and the set $\partial_2 A := \partial A \cap V_2$ is called \emph{right border} of $A$.
An \emph{$A$ induced flow problem} $\mathcal{F}$ is the node induced subgraph $G[A]$ using $\infty$ as edge-capacities and the node weights of the graph as node-capacities. Additionally there are two nodes $s,t$ that are connected to the border of $A$.
More precisely, $s$ is connected to all left border nodes $\partial_1 A$ and all right border nodes $\partial_2 A$ are connected to $t$.
These new edges get capacity $\infty$.
Note that the additional edges are directed.
$\mathcal{F}$ has the \emph{balance property} if each ($s$,$t$)-flow induces a balanced node separator in $G$, i.e.\ the blocks $V_i$ fulfill the balancing constraint.
The basic idea is to construct a flow problem $\mathcal{F}$ having the balance property.
We now explain how we find such a subgraph.
We start by setting $A$ to $S$ and extend it by performing two breadth first searches (BFS).
The first BFS is initialized with the current separator nodes $S$ and only looks at nodes in block $V_1$.
The same is done during the second BFS with the difference that we now look at nodes from block $V_2$.
Each node touched by any of the BFS is added to $A$.
The first BFS is stopped as soon as the size of the newly added nodes would exceed $L_\text{max}-c(V_2)-c(S)$. Similarly, the second BFS is stopped as soon as the size of the newly added nodes would exceed $L_\text{max}-c(V_1)-c(S)$.
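A budget-limited BFS into one block, as used to grow the region $A$, may be sketched as follows; the representation of blocks and weights is an illustrative assumption.
\begin{verbatim}
#include <queue>
#include <vector>

// Grow the flow region into block `side`, starting from the separator nodes S,
// and stop before the total weight of newly added nodes exceeds `budget`,
// e.g. budget = L_max - c(V_2) - c(S) for the BFS into block V_1.
void grow_region(const std::vector<std::vector<int>>& adj,
                 const std::vector<long>& node_weight,
                 const std::vector<int>& block,       // block id per node
                 const std::vector<int>& separator,   // current separator nodes S
                 int side, long budget,
                 std::vector<bool>& in_region) {
    std::queue<int> bfs;
    for (int s : separator) { in_region[s] = true; bfs.push(s); }
    long added = 0;
    while (!bfs.empty()) {
        int u = bfs.front(); bfs.pop();
        for (int v : adj[u]) {
            if (in_region[v] || block[v] != side) continue;
            if (added + node_weight[v] > budget) return;   // budget would be exceeded
            added += node_weight[v];
            in_region[v] = true;
            bfs.push(v);
        }
    }
}
\end{verbatim}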
\begin{figure}
\caption{The construction of an $A$ induced flow problem $\mathcal{F}$.}
\label{fig:flowconstruction}
\end{figure}
A solution of the $A$ induced flow problem yields a valid node separator of the original graph:
First, since all edges in our flow network have capacity $\infty$ and the separator $S$ is contained in the problem, a maximum flow yields a separator $S'$, $V_\mathcal{F}=V'_1 \cup V'_2 \cup S'$, in the flow network that separates $s \in V'_1$ from $t \in V'_2$.
Since there is a one-to-one mapping between the nodes of our flow problem and the nodes of the input graph, we directly obtain a separator in the original network $V=V^*_1\cup V^*_2 \cup S'$.
Additionally, the node separator computed by our method fulfills the balance constraint -- presuming that the input solution is balanced.
To see this, we consider the size of $V^*_1$. We can bound the size of this block by assuming that all of the nodes that have been touched by the second BFS get assigned to $V^*_1$ (including the old separator $S$).
However, in this case the balance constraint is still fulfilled $c(V^*_1) \leq c(V_1) + c(S) + L_\text{max} - c(V_1) - c(S) = L_\text{max}$.
The same holds for the opposite direction.
Note that the separator is always smaller or equal to the input separator since $S$ is contained in the construction.
To solve the node-capacitated flow problem $\mathcal{F}$, we transform it into a flow problem $\mathcal{H}$ without node-capacities.
We use a standard technique~\cite{ravindra1993network}: first we insert the source and the sink into our model. Then, for each node $u$ in our flow problem $\mathcal{F}$ that is not the source or the sink, we introduce two nodes $u_1$ and $u_2$ in $V_\mathcal{H}$ which are connected by a directed edge $(u_1,u_2) \in E_\mathcal{H}$ with an edge-capacity set to the node-capacity of the current node.
For an edge $(u,v) \in E_\mathcal{F}$ not involving the source or the sink, we insert $(u_2, v_1)$ into $E_\mathcal{H}$ with capacity $\infty$.
If $u$ is the source $s$, we insert $(s,v_1)$ and if $v$ is the sink, we insert $(u_2, t)$ into $E_\mathcal{H}$. In both cases we use capacity $\infty$.
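The node-splitting transformation can be sketched as follows: every node $u$ of $\mathcal{F}$ other than the source and the sink becomes an arc $(u_1,u_2)$ carrying the node capacity, and the original arcs keep infinite capacity. The arc list representation is an illustrative placeholder and not tied to a particular max-flow code.
\begin{verbatim}
#include <limits>
#include <utility>
#include <vector>

const long INF = std::numeric_limits<long>::max() / 4;

struct Arc { int from, to; long capacity; };

// Nodes of F are 0..n-1, the source is s = n and the sink is t = n+1.
// In H, node u of F is split into u_in = 2u and u_out = 2u+1; the source
// is mapped to 2n and the sink to 2n+1.
std::vector<Arc> split_node_capacities(int n,
        const std::vector<long>& node_capacity,
        const std::vector<std::pair<int,int>>& arcs_of_F) {
    const int s = n, t = n + 1;
    auto map_tail = [&](int u) { return u == s ? 2 * n : 2 * u + 1; };
    auto map_head = [&](int v) { return v == t ? 2 * n + 1 : 2 * v; };
    std::vector<Arc> H;
    for (int u = 0; u < n; ++u)               // internal arc carries the node capacity
        H.push_back({2 * u, 2 * u + 1, node_capacity[u]});
    for (const auto& a : arcs_of_F)           // original arcs keep infinite capacity
        H.push_back({map_tail(a.first), map_head(a.second), INF});
    return H;
}
\end{verbatim}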
\paragraph{Larger Flow Problems and Better Balanced Node Separators.}
The definition of the flow problem to improve a node separator requires that each cut in the flow problem corresponds to a \emph{balanced} node separator in the original graph. We now simplify this definition and stop the BFSs if the size of the touched nodes exceeds $(1+\alpha) L_\text{max}-c(V_i)-c(S)$ with $\alpha \geq 0$.
We then solve the flow problem and check afterwards if the corresponding node separator is balanced. If this is the case, we accept the node separator and continue.
If this is not the case, we set $\alpha := \alpha/2$ and repeat the procedure. After ten unsuccessful iterations, we set $\alpha=0$.
Additionally, we stop the process if the flow value of the flow problem corresponds to the separator weight of the input separator.
\begin{figure}
\caption{Left: the set $C=\{a,d,e,f\}$ is an example closed node set. Right: the scanning algorithm.}
\label{fig:closednodeset}
\end{figure}
We apply heuristics to extract a better balanced node separator from the solved max-flow problem.
Picard and Queyranne~\cite{picard1980structure} made the observation that \emph{one} $(s,t)$-max-flow contains information about \emph{all} minimum ($s$,$t$)-cuts in the graph (however, finding the most balanced minimum cut is NP-hard~\cite{bonsma2010most}).
We follow the heuristic approach of \cite{kaffpa} and extract better balanced ($s$,$t$)-cuts from the given maximum flow in $\mathcal{H}$. This results in better balanced separators in the node-capacitated problem $\mathcal{F}$ and hence in better balanced node separators for our original problem.
To be more precise, Picard and Queyranne have shown that each closed node set in the residual graph of a maximum $(s,t)$-flow that contains the source $s$ but not the sink induces a minimum $s$-$t$ cut.
Observe that a cycle in the residual graph cannot contain a node of both a closed node set and its complement.
Hence, Picard and Queyranne compactify the residual network by contracting all strongly connected components.
Afterwards, their algorithm tries to find the most balanced minimum cut by enumeration.
In \cite{kaffpa}, we find better balanced cuts heuristically.
First a random topological order of the strongly connected component graph is computed.
This is then scanned in reverse order.
By subsequently adding strongly connected components several closed node sets are obtained, each inducing a minimum $s$-$t$ cut.
The closed node set with the best occurred balance among multiple runs of the algorithm with different random topological orders is returned.
An example closed node set and the scanning algorithm is shown in Figure~\ref{fig:closednodeset}.
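The scanning step can be sketched as follows. Since all residual arcs run from earlier to later components in a topological order of the strongly connected component graph, every suffix of that order is a closed node set; among the suffixes that contain the source component but not the sink component, the one with the best balance is kept. The balance measure and the component representation below are illustrative simplifications.
\begin{verbatim}
#include <vector>

// A strongly connected component of the residual graph, in topological order.
struct Component { long weight; bool has_source; bool has_sink; };

// Scan the components in reverse topological order. Every suffix is a closed
// node set; keep the suffix that contains s but not t and whose weight is
// closest to half of the total weight. Returns the start index of that suffix,
// or -1 if no valid suffix was found.
int best_balanced_suffix(const std::vector<Component>& comps, long total_weight) {
    long suffix_weight = 0;
    bool seen_source = false, seen_sink = false;
    int best_start = -1;
    long best_deviation = total_weight + 1;
    for (int i = (int)comps.size() - 1; i >= 0; --i) {
        suffix_weight += comps[i].weight;
        seen_source = seen_source || comps[i].has_source;
        seen_sink   = seen_sink   || comps[i].has_sink;
        if (!seen_source || seen_sink) continue;       // must contain s but not t
        long deviation = 2 * suffix_weight - total_weight;
        if (deviation < 0) deviation = -deviation;
        if (deviation < best_deviation) { best_deviation = deviation; best_start = i; }
    }
    return best_start;
}
\end{verbatim}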
\subsection{Miscellanea}
An easy way to obtain high quality node separators is to use a multilevel algorithm multiple times using different random seeds and use the best node separator that has been found.
However, instead of performing a full restart, one can use the information that has already been obtained.
In the graph partitioning context, the notion of iterated multilevel schemes has been introduced by Walshaw \cite{walshaw2004multilevel} and later has been augmented to more complex cycles~\cite{kaffpa}.
Here, one transfers a solution of a previous multilevel cycle down the hierarchy and uses it as initial solution.
More precisely, this can be done by not contracting any cut edge.
We \emph{transfer this technique} to the node separator problem as follows.
One can interpret a node separator as a three way partition $V_1,V_2,S$.
Hence, to obtain an iterated multilevel scheme for the node separator problem, our matching algorithm is not allowed to match any edge that runs between $V_i$ and $S$ ($i=1,2$).
Hence, when contraction is done, every edge leaving the separator will remain and we can transfer the node separator down the hierarchy.
Thus a given node separator can be used as initial node separator of the coarsest graph (having the same balance and size as the node separator of the finest graph).
This ensures non-decreasing quality, if the local search algorithm guarantees no worsening.
To increase diversification during coarsening in later V-cycles we pick a random edge rating of the ones described above.
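A minimal sketch of the matching restriction used in the iterated multilevel cycles: an edge is eligible for matching only if it does not run between a block $V_i$ and the separator $S$. The block encoding is an illustrative assumption.
\begin{verbatim}
#include <vector>

// Block assignment per node: 0 and 1 denote the blocks V_1 and V_2,
// 2 denotes the separator S.
bool eligible_for_matching(int u, int v, const std::vector<int>& block) {
    const int SEPARATOR = 2;
    bool u_in_sep = (block[u] == SEPARATOR);
    bool v_in_sep = (block[v] == SEPARATOR);
    return u_in_sep == v_in_sep;   // forbid edges with exactly one endpoint in S
}
\end{verbatim}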
\pagebreak
\section{Experiments}
\label{s:experiments}
\paragraph*{Methodology.}
We have implemented the algorithm described above within the KaHIP framework using C++ and compiled all algorithms using gcc 4.63 with full optimizations turned on (-O3 flag).
We integrated our algorithms in KaHIP v0.71 and compare ourselves against Metis~5.1 and Scotch~6.0.4 using the quality option that has focus on solution quality instead of running time.
Our new codes will be included into the KaHIP graph partitioning framework.
We perform ten repetitions of each algorithm using different random seeds for initialization.
Each run was made on a machine that has four Octa-Core Intel Xeon E5-4640 processors running at 2.4\,GHz.
It has 512 GB local memory, 20 MB L3-Cache and 8x256 KB L2-Cache.
Our main objective is the cardinality of node separators on the input graph.
In our experiments, we use $\epsilon=20\%$ since this is the default value for node separators in Metis.
We mostly present two kinds of views on the data: average values and minimum values as well as plots that show the ratios of the quality achieved by the algorithms.
\paragraph*{Algorithm Configuration.}
We performed a number of experiments to evaluate the influence and choose the parameters of our algorithms.
We mark the instances that have also been used for the parameter tuning in Appendix~\ref{sec:appendixdetailedresults} with a * and exclude these graphs when we report average values over multiple instances in comparisons with our competitors. However, our full algorithm is not overly sensitive to the precise choice of most of the parameters.
In general, using more sophisticated edge ratings improves solution quality slightly and improves partitioning speed over using edge weight.
We exclude further experiments from the main text and use the $exp^*$ edge rating function as a default since it has a slight advantage in our preliminary experiments.
In later iterated multilevel cycles, we pick one of the other ratings at random to introduce more diversification.
Indeed, increasing the number of V-cycles reduces the objective function.
We fixed the number of V-cycles to three.
By default, we use the better balanced minimum cut heuristic in our node separator algorithm since it keeps the node separator cardinality and improves balance. In the localized local search algorithm, we set the size of the random subset of separator nodes from which local search is started $|\mathcal{S}|$ to five.
\paragraph*{Instances.}
We use graphs from various sources to test our algorithm.
We use all 34 graphs from Chris Walshaw's benchmark archive~\cite{soper2004combined}.
Graphs derived from sparse matrices have been taken from the Florida Sparse Matrix Collection~\cite{UFsparsematrixcollection}.
We also use graphs from the 10th DIMACS Implementation Challenge~\cite{benchmarksfornetworksanalysis} website.
Here, \Id{rggX} is a \emph{random geometric graph} with
$2^{X}$ nodes where nodes represent random points in the unit square and edges
connect nodes whose Euclidean distance is below $0.55 \sqrt{ \ln n / n }$.
The graph \Id{delX} is a Delaunay triangulation of $2^{X}$ random points in the unit square.
The graphs \Id{af_shell9}, \Id{thermal2}, \Id{nlr} and \Id{nlpkkt240} are from the matrix and the numeric section of the DIMACS benchmark set.
The graphs \Id{europe} and \Id{deu} are large road networks of Europe and Germany taken from~\cite{DSSW09}.
Due to large running time of our algorithm, we exclude the graph \Id{nlpkkt240} from general comparisons and only use our full algorithm to compute a result.
Basic properties of the graphs under consideration can be found in Appendix~\ref{apdx:graphs}, Table~\ref{tab:test_instances_walshaw}.
\pagebreak
\subsection{Separator Quality}
\setlength{\tabcolsep}{1ex}
\begin{wraptable}{r}{7cm}
\centering
\vspace*{-.75cm}
\begin{tabular}{lrrr}
\toprule
Algorithm & Avg. Inc. & $t_\text{avg}$[s] & $\#\leq_\text{Metis}$ \\
\midrule
Metis & 10.3\% &0.12&-\\
Scotch & 62.2\% & 0.23& 0\%\\
\midrule
Flow$_0$ & 3.3\% &17.72 & 89\%\\
Flow$_{0.5}$ & 0.1\% &38.21 & 96\%\\
Flow$_1$ & 0.3\% &47.81& 94\%\\
\midrule
LSFlow$_0$ & 1.5\% &28.61&96\%\\
LSFlow$_{0.5}$ & -0.1\% &49.08&94\%\\
\hline
\hline
LSFlow$_{1}$ & - &58.50&96\%\\
\bottomrule
\end{tabular}
\caption{Avg. increase in separator size over LSFlow$_1$, avg. running times of the different algorithms and relative number of instances with a separator smaller or equal to Metis ($\#\leq_\text{Metis}$).}
\vspace*{-.75cm}
\label{tab:compressedresults}
\end{wraptable}
We now assess the size of node separators derived by our algorithms and by other state-of-the-art tools, i.e.\ Metis and Scotch as well as the data recently presented by LaSalle and Karypis~\cite{lasalle2015efficient}.
We use multiple configurations of our algorithm to estimate the influence of the multiplicative factor $\alpha$ that controls the size of the flow problems solved during uncoarsening and to see the effect of adding local search.
The algorithms named Flow$_\alpha$ use \emph{only} flows during uncoarsening as local search with a multiplicative factor $\alpha$. Algorithms labeled LSFlow$_\alpha$ start on each level with local search and localized local search until no improvement is found and afterwards perform flow based local search with a multiplicative factor $\alpha$.
Table~\ref{tab:compressedresults} summarizes the results of the experiments. We present detailed per instances results in Appendix~\ref{sec:appendixdetailedresults}, Table~\ref{tab:detailedsize} (separator size and balance) and Table~\ref{tab:detailedtime} (running times).
We now summarize the results.
First of all, only using flow-based local search during uncoarsening is already highly competitive, even for small flow problems with $\alpha=0$.
On average, Flow$_0$ computes 6.7\% smaller separators than Metis and 57\% smaller separators than Scotch.
It computes a smaller or equally sized separator than Metis in 89\% of the cases and than Scotch in \emph{every} case. However, it also needs more time to compute a result. This is due to the large flow problems that have to be solved.
Indeed, increasing the value of $\alpha$, i.e.\ searching for separators in larger areas around the initial separator, improves the objective further at the cost of running time.
For example, increasing $\alpha$ to 0.5 reduces the average size of the computed separator by 3.2\%, but also increases the running time by more than a factor~2 on average.
Using even larger values of $\alpha>1$ did not further improve the result so that we do not include the data here.
Adding non-flow-based local search also helps to improve the size of the separator. For example, it improves the separator size by 1.8\% when using $\alpha=0$. However, the impact of non-flow-based local search decreases for larger values of $\alpha$.
The strongest configuration of our algorithm is LSFlow$_{1}$. It computes smaller or equally sized separators than Metis in all but two cases and than Scotch in every case.
On average, separators are~10.3\% smaller than the separators computed by Metis and 62.2\% than the ones computed by
\begin{figure}
\caption{Improvement of LSFlow$_{1}$ over Metis and Scotch on a per instance basis.}
\label{fig:performanceratios}
\end{figure}
Scotch.
Figure~\ref{fig:performanceratios} shows the average improvement ratios over Metis and Scotch on a per instance basis, sorted by absolute value of improvement.
The largest improvement over Metis was obtained on the road network europe where our separator is a factor 2.3 smaller whereas the largest improvement over Scotch is on \Id{add32} where our separator is a factor 12 smaller.
On the instance \Id{G2_circuit} Metis computes a 19.9\% smaller separator which is the largest improvement of Metis over our algorithm.
We now compare the size of our separators against the recently published data by LaSalle and Karypis~\cite{lasalle2015efficient}.
The networks used therein that are publicly available are $\Id{auto}$, $\Id{nlr}$, $\Id{del24}$ and $\Id{nlpkkt240}$.
On these graphs our strongest configuration computes separators that are 10.7\%, 10.0\%, 20.1\% and 27.1\% smaller than their best configuration (Greedy+Segmented FM), respectively.
\section{Conclusion}
\label{s:conclusion}
In this work, we derived algorithms to find small node separators in large graphs.
We presented a multilevel algorithm that employs novel flow-based local search algorithms and transferred techniques successfully used in the graph partitioning field to the node separator problem.
This includes the usage of edge ratings tailored to our problem to guide the graph coarsening algorithm as well as highly localized local search and iterated multilevel cycles to improve solution quality even further.
Experiments indicate that using flow-based local search algorithms as only local search algorithm in a multilevel framework is already highly competitive in terms of separator quality.
Important future work includes shared-memory parallelization of our algorithms; e.g., currently most of the running time of our algorithm is consumed by the max-flow solver, so a parallel solver will speed up computations. In addition, it is possible to define a simple evolutionary algorithm for the node separator problem by transferring the iterated multilevel scheme to multiple input separators. This will likely result in even better solutions.
\pagebreak
\begin{appendix}
\section{Benchmark Set}
\label{apdx:graphs}
\begin{table}[H]
\centering
\begin{tabular}{| l | r | r || l | r | r | }
\hline
Graph & $n$& $m$ & Graph & $n$& $m$\\
\hline \hline
\multicolumn{3}{|c||}{ Small Walshaw Graphs} & \multicolumn{3}{c|}{UF Graphs}\\
\hline
add20 & \numprint{2395} & \numprint{7462} & cop20k\_A* & \numprint{99843} & \numprint{1262244}\\
data & \numprint{2851} & \numprint{15093} & 2cubes\_sphere* & \numprint{101492} & \numprint{772886}\\
3elt & \numprint{4720} & \numprint{13722} & thermomech\_TC & \numprint{102158} & \numprint{304700}\\
uk & \numprint{4824} & \numprint{6837} & cfd2 & \numprint{123440} & \numprint{1482229}\\
add32 & \numprint{4960} & \numprint{9462} & boneS01 & \numprint{127224} & \numprint{3293964}\\
bcsstk33 & \numprint{8738} & \numprint{291583} & Dubcova3 & \numprint{146689} & \numprint{1744980}\\
whitaker3 & \numprint{9800} & \numprint{28989} & bmwcra\_1 & \numprint{148770} & \numprint{5247616}\\
crack & \numprint{10240} & \numprint{30380} & G2\_circuit & \numprint{150102} & \numprint{288286} \\
wing\_nodal* & \numprint{10937} & \numprint{75488} & c-73 & \numprint{169422} & \numprint{554926} \\
fe\_4elt2 & \numprint{11143} & \numprint{32818} & shipsec5 & \numprint{179860} & \numprint{4966618}\\
vibrobox & \numprint{12328} & \numprint{165250} & cont-300 & \numprint{180895} & \numprint{448799} \\
\cline{4-6}
bcsstk29* & \numprint{13992} & \numprint{302748} & \multicolumn{3}{c|}{ Large Walshaw Graphs} \\
\cline{4-6}
4elt & \numprint{15606} & \numprint{45878} & 598a & \numprint{110971} & \numprint{741934} \\
fe\_sphere & \numprint{16386} & \numprint{49152} & fe\_ocean & \numprint{143437} & \numprint{409593} \\
cti & \numprint{16840} & \numprint{48232} & 144 & \numprint{144649} & \numprint{1074393} \\
memplus & \numprint{17758} & \numprint{54196} & wave & \numprint{156317} & \numprint{1059331} \\
cs4 & \numprint{22499} & \numprint{43858} & m14b & \numprint{214765} & \numprint{1679018} \\
bcsstk30 & \numprint{28924} & \numprint{1007284} & auto & \numprint{448695} & \numprint{3314611} \\
\cline{4-6}
bcsstk31 & \numprint{35588} & \numprint{572914} & \multicolumn{3}{c|}{ Large Other Graphs}\\
\cline{4-6}
fe\_pwt & \numprint{36519} & \numprint{144794} & del23 & $\approx$8.4M & $\approx$25.2M \\
bcsstk32 & \numprint{44609} & \numprint{985046} & del24 & $\approx$16.7M & $\approx$50.3M \\
\cline{4-6}
fe\_body & \numprint{45087} & \numprint{163734} & rgg23 & $\approx$8.4M & $\approx$63.5M \\
t60k* & \numprint{60005} & \numprint{89440} & rgg24 & $\approx$16.7M & $\approx$132.6M\\
\cline{4-6}
wing & \numprint{62032} & \numprint{121544} & deu & $\approx$4.4M & $\approx$5.5M \\
brack2 & \numprint{62631} & \numprint{366559} & eur & $\approx$18.0M & $\approx$22.2M \\
\cline{4-6}
finan512* & \numprint{74752} & \numprint{261120} & af\_shell9 & $\approx$504K & $\approx$8.5M \\
fe\_tooth & \numprint{78136} & \numprint{452591} & thermal2 & $\approx$1.2M & $\approx$3.7M \\
fe\_rotor & \numprint{99617} & \numprint{662431} & nlr & $\approx$4.2M & $\approx$12.5M \\
\hline
\hline
& & & nlpkkt240 & $\approx$27.9M & $\approx$373M \\
\hline
\end{tabular}
\vspace*{.25cm}
\caption{Basic properties of the instances used for evaluation.}
\label{tab:test_instances_walshaw}
\end{table}
\section{Detailed per Instance Results}
\label{sec:appendixdetailedresults}
\begin{landscape}
\setlength{\tabcolsep}{1ex}
\thispagestyle{empty}
\begin{table}
\tiny
\vspace*{-.25cm}
\hspace*{-1.25cm}
\begin{tabular}{l rrr@{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr @{\hskip 13pt}rrr}
\toprule
& \multicolumn{3}{c}{Metis} & \multicolumn{3}{c}{Scotch} & \multicolumn{3}{c}{LSFlow$_0$} & \multicolumn{3}{c}{LSFlow$_{0.5}$} & \multicolumn{3}{c}{LSFlow$_{1}$} & \multicolumn{3}{c}{Flow$_{0}$} & \multicolumn{3}{c}{Flow$_{0.5}$} & \multicolumn{3}{c}{Flow$_{1}$} \\
Graph & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. & Avg. & Best& Bal. \\
\cmidrule(r){1-1} \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} \cmidrule(r){11-13} \cmidrule(r){14-16} \cmidrule(r){17-19} \cmidrule(r){20-22} \cmidrule(r){23-25}
\texttt{\detokenize{144}} & \numprint{1539} & \numprint{1511} & \numprint{1,13} & \numprint{1639} & \numprint{1602} & \numprint{1,00} & \numprint{1482} & \numprint{1467} & \numprint{1,12} & \numprint{1444} & \numprint{1437} & \numprint{1,19} & \numprint{1445} & \numprint{1439} & \numprint{1,19} & \numprint{1495} & \numprint{1481} & \numprint{1,09} & \numprint{1444} & \numprint{1437} & \numprint{1,20} & \numprint{1446} & \numprint{1437} & \numprint{1,19}\\
\texttt{\detokenize{2cubes_sphere}} & \numprint{1398} & \numprint{1335} & \numprint{1,11} & \numprint{1587} & \numprint{1530} & \numprint{1,00} & \numprint{1265} & \numprint{1245} & \numprint{1,14} & \numprint{1228} & \numprint{1221} & \numprint{1,19} & \numprint{1230} & \numprint{1221} & \numprint{1,19} & \numprint{1274} & \numprint{1266} & \numprint{1,11} & \numprint{1237} & \numprint{1221} & \numprint{1,18} & \numprint{1235} & \numprint{1221} & \numprint{1,18}\\
\texttt{\detokenize{3elt}} & \numprint{42} & \numprint{42} & \numprint{1,09} & \numprint{50} & \numprint{46} & \numprint{1,00} & \numprint{42} & \numprint{42} & \numprint{1,11} & \numprint{42} & \numprint{42} & \numprint{1,11} & \numprint{42} & \numprint{42} & \numprint{1,11} & \numprint{42} & \numprint{42} & \numprint{1,11} & \numprint{42} & \numprint{42} & \numprint{1,11} & \numprint{42} & \numprint{42} & \numprint{1,11}\\
\texttt{\detokenize{4elt}} & \numprint{69} & \numprint{68} & \numprint{1,02} & \numprint{82} & \numprint{73} & \numprint{1,00} & \numprint{68} & \numprint{68} & \numprint{1,01} & \numprint{68} & \numprint{68} & \numprint{1,01} & \numprint{68} & \numprint{68} & \numprint{1,01} & \numprint{68} & \numprint{68} & \numprint{1,02} & \numprint{68} & \numprint{68} & \numprint{1,01} & \numprint{68} & \numprint{68} & \numprint{1,01}\\
\texttt{\detokenize{598a}} & \numprint{615} & \numprint{603} & \numprint{1,03} & \numprint{639} & \numprint{629} & \numprint{1,00} & \numprint{594} & \numprint{593} & \numprint{1,04} & \numprint{593} & \numprint{593} & \numprint{1,03} & \numprint{593} & \numprint{593} & \numprint{1,03} & \numprint{594} & \numprint{593} & \numprint{1,04} & \numprint{593} & \numprint{593} & \numprint{1,03} & \numprint{593} & \numprint{593} & \numprint{1,03}\\
\texttt{\detokenize{add20}} & \numprint{25} & \numprint{23} & \numprint{1,09} & \numprint{142} & \numprint{128} & \numprint{1,10} & \numprint{26} & \numprint{23} & \numprint{1,11} & \numprint{23} & \numprint{23} & \numprint{1,08} & \numprint{24} & \numprint{23} & \numprint{1,08} & \numprint{28} & \numprint{23} & \numprint{1,10} & \numprint{23} & \numprint{23} & \numprint{1,08} & \numprint{24} & \numprint{23} & \numprint{1,08}\\
\texttt{\detokenize{add32}} & \numprint{1} & \numprint{1} & \numprint{1,08} & \numprint{14} & \numprint{4} & \numprint{1,00} & \numprint{1} & \numprint{1} & \numprint{1,12} & \numprint{1} & \numprint{1} & \numprint{1,12} & \numprint{1} & \numprint{1} & \numprint{1,12} & \numprint{1} & \numprint{1} & \numprint{1,12} & \numprint{1} & \numprint{1} & \numprint{1,12} & \numprint{1} & \numprint{1} & \numprint{1,12}\\
\texttt{\detokenize{af_shell9}} & \numprint{934} & \numprint{885} & \numprint{1,00} & \numprint{1382} & \numprint{1095} & \numprint{1,00} & \numprint{880} & \numprint{880} & \numprint{1,06} & \numprint{880} & \numprint{880} & \numprint{1,06} & \numprint{880} & \numprint{880} & \numprint{1,06} & \numprint{880} & \numprint{880} & \numprint{1,06} & \numprint{880} & \numprint{880} & \numprint{1,06} & \numprint{880} & \numprint{880} & \numprint{1,06}\\
\texttt{\detokenize{auto}} & \numprint{2109} & \numprint{2073} & \numprint{1,18} & \numprint{3158} & \numprint{2547} & \numprint{1,00} & \numprint{2034} & \numprint{2021} & \numprint{1,19} & \numprint{1986} & \numprint{1977} & \numprint{1,20} & \numprint{1992} & \numprint{1978} & \numprint{1,20} & \numprint{2093} & \numprint{2062} & \numprint{1,17} & \numprint{1992} & \numprint{1981} & \numprint{1,20} & \numprint{1988} & \numprint{1978} & \numprint{1,20}\\
\texttt{\detokenize{bcsstk29}} & \numprint{180} & \numprint{180} & \numprint{1,00} & \numprint{260} & \numprint{234} & \numprint{1,01} & \numprint{180} & \numprint{180} & \numprint{1,02} & \numprint{180} & \numprint{180} & \numprint{1,11} & \numprint{180} & \numprint{180} & \numprint{1,11} & \numprint{180} & \numprint{180} & \numprint{1,01} & \numprint{180} & \numprint{180} & \numprint{1,11} & \numprint{180} & \numprint{180} & \numprint{1,10}\\
\texttt{\detokenize{bcsstk30}} & \numprint{208} & \numprint{206} & \numprint{1,04} & \numprint{439} & \numprint{393} & \numprint{1,02} & \numprint{206} & \numprint{206} & \numprint{1,00} & \numprint{206} & \numprint{206} & \numprint{1,00} & \numprint{206} & \numprint{206} & \numprint{1,00} & \numprint{206} & \numprint{206} & \numprint{1,00} & \numprint{206} & \numprint{206} & \numprint{1,00} & \numprint{206} & \numprint{206} & \numprint{1,00}\\
\texttt{\detokenize{bcsstk31}} & \numprint{298} & \numprint{285} & \numprint{1,07} & \numprint{482} & \numprint{437} & \numprint{1,04} & \numprint{271} & \numprint{268} & \numprint{1,10} & \numprint{268} & \numprint{268} & \numprint{1,17} & \numprint{268} & \numprint{268} & \numprint{1,17} & \numprint{271} & \numprint{270} & \numprint{1,09} & \numprint{268} & \numprint{268} & \numprint{1,17} & \numprint{268} & \numprint{268} & \numprint{1,17}\\
\texttt{\detokenize{bcsstk32}} & \numprint{276} & \numprint{252} & \numprint{1,19} & \numprint{752} & \numprint{463} & \numprint{1,04} & \numprint{236} & \numprint{229} & \numprint{1,19} & \numprint{239} & \numprint{229} & \numprint{1,18} & \numprint{232} & \numprint{229} & \numprint{1,20} & \numprint{252} & \numprint{239} & \numprint{1,17} & \numprint{239} & \numprint{229} & \numprint{1,18} & \numprint{233} & \numprint{229} & \numprint{1,20}\\
\texttt{\detokenize{bcsstk33}} & \numprint{421} & \numprint{421} & \numprint{0,96} & \numprint{549} & \numprint{179} & \numprint{1,21} & \numprint{282} & \numprint{262} & \numprint{1,18} & \numprint{267} & \numprint{261} & \numprint{1,20} & \numprint{283} & \numprint{265} & \numprint{1,19} & \numprint{292} & \numprint{274} & \numprint{1,17} & \numprint{272} & \numprint{266} & \numprint{1,20} & \numprint{288} & \numprint{266} & \numprint{1,19}\\
\texttt{\detokenize{bmwcra_1}} & \numprint{318} & \numprint{318} & \numprint{1,13} & \numprint{1006} & \numprint{576} & \numprint{1,06} & \numprint{318} & \numprint{318} & \numprint{1,14} & \numprint{350} & \numprint{318} & \numprint{1,13} & \numprint{350} & \numprint{318} & \numprint{1,13} & \numprint{318} & \numprint{318} & \numprint{1,14} & \numprint{350} & \numprint{318} & \numprint{1,13} & \numprint{350} & \numprint{318} & \numprint{1,13}\\
\texttt{\detokenize{boneS01}} & \numprint{1583} & \numprint{1542} & \numprint{1,08} & \numprint{4137} & \numprint{3969} & \numprint{1,00} & \numprint{1525} & \numprint{1500} & \numprint{1,04} & \numprint{1500} & \numprint{1500} & \numprint{1,10} & \numprint{1500} & \numprint{1500} & \numprint{1,10} & \numprint{1524} & \numprint{1500} & \numprint{1,04} & \numprint{1500} & \numprint{1500} & \numprint{1,10} & \numprint{1500} & \numprint{1500} & \numprint{1,10}\\
\texttt{\detokenize{brack2}} & \numprint{182} & \numprint{181} & \numprint{1,07} & \numprint{237} & \numprint{214} & \numprint{1,00} & \numprint{181} & \numprint{181} & \numprint{1,07} & \numprint{181} & \numprint{181} & \numprint{1,07} & \numprint{181} & \numprint{181} & \numprint{1,07} & \numprint{181} & \numprint{181} & \numprint{1,07} & \numprint{181} & \numprint{181} & \numprint{1,07} & \numprint{181} & \numprint{181} & \numprint{1,07}\\
\texttt{\detokenize{cfd2}} & \numprint{1040} & \numprint{1030} & \numprint{1,05} & \numprint{1303} & \numprint{1163} & \numprint{1,00} & \numprint{1030} & \numprint{1030} & \numprint{1,06} & \numprint{1030} & \numprint{1030} & \numprint{1,09} & \numprint{1030} & \numprint{1030} & \numprint{1,08} & \numprint{1030} & \numprint{1030} & \numprint{1,06} & \numprint{1030} & \numprint{1030} & \numprint{1,08} & \numprint{1030} & \numprint{1030} & \numprint{1,07}\\
\texttt{\detokenize{cont-300}} & \numprint{598} & \numprint{598} & \numprint{1,00} & \numprint{616} & \numprint{598} & \numprint{1,00} & \numprint{598} & \numprint{598} & \numprint{1,00} & \numprint{598} & \numprint{598} & \numprint{1,00} & \numprint{579} & \numprint{534} & \numprint{1,06} & \numprint{598} & \numprint{598} & \numprint{1,02} & \numprint{598} & \numprint{598} & \numprint{1,18} & \numprint{598} & \numprint{598} & \numprint{1,18}\\
\texttt{\detokenize{cop20k_A}} & \numprint{680} & \numprint{660} & \numprint{1,02} & \numprint{1904} & \numprint{1833} & \numprint{1,00} & \numprint{613} & \numprint{613} & \numprint{1,04} & \numprint{613} & \numprint{613} & \numprint{1,04} & \numprint{613} & \numprint{613} & \numprint{1,04} & \numprint{613} & \numprint{613} & \numprint{1,04} & \numprint{613} & \numprint{613} & \numprint{1,04} & \numprint{613} & \numprint{613} & \numprint{1,04}\\
\texttt{\detokenize{crack}} & \numprint{72} & \numprint{69} & \numprint{1,08} & \numprint{92} & \numprint{81} & \numprint{1,00} & \numprint{69} & \numprint{68} & \numprint{1,13} & \numprint{68} & \numprint{68} & \numprint{1,16} & \numprint{68} & \numprint{68} & \numprint{1,16} & \numprint{69} & \numprint{68} & \numprint{1,13} & \numprint{68} & \numprint{68} & \numprint{1,16} & \numprint{68} & \numprint{68} & \numprint{1,16}\\
\texttt{\detokenize{cs4}} & \numprint{289} & \numprint{281} & \numprint{1,11} & \numprint{332} & \numprint{323} & \numprint{1,00} & \numprint{281} & \numprint{279} & \numprint{1,09} & \numprint{267} & \numprint{264} & \numprint{1,19} & \numprint{268} & \numprint{264} & \numprint{1,19} & \numprint{284} & \numprint{282} & \numprint{1,08} & \numprint{267} & \numprint{265} & \numprint{1,19} & \numprint{269} & \numprint{265} & \numprint{1,18}\\
\texttt{\detokenize{cti}} & \numprint{268} & \numprint{266} & \numprint{1,00} & \numprint{291} & \numprint{283} & \numprint{1,00} & \numprint{267} & \numprint{266} & \numprint{0,99} & \numprint{266} & \numprint{266} & \numprint{0,98} & \numprint{266} & \numprint{266} & \numprint{0,98} & \numprint{267} & \numprint{266} & \numprint{1,01} & \numprint{266} & \numprint{266} & \numprint{1,00} & \numprint{266} & \numprint{266} & \numprint{1,00}\\
\texttt{\detokenize{data}} & \numprint{59} & \numprint{45} & \numprint{1,10} & \numprint{69} & \numprint{64} & \numprint{1,00} & \numprint{44} & \numprint{41} & \numprint{1,17} & \numprint{42} & \numprint{41} & \numprint{1,18} & \numprint{43} & \numprint{41} & \numprint{1,18} & \numprint{45} & \numprint{43} & \numprint{1,15} & \numprint{42} & \numprint{41} & \numprint{1,17} & \numprint{43} & \numprint{41} & \numprint{1,18}\\
\texttt{\detokenize{del23}} & \numprint{2486} & \numprint{2434} & \numprint{1,03} & \numprint{2933} & \numprint{2741} & \numprint{1,00} & \numprint{2050} & \numprint{2048} & \numprint{1,01} & \numprint{2048} & \numprint{2048} & \numprint{1,05} & \numprint{2048} & \numprint{2048} & \numprint{1,04} & \numprint{2050} & \numprint{2048} & \numprint{1,01} & \numprint{2048} & \numprint{2048} & \numprint{1,04} & \numprint{2048} & \numprint{2048} & \numprint{1,04}\\
\texttt{\detokenize{del24}} & \numprint{3541} & \numprint{3472} & \numprint{1,01} & \numprint{4004} & \numprint{3792} & \numprint{1,00} & \numprint{2908} & \numprint{2904} & \numprint{1,01} & \numprint{2907} & \numprint{2904} & \numprint{1,03} & \numprint{2907} & \numprint{2904} & \numprint{1,03} & \numprint{2908} & \numprint{2904} & \numprint{1,01} & \numprint{2907} & \numprint{2904} & \numprint{1,03} & \numprint{2907} & \numprint{2904} & \numprint{1,03}\\
\texttt{\detokenize{deu}} & \numprint{241} & \numprint{217} & \numprint{1,07} & \numprint{325} & \numprint{286} & \numprint{1,00} & \numprint{152} & \numprint{152} & \numprint{1,04} & \numprint{145} & \numprint{145} & \numprint{1,12} & \numprint{145} & \numprint{145} & \numprint{1,12} & \numprint{152} & \numprint{152} & \numprint{1,04} & \numprint{145} & \numprint{145} & \numprint{1,12} & \numprint{145} & \numprint{145} & \numprint{1,12}\\
\texttt{\detokenize{Dubcova3}} & \numprint{406} & \numprint{383} & \numprint{1,02} & \numprint{1495} & \numprint{1395} & \numprint{1,00} & \numprint{383} & \numprint{383} & \numprint{1,04} & \numprint{383} & \numprint{383} & \numprint{1,16} & \numprint{383} & \numprint{383} & \numprint{1,15} & \numprint{383} & \numprint{383} & \numprint{1,05} & \numprint{383} & \numprint{383} & \numprint{1,16} & \numprint{383} & \numprint{383} & \numprint{1,18}\\
\texttt{\detokenize{eur}} & \numprint{430} & \numprint{349} & \numprint{1,09} & \numprint{620} & \numprint{486} & \numprint{1,01} & \numprint{218} & \numprint{109} & \numprint{1,07} & \numprint{208} & \numprint{200} & \numprint{1,12} & \numprint{206} & \numprint{195} & \numprint{1,13} & \numprint{218} & \numprint{109} & \numprint{1,07} & \numprint{208} & \numprint{200} & \numprint{1,12} & \numprint{206} & \numprint{195} & \numprint{1,13}\\
\texttt{\detokenize{fe_4elt2}} & \numprint{66} & \numprint{66} & \numprint{0,99} & \numprint{69} & \numprint{67} & \numprint{1,00} & \numprint{66} & \numprint{66} & \numprint{0,99} & \numprint{66} & \numprint{66} & \numprint{0,99} & \numprint{66} & \numprint{66} & \numprint{0,99} & \numprint{66} & \numprint{66} & \numprint{1,02} & \numprint{66} & \numprint{66} & \numprint{1,04} & \numprint{66} & \numprint{66} & \numprint{1,04}\\
\texttt{\detokenize{fe_body}} & \numprint{86} & \numprint{65} & \numprint{1,11} & \numprint{160} & \numprint{122} & \numprint{1,01} & \numprint{78} & \numprint{66} & \numprint{1,12} & \numprint{77} & \numprint{61} & \numprint{1,15} & \numprint{75} & \numprint{62} & \numprint{1,14} & \numprint{78} & \numprint{66} & \numprint{1,12} & \numprint{77} & \numprint{61} & \numprint{1,15} & \numprint{75} & \numprint{62} & \numprint{1,14}\\
\texttt{\detokenize{fe_ocean}} & \numprint{273} & \numprint{263} & \numprint{1,01} & \numprint{340} & \numprint{322} & \numprint{1,00} & \numprint{263} & \numprint{263} & \numprint{1,02} & \numprint{263} & \numprint{263} & \numprint{1,02} & \numprint{263} & \numprint{263} & \numprint{1,02} & \numprint{263} & \numprint{263} & \numprint{1,02} & \numprint{263} & \numprint{263} & \numprint{1,02} & \numprint{263} & \numprint{263} & \numprint{1,02}\\
\texttt{\detokenize{fe_pwt}} & \numprint{120} & \numprint{120} & \numprint{1,01} & \numprint{132} & \numprint{124} & \numprint{1,00} & \numprint{116} & \numprint{116} & \numprint{1,03} & \numprint{116} & \numprint{116} & \numprint{1,09} & \numprint{116} & \numprint{116} & \numprint{1,12} & \numprint{116} & \numprint{116} & \numprint{1,03} & \numprint{116} & \numprint{116} & \numprint{1,13} & \numprint{116} & \numprint{116} & \numprint{1,13}\\
\texttt{\detokenize{fe_rotor}} & \numprint{453} & \numprint{441} & \numprint{1,04} & \numprint{576} & \numprint{514} & \numprint{1,05} & \numprint{441} & \numprint{439} & \numprint{1,07} & \numprint{441} & \numprint{439} & \numprint{1,08} & \numprint{441} & \numprint{439} & \numprint{1,07} & \numprint{441} & \numprint{439} & \numprint{1,08} & \numprint{442} & \numprint{439} & \numprint{1,08} & \numprint{442} & \numprint{439} & \numprint{1,08}\\
\texttt{\detokenize{fe_sphere}} & \numprint{195} & \numprint{192} & \numprint{0,99} & \numprint{239} & \numprint{227} & \numprint{1,00} & \numprint{192} & \numprint{192} & \numprint{1,04} & \numprint{192} & \numprint{192} & \numprint{1,05} & \numprint{192} & \numprint{192} & \numprint{1,05} & \numprint{192} & \numprint{192} & \numprint{1,02} & \numprint{192} & \numprint{192} & \numprint{1,13} & \numprint{192} & \numprint{192} & \numprint{1,14}\\
\texttt{\detokenize{fe_tooth}} & \numprint{882} & \numprint{867} & \numprint{1,16} & \numprint{1192} & \numprint{1094} & \numprint{1,00} & \numprint{882} & \numprint{869} & \numprint{1,13} & \numprint{849} & \numprint{837} & \numprint{1,19} & \numprint{848} & \numprint{826} & \numprint{1,19} & \numprint{885} & \numprint{882} & \numprint{1,11} & \numprint{852} & \numprint{827} & \numprint{1,19} & \numprint{853} & \numprint{839} & \numprint{1,19}\\
\texttt{\detokenize{finan512}} & \numprint{50} & \numprint{50} & \numprint{1,07} & \numprint{67} & \numprint{51} & \numprint{1,02} & \numprint{50} & \numprint{50} & \numprint{1,01} & \numprint{50} & \numprint{50} & \numprint{1,13} & \numprint{50} & \numprint{50} & \numprint{1,13} & \numprint{50} & \numprint{50} & \numprint{1,01} & \numprint{50} & \numprint{50} & \numprint{1,12} & \numprint{50} & \numprint{50} & \numprint{1,13}\\
\texttt{\detokenize{G2_circuit}} & \numprint{312} & \numprint{312} & \numprint{1,03} & \numprint{416} & \numprint{348} & \numprint{1,00} & \numprint{374} & \numprint{312} & \numprint{1,01} & \numprint{374} & \numprint{312} & \numprint{1,03} & \numprint{374} & \numprint{312} & \numprint{1,03} & \numprint{374} & \numprint{312} & \numprint{1,02} & \numprint{374} & \numprint{312} & \numprint{1,14} & \numprint{374} & \numprint{312} & \numprint{1,14}\\
\texttt{\detokenize{m14b}} & \numprint{885} & \numprint{859} & \numprint{1,04} & \numprint{895} & \numprint{870} & \numprint{1,00} & \numprint{835} & \numprint{834} & \numprint{1,02} & \numprint{834} & \numprint{834} & \numprint{1,00} & \numprint{834} & \numprint{834} & \numprint{1,00} & \numprint{835} & \numprint{834} & \numprint{1,02} & \numprint{834} & \numprint{834} & \numprint{1,00} & \numprint{834} & \numprint{834} & \numprint{1,00}\\
\texttt{\detokenize{memplus}} & \numprint{88} & \numprint{81} & \numprint{1,19} & \numprint{95} & \numprint{95} & \numprint{1,00} & \numprint{81} & \numprint{72} & \numprint{1,15} & \numprint{66} & \numprint{62} & \numprint{1,15} & \numprint{68} & \numprint{65} & \numprint{1,15} & \numprint{108} & \numprint{76} & \numprint{1,10} & \numprint{70} & \numprint{65} & \numprint{1,12} & \numprint{72} & \numprint{68} & \numprint{1,11}\\
\texttt{\detokenize{nlr}} & \numprint{1823} & \numprint{1805} & \numprint{1,01} & \numprint{2156} & \numprint{1991} & \numprint{1,00} & \numprint{1663} & \numprint{1663} & \numprint{1,04} & \numprint{1655} & \numprint{1655} & \numprint{1,17} & \numprint{1655} & \numprint{1655} & \numprint{1,17} & \numprint{1663} & \numprint{1663} & \numprint{1,04} & \numprint{1655} & \numprint{1655} & \numprint{1,17} & \numprint{1655} & \numprint{1655} & \numprint{1,17}\\
\texttt{\detokenize{rgg23}} & \numprint{3395} & \numprint{3327} & \numprint{1,02} & \numprint{3466} & \numprint{3298} & \numprint{1,00} & \numprint{2475} & \numprint{2471} & \numprint{1,09} & \numprint{2473} & \numprint{2470} & \numprint{1,14} & \numprint{2473} & \numprint{2470} & \numprint{1,14} & \numprint{2475} & \numprint{2471} & \numprint{1,09} & \numprint{2473} & \numprint{2470} & \numprint{1,14} & \numprint{2473} & \numprint{2470} & \numprint{1,14}\\
\texttt{\detokenize{rgg24}} & \numprint{5020} & \numprint{4850} & \numprint{1,02} & \numprint{5073} & \numprint{4961} & \numprint{1,00} & \numprint{3648} & \numprint{3636} & \numprint{1,13} & \numprint{3644} & \numprint{3636} & \numprint{1,14} & \numprint{3644} & \numprint{3636} & \numprint{1,14} & \numprint{3648} & \numprint{3636} & \numprint{1,13} & \numprint{3644} & \numprint{3636} & \numprint{1,14} & \numprint{3644} & \numprint{3636} & \numprint{1,14}\\
\texttt{\detokenize{shipsec5}} & \numprint{1222} & \numprint{1191} & \numprint{1,05} & \numprint{2031} & \numprint{1887} & \numprint{1,00} & \numprint{1199} & \numprint{1191} & \numprint{1,02} & \numprint{1185} & \numprint{1185} & \numprint{1,16} & \numprint{1185} & \numprint{1185} & \numprint{1,16} & \numprint{1202} & \numprint{1191} & \numprint{1,00} & \numprint{1185} & \numprint{1185} & \numprint{1,16} & \numprint{1185} & \numprint{1185} & \numprint{1,16}\\
\texttt{\detokenize{t60k}} & \numprint{58} & \numprint{56} & \numprint{1,09} & \numprint{97} & \numprint{87} & \numprint{1,00} & \numprint{56} & \numprint{56} & \numprint{1,10} & \numprint{56} & \numprint{56} & \numprint{1,10} & \numprint{56} & \numprint{56} & \numprint{1,10} & \numprint{56} & \numprint{56} & \numprint{1,10} & \numprint{56} & \numprint{56} & \numprint{1,10} & \numprint{56} & \numprint{56} & \numprint{1,10}\\
\texttt{\detokenize{thermal2}} & \numprint{468} & \numprint{462} & \numprint{1,02} & \numprint{524} & \numprint{494} & \numprint{1,00} & \numprint{430} & \numprint{430} & \numprint{1,03} & \numprint{430} & \numprint{430} & \numprint{1,03} & \numprint{430} & \numprint{430} & \numprint{1,03} & \numprint{430} & \numprint{430} & \numprint{1,03} & \numprint{430} & \numprint{430} & \numprint{1,03} & \numprint{430} & \numprint{430} & \numprint{1,03}\\
\texttt{\detokenize{thermomech}} & \numprint{132} & \numprint{129} & \numprint{1,03} & \numprint{153} & \numprint{136} & \numprint{1,00} & \numprint{126} & \numprint{126} & \numprint{1,06} & \numprint{126} & \numprint{126} & \numprint{1,07} & \numprint{126} & \numprint{126} & \numprint{1,07} & \numprint{126} & \numprint{126} & \numprint{1,06} & \numprint{126} & \numprint{126} & \numprint{1,07} & \numprint{126} & \numprint{126} & \numprint{1,07}\\
\texttt{\detokenize{uk}} & \numprint{15} & \numprint{14} & \numprint{1,16} & \numprint{25} & \numprint{21} & \numprint{1,00} & \numprint{14} & \numprint{14} & \numprint{1,19} & \numprint{14} & \numprint{14} & \numprint{1,19} & \numprint{14} & \numprint{14} & \numprint{1,19} & \numprint{15} & \numprint{14} & \numprint{1,18} & \numprint{14} & \numprint{14} & \numprint{1,19} & \numprint{14} & \numprint{14} & \numprint{1,19}\\
\texttt{\detokenize{vibrobox}} & \numprint{582} & \numprint{554} & \numprint{1,14} & \numprint{967} & \numprint{756} & \numprint{0,92} & \numprint{643} & \numprint{554} & \numprint{1,12} & \numprint{581} & \numprint{554} & \numprint{1,16} & \numprint{614} & \numprint{554} & \numprint{1,13} & \numprint{826} & \numprint{598} & \numprint{1,07} & \numprint{581} & \numprint{554} & \numprint{1,16} & \numprint{614} & \numprint{554} & \numprint{1,12}\\
\texttt{\detokenize{wave}} & \numprint{2254} & \numprint{2204} & \numprint{1,02} & \numprint{2451} & \numprint{2329} & \numprint{1,00} & \numprint{2168} & \numprint{2122} & \numprint{1,07} & \numprint{2114} & \numprint{2079} & \numprint{1,15} & \numprint{2101} & \numprint{2077} & \numprint{1,16} & \numprint{2174} & \numprint{2112} & \numprint{1,06} & \numprint{2121} & \numprint{2080} & \numprint{1,15} & \numprint{2101} & \numprint{2079} & \numprint{1,17}\\
\texttt{\detokenize{whitaker3}} & \numprint{64} & \numprint{63} & \numprint{1,02} & \numprint{70} & \numprint{67} & \numprint{1,00} & \numprint{63} & \numprint{63} & \numprint{0,99} & \numprint{62} & \numprint{62} & \numprint{1,19} & \numprint{62} & \numprint{62} & \numprint{1,19} & \numprint{63} & \numprint{63} & \numprint{1,00} & \numprint{62} & \numprint{62} & \numprint{1,19} & \numprint{62} & \numprint{62} & \numprint{1,19}\\
\texttt{\detokenize{wing}} & \numprint{630} & \numprint{607} & \numprint{1,12} & \numprint{612} & \numprint{188} & \numprint{1,16} & \numprint{613} & \numprint{605} & \numprint{1,10} & \numprint{590} & \numprint{583} & \numprint{1,19} & \numprint{586} & \numprint{584} & \numprint{1,18} & \numprint{615} & \numprint{608} & \numprint{1,08} & \numprint{589} & \numprint{581} & \numprint{1,18} & \numprint{587} & \numprint{583} & \numprint{1,18}\\
\texttt{\detokenize{wing_nodal}} & \numprint{389} & \numprint{383} & \numprint{1,17} & \numprint{381} & \numprint{167} & \numprint{1,23} & \numprint{386} & \numprint{378} & \numprint{1,16} & \numprint{375} & \numprint{374} & \numprint{1,20} & \numprint{375} & \numprint{374} & \numprint{1,19} & \numprint{407} & \numprint{406} & \numprint{1,06} & \numprint{375} & \numprint{374} & \numprint{1,19} & \numprint{375} & \numprint{374} & \numprint{1,19}\\
\bottomrule
\end{tabular}
\vspace*{.5cm}
\caption{Detailed per-instance results: average and best separator size, as well as average balance.}
\label{tab:detailedsize}
\end{table}
\end{landscape}
\setlength{\tabcolsep}{1ex}
\begin{table}[h!]
\scriptsize
\centering
\begin{tabular}{l rr rrr rrr}
\toprule
& {Metis} & {Scotch} & {LSFlow$_0$} & {LSFlow$_{0.5}$} & {LSFlow$_{1}$} & {Flow$_{0}$} & {Flow$_{0.5}$} & {Flow$_{1}$} \\
Graph & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ & $t_\text{avg.}$ \\
\midrule
\texttt{\detokenize{144}} & \numprint{0,2} & \numprint{0,3} & \numprint{85,6} & \numprint{132,6} & \numprint{166,3} & \numprint{27,0} & \numprint{82,8} & \numprint{95,9}\\
\texttt{\detokenize{2cubes_sphere}} & \numprint{0,1} & \numprint{0,2} & \numprint{67,5} & \numprint{106,4} & \numprint{124,8} & \numprint{21,5} & \numprint{62,6} & \numprint{82,9}\\
\texttt{\detokenize{3elt}} & \numprint{0,1} & \numprint{0,1} & \numprint{1,2} & \numprint{1,3} & \numprint{1,5} & \numprint{1,0} & \numprint{1,2} & \numprint{1,3}\\
\texttt{\detokenize{4elt}} & \numprint{0,1} & \numprint{0,1} & \numprint{2,1} & \numprint{2,9} & \numprint{3,5} & \numprint{1,6} & \numprint{2,6} & \numprint{3,1}\\
\texttt{\detokenize{598a}} & \numprint{0,1} & \numprint{0,2} & \numprint{31,5} & \numprint{48,4} & \numprint{59,5} & \numprint{14,2} & \numprint{32,8} & \numprint{44,1}\\
\texttt{\detokenize{add20}} & \numprint{0,1} & \numprint{0,1} & \numprint{6,1} & \numprint{5,8} & \numprint{5,0} & \numprint{5,4} & \numprint{5,3} & \numprint{4,4}\\
\texttt{\detokenize{add32}} & \numprint{0,1} & \numprint{0,1} & \numprint{0,8} & \numprint{0,8} & \numprint{0,9} & \numprint{0,7} & \numprint{0,8} & \numprint{0,9}\\
\texttt{\detokenize{af_shell9}} & \numprint{0,6} & \numprint{1,3} & \numprint{140,1} & \numprint{278,6} & \numprint{343,8} & \numprint{103,6} & \numprint{237,9} & \numprint{339,6}\\
\texttt{\detokenize{auto}} & \numprint{0,6} & \numprint{1,0} & \numprint{146,0} & \numprint{468,7} & \numprint{603,9} & \numprint{65,3} & \numprint{386,2} & \numprint{450,1}\\
\texttt{\detokenize{bcsstk29}} & \numprint{0,1} & \numprint{0,4} & \numprint{8,1} & \numprint{9,4} & \numprint{10,2} & \numprint{5,4} & \numprint{7,0} & \numprint{7,9}\\
\texttt{\detokenize{bcsstk30}} & \numprint{0,1} & \numprint{1,2} & \numprint{26,7} & \numprint{38,6} & \numprint{42,9} & \numprint{11,8} & \numprint{23,1} & \numprint{29,2}\\
\texttt{\detokenize{bcsstk31}} & \numprint{0,1} & \numprint{0,4} & \numprint{17,6} & \numprint{24,0} & \numprint{25,8} & \numprint{7,6} & \numprint{12,1} & \numprint{15,1}\\
\texttt{\detokenize{bcsstk32}} & \numprint{0,1} & \numprint{0,9} & \numprint{21,0} & \numprint{35,2} & \numprint{39,7} & \numprint{9,6} & \numprint{24,0} & \numprint{32,5}\\
\texttt{\detokenize{bcsstk33}} & \numprint{0,1} & \numprint{1,4} & \numprint{39,0} & \numprint{41,8} & \numprint{47,4} & \numprint{28,6} & \numprint{31,6} & \numprint{37,0}\\
\texttt{\detokenize{bmwcra_1}} & \numprint{0,3} & \numprint{4,3} & \numprint{149,8} & \numprint{206,0} & \numprint{216,5} & \numprint{62,9} & \numprint{110,1} & \numprint{151,4}\\
\texttt{\detokenize{boneS01}} & \numprint{0,3} & \numprint{7,1} & \numprint{222,1} & \numprint{245,7} & \numprint{258,1} & \numprint{55,7} & \numprint{75,0} & \numprint{96,8}\\
\texttt{\detokenize{brack2}} & \numprint{0,1} & \numprint{0,1} & \numprint{10,1} & \numprint{15,1} & \numprint{20,2} & \numprint{5,5} & \numprint{11,1} & \numprint{15,4}\\
\texttt{\detokenize{cfd2}} & \numprint{0,2} & \numprint{0,2} & \numprint{73,3} & \numprint{103,8} & \numprint{114,2} & \numprint{27,5} & \numprint{62,4} & \numprint{77,5}\\
\texttt{\detokenize{cont-300}} & \numprint{0,1} & \numprint{0,1} & \numprint{12,8} & \numprint{25,9} & \numprint{41,3} & \numprint{7,8} & \numprint{23,0} & \numprint{33,0}\\
\texttt{\detokenize{cop20k_A}} & \numprint{0,2} & \numprint{1,5} & \numprint{69,1} & \numprint{88,0} & \numprint{100,9} & \numprint{18,0} & \numprint{39,6} & \numprint{51,1}\\
\texttt{\detokenize{crack}} & \numprint{0,1} & \numprint{0,1} & \numprint{2,0} & \numprint{3,1} & \numprint{3,6} & \numprint{1,6} & \numprint{2,8} & \numprint{3,2}\\
\texttt{\detokenize{cs4}} & \numprint{0,1} & \numprint{0,1} & \numprint{5,6} & \numprint{8,4} & \numprint{9,4} & \numprint{4,3} & \numprint{7,5} & \numprint{8,3}\\
\texttt{\detokenize{cti}} & \numprint{0,1} & \numprint{0,1} & \numprint{5,3} & \numprint{5,9} & \numprint{6,7} & \numprint{3,7} & \numprint{4,4} & \numprint{5,3}\\
\texttt{\detokenize{data}} & \numprint{0,1} & \numprint{0,1} & \numprint{1,8} & \numprint{2,1} & \numprint{2,4} & \numprint{1,6} & \numprint{1,9} & \numprint{2,2}\\
\texttt{\detokenize{del23}} & \numprint{7,9} & \numprint{3,6} & \numprint{1154,2} & \numprint{4114,6} & \numprint{6362,4} & \numprint{1306,3} & \numprint{4077,3} & \numprint{6159,8}\\
\texttt{\detokenize{del24}} & \numprint{17,4} & \numprint{7,2} & \numprint{2733,4} & \numprint{12807,8} & \numprint{18613,6} & \numprint{2580,3} & \numprint{12711,2} & \numprint{17219,3}\\
\texttt{\detokenize{deu}} & \numprint{4,8} & \numprint{1,3} & \numprint{337,6} & \numprint{860,1} & \numprint{1032,3} & \numprint{275,7} & \numprint{906,4} & \numprint{1086,2}\\
\texttt{\detokenize{Dubcova3}} & \numprint{0,2} & \numprint{1,0} & \numprint{42,2} & \numprint{65,4} & \numprint{82,2} & \numprint{18,3} & \numprint{42,2} & \numprint{59,7}\\
\texttt{\detokenize{eur}} & \numprint{24,0} & \numprint{5,3} & \numprint{2117,7} & \numprint{8213,9} & \numprint{8748,8} & \numprint{2135,6} & \numprint{8921,6} & \numprint{9323,0}\\
\texttt{\detokenize{fe_4elt2}} & \numprint{0,1} & \numprint{0,1} & \numprint{1,4} & \numprint{1,6} & \numprint{1,9} & \numprint{1,2} & \numprint{1,5} & \numprint{1,8}\\
\texttt{\detokenize{fe_body}} & \numprint{0,1} & \numprint{0,1} & \numprint{5,8} & \numprint{8,9} & \numprint{8,6} & \numprint{4,6} & \numprint{8,0} & \numprint{7,8}\\
\texttt{\detokenize{fe_ocean}} & \numprint{0,1} & \numprint{0,2} & \numprint{12,3} & \numprint{23,4} & \numprint{34,9} & \numprint{7,9} & \numprint{20,1} & \numprint{33,4}\\
\texttt{\detokenize{fe_pwt}} & \numprint{0,1} & \numprint{0,1} & \numprint{4,0} & \numprint{6,1} & \numprint{7,4} & \numprint{2,8} & \numprint{5,3} & \numprint{7,3}\\
\texttt{\detokenize{fe_rotor}} & \numprint{0,1} & \numprint{0,3} & \numprint{27,2} & \numprint{39,2} & \numprint{47,3} & \numprint{9,9} & \numprint{22,6} & \numprint{33,7}\\
\texttt{\detokenize{fe_sphere}} & \numprint{0,1} & \numprint{0,1} & \numprint{3,1} & \numprint{4,0} & \numprint{4,6} & \numprint{2,0} & \numprint{3,3} & \numprint{4,2}\\
\texttt{\detokenize{fe_tooth}} & \numprint{0,1} & \numprint{0,2} & \numprint{41,0} & \numprint{68,9} & \numprint{74,7} & \numprint{14,4} & \numprint{45,7} & \numprint{60,1}\\
\texttt{\detokenize{finan512}} & \numprint{0,1} & \numprint{0,1} & \numprint{7,1} & \numprint{9,8} & \numprint{12,8} & \numprint{5,2} & \numprint{8,5} & \numprint{12,1}\\
\texttt{\detokenize{G2_circuit}} & \numprint{0,1} & \numprint{0,1} & \numprint{13,5} & \numprint{19,7} & \numprint{24,6} & \numprint{8,4} & \numprint{16,9} & \numprint{24,8}\\
\texttt{\detokenize{m14b}} & \numprint{0,3} & \numprint{0,3} & \numprint{60,1} & \numprint{90,3} & \numprint{114,3} & \numprint{28,6} & \numprint{60,6} & \numprint{78,7}\\
\texttt{\detokenize{memplus}} & \numprint{0,1} & \numprint{0,3} & \numprint{32,5} & \numprint{37,2} & \numprint{32,8} & \numprint{24,6} & \numprint{30,8} & \numprint{27,4}\\
\texttt{\detokenize{nlr}} & \numprint{7,5} & \numprint{4,0} & \numprint{407,1} & \numprint{1935,6} & \numprint{3217,7} & \numprint{320,5} & \numprint{1940,9} & \numprint{3085,0}\\
\texttt{\detokenize{rgg23}} & \numprint{9,7} & \numprint{4,2} & \numprint{2088,1} & \numprint{6434,3} & \numprint{7651,5} & \numprint{2239,4} & \numprint{7493,0} & \numprint{8127,6}\\
\texttt{\detokenize{rgg24}} & \numprint{21,5} & \numprint{9,0} & \numprint{3116,0} & \numprint{9616,8} & \numprint{10415,3} & \numprint{2963,8} & \numprint{9530,6} & \numprint{10436,2}\\
\texttt{\detokenize{shipsec5}} & \numprint{0,3} & \numprint{3,0} & \numprint{114,2} & \numprint{146,9} & \numprint{177,2} & \numprint{46,9} & \numprint{90,8} & \numprint{129,0}\\
\texttt{\detokenize{t60k}} & \numprint{0,1} & \numprint{0,1} & \numprint{2,2} & \numprint{6,4} & \numprint{8,6} & \numprint{1,7} & \numprint{6,5} & \numprint{8,1}\\
\texttt{\detokenize{thermal2}} & \numprint{0,9} & \numprint{0,5} & \numprint{68,3} & \numprint{320,5} & \numprint{638,1} & \numprint{61,8} & \numprint{326,0} & \numprint{622,4}\\
\texttt{\detokenize{thermomech}} & \numprint{0,1} & \numprint{0,1} & \numprint{5,0} & \numprint{22,9} & \numprint{27,9} & \numprint{3,6} & \numprint{20,1} & \numprint{26,1}\\
\texttt{\detokenize{uk}} & \numprint{0,1} & \numprint{0,1} & \numprint{0,9} & \numprint{1,1} & \numprint{1,3} & \numprint{0,8} & \numprint{1,0} & \numprint{1,2}\\
\texttt{\detokenize{vibrobox}} & \numprint{0,1} & \numprint{0,8} & \numprint{44,7} & \numprint{47,9} & \numprint{48,0} & \numprint{21,5} & \numprint{25,3} & \numprint{25,3}\\
\texttt{\detokenize{wave}} & \numprint{0,2} & \numprint{0,3} & \numprint{118,2} & \numprint{157,5} & \numprint{183,7} & \numprint{28,2} & \numprint{71,5} & \numprint{94,6}\\
\texttt{\detokenize{whitaker3}} & \numprint{0,1} & \numprint{0,1} & \numprint{1,4} & \numprint{2,2} & \numprint{2,5} & \numprint{1,2} & \numprint{2,0} & \numprint{2,3}\\
\texttt{\detokenize{wing}} & \numprint{0,1} & \numprint{0,1} & \numprint{14,5} & \numprint{25,7} & \numprint{29,6} & \numprint{8,2} & \numprint{19,8} & \numprint{23,9}\\
\texttt{\detokenize{wing_nodal}} & \numprint{0,1} & \numprint{0,1} & \numprint{9,0} & \numprint{9,2} & \numprint{10,7} & \numprint{5,8} & \numprint{6,4} & \numprint{8,2}\\
\bottomrule
\end{tabular}
\vspace*{.5cm}
\caption{Detailed per-instance results: average running time.}
\label{tab:detailedtime}
\end{table}
\pagebreak
\end{appendix}
\end{document}
\begin{document}
\title{Inverse maximum theorems and some consequences}
\begin{abstract}
We deal with inverse maximum theorems, which are inspired by the ones given by Aoyama, Komiya, Li \emph{et al.}, Park and Komiya, and Yamauchi. As a consequence of our results, we state and prove an inverse maximum Nash theorem and show that any generalized Nash game can be reduced to a classical Nash game, under suitable assumptions. Additionally, we show that a result by Arrow and Debreu, on the existence of solutions for generalized Nash games, is actually equivalent to the one given by Debreu-Fan-Glicksberg for classical Nash games, which in turn is equivalent to Kakutani-Fan-Glicksberg's fixed point theorem.
\end{abstract}
\noindent{\textbf{Keywords:} Inverse maximum theorems; Berge's maximum theorem; Generalized Nash game; Kakutani's fixed point theorem.}
\noindent{{\bf MSC (2020)}: 46N10; 91B50; 49J35}
\section{Introduction}
In 1959, Berge provided conditions on a parametric maximization problem that guarantee the continuity of its maximizers' correspondence with respect to the parameters. In 1997, Komiya \cite{Ko97} proposed the inverse problem, which consists of finding conditions on a correspondence under which there exists a continuous function having it as its maximizers' correspondence.
In that sense, Komiya gave a positive answer to the previous problem on finite-dimensional spaces and showed the equivalence between Kakutani's theorem and an existence theorem for maximal elements by Yannelis and Prabhakar in \cite{YP83}. In 2001, Park and Komiya \cite{PK01} extended the result to a certain class of correspondences defined from a topological space into a metric topological vector space with convex balls. In 2003, Aoyama \cite{Ao03} worked on convex metric spaces and used the same class of correspondences. In 2008, Yamauchi \cite{Ya08} also gave an affirmative answer to this inverse problem on locally convex topological vector spaces, but he dealt with upper semicontinuous correspondences whose graphs are $G_\delta$ sets. Recently, in 2022, Li \emph{et al.} \cite{LiLiFeng2022} presented a result similar to the one given by Yamauchi, but they considered correspondences with a compact and convex range instead of the assumption of a $G_\delta$ graph. Moreover, they proved that Kakutani-Fan-Glicksberg's fixed point theorem is a consequence of the generalized coincidence point theorem, which is a consequence of the equilibrium theorem for generalized Nash games. Inspired by these works, we also present some inverse maximum theorems.
By making use of the finite-dimensional inverse maximum theorem of Komiya \cite{Ko97}, in 2016 Yu \emph{et al.} \cite{YuEtAl16} proved that the Kakutani and Brouwer fixed point theorems, among other important results, can be obtained from the Nash equilibrium theorem. Recently, in 2021, Bueno and Cotrina \cite{BC-2021} used an inverse maximum result to reformulate quasi-equilibrium problems and quasi-variational inequalities as generalized Nash games on Banach spaces. It is important to mention that the converse reformulation is known, see for instance \cite{AuEtAl21,BCC21,JCAHAS,CS21,JC-JZ-PS,Facchinei2007,HARKER199181} and the references therein. Motivated by these works, among the main applications of our inverse maximum theorems we state and prove an inverse maximum Nash theorem, which consists of finding appropriate payoff functions for which a predetermined strategy set is the set of solutions. We think this formulation could be appropriate for economic agents whose strategies cannot be modified due to some restrictions. We also reformulate generalized Nash games as classical Nash games, under suitable assumptions. Moreover, this reformulation allows us to show the equivalence between the result of Arrow and Debreu in \cite{AD54} and the Debreu-Fan-Glicksberg theorem on locally convex spaces. We remark that some authors have generalized some equilibrium theorems by relaxing the continuity assumption on the payoff functions, see for instance \cite{AuEtAl21,Dasgupta,MORGAN2007,Reny,Tian95}. However, their existence results are stated in the finite-dimensional setting and are a consequence of the Kakutani-Fan-Glicksberg theorem and, according to the equivalence proved in this work, they are equivalent to it.
By taking advantage of the fact that the inverse maximum theorems presented in the current work are valid in the infinite-dimensional setting, we prove Kakutani-Fan-Glicksberg's fixed point theorem \cite{Fa52,Gl52} and the Debreu-Fan-Glicksberg theorem on locally convex topological vector spaces, which generalizes the equivalences proved by Yu \emph{et al.} \cite{YuEtAl16}.
This work is organized as follows. In Section \ref{Preliminaries} we introduce some definitions and facts.
In Section \ref{Results}, we present our main results. Section \ref{Applications} is devoted to generalized Nash games and Section \ref{FPT} is devoted to fixed point theory. Finally, we summarize the major results of this work in Section \ref{Conclusions}.
\section{Preliminaries}\label{Preliminaries}
A real-valued function $f:C\to\mathbb{R}$ on a convex set $C$ in a vector space is said to be \emph{quasi-concave} if for each $\lambda\in\mathbb{R}$ the set $\{x\in C:~f(x)\geq\lambda\}$ is convex. Clearly, the function $f$ is quasi-concave if, and only if, $f(tx+(1-t)y)\geq \min\{f(x),f(y)\}$ for all $x,y\in C$ and all $t\in[0,1]$.
Let $X$ and $Y$ be two non-empty sets and $\mathcal{P}(Y)$ be the family of all subsets of $Y$. A \emph{correspondence} or \emph{set-valued map} $T:X\rightrightarrows Y$ is an application $T:X\to \mathcal{P}(Y)$, that is, for $u\in X$, $T(u)\subseteq Y$.
The \emph{graph} of $T$ is defined as
\[\mathrm{gra}(T)=\big\{(u,v)\in X\times Y\::\: v\in T(u)\big\}.\]
We now recall the notion of continuity for correspondences. Let $X$ and $Y$ be two topological spaces. A correspondence $T:X\rightrightarrows Y$ is said to be:
\begin{itemize}
\item \emph{closed}, when $\mathrm{gra}(T)$ is a closed subset of $X\times Y$;
\item \emph{lower semicontinuous} if the set $\{x\in X\::\: T(x)\cap G\neq \emptyset\}$ is open, whenever $G$ is open;
\item \emph{upper semicontinuous} if
the set $\{x\in X\::\: T(x)\cap F\neq \emptyset\}$ is closed, whenever $F$ is closed; and
\item \emph{continuous} if it is both lower and upper semicontinuous.
\end{itemize}
It is straightforward to verify that a correspondence $T$ is lower semicontinuous if, and only if, for all $x\in X$ and any open set $G\subseteq Y$ with $T(x)\cap G\neq\emptyset$, there exists a neighborhood $V_x$ of $x$ such that $T(x')\cap G\neq\emptyset$ for all $x'\in V_x$. In a similar way, $T$ is upper semicontinuous if, and only if, for all $x\in X$ and any open set $G$ with $T(x)\subseteq G$, there exists a neighborhood $V_x$ of $x$ such that $T(V_x)\subseteq G$.
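For instance, as a standard illustration of the difference between these notions, the correspondence $T:\mathbb{R}\rightrightarrows\mathbb{R}$ defined by $T(x)=\{0\}$ for $x\neq 0$ and $T(0)=[0,1]$ is upper semicontinuous but not lower semicontinuous, whereas the correspondence $S:\mathbb{R}\rightrightarrows\mathbb{R}$ defined by $S(x)=[0,1]$ for $x\neq 0$ and $S(0)=\{0\}$ is lower semicontinuous but not upper semicontinuous.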
From now on, we will assume that any topological space is Hausdorff.
\section{Main results}\label{Results}
Let us consider a correspondence $K:X\rightrightarrows Y$ and a function $\theta:\mathrm{gra}(K)\to\mathbb{R}$, where $X$ and $Y$ are two topological spaces. We associate with them the argmax correspondence $M_0:X\rightrightarrows Y$ defined as
\[
M_0(x)=\left\lbrace y\in K(x): \theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\right\rbrace.
\]
The Berge maximum theorem can be stated as follows.
\begin{theorem}\label{t1}
If $K$ is continuous with non-empty compact values and $\theta$ is continuous, then the argmax correspondence $M_0$ is upper semicontinuous and has non-empty compact values. Moreover, the function $m:X\to\mathbb{R}$ defined as $m(x)=\sup_{y\in K(x)}\theta(x,y)$ is continuous.
\end{theorem}
Now, if $K$ is continuous and $M:X\rightrightarrows Y$ is a correspondence such that $\mathrm{gra}(M)\subseteq\mathrm{gra}(K)$, does there exist a function $\theta:\mathrm{gra}(K)\to\mathbb{R}$ such that $M$ is the argmax correspondence associated with $K$ and $\theta$? A first answer to this question was given by Komiya in \cite{Ko97}, in the linear and finite-dimensional setting. Inspired by this, but without requiring convexity properties of $\theta$, we give a positive answer on topological spaces.
\begin{theorem}\label{t5} Let $X$ and $Y$ be two topological spaces, $K:X\rightrightarrows Y$ be a continuous correspondence with a normal graph and non-empty compact values, and $M:X\rightrightarrows Y$ be a correspondence with non-empty values such that $M(x)\subseteq K(x)$, for all $x\in X$.
Then, the following two conditions are equivalent:
\begin{itemize}
\item [(a)] there exists a continuous function $\theta:\mathrm{gra}(K)\to[0,1]$, such that
$$
\mathrm{gra}(M)=\left\{(x,y)\in\mathrm{gra}(K):\theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\right\},
$$
and
\item [(b)] $\mathrm{gra}(M)$ is a closed and $G_\delta$ set.
\end{itemize}
\end{theorem}
\begin{proof}
Let $m:X\to\mathbb{R}$ be the function defined by $m(x)=\sup_{z\in K(x)}\theta(x,z)$ and suppose condition (a) holds.
Theorem \ref{t1} implies that $m$ is continuous, and consequently, $M$ is closed. Moreover,
\[
\mathrm{gra}(M)=\bigcap_{n=1}^{\infty}\{(x,y)\in \mathrm{gra}(K):~ \theta(x,y)>(1-1/n)m(x)\},
\]
which proves that $\mathrm{gra}(M)$ is a $G_\delta$ set and hence
condition (b) holds.
Conversely, suppose condition (b) holds. Then there exists a non-increasing sequence of open subsets of $X\times Y$, $\{U_n\}_{n\in\mathbb{N}}$, such that $\mathrm{gra}(M)=\bigcap_{n\in\mathbb{N}}U_n\cap \mathrm{gra}(K)$. Since $\mathrm{gra}(K)$ is normal and $\mathrm{gra}(M)$ is closed, for each $n\in\mathbb{N}$ there exists a Urysohn function $\theta_{n}:\mathrm{gra}(K)\to[0,1]$ such that $\theta_{n}\equiv 1$ on $\mathrm{gra}(M)$ and $\theta_{n}\equiv 0$ on $\mathrm{gra}(K)\setminus U_n$. Let $\theta:\mathrm{gra}(K)\to[0,1]$ be defined as
$$
\theta(x,y)=\sum_{n=1}^\infty\frac{1}{2^n}\theta_{n}(x,y).
$$
Since the series converges uniformly (Weierstrass $M$-test), $\theta$ is continuous. Moreover, $\theta^{-1}(\{1\})=\mathrm{gra}(M)$ and, since $M$ has non-empty values, it follows that
$$
\mathrm{gra}(M)=\left\{(x,y)\in\mathrm{gra}(K):\theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\right\}.
$$
Thus, the proof is complete.
\end{proof}
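To illustrate the construction used in the proof, the following numerical sketch (purely illustrative and not part of the argument) evaluates a truncation of the series $\theta=\sum_{n\geq 1}2^{-n}\theta_n$ for the toy correspondence $M(x)=\{x^2\}$, taking $U_n=\{(x,y):|y-x^2|<1/n\}$ and the Urysohn-type functions $\theta_n(x,y)=\max\{0,1-n\,|y-x^2|\}$.
\begin{verbatim}
# Illustrative only: the Urysohn-sum construction of theta from the proof,
# specialized to M(x) = {x**2} and U_n = {(x, y): |y - x**2| < 1/n}.

def theta_n(x, y, n):
    # equals 1 on gra(M), 0 outside U_n, and is continuous in between
    return max(0.0, 1.0 - n * abs(y - x ** 2))

def theta(x, y, terms=60):
    # truncated series; the full series equals 1 exactly on gra(M)
    return sum(theta_n(x, y, n) / 2 ** n for n in range(1, terms + 1))

print(theta(1.5, 2.25))    # ~1.0: the point lies on gra(M)
print(theta(1.5, 2.55))    # < 1: the point lies off gra(M)
\end{verbatim}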
In order to guarantee the normality of $\mathrm{gra}(K)$, we can assume, for instance, that $X\times Y$ is a normal space and $\mathrm{gra}(K)$ is closed. However, this condition is not necessary: take $X$ a normal space, $Y$ a non-empty non-normal space, $y_0\in Y$, and $K:X\rightrightarrows Y$ defined by $K(x)=\{y_0\}$, for all $x\in X$. Clearly, $X\times Y$ is not normal, but $\mathrm{gra}(K)$ is.
\begin{remark}\rm
According to Theorem \ref{t1}, the correspondence $M$, in Theorem \ref{t5}, is upper semicontinuous with non-empty and compact values, whenever the two equivalent conditions hold.
On the other hand, the function $\theta$ given in Theorem \ref{t5} is not unique. It is enough to see that, for any continuous and strictly increasing function $h:[0,1]\to[0,1]$, the function $\vartheta=h\circ\theta$ also satisfies the conclusion of Theorem \ref{t5}.
\end{remark}
The following result is an inverse maximum theorem and it is also a generalization of Lemma 4.1 in \cite{BC-2021}.
\begin{proposition}\label{t7}
Let $X$ and $Y$ be two topological spaces such that $X\times Y$ is normal and $M:X\rightrightarrows Y$ be a correspondence with non-empty and compact values. If $M$ is upper semicontinuous and its graph is a $G_\delta$ set, then
there exists a continuous function $\theta:X\times Y\to[0,1]$ such that
\[
\mathrm{gra}(M)=\left\{(x,y)\in X\times Y:\theta(x,y)=\sup_{z\in Y}\theta(x,z)\right\}.
\]
\end{proposition}
\begin{proof}
Since any upper semicontinuous correspondence with compact values is closed, the proof follows the same steps as the proof that (b) implies (a) in Theorem \ref{t5}.
\end{proof}
The following result gives sufficient conditions in order to guarantee the inverse of Berge's maximum theorem in the context of topological spaces without normality assumption.
\begin{theorem}\label{P-3}
Let $X$ and $Y$ be two topological spaces, and $K,M:X\rightrightarrows Y$ be two correspondences such that $M(x)$ is non-empty and $M(x)\subseteq K(x)$, for all $x\in X$. Suppose there exists a family of open sets in $\mathrm{gra}(K)$, $\{U_t\}_{t>0}$, such that
\begin{itemize}
\item[(i)] $\bigcup_{t>0}U_t=\mathrm{gra}(K)$,
\item[(ii)] $\overline{U}_s\subseteq U_t$, for all $s<t$, and
\item[(iii)] $\mathrm{gra}(M)=\bigcap_{t>0} U_t$.
\end{itemize}
Then, there exists a continuous function $\theta:\mathrm{gra}(K)\to[0,1]$ such that
\[
\mathrm{gra}(M)=\left\{(x,y)\in \mathrm{gra}(K):\theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\right\}.
\]
\end{theorem}
\begin{proof}
Thanks to conditions (i) and (ii), by Lemma 3, Chapter 4 in \cite{Ke55}, the function $\tau:X\times Y\to\mathbb{R}$ defined as
\[
\tau(x,y)=\inf\{t>0:~(x,y)\in U_t\}
\]
is continuous. Consequently, the function $\theta$ defined as
\[
\theta(x,y)=1-\tau(x,y)\wedge1
\]
is also continuous. Moreover, we can see that $\theta(x,y)=1$ if, and only if, $\tau(x,y)=0$, which in turn is equivalent to $(x,y)\in U_t$, for all $t>0$. This allows us to conclude that $\theta(x,y)=1$ if, and only if, $(x,y)\in \mathrm{gra}(M)$.
Finally, the result follows from the fact that $M$ is non-empty valued.
\end{proof}
\begin{remarks}\rm
A few remarks are needed.
\begin{enumerate}
\item Notice that in the above result we do not require $M$ to be compact-valued.
\item
Conditions (ii) and (iii), in the previous result, imply that $\mathrm{gra}(M)$ is a $G_\delta$ set.
Indeed, for each $t>0$, let $q(t)\in\mathbb{Q}$ such that $0<q(t)<t$. Hence, $\mathrm{gra}(M)\subseteq U_{q(t)}\subseteq U_t$ and accordingly
\[
\mathrm{gra}(M)\subseteq\bigcap_{q\in\mathbb{Q}\cap(0,\infty)} U_{q}\subseteq\bigcap_{t>0} U_{q(t)}\subseteq \bigcap_{t>0} U_t.
\]
Therefore, $\mathrm{gra}(M)=\bigcap_{q\in\mathbb{Q}\cap(0,\infty)} U_{q}$ is a $G_\delta$ set.
\item Theorem \ref{P-3} fails to be true if the correspondence $M$ has empty values. Indeed,
consider $M:\mathbb{R}\rightrightarrows\mathbb{R}$ such that its graph is $\{(0,0)\}$, together with the family of sets $\{B(0,t)\}_{t>0}$, where $B(0,t)$ is the open ball in $\mathbb{R}^2$ centered at the origin with radius $t$. It is clear that this family of sets satisfies the assumptions of Theorem \ref{P-3}. The function $\theta$ given in the proof is continuous, but the conclusion does not hold, because $M(1)=\emptyset$ and
$\{y\in\mathbb{R}:~ \theta(1,y)=\max_{z\in \mathbb{R}}\theta(1,z)\}=\mathbb{R}$.
\end{enumerate}
\end{remarks}
As an important consequence of the previous result, we have the following corollary.
\begin{corollary}\label{closed-function}
Let $(X,d_X)$ and $(Y,d_Y)$ be two metric spaces, and $K,M:X\rightrightarrows Y$ be two correspondences such that $M(x)$ is non-empty and $M(x)\subseteq K(x)$, for all $x\in X$. Suppose $\mathrm{gra}(M)$ is closed in $\mathrm{gra}(K)$ with respect to the metric $d$ on $X\times Y$ defined as $d((x,y),(u,v))=d_X(x,u)+d_Y(y,v)$. Then, there exists a continuous function $\theta:\mathrm{gra}(K)\to[0,1]$ such that, for all $x\in X$,
\[
M(x)=\{y\in K(x):~\theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\}.
\]
Moreover, for each $x\in X$ and $t>0$, we have
\[
U_t(x)=\bigcup_{x'\in B(x,t)}R_{t-d_X(x,x')}(x'),
\]
where $U_t:X\rightrightarrows Y$ is the correspondence with $\mathrm{gra}(U_t)=\{(x,y)\in X\times Y:~d((x,y),\mathrm{gra}(M))<t\}$, $B(a,r)=\{z\in X:~d_X(a,z)<r\}$, and $R_s(z)=\{y\in Y:~d_Y(y,M(z))<s\}$, for all $a,z\in X$ and $r,s>0$.
\end{corollary}
\begin{proof}
It is clear that the family of open sets, $\{U_t\}_{t>0}$, satisfies all assumptions of Theorem \ref{P-3}. Hence, the existence of a function $\theta$ satisfying the above conditions follows. The last part holds by noticing that $y$ is an element of $U_t(x)$ if, and only if, there exists $(x_0,y_0)\in\mathrm{gra}(M)$ such that $d((x,y),(x_0,y_0))<t$, which is equivalent to $d_X(x,x_0)<t$ and $d_Y(y,y_0)<t-d_X(x,x_0)$.
\end{proof}
The conclusion of Theorem \ref{P-3} can be improved when the range space of the correspondence is a vector space.
\begin{theorem}\label{P-4}
Let $X$ and $Y$ be two topological spaces, with $Y$ a vector space; $K,M:X\rightrightarrows Y$ be two correspondences such that $M(x)$ is non-empty and $M(x)\subseteq K(x)$, for all $x\in X$.
Suppose there exists an increasing family, $\{U_t\}_{t\geq0}$, of open sets in $\mathrm{gra}(K)$ such that
\begin{itemize}
\item[(i)] $\bigcup_{t>0}U_t=\mathrm{gra}(K)$,
\item[(ii)] $\overline{U}_s\subseteq U_t$, for all $s<t$,
\item[(iii)] $\mathrm{gra}(M)=\bigcap_{t>0} U_t$, and
\item [(iv)] $\{y\in Y: (x,y)\in U_t\}$ is convex, for all $t>0$ and $x\in X$.
\end{itemize}
Then, there exists a continuous function $\theta:\mathrm{gra}(K)\to[0,1]$ such that the following two conditions hold:
\begin{itemize}
\item [(v)] $\mathrm{gra}(M)=\{(x,y)\in \mathrm{gra}(K):~\theta(x,y)=\sup_{z\in K(x)}\theta(x,z)\}$, and
\item [(vi)] $\theta(x,\cdot)$ is quasi-concave, for all $x\in X$.
\end{itemize}
\end{theorem}
\begin{proof}
Thanks to Theorem \ref{P-3}, condition (v) holds and, by applying Lemma 2, Chapter 4 in \cite{Ke55}, for all $s\in (0,1]$, we deduce
\[
\{(x,y)\in \mathrm{gra}(K):~\theta(x,y)\geq s\}= \bigcap_{t>1-s}U_t.
\]
Hence, for each $x\in X$ we have
\[
\{y\in Y:~ \theta(x,y)\geq s\}=\left\lbrace \begin{array}{cc}
\bigcap_{t>1-s}\{y\in Y: (x,y)\in U_{t}\},&0<s\leq 1,\\
Y,& s\leq 0,\\
\emptyset,&s>1.
\end{array}\right.
\]
Therefore, condition (vi) holds and the proof is complete.
\end{proof}
The following example shows that Corollary \ref{closed-function} is neither a consequence of Theorem 3.5 in \cite{LiLiFeng2022} by Li \emph{et al.} nor of Theorem 1.3 in \cite{Ya08} by Yamauchi.
\begin{example}
We consider the correspondence $M:\mathbb{R}\rightrightarrows\mathbb{R}$ defined by
\[
M(x)=\left\lbrace\begin{matrix}
[-1/|x|,1/|x|],&x\neq0\\
\mathbb{R},&x=0.
\end{matrix}\right.
\]
For each $t>0$, we define the correspondence $U_t:\mathbb{R}\rightrightarrows\mathbb{R}$ such that
\[
\mathrm{gra}(U_t)=\{(x,y)\in\mathbb{R}^2:~d((x,y),\mathrm{gra}(M))<t\},
\]
where $d$ is the $\ell^1$-metric on $\mathbb{R}^2$.
Clearly the following hold: $U_t$ has open graph, $\bigcup_{t>0} \mathrm{gra}(U_t)=\mathbb{R}^2$, $\overline{\mathrm{gra}(U_s)}\subseteq \mathrm{gra}(U_t)$ for all $s<t$, and $\mathrm{gra}(M)=\bigcap_{t>0} \mathrm{gra}(U_t)$. Moreover, for each $x\in\mathbb{R}$, the set $U_t(x)$ is convex. Indeed, by Corollary \ref{closed-function}, there exists a family, $\{R_\lambda\}_{\lambda\in\Lambda}$, of connected subsets of $\mathbb{R}$, such that $U_t(x)=\bigcup_{\lambda\in\Lambda}R_{\lambda}$ and $0\in\bigcap_{\lambda\in\Lambda}R_{\lambda}$. Hence, $U_t(x)$ is connected in $\mathbb{R}$, that is, $U_t(x)$ is convex. Thus, by Theorem \ref{P-4}, there is a continuous function $\theta:\mathbb{R}^2\to[0,1]$ such that
it is quasi-concave in its second argument and, for any $x\in\mathbb{R}$, it holds:
\[
M(x)=\left\{ y\in\mathbb{R}:~ \theta(x,y)=\max_{z\in\mathbb{R}}\theta(x,z)\right\}.
\]
Since $M$ does not have compact values, we cannot apply Theorem 3.5 in \cite{LiLiFeng2022} nor Theorem 1.3 in \cite{Ya08}.
\end{example}
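A numerical sketch of the distance-based construction for this example is given below; it is purely illustrative, the graph of $M$ is only sampled on a finite grid, and the \texttt{numpy} package is assumed to be available.
\begin{verbatim}
# Illustrative only: theta(x, y) = 1 - min(tau(x, y), 1), with tau the
# l1-distance to a sampled portion of gra(M) for M(x) = [-1/|x|, 1/|x|].
import numpy as np

xs = np.linspace(0.2, 3.0, 200)          # sample of the x-range (x = 0 omitted)
graph = np.array([(x, y) for x in xs
                  for y in np.linspace(-1/x, 1/x, 60)])  # sampled gra(M)

def theta(x, y):
    tau = np.min(np.abs(graph[:, 0] - x) + np.abs(graph[:, 1] - y))
    return 1.0 - min(tau, 1.0)

x0 = 2.0
ys = np.linspace(-2.0, 2.0, 401)
vals = np.array([theta(x0, y) for y in ys])
near_max = ys[vals > 0.95]               # approximate maximizers of theta(x0, .)
print(near_max.min(), near_max.max())    # roughly [-0.5, 0.5] = M(2)
\end{verbatim}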
As we show below, it is easy to find other correspondences without compact values that admit an inverse result.
\begin{proposition}\label{pro}
Let $X$ and $Y$ be two normed spaces, $D$ a non-empty and convex subset of $X$ and $M:D\rightrightarrows Y$ be a correspondence with non-empty convex and closed graph in $D\times Y$. Then, there exists a continuous function $\theta:D\times Y\to[0,1]$ such that the following two conditions hold:
\begin{itemize}
\item [(i)] $\mathrm{gra}(M)=\{(x,y)\in D\times Y:~\theta(x,y)=\sup_{z\in Y}\theta(x,z)\}$, and
\item [(ii)] $\theta(x,\cdot)$ is quasi-concave, for all $x\in D$.
\end{itemize}
\end{proposition}
The following lemma is Proposition 1.2.23 in \cite{Lucchetti2006}.
\begin{lemma}\label{lema}
Let $H$ be a normed space, $C$ be a closed, non-empty, and convex subset of $H$, and $f_C:H\to\mathbb{R}$ be the function defined as $f_C(x)=d(x,C)$. Then, $f_C$ is convex.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{pro}]Indeed, let $C=\mathrm{gra}(M)$ and, for each $t>0$, define $U_t=\{(x,y)\in D\times Y:d((x,y),\mathrm{gra}(M))<t\}$. Each $U_t$ is open in $D\times Y$, $\bigcup_{t>0} U_t=D\times Y$, $\overline{U_s}\subseteq U_t$ for all $s<t$, and $\mathrm{gra}(M)=\bigcap_{t>0} U_t$. In order to apply Theorem \ref{P-4}, it only remains to prove that, for each $x\in D$, the sets $U_t(x)=\{y\in Y:d((x,y),\mathrm{gra}(M))<t\}$ are convex. Let $y_1,y_2\in U_t(x)$ and $\lambda\in[0,1]$. By Lemma \ref{lema}, we have
$$
d(\lambda(x, y_1)+(1-\lambda)(x, y_2),\mathrm{gra}(M))\leq \lambda d((x,y_1),\mathrm{gra}(M))+(1-\lambda) d((x,y_2),\mathrm{gra}(M))<t,
$$
which completes the proof.
\end{proof}
Next, we introduce a simple correspondence with non-compact values, where, contrary to Komiya \cite{Ko97}, Li \emph{et al.} \cite{LiLiFeng2022}, and Yamauchi \cite{Ya08} results, our Proposition \ref{pro} applies.
\begin{example} Let $M:(0,\infty)\to\mathbb{R}$ be the correspondence defined by
$M(x)= [1/x,\infty)$. It is clear that $M$ has a non-empty convex and closed graph. Hence, by Proposition \ref{pro},
there exists a continuous function $\theta:(0,\infty)\times\mathbb{R}\to[0,1]$ such that the following two conditions hold:
\begin{itemize}
\item [(i)] $\mathrm{gra}(M)=\{(x,y)\in (0,\infty)\times\mathbb{R}:~\theta(x,y)=\sup_{z\in \mathbb{R}}\theta(x,z)\}$, and
\item [(ii)] $\theta(x,\cdot)$ is quasi-concave, for all $x\in (0,\infty)$.
\end{itemize}
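For this $M$, a concrete function $\theta$ witnessing (i) and (ii) can be sketched using Lemma \ref{lema}: take $\theta(x,y)=1-\big(d((x,y),\mathrm{gra}(M))\wedge 1\big)$, where $d$ is the Euclidean metric on $\mathbb{R}^2$. Since $\mathrm{gra}(M)$ is non-empty, closed, and convex in $\mathbb{R}^2$, the map $y\mapsto d((x,y),\mathrm{gra}(M))$ is convex for each $x$, so $\theta(x,\cdot)$ is quasi-concave; moreover, $\theta$ is continuous and $\theta(x,y)=1=\sup_{z\in\mathbb{R}}\theta(x,z)$ precisely when $(x,y)\in\mathrm{gra}(M)$.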
\end{example}
Now, in a similar way to Theorem \ref{t5}, we present the following result.
\begin{proposition}
Let $X$ be a non-empty paracompact space, $Y$ be a non-empty convex and compact subset of a locally convex space, and
$M:X\rightrightarrows Y$ be a non-empty convex compact-valued and upper semicontinuous correspondence. Then, the following two conditions are equivalent:
\begin{itemize}
\item [(i)] there exists a continuous function $\theta:X\times Y\to[0,1]$, such that
\[
\mathrm{gra}(M)=\left\{(x,y)\in X\times Y:\theta(x,y)=\sup_{z\in Y}\theta(x,z)\right\},
\]
and the function $\theta(x,\cdot):Y\to[0,1]$ is quasi-concave, for each $x\in X$, and
\item [(ii)] $\mathrm{gra}(M)$ is a $G_\delta$ set.
\end{itemize}
\end{proposition}
\begin{proof}
Since $X\times Y$ is normal (cf. Corollary 1.16, Chapter 3 in \cite{MN89}),
by Theorem \ref{t5}, condition (i) implies condition (ii).
Reciprocally, due to Theorem 1.3 in \cite{Ya08}, we have that condition (i) follows from condition (ii).
\end{proof}
Thanks to the previous result and Theorem 3.5 in \cite{LiLiFeng2022}, we have the following result.
\begin{proposition}
Let $X$ be a non-empty paracompact space, $Y$ be a non-empty convex and compact subset of a locally convex space, and $M:X\rightrightarrows Y$ be a non-empty convex compact-valued and upper semicontinuous correspondence. Then, the graph of $M$, $\mathrm{gra}(M)$, is a $G_\delta$ set.
\end{proposition}
\section{Applications to generalized Nash games}\label{Applications}
A \emph{Nash game}, \cite{Na51}, consists of $p$ players, each player $i$ controls the decision variable $x_i$, which belongs to a subset $C_i$ of a topological space $E_i$.
The ``total strategy vector'' is $x$,
which will be often denoted by
\[
x=(x_1,\dots,x_i,\dots,x_p).
\]
Sometimes we write $(x_i,x_{-i})$ instead of $x$ in order to emphasize the $i$-th player's variables within $x$, where $x_{-i}$ is the strategy vector of the other players.
Player $i$ has a payoff function $\theta_i:C\to\mathbb{R}$ that depends on all players' strategies, where $C=\prod_{i=1}^p C_i$.
Given the strategies $x_{-i}\in C_{-i}=\prod_{j\neq i}C_j$ of the other players, the aim of player $i$ is to choose a strategy $x_i$ solving the problem $P_i(x_{-i})$:
\begin{align*}
\max_{ x_i }\theta_i(x_i,x_{-i}) ~\mbox{ subject to }~x_i\in C_i.
\end{align*}
A vector $\hat{x}\in C$ is a \emph{Nash equilibrium} if, for all $i\in\{1,\dots,p\}$, $\hat{x}_i$ solves $P_i(\hat{x}_{-i})$. We denote by $NG(\theta_i,C_i)$ the set of Nash equilibria associated to the functions $\theta_i$ and the sets $C_i$.
In a generalized Nash game, each player's strategy must belong to a set identified by the correspondence $K_i: C_{-i}\rightrightarrows C_i$ in the sense that the strategy space of player $i$ is $K_i(x_{-i})$, which depends on the rival player's strategies $x_{-i}$.
Given the strategy $x_{-i}$, player $i$ chooses a strategy $x_i$ such that it solves the following problem $GP(x_{-i})$
\begin{equation*}
\max_{x_i}\theta_i(x_i,x_{-i})~\mbox{ subject to }~x_i\in K_i(x_{-i}).
\end{equation*}
Thus, a \emph{generalized Nash equilibrium} is a vector $\hat{x}\in C$ such that
the strategy $\hat{x}_i$ is a solution of the problem $GP(\hat{x}_{-i})$, for any $i\in\{1,\dots,p\}$. We denote by $GNG(\theta_i,K_i)$ the set of generalized Nash equilibria associated to the functions $\theta_i$ and the correspondences $K_i$. Thus,
\[
GNG(\theta_i,K_i)=\{\hat{x}\in C:~\hat{x}\in NG(\theta_i, K_i(\hat{x}_{-i}))\}.
\]
It is clear that any Nash game is a generalized Nash game. However, the latter is more complex because the strategy set of each player depends on the strategies of his/her rivals.
\subsection{An inverse Nash theorem}
It is not difficult to see the following:
$$
GNG(\theta_i,K_i)=\bigcap_{i=1}^p\left\{x\in C:\theta_i(x)=\max_{z_i\in K_i(x_{-i})}\theta_i(z_i,x_{-i})\right\}.
$$
Indeed, notice that if, for each player $i$, we consider its argmax correspondence $M_i:C_{-i}\rightrightarrows C_i$, then $\left\{x\in C:\theta_i(x)=\max_{z_i\in K_i(x_{-i})}\theta_i(z_i,x_{-i})\right\}=\mathrm{gra}(M_i)$. Thus,
\[
GNG(\theta_i,K_i)=\bigcap_{i=1}^p\mathrm{gra}(M_i).
\]
The following result establishes that $GNG(\theta_i,K_i)$ is a $G_\delta$ set, under suitable assumptions.
\begin{proposition}
For each $i\in\{1,\dots,p\}$, let $K_i: C_{-i}\rightrightarrows C_i$ be a continuous correspondence with non-empty and compact values. If for each $i\in\{1,\dots,p\}$, the payoff function $\theta_i:C\to\mathbb{R}$ is continuous, then $GNG(\theta_i,K_i)$ is a $G_\delta$ set.
\end{proposition}
\begin{proof}
For each $i\in\{1,\dots,p\}$, let $m_i:C_{-i}\to\mathbb{R}$ be a function defined as
\[
m_i(x_{-i})=\max_{x_i\in K_i(x_{-i})}\theta_i(x_i,x_{-i}).
\]
The Berge maximum theorem, Theorem \ref{t1}, implies that $m_i$ is continuous. Thus, the result follows from
\[
GNG(\theta_i,K_i)=\bigcap_{n=1}^\infty\bigcap_{i=1}^p\{x\in C:\theta_i(x)>(1-1/n)m_i(x_{-i})\}.
\]
\end{proof}
It is important to notice that in the previous result, each payoff function $\theta_i$ is continuous on $C$, but in order to apply Berge's maximum theorem we just need that $\theta_i$ be continuous on $\mathrm{gra}(K_i)$.
Now, we are interested in the inverse problem, which consists of finding payoff functions $\theta_1,\dots,\theta_p$ giving as solution a predetermined strategy set, $\hat{X}$, of the game. In other words, given the correspondences $K_i$ and a set $\hat{X}\subseteq C$ we want to find payoff functions $\theta_i$ such that $\hat{X}=GNG(\theta_i,K_i)$.
We present the following example to illustrate the previous problem.
\begin{example}\label{exa-inverse}
Consider $C_1=C_2=[0,1]$ and the correspondences $K_1,K_2:[0,1]\rightrightarrows[0,1]$ defined as
\[
K_1(y)=[0,y]\mbox{ and }K_2(x)=[0,1-x].
\]
Figure \ref{F1} shows their graphs. Consider $\hat{X}=\mathrm{gra}(K_1)\cap \mathrm{gra}(K_2)$.
\begin{figure}
\caption{Graphs of the sets $\mathrm{gra}(K_1)$ and $\mathrm{gra}(K_2)$.}
\label{F1}
\end{figure}
We now ask: do there exist continuous functions $\theta_1,\theta_2:[0,1]\times[0,1]\to\mathbb{R}$ such that $\hat{X}=GNG(\theta_i,K_i)$? In this case, the answer is positive. Indeed,
we can consider the functions $\theta_1,\theta_2:[0,1]\times[0,1]\to\mathbb{R}$ defined as
\[
\theta_1(x,y)=1-d((x,y),\mathrm{gra}(K_1))\wedge 1\mbox{ and }\theta_2(x,y)=1-d((x,y),\mathrm{gra}(K_2))\wedge 1,
\]
where $d$ denotes the Euclidean metric in $\mathbb{R}^2$.
Clearly $\theta_1$ and $\theta_2$ are continuous on $[0,1]\times[0,1]$. Moreover, $\hat{X}=GNG(\theta_i,K_i)$, as the following verification shows.
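Indeed, $\theta_1\equiv 1$ on $\mathrm{gra}(K_1)$ and $\theta_2\equiv 1$ on $\mathrm{gra}(K_2)$. Hence, if $(x,y)$ is feasible, that is, $x\in K_1(y)$ and $y\in K_2(x)$, then every deviation $x'\in K_1(y)$ satisfies $\theta_1(x',y)=1=\theta_1(x,y)$ and every deviation $y'\in K_2(x)$ satisfies $\theta_2(x,y')=1=\theta_2(x,y)$, so $(x,y)$ is a generalized Nash equilibrium. Conversely, every generalized Nash equilibrium is feasible by definition. Therefore $GNG(\theta_i,K_i)=\mathrm{gra}(K_1)\cap\mathrm{gra}(K_2)=\hat{X}$.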
\end{example}
The following result gives sufficient conditions to obtain a positive answer to this new problem.
\begin{proposition}\label{prop}
For each $i\in\{1,\dots,p\}$, let $K_i: C_{-i}\rightrightarrows C_i$ be a continuous correspondence with non-empty and compact values, and normal graph; and let $\hat{X}\subseteq \bigcap_{i=1}^p\mathrm{gra}(K_i)$. If $\hat{X}$ is a $G_\delta$ closed set then there exist
continuous functions $\theta_i:\mathrm{gra}(K_i)\to[0,1]$ such that
\[
\hat{X}=GNG(\theta_i,K_i).
\]
\end{proposition}
\begin{proof}
Suppose there exists a non-increasing sequence of open subsets of $C$, $\{U_n\}_{n\in\mathbb{N}}$, such that $\hat{X}=\bigcap_{n\in\mathbb{N}}U_n$. For each $i\in\{1,\dots,p\}$, since $\mathrm{gra}(K_i)$ is normal, for each $n\in\mathbb{N}$, there exists a Urysohn function $\theta_{n,i}:\mathrm{gra}(K_i)\to[0,1]$ such that $\theta_{n,i}\equiv 1$ on $\hat{X}$ and $\theta_{n,i}\equiv 0$ on $\mathrm{gra}(K_i)\setminus U_n$. Let $\theta_{i}:\mathrm{gra}(K_i)\to[0,1]$ be defined as
\[
\theta_{i}(x)=\sum_{n=1}^\infty\frac{1}{2^n}\theta_{n,i}(x).
\]
Then $\theta_{i}$ is continuous and $\theta_{i}^{-1}(\{1\})=\hat{X}$, and therefore
$\hat{X}=GNG(\theta_i,K_i)$, which completes the proof.
\end{proof}
\begin{remark}
When $C$ is a subset of a metric space with metric $d$ and the graphs $\mathrm{gra}(K_i)$ are closed, functions $\theta_i:\mathrm{gra}(K_i)\to[0,1]$ in Proposition \ref{prop} can be defined as $\theta_i(x)=1-d(x,\mathrm{gra}(K_i))\wedge 1$, for all $i\in\{1,\dots,p\}$.
\end{remark}
We give an interpretation of the previous problem as follows: agents could have a priori a set of strategies which, due to certain restrictions, cannot be replaced. In this case, they only go to the market to participate in businesses whose payoff functions favor the application of their strategies. An inverse Nash theorem provides the existence of appropriate payoff functions giving rise to predetermined outcomes.
\subsection{Equivalent results}
This section aims to show the equivalence between two known results. First, we state a classical result concerning the existence of Nash equilibria, inspired by Debreu, Fan, and Glicksberg.
\begin{theorem}\label{D}
Suppose for each $i\in\{1,2,\dots,p\}$, $C_i$ is a compact, convex and non-empty subset of a locally convex topological vector space $E_i$,
the payoff function $\theta_i$ is continuous and the correspondence $M_i:C_{-i}\rightrightarrows C_i$, defined as
\[
M_i(x_{-i})=\left\{x_i\in C_i:~\theta_i(x_i,x_{-i})=\max_{z_i\in C_i}\theta_i(z_i,x_{-i})\right\},
\]
is convex-valued. Then,
the set $NG(\theta_i,C_i)$ is non-empty.
\end{theorem}
The following result is due to Arrow and Debreu \cite{AD54}. We state it in the setting of locally convex spaces and slightly modify the convexity condition as follows.
\begin{theorem}\label{A-D}
Suppose for each $i\in\{1,2,\dots,p\}$, $C_i$ is a compact, convex and non-empty subset of a locally convex topological vector space $E_i$, and the following three conditions hold:
\begin{itemize}
\item[(i)] the payoff function $\theta_i$ is continuous,
\item[(ii)] the correspondence $K_i$ is continuous with convex, closed and non-empty values, and
\item[(iii)] the correspondence $M_i:C_{-i}\rightrightarrows C_i$ defined as
\[
M_i(x_{-i})=\left\{x_i\in C_i:~\theta_i(x_i,x_{-i})=\max_{z_i\in K_i(x_{-i})}\theta_i(z_i,x_{-i})\right\}
\]
is convex-valued.
\end{itemize}
Then, the set $GNG(\theta_i,K_i)$ is non-empty.
\end{theorem}
\begin{remark}\label{re1}
In the original version of Theorems \ref{D} and \ref{A-D}, the quasi-concavity of $\theta_i$ in $x_i$ for all players was assumed by Debreu, Fan and Glicksberg for Theorem \ref{D}, and by Arrow and Debreu for Theorem \ref{A-D}. However, this assumption implies that the maps $M_i$ are convex-valued in both results.
Moreover, since Theorem \ref{A-D} was initially proved in finite dimensional spaces, this is a generalized version of the original result.
\end{remark}
In the following example we can see that a particular generalized Nash equilibrium problem can be considered as a classical Nash game.
\begin{example}
Consider a generalized Nash game with two players with correspondences $K_1,K_2:[0,1]\rightrightarrows[0,1]$ defined as in Example \ref{exa-inverse} and payoff functions $\theta_1,\theta_2:[0,1]^2\to\mathbb{R}$ defined as
\[
\theta_1(x,y)=y-x^2\mbox{ and }\theta_2(x,y)=2x-y^2.
\]
Thus, their maximizer correspondences are
\[
M_1(y)=\{0\}\mbox{ and }M_2(x)=\{0\},
\]
which are also upper semicontinuous with convex, compact, and non-empty values. Furthermore, we can consider the new functions $\vartheta_1,\vartheta_2:[0,1]^2\to\mathbb{R}$ defined as
\[
\vartheta_1(x,y)=-x\mbox{ and }\vartheta_2(x,y)=-y,
\]
which are continuous and concave. Moreover, $ GNG(\theta_i,K_i)=NG(\vartheta_i,C_i)$.
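Explicitly, for each $y\in[0,1]$ the map $x\mapsto y-x^2$ is decreasing on $K_1(y)=[0,y]$, so its maximum over $K_1(y)$ is attained only at $x=0$; likewise, $y\mapsto 2x-y^2$ is decreasing on $K_2(x)=[0,1-x]$, so its maximum is attained only at $y=0$. Hence $GNG(\theta_i,K_i)=\{(0,0)\}$, and since $-x$ and $-y$ are maximized on $[0,1]$ only at $0$, we also have $NG(\vartheta_i,C_i)=\{(0,0)\}$.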
\end{example}
Along the lines of Theorem 5.6 in \cite{Co21},
we will show that it is possible to reformulate a generalized Nash game as a classical Nash game.
\begin{theorem}\label{GNEP-NEP}
Suppose for each $i\in\{1,2,\dots,p\}$, $C_i$ is compact and non-empty, and the following two conditions hold:
\begin{itemize}
\item[(i)] the payoff function $\theta_i$ is continuous, and
\item[(ii)] the set-valued map $K_i$ is continuous with closed and non-empty values.
\end{itemize}
Then, there exist continuous functions $\vartheta_i:C\to[0,1]$ such that
$GNG(\theta_i,K_i)=NG(\vartheta_i,C_i)$.
\end{theorem}
\begin{proof}
For each $i\in\{1,2,\dots,p\}$, by the Berge maximum theorem, Theorem \ref{t1}, there exists an upper semicontinuous correspondence with compact, convex, and non-empty values, $M_i:C_{-i}\rightrightarrows C_i$, such that
\[
M_i(x_{-i})=\left\lbrace x_i\in K_i(x_{-i}):~\theta_i(x_i,x_{-i})=\max_{z_i\in K_i(x_{-i})}\theta_i(z_i,x_{-i})\right\rbrace.
\]
Now, by Proposition \ref{t7}, there exists a continuous function $\vartheta_i:C\to[0,1]$ such that
\[
M_i(x_{-i})=\left\lbrace x_i\in C_i:~\vartheta_i(x_i,x_{-i})=\max_{z_i\in C_i}\vartheta_i(z_i,x_{-i})\right\rbrace.
\]
Thus, $\hat{x}\in GNG(\theta_i,K_i)$ if, and only if, $\hat{x}\in NG(\vartheta_i,C_i)$.
\end{proof}
Another kind of reformulation is given by considering extended-real valued functions; that is, associated to each player $i$, we define the function $\varphi_i:C\to\mathbb{R}\cup\{-\infty\}$ by
\[
\varphi_i(x)=\theta_i(x)-\delta_{\mathrm{gra}(K_i)}(x),
\]
where $\delta_{\mathrm{gra}(K_i)}$ is the indicator function associated to the graph of $K_i$, that is, $\delta_{\mathrm{gra}(K_i)}(x)=0$ if $x\in \mathrm{gra}(K_i)$ and $\delta_{\mathrm{gra}(K_i)}(x)=+\infty$ if $x\notin \mathrm{gra}(K_i)$. We claim that $GNG(\theta_i,K_i)=NG(\varphi_i,C_i)$. Indeed, let $\hat{x}$ be an element of
$GNG(\theta_i,K_i)$, that is, for every $i$, $\hat{x}_i\in K_i(\hat{x}_{-i})$ and
\[
\theta_i(\hat{x})\geq \theta_i(x_i,\hat{x}_{-i}),\mbox{ for all }x_i\in K_i(\hat{x}_{-i}).
\]
Consequently, $\varphi_i(\hat{x})=\theta_i(\hat{x})$ and for any $x_i\notin K_i(\hat{x}_{-i})$ one has $\varphi_i(x_i,\hat{x}_{-i})=-\infty$. Thus, $\hat{x}\in NG(\varphi_i,C_i)$.
Reciprocally, let $\hat{x}\in NG(\varphi_i,C_i)$. Since $K_i$ is non-empty valued, we deduce that $\hat{x}_i\in K_i(\hat{x}_{-i})$. The result follows from the fact that for any $x_i\in K_i(\hat{x}_{-i})$ we have $\varphi_i(x_i,\hat{x}_{-i})=\theta_i(x_i,\hat{x}_{-i})$.
However, this kind of reformulation does not guarantee the continuity of each function $\varphi_i$.
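For instance, in the two-player game above with $\theta_1(x,y)=y-x^2$ and $K_1(y)=[0,y]$, one gets $\varphi_1(x,y)=y-x^2$ if $x\leq y$ and $\varphi_1(x,y)=-\infty$ otherwise, which is not continuous at any point of the diagonal $\{(t,t):t\in[0,1]\}$.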
On the other hand, it is clear that Theorem \ref{A-D} implies Theorem \ref{D}. However, they are actually equivalent. This is stated below.
\begin{theorem}\label{A-D-D}
Theorem \ref{D} implies Theorem \ref{A-D}.
\end{theorem}
\begin{proof}
This is a consequence of Theorem \ref{GNEP-NEP} and
Theorem \ref{D}.
\end{proof}
\begin{remark}
Considering the original version of Theorems \ref{D} and \ref{A-D}, we can apply Theorem 3.5 in \cite{LiLiFeng2022} to show the previous result.
\end{remark}
\section{Application to fixed point theory}\label{FPT}
The authors in \cite{YuEtAl16} showed that Kakutani's fixed point theorem, Theorem \ref{KFFPT}, is a consequence of Theorem \ref{D} on finite dimensional spaces. Moreover,
the Kakutani fixed point theorem admits an extension to locally convex spaces, by means of the Kakutani-Fan-Glicksberg theorem (see \cite{Fa52,Gl52}), which we state below.
\begin{theorem}\label{KFFPT}
Let $C$ be a non-empty convex and compact subset of a Hausdorff locally convex topological vector space $Y$ and let $T:C\rightrightarrows C$ be a correspondence. If $T$ is upper semicontinuous with convex, closed, and non-empty values, then the set $\{x\in C:~x\in T(x)\}$ is non-empty.
\end{theorem}
We will show that the implication given in \cite{YuEtAl16} is also true on locally convex spaces.
\begin{proposition}\label{D-KFFPT}
Theorem \ref{D} implies Theorem \ref{KFFPT}.
\end{proposition}
\begin{proof}
Assume that $T$ does not have a fixed point. Then the diagonal of $C\times C$, $D=\{(x,x)\in Y \times Y: x\in C\}$, is closed and disjoint from the graph of $T$. Thus, $\mathrm{gra}(T)\subset D^c$ and, by Proposition 3.1 in \cite{Ya08}, there exists a continuous function $\theta:C\times C\to\mathbb{R}$ such that $\theta(\mathrm{gra}(T))=\{1\}$ and $\theta(D)=\{0\}$. Moreover, $\theta$ is quasi-concave in its second argument.
Let $P$ be a separating family of seminorms that generates the topology of $Y$. For each $\rho\in P$, we consider the set
\[
F_\rho=\{(x,y)\in C\times C:~\theta(x,y)\geq \theta(x,z)\mbox{ and }\rho(x-y)\leq \rho(w-y),\mbox{ for all }w,z\in C\}.
\]
Clearly $F_\rho$ is compact. Furthermore, the family $\{F_\rho\}_{\rho}$ has the finite intersection property. Indeed, take $\rho_1,\dots,\rho_n\in P$ and consider the game with two players whose payoff functions $\theta_1,\theta_2:C\times C\to\mathbb{R}$ are defined by
\[
\theta_1(x,y)=-\sum_{i=1}^n\rho_i(x-y)\mbox{ and }\theta_2(x,y)=\theta(x,y).
\]
Since each function $\theta_j$ is continuous and quasi-concave in $x_j$, from Theorem \ref{D}, we deduce the existence of a Nash equilibrium $(\hat{x},\hat{y})$. This means
\[
\theta(\hat{x},\hat{y})\geq \theta(\hat{x},y)\mbox{ for all }y\in C,
\]
and
\[
\sum_{i=1}^n\rho_i(\hat{x}-\hat{y})\leq \sum_{i=1}^n\rho_i(x-\hat{y}),\mbox{ for all }x\in C.
\]
Taking $x=\hat{y}$ in the last inequality, we obtain $\rho_i(\hat{x}-\hat{y})=0$ for all $i\in\{1,2,\dots,n\}$. Consequently, $(\hat{x},\hat{y})\in \bigcap_{i=1}^n F_{\rho_i}$. Hence, by compactness, there exists $(\hat{x},\hat{y})\in\bigcap_{\rho\in P}F_\rho$, and this implies
$\rho(\hat{x}-\hat{y})=0$, for all $\rho\in P$. Thus $\hat{x}=\hat{y}$. Since $(\hat{x},\hat{x})\in D$, we have $\theta(\hat{x},\hat{y})=\theta(\hat{x},\hat{x})=0$, while $\theta(\hat{x},y)=1$ for any $y\in T(\hat{x})\neq\emptyset$, contradicting the fact that $\hat{y}$ maximizes the function $\theta(\hat{x},\cdot)$.
\end{proof}
Finally, inspired by Komiya \cite{Ko97}, we prove that the famous minimax inequality, due to Ky Fan \cite{Fa72}, implies the Kakutani-Fan-Glicksberg theorem.
\begin{theorem}[Fan, 1972]\label{minmax}
Let $X$ be a compact convex subset of a topological vector space. Let $f$ be a real-valued function defined on $X\times X$ such that
\begin{enumerate}
\item[(i)] for each $y\in X$, $f(\cdot,y)$ is lower semicontinuous; and
\item[(ii)] for each $x\in X$, $f(x,\cdot)$ is quasi-concave.
\end{enumerate}
Then, the minimax inequality
\[
\min_{x\in X}\max_{y\in X}f(x,y)\leq \max_{x\in X} f(x,x)
\]
holds.
\end{theorem}
\begin{proposition}\label{mmc->KFF}
Theorem \ref{KFFPT} is a consequence of Theorem \ref{minmax}.
\end{proposition}
\begin{proof} Let $T:C\rightrightarrows C$ be a correspondence satisfying the assumptions of Theorem \ref{KFFPT}.
Thanks to Theorem 3.5 in \cite{LiLiFeng2022}, there exists a continuous function $\theta:C\times C\to[0,1]$ such that $\theta$ is quasi-concave in its second argument and
\[
T(x)=\left\lbrace y\in C:~ \theta(x,y)=\max_{z\in C}\theta(x,z)\right\rbrace, \mbox{ for all }x\in C.
\]
We consider the function $f:C\times C\to\mathbb{R}$ defined as
\[
f(x,y)=\theta(x,y)-\theta(x,x).
\]
Clearly, $f$ vanishes on the diagonal of $C\times C$ and it satisfies all assumptions of Theorem \ref{minmax}. Thus, there exists $x_0\in C$ such that
$f(x_0,y)\leq 0$, for all $y\in C$. Since $f(x_0,x_0)=0$, this means that $x_0$ maximizes $f(x_0,\cdot)$.
On the other hand, it is clear that $T(x)=\{y\in C:~ f(x,y)=\max_{z\in C}f(x,z)\}$.
Therefore, we deduce that $x_0\in T(x_0)$. This completes the proof.
\end{proof}
\section{Conclusions}\label{Conclusions}
Motivated by the works of Komiya \cite{Ko97}, Park and Komiya \cite{PK01}, Aoyama \cite{Ao03}, Yamauchi \cite{Ya08}
and Li \emph{et al.} \cite{LiLiFeng2022}, we present some inverse maximum theorems that are independent of those previously mentioned. Some of the examples introduced above give an account of this independence.
As applications of our results, we first present an inverse maximum Nash theorem; second, we reformulate generalized Nash games as classical Nash games under continuity assumptions; and third, we prove the equivalence between famous equilibrium theorems, showing that they are equivalent to the well-known Kakutani-Fan-Glicksberg theorem and to the famous minimax inequality due to Ky Fan.
\end{document}
\begin{document}
\title{Non-abelian cohomology of universal curves in positive characteristic}
\author{Tatsunari Watanabe}
\address{Mathematics Department, Embry-Riddle Aeronautical University, 3700 Willow Creek Rd. Prescott, AZ 86301, USA}
\email{[email protected]}
\maketitle
\begin{abstract}
In this paper, we will compute the non-abelian cohomology of the universal complete curve in positive characteristic. This extends Hain's result on the non-abelian cohomology of generic curves in characteristic zero to positive characteristic. Furthermore, we will prove that the exact sequence of \'etale fundamental groups of the universal $n$-punctured curve in positive characteristic does not split.
\smallskip
\end{abstract}
\section{Introduction}
For a DM stack $X$ with a geometric point $\bar x$, denote the \'etale fundamental group of $X$ with base point $\bar x$ by $\pi_1(X, \bar x)$. Let $F$ be a field. Fix a separable closure $\overline{F}$ of $F$. Let $C$ be a geometrically connected smooth projective curve of genus $g$ over $F$ and $\bar x$ a geometric point of $C_{\overline{F}}:= C\otimes \overline{F}$. Associated to $C$, there is the homotopy exact sequence of fundamental groups
\begin{equation}\label{hom seq for galois grp}
1 \to \pi_1(C_{\overline{F}}, \bar x)\to \pi_1(C, \bar x)\to G_F\to 1,
\end{equation}
where $G_F$ is the Galois group of $\overline{F}$ over $F$. Each $F$-rational point $y$ of $C$ induces a section $s_y$ of the projection $\pi_1(C, \bar x)\to G_F$ that is unique up to conjugation by an element of $\pi_1(C_{\overline{F}}, \bar x)$. Grothendieck's section conjecture predicts that if $F$ is finitely generated over $\mathbb{Q}$ and $g\geq 2$, then there is a bijection between the set of $F$-rational points of $C$ and the set of $\pi_1(C_{\overline{F}}, \bar x)$-conjugacy classes of continuous sections of $\pi_1(C, \bar x)\to G_F$.
Assume that $2g-2+n >0$. Denote the moduli stack of curves of type $(g, n)$ with an abelian level $m$ over a field $k$ by $\mathcal{M}_{g,n/k}[m]$. Our main results are concerned with the universal curves over $\mathcal{M}_{g,n/k}[m]$, denoted by $\pi:\mathcal{C}_{g,n/k}[m]\to \mathcal{M}_{g,n/k}[m]$ and $\pi^o:\mathcal{M}_{g,n+1/k}[m]\to \mathcal{M}_{g,n/k}[m]$. The universal complete curve $\pi$ is equipped with $n$ disjoint sections called the tautological sections. The universal punctured curve $\pi^o$ is the restriction of $\pi$ to the complement of the $n$ tautological sections in $\mathcal{C}_{g,n/k}[m]$.
One of the key ingredients used in \cite{hain2} and this paper is the weighted completion of a profinite group. For $g \geq 3$, or $g=2$ and $n>2g+2$, let $K$ be the function field $k(\mathcal{M}_{g,n/k}[m])$ of $\mathcal{M}_{g,n/k}[m]$. Fix a separable closure $\overline{K}$ of $K$. Let $\bar\eta:\mathrm{Spec}\,\overline{K}\to \mathcal{M}_{g,n/k}[m]$ be a geometric generic point of $\mathcal{M}_{g,n/k}[m]$. Denote the fibers of $\pi$ and $\pi^o$ over $\bar\eta$ by $C_{\bar\eta}$ and $C^o_{\bar\eta}$, respectively. Let $\ell$ be a prime number distinct from $\mathrm{char}(k)$. Set $H =H^1_{\mathrm{\acute{e}t}}(C_{\bar\eta}, \mathbb{Q}_\ell(1))$.
There is a monodromy representation $\rho_{\bar\eta}:\pi_1(\mathcal{M}_{g,n/k}, \bar\eta) \to \mathrm{GSp}(H)$. Fix a geometric point $\bar x$ in $C^o_{\bar\eta}$, and so in $C_{\bar\eta}$. Denote the continuous $\ell$-adic unipotent completions of $\pi_1(C_{\bar\eta}, \bar x)$ and $\pi_1(C^o_{\bar\eta}, \bar x)$ by $\mathcal{P}$ and $\mathcal{P}^o$, respectively.
Then there are the exact sequences of proalgebraic $\mathbb{Q}_\ell$-groups
\begin{equation}\label{main exact seq for p}
1\to \mathcal{P} \to \mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r]\to \mathcal{G}_{g,n}[\ell^r]\to 1,
\end{equation}
and
\begin{equation}\label{main exact seq for p open}
1\to \mathcal{P}^o \to \mathcal{G}_{g, n+1}[\ell^r]\to \mathcal{G}_{g,n}[\ell^r]\to 1,
\end{equation}
where $r$ is a nonnegative integer and $\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r]$, $\mathcal{G}_{g,n+1}[\ell^r]$, and $\mathcal{G}_{g,n}[\ell^r]$ are the weighted completions of $\pi_1(\mathcal{C}_{g,n/k}[\ell^r], \bar x)$, $\pi_1(\mathcal{M}_{g,n+1/k}[\ell^r], \bar x)$, and $\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$ with respect to $\rho_{\bar\eta}\circ \pi_{\ast}$, $\rho_{\bar\eta}\circ\pi^o_{\ast}$, and $\rho_{\bar\eta}$, respectively.
Pulling back the exact sequences (\ref{main exact seq for p}) and (\ref{main exact seq for p open})
along $\tilde{\rho}_{\bar\eta}:\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to \mathcal{G}_{g,n}[\ell^r](\mathbb{Q}_\ell)$ induced by weighted completion, we obtain extensions
\begin{equation*}\label{main exact seq}
1 \to \mathcal{P}(\mathbb{Q}_\ell)\to \mathcal{E}_{g,n} \to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to 1
\end{equation*}
and
\begin{equation*}\label{main exact seq open}
1 \to \mathcal{P}^o(\mathbb{Q}_\ell)\to \mathcal{E}^o_{g,n+1} \to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to 1
\end{equation*}
of $\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$ by $\mathcal{P}(\mathbb{Q}_\ell)$ and $\mathcal{P}^o(\mathbb{Q}_\ell)$, respectively.
Denote the set of $\mathcal{P}(\mathbb{Q}_\ell)$-conjugacy classes of continuous sections of $\mathcal{E}_{g,n}\to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$ by $H^1_{\mathrm{nab}}(\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta), \mathcal{P}(\mathbb{Q}_\ell))$. Similarly, we define $H^1_{\mathrm{nab}}(\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta), \mathcal{P}^o(\mathbb{Q}_\ell))$ as the $\mathcal{P}^o(\mathbb{Q}_\ell)$-conjugacy classes of continuous sections of $\mathcal{E}^o_{g,n+1}\to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$.
The non-abelian cohomology of extensions of a profinite group by a prounipotent group was introduced and developed by Kim in \cite{kim}.
Each of the $n$ tautological sections of $\pi$ induces a class in $H^1_{\mathrm{nab}}(\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta), \mathcal{P}(\mathbb{Q}_\ell))$, which we denote by $s^{\mathrm{un}}_j$ for $j=1,\ldots,n$.
\begin{bigtheorem}\label{nonopen case}
Suppose that $p$ is a prime number, that $\ell$ is a prime number distinct from $p$, and that $r$ is a nonnegative integer.
Let $k$ be a finite field with $\mathrm{char}(k) =p$ that contains all $\ell^r$th roots of unity.
If $g\geq 4$ and $n\geq 1$, then we have
$$H^1_{\mathrm{nab}}(\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta), \mathcal{P}(\mathbb{Q}_\ell)) =
\{s_1^{\mathrm{un}}, \ldots,s_n^{\mathrm{un}}\}.$$
\end{bigtheorem}
The case where $n=0$ directly follows from \cite[Prop.~11.3(ii)]{wat1}. In this case, the non-abelian cohomology of $\pi_1(\mathcal{M}_{g/k}[\ell^r], \bar\eta)$ is empty. For the universal punctured curve $\pi^o:\mathcal{M}_{g,n+1/k}[\ell^r]\to \mathcal{M}_{g,n/k}[\ell^r]$, we have the following result.
\begin{bigtheorem}\label{open case}
Suppose that $p$ is a prime number, that $\ell$ is a prime number distinct from $p$, and that $r$ is a nonnegative integer.
Let $k$ be a finite field with $\mathrm{char}(k) =p$ that contains all $\ell^r$th roots of unity.
If $g\geq 4$ and $n\geq 1$, then the sequence
$$
1\to \mathcal{P}^o \to \mathcal{G}_{g, n+1}[\ell^r]\to \mathcal{G}_{g,n}[\ell^r]\to 1
$$
does not split. Consequently, we have
$$
H^1_{\mathrm{nab}}(\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta), \mathcal{P}^o(\mathbb{Q}_\ell)) = \emptyset.
$$
\end{bigtheorem}
There is the homotopy exact sequence associated to the universal punctured curve $\pi^o$
\begin{equation}\label{homotopy seq for univ punc curve}
1\to \pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\to\pi_1'(\mathcal{M}_{g,n+1/k}[\ell^r],\bar x )\to\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to 1,
\end{equation}
where $\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}$ is the pro-$\ell$ completion of $\pi_1(C^o_{\bar\eta}, \bar x)$ and the middle group is the quotient of $\pi_1(\mathcal{M}_{g,n+1/k}[\ell^r],\bar x )$ by a certain distinguished subgroup (see \cite[SGA 1, Expos{\'e} XIII, \S4]{sga1} and Prop.~\ref{exact seq for punctured universal curve}). Since the sequence (\ref{main exact seq for p open}) is induced from the sequence (\ref{homotopy seq for univ punc curve}) by weighted completion and a splitting of (\ref{homotopy seq for univ punc curve}) induces that of (\ref{main exact seq for p open}) by a universal property of weighted completion, we have an immediate consequence.
\begin{bigcorollary}If $g \geq 4$ and $n \geq 1$, then the sequence (\ref{homotopy seq for univ punc curve}) does not split.
\end{bigcorollary}
Theorem \ref{nonopen case} can be viewed as an analogue of Hain's result on generic curves in characteristic zero \cite[Thm.~3]{hain2} for the universal complete curve in positive characteristic. On the other hand, Theorem \ref{open case} is the positive characteristic analogue of the author's result in \cite[Thm.~1]{wat2}. In \cite{wat1}, the author proves that the rational points of the universal complete curve in positive characteristic are given by exactly the tautological sections and that the exact sequence (\ref{hom seq for galois grp}) associated to the function field $K$ of $\mathcal{M}_{g/k}[\ell^r]$ does not split when $n =0$. From the point of view of anabelian geometry, our main results provide more evidence for the bijection between the set of the rational points of the universal complete curve and the set of $\pi_1(C_{\bar\eta}, \bar x)$-conjugacy classes of continuous sections of $\pi_1(\mathcal{C}_{g,n/k}[\ell^r], \bar x)\to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$.
The main new ingredients used in this paper are the non-abelian cohomology of extensions of a profinite group by a prounipotent group introduced by Kim in \cite{kim} and the non-abelian cohomology schemes of weighted completions developed by Hain in \cite{hain4}. In our case, where $k$ is a finite field, Proposition \ref{condition for existence} allows us to use a key exact sequence for the non-abelian cohomology schemes to compute the non-abelian cohomology of $\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$.
\section{The universal curve of type $(g, n)$ and level structures}
Let $T$ be a scheme. By a curve of type $(g,n)$ over $T$, we mean a smooth proper morphism $f:C\to T$ whose geometric fibers are connected one-dimensional schemes of arithmetic genus $g$, that is equipped with $n$ disjoint sections. For nonnegative integers $g$ and $n$ satisfying $2g-2+n>0$, we have the smooth Deligne-Mumford (DM) stack $\mathcal{M}_{g,n}$ over $\mathrm{Spec}\,\mathbb{Z}$ classifying curves of type $(g, n)$. For a field $k$, the stack $\mathcal{M}_{g,n/k}$ is the base change $\mathcal{M}_{g,n}\otimes k$. The universal curve $\pi:\mathcal{C}_{g,n}\to \mathcal{M}_{g,n}$ exists and satisfies the property that for a curve $f:C\to T$ of type $(g, n)$, there exists a unique morphism $\psi_f:T\to \mathcal{M}_{g,n}$ such that $f$ is the pullback of $\pi$ along $\psi_f$. \\
\indent Assume that $2g-2+n >0$. Let $m$ be a positive integer. Let $k$ be a field containing all $m$th roots of unity $\mu_m(\bar k)$ with $(\mathrm{char}(k), m) =1$. Fix an isomorphism $\mu: \mu_m^{\otimes-1}\cong \mathbb{Z}/m\mathbb{Z}$. For a curve $f:C\to T$ of type $(g, n)$, an abelian level $m$ on $f$ is an isomorphism $\phi:R^1f_{\ast}(\mathbb{Z}/m\mathbb{Z}) \cong (\mathbb{Z}/m\mathbb{Z})^{2g}$ such that the diagram
$$\xymatrix@C=1pc @R=1pc{
\Lambda^2R^1f_{\ast}(\mathbb{Z}/m\mathbb{Z}) \ar[r]^-{\mathrm{cup}}\ar[d]_{\Lambda^2\phi}& R^2f_{\ast}(\mathbb{Z}/m\mathbb{Z})\cong \mu_m^{\otimes-1}\ar[d]^{\mu}\\
\Lambda^2(\mathbb{Z}/m\mathbb{Z})^{2g} \ar[r]& \mathbb{Z}/m\mathbb{Z}
}$$
commutes, where $R^1f_{\ast}(\mathbb{Z}/m\mathbb{Z})$ is equipped with a symplectic structure via the cup product and $(\mathbb{Z}/m\mathbb{Z})^{2g}$ is equipped with the standard symplectic structure. The moduli stack of curves of type $(g,n)$ with an abelian level $m$ over $k$ is denoted by $\mathcal{M}_{g, n/k}[m]$. It is a geometrically connected, finite \'etale cover of $\mathcal{M}_{g,n/k}$ (see \cite{DM}). When $m \geq 3$, it is a smooth scheme over $k$. In this paper, we always assume that $k$ contains all $m$th roots of unity. For $m =1$, we denote $\mathcal{M}_{g,n/k}[1]$ by $\mathcal{M}_{g,n/k}$.
\section{Weighted Completion and its Application to the Universal Curves}\label{weight}
\subsection{The weighted completion of a profinite group}
The weighted completion of a profinite group was introduced and developed by Hain and Matsumoto. A detailed introduction of the theory and its properties is included in \cite{wei}. Here, we briefly review the definition and list some key properties needed in this paper.
Let $F$ be a field of characteristic zero. Suppose that $\Gamma$ is a profinite group, $R$ is a reductive group over $F$, $\omega: \mathbb{G}_m\to R$ is a nontrivial central cocharacter, and that $\rho:\Gamma\to R(F)$ is a continuous Zariski-dense homomorphism.
A negatively weighted extension of $R$ is a (pro)algebraic group $G$ that is an extension of $R$ by a (pro)unipotent group $U$ over $F$, $1\to U\to G\to R\to 1$,
such that $H_1(U)$ admits only negative weights as a $\mathbb{G}_m$-representation via the central cocharacter $\omega$.
The {\it weighted completion} of $\Gamma$ with respect to $\rho$ and $\omega$ consists of a proalgebraic $F$-group $\mathcal{G}$ that is a negatively weighted extension of $R$ and a homomorphism $\tilde{\rho}:\Gamma\to\mathcal{G}(F)$ lifting $\rho$ that admits a universal property: if $G$ is a negatively weighted extension of $R$ and there is a Zariski-dense homomorphism $\rho_G:\Gamma\to G(F)$ lifting $\rho$, then there is a unique morphism $\phi_G:\mathcal{G}\to G$ such that
$$\rho_G=\phi_G\circ \tilde{\rho}.$$
Denote the prounipotent radical of $\mathcal{G}$ by $\mathcal{U}$. Since $\mathcal{U}$ is prounipotent, by a generalization of Levi's Theorem, the extension $1\to \mathcal{U}\to \mathcal{G}\to R\to 1$ splits, and any two splittings are conjugate by an element of $\mathcal{U}(F)$.
Furthermore, there is a natural weight filtration $W_\bullet M$ on a finite dimensional $\mathcal{G}$-module $M$. The following is the list of the key properties.
\begin{proposition}[{\cite[Prop.~3.8, Thms~3.9 \& 3.12]{wei}}]\label{weight filt} With the notation as above, the weight filtration $W_\bullet M$ satisfies the properties:
\begin{enumerate}
\item The defining $\mathcal{G}$-action preserves the weight filtration.
\item The $\mathcal{G}$-action on each associated graded quotient $\mathrm{Gr}^W_nM$ factors through $R$.
\item The functors $M\mapsto \mathrm{Gr}_\bullet M$, $M\mapsto W_mM$, and $M\mapsto M/W_mM$ are exact on the category of finite dimensional $\mathcal{G}$-modules.
\end{enumerate}
\end{proposition}
We immediately see that the results of the proposition extend to direct and inverse limits of finite dimensional $\mathcal{G}$-modules.
\subsection{Applications to the universal curves}\label{Applications to universal curves}
Let $k$ be a finite field of characteristic $p$. Let $g\geq 3$, $n\geq 0$, and $m \geq 1$ be prime to $p$.
For a prime $\ell$ distinct from $p$ and $A=\mathbb{Z}_\ell$ or $\mathbb{Q}_\ell$, set $H_A=H^1_{\mathrm{\acute{e}t}}(C_{\bar\eta}, A(1))$, where $C_{\bar\eta}$ is the fiber of the universal curve $\pi: \mathcal{C}_{g,n/k}[m]\to \mathcal{M}_{g,n/k}[m]$ over $\bar\eta$. For simplicity, when $A =\mathbb{Q}_\ell$, we denote $H_{\mathbb{Q}_\ell}$ by $H$.
Since the $\ell$-adic sheaf $R^1\pi_{\ast} \mathbb{Z}_\ell(1)$ is smooth over $\mathcal{M}_{g,n/k}[m]$, we obtain a representation $\rho_{\bar\eta}: \pi_1(\mathcal{M}_{g,n/k}[m], \bar\eta)\to \mathrm{GSp}(H_{\mathbb{Z}_\ell})$. Note that when $k$ is a finite field with $\mathrm{char}(k)\not =\ell$, a number field, or a local field $\mathbb{Q}_q$, the $\ell$-adic cyclotomic character $\chi_\ell:G_k \to \mathbb{Z}_\ell^{\times}$ has infinite image.
\begin{proposition}[{\cite[Prop.~8.3]{wat1}}]\label{monodromy density}
Let $k$ be a finite field of characteristic $p$. If $g\geq 3$, $n\geq 0$, $\ell$ is a prime number distinct from $p$, and $r\geq 0$, then the image of the monodromy representation
$$\rho_{\bar\eta}:\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to \mathrm{GSp}(H)$$ is Zariski-dense in $\mathrm{GSp}(H)$.
\end{proposition}
Define a central cocharacter $\omega:\mathbb{G}_m\to \mathrm{GSp}(H)$ by sending $z$ to $z^{-1}\mathrm{id}$.
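Note that, via $\omega$, the group $\mathbb{G}_m$ acts on $H$ by $z\cdot v=z^{-1}v$, so $H$ is a $\mathbb{G}_m$-representation of weight $-1$.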
Let $r$ be a nonnegative integer. Denote the weighted completion of $\pi_1(\mathcal{M}_{g,n/k}[\ell^r],\bar\eta)$ with respect to $\rho_{\bar\eta}$ and $\omega$ by
$$(\mathcal{G}_{g,n}[\ell^r],\,\, \tilde{\rho}_{\bar\eta}:\pi_1(\mathcal{M}_{g,n/k}[\ell^r],\bar\eta)\to \mathcal{G}_{g,n}[\ell^r](\mathbb{Q}_\ell)).$$
The completion $\mathcal{G}_{g,n}[\ell^r]$ is a negatively weighted extension of $\mathrm{GSp}(H)$ by a prounipotent $\mathbb{Q}_\ell$-group $\mathcal{U}_{g,n}[\ell^r]$. The Lie algebras of $\mathcal{G}_{g,n}[\ell^r]$ and $\mathcal{U}_{g,n}[\ell^r]$ are denoted by $\mathfrak{g}_{g,n}[\ell^r]$ and $\mathfrak{u}_{g,n}[\ell^r]$, respectively.
\begin{variant} Let $\bar x$ be a geometric point in the fiber $C^o_{\bar\eta}$. We also consider $\bar x$ as a geometric point in $\mathcal{C}_{g,n/k}[\ell^r]$. Let $(\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r], \tilde{\rho}^{\mathcal{C}}_{\bar\eta}:\pi_1(\mathcal{C}_{g,n/k}[\ell^r], \bar x) \to \mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r](\mathbb{Q}_\ell))$ be the weighted completion of $\pi_1(\mathcal{C}_{g,n/k}[\ell^r], \bar x)$ with respect to the representation $\rho^{\mathcal{C}}_{\bar\eta}:=\rho_{\bar\eta}\circ\pi_{\ast} :\pi_1(\mathcal{C}_{g,n/k}[\ell^r], \bar x) \to \pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\to \mathrm{GSp}(H)$ and $\omega$. Denote the prounipotent radical of $\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r]$ by $\mathcal{U}_{\mathcal{C}_{g,n}}[\ell^r]$. The Lie algebras of $\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r]$ and $\mathcal{U}_{\mathcal{C}_{g,n}}[\ell^r]$ are denoted by $\mathfrak{g}_{\mathcal{C}_{g,n}}[\ell^r]$ and $\mathfrak{u}_{\mathcal{C}_{g,n}}[\ell^r]$, respectively.
\end{variant}
\subsection{Relative completion of $\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r])$ and the $\mathcal{G}_{g,n}[\ell^r]$-module $\mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r]$}
Let $\bar k$ be the separable closure of $k$ in $\overline{K}$, where $k$ is as above.
Another key ingredient in this paper is the Lie algebra of the relative completion of $\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)$, which admits a $\mathcal{G}_{g,n}[\ell^r]$-module structure and hence a weight filtration. A detailed review and basic properties of relative completion are given in \cite{hain5}.
Recall that when $\ell$ is different from $\mathrm{char}(k)$, the image of the monodromy representation $\rho^{\mathrm{geom}}_{\bar\eta}: \pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\to \mathrm{Sp}(H)$ is Zariski-dense in $\mathrm{Sp}(H)$.
The relative completion of $\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)$ with respect to $\rho^{\mathrm{geom}}_{\bar\eta}$ consists of a proalgebraic $\mathbb{Q}_\ell$-group $\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]$ and a Zariski-dense homomorphism $\tilde{\rho}^{\mathrm{geom}}_{\bar\eta}:\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\to \mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r](\mathbb{Q}_\ell)$. The proalgebraic group $\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]$ is an extension of $\mathrm{Sp}(H)$ by a prounipotent $\mathbb{Q}_\ell$-group $\mathcal{U}^{\mathrm{geom}}_{g,n}[\ell^r]$,
and the homomorphism $\tilde{\rho}^{\mathrm{geom}}_{\bar\eta}$ satisfies a universal property:
if $G$ is an extension of $\mathrm{Sp}(H)$ by a prounipotent group $U$ and there is a Zariski-dense homomorphism $\rho^{\mathrm{geom}}_G: \pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\to G(\mathbb{Q}_\ell)$ lifting $\rho^{\mathrm{geom}}_{\bar\eta}$, then there is a unique morphism $\phi_G:\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]\to G$ such that
$\rho^{\mathrm{geom}}_G=\phi_G\circ \tilde{\rho}^{\mathrm{geom}}_{\bar\eta}$.
Denote the Lie algebras of $\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]$, $\mathcal{U}^{\mathrm{geom}}_{g,n}[\ell^r]$, and $\mathrm{Sp}(H)$ by $\mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]$, $\mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r]$, and $\mathfrak{r}$, respectively. For $r =0$, we denote $\mathcal{G}^{\mathrm{geom}}_{g,n}[1]$, $\mathcal{U}^{\mathrm{geom}}_{g,n}[1]$, $\mathfrak{g}^{\mathrm{geom}}_{g,n}[1]$, and $\mathfrak{u}^{\mathrm{geom}}_{g,n}[1]$ by $\mathcal{G}^{\mathrm{geom}}_{g,n}$, $\mathcal{U}^{\mathrm{geom}}_{g,n}$, $\mathfrak{g}^{\mathrm{geom}}_{g,n}$, and $\mathfrak{u}^{\mathrm{geom}}_{g,n}$, respectively.
We list results needed in this paper:
\begin{proposition}[{\cite[Thm.~3.9]{wei}, \cite[Thm.~5.6, Props.~8.4, 8.6 \& 8.7]{wat1}}]\label{comparison} With notation as above, when $g\geq 3$, we have the following results:
\begin{enumerate}
\item The conjugation action of $\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)$ on $\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)$ induces the adjoint action of $\mathcal{G}_{g,n}[\ell^r]$ on $\mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]$, and hence on $\mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r]$. Therefore, the Lie algebras $\mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]$ and $\mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r]$ admit weight filtrations $W_\bullet \mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]$ and $W_\bullet \mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r]$, respectively.
\item There is an isomorphism $\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]\cong \mathcal{G}^{\mathrm{geom}}_{g,n}$ and
there are weight filtration preserving isomorphisms
$$\mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]\cong \mathfrak{g}^{\mathrm{geom}}_{g,n} \,\,\text{ and }\,\, \mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r] \cong \mathfrak{u}^{\mathrm{geom}}_{g,n}$$
that induce $\mathrm{GSp}(H)$-equivariant isomorphisms of graded Lie algebras
$$\mathrm{Gr}^W_\bullet\mathfrak{g}^{\mathrm{geom}}_{g,n}[\ell^r]\cong\mathrm{Gr}^W_\bullet\mathfrak{g}^{\mathrm{geom}}_{g,n} \,\,\text{ and }\,\, \mathrm{Gr}^W_\bullet\mathfrak{u}^{\mathrm{geom}}_{g,n}[\ell^r] \cong \mathrm{Gr}^W_\bullet\mathfrak{u}^{\mathrm{geom}}_{g,n}.$$
\item There is a weight filtration preserving isomorphism between $\mathfrak{g}^{\mathrm{geom}}_{g,n}$ and the Lie algebra of the relative completion of $\pi_1(\mathcal{M}_{g,n/\bar{\mathbb{Q}}}, \bar\eta')$ with respect to $\rho^{\mathrm{geom}}_{\bar\eta'}:\pi_1(\mathcal{M}_{g,n/\bar{\mathbb{Q}}}, \bar\eta')\to \mathrm{Sp}(H^1_{\mathrm{\acute{e}t}}(C_{\bar\eta'}, \mathbb{Q}_\ell(1)))$, where $\bar\eta'$ is a geometric generic point of $\mathcal{M}_{g,n/\bar{\mathbb{Q}}}$.
\end{enumerate}
\end{proposition}
\subsubsection{Variant} Denote by $\mathcal{G}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ the relative completion of $\pi_1(\mathcal{C}_{g,n/\bar k}[\ell^r], \bar x)$ with respect to the composition $\pi_1(\mathcal{C}_{g,n/\bar k}[\ell^r], \bar x)\to \pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\to \mathrm{Sp}(H)$. It is an extension of $\mathrm{Sp}(H)$ by a prounipotent $\mathbb{Q}_\ell$-group denoted by $\mathcal{U}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$. Denote the Lie algebras of $\mathcal{G}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ and $\mathcal{U}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ by $\mathfrak{g}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ and $\mathfrak{u}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$, respectively.
As above, for $r =0$, we denote $\mathcal{G}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[1]$, $\mathcal{U}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[1]$, $\mathfrak{g}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[1]$, and $\mathfrak{u}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[1]$ by $\mathcal{G}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}$, $\mathcal{U}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}$, $\mathfrak{g}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}$, and $\mathfrak{u}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}$, respectively. The analogue of Proposition \ref{comparison} also holds for $\mathfrak{g}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ and $\mathfrak{u}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]$ (see \cite[\S 8.1]{wat1}).
\subsection{Exact sequences}\label{exact seqs}
The relative and weighted completions associated to the complete universal curve $\pi$ fit in the following commutative diagram. Let $\mathcal{P}$ be the continuous unipotent completion of $\pi_1(C_{\bar\eta}, \bar x)$ over $\mathbb{Q}_\ell$. It is a prounipotent group over $\mathbb{Q}_\ell$. Then for $g\geq 2$, by \cite[Prop.~7.6]{wat1} there is a commutative diagram of completions ($a$)\label{key seq}
$$\xymatrix@C=1pc @R=1pc{
1\ar[r]& \mathcal{P}\ar[r]\ar@{=}[d] &\mathcal{G}^{\mathrm{geom}}_{\mathcal{C}_{g,n}}[\ell^r]\ar[r]\ar[d]&\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]\ar[r]\ar[d]&1\\
1\ar[r] &\mathcal{P}\ar[r] &\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r]\ar[r]&\mathcal{G}_{g,n}[\ell^r]\ar[r]&1,
}
$$
where the rows are exact and the middle and right-hand vertical maps are induced by the natural map $\mathcal{M}_{g,n/\bar k}[\ell^r]\to \mathcal{M}_{g,n/k}[\ell^r]$ given by base change. Let $\mathcal{P}^o$ be the continuous unipotent completion of $\pi_1(C^o_{\bar\eta}, \bar x)$ over $\mathbb{Q}_\ell$. It is a prounipotent group over $\mathbb{Q}_\ell$. Similarly, we have the following result for the universal punctured curve $\pi^o$.
\begin{proposition}\label{exact seq for punctured universal curve}
With notation as above, if $g\geq 2$ and $n\geq 1$, then there is a commutative diagram of completions associated to $\pi^o:\mathcal{M}_{g,n+1/k}\to \mathcal{M}_{g,n/k}$:
$$\xymatrix@C=1pc @R=1pc{
1\ar[r]& \mathcal{P}^o\ar[r]\ar@{=}[d] &\mathcal{G}^{\mathrm{geom}}_{g,n+1}[\ell^r]\ar[r]\ar[d]&\mathcal{G}^{\mathrm{geom}}_{g,n}[\ell^r]\ar[r]\ar[d]&1\\
1\ar[r] &\mathcal{P}^o\ar[r] &\mathcal{G}_{g,n+1}[\ell^r]\ar[r]&\mathcal{G}_{g,n}[\ell^r]\ar[r]&1.
}
$$
\end{proposition}
\begin{proof}
Consider the commutative diagram of fundamental groups
$$\xymatrix@C=1pc @R=1pc{
&\pi_1(C^o_{\bar\eta}, \bar x)\ar[r]\ar@{=}[d] &\pi_1(\mathcal{M}_{g,n+1/\bar k}[\ell^r],\bar x )\ar[r]\ar[d]&\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\ar[r]\ar[d]&1\\
&\pi_1(C^o_{\bar\eta}, \bar x)\ar[r] \ar@{=}[d] &\pi_1(\mathcal{M}_{g,n+1/k}[\ell^r], \bar x)\ar[r]\ar[d]&\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\ar[r]\ar[d]&1\\
&\pi_1(C^o_{\bar\eta}, \bar x)\ar[r] &\pi_1(\mathcal{M}_{g,n+1/\mathbb{Z}[1/\ell]},\bar x )\ar[r]&\pi_1(\mathcal{M}_{g,n/\mathbb{Z}[1/\ell]}, \bar\eta)\ar[r]&1.
}
$$
Let $Y$ be the kernel of $\pi_1(\mathcal{M}_{g,n+1/\bar k}[\ell^r],\bar x )\to \pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)$, and let $W$ be the kernel of the projection $Y\to Y^{(\ell)}$, where $Y^{(\ell)}$ is the pro-$\ell$ completion of $Y$. Define $\pi_1'(\mathcal{M}_{g,n+1/\bar k}[\ell^r],\bar x )$ as the quotient of $\pi_1(\mathcal{M}_{g,n+1/\bar k}[\ell^r],\bar x )$ by $W$. Define $\pi_1'(\mathcal{M}_{g,n+1/k}[\ell^r],\bar x )$ and $\pi_1'(\mathcal{M}_{g,n+1/\mathbb{Z}[1/\ell]},\bar x )$ in a similar manner.
Then the rows of the commutative diagram ($\ast$)
$$\xymatrix@C=1pc @R=1pc{
&\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\ar[r]\ar@{=}[d] &\pi_1'(\mathcal{M}_{g,n+1/\bar k}[\ell^r],\bar x )\ar[r]\ar[d]&\pi_1(\mathcal{M}_{g,n/\bar k}[\ell^r], \bar\eta)\ar[r]\ar[d]&1\\
&\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\ar[r]\ar@{=}[d] &\pi_1'(\mathcal{M}_{g,n+1/k}[\ell^r], \bar x)\ar[r]\ar[d]&\pi_1(\mathcal{M}_{g,n/k}[\ell^r], \bar\eta)\ar[r]\ar[d]&1\\
&\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\ar[r] &\pi_1'(\mathcal{M}_{g,n+1/\mathbb{Z}[1/\ell]},\bar x )\ar[r]&\pi_1(\mathcal{M}_{g,n/\mathbb{Z}[1/\ell]}, \bar\eta)\ar[r]&1
}
$$
are exact (see \cite[SGA 1, Expos{\'e} XIII, 4.3 and 4.4]{sga1}). Moreover, there is a commutative diagram ($\ast\ast$) with exact rows
$$\xymatrix@C=1pc @R=1pc{
&\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\ar[r]\ar@{=}[d] &\pi_1'(\mathcal{M}_{g,n+1/\mathbb{Z}[1/\ell]}, \bar x)\ar[r]\ar[d]&\pi_1(\mathcal{M}_{g,n/\mathbb{Z}[1/\ell]}, \bar\eta)\ar[r]\ar[d]&1\\
1\ar[r] &\pi_1(C^o_{\bar\eta}, \bar x)^{(\ell)}\ar[r] &\pi_1(\mathcal{C}(\mathcal{M}_{g, n+1/\mathbb{Z}[1/\ell]}),F_{\bar x} )\ar[r]&\pi_1(\mathcal{C}(\mathcal{M}_{g, n/\mathbb{Z}[1/\ell]}), F_{\bar\eta})\ar[r]&1,
}
$$
where ${{\mathfrak m}athcal C}({{\mathfrak m}athcal M}_{g, n/{{\mathfrak m}athbb Z}[1/\epsilonll]})$ is the Galois category of geometrically relative-$\epsilonll$ coverings over ${{\mathfrak m}athcal M}_{g, n/{{\mathfrak m}athbb Z}[1/\epsilonll]}$ defined in \cite[\S 7]{relative prol} and where $F_{\bar x}$ and $F_\epsilontabar$ are fiber functors over $\bar x$ and $\epsilontabar$, respectively.
The left exactness of the bottom row of the diagram $({{\mathfrak m}athfrak a}st{{\mathfrak m}athfrak a}st)$ follows from
\cite[Prop.~3.1 (2)]{relative prol} and the proof of \cite[Thm.~7.6]{relative prol}. Hence the top row of the diagram (${{\mathfrak m}athfrak a}st{{\mathfrak m}athfrak a}st$) is left exact. Therefore, the first and middle rows of the diagram $({{\mathfrak m}athfrak a}st)$ are left exact as well. By the center freeness of ${{\mathfrak m}athcal P}^o$ (e.g. \cite{ntu}) and the exactness criterion for weighted completions \cite[Prop.~6.11]{hain2}, the bottom row of the diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
& {{\mathfrak m}athcal P}^o{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r@{=}[d] &{{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n+1}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&{{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&1\\
1{{\mathfrak m}athfrak a}r[r] &{{\mathfrak m}athcal P}^o{{\mathfrak m}athfrak a}r[r] &{{\mathfrak m}athcal G}_{g,n+1}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]&1.
}
$$
is exact. The right exactness of relative completions \cite[Prop.~3.7]{hain5} implies that the top row is right exact, and the exactness of the bottom row then implies that the top row is left exact as well. Hence the top row is exact.
\epsilonnd{proof}
{\mathfrak s}ubsection{$\Gammar^W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]/W_{-3}$} The Lie algebra ${\mathfrak u}_{g,n}[\epsilonll^r]$ is a ${{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$-module via the adjoint action and hence admits a weight filtration $W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]$. We study the relation between $\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]/W_{-3}$ and $\Gammar^W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]/W_{-3}$. The case when ${\mathfrak m}athrm{char}(k)=0$ is determined in \cite[\S 8]{hain2}. First, for a finite field $k$, we study the weighted completion of $G_k$. Let $\chi_\epsilonll:G_k{\mathfrak t}o {{\mathfrak m}athbb Z}^{\mathfrak t}imes_\epsilonll {\mathfrak s}ubset \Gammam({{\mathfrak m}athbb Q}l)$ be the $\epsilonll$-adic cyclotomic character. Define a central cocharacter $\omega_\epsilonll:\Gammam{\mathfrak t}o \Gammam$ by mapping $z{\mathfrak m}apsto z^{-2}$. Denote the weighted completion of $G_k$ with respect to $\chi_\epsilonll$ and $\omega_\epsilonll$ by (${{\mathfrak m}athcal A}_k$, ${\mathfrak t}ilde{\chi}_\epsilonll:G_k{\mathfrak t}o {{\mathfrak m}athcal A}_k({{\mathfrak m}athbb Q}l)$) and denote the unipotent radical of ${{\mathfrak m}athcal A}_k$ by ${{\mathfrak m}athcal N}_k$. Denote the Lie algebra of ${{\mathfrak m}athcal N}_k$ by ${\mathfrak n}_k$.
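As a point of orientation on the weight indexing used below (our reading of the normalization, not a statement from the cited sources): the $\mathbb{G}_m$-module $\mathbb{Q}_\ell(1)$, on which $G_k$ acts through $\chi_\ell$ by the identity character of $\mathbb{G}_m$, becomes a module of weight $-2$ when restricted along the central cocharacter $\omega_\ell$, since
$$
z\cdot v \;=\; \omega_\ell(z)\,v \;=\; z^{-2}\,v \qquad (v\in\mathbb{Q}_\ell(1)),
$$
and more generally $\mathbb{Q}_\ell(m)$ has weight $-2m$. Under this convention $H$ sits in weight $-1$ and $\Lambda^2_0H$ in weight $-2$, matching the descriptions of $\operatorname{Gr}^W_\bullet\mathfrak{p}$ recalled later in this section.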
Let ${\mathfrak t}au:\GammaSp(H){\mathfrak t}o \Gammam$ be the similitude character. There is a commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
1{{\mathfrak m}athfrak a}r[r] &{\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/\bar k}[\epsilonll^r], \epsilontabar){{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{{\mathfrak r}ho^{{\mathfrak m}athfrak g}eom_\epsilontabar} &{\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r], \epsilontabar){{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{{\mathfrak r}ho_\epsilontabar}&G_k{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{\chi_\epsilonll}&1\\
1{{\mathfrak m}athfrak a}r[r] &{\mathrm{Sp}}(H){{\mathfrak m}athfrak a}r[r] &\GammaSp(H){{\mathfrak m}athfrak a}r[r]^{\mathfrak t}au&\Gammam{{\mathfrak m}athfrak a}r[r]&1\\
& &\Gammam{{\mathfrak m}athfrak a}r[u]_\omega{{\mathfrak m}athfrak a}r@{=}[r] &\Gammam{{\mathfrak m}athfrak a}r[u]_{\omega_\epsilonll}&
}
$$
whose rows are exact. Note that ${\mathfrak t}au\circ \omega =\omega_\epsilonll$.
In the above diagram, applying weighted completion to the middle and right columns and relative completion to the left column produces a commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
&{{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&{{\mathfrak m}athcal A}_k{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&1\\
1{{\mathfrak m}athfrak a}r[r] &{\mathrm{Sp}}(H){{\mathfrak m}athfrak a}r[r] &\GammaSp(H){{\mathfrak m}athfrak a}r[r]^{\mathfrak t}au&\Gammam{{\mathfrak m}athfrak a}r[r]&1.
}
$$
The right exactness of relative and weighted completion implies that the top row is exact. In our case, we have the following result.
\begin{proposition}\ellabel{weighted comp for G_k}
With the notation as above, we have ${{\mathfrak m}athcal A}_k =\Gammam$.
\epsilonnd{proposition}
\begin{proof} By \cite[Prop.~6.8]{hain2}, there is an isomorphism ${{\mathfrak m}athbb H}om_\Gammam(H_1({\mathfrak n}_k), V) \cong H^1(G_k, V)$ for each finite dimensional $\Gammam$-module $V$ of weight $m < 0$. Thus it suffices to show that $H^1(G_k, V)=0$. Let $I_\epsilonll$ and $K_\epsilonll$ be the image and kernel of $\chi_\epsilonll$, respectively. Since $k$ is a finite field, the image $I_\epsilonll$ is infinite. Associated to the exact sequence,
$1{\mathfrak t}o K_\epsilonll{\mathfrak t}o G_k\overset{\chi_\epsilonll}{\mathfrak t}o I_\epsilonll{\mathfrak t}o1,$
there is an exact sequence
$$0{\mathfrak t}o H^1(I_\epsilonll, H^0(K_\epsilonll, V)){\mathfrak t}o H^1(G_k, V) {\mathfrak t}o H^0(I_\epsilonll, H^1(K_\epsilonll, V)) {\mathfrak t}o H^2(I_\epsilonll, H^0(K_\epsilonll, V)).$$
For $m{\mathfrak n}ot=0$, the first and fourth terms vanish (see \cite[Ex.~4.12]{wei}). Since $K_\epsilonll$ acts trivially on $V$, there is an isomorphism $H^0(I_\epsilonll, H^1(K_\epsilonll, V))\cong {{\mathfrak m}athbb H}om_{I_\epsilonll}(K_\epsilonll, V)$. The Galois group $G_k\cong \omegaidehat{{{\mathfrak m}athbb Z}}$ is abelian, and so the induced action of $I_\epsilonll$ on $K_\epsilonll$ is trivial. Let ${\mathfrak p}hi \in {{\mathfrak m}athbb H}om_{I_\epsilonll}(K_\epsilonll, V)$ and $x\in K_\epsilonll$. For $a \in I_\epsilonll$, we have
${\mathfrak p}hi(x) ={\mathfrak p}hi(ax) = a^m{\mathfrak p}hi(x).$
For $m{\mathfrak n}ot= 0$, we can find $a$ such that $1-a^m$ is in ${{\mathfrak m}athbb Q}^{\mathfrak t}imes_\epsilonll$ since $I_\epsilonll$ is infinite, so ${\mathfrak p}hi(x) =0$. Thus it follows that ${{\mathfrak m}athbb H}om_{I_\epsilonll}(K_\epsilonll, V)=0$. Therefore, $H^1(G_k, V)$ vanishes.
\epsilonnd{proof}
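To make the last step of the proof concrete (the specific choice of $a$ here is ours, added for illustration): since $k$ is finite, $G_k$ is topologically generated by the Frobenius $\operatorname{Frob}_k$, which acts on $\ell$-power roots of unity by raising them to the power $q=\# k$, so that $\chi_\ell(\operatorname{Frob}_k)=q\in I_\ell$. Taking $a=q$ in the identity $\phi(x)=a^m\phi(x)$ gives
$$
(1-q^m)\,\phi(x)=0,
$$
and for an integer $q\geq 2$ and $m\neq 0$ one has $q^m\neq 1$, so $1-q^m\in\mathbb{Q}_\ell^\times$ and hence $\phi(x)=0$.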
By Proposition {\mathfrak r}ef{weighted comp for G_k}, ${{\mathfrak m}athcal N}_k$ is trivial and so the natural map ${{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ induces a surjection $\beta: {{\mathfrak m}athcal U}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal U}_{g,n}[\epsilonll^r]$ and hence a surjection of pronilpotent Lie algebras $d\beta: {\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o{\mathfrak u}_{g,n}[\epsilonll^r]$. Applying the functor $\Gammar^W_\bullet$ gives a surjection of graded Lie algebras $\Gammar(d\beta): \Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o \Gammar^W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]$.
\begin{proposition}\ellabel{weight -1 and -2 injection}
For $g {{\mathfrak m}athfrak g}eq 3$ and $n{{\mathfrak m}athfrak g}eq 0$, the graded Lie algebra map $\Gammar(d\beta)$ induces $\GammaSp(H)$-equivariant isomorphisms
$$\Gammar^W_m{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]\cong \Gammar^W_m{\mathfrak u}_{g,n}[\epsilonll^r]\,\,{\mathfrak t}ext{ for } m=-1,-2.$$
\epsilonnd{proposition}
\begin{proof} It suffices to show that $\Gammar(d\beta)$ is injective for weight $m=-1,-2$.
The Lie algebra of ${{\mathfrak m}athcal P}$, denoted by ${\mathfrak p}$, admits a weight filtration via the adjoint action of ${{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ on ${\mathfrak p}$, and so does the derivation Lie algebra of ${\mathfrak p}$, denoted by ${{\mathfrak m}athcal D}er {\mathfrak p}$. Assume $n{{\mathfrak m}athfrak g}eq 1$. In the diagram $(a)$ in \S{\mathfrak r}ef{key seq}, each tautological section $s_j$ of ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{\mathfrak t}o{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ gives a weight filtration preserving adjoint action, which produces the commutative diagram of graded Lie algebras
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
{\mathfrak m}athrm{adj}_j:\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{\Gammar(d\beta)}& \Gammar^W_\bullet{{\mathfrak m}athcal D}er {\mathfrak p} {{\mathfrak m}athfrak a}r@{=}[d] \\
{\mathfrak m}athrm{adj}_j:\Gammar^W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]&\Gammar^W_\bullet{{\mathfrak m}athcal D}er{\mathfrak p}.
}
$$
Recall that $k$ is a finite field of characteristic $p$. Let $\epsilontabar'$ be a geometric generic point of ${{\mathfrak m}athcal M}_{g,n/\bar{{\mathfrak m}athbb Q}_p}[\epsilonll^r]$ and $C_{\epsilontabar'}$ the fiber of the universal curve ${{\mathfrak m}athcal C}_{g,n/\bar{{\mathfrak m}athbb Q}_p}[\epsilonll^r]{\mathfrak t}o{{\mathfrak m}athcal M}_{g,n/\bar{{\mathfrak m}athbb Q}_p}[\epsilonll^r]$ over $\epsilontabar'$. Denote the Lie algebra of the $\epsilonll$-adic unipotent completion of ${\mathfrak p}i_1(C_{\epsilontabar'}, \bar x')$ by ${\mathfrak p}'$. Denote the relative completion of ${\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/\bar {{\mathfrak m}athbb Q}_p}[\epsilonll^r], \epsilontabar')$ with respect to the monodromy representation ${\mathfrak r}ho^{\bar{{\mathfrak m}athbb Q}_p}_{\epsilontabar'}: {\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/\bar {{\mathfrak m}athbb Q}_p}[\epsilonll^r], \epsilontabar') {\mathfrak t}o {\mathrm{Sp}}(H^1_\epsilont(C_{\epsilontabar'}, {{\mathfrak m}athbb Q}l(1)))$ by ${{\mathfrak m}athcal G}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]$. Denote the prounipotent radical of ${{\mathfrak m}athcal G}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]$ by ${{\mathfrak m}athcal U}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]$ and its Lie algebra by ${\mathfrak u}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]$. Fix an isomorphism ${{\mathfrak m}athfrak g}amma: {\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/{{\mathfrak m}athbb Z}^{\mathfrak u}r_p}[\epsilonll^r], \epsilontabar)\cong {\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/{{\mathfrak m}athbb Z}^{\mathfrak u}r_p}[\epsilonll^r], \epsilontabar')$, where ${{\mathfrak m}athbb Z}^{\mathfrak u}n_p$ is the maximal unramified extension of ${{\mathfrak m}athbb Z}_p$. The isomorphism ${{\mathfrak m}athfrak g}amma$ induces weight filtration preserving isomorphisms ${\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]\cong {\mathfrak u}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]$, ${\mathfrak p}\cong {\mathfrak p}'$, and ${{\mathfrak m}athcal D}er{\mathfrak p}\cong {{\mathfrak m}athcal D}er{\mathfrak p}'$ that make the diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
{\mathfrak p}rod_{j=1}^n\Gammar^W_\bullet{\mathfrak m}athrm{adj}_j:&\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]& \bigoplus_{j=1}^n\Gammar^W_\bullet{{\mathfrak m}athcal D}er {\mathfrak p} {{\mathfrak m}athfrak a}r[d] \\
{\mathfrak p}rod_{j=1}^n\Gammar^W_\bullet{\mathfrak m}athrm{adj}_j:&\Gammar^W_\bullet{\mathfrak u}^{\bar{{\mathfrak m}athbb Q}_p}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]&\bigoplus_{j=1}^n\Gammar^W_\bullet{{\mathfrak m}athcal D}er{\mathfrak p}',
}
$$
commute, where the vertical maps are isomorphisms of graded Lie algebras induced by ${{\mathfrak m}athfrak g}amma$ (see, for example, \cite[\S8.1]{wat1}). From \cite[Prop.~8.6 \& 8.8]{hain2}, it follows that the bottom adjoint map is injective in weight $m=-1, -2$, and so is the adjoint map on the top row. Therefore, for $n{{\mathfrak m}athfrak g}eq 1$, the graded Lie algebra map $\Gammar(d\beta)_m :\Gammar^W_m{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o \Gammar^W_m{\mathfrak u}_{g,n}[\epsilonll^r]$ is injective for $m=-1,-2$. For $n=0$, consider the commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
0{{\mathfrak m}athfrak a}r[r]&\Gammar^W_\bullet{\mathfrak p}{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r@{=}[d]&\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,1}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&0\\
0{{\mathfrak m}athfrak a}r[r]&\Gammar^W_\bullet{\mathfrak p}{{\mathfrak m}athfrak a}r[r]&\Gammar^W_\bullet{{\mathfrak m}athcal D}er{\mathfrak p}{{\mathfrak m}athfrak a}r[r]&\Gammar^W_\bullet{{\mathfrak m}athcal O}ut{{\mathfrak m}athcal D}er{\mathfrak p}{{\mathfrak m}athfrak a}r[r]&0,
}
$$
where the injection $\Gammar^W_\bullet{\mathfrak p} {\mathfrak t}o \Gammar^W_\bullet{{\mathfrak m}athcal D}er{\mathfrak p}$ is the inner adjoint. It follows that the right-hand vertical map is injective for weight $m=-1,-2$. Since the outer action $\Gammar^W_\bullet{\mathfrak u}^{{\mathfrak m}athfrak g}eom_g[\epsilonll^r]{\mathfrak t}o \Gammar^W_\bullet {{\mathfrak m}athcal O}ut{{\mathfrak m}athcal D}er{\mathfrak p}$ factors through $\Gammar^W_\bullet{\mathfrak u}_g[\epsilonll^r]$, it follows that the map $\Gammar^W_m{\mathfrak u}^{{\mathfrak m}athfrak g}eom_g[\epsilonll^r]{\mathfrak t}o \Gammar^W_m{\mathfrak u}_g[\epsilonll^r]$ is injective for $m=-1,-2$.
\epsilonnd{proof}
Together with the diagram $(a)$ in \S{\mathfrak r}ef{key seq}, Propositions {\mathfrak r}ef{comparison} and {\mathfrak r}ef{weight -1 and -2 injection} give the following result.
\begin{proposition}\ellabel{iso in weight -1 and -2}For $g{{\mathfrak m}athfrak g}eq 3$ and ${\mathfrak u}\in \{{\mathfrak u}_{g,n}, {\mathfrak u}_{{{\mathfrak m}athcal C}_{g,n}}\} $ there is a $\GammaSp(H)$-equivariant graded Lie algebra isomorphism
$$\Gammar^W_{\bullet}{\mathfrak u}/W_{-3}\cong \Gammar^W_{\bullet}{\mathfrak u}^{{\mathfrak m}athfrak g}eom/W_{-3}.$$
\epsilonnd{proposition}
{\mathfrak s}ubsection{${{\mathfrak m}athfrak d}_{g,n}$} \ellabel{two-step} Here we review the two-step graded nilpotent Lie algebra ${{\mathfrak m}athfrak d}_{g,n}$ associated to the universal curve ${\mathfrak p}i:{{\mathfrak m}athcal C}_{g,n/k}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r]$. The Lie algebra ${{\mathfrak m}athfrak d}_{g,n}$ introduced in \cite[\S10]{hain2} plays a key role in \cite{hain2} and our main results. Let ${\mathfrak t}heta$ be the cup product pairing ${{\mathfrak m}athbb L}ambda^2H_{{\mathfrak m}athbb Z}l{\mathfrak t}o {{\mathfrak m}athbb Z}l(1)$. The dual ${\mathfrak t}hetadual$ of ${\mathfrak t}heta$ can be viewed as an element of ${{\mathfrak m}athbb L}ambda^2H_{{\mathfrak m}athbb Z}l(-1)$.
Denote the representation $({{\mathfrak m}athbb L}ambda^3H)(-1)/{\mathfrak t}hetadual\omegaedge H$ by ${{\mathfrak m}athbb L}ambda^3_0H$ and denote the kernel of ${\mathfrak t}heta:{{\mathfrak m}athbb L}ambda^2H{\mathfrak t}o {{\mathfrak m}athbb Q}l(1)$ by ${{\mathfrak m}athbb L}ambda^2_0H$. The representations ${{\mathfrak m}athbb L}ambda^3_0H$ and ${{\mathfrak m}athbb L}ambda^2_0H$ are irreducible as $\GammaSp(H)$-modules.
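For concreteness (a routine dimension count, included editorially): writing $\dim_{\mathbb{Q}_\ell} H = 2g$, the map $H\to\Lambda^3H$, $h\mapsto\check{\theta}\wedge h$, is injective for $g\geq 2$, so
$$
\dim \Lambda^3_0H \;=\; \binom{2g}{3}-2g, \qquad \dim \Lambda^2_0H \;=\; \binom{2g}{2}-1 .
$$
For example, when $g=3$ both dimensions equal $14$.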
Define
$${{\mathfrak m}athfrak d}_{g,n}:=\Gammar^W_\bullet{\mathfrak u}_{g,n}[\epsilonll^r]/(W_{-3} + ({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp) \,\,{\mathfrak t}ext{ and } \,\,{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}}: = \Gammar^W_\bullet{\mathfrak u}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/(W_{-3} + ({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp),$$
where $({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp$ denotes the $\GammaSp(H)$-complement of the isotypical ${{\mathfrak m}athbb L}ambda^2_0H$-component in weight $-2$.
The fact that $H_1({\mathfrak p})$ is pure of weight $-1$ implies, by strictness, that the weight filtration $W_\bullet {\mathfrak p}$ agrees with the lower central series of ${\mathfrak p}$. We therefore recall the following well-known descriptions of $\Gammar^W_m{\mathfrak p}$ for $m=-1, -2$.
\begin{proposition}\ellabel{low degree p rep}For $g {{\mathfrak m}athfrak g}eq 2$, there are $\GammaSp(H)$-module isomorphisms
$$\Gammar^W_{-1}{\mathfrak p} \cong H\,\, {\mathfrak t}ext{ and }\,\, \Gammar^W_{-2}{\mathfrak p} \cong {{\mathfrak m}athbb L}ambda^2_0H.$$
\epsilonnd{proposition}
The universal curve ${\mathfrak p}i:{{\mathfrak m}athcal C}_{g,n/k}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r]$ induces a $\GammaSp(H)$-equivariant graded Lie algebra surjection ${{\mathfrak m}athfrak d}({\mathfrak p}i):{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}}{\mathfrak t}o {{\mathfrak m}athfrak d}_{g,n}$ whose kernel is given by $\Gammar^W_\bullet {\mathfrak p}/W_{-3}$: there is the exact sequence of graded Lie algebras
$$0{\mathfrak t}o \Gammar^W_{\bullet}{\mathfrak p}/W_{-3} {\mathfrak t}o {{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}}\overset{{{\mathfrak m}athfrak d}({\mathfrak p}i)}{\mathfrak t}o {{\mathfrak m}athfrak d}_{g,n}{\mathfrak t}o 0.$$
Hence, Proposition {\mathfrak r}ef{comparison} (iii), Proposition {\mathfrak r}ef{iso in weight -1 and -2}, Proposition {\mathfrak r}ef{low degree p rep}, and \cite[Thm.~9.11]{hain2} imply that there are $\GammaSp(H)$-module isomorphisms in weight $-1$ and $-2$:
$$\Gammar^W_{-1}{{\mathfrak m}athfrak d}_{g,n} \cong {{\mathfrak m}athbb L}ambda^3_0H \oplus \bigoplus_{j=1}^nH_j,\,\, \Gammar^W_{-2}{{\mathfrak m}athfrak d}_{g,n} \cong \bigoplus_{j=1}^n{{\mathfrak m}athbb L}ambda^2_0H_j,\,\,\Gammar^W_m{{\mathfrak m}athfrak d}_{g,n} =0 {\mathfrak t}ext{ for } m\elleq -3$$
and
$$\Gammar^W_{-1}{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}} \cong {{\mathfrak m}athbb L}ambda^3_0H \oplus \bigoplus_{j=0}^nH_j,\,\, \Gammar^W_{-2}{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}} \cong \bigoplus_{j=0}^n{{\mathfrak m}athbb L}ambda^2_0H_j,\,\,\Gammar^W_m{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}} =0 {\mathfrak t}ext{ for } m\elleq -3,$$
{\mathfrak t}extcolor{black}{where $H_j$ is a copy of $H$.}
Furthermore, by \cite[Prop.~10.2]{hain2}, the open immersion ${{\mathfrak m}athcal M}_{g,n+1/k}[\epsilonll^r]{{\mathfrak m}athfrak h}ookrightarrow {{\mathfrak m}athcal C}_{g,n/k}[\epsilonll^r]$ induces a $\GammaSp(H)$-equivariant graded Lie algebra isomorphism ${{\mathfrak m}athfrak d}_{g,n+1}\overset{{\mathfrak s}im}{\mathfrak t}o {{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}}$ that makes the diagram
$$\mathbf{x}ymatrix@C=1pc @R=1pc{
{{\mathfrak m}athfrak d}_{g, n+1}{{\mathfrak m}athfrak a}r[r]^{{\mathfrak s}im}{{\mathfrak m}athfrak a}r[dr]&{{\mathfrak m}athfrak d}_{{{\mathfrak m}athcal C}_{g,n}}{{\mathfrak m}athfrak a}r[d]^{{{\mathfrak m}athfrak d}({\mathfrak p}i)}\\
&{{\mathfrak m}athfrak d}_{g,n}
}
$$
commute, where the surjection ${{\mathfrak m}athfrak d}_{g, n+1}{\mathfrak t}o {{\mathfrak m}athfrak d}_{g,n}$ is induced by the projection ${{\mathfrak m}athcal M}_{g,n+1}{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n}$ mapping $[C, x_0, x_1, \elldots, x_n]{\mathfrak m}apsto [C, x_1,\elldots,x_n]$.
Each tautological section $s_j$ of ${\mathfrak p}i:{{\mathfrak m}athcal C}_{g,n/k}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r]$ induces a $\GammaSp(H)$-equivariant graded Lie algebra section ${{\mathfrak m}athfrak d}(s_j)$ of ${{\mathfrak m}athfrak d}({\mathfrak p}i)$ (see \cite[Cor.~10.3]{hain2}).
We have the following key result.
\begin{proposition} [{ \cite[Prop.~10.4 \& 10.8]{hain2}, \cite[Prop.~11.3]{wat1} }] \ellabel{sections of dpi}
For $g{{\mathfrak m}athfrak g}eq 4$ and $n{{\mathfrak m}athfrak g}eq 1$, the only $\GammaSp(H)$-equivariant graded Lie algebra sections of ${{\mathfrak m}athfrak d}({\mathfrak p}i)$ are the sections ${{\mathfrak m}athfrak d}(s_1),\elldots, {{\mathfrak m}athfrak d}(s_n)$. \epsilonnd{proposition}
{\mathfrak s}ection{Non-abelian cohomology of ${\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r], \epsilontabar)$ }\ellabel{non-ab coho of arith}
{\mathfrak s}ubsection{Definition} Assume that $g{{\mathfrak m}athfrak g}eq 3$.
Let $k$ be a finite field with characteristic $p$ and $\epsilonll$ a prime number distinct from $p$.
Let $r$ be a nonnegative integer. Here set $\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r] := {\mathfrak p}i_1({{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r], \epsilontabar)$.
Now, from the exact sequence \begin{equation}\ellabel{6th exact seq for p}1{\mathfrak t}o {{\mathfrak m}athcal P}{\mathfrak t}o {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{\mathfrak t}o 1,\epsilonnd{equation} the conjugation action of $ {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$ on ${{\mathfrak m}athcal P}$ induces the commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=.7pc{
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}{{\mathfrak m}athfrak a}r@{=}[d]{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[d]{{\mathfrak m}athfrak a}r[r]&1\\
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal A}ut({{\mathfrak m}athcal P}){{\mathfrak m}athfrak a}r[r] &{{\mathfrak m}athcal O}ut({{\mathfrak m}athcal P}){{\mathfrak m}athfrak a}r[r] &1.
}
$$
The right-hand vertical map is the homomorphism ${\mathfrak p}hi:{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal O}ut({{\mathfrak m}athcal P})$ given in the introduction. Since ${{\mathfrak m}athcal G}_{{\mathfrak p}hi}$ is the fiber product ${{\mathfrak m}athcal A}ut({{\mathfrak m}athcal P}){\mathfrak t}imes_{{{\mathfrak m}athcal O}ut({{\mathfrak m}athcal P}), {\mathfrak p}hi}{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$, there is the commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=.7pc{
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}{{\mathfrak m}athfrak a}r@{=}[d]{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r@{=}[d]{{\mathfrak m}athfrak a}r[r]&1\\
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{{\mathfrak p}hi}{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]&1. }
$$
{\mathfrak t}extcolor{black}{\begin{lemma}\ellabel{exact seq with A}
For each ${{\mathfrak m}athbb Q}l$-algebra $A$, the sequences
$$
1{\mathfrak t}o{{\mathfrak m}athcal P}(A){\mathfrak t}o {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A){\mathfrak t}o 1
$$
and
$$
1{\mathfrak t}o{{\mathfrak m}athcal P}^o(A){\mathfrak t}o{{\mathfrak m}athcal G}_{g,n+1}[\epsilonll^r](A){\mathfrak t}o{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A){\mathfrak t}o 1
$$
are exact.
\epsilonnd{lemma}
\begin{proof} For each ${{\mathfrak m}athbb Q}l$-algebra $A$, the sequence $1{\mathfrak t}o {{\mathfrak m}athcal P}(A){\mathfrak t}o {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A)$ is exact. Hence it remains to show that ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A)$ is surjective. Fix a splitting of ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}{\mathfrak t}o \GammaSp(H)$ in the category of ${{\mathfrak m}athbb Q}l$-groups. This induces a splitting of ${{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{\mathfrak t}o \GammaSp(H)$. Set $H_A:= H\otimes_{{\mathfrak m}athbb Q}l A$. These splittings yield isomorphisms ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A)\cong {{\mathfrak m}athcal U}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak r}times \GammaSp(H_A)$ and ${{\mathfrak m}athcal G}_{g,n}(A)\cong {{\mathfrak m}athcal U}_{g,n}[\epsilonll^r](A){\mathfrak r}times \GammaSp(H_A)$, which are compatible with the map ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A)$.
The surjectivity of the homomorphism ${{\mathfrak m}athcal U}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal U}_{g,n}[\epsilonll^r]$ of prounipotent groups implies that the Lie algebra map ${\mathfrak u}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]\otimes_{{\mathfrak m}athbb Q}l A{\mathfrak t}o {\mathfrak u}_{g,n}[\epsilonll^r]\otimes_{{\mathfrak m}athbb Q}l A$ is surjective, and so is the map ${{\mathfrak m}athcal U}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal U}_{g,n}[\epsilonll^r](A)$, because the log maps ${{\mathfrak m}athcal U}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {\mathfrak u}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]\otimes_{{\mathfrak m}athbb Q}l A$ and ${{\mathfrak m}athcal U}_{g,n}[\epsilonll^r](A){\mathfrak t}o {\mathfrak u}_{g,n}[\epsilonll^r]\otimes_{{\mathfrak m}athbb Q}l A$ are bijections. Therefore, the map ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r](A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A)$ is surjective. A similar argument applies to the second sequence.
\epsilonnd{proof}}
The exactness of the first sequence in Lemma {\mathfrak r}ef{exact seq with A} implies that for each ${{\mathfrak m}athbb Q}l$-algebra $A$, the map ${{\mathfrak m}athcal G}_{{\mathfrak p}hi}(A){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r](A)$ is surjective, and the induced homomorphism ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{{\mathfrak p}hi}$ is an isomorphism. Pulling back the exact sequence
$$1{\mathfrak t}o{{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l){\mathfrak t}o {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]({{\mathfrak m}athbb Q}l){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l){\mathfrak t}o 1$$
along the representation ${\mathfrak t}ilde{{\mathfrak r}ho}_\epsilontabar: \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l)$,
we obtain an extension
$$1{\mathfrak t}o {{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l){\mathfrak t}o {{\mathfrak m}athcal E}_{g,n}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o 1$$
of $\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ by ${{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l)$.
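Unwinding the pullback (this explicit description is included here for convenience; it is just the usual fiber product): as a group,
$$
\mathcal{E}_{g,n}\;=\;\Big\{(x,\gamma)\in\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r](\mathbb{Q}_\ell)\times\Gamma^{\mathrm{arith}}_{g,n}[\ell^r] \;:\; x\longmapsto\tilde{\rho}_{\bar{\eta}}(\gamma)\ \text{in}\ \mathcal{G}_{g,n}[\ell^r](\mathbb{Q}_\ell)\Big\},
$$
so a continuous splitting of $\mathcal{E}_{g,n}\to\Gamma^{\mathrm{arith}}_{g,n}[\ell^r]$ is the same thing as a continuous lift of $\tilde{\rho}_{\bar{\eta}}$ through $\mathcal{G}_{\mathcal{C}_{g,n}}[\ell^r](\mathbb{Q}_\ell)\to\mathcal{G}_{g,n}[\ell^r](\mathbb{Q}_\ell)$.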
Recall that the weight filtration $W_\bullet{\mathfrak p}$ agrees with the lower central series of ${\mathfrak p}$. So the filtration $W_\bullet {{\mathfrak m}athcal P}$ induced by the exponential function on ${\mathfrak p}$ is also the lower central series of ${{\mathfrak m}athcal P}$. So each normal subgroup $W_m{{\mathfrak m}athcal P}$ of ${{\mathfrak m}athcal P}$ is also normal in ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$. For each $N\elleq -1$, pushing down the exact sequence ({\mathfrak r}ef{6th exact seq for p}) along ${{\mathfrak m}athcal P}{\mathfrak t}o {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P}$, we obtain the exact sequence
\begin{equation}\ellabel{7th exact seq for p}1{\mathfrak t}o {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P} {\mathfrak t}o {{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_N{{\mathfrak m}athcal P} {\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{\mathfrak t}o 1.\epsilonnd{equation}
Pulling back the exact sequence $({\mathfrak r}ef{7th exact seq for p})$ along ${\mathfrak t}ilde{{\mathfrak r}ho}_\epsilontabar$ gives an extension
$$1{\mathfrak t}o ({{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l){\mathfrak t}o {{\mathfrak m}athcal E}^N_{g,n}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o 1$$
of $\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ by $({{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$. Denote the set of ${{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l)$-conjugacy classes of {\mathfrak t}extcolor{black}{continuous} sections of ${{\mathfrak m}athcal E}_{g,n}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ by $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))$. Similarly, for each $N\elleq -1$, denote the set of $({{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$-conjugacy classes of {\mathfrak t}extcolor{black}{continuous} sections of ${{\mathfrak m}athcal E}^N_{g,n}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ by $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))$.
{\mathfrak t}extcolor{black}{\begin{variant} Using Proposition {\mathfrak r}ef{exact seq for punctured universal curve}, we can apply a similar construction to the universal punctured curve ${\mathfrak p}i^o$ and we have $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l))$.
\epsilonnd{variant}}
{\mathfrak s}ubsection{Non-abelian cohomology scheme of ${{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$}Denote the Lie algebras of ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$, ${{\mathfrak m}athcal U}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$, ${{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$, ${{\mathfrak m}athcal U}_{g,n}[\epsilonll^r]$, and $\GammaSp(H)$, by ${{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$, ${\mathfrak u}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]$, ${{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]$, ${\mathfrak u}_{g,n}[\epsilonll^r]$, and ${\mathfrak r}$, respectively. A spectral sequence produced from the extension $0{\mathfrak t}o {\mathfrak u}_{g,n}[\epsilonll^r]{\mathfrak t}o{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]{\mathfrak t}o {\mathfrak r}{\mathfrak t}o0$ implies that for each finite dimensional $\GammaSp(H)$-module $V$, there are isomorphisms
$$H^j({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], V)\cong H^0({\mathfrak r}, H^j({\mathfrak u}_{g,n}[\epsilonll^r])\otimes V)\cong {{\mathfrak m}athbb H}om_{\GammaSp(H)}(H_j({\mathfrak u}_{g,n}[\epsilonll^r]), V). $$
\begin{proposition}\ellabel{condition for existence} For $g{{\mathfrak m}athfrak g}eq 3$ and $n{{\mathfrak m}athfrak g}eq 0$, we have
$$H^1({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_m{\mathfrak p}) \cong \begin{cases}
\oplus_{i=1}^n{{\mathfrak m}athbb Q}l & m =-1 \\
0& m <-1. \\
\epsilonnd{cases} $$
\epsilonnd{proposition}
\begin{proof} Recall that the Lie algebra map $d\beta :{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o {\mathfrak u}_{g,n}[\epsilonll^r]$ is surjective and the induced graded Lie algebra map $\Gammar(d\beta)$ is an isomorphism in weight $-1$ and $-2$. Since $H_1({\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r])\cong H_1({\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n})$ and $H_1({\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n})$ is pure of weight $-1$ by \cite[Thm.~9.11]{hain2}, the surjectivity of $d\beta$ implies that $H_1({\mathfrak u}_{g,n}[\epsilonll^r])$ is also pure of weight $-1$. We have $H_1({\mathfrak u}) =\Gammar^W_{-1}H_1({\mathfrak u})$ for ${\mathfrak u} \in \{{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r], {\mathfrak u}_{g,n}[\epsilonll^r]\}$ and there is a commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=.7pc{
\Gammar^W_{-1}{\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]^{\mathfrak s}im{{\mathfrak m}athfrak a}r[d] &\Gammar^W_{-1}{\mathfrak u}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[d]\\
H_1({\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]){{\mathfrak m}athfrak a}r[r] & H_1({\mathfrak u}_{g,n}[\epsilonll^r]),
}
$$
where the vertical maps are isomorphisms and the top map is $\Gammar_{-1}(d\beta)$. Thus the bottom map in the diagram is an isomorphism.
Here, we have isomorphisms
\begin{align*}H^1({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_m{\mathfrak p}) &\cong {{\mathfrak m}athbb H}om_{\GammaSp(H)}(H_1({\mathfrak u}_{g,n}[\epsilonll^r]), \Gammar^W_m{\mathfrak p})\\
&\cong {{\mathfrak m}athbb H}om_{\GammaSp(H)}(\Gammar^W_mH_1({\mathfrak u}^{{\mathfrak m}athfrak g}eom_{g,n}), \Gammar^W_m{\mathfrak p})\\
&\cong \begin{cases}
\oplus_{i=1}^n{{\mathfrak m}athbb Q}l & m =-1, \\
0& m <-1 \\
\epsilonnd{cases}
\epsilonnd{align*}
where the last isomorphism follows from \cite[Thm.~9.11]{hain2}.
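To spell out the weight $-1$ case (a short verification added here): in weight $-1$ the cited theorem identifies $\operatorname{Gr}^W_{-1}H_1(\mathfrak{u}^{\mathrm{geom}}_{g,n})$ with $\Lambda^3_0H\oplus\bigoplus_{j=1}^{n}H_j$, each $H_j$ a copy of $H$, so
$$
\operatorname{Hom}_{\operatorname{GSp}(H)}\Big(\Lambda^3_0H\oplus\bigoplus_{j=1}^{n}H_j,\ H\Big)\;\cong\;\bigoplus_{j=1}^{n}\operatorname{Hom}_{\operatorname{GSp}(H)}(H_j,H)\;\cong\;\bigoplus_{j=1}^{n}\mathbb{Q}_\ell,
$$
because $\Lambda^3_0H$ and $H$ are non-isomorphic irreducible $\operatorname{GSp}(H)$-modules and $\operatorname{End}_{\operatorname{GSp}(H)}(H)=\mathbb{Q}_\ell$ (we use absolute irreducibility here). For $m<-1$ the source $\operatorname{Gr}^W_mH_1(\mathfrak{u}^{\mathrm{geom}}_{g,n})$ vanishes by purity, which gives the second case.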
\epsilonnd{proof}
Proposition {\mathfrak r}ef{condition for existence} shows that $H^1({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_m{\mathfrak p})$ is finite dimensional for all $m\elleq -1$. Therefore, it follows from a result of Hain \cite[Thm.~4.6]{hain4} that for each $N\elleq -1$, there exists an affine ${{\mathfrak m}athbb Q}l$-scheme of finite type $H^1_{\mathfrak n}ab( {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})$ that represents the functor associating to each ${{\mathfrak m}athbb Q}l$-algebra $A$ the set of $({{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})(A)$-conjugacy classes of {\mathfrak t}extcolor{black}{continuous} sections of ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_{N}{{\mathfrak m}athcal P}\otimes_{{\mathfrak m}athbb Q}l A{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]\otimes_{{\mathfrak m}athbb Q}l A$.
Applying the exact functor $\Gammar^W_\bullet$ to the exact sequence ({\mathfrak r}ef{7th exact seq for p}) gives the exact sequence of the corresponding associated graded Lie algebras
$$0{\mathfrak t}o \Gammar^W_\bullet{\mathfrak p}/W_N{\mathfrak p}{\mathfrak t}o\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_N{\mathfrak p}{\mathfrak t}o\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]{\mathfrak t}o 0.$$
For $N \elleq -1$, denote by ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{N}{\mathfrak p})$ the set of $\GammaSp(H)$-equivariant graded Lie algebra sections of $\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_{N}{\mathfrak p}{\mathfrak t}o\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]$.
\begin{proposition}[{\cite[Thm.~4.6 \& Cor.~4.7]{hain4}}]\ellabel{nonabelian cohomology}With notation as above,
there are bijections
$$H^1_{\mathfrak n}ab( {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)\cong {\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{N}{\mathfrak p}),$$
and
$$ H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)\cong {\vec{v}}arprojlim_{N\elleq -1} H^1_{\mathfrak n}ab( {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l).$$
\epsilonnd{proposition}
The following result allows us to use the exact sequence for non-abelian cohomology schemes \cite[\S4.4]{hain4}. It follows from the universal property of weighted completion (see \cite[Prop.~15.2]{hain2}).
\begin{proposition}\ellabel{non-abelian coh iso}With notation as above, there are bijections
$$H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))\cong H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) \,\,{\mathfrak t}ext{ for all }N\elleq -1$$
and
$$H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))\cong H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l).$$
\epsilonnd{proposition}
{\mathfrak s}ubsection{Proof of Theorem 1} {\mathfrak t}extcolor{black}{Suppose that $p$ is a prime number, that $\epsilonll$ is a prime number distinct from $p$, and that $r$ is a nonnegative integer.
Let $k$ be a finite field with ${{\mathfrak m}athbb C}har(k) =p$ that contains all $\epsilonll^r$th roots of unity. Assume that $g {{\mathfrak m}athfrak g}eq 4$ and $n{{\mathfrak m}athfrak g}eq 1$. }
For each $j=1,\elldots,n$, the tautological section $s_j$ of ${\mathfrak p}i:{{\mathfrak m}athcal C}_{g,n/k}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r]$ induces a section of the projection ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ of the weighted completions and hence a section of ${{\mathfrak m}athcal G}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_N{{\mathfrak m}athcal P}{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ for $N\elleq -1$. Thus each section $s_j$ induces a class in $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$ and $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$, both of which are denoted by $s_j^{\mathfrak u}n$. Furthermore, the homomorphism ${\mathfrak t}ilde{{\mathfrak r}ho}_\epsilontabar: \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l)$ pulls back each $s^{\mathfrak u}n_j$ to give a class in $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))$ and $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l))$, which are also denoted by $s^{\mathfrak u}n_j$.
First, we show that for $g{{\mathfrak m}athfrak g}eq 4$, there is an equality of sets
$H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{-3}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) = \{s^{\mathfrak u}n_1,\elldots, s^{\mathfrak u}n_n\} .$
By Proposition {\mathfrak r}ef{nonabelian cohomology}, we have
$$H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{-3}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)\cong {\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{-3}{\mathfrak p}).$$
For each $j =1, \elldots, n$, denote the image of $s^{\mathfrak u}n_j$ in ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{-3}{\mathfrak p})$ by $\Gammar^W_\bullet ds^{\mathfrak u}n_j$.
By Proposition {\mathfrak r}ef{weight filt}, the functors $V{\mathfrak m}apsto\Gammar^W_\bullet V $ and $V{\mathfrak m}apsto V/W_mV$ are exact. Thus there is the exact sequence
$$0{\mathfrak t}o\Gammar^W_\bullet{\mathfrak p}/W_{-3}{\mathfrak t}o \Gammar^W_\bullet{{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_{-3}{\mathfrak t}o \Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]/W_{-3}{\mathfrak t}o 0, $$
where the map $ \Gammar^W_\bullet{{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]/W_{-3}{\mathfrak t}o \Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]/W_{-3}$ is denoted by $\Gammar^W_{\bullet} d{\mathfrak p}i_{{\mathfrak m}athfrak a}st/W_{-3}$.
Denote by ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]/W_{-3}, \Gammar^W_\bullet{\mathfrak p}/W_{-3})$ the set of $\GammaSp(H)$-equivariant graded Lie algebra sections of $\Gammar^W_\bullet d{\mathfrak p}i_{{\mathfrak m}athfrak a}st/W_{-3}$. Note that ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]/W_{-3}, \Gammar^W_\bullet{\mathfrak p}/W_{-3})$ is in bijection with the set ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{-3}{\mathfrak p}).$ Fix $\GammaSp(H)$-module isomorphisms ${{\mathfrak m}athfrak g}amma:\Gammar^W_{-2}{\mathfrak p} \cong {{\mathfrak m}athbb L}ambda^2_0H$ and ${{\mathfrak m}athfrak a}lpha:\Gammar^W_{-2}{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]\cong \bigoplus_{j=1}^n{{\mathfrak m}athbb L}ambda^2_0H_j \oplus ({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp.$
Then in weight $-2$, there is a commutative diagram of $\GammaSp(H)$-modules
$$\mathbf{x}ymatrix@C=1pc @R=.7pc{
0{{\mathfrak m}athfrak a}r[r] & \Gammar^W_{-2}{\mathfrak p} {{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{{{\mathfrak m}athfrak g}amma}&\Gammar^W_{-2}{{\mathfrak m}athfrak g}_{{{\mathfrak m}athcal C}_{g,n}}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]^{\Gammar^W_{-2}d{\mathfrak p}i_{{\mathfrak m}athfrak a}st}{{\mathfrak m}athfrak a}r[d]^{\cong}&\Gammar^W_{-2}{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]^{{{\mathfrak m}athfrak a}lpha}&0\\
0{{\mathfrak m}athfrak a}r[r] &{{\mathfrak m}athbb L}ambda^2_0H {{\mathfrak m}athfrak a}r[r] & {{\mathfrak m}athbb L}ambda^2_0H \oplus \bigoplus_{j=1}^n{{\mathfrak m}athbb L}ambda^2_0H_j\oplus ({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp {{\mathfrak m}athfrak a}r[r] &\bigoplus_{j=1}^n{{\mathfrak m}athbb L}ambda^2_0H_j \oplus ({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp {{\mathfrak m}athfrak a}r[r] &0,
}
$$
where the rows are exact and the middle vertical isomorphism is determined by ${{\mathfrak m}athfrak g}amma$ and ${{\mathfrak m}athfrak a}lpha\circ\Gammar^W_{-2}d{\mathfrak p}i_{{\mathfrak m}athfrak a}st$. From the description of the map ${{\mathfrak m}athfrak d}({\mathfrak p}i)$ in \S{\mathfrak r}ef{two-step}, it follows that each $\GammaSp(H)$-equivariant graded Lie algebra section $\Gammar^W_\bullet ds/W_{-3}$ of $\Gammar^W_{\bullet} d{\mathfrak p}i_{{\mathfrak m}athfrak a}st/W_{-3}$ induces a $\GammaSp(H)$-equivariant graded Lie algebra section ${{\mathfrak m}athfrak d}(s)$ of ${{\mathfrak m}athfrak d}({\mathfrak p}i)$. Note that the restriction of a section of $\Gammar^W_\bullet d{\mathfrak p}i_{{\mathfrak m}athfrak a}st/W_{-3}$ to the $({{\mathfrak m}athbb L}ambda^2_0H)^{\mathfrak p}erp$-component in weight $-2$ is independent of the choice of a section. Therefore, there is a bijection between ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r]/W_{-3}, \Gammar^W_\bullet{\mathfrak p}/W_{-3})$ and the set of $\GammaSp(H)$-graded Lie algebra sections of ${{\mathfrak m}athfrak d}({\mathfrak p}i)$. By Proposition {\mathfrak r}ef{sections of dpi}, the sections of ${{\mathfrak m}athfrak d}({\mathfrak p}i)$ are given by ${{\mathfrak m}athfrak d}(s_1),\elldots, {{\mathfrak m}athfrak d}(s_n)$. Thus it follows that ${\mathfrak m}athrm{Sect}_{\GammaSp(H)}(\Gammar^W_\bullet{{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_\bullet{\mathfrak p}/W_{-3}{\mathfrak p})$ consists of exactly the sections $\Gammar^W_\bullet ds^{\mathfrak u}n_1,\elldots, \Gammar^W_\bullet ds^{\mathfrak u}n_n$. Hence our first claim follows. \\
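To make the weight $-2$ step concrete (the bookkeeping below is an editorial aside): a $\operatorname{GSp}(H)$-equivariant splitting of the bottom row of the last diagram is the same as an equivariant map $f$ from the right-hand term to $\Lambda^2_0H$; by Schur's lemma (using absolute irreducibility of $\Lambda^2_0H$), $f$ vanishes on the $(\Lambda^2_0H)^{\perp}$-summand and is a scalar on each summand $\Lambda^2_0H_j$, so the equivariant splittings in weight $-2$ are parametrized by
$$
(\lambda_1,\dots,\lambda_n)\in\mathbb{Q}_\ell^{\,n}, \qquad f\big|_{(\Lambda^2_0H)^{\perp}}=0,\quad f\big|_{\Lambda^2_0H_j}=\lambda_j\cdot\mathrm{id}.
$$
Which of these underlie graded Lie algebra sections is then determined by compatibility with the bracket, and this is the content of Proposition \ref{sections of dpi}.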
Next, using the non-abelian exact sequence \cite[Thm.~3]{hain4}, we will show that for each $N\elleq -4$,
$ H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}, {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) = \{s_1^{\mathfrak u}n, \elldots, s_n^{\mathfrak u}n \}. $
Consider the following sequence given by \cite[Thm.~3]{hain4}
$$H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P}){\mathfrak t}o H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N+1}{{\mathfrak m}athcal P})\overset{{{\mathfrak m}athfrak d}elta}{\mathfrak t}o H^2({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_{N+1}{\mathfrak p}),$$
where $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$ is a principal $H^1({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_{N+1}{\mathfrak p})$ set over the set of ${{\mathfrak m}athbb Q}l$-rational points $({{\mathfrak m}athfrak d}elta^{-1}(0))({{\mathfrak m}athbb Q}l)$. For the definition of the map ${{\mathfrak m}athfrak d}elta$, see \cite[\S 14.2]{hain2}.
By Proposition {\mathfrak r}ef{condition for existence}, we have $H^1({{\mathfrak m}athfrak g}_{g,n}[\epsilonll^r], \Gammar^W_{N}{\mathfrak p})=0$ for $N<-1$. Thus we have
$$H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)\cong \{0\}{\mathfrak t}imes({{\mathfrak m}athfrak d}elta^{-1}(0))({{\mathfrak m}athbb Q}l) \cong ({{\mathfrak m}athfrak d}elta^{-1}(0))({{\mathfrak m}athbb Q}l).$$
The fact that each $s^{\mathfrak u}n_j$ lifts from $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N+1}{{\mathfrak m}athcal P})$ to $ H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{N}{{\mathfrak m}athcal P})$ implies that by construction, ${{\mathfrak m}athfrak d}elta(s^{\mathfrak u}n_j) =0$ for $j =1,\elldots, n$. Therefore, for $N=-3$, we have
$$({{\mathfrak m}athfrak d}elta^{-1}(0))({{\mathfrak m}athbb Q}l) = H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}/W_{-3}{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) = \{s^{\mathfrak u}n_1,\elldots, s^{\mathfrak u}n_n\}.$$
Hence, inductively, we have $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}, {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)=\{s_1^{\mathfrak u}n, \elldots, s_n^{\mathfrak u}n \}$ for all $N\elleq -4$ as well.
Since $H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}, {{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) \cong {\vec{v}}arprojlim_{N\elleq -1}H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}, {{\mathfrak m}athcal P}/W_N{{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l)$, it follows that
$$H^1_{\mathfrak n}ab({{\mathfrak m}athcal G}_{g,n}, {{\mathfrak m}athcal P})({{\mathfrak m}athbb Q}l) = \{s_1^{\mathfrak u}n, \elldots, s_n^{\mathfrak u}n \}.$$
By Proposition {\mathfrak r}ef{non-abelian coh iso}, we have
$$H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}({{\mathfrak m}athbb Q}l)) = \{s_1^{\mathfrak u}n, \elldots, s_n^{\mathfrak u}n \}.$$
\qed
{\mathfrak s}ubsection{Proof of Theorem 2} {\mathfrak t}extcolor{black}{With the same assumptions as in Theorem 1, consider the exact sequence of weighted completions associated to the universal punctured curve ${\mathfrak p}i^o:{{\mathfrak m}athcal M}_{g, n+1/k}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal M}_{g,n/k}[\epsilonll^r]$
$$
1{\mathfrak t}o {{\mathfrak m}athcal P}^o{\mathfrak t}o{{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]{\mathfrak t}o 1.
$$ }
{\mathfrak t}extcolor{black}{Proposition {\mathfrak r}ef{exact seq for punctured universal curve} implies that ${{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g, n+1}[\epsilonll^r]$ is isomorphic to the fiber product of ${{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]$ and ${{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]$ over $ {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$, and hence a section of the projection ${{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$ induces a section of ${{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g, n+1}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]$. }
Therefore, it will suffice to show that the sequence
$$
1{\mathfrak t}o {{\mathfrak m}athcal P}^o{\mathfrak t}o{{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g, n+1}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}^{{\mathfrak m}athfrak g}eom_{g,n}[\epsilonll^r]{\mathfrak t}o 1
$$
does not split. But this directly follows from the comparison between characteristic zero and $p$ in Proposition {\mathfrak r}ef{comparison} and \cite[Thm.~1]{wat2}.
{\mathfrak t}extcolor{black}{Next, by Lemma {\mathfrak r}ef{exact seq with A},} the sequence
$$
1{\mathfrak t}o {{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l){\mathfrak t}o{{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]({{\mathfrak m}athbb Q}l){\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l){\mathfrak t}o 1
$$
is exact, and by pulling back this sequence along the homomorphism ${\mathfrak t}ilde{\mathfrak r}ho_\epsilontabar: \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l)$, we obtain an extension
\begin{equation}\ellabel{nonab open}
1{\mathfrak t}o {{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l){\mathfrak t}o{{\mathfrak m}athcal E}^o_{g,n+1}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{\mathfrak t}o 1
\epsilonnd{equation}
of $\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ by ${{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l)$. We have the commutative diagram
$$\mathbf{x}ymatrix@C=1pc @R=.7pc{
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l){{\mathfrak m}athfrak a}r@{=}[d]{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal E}^o_{g,n+1}{{\mathfrak m}athfrak a}r[r]{{\mathfrak m}athfrak a}r[d]&\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]{{\mathfrak m}athfrak a}r[d]^{{\mathfrak t}ilde{\mathfrak r}ho_\epsilontabar}{{\mathfrak m}athfrak a}r[r]&1\\
1{{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l){{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]({{\mathfrak m}athbb Q}l){{\mathfrak m}athfrak a}r[r]&{{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]({{\mathfrak m}athbb Q}l){{\mathfrak m}athfrak a}r[r]&1. }
$$
By the universal property of weighted completion, a continuous section of ${{\mathfrak m}athcal E}^o_{g,n+1}{\mathfrak t}o \Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r]$ would induce a section of ${{\mathfrak m}athcal G}_{g, n+1}[\epsilonll^r]{\mathfrak t}o {{\mathfrak m}athcal G}_{g,n}[\epsilonll^r]$, and we have just seen that no such section exists. Thus the extension ({\mathfrak r}ef{nonab open}) does not split.
Consequently, the non-abelian cohomology $H^1_{\mathfrak n}ab(\Gamma^{{\mathfrak m}athfrak a}rith_{g,n}[\epsilonll^r], {{\mathfrak m}athcal P}^o({{\mathfrak m}athbb Q}l))$ is empty. \qed
\begin{thebibliography}{}
\bibitem{DM}
P.~Deligne and D.~Mumford:
{\epsilonm The irreducibility of the space of curves of given genus}, Publ. Math. Inst. Hautes \'Etudes Sci. 36 (1969) 75-109.
\bibitem{sga1}
A.~Grothendieck:
{\epsilonm S\'eminaire de g\'eom\'etrie alg\'ebrique 1 -- Rev\^etements \'etales et groupe fondamental (SGA 1)}, Lecture Notes in Math., vol. 224, Springer-Verlag, 1971.
\bibitem{hain2}
R.~Hain:
{\epsilonm Rational points of universal curves}, J.~Amer.~Math. Soc. 24 (2011), 709-769.
\bibitem{hain5}
R.~Hain:
{\epsilonm Relative weight filtrations on completions of mapping class groups}, in Groups of Diffeomorphisms, Advanced Studies in Pure Mathematics, vol. 52 (2008), pp.~309-368, Mathematical Society of Japan.
\bibitem{hain4}
R.~Hain:
{\epsilonm Remarks on non-abelian cohomology of proalgebraic groups}, J. Algebraic Geom. 22 (2013), 581-598.
\bibitem{relative prol}
R.~Hain and M.~Matsumoto:
{\epsilonm Relative pro-$\epsilonll$ completions of mapping class groups}, J. Algebra, vol. 321 (2009), pp. 3335-3374
\bibitem{wei}
R.~Hain and M.~Matsumoto:
{\epsilonm Weighted completion of Galois groups and Galois actions on the fundamental group of ${{\mathfrak m}athbb P}^1-\{0,1,\infty\}$}, Compositio Math. 139 (2003), 119-167.
\bibitem{kim}
M.~Kim:
{\epsilonm The motivic fundamental group of ${\mathfrak m}athbb{P}^1-\{0,1,\infty\}$ and the the theorem of
Siegel}, Invent. Math. 161 (2005), 629-656. MR2181717 (2006k:11119)
\bibitem{ntu}
H.~Nakamura, N.~Takao, R.~Ueno:
{\epsilonm Some stability properties of Teichm\"uller modular function fields with pro-$\epsilonll$ weight structures}, Math. Ann. 302 (1995), 197-213.
\bibitem{wat1}
T.~Watanabe,
\epsilonmph{Rational points of universal curves in positive characteristics}, Trans. Amer. Math. Soc. 372 (2019) 7639-7676.
\bibitem{wat2}
T.~Watanabe, \epsilonmph{Remarks on rational points of universal curves}, Proc. Amer. Math. Soc. 148 (2020) 3761-3773.
\epsilonnd{thebibliography}
\epsilonnd{document}
\begin{document}
\title{Locating a service facility and a rapid transit line}
\linenumbers
\begin{abstract}
In this paper we study a facility location problem in the plane in which a single point
(facility) and a rapid transit line (highway) are simultaneously located in order to
minimize the total travel time of the clients to the facility, using the $L_1$ or
Manhattan metric. The rapid transit line is represented by a line segment with fixed
length and arbitrary orientation. The highway is an alternative
transportation system that can be used by the clients to reduce their
travel time to the facility. This problem was introduced by Espejo
and Rodríguez-Chía in~\cite{espejo11}. They gave both a characterization of the optimal
solutions and an algorithm running in $O(n^3\log n)$ time, where $n$ represents the number
of clients. In this paper we show that Espejo and Rodríguez-Chía's algorithm does not
always work correctly.
At the same time, we provide a proper characterization of the solutions
and give an algorithm solving the problem in $O(n^3)$ time.
\end{abstract}
\textit{Keywords:} Geometric optimization; Facility location;
Transportation; Time distance.
\section{Introduction}
Suppose that we have a set of clients represented as a set of points in
the plane, and a service facility represented as a point to which
all clients have to move. Every client can reach the facility
directly or by using an alternative rapid transit line or highway,
represented by a straight line segment of
fixed length and arbitrary
orientation, in order to reduce the travel time. Whenever a client
moves directly to the facility, it moves at unit speed and the
distance traveled is the Manhattan or $L_1$ distance to the
facility. In the case where a client uses the highway, it travels
the $L_1$ distance at unit speed to one endpoint of the
highway, traverses the entire highway with a speed greater than one,
and finally travels the $L_1$ distance from the other endpoint to the facility at unit speed. All clients traverse the
highway at the same speed. Given the set of points representing the clients,
the facility location problem consists in determining at the
same time the facility point and the highway in order to minimize
the \emph{total weighted travel time} from the clients to the facility. The
weighted travel time of a client is its travel time multiplied by
a weight representing the intensity of its demand. This problem was introduced
by Espejo and Rodríguez-Chía~\cite{espejo11}. We refer to~\cite{espejo11} and references
therein to review both the state of the art and
applications of this problem.
Geometric problems related to transportation networks have been recently
considered in computational geometry.
Abellanas {\em et al.} introduced the {\em time metric} model
in~\cite{abellanas03}: given an underlying metric, the user can
travel at speed $v$ when moving along a highway $h$, and at unit
speed elsewhere. The particular case in which the underlying metric
is the $L_1$ metric and all highways are axis-parallel segments of
the same speed is called the {\em city metric}~\cite{aichholzer02}.
The optimal positioning of transportation systems that minimize the maximum travel time
among a set of points has been investigated in detail in recent papers~\cite{ahn07,
cardinal08,aloupis10}. Other more general models are studied in~\cite{korman08}. The
variant introduced by Espejo and
Rodríguez-Chía aims to minimize the sum of the travel times (transportation cost) from
the demand points to the new facility service,
which has to be located simultaneously with a highway. The highway is used by a demand
point whenever it saves time to reach the facility.
The notation used to formulate the problem is as follows. Let $S$ be the set of $n$ client points;
$f$ the service facility point; $h$ the highway; $\ell$ the length of $h$; $t$ and
$t'$ the endpoints of $h$; and
$v\geq 1$ the speed at which the points move along $h$. Let $w_p>0$ be the weight (or
demand) of a client point
$p$. Given a point $u$ of the plane, let $\x{u}$ and $\y{u}$ denote
the $x$- and $y$-coordinates of $u$, respectively. The distance or travel time
(see Figure~\ref{fig:distance}) between a point $p$ and the service
facility $f$ is given by the function
$$d_{t,t'}(p,f):=\min\left\{\|p-f\|_1,\|p-t\|_1+\frac{\ell}{v}+\|t'-f\|_1,
\|p-t'\|_1+\frac{\ell}{v}+\|t-f\|_1\right\}.$$
\begin{figure}
\caption{\small{The distance between a point $p$ and the facility $f$ using the highway.}}
\label{fig:distance}
\end{figure}
Then the problem can be formulated as follows:
\begin{quote}
{\bf The Facility and Highway Location problem (FHL-problem)}: Given a set $S$ of $n$
points, a weight $w_p>0$ associated with each point $p$ of $S$, a fixed highway length
$\ell>0$, and a fixed speed $v\geq1$, locate a point (facility) $f$ and a line segment
(highway) $h$ of length $\ell$ with endpoints $t$ and $t'$ such that the function
$\sum_{p\in S}w_p \cdot d_{t,t'}(p,f)$ is minimized.
\end{quote}
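To make the definitions above concrete, the following short Python sketch (ours, not part of the original formulation; all function names are our own) evaluates the travel time $d_{t,t'}(p,f)$ and the objective of the FHL-problem for a given candidate facility and highway.
\begin{verbatim}
# Sketch (ours): travel time and objective value of the FHL-problem.
def l1(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def travel_time(p, f, t, t2, ell, v):
    # Walk directly, or enter the highway at either endpoint and
    # traverse it completely at speed v.
    return min(l1(p, f),
               l1(p, t)  + ell / v + l1(t2, f),
               l1(p, t2) + ell / v + l1(t,  f))

def total_cost(points, weights, f, t, t2, ell, v):
    return sum(w * travel_time(p, f, t, t2, ell, v)
               for p, w in zip(points, weights))
\end{verbatim}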
Espejo and Rodríguez-Chía~\cite{espejo11} studied the FHL-problem and gave the
following characterization of the solutions. Consider the grid $G$
defined by the set of all axis-parallel lines passing through the
elements of $S$. They stated that there always exists an optimal
highway having one endpoint at a vertex of $G$. Based on this, they
proposed an $O(n^3\log n)$-time algorithm to solve the problem. In this paper we
show that the
characterization given by Espejo and Rodríguez-Chía is not true in general, hence
their algorithm does not always give the optimal solution.
\paragraph{Addendum} An anonymous referee pointed out that the authors of~\cite{espejo11} published a {\em corrigendum} to their paper on the 19th of January 2012, and that our result was therefore not novel. Here we provide a chronology of the events so that the reader can reach his/her own conclusions. The first version of this paper appeared on arXiv on the 5th of April 2011 (and a preliminary version also appeared in the proceedings of the Spanish Meeting on Computational Geometry on the 27th of June 2011). We contacted the authors of~\cite{espejo11} and provided them with a copy of our paper, including the counterexample. Naturally, they were interested in our research and wanted to know where they had made a mistake. On the 29th of October 2011, the authors of~\cite{espejo11} contacted us claiming that they had found the error in their paper. They provided us with a write-up containing a corrected version of their proof, and suggested that we combine our results. Given the difference in notation and the fact that this paper subsumes their result, we declined. From the conversation we can only deduce that the authors of~\cite{espejo11} submitted their corrigendum sometime in early November 2011.
As of now (16th March 2012), our paper is still under review for journal publication, whereas the corrigendum has already appeared in COR. Although we would love it if the submission, correction and publication process took less than three months (as appears to have happened with the corrigendum at the {\em Computers and Operations Research} journal), we understand that this is not possible in high-end journals. Regardless of our personal opinion of the actions of Espejo and Rodríguez-Chía, we believe that the date on which the result was found (and not the date on which it was published in a journal) is the relevant one. Thus, we claim that our paper is the first one to point out the error of~\cite{espejo11}.
As a side note, we observe that the corrigendum of Espejo and Rodríguez-Chía is also wrong, since they claim that our characterization is weaker. They specifically say that ``The description given by [this paper] means an infinite many number of candidates to be one of the endpoints of an optimal segment''. Although Lemma~\ref{lemma:endpoint} does not explicitly say so, the algorithm of Section~\ref{section:algorithm} only considers $O(n^3)$ cases (in particular, a finite number).
\paragraph{Paper Organization} In Section~\ref{section:properties} we first provide a proper
characterization of the solutions.
After that we give a counterexample to Espejo and
Rodríguez-Chía's characterization. We provide a set of
five points, all having weight equal to one, and prove that no
optimal highway has one endpoint in a vertex of $G$. In
Section~\ref{section:algorithm} we present an improved algorithm running in
$O(n^3)$ time that correctly solves the FHL-problem. Finally, in
Section~\ref{section:conclusions}, we state our conclusions and proposal for
further research.
\section{Properties of an optimal solution}\label{section:properties}
A primary observation (also stated in~\cite{espejo11}) is that the service facility can be
located at one of the endpoints of the rapid transit line. From now on,
we assume $f=t'$ throughout the paper. This assumption simplifies the distance from
a point $p\in S$ to the facility to the following expression,
$$d_t(p,f)=\min\left\{\|p-f\|_1,\|p-t\|_1+\frac{\ell}{v}\right\}.$$
Using this observation, the expression of our objective function to minimize is
$\Phi(f,t)=\sum_{p\in S}w_p \cdot d_{t}(p,f)$. We call this value the {\em total
transportation cost} associated with $f$ and $t$ (or simply the {\em cost} of $f$ and
$t$).
We say that a point $p$ uses the highway if $\|p-t\|_1+\frac{\ell}{v}<\|p-f\|_1$, and
that $p$ does not use it
(or goes directly to the facility) otherwise. Given $f$ and $t$, we define the \emph{travel bisector} of $f$ and $t$ (or \emph{bisector} for short) as the set of points $z$ such that $\|z-f\|_1=\|z-t\|_1+\frac{\ell}{v}$,
see Figure~\ref{fig:bisector}. A geometrical description of such a bisector can be found
in~\cite{espejo11}, as the boundary of the so-called {\em captation region}.
\begin{figure}
\caption{\small{The bisector of $f$ and $t$.}}
\label{fig:bisector}
\end{figure}
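In code, deciding whether a client benefits from the highway, or lies on the travel bisector, is a direct comparison of the two travel times; the snippet below is a sketch of ours (reusing the helper \texttt{l1} from the previous sketch and assuming $f=t'$), intended only as an illustration.
\begin{verbatim}
# Sketch (ours): does p use the highway, and is p on the travel bisector?
def uses_highway(p, f, t, ell, v):
    return l1(p, t) + ell / v < l1(p, f)

def on_bisector(p, f, t, ell, v, tol=1e-9):
    return abs(l1(p, f) - (l1(p, t) + ell / v)) <= tol
\end{verbatim}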
\begin{lemma}\label{lemma:endpoint}
There exists an optimal solution to the FHL-problem satisfying one
of the next conditions:
\begin{itemize}
\item[$(a)$] One of the endpoints of the highway is a vertex of $G$.
\item[$(b)$] One endpoint of the highway is on a horizontal line
of $G$, and the other endpoint is on a vertical line of $G$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $f$ and $t$ be the endpoints of an optimal highway $h$ and assume neither of
conditions $(a)$ and $(b)$ is satisfied. Using local perturbation we will transform this
solution into one that satisfies one of these conditions.
Assume neither $f$ nor $t$ is
on any vertical line of $G$.
Let $\delta_1>0$ (resp. $\delta_2>0$) be the smallest value such
that if we translate $h$ with vector $(-\delta_1,0)$ (resp. $(\delta_2,0)$)
then either one endpoint of $h$ touches a vertical line of $G$ or a demand point hits
the bisector of $f$ and $t$.
Given $\varepsilon\in[-\delta_1,\delta_2]$, let $f_{\varepsilon}$,
$t_{\varepsilon}$, and $h_{\varepsilon}$ be $f$, $t$, and $h$ translated with
vector $(\varepsilon,0)$, respectively. It is easy to see that
$|d_{t_{\varepsilon}}(p,f_{\varepsilon})-d_t(p,f)|=|\varepsilon|$ for all points $p$. Given
a real number $x$, let $\sgn(x)$ denote the sign of $x$. We partition $S$ into three sets
$S_1$, $S_2$ and $S_3$ as follows:
\begin{eqnarray*}
S_1 & = & \{p\in S~|~\sgn(d_{t_{\varepsilon}}(p,f_{\varepsilon})-d_t(p,f))=\sgn(\varepsilon),~~\forall\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}\} \\
S_2 & = & \{p\in S~|~\sgn(d_{t_{\varepsilon}}(p,f_{\varepsilon})-d_t(p,f))=-\sgn(\varepsilon),~~\forall\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}\} \\
S_3 & = & \{p\in S~|~\sgn(d_{t_{\varepsilon}}(p,f_{\varepsilon})-d_t(p,f))=-1,~~\forall\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}\}
\end{eqnarray*}
Observe that points of $S_3$ are in the bisector of $f$ and $t$; $S_1$ contains the
demand points that travel rightwards to reach $f$ directly or by using the highway, and $S_2$ contains the points that travel leftwards.
Theoretically, one could consider the case in which a point belongs to set $S_4 = \{p\in S~|~\sgn(d_{t_{\varepsilon}}(p,f_{\varepsilon})-d_t(p,f))=1,~~\forall\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}\}$. Geometrically speaking, the points of this set are those for which, when the highway is translated in either direction, the distance to their entry point of the highway increases. This situation can only happen when the point is aligned with the entry point. That is, a point $p$ belongs to $S_4$ if and only if either $(i)$ $p$ uses the highway to reach the facility and it is vertically aligned with $t$, or $(ii)$ $p$ walks to the facility and it is vertically aligned with $f$. However, by definition of $\delta_1$ and $\delta_2$, no point of $S$ can belong to (or enter) $S_4$ during the whole translation.
By the linearity of the $L_1$ metric, whenever we translate the highway $\varepsilon$ units to
the right (for some arbitrarily small $\varepsilon$, $0<\varepsilon\leq \delta_2$), the highway will be
$\varepsilon$ units closer for points in $S_2\cup S_3$, but $\varepsilon$ units further away for points
of $S_1$. Analogously, the distance to the facility
decreases for points in $S_1 \cup S_3$ and increases for points of $S_2$ when translating
$h$ leftwards. Let $N=\sum_{p\in S_1}w_p-\sum_{p\in S_2}w_p$ and
$k=\sum_{p\in S_3}w_p$. Thus, for any
vector $(\varepsilon,0)$,
$\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}$, the change of the
objective function when we translate the highway with vector $(\varepsilon,0)$ is equal
to the following expression:
\begin{eqnarray*}
\Phi(f_{\varepsilon},t_{\varepsilon})-\Phi(f,t)&=&\sum_{p\in S}w_p\cdot
d_{t_{\varepsilon}}(p,f_{\varepsilon})-\sum_{p\in S}w_p\cdot d_{t}(p,f)\\
&=& \varepsilon\sum_{p\in S_1}w_p-\varepsilon\sum_{p\in S_2}w_p-|\varepsilon|\sum_{p\in S_3}w_p\\
&=&N\varepsilon-k|\varepsilon|
\end{eqnarray*}
Since we initially assumed that the location of $h$ is optimal, we must have $N=k=0$ (otherwise translating $h$ rightwards or leftwards would decrease the objective function). In particular, we can translate $h$ in either direction so that the cost of the objective function is unchanged.
More importantly, observe that the value of $k$ must remain $0$ on the whole translation: if at some point it becomes positive we can find a translation from that point that reduces the cost of the objective function. In particular, the set $S_3$ must remain empty during the whole translation. Any point that changes from set $S_1$ to $S_2$ (or {\em vice versa}) must first enter $S_3$. Since the latter set remains empty during the whole translation, no point can change between sets $S_1,S_2$, or $S_3$ until either $f$ or $t$ is vertically aligned with a point of $S$.
We perform the
same operations on the
$y$ coordinates and obtain that one of the two endpoints is
on a horizontal line of $G$, hence satisfying one of the two conditions
of the Lemma.
\end{proof}
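As a sanity check of the partition used in the proof above, each client can be classified numerically by the sign of the change of its travel time under a small horizontal translation of the highway; the sketch below is ours (the step \texttt{eps} and the function names are illustrative, and \texttt{l1} is the helper defined earlier).
\begin{verbatim}
# Sketch (ours): classify p into S1 / S2 / S3 by perturbing the highway.
def classify(p, f, t, ell, v, eps=1e-6):
    def d(dx):  # travel time after translating both f and t by (dx, 0)
        fd, td = (f[0] + dx, f[1]), (t[0] + dx, t[1])
        return min(l1(p, fd), l1(p, td) + ell / v)
    right, left = d(eps) - d(0.0), d(-eps) - d(0.0)
    if right > 0 and left < 0:
        return "S1"   # travels rightwards
    if right < 0 and left > 0:
        return "S2"   # travels leftwards
    if right < 0 and left < 0:
        return "S3"   # lies on the bisector
    return "S4-like (aligned with an endpoint)"
\end{verbatim}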
When the highway's length is equal to
zero, the FHL-problem becomes the weighted 1-median problem in the $L_1$ metric~\cite{durier1985},
and in this case item (a) of Lemma~\ref{lemma:endpoint} holds.
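For completeness, a weighted $L_1$ 1-median can be computed coordinate-wise as a weighted median; the following sketch (ours) does so, and on the five unweighted points used in the counterexample below (and again in Section~\ref{sec_exper}) it returns the facility $(12,5)$ with total cost $49$, matching the $\ell=0$ entry of Table~\ref{tab_examples}.
\begin{verbatim}
# Sketch (ours): weighted 1-median in the L1 metric (coordinate-wise).
def weighted_median(values, weights):
    half, acc = sum(weights) / 2.0, 0.0
    for val, w in sorted(zip(values, weights)):
        acc += w
        if acc >= half:
            return val

def l1_median(points, weights):
    return (weighted_median([p[0] for p in points], weights),
            weighted_median([p[1] for p in points], weights))

pts = [(-4, 0), (-3, -1), (12, 8), (13, 5), (13, 7)]
f = l1_median(pts, [1] * 5)                                    # (12, 5)
cost = sum(abs(px - f[0]) + abs(py - f[1]) for px, py in pts)  # 49
\end{verbatim}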
Espejo and Rodríguez-Chía~\cite{espejo11} claimed that there always
exists an optimal solution of the FHL-problem that satisfies
Lemma~\ref{lemma:endpoint}~(a). Unfortunately, this claim is not true in general
and their algorithm may miss some highway locations; indeed, it may miss the optimal
location and thus fail. We provide here one counterexample and the
following result.
\begin{lemma}\label{lemma:counter-example}
There exists a set of unweighted points in which no optimal solution to the
FHL-problem satisfies Lemma~\ref{lemma:endpoint}~(a).
\end{lemma}
\begin{proof}
Consider the problem instance with five points whose coordinates are $(-4,0)$, $(-3,-1)$, $(12,8)$,
$(13,5)$, and $(13,7)$, respectively (see Figure~\ref{fig:counter-example}). In this instance we give unit weight to all points, and set the length of the highway $h$ to $\ell=\sqrt{180}\approx 13.42$. For simplicity in the calculations, we also set $v=\ell$, but any other large value works as well. The cost associated with the highway of endpoints $f=(12,6)$ and $t=(0,0)$ is $10+2\ell/v=12$. We claim that this location is better than any other solution with an endpoint at a vertex of $G$.
\begin{figure}
\caption{\small{A counterexample to the algorithm of Espejo and Rodríguez-Chía.}}
\label{fig:counter-example}
\end{figure}
If one endpoint of $h$ is a vertex of
$G$ in the line $x=-3$, then the other endpoint is located to the
left of the line $x=11$ because $-3+\ell<11$. In
that case we can translate $h$ rightwards with vector
$(\frac{1}{2},0)$ improving the objective function. The same holds
if one endpoint of $h$ is a vertex in the line $x=-4$. Similarly, if
one endpoint is a vertex in the line $x=13$, then we can translate
$h$ leftwards with vector $(-\frac{1}{2},0)$ and the objective
function decreases.
Consider now locating one of the highway endpoints at coordinates $(12,0)$ or $(12,-1)$. Observe that the walking time (i.e., the traveling time when the highway is not used) from either of the points $(-4,0)$ and $(-3,-1)$ is at least $15$ units of time, which is more than the cost associated with our solution. The same happens to the sum of the traveling times of the three other points. Hence, if $f$ is located at one of the two vertices, the five points must use the highway (otherwise the travel time is higher than in our solution). Analogously, if $t$ is located at the grid points $(12,0)$ or $(12,-1)$, no point of $S$ will use the highway. In either case, the cost of the corresponding solution is at least as high as the sum of distances from all points of $S$ to the geometric median, which is higher than the cost associated with our solution.
Consider now the cases in which one of the endpoints has
coordinates $(12,y_0)$ for some $y_0\in\{5,7,8\}$. We start by showing
that, in any of the three cases, the optimal position of the other
endpoint of the highway (denoted by $e$) must lie on the line $y=0$.
Since the highway's length is equal to $\ell$, the possible positions of $e$ lie
both on the circle $\sigma$ of radius $\ell$ centered at $(12,y_0)$ and to the left of the line
$x=12$.
Observe that the clients that
walk to $e$ are points $a=(-4,0)$ and $b=(-3,-1)$, located always
to the left of $e$.
Hence, we are interested in minimizing the
expression $\|a-e\|_1+\|b-e\|_1$.
Let $a',b'\in\sigma$ denote respectively the closest points to $a$ and $b$
with the $L_1$ metric, which verify $y(a')=0$ and $y(b')=-1$.
Observe that if $y(e)>0$ then $\|a-a'\|_1<\|a-e\|_1$ and $\|b-a'\|_1<\|b-e\|_1$ implying
$$\|a-a'\|_1+\|b-a'\|_1<\|a-e\|_1+\|b-e\|_1$$
(see Figure~\ref{fig:counter-example2} a)). Similarly, if $y(e)<-1$, then
$$\|a-b'\|_1+\|b-b'\|_1<\|a-e\|_1+\|b-e\|_1.$$
Therefore, $e$ must satisfy $-1\leq y(e)\leq 0$ (see Figure~\ref{fig:counter-example2}
b)). In this case we have
\begin{eqnarray*}
\|a-e\|_1+\|b-e\|_1&=&x(e)-x(a)+y(a)-y(e)+x(e)-x(b)+y(e)-y(b)\\
&=& 2x(e) + 8
\end{eqnarray*}
Then $\|a-e\|_1+\|b-e\|_1$ is minimized when $x(e)$ is minimum, and it
happens when $y(e)=0$.
\begin{figure}
\caption{\small{$a=(-4,0)$ and $b=(-3,-1)$.
When one endpoint of the highway has coordinates $(12,8)$, $(12,7)$, or $(12,5)$, the
optimal position of the other endpoint $e$ is on the line $y=0$.}}
\label{fig:counter-example2}
\end{figure}
If $y_0=8$, then $h$ can be translated downwards with vector
$(0,-\frac{1}{2})$ and the value of the objective function decreases.
Thus point $(12,8)$ is discarded. It remains
to show that there is a solution better than the one having an
endpoint at either $(12,7)$ or $(12,5)$, and the other endpoint on
the line $y=0$. Observe that if $f$ and $t$ belong to the lines $y=0$
and $x=12$, respectively, then by exchanging $f$ and $t$ the value
of the objective function decreases by $\ell/v$. Then consider the case
where $y(t)=0$ and $x(f)=12$.
Let $t=(0,0)$ and $f=(12,6)$.
Given a value $\varepsilon$, let $t_{\varepsilon}$ be the point with coordinates
$(\varepsilon,0)$ and $f_{\varepsilon}$ be the point in the line $x=12$ such that
$\y{f_{\varepsilon}}>0$ and the Euclidean distance between $f_{\varepsilon}$ and
$t_{\varepsilon}$ is equal to $\ell$ (see
Figure~\ref{fig:counter-example3}). Let $[-\delta_1,\delta_2]$,
$\delta_1,\delta_2>0$, be the maximal-length interval such that $5
\leq \y{f_{\varepsilon}}\leq 7$ for all $\varepsilon\in[-\delta_1,\delta_2]$.
Note $\delta_1=\sqrt{155}-12<1$ and $\delta_2=12-\sqrt{131}<1$. Then $|\varepsilon|<1$.
\begin{figure}
\caption{\small{Definitions of $f_{\varepsilon}$ and $t_{\varepsilon}$.}}
\label{fig:counter-example3}
\end{figure}
The variation of the objective function's value when $f$ and $t$ are moved
to $f_{\varepsilon}$ and $t_{\varepsilon}$, respectively, is equal to
\begin{eqnarray*}
g(\varepsilon)&:=&\Phi(f_{\varepsilon},t_{\varepsilon})-\Phi(f,t)\\
&=&2\left(\x{t_{\varepsilon}}-\x{t}\right)-\left(\y{f_{\varepsilon}}-\y{f}\right)\\
&=&2\varepsilon-\left(\sqrt{36+24\varepsilon-\varepsilon^2}-6\right).
\end{eqnarray*}
In the following we will show that $\sqrt{36+24\varepsilon-\varepsilon^2}<6+2\varepsilon$, for all $\varepsilon\in[-\delta_1,\delta_2]\setminus\{0\}$. In particular, we will have $g(\varepsilon)>0$ (except when $\varepsilon=0$), implying that our highway location is better than any solution having an endpoint at $(12,7)$ or $(12,5)$. First observe that
$4\varepsilon^2+24\varepsilon+36=(2\varepsilon+6)^2>36+24\varepsilon-\varepsilon^2$.
Since $|\varepsilon|<1$ then $2\varepsilon+6>0$ and $36+24\varepsilon-\varepsilon^2>0$, which implies
$2\varepsilon+6>\sqrt{36+24\varepsilon-\varepsilon^2}$. Thus $g(\varepsilon)>0$ and
the highway with endpoints $f$ and $t$ gives a better solution than that
having an endpoint at $(12,7)$ or $(12,5)$. This completes the proof.
\end{proof}
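The inequality used at the end of the proof is also easy to confirm numerically; the following sketch (ours) samples $g(\varepsilon)$ over the admissible interval and checks positivity away from $\varepsilon=0$.
\begin{verbatim}
# Sketch (ours): numerical check that g(eps) > 0 on [-delta1, delta2] \ {0}.
from math import sqrt

delta1 = sqrt(155) - 12      # about 0.45
delta2 = 12 - sqrt(131)      # about 0.55

def g(eps):
    return 2 * eps - (sqrt(36 + 24 * eps - eps * eps) - 6)

step = (delta1 + delta2) / 1000.0
samples = [-delta1 + k * step for k in range(1001)]
assert all(g(e) > 0 for e in samples if abs(e) > 1e-3)
\end{verbatim}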
In the next section we provide a correct algorithm
that solves the problem in $O(n^3)$ time.
We assume general position, that is, no two points of $S$ lie on a
common line whose slope belongs to the set $\{-1,0,1,\infty\}$.
\section{The algorithm}\label{section:algorithm}
Lemma~\ref{lemma:endpoint} can be used to find an optimal solution
to the FHL-problem. Although the method is quite similar for both cases in
Lemma~\ref{lemma:endpoint}, we address the two cases independently for the sake of
clarity.
By Vertex-FHL-problem we will denote the FHL-problem for the cases in which
Lemma~\ref{lemma:endpoint} a) holds, and by
Edge-FHL-problem the FHL-problem for the cases in which
Lemma~\ref{lemma:endpoint} b) holds. In the next subsections we give an
$O(n^3)$-time algorithm for each variant of the problem. In both of
them we assume w.l.o.g. that the highway's length $\ell$ is equal to
one.
In the following $\theta$ will denote the positive angle of the highway with
respect to the positive direction of the $x$-axis. For the sake of clarity, we will assume
that $\theta\in[0,\frac{\pi}{4}]$. When $\theta$ belongs to
the interval $[k\frac{\pi}{4},(k+1)\frac{\pi}{4}]$, $k=1,\dots,7$, both the
Vertex- and Edge-FHL-problem
can be solved in a similar way.
Given a point $u$ and an angle $\theta$, let $u(\theta)$ be the
point with coordinates $(\x{u} + \cos\theta, \y{u} + \sin\theta)$.
There exists an angle
$\phi\in[0,\frac{\pi}{4}]$ such that the bisector of the endpoints $f$ and
$t=f(\theta)$ has the shape in Figure~\ref{fig:bisector} a) for all
$\theta\in[0,\phi)$, and has the shape in Figure~\ref{fig:bisector}
b) for all $\theta\in(\phi,\frac{\pi}{4}]$. Such an angle $\phi$ verifies
$\cos(\phi)-\sin(\phi)=\frac{1}{v}$. Furthermore,
$\phi=\frac{1}{2}\arcsin(1-\frac{1}{v^2})$ and
$\phi\neq\frac{\pi}{4}$ unless $v$ is infinite. Refer to~\cite{espejo11}
for a detailed description of this situation.
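The closed form for $\phi$ can be checked directly; the following small sketch (ours) verifies the identity numerically for a few speeds.
\begin{verbatim}
# Sketch (ours): check cos(phi) - sin(phi) = 1/v for
# phi = (1/2) * arcsin(1 - 1/v^2).
from math import asin, cos, sin

for v in (1.5, 2.0, 4.0, 100.0):
    phi = 0.5 * asin(1.0 - 1.0 / (v * v))
    assert abs(cos(phi) - sin(phi) - 1.0 / v) < 1e-12
\end{verbatim}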
Let $\Pi_x$, $\Pi_y$, and $\Pi_{x+y}$ denote the point set $S$ sorted according to the $x$-, $y$-,
and $(x+y)$-order, respectively.
\subsection{Solving the Vertex-FHL-problem}
For each vertex $u$ of $G$ we can solve the problem subject to $f=u$ or $t =u$. We show
how to obtain a solution if $f=u$. The case where $t =u$ can be solved analogously.
Suppose w.l.o.g. that the vertex $f=u$ is the origin of the coordinate system and the
highway angle
is $\theta$, for $\theta\in[0,\frac{\pi}{4}]$.
Then $t=u(\theta)=(\cos\theta,\sin\theta)$,
and the distance $d_t(p,f)$ between a point $p\in S$ and the facility $u$ has the
expression $c_1+c_2\cos\theta+c_3\sin\theta$, where $c_1,c_2,c_3$
are constants satisfying
$c_2,c_3\in\{-1,0,1\}$.
When $\theta$ goes from $0$ to
$\frac{\pi}{4}$, this expression changes at the values of $\theta$
such that:
\begin{itemize}
\item The point $p$ switches from using the highway to going directly to
the facility (or vice versa). We call these changes \emph{bisector
events}. A bisector event occurs when the bisector between the highway's endpoints
$u$ and $u(\theta)$ contains $p$. At most two bisector events are obtained for each point $p$.
\item The highway endpoint $u(\theta)$ crosses the vertical or horizontal
line passing through $p$. We call this event \emph{grid event}. Again, each
point of $S$ generates at most two grid events.
\item $\theta=\phi$. We call it the $\phi$\emph{-event}.
\end{itemize}
We refer the interested reader to~\cite{espejo11} for a detailed description of the above events\footnote{Although their events are very similar to the ones we described, the authors of~\cite{espejo11} refer to them as {\em projection} and {\em limit points}. We prefer to use the term ``event'', since ``point'' is reserved for the elements of $S$.}. The cost of their algorithm is dominated by the time spent sorting the order in which the events take place. In order to avoid this sorting step, we use the following result:
\begin{lemma}\label{lemma:events}
After an $O(n\log n)$-time preprocessing, the angular order of all
the events associated with a given vertex of $G$ can be obtained in
linear time.
\end{lemma}
\begin{proof}
The preprocessing consists in computing $\Pi_x$, $\Pi_y$, and $\Pi_{x+y}$,
which can be done in $O(n\log n)$ time. Now, let $u$ be a vertex of
$G$. It is straightforward to see that there are $O(n)$ grid events
and that we can obtain their angular order in linear time by using
both $\Pi_x$ and $\Pi_y$. Let us show how to obtain the
bisector events in $O(n)$ time.
The bisector of $u$ and $u(\theta)$ consists of two axis-aligned
half-lines and a line segment with slope -1 connecting their endpoints
(see Figure~\ref{fig:bisector} and~\cite{espejo11} for further details). Given a point
$p$, when $\theta$ goes from $0$ to $\pi/4$ the bisector between $u$
and $u(\theta)$ passes through $p$ at most twice, that is, when $p$
belongs to one of the half-lines of the bisector and when $p$
belongs to the line segment. If $p$ belongs to the line segment of the
bisector then the event is denoted by $\alpha_p$ (see
Figure~\ref{fig:bisector-events} b)). If $p$ belongs to the leftmost
half-line of the bisector, which is always vertical, we denote that
event by $\beta_p$ (see Figure~\ref{fig:bisector-events} a)).
Otherwise, if $p$ belongs to the rightmost half-line which can be
either vertical or horizontal we denote that event by $\gamma_p$
(see Figure~\ref{fig:bisector-events} c) and d)). Observe that if
the rightmost half-line is vertical then $\gamma_p<\phi$, otherwise
$\gamma_p>\phi$. Refer to~\cite{espejo11} for a characterization to
identify whether a point $p\in S$ generates a bisector event for some angle $\theta$.
\begin{figure}
\caption{\small{The bisector events of $p$ when $\theta\in[0,\frac{\pi}{4}]$.}}
\label{fig:bisector-events}
\end{figure}
Let $\Pi_1$ be the subsequence of $\Pi_{x+y}$ containing all elements $p$
such that $\alpha_p \in [0,\frac{\pi}{4}]$, $\Pi_2$ be the
subsequence of $\Pi_x$ containing all elements $p$ such that $\beta_p \in
[0,\frac{\pi}{4}]$, and $\Pi_3$ be the subsequence of
$\Pi_x$ that contains all elements $p$ such that $\y{p}<\y{u}$ and
$\gamma_p \in [0,\phi]$, concatenated with the
subsequence of $\Pi_y$ that contains all elements $p$ such that
$\x{p}>\x{u}$ and $\gamma_p \in [\phi,\frac{\pi}{4}]$. Given a
point $p\in S$, the corresponding events of $p$ in $[0,\frac{\pi}{4}]$
can be found in constant time, thus $\Pi_1$,
$\Pi_2$, and $\Pi_3$ can be built in linear time.
The following statements are true for any point $p\in S$:
\begin{itemize}
\item[$(a)$] $\x{p}+\y{p}=\frac{1}{2}(\cos\alpha_p+\sin\alpha_p+\frac{1}{v})$ for all points $p$ in $\Pi_1$.
\item[$(b)$] $\x{p}=\frac{1}{2}(\cos\beta_p-\sin\beta_p+\frac{1}{v})$ for all points $p$ in $\Pi_2$.
\item[$(c)$] $\x{p}=\frac{1}{2}(\cos\gamma_p+\sin\gamma_p+\frac{1}{v})$ for all points $p$ in $\Pi_3$ such that $\gamma_p<\phi$.
\item[$(d)$] $\y{p}=\frac{1}{2}(-\cos\gamma_p+\sin\gamma_p+\frac{1}{v})$ for all points $p$ in $\Pi_3$ such that $\gamma_p>\phi$.
\end{itemize}
Let $\Gamma_1$ (resp. $\Gamma_2$, $\Gamma_3$) be the sequence
obtained by replacing each element $p$ in $\Pi_1$ (resp. $\Pi_2$,
$\Pi_3$) by $\alpha_p$ (resp. $\beta_p$, $\gamma_p$). Therefore,
from statements $(a)-(d)$ and the monotonicity of the functions
$\cos\theta+\sin\theta$, $\cos\theta-\sin\theta$, and
$-\cos\theta+\sin\theta$ in the interval $[0,\frac{\pi}{4}]$, we
obtain that $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are sorted
sequences. Using a standard method for merging sorted lists, we can
merge in linear time $\Gamma_1$, $\Gamma_2$, $\Gamma_3$, the grid
events, and the $\phi$-event. Therefore, the angular order of all
events associated with a vertex $u$ can be obtained in $O(n)$ time and the result follows.
\end{proof}
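The only computational ingredient of the lemma is merging a constant number of already-sorted event lists, which takes linear time; in Python this is, for instance, what \texttt{heapq.merge} provides. The snippet below is a sketch of ours, with hypothetical list names.
\begin{verbatim}
# Sketch (ours): merge a constant number of sorted event lists
# (Gamma_1, Gamma_2, Gamma_3, the grid events and the phi-event).
import heapq

def merge_events(*sorted_lists):
    return list(heapq.merge(*sorted_lists))

# e.g. all_events = merge_events(gamma1, gamma2, gamma3, grid_events, [phi])
\end{verbatim}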
\begin{theorem}\label{theorem:vertex}
The Vertex-FHL-problem can be solved in $O(n^3)$ time.
\end{theorem}
\begin{proof}
Let $u$ be a vertex of $G$. Using Lemma~\ref{lemma:events}, we
obtain in linear time the angular order of the $O(n)$ events
associated with $u$. The events induce a partition of
$[0,\frac{\pi}{4}]$ into maximal intervals. For each of those
intervals, the objective function takes the form $g(\theta):=
\Phi(f,t)=\Phi(u,u(\theta))=b_1+b_2\cos\theta+b_3\sin\theta$,
where $b_1,b_2,b_3$ are constants.
This problem is of constant size in each subinterval and the minimum of
$g(\theta)$ can be found in $O(1)$ time. Furthermore, the expression of
$g(\theta)$ can be updated in constant time when $\theta$ crosses an
event point different from $\phi$ when going from $0$ to
$\frac{\pi}{4}$. In the case where $\theta$ crosses $\phi$,
$g(\theta)$ can be updated in at most $O(n)$ time. Then the problem
subject to $f=u$ can be solved in linear time. The case in which $t=u$ can be addressed in a similar way.
It gives an overall
$O(n^3)$ time complexity because $G$ has $O(n^2)$ vertices. \end{proof}
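As a reference against which such an implementation can be tested, the brute-force sketch below (ours, and not the $O(n^3)$ sweep of the theorem) evaluates the case $f=u$ of the Vertex-FHL-problem by sampling the angle $\theta\in[0,\pi/4]$ on a fine grid; all names are our own and the unit highway length of this section is assumed.
\begin{verbatim}
# Sketch (ours): brute-force reference for the case f = u, theta in [0, pi/4],
# with unit highway length; intended for testing only, not the O(n^3) sweep.
from math import pi, cos, sin

l1 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

def phi_cost(points, weights, f, t, ell, v):
    return sum(w * min(l1(p, f), l1(p, t) + ell / v)
               for p, w in zip(points, weights))

def vertex_case_bruteforce(points, weights, v, steps=2000):
    ell, best = 1.0, float('inf')
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    for gx in xs:                       # O(n^2) grid vertices
        for gy in ys:
            f = (gx, gy)
            for k in range(steps + 1):  # sampled angles, not the exact events
                th = (pi / 4) * k / steps
                t = (gx + cos(th), gy + sin(th))
                best = min(best, phi_cost(points, weights, f, t, ell, v))
    return best
\end{verbatim}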
\subsection{Solving the Edge-FHL-problem}
We now consider the case in which the optimal solution satisfies condition b) of Lemma~\ref{lemma:endpoint}.
Namely, we consider a horizontal line $e_h$ of $G$ and each vertical line
$e_v$ of $G$. For every pair of such lines, we consider eight different sub-cases, depending on whether $h$ is located above/below $e_h$, rightwards/leftwards of $e_v$, and $f\in e_h$ and $t\in e_v$ (or {\em vice versa}). For a fixed sub-case, we parametrize the location of the highway by the angle $\theta$ that the highway forms with $e_h$. As in the Vertex-FHL case, we assume that $f\in e_h$, $t\in e_v$, and $\theta\in[0,\frac{\pi}{4}]$.
We implicitly redefine the coordinate system so that $e_h$ and $e_v$ intersect at the origin $o$. Let $\theta\in[0,\frac{\pi}{4}]$ be the positive angle of the highway with respect to
the positive direction of the $x$-axis and $f=x_{\theta}$, $t=y_{\theta}$ be the highway endpoints, see Figure~\ref{fig:edge-FHL}.
First notice that, since we are again doing a continuous translation of $h$, the events that affect the value of the objective function are exactly the same as those that happen in the Vertex-FHL-problem: bisector-, grid- and $\phi$- events. We start by showing that the equivalent of Lemma \ref{lemma:events} also holds:
\begin{figure}
\caption{\small{Solving the Edge-FHL-problem.}}
\label{fig:edge-FHL}
\end{figure}
\begin{lemma}\label{lemma:events2}
After an $O(n\log n)$-time preprocessing, the angular order of all
the events associated with a pair of perpendicular lines of $G$
can be obtained in linear time.
\end{lemma}
\begin{proof}
We can follow the arguments of Lemma~\ref{lemma:events}.
Firstly, we note that there are $O(n)$ grid
events and their angular order can be obtained in linear time by
using both $\Pi_x$ and $\Pi_y$.
Given a point $p\in S$, let the events $\alpha_p$, $\beta_p$, and
$\gamma_p$ be defined as in the Vertex-FHL case.
Refer to Figure~\ref{fig:bisector-events}. Let $\Pi_1$ be the
subsequence of $\Pi_{x+y}$ containing all elements $p$ such that
$\alpha_p \in [0,\frac{\pi}{4}]$, $\Pi_2$ be the subsequence
of $\Pi_x$ containing all elements $p$ such that $\beta_p \in
[0,\frac{\pi}{4}]$, and $\Pi_3$ be the subsequence of $\Pi_x$ that
contains all elements $p$ such that $\y{p}<\y{o}$ and $\gamma_p \in
[0,\phi]$, concatenated with the subsequence of
$\Pi_y$ that contains all elements $p$ such that $\x{p}>\x{o}$ and
$\gamma_p \in [\phi,\frac{\pi}{4}]$. Note that
$\Pi_1$, $\Pi_2$, and $\Pi_3$ can be built in linear time.
Given a point $p\in S$, the following statements are true:
\begin{itemize}
\item[$(a)$] $\x{p}+\y{p}=\frac{1}{2}(-\cos\alpha_p+\sin\alpha_p+\frac{1}{v})$ for all points $p$ in $\Pi_1$.
\item[$(b)$] $\x{p}=\frac{1}{2}(-\cos\beta_p-\sin\beta_p+\frac{1}{v})$ for all points $p$ in $\Pi_2$.
\item[$(c)$] $\x{p}=\frac{1}{2}(-\cos\gamma_p+\sin\gamma_p+\frac{1}{v})$ for all points $p$ in $\Pi_3$ such that $\gamma_p<\phi$.
\item[$(d)$] $\y{p}=\frac{1}{2}(-\cos\gamma_p+\sin\gamma_p+\frac{1}{v})$ for all points $p$ in $\Pi_3$ such that $\gamma_p>\phi$.
\end{itemize}
Let $\Gamma_1$ (resp. $\Gamma_2$, $\Gamma_3$) be the sequence
obtained by replacing each element $p$ in $\Pi_1$ (resp. $\Pi_2$,
$\Pi_3$) by $\alpha_p$ (resp. $\beta_p$, $\gamma_p$). Therefore,
by using similar arguments to those used in Lemma~\ref{lemma:events} the angular order of
all events can be obtained in $O(n)$ time,
once the lists $\Pi_x$, $\Pi_y$
and $\Pi_{x+y}$ have been precomputed.
\end{proof}
Consider now a small interval $[\theta_1,\theta_2]$ in which no event occurs. Observe that, after the coordinate system redefinition, we have $f=x_{\theta}=(-\cos\theta,0)$ and $t=y_{\theta}=(0,\sin\theta)$ (recall that we assume $\ell=1$). Let $p\in S$ be a point that uses the highway to reach the facility; since only the $y$-coordinate of $t$ changes, its distance to $f$ can be expressed as $c_1\pm\sin\theta$ for some $c_1>0$. Analogously, if $p$ walks to $f$, its distance is of the form $c_1\pm\cos\theta$ for some $c_1>0$. That is, the distance between a point of $S$ and $f$ in any interval is of the form $c_1+c_2\sin\theta+c_3\cos\theta$ for some constants $c_1>0$ and $c_2,c_3\in\{-1,0,1\}$.
\begin{theorem}
The Edge-FHL-problem can be solved in $O(n^3)$ time.
\end{theorem}
\begin{proof}
We can use a method similar to the one used in the Vertex-FHL-problem.
Let $e_h$ be a horizontal line of $G$ and $e_v$ be a vertical line
of $G$.
Using Lemma~\ref{lemma:events2}, we obtain in linear time
the angular order of the $O(n)$ events associated with $e_h$ and
$e_v$. The events induce a partition of $[0,\frac{\pi}{4}]$ into
maximal intervals. For each of those intervals the objective
function has the form
$g(\theta):=\Phi(f,t)=\Phi(x_{\theta},y_{\theta}
)=b_1+b_2\cos\theta+b_3\sin\theta$, where $b_1>0$, and $b_2,b_3\in\mathbb{Z}$ are constants.
This problem has constant
size, hence the minimum of $g(\theta)$ can be found in $O(1)$ time.
Furthermore, the expression of $g(\theta)$ can be updated in
constant time when $\theta$ crosses an event point different from
$\phi$ when it goes from $0$ to $\frac{\pi}{4}$. In the case where
$\theta$ crosses $\phi$, $g(\theta)$ can be updated in at most
$O(n)$ time. Then the problem subject to $f\in e_h$ and $t\in e_v$
can be solved in linear time. It gives an overall $O(n^3)$ time
complexity because $G$ has $O(n^2)$ pairs consisting of a horizontal and a
vertical line.
\end{proof}
\section{Experimental results}\label{sec_exper}
Similarly to~\cite{espejo11}, we explore examples of solutions to the FHL-problem for different values
of the length of the line segment. The problem instance is given by the unweighted points
with coordinates $(-4,0)$, $(-3,-1)$, $(12,8)$,
$(13,5)$, and $(13,7)$ as in Lemma~\ref{lemma:counter-example} and we consider locating a
highway for different values of length and speed. Given a fixed value of speed, say $v=2$,
Figure~\ref{fig_examples} shows the location of the optimal highways for some values of
$\ell$. Note that the case $\ell=0$ is the Fermat-Weber problem for the $L_1$-metric. The
highway's length and the associated total transportation cost for each of these solutions
can be seen in Table~\ref{tab_examples}. The optimal solution for each of the cases (and
its associated cost) has been obtained with the help of a computer.
Observe that, for some values of $\ell$, the optimal solution satisfies condition $(a)$
of Lemma~\ref{lemma:endpoint}, but in other situations condition $(b)$ is satisfied
instead (see Figure~\ref{fig_examples} d), where the highway's length has been set to
$13.41$). Experimentally we observed that increasing the highway's length decreases the
total transportation cost until $\ell=\sqrt{305}$, in which a total cost of $5+2\ell/v$ is
obtained (see Figure~\ref{fig_examples} e)). Afterwards the cost gradually increases until
we locate a highway so long that no point of $S$ uses it to reach $f$. We also note that
for this demand point set the highway's speed has a small impact on the optimal solution.
Indeed, increasing the highway's speed changes the total cost but the location of the
highway in the above instance is unaffected by the highway's speed (provided that $v>1$).
The fourth column in Table~\ref{tab_examples} gives the small variation of the total cost
with respect to the speed. This suggests the following open problem: given an
instance of the FHL-problem, can we efficiently compute the highway's
length that minimizes the total transportation cost?
\begin{figure}
\caption{\small{Solution of the same instance of the FHL-problem for different values
of $\ell$. The optimal highway is depicted in red (and the endpoint containing $f$ as a
cross). The exact highway's length and the associated total transportation cost can be
seen in Table~\ref{tab_examples}.}}
\label{fig_examples}
\end{figure}
\begin{table}
\small
\label{tab_examples}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\rowcolor[gray]{0.95}\hspace*{0.1cm} Figure~\ref{fig_examples} & $\ell$ & $v$ & Cost & Ratio\\ \hline \hline
a) & 0 & - & 49 & 1 \\ \hline
b) & 1 & 2 & 46 &0.93 \\
& & 4 & 45.5 & 0.92 \\
& & $10^6$ & 45 & 0.91 \\ \hline
c) & 7.07 & 2 & 34.07 & 0.7 \\
& & 4 & 30.54 & 0.62 \\
& & $10^6$ & 27 & 0.55 \\ \hline
d) & 13.41 & 2 & 27.41 &0.56 \\
& & 4 & 20.71 & 0.42 \\
& & $10^6$ & 14 & 0.29 \\ \hline
e) & 16.55 & 2 & 9 & 0.18 \\
& & 4 & 8.5 & 0.17 \\
& & $10^6$ & 8 & 0.16 \\ \hline
\end{tabular}
\caption{\small{Total transportation cost as a function of the highway's length and speed.
The last column shows how much the highway improves the total transportation cost
(compared to the case in which only a facility is located).}}
\end{table}
\section{Concluding remarks}\label{section:conclusions}
As further research, it would be worth studying the same problem in other
metrics or using different optimization criteria. Another interesting variant would be to
consider the problem when the length of the highway is not given in advance and it is a
variable in the problem. Additionally, we could consider a similar distance model in
which the clients can enter and exit the highway at any point (called \emph{freeway}
in~\cite{bae09}).
Motivated from the experimental results of Section~\ref{sec_exper}, we can deduce that
the highway's length has a strong impact on the optimal solution. As one would expect,
when the highway's length is small, the total cost barely changes. We obtain a similar
effect when the highway to locate is very long, since traveling to the opposite endpoint
takes more time than walking directly to the facility. Hence, it would be interesting to
consider a variation of the problem in which we can also adjust the highway's length.
Specially, one would like to find a balance between the cost of constructing a longer
highway and the improvement in the total transportation cost.
\small
{}
\end{document}
\begin{document}
\title{An algorithm for the word problem in braid groups}
\author{Bert Wiest}
\address{UFR de Math\'ematiques, Universit\'e de Rennes 1, Campus de Beaulieu,
\\ 35042 Rennes cedex, France; {\tt [email protected]}}
\begin{abstract}
We suggest a new algorithm for finding a canonical representative of a given
braid, and also for the harder problem of finding a $\sigma_1$-consistent
representative. We conjecture that the algorithm is quadratic-time.
We present numerical evidence for this conjecture, and prove two results:
(1) The algorithm terminates in finite time. (2) The conjecture holds in
the special case of 3-string braids -- in fact, we prove that the algorithm
finds a minimal-length representative for any 3-string braid.
\end{abstract}
\keywords{braid, word problem, quasigeodesic}
\primaryclass{20F36}\secondaryclass{20F60, 20F65}
\makeshorttitle
\section{Introduction}
In this paper we propose an algorithm for finding a unique
short representative for any given element of the Artin braid group
$$
B_n \cong \langle \sigma_1,\ldots,\sigma_{n-1} {\ |\ } [\sigma_i,\sigma_j]=0
\hbox{ if } |i-j|\geqslant 2, \ \sigma_i \sigma_{i+1}\sigma_i =
\sigma_{i+1}\sigma_i \sigma_{i+1} \rangle.
$$
Several algorithms with the same aim are already
known, for instance Artin's combing of pure braids \cite{Artin}, as
well as Garside's
\cite{Garside,Epstein,FGM} and Dehornoy's \cite{Dehandle} algorithms.
Still, we believe that the new algorithm, which we call the
\emph{relaxation algorithm,} is of theoretical interest:
if our conjecture that the algorithm is efficient -- a conjecture which
is supported by strong numerical evidence -- is correct, then we would
be dealing with a new type of convexity property of mapping class groups.
A particularly surprising aspect is that the idea of the algorithm
appears to have been well-known to the experts for decades
(we just have to be specific about some details), only its efficiency
has apparently been overlooked.
The algorithm is very geometric in nature, exploiting the natural
identification of $B_n$ with $\mathcal{MCG}(D_n)$, the mapping class group of the
$n$ times punctured disk. The idea is very naive, and not new --
cf.~\cite{Larue,FGRRW}. One thinks of the disk $D_n$ as being made of an
elastic material; if a given braid is represented by a homeomorphism
$\varphi\co D_n \to D_n$, then one obtains a representative of the inverse
of the braid by allowing the puncture points to move, and letting the
map $\varphi$ relax into the identity map. However, this relaxation process
is decomposed into a sequence of applications of generators of the
braid group, where in each step one chooses the generator which
reduces the tension of the elastic $D_n$ by as much as possible. Of course,
both the notion of ``tension/relaxation'' and the choice of
generating system of the braid group must be specified carefully.
Here are, in more detail, the properties of the algorithm.
(a) The algorithm appears to be quadratic-time in the length of the
input braid, and the length of the output braid appears to be bounded
linearly by the length of the input braid. Unfortunately, we are not
currently able to prove this. However, we shall prove some partial
results, and report on some strong numerical evidence supporting the
conjecture that the algorithm is efficient in the above sense.
(b) The algorithm can be finetuned to output only $\sigma_1$-consistent
braids (in the sense of Dehornoy \cite{Dehbook}: the output braid words
may contain the letter $\sigma_1$ but not $\sigma_1^{-1}$, or vice versa).
The bounds from (a) on running time and length of the output word still
seem to hold for this modified algorithm. This is remarkable,
because it is not currently known whether every braid of length $l$ admits
a $\sigma_1$-consistent representative of length $c(n)\cdot l$, where
$c(n)>1$ is a constant depending only on the number of strings $n$.
Characteristics (a) and (b) are almost shared by Dehornoy's handle reduction
algorithm \cite{Dehandle}: it is believed to be of cubic complexity
and to yield $\sigma_1$-consistent braids of linearly bounded length, but
this is currently only a conjecture.
(c) The algorithm generalizes to other surface braid
groups. We conjecture that in this setting as well it is efficient in
the sense of (a) above (note that we have no reasonable notion of
$\sigma_1$-consistency here).
(d) The main interest of the algorithm, however, is theoretical: any
step towards finding a polynomial bound on its algorithmic complexity
would likely give new insights into the interactions between
the geometry of the Cayley graph of $B_n$ and the dynamical properties
of braids. Indeed, we conjecture that the output braid words are
quasi-geodesics in the Cayley graph.
Moreover, the problems raised here may be linked to
the question whether train track splitting sequences are quasi-geodesics,
as well as to the efficiency of the Dehornoy algorithm \cite{Dehandle}.
The plan of this paper is as follows: in section \ref{thealg} we describe
the algorithm and its possible modifications. In section \ref{numerical}
we present numerical evidence that the algorithm is efficient.
In section \ref{terminates} we prove that it terminates in finite time.
In section \ref{3string} we prove that in the special case of 3-string
braids the algorithm is indeed quadratic time, and finds a shortest
possible representative for any braid.
\section{The algorithm}\label{thealg}
We start by setting up some notation.
We denote by $D_n$ ($n\in {\mathbb N}$) the closed disk in the complex plane
which intersects ${\mathbb R}$ in the interval $[0,n+1]$, but with the
points $1,\ldots,n \in {\mathbb C}$ removed.
We recall that the braid group $B_n$ is naturally isomorphic to the
mapping class group $\mathcal{MCG}(D_n)$. We denote by $E$ the diagram
in $D_n$ consisting of $n-1$ properly embedded line segments intersecting
the real axis halfway between the punctures, as indicated in figure
\ref{cddef}. We shall also consider the diagram $E'$ in $D_n$, consisting
of $n+1$ horizontal open line segments. The arcs of both diagrams are
labelled as indicated.
\begin{figure}
\caption{The diagram $E'$, and reduced curve diagrams for $id$, $\sigma_1$
and $\sigma_1 \sigma_2^{-1}$.}
\label{cddef}
\end{figure}
Given any homeomorphism
$\varphi\co D_n \to D_n$ with $\varphi|_{\partial D^2} = id$,
we write $[\varphi]$ for the element of $\mathcal{MCG}(D_n)$ represented by $\varphi$.
A \emph{curve diagram} for $[\varphi]\in \mathcal{MCG}(D_n)$ is the image $\varphi(E)$ of
$E$ under any homeomorphism $\varphi\co D_n \to D_n$ representing $[\varphi]$.
We say a curve diagram is \emph{reduced} if it intersects the diagram $E'$
in finitely many points,
transversely, in such a way that $\varphi(E)$ and $E'$ together do not enclose
any bigons. This is equivalent to requiring that the number of intersection
points $\varphi(E) \cap E'$ be minimal among all homeomorphisms representing
$[\varphi]$ (see e.g.\ \cite{FGRRW}).
Reduced curve diagrams are essentially unique in the sense
that any two reduced curve diagrams of $[\varphi]\in \mathcal{MCG}(D_n)$ can be deformed
into each other by an isotopy of $D_n$ which fixes $E'$ setwise.
We shall call the diagram $E$ the ``trivial curve diagram'', because it
is a reduced curve diagram for the trivial element of $\mathcal{MCG}(D_n)$.
We define the \emph{complexity} of an element $[\varphi]$ of $\mathcal{MCG}(D_n)$, and of
any curve diagram of $[\varphi]$, to be the
number of intersections points of a reduced curve diagram of $[\varphi]$
with $E'$. In other words, the complexity of $[\varphi]$ is
$\min |\psi(E) \cap E'|$, where the minimum is taken over all
homeomorphisms $\psi$ isotopic to $\varphi$.
For instance, the elements $[id], \sigma_1$ and
$\sigma_1 \sigma_2^{-1}$ of $\mathcal{MCG}(D_4)$ have complexity
$3, 5$ and $9$, respectively. We remark that the complexity
can grow exponentially with the number of crossings of the braid.
(The reader may feel that our definition of complexity is quite arbitrary.
In informal computer experiments we have tested some variations of
this definition, and the results appear to be qualitatively unchanged.)
Next we define the set of generators of the braid group $\mathcal{MCG}(D_n)$ which
will be useful for our purposes: we define a \emph{semicircular move}
to be any element of $\mathcal{MCG}(D_n)$ either of the form
$\sigma_i^\epsilon \sigma_{i+1}^\epsilon \ldots \sigma_{j-1}^\epsilon
\sigma_j^\epsilon$ with $i<j$, or of the form
$\sigma_i^\epsilon \sigma_{i-1}^\epsilon \ldots \sigma_{j+1}^\epsilon
\sigma_j^\epsilon$ with $i>j$; in either case we have
$\epsilon = \pm 1$ and $i,j \in \{1,\ldots, n-1\}$.
In order to explain the name, we remark that semicircular braids can
be realised by a semicircular movement of one puncture of $D_n$ in the
upper or lower half of $D_n$ back into $D_n \cap {\mathbb R}$. (The movement is
in the upper half if $j>i$ and $\epsilon=1$, or if $j<i$ and $\epsilon=-1$,
it is in the lower half in the reverse case, and when $i=j$ then it can be
regarded as lying in either half.) There are $2(n-1)^2$ different
semicircular moves in $\mathcal{MCG}(D_n)$.
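For concreteness, the semicircular moves can be enumerated as words in the generators; the small sketch below (ours) lists them for $B_n$, encoding a letter $\sigma_k^{\epsilon}$ as the pair $(k,\epsilon)$, and confirms the count $2(n-1)^2$.
\begin{verbatim}
# Sketch (ours): enumerate the semicircular moves of B_n as generator words.
def semicircular_moves(n):
    moves = []
    for eps in (+1, -1):
        for i in range(1, n):
            for j in range(1, n):
                ks = range(i, j + 1) if i <= j else range(i, j - 1, -1)
                moves.append(tuple((k, eps) for k in ks))
    return moves

assert len(semicircular_moves(4)) == 2 * (4 - 1) ** 2   # 18 moves in B_4
\end{verbatim}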
We say a semicircular movement is \emph{disjoint from} a curve diagram $D$
if it is given by a movement of a puncture along a semicircular arc in the
upper or lower half of $D_n$ which is disjoint from the diagram $D$.
\sh{The ``standard'' relaxation algorithm}
We are now ready to describe the algorithm. One inputs a braid $[\varphi]$,
(e.g.\ as a word in the generators $\sigma_i$),
and it outputs an expression of $[\varphi]^{-1}$ as a product of semicircular
moves (and thus of standard generators $\sigma_i^{\pm 1}$):
{\it Step 1 } Construct the curve diagram $D$ of $[\varphi]$.
{\it Step 2 } For each semicircular movement $\gamma$ which exists in
$B_n$ and which is disjoint from $D$, calculate the complexity of $\gamma(D)$.
{\it Step 3 } Among the possible moves in step 2, choose the one
($\gamma'$ say) which yields the minimal complexity. (If several
moves yield the same minimal complexity, choose one of them arbitrarily or
e.g.\ the lexicographically smallest one.)
Write down the move $\gamma'$, and define $D := \gamma'(D)$.
{\it Step 4 } If $D$ is the trivial diagram, terminate. If not, go to
Step 2.
This could be summarized in one sentence: iteratively untangle the curve
diagram of $[\varphi]$, where each iteration consists of the one semicircular
move which simplifies the curve diagram as much as possible. We shall
prove in section \ref{terminates} that the algorithm terminates, i.e.
that every nontrivial curve diagram admits a move which decreases the
complexity. Note that there is no
obvious reason to expect this algorithm to be efficient: one might expect
that one has to perform some seemingly inefficient steps first which
then allow a very rapid untangling later on. Yet, as we shall see,
it appears that such phenomena do not occur.
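The control flow just described is a plain greedy descent; the following skeleton (ours -- the curve-diagram data structure, the set of admissible moves and the complexity function are passed in as black boxes, and the toy demonstration has nothing to do with actual braids) only illustrates that flow.
\begin{verbatim}
# Sketch (ours): greedy control flow of the relaxation algorithm.
def relax(state, moves, apply_move, complexity, trivial_complexity):
    output = []
    while complexity(state) > trivial_complexity:
        candidates = [(complexity(apply_move(m, state)), m)
                      for m in moves(state)]
        best_c, best_m = min(candidates, key=lambda cm: cm[0])
        output.append(best_m)
        state = apply_move(best_m, state)
    return output

# Toy demonstration: an integer plays the role of the curve diagram.
word = relax(7, moves=lambda s: (+1, -1),
             apply_move=lambda m, s: s + m,
             complexity=abs, trivial_complexity=0)
# word == [-1, -1, -1, -1, -1, -1, -1]
\end{verbatim}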
\sh{The $\sigma_1$-consistent version of the relaxation algorithm}
Next we indicate how the algorithm can be improved in order to output
only $\sigma_1$-consistent braid words. We recall that a braid word is
\emph{$\sigma_1$-consistent} if it contains only the letter $\sigma_1$
but not $\sigma_1^{-1}$ (``$\sigma_1$-positive''), or if it contains
$\sigma_1^{-1}$ but not $\sigma_1$ (``$\sigma_1$-negative''),
or indeed if it contains no letter $\sigma_1^{\pm 1}$ at all
(``$\sigma_1$-neutral'').
It is a theorem of Dehornoy \cite{Dehbook} that every braid has a
$\sigma_1$-consistent representative. More precisely, it was shown in
\cite{FGRRW} that a braid $[\varphi]$ is $\sigma_1$-positive if and only if in
a reduced curve diagram $\varphi(E)$, the ``first'' intersection of $\varphi(e_1)$
with $E'$ lies in $e'_0$, where $e_1$ is oriented from bottom to top.
By contrast, the braid is $\sigma_1$-negative if and only if in the
opposite orientation of $e_1$ (from top to bottom) the ``first'' intersection
with $E'$ lies in $e'_0$. Finally, a braid is $\sigma_1$-neutral if and only
if $\varphi(e_1)\sim e_1$. By abuse of notation we shall also speak of a
reduced curve diagram as being $\sigma_1$-positive, negative, or neutral.
Roughly speaking, a diagram is $\sigma_1$-positive if $\varphi(e_1)$ starts
by going up, and $\sigma_1$-negative if it starts by going down.
We remark that these geometric conditions are easy to check once the curve
diagram of a braid has been calculated: using the notation of the appendix,
a curve diagram is $\sigma_1$-positive if $d^0_/>0$ holds, it is
$\sigma_1$-negative if $d^0_\backslash > 0$ holds, and it is $\sigma_1$-neutral
if $d^0_/ = d^0_\backslash = 0$. For instance, one sees in figure~\ref{cddef} that
the curve diagrams for both
$[\sigma_1]$ and $[\sigma_1 \sigma_2^{-1}]$ are $\sigma_1$-positive.
Here are the changes that need to be made to the above algorithm: after
Step 1 is completed, one checks whether the braid $[\varphi]$ is
$\sigma_1$-positive, negative, or neutral. For definiteness let us say
it is $\sigma_1$-positive (the $\sigma_1$-negative case being symmetric).
Thus during the untangling process we want to avoid using semicircular
moves which involve the letter $\sigma_1$, only $\sigma_1^{-1}$s are
allowed. Thus in Step 3 we cannot choose among \emph{all} semicircular
moves which are disjoint from $D$, but we restrict our choice to those
moves $\gamma$ which satisfy
\begin{itemize}
\item[(a)] the move $\gamma$ does not involve the letter $\sigma_1$,
\item[(b)] the braid $[\varphi]\gamma^{-1}$ is not $\sigma_1$-negative;
in other words, the curve diagram $\gamma(D)$ is still $\sigma_1$-positive
or $\sigma_1$-neutral, but not $\sigma_1$-negative.
\end{itemize}
Note that condition (b) is really necessary: if applying $\gamma$ turned our
$\sigma_1$-positive curve diagram $D$ into a $\sigma_1$-negative one, then
we would have no chance of completing the untangling process without using
the letter $\sigma_1$ later on.
\begin{remark}\rm The restriction to semicircular moves \emph{which are
disjoint from the curve diagram} is superfluous -- all results in this
paper are true
without it, and indeed for the standard algorithm the semicircular movement
which reduces complexity by as much as possible is automatically disjoint
from the diagram. We only insist on this restriction because it simplifies
the proof in section \ref{3string}.
\end{remark}
\begin{example}\rm \label{algex}
The curve diagram of the braid $\sigma_1^{-1}\sigma_2\sigma_1$ is shown in
figure \ref{F:algex}(a). The move $\sigma_1^{-1} \sigma_2^{-1}$ reduces the
complexity of the diagram by 4, whereas $\sigma_2$ (the only other
semicircular move disjoint from the diagram) reduces the complexity only
by 2. However, the move $\sigma_1^{-1} \sigma_2^{-1}$ is forbidden, since
its action would turn the positive diagram into a negative one. Thus we
start by acting by $\sigma_2$, and the resulting diagram is shown in
figure~\ref{F:algex}(b). In this diagram, the action of
$\sigma_1^{-1}\sigma_2^{-1}$ is legal, and relaxes the diagram into
the trivial one. In total, the curve diagram of
$\sigma_1^{-1}\sigma_2\sigma_1$ was untangled by
$\sigma_2 \; \sigma_1^{-1}\sigma_2^{-1}$.
\begin{figure}
\caption{Untangling the curve diagram of the braid $\sigma_1^{-1}\sigma_2\sigma_1$.}
\label{F:algex}
\end{figure}
\end{example}
\section{Numerical evidence}\label{numerical}
The above algorithms were implemented by the author using the programming
language C \cite{myCprog}. The aim of this section is to
report on the results of systematic experimentation with this program.
A brief description of how curve diagrams were coded and manipulated can
be found in the appendix. Since the algorithm involves calculations with
integer numbers whose size grows exponentially with the length of the
input braid, and no software capable of large integer arithmetic was used,
the search was restricted to braids with up to 50 crossings.
The following notation
will be used: for any braid word $w$ in the letters $\sigma_i^{\pm 1}$
we denote by $l(w)$ the number of letters (i.e.\ the number of crossings),
and by $l_{\rm{out}}(w)$ the number of letters of the output of our algorithm.
The results of the experiments have two surprising aspects:
firstly, they strongly support the following
\begin{conj}[Main conjecture]\label{efficcon}
For every $n>2$ there exists a constant $c(n)\geqslant 1$ such
that for all braid words $w$ we have $l_{\rm{out}}(w) < c(n) l(w)$.
\end{conj}
Secondly, they show that the $\sigma_1$-consistent version of the algorithm
very often yields \emph{shorter} output-braids than the standard version.
The first table gives the average length of the output braid
among 10,000 randomly generated braid words of length 10, 20, 30, 40
and 50, with 4, 5, and 6 strings. Our random braid words did not
contain any subwords of the form $\sigma_i^\epsilon \sigma_i^{-\epsilon}$,
i.e.\ they did not contain any obvious simplifications. Each entry in
the table is of the form ***/***, where the first number refers to the
standard algorithm, and the second one to the $\sigma_1$-consistent one.
\begin{verbatim}
length 10 length 20 length 30 length 40 length 50
N=4 8.5/8.4 16.0/15.6 23.4/22.8 30.9/29.9 38.3/36.8
N=5 8.7/8.7 17.2/17.0 25.9/25.3 34.5/33.8 43.2/42.2
N=6 8.7/8.6 17.3/17.2 26.2/26.2 35.6/35.7 45.0/45.0
\end{verbatim}
The next table shows the worst cases (that is, the longest output words)
that occurred among $\geqslant 40{,}000{,}000$ randomly generated braids.
(Thus every entry in this table required 40,000,000, and in some cases
many more, complete runs of the algorithm.)
\begin{verbatim}
length 10 length 20 length 30 length 40 length 50
N=4 24 / 20 52 / 38 70 / 54 86 / 66 106 / 80
N=5 26 / 26 58 / 54 76 / 66 116 / 82 130 / 104
N=6 30 / 38 78 / 68 102 / 100 140 / 120 178 / 132
\end{verbatim}
While these values appear to support conjecture \ref{efficcon}, one should
keep in mind that the worst examples may be exceedingly rare, and thus
may have been overlooked by the random search.
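The random braid words used in both tables avoid immediate cancellations in the sense
described above. One simple way to produce such words -- shown here only as a sketch, not as
an excerpt from the program \cite{myCprog} -- is rejection of cancelling letters:
\begin{verbatim}
/* Generate a random braid word of length len on n strings, encoded as
   integers +i / -i for sigma_i / sigma_i^{-1}, containing no subword
   of the form sigma_i^e sigma_i^{-e}.                                 */
#include <stdlib.h>

void random_braid_word(int *word, int len, int n)
{
    for (int k = 0; k < len; k++) {
        int letter;
        do {
            int i = 1 + rand() % (n - 1);          /* generator index  */
            letter = (rand() % 2) ? i : -i;        /* random sign      */
        } while (k > 0 && letter == -word[k - 1]); /* reject cancels   */
        word[k] = letter;
    }
}
\end{verbatim}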
If conjecture \ref{efficcon} holds, then we would have that the
running time of the algorithm (on a Turing machine) is bounded by
$c'(n) (l(w))^2$, where $c'$ is some other constant depending only on $n$.
This is because there is an exponential
bound on the numbers $d^i_\supset, d^i_\subset, d^i_{\overline{\cdot\cdot}},
d^i_{\underline{\cdot\cdot}}, d^i_/$, and $d^i_\backslash$ describing the
curve diagram (see appendix) in terms of the length of the braid word.
Performing an elementary
operation (addition, comparison etc.) on numbers bounded by
$\exp(l(w))$ takes time at most $O(\log(\exp(l(w))))=O(l(w))$. Since the
number of operations performed in each step of the algorithm is constant,
and the number of steps which need to be performed is (we conjecture)
bounded linearly by $l(w)$, we get a total running time bounded by
$c'(n) (l(w))^2$.
The algorithm admits variations. For instance, one can use a different
diagram on $D_n$ as a trivial curve diagram (instead of $E$), and one can
define the
complexity of a curve diagram in a different way (e.g. not counting the
intersection points of $\varphi(E)$ but those of $\varphi^{-1}(E)$ with the
horizontal axis). Informal experiments indicate that our results seem
to be ``stable'' under such modifications of the algorithm: computation
time and length of output braids still appear to be quadratic and
linear, respectively.
It should also be mentioned that the requirement that the complexity
of the curve diagrams be reduced \emph{as much as possible} in each step
is essential: if one only asks for the complexity to be reduced (by any
amount), then $l_{\rm{out}}(w)$ can depend exponentially on $l(w)$.
The reader should be warned of two attacks on conjecture~\ref{efficcon}
which appear not to work.
Firstly, running the algorithm on pairs of words $(w,w\sigma_i)$ or
$(w,\sigma_i w)$, where $\sigma_i$ is a generator, often yields
pairs of output braid words with dramatically different lengths.
Thus it appears unlikely that any fellow-traveller type property between
output braid words can serve as an explanation for the algorithm's
efficiency.
Secondly, there are instances of braids where the number of semicircular
factors in the output braid word is larger than the number of Artin factors
in the input braid word.
One may hope, however, that Mosher's automatic structure on $B_n$
\cite{Mosher} can help to prove the main conjecture \ref{efficcon},
because the conjecture would follow from a positive answer to the following
{\bf Question } Suppose that the algorithm produces a sequence
$D^{(1)},\ldots, D^{(N)}$ of curve diagrams of decreasing complexity and
with $D^{(N)}=E$,
representing braids $[\varphi_1],\ldots,[\varphi_N]$ in $\mathcal{MCG}(D_n)$.
(In particular, $[\varphi_N]=1$). Is it true that the lengths of the Mosher
normal forms (the ``flipping sequences'') of these braids are strictly
decreasing?
\section{The algorithm terminates}\label{terminates}
While we have no explanation for the extraordinarily good performance
of the algorithm, we can at least prove that it terminates in finite time
(and one could easily deduce from our proof an exponential bound on the
running time). In fact, in order to ascertain the termination of the
algorithm, even of its $\sigma_1$-consistent version, it suffices to prove
\begin{prop}\sl \label{simplexists}
For any reduced, $\sigma_1$-positive curve diagram $D$
in $D_n$ there exists a semicircular braid $\gamma$ in the upper half
of $D_n$ such that the diagram $\gamma(D)$ is still $\sigma_1$-positive
or $\sigma_1$-neutral and such that the complexity of $\gamma(D)$ is
strictly lower than the complexity of $D$.
\end{prop}
We recall that by a semicircular braid ``in the upper half of $D_n$'' we
mean one which can be realized by a semicircular movement of one of the
punctures in the part of $D_n$ belonging to the upper half plane.
Let us first see why this proposition implies that the algorithm terminates.
The curve diagram of the input braid $[\varphi]$ has finite complexity (and in
fact if $[\varphi]$ is given as a braid word of length $l$ in the letters
$\sigma_i^\epsilon$ then one can easily obtain an exponential bound on the
complexity in terms of $l$ and $n$: the complexity is bounded by
$(n-1)l^3$). If $[\varphi]$ is $\sigma_1$-positive, then the proposition
implies that the curve diagram can be turned into a $\sigma_1$-neutral
diagram by applying a sequence of semicircular braids which contain no
letter $\sigma_1$. By a symmetry
argument we can prove an analogous statement for $\sigma_1$-negative
curve diagrams: they can be untangled by applying a $\sigma_1$-positive
braid. Once we have arrived at a $\sigma_1$-neutral diagram, we can simply
cut $D_n$ along a vertical arc through the leftmost puncture, concentrate
on the part of our cut $D_n$ which contains punctures number $2,\ldots,n$,
and proceed by induction to make the diagram $\sigma_2$-neutral, then
$\sigma_3$-neutral etc., until it coincides with $E$.
\begin{proof}[Proof of proposition \ref{simplexists}]
Our argument was partly inspired by the unpublished thesis of Larue
\cite{Larue}. We denote the lower half of $D_n$ by $D_n^\vee$
and the upper half by $D_n^{\wedge}$.
We say that the $i$th puncture of $D_n$ is \emph{being pushed up}
by the diagram $D$ if one of the path components of
$D\cap D_n^\vee $ consists of an arc with one endpoint in $e'_{i-1}$ and
the other in $e'_i$.
We first claim that some puncture other than the first (leftmost) one is
being pushed up by $D$.
To see this, let us denote by $v$ the vertical arc in $D_n$ through the
first puncture - it is cut into two halves by the first puncture.
Let us write $D_n^>$ for the closure of the component of $D_n\backslash v$
which contains punctures number $2,\ldots,n$ (that is, $D_n^>$ equals
$D_n$ with everything left of $v$ removed).
Since $D$ is $\sigma_1$-positive, we have that the upper half of $v$ has
two more intersection points with $D$ than the lower half. Therefore
at least one of the components of $D_n^>\cap D$ is an arc both of whose
endpoints lie in the upper half of $v$. Since this arc has to intersect
the diagram $E'$, we deduce that some component of $D\cap D_n^\vee$ has both
endpoints in intervals $e'_i$, $e'_j$ with $i,j \neq 0$. Now that we have
found at least one ``U-shaped'' arc in $D_n^\vee \cap D_n^>$, we can
consider an innermost one. Since all punctures of $D_n$ must lie in
different path components of $D_n\backslash D$, all innermost U-shaped arcs
must have endpoints in a pair of \emph{adjacent} arcs $e'_{i-1}, e'_i$
of $E'$ with $i-1 \neq 0$. This completes the proof of the first claim.
It is this $i$th puncture that is going to perform the semicircular
move: our second claim is that there exists an oriented arc $g$ in
$D_n \backslash D$ which starts on the $i$th puncture, lies entirely in
the upper half of $D_n$, and terminates on one of the horizontal arcs
$e'_j$ with $j\neq i,i-1$.
To convince ourselves of the second claim, let us look at the two arcs
of $D\cap D_n^\wedge$ which have boundary points in common with our
innermost U-shaped arc in $D\cap D_n^\vee$. Not both of these arcs can
end on $\partial D_n$, so at least one of them, which we shall call $g'$,
ends on $E'$. Then we can define $g$ to be the arc in $D_n\backslash D$ which
starts on the $i$th puncture, and stays parallel and close to $g'$, until
it terminates on the same arc of $E'$ as $g'$. This construction proves
the second claim.
We now define our semicircular braid $\gamma$ to be the slide of the $i$th
puncture along the arc $g$.
Next, we prove that this move decreases the complexity of the curve diagram.
Let $D'$ be the diagram obtained from the diagram $D$ by first applying the
homeomorphism $\gamma$ and then reducing the resulting diagram with respect
to the horizontal line $E'$. In comparison to $D$, in this new diagram $D'$
the arc $e'_j$ has been divided into two arcs by the formerly $i$th puncture,
whereas the arcs $e'_{i-1}$ and $e'_i$ have merged. Moreover, if the diagram
$D$ contained $k$ U-shaped arcs in $D_n^{\vee}$ with one endpoint
in $e'_{i-1}$ and the other in $e'_i$, then the complexity of $D'$ is
$2k$ lower than the complexity of $D$: our semicircular move $\gamma$
eliminated these $2k$ intersections of $D$ with $E'$, and did not create
or eliminate any others.
The only thing left to be seen is that the diagram $D'$ is still
$\sigma_1$-positive or $\sigma_1$-neutral, but not $\sigma_1$-negative.
But a $\sigma_1$-negative diagram could only be obtained
if the arc $g$ ended in the
interval $e'_0$ (i.e.\ if the last letter of $\gamma$ was $\sigma_1^{-1}$).
The proof is now exactly analogous to the proof of Proposition 4.3, Claim 2
in \cite{FGRRW}, which states that a curve diagram obtained by ``sliding a
puncture along a useful arc'' is not $\sigma_1$-negative.
\end{proof}
\section{The special case of 3-string braids}\label{3string}
The reader may have wondered why no experimental results for 3-string
braids were presented in section \ref{numerical}. The reason is that we
can \emph{prove} that the best possible results hold in this case:
\begin{theorem}\sl\label{3stringthm}
For three-string braids, the $\sigma_1$-consistent version of our algorithm
outputs only braid words of minimal length.
\end{theorem}
Thus for $n=3$, conjecture~\ref{efficcon} holds with $c(n)=1$.
In other words, if we input any braid word $w'$ in the letters
$\sigma_i^{\epsilon}$ with $i\in \{1,2\}$ and $\epsilon=\pm 1$, then
the output word $w$ satisfies $l(w)\leqslant l(w')$. Unfortunately, the
proof below is very complicated, and it would be nice to have a more
elegant proof.
We remark that the analogue of theorem \ref{3stringthm} for the standard
algorithm also holds. We shall not prove this here, because the argument
becomes even more complicated.
In what follows, we shall always denote by $w$ a braid word output by
the algorithm.
The word $w$ is given as a product of semicircular braids,
and we shall indicate this decomposition, when necessary, by separating
the semicircular factors by a dot ``${\, \cdot \,}$''. Thus
$\sigma_2 \sigma_1 {\, \cdot \,} \sigma_2^{-1}$ denotes the product of the two
semicircular braids $\sigma_2 \sigma_1$ and $\sigma_2^{-1}$, and the word
$\sigma_2 \sigma_1^{-1}{\, \cdot \,} \sigma_2$ (without a dot between $\sigma_2$ and
$\sigma_1^{-1}$) is meaningless. Finally, we shall denote by an asterisk
$*$ any symbol among $\{1,-1,2,-2,{\, \cdot \,} \}$, or the absence of symbols.
\begin{lem}\sl \label{3strkeylem}
Let $w$ be the output braid word of the $\sigma_1$-consistent algorithm. Then
\begin{itemize}
\item[(i)] $w$ does not contain any subword of the form
$ *\,\sigma_i{\, \cdot \,} \sigma_i^{-1} *$ with $i\in \{1,2\}$.
\item[(ii)] The only place in $w$ where the subword
${\, \cdot \,} \sigma_2^{-1} {\, \cdot \,} \sigma_1 \sigma_2$ can occur is near the end of $w$,
in the context of a terminal subword ${\, \cdot \,} \sigma_2^{-1} {\, \cdot \,} \sigma_1
\sigma_2 ({\, \cdot \,} \sigma_2)^k$ with $k\in {\mathbb N} \cup \{0\}$.
\item[(iii)] $w$ does not contain any subword of the form
${\, \cdot \,}\sigma_1{\, \cdot \,}\sigma_2{\, \cdot \,}$ or ${\, \cdot \,}\sigma_2{\, \cdot \,}\sigma_1{\, \cdot \,}$.
\end{itemize}
Items (i),(ii), and (iii) are also true if every single letter in their
statement is replaced by its inverse.
\begin{itemize}
\item[(iv)] Subwords of the form ${\, \cdot \,} \sigma_2 \sigma_1{\, \cdot \,}\sigma_2 \sigma_1{\, \cdot \,}
\sigma_2^{-1} *$ cannot occur in $w$.
\item[(v)] Subwords of the form ${\, \cdot \,}\sigma_2 \sigma_1{\, \cdot \,}\sigma_2 \sigma_1{\, \cdot \,}
\sigma_1*$ cannot occur in $w$.
\item[(vi)] Subwords of the form ${\, \cdot \,} \sigma_1{\, \cdot \,} \sigma_2\sigma_1{\, \cdot \,}$ and
${\, \cdot \,} \sigma_2\sigma_1{\, \cdot \,} \sigma_2{\, \cdot \,}$ can occur in $w$, but only in the
context of a \emph{terminal} subword which is positive (i.e.\ does not
contain a single letter $\sigma_1^{-1}$ or $\sigma_2^{-1}$). Moreover, they cannot
be immediately preceded by the subword ${\, \cdot \,}\sigma_2^{-1}$.
\end{itemize}
Items (iv)--(vi) are also true if every single letter in their statement
is replaced by its inverse or if the r\^oles of $\sigma_1$ and $\sigma_2$
are interchanged.
\end{lem}
Items (iii)--(vi) state essentially that the subwords $*\sigma_1{\, \cdot \,}\sigma_2*$
and $*\sigma_2{\, \cdot \,}\sigma_1*$ cannot occur, except in a few very special cases
near the end of the word.
\begin{proof}[Proof of lemma \ref{3strkeylem}]
For the proof of (i) we suppose, for a contradiction, that $w$ contains a
subword consisting of two semicircular braids: the first subword consists of
or terminates in $\sigma_i^\epsilon$, the second one consists of or begins
with $\sigma_i^{-\epsilon}$.
For definiteness we suppose that we have the subword $*\sigma_2{\, \cdot \,}
\sigma_2^{-1}*$, the other cases being exactly analogous. The first factor
$\sigma_1\sigma_2$ or $\sigma_2$ can be realized by a semicircular movement
in the upper half of $D_n$. We consider the curve diagram $D$ that is
obtained after the action of the first factor. In this diagram, the third
puncture is either being pulled down, or it may be completely to the right
of $D$ (in this case, $D$ can be untangled by a braid which contains only
the letters $\sigma_1^{\pm 1}$), but it is certainly not being pushed up.
But in this situation, acting on $D$ by $\sigma_2^{-1}$ or
$\sigma_2^{-1} \sigma_1^{-1}$ cannot reduce the complexity.
(It is worth remarking that the statement (i) is wrong for braids with more
than three strings -- there the first factor might be followed by a
semicircular movement of the second puncture in the lower half of $D_n$ to
a point further to the right.)
\begin{figure}
\caption{Which curve diagrams might give rise to the subword $\sigma_2^{-1}{\, \cdot \,}\sigma_1\sigma_2$.}
\label{F:trtr}
\end{figure}
The proof of (ii) is more complicated.
Suppose that $D$ is a curve diagram which the $\sigma_1$-consistent algorithm
wants to untangle by first performing a $\sigma_2^{-1}$, and then a
$\sigma_1 \sigma_2$. Then the curve diagram $D$ must be negative (because
it can be untangled by a $\sigma_1$-positive braid word), and it must
contain arcs as indicated
in figure \ref{F:trtr}(a) (note that there may be a number of parallel
copies of these arcs). We deduce that $D$ is carried by one of two
possible train tracks, which are both indicated in figure \ref{F:trtr}(b)
-- the two possibilities are labelled $(\alpha)$ and $(\beta)$. The
labels $k,l,m,p$ are, of course, integer variables which indicate how often
the corresponding piece of track is traversed by the curve diagram.
Case $(\alpha)$ is quite easy to eliminate: the essential observation is
that in case $(\alpha)$ we must have $m=0$.
To see this, consider the arc of the curve diagram whose extremities are
the points labelled $*$ and $**$. This arc, together with the segment of
$\partial D_n$ shown, must bound a disk which contains exactly two of the
punctures. If we had $m\neq 0$, however, then the arc from $*$ to $**$ would
intersect $E'$ only once, namely in $e'_3$, and the disk would contain all
three punctures, which is absurd. Thus, in case $(\alpha)$ we must have $m=0$.
It is now an exercise to show that the only curve diagrams carried
by the train track of figure \ref{F:trtr}(b)($\alpha$) with $m=0$ and
containing arcs as indicated in figure
\ref{F:trtr}(a) belong to the family of curve diagrams given in figure
\ref{F:trtr}($\alpha$). However, the algorithm chooses to untangle these
diagrams with the movement $(\sigma_2^{-1}{\, \cdot \,})^{2+k}\sigma_1$, not with
$\sigma_2^{-1}{\, \cdot \,}\sigma_1 \sigma_2...$, as hypothesised. Thus the curve
diagram $D$ cannot be carried by the train track from figure
\ref{F:trtr}(b)($\alpha$).
For case $(\beta)$ we consider again the arc of the curve diagram with
endpoints $*$ and $**$. This arc, together with the segment of
$\partial D_n$ shown, must bound a disk which contains two of the punctures.
For this to happen, however, we need that starting from the point $**$ and
following the arc of the diagram we traverse the track labelled $m$, not
the track labelled $k$. Therefore in case $(\beta)$ we must have $m\neq 0$.
Moreover, the calculation shown in the figure proves that the curve diagram
$D$ traverses the track labelled $k$ twice more than the track labelled $l$.
This implies that the third puncture is being pushed up by $l$ U-shaped arcs
below it
(and similarly there are $l$ caps above the second puncture),
but there are at least $l+1$ U-shaped arcs below the first puncture.
Thus the standard algorithm would choose to perform the movement
$\sigma_1 \sigma_2$, and the $\sigma_1$-consistent algorithm only avoids
doing so because the movement $\sigma_1 \sigma_2$ is illegal (it would
render the diagram positive). However, we know by hypothesis that after
applying $\sigma_2^{-1}$, the move $\sigma_1 \sigma_2$ \emph{is} legal.
\begin{figure}
\caption{Applying $\sigma_2^{-1}{\, \cdot \,}\sigma_1\sigma_2$.}
\label{F:end3braid}
\end{figure}
This means that the curve diagram is of one of two shapes shown in figure
\ref{F:end3braid}. We observe that the diagram in figure \ref{F:end3braid}(b)
cannot be extended to a reduced curve diagram -- indeed, the reader will find
that any attempt to extend the arcs labelled by a question-mark will yield
arcs of infinite length. Therefore we must be in case (a).
It follows that after applying the braid $\sigma_2^{-1}{\, \cdot \,}\sigma_1\sigma_2$
to the diagram $D$ we are left with a curve diagram that can be untangled
by $\sigma_2^k$ for some $k\in {\mathbb N}\cup \{0\}$. This completes the proof of (ii).
For the proof of (iii) we remark that the algorithm prefers the
word ${\, \cdot \,} \sigma_1^\epsilon \sigma_2^\epsilon{\, \cdot \,}$ to the word
${\, \cdot \,}\sigma_1^\epsilon{\, \cdot \,}\sigma_2^\epsilon{\, \cdot \,}$, and similarly the word
${\, \cdot \,}\sigma_2^\epsilon \sigma_1^\epsilon{\, \cdot \,}$ to the word
${\, \cdot \,}\sigma_2^\epsilon{\, \cdot \,}\sigma_1^\epsilon{\, \cdot \,}$ for $\epsilon = \pm 1$.
For the proof of (iv) and (v) we consider a diagram $D$ such that the
untangling of $D$ begins, according to our algorithm, with the movements
$\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1$. We observe that the diagram $D$
must contain curve segments as shown in figure \ref{F:casede}(a) (solid
lines). The action of $\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1$ on
this diagram is indicated by the arrows.
\begin{figure}
\caption{Which curve diagrams are relaxed by $\sigma_2 \sigma_1{\, \cdot \,}\sigma_2\sigma_1$.}
\label{F:casede}
\end{figure}
Let us first prove (iv), i.e.\ that the movement
$\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1{\, \cdot \,} \sigma_2^{-1} *$ cannot occur.
After the action of $\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1$ on $D$, the
puncture which started out in the leftmost position is now the rightmost one.
In order to get the move $\sigma_2^{-1}$ or $\sigma_2^{-1} \sigma_1^{-1}$ next,
we would need that in the diagram
$\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1(D)$ the rightmost puncture is being
pushed up by a U-shaped arc below it.
However, one can see
that applying $\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1$ to the
diagram in figure \ref{F:casede}(a) can never yield such a diagram.
For the proof of (v), we distinguish two cases. First, the sequence of moves
$\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1{\, \cdot \,} \sigma_1{\, \cdot \,}$ is impossible,
because the fat arrow in figure \ref{F:casede}(b) must intersect the
curve diagram $D$. Regarding the second case, if we suppose that the
curve diagram $D$ gives rise to the moves
$\sigma_2 \sigma_1{\, \cdot \,} \sigma_2 \sigma_1{\, \cdot \,} \sigma_1\sigma_2$, then we can add
information to figure \ref{F:casede}(a): in this case, the diagram $D$ must
have contained the arcs shown in figure \ref{F:casede}(c). However, this
diagram cannot be extended to a reduced curve diagram which is disjoint from
the dashed arrow: indeed, any attempt to extend the arc labelled by a
question-mark yields an arc of infinite length. This completes the proof
of (v).
Finally, we turn to the proof of (vi). Again, we suppose that the
algorithm chooses to untangle the curve diagram $D$ by a sequence of
semicircular moves that starts with ${\, \cdot \,} \sigma_1{\, \cdot \,} \sigma_2\sigma_1{\, \cdot \,}$
or with ${\, \cdot \,} \sigma_2\sigma_1{\, \cdot \,} \sigma_2{\, \cdot \,}$. We observe that in both cases
the curve diagram must be carried by one of the two train tracks ($\alpha$
or $\beta$) both shown in figure \ref{F:casef}(a), where the tracks labelled
$k$ and $l$ must each be traversed by at least one curve segment of $D$.
However, as explained in the figure, case ($\alpha$) is completely
impossible, and case ($\beta$) is possible only if each of the two pieces
of track labelled $k$ and $l$ is traversed by exactly one segment of $D$.
\begin{figure}
\caption{Case (vi) of Lemma \ref{3strkeylem}.}
\label{F:casef}
\end{figure}
We leave it as an exercise to show that the only curve diagrams fitting these
restrictions are the two shown in figure \ref{F:casef}(b) and (c),
and all their images under the action of $\sigma_1^{-p} \Delta^{-2q}$, where
$p,q\in {\mathbb N}\cup \{0\}$. In other words, our curve diagram must in fact be
the curve diagram of a braid $\sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1}
\sigma_1^{-p} \Delta^{-2q}$ or $\sigma_2^{-1} \sigma_1^{-1} \sigma_2^{-1}
\sigma_1^{-1} \sigma_2^{-1} \sigma_1^{-1} \sigma_1^{-p} \Delta^{-2q}$, where
$p,q\in {\mathbb N}\cup\{0\}$. Now we observe that any untangling of any of these
curve diagrams that our algorithm may find has only positive letters.
Moreover, if we act by $\sigma_2$ on any of these curve diagrams, then the
relaxation (according to our algorithm) of the resulting diagram starts
with $\sigma_1 \sigma_2$, \emph{not} with $\sigma_2^{-1}$. This completes
the proof of (vi).
\end{proof}
\begin{proof}[Proof of theorem \ref{3stringthm}]
Let us denote by $\widetilde{w}$ the braid word obtained from an output
braid word $w$ by removing all the dot-symbols.
Then lemma \ref{3strkeylem} implies that $\widetilde{w}$ cannot
contain any subword of the form
$\sigma_2 (\sigma_1 \sigma_2 \sigma_2 \sigma_1)^k \sigma_2^{-1}$, or
$\sigma_2^{-1} (\sigma_1 \sigma_2 \sigma_2 \sigma_1)^k \sigma_2$, or
$\sigma_1 (\sigma_2 \sigma_1 \sigma_1 \sigma_2)^k
\sigma_2 \sigma_1 \sigma_2^{-1}$, or
$\sigma_2^{-1} (\sigma_1 \sigma_2 \sigma_2 \sigma_1)^k
\sigma_1 \sigma_2 \sigma_1$, $(k\in {\mathbb N}\cup \{0\})$,
or any of the images of such braids under one of the automorphisms of $B_3$
$(\sigma_1 \to \sigma_2, \sigma_2 \to \sigma_1)$, or
$(\sigma_1 \to \sigma_1^{-1}, \sigma_2 \to \sigma_2^{-1})$, or
$(\sigma_1 \to \sigma_2^{-1}, \sigma_2 \to \sigma_1^{-1})$.
That is, the word $\widetilde{w}$ admits no obvious simplifications.
It is well known that 3-string braids without this type of obvious
shortening are in fact of minimal length.
This completes the proof of theorem \ref{3stringthm}.
\end{proof}
\begin{appendix}
{\Large{\bf Appendix: Coding of curve diagrams}}
{\small
If we cut the disk $D_n$ along $n$ vertical lines through the puncture
points, we obtain $n+1$ vertical ``bands''. The connected components of
the intersection of a curve diagram with every such band come in six
different types, which are in an obvious way described by the symbols
$\supset, \subset, \overline{\cdot\cdot}, \underline{\cdot\cdot}, /$
and $\backslash$.
Thus curve diagrams in $D_n$ are coded by $6(n+1)$ integer numbers
$d^i_\supset, d^i_\subset, d^i_{\overline{\cdot\cdot}},
d^i_{\underline{\cdot\cdot}}, d^i_/$, and $d^i_\backslash$ (with $i=0,\ldots n$).
For instance, in figure \ref{F:algex}(b), we have $d^0_/=2,
d^0_{\overline{\cdot\cdot}}=2, d^1_\backslash=2,d^1_{\overline{\cdot\cdot}}=2,
d^2_{\overline{\cdot\cdot}}=1, d^2_\supset=1, d^2_{\underline{\cdot\cdot}}=1,
d^3_\supset=1$, and all other coefficients equal zero.
Using this coding, reduced curve diagrams of arbitrary braids can be
calculated by the rule that applying $\sigma_i$ to a curve diagram $D$
changes the diagram as follows. Firstly, all $d_\times^j$ (where $\times$
can be any symbol) with $j\notin \{i-1, i, i+1\}$ are unchanged. Apart from
that, the following rules apply simultaneously:
$d^{i-1}_\supset \leftarrow d^{i-1}_\supset$\\
$d^{i-1}_\subset \leftarrow d^i_\subset + d^i_{\backslash} + \max(0,d^i_{\underline{\cdot\cdot}}-d^{i-1}_{\underline{\cdot\cdot}}-d^{i-1}_{\backslash})$\\
$d^{i-1}_{\overline{\cdot\cdot}} \leftarrow d^{i-1}_{\overline{\cdot\cdot}} + d^{i-1}_\backslash - \min(d^{i-1}_\subset, \max(0, d^i_{\underline{\cdot\cdot}}-d^{i-1}_{\underline{\cdot\cdot}}))$\\
$d^{i-1}_{\underline{\cdot\cdot}} \leftarrow \min(d^{i-1}_{\underline{\cdot\cdot}}, d^i_{\underline{\cdot\cdot}})$ \\
$d^{i-1}_/ \leftarrow d^{i-1}_/ + d^{i-1}_{\underline{\cdot\cdot}} -
\min(d^{i-1}_{\underline{\cdot\cdot}}, d^i_{\underline{\cdot\cdot}})$\\
$d^{i-1}_\backslash \leftarrow \min(d^{i-1}_\subset, \max(0, d^i_{\underline{\cdot\cdot}}-d^{i-1}_{\underline{\cdot\cdot}}))$\\
$d^i_\supset \leftarrow \min(\min(d^i_/, d^{i+1}_\supset) , \max(0,d^{i-1}_\subset + d^{i-1}_{\underline{\cdot\cdot}} - d^i_{\underline{\cdot\cdot}}))$ \\
$d^i_\subset \leftarrow \min(\min(d^{i-1}_\subset, d^i_/),\max(0,d^{i+1}_{\overline{\cdot\cdot}} + d^{i+1}_\backslash - d^i_{\overline{\cdot\cdot}}))$ \\
$d^i_{\overline{\cdot\cdot}} \leftarrow d^i_\supset + d^i_\backslash + d^i_{\overline{\cdot\cdot}} - \min(d^{i-1}_\subset, d^i_/, d^i_{\underline{\cdot\cdot}})$ \\
$d^i_{\underline{\cdot\cdot}} \leftarrow d^i_\subset + d^i_\backslash + d^i_{\underline{\cdot\cdot}} - \min(d^{i+1}_\supset , d^i_/, d^i_{\underline{\cdot\cdot}})$ \\
$d^i_/ \leftarrow \max(0,\min(d^i_/, d^{i-1}_\subset)+\min(d^i_/,d^{i+1}_\supset) - d^i_/ )$ \\
$d^i_\backslash \leftarrow d^i_\supset+d^i_\subset+d^i_\backslash+\max(0,d^i_/ - d^{i-1}_\subset-d^{i+1}_\supset)$ \\
$d^{i+1}_\supset \leftarrow d^i_\supset+d^i_\backslash+\max(0,d^i_{\overline{\cdot\cdot}}-d^{i+1}_{\overline{\cdot\cdot}}-d^{i+1}_\backslash)$ \\
$d^{i+1}_\subset \leftarrow d^{i+1}_\subset$ \\
$d^{i+1}_{\overline{\cdot\cdot}} \leftarrow \min(d^{i+1}_{\overline{\cdot\cdot}},d^i_{\overline{\cdot\cdot}})$ \\
$d^{i+1}_{\underline{\cdot\cdot}} \leftarrow d^{i+1}_{\underline{\cdot\cdot}} + d^{i+1}_\backslash - \min(d^{i+1}_\backslash,\max(0,d^i_{\overline{\cdot\cdot}}-d^{i+1}_{\overline{\cdot\cdot}}))$ \\
$d^{i+1}_/ \leftarrow d^{i+1}_/ + d^{i+1}_{\overline{\cdot\cdot}} - \min(d^{i+1}_{\overline{\cdot\cdot}},d^i_{\overline{\cdot\cdot}})$ \\
$d^{i+1}_\backslash \leftarrow \min(d^{i+1}_\backslash,\max(0,d^i_{\overline{\cdot\cdot}}-d^{i+1}_{\overline{\cdot\cdot}}))$ \\
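For concreteness, the coding and the first block of update rules might be organised as
follows in C. This is only an illustrative sketch with invented field and function names
(it is not an excerpt from the program \cite{myCprog}); the remaining assignments for the
bands $i$ and $i+1$ are transcribed from the formulas above in exactly the same way.
\begin{verbatim}
/* One vertical band of the disk: six counters, in the order
   d^i_superset, d^i_subset, d^i_overbar, d^i_underbar, d^i_/, d^i_\ . */
typedef struct { long cap, cup, over, under, up, down; } Band;

static long lmax(long a, long b) { return a > b ? a : b; }
static long lmin(long a, long b) { return a < b ? a : b; }

/* Apply sigma_i to a diagram coded by bands d[0..n]; only bands
   i-1, i, i+1 change, and every assignment uses the OLD values,
   so we keep a copy o[].  Shown here: the update of band i-1.         */
void apply_sigma(Band *d, int i)
{
    Band o[3] = { d[i-1], d[i], d[i+1] };          /* old values       */
    /* d[i-1].cap is unchanged                                          */
    d[i-1].cup   = o[1].cup + o[1].down
                 + lmax(0, o[1].under - o[0].under - o[0].down);
    d[i-1].over  = o[0].over + o[0].down
                 - lmin(o[0].cup, lmax(0, o[1].under - o[0].under));
    d[i-1].under = lmin(o[0].under, o[1].under);
    d[i-1].up    = o[0].up + o[0].under - lmin(o[0].under, o[1].under);
    d[i-1].down  = lmin(o[0].cup, lmax(0, o[1].under - o[0].under));
    /* ...bands i and i+1 are updated from o[] following the remaining
       formulas listed above...                                         */
}
\end{verbatim}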
}
\end{appendix}
{\bf Acknowledgement } I thank Lee Mosher and Juan Gonzalez-Meneses for
helpful discussions, and Heather Jenkins and Sandy Rutherford of the
Pacific Institute for the Mathematical Sciences (PIMS) at UBC Vancouver
for letting me use their most powerful computers.
\end{document}
\begin{document}
\large
\title[Parabolic Curl System]
{Regularity of a Parabolic System Involving Curl}
\author[X.B. Pan]{Xing-Bin Pan}
\address{School of Science and Engineering,
The Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, Guangdong, China; and School of Mathematical Sciences, East China Normal University, Shanghai 200062, China}
\email{[email protected]; [email protected]}
\keywords{parabolic curl system, regularity, Schauder estimate, Maxwell system}
\subjclass[2010]{35Q65; 35K51, 35K65, 35Q61}
\begin{abstract} This note presents, with proof, a regularity result for an initial-boundary value problem of a linear parabolic system involving the curl of the unknown vector field, subject to a boundary condition prescribing the tangential component of the solution.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
We are interested in the regularity theory of linear parabolic systems involving curl. We believe that the regularity results are well known to experts; however, it is difficult to find the statements with complete proofs in the literature. We therefore write out the conclusions with proofs, for later reference.
We start with an equation of the following form:
\begin{equation}\label{eqB}
\left\{\begin{aligned}
&{\partial\mathbf{u}\over\partial t}+a\,\operatorname{curl}^2\mathbf{u}+\mathcal B\,\operatorname{curl}\mathbf{u}+c\mathbf{u}=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0,\quad\; & (t,x)\in Q_T,\\
&\mathbf{u}_T=\mathbf{0}, & (t,x)\in S_T,\\
&\mathbf{u}(0,x)=\mathbf{u}^0, & x\in \Omega,
\end{aligned}\right.
\end{equation}
where $a, c$ are scalar functions, $\mathcal B$ is a matrix-valued function, $Q_T=(0,T]\times\Omega$ with $\Omega$ being a bounded domain in $\mathbb{R}^3$ with smooth boundary $\partial\Omega$, and $S_T=(0,T]\times\partial\Omega$. We denote $\operatorname{curl}\mathbf{u}\equiv \nabla\times\mathbf{u}$, and denote by $\mathbf{u}_T$ the tangential component of $\mathbf{u}$ at the boundary $\partial\Omega$, namely $\mathbf{u}_T=(\nu\times\mathbf{u})\times\nu$, where $\nu$ is the unit outer normal vector of $\partial\Omega$.
In this paper we use $M(3)$ to denote the set of all $3\times 3$ matrices, and let
$$
C^{k+\alpha}_{t0}(\overline{\Omega},\operatorname{div}0)=\{\mathbf{w}\in C^{k+\alpha}(\overline{\Omega},\mathbb{R}^3):~ \operatorname{div}\mathbf{w}=0\;\;\text{in }\Omega,\;\; \mathbf{w}_T=\mathbf{0}\;\;\text{on }\partial\Omega\}.
$$
Note that the boundary condition in \eqref{eqB} prescribes the tangential component of the solution; this makes \eqref{eqB} significantly different from the usual parabolic equation with a Dirichlet boundary condition, which prescribes the full trace.
The regularity of weak solutions of \eqref{eqB} will be used in \cite{KP} to establish existence and regularity of weak solutions of the time-dependent model of Meissner states of superconductors.
\begin{Thm}\label{Thm1} Let $\Omega$ be a bounded domain in $\mathbb{R}^3$ with a $C^{3+\alpha}$ boundary, $Q_{T}=(0,T]\times \Omega$. Assume that
$$\begin{aligned}
& 0<\alpha<1,\qquad a,\; c\in C^{\alpha,\alpha/2}(\overline{Q}_T),\qquad a(t,x)\geq a_0>0,\\
& \mathcal B\in C^{\alpha,\alpha/2}(\overline{Q}_T,M(3)),\qquad
\mathbf{f}\in C^{\alpha,\alpha/2}(\overline{Q}_T,\mathbb{R}^3),\qquad
\mathbf{u}^0\in C^{2+\alpha}_{t0}(\overline{\Omega},\operatorname{div}0),
\end{aligned}
$$
and
\begin{equation}\label{adm}
a(0,x)[\operatorname{curl}^2\mathbf{u}^0(x)]_T+[\mathcal B(0,x)\operatorname{curl}\mathbf{u}^0(x)]_T=[\mathbf{f}(0,x)]_T,\quad x\in\partial\Omega.
\end{equation}
If $\mathbf{u}$ is a weak solution of \eqref{eqB} on $Q_{T}$, then $\mathbf{u}\in C^{2+\alpha,1+\alpha/2}(\overline{Q}_T)$ and
$$
\|\mathbf{u}\|_{C^{2+\alpha,1+\alpha/2}(\overline{Q}_T)}\leq C\{\|\mathbf{f}\|_{C^{\alpha,\alpha/2}(Q_{T})}+\|\mathbf{u}^0\|_{C^{2+\alpha}(\overline{\Omega})}\},
$$
where $C$ depends only on $\Omega, T, \alpha$ and the $C^{\alpha,\alpha/2}(\overline{Q}_T)$ norms of $a$, $c$ and $\mathcal B$.
\end{Thm}
In \eqref{adm} we use $[\cdot]_T$ to denote the tangential component of the enclosed vector. Let us mention that the assumption $\mathbf{u}^0\in C^{2+\alpha}_{t0}(\overline{\Omega},\operatorname{div}0)$ implies that $\mathbf{u}^0_T=\mathbf{0}$ for $x\in \partial\Omega$, which is consistent with the boundary condition $\mathbf{u}_T=\mathbf{0}$. This, together with the assumption \eqref{adm}, constitutes the compatibility condition for the problem \eqref{eqB}.
\section{Estimates near a Flat Boundary}
\subsection{$W^{2,1,q}$-estimates}
We consider regularity of weak solutions of \eqref{eqB}, where $a, c$ are scalar functions and $\mathcal B$ is a matrix-valued function.
Let $\mathbf{u}$ be a weak solution of \eqref{eqB}. Then $\mathbf{u}\in L^2(0,T; H^1(\Omega,\mathbb{R}^3))$. To get higher regularity of the solutions, one may first use the difference method to show that $\mathbf{u}\in L^2(0,T; H^2(\Omega,\mathbb{R}^3))$ and $\partial_t\mathbf{u}\in L^2(0,T; L^2(\Omega,\mathbb{R}^3))$, and then show that $\mathbf{u}$ is of class $C^{2+\alpha, 1+\alpha/2}$. Here we use a different approach: we start with a weak solution $\mathbf{u}\in L^2(0,T; H^1(\Omega,\mathbb{R}^3))$ and show directly that $\mathbf{u}$ is of class $C^{2+\alpha,1+\alpha/2}$.
We shall derive a priori estimates for smooth functions; the regularity of weak solutions then follows from these estimates.
By considering a cut-off, we only need to examine regularity near the boundary. We start with a flat boundary. Denote by $B^+_R$ the upper half ball with center at the origin and radius $R$, and
$$
\Sigma_R=\{x=(x_1,x_2,0):~ |x|<R\}.
$$
Let
$$
Q_{R,T}=(0,T]\times B^+_R,\qquad \Gamma_{R,T}=(0,T]\times \Sigma_R.
$$
With the divergence-free condition, \eqref{eqB} can be written in the following form:
\begin{equation}\label{eqB.1}
\left\{\begin{aligned}
&{\partial\mathbf{u}\over\partial t}-a\,\Delta\mathbf{u}+\mathcal B\,\operatorname{curl}\mathbf{u}+c\mathbf{u}=\mathbf{f},\quad \operatorname{div}\mathbf{u}=0,\quad\; & (t,x)\in Q_{R,T},\\
&\mathbf{u}_T=\mathbf{0}, & (t,x)\in \Gamma_{R,T},\\
&\mathbf{u}(0,x)=\mathbf{u}^0, & x\in B^+_R.
\end{aligned}\right.
\end{equation}
As mentioned in the introduction, the boundary condition in \eqref{eqB.1} prescribes the tangential component, but not the full trace, of the solution. As such, the regularity of \eqref{eqB.1} is not a direct consequence of the regularity theory of the classical initial-Dirichlet boundary problem for parabolic equations.
The compatibility condition \eqref{adm} can be written as
\begin{equation}\label{adm2}
-a(0,x)[\Delta \mathbf{u}^0(x)]_T+[\mathcal B(0,x)\operatorname{curl}\mathbf{u}^0(x)]_T=[\mathbf{f}(0,x)]_T,\quad x\in\partial\Omega.
\end{equation}
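Here we have used the elementary vector identity $\operatorname{curl}^2=\nabla\operatorname{div}-\Delta$: since $\operatorname{div}\mathbf{u}^0=0$,
$$
\operatorname{curl}^2\mathbf{u}^0=\nabla(\operatorname{div}\mathbf{u}^0)-\Delta\mathbf{u}^0=-\Delta\mathbf{u}^0\quad\text{in }\Omega,
$$
so that \eqref{adm} and \eqref{adm2} agree term by term.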
Note that $\Gamma_{R,T}$ is the flat part of the parabolic boundary of $Q_{R,T}$. We shall establish the following local estimate:
\begin{Lem}\label{Lem2.1}
Assume that
$$\begin{aligned}
&a,\; c\in C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T}),\qquad a(t,x)\geq a_0>0,\qquad 1<q<\infty,\\
&\mathcal B\in C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T},M(3)),\qquad \mathbf{f}\in L^q(Q_{R_0,T},\mathbb{R}^3),\\
& \mathbf{u}^0\in W^{2,q}(B_{R_0}^+,\mathbb{R}^3),\qquad \operatorname{div}\mathbf{u}^0=0\;\;\text{in }B_{R_0}^+,\qquad \mathbf{u}^0_T=\mathbf{0}\;\;\text{on }\Sigma_{R_0},
\end{aligned}
$$
and assume \eqref{adm} holds.
If $\mathbf{u}$ is a weak solution of \eqref{eqB.1} on $Q_{R_0,T}$, then for any $0<R<R_0$ we have $\mathbf{u}\in W^{2,1,q}(Q_{R,T},\mathbb{R}^3)$ and
$$
\|\mathbf{u}\|_{W^{2,1,q}(Q_{R,T})}\leq C\{\|\mathbf{f}\|_{L^q(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,q}(B_{R_0}^+)}\},
$$
where $C$ depends only on $R_0$, $R$, $T$, $\mathcal B$, $a$, $c$, $q$.
\end{Lem}
\begin{proof}
Let $\mathbf{u}$ be a solution of \eqref{eqB.1}.
Write
$$
\mathbf{u}=(u_1,u_2,u_3)^t,\qquad \mathbf{f}=(f_1,f_2,f_3)^t,\qquad
\mathcal B\,\operatorname{curl}\mathbf{u}=\mathbf{h}=(h_1,h_2,h_3)^t.
$$
Here $u_1, u_2$ correspond to the tangential component of $\mathbf{u}$ and $u_3$ corresponds to the normal component.
Recall the formula (see \cite[p.210]{DaL})
$$
\operatorname{div}\mathbf{u}=\operatorname{div}_\Gamma(\pi\mathbf{u})+2(\nu\cdot\mathbf{u})H(x)+{\partial\over\partial\nu}(\nu\cdot\mathbf{u}).
$$
In the above, $\pi\mathbf{u}$ denotes the tangential component of $\mathbf{u}$ on the domain boundary, $\operatorname{div}_\Gamma$ denotes the surface divergence, and $H(x)$ is the mean curvature of the domain boundary. Applying this equality on the flat part $\Sigma_R$ of the boundary, where $H(x)\equiv 0$, we see that the boundary condition $\mathbf{u}_T=\mathbf{0}$ together with the divergence-free condition $\operatorname{div}\mathbf{u}=0$ implies a Neumann boundary condition for $u_3$.
In fact, for $x\in \Sigma_R$ we have
$$
\pi\mathbf{u}=\mathbf{u}_T=(u_1,u_2,0),\qquad \nu\cdot \mathbf{u}=u_3.
$$
Hence
$$
{\partial u_3\over\partial\nu}={\partial\over\partial\nu}(\nu\cdot\mathbf{u})=\operatorname{div}\mathbf{u}-\operatorname{div}_\Gamma(\pi\mathbf{u})-2(\nu\cdot\mathbf{u})H(x).
$$
So
$$
u_1=0,\qquad u_2=0,\qquad {\partial u_3\over \partial\nu}=0\qquad\text{on } \Gamma_{R,T}.
$$
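On the flat piece $\Sigma_R$ this can also be checked directly in coordinates: $u_1=u_2=0$ on $\Sigma_R$ implies that the tangential derivatives $\partial_1u_1$ and $\partial_2u_2$ vanish on $\Sigma_R$, and the divergence-free condition then gives
$$
\partial_3u_3=-(\partial_1u_1+\partial_2u_2)=0\quad\text{on }\Sigma_R,
$$
so that indeed ${\partial u_3\over\partial\nu}=0$ on $\Gamma_{R,T}$.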
We can write the equations for $u_1$ and $u_2$ as follows:
\begin{equation}\label{eq-u12}
\left\{\begin{aligned}
&\partial_tu_j-a\Delta u_j+cu_j=f_j-h_j,\quad& (t,x)\in Q_{R,T},\\
&u_j=0,\quad &(t,x)\in \Gamma_{R,T},\\
&u_j(0,x)=u_j^0,\quad& x\in B_R^+,
\end{aligned}\right.
\end{equation}
$j=1,2$, and write the equation for $u_3$ as follows:
\begin{equation}\label{eq-u3}
\left\{\begin{aligned}
&\partial_tu_3-a\Delta u_3+cu_3=f_3-h_3, \quad& (t,x)\in Q_{R,T},\\
&{\partial u_3\over\partial\nu}=0,\quad & (t,x)\in \Gamma_{R,T},\\
&u_3(0,x)=u_3^0,\quad&x\in B_R^+.
\end{aligned}\right.
\end{equation}
However, since $\Gamma_{R,T}$ is only a subset of the parabolic boundary of $Q_{R,T}$,
\eqref{eq-u12} and \eqref{eq-u3} are not exactly the standard initial-boundary value problems for parabolic equations with Dirichlet or Neumann boundary conditions.
We shall modify the $u_j$'s to obtain standard initial-boundary value problems for parabolic equations.
Given $0<R<R_0$, we can take a domain $U$ with $C^{2+\alpha}$ boundary and a smooth function $\eta$ supported in $U$ such that
\begin{equation}\label{domain-U}
B_R^+\subset U\subset B_{R_0}^+,\quad \eta(x)=1\;\;\text{if } x\in B_R^+,\quad \eta(x)=0\;\;\text{if } x\in B^+_{R_0}\setminus\overline{U}.
\end{equation}
In fact, we can first choose $\tilde R$ such that $R<\tilde R<R_0$. Take a smooth cut-off function $\eta$ such that
\begin{equation}\label{eta}
\operatorname{spt}(\eta)\subset B_{\tilde R},\quad \eta=1\quad\text{in }B_{R},\qquad {\partial\eta\over \partial\nu}=0\quad\text{on }\Sigma_{\tilde R}.
\end{equation}
Then we take a domain $\tilde U$ with smooth boundary such that
$$
\overline{B}_{\tilde R}\subset \tilde U\subset \overline{\tilde U}\subset B_{R_0}.
$$
We can choose $\tilde U$ such that $U\equiv \tilde U\cap B_{R_0}^+$ has $C^{2+\alpha}$ boundary. Then $U$ and $\eta$ satisfy \eqref{domain-U}.
Let $\mathbf{w}=\eta\mathbf{u}$. Then
$$\mathbf{w}(t,x)=\mathbf{0}\quad\text{for }0< t\leq T,\;\; x\in \partial U\setminus \Sigma_{\tilde R}.
$$
Moreover, we actually have $\mathbf{w}(t,x)=\mathbf{0}$ for $x\in U\setminus B_{\tilde R}$, thus
$$
{\partial\over \partial\nu}(\nu\cdot\mathbf{w})=0\quad\text{for }\;\; x\in \partial U\setminus \Sigma_{\tilde R}.
$$
If $x\in \Sigma_{\tilde R}$, then from \eqref{eta} we have
$$
{\partial \over \partial\nu}(\nu\cdot\mathbf{w})={\partial\over\partial\nu}(\eta\,\nu\cdot\mathbf{u})=\eta{\partial\over\partial\nu}(\nu\cdot\mathbf{u})+(\nu\cdot\mathbf{u}){\partial\eta\over \partial\nu}=0.
$$
So we have
\begin{equation}
\mathbf{w}_T=\mathbf{0},\qquad {\partial\over\partial\nu}(\nu\cdot\mathbf{w})=0\qquad\text{if } 0<t\leq T,\;\; x\in\partial U.
\end{equation}
Denote
$$
G_T=(0,T]\times U,\qquad
L_T=(0,T]\times \partial U.
$$
We see that $\mathbf{w}$ is a weak solution of a modified system on $G_T$, namely
\begin{equation}\label{eqw}
\left\{\begin{aligned}
&{\partial\mathbf{w}\over\partial t}-a\,\Delta \mathbf{w}+\mathcal B\,\operatorname{curl}\mathbf{w}+c\mathbf{w}=\mathbf{F},\quad\operatorname{div}\mathbf{w}=g,\quad\; & (t,x)\in G_T,\\
&\mathbf{w}_T=\mathbf{0}, & (t,x)\in L_T,\\
&\mathbf{w}(0,x)=\mathbf{w}^0, & x\in U,
\end{aligned}\right.
\end{equation}
where
$$\begin{aligned}
&\mathbf{F}=\eta\mathbf{f}-a\Big(\Delta\eta\,\mathbf{u}+2\sum_{j=1}^3\partial_j\eta\,\partial_j\mathbf{u}\Big)-\mathcal B(\nabla\eta\times\mathbf{u}),\\
&g=\nabla\eta\cdot\mathbf{u},\qquad \mathbf{w}^0=\eta\mathbf{u}^0.
\end{aligned}
$$
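These expressions follow from the product rules
$$
\Delta(\eta\mathbf{u})=\eta\,\Delta\mathbf{u}+2\sum_{j=1}^3\partial_j\eta\,\partial_j\mathbf{u}+\Delta\eta\,\mathbf{u},\qquad
\operatorname{curl}(\eta\mathbf{u})=\eta\,\operatorname{curl}\mathbf{u}+\nabla\eta\times\mathbf{u},\qquad
\operatorname{div}(\eta\mathbf{u})=\eta\,\operatorname{div}\mathbf{u}+\nabla\eta\cdot\mathbf{u},
$$
applied to \eqref{eqB.1}, using $\operatorname{div}\mathbf{u}=0$.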
Now we write
$$
\mathbf{w}=(w_1,w_2,w_3)^t,\quad \mathbf{F}=(F_1,F_2,F_3)^t,\quad
\mathcal B\,\operatorname{curl}\mathbf{w}=\mathbf{H}=(H_1,H_2,H_3)^t.
$$
Then $w_1$, $w_2$ satisfy
\begin{equation}\label{eq-w12}
\left\{\begin{aligned}
&\partial_tw_j-a\Delta w_j+cw_j=F_j-H_j,\quad& (t,x)\in G_{T},\\
&w_j=0,\quad &(t,x)\in L_T,\\
&w_j(0,x)=w_j^0,\quad& x\in U,
\end{aligned}\right.
\end{equation}
$j=1,2$, and $w_3$ satisfies
\begin{equation}\label{eq-w3}
\left\{\begin{aligned}
&\partial_tw_3-a\Delta w_3+cw_3=F_3-H_3, \quad& (t,x)\in G_{T},\\
&{\partial w_3\over\partial\nu}=0,\quad & (t,x)\in L_T,\\
&w_3(0,x)=w_3^0,\quad&x\in U.
\end{aligned}\right.
\end{equation}
From the assumption on $\mathbf{u}^0$ and \eqref{adm2} we see that the following compatibility condition for the parabolic Dirichlet problem \eqref{eq-w12} is satisfied for $x\in \partial U$ and for $j=1,2$:
$$
w^0_j(x)=0,\qquad -a(0,x)\Delta w^0_j(x)=F_j(0,x)-H_j(0,x).
$$
We can apply the regularity theory of parabolic equations to get a priori estimates of the solution $\mathbf{w}$ in terms of the $F_j$'s and $H_j$'s. However, we cannot directly obtain the final estimate by iterating the local estimates on $B_R^+$. Recall that standard iteration processes such as the bootstrap argument require the right-hand side terms to be controlled by the unknowns. In our case the $F_j$'s can be controlled by $\nabla \mathbf{u}$, but not by $\nabla \mathbf{w}$, so we cannot improve the regularity of the $F_j$'s over the whole region $B_R^+$ by iteration.
Nevertheless, once we have improved the regularity of $\mathbf{w}$ in $B_R^+$, we obtain improved regularity of $\mathbf{u}$ in $B_{R_1}^+$ for some $R_1<R$ on which $\eta=1$. Hence we can improve the regularity of the $F_j$'s over $B_{R_1}^+$, and we can then iterate the above estimation to get further improved estimates of the solution $\mathbf{w}$ on $B_{R_1}^+$ in terms of the $F_j$'s.
We iterate this procedure finitely many times to get improved estimates on smaller regions.
Following the above idea, we shall first derive estimates of the solution $\mathbf{w}$ of \eqref{eqw} in terms of $\mathbf{F}$ and $g$.
(a) First of all, by the Sobolev imbedding in $\mathbb{R}^3$ we have
$$
L^2(0,T; H^2(U))\hookrightarrow L^2(0,T; W^{1,p}(U)),\quad\forall\, 1<p\leq 6.
$$
(b)
\begin{equation}\label{F}
\begin{aligned}
&|\mathbf{H}|\leq C|\operatorname{curl}\mathbf{w}|,\\
&|\mathbf{F}|\leq C(|\mathbf{f}|+|\mathbf{u}|+|\nabla \mathbf{u}|).
\end{aligned}
\end{equation}
In the following we derive the $L^p$ estimates.
{\it Step 1}. $L^p$ estimate for $w_1, w_2$.
We use the following iteration argument. If $1<p<\infty$ is such that
\begin{equation}\label{dpsi-Lp}
\|\operatorname{curl}\mathbf{w}\|_{L^p(G_T)}< \infty,\qquad \|\mathbf{F}\|_{L^p(G_T)}<\infty,
\end{equation}
then
\begin{equation}\label{H-Lp}
\|\mathbf{H}\|_{L^p(G_T)}<\infty,
\end{equation}
and we can
apply the global $L^p$ estimate for the Dirichlet problem of the heat equation (see \cite[p.176, Theorem 7.17]{Lib})
to \eqref{eq-w12} to get, for $j=1,2$,
\begin{equation}\label{wjLp1}
\begin{aligned}
\|w_j\|_{W^{2,1,p}(G_T)}\leq& C\{ \|F_j-H_j\|_{L^p(G_T)}+\|w_j^0\|_{W^{2,p}(U)}\}\\
\leq & C\{\|\mathbf{F}\|_{L^p(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^p(G_T)}+\|w_j^0\|_{W^{2,p}(U)}\},
\end{aligned}
\end{equation}
where $C$ depends only on $U, T, a, c, p$.
Now \eqref{dpsi-Lp} is true for $p=2$ by assumption, hence by \eqref{wjLp1} we have
$$
w_j\in W^{2,1,2}(G_T),\quad j=1,2.
$$
Then by Sobolev imbedding (see \cite[p.26, Theorem 3.14 (i)]{H} with
$$
p_1\equiv q={(n+2)p\over n+2-p}={5p\over 5-p}={10\over 3}
$$
when $p=2$) we see that $|\nabla_x w_j|\in L^{p_1}(G_T)$ with $p_1=10/3$, and
$$\begin{aligned}
\|\nabla_x w_j\|_{L^{p_1}(G_T)}\leq& C\|w_j\|_{W^{2,1,2}(G_T)}
\leq C\{\|\mathbf{F}\|_{L^2(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^2(G_T)}+\|w_j^0\|_{W^{2,2}(U)}\}
\end{aligned}
$$
for $j=1, 2$, where $C$ depends only on $U, T, a, c, p_1$.
{\it Step 2}. $L^p$ estimate for $w_3$.
If $1<p<\infty$ is such that \eqref{dpsi-Lp} holds, hence \eqref{H-Lp} is true, then we can apply the global $L^p$ estimate for the Neumann problem of the heat equation to \eqref{eq-w3} to get
\begin{equation}\label{w3Lp1}
\begin{aligned}
\|w_3\|_{W^{2,1,p}(G_T)}\leq& C\{\|F_3-H_3\|_{L^p(G_T)} +\|w_3^0\|_{W^{2,p}(U)}\}\\
\leq & C\{\|\mathbf{F}\|_{L^p(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^p(G_T)} +\|w_3^0\|_{W^{2,p}(U)}\},
\end{aligned}
\end{equation}
where $C$ depends only on $U, T, a, c, p$.
Now \eqref{dpsi-Lp} is true for $p=2$ by assumption, hence by \eqref{w3Lp1} $w_3\in W^{2,1,2}(G_T)$. Then from the Sobolev imbedding (see \cite[p.26, Theorem 3.14 (i)]{H} with $p=2$) we see that $|\nabla_x w_3|\in L^{p_1}(G_T)$ with $p_1=10/3$, and
$$
\|\nabla_x w_3\|_{L^{p_1}(G_T)}\leq C\|w_3\|_{W^{2,1,2}(G_T)},
$$
where $C$ depends only on $U, T, a, c, p_1$.
Combining the results for $w_1, w_2, w_3$ we get
\begin{equation}\label{w-Lp1}
\begin{aligned}
&\|\nabla_x \mathbf{w}\|_{L^{p_1}(G_T)}\leq C\|\mathbf{w}\|_{W^{2,1,2}(G_T)}
\leq C\{\|\mathbf{F}\|_{L^2(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^2(G_T)}+\|\mathbf{w}^0\|_{W^{2,2}(U)}\},
\end{aligned}
\end{equation}
where $C$ depends only on $U, T, a, c, p_1$.
It follows that \eqref{dpsi-Lp} is true with $p$ replaced by $p_1=10/3$.
{\it Step 3.} By the choice of the cut-off function $\eta$ we have $\eta=1$ on $\overline{B^+_R}$, so $\mathbf{w}=\mathbf{u}$ on $\overline{B^+_R}$. From Steps 1 and 2 we see that
$$
\mathbf{u}\in W^{2,1,2}(Q_{R,T},\mathbb{R}^3),
$$
and
\begin{equation}\label{u-Lp1}
\begin{aligned}
\|\mathbf{u}\|_{L^{p_1}(Q_{R,T})}\leq &\|\mathbf{u}\|_{W^{2,1,2}(Q_{R,T})}\leq \|\mathbf{w}\|_{W^{2,1,2}(G_T)}\\
\leq& C\{\|\mathbf{F}\|_{L^{2}(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^{2}(G_T)}+\|\mathbf{w}^0\|_{W^{2,2}(U)}\},
\end{aligned}
\end{equation}
where $C$ depends only on $R, R_0, T, U, \eta, a, c, p_1$. We can construct the domain $U$ in \eqref{domain-U} and the cut-off function $\eta$ so that they depend only on $R$ and $R_0$. Then in the above inequality the constant $C$ depends only on $R, R_0, T, a, c$.
Then we have
$$\begin{aligned}
&\|\operatorname{curl}\mathbf{w}\|_{L^2(G_T)}\leq C\{ \|\mathbf{u}\|_{L^2(G_T)}+\|\nabla_x \mathbf{u}\|_{L^2(G_T)}\},\\
&\|\mathbf{w}^0\|_{W^{2,2}(U)}\leq C\|\mathbf{u}^0\|_{W^{2,2}(B_{R_0}^+)},
\end{aligned}
$$
where $C$ depends only on $R, R_0$. From \eqref{F} and \eqref{u-Lp1} we see that, for $p_1=10/3$,
$$\begin{aligned}
\|\mathbf{F}\|_{L^{p_1}(Q_{R,T})}\leq& C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R,T})}+\|\mathbf{u}\|_{L^{p_1}(Q_{R,T})}+\|\nabla \mathbf{u}\|_{L^{p_1}(Q_{R,T})}\}\\
\leq & C\{\|\mathbf{f}\|_{L^{p_1}(G_T)}+\|\mathbf{F}\|_{L^2(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^2(G_T)}+\|\mathbf{w}^0\|_{W^{2,2}(U)}\}\\
\leq &C\{\|\mathbf{f}\|_{L^{p_1}(G_T)}
+\|\mathbf{u}\|_{L^2(G_T)}+\|\nabla_x\mathbf{u}\|_{L^2(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^2(G_T)}+\|\mathbf{w}^0\|_{W^{2,2}(U)}\}\\
\leq &C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,2}(B^+_{R_0})}\},
\end{aligned}
$$
where $C$ depends only on $R, R_0, T, a, c, p_1$.
{\it Step 4.}
Let $0<R_2<R_1<R_0$ be given. We replace $R$ by $R_1$ in the above argument to conclude that $\mathbf{u}\in W^{2,1,2}(Q_{R_1,T})$ with
\begin{equation}\label{u-Lp1R1}
\begin{aligned}
&\|\mathbf{u}\|_{L^{p_1}(Q_{R_1,T})}\leq \|\mathbf{u}\|_{W^{2,1,2}(Q_{R_1,T})}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,2}(B^+_{R_0})}\},
\end{aligned}
\end{equation}
where $p_1=10/3$ and $C$ depends only on $R_1, R_0, T, a, c, p_1$. Then
$$
\|\mathbf{F}\|_{L^{p_1}(Q_{R_1,T})}\leq C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,2}(B^+_{R_0})}\},
$$
where $C$ depends only on $R_1, R_0, T, a, c, p_1$. From \eqref{w-Lp1} and \eqref{u-Lp1} we have
\begin{equation}\label{H-Lp1}
\begin{aligned}
&\|\mathbf{H}\|_{L^{p_1}(Q_{R_1,T})}\leq C\|\operatorname{curl}\mathbf{w}\|_{L^{p_1}(Q_{R_1,T})}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,2}(B^+_{R_0})}\},
\end{aligned}
\end{equation}
where $C$ depends only on $R_1, R_0, T, a, c, p_1$.
Then we can repeat the above argument with $R$ and $R_0$ replaced by $R_2$ and $R_1$ to get a $W^{2,1,p_1}$ estimate on $Q_{R_2,T}$. More precisely,
we can take a domain $U_2$ with $C^{2+\alpha}$ boundary and a smooth function $\eta_2$ supported in $U_2$ such that
\begin{equation}\label{domain-U2}
B_{R_2}^+\subset U_2\subset B_{R_1}^+,\quad \eta_2(x)=1\;\;\text{if } x\in B_{R_2}^+,\quad \eta_2(x)=0\;\;\text{if } x\in B^+_{R_1}\setminus U_2.
\end{equation}
Set $G^2_T=(0,T]\times U_2$.
Then the conditions \eqref{dpsi-Lp} and \eqref{H-Lp} hold on $G^2_T$ with $p$ replaced by $p_1$.
As in Steps 1 and 2, we apply the $L^p$ estimates for the heat equation to get \eqref{wjLp1} and \eqref{w3Lp1}
on $G^2_T$ with $p$ replaced by $p_1$, and then we get \eqref{u-Lp1} with $R$ and $R_0$ replaced
by $R_2$ and $R_1$. So we now have
\begin{equation}\label{u-Lp2R2}
\begin{aligned}
&\|\mathbf{u}\|_{L^{p_2}(Q_{R_2,T})}\leq \|\mathbf{u}\|_{W^{2,1,p_1}(Q_{R_2,T})}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_1,T})}+\|\mathbf{u}\|_{L^{p_1}(Q_{R_1,T})}
+\|\nabla\mathbf{u}\|_{L^{p_1}(Q_{R_1,T})}
+\|\mathbf{u}^0\|_{W^{2,p_1}(B^+_{R_1})}\}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_1}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,p_1}(B^+_{R_0})}\},
\end{aligned}
\end{equation}
where
$$
p_2={(n+2)p_1\over n+2-p_1}={5p_1\over 5-p_1}=10
$$
and $C$ depends only on $R_1, R_0, T, a, c, p_2$. It further follows that
$$
\mathbf{F}\in L^{p_2}(Q_{R_2,T},\mathbb{R}^3),\qquad \mathbf{H}\in L^{p_2}(Q_{R_2,T},\mathbb{R}^3).
$$
{\it Step 5}.
Now we fix $0<R<R_0$ and let
$$
R_k=R_0-{R_0-R\over 2^k},\quad p_{k+1}={5p_k\over 5-p_k},\quad k=1, 2, 3,\cdots.
$$
After having obtained
$$
\mathbf{u}\in W^{2,1,p_k}(Q_{R_k,T},\mathbb{R}^3),
$$
hence
$$
\mathbf{F}\in L^{p_{k+1}}(Q_{R_k,T},\mathbb{R}^3),\qquad \mathbf{H}\in L^{p_{k+1}}(Q_{R_k,T},\mathbb{R}^3),
$$
we apply the iteration argument with $p=p_{k+1}$ on $Q_{R_k,T}$ to conclude that
$$
\mathbf{u}\in W^{2,1,p_{k+1}}(Q_{R_{k+1},T},\mathbb{R}^3),
$$
and
\begin{equation}
\begin{aligned}
&\|\mathbf{u}\|_{W^{2,1,p_{k+1}}(Q_{R_{k+1},T})}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_{k+1}}(Q_{R_k,T})}+\|\mathbf{u}\|_{L^{p_k}(Q_{R_k,T})}+\|\nabla \mathbf{u}\|_{L^{p_k}(Q_{R_k,T})}+\|\mathbf{u}^0\|_{W^{2,p_{k+1}}(B^+_{R_k})}\}\\
\leq& C\{\|\mathbf{f}\|_{L^{p_{k+1}}(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,p_{k+1}}(B^+_{R_0})}\},
\end{aligned}
\end{equation}
where $C$ depends only on $R, R_0, T, a, c, p_k$.
Note that $p_k\to\infty$ and $0<R<R_k<R_0$ for all $k$.
{\it Step 6.}
For any $1<q<\infty$, we can choose $U$ and $\eta$ such that \eqref{domain-U} and \eqref{eta} hold. Then we repeat the $L^q$ estimate for $\mathbf{w}=\eta\mathbf{u}$ on $G_T=(0,T]\times U$ to get a $W^{2,1,q}$ estimate for $\mathbf{w}$ on $G_T$, which implies that
$$
\mathbf{u}\in W^{2,1,q}(Q_{R,T},\mathbb{R}^3),
$$
and
\begin{equation}
\begin{aligned}
&\|\mathbf{u}\|_{W^{2,1,q}(Q_{R,T})}\leq \|\mathbf{w}\|_{W^{2,1,q}(G_T)}\\
\leq& C\{\|\mathbf{f}\|_{L^q(G_T)}+\|\mathbf{w}\|_{L^q(G_T)}+\|\operatorname{curl}\mathbf{w}\|_{L^q(G_T)}+\|\mathbf{w}^0\|_{W^{2,q}(U)}\}\\
\leq & C\{ \|\mathbf{f}\|_{L^q(Q_{R_0,T})}+\|\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\nabla_x\mathbf{u}\|_{L^2(Q_{R_0,T})}+\|\mathbf{u}^0\|_{W^{2,q}(B^+_{R_0})}\},
\end{aligned}
\end{equation}
where $C$ depends only on $R, R_0, T, a, c, q$.
\end{proof}
\sigmaubsection{$C^{1+\alpha,(1+\alpha)/2}$-estimates}\
\betaegin{Cor}\label{Cor2.2} Under the assumption of Lemma \bold ref{Lem2.1}, if $0<\alpha<1$ and $0<R<R_0$, the weak solution $\bold u$ is of
$C^{1+\alpha,\alpha/2}$, and
$$
\|\bold u\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R,T})}\leq C\{ \|\bold f\|_{L^q(Q_{R_0,T})}+\|\bold u\|_{L^2(Q_{R_0,T})}+\|\bold nabla_x\bold u\|_{L^2(Q_{R_0,T})}+\|\bold u^0\|_{W^{2,q}(B^+_{R_0})}\},
$$
where $5/(1-\alpha)<q<\infty$, $C$ depends only on $R, R_0, T, a, c, \alpha, q$.
\bold end{Cor}
\betaegin{proof} We apply the Sobolev imbedding given in \cite[p.26, Theorem 3.14 (3)]{H} (with $p=q>5/(1-\alpha)$) to conclusion that
$$
\bold u\in C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R,T},\bold Bbb R^3)
$$
and
$$
\|\bold u\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R,T})}\leq C\|\bold u\|_{W^{2,1,q}(Q_{R,T})}.
$$
Then the conclusion follows from Lemma \bold ref{Lem2.1}.
\bold end{proof}
\sigmaubsection{Schauder estimates}\
\betaegin{Lem}\label{Lem2.3}
Assume that
$$\alphaligned
&a,\; c\in C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T}),\quadq a(t,x)\gammaeq a_0>0,\\
&\mathcal B\in C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T},M(3)),\quadq \bold f\in C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T},\bold Bbb R^3),\\
& \bold u^0\in C^{2+\alpha}(\overline{B}_{R_0}^+,\bold Bbb R^3),\quadq \bold nablaiv\bold u^0=0\;\;\bold text{\bold rm in }B_{R_0},\quadq \bold u^0_T=\bold 0\;\;\bold text{\bold rm on }\Sigma_{R_0},
\bold endaligned
$$
and assume \betaegin{equation}ref{adm} holds.
If $\bold u$ is a weak solution of \betaegin{equation}ref{eqB.1} on $Q_{R_0,T}$, then for any $0<R<R_0$ we have $\bold u\in C^{2+\alpha,1+\alpha/2}(\overline{Q}_{R,T})$ and
$$
\|\bold u\|_{C^{2+\alpha,1+\alpha/2}(\overline{Q}_{R,T})}\leq C\{\|\bold f\|_{C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T})}+\|\bold u\|_{L^2(Q_{R_0,T})}+\|\bold nabla_x\bold u\|_{L^2(Q_{R_0,T})}+\|\bold u^0\|_{C^{2+\alpha}(\overline{B}_{R_0}^+)}\},
$$
where $C$ depends only on $R_0$, $R$, $T$, $\alpha$, $\mathcal B$, $a$, $c$.
\bold end{Lem}
\betaegin{proof}
In the following we use the fact that if $u\in C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R,T})$, then $\bold nabla_x u\in C^{\alpha,\alpha/2}(\overline{Q}_{R,T})$.
Take $R_1$ depending only on $R$ and $R_0$ such that $R<R_1<R_0$.
From Corollary \bold ref{Cor2.2} we know that $\bold u\in C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R_1,T},\bold Bbb R^3)$, hence $\text{\rm curl\,}\bold u\in C^{\alpha,\alpha/2}(\overline{Q}_{R_1,T},\bold Bbb R^3)$, thus
$$
\mathcal B\,\text{\rm curl\,}\bold u\in C^{\alpha,\alpha/2}(\overline{Q}_{R_1,T},\bold Bbb R^3).
$$
Take a domain $U$ and a smooth cut-off function $\bold eta$ such that \betaegin{equation}ref{domain-U} is satisfied with $R_0$ replaced by $R_1$. Set $G_T=(0,T]\bold times U$ and $L_T=(0,T]\bold times\partial U$. Set $\bold w=\bold eta\bold u$. Then $\bold w$ is a weak solution of \betaegin{equation}ref{eqw}, where $\bold F\in C^{\alpha,\alpha/2}(\overline{G}_T,\bold Bbb R^3)$ because $\bold f\in C^{\alpha,\alpha/2}(\overline{G}_T,\bold Bbb R^3)$ and $\partial_j\bold u\in C^{\alpha,\alpha/2}(\overline{G}_T,\bold Bbb R^3)$.
Again we write $\bold w=(w_1,w_2,w_3)^t$, $\bold F=(F_1,F_2,F_3)^t$, $\mathcal B\,\text{\rm curl\,}\bold w=\bold H=(H_1,H_2,H_3)^t$. Then $w_1, w_2$ are weak solutions of \betaegin{equation}ref{eq-w12} and $w_3$ is a weak solution of \betaegin{equation}ref{eq-w3}.
Applying the global Schauder estimate for Dirichlet problem (see \cite[p.78, Theorem 4.28]{Lib})
to \betaegin{equation}ref{eq-w12} we have
$$
\|w_j\|_{C^{2+\alpha,1+\alpha/2}(\overline{G}_T)}\leq C_D\{\|F_j-H_j\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|w_j^0\|_{C^{2+\alpha}(\overline{U})}\},\quad j=1,2.
$$
Then applying the global Schauder estimate for Neumann problem \cite[p.79, Theorem 4.31]{Lib}
to \betaegin{equation}ref{eq-w3} we have
$$
\|w_3\|_{C^{2+\alpha,1+\alpha/2}(\overline{G}_T)}\leq C_N\{\|F_3-H_3\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|w_3^0\|_{C^{2+\alpha}(\overline{U})}\}.
$$
Therefore $\bold w\in C^{2+\alpha,1+\alpha/2}(\overline{G}_T,\bold Bbb R^3)$, and
\betaegin{equation}\label{wca}
\|\bold w\|_{C^{2+\alpha,1+\alpha/2}(\overline{G}_T)}\leq C\{\|\bold F\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|\bold H\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|\bold w^0\|_{C^{2+\alpha}(\overline{U})}\}.
\bold end{equation}
Note that
$$\alphaligned
\|\bold F\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}\leq & C\{\|\bold f\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|\bold u\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|\bold nabla_x\bold u\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}\},
\\
\|\bold H\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}\leq& C\|\text{\rm curl\,}\bold w\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}
\leq C\|\bold w\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{G}_T)}.
\bold endaligned
$$
From these and \betaegin{equation}ref{wca} we get
$$
\|\bold w\|_{C^{2+\alpha,1+\alpha/2}(\overline{G}_T)}\leq C\{\|\bold f\|_{C^{\alpha,\alpha/2}(\overline{G}_T)}+\|\bold u\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{G}_T)}+\|\bold w^0\|_{C^{2+\alpha}(\overline{U})}\}.
$$
Using the construction of $G_T$ and Corollary \bold ref{Cor2.2} with $R$ replaced by $R_1$, and with the index $q$ determined by $\alpha$, we have
$$\alphaligned
&\|\bold u\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{G}_T)}\leq \|\bold u\|_{C^{1+\alpha,(1+\alpha)/2}(\overline{Q}_{R_1,T})}\\
\leq& C\{\|\bold f\|_{L^q(Q_{R_1,T})}+\|\bold u\|_{L^2(Q_{R_0,T})}+\|\bold nabla_x\bold u\|_{L^2(Q_{R_0,T})}
+\|\bold u^0\|_{C^{2+\alpha}(\overline{B}_{R_0}^+)}\}\\
\leq& C\{\|\bold f\|_{C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T})}+\|\bold u\|_{L^2(Q_{R_0,T})}+\|\bold nabla_x\bold u\|_{L^2(Q_{R_0,T})}
+\|\bold u^0\|_{C^{2+\alpha}(\overline{B}_{R_0}^+)}\},
\bold endaligned
$$
where $C$ depends only on $R, R_0, T, \alpha, \mathcal B, a, c$, as $R_1$ is determined by $R$ and $R_0$.
Hence we get
$$
\|\bold u\|_{C^{2+\alpha,1+\alpha/2}(\overline{Q}_{R,T})}\leq C\{\|\bold f\|_{C^{\alpha,\alpha/2}(\overline{Q}_{R_0,T})}+\|\bold u\|_{L^2(Q_{R_0,T})}+\|\bold nabla_x\bold u\|_{L^2(Q_{R_0,T})}+\|\bold u^0\|_{C^{2+\alpha}(\overline{B}^+_{R_0})}\}.
$$
\bold end{proof}
\sigmaection{Estimates Near Curved Boundary}
\sigmaubsection{Computations in local coordinates near
boundary}\
Let us briefly recall the local coordinates near boundary $\partial\Omega$
determined by a diffeomorphism that straightens a piece of
surface, see \cite[section 3]{P} and \cite[Appendix]{BaP}. Let us fix a point $x_0\in \partial\Omega$, and
introduce new variables $y_1$, $y_2$ such that $\partial\Omega$ can be
represented (at least near $x_0$) by $\bold r=\bold r(y_1,y_2)$, and
$\bold r(0,0)=x_0$. Here and henceforth we denote $y=(y_1, y_2) $ and use
the notation $\bold r_j(y)=\partial_{y_j}\bold r(y)$, $\bold r_{ij}=\partial_{y_iy_j}\bold r(y)$,
etc. Let
$$\bold n(y)={\bold r_1(y)\bold times\bold r_2(y)\over |\bold r_1(y)\bold times\bold r_2(y)|}.
$$
We choose $(y_1, y_2)$ in such a way that $\bold n(y)$ is the inward
normal of $\partial\Omega$, and that the $y_1$- and $y_2$-curves on $\partial\Omega$
are the lines of principal curvature; thus, $\bold r_1(y)$ and $\bold r_2(y)$ are
orthogonal to each other. Let
$$g_{ij}(y)=\bold r_i(y)\cdot\bold r_j(y),\quadq
g(y)=\bold nablaet (g_{ij}(y))=g_{11}(y)g_{22}(y).
$$
Let us define a map $\mathcal F$ by
$$x={\mathcal F}(y, z)=\bold r(y_1,
y_2)+z\bold n(y_1, y_2).
$$
${\mathcal F}$ is a diffeomorphism from a ball
$B_R(0)$ onto a neighborhood $\mathcal U$ of the point $x_0$, and it
maps the half ball $B_R^+(0)$ onto a subdomain $\mathcal U\cap\Omega$, and
maps the disc $\{(y_1,y_2,0): y_1^2+y_2^2<R^2\}$ onto a subset $\Gamma$ of
$\partial\Omega$.
Denote the partial derivative $\partial_{y_j}$ by $\partial_j$, $j=1,2$, and denote $\partial_z$ by $\partial_3$. Let
$$G_{ij}(y,z)=\partial_i\mathcal F\cdot\partial_j\mathcal F,\quad i,j=1,2,3,
$$
and let
$G^{ij}(y,z)$ denote the elements of the inverse of the matrix
$(G_{ij}(y,z))_{3\bold times 3}$. Then
$$\alphaligned
&G_{jj}(y,z)=g_{jj}(y)[1-\kappa_j(y)z]^2,\quadq G^{jj}={1\over G_{jj}},\quadq j=1,2,\\
&G_{12}=G_{13}=G_{23}=G^{12}=G^{13}=G^{23}=0,\quadq G_{33}=G^{33}=1,\\
&G(y,z)\betaegin{equation}uiv \bold text{det}(G_{ij}(y,z))=G_{11}(y,z)G_{22}(y,z).
\bold endaligned
$$
Note that if $\partial\Omega$ is of $C^{k+\alpha}$, then $G_{jj}$ and $G^{jj}$ are of $C^{k-1+\alpha}$.
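As a quick sanity check (not part of the proof), the following {\tt Python}/{\tt sympy} computation verifies these identities for the unit sphere, whose principal curvatures with respect to the inward normal are $\kappa_1=\kappa_2=1$; the parametrization and the variable names are our own choices.
\begin{verbatim}
import sympy as sp

y1, y2, z = sp.symbols('y1 y2 z', real=True)

# Orthogonal parametrization of the unit sphere; every tangent direction is
# principal, so the y1- and y2-curves are lines of curvature.
r = sp.Matrix([sp.cos(y1)*sp.cos(y2), sp.cos(y1)*sp.sin(y2), sp.sin(y1)])
n = -r                       # inward unit normal of the unit ball
F = r + z*n                  # the map (y, z) -> x of the text
dF = [F.diff(v) for v in (y1, y2, z)]

g11 = r.diff(y1).dot(r.diff(y1))
g22 = r.diff(y2).dot(r.diff(y2))
G11 = dF[0].dot(dF[0])
G22 = dF[1].dot(dF[1])

print(sp.simplify(G11 - g11*(1 - z)**2))     # 0, i.e. G_11 = g_11 (1 - z)^2
print(sp.simplify(G22 - g22*(1 - z)**2))     # 0, i.e. G_22 = g_22 (1 - z)^2
print(sp.simplify(dF[0].dot(dF[1])),         # G_12 = 0
      sp.simplify(dF[0].dot(dF[2])),         # G_13 = 0
      sp.simplify(dF[1].dot(dF[2])))         # G_23 = 0
\end{verbatim}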
On the domain $\mathcal U$ we have an orthogonal coordinate framework
$\{\bold E_1,\bold E_2,\bold E_3\}$, where
$$\alphaligned
&\bold E_j(y,z)={\partial_j\mathcal F\over|\partial_j\mathcal
F|}={\bold r_j(y)\over\sigmaqrt{g_{jj}}},\quad j=1,2;\quadq \bold E_3(y,z)={\partial_3\mathcal
F\over|\partial_3\mathcal F|}=\bold n(y).
\bold endaligned
$$
Given a vector field $\bold B$ defined on $\overline{\Omega}$, we can represent $\bold B$ in a neighborhood of $x_0\in\partial\Omega$ in the
new variables $(y,z)\in B_R^+=\mathcal F^{-1}(\mathcal U\cap\Omega)$ as
follows:
\betaegin{equation}\label{A.1}
\alphaligned
&\bold tilde\bold B(y,z)=\bold B({\mathcal F}(y,z))
=\sigmaum_{j=1}^3G^{jj}(y,z)b_j(y,z)\partial_j{\mathcal F}(y,z)
=\sigmaum_{j=1}^3\bold tilde B_j(y,z)\bold E_j(y,z),\\
&b_j(y,z)=\bold B(\mathcal F(y,z))\cdot\partial_j{\mathcal F}(y,z),\quadq \bold tilde
B_j(y,z)={b_j(y,z)\over\sigmaqrt{G_{jj}(y,z)}}.
\bold endaligned
\bold end{equation}
We compute, at the point $x=\mathcal F(y,z)$,
\betaegin{equation}\label{A.2}
\alphaligned
&\text{\rm curl\,}\bold B(x)=\sigmaum_{j=1}^3\bold tilde R_j(y,z)\bold E_j(y,z),\\
&\bold nablaiv\bold B={1\over\sigmaqrt{G}}\bold Big[\sigmaum_{j=1}^2\partial_j\bold Big(\sigmaqrt{G\over G_{jj}}\bold tilde B_j\bold Big)
+\partial_3\bold Big(\sigmaqrt{G\over G_{33}}\bold tilde B_3\bold Big)\bold Big],
\bold endaligned
\bold end{equation}
where
$$\alphaligned
\bold tilde R_1(y,z)&={1\over\sigmaqrt{G_{22}G_{33}}}[\partial_2(\bold tilde
B_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde B_2\sigmaqrt{G_{22}})]
={1\over \sigmaqrt{G_{22}}}[\partial_2b_3-\partial_3b_2],\\
\bold tilde R_2(y,z)&={1\over\sigmaqrt{G_{33}G_{11}}}[\partial_3(\bold tilde
B_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde B_3\sigmaqrt{G_{33}})]
={1\over \sigmaqrt{G_{11}}}[\partial_3b_1-\partial_1b_3],\\
\bold tilde R_3(y,z)&={1\over\sigmaqrt{G_{11}G_{22}}}[\partial_1(\bold tilde
B_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde B_1\sigmaqrt{G_{11}})] ={1\over
\sigmaqrt{G_{11}G_{22}}}[\partial_1b_2-\partial_2b_1].
\bold endaligned
$$
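One may likewise check the above expressions for the components $\bold tilde R_j$ against the Cartesian curl. The following sketch (again only an illustration, with a sample field and the unit-sphere coordinates of the previous sketch, both our own choices) evaluates the difference numerically at a point.
\begin{verbatim}
import sympy as sp

y1, y2, z = sp.symbols('y1 y2 z', real=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

r = sp.Matrix([sp.cos(y1)*sp.cos(y2), sp.cos(y1)*sp.sin(y2), sp.sin(y1)])
F = r + z*(-r)                               # F(y, z) = r(y) + z n(y), n = -r
dF = [F.diff(v) for v in (y1, y2, z)]
G = [sp.simplify(dF[i].dot(dF[i])) for i in range(3)]    # G_11, G_22, G_33 = 1
E = [dF[i] / sp.sqrt(G[i]) for i in range(3)]            # orthonormal frame E_j

# Sample vector field in Cartesian coordinates and its Cartesian curl.
B = sp.Matrix([x2**2, x3, x1*x2])
curlB = sp.Matrix([sp.diff(B[2], x2) - sp.diff(B[1], x3),
                   sp.diff(B[0], x3) - sp.diff(B[2], x1),
                   sp.diff(B[1], x1) - sp.diff(B[0], x2)])

sub = dict(zip((x1, x2, x3), F))
b = [B.subs(sub).dot(dF[i]) for i in range(3)]           # b_j = B(F) . d_j F

# Components of curl B in the frame E_j, by the formulas above (G_33 = 1).
R1 = (sp.diff(b[2], y2) - sp.diff(b[1], z)) / sp.sqrt(G[1])
R2 = (sp.diff(b[0], z) - sp.diff(b[2], y1)) / sp.sqrt(G[0])
R3 = (sp.diff(b[1], y1) - sp.diff(b[0], y2)) / sp.sqrt(G[0]*G[1])

pt = {y1: sp.Rational(3, 10), y2: sp.Rational(7, 10), z: sp.Rational(1, 5)}
for Rj, Ej in zip((R1, R2, R3), E):
    print(sp.N((Rj - curlB.subs(sub).dot(Ej)).subs(pt)))   # each ~ 0
\end{verbatim}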
If we write
$$
\text{\rm curl\,}^2\bold B=\sigmaum_{j=1}^3\bold tilde T_j(y,z)\bold E_j(y,z),
$$
then
$$\alphaligned
\bold tilde T_1(y,z)&={1\over\sigmaqrt{G_{22}G_{33}}}[\partial_2(\sigmaqrt{G_{33}}\bold tilde
R_3)-\partial_3(\sigmaqrt{G_{22}}\bold tilde R_2)]\\
&={1\over\sigmaqrt{G_{22}G_{33}}}\partial_2\bold Big\{ {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}}\bold Big[\partial_1(\bold tilde
B_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde B_1\sigmaqrt{G_{11}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{22}G_{33}}} \partial_3\bold Big\{ {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}}\bold Big[\partial_3(\bold tilde
B_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde B_3\sigmaqrt{G_{33}})\bold Big]\bold Big\},
\bold endaligned
$$
$$\alphaligned
\bold tilde T_2(y,z)&={1\over\sigmaqrt{G_{33}G_{11}}}[\partial_3(\sigmaqrt{G_{11}}\bold tilde
R_1)-\partial_1(\sigmaqrt{G_{33}}\bold tilde R_3)]\\
&={1\over\sigmaqrt{G_{33}G_{11}}}\partial_3\bold Big\{ {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}}\bold Big[\partial_2(\bold tilde
B_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde B_2\sigmaqrt{G_{22}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{33}G_{11}}} \partial_1\bold Big\{ {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}}\bold Big[\partial_1(\bold tilde
B_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde B_1\sigmaqrt{G_{11}})\bold Big]\bold Big\},
\bold endaligned
$$
$$\alphaligned
\bold tilde T_3(y,z)&={1\over\sigmaqrt{G_{11}G_{22}}}[\partial_1(\sigmaqrt{G_{22}}\bold tilde
R_2)-\partial_2(\sigmaqrt{G_{11}}\bold tilde R_1)]\\
&={1\over\sigmaqrt{G_{11}G_{22}}}\partial_1\bold Big\{ {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}}\bold Big[\partial_3(\bold tilde
B_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde B_3\sigmaqrt{G_{33}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{11}G_{22}}} \partial_2\bold Big\{ {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}}\bold Big[\partial_2(\bold tilde
B_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde B_2\sigmaqrt{G_{22}})\bold Big]\bold Big\}.
\bold endaligned
$$
Let $\bold u$ be a solution of \betaegin{equation}ref{eqB}. In the neighbourhood $\mathcal U$ near boundary, we write
\betaegin{equation}
\alphaligned
&\bold tilde\bold u(t,y,z)=\bold u(t,{\mathcal F}(y,z))
=\sigmaum_{j=1}^3\bold tilde u_j(t,y,z)\bold E_j(y,z),\\
&\bold nablaiv\bold u={1\over\sigmaqrt{G}}\bold Big[\sigmaum_{j=1}^2\partial_j\bold Big(\sigmaqrt{G\over G_{jj}}\bold tilde u_j\bold Big)
+\partial_3\bold Big(\sigmaqrt{G\over G_{33}}\bold tilde u_3\bold Big)\bold Big],\\
&\text{\rm curl\,}\bold u(x)=\sigmaum_{j=1}^3\bold tilde R_j(t,y,z)\bold E_j(y,z),\quadq
\text{\rm curl\,}^2\bold u=\sigmaum_{j=1}^3\bold tilde T_j(t,y,z)\bold E_j(y,z),\\
&\mathcal B\,\text{\rm curl\,}\bold u=\bold h=\sigmaum_{j=1}^3\bold tilde h_j(t,y,z)\bold E_j(y,\bold z),\quadq \bold f=\sigmaum_{j=1}^3\bold tilde f_j(t,y,z)\bold E_j(y,\bold z).
\bold endaligned
\bold end{equation}
Recall that $G_{33}=1$. We have
$$\alphaligned
\bold tilde T_1&={1\over\sigmaqrt{G_{22}G_{33}}}\partial_2\bold Big\{ {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}}\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{22}G_{33}}} \partial_3\bold Big\{ {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}}\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\bold Big\}\\
=&{1\over G_{22}\sigmaqrt{G_{11}}}\bold Big[\partial_{12}(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_{22}(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
&+{1\over\sigmaqrt{G_{22}G_{33}}}\partial_2({\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
&- {1\over G_{33}\sigmaqrt{G_{11}}}\bold Big[\partial_{33}(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_{13}(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
&- {1\over\sigmaqrt{G_{22}G_{33}}} \partial_3( {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
=&{1\over G_{22}\sigmaqrt{G_{11}}}\bold Big[\sigmaqrt{G_{22}}\partial_{12}\bold tilde u_2+\partial_2\bold tilde u_2\partial_1\sigmaqrt{G_{22}}+\partial_1(\bold tilde u_2\partial_2\sigmaqrt{G_{22}})\\
&\quadqq -\sigmaqrt{G_{11}}\partial_{22}\bold tilde u_1-\partial_2\bold tilde u_1\partial_2\sigmaqrt{G_{11}}-\partial_2(\bold tilde u_1\partial_2\sigmaqrt{G_{11}})\bold Big]\\
&+{1\over\sigmaqrt{G_{22}G_{33}}}\partial_2({\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
&- {1\over G_{33}\sigmaqrt{G_{11}}}\bold Big[\sigmaqrt{G_{11}}\partial_{33}\bold tilde u_1+\partial_3\bold tilde u_1\partial_3\sigmaqrt{G_{11}}+\partial_3(\bold tilde u_1\partial_3\sigmaqrt{G_{11}}) -\partial_{13}\bold tilde u_3\bold Big]\\
&- {1\over\sigmaqrt{G_{22}G_{33}}} \partial_3( {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
=&-{1\over G_{22}}\partial_{22}\bold tilde u_1-\partial_{33}\bold tilde u_1+{1\over\sigmaqrt{G}}\partial_{12}\bold tilde u_2+{1\over\sigmaqrt{G_{11}}}\partial_{13}\bold tilde u_3+\partialhi_1,
\bold endaligned
$$
where
$$\alphaligned
\partialhi_1=&{1\over G_{22}\sigmaqrt{G_{11}}}\bold Big[\partial_2\bold tilde u_2\partial_1\sigmaqrt{G_{22}}+\partial_1(\bold tilde u_2\partial_2\sigmaqrt{G_{22}})
-\partial_2\bold tilde u_1\partial_2\sigmaqrt{G_{11}}-\partial_2(\bold tilde u_1\partial_2\sigmaqrt{G_{11}})\bold Big]\\
&+{1\over\sigmaqrt{G_{22}}}\partial_2({1\over\sigmaqrt{G_{11}G_{22}}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
&- {1\over \sigmaqrt{G_{11}}}\bold Big[\partial_3\bold tilde u_1\partial_3\sigmaqrt{G_{11}}+\partial_3(\bold tilde u_1\partial_3\sigmaqrt{G_{11}})\bold Big]
- {1\over\sigmaqrt{G_{22}}} \partial_3( {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1\bold tilde u_3\bold Big].
\bold endaligned
$$
$$\alphaligned
\bold tilde T_2&={1\over\sigmaqrt{G_{33}G_{11}}}\partial_3\bold Big\{ {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}}\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{33}G_{11}}} \partial_1\bold Big\{ {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}}\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\bold Big\}\\
=&{1\over G_{33}\sigmaqrt{G_{22}}}\bold Big[\partial_{23}(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_{33}(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
&+{1\over\sigmaqrt{G_{33}G_{11}}}\partial_3({\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}})\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
&- {1\over G_{11}\sigmaqrt{G_{22}}}\bold Big[\partial_{11}(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_{12}(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
&- {1\over\sigmaqrt{G_{33}G_{11}}} \partial_1( {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
=&{1\over G_{33}\sigmaqrt{G_{22}}}\bold Big[\partial_{23}\bold tilde u_3-\sigmaqrt{G_{22}}\partial_{33}\bold tilde u_2 -\partial_3\bold tilde u_2\partial_3\sigmaqrt{G_{22}}-\partial_3(\bold tilde u_2\partial_3\sigmaqrt{G_{22}})\bold Big]\\
&+{1\over\sigmaqrt{G_{33}G_{11}}}\partial_3({\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}})\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
&- {1\over G_{11}\sigmaqrt{G_{22}}}\bold Big[\sigmaqrt{G_{22}}\partial_{11}\bold tilde u_2+\partial_1\bold tilde u_2\partial_1\sigmaqrt{G_{22}}+\partial_1(\bold tilde u_2\partial_1\sigmaqrt{G_{22}})\\
&\quadqq - \sigmaqrt{G_{11}}\partial_{12}\bold tilde u_1-\partial_2\bold tilde u_1\partial_1\sigmaqrt{G_{11}}-\partial_1(\bold tilde u_1\partial_2\sigmaqrt{G_{11}}) \bold Big]\\
&- {1\over\sigmaqrt{G_{33}G_{11}}} \partial_1( {\sigmaqrt{G_{33}}\over\sigmaqrt{G_{11}G_{22}}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big]\\
=&-{1\over G_{11}}\partial_{11}\bold tilde u_2-\partial_{33}\bold tilde u_2+{1\over\sigmaqrt{G}}\partial_{12}\bold tilde u_1+{1\over\sigmaqrt{G_{22}}}\partial_{23}\bold tilde u_3+\partialhi_2,
\bold endaligned
$$
where
$$\alphaligned
\partialhi_2=&-{1\over \sigmaqrt{G_{22}}}\bold Big[\partial_3\bold tilde u_2\partial_3\sigmaqrt{G_{22}}+\partial_3(\bold tilde u_2\partial_3\sigmaqrt{G_{22}})\bold Big]
+{1\over\sigmaqrt{G_{11}}}\partial_3({\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}}})\bold Big[\partial_2\bold tilde
u_3-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
&- {1\over G_{11}\sigmaqrt{G_{22}}}\bold Big[\partial_1\bold tilde u_2\partial_1\sigmaqrt{G_{22}}+\partial_1(\bold tilde u_2\partial_1\sigmaqrt{G_{22}})
-\partial_2\bold tilde u_1\partial_1\sigmaqrt{G_{11}}-\partial_1(\bold tilde u_1\partial_2\sigmaqrt{G_{11}})\bold Big]\\
&- {1\over\sigmaqrt{G_{11}}} \partial_1( {1\over\sigmaqrt{G}})\bold Big[\partial_1(\bold tilde
u_2\sigmaqrt{G_{22}})-\partial_2(\bold tilde u_1\sigmaqrt{G_{11}})\bold Big].
\bold endaligned
$$
$$\alphaligned
\bold tilde T_3&={1\over\sigmaqrt{G_{11}G_{22}}}\partial_1\bold Big\{ {\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}}\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\bold Big\}\\
&- {1\over\sigmaqrt{G_{11}G_{22}}} \partial_2\bold Big\{ {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}}\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\bold Big\}\\
=&{1\over G_{11}\sigmaqrt{G_{33}}}\bold Big[\partial_{13}(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_{11}(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
&+{1\over\sigmaqrt{G_{11}G_{22}}}\partial_1({\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
&- {1\over G_{22}\sigmaqrt{G_{33}}}\bold Big[\partial_{22}(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_{23}(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
&- {1\over\sigmaqrt{G_{11}G_{22}}} \partial_2( {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}})\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
=&{1\over G_{11}\sigmaqrt{G_{33}}}\bold Big[\partial_{13}\bold tilde u_1\sigmaqrt{G_{11}}+\partial_1\bold tilde u_1\partial_3\sigmaqrt{G_{11}}+\partial_3(\bold tilde u_1\partial_1\sigmaqrt{G_{11}})-\partial_{11}\bold tilde u_3 \bold Big]\\
&+{1\over\sigmaqrt{G_{11}G_{22}}}\partial_1({\sigmaqrt{G_{22}}\over\sigmaqrt{G_{33}G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1(\bold tilde u_3\sigmaqrt{G_{33}})\bold Big]\\
&- {1\over G_{22}\sigmaqrt{G_{33}}}\bold Big[\partial_{22}\bold tilde u_3 - \sigmaqrt{G_{22}}\partial_{23}\bold tilde u_2-\partial_2\bold tilde u_2\partial_3\sigmaqrt{G_{22}}-\partial_3(\bold tilde u_2\partial_2\sigmaqrt{G_{22}}) \bold Big]\\
&- {1\over\sigmaqrt{G_{11}G_{22}}} \partial_2( {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}G_{33}}})\bold Big[\partial_2(\bold tilde
u_3\sigmaqrt{G_{33}})-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big]\\
=&-{1\over G_{11}}\partial_{11}\bold tilde u_3-{1\over G_{22}}\partial_{22}\bold tilde u_3+{1\over\sigmaqrt{G_{11}}}\partial_{13}\bold tilde u_1+{1\over\sigmaqrt{G_{22}}}\partial_{23}\bold tilde u_2+\partialhi_3,
\bold endaligned
$$
where
$$\alphaligned
\partialhi_3=&{1\over G_{11}}\bold Big[\partial_1\bold tilde u_1\partial_3\sigmaqrt{G_{11}}+\partial_3(\bold tilde u_1\partial_1\sigmaqrt{G_{11}}) \bold Big]
+{1\over\sigmaqrt{G}}\partial_1({\sigmaqrt{G_{22}}\over\sigmaqrt{G_{11}}})\bold Big[\partial_3(\bold tilde
u_1\sigmaqrt{G_{11}})-\partial_1\bold tilde u_3\bold Big]\\
&+ {1\over G_{22}}\bold Big[\partial_2\bold tilde u_2\partial_3\sigmaqrt{G_{22}}+\partial_3(\bold tilde u_2\partial_2\sigmaqrt{G_{22}}) \bold Big]
- {1\over\sigmaqrt{G}} \partial_2( {\sigmaqrt{G_{11}}\over\sigmaqrt{G_{22}}})\bold Big[\partial_2\bold tilde
u_3-\partial_3(\bold tilde u_2\sigmaqrt{G_{22}})\bold Big].
\bold endaligned
$$
\sigmaubsection{Proof of Theorem \bold ref{Thm1}}\
\betaegin{proof} We only need to derive regularity near the boundary. Let $x^0\in \partial\Omega$. We take a neighbourhood $\mathcal U$ of $x^0$ and a diffeomorphism which maps $\mathcal U$ onto a domain $U$ with flat boundary portion $\Gamma$, such that the image of $x^0$ lies in the interior of $\Gamma$.
The equation in \betaegin{equation}ref{eqB} can be written as
\betaegin{equation}
\partial_t\bold tilde u_j+\bold tilde a\bold tilde T_j+\bold tilde c\bold tilde u_j=\bold tilde f_j-\bold tilde h_j.
\bold end{equation}
Now we simplify the formula by using the condition $\bold nablaiv\bold u=0$, which gives
$$\alphaligned
0=&\sigmaum_{j=1}^2\partial_j\bold Big(\sigmaqrt{G\over G_{jj}}\bold tilde u_j\bold Big)
+\partial_3\bold Big(\sigmaqrt{G\over G_{33}}\bold tilde u_3\bold Big)\\
=&\partial_1(\sigmaqrt{G_{22}}\bold tilde u_1)+\partial_2(\sigmaqrt{G_{11}}\bold tilde u_2)+\partial_3(\sigmaqrt{G}\bold tilde u_3)\\
=&\sigmaqrt{G_{22}}\partial_1\bold tilde u_1+\sigmaqrt{G_{11}}\partial_2\bold tilde u_2+\sigmaqrt{G}\partial_3\bold tilde u_3+\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}.
\bold endaligned
$$
Hence
\betaegin{equation}\label{div0}
\sigmaqrt{G_{22}}\partial_1\bold tilde u_1+\sigmaqrt{G_{11}}\partial_2\bold tilde u_2+\sigmaqrt{G}\partial_3\bold tilde u_3=-\bold tilde u_1\partial_1\sigmaqrt{G_{22}}-\bold tilde u_2\partial_2\sigmaqrt{G_{11}}-\bold tilde u_3\partial_3\sigmaqrt{G}.
\bold end{equation}
Write \betaegin{equation}ref{div0} as
$$
{1\over
\sigmaqrt{G}}\partial_2\bold tilde u_2+{1\over\sigmaqrt{G_{11}}}\partial_3\bold tilde u_3=-{1\over G_{11}}\partial_1\bold tilde u_1-{1\over G_{11}\sigmaqrt{G_{22}}}[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}].
$$
Differentiating in $y_1$ yields
\betaegin{equation}\label{eq1}
{1\over
\sigmaqrt{G}}\partial_{12}\bold tilde u_2+{1\over\sigmaqrt{G_{11}}}\partial_{13}\bold tilde u_3=-{1\over G_{11}}\partial_{11}\bold tilde u_1+\bold zeta_1,
\bold end{equation}
where
$$\alphaligned
\bold zeta_1=&-\partial_1\bold tilde u_1\partial_1({1\over G_{11}})-\partial_2\bold tilde u_2\partial_1({1\over\sigmaqrt{G}})-\partial_3\bold tilde u_3\partial_1({1\over\sigmaqrt{G_{11}}})\\
&-\partial_1\bold Big\{{1\over G_{11}\sigmaqrt{G_{22}}}\bold Big[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}\bold Big]\bold Big\}.
\bold endaligned
$$
Write \betaegin{equation}ref{div0} as
$$
{1\over
\sigmaqrt{G}}\partial_1\bold tilde u_1+{1\over\sigmaqrt{G_{22}}}\partial_3\bold tilde u_3=-{1\over G_{22}}\partial_2\bold tilde u_2-{1\over G_{22}\sigmaqrt{G_{11}}}[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}].
$$
Differentiating in $y_2$ yields
\betaegin{equation}\label{eq2}
{1\over
\sigmaqrt{G}}\partial_{12}\bold tilde u_1+{1\over\sigmaqrt{G_{22}}}\partial_{23}\bold tilde u_3=-{1\over G_{22}}\partial_{22}\bold tilde u_2+\bold zeta_2,
\bold end{equation}
where
$$\alphaligned
\bold zeta_2=&-\partial_1\bold tilde u_1\partial_2({1\over\sigmaqrt{G}})-\partial_2\bold tilde u_2\partial_{2}({1\over G_{22}})-\partial_3\bold tilde u_3\partial_2({1\over\sigmaqrt{G_{22}}})\\
&-\partial_2\bold Big\{{1\over G_{22}\sigmaqrt{G_{11}}}\bold Big[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}\bold Big]\bold Big\}.
\bold endaligned
$$
Write \betaegin{equation}ref{div0} as
$$
{1\over \sigmaqrt{G_{11}}}\partial_1\bold tilde u_1+{1\over \sigmaqrt{G_{22}}}\partial_2\bold tilde u_2=-\partial_3\bold tilde u_3-{1\over \sigmaqrt{G}}[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}].
$$
Differentiating in $z$ yields
\betaegin{equation}\label{eq3}
{1\over \sigmaqrt{G_{11}}}\partial_{13}\bold tilde u_1+{1\over \sigmaqrt{G_{22}}}\partial_{23}\bold tilde u_2=-\partial_{33}\bold tilde u_3+\bold zeta_3,
\bold end{equation}
where
$$
\alphaligned
\bold zeta_3=&-\partial_1\bold tilde u_1\partial_3({1\over\sigmaqrt{G_{11}}})-\partial_2\bold tilde u_2\partial_{3}({1\over \sigmaqrt{G_{22}}})
-\partial_3\bold Big\{{1\over \sigmaqrt{G}}\bold Big[\bold tilde u_1\partial_1\sigmaqrt{G_{22}}+\bold tilde u_2\partial_2\sigmaqrt{G_{11}}+\bold tilde u_3\partial_3\sigmaqrt{G}\bold Big]\bold Big\}.
\bold endaligned
$$
Plugging \betaegin{equation}ref{eq1}, \betaegin{equation}ref{eq2}, \betaegin{equation}ref{eq3} into the equalities of $\bold tilde T_1, \bold tilde T_2, \bold tilde T_3$ respectively, we get
$$\alphaligned
&\bold tilde T_1=-{1\over G_{11}}\partial_{11}\bold tilde u_1
-{1\over G_{22}}\partial_{22}\bold tilde u_1-\partial_{33}\bold tilde u_1+\bold zeta_1
+\partialhi_1,\\
&\bold tilde T_2=-{1\over G_{11}}\partial_{11}\bold tilde u_2-{1\over G_{22}}\partial_{22}\bold tilde u_2-\partial_{33}\bold tilde u_2+\bold zeta_2+\partialhi_2,\\
&\bold tilde T_3=-{1\over G_{11}}\partial_{11}\bold tilde u_3-{1\over G_{22}}\partial_{22}\bold tilde u_3-\partial_{33}\bold tilde u_3+\bold zeta_3+\partialhi_3.
\bold endaligned
$$
On $\Gamma$ we have $\bold tilde u_1=\bold tilde u_2=0$, and hence also $\partial_1\bold tilde u_1=\partial_2\bold tilde u_2=0$ on $\Gamma$, since $\partial_1$ and $\partial_2$ are tangential derivatives. Then from \betaegin{equation}ref{div0} we see that
$$
\partial_3\bold tilde u_3+H\bold tilde u_3=0,
$$
where $H={\partial_3\sigmaqrt{G}\over\sigmaqrt{G}}$. So we find that $\bold tilde u_1,\bold tilde u_2$ satisfy
$$
\left\{\alphaligned
&\partial_t\bold tilde u_j-{\bold tilde a\over G_{11}}\partial_{11}\bold tilde u_j
-{\bold tilde a\over G_{22}}\partial_{22}\bold tilde u_j-\bold tilde a\partial_{33}\bold tilde u_j+\bold tilde c\bold tilde u_j=F_j\quad&\bold text{in }(0,T)\bold times U,\\
&\bold tilde u_j=0\quad&\bold text{on }(0,T)\bold times\Gamma, \\
&\bold tilde u_j=\bold tilde u_j^0\quad&\bold text{in } U,
\bold endaligned\bold right.
$$
and $\bold tilde u_3$ satisfies
$$
\left\{\alphaligned
&\partial_t\bold tilde u_3-{\bold tilde a\over G_{11}}\partial_{11}\bold tilde u_3
-{\bold tilde a\over G_{22}}\partial_{22}\bold tilde u_3-\bold tilde a\partial_{33}\bold tilde u_3+\bold tilde c\bold tilde u_3=F_3\quad&\bold text{in }(0,T)\bold times U,\\
&\partial_3\bold tilde u_3+H\bold tilde u_3=0\quad&\bold text{on }(0,T)\bold times\Gamma,\\
&\bold tilde u_3=\bold tilde u_3^0\quad&\bold text{in } U,
\bold endaligned\bold right.
$$
where
$$F_j=\bold tilde f_j-\bold tilde h_j-\bold tilde a(\bold zeta_j+\partialhi_j),\quadq j=1,2,3.
$$
Note that $\bold tilde h_j, \bold tilde \bold zeta_j, \partialhi_j$ contain derivatives of $\bold tilde u_j$ up to the first order, and contain terms involving $G_{jj}$ and their derivatives up to order $2$. Hence if $\partial\Omega$ is of $C^{3+\alpha}$, then those terms are determined by $\bold tilde u_j$ and their first order derivatives, with coefficients that are $C^\alpha$ in $y_1, y_2, z$.
Note that the boundary condition for $\bold tilde u_3$ can be changed to a homogeneous Neumann boundary condition
if we consider the new function
$$
\bold hat u_3=e^{\int_0^{z}H(y,s)\,ds}\,\bold tilde u_3.
$$
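Indeed, since $\partial_3\big(e^{\int_0^{z}H(y,s)\,ds}\big)=H\,e^{\int_0^{z}H(y,s)\,ds}$, we have
$$
\partial_3\bold hat u_3=e^{\int_0^{z}H(y,s)\,ds}\big(\partial_3\bold tilde u_3+H\bold tilde u_3\big),
$$
so the condition $\partial_3\bold tilde u_3+H\bold tilde u_3=0$ on $\Gamma$ is equivalent to $\partial_3\bold hat u_3=0$ on $\Gamma$, while $\bold hat u_3$ satisfies a parabolic equation of the same form with modified lower order terms and right hand side.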
Although the boundary conditions for $\bold tilde u_j$ are satisfied only on $\Gamma$, a part of the boundary of $U$, we can multiply $\bold tilde u_j$ by a smooth cut-off function so that the resulting function satisfies a boundary condition of the same type on $\partial U\setminus \Gamma$.
So we can repeat the proof of Lemmas \bold ref{Lem2.1} and \bold ref{Lem2.3} to derive the Schauder estimates for $\bold tilde u_j$, $j=1,2,3$, in $(0,T]\bold times B^+_R$. Using the diffeomorphism $\mathcal F$ we obtain the Schauder estimates of $\bold u$ in $(0,T]\bold times \mathcal V$, where $\mathcal V\subset\mathcal U$ is a neighbourhood of the point $x^0$.
Then the conclusion of the theorem follows by covering a tubular neighbourhood of $\partial\Omega$ with a finite number of domains as above, and using interpolation.
\bold end{proof}
\vskip0.1in
\sigmaubsection*{Acknowledgements} This work was partially supported
by the National Natural Science Foundation of China grant no. 12071142 and 11671143, and by the research grants no. UDF01001805 and CUHKSZWDZC0003.
\vskip0.2in
\betaegin{thebibliography}{DUMA}
\betaibitem[BaP]{BaP} P. Bates and X. B. Pan, {\it On a problem related to
vortex nucleation of 3 dimensional superconductors}, Comm. Math.
Phys., {\betaf 276} (2007), 571-610. Erratum, {\betaf 283} (2008), 861.
\betaibitem[DaL]{DaL} R. Dautray and J. -L. Lions, {\it Mathematical
Analysis and Numerical Methods for Science and technology}, vol.
{\betaf 3}, Springer-Verlag, New York, 1990.
\betaibitem[H]{H} B. Hu, {\it Blow-up Theories for Semilinear Parabolic Equations},
Lecture Notes in Math., vol. 2018, Springer-Verlag Berlin Heidelberg 2011.
\betaibitem[KP]{KP} K. K. Kang and X. B. Pan, {\it On a quasilinear parabolic curl system motivated by time evolution of Meissner states of superconductors}, preprint.
\betaibitem[Lib]{Lib} G. M. Lieberman, {\it Second Order Parabolic Differential Equations}, World Sci. Publishing Co., Inc., River Edge, NJ, 1996 (Reprinted 2005).
\betaibitem[P]{P} X. B. Pan, {\it Surface superconductivity in 3
dimensions}, Trans. Amer. Math. Soc., {\betaf 356} (2004), 3899-3937.
\bold end{thebibliography}
\bold end{document}
\begin{document}
{\rm d}ate{\today}
\author{Christian Maire}
\address{Laboratoire de Mathématiques de Besançon (UMR 6623), Université Bourgogne Franche-Comté et CNRS, 16 route de Gray, 25030 Besançon cédex, France}
\address{Department of Mathematics, 310 Malott Hall, Cornell University, Ithaca, NY USA 14853}
\email{[email protected]}
\title{Unramified $2$-extensions of totally imaginary number fields and $2$-adic analytic groups}
\subjclass{11R37, 11R29}
{\rm k}eywords{Unramified extensions, uniform pro-$2$ groups, the Fontaine-Mazur conjecture (5b).}
\thanks{\emph{Acknowledgements.}
This work has been done during a visiting scholar position at Cornell University for the academic year 2017-18, and funded by the program "Mobilit\'e sortante" of the R\'egion Bourgogne Franche-Comt\'e. The author thanks the Department of Mathematics at Cornell University for providing a stimulating research atmosphere. He also thanks Georges Gras and Farshid Hajir for the encouragements and their useful remarks, and Ravi Ramakrishna for very inspiring discussions.
}
\begin{abstract} Let ${\rm K}$ be a totally imaginary number field. Denote by ${\rm G}^{ur}_{\rm K}(2)$ the Galois group of the maximal unramified pro-$2$ extension of ${\rm K}$.
By comparing cup-products in \'etale cohomology of ${\rm Spec } {{\mathcal M}athcal O}_{\rm K}$ and cohomology of uniform pro-$2$ groups, we obtain situations where ${\rm G}^{ur}_{\rm K}(2)$ has no
non-trivial uniform analytic quotient, proving some new special cases of the unramified Fontaine-Mazur conjecture. For example, in the family of imaginary quadratic fields ${\rm K}$ for which the $2$-rank of the class group
is equal to~$5$, we obtain that for at least $33.12 \% $ of such ${\rm K}$, the group ${\rm G}^{ur}_{\rm K}(2)$ has no non-trivial uniform analytic quotient.
\end{abstract}
\maketitle
\tableofcontents
\section*{Introduction}
Given a number field ${\rm K}$ and a prime number $p$,
the tame version of the conjecture of Fontaine-Mazur (conjecture (5a) of \cite{FM}) asserts that every finitely and tamely ramified continuous
Galois representation $\rho : {\rm Gal}(\overline{{\rm K}}/{\rm K}) \rightarrow {\rm GL}_m({{\mathcal M}athbb Q}_p)$ of the absolute Galois group of ${\rm K}$ has finite image.
Let us mention briefly two strategies to attack this conjecture.
$-$ The first one is to use the techniques coming from the considerations that inspired the conjecture, {\em i.e.}, from the Langlands program
(geometric Galois representations, modular forms, deformation theory, etc.). For more than a decade, many authors have contributed to understanding
the foundations of this conjecture with some serious progress having been made. As a partial list of such results, we refer the reader to Buzzard-Taylor \cite{BT},
Buzzard \cite{Buzzard}, Kassaei \cite{Kassaei1}, Kisin \cite{Kisin}, Pilloni \cite{Pilloni}, Pilloni-Stroh \cite{Pilloni-Stroh}, etc.
$-$ The second one consists in comparing properties of $p$-adic analytic pro-$p$ groups and arithmetic.
Thanks to this strategy, Boston gave in the 90's the first evidence for the tame version of the Fontaine-Mazur conjecture (see \cite{Boston1}, \cite{Boston2}). This result has been extended by Wingberg \cite{Wingberg}. See also \cite{Maire} and the recent work of Hajir-Maire \cite{H-M}. In all of these situations, the key point is the use of a semi-simple action. Typically, this approach gives no information for quadratic number fields when $p=2$.
\
In this work, for $p=2$, we exhibit some families of imaginary quadratic number fields
for which the unramified Fontaine-Mazur conjecture is true (conjecture (5b) of \cite{FM}). To do this, we compare the étale cohomology of ${\rm Spec } {{\mathcal M}athcal O}_{\rm K}$ with the cohomology of $p$-adic analytic pro-$p$ groups. In particular, we exploit the fact that in characteristic $2$ cup products in $H^2$ need not be alternating (meaning that $x\cup x$ may be non-zero); more specifically, we use a beautiful computation of cup-products in $ H_{et}^3({\rm Spec } {{\mathcal M}athcal O}_{\rm K}, {{\mathcal M}athbb F}_2)$ made by Carlson and Schlank in \cite{Carlson-Schlank}.
Somewhat surprisingly, our strategy works \emph{only} for $p=2$!
\
Given a prime number $p$, denote by ${\rm K}^{ur}(p)$ the maximal unramified pro-$p$ extension of ${\rm K}$; put ${\rm G}^{ur}_{\rm K}(p):={\rm Gal}({\rm K}^{ur}(p)/{\rm K})$. Here we are interested in uniform quotients of ${\rm G}^{ur}_{\rm K}(p)$ (see Section \ref{section2} for the definition), which are related to
the unramified Fontaine-Mazur conjecture thanks to the following equivalent version:
\begin{conjecture} \label{conj2}
Every uniform quotient ${\rm G}^{ur}g$ of ${\rm G}^{ur}_{\rm K}(p)$ is trivial.
\end{conjecture}
Remark that Conjecture \ref{conj2} can be rephrased as follows: the pro-$p$ group ${\rm G}^{ur}_{\rm K}(p)$ has no uniform quotient ${\rm G}^{ur}g$ of dimension $d$, for any $d>0$.
Of course, this is obvious when $d> d_2 {\rm C}l_{\rm K}$, and also when $d\leq 2$, thanks to the fact that ${\rm G}^{ur}_{\rm K}(p)$ is FAb (see the definition below).
Now take $p=2$. Let $(x_i)_{i=1,\cdots, n}$ be an ${{\mathcal M}athbb F}_2$-basis of $H^1({\rm G}^{ur}_{\rm K}(2),{{\mathcal M}athbb F}_2)\simeq H_{et}^1({\rm Spec } {{\mathcal M}athcal O}_{\rm K},{{\mathcal M}athbb F}_2)$, and consider the $n\times n$ matrix ${\rm M}_{\rm K}:=(a_{i,j})_{i,j}$ with coefficients in ${{\mathcal M}athbb F}_2$, where $$a_{i,j}=x_i\cup x_i \cup x_j \ \in H^3_{et}({\rm Spec } {{\mathcal M}athcal O}_{\rm K},{{\mathcal M}athbb F}_2) \simeq {{\mathcal M}athbb F}_2.$$
As we will see, this is the Gram matrix of a certain bilinear form defined, via Artin symbol, on the Kummer radical of
the $2$-elementary abelian maximal unramified extension ${\rm K}^{ur,2}/{\rm K}$ of ${\rm K}$. We also will see that for imaginary quadratic number fields, this matrix is often of large rank.
First, we prove:
\begin{Theorem} \label{maintheorem0}
Let ${\rm K}$ be a totally imaginary number field. Let $n:=d_2{\rm C}l_{\rm K}$ be the $2$-rank of the class group of ${\rm K}$.
\begin{itemize}
\item[$(i)$] Then the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d>n-\frac{1}{2} \rm rk({\rm M}_{\rm K})$.
\item[$(ii)$] Moreover, Conjecture \ref{conj2} holds (for $p=2$) when:
\begin{enumerate}
\item[$\bullet$] $n=3$, and $\rm rk({\rm M}_{\rm K})>0$;
\item[$\bullet$] $n=4$, and $\rm rk({\rm M}_{\rm K}) \geq 3$;
\item[$\bullet$] $n=5$, and $\rm rk({\rm M}_K)=5$.
\end{enumerate}
\end{itemize}
\end{Theorem}
By relating the matrix ${\rm M}_{\rm K}$ to a matrix of R\'edei type, and thanks to the work of Gerth \cite{Gerth} and Fouvry-Kl\"uners \cite{Fouvry-Klueners}, one can also deduce some density information when ${\rm K}$ varies in the family ${\rm F}f$ of imaginary quadratic fields.
For $n,d, X \geq 0$, denote by $${\rm S}_X:=\{ {\rm K} \in {\rm F}f, \ -{\rm d}isc_{\rm K} \leq X\}, \ \ {\rm S}_{n,X}:=\{ {\rm K}\in {\rm S}_{X}, \ \ d_2 {\rm C}l_{\rm K}=n\}$$
$${\rm F}M_{n,X}^{(d)}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > d\},$$
$$ {\rm F}M_{n,X}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm Conjecture \ \ref{conj2} \ holds \ for \ }{\rm K}\},$$
and consider the limits:
$${\rm F}M_n^{(d)}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm F}M_{n,X}^{(d)}}{\#{\rm S}_{n,X}}, \ \ \ {\rm F}M_{n}:=\liminf_{X \rightarrow + \infty} \frac{\# {\rm F}M_{n,X}}{\#{\rm S}_{n,X}}.$$
${\rm F}M_n$ measures the proportion of imaginary quadratic fields ${\rm K}$ with $d_2 {\rm C}l_{\rm K}=n$, for which Conjecture \ref{conj2} holds (for $p=2$); and ${\rm F}M_n^{(d)}$ measures the proportion of imaginary quadratic fields ${\rm K}$ with $d_2 {\rm C}l_{\rm K}=n$, for which ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $>d$.
Then \cite{Gerth} allows us to obtain the following densities for uniform groups of small dimension:
\begin{Corollary} \label{coro-intro1}
One has:
\begin{enumerate}
\item[$(i)$] ${\rm F}M_3 \geq .992187$,
\item[$(ii)$] ${\rm F}M_4 \geq .874268$, ${\rm F}M_4^{(4)} \geq .999695$,
\item[$(iii)$] ${\rm F}M_5 \geq .331299$, ${\rm F}M_5^{(4)} \geq .990624$, ${\rm F}M_5^{(5)} \geq .9999943$,
\item[(iv)] for all $d\geq 3$, ${\rm F}M_{d}^{(1+d/2)} \geq 0.866364 $, and ${\rm F}M_d^{(2+d/2)} \geq .999953$.
\end{enumerate}
\end{Corollary}
\begin{Remark} At this level, one should make two observations.
1) Perhaps for many ${\rm K} \in {\rm S}_{3,X}$ and ${\rm S}_{4,X}$, the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ is finite but, by the Theorem of Golod-Shafarevich (see for example \cite{Koch}), for every ${\rm K} \in {\rm S}_{n,X}$, $n\geq 5$,
the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ is infinite.
2) In our work, it will become apparent that we obtain no information about Conjecture \ref{conj2} for number fields ${\rm K}$ for which
the $4$-rank of the class group is large. Typically, the sets ${\rm F}M_i$ exclude all the number fields having maximal $4$-rank.
\end{Remark}
To conclude, let us mention a general asymptotic estimate thanks to the work of Fouvry-Kl\"uners \cite{Fouvry-Klueners}.
Put $${\rm F}M_{X}^{[i]}:=\{{\rm K} \in {\rm S}_{X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > i+ \frac{1}{2}d_2 {\rm C}l_{\rm K}\}$$
and $${\rm F}M^{\lbrack i\rbrack }:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm F}M^{[i]}_X}{\# {\rm S}_{X}}.$$
Our work allows us to obtain:
\begin{Corollary} \label{coro-intro2}
One has: $${\rm F}M^{[1]} \geq .0288788, \ \ {\rm F}M^{[2]} \geq 0.994714, \ \ {\rm and } \ \ {\rm F}M^{[3]} \geq 1-9.7 {\rm cd}ot 10^{-8}.$$
\end{Corollary}
\
\
This paper has three sections.
In Section 1 and Section 2, we give the basic tools concerning the \'etale cohomology of number fields and the $p$-adic analytic groups.
Section 3 is devoted to arithmetic considerations. After the presentation of our strategy, we develop some basic facts about bilinear forms over ${{\mathcal M}athbb F}_2$, especially for
the form introduced in our study (which is defined on a certain Kummer radical). In particular, we insist on the role played by totally isotropic subspaces. To finish, we consider a relation with a R\'edei matrix that allows us to obtain density information.
\
\
{\bf Notations.}
Let $p$ be a prime number and let ${\rm K}$ be a number field.
Denote by
\begin{enumerate}
\item[$-$] $p^*=(-1)^{(p-1)/2}p$, when $p$ is odd;
\item[$-$] ${{\mathcal M}athcal O}_{\rm K}$ the ring of integers of ${\rm K}$;
\item[$-$] ${\rm C}l_{\rm K}$ the $p$-Sylow subgroup of the class group of ${{\mathcal M}athcal O}_{\rm K}$;
\item[$-$] ${\rm K}^{ur}$ the maximal profinite extension of ${\rm K}$ unramified everywhere. Put ${\rm G}^{ur}G_{\rm K}={\rm G}^{ur}al({\rm K}^{ur}/{\rm K})$;
\item[$-$] ${\rm K}^{ur}(p)$ the maximal pro-$p$ extension of ${\rm K}$ unramified everywhere. Put ${\rm G}^{ur}_{{\rm K}}(p):={\rm G}^{ur}al({\rm K}^{ur}(p)/{\rm K})$;
\item[$-$] ${\rm K}^{ur,p}$ the elementary abelian maximal unramified $p$-extension of ${\rm K}$.
\end{enumerate}
Recall that the group ${\rm G}^{ur}_{{\rm K}}(p)$ is a finitely presented pro-$p$ group. See \cite{Koch}. See also \cite{NSW} or \cite{Gras}.
Moreover by class field theory, ${\rm C}l_{\rm K} $ is isomorphic to the abelianization of ${\rm G}^{ur}_{\rm K}(p)$. In particular this implies that every open subgroup ${{\mathcal M}athcal H}$ of ${\rm G}^{ur}_{{\rm K}}(p)$ has
finite abelianization: this property is known as "FAb".
\
\section{Etale cohomology: what we need} \label{section1}
\subsection{}
For what follows, the references are plentiful: \cite{Mazur}, \cite{Milne}, \cite{Milne1}, \cite{Schmidt1}, \cite{Schmidt2}, etc.
Assume that ${\rm K}$ is totally imaginary when $p=2$, and put ${\rm X}_{\rm K}= {\rm Spec } {{\mathcal M}athcal O}_{\rm K}$.
The Hochschild-Serre spectral sequence (see \cite{Milne}) gives for every $i\geq 1$ a map $$\alpha_i : H^i ({\rm G}^{ur}_{\rm K}(p)) \longrightarrow H^i_{et}({\rm X}_{\rm K}), $$
where the coefficients are in ${{\mathcal M}athbb F}_p$ (meaning the constant sheaf for the \'etale site ${\rm X}_{\rm K}$).
As $\alpha_1$ is an isomorphism, one obtains the long exact sequence:
$$H^2({\rm G}^{ur}_{\rm K}(p)) \hookrightarrow H^2_{et}({\rm X}_{\rm K}) \longrightarrow H_{et}^{2}({\rm X}_{{\rm K}^{ur}(p)}) \longrightarrow
H^3({\rm G}^{ur}_{\rm K}(p)) \longrightarrow H^3_{et}({\rm X}_{\rm k}) $$
where $H^3_{et}({\rm X}_{\rm k}) \simeq (\mu_{{\rm K},p})^\vee$, here $(\mu_{{\rm K},p})^\vee$ is the Pontryagin dual of the group of $p$th-roots of unity in ${\rm K}$.
\subsection{}
Take now $p=2$.
Let us give $x,y,z \in H^1_{et}({\rm X}_{\rm K})$.
In \cite{Carlson-Schlank}, Carlson and Schlank give a formula in order to determine the cup-product $x \cup y \cup z \in H^3_{et}({\rm X}_{\rm K})$. In particular, they
show how to produce some arithmetical situations for which such cup-products
$x\cup x \cup y $ are not zero.
Now, one has the commutative diagram:
$$\xymatrix{H^3({\rm G}^{ur}_{\rm K}(p)) \ar[r]^{\alpha_3} & H^3_{et}({\rm X}_{\rm K}) \\
H^1({\rm G}^{ur}_{\rm K}(p))^{\otimes^3}\ar[r]^\simeq_{\alpha_1} \ar[u]^{\beta} & \ar[u]^{\beta_{et}} H^1_{et}({\rm X}_{\rm K})^{\otimes^3}}$$
Hence $(\alpha_3\circ \beta)(a\otimes b \otimes c)=\alpha_1(a)\cup \alpha_1(b)\cup \alpha_1(c)$. By taking $x=\alpha_1(a)=\alpha_1(b)$ and $y=\alpha_1(c)$, one gets
$a\cup a \cup c \neq 0 \in H^3({\rm G}^{ur}_{\rm K}(p))$ when $x\cup x\cup y \neq 0 \in H^3_{et}({\rm X}_{\rm K})$.
\subsection{The computation of Carlson and Schlank} \label{section:Carlson-Schlank}
Take two non-trivial characters $x$ and $y$ in $H^1({\rm G}^{ur}_{\rm K}(2))\simeq H^1_{et}({\rm X}_{\rm K})$. Denote by ${\rm K}_x$ (resp. ${\rm K}_y$) the subfield of ${\rm K}^{ur}(2)$ fixed by $\ker(x)$ (resp. $\ker(y)$). By Kummer theory, there exist $a_x, a_y \in {\rm K}^\times/({\rm K}^\times)^2$ such that ${\rm K}_x={\rm K}(\sqrt{a_x})$ and ${\rm K}_y={\rm K}(\sqrt{a_y})$.
As the extension ${\rm K}_y/{\rm K}$ is unramified, for every prime ideal ${{\mathcal M}athfrak p}$ of ${{\mathcal M}athcal O}_{\rm K}$ the ${{\mathcal M}athfrak p}$-valuation $v_{{\mathcal M}athfrak p}(a_y)$ is even, so that $\sqrt{(a_y)}$ makes sense as an ideal of ${{\mathcal M}athcal O}_{\rm K}$.
Let us write $$\sqrt{(a_y)}:={{\mathcal M}athfrak p}rod_i{{\mathcal M}athfrak p}_{y,i}^{e_{y,i}}.$$
Denote by $I_x$ the set of prime ideals ${{\mathcal M}athfrak p}$ of ${{\mathcal M}athcal O}_{\rm K}$ such that ${{\mathcal M}athfrak p}$ is inert in ${\rm K}_x/{\rm K}$ (or equivalently, $I_x$ is the set of primes of ${\rm K}$ such that the Frobenius at ${{\mathcal M}athfrak p}$ generates ${\rm G}^{ur}al({\rm K}_x/{\rm K})$).
\begin{prop}[Carlson and Schlank] \label{proposition:C-S}
The cup-product $x\cup x \cup y \in H^3_{et}(X)$ is non-zero if and only if, ${\rm d}isplaystyle{\sum_{{{\mathcal M}athfrak p}_{y,i} \in I_x}e_{y,i}}$ is odd.
\end{prop}
\begin{rema} \label{remarque:symbole} The condition of Proposition \ref{proposition:C-S} is equivalent to the non-triviality of the Artin symbol $\displaystyle{\left(\frac{{\rm K}_x/{\rm K}}{\sqrt{(a_y)}}\right)}$.
Hence if one takes $b_y=a_y \alpha^2$ with $\alpha\in {\rm K}$ instead of $a_y$ then, as
${\rm d}isplaystyle{\left(\frac{{\rm K}_x/{\rm K}}{(\alpha)}\right)}$ is trivial, the condition is well-defined.
\end{rema}
Let us give an easy example inspired by a computation of \cite{Carlson-Schlank}.
\begin{prop}\label{criteria}
Let ${\rm K}/{{\mathcal M}athbb Q}$ be an imaginary quadratic field. Suppose that there exist two distinct odd prime numbers $p$ and $q$, both ramified in ${\rm K}/{{\mathcal M}athbb Q}$, such that: $\displaystyle{\left(\frac{p^*}{q}\right)=-1}$.
Then there exist $x \neq y \in H^1_{et}({\rm X}_{\rm K})$ such that $x\cup x \cup y \neq 0$.
\end{prop}
\begin{proof}
Take ${\rm K}_x={\rm K}(\sqrt{p^*})$ and ${\rm K}_y={\rm K}(\sqrt{q^*})$, and apply Proposition \ref{proposition:C-S}.
\end{proof}
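The condition of Proposition \ref{criteria} is easy to test numerically. The following {\tt Python} sketch (our own illustration, relying on {\tt sympy}'s {\tt legendre\_symbol}; the helper names are ours) checks it from the list of odd primes ramified in ${\rm K}$.
\begin{verbatim}
from itertools import permutations
from sympy import legendre_symbol

def p_star(p):
    # p* = (-1)^((p-1)/2) p, so that Q(sqrt(p*))/Q is unramified outside p
    return p if p % 4 == 1 else -p

def criterion(ramified_odd_primes):
    """True if some ordered pair (p, q) of distinct ramified odd primes
    satisfies (p*/q) = -1, i.e. x U x U y is non-zero for
    K_x = K(sqrt(p*)) and K_y = K(sqrt(q*))."""
    return any(legendre_symbol(p_star(p) % q, q) == -1
               for p, q in permutations(ramified_odd_primes, 2))

# Example: K = Q(sqrt(-1155)), 1155 = 3*5*7*11, ramified odd primes 3, 5, 7, 11.
print(criterion([3, 5, 7, 11]))   # True: for instance (3*/5) = (-3/5) = -1
\end{verbatim}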
\section{Uniform pro-$p$-groups and arithmetic: what we need} \label{section2}
\subsection{} Let us start with the definition of a uniform pro-$p$ group (see for example \cite{DSMN}).
\begin{defi}
Let ${\rm G}^{ur}g$ be a finitely generated pro-$p$ group.
We say that ${\rm G}^{ur}g$ is uniform if:
\begin{enumerate}
\item[$-$] ${\rm G}^{ur}g$ is torsion free, and
\item [$-$] $[{\rm G}^{ur}g,{\rm G}^{ur}g] \subset {\rm G}^{ur}g^{2p}$.
\end{enumerate}
\end{defi}
\begin{rema}
For a uniform group ${\rm G}^{ur}g$, the $p$-rank of ${\rm G}^{ur}g$ coincides with the dimension of ${\rm G}^{ur}g$.
\end{rema}
The uniform pro-$p$ groups play a central role in the study of $p$-adic analytic pro-$p$ groups; indeed:
\begin{theo}[Lazard \cite{lazard}] \label{Lazard0}
Let ${\rm G}^{ur}g$ be a profinite group. Then ${\rm G}^{ur}g$ is $p$-adic analytic {\em i.e.} ${\rm G}^{ur}g \hookrightarrow_c {\rm Gl}_m({{\mathcal M}athbb Z}_p)$ for a certain positive integer $m$, if and only if, ${\rm G}^{ur}g$ contains an open uniform subgroup ${{\mathcal M}athcal H}$.
\end{theo}
\begin{rema}
For different equivalent definitions of $p$-adic analytic groups, see \cite{DSMN}. See also \cite{Lubotzky-Mann}.
\end{rema}
\begin{exem}
The correspondence between $p$-adic analytic pro-$p$ groups and ${{\mathcal M}athbb Z}_p$-Lie algebras via the log/exp maps
allows one to give examples of uniform pro-$p$ groups (see \cite{DSMN}, see also \cite{H-M}). Typically,
let ${\mathfrak s} {\mathfrak l}_n({{\mathcal M}athbb Q}_p)$ be the ${{\mathcal M}athbb Q}_p$-Lie algebra of $n\times n$ matrices with coefficients in ${{\mathcal M}athbb Q}_p$ and of zero trace. It is a simple algebra of dimension $n^2-1$.
Take the natural basis:
\begin{enumerate}
\item[(a)] for $i \neq j$, $E_{i,j}=(e_{k,l})_{k,l}$, the matrix all of whose coefficients are zero except $e_{i,j}$, which takes the value $2p$;
\item[(b)] for $i>1$, $D_i=(d_{k,l})_{k,l}$, the diagonal matrix ${\rm diag}(2p,0,\cdots,0,-2p,0,\cdots, 0)$ with $d_{1,1}=2p$ and $d_{i,i}=-2p$.
\end{enumerate}
Let ${\mathfrak s} {\mathfrak l}_n$ be the ${{\mathcal M}athbb Z}_p$-Lie algebra generated by the $ E_{i,j}$ and the $D_i$.
Put $X_{i,j}=\exp E_{i,j}$ and $Y_i=\exp D_i$. Denote by ${\rm Sl}_n^1({{\mathcal M}athbb Z}_p)$ the subgroup of ${\rm Sl}_n({{\mathcal M}athbb Z}_p)$ generated by the matrices $X_{i,j}$ and $Y_i$. The group ${\rm Sl}_n^1({{\mathcal M}athbb Z}_p)$
is uniform and of dimension $n^2-1$. It is also the kernel of the reduction map of ${\rm Sl}_n({{\mathcal M}athbb Z}_p)$ modulo $2p$. Moreover, ${\rm Sl}_n^1({{\mathcal M}athbb Z}_p)$ is also FAb, meaning that every open subgroup ${{\mathcal M}athcal H}$ has finite abelianization.
\end{exem}
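As a toy illustration (ours): for $i\neq j$ the matrix $E_{i,j}$ squares to zero, so $X_{i,j}=\exp E_{i,j}={\rm Id}+E_{i,j}$; the following check confirms that each $X_{i,j}$ has determinant $1$ and reduces to the identity modulo $2p$, in accordance with the description of ${\rm Sl}_n^1({{\mathcal M}athbb Z}_p)$ as the kernel of the reduction modulo $2p$.
\begin{verbatim}
import sympy as sp

p, n = 2, 3
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = sp.zeros(n, n)
        E[i, j] = 2 * p
        assert E * E == sp.zeros(n, n)     # E_{i,j} is nilpotent, exp(E) = Id + E
        X = sp.eye(n) + E
        assert X.det() == 1
        assert all(c % (2 * p) == 0 for c in (X - sp.eye(n)))
print("each X_{i,j} lies in the kernel of reduction modulo 2p")
\end{verbatim}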
Recall by Lazard \cite{lazard} (see also \cite{Symonds-Weigel} for an alternative proof):
\begin{theo}[Lazard \cite{lazard}] \label{Lazard} Let ${\rm G}^{ur}g$ be a uniform pro-$p$ group (of dimension $d>0$). Then for all $i\geq 1$, one has: $$H^i({\rm G}^{ur}g) \simeq \bigwedge^i H^1({\rm G}^{ur}g),$$
where here the exterior product is induced by the cup-product.
\end{theo}
As a consequence, one has immediately:
\begin{coro} Let ${\rm G}^{ur}g$ be a uniform pro-$p$ group. Then for all $x,y \in H^1({\rm G}^{ur}g)$, one has $x\cup x \cup y =0 \in H^3({\rm G}^{ur}g)$.
\end{coro}
\begin{rema}
For $p>2$, Theorem \ref{Lazard} is an equivalence: a pro-$p$ group ${\rm G}^{ur}g$ is uniform if and only if, for $i\geq 1$, $H^i({\rm G}^{ur}g) \simeq \bigwedge^i H^1({\rm G}^{ur}g)$. (See \cite{Symonds-Weigel}.)
\end{rema}
Let us mention another consequence useful in our context:
\begin{coro}
Let ${\rm G}^{ur}g$ be a FAb uniform pro-$p$ group of dimension $d>0$. Then $d\geq 3$.
\end{coro}
\begin{proof}
Indeed, if $\dim {\rm G}^{ur}g=1$, then ${\rm G}^{ur}g \simeq {{\mathcal M}athbb Z}_p$ (${\rm G}^{ur}g$ is free pro-$p$), and if $\dim{\rm G}^{ur}g=2$, then by Theorem \ref{Lazard}, $H^2({\rm G}^{ur}g) \simeq {{\mathcal M}athbb F}_p$, so that ${\rm G}^{ur}g$ is generated by two elements subject to at most one relation, and therefore ${\rm G}^{ur}g^{ab} \twoheadrightarrow {{\mathcal M}athbb Z}_p$. In both cases ${\rm G}^{ur}g$ is not FAb. Hence $\dim {\rm G}^{ur}g$ must be at least $3$.
\end{proof}
\subsection{}
Let us recall the Fontaine-Mazur conjecture (5b) of \cite{FM}.
\begin{conjectureNC} \label{conj1}
Let ${\rm K}$ be a number field. Then every continuous Galois representation $\rho : {\rm G}^{ur}_{\rm K} \rightarrow {\rm GL}_m({{\mathcal M}athbb Z}_p)$ has finite image.
\end{conjectureNC}
Following the result of Theorem \ref{Lazard0} of Lazard, we see that proving Conjecture (5b) of \cite{FM}
for ${\rm K}$, is equivalent to proving Conjecture \ref{conj2} for every finite extension ${\rm L}/{\rm K}$ in ${\rm K}^{ur}/{\rm K}$.
\section{Arithmetic consequences}
\subsection{The strategy} \label{section:strategy}
Usually, when $p$ is odd, cup-products factor through the exterior product. But for $p=2$ this is not the case! This is the simple observation that we will combine with \'etale cohomology and with the cohomology of uniform pro-$p$ groups.
\
From now on we assume that $p=2$.
\
Suppose we are given a non-trivial uniform quotient ${\rm G}^{ur}g$ of ${\rm G}^{ur}_{\rm K}(p)$. Then the inflation map gives an injection $$H^1({\rm G}^{ur}g) \hookrightarrow H^1({\rm G}^{ur}_{\rm K}(p)).$$ Now take $a,b \in H^1({\rm G}^{ur}_{\rm K}(p))$ coming from $H^1({\rm G}^{ur}g)$.
Then, the cup-product $a\cup a \cup b \in H^3({\rm G}^{ur}_{\rm K}(p))$ comes from $H^3({\rm G}^{ur}g)$ by the inflation map. In other words, one has the following commutative diagram:
$$\xymatrix{H^3({\rm G}^{ur}g) \ar[r]^{inf}&H^3({\rm G}^{ur}_{\rm K}(p)) \ar[r]^{\alpha_3} & H^3_{et}({\rm X}_{\rm K}) \\
H^1({\rm G}^{ur}g)^{\otimes^3} \ar@{->>}[u]^{\beta_0} \ar@{^(->}[r] &H^1({\rm G}^{ur}_{\rm K}(p))^{\otimes ^3}\ar[r]^\simeq \ar[u]^\beta & \ar[u]^{\beta_{et}} H^1_{et}({\rm X}_{\rm K})^{\otimes^3} }$$
But by Lazard's result (Theorem \ref{Lazard}), $\beta_0(a\otimes a \otimes b)=0$, and then one gets a contradiction if $\alpha_1(a)\cup \alpha_1(a) \cup \alpha_1(b) $ is non-zero in $H^3_{et}({\rm X}_{\rm K})$: it is at this level that one may use the computation of Carlson-Schlank.
Before developing this observation in the context of analytic pro-$2$ group,
let us give two immediate consequences:
\begin{coro}
Let ${\rm K}/{{\mathcal M}athbb Q}$ be an imaginary quadratic number field satisfying the condition of Proposition \ref{criteria}. Then ${\rm G}^{ur}_{{\rm K}}(2)$ is of cohomological dimension at least $3$.
\end{coro}
\begin{proof}
Indeed, there exists a non-trivial cup-product $x\cup x \cup y \in H^3_{et}({\rm X}_{\rm K})$, and the corresponding cup-product is then non-trivial in $H^3({\rm G}^{ur}_{\rm K}(2))$.
\end{proof}
\begin{coro}\label{coro-exemple} Let $p_1,p_2,p_3,p_4$ be four prime numbers such that $p_1 p_2 p_3 p_4 \equiv 3 \pmod 4$.
Take ${\rm K}={{\mathcal M}athbb Q}(\sqrt{-p_1 p_2 p_3 p_4})$. Suppose that there exist $i\neq j$ such that $\displaystyle{\left(\frac{p_{i}^*}{p_{j}}\right)=-1}$. Then ${\rm G}^{ur}_{\rm K}(2)$ has no non-trivial uniform quotient.
\end{coro}
\begin{rema} Here, one may replace $p_1$ by $2$.
But the infiniteness of ${\rm G}^{ur}_{\rm K}(2)$ is not guaranteed in all cases, as we are outside the conditions of the result of Hajir \cite{Hajir}. We will see the reason later.
\end{rema}
\begin{proof}
Let us start with a non-trivial uniform quotient ${\rm G}^{ur}g$ of ${\rm G}^{ur}_{\rm K}(2)$. By class field theory the pro-$2$ group ${\rm G}^{ur}g$ is FAb, so its dimension is at least $3$; since $d_2 {\rm C}l_{\rm K}=3$, its dimension is exactly $3$, {\em i.e.} $H^1({\rm G}^{ur}g) \simeq H^1({\rm G}^{ur}_{\rm K}(2))$. By Proposition \ref{criteria}, there exist $x,y \in H^1({\rm G}^{ur}g)$ such that $x\cup x\cup y \neq 0 \in H^3_{et}({\rm X}_{\rm K})$, and the strategy of Section \ref{section:strategy} gives a contradiction.
\end{proof}
Now, we would like to extend this last construction.
\subsection{Bilinear forms over ${{\mathcal M}athbb F}_2$ and conjecture \ref{conj2}}
\subsubsection{Totally isotropic subspaces}
Let ${{\mathcal M}athcal B}$ be a bilinear form over an ${{\mathcal M}athbb F}_2$-vector space ${\rm V}$ of finite dimension. Denote by $n$ the dimension of ${\rm V}$ and by $\rm rk({{\mathcal M}athcal B})$ the rank of ${{\mathcal M}athcal B}$.
\begin{defi}
Given a bilinear form ${{\mathcal M}athcal B}$, one defines the index $\nu({{\mathcal M}athcal B})$ of ${{\mathcal M}athcal B}$ by $$\nu({{\mathcal M}athcal B}):=\max \{ \dim W : \ {{\mathcal M}athcal B}(W,W)=0\}.$$
\end{defi}
The index $\nu({{\mathcal M}athcal B})$ is then an upper bound for the dimension of the totally isotropic subspaces $W$ of ${\rm V}$. As we will see, the index $\nu({{\mathcal M}athcal B})$ is well-known when ${{\mathcal M}athcal B}$ is symmetric.
For the general case, one has:
\begin{prop} \label{bound-nu} The index $\nu({{\mathcal M}athcal B})$ of a bilinear form ${{\mathcal M}athcal B}$ is at most $n - \frac{1}{2}\rm rk({{\mathcal M}athcal B})$.
\end{prop}
\begin{proof}
Let ${\rm W}$ be a totally isotropic subspace of ${\rm V}$ of dimension $i$. Let us complete a basis of ${\rm W}$ to a basis ${\rm B}$ of ${\rm V}$. The Gram
matrix of ${{\mathcal M}athcal B}$ in ${\rm B}$ then has a zero $i\times i$ block in its upper-left corner, hence is of rank at most $2n-2i$; thus $\rm rk({{\mathcal M}athcal B})\leq 2(n-i)$, which gives $i\leq n-\frac{1}{2}\rm rk({{\mathcal M}athcal B})$.
\end{proof}
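The bound is easy to test by brute force. The following {\tt Python} sketch (ours; feasible only for small $n$) computes the rank of a Gram matrix over ${{\mathcal M}athbb F}_2$ and the index $\nu$ by exhaustive search, and checks the inequality on random examples.
\begin{verbatim}
import itertools, random

def rank_f2(rows):
    """Rank over F_2 of a list of 0/1 rows (Gaussian elimination)."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def nu(M):
    """Index nu(B): largest dimension of a totally isotropic subspace."""
    n = len(M)
    B = lambda x, y: sum(x[i] * M[i][j] * y[j]
                         for i in range(n) for j in range(n)) % 2
    vecs = [[(v >> k) & 1 for k in range(n)] for v in range(1, 2 ** n)]
    best = 0
    for d in range(1, n + 1):
        for basis in itertools.combinations(vecs, d):
            # a subspace is totally isotropic iff B vanishes on pairs of basis vectors
            if rank_f2(basis) == d and all(B(x, y) == 0 for x in basis for y in basis):
                best = d
                break
    return best

random.seed(0)
n = 4
for _ in range(5):
    M = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    assert nu(M) <= n - rank_f2(M) / 2
print("nu(B) <= n - rank(B)/2 on all tested examples")
\end{verbatim}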
This bound is in a certain sense optimal as we can achieve it in the symmetric case.
\begin{defi}
$(i)$ Given $a\in {{\mathcal M}athbb F}_2$,
the bilinear form $b(a)$ with matrix $\left(\begin{array}{cc} a&1 \\
1&0 \end{array}\right) $ is called a metabolic plane.
A metabolic form is an orthogonal sum of metabolic planes (up to isometry).
$(ii)$ A symmetric bilinear form $(V,{{\mathcal M}athcal B})$ is called alternating if ${{\mathcal M}athcal B}(x,x) = 0$ for all $x\in V$. Otherwise ${{\mathcal M}athcal B}$ is called nonalternating.
\end{defi}
Recall now a well-known result on symmetric bilinear forms over ${{\mathcal M}athbb F}_2$.
\begin{prop}\label{proposition:dimension-isotropic}
Let $(V,{{\mathcal M}athcal B})$ be a symmetric bilinear form of dimension $n$ over ${{\mathcal M}athbb F}_2$. Denote by $r$ the rank of ${{\mathcal M}athcal B}$. Write $r=2r_0 +{\rm d}elta$, with ${\rm d}elta =0$ or $1$, and $r_0 \in {{\mathcal M}athbb N}$.
\begin{enumerate}
\item[$(i)$] If ${{\mathcal M}athcal B}$ is nonalternating, then $(V,{{\mathcal M}athcal B})$ is isometric to $$
\overbrace{b(1) \bot {\rm cd}ots \bot b(1)}^{r_0} \bot \overbrace{\langle 1 \rangle}^{{\rm d}elta} \bot \overbrace{\langle 0 \rangle \bot {\rm cd}ots \bot \langle 0 \rangle}^{n-r} \ \simeq_{iso} \ \overbrace{\langle 1 \rangle \bot {\rm cd}ots \bot \langle 1 \rangle}^{r} \bot \overbrace{\langle 0 \rangle \bot {\rm cd}ots \bot \langle 0 \rangle}^{n-r} ;$$
\item[$(ii)$] If $ {{\mathcal M}athcal B}$ is alternating, then ${{\mathcal M}athcal B}$ is isometric to $$ \overbrace{b(0) \bot {\rm cd}ots \bot b(0)}^{r_0} \bot \overbrace{\langle 0 \rangle \bot {\rm cd}ots \bot \langle 0 \rangle}^{n-r}.$$
\end{enumerate}
Moreover, $\nu({{\mathcal M}athcal B})=n-r+r_0=n-r_0-{\rm d}elta$.
\end{prop}
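For instance, if ${{\mathcal M}athcal B}$ is nonalternating with Gram matrix the $n\times n$ identity matrix in a basis $(e_1,\cdots,e_n)$ of ${\rm V}$, then $r=n$ and $\nu({{\mathcal M}athcal B})=\lfloor n/2\rfloor$; a maximal totally isotropic subspace is
$$\langle e_1+e_2,\ e_3+e_4,\ \cdots,\ e_{2\lfloor n/2\rfloor-1}+e_{2\lfloor n/2\rfloor}\rangle,$$
since ${{\mathcal M}athcal B}(e_{2i-1}+e_{2i},e_{2i-1}+e_{2i})=1+1=0$ and distinct generators are orthogonal. This is exactly the form that appears in the first family of imaginary quadratic examples below.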
When $({\rm V},{{\mathcal M}athcal B})$ is not necessarily symmetric, let us introduce the symmetrization ${{\mathcal M}athcal B}^{sym}$ of ${{\mathcal M}athcal B}$ by $${{\mathcal M}athcal B}^{sym}(x,y)={{\mathcal M}athcal B}(x,y)+{{\mathcal M}athcal B}(y,x), \ \ \ \forall x,y \in {\rm V}.$$
One has:
\begin{prop}\label{proposition:dimension-isotropic2}
Let $({\rm V},{{\mathcal M}athcal B})$ be a bilinear form of dimension $n$ over ${{\mathcal M}athbb F}_2$. Then $$\nu({{\mathcal M}athcal B}) \geq n - \lfloor \frac{1}{2} \rm rk({{\mathcal M}athcal B}^{sym}) \rfloor - \lfloor \frac{1}{2} \rm rk({{\mathcal M}athcal B}) \rfloor.$$
In particular, $\nu({{\mathcal M}athcal B}) \geq n - \frac{3}{2} \rm rk({{\mathcal M}athcal B})$.
\end{prop}
\begin{proof}
It is easy. Let us start with a maximal totally isotropic subspace $W$ of $({\rm V},{{\mathcal M}athcal B}^{sym})$.
Then ${{\mathcal M}athcal B}_{|{\rm W}}$ is symmetric: indeed, for any two $x,y \in {\rm W}$, we get $0={{\mathcal M}athcal B}^{sym}(x,y)={{\mathcal M}athcal B}(x,y)+{{\mathcal M}athcal B}(y,x)$, and then ${{\mathcal M}athcal B}(x,y)={{\mathcal M}athcal B}(y,x)$ (recall that $V$ is defined over ${{\mathcal M}athbb F}_2$). Hence by Proposition \ref{proposition:dimension-isotropic}, ${{\mathcal M}athcal B}_{|{\rm W}}$ has a totally isotropic subspace of dimension $\nu({{\mathcal M}athcal B}_{|{\rm W}})={\rm d}im {\rm W} - \lfloor \frac{1}{2} \rm rk({{\mathcal M}athcal B}_{|{\rm W}}) \rfloor$.
As ${\rm d}im {\rm W}=n-\lfloor \frac{1}{2} \rm rk({{\mathcal M}athcal B}^{sym})\rfloor$ (by Proposition \ref{proposition:dimension-isotropic}), one obtains the first assertion. For the second one, it is enough to note that $\rm rk({{\mathcal M}athcal B}^{sym}) \leq 2 \rm rk({{\mathcal M}athcal B})$.
\end{proof}
\subsubsection{Bilinear form over the Kummer radical of the $2$-elementary abelian maximal unramified extension}
Let us start with a totally imaginary number field ${\rm K}$. Denote by $n$ the $2$-rank of ${\rm G}^{ur}_{\rm K}(2)$, in other words, $n=d_2 {\rm C}l_{\rm K}$.
Let $V=\langle a_1,{\rm cd}ots, a_n\rangle ({\rm K}^\times)^2 \subset {\rm K}^\times /({\rm K}^\times)^2$ be the Kummer radical of the $2$-elementary abelian maximal unramified extension ${\rm K}^{ur,2}/{\rm K}$. Then $V$ is an ${{\mathcal M}athbb F}_2$-vector space of dimension~$n$.
As we have seen in section \ref{section:Carlson-Schlank}, for every prime ideal ${{\mathcal M}athfrak p} $ of ${{\mathcal M}athcal O}_{\rm K}$, the ${{\mathcal M}athfrak p}$-valuation of $a_i$
is even, and then $\sqrt{(a_i)}$ makes sense as an ideal of ${{\mathcal M}athcal O}_{\rm K}$.
For $x\in V$, denote ${\rm K}_x:={\rm K}(\sqrt{x})$, and ${{\mathcal M}athfrak a}(x):=\sqrt{(x)} \in {{\mathcal M}athcal O}_{\rm K}$. We can now introduce the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ that plays a central role in our work.
\begin{defi}
For $a,b \in V$, put: $${{\mathcal M}athcal B}_{\rm K}(a,b)=\left(\frac{{\rm K}_a/{\rm K}}{{{\mathcal M}athfrak a}(b)}\right){\rm cd}ot\sqrt{a} {{\mathcal M}athcal B}igg/ \sqrt{a} \in {{\mathcal M}athbb F}_2,$$
where here we use the additive notation.
\end{defi}
\begin{rema} The Hilbert symbol between $a$ and $b$ is trivial due to the parity of $v_{{\mathcal M}athfrak q}(a)$.
\end{rema}
Of course, we have:
\begin{lemm}
The application ${{\mathcal M}athcal B}_{\rm K}: V \times V \rightarrow {{\mathcal M}athbb F}_2$ is a bilinear form on $V$.
\end{lemm}
\begin{proof} The linearity on the right comes from the linearity of the Artin symbol and the linearity on the left is an easy observation.
\end{proof}
\begin{rema} \label{remarque:matrix}
If we denote by $\chi_{i}$ a generator of $H^1({\rm Gal}({\rm K}(\sqrt{a_i})/{\rm K}))$, then the Gram matrix of the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ in the basis $\{a_1({\rm K}^\times)^2,{\rm cd}ots, a_n({\rm K}^\times)^2\}$ is exactly the matrix $(\chi_{i} \cup \chi_i \cup \chi_j)_{i,j}$ of the cup-products in $H_{et}^3({\rm Spec } {{\mathcal M}athcal O}_{\rm K})$. See Proposition \ref{proposition:C-S} and Remark \ref{remarque:symbole}.
Hence the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ coincides with the bilinear form ${{\mathcal M}athcal B}_{\rm K}^{et}$ on $H_{et}^1({\rm Spec } {{\mathcal M}athcal O}_{\rm K})$ defined by ${{\mathcal M}athcal B}_{\rm K}^{et}(x,y)=x\cup x\cup y \ \in H^3_{et}({\rm Spec } {{\mathcal M}athcal O}_{\rm K})$.
\end{rema}
The bilinear form ${{\mathcal M}athcal B}_{\rm K}$ is not necessarily symmetric, but we will give later some situations where ${{\mathcal M}athcal B}_{\rm K}$ is symmetric.
Let us give now two types of totally isotropic subspaces ${\rm W}$ that may appear.
\begin{defi}
The right-radical ${\rm R}ad$ of a bilinear form ${{\mathcal M}athcal B}$ on ${\rm V}$ is the subspace of ${\rm V}$ defined by: ${\rm R}ad:=\{x\in {\rm V}, \ {{\mathcal M}athcal B}(V,x)=0\}$.
\end{defi}
Of course one always has
${\rm d}im {\rm V}={\rm rk}\, {{\mathcal M}athcal B} + {{\rm d}im} \,{{\rm R}ad}$.
Remark moreover that ${\rm R}ad$ is itself a totally isotropic subspace of ${\rm V}$, since ${{\mathcal M}athcal B}({\rm R}ad,{\rm R}ad)\subset {{\mathcal M}athcal B}({\rm V},{\rm R}ad)=\{0\}$.
Let us come back to the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ on the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$.
\begin{prop}
Let ${\rm W}:=\langle \varepsilon_1, {\rm cd}ots, \varepsilon_r \rangle ({\rm K}^\times)^2 \subset {\rm V}$ be an ${{\mathcal M}athbb F}_2$-subspace of dimension~$r$, generated by some units $\varepsilon_i \in {{\mathcal M}athcal O}_{\rm K}^\times$.
Then ${\rm W} \subset {\rm R}ad $, and thus $({\rm V},{{\mathcal M}athcal B}_{\rm K})$ contains ${\rm W}$ as a totally isotropic subspace of dimension~$r$.
\end{prop}
\begin{proof}
Indeed, here ${{\mathcal M}athfrak a}(\varepsilon_i)={{\mathcal M}athcal O}_{\rm K}$ for $i=1,{\rm cd}ots, r$.
\end{proof}
\begin{prop}
Let ${\rm K}={\rm k}(\sqrt{b})$ be a quadratic extension. Suppose that there exist $a_1,{\rm cd}ots, a_r \in {\rm k}$ such that the extensions ${\rm k}(\sqrt{a_i})/{\rm k}$ are independent and unramified everywhere. Suppose moreover that $b \notin \langle a_1,{\rm cd}ots, a_r \rangle ({\rm k}^\times)^2$. Then ${\rm W} := \langle a_1, {\rm cd}ots,a_r \rangle ({\rm K}^\times)^2 $ is a totally isotropic subspace of dimension $r$.
\end{prop}
\begin{proof}
Let ${{\mathcal M}athfrak p} \subset {{\mathcal M}athcal O}_{\rm k}$ be a prime ideal of ${{\mathcal M}athcal O}_{\rm k}$. It is sufficient to prove that ${\rm d}isplaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak p}}\right)}$ is trivial. Let us study all the possibilities.
$\bullet$ If ${{\mathcal M}athfrak p}$ is inert in ${\rm K}/{\rm k}$, then as ${\rm K}(\sqrt{a_i})/{\rm K}$ is unramified at ${{\mathcal M}athfrak p}$, necessarily ${{\mathcal M}athfrak p}$ splits in ${\rm K}(\sqrt{a_i})/{\rm K}$ and then ${\rm d}isplaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak p}}\right)}$ is trivial.
$\bullet$ If ${{\mathcal M}athfrak p}={{\mathcal M}athfrak P}^2$ is ramified in ${\rm K}/{\rm k}$, then ${\rm d}isplaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak p}}\right)=
\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak P}}\right)^2}$ is trivial.
$\bullet$ If ${{\mathcal M}athfrak p}={{\mathcal M}athfrak P}_1{{\mathcal M}athfrak P}_2$ splits, then obviously ${\rm d}isplaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak P}_1}\right)=\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak P}_2}\right)}$, and then ${\rm d}isplaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{{\mathcal M}athfrak p}}\right)}$ is trivial.
\end{proof}
It is then natural to define the index of ${\rm K}$ as follows:
\begin{defi}
The index $\nu({\rm K})$ of ${\rm K}$ is the index of the bilinear form ${{\mathcal M}athcal B}_{\rm K}$.
\end{defi}
Of course, if the form ${{\mathcal M}athcal B}_{\rm K}$ is non-degenerate, one has: $\nu({\rm K}) \leq \frac{1}{2} d_2 {\rm C}l_{\rm K}$.
One then says that ${\rm C}l_{\rm K}$ is non-degenerate if the form ${{\mathcal M}athcal B}_{\rm K}$ is non-degenerate.
One can now present the main result of our work:
\begin{theo} \label{maintheorem}
Let ${\rm K}/{{\mathcal M}athbb Q}$ be a totally imaginary number field.
Then ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d> \nu({\rm K})$. In particular:
\begin{enumerate}
\item[$(i)$] if $\nu({\rm K}) <3$, then the Conjecture \ref{conj2} holds for ${\rm K}$ (and $p=2$);
\item[$(ii)$] if ${\rm C}l_{\rm K}$ is non-degenerate, then ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d > \frac{1}{2} d_2 {\rm C}l_{\rm K}$.
\end{enumerate}
\end{theo}
\begin{proof} Let ${\rm G}^{ur}g$ be a non-trivial uniform quotient of ${\rm G}^{ur}_{\rm K}(2)$ of dimension $d>0$. Let $W$ be the Kummer radical of $H^1({\rm G}^{ur}g)^\vee $; here $W$ is a subspace of
the Kummer radical ${\rm V}$ of ${\rm K}^{ur,2}/{\rm K}$.
As $d>\nu({\rm K})$, the space ${\rm W}$ is not totally isotropic. Then, one can find $x,y \in H_{et}^1({\rm G}^{ur}g) \subset H^1({\rm X}_{\rm K})$
such that $x \cup x\cup y \in H^3_{et}({\rm X}_{\rm K})$ is not zero (by Proposition \ref{proposition:C-S}). See also Remark \ref{remarque:matrix}. And thanks to the strategy developed in Section \ref{section:strategy}, we are done for the first part of the theorem.
$(i)$: as ${\rm G}^{ur}_{\rm K}(2)$ is FAb, every non-trivial uniform quotient ${\rm G}^{ur}g$ of ${\rm G}^{ur}_{\rm K}(2)$ must be of dimension $d\geq 3$.
$(ii)$: in this case $\nu({\rm K}) \leq \frac{1}{2} \rm rk({{\mathcal M}athcal B}_{\rm K})$.
\end{proof}
We finish this section with the proof of the theorem presented in the introduction.
$\bullet$ As $\nu({\rm K}) \leq n-\frac{1}{2} \rm rk({{\mathcal M}athcal B}_{\rm K})$, see Proposition \ref{bound-nu} and Remark \ref{remarque:matrix}, the assertion $(i)$ follows from Theorem \ref{maintheorem}.
$\bullet$ The remaining assertions are an immediate observation in small dimensions: in the three cases, $\nu({\rm K}) \leq n-\frac{1}{2} \rm rk({{\mathcal M}athcal B}_{\rm K}) <3$.
\subsection{The imaginary quadratic case}
\subsubsection{The context} \label{section:thecontext}
Let us consider an imaginary quadratic field ${\rm K}={{\mathcal M}athbb Q}(\sqrt{D})$, $D \in {{\mathcal M}athbb Z}_{<0}$ square-free.
Let $p_1,{\rm cd}ots, p_{k+1}$ be the {\it odd} prime numbers dividing $D$. Let us write the discriminant ${\rm d}isc_{\rm K}$ of ${\rm K}$ as ${\rm d}isc_{\rm K}=p_0^*{\rm cd}ot p_1^* {\rm cd}ots p_{k+1}^*$,
where $p_0^*\in \{1, -4,{{\mathcal M}athfrak p}m 8\}$.
The $2$-rank $n$ of ${\rm C}l_{\rm K}$ depends on the ramification of $2$ in ${\rm K}/{{\mathcal M}athbb Q}$. Let ${\rm K}^{ur,2}$ be the $2$-elementary abelian maximal unramified extension of ${\rm K}$:
\begin{enumerate}
\item[$-$] if $2$ is unramified in ${\rm K}/{{\mathcal M}athbb Q}$, {\emph i.e.} $p_0^*=1$, then $n=k$ and $V=\langle p_1^*,{\rm cd}ots, p_k^*\rangle ({\rm K}^\times)^2\subset {\rm K}^\times$ is the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$;
\item[$-$] if $2$ is ramified in ${\rm K}/{{\mathcal M}athbb Q}$, {\emph i.e.} $p_0^*=-4$ or ${{\mathcal M}athfrak p}m 8$, then $n=k+1$ and $V=\langle p_1^*,{\rm cd}ots, p_{k+1}^*\rangle ({\rm K}^\times)^2\subset {\rm K}^\times$ is the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$.
\end{enumerate}
We denote by ${{\mathcal M}athcal S}=\{p_1^*,{\rm cd}ots, p_{n}^*\}$ the ${{\mathcal M}athbb F}_2$-basis of $V$, where here $n=d_2 {\rm C}l_{\rm K}$ ($=k$ or $k+1$).
\begin{lemm}
$(i)$ For $p^*\neq q^* \in {{\mathcal M}athcal S}$, one has: ${{\mathcal M}athcal B}_{\rm K}(p^*,q^*)=0$ if and only if, ${\rm d}isplaystyle{\left(\frac{p^*}{q}\right)=1}$.
$(ii)$ For $p |D$, put $D_p:=D/p^*$. Then for $p^*\in{{\mathcal M}athcal S}$, one has:
${\rm d}isplaystyle{{{\mathcal M}athcal B}_{\rm K}(p^*,p^*):=\left(\frac{D_p}{p}\right)}$.
\end{lemm}
\begin{proof}
Obvious.
\end{proof}
Hence the matrix of the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ in the basis ${{\mathcal M}athcal S}$ is an $n\times n$ matrix of R\'edei type, ${{\rm M}_{\rm K}}=\left( m_{i,j}\right)_{i,j}$,
where $$m_{i,j}=\left\{\begin{array}{ll} {\rm d}isplaystyle{ \left(\frac{p_i^*}{p_j}\right)} & {\rm if \ } i\neq j, \\
{\rm d}isplaystyle{ \left(\frac{D_{p_i}}{p_i}\right)} & {\rm if \ } i=j.
\end{array}\right.$$
Here as usual, one uses the additive notation (the $1$'s are replaced by $0$'s and the $-1$'s by~$1$).
\begin{exem} \label{exemple-Boston}
Take ${\rm K}={{\mathcal M}athbb Q}(\sqrt{-4{\rm cd}ot 3 {\rm cd}ot 5 {\rm cd}ot 7 {\rm cd}ot 13})$. This quadratic field has root discriminant $|{\rm d}isc_{\rm K}|^{1/2}= 73.89 {\rm cd}ots$, but it is not currently known whether ${\rm G}^{ur}_{\rm K}(2)$ is finite or not; see the recent work of Boston and Wang \cite{Boston-Wang}.
Take ${{\mathcal M}athcal S}=\{-3,-5,-7,-13\}$. Then the Gram matrix of ${{\mathcal M}athcal B}_{\rm K}$ in ${{\mathcal M}athcal S}$ is:
$${\rm M}_{\rm K}=\left(\begin{array}{cccc}
1&1&1&0 \\
1&1&1&1 \\
0&1&1&1 \\
0&1&1&0
\end{array}\right).$$
Hence $\rm rk({{\mathcal M}athcal B}_{\rm K})=3$ and $\nu({\rm K}) \leq 4- \frac{3}{2} =2.5$, so $\nu({\rm K})\leq 2$. By Theorem \ref{maintheorem}, one concludes that ${\rm G}^{ur}_{\rm K}(2)$ has no non-trivial uniform quotient.
Remark that here, by Proposition \ref{proposition:dimension-isotropic2}, one has in fact $\nu({\rm K})=2$.
\end{exem}
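The rank computations in Example \ref{exemple-Boston} are easy to check by machine. The following short Python script is an illustrative sketch only (the computations of this paper were carried out with PARI/GP \cite{pari}, and the helper function is ad hoc); it computes the ${{\mathcal M}athbb F}_2$-ranks of ${\rm M}_{\rm K}$ and of its symmetrization, together with the bounds of Propositions \ref{bound-nu} and \ref{proposition:dimension-isotropic2}.
\begin{verbatim}
import numpy as np

def f2_rank(M):
    # rank of a 0/1 matrix over F_2, by Gaussian elimination
    A = np.array(M, dtype=int) % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] = (A[r] + A[rank]) % 2
        rank += 1
    return rank

# Gram matrix of B_K in the basis S of the Example (K = Q(sqrt(-4*3*5*7*13)))
M = [[1, 1, 1, 0],
     [1, 1, 1, 1],
     [0, 1, 1, 1],
     [0, 1, 1, 0]]
n = 4
r = f2_rank(M)                                        # -> 3
r_sym = f2_rank((np.array(M) + np.array(M).T) % 2)    # -> 2
print(n - r / 2)               # upper bound: nu(K) <= 2.5, i.e. nu(K) <= 2
print(n - r_sym // 2 - r // 2) # lower bound: nu(K) >= 2, hence nu(K) = 2
\end{verbatim}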
Let us record at this point part of the theorem stated in the introduction:
\begin{coro} \label{coro-FM-quadratic}
The Conjecture \ref{conj2} holds when:
\begin{enumerate}
\item[$(i)$] $d_2{\rm C}l_{\rm K}= 5$ and ${{\mathcal M}athcal B}_{\rm K}$ is non-degenerate;
\item[$(ii)$] $d_2{\rm C}l_{\rm K}=4$ and $\rm rk({{\mathcal M}athcal B}_{\rm K}) \geq 3$;
\item[$(iii)$] $d_2{\rm C}l_{\rm K}=3$ and $\rm rk({{\mathcal M}athcal B}_{\rm K}) >0$.
\end{enumerate}
\end{coro}
Remark that $(iii)$ is an extension of corollary \ref{coro-exemple}.
\subsubsection{Symmetric bilinear forms. Examples}
Let us keep the context of the previous Section \ref{section:thecontext}. Then, thanks to the quadratic reciprocity law, one gets:
\begin{prop} \label{prop:congruence} The bilinear form ${{\mathcal M}athcal B}_{\rm K}: V \times V \rightarrow {{\mathcal M}athbb F}_2$ is symmetric if and only if there is at most one prime $p \equiv 3 \ ({\rm mod} \ 4)$ dividing $D$.
\end{prop}
\begin{proof} Obvious.
\end{proof}
Let us give some examples (the computations have been done by using PARI/GP \cite{pari}).
\begin{exem}
Take $k+1$ prime numbers $p_1,{\rm cd}ots, p_{k+1}$, such that
\begin{enumerate}
\item[$\bullet$] $p_1\equiv {\rm cd}ots \equiv p_{k} \equiv 1\ ({\rm mod} \ 4)$ and $p_{k+1} \equiv 3\ ({\rm mod} \ 4)$;
\item[$\bullet$] for $1\leq i < j \leq k$, ${\rm d}isplaystyle{\left(\frac{p_i}{p_j}\right)=1}$;
\item[$\bullet$] for $i=1,{\rm cd}ots, k$, ${\rm d}isplaystyle{\left(\frac{p_i}{p_{k+1}}\right)=-1}$.
\end{enumerate}
Put ${\rm K}={{\mathcal M}athbb Q}(\sqrt{-p_1 {\rm cd}ots p_{k+1}})$.
In this case the matrix of the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ in the basis $(p_i)_{1 \leq i \leq k}$ is the $k \times k$ identity matrix and $\nu({\rm K})=\lfloor \frac{k}{2} \rfloor$.
Hence, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $\lfloor \frac{k}{2} \rfloor +1$.
In particular, if we choose an integer $t>0$ such that \ $\sqrt{k+1}\geq t \geq \sqrt{\lfloor \frac{k}{2} \rfloor +2}$, then there is no quotient of ${\rm G}^{ur}_{\rm K}(2)$ onto ${\rm Sl}_t^1({{\mathcal M}athbb Z}_2)$. (If $t > \sqrt{k+1}$, it is obvious.)
Here are some more examples. For ${\rm K}_1={{\mathcal M}athbb Q}(\sqrt{-5{\rm cd}ot 29 {\rm cd}ot 109 {\rm cd}ot 281 {\rm cd}ot 349 {\rm cd}ot 47})$, ${\rm G}^{ur}_{{\rm K}_1}(2)$ has no non-trivial uniform quotient; here ${\rm C}l({\rm K}_1) \simeq ({{\mathcal M}athbb Z}/2{{\mathcal M}athbb Z})^5$.
Take ${\rm K}_2={{\mathcal M}athbb Q}(\sqrt{-5{\rm cd}ot 29 {\rm cd}ot 109 {\rm cd}ot 281 {\rm cd}ot 349 {\rm cd}ot 1601 {\rm cd}ot 1889 {\rm cd}ot 5581 {\rm cd}ot 3847})$; here ${\rm C}l({\rm K}_2) \simeq ({{\mathcal M}athbb Z}/2{{\mathcal M}athbb Z})^8 $. Then ${\rm G}^{ur}_{{\rm K}_2}(2)$ has no uniform quotient of dimension at least $5$. In particular, there is
no unramified extension of~${\rm K}_2$ with Galois group isomorphic to ${\rm Sl}_3^1({{\mathcal M}athbb Z}_2)$.
\end{exem}
\begin{exem}
Take $2m+1$ prime numbers $p_1,{\rm cd}ots, p_{2m+1}$, such that
\begin{enumerate}
\item[$\bullet$] $p_1\equiv {\rm cd}ots \equiv p_{2m} \equiv 1\ ({\rm mod} \ 4)$ and $p_{2m+1} \equiv 3\ ({\rm mod} \ 4)$;
\item[$\bullet$] ${\rm d}isplaystyle{\left(\frac{p_1}{p_2}\right)= \left(\frac{p_3}{p_4}\right) = {\rm cd}ots = \left(\frac{p_{2m-1}}{p_{2m}}\right) = -1}$;
\item[$\bullet$] for the other indices $1\leq i <j \leq 2m$, ${\rm d}isplaystyle{\left(\frac{p_i}{p_j}\right)= 1}$;
\item[$\bullet$] for $i=1,{\rm cd}ots, 2m$, ${\rm d}isplaystyle{\left(\frac{p_i}{p_{2m+1}}\right)=-1}$.
\end{enumerate}
Put ${\rm K}={{\mathcal M}athbb Q}(\sqrt{-p_1{\rm cd}ots p_{2m+1}})$.
In this case the bilinear form ${{\mathcal M}athcal B}_{\rm K}$ is non-degenerate and alternating, hence isometric to $ {\rm d}isplaystyle{\overbrace{b(0) \bot {\rm cd}ots \bot b(0)}^{m} }$. Hence, $\nu({\rm K})=m$, and ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $m +1$.
For example, for ${\rm K}={{\mathcal M}athbb Q}(\sqrt{-5{\rm cd}ot 13 {\rm cd}ot 29 {\rm cd}ot 61 {\rm cd}ot 1049 {\rm cd}ot 1301 {\rm cd}ot 743})$, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $4$.
\end{exem}
\subsubsection{Relation with the $4$-rank of the Class group - Density estimations}
The study of the $4$-rank of the class group of quadratic number fields started with the work of R\'edei \cite{Redei} (see also \cite{Redei-Reichardt}). Since then, many authors
have contributed to its extensions, generalizations and applications. Let us cite an article of Lemmermeyer \cite{Lemmermeyer} where one can find an
extensive literature about the question. See also a nice paper of Stevenhagen \cite{Stevenhagen}, and the work of Gerth \cite{Gerth} and Fouvry-Kl\"uners \cite{Fouvry-Klueners} concerning the density question.
\begin{defi}
Let ${\rm K}$ be a number field. Define the $4$-rank ${\rm R}_{{\rm K},4}$ of ${\rm K}$ as follows: $${\rm R}_{{\rm K},4}:= {\rm d}im_{{{\mathcal M}athbb F}_2} {\rm C}l_{\rm K}[4]/{\rm C}l_{\rm K}[2],$$
where ${\rm C}l_{\rm K}[m]=\{c \in {\rm C}l_{\rm K},c^m=1\}$.
\end{defi}
Let us keep the context and the notation of Section \ref{section:thecontext}: here ${\rm K}={{\mathcal M}athbb Q}(\sqrt{D})$ is an imaginary quadratic field of discriminant ${\rm d}isc_{\rm K}$, $D \in {{\mathcal M}athbb Z}_{<0}$ square-free. Denote by $\{q_1,{\rm cd}ots, q_{n+1}\}$ the set of prime numbers that ramify in ${\rm K}/{{\mathcal M}athbb Q}$; recall $d_2 {\rm C}l_{\rm K}=n$. Here we can take $q_i=p_i$ for $1\leq i \leq n$, and $q_{n+1}=p_{k+1}$ or $q_{n+1}=2$ according to the ramification at $2$. Then,
consider the R\'edei matrix ${\rm M}_{\rm K}'=(m_{i,j})_{i,j}$ of size $(n+1)\times (n+1)$ with coefficients in ${{\mathcal M}athbb F}_2$,
where $$m_{i,j}=\left\{\begin{array}{ll} {\rm d}isplaystyle{ \left(\frac{q_i^*}{q_j}\right)} & {\rm if \ } i\neq j, \\
{\rm d}isplaystyle{ \left(\frac{D_{q_i}}{q_i}\right)} & {\rm if \ } i=j.
\end{array}\right.$$
It is not difficult to see that the sum of the rows is zero, hence the rank of ${\rm M}'_{\rm K}$ is at most $n$.
\begin{theo}[R\'edei] \label{theorem:Redei} Let ${\rm K}$ be an imaginary quadratic number field. Then $ {\rm R}_{{\rm K},4}=d_2 {\rm C}l_{\rm K}-\rm rk({\rm M}'_{\rm K})$.
\end{theo}
\begin{rema}
The strategy of R\'edei is to construct, for every couple $(D_1,D_2)$ ``of the second kind'', a degree $4$ cyclic unramified extension of ${\rm K}$. Here, to be of the second kind
means that ${\rm d}isc_{\rm K}=D_1D_2$, where the $D_i$ are fundamental discriminants such that $\left(\frac{D_1}{p_2}\right)= \left(\frac{D_2}{p_1}\right)=1$ for every prime $p_i|D_i$, $i=1,2$.
And clearly, this condition corresponds exactly to the existence of orthogonal subspaces $W_i$ of the Kummer radical ${\rm V}$, $i=1,2$, generated
by the $p_i^*$, for all $p_i|D_i$: ${{\mathcal M}athcal B}_{\rm K}(W_1,W_2)={{\mathcal M}athcal B}_{\rm K}(W_2,W_1)=\{0\}$. Such orthogonal subspaces allow us to construct totally isotropic subspaces.
And then, the larger the $4$-rank of ${\rm C}l_{\rm K}$, the larger $\nu({\rm K})$ must be.
\end{rema}
Consider now the matrix ${\rm M}''_{\rm K}$ obtained from ${\rm M}'_{\rm K}$ by deleting the last row.
Remark here that the matrix ${\rm M}_{\rm K}$ is a submatrix of the R\'edei matrix ${\rm M}''_{\rm K}$:
$${\rm M}''_{\rm K}=\left(\begin{array}{cl}
\begin{array}{|c|} \hline \\ {\rm M}_{\rm K} \\ \\ \hline \end{array}
& \begin{array}{c} * \\ \vdots \\ * \end{array} \end{array} \right)$$
Hence, $\rm rk({\rm M}_{\rm K}) +1 \geq \rm rk({\rm M}'_{\rm K}) \geq \rm rk({\rm M}_{\rm K})$. Remark that in example \ref{exemple-Boston}, $\rm rk({\rm M}_{\rm K})=3$ and $\rm rk({\rm M}'_{\rm K})=4$.
But sometimes one has $\rm rk({\rm M}'_{\rm K})=\rm rk({\rm M}_{\rm K})$, as for example:
\begin{enumerate}
\item[$(A)$:] when $p_0^*=1$ (that is, the number of primes $p_i \equiv 3 \ ({\rm mod} \ 4)$ dividing $D$ is odd);
\item[$(B)$:] or, when ${{\mathcal M}athcal B}_{\rm K}$ is non-degenerate.
\end{enumerate}
For situation $(A)$, it suffices to note that the sum of the columns of ${\rm M}'_{\rm K}$ is also zero (thanks to the properties of the Legendre symbol): the last column of ${\rm M}''_{\rm K}$ is then the sum of the others, and hence $\rm rk({\rm M}'_{\rm K})=\rm rk({\rm M}''_{\rm K})=\rm rk({\rm M}_{\rm K})$.
From now on we follow the work of Gerth \cite{Gerth}. Denote by ${\rm F}f$ the set of imaginary quadratic number fields.
For $0 \leq r \leq n$ and $X\geq 0$, put
$${\rm S}_{X}=\big\{ {\rm K} \in {\rm F}f, \ |{\rm d}isc_{\rm K}|\leq X \big\},$$
$${\rm S}_{n,X}=\big\{{\rm K}\in {\rm S}_{X}, \ d_2{\rm C}l_{\rm K}=n \big\}, \ \ {\rm S}_{n,r,X}=\big\{{\rm K} \in {\rm S}_{n,X}, \ {\rm R}_{{\rm K},4}=r \big\}.$$
Denote also $${\rm A}_{X}=\big\{{\rm K}\in {\rm S}_X, {\rm \ satisfying } \ (A) \big\}$$
$$ {\rm A}_{n,X}=\big\{ {\rm K} \in {\rm A}_{X}, \ d_2 {\rm C}l_{\rm K}=n\big\}, \ \ {\rm A}_{n,r,X}=\big\{ {\rm K} \in {\rm A}_{n,X},
\ {\rm R}_{{\rm K},4}=r\big\}.$$
One has the following density theorem due to Gerth:
\begin{theo}[Gerth \cite{Gerth}]
The limits ${\rm d}isplaystyle{\lim_{X\rightarrow \infty} \frac{|{\rm A}_{n,r,X}|}{|{\rm A}_{n,X}|} }$ and
${\rm d}isplaystyle{\lim_{X\rightarrow \infty} \frac{|{\rm S}_{n,r,X}|}{|{\rm S}_{n,X}|} }$ exist and are equal. Denote by $d_{n,r}$ this quantity.
Then $d_{n,r}$ is explicit and, $$d_{\infty,r}:=\lim_{n \rightarrow \infty} d_{n,r}= \frac{2^{-r^2} {{\mathcal M}athfrak p}rod_{k=1}^\infty(1-2^{-k})}{\left({{\mathcal M}athfrak p}rod_{k=1}^r(1-2^{-k})\right)^2}.$$
\end{theo}
Recall also the following quantities introduced at the beginning of our work:
$${\rm F}M_{n,X}^{(d)}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > d\},$$
$$ {\rm F}M_{n,X}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm Conjecture \ \ref{conj2} \ holds \ for \ }{\rm K}\},$$
and the limits:
$${\rm F}M_{n}:=\liminf_{X \rightarrow + \infty} \frac{\# {\rm F}M_{n,X}}{\#{\rm S}_{n,X}}, \ \ \ {\rm F}M_n^{(d)}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm F}M_{n,X}^{(d)}}{\#{\rm S}_{n,X}}.$$
After combining all our observations, we obtain (see also Corollary \ref{coro-intro1}):
\begin{coro}
For $d\leq n$, one has $${\rm F}M_n^{(d)} \geq d_{n,0}+d_{n,1}+{\rm cd}ots + d_{n,2d-n-1}.$$
In particular:
\begin{enumerate}
\item[$(i)$] $ {\rm F}M_{3} \geq .992187 $;
\item[$(ii)$] ${\rm F}M_{4} \geq .874268$, ${\rm F}M_4^{(4)} \geq .999695$;
\item[$(iii)$] ${\rm F}M_5 \geq .331299$, ${\rm F}M_5^{(4)} \geq .990624$, ${\rm F}M_5^{(5)} \geq .9999943$;
\item[$(iv)$] ${\rm F}M_{6}^{(4)} \geq .867183$, ${\rm F}M_6^{(5)} \geq .999255$, ${\rm F}M_6^{(6)} \geq 1-5.2 {\rm cd}ot 10^{-8}$;
\item[$(v)$] for all $d\geq 3$, ${\rm F}M_d^{(1+d/2)} \geq .866364$, ${\rm F}M_d^{(2+d/2)} \geq .999953$.
\end{enumerate}
\end{coro}
\begin{proof}
As noted by Gerth in \cite{Gerth}, the dominating set in the density computation is the set ${\rm A}_{n,X}$ of imaginary quadratic number fields ${\rm K}={{\mathcal M}athbb Q}(\sqrt{D})$ satisfying $(A)$.
But for ${\rm K}$ in ${\rm A}_{n,X}$, one has ${\rm rk({{\mathcal M}athcal B}_{\rm K})}={\rm rk({\rm M}_{\rm K})}= {n}-{\rm R}_{{\rm K},4}$. Hence for
${\rm K} \in {\rm A}_{n,r,X}$, by Proposition \ref{bound-nu} $$\nu({\rm K}) \leq n -\frac{1}{2} \big(n - {\rm R}_{{\rm K},4}\big)=\frac{1}{2}\big(n+ {\rm R}_{{\rm K},4}\big).$$
Now one uses Corollary \ref{coro-FM-quadratic}. Or equivalently, one sees that Conjecture \ref{conj2} holds when $3 > \frac{1}{2}\big( n + {\rm R}_{{\rm K},4}\big)$, {\emph i.e.}, when
${\rm R}_{{\rm K},4} < 6-n$.
More generally, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d$ when
${\rm R}_{{\rm K},4} < 2d -n$. In particular, $${\rm F}M_n^{(d)} \geq d_{n,0}+d_{n,1}+{\rm cd}ots + d_{n,2d-n-1}.$$
Now one uses the estimates of Gerth in \cite{Gerth}, to obtain: \begin{enumerate}
\item[$(i)$] ${\rm F}M_{3} \geq d_{3,0}+d_{3,1}+d_{3,2} \simeq 0.992187 {\rm cd}ots$
\item[$(ii)$] ${\rm F}M_4 \geq d_{4,0} + d_{4,1} \simeq 0.874268 {\rm cd}ots $, ${\rm F}M_4^{(4)} \geq d_{4,0} + d_{4,1}+d_{4,2}+d_{4,3} \simeq .999695 {\rm cd}ots$,
\item[$(iii)$] $ {\rm F}M_5 \geq d_{5,0} \simeq 0.331299 {\rm cd}ots $, ${\rm F}M_5^{(4)} \geq d_{5,0}+d_{5,1}+d_{5,2} \simeq .990624 {\rm cd}ots$, ${\rm F}M_5^{(5)} \geq d_{5,0}+d_{5,1}+d_{5,2}+d_{5,3}+d_{5,4} \simeq .9999943 {\rm cd}ots$,
\item[$(iv)$] ${\rm F}M_6^{(4)} \geq d_{6,0}+d_{6,1} \simeq 0.867183 {\rm cd}ots$, ${\rm F}M_6^{(5)} \geq d_{6,0}+d_{6,1}+d_{6,2}+d_{6,3} \simeq .999255 {\rm cd}ots$, ${\rm F}M_6^{(6)} \geq 1- d_{6,6} \simeq 1-5.2 {\rm cd}ot 10^{-8}$,
\item[$(v)$] ${\rm F}M_d^{(1+d/2)} \geq d_{\infty,0}+ d_{\infty,1} \simeq .866364{\rm cd}ots $, ${\rm F}M_d^{(2+d/2)} \geq d_{\infty,0}+d_{\infty,1}+d_{\infty,2}+d_{\infty,3} \simeq .999953 {\rm cd}ots$.
\end{enumerate}
\end{proof}
In the spirit of the Cohen-Lenstra heuristics, the work of Gerth has been improved by Fouvry-Kl\"uners \cite{Fouvry-Klueners}.
This work allows us to give a more general density estimate, as announced in the Introduction.
Recall $${\rm F}M_{X}^{[i]}:=\{{\rm K} \in {\rm S}_{X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > i+ \frac{1}{2}d_2 {\rm C}l_{\rm K}\}$$
and $${\rm F}M^{[i]}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm F}M^{[i]}_X}{\# {\rm S}_{X}}.$$
Our work allows us to obtain (see Corollary \ref{coro-intro2}):
\begin{coro}
For $i\geq 1$, one has: $${\rm F}M^{[i]} \geq d_{\infty,0}+ d_{\infty,1}+ {\rm cd}ots + d_{\infty,2i-2} .$$
In particular, $${\rm F}M^{[1]} \geq .288788, \ \ {\rm F}M^{[2]} \geq . 994714, \ {\rm and } \ {\rm F}M^{[3]} \geq 1-9.7{\rm cd}ot 10^{-8}.$$
\end{coro}
\begin{proof}
By Fouvry-Kl\"uners \cite{Fouvry-Klueners}, the density of imaginary quadratic fields for which ${\rm R}_{{\rm K},4}=r$, is equal to $d_{\infty,r}$.
Recall that for ${\rm K} \in {\rm F}f$: ${\rm rk}({\rm M}_{\rm K}) \geq {\rm rk}({\rm M}_{\rm K}') -1 $. Then thanks to Proposition \ref{bound-nu} and Theorem \ref{theorem:Redei}, we get $$\nu({\rm K}) \leq \frac{1}{2} d_2 {\rm C}l_{\rm K} + \frac{1}{2} +\frac{1}{2} {\rm R}_{{\rm K},4}.$$
Putting this fact together with Theorem \ref{maintheorem}, we obtain that ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient
of dimension $d > \frac{1}{2} d_2 {\rm C}l_{\rm K} + \frac{1}{2} +\frac{1}{2} {\rm R}_{{\rm K},4}$. Then for $i\geq 1$, the proportion of the fields ${\rm K}$ in ${\rm F}M^{[i]}$ is at least the proportion of ${\rm K} \in {\rm F}f$ for which ${\rm R}_{{\rm K},4} < 2i -1$, hence at least $d_{\infty,0}+ d_{\infty,1}+ {\rm cd}ots + d_{\infty,2i-2}$ by \cite{Fouvry-Klueners}.
To conclude:
$${\rm F}M^{[1]} \geq d_{\infty,0} \simeq .288788{\rm cd}ots$$ $$ {\rm F}M^{[2]} \geq d_{\infty,0} + d_{\infty,1}+ d_{\infty,2} \simeq .994714{\rm cd}ots$$
$$ {\rm F}M^{[3]} \geq d_{\infty,0} + d_{\infty,1}+d_{\infty,2} +d_{\infty,3} +d_{\infty,4} \simeq 1-9.7{\rm cd}ot 10^{-8}.$$
\end{proof}
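As a quick numerical sanity check, the limiting densities $d_{\infty,r}$ and the partial sums displayed above can be evaluated with a few lines of Python. This is an illustrative sketch only: the infinite product is truncated at $60$ factors, which is amply sufficient at the displayed precision.
\begin{verbatim}
def d_inf(r, terms=60):
    # d_{infty,r} = 2^{-r^2} prod_{k>=1}(1-2^{-k}) / (prod_{k=1}^{r}(1-2^{-k}))^2
    c = 1.0
    for k in range(1, terms + 1):
        c *= 1.0 - 2.0 ** (-k)
    denom = 1.0
    for k in range(1, r + 1):
        denom *= 1.0 - 2.0 ** (-k)
    return 2.0 ** (-r * r) * c / denom ** 2

print(d_inf(0))                             # ~0.288788  (bound for FM^[1])
print(sum(d_inf(r) for r in range(3)))      # ~0.994714  (bound for FM^[2])
print(1 - sum(d_inf(r) for r in range(5)))  # ~1e-7      (bound for FM^[3])
\end{verbatim}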
\
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{definition}{Definition}[section]
\newtheorem{example}{Example}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}{Remark}[section]
\theorembodyfont{\normalfont}
\newproof{pf}{Proof}
\newproof{pot}{Proof of Theorem \ref{thm2}}
\def\hspace*{\fill}~$\square$\par\endtrivlist\unskip{\hspace*{\fill}~$\square$\par\endtrivlist\unskip}
\begin{frontmatter}
\title{On the solution of the coupled steady-state dual-porosity-Navier-Stokes fluid flow model with the Beavers-Joseph-Saffman interface condition\tnoteref{mytitlenote}}
\author[add1]{Di Yang}\ead{[email protected]}
\author[add1]{Yinnian He\corref{correspondingauthor}}\ead{[email protected]}
\author[add1]{Luling Cao}\ead{[email protected]}
\cortext[correspondingauthor]{Corresponding author.}
\address[add1]{School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P.R. China }
\begin{abstract}
\par In this work, we propose a new analysis strategy to establish an a priori estimate of the weak solutions to the coupled steady-state dual-porosity-Navier-Stokes fluid flow model with the Beavers-Joseph-Saffman interface condition. The main advantage of the proposed method is that the a priori estimate and the existence result do not require small data or a large-viscosity restriction. Therefore the global uniqueness of the weak solution is naturally obtained.
\end{abstract}
\begin{keyword}
weak solution\sep dual-porosity-Navier-Stokes\sep Beavers-Joseph-Saffman interface condition\sep a priori estimate\sep existence\sep global uniqueness
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{section:introuction}
Coupled free flow and porous medium flow systems play an important role in many practical engineering fields, e.g., flood simulation of arid areas in geological science~\cite{Discacciati2004phd}, filtration treatment in industrial production~\cite{ hanspal2006numerical, nassehi1998modelling}, petroleum exploitation in mining, and blood penetration between vessels and organs in life science~\cite{d2011robust}. Specifically, such systems are usually described by the Navier-Stokes equations (or Stokes equations) coupled with Darcy's equation, and a substantial body of results is available, such as \cite{Cao20132013, Cao2011Robin, Cai2009NUMERICAL, Chen2012Efficient, Girault20092052, Mu2010Decoupled, zhao2016two-grid}. However, the standard Darcy equation describes fluid flowing through a single-porosity medium only, which is not accurate enough for complicated multiple porous media such as naturally fractured reservoirs. Indeed, a naturally fractured reservoir consists of low-permeability rock matrix blocks surrounded by an irregular network of natural microfractures, and the two components have different fluid storage and conductivity properties~\cite{New1983, The1963}. In 2016, Hou et al. \cite{Hou2016710} proposed and numerically solved a coupled dual-porosity-Stokes fluid flow model with four multi-physics interface conditions. The authors used the dual-porosity equations over the Darcy region to describe fluid flowing through the multiple porous medium. Recently, several related studies of the above model can be found in the literature~\cite{AlMahbub2019803, AlMahbub2020112616, Gao202001, he2020an, Shan2019389}. In particular, Gao and Li \cite{Gao202001} proposed a decoupled stabilized finite element method for the numerical solution of the coupled dual-porosity-Navier-Stokes fluid flow model.
The steady-state dual-porosity-Navier-Stokes fluid flow model has distinct features and difficulties in mathematical analysis. Many numerical methods have been studied for the well-known stationary or time-dependent Navier-Stokes/Darcy model with the Beavers-Joseph or Beavers-Joseph-Saffman interface condition, including coupled finite element methods \cite{Badea2010195, Cao2021113128, Discacciati2009315, Discacciati2017571, Zuo20151009}, discontinuous Galerkin methods \cite{Chidyagwai20093806, Girault20092052, Girault201328}, domain decomposition methods \cite{Cao20132013, Discacciati2004phd, Discacciati2009315, He2015264, Qiu2020109400}, and decoupled methods based on two-grid finite elements \cite{Cai2009NUMERICAL, Fang2020915, zhao2016two-grid, Zuo20151009}. In spite of these important contributions to numerical simulation, the existence of a weak solution to the coupled dual-porosity-Navier-Stokes fluid flow model with the Beavers-Joseph-Saffman interface condition for general data remains unresolved. In many works \cite{Badea2010195, Cai2009NUMERICAL, Chidyagwai20093806, Discacciati2004phd, Discacciati2009315, Discacciati2017571, Girault20092052, Girault201328, He2015264, Zuo20151009}, a priori estimates and the existence of a weak solution require suitable small-data and/or large-viscosity restrictions, and therefore only local uniqueness can be established when the data satisfy additional restrictions. In \cite{Girault20092052}, the authors pointed out that the difficulty in obtaining a priori estimates and existence for general data stems from the transmission interface condition, which does not completely compensate the nonlinear convection term from the Navier-Stokes equations in the energy balance.
Therefore, in this paper, motivated by the treatment of the steady-state Navier-Stokes equations with mixed boundary conditions in \cite{Hou201947}, we establish a new a priori estimate of the weak solutions by coupling the model problem with a suitably designed auxiliary problem, so as to completely compensate the nonlinear convection term from the Navier-Stokes equations. In addition, we prove the existence of a weak solution without a small-data or large-viscosity restriction. As a result, the global uniqueness of the weak solution is naturally obtained.
The rest of this paper is organized as follows. In Section 2, we specify the steady-state dual-porosity-Navier-Stokes fluid flow model with the Beavers-Joseph-Saffman interface condition and provide its Galerkin variational formulation. In Section 3, we establish a new a priori estimate of the weak solutions by coupling the model problem with an auxiliary problem constructed from it. Finally, in Section 4, we prove the existence of the weak solution without small-data and/or large-viscosity restrictions, using the Galerkin method and Brouwer's fixed-point theorem, and the global uniqueness of all variables using the inf-sup condition and Babu\u{s}ka--Brezzi's theory.
\section{Model specification}
\label{section:model}
\subsection{Setting of the problem}
In this section, we consider the steady-state dual-porosity-Navier-Stokes fluid flow model in a bounded open polygonal domain $\Omega\subset\mathbb{R}^N(N=2,3)$ with four physically valid interface conditions.
\begin{figure}
\caption{Schematic representation of fluid flow region $\Omega_s$ and dual-porous medium region $\Omega_d$, separated by an interface $\Gamma$.}
\label{global:domain}
\end{figure}
The domain $\Omega$ consists of a fluid flow region $\Omega_s$ and a dual-porous medium region $\Omega_d$, with interface $\Gamma=\overline{\Omega}_s\cap\overline{\Omega}_d$ (see Figure \ref{global:domain}). Both $\Omega_s$ and $\Omega_d$ are open, regular, simply connected, and bounded with Lipschitz continuous boundaries $\Gamma_i=\partial\Omega_i\setminus\Gamma~ (i=s,d)$, respectively. Here, $\Omega_s\cap\Omega_d=\emptyset$ and $\overline{\Omega}_s\cup\overline{\Omega}_d=\overline{\Omega}$. The unit normal vector of the interface $\Gamma$ pointing from $\Omega_s$ to $\Omega_d$ (from $\Omega_d$ to $\Omega_s$) is denoted by $\bm{n}_s$ (resp. $\bm{n}_d$), and the corresponding unit tangential vectors are denoted by $\bm{\tau}_i$, $i=1,\cdots,N-1$. In the two-dimensional case, if we write $\bm{n}_s=(n_1,n_2)\in\mathbb{R}^2$, then $\bm{\tau}=\bm{n}_s^\perp=(-n_2,n_1)$.
In $\Omega_s$, the fluid flow is governed by the Navier-Stokes equation \cite{Cao2021113128, Cao20132013, Chidyagwai20093806, Discacciati2004phd, Discacciati2009315, Girault20092052, Girault201328}:
\begin{equation}\label{stokes}
\begin{cases}
\left(\bm{u}_s\cdot\nabla\right)\bm{u}_s-\nabla\cdot
\mathbb{T}(\bm{u}_s,p_s)=\bm{f}_s&\text{in}~\Omega_s,\\
\nabla\cdot\bm{u}_s=0&\text{in}~\Omega_s,\\
\bm{u}_s=\bm{0}&\text{on}~\Gamma_s,
\end{cases}
\end{equation}
where $\mathbb{T}(\bm{u}_s,p_s)=-p_s\mathbb{I}+2\nu\mathbb{D}(\bm{u}_s)$ is the stress tensor, $\nu$ is the kinetic viscosity, $\mathbb{D}(\bm{u}_s)=\frac{1}{2}\left[\nabla\bm{u}_s+(\nabla\bm{u}_s)^\top\right]$ is the velocity deformation tensor, $\bm{u}_s(\bm{x})$ denotes the fluid velocity in $\Omega_s$, $p_s(\bm{x})$ denotes the kinematic pressure in $\Omega_s$, and $\bm{f}_s(\bm{x})$ denotes a general body force term that includes gravitational acceleration. As usual, we write formally:
\[(\bm{v}\cdot\nabla)\bm{w}=\sum_{i=1}^N v_i\frac{\partial\bm{w}}{\partial x_i},\quad
\nabla\cdot\bm{v}=\sum_{i=1}^N\frac{\partial v_i}{\partial x_i}.\]
The filtration of an incompressible fluid through porous media is often described using Darcy's law. Thus, in the dual-porous medium domain $\Omega_d$, the flow is governed by a traditional dual-porosity model, composed of the following matrix and microfracture equations:
\begin{equation}\label{dual:darcy}
\begin{cases}
-\nabla\cdot
\big(\frac{k_m}{\mu}\nabla\phi_m\big)+\frac{\sigma k_m}{\mu}(\phi_m-\phi_f)=0&\text{in}~\Omega_d,\\
-\nabla\cdot
\big(\frac{k_f}{\mu}\nabla\phi_f\big)+\frac{\sigma k_m}{\mu}(\phi_f-\phi_m)=f_d&\text{in}~\Omega_d,\\
\phi_m=0&\text{on}~\Gamma_{d},\\
\phi_f=0&\text{on}~\Gamma_{d},
\end{cases}
\end{equation}
where $\phi_m(\bm{x})$ and $\phi_f(\bm{x})$ denote the matrix and microfracture flow pressures, $\mu$ is the dynamic viscosity, $k_m$ and $k_f$ are the intrinsic permeabilities of the matrix and microfracture regions, $\sigma$ is the shape factor characterizing the morphology and dimension of the microfractures, $f_d$ is the sink/source term for the microfractures, and the term $\frac{\sigma k_m}{\mu}(\phi_m-\phi_f)$ describes the mass exchange between the matrix and the microfractures.
Based on the fundamental properties of the dual-porosity fluid flow model and the traditional Stokes--Darcy flow model, Hou et al. \cite{Hou2016710} introduced the following four physically valid interface conditions to appropriately couple the dual-porosity-Stokes model; they are also adopted for our problem.
\begin{itemize}
\item No-exchange condition between the matrix and the conduits/macrofractures:
\begin{equation}\label{interface:condition:1}
-\frac{k_m}{\mu}\nabla\phi_m\cdot\bm{n}_d=0
\quad\text{on}~\Gamma.
\end{equation}
\item Mass conservation:
\begin{equation}\label{interface:condition:2}
\bm{u}_s\cdot\bm{n}_s=\frac{k_f}{\mu}
\nabla\phi_f\cdot\bm{n}_d
\quad\text{on}~\Gamma.
\end{equation}
\item Balance of normal forces:
\begin{equation}\label{interface:condition:3}
-\bm{n}_s\cdot(\mathbb{T}(\bm{u}_s,p_s)\bm{n}_s)
=\frac{\phi_f}{\rho}
\quad\text{on}~\Gamma.
\end{equation}
\item The Beavers-Joseph-Saffman interface condition \cite{Beavers1967197, Jone1973231, Mikelic20001111, Saffman197193}: for $i=1,\cdots,N-1$,
\begin{equation}\label{interface:condition:4}
-\bm{\tau}_i\cdot\left(\mathbb{T}(\bm{u}_s,p_s)\bm{n}_s\right)
=\frac{\alpha\nu\sqrt{N}}{\sqrt{\text{trace}(\bm{\Pi})}}
\bm{u}_s\cdot\bm{\tau}_i
\quad \text{on}~\Gamma.
\end{equation}
\end{itemize}
In \eqref{interface:condition:1}--\eqref{interface:condition:4}, $\alpha$ is the Beavers-Joseph constant depending on the properties of the dual-porous medium, $\bm{\Pi}$ represents the intrinsic permeability that satisfies the relation $\bm{\Pi}=k_f\mathbb{I}$, $\mathbb{I}$ is the $N\times N$ identity matrix, and $\rho$ is the fluid density.
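Note that, since $\bm{\Pi}=k_f\mathbb{I}$ gives $\text{trace}(\bm{\Pi})=Nk_f$, the friction coefficient in \eqref{interface:condition:4} simplifies to
\[
\frac{\alpha\nu\sqrt{N}}{\sqrt{\text{trace}(\bm{\Pi})}}=\frac{\alpha\nu}{\sqrt{k_f}},
\]
which, once the momentum equation is scaled by the density $\rho$ in the weak formulation below, yields the coefficient $\frac{\alpha\rho\nu}{\sqrt{k_f}}$ appearing in the interface form $a_\Gamma(\cdot,\cdot)$.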
\subsection{Galerkin variational formulation}
Throughout this paper we use the following standard function spaces. For a Lipschitz domain $\mathcal D\subset\mathbb{R}^N$, $N\geqslant 1$, we denote by $W^{k,p}(\mathcal D)$ the Sobolev space with indices $k\geqslant 0$, $1\leqslant p\leqslant\infty$ of real-valued functions defined on $\mathcal D$, endowed with the seminorm $|\cdot|_{W^{k,p}(\mathcal D)}$ denoted by $|\cdot|_{k,p,\mathcal D}$ and norm $\|\cdot\|_{W^{k,p}(\mathcal D)}$ denoted by $\|\cdot\|_{k,p,\mathcal D}$ \cite{Adams2003}. When $p=2$, $W^{k,2}(\mathcal D)$ is denoted as $H^k(\mathcal D)$ and the corresponding seminorm and norm are written as $|\cdot|_{k,\mathcal D}$ and $\|\cdot\|_{k,\mathcal D}$, respectively. In addition, with $|\mathcal D|$ we denote the $N$-dimensional Hausdorff measure of $\mathcal D$.
To perform the variational formulation, we define some necessary Hilbert spaces given by
\begin{align*}
\bm{X}_s&:=\left\{
\bm{v}\in\bm{H}^1(\Omega_s):\bm{v}=\bm{0}~\text{on}~\Gamma_s
\right\},\\
X_d&:=\left\{
\psi\in H^1(\Omega_d):\psi=0~\text{on}~\Gamma_d
\right\},\\
Q_s&:=L^2(\Omega_s).
\end{align*}
We also need the trace space $\bm{H}_{00}^{1/2}(\Gamma):=\bm{X}_s|_\Gamma$ (resp. $H_{00}^{1/2}(\Gamma):=X_d|_\Gamma$), which is a nonclosed subspace of $\bm{H}^{1/2}(\Gamma)$ (resp. $H^{1/2}(\Gamma)$) and has a continuous zero extension to $\bm{H}^{1/2}(\partial\Omega_s)$ (resp. $H^{1/2}(\partial\Omega_d)$) \cite{Cao20104239, Cao20101, Discacciati2004phd}. For the trace space $\bm{H}_{00}^{1/2}(\Gamma)$ and its dual space $(\bm{H}_{00}^{1/2}(\Gamma))'$, we have the following continuous imbedding result \cite{Cao20101}:
\begin{equation}\label{interface:space:imbedding}
\bm{H}_{00}^{1/2}(\Gamma)
\subsetneqq \bm{H}^{1/2}(\Gamma)
\subsetneqq \bm{L}^2(\Gamma)
\subsetneqq \bm{H}^{-1/2}(\Gamma)
\subsetneqq (\bm{H}_{00}^{1/2}(\Gamma))'.
\end{equation}
One can see more details in \cite{Cao20101, Li201692, Shan2013813} and the references therein. For any bounded domain $\mathcal D\subset\mathbb{R}^N$, $(\cdot,\cdot)_{\mathcal D}$ denotes the $L^2$ inner product on $\mathcal D$, and $\langle\cdot,\cdot\rangle_{\partial\mathcal D}$ denotes the $L^2$ inner product (or duality pairing) on the boundary $\partial\mathcal D$. We also consider the following product Hilbert space
$\underline{\bm{Y}}:=\bm{X}_s\times X_d\times X_d$ with norm
\[
\|\underline{\bm{w}}\|_{\underline{\bm{Y}}}=\Big(\|\bm{w}_s\|_{1,\Omega_s}^2+\|\psi_m\|_{1,\Omega_d}^2
+\|\psi_f\|_{1,\Omega_d}^2\Big)^{1/2},\quad\forall\,\underline{\bm{w}}=(\bm{w}_s,\psi_m,\psi_f)\in\underline{\bm{Y}}.
\]
In addition, based on the following formula:
\[
\left((\bm{u}\cdot\nabla)\bm{v},\bm{w}\right)_{\mathcal D}
=\left\langle\bm{u}\cdot\bm{n},\bm{v}\cdot\bm{w}\right\rangle_{\partial\mathcal D}
-\left((\bm{u}\cdot\nabla)\bm{w},\bm{v}\right)_{\mathcal D}-\left((\nabla\cdot\bm{u})\bm{v},\bm{w}\right)_{\mathcal D},\quad\forall\,\bm{u},\bm{v},\bm{w}\in\bm{H}^1(\mathcal D),
\]
we introduce the trilinear form $b(\cdot;\cdot,\cdot)$ given by $\forall\,\bm{u}_s,\bm{v}_s,\bm{w}_s\in\bm{X}_s$,
\begin{equation}\label{trilinear:form}
\begin{split}
b(\bm{u}_s;\bm{v}_s,\bm{w}_s)
&=\left((\bm{u}_s\cdot\nabla)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}
+\frac{1}{2}\left((\nabla\cdot\bm{u}_s)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}\\
&=\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,\bm{v}_s\cdot\bm{w}_s\right\rangle_\Gamma
+\frac{1}{2}\left((\bm{u}_s\cdot\nabla)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}
-\frac{1}{2}\left((\bm{u}_s\cdot\nabla)\bm{w}_s,\bm{v}_s\right)_{\Omega_s}.
\end{split}
\end{equation}
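As an immediate consequence of \eqref{trilinear:form}, taking $\bm{w}_s=\bm{v}_s$ makes the last two terms cancel, so that
\[
b(\bm{u}_s;\bm{v}_s,\bm{v}_s)=\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,|\bm{v}_s|^2\right\rangle_\Gamma,
\quad\forall\,\bm{u}_s,\bm{v}_s\in\bm{X}_s,
\]
which is the form in which the convection term enters the a priori estimate of Section \ref{sec:priori:estimate}.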
Hence, the Galerkin variational formulation of the coupled problem \eqref{stokes}--\eqref{interface:condition:4} reads: find $(\underline{\bm{u}},p_s)\in\underline{\bm{Y}}\times Q_s$ such that
\begin{equation}\label{continuous:weak:formulation}
a(\underline{\bm{u}},\underline{\bm{v}})
+d(\bm{v}_s,p_s)
-d(\bm{u}_s,q_s)
+b(\bm{u}_s;\bm{u}_s,\bm{v}_s)
=\rho(\bm{f}_s,\bm{v}_s)_{\Omega_s}
+(f_d,\psi_f)_{\Omega_d},\quad
\forall\,(\underline{\bm{v}},q_s)\in\underline{\bm{Y}}\times Q_s,
\end{equation}
where $\underline{\bm{u}}=(\bm{u}_s,\phi_m,\phi_f)$, $\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)$, the bilinear forms $a(\cdot,\cdot)$ and $d(\cdot,\cdot)$ are defined as
\begin{align*}
a_s(\underline{\bm{u}},\underline{\bm{v}})&=
2\rho\nu\left(\mathbb{D}(\bm{u}_s),\mathbb{D}(\bm{v}_s)\right)_{\Omega_s},\\
a_d(\underline{\bm{u}},\underline{\bm{v}})&=
\frac{k_m}{\mu}\left(\nabla\phi_m,\nabla\psi_m\right)_{\Omega_d}+
\frac{k_f}{\mu}\left(\nabla\phi_f,\nabla\psi_f\right)_{\Omega_d}+
\frac{\sigma k_m}{\mu}\left(\phi_m-\phi_f,\psi_m\right)_{\Omega_d}+
\frac{\sigma k_m}{\mu}\left(\phi_f-\phi_m,\psi_f\right)_{\Omega_d},\\
a_\Gamma(\underline{\bm{u}},\underline{\bm{v}})&=
\left\langle\phi_f,\bm{v}_s\cdot\bm{n}_s\right\rangle_\Gamma
-\left\langle\psi_f,\bm{u}_s\cdot\bm{n}_s\right\rangle_\Gamma
+\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}
\left\langle\bm{u}_s\cdot\bm{\tau}_i,\bm{v}_s\cdot\bm{\tau}_i\right\rangle_\Gamma,\\
a(\underline{\bm{u}},\underline{\bm{v}})&=a_s(\underline{\bm{u}},\underline{\bm{v}})+
a_d(\underline{\bm{u}},\underline{\bm{v}})+a_\Gamma(\underline{\bm{u}},\underline{\bm{v}}),\\
d(\bm{v}_s,q_s)&=-\rho(\nabla\cdot\bm{v}_s,q_s)_{\Omega_s}.
\end{align*}
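For later use, we record that on the diagonal the two interface coupling terms in $a_\Gamma(\cdot,\cdot)$ cancel and the mass-exchange terms combine into a square:
\begin{align*}
a_\Gamma(\underline{\bm{v}},\underline{\bm{v}})&=\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}
\left\|\bm{v}_s\cdot\bm{\tau}_i\right\|_{0,\Gamma}^2\geqslant 0,\\
\frac{\sigma k_m}{\mu}\left(\psi_m-\psi_f,\psi_m\right)_{\Omega_d}+
\frac{\sigma k_m}{\mu}\left(\psi_f-\psi_m,\psi_f\right)_{\Omega_d}
&=\frac{\sigma k_m}{\mu}\left\|\psi_m-\psi_f\right\|_{0,\Omega_d}^2\geqslant 0,
\end{align*}
for all $\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)\in\underline{\bm{Y}}$. These are precisely the non-negative terms discarded in the proof of Theorem \ref{theorem:priori:estimate} below.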
\begin{remark}
We note that the term $\frac{1}{2}\left((\nabla\cdot\bm{u}_s)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}$ vanishes in \eqref{trilinear:form} if $\nabla\cdot\bm{u}_s=0$.
\end{remark}
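For readers who wish to experiment numerically, the skew-symmetrized trilinear form \eqref{trilinear:form} can be expressed, for instance, in legacy FEniCS/UFL notation as in the minimal sketch below; the unit-square mesh, the polynomial degree and the (zero) convecting velocity are placeholders for illustration only and are not part of the model problem.
\begin{verbatim}
from dolfin import (UnitSquareMesh, VectorFunctionSpace, Function,
                    TrialFunction, TestFunction, dot, nabla_grad,
                    div, dx, assemble)

mesh = UnitSquareMesh(16, 16)             # placeholder mesh for Omega_s
X_s = VectorFunctionSpace(mesh, "P", 2)   # H^1-conforming velocity space

u_s = Function(X_s)                       # given convecting velocity
v_s = TrialFunction(X_s)
w_s = TestFunction(X_s)

# b(u_s; v_s, w_s) = ((u_s . grad) v_s, w_s) + 1/2 ((div u_s) v_s, w_s)
b_form = (dot(dot(u_s, nabla_grad(v_s)), w_s)
          + 0.5 * div(u_s) * dot(v_s, w_s)) * dx

B = assemble(b_form)                      # matrix of the linearized form
\end{verbatim}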
\begin{remark}\label{remark:dirichlet:boundary}
For the sake of clarity, in our analysis we adopt homogeneous boundary conditions. In addition, for the general Dirichlet boundary conditions:
\begin{equation*}
\bm{u}|_{\Gamma_s}=\bm{u}^{\text{dir}},\quad
\phi_m|_{\Gamma_d}=\phi_m^{\text{dir}},\quad
\phi_f|_{\Gamma_d}=\phi_f^{\text{dir}},
\end{equation*}
the standard homogenization technique and the lifting operators employed in \cite{Discacciati2004phd} can be used to obtain an equivalent system with the homogeneous Dirichlet boundary conditions.
\end{remark}
Thanks to \cite{Chidyagwai20093806, Girault20092052}, we have the following important result:
\begin{lemma}\label{lemma:solution:equvi}
Assume that $\bm{f}_s\in\bm{L}^2(\Omega_s)$ and $f_d\in L^2(\Omega_d)$. Then if $(\bm{u}_s,\phi_f,\phi_m,p_s)\in \bm{X}_s\times X_d\times X_d\times Q_s$ satisfies \eqref{stokes}--\eqref{interface:condition:4}, it is also a solution to problem \eqref{continuous:weak:formulation}. Conversely, any solution of problem \eqref{continuous:weak:formulation} satisfies \eqref{stokes}--\eqref{interface:condition:4}.
\end{lemma}
\section{An a priori estimate}
\label{sec:priori:estimate}
In this section, we shall propose an a priori estimate for possible solutions of \eqref{continuous:weak:formulation}.
\subsection{Some technical inequalities}
Throughout this paper we use $C$ to denote a generic positive constant, independent of the discretization parameters, which may take different values on different occasions. Then, based on general Sobolev inequalities, the trace theorem and the Sobolev embedding theorem \cite{Adams2003}, we have that for any bounded open set $\mathcal D\subset\mathbb{R}^N$ with Lipschitz continuous boundary $\partial\mathcal D$ and for all $v\in H^1(\mathcal D)$,
\begin{align}
\label{lq:domain}
\|v\|_{0,q,\mathcal D}&\leqslant C\|v\|_{1,\mathcal D},\quad 1\leqslant q\leqslant 6,\\
\label{l2:partial:domain}
\|v\|_{0,\partial\mathcal D}&\leqslant C\|v\|_{0,\mathcal D}^{1/2}\|v\|_{1,\mathcal D}^{1/2}\leqslant C\|v\|_{1,\mathcal D},\\
\label{l4:partial:domain}
\|v\|_{0,4,\partial\mathcal D}&\leqslant C\|v\|_{1,\mathcal D},
\end{align}
which can be also found in \cite{Chidyagwai20093806, Girault20092052}. We also need Poincar\'{e} inequality and Korn's inequality \cite{Cao20101} that for all $\bm{v}_s\in\bm{X}_s$ and for all $\psi\in X_d$,
\begin{align}
\label{v:Poincare}
\|\bm{v}_s\|_{0,\Omega_s}&\leqslant C|\bm{v}_s|_{1,\Omega_s},\\
\label{psi:Poincare}
\|\psi\|_{0,\Omega_d}&\leqslant C|\psi|_{1,\Omega_d},\\
\label{v:Korn}
|\bm{v}_s|_{1,\Omega_s}&\leqslant C\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}.
\end{align}
Moreover, for the trilinear form $b(\cdot;\cdot,\cdot)$, the following lemma holds with the help of \cite{Aubin1982, Bergh1976}, \eqref{lq:domain}, \eqref{v:Poincare} and \eqref{v:Korn}.
\begin{lemma}\label{lemma:trilinear:inequality}
For any functions $\bm{u}_s$, $\bm{v}_s$, $\bm{w}_s\in\bm{X}_s$, we have
\begin{align}
\label{trilinear:embedding}
\left|\left((\bm{u}_s\cdot\nabla)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}\right|&\leqslant
C\|\bm{u}_s\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}^{1/2}
\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}\|\mathbb{D}(\bm{w}_s)\|_{0,\Omega_s},\\
\label{trilinear:aux:embedding}
\left|\left((\nabla\cdot\bm{u}_s)\bm{v}_s,\bm{w}_s\right)_{\Omega_s}\right|&\leqslant
C\|\bm{v}_s\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}^{1/2}
\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}\|\mathbb{D}(\bm{w}_s)\|_{0,\Omega_s}.
\end{align}
\end{lemma}
\begin{pf}
First, let us recall the standard interpolation inequality \cite{Aubin1982, Bergh1976}: for any bounded open set $\mathcal D\subset\mathbb{R}^N$ with Lipschitz continuous boundary $\partial\mathcal D$, $1\leqslant p<q<\infty$ and $\theta=N(1/p-1/q)\leqslant 1$,
\begin{equation}\label{interpolation:inequality}
\|v\|_{0,q,\mathcal D}\leqslant C\|v\|_{0,p,\mathcal D}^{1-\theta}\|v\|_{1,p,\mathcal D}^\theta,
\quad \forall\,v\in W^{1,p}(\mathcal D).
\end{equation}
In \eqref{interpolation:inequality}, we set $\mathcal D=\Omega_s$, and
\begin{itemize}
\item if $N=2$, $p=2$, $q=4$, we then have
\begin{equation}\label{2d:interpolation:inequality}
\|\bm{v}_s\|_{0,4,\Omega_s}\leqslant C\|\bm{v}_s\|_{0,\Omega_s}^{1/2}\|\bm{v}_s\|_{1,\Omega_s}^{1/2},
\quad \forall\,\bm{v}_s\in \bm{X}_s;
\end{equation}
\item if $N=3$, $p=2$, $q=3$, we then have
\begin{equation}\label{3d:interpolation:inequality}
\|\bm{v}_s\|_{0,3,\Omega_s}\leqslant C\|\bm{v}_s\|_{0,\Omega_s}^{1/2}\|\bm{v}_s\|_{1,\Omega_s}^{1/2},
\quad \forall\,\bm{v}_s\in \bm{X}_s.
\end{equation}
\end{itemize}
Hence, the results \eqref{trilinear:embedding} and \eqref{trilinear:aux:embedding} then follow from H\"{o}lder inequality, \eqref{lq:domain}, \eqref{v:Poincare}, \eqref{v:Korn}, \eqref{2d:interpolation:inequality} and \eqref{3d:interpolation:inequality} when $N=2$ or $3$.
\end{pf}
\subsection{The equivalent problem}
Let us introduce the following divergence-free space:
\[
\bm{V}_s:=\left\{\bm{v}_s\in\bm{X}_s:\,\nabla\cdot\bm{v}_s = 0\right\},
\]
and we consider the new product Hilbert space:
\[
\underline{\bm{W}}:=\bm{V}_s\times X_d\times X_d,
\]
which is equipped with the same norm as $\underline{\bm{Y}}$.
Thanks to \cite{Girault20092052}, there exists a constant $\beta_0>0$, depending only on $\Omega$, such that the following inf-sup condition holds:
\begin{equation}\label{con:infsup}
\inf_{q_s\in Q_s}\sup_{\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)\in\underline{\bm{Y}}}
\frac{d(\bm{v}_s,q_s)}{\|q_s\|_{0,\Omega_s}\|\underline{\bm{v}}\|_{\underline{\bm{Y}}}}\geqslant\beta_0.
\end{equation}
Hence, by \eqref{con:infsup} and the same argument in \cite{Girault1986}, the Galerkin variational formulation \eqref{continuous:weak:formulation} is equivalent to the following problem: to find $\underline{\bm{u}}\in\underline{\bm{W}}$ such that
\begin{equation}\label{continuous:weak:formulation:2}
a(\underline{\bm{u}},\underline{\bm{v}})
+b(\bm{u}_s;\bm{u}_s,\bm{v}_s)
=\rho(\bm{f}_s,\bm{v}_s)_{\Omega_s}
+(f_d,\psi_f)_{\Omega_d},\quad
\forall\,\underline{\bm{v}}\in\underline{\bm{W}}.
\end{equation}
\subsection{Discretization and an auxiliary problem}
Let $\mathcal R_{i,h}$ be a quasi-uniform regular triangulation of the domain $\Omega_i$, $i=s,d$. Assuming that the two meshes $\mathcal R_{s,h}$ and $\mathcal R_{d,h}$ coincide along the interface $\Gamma$, we can define $\mathcal R_h:=\mathcal R_{s,h}\cup\Gamma\cup\mathcal R_{d,h}$, which is also a quasi-uniform regular triangulation of $\Omega$. The diameter of an element $K\in\mathcal R_h$ is denoted by $h_K$, and we set the mesh parameter $h=\max_{K\in\mathcal R_h}h_K$.
Then we denote by $\bm{X}_h\subset \bm{H}_0^1(\Omega)$ a finite element space defined on $\Omega$. The discrete space $\bm{X}_h$ can be naturally restricted to $\Omega_s$, so we set $\bm{X}_{s,h}=\bm{X}_h|_{\Omega_s}\subset\bm{X}_s$. Following the same technique, we establish other finite element spaces $Q_{s,h}\subset Q_s$ and $X_{d,h}\subset X_d$. We assume that $(\bm{X}_{s,h},Q_{s,h})$ is a stable finite element space pair. Further we define the following vector-valued Hilbert space on $\Omega_d$:
\[
\bm{X}_d:=\left\{\bm{v}\in\bm{H}^1(\Omega_d):\,\bm{v}=\bm{0}~\mathrm{on}~\Gamma_d\right\}.
\]
In addition, we need a finite element subspace $\bm{V}_h\subset\bm{X}_h$ defined on $\Omega$ given by
\[
\bm{V}_h:=\left\{\bm{v}_h\in\bm{X}_h:\,d(\bm{v}_h,q_{s,h})=0,\,\forall\,q_{s,h}\in Q_{s,h}\right\}.
\]
Similarly, we have
\[
\bm{V}_{s,h}=\bm{V}_h|_{\Omega_s}\subset\bm{X}_s,\quad \bm{X}_{d,h}=\bm{V}_h|_{\Omega_d}\subset\bm{X}_d,
\]
where $\bm{V}_{s,h}$ is a weakly divergence-free finite element space on $\Omega_s$, and any function $\bm{v}_{d,h}\in\bm{X}_{d,h}$ has an implicit restriction that $\int_\Gamma\bm{v}_{d,h}\cdot\bm{n}_d\mathrm{d}s = 0$ because of the continuity of $\bm{V}_h$ across the interface $\Gamma$.
According to \cite{Girault20092052}, the difficulty of obtaining an a priori estimate of the Navier-Stokes/Darcy fluid flow model comes from the energy unbalance due to the nonlinear convection term $(\bm{u}_s\cdot\nabla)\bm{u}_s$ in \eqref{stokes}. It motivates us to construct an auxiliary discrete Galerkin variational problem defined on $\Omega_d$ with compatible boundary conditions, so that the auxiliary problem can compensate the nonlinear convection term in the energy balance of the Navier-Stokes equations.
To fix ideas, we first define a lifting operator $\mathcal L:\,\bm{H}_{00}^{1/2}(\Gamma)\rightarrow\bm{X}_d$ such that, for any $\bm{\eta}\in\bm{H}_{00}^{1/2}(\Gamma)$ with $\int_\Gamma\bm{\eta}\,\mathrm{d}s=0$,
\[
\mathcal L\bm{\eta}\in\bm{X}_d,\quad (\mathcal L\bm{\eta})|_\Gamma=\bm{\eta},\quad \nabla\cdot(\mathcal L\bm{\eta})|_{\Omega_d}=0.
\]
Then, we introduce the Scott-Zhang interpolator $\Pi_{s,h}:\,\bm{X}_s\rightarrow\bm{X}_{s,h}$ satisfying the following properties \cite{Scott1990483}:
\begin{align}
\label{SZ:error}
\|\bm{v}_s-\Pi_{s,h}\bm{v}_s\|_{0,\Omega_s}&\leqslant Ch\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s},\quad\forall\,\bm{v}_s\in\bm{X}_s,\\
\label{SZ:h1:norm}
\|\Pi_{s,h}\bm{v}_s\|_{1,\Omega_s}&\leqslant
C\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s},\quad\ \;\forall\,\bm{v}_s\in\bm{X}_s.
\end{align}
Now, with the mesh parameter $h$, the interpolator $\Pi_{s,h}$ and any given $\bm{u}_s\in\bm{V}_s$, we consider the following auxiliary discrete Galerkin variational problem: to find $\bm{u}_{d,h}\in\bm{X}_{d,h}$ with $\bm{u}_{d,h}|_\Gamma=(\Pi_{s,h}\bm{u}_s)|_\Gamma$ such that for all $\bm{v}_{d,h}\in\bm{X}_{d,h}$,
\begin{equation}\label{auxiliary:pde}
2\kappa\left(\mathbb{D}(\bm{u}_{d,h}),\mathbb{D}(\bm{v}_{d,h})\right)_{\Omega_d}
+\left((\bm{u}_d^0\cdot\nabla)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}-
\kappa\left\langle\frac{\partial\bm{u}_{d,h}}{\partial\bm{n}_d},\bm{v}_{d,h}\right\rangle_\Gamma=0,
\end{equation}
where $\bm{u}_d^0=\mathcal L(\bm{u}_s|_\Gamma)\in\bm{X}_d$, and $\kappa>0$ is a positive constant to be specified later. Furthermore, the discrete problem \eqref{auxiliary:pde} is uniquely solvable in $\bm{X}_{d,h}$ by Lemma 3.1 in \cite{Hou201947}.
\begin{remark}
The discrete problem \eqref{auxiliary:pde} is the conforming Galerkin approximation of the following convection-diffusion equation defined on $\Omega_d$: for any given $\bm{u}_s\in\bm{V}_s$, to find $\bm{u}_d\in\bm{X}_d$ with $\bm{u}_d|_\Gamma=\bm{u}_s|_\Gamma$ such that
\begin{equation}\label{auxiliary:pde:con}
\begin{cases}
-2\kappa\nabla\cdot\mathbb{D}(\bm{u}_d)+(\bm{u}_d^0\cdot\nabla)\bm{u}_d=\bm{0}
& \mathrm{in}~\Omega_d,\\
\bm{u}_d=\bm{0} & \mathrm{on}~\Gamma_d.
\end{cases}
\end{equation}
Clearly, for any constant $\kappa>0$ and $\bm{u}_s\in\bm{V}_s$, the solution of \eqref{auxiliary:pde:con} is well-posed \cite{Roos2008}.
\end{remark}
\subsection{An a priori estimate of weak solutions}\label{subsec:a:priori}
We observe that \eqref{continuous:weak:formulation:2} and \eqref{auxiliary:pde} form a larger coupled system, through the convection coefficient $\bm{u}_d^0$ and the interface constraint $\bm{u}_{d,h}|_\Gamma=(\Pi_{s,h}\bm{u}_s)|_\Gamma$ on the unknown $\bm{u}_{d,h}$; we also stress that this coupled system has no energy exchange across the interface $\Gamma$. In other words, \eqref{auxiliary:pde} is subordinate to \eqref{continuous:weak:formulation:2}, while any possible solution of \eqref{continuous:weak:formulation:2} is unaffected by the auxiliary problem \eqref{auxiliary:pde}.
\begin{theorem}\label{theorem:priori:estimate}
Assume that in the auxiliary discrete problem \eqref{auxiliary:pde} the mesh size $h$ is small enough and $0<\kappa\leqslant C\rho\nu h$. If problem \eqref{continuous:weak:formulation:2} admits a solution $(\bm{u}_s,\phi_m,\phi_f)\in\underline{\bm{W}}$, then we have the following a priori estimate:
\begin{equation}\label{a:priori:estimate}
\rho\nu\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\phi_m\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_f\|_{0,\Omega_d}^2\leqslant\mathcal C^2,
\end{equation}
where
\[
\mathcal C^2=\rho\nu^{-1}\|\bm{f}_s\|_{-1,\Omega_s}^2
+\mu \sigma^{-1} k_f^{-1}\|f_d\|_{-1,\Omega_d}^2.
\]
Here we stress that no assumptions are imposed on the data or the physical parameters of the model problem.
\end{theorem}
\begin{pf}
We denote by $\underline{\bm{u}}=(\bm{u}_s,\phi_m,\phi_f)\in\underline{\bm{W}}$ a possible solution of problem \eqref{continuous:weak:formulation:2}, and denote by $\bm{u}_{d,h}\in\bm{X}_{d,h}$ the solution of problem \eqref{auxiliary:pde}. Then, we assume that there is a positive finite constant $M_s<\infty$ such that $\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}\leqslant M_s$. Taking $\underline{\bm{v}}=\underline{\bm{u}}$ in \eqref{continuous:weak:formulation:2}, and noting that the terms $a_\Gamma(\underline{\bm{u}},\underline{\bm{u}})$ and $\frac{\sigma k_m}{\mu}\left(\phi_m-\phi_f,\phi_m\right)_{\Omega_d}+\frac{\sigma k_m}{\mu}\left(\phi_f-\phi_m,\phi_f\right)_{\Omega_d}$ are non-negative, we have
\begin{equation}\label{priori:estimate:temp1}
2\rho\nu\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}^2+\frac{\sigma k_m}{\mu}\|\nabla\phi_m\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_f\|_{0,\Omega_d}^2+\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,|\bm{u}_s|^2\right\rangle_\Gamma\leqslant \rho(\bm{f}_s,\bm{u}_s)_{\Omega_s}+(f_d,\phi_f)_{\Omega_d}.
\end{equation}
In addition, we note that $\nabla\cdot\bm{u}_d^0=0$ in $\Omega_d$, and that $\bm{u}_d^0|_\Gamma=\bm{u}_s|_\Gamma$ by the definition of $\mathcal L$. Hence, the identity \eqref{trilinear:form} can also be applied to the second term on the left-hand side of \eqref{auxiliary:pde}, that is, for all $\bm{v}_{d,h}\in\bm{X}_{d,h}$,
\begin{equation}\label{priori:estimate:temp2}
\begin{split}
&\left((\bm{u}_d^0\cdot\nabla)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}
=\left((\bm{u}_d^0\cdot\nabla)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}
+\frac{1}{2}\left((\nabla\cdot\bm{u}_d^0)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}\\
&=-\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,(\Pi_{s,h}\bm{u}_s)\cdot\bm{v}_{d,h}\right\rangle_\Gamma
+\frac{1}{2}\left((\bm{u}_d^0\cdot\nabla)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}
-\frac{1}{2}\left((\bm{u}_d^0\cdot\nabla)\bm{v}_{d,h},\bm{u}_{d,h}\right)_{\Omega_d}.
\end{split}
\end{equation}
So, taking $\bm{v}_{d,h}=\bm{u}_{d,h}$ in \eqref{auxiliary:pde} and using \eqref{priori:estimate:temp2}, we obtain
\begin{equation}\label{priori:estimate:temp3}
2\kappa\|\mathbb{D}(\bm{u}_{d,h})\|_{0,\Omega_d}^2
-\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,|\Pi_{s,h}\bm{u}_s|^2\right\rangle_\Gamma
=\kappa\left\langle\frac{\partial\bm{u}_{d,h}}{\partial\bm{n}_d},\Pi_{s,h}\bm{u}_s\right\rangle_\Gamma.
\end{equation}
It follows from \eqref{l2:partial:domain}, \eqref{l4:partial:domain}, \eqref{v:Poincare}, \eqref{v:Korn}, \eqref{SZ:error}, \eqref{SZ:h1:norm}, H\"{o}lder inequality and the triangle inequality that
\begin{equation}\label{priori:estimate:temp4}
\begin{split}
&\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,|\bm{u}_s|^2\right\rangle_\Gamma
-\frac{1}{2}\left\langle\bm{u}_s\cdot\bm{n}_s,|\Pi_{s,h}\bm{u}_s|^2\right\rangle_\Gamma\\
\leqslant&
\frac{1}{2}\left\|\bm{u}_s\right\|_{0,4,\Gamma}\|\bm{u}_s+\Pi_{s,h}\bm{u}_s\|_{0,4,\Gamma}\left\|\bm{u}_s-\Pi_{s,h}\bm{u}_s\right\|_{0,\Gamma}\\
\leqslant& Ch^{1/2}M_s\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}^2.
\end{split}
\end{equation}
We now define the dual norms on $\bm{X}_s$ and $X_d$, denoted by $\|\cdot\|_{-1,\Omega_s}$ and $\|\cdot\|_{-1,\Omega_d}$, respectively:
\[
\|\bm{f}_s\|_{-1,\Omega_s}:=\sup_{\bm{0}\neq\bm{v}_s\in\bm{X}_s}\frac{(\bm{f}_s,\bm{v}_s)_{\Omega_s}}{\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}},
\quad \|f_d\|_{-1,\Omega_d}:=\sup_{0\neq\psi\in X_d}\frac{(f_d,\psi)_{\Omega_d}}{\|\nabla\psi\|_{0,\Omega_d}}.
\]
Hence, by using H\"{o}lder inequality and Young's inequality, we have
\begin{equation}\label{priori:estimate:temp5}
\rho(\bm{f}_s,\bm{u}_s)_{\Omega_s}+(f_d,\phi_f)_{\Omega_d}
\leqslant \frac{\rho\nu}{2}\|\mathbb{D}(\bm{u}_s)\|_{0,\Omega_s}^2
+\frac{\sigma k_f}{2\mu}\|\nabla\phi_f\|_{0,\Omega_d}^2+\frac{\rho}{2\nu}\|\bm{f}_s\|_{-1,\Omega_s}^2
+\frac{\mu}{2\sigma k_f}\|f_d\|_{-1,\Omega_d}^2.
\end{equation}
In addition, based on the imbedding result \eqref{interface:space:imbedding}, the standard trace theorem \cite{Adams2003}, H\"{o}lder inequality, Korn's inequality, Young's inequality, \eqref{SZ:h1:norm} and the following inverse inequality \cite{Riviere2008}: for any polynomial $\bm{v}$ on $K$,
\[
\|(\nabla\bm{v})\bm{n}\|_{0,e}\leqslant Ch^{-1/2}|\bm{v}|_{1,K},\quad\forall\,e\subset\partial K,\ \forall\,K\in\mathcal R_h,
\]
we have
\begin{equation}\label{priori:estimate:temp6}
\begin{split}
&\kappa\left\langle\frac{\partial\bm{u}_{d,h}}{\partial\bm{n}_d},\Pi_{s,h}\bm{u}_s\right\rangle_\Gamma
\leqslant \kappa\left\|(\nabla\bm{u}_{d,h})\bm{n}_d\right\|_{\left(\bm{H}_{00}^{1/2}(\Gamma)\right)'}
\left\|\Pi_{s,h}\bm{u}_s\right\|_{\bm{H}_{00}^{1/2}(\Gamma)}\\
&\quad\leqslant C\kappa
\left\|(\nabla\bm{u}_{d,h})\bm{n}_d\right\|_{0,\Gamma}\left\|\Pi_{s,h}\bm{u}_s\right\|_{1,\Omega_s}
\leqslant C\kappa
\left(\sum_{e\in\Gamma}\|(\nabla\bm{u}_{d,h})\bm{n}\|_{0,e}^2\right)^{1/2}
\left\|\mathbb{D}(\bm{u}_s)\right\|_{0,\Omega_s}\\
&\quad\leqslant C\kappa h^{-1/2}\left\|\mathbb{D}(\bm{u}_{d,h})\right\|_{0,\Omega_d}
\left\|\mathbb{D}(\bm{u}_s)\right\|_{0,\Omega_s}\leqslant
\frac{C\kappa^2}{\rho\nu h}\left\|\mathbb{D}(\bm{u}_{d,h})\right\|_{0,\Omega_d}^2
+\frac{\rho\nu}{2}\left\|\mathbb{D}(\bm{u}_s)\right\|_{0,\Omega_s}^2.
\end{split}
\end{equation}
Finally, with the assumptions that $h$ is small enough such that $Ch^{1/2}M_s<\rho\nu/2$, and $\kappa$ satisfies $0<\kappa\leqslant C\rho\nu h$, gathering \eqref{priori:estimate:temp1}, \eqref{priori:estimate:temp3}, \eqref{priori:estimate:temp4}, \eqref{priori:estimate:temp5} and \eqref{priori:estimate:temp6} yields \eqref{a:priori:estimate}, where
\[
\mathcal C^2=\rho\nu^{-1}\|\bm{f}_s\|_{-1,\Omega_s}^2
+\mu \sigma^{-1} k_f^{-1}\|f_d\|_{-1,\Omega_d}^2.
\]
\end{pf}
\section{Existence and global uniqueness of the solution}
In this section, we shall use the technique of the Galerkin method to verify that problem \eqref{continuous:weak:formulation:2} has at least one weak solution, and then we can prove the global uniqueness of the weak solution due to the a priori estimate \eqref{a:priori:estimate} obtained in Section \ref{sec:priori:estimate}.
\subsection{The solvability of the conforming Galerkin approximation problem}
We denote by $\underline{\bm{W}}_h$ the product finite element space $\bm{V}_{s,h}\times X_{d,h}\times X_{d,h}$. Then, we consider the following conforming Galerkin approximation problem of \eqref{continuous:weak:formulation:2}: to find $\underline{\bm{u}}_h=(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})\in\underline{\bm{W}}_h$ such that $\forall\,\underline{\bm{v}}_h=(\bm{v}_{s,h},\psi_{m,h},\psi_{f,h})\in\underline{\bm{W}}_h$,
\begin{equation}\label{discrete:weak:formulation}
a(\underline{\bm{u}}_h,\underline{\bm{v}}_h)
+b(\bm{u}_{s,h};\bm{u}_{s,h},\bm{v}_{s,h})
=\rho(\bm{f}_s,\bm{v}_{s,h})_{\Omega_s}
+(f_d,\psi_{f,h})_{\Omega_d}.
\end{equation}
To show the solvability of \eqref{discrete:weak:formulation}, motivated by Theorem \ref{theorem:priori:estimate}, we shall consider the following constructed coupled discrete system: to find $(\underline{\bm{u}}_h,\bm{u}_{d,h})\in\underline{\bm{W}}_h\times \bm{X}_{d,h}$ such that
\begin{align}
\label{discrete:au:1}
&a(\underline{\bm{u}}_h,\underline{\bm{v}}_h)
+b(\bm{u}_{s,h};\bm{u}_{s,h},\bm{v}_{s,h})
=\rho(\bm{f}_s,\bm{v}_{s,h})_{\Omega_s}
+(f_d,\psi_{f,h})_{\Omega_d}
& \forall\,\underline{\bm{v}}_h\in\underline{\bm{W}}_h,\\
\label{discrete:au:2}
&2\xi\left(\mathbb{D}(\bm{u}_{d,h}),\mathbb{D}(\bm{v}_{d,h})\right)_{\Omega_d}
+\left((\bm{\beta}\cdot\nabla)\bm{u}_{d,h},\bm{v}_{d,h}\right)_{\Omega_d}
-\xi\left\langle\frac{\partial\bm{u}_{d,h}}{\partial\bm{n}_d},\bm{v}_{d,h}\right\rangle_\Gamma
=0&\forall\,\bm{v}_{d,h}\in\bm{X}_{d,h},\\
\label{discrete:au:3}
&\bm{u}_{d,h}|_\Gamma=\bm{u}_{s,h}|_\Gamma,&
\end{align}
where $\xi>0$ is a positive constant specified later, $\bm{\beta}:=\mathcal L(\bm{u}_{s,h}|_\Gamma)$, and $\nabla\cdot\bm{\beta}=0$ in $\Omega_d$ by the definition of the lifting operator $\mathcal L$. Furthermore, it follows from the discussion at the beginning of Section \ref{subsec:a:priori} that if $(\underline{\bm{u}}_h,\bm{u}_{d,h})\in\underline{\bm{W}}_h\times \bm{X}_{d,h}$ is a solution of problem \eqref{discrete:au:1}--\eqref{discrete:au:3}, then $\underline{\bm{u}}_h\in\underline{\bm{W}}_h$ will solve \eqref{discrete:weak:formulation}.
\begin{lemma}\label{lemma:discrete:priori:estimate}
If problem \eqref{discrete:weak:formulation} admits a solution $\underline{\bm{u}}_h=(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})\in\underline{\bm{W}}_h$, then the following a priori estimate holds:
\begin{equation}\label{discrete:a:priori:estimate}
\rho\nu\|\mathbb{D}(\bm{u}_{s,h})\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\phi_{m,h}\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_{f,h}\|_{0,\Omega_d}^2\leqslant\mathcal C^2,
\end{equation}
where $\mathcal C^2$ is defined in Theorem \ref{theorem:priori:estimate}.
\end{lemma}
\begin{pf}
The proof is quite close to that of Theorem \ref{theorem:priori:estimate}. To avoid repetition, we only present the differences. Since $\bm{\beta}=\mathcal L(\bm{u}_{s,h}|_\Gamma)$, we have
\[
\left((\bm{\beta}\cdot\nabla)\bm{u}_{d,h},\bm{u}_{d,h}\right)_{\Omega_d}
=-\frac{1}{2}\left\langle\bm{u}_{s,h}\cdot\bm{n}_s,\left|\bm{u}_{s,h}\right|^2\right\rangle_\Gamma,
\]
and thus the term estimated in \eqref{priori:estimate:temp4} does not appear here. As a result, the analogue of \eqref{priori:estimate:temp6} becomes:
\[
\xi\left\langle\frac{\partial\bm{u}_{d,h}}{\partial\bm{n}_d},\bm{u}_{s,h}\right\rangle_\Gamma
\leqslant \frac{C\xi^2}{2\rho\nu h}\|\mathbb{D}(\bm{u}_{d,h})\|_{0,\Omega_d}^2
+\rho\nu\|\mathbb{D}(\bm{u}_{s,h})\|_{0,\Omega_s}^2.
\]
Thus, for any given mesh parameter $h$, the result \eqref{discrete:a:priori:estimate} follows from the assumption that $0<\xi\leqslant C\rho\nu h$.
\end{pf}
Now we start to verify the solvability of problem \eqref{discrete:au:1}--\eqref{discrete:au:3}. For any $\widehat{\bm{v}}_h\in\bm{V}_h$, following a technique similar to that proposed in \cite{Hou201947}, we denote $\bm{v}_h^s=\widehat{\bm{v}}_h|_{\Omega_s}\in\bm{V}_{s,h}$, $\bm{v}_h^d=\widehat{\bm{v}}_h|_{\Omega_d}\in\bm{X}_{d,h}$, and
\[
\widehat{\widetilde{\bm{v}}}_h:=\left\{
\begin{array}{cl}
\bm{v}_h^s&\mathrm{in}~\Omega_s,\\
\bm{v}_d^0&\mathrm{in}~\Omega_d,
\end{array}
\right.
\]
where $\bm{v}_d^0=\mathcal L(\bm{v}_h^s|_\Gamma)\in\bm{X}_d$ with $\nabla\cdot\bm{v}_d^0=0$ in the domain $\Omega_d$. Then, we denote by $\underline{\bm{Z}}_h$ the product space $\bm{V}_h\times X_{d,h}\times X_{d,h}$, and define a mapping $\mathcal F_h:\underline{\bm{Z}}_h\rightarrow\underline{\bm{Z}}_h$ as follows: for each $\widehat{\underline{\bm{v}}}_h=(\widehat{\bm{v}}_h,\psi_{m,h},\psi_{f,h})\in \underline{\bm{Z}}_h$, the element $\mathcal F_h\widehat{\underline{\bm{v}}}_h$ is defined by requiring that, for all $\widehat{\underline{\bm{w}}}_h=(\widehat{\bm{w}}_h,\varphi_{m,h},\varphi_{f,h})\in \underline{\bm{Z}}_h$,
\begin{equation}\label{discrete:mapping}
\begin{split}
&\left(\mathcal F_h\widehat{\underline{\bm{v}}}_h,\widehat{\underline{\bm{w}}}_h\right)_{\underline{\bm{Z}}_h}
:=2\rho\nu\left(\mathbb{D}(\bm{v}_h^s),\mathbb{D}(\bm{w}_h^s)\right)_{\Omega_s}
+\frac{k_m}{\mu}\left(\nabla\psi_{m,h},\nabla\varphi_{m,h}\right)_{\Omega_d}+
\frac{k_f}{\mu}\left(\nabla\psi_{f,h},\nabla\varphi_{f,h}\right)_{\Omega_d}\\
&\quad+\frac{\sigma k_m}{\mu}\left(\psi_{m,h}-\psi_{f,h},\varphi_{m,h}\right)_{\Omega_d}
+\frac{\sigma k_m}{\mu}\left(\psi_{f,h}-\psi_{m,h},\varphi_{f,h}\right)_{\Omega_d}
+2\xi\left(\mathbb{D}(\bm{v}_h^d),\mathbb{D}(\bm{w}_h^d)\right)_{\Omega_d}\\
&\quad
+\left\langle\psi_{f,h},\bm{w}_h^s\cdot\bm{n}_s\right\rangle_\Gamma
-\left\langle\varphi_{f,h},\bm{v}_h^s\cdot\bm{n}_s\right\rangle_\Gamma
+\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}
\left\langle\bm{v}_h^s\cdot\bm{\tau}_i,\bm{w}_h^s\cdot\bm{\tau}_i\right\rangle_\Gamma
-\xi\left\langle\frac{\partial\bm{v}_h^d}{\partial\bm{n}_d},\bm{w}_h^d\right\rangle_\Gamma\\
&\quad+\left((\widehat{\widetilde{\bm{v}}}_h\cdot\nabla)\widehat{\bm{v}}_h,\widehat{\bm{w}}_h\right)_{\Omega}
+\frac{1}{2}\left((\nabla\cdot\widehat{\widetilde{\bm{v}}}_h)\widehat{\bm{v}}_h,\widehat{\bm{w}}_h\right)_{\Omega}
-\rho(\bm{f}_s,\bm{w}_h^s)_{\Omega_s}-(f_d,\varphi_{f,h})_{\Omega_d}.
\end{split}
\end{equation}
Clearly, $\mathcal F_h$ defines a mapping from $\underline{\bm{Z}}_h$ into itself, and a zero of $\mathcal F_h$ is a solution of the coupled system \eqref{discrete:au:1}--\eqref{discrete:au:3}. Further, we recall Brouwer's fixed-point theorem:
\begin{lemma}\label{lemma:Brouwer:fix}\cite{Cioranescu2016}
Let $U$ be a nonempty, convex, and compact subset of a normed vector space and let $F$ be a continuous mapping from $U$ into $U$. Then $F$ has at least one fixed point.
\end{lemma}
Based on Lemma \ref{lemma:discrete:priori:estimate}, we define the subset $\underline{\bm{U}}_h$ of $\underline{\bm{Z}}_h$ as:
\[
\underline{\bm{U}}_h:=\left\{\widehat{\underline{\bm{v}}}_h\in\underline{\bm{Z}}_h:\,
\rho\nu\|\mathbb{D}(\bm{v}_h^s)\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\psi_{m,h}\|_{0,\Omega_d}^2+\frac{\sigma k_f}{\mu}\|\nabla\psi_{f,h}\|_{0,\Omega_d}^2\leqslant\mathcal C^2\right\},
\]
where $\mathcal C^2$ is defined in Theorem \ref{theorem:priori:estimate}. Then, taking $\widehat{\underline{\bm{w}}}_h=\widehat{\underline{\bm{v}}}_h\in\underline{\bm{U}}_h$ in \eqref{discrete:mapping}, and following the steps of proving Theorem \ref{theorem:priori:estimate} and Lemma \ref{lemma:discrete:priori:estimate}, we obtain that
\[
\left(\mathcal F_h\widehat{\underline{\bm{v}}}_h,
\widehat{\underline{\bm{v}}}_h\right)_{\underline{\bm{Z}}_h}\geqslant 0,
\quad\forall\,\widehat{\underline{\bm{v}}}_h\in\underline{\bm{U}}_h.
\]
Hence, it follows from Lemma \ref{lemma:Brouwer:fix} that there is at least one zero of $\mathcal F_h$ in the ball $\underline{\bm{U}}_h$ centered at the origin.
Gathering all the above results, we conclude the following theorem:
\begin{theorem}\label{theorem:discrete:existence}
For any given mesh parameter $h>0$, the conforming Galerkin approximation problem \eqref{discrete:weak:formulation} has at least one solution $(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})\in\bm{V}_{s,h}\times X_{d,h}\times X_{d,h}$ with the following estimate:
\begin{equation}\label{discrete:a:priori:estimate:2}
\rho\nu\|\mathbb{D}(\bm{u}_{s,h})\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\phi_{m,h}\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_{f,h}\|_{0,\Omega_d}^2\leqslant\mathcal C^2,
\end{equation}
where $\mathcal C^2$ is defined in Theorem \ref{theorem:priori:estimate}.
\end{theorem}
\subsection{Existence and global uniqueness}
It stems from Theorem \ref{theorem:discrete:existence} and the conforming property $\bm{V}_{s,h}\times X_{d,h}\times X_{d,h}\subset\bm{V}_s\times X_d\times X_d$ that there exist a triple of functions $(\bm{u}_s,\phi_m,\phi_f)$ in the Hilbert space $\underline{\bm{W}}=\bm{V}_s\times X_d\times X_d$ and a uniformly bounded subsequence $\left\{(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})\right\}_{h>0}$ such that
\begin{equation}\label{limit:h1}
\lim_{h\rightarrow 0}(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})=(\bm{u}_{s},\phi_m,\phi_f)\quad\mathrm{weakly~in}~\underline{\bm{W}}.
\end{equation}
Furthermore, the Sobolev imbedding implies that the above convergence results are strong in $L^q(\Omega)$ for any $1\leqslant q<6$ whenever $N=2$ or $3$. In particular, by extracting another subsequence, still denoted by $h$, we obtain
\begin{equation}\label{limit:l2}
\lim_{h\rightarrow 0}(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})=(\bm{u}_{s},\phi_m,\phi_f)\quad\mathrm{strongly~in}~\bm{L}^2(\Omega_s)\times L^2(\Omega_d)\times L^2(\Omega_d).
\end{equation}
\begin{theorem}\label{theorem:unique:1}
If the data satisfy
\begin{equation}\label{assum:unique}
\mathcal N\left(\rho^{-1}\nu^{-2}\|\bm{f}_s\|_{-1,\Omega_s}+\rho^{-3/2}\nu^{-3/2}\mu^{1/2}\sigma^{-1/2}k_f^{-1/2}\|f_d\|_{-1,\Omega_d}\right)<1,
\end{equation}
the problem \eqref{continuous:weak:formulation:2} then admits a unique solution $(\bm{u}_s,\phi_m,\phi_f)$ in $\bm{V}_s\times X_d\times X_d$ such that
\[
\rho\nu\|\mathbb{D}(\bm{u}_{s})\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\phi_{m}\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_{f}\|_{0,\Omega_d}^2\leqslant\mathcal C^2,
\]
where $\mathcal C^2$ is defined in Theorem \ref{theorem:priori:estimate}.
\end{theorem}
\begin{pf}
Let $\underline{\bm{u}}$ denote $(\bm{u}_{s},\phi_m,\phi_f)$ and let $\{\underline{\bm{u}}_h\}_{h>0}$ denote the subsequence $\{(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})\}_{h>0}$ defined in \eqref{limit:h1} and \eqref{limit:l2}. We can easily obtain
\begin{equation}\label{limit:h:temp1}
\lim_{h\rightarrow 0}\{a_s(\underline{\bm{u}}_h,\underline{\bm{v}})
+a_d(\underline{\bm{u}}_h,\underline{\bm{v}})\}=a_s(\underline{\bm{u}},\underline{\bm{v}})
+a_d(\underline{\bm{u}},\underline{\bm{v}}),\quad\forall\,\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)\in\underline{\bm{W}},
\end{equation}
because of \eqref{limit:h1}. Then, for the trace bilinear term $a_\Gamma(\underline{\bm{u}}_h,\underline{\bm{v}})$, it follows from H\"{o}lder inequality, \eqref{l2:partial:domain}, and \eqref{v:Poincare}--\eqref{v:Korn} that
\begin{align*}
&a_\Gamma(\underline{\bm{u}}_h,\underline{\bm{v}})\\
&=\left\langle\phi_{f,h}-\phi_f,\bm{v}_s\cdot\bm{n}_s\right\rangle_\Gamma
-\left\langle\psi_f,(\bm{u}_{s,h}-\bm{u}_s)\cdot\bm{n}_s\right\rangle_\Gamma
+\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}
\left\langle(\bm{u}_{s,h}-\bm{u}_s)\cdot\bm{\tau}_i,\bm{v}_s\cdot\bm{\tau}_i\right\rangle_\Gamma\\
&\quad+\left\langle\phi_f,\bm{v}_s\cdot\bm{n}_s\right\rangle_\Gamma
-\left\langle\psi_f,\bm{u}_s\cdot\bm{n}_s\right\rangle_\Gamma
+\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}
\left\langle\bm{u}_s\cdot\bm{\tau}_i,\bm{v}_s\cdot\bm{\tau}_i\right\rangle_\Gamma\\
&\leqslant C\|\phi_{f,h}-\phi_f\|_{0,\Omega_d}^{1/2}\|\nabla(\phi_{f,h}-\phi_f)\|_{0,\Omega_d}^{1/2}\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}
+C\|\bm{u}_{s,h}-\bm{u}_s\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{u}_{s,h}-\bm{u}_s)\|_{0,\Omega_s}^{1/2}\|\nabla\psi_f\|_{0,\Omega_d}\\
&\quad+C\|\bm{u}_{s,h}-\bm{u}_s\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{u}_{s,h}-\bm{u}_s)\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}
+a_\Gamma(\underline{\bm{u}},\underline{\bm{v}}).
\end{align*}
Owing to the uniform boundedness of $(\bm{u}_{s,h},\phi_{m,h},\phi_{f,h})$ shown in \eqref{discrete:a:priori:estimate:2} and the strong $L^2$-convergence result \eqref{limit:l2}, we deduce that
\begin{equation}\label{limit:h:temp2}
\lim_{h\rightarrow 0}a_\Gamma(\underline{\bm{u}}_h,\underline{\bm{v}})=a_\Gamma(\underline{\bm{u}},\underline{\bm{v}}),
\quad\forall\,\underline{\bm{v}}\in\underline{\bm{W}}.
\end{equation}
In addition, for the limit of the trilinear form $b(\bm{u}_{s,h};\bm{u}_{s,h},\bm{v}_s)$, gathering \eqref{trilinear:form} and Lemma \ref{lemma:trilinear:inequality} yields
\begin{align*}
&\left|b(\bm{u}_{s,h};\bm{u}_{s,h},\bm{v}_s)-b(\bm{u}_s;\bm{u}_s,\bm{v}_s)\right|\\
&\leqslant
\left|\left(((\bm{u}_{s,h}-\bm{u}_s)\cdot\nabla)\bm{u}_{s,h},\bm{v}_s\right)_{\Omega_s}\right|
+\frac{1}{2}\left|\left(\nabla\cdot\bm{u}_{s,h},(\bm{u}_{s,h}-\bm{u}_s)\cdot\bm{v}_s\right)_{\Omega_s}\right|\\
&\quad
+\left|\left((\bm{u}_s\cdot\nabla)(\bm{u}_{s,h}-\bm{u}_s),\bm{v}_s\right)_{\Omega_s}\right|
+\frac{1}{2}\left|\left(\nabla\cdot(\bm{u}_{s,h}-\bm{u}_s),\bm{u}_s\cdot\bm{v}_s\right)_{\Omega_s}\right|\\
&\leqslant
\underbrace{C\|\bm{u}_{s,h}-\bm{u}_s\|_{0,\Omega_s}^{1/2}\|\mathbb{D}(\bm{u}_{s,h}-\bm{u}_s)\|_{0,\Omega_s}^{1/2}
\|\mathbb{D}(\bm{u}_{s,h})\|_{0,\Omega_s}\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}}_{\uppercase\expandafter{\romannumeral1}_h}\\
&\quad
+\underbrace{\left|\left((\bm{u}_s\cdot\nabla)(\bm{u}_{s,h}-\bm{u}_s),\bm{v}_s\right)_{\Omega_s}\right|
+\frac{1}{2}\left|\left(\nabla\cdot(\bm{u}_{s,h}-\bm{u}_s),\bm{u}_s\cdot\bm{v}_s\right)_{\Omega_s}\right|}_{\uppercase\expandafter{\romannumeral2}_h}.
\end{align*}
We can easily obtain that $\lim_{h\rightarrow 0}\uppercase\expandafter{\romannumeral1}_h=0$ by \eqref{discrete:a:priori:estimate:2} and \eqref{limit:l2}, and $\lim_{h\rightarrow 0}\uppercase\expandafter{\romannumeral2}_h=0$ by \eqref{limit:h1}. Therefore the following limit holds:
\begin{equation}\label{limit:h:temp3}
\lim_{h\rightarrow 0}b(\bm{u}_{s,h};\bm{u}_{s,h},\bm{v}_s)=b(\bm{u}_s;\bm{u}_s,\bm{v}_s),
\quad\forall\,\bm{v}_s\in\bm{V}_s.
\end{equation}
It follows from \eqref{limit:h:temp1}--\eqref{limit:h:temp3} that
\[
a(\underline{\bm{u}},\underline{\bm{v}})
+b(\bm{u}_s;\bm{u}_s,\bm{v}_s)
=\rho(\bm{f}_s,\bm{v}_s)_{\Omega_s}
+(f_d,\psi_f)_{\Omega_d},\quad
\forall\,\underline{\bm{v}}\in\underline{\bm{W}},
\]
which implies that $\underline{\bm{u}}=(\bm{u}_s,\phi_m,\phi_f)\in\underline{\bm{W}}$ is a solution of \eqref{continuous:weak:formulation:2}.
Finally, we assume that there are two solutions $\underline{\bm{u}}^1,\underline{\bm{u}}^2\in\underline{\bm{W}}$ to \eqref{continuous:weak:formulation:2}. Then, their differences $\bm{\mathrm{e}}_s=\bm{u}_s^1-\bm{u}_s^2$, $\mathrm{e}_m=\phi_m^1-\phi_m^2$ and $\mathrm{e}_f=\phi_f^1-\phi_f^2$ satisfy, for all $(\bm{v}_s,\psi_m,\psi_f)\in\bm{V}_s\times X_d\times X_d$,
\begin{equation}\label{error:equation}
\begin{split}
&2\rho\nu\left(\mathbb{D}(\bm{\mathrm{e}}_s),\mathbb{D}(\bm{v}_s)\right)_{\Omega_s}
+\frac{k_m}{\mu}\left(\nabla\mathrm{e}_m,\nabla\psi_m\right)_{\Omega_d}
+\frac{k_f}{\mu}\left(\nabla\mathrm{e}_f,\nabla\psi_f\right)_{\Omega_d}
+\left((\bm{\mathrm{e}}_s\cdot\nabla)\bm{u}_s^1,\bm{v}_s\right)_{\Omega_s}
+\left((\bm{u}_s^2\cdot\nabla)\bm{\mathrm{e}}_s,\bm{v}_s\right)_{\Omega_s}\\
&+\frac{\sigma k_m}{\mu}\left(\mathrm{e}_m-\mathrm{e}_f,\psi_m\right)_{\Omega_d}+\frac{\sigma k_m}{\mu}\left(\mathrm{e}_f-\mathrm{e}_m,\psi_f\right)_{\Omega_d}
+\left\langle\mathrm{e}_f,\bm{v}_s\cdot\bm{n}_s\right\rangle_\Gamma-
\left\langle\psi_f,\bm{\mathrm{e}}_s\cdot\bm{n}_s\right\rangle_\Gamma
+\sum_{i=1}^{N-1}\frac{\alpha\rho\nu}{\sqrt{k_f}}\left\langle\bm{\mathrm{e}}_s\cdot\bm{\tau}_i,\bm{v}_s\cdot\bm{\tau}_i\right\rangle_\Gamma=0.
\end{split}
\end{equation}
Taking $\bm{v}_s=\bm{\mathrm{e}}_s$, $\psi_m=\mathrm{e}_m$ and $\psi_f=\mathrm{e}_f$ in \eqref{error:equation} and using another version of \eqref{trilinear:embedding}, which is
\begin{equation}\label{another:trilinear}
\left|\left((\bm{v}_s\cdot\nabla)\bm{w}_s,\bm{z}_s\right)_{\Omega_s}\right|
\leqslant \mathcal N\|\mathbb{D}(\bm{v}_s)\|_{0,\Omega_s}\|\mathbb{D}(\bm{w}_s)\|_{0,\Omega_s}\|\mathbb{D}(\bm{z}_s)\|_{0,\Omega_s},
\quad\forall\,\bm{v}_s,\bm{w}_s,\bm{z}_s\in\bm{X}_s,
\end{equation}
we obtain
\begin{equation}\label{error:equation:2}
\left[2\rho\nu-\mathcal N(\|\mathbb{D}(\bm{u}_s^1)\|_{0,\Omega_s}+\|\mathbb{D}(\bm{u}_s^2)\|_{0,\Omega_s})\right]
\|\mathbb{D}(\bm{\mathrm{e}}_s)\|_{0,\Omega_s}^2+\frac{k_m}{\mu}\|\nabla\mathrm{e}_m\|_{0,\Omega_d}^2+
\frac{k_f}{\mu}\|\nabla\mathrm{e}_f\|_{0,\Omega_d}^2+\frac{\sigma k_m}{\mu}\|\mathrm{e}_m-\mathrm{e}_f\|_{0,\Omega_d}^2
\leqslant 0.
\end{equation}
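To quantify the first bracket in \eqref{error:equation:2}, note that the a priori estimate \eqref{a:priori:estimate} applies to both $\underline{\bm{u}}^1$ and $\underline{\bm{u}}^2$, and that $\mathcal C\leqslant\rho^{1/2}\nu^{-1/2}\|\bm{f}_s\|_{-1,\Omega_s}+\mu^{1/2}\sigma^{-1/2}k_f^{-1/2}\|f_d\|_{-1,\Omega_d}$, so that
\[
\mathcal N\left(\|\mathbb{D}(\bm{u}_s^1)\|_{0,\Omega_s}+\|\mathbb{D}(\bm{u}_s^2)\|_{0,\Omega_s}\right)
\leqslant\frac{2\mathcal N\,\mathcal C}{(\rho\nu)^{1/2}}
\leqslant 2\rho\nu\,\mathcal N\left(\rho^{-1}\nu^{-2}\|\bm{f}_s\|_{-1,\Omega_s}
+\rho^{-3/2}\nu^{-3/2}\mu^{1/2}\sigma^{-1/2}k_f^{-1/2}\|f_d\|_{-1,\Omega_d}\right).
\]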
Hence, under assumption \eqref{assum:unique}, the coefficient $2\rho\nu-\mathcal N(\|\mathbb{D}(\bm{u}_s^1)\|_{0,\Omega_s}+\|\mathbb{D}(\bm{u}_s^2)\|_{0,\Omega_s})$ in \eqref{error:equation:2} is positive, so that $\mathbb{D}(\bm{\mathrm{e}}_s)=\bm{0}$ and $\nabla\mathrm{e}_m=\nabla\mathrm{e}_f=\bm{0}$, and hence, by Korn's and Poincar\'e's inequalities, $\underline{\bm{u}}^1=\underline{\bm{u}}^2$. That is, the solution to \eqref{continuous:weak:formulation:2} is unique.
\end{pf}
In order to prove the existence and uniqueness of the solution $(\underline{\bm{u}},p_s)\in\underline{\bm{Y}}\times Q_s$ to the model problem \eqref{continuous:weak:formulation}, we shall use the inf-sup condition \eqref{con:infsup} and the Babu\v{s}ka--Brezzi theory \cite{Babuska1973179, Brezzi1974129, Girault1986, Temam1984}.
\begin{theorem}\label{theorem:unique:2}
Under the assumption \eqref{assum:unique} of Theorem \ref{theorem:unique:1}, the model problem \eqref{continuous:weak:formulation} admits a unique solution $(\underline{\bm{u}},p_s)\in\underline{\bm{Y}}\times Q_s$ such that
\begin{align}
\label{estimate:velocity}
&\rho\nu\|\mathbb{D}(\bm{u}_{s})\|_{0,\Omega_s}^2+\frac{2\sigma k_m}{\mu}\|\nabla\phi_{m}\|_{0,\Omega_d}^2
+\frac{\sigma k_f}{\mu}\|\nabla\phi_{f}\|_{0,\Omega_d}^2\leqslant\mathcal C^2,\\
\label{estimate:pressure}
&\|p_s\|_{0,\Omega_s}\leqslant\beta_0^{-1}\left(2\rho^{1/2}\nu^{1/2}\mathcal C+\rho^{-1}\nu^{-1}\mathcal N\mathcal C^2+
\rho\|\bm{f}_s\|_{-1,\Omega_s}+\|f_d\|_{-1,\Omega_d}\right),
\end{align}
where $\mathcal C^2$ is defined in Theorem \ref{theorem:priori:estimate}.
\end{theorem}
\begin{pf}
For the solution $\underline{\bm{u}}=(\bm{u}_s,\phi_m,\phi_f)\in \underline{\bm{W}}$ to \eqref{continuous:weak:formulation:2}, the following mapping:
\[
\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)\in\underline{\bm{Y}}\;\mapsto\;
a(\underline{\bm{u}},\underline{\bm{v}})+b(\bm{u}_s;\bm{u}_s,\bm{v}_s)-\rho(\bm{f}_s,\bm{v}_s)_{\Omega_s}-(f_d,\psi_f)_{\Omega_d}
\]
defines an element $L$ of the dual space $\underline{\bm{Y}}'$, and furthermore, $L$ vanishes on $\underline{\bm{W}}$. As a result, the inf-sup condition \eqref{con:infsup} implies that there exists exactly one $p_s\in Q_s$ such that
\begin{equation}\label{unique:pressure}
L(\underline{\bm{v}})=d(\bm{v}_s,p_s),\quad\forall\,\underline{\bm{v}}=(\bm{v}_s,\psi_m,\psi_f)\in\underline{\bm{Y}}.
\end{equation}
Therefore the fact $\bm{u}_s\in\bm{X}_s$ and \eqref{unique:pressure} show that the model problem \eqref{continuous:weak:formulation} admits a unique solution $(\underline{\bm{u}},p_s)\in\underline{\bm{Y}}\times Q_s$. Finally, the result \eqref{estimate:pressure} is a straightforward application of the inf-sup condition \eqref{con:infsup} with the help of \eqref{another:trilinear} and \eqref{estimate:velocity}.
\end{pf}
\end{document}
\begin{document}
\title{\large\bf\sc Complementarity Problem With Nekrasov $Z$ tensor}
\author{
R. Deb$^{a,1}$ and A. K. Das$^{b,2}$\\
\emph{\small $^{a}$Jadavpur University, Kolkata, India.}\\
\emph{\small $^{b}$Indian Statistical Institute, Kolkata, India.}\\
\emph{\small $^{1}$Email: [email protected]}\\
\emph{\small $^{2}$Email: [email protected]}\\
}
\date{}
\maketitle
\begin{abstract}
\noindent It is worth knowing whether a particular tensor class belongs to the class of $P$-tensors, since this ensures that the tensor complementarity problem (TCP) has a nonempty and compact solution set. In this study, we propose a new class of tensors, the Nekrasov $Z$ tensors, in the context of the tensor complementarity problem. We show that the class of $P$-tensors contains the class of even ordered Nekrasov $Z$ tensors with positive diagonal elements. In this context, we propose a procedure by which a Nekrasov $Z$ tensor can be transformed into a tensor which is diagonally dominant.\\
\noindent{\bf Keywords:} Diagonally dominant tensor, Nekrasov tensors, Nonsingular $H$ tensor, Nekrasov $Z$ tensor, $P$-tensors, Tensor complementarity problem.\\
\noindent{\bf AMS subject classifications:} 15A69, 99C33, 65F15
\end{abstract}
\footnotetext[1]{Corresponding author}
\section{Introduction}
A tensor $\mathcal{T}= (t_{j_1 ... j_m}) $ is a multidimensional array of elements $t_{j_1... j_m} \in \mathbb{C},$ where $j_i \in [n]$ for $i\in [m]$ and $[n]=\{1,...,n\}.$ The set of $m$th order $n$ dimensional complex (real) tensors is denoted by $\mathbb{C}^{[m,n]}\; (\mathbb{R}^{[m,n]}).$
Several classes of tensors have significant impact on different branches of science and engineering. Applications of tensors can be found in physics, quantum computing, diffusion tensor imaging, image authenticity verification problems, spectral hypergraph theory, optimization theory and in several other areas. In optimization theory, Song and Qi \cite{song2017properties} introduced the tensor complementarity problem (TCP), which is identified as a subclass of the nonlinear complementarity problem in which the involved functions are obtained with the help of a tensor.
\noindent Given $\mathcal{T}\in \mathbb{R}^{[m,n]}$ and $q\in\mathbb{R}^n,$ TCP is to obtain $x\in \mathbb{R}^n $ such that
\begin{equation}\label{tensor comp equation}
x\geq 0,\;\; \mathcal{T}x^{m-1}+q \geq 0,\;\; x^T (\mathcal{T}x^{m-1}+q) = 0.
\end{equation}
This problem is denoted by TCP$(\mathcal{T}, q)$ and the solution set of TCP$(\mathcal{T}, q)$ is denoted by SOL$(\mathcal{T}, q).$
TCPs are observed in optimization theory, game theory and in other areas. Some mathematical problems can be reformulated as a TCP, such as a class of multiperson noncooperative games \cite{huang2017formulating}, the hypergraph clustering problem and the traffic equilibrium problem \cite{huang2019tensor}.
\noindent TCP can be viewed as one kind of extension of the linear complementarity problem (LCP) where the involved functions are homogeneous polynomials constructed with the help of a tensor.
\noindent Given $M\in \mathbb{R}^{[2,n]}$ and $q\in \mathbb{R}^n$, the LCP \cite{cottle2009linear} is to obtain $u\in \mathbb{R}^n$ such that
\begin{equation}\label{linear comp equation}
u\geq 0,\;\; Mu + q \geq 0,\;\; u^T (Mu + q) = 0.
\end{equation}
The problem is denoted by LCP$(M,q)$ and the solution set of LCP$(M,q)$ is denoted by SOL$(M,q).$ The stability of a large number of problems can be explained with the formulation of a linear complementarity problem. For details see \cite{das2019some}, \cite{neogy2006some}, \cite{dutta2021some}, \cite{dutta2022on}, \cite{neogy2013weak}, \cite{jana2018semimonotone}, \cite{neogy2005almost}, \cite{neogy2011singular}, \cite{jana2019hidden}, \cite{jana2021more}, \cite{neogy2009modeling}, \cite{das2017finiteness}, \cite{das2018invex}. For the formulation of conflicting situations see \cite{mondal2016discounted}, \cite{neogy2008mathematical}, \cite{neogy2008mixture}, \cite{neogy2005linear}, \cite{neogy2016optimization}, \cite{das2016generalized}, \cite{neogy2011generalized} and \cite{mohan2004note}. In connection with the linear complementarity problem, computational methods depend on the underlying matrix classes. For details see \cite{mohan2001classes}, \cite{mohan2001more}, \cite{neogy2005principal}, \cite{das2016properties}, \cite{neogy2012generalized}, \cite{jana2019hidden}, \cite{jana2021iterative}, \cite{jana2021more}, \cite{jana2018processability}.
\noindent The Nekrasov $Z$-matrix was introduced by Orera and Pe\~{n}a \cite{orera2019accurate} to study methods to compute its inverse with high relative accuracy.
\noindent Several matrix classes play a crucial role in the theory of the linear complementarity problem; similarly, some structured tensors are important in the study of tensor complementarity theory. In recent years some special types of matrices have been extended to tensors in connection with the TCP. Among the several tensor classes, the classes of $P(P_0)$-tensors, $Z$ tensors, $M$-tensors, $H$ tensors, and Nekrasov tensors are of special interest. $Z$ tensors and $M$-tensors were introduced by Zhang et al. \cite{zhang2014m} in the context of the tensor complementarity problem. They prove that for a $Z$ tensor the TCP has the least element property. Ding et al. \cite{ding2013m} introduce $H$ tensors and establish their importance in the tensor complementarity problem. Zhang and Bu \cite{zhang2018nekrasov} introduce the Nekrasov tensor and show that a Nekrasov tensor is a nonsingular $H$ tensor.
\noindent Song and Qi \cite{song2014properties} introduce the $P$-tensor. Various properties of $P$-tensors are considered in connection with the TCP. For details see \cite{bai2016global}, \cite{ding2018p}, \cite{yuan2014some}.
\noindent In this paper, we introduce the Nekrasov $Z$ tensor. We study the connection between Nekrasov $Z$ tensors and diagonally dominant tensors. We prove that the class of Nekrasov $Z$ tensors of even order with positive diagonal elements is a subclass of the class of $P$-tensors.
The paper is organized as follows. Section 2 contains a few essential definitions and preliminary results. In Section 3, we develop the Nekrasov $Z$ tensor and investigate its connection with $P$-tensors.
\section{Preliminaries}
Here we consider some basic notations and present a few useful definitions and theorems used in this paper. $x\in \mathbb{C}^n$ denotes a column vector and the transpose of $x$ is denoted by $x^T.$ A diagonal matrix $W=[w_{ij}]_{n \times n}=diag(w_1, \; ..., \; w_n)$ is defined as $w_{ij}=\left \{ \begin{array}{ll}
w_i &;\; \forall \; i=j, \\
0 &; \; \forall \; i \neq j.
\end{array} \right.$
\begin{defn}\cite{mangasarian1976linear}
$A\in \mathbb{R}^{[2,n]}$ is a $Z$-matrix if all its off-diagonal elements are nonpositive.
\end{defn}
\begin{defn}\cite{cvetkovic2009new,bailey1969bounds,kolotilina2015some, garcia2014error}
A matrix $M=(m_{ij})\in \mathbb{C}^{[2,n]}$ is a Nekrasov matrix if $\lvert m_{ii} \rvert> \Lambda_i(M), \; \forall\; i\in [n],$ where
\begin{center}
$\Lambda_i(M) =\left\{\begin{array}{ll}
\sum_{j=2}^n \lvert m_{ij} \rvert &, i=1, \\
\sum_{j=1}^{i-1} \lvert m_{ij} \rvert \frac{\Lambda_j(M)}{\lvert m_{jj} \rvert} + \sum_{j=i+1}^n \lvert m_{ij} \rvert &, i=2,3,...,n.
\end{array} \right.$
\end{center}
\end{defn}
\begin{defn}\cite{orera2019accurate}
$A\in \mathbb{R}^{[2,n]}$ is a Nekrasov $Z$-matrix if $A$ is a Nekrasov matrix as well as a $Z$-matrix.
\end{defn}
\noindent Let $\mathcal{O}$ denote the zero tensor, all of whose entries are zero. For $\mathcal{T}\in \mathbb{C}^{[m,n]}$ and $x\in \mathbb{C}^n,\; \mathcal{T}x^{m-1}\in \mathbb{C}^n $ is a vector defined by
\[ (\mathcal{T}x^{m-1})_i = \sum_{i_2, ..., i_m =1}^{n} t_{i i_2 ...i_m} x_{i_2} \cdots x_{i_m} , \mbox{ for all } i \in [n], \]
$\mathcal{T}x^m\in \mathbb{C} $ is a scalar defined by
\[ x^T\mathcal{T}x^{m-1}= \mathcal{T}x^m = \sum_{i_1, ..., i_m =1}^{n} t_{i_1 ...i_m} x_{i_1} \cdots x_{i_m}. \]
\noindent Let $\mathcal{U}$ and $\mathcal{V}$ be two $n$ dimensional tensors of order $p \geq 2$ and $r \geq 1,$ respectively. Shao \cite{shao2013general} introduced the general product of $\mathcal{U}$ and $\mathcal{V}.$ Let $\mathcal{U} \cdot \mathcal{V} = \mathcal{T}.$ Then $\mathcal{T}$ is an $n$ dimensional tensor of order $(p-1)(r-1) + 1$ whose elements are given by
\[t_{j \beta_1 \cdots \beta_{p-1} } =\sum_{j_2, \cdots ,j_p \in [n]} u_{j j_2 \cdots j_p} v_{j_2 \beta_1} \cdots v_{j_p \beta_{p-1}},\] where $j \in [n]$ and $\beta_1, \cdots, \beta_{p-1} \in [n]^{r-1}.$
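\noindent In particular, if $\mathcal{V}=W=diag(w_1,\dots,w_n)$ is a diagonal matrix (so $r=2$), the general product reduces entrywise to
\[
(\mathcal{U}W)_{j i_2 \cdots i_p}= u_{j i_2 \cdots i_p}\, w_{i_2}\cdots w_{i_p},\qquad j, i_2,\dots,i_p\in[n],
\]
which is the form of the product used below for $\mathcal{T}W$.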
\begin{defn}\cite{song2016properties}
Given $\mathcal{T} \in \mathbb{R}^{[m,n]} $ and $q\in \mathbb{R}^n$, a vector $x\in\mathbb{R}^n$ solves TCP$(\mathcal{T},q)$ if $x$ satisfies equation (\ref{tensor comp equation}).
\end{defn}
\begin{defn}\cite{song2015properties}
$\mathcal{T}\in \mathbb{R}^{[m,n]} $ is a $P$-tensor, if for any $x\in \mathbb{R}^n \backslash \{0\}$, $\exists$ a $j\in [n]$ such that $x_j \neq 0$ and $x_j (\mathcal{T}x^{m-1})_j >0.$
\end{defn}
\noindent Row subtensors are defined in \cite{shao2016some}. Here, for the sake of convenience, we denote the $i$th row subtensor of $\mathcal{T}$ by $\mathcal{R}_i(\mathcal{T}).$
\begin{defn}\cite{shao2016some}
For each $i$ the $i$th row subtensor of $\mathcal{T}\in \mathbb{C}^{[m,n]}$ is denoted by $\mathcal{R}_i(\mathcal{T})$ and its elements are given as $(\mathcal{R}_i(\mathcal{T}))_{i_2 ... i_m}=(t_{i i_2... i_m})$, where $i_l\in [n]$ and $2\leq l \leq m.$
\end{defn}
\begin{defn}\cite{zhang2018nekrasov}
Let $\mathcal{T}\in \mathbb{C}^{[m,n]}$ such that $\mathcal{T}=(t_{i_1 i_2 ... i_m})$ and
\begin{equation}gin{equation*}
R_i(\mathcal{T})= \sum_{(i_2 ... i_m) \neq (i ... i)} \lvert t_{i i_2 ... i_m} \rvert,\; \forall \; i.
\end{equation*}
The tensor $\mathcal{T}$ is diagonally dominant (strict diagonally dominant) if $\lvert t_{i ... i} \rvert \geq (>) R_i(\mathcal{T})$ for all $i\in [n].$
\end{defn}
\begin{defn}\cite{zhang2018nekrasov}
$\mathcal{T}\in \mathbb{C}^{[m,n]}$ is a quasidiagonally dominant tensor if there exists a diagonal matrix $W=diag(w_1, w_2, \cdots, w_n)$ such that $\mathcal{T} W$ is a strictly diagonally dominant tensor, i.e.,
\begin{equation}gin{equation*}
\lvert t_{i ... i} \rvert w_i^{m-1} > \sum_{(i_2 \cdots i_m)\neq(i \cdots i)}\lvert t_{i i_2 \cdots i_m}\rvert w_{i_2} \cdots w_{i_m}, \;\;\; \forall\; i\in [n].
\end{equation*}
\end{defn}
\noindent Qi \cite{qi2005eigenvalues} and Lim \cite{lim2005singular} proposed the idea of eigenvalues and eigenvectors of tensors. If a pair $(\lambda, x) \in \mathbb{C}\times (\mathbb{C}^n \backslash \{0\})$ satisfies the equation $\mathcal{T} x^{m-1} = \lambda x^{[m-1]},$ then $\lambda$ is an eigenvalue of $\mathcal{T}$ and $x$ is an eigenvector of $\mathcal{T}$ corresponding to $\lambda$, where $x^{[m-1]} = (x_1^{m-1},... , x_n^{m-1})^T.$ The spectral radius of $\mathcal{T}$ is defined as $\rho(\mathcal{T}) = \max\{|\lambda| \; : \; \lambda \text{ is an eigenvalue of } \mathcal{T}\}.$
\begin{defn}\cite{ding2013m}
$\mathcal{T}$ is a $Z$ tensor if all its off diagonal elements are nonpositive.
\end{defn}
\begin{defn}\cite{ding2013m}
A $Z$ tensor $\mathcal{T}$ is an $M$ tensor if there exist $\mathcal{B} \geq \mathcal{O}$ and $s\geq \rho(\mathcal{B})$ such that $\mathcal{T}= s \mathcal{I} - \mathcal{B}$. Moreover, $\mathcal{T}$ is a nonsingular $M$ tensor if $s>\rho(\mathcal{B}).$
\end{defn}
\begin{defn}\cite{ding2013m}
For a tensor $\mathcal{T}= (t_{i_1 i_2 ... i_m})\in \mathbb{C}^{[m,n]},$ the tensor $\mathcal{C}(\mathcal{T})= (c_{i_1 i_2 ... i_m})$ is the comparison tensor of $\mathcal{T}$ if
\begin{center}
$c_{i_1 i_2 ... i_m}=\left\{\begin{array}{ll}
|t_{i_1 i_2 ... i_m}| & , \text{ if } (i_1, i_2, ... ,i_m) =(i,i, ..., i), \\
-|t_{i_1 i_2 ... i_m}| & , \text{ if } (i_1, i_2, ... ,i_m) \neq (i,i, ..., i).
\end{array} \right.$
\end{center}
\end{defn}
\begin{defn}\cite{ding2013m}
$\mathcal{T}$ is an $H$ tensor, if its comparison tensor is an $M$-tensor, and it is called a nonsingular $H$ tensor if its comparison tensor is a nonsingular $M$-tensor.
\end{defn}
\begin{defn}\cite{zhang2018nekrasov}
Let $\mathcal{T}=(t_{i_1 i_2 ... i_m}) \in \mathbb{C}^{[m,n]}$ with $t_{i i ... i}\neq 0$ for all $i,$ and define
\begin{align*}
\Lambda_1(\mathcal{T}) & =R_1(\mathcal{T}),\\
\Lambda_i(\mathcal{T}) & =\sum_{i_2...i_m \in [i-1]^{m-1}} \lvert t_{i i_2 ... i_m} \rvert \left(\frac{\Lambda_{i_2} (\mathcal{T})}{\lvert t_{i_2 ... i_2} \rvert}\right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m} (\mathcal{T})}{\lvert t_{i_m ... i_m} \rvert}\right)^{\frac{1}{m-1}}\\
& \qquad + \sum_{i_2...i_m \notin [i-1]^{m-1}, (i_2 ... i_m)\neq (i ... i)} \lvert t_{i i_2 ... i_m} \rvert, \;\;\;\; i=2,3,...,n.
\end{align*}
$\mathcal{T}=(t_{i_1 i_2 ... i_m})\in \mathbb{R}^{[m,n]}$ is a Nekrasov tensor if
\begin{equation}
\lvert t_{i ... i} \rvert > \Lambda_i(\mathcal{T}), \; \forall\; i\in [n].
\end{equation}
\end{defn}
\begin{theorem}\cite{ding2013m}\label{nonsingular H if and only if quasi-SDD}
$\mathcal{T}$ is a nonsingular $H$ tensor iff it is a quasi-strictly diagonally dominant tensor.
\end{theorem}
\begin{theorem}\cite{zhang2018nekrasov}\label{nekrasov implies nonsingular H}
If a tensor $\mathcal{T}=(t_{i_1 i_2 ... i_m})\in \mathbb{C}^{[m,n]}$ is a Nekrasov tensor, then $\mathcal{T}$ is a nonsingular $H$ tensor.
\end{theorem}
\begin{theorem}\cite{ding2018p}\label{nonsingular H implies P}
A nonsingular $H$ tensor of even order with all positive diagonal elements is a $P$-tensor.
\end{theorem}
\begin{theorem}\cite{bai2016global}
For $q\in \mathbb{R}^n$ and a $P$-tensor $\mathcal{T} \in \mathbb{R}^{[m,n]}$, the solution set of TCP$(\mathcal{T},q)$ is nonempty and compact.
\end{theorem}
\section{Main results}
We begin by introducing the Nekrasov $Z$ tensor.
\begin{defn}
$\mathcal{T}\in \mathbb{R}^{[m,n]}$ is a Nekrasov $Z$ tensor if $\mathcal{T}$ is a Nekrasov tensor as well as a $Z$ tensor.
\end{defn}
The following is an example of a Nekrasov $Z$ tensor.
\begin{examp}\label{example of nekrasov Z tensor}
We consider Example 3.3 of \cite{zhang2018nekrasov}. Let $\mathcal{B}\in \mathbb{R}^{[4,4]}$ be such that $b_{1111}=8,\; b_{2222}=3.8,\; b_{3333}=3,\; b_{4444}=10,\; b_{1112}=b_{2111}=b_{1211}=b_{1121}=-1,\;$ $b_{3222}=b_{2322}=b_{2232}=b_{2223}=-1,$ $b_{4441}=b_{4414}=b_{4144}=b_{1444}=-3$ and all other elements of $\mathcal{B}$ are zero. Then $R_1(\mathcal{B})=6,\; R_2(\mathcal{B})=4,\; R_3(\mathcal{B})=1,\; R_4(\mathcal{B})=9.$ Since $|b_{2222}| < R_2(\mathcal{B}),$ $\mathcal{B}$ is not a diagonally dominant tensor. However, $\Lambda_1(\mathcal{B})=6,\; \Lambda_2(\mathcal{B})=3.75,\;\Lambda_3(\mathcal{B})\approx 0.98,\;\Lambda_4(\mathcal{B})=9,$ and $|b_{ii...i}|>\Lambda_i(\mathcal{B}),\; \forall\; i\in [4].$ Hence $\mathcal{B}$ is a Nekrasov tensor. Moreover, all the off-diagonal elements of $\mathcal{B}$ are nonpositive, so $\mathcal{B}$ is a $Z$ tensor. Hence $\mathcal{B}$ is a Nekrasov $Z$ tensor.
\end{examp}
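\noindent The quantities in Example \ref{example of nekrasov Z tensor} are easy to reproduce numerically. The following is a minimal illustrative script (not part of the cited reference; it assumes NumPy is available, uses $0$-based indices, and all names in it are ad hoc), which recomputes $R_i(\mathcal{B})$ and $\Lambda_i(\mathcal{B})$ from the definitions above and checks the Nekrasov condition:
\begin{verbatim}
import numpy as np
from itertools import product

m, n = 4, 4                       # order and dimension of the tensor B
B = np.zeros((n,) * m)
B[0, 0, 0, 0], B[1, 1, 1, 1], B[2, 2, 2, 2], B[3, 3, 3, 3] = 8, 3.8, 3, 10
for idx in [(0, 0, 0, 1), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0),
            (2, 1, 1, 1), (1, 2, 1, 1), (1, 1, 2, 1), (1, 1, 1, 2)]:
    B[idx] = -1                   # entries equal to -1
for idx in [(3, 3, 3, 0), (3, 3, 0, 3), (3, 0, 3, 3), (0, 3, 3, 3)]:
    B[idx] = -3                   # entries equal to -3

def R(T, i):
    # off-diagonal absolute row sum R_i(T)
    return sum(abs(T[(i,) + idx]) for idx in product(range(n), repeat=m - 1)
               if idx != (i,) * (m - 1))

def Lambdas(T):
    # recursive Nekrasov quantities Lambda_i(T)
    Lam = [R(T, 0)]
    for i in range(1, n):
        s = 0.0
        for idx in product(range(n), repeat=m - 1):
            if idx == (i,) * (m - 1):
                continue
            if all(j < i for j in idx):          # indices in [i-1]^{m-1}
                w = np.prod([(Lam[j] / abs(T[(j,) * m])) ** (1.0 / (m - 1))
                             for j in idx])
                s += abs(T[(i,) + idx]) * w
            else:
                s += abs(T[(i,) + idx])
        Lam.append(s)
    return Lam

Lam = Lambdas(B)
print([float(R(B, i)) for i in range(n)])            # [6.0, 4.0, 1.0, 9.0]
print([round(float(x), 3) for x in Lam])             # [6.0, 3.75, 0.987, 9.0]
print(all(B[(i,) * m] > Lam[i] for i in range(n)))   # True: B is Nekrasov
\end{verbatim}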
Now we show that the class of $P$-tensors contains the even order Nekrasov $Z$ tensors with positive diagonal elements.
\begin{theorem}\label{1st theorem of Nekrasov Z tensor}
A Nekrasov $Z$ tensor of even order with positive diagonal elements is a $P$-tensor.
\end{theorem}
\begin{proof}
Let $\mathcal{T}$ be a Nekrasov $Z$ tensor which has positive diagonal elements. By Theorem \ref{nekrasov implies nonsingular H}, a Nekrasov tensor is a nonsingular $H$ tensor. Therefore $\mathcal{T}$ is a nonsingular $H$ tensor with positive diagonal elements, and hence, by Theorem \ref{nonsingular H implies P}, $\mathcal{T}$ is a $P$-tensor.
\end{proof}
Since a Nekrasov tensor is a nonsingular $H$ tensor, it is quasi-diagonally dominant, i.e., there exists a positive diagonal matrix $W$ such that $\mathcal{T}W$ is a strictly diagonally dominant tensor. Here we construct explicitly a diagonal matrix $W$ for which $\mathcal{T}W$ is a diagonally dominant tensor.
\begin{theorem}
Suppose $\mathcal{T}\in \mathbb{R}^{[m,n]}$ with $m$ even. Let $\mathcal{T}$ be a Nekrasov $Z$ tensor whose diagonal elements are positive and let $W$ be the diagonal matrix
\begin{equation}
W= diag\left( \left(\frac{\Lambda_1(\mathcal{T})}{t_{11\cdots 1}}\right)^{\frac{1}{m-1}},\; \left(\frac{\Lambda_2(\mathcal{T})}{t_{22\cdots 2}}\right)^{\frac{1}{m-1}},\; \cdots\; , \left(\frac{\Lambda_n(\mathcal{T})}{t_{nn\cdots n}}\right)^{\frac{1}{m-1}} \right).
\end{equation}
Then $\mathcal{T}W$ is a diagonally dominant $Z$ tensor.
\end{theorem}
\begin{proof}
Firstly note that $\frac{\Lambda_i(\mathcal{T})}{t_{ii\cdots i}} \geq 0$ and so $\left(\frac{\Lambda_i(\mathcal{T})}{t_{ii\cdots i}}\right)^{\frac{1}{m-1}} \geq 0, \; \forall\; i\in [n],$ as $\mathcal{T}$ is a Nekrasov $Z$ tensor with positive diagonal elements and $m$ is even. So $W$ is a nonnegative diagonal matrix. Let $\mathcal{B}=(b_{i_1 i_2 ...i_m})\in \mathbb{R}^{[m,n]}$ be such that $\mathcal{B} = \mathcal{T}W.$ Then
\begin{equation}
b_{i_1 i_2 ... i_m}= t_{i_1 i_2 ... i_m} \left(\frac{\Lambda_{i_2}(\mathcal{T})}{t_{{i_2}{i_2}\cdots {i_2}}} \right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m}(\mathcal{T})}{t_{{i_m}{i_m}\cdots {i_m}}} \right)^{\frac{1}{m-1}}
\end{equation}
and
\begin{equation}
b_{ii ...i} =\Lambda_i(\mathcal{T}).
\end{equation}
It is easy to observe that the signs of $\mathcal{B}$ are same as the signs of $\mathcal{T}.$ This infers that $\mathcal{B}$ is a $Z$ tensor, since $\mathcal{T}$ is a $Z$ tensor. Now we show that $\mathcal{B}$ is a diagonally dominant tensor. We write
\begin{equation}\label{for the conclusion of Nekrasov Z}
t_{k\cdots k}= \lvert t_{k\cdots k} \rvert > \Lambda_k(\mathcal{T}), \;\; \forall\; k\in [n],
\end{equation}
since $\mathcal{T}$ is a Nekrasov tensor.
Now for each $i\in [n]$ we obtain,
\begin{align*}
|b_{i\cdots i}| &=b_{i\cdots i}\\
&=\Lambda_i (\mathcal{T})\\
&=\sum_{i_2 ... i_m \in [i-1]^{m-1}} \lvert t_{i i_2 ... i_m} \rvert \left(\frac{\Lambda_{i_2} (\mathcal{T})}{\lvert t_{i_2 ... i_2} \rvert}\right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m} (\mathcal{T})}{\lvert t_{i_m ... i_m} \rvert}\right)^{\frac{1}{m-1}}\\
& \qquad + \sum_{i_2 ... i_m \notin [i-1]^{m-1}, (i_2 ... i_m)\neq (i ... i)} \lvert t_{i i_2 ... i_m} \rvert\\
&\geq \sum_{i_2 ... i_m \in [i-1]^{m-1}} \lvert t_{i i_2 ... i_m} \rvert \left(\frac{\Lambda_{i_2} (\mathcal{T})}{\lvert t_{i_2 ... i_2} \rvert}\right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m} (\mathcal{T})}{\lvert t_{i_m ... i_m} \rvert}\right)^{\frac{1}{m-1}} \\
& \qquad + \sum_{i_2 ... i_m \notin [i-1]^{m-1}, (i_2 ... i_m)\neq (i ... i)} \lvert t_{i i_2 ... i_m} \rvert \left(\frac{\Lambda_{i_2} (\mathcal{T})}{\lvert t_{i_2 ... i_2} \rvert}\right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m} (\mathcal{T})}{\lvert t_{i_m ... i_m} \rvert}\right)^{\frac{1}{m-1}} \text{ by (\ref{for the conclusion of Nekrasov Z})} \\
&=\sum_{i_2 ... i_m \in [n]^{m-1}, (i_2 ... i_m)\neq (i ... i) } \lvert t_{i i_2 ... i_m} \rvert \left(\frac{\Lambda_{i_2} (\mathcal{T})}{\lvert t_{i_2 ... i_2} \rvert}\right)^{\frac{1}{m-1}} \cdots \left(\frac{\Lambda_{i_m} (\mathcal{T})}{\lvert t_{i_m ... i_m} \rvert}\right)^{\frac{1}{m-1}}\\
&=\sum_{(i_2 ... i_m)\neq (i ... i) } |b_{i i_2 ... i_m}|.
\end{align*}
This implies that $\mathcal{B}$ is a diagonally dominant tensor.
\end{proof}
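\noindent As an illustration, consider the Nekrasov $Z$ tensor $\mathcal{B}$ of Example \ref{example of nekrasov Z tensor} (the numbers below are rounded to three digits). There $m=4$ and
\[
W= diag\left( \left(\tfrac{6}{8}\right)^{\frac{1}{3}},\; \left(\tfrac{3.75}{3.8}\right)^{\frac{1}{3}},\; \left(\tfrac{\Lambda_3(\mathcal{B})}{3}\right)^{\frac{1}{3}},\; \left(\tfrac{9}{10}\right)^{\frac{1}{3}} \right)
\approx diag(0.909,\; 0.996,\; 0.690,\; 0.965),
\]
and the diagonal entries of $\mathcal{B}W$ equal $\Lambda_i(\mathcal{B})$, while the off-diagonal absolute row sums of $\mathcal{B}W$ become approximately $5.17\leq 6$, $2.80\leq 3.75$, $0.99\leq 0.99$ and $7.62\leq 9$, so that $\mathcal{B}W$ is indeed a diagonally dominant $Z$ tensor.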
Now we want to find a class of tensors that will contain Nekrasov $Z$ tensors with positive diagonal elements. To do so we use a decomposition of tensors which will be useful in our discussion.\\
\noindent Given a tensor $\mathcal{T}=(t_{i_1 i_2 ... i_m})\in \mathbb{R}^{[m,n]},$ we can write $\mathcal{T}$ as
\begin{equation}\label{decomposition equation}
\mathcal{T} = \mathcal{B}^+ + \mathcal{C},
\end{equation}
where, for all $i\in [n],$ the elements of the $i$th row subtensors of $\mathcal{B}^+$ and $\mathcal{C}$ are given by
\begin{equation}\label{law for B+}
(R_i(\mathcal{B}^+))_{i_2 \cdots i_m} = t_{i i_2 ... i_m} - r_i^+, \;\forall \; i_l\in [n], \; l\in [m]
\end{equation}
and
\begin{equation}\label{law for C}
(R_i(\mathcal{C}))_{i_2 \cdots i_m} =r_i^+ , \;\forall \; i_l\in [n], \; l\in [m],
\end{equation}
with
\begin{equation}\label{law for r_i^+}
r_i^+ =\max_{i_2, ..., i_m \in [n], \; ( i_2 \cdots i_m) \neq (i \cdots i)} \{ 0, t_{i i_2 ... i_m} \}.
\end{equation}
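\noindent Note that if $\mathcal{T}$ is itself a $Z$ tensor, then every off-diagonal entry is nonpositive, so $r_i^+ =0$ for all $i\in[n]$ and the decomposition is trivial: $\mathcal{B}^+=\mathcal{T}$ and $\mathcal{C}=\mathcal{O}$. In particular, this is the case for the tensor $\mathcal{B}$ of Example \ref{example of nekrasov Z tensor}; the decomposition becomes nontrivial only when $\mathcal{T}$ has positive off-diagonal entries.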
\begin{prop}
Let $\mathcal{T}=(t_{i_1 i_2 ... i_m})\in \mathbb{R}^{[m,n]}.$ If $\mathcal{T}$ is decomposed as given in the equation (\ref{decomposition equation}), then $\mathcal{B}^+$ is a $Z$ tensor and $\mathcal{C}$ is nonnegative tensor.
\end{prop}
\begin{proof}
Firstly, by the definition of the tensor $\mathcal{B}^+$ we have $b_{i_1 i_2 ... i_m} = t_{i_1 i_2 ... i_m} - r_{i_1}^+,$ for all $i_1,i_2, ... ,i_m \in [n].$
Now from the definition of $r_{i_1}^+$ given by equation (\ref{law for r_i^+}) for all $i_1, i_2, ... ,i_m \in [n],$ we write
\begin{align*}
b_{i_1 i_2 ... i_m} &= t_{i_1 i_2 ... i_m} - r_{i_1}^+\\
&= t_{i_1 i_2 ... i_m} - \max_{i_2, ..., i_m \in [n], \; ( i_2 \cdots i_m) \neq (i_1 \cdots i_1)} \{ 0, t_{i_1 i_2 ... i_m} \} \\
&\leq 0, \text{ when } ( i_2 \cdots i_m) \neq (i_1 \cdots i_1).
\end{align*}
Therefore all off diagonal elements of the tensor $\mathcal{B}^+$ are nonpositive. Hence the tensor $\mathcal{B}^+$ is a $Z$ tensor.
Secondly, observe that the elements of the $i$th row subtensor of $\mathcal{C}$ are all equal to $r_i^+,$ $\forall \;i\in[n].$ Now by equation (\ref{law for r_i^+}) we write
\begin{equation}gin{equation*}
r_i^+ =\max_{i_2, ..., i_m \in [n], \; ( i_2 \cdots i_m) \neq (i \cdots i)} \{ 0, t_{i i_2 ... i_m} \}\geq 0.
\end{equation*}
This implies $\mathcal{C}$ is a nonnegative tensor.
\end{proof}
\begin{remk}
If the tensor $\mathcal{B}^+$ is a Nekrasov tensor then $\mathcal{B}^+$ becomes a Nekrasov $Z$ tensor.
\end{remk}
\begin{remk}
The class of tensors $\mathcal{T}$ that can be expressed as $\mathcal{T} = \mathcal{B}^+ + \mathcal{C},$ where $\mathcal{B}^+$ and $\mathcal{C}$ are constructed by (\ref{law for B+}), (\ref{law for C}) and (\ref{law for r_i^+}) and where $\mathcal{B}^+$ is a Nekrasov tensor, contains the class of Nekrasov $Z$ tensors.
\end{remk}
\begin{remk}
The $P$ tensors play a crucial role in tensor complementarity theory. We know that the SOL$(\mathcal{T}, q)$ is nonempty and compact if $\mathcal{T}$ is a $P$ tensor. Here we show that $\mathcal{T},$ a Nekrasov $Z$ tensor of even order which has positive diagonal elements, is a $P$-tensor. Hence we conclude that the solution set of TCP$(\mathcal{T}, q)$ is nonempty and compact.
\end{remk}
\section{Conclusion}
In this article, we introduce the Nekrasov $Z$ tensor. For a Nekrasov $Z$ tensor $\mathcal{T}$ we construct a diagonal matrix $W$ such that $\mathcal{T}W$ is a diagonally dominant tensor. We prove that the Nekrasov $Z$ tensors of even order with positive diagonal elements are contained in the class of $P$-tensors.
\section{Acknowledgment}
The author R. Deb acknowledges CSIR, India, JRF scheme for financial support.
\end{document}
\begin{document}
\title{\Large \bf
Conservation of energy and momenta \\
in nonholonomic systems with affine constraints\footnote{
This work is part of the research projects {\it
Symmetries and integrability of nonholonomic
mechanical systems} of the University of Padova and PRIN {\it Teorie
geometriche e analitiche dei sistemi Hamiltoniani in dimensioni finite
e infinite}.}
}
\author{\sc Francesco Fass\`o\footnote{\footnotesize Universit\`a di Padova,
Dipartimento di Matematica, Via Trieste 63, 35121 Padova, Italy.
Email: {\tt [email protected]} }
\ and
Nicola Sansonetto\footnote{\footnotesize Universit\`a di Padova,
Dipartimento di Matematica, Via Trieste 63, 35121 Padova, Italy.
Email: {\tt [email protected]} }\ \footnote
{Supported by the Research Project
{\it Symmetries and integrability of nonholonomic
mechanical systems} of the University of Padova.}
}
\date{}
\maketitle
\centerline{\small (\today)}
{\small
\begin{abstract}
\noindent
We characterize the conditions for the conservation of
the energy and of the components of the momentum maps of lifted actions, and
of their `gauge-like' generalizations, in
time-independent nonholonomic mechanical systems with affine
constraints. These conditions involve geometrical and mechanical
properties of the system, and are codified in the so-called
reaction-annihilator distribution.
\vskip3mm
{\scriptsize
\noindent
{\bf Keywords:} Nonholonomic mechanical systems, Conservation of
energy, Reaction-annihilator \break
distribution, Gauge momenta, Nonholonomic Noether theorem.
\vskip1mm
\noindent
{\bf MSC:} 70F25, 37J60, 37J15, 70E18
}
\end{abstract}
}
\section{Introduction}
In this paper we study the conservation of the energy and of the
components of the momentum maps of lifted actions---and of their
`gauge' generalizations introduced in \cite{BGM}---in time-independent
mechanical
systems with ideal nonholonomic constraints that are affine
functions of the velocities. The conservation of both types of
functions is affected by the reaction forces exerted by the
nonholonomic constraint, but there is a difference between them. The
conservation of energy depends on certain properties of the
nonhomogeneous term of the constraint distribution, and thus differs
from the case of nonholonomic systems with constraints that are
linear functions of the velocities. Instead, the conservation of
momenta and gauge momenta is essentially the same as that in the case
of linear constraints, which has been extensively studied
\cite{bates-sniatycki,marle95,FRS,FGS2008}. Therefore, we will
mainly focus on the conservation of energy. (If the Lagrangian is not
quadratic kinetic energy minus positional potential energy, then the
function we call energy is, properly, the Jacobi integral).
It is well known that the energy is conserved in nonholonomic
mechanical systems with constraints that are linear in the velocities
\cite{pars, NF}, and that it is not always conserved if the
constraints are affine, or more generally nonlinear, in the
velocities. More than specifically in the case of affine constraints,
the issue of energy conservation has received extensive consideration
in the case of general nonlinear constraints \cite{BKMM, spagnoli,
sniatycki98, marle2003,
kobayashi-oliva2003,BM2008,bates-jim,zampieri}. The equation of
balance of the energy shows that the energy is conserved if and only
if the reaction forces exerted by the constraint do not do any work
on the constrained motions. The quoted references deduce from this
some sufficient conditions for the conservation of energy, such as
the tangency of the Liouville vector field to the constraint manifold
(as e.g. in \cite{marle2003,spagnoli}) or the fact that the
constraint is a homogeneous function of the velocities (as e.g. in
\cite{BM2008,bates-jim}). However, when particularized to the case of
affine constraints, as e.g. in \cite{marle2003,
kobayashi-oliva2003}, all these sufficient conditions reduce to the
linearity of the constraints. It seems, therefore, that conservation
of energy for nonholonomic systems with affine constraints is
presently not understood. The main purpose of this paper is to remedy
this lacuna, by identifying the properties of a nonholonomic
mechanical system with affine constraints that determine the
conservation of its energy.
At the basis of our approach is the fact that, under the hypothesis of
ideality of the constraints (namely, d'Alembert principle), the
reaction force that the constraint exerts on the system is a known
function of the kinematic state of the system. This function depends
on properties of the system that are both of geometric nature (the
nonholonomic constraint) and of mechanical nature (the mass
distribution and the active forces\footnote{By `active forces' we
mean the forces that act on the system and are not reaction forces;
in part of the literature they are called `external forces'.}). The
inspection of this function reveals that, for a given system, the set
of all reaction forces exerted by the constraints on the constrained
motions might be (and typically is) {\it smaller} than the set of all
reaction forces that satisfy the condition of ideality. The reason is
that for a given system the active forces that act on the system are
fixed, while the notion of ideality makes reference to all possible
active forces that might possibly act on the system (see \cite{FS2009}
for a discussion of this fact). Consequently, any vector that
annihilates the linear part of the constraint may be an ideal
reaction force, but for a given system, the class of reaction forces
actually exerted by the constraint may be a subset of this
annihilator.
Therefore, the properties of a nonholonomic system that are
influenced by the reaction forces exerted on constrained motions may
depend, in a complicated way, on the geometric and mechanical
properties of the system, including the active forces. These
properties can be codified by a distribution on the configuration
manifold, which is called the {\it reaction-annihilator
distribution}. This distribution was introduced in \cite{FRS} in
connection with the conservation of the momentum map of lifted
actions in nonholonomic systems with linear constraints, and was
further used in \cite{FGS2008,FGS2009,FS2009, crampin,FGS2012,jotz2}.
We will show that a necessary and sufficient condition for energy
conservation in nonholonomic mechanical systems with affine
constraints is that the nonhomogeneous term of the constraint is a
section of the reaction-annihilator distribution. This clarifies in a
quantitative, computable way why energy conservation is not a
property of purely geometric type and, in particular, how it depends
on the active forces that act on the system: changing just the
(conservative) active forces that act on a given system (same
constraints, same kinetic energy) may destroy, or restore, the
conservation of energy. We will illustrate these behaviours on some
examples.
We recall the basic facts about nonholonomic mechanical systems with
affine constraints, and introduce the reaction-annihilator
distribution for these systems, in Section~2. Energy conservation is
studied in Section~3 and the conservation of momenta and gauge
momenta of lifted actions is concisely studied in Section 4. A short
Conclusion follows, where we stress the importance of exploiting the
knowledge of the reaction forces in the study of nonholonomic
systems. In the Appendix we derive the expression of the reaction
force as function of the kinematic state.
Throughout the paper all manifolds and maps are smooth and all vector
fields are assumed to be complete. For simplicity we restrict our
consideration to time-independent systems. For introductions to nonholonomic
mechanics see e.g. \hbox{\cite{pars,NF, pagani91, cortes,
marle2003,benenti,CDS}}.
Lastly, we mention that in certain nonholonomic mechanical systems
with affine constraints,
even if the energy is not conserved, there may exist
a modification of it (which
may be interpreted as the energy of the system in a moving reference
frame and has therefore been called a `moving energy') that is
conserved \cite{FS2015}.
\section{Affine constraints and the reaction-annihilator distribution}
\subsection{Nonholonomic systems with ideal affine constraints}
Since affine constraints appear typically
in problems of rigid bodies that roll on moving surfaces, it is
appropriate to work on a phase space that is a manifold. However, for
simplicity we will resort wherever possible to a coordinate
description.\footnote{In the sequel, symbols with a hat denote global
objects and the same symbols without the hat their local,
coordinate representatives.} Moreover, because of the possible
presence of moving holonomic constraints in this type of systems, it
is natural to allow for the presence of gyrostatic terms in the
Lagrangian, that may come either from the use of non-inertial frames
or from the use of moving coordinates. We assume however that the
system is time-independent, as typically happens if the bodies and the
surface have suitable symmetries and the latter moves at uniform
speed.
Our starting point is thus a Lagrangian system with $n$-dimensional
configuration manifold $\mf{Q}$ and Lagrangian
$\mf{L}:T\mf{Q}\to\bR{}$, that describes a mechanical system
subject to ideal holonomic constraints. We assume that the
Lagrangian has the mechanical form
\begin{equation}\label{hatL}
\mf{L}=\mf{T}-\mf{b} - \mf{V}\circ\pi
\end{equation}
where $\mf{T}$ is a positive definite quadratic form on $T\mf{Q}$,
$\mf{b}$ is a 1-form on $\mf{Q}$ regarded as a function on $T\mf{Q}$,
$\mf V$ a function on $\mf Q$ and $\pi: T\mf{Q} \to \mf{Q}$ is the tangent
bundle projection. Following e.g. \cite{marle2003} we write Lagrange
equations as $[\mf L]=0$, where $[\mf L]$ may be regarded as a
1-form on $\mf Q$, whose coordinate expression is the well known
$\frac d{dt}\der L {\dot q} -\der L q$.
We add now the nonholonomic constraint that, at each point $\mf{q}\in
\mf{Q}$, the velocities of the system belong to an affine subspace
$\mf{\mathcal{M}}_{\mf{q}}$ of the tangent space $T_{\mf q}\mf{Q}$.
Specifically, we assume that there are a nonintegrable distribution
$\mf{\mathcal{D}}$ on $\mf{Q}$ of constant rank $r$, with $1<r<n$, and a
vector field $\mf{\xi}$ on $\mf{Q}$ such that, at each point
$\mf{q}\in \mf{Q}$,
$$
\mf{\mathcal{M}}_{\mf q}=\mf{\xi}(\mf{q})+\mf{\mathcal{D}}_{\mf q} \,.
$$
Clearly, the vector field $\mf{\xi}$ is defined up to a section
of~$\mf{\mathcal{D}}$. The affine distribution $\mf{\mathcal{M}}$ with fibers
$\mf{\mathcal{M}}_{\mf q}$ may also be regarded as a submanifold
$\mf{M}\subset T\mf{Q}$ of dimension $n+r$, which is actually an
affine subbundle of $T\mf{Q}$ of rank $r$, and is called the {\it
constraint manifold}. The case of linear constraints is recovered
when the vector field $\mf{\xi}$ is a section of the distribution
$\mf{\mathcal{D}}$, since then $\mf{\mathcal{M}}=\mf{\mathcal{D}}$.
We assume that the nonholonomic constraint is `ideal', namely, that it
satisfies d'Alembert principle. This means that, when the system is
in a configuration $\mf{q}\in \mf{Q}$, the set of reaction forces
that the nonholonomic constraint is capable of exerting coincides
with the annihilator $\mf{\mathcal{D}}_{\mf q}^\circ$ of $\mf{\mathcal{D}}_{\mf q}$
(see e.g. \cite{pagani91,marle2003}). Under this hypothesis there is
a unique function $\mf{R}_{\mf{L},\mf{M}}:\mf{M}\to\mf{\mathcal{D}}^\circ$,
namely a function that associates an ideal reaction force $\mf
R_{\mf L,\mf M}$ to each constrained kinematic state
$\mf{v}_{\mf q}\in \mf{M}$, which has the property that the restriction to
$\mf{M}$ of Lagrange equations with the reaction forces,
\begin{equation}\label{EqLagrWithRF}
[\mf{L}]\big|_{\mf M} = \mf{R}_{\mf{L},\mf{M}} \,,
\end{equation}
defines a dynamical system on $\mf M$ (that is, a vector field on
$\mf{M}$). For completeness, we give a proof of this fact in the
Appendix.
\begin{definition} Assume that $\mf L:T\mf Q\to\bR{}$ is as in
(\ref{hatL}) and that $\mf M$ is an affine subbundle of $T\mf Q$.
The {\rm nonholonomic mechanical system with affine
constraints} $(\mf{L},\mf{Q},\mf{M})$ is the
dynamical system defined by equation (\ref{EqLagrWithRF})
on $\mf{M}$.
\end{definition}
\subsection{Coordinate description}
We consider now a system of local coordinates $q$ on $\mf{Q}$, with
domain $Q\subseteq\bR n$, and lift them to bundle coordinates
$(q,\dot q)\in Q\times\bR n$ in $T\mf{Q}$. We write the local
representative of the Lagrangian
$\mf{L}=\mf{T}-\mf{b}-\mf{V}\circ\pi$ as
\begin{equation}\label{L}
L(q,\dot q) = \frac12\dot q\cdot
A(q)\dot q-b(q)\cdot \dot q - V(q)
\end{equation}
with $A(q)$ an $n\times n$
symmetric nonsingular matrix and $b(q)\in\bR n$.
The fibers of the local representative $\mathcal{D}$ of the distribution
$\mf{\mathcal{D}}$ can be described as the kernel of a $q$-dependent $k\times
n$ matrix $S(q)$ that has everywhere rank $k$, with $k=n-r$:
$$
\mathcal{D}_q=\{\dot q\in T_qQ =\bR n\,:\; S(q)\dot q=0\} \,.
$$
The matrix $S$ is not uniquely defined. However, if $S_1$ and $S_2$
are any two possible choices of it, then from $\ker S_1 = \ker S_2$
it follows that there exists a $q$-dependent $k\times k$ nonsingular
matrix $P(q)$ such that $S_2 = P S_1$.
Let now $\mathcal{M}$ be the local representative of $\mf{\mathcal{M}}$, $M\subset
Q\times\bR n$ that of $\mf M$ and $\xi:Q\to\bR n$ that of $\mf{\xi}$.
Then $\dot q\in \mathcal{M}_q$ if and only if $\dot q= \xi(q)+u$ for some
$u\in\ker S(q)$, that is, if and only if $S(q)[\dot q-\xi(q)]=0$.
Thus
$$
M = \big\{ (q,\dot q)\in Q\times \bR n \,:\, S(q)\dot q + s(q) = 0
\big\}
$$
with
$$
s(q) = -S(q) \xi(q) \in \bR{k} \,.
$$
Note that $s$ is independent of the arbitrariness in the choice of the
component of $\mf{\xi}$ along $\mf{\mathcal{D}}$, that is of the component of
$\xi$ along $\ker S$. However, $s$ depends on the choice of the matrix
$S$: if $s_1$ and $s_2$ are relative to two matrices
$S_1$ and $S_2=P S_1$, then $s_2 =P s_1$.
In coordinates, the equations of motion (\ref{EqLagrWithRF})
of the nonholonomic mechanical system $(\mf{L},\mf{Q},\mf{M})$ are
\begin{equation}\label{eq:LagrEq}
\Big( \frac{d}{dt}\frac{ \partial L }{\partial \dot q}
-
\frac{\partial L}{\partial q}\Big) \Big|_M \, = \, R_{L,M}
\end{equation}
where $R_{L,M}(q,\dot q)$ is the local representative of
$\mf{R}_{\mf{L},\mf{M}}$. As shown in the Appendix, $R_{L,M}$ equals
the restriction to $M$ of the function
\begin{equation}\label{eq:RF1}
S^T(SA^{-1}S^T)^{-1} ( SA^{-1} \ell - \sigma )
\end{equation}
where $\ell\in\bR n$ and $\sigma\in\bR k$ have components\footnote{We use
everywhere the convention of summation over repeated indexes.}
\begin{equation}\label{eq:RF2}
\ell_i = \dder L {\dot q_i}{q_j}\dot q_j
- \der L {q_i}
\,,\qquad
\sigma_a =
\der{S_{ai}}{q_j} \dot q_i\dot q_j +
\der{s_{a}}{q_j} \dot q_j
\end{equation}
with $i,j,h=1,\ldots,n$ and $a = 1,\ldots, k$. Note that
$$
\ell=\alpha+\beta+V'
$$
with
$
\alpha_i \, = \,
\left( \frac{\partial A_{ij}}{\partial q_h}
-\frac12 \frac{\partial A_{jh}}{\partial q_i}\right)
\dot q_j\dot q_h
$,
$
\beta_i=\Big( \frac{\partial b_j}{\partial q_i}
-\frac{\partial b_i}{\partial q_j} \Big)\dot q_j
$ and
$V'_i=\der V{q_i}$.
In the case of linear constraints, expression (\ref{eq:RF1}) or
analogous expressions are given in \cite{ago,FRS,benenti}; in
the Appendix, besides considering the affine case, we complement these
treatments with a global perspective.
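It may be worth noting explicitly that the right-hand side of (\ref{eq:RF1}) takes values in $\mathrm{range}\,S(q)^T=\mathcal{D}_q^\circ$: writing it as $S^T\lambda$ with $\lambda=(SA^{-1}S^T)^{-1}(SA^{-1}\ell-\sigma)\in\bR k$, one has
$$
R_{L,M}(q,\dot q)\cdot u \;=\; \lambda\cdot S(q)\,u \;=\;0
\qquad \forall\, u\in\ker S(q)=\mathcal{D}_q \,,
$$
an elementary check that is used repeatedly in the sequel (for instance in the proof of Proposition~\ref{thm1}).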
\begin{remark}\rm
The restriction to $M$ of the function (\ref{eq:RF1}) is independent
of the choice of $S$ and $s$, that is, under the replacement of $S,s$
by $P S,P s$. (This change produces an extra term in $\sigma$, which
however vanishes on $M$).
\end{remark}
\subsection{The reaction-annihilator distribution}
While the condition of ideality assumes that, at each point $\mf
q\in\mf Q$, the constraint can---a priori---exert all reaction
forces that lie in $\mf{\mathcal{D}}_{\mf q}^\circ$, expression
(\ref{eq:RF1}) shows that, ordinarily, only a subset of these
possible reaction forces is actually exerted in the motions of the
system. In fact, in coordinates, $\mf{\mathcal{D}}_{\mf q}^\circ$ is the
orthogonal complement to $\ker S(q)$, namely the range of $S(q)^T$,
and the map
$$
S^T(SA^{-1}S^T)^{-1} (SA^{-1} \ell - \sigma )
\big|_{M_q} :M_q\to\mathrm{range} S(q)^T
$$
might not be surjective.
Specifically, the reaction forces that the constraint exerts, when the
system $(\mf{L},\mf{Q},\mf{M})$ is in a configuration $\mf{q}\in
\mf{Q}$ with any possible
velocity $\mf{v}_{\mf q} \in \mf{\mathcal{M}}_{\mf q}$, are the elements of the set
\[
\mf{\mathcal{R}}_{\mf q}:=
\big\{ \mf{R}_{\mf{L},\mf{M}}(\mf{v}_{\mf q}) \;:\;
\mf{v}_{\mf q}\in \mf{\mathcal{M}}_{\mf q} \big\}
\]
and this set may be (and typically is) a proper subset of
$\mf{\mathcal{D}}_{\mf q}^\circ$. This point of view was taken in \cite{FRS}
and leads to the following
\begin{definition}
The {\rm reaction-annihilator distribution} $\mf{\mathcal{R}}^\circ$ of a
nonholonomic mechanical system with affine constraints
$(\mf{L},\mf{Q},\mf{M})$ is the (possibly non-smooth and of
non-constant rank) distribution on $\mf{Q}$ whose fiber
$\mf{\mathcal{R}}^\circ_{\mf q}$ at $\mf{q}\in \mf{Q}$ is the annihilator of
$\mf{\mathcal{R}}_{\mf q}$.
\end{definition}
The interest of this distribution is not so much geometric as
mechanical: a vector field $\mf{Z}$ on $\mf{Q}$ is a section of
$\mf{\mathcal{R}}^\circ$ if and only if, {\it in all constrained motions} of
the system, the reaction force does zero work on it: $\langle \mf{R}_{
\mf{L},\mf{M}} (\mf v_{\mf q}) ,\mf Z(\mf q) \rangle =0$ for all
$\mf{v}_{\mf q}\in \mf{M}$. This is a system-dependent condition, which is
weaker than being a section of $\mf{\mathcal{D}}$ because
$$
\mf{\mathcal{D}}_{\mf q} \subseteq \mf{\mathcal{R}}_{\mf q}^\circ
\qquad \forall \, \mf{q} \in \mf{Q}
\,.
$$
Expression (\ref{eq:RF1}) of the reaction forces shows that
$\mf{\mathcal{R}}^\circ$ is computable a priori, without knowing
the motions of the system. Moreover, this expression shows how
$\mf{\mathcal{R}}^\circ$ depends on the geometry of the nonholonomic constraints
(through the matrix $S$ and the vector~$s$) and on the mass distribution
of the system and on the active forces that act on it (through the
Lagrangian).\footnote{$\mf{\mathcal{R}}^\circ$ depends of course also on the
holonomic constraint, through the dependence of $L$ on $(q,\dot q)$.}
Examples of reaction-annihilator distributions $\mf{\mathcal{R}}^\circ$, that show
that their fibers may actually be larger than those of $\mf{\mathcal{D}}$,
are given in \cite{FRS,FGS2008} for systems with linear constraints
and in sections 3.2 and 4.3 below for the case of affine
constraints. For a discussion of the relation between $\mf{\mathcal{R}}^\circ$
and d'Alembert principle see \cite{FS2009}.
\section{Conservation of energy in systems with affine constraints}
\subsection{Characterization of the conditions for energy conservation}
We recall that the {\it energy}, or more exactly the {\it Jacobi
integral}, of a Lagrangian $\mf{L}$ is the function
$$
\mf{E}_{\mf L}(\mf{v}) := \langle \mf{p}_{\mf L}, \mf{v} \rangle
-
\mf{L}(\mf v)
$$
where $\mf{p}_{\mf L}$ is the momentum 1-form relative
to $\mf{L}$, namely $\mf{p}_{\mf L} =\mathbf {F} L$
with $\mathbf F$ the fiber derivative (as defined, e.g., in
\cite{abraham-marsden}). In coordinates, $p_L=\der L{\dot q}$ and, if
$L$ is as in (\ref{L}), $p_L(q,\dot q)=A(q)\dot q - b(q)$.
If $\mf L$ is as in (\ref{hatL}) then $\mf{E}_{\mf L} =
\mf{T}+\mf{V}\circ\pi$. The function $\mf{E}_{\mf L}$ can be
properly interpreted as the mechanical energy of the system only if
$\mf{L}=\mf{T}-\mf{V}\circ\pi$ but, as is customary, we will call it
energy in all cases. For a Lagrangian system, $\mf{E}_{\mf L}$ is a
first integral if and only if, as we do assume here, $\mf{L}$ is
independent of time.
\begin{definition}
{\it The {\rm energy} $\mf{E}_{\mf{L},\mf{M}}$ of the nonholonomic
mechanical system with affine constraints $(\mf{L},\mf{Q},\mf{M})$
is the restriction of $\mf{E}_{\mf L}$ to the constraint manifold
$\mf{M}$.
}\end{definition}
\begin{proposition}\label{thm1}
For a nonholonomic mechanical system with affine constraints
$(\mf{L},\mf{Q},\mf{M})$ with constraint distribution
$\mf{\mathcal{D}}+\mf{\xi}$, the energy $\mf{E}_{\mf{L},\mf{M}}$ is a first
integral if and only if $\mf{\xi}$ is a section of $\mf{\mathcal{R}}^\circ$.
\end{proposition}
\begin{proof} The proof can be given in coordinates.
By (\ref{eq:LagrEq}), along any curve $t\mapsto (q_t,\dot q_t)\in M$,
\begin{equation}
\label{sec:lie-jacobi}
\frac d{dt} E_{L,M}(q_t,\dot q_t)
=
\dot q_t\cdot
\left( \frac{d}{dt}\frac{\partial L}{\partial \dot q}
- \frac{\partial L}{\partial q}\right) (q_t,\dot q_t)
=
\dot q_t\cdot R_{L,M} (q_t,\dot q_t) \,.
\end{equation}
If $\dot q\in\mathcal{M}_q$, namely $\dot q =u+\xi(q)$ with $u\in \mathcal{D}_q$, then
$$
R_{L,M} (q,\dot q)\cdot \, \dot q = R_{L,M} (q,\dot q) \cdot \, \xi(q)
$$
given that $R_{L,M} $ is ideal and hence annihilates $\mathcal{D}_q$.
It follows that $E_{L,M}$ is a first integral if and only if
$R_{L,M} (q,\dot q) \cdot \xi(q) = 0$ for all $q\in Q$, $\dot q\in \mathcal{M}_q$,
that is $\xi(q)\in \mathcal{R}^\circ_q$ for all~$q\in Q$.
\end{proof}
This shows that energy conservation is not a universal property of
nonholonomic mechanical systems with affine constraints. In particular, as we
have already stressed, it depends on the active forces that act on the
system.
In this respect we note that
at each point $\mf q\in \mf Q$, the
intersection of the fibers at $\mf q$ of the distributions $\mf\mathcal{R}^\circ$
relative to all functions $\mf V:\mf Q\to\bR{}$ equals $\mf\mathcal{D}_{\mf q}$. (In
fact, in expression (\ref{eq:RF1}), $\ell=\alpha+\beta+V'$ and the
matrix $S^T\big(SA^{-1}S^T\big)^{-1}SA^{-1}$ is, at each point~$q$,
the $A(q)^{-1}$-orthogonal projector onto $\mathcal{D}_q^\circ$).
Thus, it follows from Proposition 1 that, given $\mf{T}$ and
$\mf{M}$, $\mf{E}_{\mf{L},\mf{M}}$ is a first integral in {\it all}
nonholonomic mechanical systems of the class
$(\mf{L}=\mf{T}-\mf{V}\circ\pi,\mf{Q},\mf{M})$, with {any} $\mf{V}:\mf
Q\to\bR{}$, if and only if the constraint is linear. This fact, which
can be generalized to allow for the presence of gyrostatic terms in
the Lagrangian, is a particular case of a result by
\cite{terra-kobayashi} for general nonlinear constraints (with
linearity replaced by homogeneity).
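For completeness, here is the elementary check behind the projector property invoked above. Setting $P:=S^T\big(SA^{-1}S^T\big)^{-1}SA^{-1}$, one has
$$
P^2=P\,,\qquad
P\,S^T\mu=S^T\mu \quad\forall\,\mu\in\bR k\,,\qquad
\ker P=\{w\in\bR n :\, SA^{-1}w=0\}=A\,\mathcal{D}_q\,,
$$
and for $v=S^T\mu$ and $w\in\ker P$ one has $v\cdot A^{-1}w=\mu\cdot SA^{-1}w=0$; hence $P$ is the projector onto $\mathrm{range}\,S^T=\mathcal{D}_q^\circ$ which is orthogonal with respect to the metric defined by $A(q)^{-1}$. In particular, as $\mf V$ varies over all functions on $\mf Q$, the values of (\ref{eq:RF1}) at a given point cover all of $\mathcal{D}_q^\circ$, which is the fact used above.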
\begin{remark} {\rm
The result in Proposition \ref{thm1} does not depend on the choice of
$\xi$, which is defined up to the addition of a section of $\mathcal{D}$,
because $\mathcal{D}\subseteq \mathcal{R}^\circ$.
}\end{remark}
\subsection{Examples}
{\it 1. An affine nonholonomic particle. }
In order to illustrate the dependency of energy conservation on the
active forces we consider an affine version of Pars' nonholonomic
particle \cite{pars}. This system has configuration manifold $\bR3\ni
q= (x,y,z)$, Lagrangian
$$
L(q,\dot q) = \frac12\|\dot q\|^2 - V(q)
$$
with a potential energy $V$ for which we will make different choices,
and constraint
$$
\dot z +x\dot y -y\dot x-c =0
$$
with $c$ a nonzero real number. Thus $n=3$ and $k=1$,
$\xi=c\partial_z$ and
the distribution $\mathcal{D}$ is spanned by the two vector fields
$\partial_x+y\partial_z$ and $x\partial_x+y\partial_y$, which are
linearly independent except where $y=0$; we disregard
these points and restrict the configuration manifold to
$Q=\bR3\setminus\{y=0\}$.
Note that not only is $\xi$ not a section of $\mathcal{D}$ but, after the
restriction to~$Q$,
\begin{equation}\label{xi-not-in-D}
\xi(q)\notin\mathcal{D}_q \qquad \forall \, q\in Q \,.
\end{equation}
The constraint manifold $M$ is diffeomorphic to $Q\times\bR2$
and has global coordinates $(x,y,z,\dot x,\dot y)$. We may take
$S(x,y,z) = ( -y,x,1)$ and $s=-c$.
Due to its low dimensionality, this example offers little variety of
behaviours. Specifically, the distribution $\mathcal{R}^\circ$ depends on the
function $V$ but, since its fibers contain those of $\mathcal{D}$, that have
dimension 2, there are only two possibilities: at a point $\bar q$,
either $\mathcal{R}^\circ_{\bar q}= T_{\bar q}Q=\bR3$ or $\mathcal{R}^\circ_{\bar
q}=\mathcal{D}_{\bar q}$. The former possibility is realized if
$R_{L,M}(\bar q,\dot q)=0$ for all $\dot q\in \bR3$ and the second if
$R_{L,M}(\bar q,\dot q)\not=0$ for some $\dot q\in \bR3$.
At the same time, however, the low dimensionality makes
all computations straightforward, and it is
simple to find potentials $V$ that exemplify the
different possibilities. In fact $A=\mathbb{I}$ and $\alpha=\beta=0$,
so $\ell=V'$ and $\sigma=0$, and (\ref{eq:RF1}) gives
\begin{equation}
\label{esempio}
R_{L,M}
\;=\;
S^T (S S^T)^{-1} S\, V' \Big|_M
\;=\;
\frac1{1+x^2+y^2}
\begin{pmatrix}
y^2 & -xy & -y \\
-xy & x^2 & x \\
-y & x & 1
\end{pmatrix}
\, V' \Big|_M \,.
\end{equation}
Thus, the reaction forces are independent of $\dot q$. This
would make it straightforward to compute the fibers $\mathcal{R}^\circ_q$,
which are simply the orthogonal complements to $R_{L,M}(q)$ in
$\bR3$, but we need not do it because we already know what these
fibers are.
Concerning the conservation of energy, there are two cases to consider:
\begin{list}{}
{\leftmargin2em\labelwidth1.2em\labelsep.5em\itemindent0em
\topsep0.5ex\itemsep-0.2ex}
\item[1.] If $V$ is such that $R_{L,M}(q)=0$ at all points $q\in Q$,
then $\mathcal{R}^\circ=\bR3$ and energy is conserved.\footnote{That energy is
conserved if the reaction forces vanish identically is of course
obvious, for a variety of reasons. For instance, the nonholonomic
system is a subsystem of the unconstrained system with Lagrangian $L$
on $TQ$.} From (\ref{esempio}) one verifies that this situation is
encountered, e.g., with $V=0$ and $V=\frac12(x^2+y^2)$.
\item[2.] If $V$ is such that $R_{L,M}$ is not identically zero, then there
is a point $\bar q\in Q$ at which $\mathcal{R}_{\bar q}^\circ=\mathcal{D}_{\bar q}$.
Thus, by (\ref{xi-not-in-D}),
$\xi(\bar q)\notin \mathcal{R}_{\bar q}^\circ$ and so $\xi$ is not a section of
$\mathcal{R}^\circ$. Hence energy is not conserved. An example is $V=z$.
\end{list}
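For the reader's convenience, the two cases can be checked directly from (\ref{esempio}). With $S=(-y,x,1)$ the formula reduces to $R_{L,M}=\frac{S\,V'}{1+x^2+y^2}\,S^T$. For $V=\frac12(x^2+y^2)$ one has $V'=(x,y,0)$ and $S\,V'=-yx+xy=0$, so $R_{L,M}\equiv0$ (and trivially so for $V=0$). For $V=z$ one has $V'=(0,0,1)$ and $S\,V'=1$, so
$$
R_{L,M}=\frac1{1+x^2+y^2}\begin{pmatrix}-y\\ x\\ 1\end{pmatrix}\not=0\,,
\qquad
\xi\cdot R_{L,M}=\frac{c}{1+x^2+y^2}\not=0\,,
$$
so that the energy is indeed not conserved in this case.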
A richer variety of possibilities, including (a) conservation of
energy even with nonzero reaction forces, and (b) violation of the
conservation of energy even in the absence of active forces, can be
easily constructed by considering four- or five-dimensional
extensions of the nonholonomic particle, similar to that considered
in \cite{FGS2009}. However, cases (a) and (b) are met also in known
mechanical systems. For instance, in the system formed by a heavy
sphere that rolls on a rotating horizontal plane, considered e.g. in
\cite{NF}, the potential energy
of the active forces is constant but the energy is not conserved
\cite{FS2015}. An example of case (a) is the following one.
\vskip4mm
\noindent{\it 2. A sphere rolling inside a rotating cylinder. }
As a second example we consider the system formed by a
homogeneous sphere constrained to roll without sliding inside a
cylinder that rotates with constant angular velocity about
its figure axis. We assume that the sphere is acted upon by
positional forces whose potential energy is a function of the position of
the sphere's center of mass. The case of a heavy sphere inside a
vertical cylinder that is at rest is classical \cite{routh, NF}. The
case of rotating cylinder, but without active forces, is considered
in~\cite{BMK2002}.
Let $a$ be the radius of the sphere and $r+a$ the radius of the cylinder.
The holonomic system we start from
is formed by the sphere constrained to keep its center on a cylinder
$C$ of radius $r$. Fix an orthonormal frame $\Sigma=\{O;e_x,e_y,e_z\}$
with the origin $O$ on the figure axis of the cylinder and $e_z$
aligned with it. Using cylindrical coordinates
$(z,\gamma)$ relative to $\Sigma$ we identify $C$
with $\bR{}\times S^1$ and the configuration manifold is
$\mf{Q} = \bR{}\times S^1 \times \mathrm{SO}(3)\ni (z,\gamma,\mathcal{R})$,
where the matrix $\mathcal{R}$ gives the sphere's orientation.
Up to an overall factor, the Lagrangian is then
\begin{equation}\label{L-cilindro}
\mf L = \frac{1}{2}\left(r^2\dot\gamma^2+\dot z^2\right) +
\frac{I}{2}\|\omega\|^2 - V(z,\gamma)
\end{equation}
where (up to the same factor) $I$ is the moment of
inertia of the sphere and $V$ is the potential energy of the active
forces, and $\omega=(\omega_x,\omega_y,\omega_z)$ is the
angular velocity of the sphere relative to $\Sigma$ (that we think of as a
function on $T\mathrm{SO}(3)$). We add now the
nonholonomic constraint that the sphere rolls without sliding on a
cylinder, coaxial with $C$ and of radius $r+a$, that rotates with
constant angular velocity $\Omega e_z$ relative to $\Sigma$:
\begin{equation}\label{M-cilindro}
r\dot\gamma+a\omega_z-(r+a)\Omega=0
\,,\qquad
\dot z +a (\omega_x\sin\gamma- \omega_y\cos\gamma)=0 \,.
\end{equation}
These equations can be solved for $(\dot z,\dot \gamma)$ and the
constraint manifold $\mf{M}$ is thus diffeomorphic to $\bR{}\times
S^1\times T\mathrm{SO(3)}$.
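For the reader's convenience, we sketch where (\ref{M-cilindro}) comes from, under the usual rolling condition that the material point of the sphere at the contact point has the same velocity as the material point of the rotating cylinder there. With $e_\rho=(\cos\gamma,\sin\gamma,0)$ and $e_\gamma=(-\sin\gamma,\cos\gamma,0)$, the contact point is the center of the sphere plus $a\,e_\rho$, so the two velocities are
$$
r\dot\gamma\,e_\gamma+\dot z\,e_z+a\,\omega\times e_\rho
\qquad\text{and}\qquad
\Omega\,e_z\times\big((r+a)e_\rho+z\,e_z\big)=(r+a)\,\Omega\,e_\gamma \,.
$$
Since $a\,\omega\times e_\rho$ has $e_\gamma$-component $a\,\omega_z$ and $e_z$-component $a(\omega_x\sin\gamma-\omega_y\cos\gamma)$, while the $e_\rho$-components of both sides vanish, equating the two velocities gives exactly the two equations (\ref{M-cilindro}).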
As local coordinates on $\mf{Q}$ we use the cylindrical coordinates
$(z,\gamma)$ of the center of mass of the sphere and three Euler angles
$(\varphi,\psi,\theta)\in S^1\times S^1\times(0,\pi)$ that fix the
orientation of a body frame relative to $\Sigma$ (we adopt the
convention of \cite{arnold-mmmc} for the choice of these angles). The
representative of $\mf L$ is
$$
L=\frac{1}{2}\left(r^2\dot\gamma^2+\dot z^2\right) +
\frac{I}{2}(\dot\theta^2+\dot\varphi^2+\dot\psi^2 +
2\dot\varphi\dot\psi\cos\theta)
- V(z,\gamma)
$$
and the constraint (\ref{M-cilindro}) becomes
$$
r\dot\gamma+a(\dot\varphi+\dot\psi\cos\theta)
- (r+a)\, \Omega =0
\,,\qquad
\dot z+a \,
[\dot\psi\sin\theta\cos(\gamma-\varphi)+\dot\theta\sin(\gamma-\varphi)]
= 0 \,.
$$
In these coordinates the vector field $\mf\xi$ becomes the constant
vector field $\xi=\Omega(\partial_\gamma+ \partial_\varphi)$, or $\xi
= (0,\Omega,\Omega,0,0)$, and a possible choice of $S$ and $s$ is
$$
S =
\left(\begin{matrix}
0 & r & a & a \cos\theta & 0 \\
1 & 0 & 0 & a \sin\theta \cos(\gamma-\varphi)
& a \sin(\gamma-\varphi)
\end{matrix}\right)
\,,\qquad
s = (- (r+a) \,\Omega,\, 0) \,.
$$
As local coordinates on $\mf{M}$ we may use
$(z,\gamma,\varphi,\psi,\theta,\dot\varphi,\dot\psi,\dot\theta)$.
From (\ref{eq:RF1}), the reaction force is then
$$
R_{L,M}
=
\frac{I}{I+a^2} \Big(
f \,,\;
V'_\gamma \,,\;
\frac{a}{r} V'_\gamma \,,\;
af \cos(\gamma-\varphi) \sin\theta + \frac{a}{r}V'_\gamma\cos\theta \,,\;
af \sin(\gamma-\varphi)
\Big)
$$
with
$
f
=
\frac{a^2}r
\big(\dot\varphi+\dot\psi\cos\theta - \frac{a+r}a \Omega \big)
\big( \dot\theta \cos(\gamma-\varphi) -
\dot\psi \sin(\gamma-\varphi) \sin\theta \big)
+ V'_z
$.
Therefore,
$$
R_{L,M} \cdot \xi = \frac{I}{I+a^2}\,\frac{r+a}{r}\, \Omega V'_\gamma \,.
$$
This shows that, when $\Omega\not=0$ and hence $\xi\not=0$ and
the constraint is affine,
the energy is conserved if and only if $V$ depends on $z$ alone. This
includes the case of a heavy sphere that rolls inside a rotating vertical
cylinder, for which $V=gz$.
\section{Conservation of momenta and gauge momenta of lifted actions}
\subsection{Conservation of momenta}
We consider now a second problem in which the reaction forces of a
nonholonomic mechanical system with affine constraints play a
role: the conservation of the momentum map of a lifted action that
leaves the Lagrangian $\mf L$ invariant, and of its `gauge'
generalization.
This topic has been widely studied in the case of nonholonomic
mechanical systems
with linear constraints. For such systems, the momentum map is in
general not conserved, but in certain cases some of its components
are conserved. In early studies, it was pointed out that a sufficient
condition for the conservation of a component of the momentum map is
that its infinitesimal generator is `horizontal', that is, a section
of the constraint distribution (see e.g.
\cite{bates-sniatycki,marle95,BKMM}). It was later proved that the
components of the momentum map that are conserved are exactly those
whose infinitesimal generators are sections of $\mathcal{R}^\circ$ \cite{FRS}.
Our first goal here is to show that this result holds in the case of
affine constraints, too. In fact, the affine part of the constraint
plays no role in it.
Consider an action $\mf{\Psi}:G\times \mf{Q} \to \mf{Q}$ of a Lie
group $G$ on the configuration manifold $\mf{Q}$. For each
$\mf{q}\in\mf{Q}$ we write as usual
$\mf{\Psi}_{g}(\mf{q})$ for $\mf{\Psi}(g,\mf{q})$.
The tangent lift $\mf{\Psi}^{T\mf{Q}}:G\times
T\mf{Q}\to T\mf{Q}$ of the action $\mf{\Psi}$ is the action of $G$ on
$T\mf{Q}$ given by
$$
\mf{\Psi}^{T\mf{Q}}_{g}(\mf{v}_{\mf q}) = T_{\mf q}\mf{\Psi}_g
\cdot \mf{v}_{\mf q}
$$
(in coordinates, $\Psi^{TQ}_g(q,\dot q) = \big( \Psi_g(q),
\Psi'_g(q)\dot q\big)$ with $\Psi'_g = \der{\Psi_g}q$).
We denote by $\mf{Y}_\eta := \frac d{dt}\mf{\Psi}_{\exp(t\eta)}|_{t=0}$ the
infinitesimal generator relative to an element
$\eta\in\mathfrak g$, the Lie algebra of
$G$. Correspondingly, the $\eta$-component of the momentum map of
$\mf{\Psi}^{T\mf{Q}}$ is the function $\mf{J}_\eta :T\mf Q\to\bR{}$
defined as
$$
\mf{J}_\eta (\mf v_{\mf q}) :=
\langle \mf{p}_{\mf L}(\mf v_{\mf q}) , \mf{Y_\eta}(\mf q) \rangle
$$
(in coordinates, $\der L{\dot q}\cdot Y_\eta$).
The tangent lift of a vector field $\mf{Z}$ on $\mf{Q}$ is the
vector field $\mf{Z}^{T\mf{Q}}$ on $T\mf{Q}$ whose integral curves
$t\mapsto \mf{v}(t)$ are velocities of integral curves $t\mapsto
\mf{q}(t)$ of $\mf{Z}$, that is $\mf{v}(t)=\mf{Z}(\mf q(t))\in
T_{\mf{q}(t)}\mf{Q}$ (in coordinates,
$Z^{TQ}
=
Z_i\partial_{q_i} + \dot q_j \der{Z_{i}}{q_j} \partial_{\dot q_i}$).
Clearly, $\mf{Y}_\eta^{T\mf{Q}}
=\frac d{dt}\mf{\Psi}^{T\mf{Q}}_{\exp(t\eta)}|_{t=0}$.
Consider now a nonholonomic mechanical system with affine constraints
$(\mf L,\mf Q,\mf M)$ and assume that $\mf L$ is invariant
under $\mf\Psi^{T\mf Q}$,
namely $\mf{L}\circ\mf{\Psi}^{T\mf{Q}}_{g}=\mf{L}$ for all $g\in G$.
Then, we say that the function
$
\mf{J}_\eta \big|_{\mf M}
$
is the {\it momentum} of $(\mf L,\mf Q,\mf M)$ generated by $\mf
Y_\eta$.
\begin{proposition}\label{prop4} Assume that $\mf L$ is invariant
under $\mf\Psi^{T\mf Q}$. Then a momentum
is a first integral of $(\mf{L},\mf{Q},\mf{M})$ if and only if its
generator is a section of $\mf{\mathcal{R}}^\circ$.
\end{proposition}
\begin{proof} We may work in coordinates. A computation gives
$\frac d{dt} (J_\eta|_M) = Y_\eta^{TQ}(L)\big|_M + R_{L,M}\cdot Y_\eta
\big|_M$. The invariance of $L$ implies $Y_\eta^{TQ}(L)=0$. Thus,
$J_\eta|_M$ is a first integral if and only if, at each $q\in Q$,
$Y_\eta$ annihilates all reaction forces $R_{L,M}(q,\dot q)$ with
$\dot q\in \mathcal{M}_q$, that is, $Y_\eta(q)\in\mathcal{R}^\circ_q$. \end{proof}
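For the reader's convenience, the computation invoked in the proof is the following: along a motion $t\mapsto(q_t,\dot q_t)\in M$,
$$
\frac d{dt}\Big(\der L{\dot q}\cdot Y_\eta\Big)
=\Big(\frac d{dt}\der L{\dot q}\Big)\cdot Y_\eta+\der L{\dot q}\cdot\der{Y_\eta}q\,\dot q
=\Big(\der Lq+R_{L,M}\Big)\cdot Y_\eta+\der L{\dot q}\cdot\der{Y_\eta}q\,\dot q
= Y_\eta^{TQ}(L)+R_{L,M}\cdot Y_\eta \,,
$$
where the second equality uses the equations of motion (\ref{eq:LagrEq}) and the last one the coordinate expression of the tangent lift given above.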
\subsection{Conservation of gauge momenta}
It was an original idea of \cite{BGM} that, for nonholonomic
mechanical systems with linear constraints whose Lagrangian is
invariant under a lifted action, certain conserved quantities that
are not components of the momentum map may be viewed as linked to the
group by a gauge-like mechanism. This situation extends to
nonholonomic mechanical systems with affine constraints.
For a general study of this topic in systems with linear
constraints, and more information on the topic, including e.g. its
relation to the so called `momentum equation', see
\cite{FGS2008,FGS2009,FS2009,FGS2012}.
Following the terminology of \cite{FGS2009} we say that a vector field
$\mf Y$ on $\mf Q$ is a {\it gauge symmetry} of
$(\mf L,\mf Q,\mf M)$ relative to the action $\mf \Psi$ if it
is everywhere tangent to the orbits of $\mf \Psi$ and, moreover,
$$
\mf Y^{T\mf Q}(\mf L)\big|_{\mf M}=0 \,.
$$
This invariance condition of $\mf L$ is independent of the
invariance of $\mf L$ under $\mf\Psi^{T\mf Q}$, even though it implies
that $\mf V$ is $\mf\Psi$-invariant.
The {\it gauge momentum} generated by a gauge symmetry $\mf Y$ is the
function
$$
\mf J :=
\langle \mf{p}_{\mf L} , \mf{Y} \rangle \big|_{\mf M} \,.
$$
The gauge symmetry that generates a given gauge momentum need not be
unique.
\begin{proposition}
A gauge momentum is a first integral of $(\mf L,\mf Q,\mf M)$ if and
only if it is generated by a gauge symmetry which is a section of
$\mathcal{R}^\circ$.
\end{proposition}
The proof goes just as that of Proposition \ref{prop4}.
The need to consider gauge momenta generated by gauge symmetries
that are sections of $\mathcal{R}^\circ$, not only those generated by sections
of $\mathcal{D}$, is demonstrated by the example in the following section.
This is the heavy sphere that rolls inside a rotating vertical
cylinder. This system has as a first integral that depends smoothly
on the angular velocity of the cylinder and can be interpreted as a
gauge momentum. Interestingly, when the cylinder is at rest this
gauge momentum is generated by a gauge symmetry that is a section of
$\mathcal{D}$; but as soon as the cylinder rotates, the generating gauge
symmetry leaves $\mathcal{D}$ and becomes a section of $\mathcal{R}^\circ$.
\subsection{Example}
Consider a heavy sphere that rolls without sliding inside a cylinder
that rotates uniformly about its vertical axis, namely, the system of
section 3.2 with potential energy $V=gz$.
The Lagrangian $\mf L$ and the configuration
manifold $\mf Q$ are independent of $\Omega$, while the constraint
manifold $\mf M$ depends on $\Omega$ through the vector field
$\mf\xi$. We thus denote it by $\mf M_\Omega$.
It is classically known \cite{routh,NF} that, when $\Omega=0$, the system
$(\mf L,\mf Q,\mf M_0)$ has the two first integrals
$\mf F_0:=\mf F|_{\mf M_0}$ and $\mf K_0:=\mf K|_{\mf
M_0}$, where
\begin{equation}\label{FeH}
\mf F = I\omega_z -ar \dot\gamma
\,,\qquad
\mf K = a ( \omega_x \cos\gamma+ \omega_y \sin\gamma) -z\dot\gamma
\,.
\end{equation}
These two first integrals have been linked in \cite{BGM} to the action
$\mf\Psi$ of the group $G=S^1\times\mathrm{SO(3)}\ni(\zeta,S)$ on $\mf
Q\ni(z,\gamma,\mathcal{R})$ given by
$
\mf\Psi_{\zeta,S}(z,\gamma,\mathcal{R}) = (z, \gamma+\zeta, S_\zeta \mathcal{R} S)
$,
where $S_\zeta$ is the matrix of the rotation by $\zeta$ about the
third axis. This action emerges naturally in this problem because it
leaves the Lagrangian (\ref{L-cilindro}) and the constraint
(\ref{M-cilindro}) invariant.
With reference to this action, $\mf F_0$ is a momentum
generated by an infinitesimal generator that is a section of $\mf\mathcal{D}$
and $\mf K_0$ is a gauge momentum generated by a gauge symmetry which
is a section of~$\mf\mathcal{D}$, see \cite{BGM}.
When $\Omega\not=0$, the system $(\mf L,\mf Q,\mf M_\Omega)$
has the two first integrals
$$
\mf F_\Omega:=\mf F|_{\mf M_\Omega} \,,\qquad
\mf K_\Omega:=\mf K|_{\mf M_\Omega}
$$
which appear in \cite{BMK2002}. (Reference \cite{BMK2002} considers
only the case $V=0$, but these two first integrals exist for any
$V=V(z)$, see the remark below). We now show that, with reference to
the considered action $\mf \Psi$, when $\Omega\neq0$ the function
$\mf F_\Omega$ is a momentum generated by an infinitesimal generator
which is a section of $\mf\mathcal{D}$ and the function $\mf K_\Omega$ is a
gauge momentum generated by a gauge symmetry which is a section of
$\mf \mathcal{R}^\circ$, not of $\mf \mathcal{D}$. Therefore, at variance with the
case $\Omega=0$, in order to link the first integral $\mf K_\Omega$
to the group action using the gauge mechanism, when $\Omega\neq0$
it is necessary to take into account the role of the reaction forces.
To prove these assertions we pass to the local coordinates
$(z,\gamma,\varphi,\psi,\theta)$ on $\mf Q$. The
representatives $F$ of $\mf F$ and $K$ of $\mf K$ are obtained from
(\ref{FeH}) with
$\omega_x = \dot\theta\cos\varphi+\dot\psi\sin\varphi \sin\theta$,
$\omega_y = \dot\theta\sin\varphi-\dot\psi\cos\varphi \sin\theta$,
$\omega_z = \dot\varphi+\dot\psi \cos\theta$.
It follows from the analysis of section 3.2 that, in these
coordinates,
$$
\begin{aligned}
&\mathcal{D} =
\textrm{span}_\bR{}
\big\{
a\partial_\gamma - r \partial_\varphi ,
\partial_\theta - a \sin(\gamma-\varphi) \,\partial_z ,
\partial_\psi -a \cos(\gamma-\varphi) \sin\theta\,\partial_z -
\cos\theta\,\partial_\varphi
\big\}
\\
&\mathcal{R}^\circ =
\textrm{span}_\bR{}
\left\{
\partial_\gamma \,,\; \partial_\varphi \,,\;
\partial_\theta - a \sin(\gamma-\varphi) \,\partial_z \,,\;
\partial_\psi -a \cos(\gamma-\varphi) \sin\theta\,\partial_z -
\cos\theta\,\partial_\varphi
\right\} \,.
\end{aligned}
$$
The tangent spaces to the group orbits are spanned by $\partial_\gamma$
and by three infinitesimal generators of the
$\mathrm{SO(3)}$--action, e.g. the generators
\[
\begin{aligned}
&\eta_x =
\sin\varphi (\partial_\psi-\cos\theta\,\partial_\varphi)
+\cos\varphi\, \sin\theta \,\partial_\theta
\,, \qquad
\\
&\eta_y =
\cos\varphi (\partial_\psi-\cos\theta\,\partial_\varphi)
- \sin\theta \, \sin\varphi \, \partial_\theta \,,
\\
&\eta_z
=
\partial_\varphi
\end{aligned}
\]
of the components $\omega_x,\omega_y,\omega_z$ of the
$\mathrm{SO(3)}$-momentum map. The vector fields
$$
Y_F:=\eta_z-\frac ar\partial_\gamma
\,,\qquad
Y_K :=
\frac{a}{I \sin\theta} (\eta_x \cos\gamma - \eta_y\sin\gamma)
-\frac{z}{r^2} \partial_\gamma
$$
are tangent to the group orbits. $Y_F$ is an infinitesimal generator
of the $S^1\times\mathrm{SO}(3)$-action and is a section of $\mathcal{D}$. As
such it generates a conserved momentum, that equals $F_\Omega$.
$Y_K$ is instead a section of $\mathcal{R}^\circ$ and is
(the local representative of) a gauge symmetry because
$$
Y_K^{TQ}(L) =
\big(\dot z + a\omega_x\sin\gamma- a\omega_y\cos\gamma\big)
\dot\gamma
$$
vanishes on $M_\Omega$, see (\ref{M-cilindro}).
Thus, $Y_K$ generates a conserved gauge momentum,
which equals $K_\Omega$.
If we use coordinates $(z,\gamma,\varphi,\psi,\theta,\dot
\varphi,\dot\psi,\dot\theta)$ on $\mf M_\Omega$, then
$$
F_\Omega = (I+a^2)\omega_z - a (a+r) \Omega
\,,\qquad
K_\Omega = a\omega_x\cos\gamma + a\omega_y\sin\gamma +
\frac ar z\omega_z - \frac{r+a}r \Omega z \,.
$$
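These coordinate expressions follow by restricting (\ref{FeH}) to $\mf M_\Omega$: the first equation of the constraint gives $r\dot\gamma=(r+a)\Omega-a\omega_z$ on $\mf M_\Omega$, whence
$$
\begin{aligned}
&F\big|_{\mf M_\Omega}=I\omega_z-a\big[(r+a)\Omega-a\omega_z\big]=(I+a^2)\,\omega_z-a(a+r)\,\Omega\,,\\
&K\big|_{\mf M_\Omega}=a(\omega_x\cos\gamma+\omega_y\sin\gamma)-\frac zr\big[(r+a)\Omega-a\omega_z\big]\,,
\end{aligned}
$$
which coincide with the expressions above.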
It remains to prove that $K_\Omega$ is not generated by any gauge
symmetry which is a section of $\mathcal{D}$. To this end, we make the
following observations. We call {\it generator} of a gauge momentum
any vector field $Z$---not necessarily a gauge symmetry---such that
$\mf J = \langle \mf{p}_{\mf L} , \mf{Z} \rangle \big|_{\mf M}$.
Then, {\it a gauge momentum has at most one generator that is a
section of $\mathcal{D}$}. We may prove this in coordinates. If $W$ and $Z$
are the representatives of two generators of a gauge momentum $\mf J$
then $(W-Z)\cdot (A\dot q-b)|_M=0$. Equivalently, at each point $q$,
$(W-Z)\cdot (Au+A\xi-b)=0$ for all $u\in \mathcal{D}_q$. Hence $Z$ and $W$
satisfy the two conditions
$$
(W-Z)\cdot Au=0 \ \forall u\in \mathcal{D}_q
\,,\qquad
(W-Z)\cdot (A\xi-b)=0 \,.
$$
Since $A$ defines a metric, the first of these two conditions implies
that, at each point $q$, $W-Z$ is orthogonal, in this metric, to
$\mathcal{D}_q$. Hence all generators of a gauge momentum have the same
component along $\mathcal{D}$. This argument shows that, moreover, {\it if $Y$ is
a generator of a gauge momentum, then its unique generator which is a
section of $\mathcal{D}$, if it exists, is $\Pi_AY$}, where, at each point
$q$, $\Pi_A$ is the $A$-orthogonal projector onto $\mathcal{D}_q$.
Furthermore, {\it $\Pi_AY$ is a generator of the gauge momentum if and only
if $(\Pi_AY-Y)\cdot(A\xi-b)=0$.}
In our case
$
\Pi_AY_K
=
-\frac{a^2 z}{(a^2+I) r^2} \, \partial_\gamma
+
\big( \frac{a z}{(a^2+I) r}+\frac{a\, \cos \theta
\sin(\gamma-\varphi)}{ I\sin\theta} \big)\, \partial_\varphi
-
\frac{a \sin(\gamma-\varphi)}{I \sin\theta} \, \partial_\psi
+
\frac aI \cos(\gamma - \varphi) \, \partial_\theta
$
is not a generator of $K_\Omega$ since $(\Pi_AY_K-Y_K)\cdot A\xi
= \frac{I (a+r) \Omega z}{(a^2+I) r} $.
\begin{remark} \rm
We have considered the case of constant gravity,
with potential energy $V(z)=gz$, in order to make a direct comparison
with \cite{BGM}, that treats only this case. However,
it is easy to verify that the two first integrals $F_\Omega$ and
$K_\Omega$ exist, and retain their interpretations as (gauge)
momenta, with the same generators, if the sphere is acted upon by
{\it any} potential energy that depends only on $z$, that is, which
is invariant under the considered action $\mf\Psi$. This is a
`Noether-like' property, called `weak-Noetherianity' in
\cite{FGS2008}. This example thus shows that, in this respect, the
case of nonholonomic systems with affine constraints differs from
that of nonholonomic systems with linear constraints, where the only gauge momenta
with such a weakly-Noetherian property are those that have a
generator which is a section of $\mathcal{D}$ \cite{FGS2012}.
\end{remark}
\section{Conclusions}
At the core of Proposition 1 lies the balance equation of the energy
(\ref{sec:lie-jacobi}). Together with the assumption of
ideality of the constraints it implies that
conservation of energy is equivalent to
\begin{equation}\label{ultima}
\langle \mf R_{\mf L,\mf M}(\mf v_{\mf q}),\mf\xi(\mf q)\rangle = 0
\qquad \forall\, \mf v_{\mf q}\in \mf M \,,
\end{equation}
that is, to the fact that, in any constrained kinematical state, the
reaction force $\mf R$ does not do any work on the nonhomogeneous
term $\mf\xi$ of the constraint.
If the only knowledge assumed on the reaction forces is d'Alembert
principle, namely that they annihilate the distribution $\mf\mathcal{D}$, then
all that can be deduced from equation (\ref{ultima}) is that
there is conservation of energy when $\mf\xi$ is a section of $\mf\mathcal{D}$,
that is, when the constraint is linear. This point of view is quite
widespread: the very use of expressions such as ``undetermined
multipliers'' or ``unknown [reaction] forces'' indicates a
perception of the reaction forces as intrinsically unknown
objects.
The novelty of our approach is that we exploit the fact that, for a given
system, the reaction forces are known functions of the kinematic
states. This allows us to identify the cases in which the energy is
conserved even if the constraint is genuinely affine.
Very similar considerations can be made concerning the conservation
of momenta and gauge momenta.
The general message we convey, besides the specific information about
the conservation of energy and (gauge) momenta in nonholonomic
mechanical systems with
affine constraints, is that the reaction forces should be taken into
a clearer account in theoretical studies of these systems.
Particularly when analyzing differences from holonomic systems
(Noether theorem, conservation of energy, hamiltonianization,
perhaps invariant measures) it might be important to exploit the
presence of the reaction
forces in the equations of motion. From this point of view,
advancement in the comprehension of nonholonomic mechanical systems is not a purely
geometric matter and the comprehension of some dynamical aspects
might pass through a better
understanding---and perhaps attempts of classification---of the
reaction forces.
\section{Appendix: The equations of motion and the reaction forces}
We derive here the expression of the reaction forces as
functions of the kinematic states of the system for a nonholonomic
system with affine constraints.
\begin{proposition} In the hypotheses and with the notation of section
2.1:
\begin{list}{}
{\leftmargin2em\labelwidth1.2em\labelsep.5em\itemindent0em
\topsep0.5ex\itemsep-0.2ex}
\item[1.] For any $\bar v_0\in \mf M$ there exist unique
curves $\bR{}\ni t\mapsto (\mf v_t,\mf R_t) \in \mf M\times \mf \mathcal{D}^\circ$
such that $\mf v_0=\bar v_0$ and
\begin{equation}\label{EqLagr}
[\mf L](\mf v_t) = \mf R_t \qquad \forall t \,.
\end{equation}
\item[2.] There exists a function $\mf R_{\mf L,\mf M}: \mf M \to \mf
\mathcal{D}^\circ$ such that, for all $\bar v_0$, $\mf R_t=\mf R_{\mf L,\mf
M}(\mf v_t)$ $\forall t$.
\item[3.] In any system of bundle coordinates $(q,\dot
q)$ in $T\mf Q$, if $M$ is described as $S(q)\dot q+s(q)=0$, then
the representative $R_{L,M}$ of $\mf R_{\mf L,\mf M}$ is the restriction to
$M$ of the function (\ref{eq:RF1}),~(\ref{eq:RF2}).
\end{list}
\end{proposition}
\begin{proof} We first work in coordinates and then globalize the
result. The matrix $A=\dder L {\dot q}{\dot q} =
\dder {L_2} {\dot q}{\dot q}$, where $L_2$ denotes the part of $L$ that is
quadratic in the velocities, is nonsingular and independent of the
velocities.
Assume $t\mapsto (q_t,\dot q_t,R_t)$ is the local representative of a
curve in $M\times \mathcal{D}^\circ$ that satisfies (\ref{EqLagr}). Then, for
all $t$,
\begin{equation}\label{vincolo}
S(q_t)\dot q_t+s(q_t) = 0
\end{equation}
and $R_t\in\mathrm{range} S(q_t)^T$. Thus, there exists a curve
$t\mapsto \lambda_t\in\bR k$ (the Lagrange multiplier) such that
$R_t=S(q_t)^T\lambda_t$. Since $[L]=A(q)\ddot q + \ell(q,\dot q)$,
equation (\ref{EqLagr}) is
$$
A(q_t) \ddot q_t + \ell(q_t,\dot q_t) = S(q_t)^T\lambda_t
$$
and gives $\ddot q_t = A(q_t)^{-1}[S(q_t)^T\lambda_t - \ell(q_t,\dot
q_t)]$.
Inserting this expression into $S(q_t)\ddot q_t +\sigma(q_t,\dot q_t)=0$,
which follows from (\ref{vincolo}), gives
$$
S(q_t)A(q_t)^{-1}[S(q_t)^T \lambda_t - \ell(q_t,\dot q_t)]
+ \sigma(q_t,\dot q_t)=0 \,.
$$
This equation can be solved for $\lambda_t$ because the matrix
$SA^{-1}S^T$ is invertible and gives
\begin{equation}\label{lambda}
\lambda_t = [S(q_t)A(q_t)^{-1}S(q_t)^T]^{-1}
\big[ S(q_t)A(q_t)^{-1}\ell(q_t,\dot q_t) - \sigma(q_t,\dot q_t)\big]
\end{equation}
or $R_t = R_{L,M}(q_t,\dot q_t)$ with $R_{L,M}$ as stated. Together
with the independence of $R_{L,M}$ of the choice of $S$ and $s$, see
the remark at the end of section 2.2, this proves that if $\mf R_{\mf
L,\mf M}$ exists, then its local representative is $R_{L,M}$ and hence
the uniqueness of $t\mapsto (q_t,\dot q_t,R_t) \in M\times \mathcal{D}^\circ$, as in
item 1.
Consider now the equation
$$
A\ddot q + \ell = S^T(SA^{-1}S^T)^{-1} (SA^{-1} \ell - \sigma)
$$
in $\bR n$. Let $t\mapsto q_t$ be its unique solution with initial datum
$(q_0,\dot q_0)\in M$. Define $t\mapsto \lambda_t$ as in
(\ref{lambda}). With this choice of $\lambda_t$, the curve
$t\mapsto q_t$ satisfies $S\ddot q_t+\sigma(q_t,\dot q_t)=0$, or
$\frac d {dt}[S(q_t)\dot q_t+s(q_t)]=0$. Hence $(q_t,\dot q_t)\in M$
for all $t$ and satisfies equation (\ref{EqLagr}) with
$R_t=S(q_t)^T\lambda_t\in \mathcal{D}^\circ_{q_t}$. By d'Alembert principle,
such an $R_t$ is the representative of a reaction force that the
nonholonomic constraint can exert.
This proves a local version of items 1. and 2., which
hold true within each coordinate chart.
We now globalize these results. To this end we verify that the local
representatives of equation (\ref{EqLagr}), with reaction forces that
have the expression (\ref{eq:RF1}),
match in the intersection of different chart domains. Let $\tilde q\mapsto
q=\mathcal{C}(\tilde q)$ be a change of coordinates in $Q$. The local
representative of the affine subbundle $M$ in the new coordinates is still
given by an equation of the form $\tilde S(\tilde q)\dot{\tilde q} +
\tilde s(\tilde q)=0$ and a simple computation shows that
$$
\tilde S = \tilde P\,[S\circ\mathcal{C}]\, \mathcal{C}' \,,\qquad
\tilde s = \tilde P\, [s\circ\mathcal{C}]
$$
where $\mathcal{C}'$ is the Jacobian matrix of $\mathcal{C}$ and $\tilde P=\tilde
P(\tilde q)$ is a $k\times k$ nonsingular matrix. The local
representative $\tilde L$ of the Lagrangian in the new coordinates
is $\tilde L(\tilde q,\dot{\tilde q}) = L(\mathcal{C}(\tilde q),\mathcal{C}'(\tilde
q)\dot{\tilde q})$ and the matrix
$
\tilde A
:=
\dder {\tilde L}{\dot{\tilde q}}{\dot{\tilde q}}
$ is given by
$$
\tilde A = {\mathcal{C}'}^T\,[A\circ \mathcal{C}]\, \mathcal{C} ' \,.
$$
In order to prove the statement it suffices to show that, if
$t\mapsto q_t$ is a solution of
$$
\frac d{dt} \der {L} {\dot q} - \der{L}q
=
S^T(SA^{-1}S^T)^{-1} (SA^{-1} \ell - \sigma) \,,
$$
then $t\mapsto \tilde q_t=\mathcal{C}^{-1}(q_t)$ is a solution of
$$
\frac d{dt} \der {\tilde L} {\dot{\tilde q}} -
\der{\tilde L}{\tilde q}
=
\tilde S^T(\tilde S \tilde A^{-1} \tilde S^T)^{-1}
(\tilde S \tilde A^{-1} \tilde \ell - \tilde \sigma)
$$
with $\tilde \ell$ and $\tilde \sigma$ defined as $\ell$ and $\sigma$ in
(\ref{eq:RF2}), but in terms
of $\tilde L$, $\tilde S$ and $\tilde s$. It is well known from
Lagrangian mechanics that
$$
\frac d{dt} \der {\tilde L} {\dot{\tilde q}}
- \der{\tilde L}{\tilde q}
=
\mathcal{C} '^T
\Big[
\Big( \frac d{dt} \der {L} {\dot q} - \der{L}q \Big)\circ\mathcal{C}
\Big] \,.
$$
Elementary computations show that
\begin{equation}
\begin{aligned}
&\tilde \ell_i
=
({\mathcal{C}'}^T \ell)_i +
({\mathcal{C}'}^T A)_{ij} [\dot{\tilde q}\cdot \mathcal{C}''_j\dot{\tilde q} ]
\\
&\tilde \sigma_a
=
(\tilde P\sigma)_a +
(\tilde P S)_{aj} [\dot{\tilde q}\cdot \mathcal{C}''_j\dot{\tilde q} ]
\end{aligned}
\end{equation}
where $\mathcal{C}''_j$ is the Hessian matrix of the $j$-th component $\mathcal{C}_j$
of $\mathcal{C}$ and
with the convention, used below as well, that $\ell$, $A$, $\sigma$
and $S$ are composed with $\mathcal{C}$. Since
$\tilde S\tilde A^{-1}=\tilde P SA^{-1}{\mathcal{C}'}^{-T}$, this implies
$
\tilde S\tilde A^{-1}\tilde \ell - \tilde \sigma
=
\tilde P[SA^{-1}\ell - \sigma ]
$
so that
$$
\tilde S^T\big(\tilde S\tilde A^{-1}\tilde S^T\big)^{-1}
\big[ \tilde S \tilde A^{-1} \tilde \ell - \tilde \sigma \big]
=
{\mathcal{C}'}^T
\Big[ \big(S^T\big(SA^{-1}S^T\big)^{-1} \big[ SA^{-1} \ell - \sigma
\big]\big) \circ \mathcal{C}\Big] \,.
$$
This completes the proof. \end{proof}
\end{document}
\begin{document}
\title[First eigenvalue for nonlocal problems]
{Lower and upper bounds for the first eigenvalue of nonlocal
diffusion problems in the whole space}
\author[L. I. Ignat, J. D. Rossi and A. San Antolin]{Liviu I. Ignat, Julio D. Rossi and Angel San Antolin}
\address{L. I. Ignat
\hfill\break\indent Institute of Mathematics ``Simion Stoilow'' of
the Romanian Academy, \hfill\break\indent 21 Calea Grivitei Street
\hfill\break\indent 010702, Bucharest, ROMANIA \hfill\break\indent
and \hfill\break\indent
BCAM - Basque Center for Applied Mathematics, \hfill\break\indent
Bizkaia Technology Park, Building 500 Derio, \hfill\break\indent
Basque Country, SPAIN.}
\email{{\tt
[email protected]}\hfill\break\indent {\it Web page: }{\tt
http://www.imar.ro/\~\,lignat}}
\address{J. D. Rossi
\hfill\break\indent Departamento de An\'{a}lisis Matem\'{a}tico,
Universidad de Alicante, \hfill\break\indent Ap. correos 99, 03080,
\hfill\break\indent Alicante, SPAIN. \hfill\break\indent On leave
from \hfill\break\indent Dpto. de Matem{\'a}ticas, FCEyN,
Universidad de Buenos Aires, \hfill\break\indent 1428, Buenos Aires,
ARGENTINA. } \email{{\tt [email protected]} \hfill\break\indent {\it
Web page: }{\tt http://mate.dm.uba.ar/$\sim$jrossi/}}
\address{A. San Antolin
\hfill\break\indent Departamento de An\'{a}lisis Matem\'{a}tico,
Universidad de Alicante,
\hfill\break\indent Ap. correos 99, \hfill\break\indent 03080,
Alicante, SPAIN. } \email{{\tt [email protected]}}
\keywords{Nonlocal diffusion, eigenvalues.\\
\indent 2000 {\it Mathematics Subject Classification.} 35B40,
45A07, 45G10.}
\begin{abstract} We find lower and upper bounds for the first eigenvalue of a nonlocal
diffusion operator of the form $ T(u) = - \int_{\mathbb{R}^d} K(x,y)
(u(y)-u(x))
\, dy$. Here we consider a kernel $K(x,y)=\psi (y-a( x))+\psi(x-a( y))$
where $\psi$ is a bounded, nonnegative function supported in the unit
ball and $a$ is a diffeomorphism on $\mathbb{R}^d$. A simple
example is a linear function $a(x)= Ax$. The upper and lower
bounds that we obtain are given in terms of the Jacobian of $a$
and the integral of $\psi$. Indeed, in the linear case $a(x) = Ax$
we obtain an explicit expression for the first eigenvalue in the whole $\mathbb{R}^d$, and it is positive when
the determinant of the matrix $A$ is different from
one. As an application of our results, we observe that, when the
first eigenvalue is positive, there is an exponential decay for
the solutions to the associated evolution problem. As a tool to
obtain the result, we also study the behaviour of the principal
eigenvalue of the nonlocal Dirichlet problem in the ball $B_R$ and prove that
it converges to the first eigenvalue in the whole space as $R\to
\infty$.
\end{abstract}
\maketitle
\section{Introduction}
\label{Sect.intro}
\setcounter{equation}{0}
Nonlocal problems have been recently widely used to model
diffusion processes. When $u(x,t)$ is interpreted as the density
of a single population at the point $x$ at time $t$ and $J(x-y)$
is the probability of ``jumping" from location $y$ to location
$x$, the convolution $(J*u)(x) = \int_{\mathbb R^d} J(y-x) u(y,t)\, dy$
is the rate at which individuals arrive to position $x$ from all
other positions, while $-\int_{\mathbb R^d} J(y-x) u(x,t)\, dy$ is the
rate at which they leave position $x$ to reach any other position.
If in addition no external source is present, we obtain that $u$
is a solution to the following evolution problem
\begin{equation}\label{probgen}
u_t(x,t) = \displaystyle \int_{\mathbb R^d} J(y-x) (u(y,t)-u(x,t))\, dy.
\end{equation}
This equation is understood to hold in a bounded domain, this is,
for $x\in
\Omega$ and has to be complemented with a ``boundary" condition.
For example,
$u=0$ in $\mathbb R^d\setminus
\Omega$ which means that the habitat $\Omega$ is surrounded by a hostile
environment (see
\cite{F} and \cite{Du} for a general nonlocal vector calculus).
Problem (\ref{probgen}) and its stationary version have been
considered recently in connection with real applications (for
example to peridynamics, a recent model for elasticity), we quote
for instance
\cite{AMRT},
\cite{CERW1},
\cite{CERW2}, \cite{BCC},
\cite{BFRW},
\cite{Co1},
\cite{CD2},
\cite{CR1},
\cite{CCR}, \cite{PLPS}, \cite{Sill}, \cite{SL}
and the recent book \cite{libro}. See also \cite{IR2} for the appearance of convective
terms, \cite{AMRT2} for a problem with nonlinear nonlocal
diffusion and \cite{CCEM}, \cite{CER} for other features in
related nonlocal problems.
On the other hand, it is well known that eigenvalue problems are a
fundamental tool to deal with local problems. In particular, the
so-called principal eigenvalue of the Laplacian with Dirichlet
boundary conditions,
\begin{equation}\label{eig-local}
\left\{
\begin{array}{ll}
-\Delta v(x)= \sigma v(x), \qquad & x\in \Omega,\\
\quad v(x)=0, \qquad & x\in \partial\Omega,
\end{array}
\right.
\end{equation}
plays an important role, since it gives the exponential decay of
solutions to the associated parabolic problem, $u_t = \Delta u$
with $u|_{\partial \Omega}=0$. The properties of the principal
eigenvalue of (\ref{eig-local}) are well-known, see \cite{GT}.
For the nonlocal problem, in \cite{GMR-autov} the authors consider
the ``Dirichlet" eigenvalue problem for a nonlocal operator in a
smooth bounded domain $\Omega$, that is,
\begin{equation}\label{eigen}
\left\{
\begin{array}{ll}
\displaystyle (J*u)(x) -u(x) = -\lambda u(x), \quad & x\in \Omega,\\[0.25pc]
\qquad u (x) = 0, \qquad & x\in \mathbb R^d\setminus \Omega.
\end{array}
\right.
\end{equation}
They show that the first eigenvalue has an associated positive
eigenfunction and that the eigenvalue goes to zero as the domain is
expanded, i.e., $\lambda_1 (k\Omega) \to 0$ as $k \to \infty$. In
addition, it is proved in \cite{CCR} that solutions to
\eqref{probgen} in the whole $\mathbb{R}^d$ decay in the $L^2$-norm as
$t^{-d/4}$. Therefore the first eigenvalue in the whole space $\mathbb R^d$
is zero for the convolution case. When dealing with a convolution, one of
the main tools is the Fourier transform, see \cite{CCR}.
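A quick way to see this, assuming only that $J\in L^1(\mathbb R^d)$ is nonnegative and even, is the following standard computation. With $\widehat J(\xi)=\int_{\mathbb R^d}J(x)e^{-ix\cdot\xi}dx$ (and the analogous definition for $\widehat u$), expanding the square and using Plancherel's theorem gives
$$
\frac{\displaystyle\int_{\mathbb R^d}\int_{\mathbb R^d} J(x-y)\big(u(x)-u(y)\big)^2\,dx\,dy}{\displaystyle\int_{\mathbb R^d}u^2(x)\,dx}
\;=\;
\frac{\displaystyle 2\int_{\mathbb R^d}\big(\widehat J(0)-\widehat J(\xi)\big)\,|\widehat u(\xi)|^2\,d\xi}{\displaystyle\int_{\mathbb R^d}|\widehat u(\xi)|^2\,d\xi}\,,
$$
and, since $\widehat J$ is continuous, taking $\widehat u$ supported in a small ball around the origin makes this quotient arbitrarily small; hence the infimum of the Rayleigh quotient over $L^2(\mathbb R^d)$, that is the first eigenvalue in the whole space, vanishes in the convolution case.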
For more general kernels, in \cite{IR3} energy methods were
applied to obtain decay estimates for solutions to nonlocal
evolution equations whose kernel is not given by a convolution,
that is, equations of the form
\begin{equation}\label{lineal}
u_t (x,t) = \int_{\mathbb{R}^d} K(x,y)(u(y,t)
- u(x,t)) \, dy
\end{equation}
with $K(x,y)$ a symmetric nonnegative kernel. The obtained decay
estimates are of polynomial type, more precisely, $\| u (\cdot,
t)\|_{L^2 (\mathbb{R}^d)} \leq C t^{-d/4}$. We remark that this decay bound
need not be optimal; in fact, in \cite{IR3} there is a particular
example of a kernel $K$ that gives exponential decay in $L^2(\mathbb{R})$.
The exponential decay of solutions suggests that the associated
first eigenvalue is positive.
Our main goal in the present work is to study properties of the
principal eigenvalue of nonlocal diffusion operators when the
associated kernel is not of convolution type. Some preliminary
properties are already known, such as existence, uniqueness and a
variational characterization. To go further, we need to assume
some structure for the kernel. Let us consider a function $\psi$
nonnegative, bounded and supported in the unit ball in $\mathbb{R}^d$. We associate with this function a kernel of the form
\begin{equation}\label{forma.nucleo}
K(x,y)=\psi (y-a( x))+\psi(x-a( y))
\end{equation}
where $a(x)$ is a diffeomorphism on $\mathbb{R}^d$. Note that $K$ is
symmetric and that the convolution type kernels also take the form
\eqref{forma.nucleo} (just put $a(x)=x$). For these kernels let us
look for the first eigenvalue of the associated nonlocal operator,
that is,
\begin{equation}\label{eigenvalue}
- \int_{\mathbb{R}^d} K(x,y) (u(y)-u(x)) \, dy = \lambda_1 u (x).
\end{equation}
Some known results (that we state in the next section for
completeness) read as follows: For any bounded domain $\Omega$
there exists a principal eigenvalue $\lambda_1 (\Omega)$ of
problem
\eqref{eigenvalue} with $u\equiv 0$ in $\mathbb{R}^d \setminus \Omega$. The
corresponding non-negative eigenfunction $\phi_1(x)$ is strictly positive in~$\Omega$.
Moreover, the first eigenvalue is given by
\begin{equation}\label{lambdar}
\lambda_1(\Omega)=
\inf _{u\in L^2(\Omega)}
\frac {\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)(\tilde u(x)-\tilde u(y))^2dxdy}
{\displaystyle\int_{\Omega}u^2(x)dx}.
\end{equation}
Here we have denoted by $\tilde{u}$ the extension by zero of $u$,
$$\tilde u(x)=\left\{
\begin{array}{ll}
u(x),& x\in \Omega,\\
0,& x\in \mathbb{R}^d \setminus \Omega.
\end{array}
\right.$$
We will use this notation throughout the whole paper. When we deal
with the whole space we have
\begin{equation}\label{lambda1}
\lambda_1 (\mathbb{R}^d)=\inf _{u\in L^2(\mathbb{R}^d)}\frac{\displaystyle
\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)(u(x)-u(y))^2dxdy}{\displaystyle\int _{\mathbb{R}^d}u^2(x)dx}.
\end{equation}
The main results of this paper are the following:
\begin{theorem} \label{teo.conver} Let $\Omega $ be a bounded domain with $0\in \Omega$ and
consider its dilations by a real factor $R$, $R\Omega= \{ Rx
\, : \, x\in \Omega\}$. Then
\begin{equation}\label{lambdalimit.intro}
\lambda_1(\mathbb{R}^d) =\lim _{R\rightarrow \infty} \lambda_1(R \Omega).
\end{equation}
\end{theorem}
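Let us note that one inequality in \eqref{lambdalimit.intro} is elementary: for every $R>0$, each $u\in L^2(R\Omega)$, extended by zero, is admissible in \eqref{lambda1} and produces the same Rayleigh quotient as in \eqref{lambdar}, so that
$$
\lambda_1(\mathbb{R}^d)\leq \lambda_1(R\,\Omega) \qquad \text{for every } R>0.
$$
The content of Theorem \ref{teo.conver} is therefore the reverse inequality in the limit, namely $\limsup_{R\to\infty}\lambda_1(R\Omega)\leq\lambda_1(\mathbb{R}^d)$.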
Now, we state our result concerning lower bounds for the first
eigenvalue.
\begin{theorem} \label{teo.cota.por.abajo}
Assume that the kernel is given by \eqref{forma.nucleo} and that
the Jacobian of $a^{-1}$, $J_{a^{-1}}$, verifies
\[
\displaystyle\sup_{x\in \mathbb{R}^d} |J_{a^{-1}}(x)|=M< 1
\qquad \text{ or } \qquad
\displaystyle\inf _{x\in \mathbb{R}^d}|J_{a^{-1}}(x)|=m> 1.
\]
Then
$$
\lambda_1 (\mathbb{R}^d) \geq 2(1-M^{1/2})^2 \left(\int
_{\mathbb{R}^d}\psi(x)dx\right),
$$
in the first case and
$$
\lambda_1(\mathbb{R}^d) \geq 2(m^{1/2}-1)^2 \left(\int_{\mathbb{R}^d}\psi(x)dx\right),
$$
in the second case.
\end{theorem}
Concerning upper bounds we have the following less general result.
\begin{theorem} \langlebel{teo.cota.por.arriba}
Let $a$ be a diffeomorphism homogeneous of degree one, that is,
$a(Rx) =R a(x)$. Assume that the kernel is given by
\eqref{forma.nucleo}.
Then
\begin{equation}\langlebel{cota.por.arriba.intro}
\langlembda_1 (\mathbb{R}^d) \leq 2 \left(\int_{\mathbb{R}^d}\psi(x)dx\mathbf{r}ight)
\inf _{\|\phi\|_{L^2(B_1)}=1 } \int _{\mathbb{R}^d}\big(\phi(x)-\phi(a(x))\big)^2dx,
\mathcal{E}d{equation}
where the infimum is taken over all functions $\phi$ supported in the unit ball of $\mathbb{R}^d$.
\mathcal{E}d{theorem}
\begin{remark}
Since we can consider $\phi \geq 0$, we get
$$
\begin{array}{rl}
\displaystyle \int_{\mathbb{R}^d}\big(\phi(x)-\phi(a(x))\big)^2dx &
\displaystyle = \int_{\mathbb{R}^d}\phi^2(x)dx+ \int_{\mathbb{R}^d} \phi^2(a(x))dx
-
2\int_{\mathbb{R}^d}\phi(x)\phi(a(x))dx \\[10pt]
& \displaystyle \leq \int_{\mathbb{R}^d}\phi^2(x)dx+ \int_{\mathbb{R}^d}
\phi^2(a(x))dx.
\mathcal{E}d{array}
$$
Hence, from \eqref{cota.por.arriba.intro} we immediately obtain the
following bound
\begin{equation}\langlebel{est.sup}
\langlembda_1 (\mathbb{R}^d) \leq 2 \big(1+ \displaystyle\sup_{x\in \mathbb{R}^d}
|J_{a^{-1}}(x)|\big)\left(\int_{\mathbb{R}^d}\psi(x)dx\mathbf{r}ight).
\mathcal{E}d{equation}
\mathcal{E}d{remark}
For invertible linear maps $a$ on $\mathbb R^d$ we obtain the following
sharp result.
\begin{theorem}\langlebel{main.result}
Let $K$ be given by \eqref{forma.nucleo} with an invertible linear
map $a(x)=Ax$. Then
\begin{equation}\langlebel{main.44}
\langlembda_1 (\mathbb R^d) = {l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=
2(1-|\det(A)|^{-1/2})^2\left(\int _{\mathbb{R}^d}\psi(x)dx\mathbf{r}ight).
\mathcal{E}d{equation}
\mathcal{E}d{theorem}
\begin{remark}
Note that for a linear function $a$ the bound
\eqref{est.sup} is not sharp. However, Theorems
\mathbf{r}ef{teo.cota.por.abajo} and \mathbf{r}ef{teo.cota.por.arriba} provide
lower and upper bounds for $\langlembda_1 (\mathbb{R}^d)$ when $M<1$ or
$m>1$ that depend linearly on $\int \psi$ in terms of the Jacobian
of the diffeomorphism $a^{-1}$.
\mathcal{E}d{remark}
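As a sanity check, the sharp value given by the last theorem can be approximated numerically by
discretizing the variational characterization of the first eigenvalue on a large ball and taking
the smallest eigenvalue of the resulting symmetric matrix. The sketch below is an illustration
only: the one-dimensional choice $a(x)=2x$, the kernel $\psi=\chi_{(-1,1)}$ and all grid
parameters are assumptions made for this example, not data taken from the paper.
\begin{verbatim}
import numpy as np

# Rough numerical illustration (1D): discretize the Rayleigh quotient on B_R and
# compare its smallest eigenvalue with 2*(1 - |det A|^{-1/2})^2 * int(psi).
alpha = 2.0                     # a(x) = alpha * x, so |det A| = alpha
R, h = 30.0, 0.25               # radius of the ball B_R and grid spacing (assumed)

def psi(z):                     # nonnegative, bounded, supported in the unit ball
    return (np.abs(z) < 1.0).astype(float)

L = alpha * R + 2.0             # covers the support of K(x, .) for x in B_R
x = np.arange(-L, L + h, h)
inside = np.abs(x) <= R

X, Y = np.meshgrid(x, x, indexing="ij")
K = psi(Y - alpha * X) + psi(X - alpha * Y)   # K(x,y) = psi(y-a(x)) + psi(x-a(y))

deg = K.sum(axis=1) * h                       # approximates int K(x,y) dy
M = 2.0 * np.diag(deg[inside]) - 2.0 * h * K[np.ix_(inside, inside)]
lam_R = np.linalg.eigvalsh(M)[0]              # discrete analogue of lambda_1(B_R)

lam_limit = 2.0 * (1.0 - alpha ** (-0.5)) ** 2 * 2.0   # int psi = 2 here
print(lam_R, lam_limit)   # lam_R >= lam_limit; it decreases toward it as R grows
\end{verbatim}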
As an immediate application of our results, we observe that, when
the first eigenvalue is positive, we have exponential decay for
the solutions to the associated evolution problem in $\mathbb{R}^d$. In
fact, let us consider,
$$
u_t (x,t) = \int_{\mathbb{R}^d} K(x,y) (u(y,t)- u(x,t))\, dy,
$$
with an initial condition $u(x,0)=u_0 (x) \in L^2 (\mathbb{R}^d)$.
Multiply by $u(x,t)$ and integrate to obtain
$$
\begin{array}{rl}
\displaystyle \frac12 \frac{d}{dt} \int_{\mathbb{R}^d} u^2 (x,t) \, dx & \displaystyle =
\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} K(x,y) (u(y,t)- u(x,t)) u(x,t)\, dy\, dx
\\[8pt]
& \displaystyle = - \frac12 \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} K(x,y)
(u(y,t)- u(x,t))^2 \, dy\, dx
\\[8pt]
& \displaystyle \leq - \frac12 \langlembda_1 \int_{\mathbb{R}^d} u^2 (x,t)\,
dx.
\mathcal{E}d{array}
$$
Thus, an exponential decay of $u$ in $L^2$-norm follows
$$
\displaystyle \int_{\mathbb{R}^d} u^2 (x,t) \, dx \leq
\left(\int_{\mathbb{R}^d}u^2(x,0) \, dx \mathbf{r}ight) \cdot e^{-\langlembda_1 t}.
$$
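For readers who want to see this decay numerically, the following rough sketch integrates a
truncated one-dimensional version of the evolution problem with an explicit Euler scheme and
estimates the observed exponential rate of the squared $L^2$ norm; by the computation above this
rate should be at least the first eigenvalue. The kernel, the truncation of the spatial domain
and the time step are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Explicit Euler for u_t(x,t) = int K(x,y) (u(y,t) - u(x,t)) dy on a truncated grid,
# with the same illustrative kernel as before: a(x) = 2x, psi = indicator of (-1,1).
alpha, h, L = 2.0, 0.25, 60.0
x = np.arange(-L, L + h, h)

def psi(z):
    return (np.abs(z) < 1.0).astype(float)

X, Y = np.meshgrid(x, x, indexing="ij")
K = psi(Y - alpha * X) + psi(X - alpha * Y)
deg = K.sum(axis=1) * h                      # int K(x,y) dy on the grid

u = np.exp(-x ** 2)                          # some initial datum u_0 in L^2
dt, T = 0.05, 20.0
times, norms = [], []
for n in range(int(T / dt)):
    u = u + dt * (h * K.dot(u) - deg * u)    # u_t = int K (u(y) - u(x)) dy
    times.append((n + 1) * dt)
    norms.append(np.sum(u ** 2) * h)         # squared L^2 norm

rate = -np.polyfit(times, np.log(norms), 1)[0]
print("observed decay rate of the squared L^2 norm:", rate)
\end{verbatim}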
The paper is organized as follows: in Section~\mathbf{r}ef{Sect.prelim} we
collect some preliminary results and prove Theorem~\mathbf{r}ef{teo.conver};
while in Section~\mathbf{r}ef{main.results} we collect the proofs of the
lower and upper bounds for the first eigenvalue; we prove
Theorem~\mathbf{r}ef{teo.cota.por.abajo}, Theorem~\mathbf{r}ef{teo.cota.por.arriba}
and Theorem~\mathbf{r}ef{main.result}.
\section{Properties of the first eigenvalue. Proof of Theorem~\mathbf{r}ef{teo.conver}}
\langlebel{Sect.prelim}
\setcounter{equation}{0}
First, let us state some known properties of the first eigenvalue
of our nonlocal operator.
\begin{theorem}
For any bounded domain $\Omega$ there exists a principal
eigenvalue $\langlembda_1 (\Omega)$ of problem \eqref{eigenvalue},
and the
corresponding non-negative eigenfunction $\phi_1(x)$ is strictly positive in $\Omega$.
\mathcal{E}d{theorem}
\begin{proof} It follows from \cite{KR}.
\mathcal{E}d{proof}
\begin{theorem}
The first eigenvalue of problem \eqref{eigenvalue} satisfies
\begin{equation}\langlebel{lambdar.prelim}
\langlembda_1(\Omega)=\inf _{u\in L^2(\Omega)}
\frac{\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d}
K(x,y)(\tilde u(x)-\tilde u(y))^2\, dx\,
dy}{\displaystyle\int_{\Omega}u^2(x)\, dx}.
\mathcal{E}d{equation}
\mathcal{E}d{theorem}
\begin{proof} See \cite{GMR-autov}.
\mathcal{E}d{proof}
Now, to simplify the presentation, we prove
Theorem~\mathbf{r}ef{teo.conver} in the special case of balls $B_R$ that
are centered at the origin with radius $R$ (we will use this
notation in the rest of the paper) and next we deduce from this
fact the general case, $\Omega$ a bounded domain.
\begin{lemma}\langlebel{asimptoticlimit}
Let $\langlembda_1(\mathbb{R}^d)$ be defined by \eqref{lambda1}. Then
\begin{equation}\langlebel{lambdalimit}
\langlembda_1 (\mathbb{R}^d)={l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty} \langlembda_1(B_R).
\mathcal{E}d{equation}
\mathcal{E}d{lemma}
\begin{proof}
First of all, observe that for any $R_1\leq R_2$ we have
$B_{R_1}\subset B_{R_2}$ and then
$$\langlembda_1(B_{R_1})\geq \langlembda _1(B_{R_2})>0.$$
Then we deduce that there exists the limit
$${l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty} \langlembda_1(B_{R})\geq 0.$$
{\textbf{Step I.}} Let us choose $u\in L^2(B_R)$. By the
definition of $\langlembda_1(\mathbb{R}^d)$ we get
$$\frac {\displaystyle
\int _{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)(\tilde u(x)-\tilde u(y))^2dxdy}{\displaystyle\int _{B_R}u^2(x)dx}
=\frac{\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)(\tilde
u(x)-\tilde u(y))^2dxdy}{\displaystyle\int_{\mathbb{R}^d}\tilde u^2(x)dx}
\geq \langlembda_1(\mathbb{R}^d).$$
Taking the infimum of the left-hand side over all functions $u\in L^2(B_R)$ we obtain that for any $R>0$
\begin{equation}\langlebel{est.1}
\langlembda_1(B_R)\geq \langlembda_1 (\mathbb{R}^d).
\mathcal{E}d{equation}
{\textbf{Step II.}} Let $\varepsilon>0$. Then there exists $u_\varepsilon\in L^2(\mathbb{R}^d)$ such that
\begin{equation}\langlebel{eps.est}
\langlembda_1(\mathbb{R}^d)+
\varepsilon\geq \frac {\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)(u_\varepsilon(x)-u_\varepsilon(y))^2dxdy}
{\displaystyle\int _{\mathbb{R}^d}u_\varepsilon^2(x)dx}.
\mathcal{E}d{equation}
We choose $u_{\varepsilon,R}$ defined by
$$u_{\varepsilon,R}(x)=u_\varepsilon (x)\chi_{B_R}(x).$$
We claim that
\begin{equation}\langlebel{est.2}
\int _{B_R} u^2_{\varepsilon,R}(x)dx\mathbf{r}ightarrow \int_{\mathbb{R}^d} u_\varepsilon^2(x)dx
\mathcal{E}d{equation}
and
\begin{equation}\langlebel{est.3}
\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)( u_{\varepsilon,R}(x)- u_{\varepsilon,R}(y))^2\, dx\, dy\mathbf{r}ightarrow
\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)( u_\varepsilon(x)-u_\varepsilon(y))^2\, dx\, dy.
\mathcal{E}d{equation}
Assume these claims for the moment; using that $u_{\varepsilon,R}$
vanishes outside the ball $B_R$ and the definition of
$\langlembda_1(B_R)$ we get
$$\frac{\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)( u_{\varepsilon,R}(x)- u_{\varepsilon,R}(y))^2dxdy}
{\displaystyle\int _{B_R} u^2_{\varepsilon,R}(x)dx}
\geq \langlembda_1(B_R).$$
Using claims \eqref{est.2} and \eqref{est.3} and taking $R\mathbf{r}ightarrow\infty $ we obtain
$$\frac{\displaystyle\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)( u_\varepsilon(x)- u_\varepsilon(y))^2dxdy}
{\displaystyle\int _{\mathbb{R}^d} u_\varepsilon^2(x)dx}
\geq {l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty} \langlembda_1(B_R).$$
By \eqref{eps.est}, for any $\varepsilon>0$, we have
$\langlembda_1(\mathbb{R}^d)+\varepsilon\geq {l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty}
\langlembda_1(B_R) $. Thus
$$\langlembda_1(\mathbb{R}^d)\geq {l^\infty(\zz)}m _{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R) .$$
Using now \eqref{est.1}
the proof of \eqref{lambdalimit} is finished.
It remains to prove claims \eqref{est.2} and \eqref{est.3}.
The first claim follows from Lebesgue's dominated convergence
theorem, since $|u_{\varepsilon,R}|\leq |u_\varepsilon|\in L^2(\mathbb{R}^d)$. For the
second one we have that
$$u_{\varepsilon,R}(x)- u_{\varepsilon,R}(y)\mathbf{r}ightarrow u_{\varepsilon}(x)-u_{\varepsilon}(y),\mathbf{q}uad\text{as}\mathbf{q}uad R\mathbf{r}ightarrow \infty$$
and
\begin{equation}\langlebel{est.5}
K(x,y)| u_{\varepsilon,R}(x)- u_{\varepsilon,R}(y)|^2\leq 2 K(x,y)(
u_{\varepsilon,R}^2(x)+u_{\varepsilon,R}^2(y))
\leq 2 K(x,y)(u^2_\varepsilon(x)+u^2_\varepsilon(y)).
\mathcal{E}d{equation}
We show that under the assumptions on $K$ the right hand side in
\eqref{est.5} belongs to $L^1(\mathbb{R}^d\times\mathbb{R}^d)$:
\begin{align*}
\int _{\mathbb{R}^d} \int _{\mathbb{R}^d} K(x,y)(u^2_\varepsilon(x)&+u^2_\varepsilon(y))dx dy=2\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} K(x,y)u^2_\varepsilon(x)dxdy\\
&=2\int _{\mathbb{R}^d}u^2_\varepsilon(x)dx \int _{\mathbb{R}^d} K(x,y)dy\leq 2\sup _{x\in
\mathbb{R}^d} \int_{\mathbb{R}^d}
K(x,y)dy \int_{\mathbb{R}^d} u_\varepsilon ^2(x)dx\\
&\leq 2\int_{\mathbb R^d} \psi(x) (1+|J_{a^{-1}}(x)|)dx\int_{\mathbb{R}^d} u_\varepsilon
^2(x)dx \leq C \int _{\mathbb{R}^d} u_\varepsilon ^2(x)dx.
\mathcal{E}d{align*}
Applying now Lebesgue's convergence theorem we obtain
\eqref{est.3}.
\mathcal{E}d{proof}
When we consider dilations of a domain $\Omega$ with $0
\in \Omega$ we get the same limit. This provides a proof of Theorem~\mathbf{r}ef{teo.conver}.
\begin{proof}[Proof of Theorem~\mathbf{r}ef{teo.conver}] Let us
consider $B_{r_1} \subset \Omega \subset B_{r_2}$ then
$$
\langlembda_1 ( R B_{r_1}) \geq \langlembda_1 ( R \Omega) \geq
\langlembda_1 ( R B_{r_2}),
$$
and we just observe that
$$
{l^\infty(\zz)}m_{R \to \infty} \langlembda_1 ( R B_{r_1}) = {l^\infty(\zz)}m_{R \to \infty} \langlembda_1 ( R B_{r_2}) =
\langlembda_1 (\mathbb{R}^d).
$$
This ends the proof. \mathcal{E}d{proof}
\section{Proofs of lower and upper bounds for the first eigenvalue}
\langlebel{main.results}
\setcounter{equation}{0}
In this section we obtain estimates on $\langlembda_1(\mathbb{R}^d)$ defined by
\eqref{lambda1}. First we prove Theorem~\mathbf{r}ef{teo.cota.por.abajo}.
\begin{proof}[Proof of Theorem \mathbf{r}ef{teo.cota.por.abajo}]
First of all, let us perform the following computations: let
${\bf T}_heta$ be a positive constant which will be fixed later.
Using the elementary inequality
\begin{align*}
(b-c)^2=b^2+c^2-2bc\geq b^2+c^2-{\bf T}_heta b^2-\frac 1{\bf T}_heta c^2=
(1-{\bf T}_heta)(b^2-\frac {c^2}{{\bf T}_heta})
\mathcal{E}d{align*}
we get
\begin{align*}
\iint_{\mathbb{R}^{2d}} \psi (y-a(x))&(u(x)-u(y))^2 dxdy\geq
(1-{\bf T}_heta)\iint_{\mathbb{R}^{2d}} \psi(y-a(x))\Big(u^2(x)-\frac {u^2(y)}{{\bf T}_heta}\Big)dxdy \\
&=(1-{\bf T}_heta)\Big(\int_{\mathbb{R}^d}u^2(x)dx\int_{\mathbb{R}^d}\psi(y)dy-\frac 1{\bf T}_heta
\int_{\mathbb{R}^d}u^2(y)\int _{\mathbb{R}^d}\psi(y-a(x))dxdy
\Big)\\
&=(1-{\bf T}_heta)\int_{\mathbb{R}^d}u^2(x)\Big(\int_{\mathbb{R}^d}\psi(y)dy-\frac 1{\bf T}_heta\int_{\mathbb{R}^d}\psi(x-a(y))dy
\Big)dx\\
&=(1-{\bf T}_heta)\int_{\mathbb{R}^d}u^2(x)\Big(\int_{\mathbb{R}^d}\psi(y)dy-\frac 1{\bf T}_heta\int_{\mathbb{R}^d}\psi(x-y)|J_{a^{-1}}(y)|dy
\Big)dx\\
&=\frac{1-{\bf T}_heta}{\bf T}_heta \int_{\mathbb{R}^d}\psi(y)dy\int_{\mathbb{R}^d}u^2(x)
\Big({\bf T}_heta-\frac{\big(\psi \ast |J_{a^{-1}}|\big)(x)}{ \int_{\mathbb{R}^d}\psi(y)dy}
\Big)dx.
\mathcal{E}d{align*}
Then
\begin{align*}
\frac12 \iint_{\mathbb{R}^{2d}}
K(x,y)&(u(x)-u(y))^2 \, dx\, dy\\
\geq &\left\{
\begin{array}{ll}
\displaystyle \frac{1-{\bf T}_heta}{\bf T}_heta
\left( \int_{\mathbb{R}^d}\psi(y)dy\mathbf{r}ight)\int_{\mathbb{R}^d}u^2(x)\Big({\bf T}_heta-\frac{\sup_{x\in \mathbb{R}^d} \psi
\ast |J_{a^{-1}}|}{ \int_{\mathbb{R}^d}\psi(y)dy}
\Big)dx,& \mathbf{q}quad {\bf T}_heta<1, \\[15pt]
\displaystyle \frac{1-{\bf T}_heta}{\bf T}_heta
\left( \int_{\mathbb{R}^d}\psi(y)\, dy \mathbf{r}ight)
\int_{\mathbb{R}^d}u^2(x)\Big({\bf T}_heta-\frac{\inf \psi \ast |J_{a^{-1}}|}{ \int_{\mathbb{R}^d}\psi(y)\, dy}
\Big)\, dx,& \mathbf{q}quad {\bf T}_heta>1.
\mathcal{E}d{array}
\mathbf{r}ight.\\[10pt]
\geq &
\left\{
\begin{array}{ll}
\displaystyle \frac{1-{\bf T}_heta}{\bf T}_heta
\left( \int_{\mathbb{R}^d}\psi(y)dy\mathbf{r}ight)\int_{\mathbb{R}^d}u^2(x)\Big({\bf T}_heta-M \Big)dx,& \mathbf{q}quad{\bf T}_heta<1, \\[15pt]
\displaystyle \frac{{\bf T}_heta-1}{\bf T}_heta
\left( \int_{\mathbb{R}^d}\psi(y)\, dy \mathbf{r}ight)
\int_{\mathbb{R}^d}u^2(x)\Big(m-{\bf T}_heta
\Big)\, dx,& \mathbf{q}quad {\bf T}_heta>1.
\mathcal{E}d{array}
\mathbf{r}ight.
\mathcal{E}d{align*}
In the first case we choose ${\bf T}_heta=M^{1/2}$. In the second case
we choose ${\bf T}_heta=m^{1/2}$.
Therefore, the statement follows from the definition of $\langlembda_1
(\mathbb{R}^d)$.
\mathcal{E}d{proof}
In the following we deal with upper bounds for the first
eigenvalue.
First, let us state a lemma with an upper bound for $\langlembda_1(B_R)$
in terms of the radius of the ball, $R$, and the function $\psi$.
Note that here we are assuming that $a$ is $1-$homogeneous.
\begin{lemma}\langlebel{lem.up}
Let $K(x,y)=\psi(y-a(x))+\psi(x-a(y))$ with a $1$-homogeneous map
$a$. For every $\delta>0$ there exists a constant $C(\delta)$ such
that the following
\begin{align*}
\langlembda_1(B_R)
&\leq (2+\delta)\int _{\mathbb{R}^d}\psi(z)dz\int _{\mathbb{R}^d}\Big(\phi( x)-\phi( {a(x)})\Big)^2dx\\
&\mathbf{q}uad +\frac{C(\delta)}{R^2}\int _{\mathbb{R}^d}\psi(z)|z|^2dz \int _{\mathbb{R}^d} |\nabla \phi (x )|^2dx \sup _{y\in B_{1+1/R}}|J_{a^{-1}}(y) |
\mathcal{E}d{align*}
holds for any function $\phi$ supported in the unit ball with
$\|\phi\|_{L^2(B_1)}=1$ and all $R>0$.
\mathcal{E}d{lemma}
\begin{proof}
Let $\phi$ be a smooth function supported in the unit ball with $\int _{B_1}\phi^2(x)dx=1$.
Taking as a test function $\phi_R(x)=\phi(x/R)$ in the variational characterization
\eqref{lambdar}, we obtain
\begin{align*}
\langlembda_1(B_R)&\leq\frac{\displaystyle\int _{\mathbb{R}^d}\int_{\mathbb{R}^d}
K(x,y)(\phi_R(x)-\phi_R(y))^2dxdy}{\displaystyle\int _{B_R}\phi_R^2(x)dx}= \frac 1{R^d} \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(x,y)\Big(\phi(\frac xR)-\phi(\frac yR)\Big)^2dxdy\\
&= R^d \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} K(Rx,Ry)\Big(\phi( x)-\phi(
y)\Big)^2dxdy.
\mathcal{E}d{align*}
Using that $K(x,y)=\psi(y-a(x))+\psi(x-a(y))$ and that the right hand side in the
last term is symmetric we get
\begin{align*}
\langlembda_1(B_R)&\leq 2 R^d \int _{\mathbb{R}^d}\int_{\mathbb{R}^d} \psi(Ry-a(Rx))\Big(\phi( x)-\phi( y)\Big)^2dxdy\\
&=2\int _{\mathbb{R}^d}\int _{\mathbb{R}^d}\psi(z)\Big(\phi( x)-\phi( \frac{z+a(Rx)}R)\Big)^2dxdz\\
&\leq (2+\delta)\int _{\mathbb{R}^d}\int _{\mathbb{R}^d}\psi(z)\Big(\phi( x)-\phi( \frac{a(Rx)}R)\Big)^2dxdz\\
&\mathbf{q}uad+
C(\delta)\int _{\mathbb{R}^d}\int _{\mathbb{R}^d}\psi(z)\Big(\phi( \frac{a(Rx)}R)-\phi( \frac{z+a(Rx)}R)\Big)^2dxdz\\
&\leq (2+\delta)\int _{\mathbb{R}^d}\psi(z)dz\int _{\mathbb{R}^d}\Big(\phi( x)-\phi( {a(x)})\Big)^2dx\\
&\mathbf{q}uad +\frac{C(\delta)}{R^2}\int _{|z|\leq 1}\psi(z)\int
_{\mathbb{R}^d}\Big(\int _0^1\nabla
\phi({a(x)}+s\frac zR)\cdot zds\Big)^2dxdz.
\mathcal{E}d{align*}
Observe that we have
\begin{align*}
\int _{|z|\leq 1}\psi(z)\int
_{\mathbb{R}^d}\Big(\int _0^1\nabla &
\phi({a(x)}+ s\frac zR)\cdot zds\Big)^2dxdz
\\ & \leq
\int _{\mathbb{R}^d}\psi(z)|z|^2 \int _{\mathbb{R}^d}\int _0^1|\nabla \phi \big(a(x)+\frac{sz}R\big )|^2 dsdx dz\\
&\leq \int _{\mathbb{R}^d}\psi(z)|z|^2 \int _0^1 \int _{\mathbb{R}^d} |\nabla \phi \big(x+\frac{sz}R\big )|^2 |J_{a^{-1}}(x)| dxds dz\\
&\leq \int _{\mathbb{R}^d}\psi(z)|z|^2 \int _0^1 \int _{|x|\leq 1} |\nabla \phi \big(x\big )|^2 |J_{a^{-1}}(x-\frac{sz}R)| dxds dz\\
&\leq \int _{\mathbb{R}^d}\psi(z)|z|^2dz \int _{\mathbb{R}^d} |\nabla \phi (x )|^2dx \sup _{y\in B_{1+1/R}}|J_{a^{-1}}(y) | .
\mathcal{E}d{align*}
Hence, we have
\begin{align*}
\langlembda_1(B_R)
&\leq (2+\delta)\int _{\mathbb{R}^d}\psi(z)dz\int _{\mathbb{R}^d}\Big(\phi( x)-\phi( {a(x)})\Big)^2dx\\
&\mathbf{q}uad +\frac{C(\delta)}{R^2}\int _{\mathbb{R}^d}\psi(z)|z|^2dz \int _{\mathbb{R}^d} |\nabla \phi (x )|^2dx \sup _{y\in B_{1+1/R}}|J_{a^{-1}}(y) |,
\mathcal{E}d{align*}
as we wanted to show. \mathcal{E}d{proof}
Now we are ready to prove our general upper
bound.
\begin{proof}[Proof of Theorem \mathbf{r}ef{teo.cota.por.arriba}]
Let us fix $\delta>0$ and a function $\phi$ supported in the unit ball with
$\|\phi\|_{L^2(\mathbb{R}^d)}=1$. We apply Lemma~\mathbf{r}ef{lem.up} and let $R\to \infty$.
Then
$$\langlembda_1(\mathbb{R}^d)\leq (2+\delta)\int _{\mathbb{R}^d}\psi(z)dz\int _{\mathbb{R}^d}\Big(\phi( x)-\phi( {a(x)})\Big)^2dx$$
Letting $\delta\to 0$ we obtain the desired result.
\mathcal{E}d{proof}
Now, we deal with the case in which $a$ is an invertible linear
map on $\mathbb{R}^d$ of the form $a(x) = Ax$. To clarify the presentation we first
treat the case of a diagonal matrix $A$. We then
extend the result to the case of a
general matrix. The proof in the first case is
simpler while the proof of the general case is more involved and
requires different techniques.
\begin{lemma}\langlebel{minimizer}
Let $a(x)= Ax $ be an invertible linear map that in addition is
assumed to be diagonal, that is, $a(x) =
(\alpha_1x_1,\dots,\alpha_dx_d)^T$ with $\alpha_i \in \mathbb{R}$. Then,
if we consider functions $\phi\in L^2 (\mathbb{R}^d)$ supported in the
unit ball, we have
\begin{equation}
\langlebel{liviu.ii}
\inf _{\|\phi\|_{L^2(B_1)}=1}
\int_{\mathbb{R}^d}\Big(\phi( x)- \phi( a( x))\Big)^2dx =(1-|\det
(A)|^{-1/2})^2.
\mathcal{E}d{equation}
\mathcal{E}d{lemma}
\begin{proof}
For any function $\phi$ as in the statement we have
\begin{align*}
\int_{\mathbb{R}^d}\Big(\phi( x)-\phi( a( x))\Big)^2&=1+|\det(A)|^{-1}-2\int_{ \mathbb{R}^d}\phi( x) \phi( a( x))\\
&\geq 1+|\det(A)|^{-1}-2\left(\int_{\mathbb{R}^d}\phi^2(x)dx\mathbf{r}ight)^{1/2}
\left(\int_{\mathbb{R}^d}\phi^2(a (x))dx\mathbf{r}ight)^{1/2}\\
&=1+{|\det(A)|}^{-1}-2|\det(A)|^{-1/2}=(1-|\det(A)|^{-1/2})^2.
\mathcal{E}d{align*}
In order to prove \eqref{liviu.ii} we need to show
the existence of a sequence of functions $\phi$ as in the statement
such that
$$
\dfrac{\displaystyle
\int_{ \mathbb{R}^d}\phi( x)\phi (a( x))dx }{\displaystyle |\det(A)|^{-1/2}\int_{\mathbb{R}^d}\phi^2(x)dx
}
\mathbf{r}ightarrow 1.
$$
Choosing $\phi$ of the form (we use a standard separation of
variables here)
$$\phi(x)=\prod_{i=1}^d \phi_i(x_i){\chi}_{B_\varepsilon}(x),\mathbf{q}uad x=(x_1,\dots,x_d),$$
with $\varepsilon$ small enough so that $\phi$ is supported in the
unit ball, we reduce the problem to the one-dimensional case:
$a(x)=\alpha x$ and construct a sequence of functions
$\phi_\sigma$ supported in $[-\varepsilon,\varepsilon]$ such that
$$\frac{\displaystyle
\int_{\mathbb{R}}\phi_\sigma( x)\phi_\sigma( a (x))dx}{\displaystyle \alpha^{-1/2}\int_{\mathbb{R}}\phi_\sigma^2(x)dx}
\mathbf{r}ightarrow 1.$$
We choose
$$\phi_\sigma (x)=\frac 1{|x|^{\sigma}}{\chi}_{(0,\varepsilon)}(x), \mathbf{q}quad \mbox{ with }\sigma<1/2.$$
Then
$$\int _{\mathbb{R}}\phi_\sigma^2(x)dx= \int _0^\varepsilon\frac 1{|x|^{2\sigma}}=\frac {\varepsilon^{1-2\sigma}}{1-2\sigma}$$
and
$$
\begin{array}{l}
\displaystyle \int_{ \mathbb{R}}\phi_\sigma( x)\phi_\sigma( a (x))dx =
\int _0^{\min\{\varepsilon,\varepsilon/\alpha\}}\frac 1{|x|^{\sigma}}\frac 1{|\alpha
x|^{\sigma}} \\
\displaystyle \mathbf{q}quad \mathbf{q}quad = \alpha^{-\sigma}\int
_0^{\min\{\varepsilon,\varepsilon/\alpha\}}\frac 1{|x|^{2\sigma}}=
\frac{\alpha^{-\sigma}\min\{\varepsilon,\varepsilon/\alpha\}^{1-2\sigma}}{1-2\sigma}.
\mathcal{E}d{array}
$$
Thus
$$\displaystyle\frac{\displaystyle
\int_{ \mathbb{R}}\phi_\sigma( x)\phi_\sigma( a (x))dx}{\displaystyle\alpha^{-1/2}\int_{\mathbb{R}}\phi_\sigma^2(x)dx}=
\frac{\alpha^{-\sigma}\min\{\varepsilon,\varepsilon/\alpha\}^{1-2\sigma}}{\alpha^{-1/2}\varepsilon^{1-2\sigma}}\mathbf{r}ightarrow 1, \mathbf{q}quad
\mbox{as }\sigma\mathbf{r}ightarrow 1/2.$$
This ends the proof.
\mathcal{E}d{proof}
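The one-dimensional construction above is easy to check directly. In the sketch below the power
integrals are evaluated in closed form and the normalized overlap is printed for several values
of $\sigma$; it equals $\alpha^{\sigma-1/2}$ and tends to $1$ as $\sigma\to 1/2$. The values
$\alpha=3$ and $\varepsilon=1/2$ are assumptions made only for this illustration.
\begin{verbatim}
# Trial functions phi_sigma(x) = x^(-sigma) on (0, eps) for a(x) = alpha*x:
# the ratio  int phi(x) phi(a(x)) dx / ( alpha^(-1/2) int phi^2 dx )  tends to 1.
alpha, eps = 3.0, 0.5

def power_integral(a, b, p):
    # int_a^b x^(-p) dx for 0 <= a < b and p < 1
    return (b ** (1.0 - p) - a ** (1.0 - p)) / (1.0 - p)

for sigma in [0.30, 0.40, 0.45, 0.49, 0.499]:
    norm2 = power_integral(0.0, eps, 2.0 * sigma)
    overlap = alpha ** (-sigma) * power_integral(0.0, min(eps, eps / alpha), 2.0 * sigma)
    print(sigma, overlap / (alpha ** (-0.5) * norm2))   # = alpha**(sigma - 1/2) -> 1
\end{verbatim}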
We proceed now to prove our result concerning linear functions $a$
when $A$ is diagonal.
\begin{theorem} \langlebel{teo.lineal.ii}
Let $a(x)= Ax $ be an invertible linear map that in addition is
assumed to be diagonal, that is, $a(x) =
(\alpha_1x_1,\dots,\alpha_dx_d)^T$ with $\alpha_i \in \mathbb{R}$. Then
$$\langlembda_1 (\mathbb{R}^d) =
{l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=2(1-|\det(A)|^{-1/2})^2\int _{\mathbb{R}^d}\psi(z)dz.$$
\mathcal{E}d{theorem}
\begin{proof}
Using the results of Theorem \mathbf{r}ef{teo.cota.por.arriba} and Lemma \mathbf{r}ef{minimizer}
(here we are using that $A$ is diagonal) we obtain that
$${l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)\leq 2(1-|\det(A)|^{-1/2})^2\int _{\mathbb{R}^d}\psi(z)dz.$$
On the other hand Theorem \mathbf{r}ef{teo.cota.por.abajo} gives us that
$${l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=\langlembda_1(\mathbb{R}^d)\geq 2(1-|\det(A)|^{-1/2})^2\int _{\mathbb{R}^d}\psi(z)dz.$$
Thus we conclude that
$${l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=2(1-|\det(A)|^{-1/2})^2\int _{\mathbb{R}^d}\psi(z)dz,$$
and the proof is finished.
\mathcal{E}d{proof}
Now our task is to extend the result, using different arguments, to a
general linear invertible map $a(x) =Ax$. In this case we use the
Jordan decomposition of $A$.
Recall that a linear map $a: \mathbb{R}^d \to \mathbb{R}^d$, $a(x)=Ax$, is
called expansive if the absolute values of the (complex)
eigenvalues of $A$ are bigger than one.
\begin{lemma}\langlebel{minimizerexpansive} Let $a: \mathbb{R}^d \to \mathbb{R}^d$
be an invertible linear map. If $a$ or $a^{-1}$ is expansive then
for functions $\phi\in L^2 (\mathbb{R}^d)$ supported in the unit ball with
$ \|\phi\|_{L^2(B_1)}=1$ the following holds:
$$\sup _{\phi} \int_{\mathbb{R}^d}\phi( x) \phi( a( x))dx
= |\det (A)|^{-1/2}.$$
Moreover, the supremum is
not attained.
\mathcal{E}d{lemma}
\begin{proof}
First, given $\phi$ as in the statement, we observe that
\begin{equation}\langlebel{kkl}
\int_{\mathbb{R}^d}\phi( x) \phi( a( x))dx \leq \left( \int_{\mathbb{R}^d}\phi^2( x) dx
\mathbf{r}ight)^{1/2} \left( \int_{\mathbb{R}^d}\phi^2( a(x)) dx
\mathbf{r}ight)^{1/2}.
\mathcal{E}d{equation}
Hence
$$\sup_{\phi} \int_{\mathbb{R}^d}\phi( x) \phi( a( x))dx
\leq |\det (A)|^{-1/2}.$$
Observe that in \eqref{kkl} we cannot have equality since in this case $\phi(a
(x))=\mu\phi(x)$ $a.e$ for some constant $\mu$. Since $a$ is expansive this implies that
$\phi$ should vanish identically.
Now we want to obtain the reverse inequality. Let us assume that $a$
is expansive. So, there exists $B \subset \mathbb{R}^d$ a ball with
center the origin such that $a^{-j}(B) \subset B_1$, $\forall j \in
\{ 0,1,\dots, \}$. Take the following sets
\begin{equation*}
\displaystyle F= \bigcup_{j=0}^{\infty}a^{-j}(B), \mathbf{q}quad E_l= a^{-l}(F) \setminus a^{-l-1}(F),
\mathbf{q}uad \textrm{for } l \in \{0,1, \dots \}
\mathcal{E}d{equation*}
and
\begin{equation*}
E= \bigcup_{j=0}^{\infty}E_j.
\mathcal{E}d{equation*}
Observe that given $l \in \{0,1, \dots \}$ we have $\mid E_l \mid_d
>0$. Here and in what follows we denote by $\mid \cdot \mid_d$ the
Lebesgue measure of a set in $\mathbb{R}^d$.
Since $|\det A| >1$, we have
\begin{eqnarray*}
|a^{-l}(F)|_d =|a(a^{-l-1}(F))|_d = |\det (A)| |a^{-l-1}(F)|_d >
|a^{-l-1}(F)|_d.
\mathcal{E}d{eqnarray*}
Next, let us observe that
\begin{equation} \langlebel{Fintersection}
E_j \cap E_l= \emptyset \mathbf{q}uad \textrm{if $j,l \in \{0,1,\dots\}$
and $j \neq l$}.
\mathcal{E}d{equation}
Also, since $F\supset a^{-1}(F)\supset a^{-2}(F)\supset \dots $
we have
$$|E_j|_d=|a^{-j}(F)|_d-|a^{-j-1}(F)|_d=|\det (A)|^{-j}(|F|_d-|a^{-1}(F)|_d)= |\det (A)|^{-j}|E_0|_d. $$
For any $0< \sigma<|\det (A)|^{1/2}$ we now choose
$$\phi_\sigma (x)= \sum_{j=0}^{\infty} \sigma^j \chi_{E_j}(x).$$
These functions are supported in the unit ball and belong to
$L^2(\mathbb{R}^d)$, in fact,
$$
\|\phi_\sigma \|^2_{L^2(\mathbb{R}^d)}= \sum_{j=0}^{\infty} \sigma^{2j} |E_j|_d =
|E_0|_d \sum_{j=0}^{\infty} \sigma^{2j} |\det (A)|^{-j} (<
\infty).$$
On the other hand,
\begin{eqnarray*}
\int_{ \mathbb{R}^d}\phi_\sigma( x)\phi_\sigma( a (x))dx &=&
\sum_{j=1}^{\infty} \sigma^{j-1} \sigma^j |E_j|_d = \sum_{j=1}^{\infty} \sigma^{j-1}
\sigma^j |\det (A)|^{-j} |E_0|_d \\
&=& \sigma |\det (A)|^{-1} |E_0|_d \sum_{j=1}^{\infty}
\sigma^{2(j-1)} |\det (A)|^{-j+1} \\ &= & \sigma |\det (A)|^{-1}
\|\phi_\sigma \|^2_{L^2(\mathbb{R}^d)}.
\mathcal{E}d{eqnarray*}
Thus
$$\frac{\displaystyle
\int_{ \mathbb{R}^d}\phi_\sigma( x)\phi_\sigma( a (x))dx}
{\displaystyle |\det (A)|^{-1/2}\int_{\mathbb{R}^d}\phi_\sigma^2(x)dx}= \sigma
|\det (A)|^{-1/2}
\mathbf{r}ightarrow 1, \mathbf{q}uad \text{as}\mathbf{q}uad \sigma\mathbf{r}ightarrow (|\det (A)|^{1/2})^{-},$$
which proves Lemma \mathbf{r}ef{minimizerexpansive} in the case of an expansive function.
Assume now that $a^{-1}$ is expansive. Let $\phi$ be as in the
statement, then after the change of variable $a(x)=y$ we have
$$ \int_{\mathbb{R}^d}\phi( x)\phi( a( x))dx =
|\det (A) |^{-1}\int_{\mathbb{R}^d}\phi( x)\phi( a^{-1}( x))dx.
$$
Hence, the proof finishes using the previous expansive case.
\mathcal{E}d{proof}
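For a concrete feeling of the trial functions used here, take $d=1$ and the expansive map
$a(x)=2x$ with $B=(-1,1)$; then $F=(-1,1)$, $E_j=\{2^{-j-1}\le |x|<2^{-j}\}$ and $|E_j|=2^{-j}$,
so both integrals reduce to geometric sums in $q=\sigma^2/|\det A|$. The short sketch below
(the truncation level of the series is an assumption made for the illustration) evaluates the
normalized overlap and shows it tending to $1$ as $\sigma$ approaches $|\det A|^{1/2}=\sqrt 2$.
\begin{verbatim}
import numpy as np

# phi_sigma = sum_j sigma^j chi_{E_j} for a(x) = 2x in dimension one:
# normalized overlap -> 1 as sigma -> sqrt(det A) = sqrt(2).
detA = 2.0
jmax = 200000                                  # series truncation (illustrative)

for sigma in [1.0, 1.2, 1.3, 1.38, 1.41, 1.414]:
    q = sigma ** 2 / detA                      # q < 1 since sigma < sqrt(det A)
    powers = q ** np.arange(jmax)
    norm2 = powers.sum()                       # int phi_sigma^2 dx, up to the factor |E_0|
    overlap = powers[1:].sum() / sigma         # int phi_sigma(x) phi_sigma(a(x)) dx / |E_0|
    print(sigma, overlap / (detA ** (-0.5) * norm2))
\end{verbatim}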
\begin{lemma}\langlebel{minimizeruni} Let $a: \mathbb{R}^d \to \mathbb{R}^d$, $a(x)=Ax$ be such that $A$ is
diagonalizable with all of its complex eigenvalues having
the absolute value equal to one. For functions $\phi\in L^2
(\mathbb{R}^d)$ supported in the unit ball with $
\|\phi\|_{L^2(B_1)}=1$ the following holds
$$\sup_{\phi} \int_{\mathbb{R}^d}\phi( x)\phi( a( x))dx
= \max_{\phi} \int_{\mathbb{R}^d}\phi( x)\phi( a( x))dx = 1.$$
\mathcal{E}d{lemma}
\begin{proof}
Take $\phi= |B_1|^{-1/2}_d\chi_{B_1}$ where $\chi_{B_1}$ is the
characteristic function of the ball with center the origin and
radius $1$. Since $\phi(a(x))= \chi_{B_1}(x)$, then the assertion
follows.
\mathcal{E}d{proof}
\begin{lemma}\langlebel{minimizeruni.44} Let $a: \mathbb{R}^d \to \mathbb{R}^d$, $a(x)=Ax$
be an invertible linear map such that the corresponding matrix
associated to the canonical basis is given by
\begin{equation} \langlebel{realJordan1}
J_k(\langlembda)= \left (
\begin{array}{ccccccccc}
\langlembda & 1 & & \\
& \ddots & & 1 \\
& & & \langlembda
\mathcal{E}d{array} \mathbf{r}ight ),
\mathcal{E}d{equation}
or
\begin{equation} \langlebel{complexJordan1}
\widetilde{J}_k({\bf T}_heta)= \left (
\begin{array}{ccccccccc}
\mathbf{M} & \mathbf{I} & & \\
& \ddots & & \mathbf{I} \\
& & & \mathbf{M}
\mathcal{E}d{array} \mathbf{r}ight ),
\mathcal{E}d{equation}
where $\langlembda\in \{\pm 1\}$, ${\bf T}_heta \in \mathbb{R}$, $\mathbf{M}
= \left (
\begin{array}{ccccccccc}
\cos {\bf T}_heta & \sin {\bf T}_heta \\
-\sin {\bf T}_heta & \cos {\bf T}_heta
\mathcal{E}d{array} \mathbf{r}ight )
$ and $\mathbf{I} = \left (
\begin{array}{ccccccccc}
1 & 0 \\
0 & 1
\mathcal{E}d{array} \mathbf{r}ight )$.
Then, if we consider functions $\phi\in L^2 (\mathbb{R}^d)$ supported in
the unit ball with $\|\phi\|_{L^2(B_1)}=1$, we get
$$\sup_\phi \int_{\mathbb{R}^d}\phi( x)\phi( a( x))dx
= 1.$$
\mathcal{E}d{lemma}
\begin{proof} {\bf Case I.}
Assume that the corresponding matrix of the linear map $a$
associated to the canonical basis is given by (\mathbf{r}ef{realJordan1}).
Given $j \in
\mathbb{N} $, $a^{j}(\mathbf{p})^t= (\langlembda^j + j \langlembda^{j-1},
\langlembda^j, 0, \dots, 0 )^t$ where $\mathbf{p}=(1, 1, 0, \dots,
0)\in \mathbb{R}^d$. Observe that $a^j(\mathbf{p}) \neq a^l(\mathbf{p})$
if $l,j \in \mathbb{N}$ and $j \neq l$. Indeed $\| a^j(\mathbf{p})
- a^l(\mathbf{p}) \| \geq 1$ if $j \neq l$. Thus,
$a^j(B_{1/4}(\mathbf{p})) \cap a^l(B_{1/4}(\mathbf{p})) =
\emptyset$ if $j \neq l$, where $B_{1/4}(\mathbf{p})$ is the ball
with center the point $\mathbf{p}$ and radius $1/4$.
Given $k \in \mathbb{N}$, $k \geq 5$, set the function
$$\phi_k(x)=
\sum_{j=0}^k \chi_{a^j(2^{-k}B_{1/4}(\mathbf{p}))}(x).
$$
Observe that the function $\phi_k$ is supported in the unit ball. If
$x$ is in the support of $\phi_k$ then there exists $j \in
\{0,1,\dots, k\}$ such that $| x - a^j(\mathbf{p}) | \leq 2^{-k-2}$
and we have
\begin{eqnarray*}
| x | & \leq& | x - a^j(\mathbf{p}) | + | a^j(\mathbf{p}) | \\
& \leq & 2^{-k-2} + 2^{-k}((1 + k )^2 + 1)^{1/2}
\leq 2^{-k-2} + 2^{-k} 3k \leq 2^{-k+1} 3k < 1.
\mathcal{E}d{eqnarray*}
Further,
$$
\|\phi_k\|^2_{L^2(\mathbb{R}^d)}= \sum_{j=0}^{k} |a^j(2^{-k}B_{1/4}(\mathbf{p}))|_d
= 2^{-kd}(k+1)|B_{1/4}(\mathbf{p})|_d.
$$
On the other hand,
\begin{eqnarray*}
\int_{ \mathbb{R}^d}\phi_k( x)\phi_k( a (x))dx &=& \sum_{j=1}^{k}
|a^j(2^{-k}B_{1/4}(\mathbf{p}))|_d = 2^{-kd}(k)|B_{1/4}(\mathbf{p})|_d
\mathcal{E}d{eqnarray*}
Thus
$$\frac{\displaystyle \int _{ \mathbb{R}^d}\phi_k( x)\phi_k( a (x))dx}{\displaystyle |\det (A)|^{-1/2}\int_{\mathbb{R}^d}
\phi_k^2(x)dx}= \frac{k}{k+1}
\mathbf{r}ightarrow 1, \mathbf{q}uad k \mathbf{r}ightarrow \infty.$$
Having in mind that $|\det (A)|=1$, that $\phi$ satisfies the
hypotheses in the statement and using H\"{o}lder's inequality we
obtain that
$$\sup_\phi \int_{\mathbb{R}^d}\phi( x)\phi( a( x))dx
\leq 1= |\det (A)|^{-1/2}.$$ Hence, the conclusion follows.
{\bf Case II.} Assume that the corresponding matrix of $A$ in the
canonical basis is given by (\mathbf{r}ef{complexJordan1}).
For any $j \in \mathbb{N} $ we set
\begin{equation}
a^{j}(\mathbf{q}) = \left (
\begin{array}{ccccccccc}
\cos (j{\bf T}_heta) +\sin (j{\bf T}_heta) + (j-1) \cos ((j-1){\bf T}_heta) +(j-1)\sin ((j-1){\bf T}_heta)
\\ \cos (j{\bf T}_heta) -\sin (j{\bf T}_heta) + (j-1) \cos ((j-1){\bf T}_heta) -(j-1)\sin ((j-1){\bf T}_heta) \\
\cos (j{\bf T}_heta) +\sin (j{\bf T}_heta) \\
\cos (j{\bf T}_heta) -\sin (j{\bf T}_heta) \\
0 \\
\cdots \\
0
\mathcal{E}d{array} \mathbf{r}ight ).
\mathcal{E}d{equation}
Observe that for $\mathbf{q} =(1,1,1,1,0\dots 0)$, we have
$a^{j}(\mathbf{q}) \neq \mathbf{q}$ if $j \in \{ 0, \dots, k \}$,
where $k$ is a nonnegative integer. So $a^j(\mathbf{q}) \neq
a^l(\mathbf{q})$ if $l,j \in \{ 0, \dots, k \}$ and $j \neq l$.
Thus, by continuity of the linear map $a$, there exists $B \subset
\mathbb{R}^d$ a ball with the center at the point $\mathbf{q}$ and radius
less or equal to $1$ such that $a^j(B) \cap a^l(B) = \emptyset$ if
$j,l \in \{ 0,1, \dots, k \}$, $j \neq l$.
Given $k \in \mathbb{N}$, $k \geq 7$, we set the function
$$\phi_k(x)=
\sum_{j=0}^k \chi_{a^j(2^{-k}B)}(x).$$ Observe that the function $\phi_k$
is supported in the unit ball. If $x$ is in the support of
$\phi_k$ then there exists $j \in \{0,1,\dots, k\}$ such that $|
x - a^j(\mathbf{q}) | \leq 2^{-k}$ and we have
\begin{eqnarray*}
| x | & \leq& | x - a^j(\mathbf{q}) | +
| a^j(\mathbf{q}) | \\
& \leq & 2^{-k} + 2^{-k}(2(2 + 2(j-1))^2 + 2^3 )^{1/2}
\leq 2^{-k} + 2^{-k}(2(2 + 2(k-1))^2 + 2 \ 2^2 )^{1/2} \\
&\leq& 2^{-k} + 2^{-k}(2(4(k-1))^2 + 2 (k-1)^2 )^{1/2}\\
& \leq& 2^{-k} + 2^{-k}(2^6(k-1)^2 )^{1/2} = 2^{-k} + 2^{-k+3}(k-1) \leq 2^{-k+4}(k-1) < 1.
\mathcal{E}d{eqnarray*}
Further,
$$
\|\phi_k\|^2_{L^2(\mathbb{R}^d)}= \sum_{j=0}^{k} |a^j(2^{-k}B)|_d = 2^{-kd}(k+1)|B|_d.
$$
On the other hand,
\begin{eqnarray*}
\int_{ \mathbb{R}^d}\phi_k( x)\phi_k( a (x))dx &=& \sum_{j=1}^{k}
|a^j(2^{-k}B )|_d = 2^{-kd}k|B|_d
\mathcal{E}d{eqnarray*}
Thus
$$\frac{\displaystyle \int _{ \mathbb{R}^d}\phi_k( x)\phi_k( a (x))dx}{\displaystyle |\det (A)|^{-1/2}
\int_{\mathbb{R}^d}\phi_k^2(x)dx}= \frac{k}{k+1}
\mathbf{r}ightarrow 1, \mathbf{q}uad k \mathbf{r}ightarrow \infty.$$
Now, we observe, as we did before, that
$$\sup_{\phi} \int_{\mathbb{R}^d} \phi( x)\phi( a( x))dx
\leq 1.$$
Hence, the conclusion follows.
\mathcal{E}d{proof}
Now we are ready to proceed with the proof of our main result
concerning linear maps $a$.
\begin{proof}[Proof of Theorem~\mathbf{r}ef{main.result}]
According to Theorem \mathbf{r}ef{teo.cota.por.abajo},
$${l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=\langlembda_1(\mathbb{R}^d)\geq 2(1-|\det (A)|^{-1/2})^2
\int _{\mathbb{R}^d}\psi(z)dz.$$ So, if we prove
\begin{equation} \langlebel{menoroigual}
{l^\infty(\zz)}m_{R\mathbf{r}ightarrow \infty}\langlembda_1(B_R)=\langlembda_1(\mathbb{R}^d)\leq 2(1-|\det (A)|^{-1/2})^2\int _{\mathbb{R}^d}\psi(z)dz,
\mathcal{E}d{equation}
the proof is finished. Let us see that (\mathbf{r}ef{menoroigual}) holds.
Using Jordan's decomposition, there exist $C$ and $J$ two
$d \times d$ invertible matrices with real entries such that $A=
CJC^{-1}$. Note that $J$ is formed by Jordan blocks, i.e.,
\begin{equation} \langlebel{Jordan}
J= \left (
\begin{array}{ccccccccc}
J_1 (\langlembda_1)& & & & & & \\
& \ddots & & & & & \\
& & & J_r (\langlembda_r) & & & \\
& & & & J_{r+1} (\alpha_1, \beta_1)& & & \\
& & & & & \ddots & & \\
& & & & & & & J_{r+ s} (\alpha_s, \beta_s)
\mathcal{E}d{array} \mathbf{r}ight ),
\mathcal{E}d{equation}
with
\begin{equation} \langlebel{realJordan}
J_k(\langlembda)= \left (
\begin{array}{ccccccccc}
\langlembda & 1 & & \\
& \ddots & & 1 \\
& & & \langlembda
\mathcal{E}d{array} \mathbf{r}ight ), \mathbf{q}uad k=1,\dots, r,
\mathcal{E}d{equation}
or
\begin{equation} \langlebel{complexJordan}
J_k(\alpha, \beta)= \left (
\begin{array}{ccccccccc}
\mathbf{M} & \mathbf{I} & & \\
& \ddots & & \mathbf{I} \\
& & & \mathbf{M}
\mathcal{E}d{array} \mathbf{r}ight ), \mathbf{q}uad k=r+1,\dots,r+s.
\mathcal{E}d{equation}
Here $\langlembda$, $\alpha$ and $\beta$ are real numbers, $\mathbf{M}
=
\left (
\begin{array}{ccccccccc}
\alpha & \beta \\
-\beta & \alpha
\mathcal{E}d{array} \mathbf{r}ight )
$ and $\mathbf{I} = \left (
\begin{array}{ccccccccc}
1 & 0 \\
0 & 1
\mathcal{E}d{array} \mathbf{r}ight )$.
For a $d_k \times d_k$ Jordan block $J_k$ as in
(\mathbf{r}ef{realJordan}) or (\mathbf{r}ef{complexJordan}), either $J_k$ or
$J_k^{-1}$ is expansive, or the corresponding eigenvalue has
absolute value equal to $1$. Then by Lemma
\mathbf{r}ef{minimizerexpansive} or Lemma \mathbf{r}ef{minimizeruni.44} there exists
$ \{\phi_j^{(k)} \}_{j=1}^{\infty} \in L^2(\mathbb{R}^{d_k})$, $\|
\phi_j^{(k)} \|_{L^2(\mathbb{R}^{d_k})}=1$, a sequence of
functions supported in the unit ball of $\mathbb{R}^{d_k}$ such that
\begin{equation} \langlebel{limsup}
{l^\infty(\zz)}m_{j \to \infty}\int_{\mathbb{R}^{d_k} }
\phi^{(k)}_j( x) \phi^{(k)}_j ( J_k (x))dx=|\det \ J_k|^{-1/2}.
\mathcal{E}d{equation}
For $j \in \mathbb{N}$, we choose
$$
\varphi_j(x_1^{(1)}, \dots, x_{d_1}^{(1)}, \dots \dots,
x_1^{(r+s)}, \dots, x_{d_{r+s}}^{(r+s)})= \prod_{k=1}^{r+s} \phi_j^{(k)}
(x_1^{(k)}, \dots, x_{d_k}^{(k)})
$$
and
$$
\Phi_j(x)= (r+s)^{-d/4}\| C^{-1} \|^{-1/2} | \det C |^{-1/2}
\varphi_j ((r+s)^{-1/2}\| C^{-1} \|^{-1} C^{-1} x),
$$
where $\| C^{-1} \|$ denotes the norm of $C^{-1}$ as operator on
$\mathbb{R}^d$. Observe that $\Phi_j$ is supported in $B_{1}$ and $\|
\Phi_j \|_{L^2(\mathbb{R}^d)}=1$. After the change of variable $\| C^{-1}
\|^{-1}C^{-1}x=y$, we have
\begin{equation}
\langlebel{maximizersaaaa}
\begin{array}{l}
\displaystyle {l^\infty(\zz)}m_{j \to \infty}\int_{\mathbb{R}^{d} }
\Phi_j( x) \Phi_j ( a (x))dx \\
\displaystyle = (r+s)^{-d/2}\| C^{-1} \|^{-1} | \det C |^{-1}
{l^\infty(\zz)}m_{j \to \infty}\int_{\mathbb{R}^{d} } \varphi_j((r+s)^{-1/2} \|
C^{-1}
\|^{-1} C^{-1}x) \\
\mathbf{q}quad \mathbf{q}quad \mathbf{q}quad \mathbf{q}quad \mathbf{q}quad \mathbf{q}quad \mathbf{q}quad \times
\varphi_j ((r+s)^{-1/2}\| C^{-1} \|^{-1} C^{-1}CJC^{-1} (x))dx \\
\displaystyle = {l^\infty(\zz)}m_{j \to \infty}\int_{\mathbb{R}^{d} }
\varphi_j( y) \varphi_j ( J y)dy = \prod_{k=1}^{r+s}{l^\infty(\zz)}m_{j \to \infty}\int_{\mathbb{R}^{d_k} }
\phi^{(k)}_j( x) \phi^{(k)}_j ( J_k( x))dx \\
\displaystyle = \prod_{k=1}^{r+s}|\det \ J_k(\langlembda _k)|^{-1/2} = |\det (A)|^{-1/2}.
\mathcal{E}d{array}
\mathcal{E}d{equation}
Again, using H\"{o}lder's inequality, we obtain, for any function
$\phi$ as in the statement,
\begin{equation} \langlebel{quimicos}
\int_{\mathbb{R}^{d} }
\phi ( x) \phi ( a (x))dx \leq |\det (A)|^{-1/2}.
\mathcal{E}d{equation}
Therefore, we have
\begin{align*}
\int _{\mathbb{R}^d}\Big(\phi( x)-\phi( a( x))\Big)^2&=1+|\det (A) |^{-1}-2\int_{ \mathbb{R}^d}\phi( x)\phi( a( x)),
\mathcal{E}d{align*}
then by (\mathbf{r}ef{maximizersaaaa}) and \eqref{quimicos},
$$
\inf_{\| \phi \|_{L^2(B_1)}=1}
\int_{\mathbb{R}^d}\Big(\phi( x)-\phi( a( x))\Big)^2 = (1-|\det (A)|^{-1/2})^2.
$$
Hence, using the results contained in Lemma \mathbf{r}ef{lem.up} the proof
is finished.
\mathcal{E}d{proof}
{\bf Acknowledgements.}
L. Ignat partially supported by grants PN-II-ID-PCE-2011-3-0075
and PN-II-TE 4/2010 of the Romanian National Authority for
Scientific Research, CNCS--UEFISCDI, MTM2011-29306-C02-00, MICINN,
Spain and ERC Advanced Grant FP7-246775 NUMERIWAVES.
J. D. Rossi and A. San Antolin partially supported by
DGICYT grant PB94-0153, MICINN, Spain. J. D. Rossi also
acknowledges support from UBA X066 (Argentina) and CONICET
(Argentina).
\begin{thebibliography}{XX}
\bibitem{AMRT} {\sc F. Andreu, J. M. Mazon, J.D. Rossi, J. Toledo},
{\em The Neumann problem for nonlocal nonlinear diffusion
equations}, J. Evol. Eqns. {\bf 8(1)} (2008), 189--215.
\bibitem{AMRT2} {\sc F. Andreu, J. M. Mazon, J.D. Rossi, J. Toledo},
{\em A nonlocal $p$-Laplacian evolution equation with Neumann
boundary conditions}, J. Math. Pures Appl. {\bf 90(2)}, (2008),
201--227.
\bibitem{libro} {\sc F. Andreu, J. M. Maz{\'o}n, J. D. Rossi, J. Toledo}.
{Nonlocal Diffusion Problems.} Amer. Math. Soc. Mathematical
Surveys and Monographs 2010. Vol. 165.
\bibitem{BN} {\sc G. Bachman, L. Narici}, {\em Functional analysis},
Dover, New York, 2000.
\bibitem{BCC} {\sc P. Bates, X. Chen, A. Chmaj}, {\em Heteroclinic
solutions of a van der Waals model with indefinite nonlocal
interactions}, Calc. Var. {\bf 24} (2005), 261--281.
\bibitem{BFRW} {\sc P. Bates, P. Fife, X. Ren, X. Wang}, {\em
Traveling waves in a convolution model for phase transitions},
Arch. Rat. Mech. Anal. {\bf 138} (1997), 105--136.
\bibitem{CCR} {\sc E. Chasseigne, M. Chaves, J.D. Rossi}, {\em
Asymptotic behavior for nonlocal diffusion equations}, J. Math.
Pures Appl. {\bf 86} (2006), 271--291.
\bibitem{CR1} {\sc A. Chmaj, X. Ren}, {\em Homoclinic solutions of
an integral equation: existence and stability}, J. Diff. Eqns.
{\bf 155} (1999), 17--43.
\bibitem{CCEM} {\sc C. Cort{\'a}zar, J. Coville, M. Elgueta, S.
Mart\'{\i}nez}, {\em A non local inhomogeneous dispersal process},
J. Diff. Eqns. {\bf 241} (2007), 332--358.
\bibitem{CER} {\sc C. Cort{\'a}zar, M. Elgueta, J.D. Rossi}, {\em A
nonlocal diffusion equation whose solutions develop a free
boundary}, Ann. Henri Poincar{\'e} {\bf 6} (2005), 269--281.
\bibitem{CERW1} {\sc C. Cort{\'a}zar, M. Elgueta, J.D. Rossi, N. Wolanski},
{\em Boundary fluxes for nonlocal diffusion}, J. Diff. Eqns. {\bf
234} (2007), 360--390.
\bibitem{CERW2} {\sc C. Cort{\'a}zar, M. Elgueta, J.D. Rossi, N. Wolanski},
{\em How to approximate the heat equation with Neumann boundary
conditions by nonlocal diffusion problems}, Arch. Rat. Mech. Anal.
{\bf 187} (2008) 137--156.
\bibitem{Co1} {\sc J. Coville}, {\em On uniqueness and monotonicity of
solutions on non-local reaction diffusion equations}, Ann. Mat.
Pura Appl. {\bf 185} (2006), 461--485.
\bibitem{CD2} {\sc J. Coville, L. Dupaigne}, {\em On a nonlocal
equation arising in population dynamics}, Proc. Roy. Soc.
Edinburgh {\bf 137} (2007), 1--29.
\bibitem{Du} {\sc Q. Du, M. Gunzburger, R.Lehoucq and K. Zhou.}
{\it A nonlocal vector calculus, nonlocal volume-constrained
problems, and nonlocal balance laws.} Preprint.
\bibitem{F} {\sc P. Fife}, {\em Some nonclassical trends in
parabolic and parabolic-like evolutions}, in ``Trends in nonlinear
analysis", pp. 153--191, Springer-Verlag, Berlin, 2003.
\bibitem{GR2} {\sc J. Garc\'{\i}a-Meli{\'a}n, J.D. Rossi}, {\em
Maximum and antimaximum principles for some nonlocal diffusion
operators}, Nonlinear Analysis TM\&A. {\bf 71}, (2009),
6116--6121.
\bibitem{GMR-autov} {\sc J. Garc\'{\i}a-Meli{\'a}n, J.D. Rossi}, {\em
On the principal eigenvalue of some nonlocal diffusion problems.}
J. Differential Equations. {\bf 246(1)}, (2009), 21--38.
\bibitem{GT} {\sc D. Gilbarg, N. Trudinger}, Elliptic partial differential equations of second order.
Classics in Mathematics. Springer-Verlag, Berlin, 2001.
\bibitem{HMMV} {\sc V. Hutson, S. Mart\'{\i}nez, K. Mischaikow,
G. T. Vickers}, {\em The evolution of dispersal}, J. Math. Biol.
{\bf 47} (2003), 483--517.
\bibitem{IR2} {\sc L. I. Ignat, J. D. Rossi}, {\em A nonlocal
convection-diffusion equation}, J. Funct. Anal. {\bf 251} (2007),
399--437.
\bibitem{IR3} {\sc L. I. Ignat, J. D. Rossi}, {\it Decay estimates for nonlocal problems via energy
methods.} J. Math. Pures Appl. {\bf 92(2)}, (2009), 163--187.
\bibitem{KR} {\sc M. G. Krein, M. A. Rutman}, {\em Linear operators
leaving invariant a cone in a Banach space}, Amer. Math. Soc.
Transl. {\bf 10} (1962), 199--325.
\bibitem{PLPS} {\sc M. L. Parks, R. B. Lehoucq, S. Plimpton, and S. Silling.}
{\it Implementing peridynamics within a molecular dynamics code},
Computer Physics Comm., {179}, (2008), 777--783.
\bibitem{Sill} {\sc S. A. Silling}. {\it
Reformulation of Elasticity Theory for Discontinuities and
Long-Range Forces}. J. Mech. Phys. Solids, {48}, (2000), 175--209.
\bibitem{SL} {\sc S. A. Silling and R. B. Lehoucq.}
{\it Convergence of Peridynamics to Classical Elasticity Theory}.
J. Elasticity, {93} (2008), 13--37.
\mathcal{E}d{thebibliography}
\mathcal{E}d{document}
\begin{document}
\title{Long time dynamics of a three-species food chain model with Allee effect in the top predator}
\maketitle
\centerline{\scshape Rana D. Parshad, Emmanuel Quansah and Kelly Black}
{\footnotesize
\centerline{ Department of Mathematics,}
\centerline{Clarkson University,}
\centerline{ Potsdam, New York 13699, USA.}
}
\centerline{\scshape Ranjit K. Upadhyay and S.K.Tiwari}
{\footnotesize
\centerline{Department of Applied Mathematics,}
\centerline{Indian School of Mines,}
\centerline{ Dhanbad 826004, Jharkhand, India.}
}
\centerline{\scshape Nitu Kumari}
{\footnotesize
\centerline{School of Basic Sciences,}
\centerline{Indian Institute of Technology Mandi,}
\centerline{ Mandi, Himachal Pradesh 175 001, India.}
}
\begin{abstract}
The Allee effect is an important phenomenon in population biology characterized by positive density dependence, that is, a positive correlation between population density and individual fitness. However, the effect is not well studied in multi-level trophic food chains. We consider a ratio-dependent spatially explicit three species food chain model, where the top predator is subjected to a strong Allee effect.
We show the existence of a global attractor for the model, that is upper semicontinuous in the Allee threshold parameter $m$. To the best of our knowledge this is the first robustness result for a spatially explicit three species food chain model with an Allee effect.
Next, we numerically investigate the decay rate to a target attractor, that is when $m=0$, in terms of $m$.
We find decay estimates that are $\mathcal{O}(m^{\gamma})$, where $\gamma$ is found explicitly.
Furthermore, we prove various overexploitation theorems for the food chain model, showing that overexploitation has to be driven by the middle predator. In particular overexploitation is not possible without an Allee effect in place. We also uncover a rich class of Turing patterns in the model which depend significantly on the Allee threshold parameter $m$. Our results have potential applications to trophic cascade control, conservation efforts in food chains, as well as Allee mediated biological control.
\end{abstract}
\begin{keywords}
three species reaction diffusion food chain model, global existence, global attractor, upper semi-continuity, Allee effect, Turing instability.
\end{keywords}
\section{\textbf{Introduction}}
Interactions of predator and prey species are ubiquitous in spatial ecology. Therein a predator, or ``hunting'' organism, hunts down and attempts to kill a prey in order to feed.
Food webs, which comprise all of the predator-prey interactions in a given ecosystem, are inherently more complex. The food chains, or linear links, of these food webs in real ecosystems have multiple levels of predator-prey interaction across various trophic levels.
To better understand natural ecosystems with multiple levels of trophic interaction, where predators and prey alike disperse in space in search of food, mates and refuge, a natural starting point is to deviate from the classical predator-prey two species models and investigate spatially explicit three species food chains.
Such models are appropriate to model populations of generalist predators, predating on a specialist predator, which in turn predates on a prey species. Also, they can model two specialist predators competing for a single prey \cite{M93, UR97}. The spatial component of ecological interaction has been identified as an important factor in how ecological communities are shaped, and thus the effects of space and spatially dispersing populations, via partial differential equations (PDE)/spatially explicit models of three and more interacting species, have been very well studied \cite{Gilligan1998, M93, okubo2001diffusion, sen2012bifurcation, PK14}. In most of these models, the prey is regulated or ``inhibited" from growing to carrying capacity due to predation by the predator. Whereas loss in the predator is due to death or intraspecific competition terms.
However, there are various other natural self regulating mechanisms in a population of predators (or prey).
For instance, much research in two species models has focused on one such mechanism: the so-called Allee effect. However, less attention has been paid to this mechanism in the three species case.
This effect, named after the ecologist Walter Clyde Allee, can occur whenever the fitness of an individual in a small or sparse population decreases as the population size or density does \cite{drake2011allee}. Since the pioneering work of Allee \cite{allee1931co,allee1949principles}, Allee dynamics has been regarded as one of the central issues in population and community ecology.
The effect can be best understood by the following equation for a single species $u$,
\begin{equation}
\label{eq:af1}
\frac{d u}{d t} = u(u-m)(1-u/K),
\end{equation}
essentially a modification of the logistic equation. Here $u(t)$, the state variable, represents the numbers of a certain species at a given time $t$, $K$ is the carrying capacity of the environment that $u$ resides in, and $m$ is the Allee threshold, with $m < K$. A strong Allee effect occurs if $m > 0$ \cite{van2007heteroclinic}. This essentially means that if $u$ falls below the threshold population $m$, its growth rate is negative, and the species will go extinct \cite{pal2014bifurcation}. If $m < 0$, then there is a weak Allee effect, and we have a compensatory growth function \cite{liermann2001depensation, bioeconomics1990optimal, clark2006worldwide}. For our purposes we assume a strong Allee effect is in place, that is, $m > 0$.
Note that $u^{*}=0,K$ are stable fixed points for \eqref{eq:af1}, and $u^{*}=m$ is unstable. Thus, dynamically speaking, the global attractor, which is the repository of all the long time dynamics of \eqref{eq:af1}, is $[0,K]$. An interesting question can now be posed: what happens to this attractor as $m \rightarrow 0$? Ecologically speaking, this is asking: what happens to the species $u$ as the Allee threshold $m$ is decreased? When $m=0$ the only stable fixed point is $u^{*}=K$, with $u^{*}=0$ being half-stable. Thus the global attractor is now reduced to a single point $K$. What we observe is that the difference between having a slight Allee effect (say $0 < m \ll 1$) and having no Allee effect ($m=0$) can \emph{change completely} what the global attractor of \eqref{eq:af1} is. That is to say, the global attractor changes from an entire set $[0,K]$ to a single fixed point $K$.
From an ecological point of view, this says as long as there is a slight Allee effect there is an extinction risk for $u$, but without one \emph{there is none}, and $u$ will always grow to carrying capacity.
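The threshold behaviour just described is simple to reproduce numerically. The sketch below (with assumed values $K=1$, $m=0.2$, and a plain explicit Euler step; none of these numbers are taken from the rest of the paper) integrates \eqref{eq:af1} from several initial densities: for $m>0$ the densities below the threshold collapse to $0$ while those above it reach $K$, whereas for $m=0$ every positive initial density reaches $K$.
\begin{verbatim}
# Illustrative integration of du/dt = u (u - m) (1 - u/K).
def simulate(u0, m, K=1.0, dt=1e-3, T=100.0):
    u = u0
    for _ in range(int(T / dt)):
        u = u + dt * u * (u - m) * (1.0 - u / K)   # explicit Euler step
    return u

for m in (0.2, 0.0):
    finals = [round(simulate(u0, m), 3) for u0 in (0.05, 0.15, 0.25, 0.6)]
    print("m =", m, "final densities:", finals)
\end{verbatim}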
Proving that an attractor $\mathcal{A}_{m}$ approaches a target attractor $\mathcal{A}_{0}$, in a continuous sense, in the case of spatially explicit/PDE models requires a fair amount of work. This involves making several estimates of functional norms, independent of the parameter $m$ \cite{SY02}. If this can be proven, however, the attractor $\mathcal{A}_{m}$ is said to be \emph{robust} at $m=0$. However, to the best of our knowledge there are no robustness results in three species food chain models where the parameter of interest is the Allee threshold $m$. Note that unless a system's dynamics are robust, there is no possibility of capturing them in a laboratory experiment or natural setting.
The critical importance of the Allee effect has been widely recognized in conservation biology: it is most likely to increase the extinction risk of low-density populations. This is also known as critical depensation in fisheries science. Essentially, a population must surpass the threshold $m$ to grow. Hence a small introduced population under a strong Allee effect can only succeed if it is faced with favorable ecological conditions, experiences rapid adaptive evolution, or simply has good luck, so that it can surpass this critical threshold.
There are several factors that might cause an Allee effect. These include difficulty of finding mates at low population densities, inbreeding depression and environmental conditioning \cite{tobin2011exploiting}.
The Allee effect can be regarded not only as a suite of problems associated with rarity, but also as the basis of animal sociality \cite{stephens1999consequences}.
Petrovskii et al. \cite{SPM2002} found that a deterministic system with an Allee effect can induce patch invasion. Morozov et al. \cite{AMP2004} found that the temporal population oscillations can exhibit chaotic dynamics even when the distribution of the species in space was regular. Also, Sharma and Samanta \cite{sharma2014ratio} developed a ratio-dependent predator-prey model with disease in the prey, as well as an Allee effect in the prey. Furthermore, invasion biologists have recently attempted to consider Allee effects as a benefit in limiting establishment of an invading species \cite{tobin2011exploiting}, which without the effect would possibly run amok and cause excessive damage to native species because of predation and competition.
Along the same lines, a phenomenon that occurs in large food chains is the excessive harvesting/predation of certain species in the chain by the predators in the trophic level above them. In many aquatic food chains this can lead to trophic cascades \cite{C85}, and it is among the major activities threatening global biodiversity. Formally, overexploitation refers to the phenomenon where excessive harvesting of a species can result in its extinction. Mathematically, for a two species predator-prey system, one can prove an overexploitation type theorem if one shows that, for a large enough initial density of the predator, the prey will be predated on until extinction. In this case, if the predator does not have an alternate food source, it will also subsequently go extinct. Although there is a fair amount of literature on this in two species models \cite{shi11}, it is less studied in three species models. In particular, the effect of the Allee effect itself on overexploitation, in multi-trophic level food chains, has not been sufficiently explored.
Our primary goal in the current manuscript is to propose and analyze a reaction-diffusion three species food chain model, with an Allee effect in the top predator. In particular, we aim to investigate the link between the Allee threshold parameter $m$, and the longtime dynamics of the three species food chain, in terms of
\begin{itemize}
\item Robustness of global attractors in terms of the Allee threshold parameter $m$.
\item The overexploitation phenomenon in the food chain model, as it is affected by the Allee threshold parameter $m$.
\item Pattern formation and the effect on patterns that form in the food chain model due to the Allee threshold parameter $m$.
\end{itemize}
Our hope is that this model could be used as a feasible toy model, to better understand the following in multi-trophic level food chains,
\begin{itemize}
\item Top-down pressure in multi-trophic level food chains leading to overexploitation phenomenon.
\item Trophic cascades and cascade control via an Allee effect.
\item Conservation efforts in food webs mediated via an Allee effect.
\item Biological control of an invasive top predator in a food chain via an Allee effect.
\end{itemize}
Thus, we consider a situation where a prey species $u$ serves as the only food for a specialist predator $v$ which is itself predated by a generalist top predator $r$. \\
This is a typical situation often seen in nature in various food chains \cite{P10}. The governing equations for populations $u$ and $v$ follow ratio-dependent functional responses and are modeled by the Volterra scheme, i.e., the middle predator population dies out exponentially in the absence of its prey. There has recently been much debate about functional responses used in ecology, and the ratio-dependent response is considered to be more realistic than its Holling type counterparts \cite{A00}. The above situation is described via the following system of PDEs, which is already nondimensionalised; see appendix \ref{app1} for the details of the nondimensionalisation.
\begin{align}
&\frac{\partial u}{\partial t}= d_1\Delta u + u-u^{2}-w_{1}\frac{uv}{u+v}, \label{eq:x1}\\
&\frac{\partial v}{ \partial t}= d_2 \Delta v -a_{2}v+w_{2}\frac{uv}{u+v}-w_{3}\left(\frac{vr}{v+r}\right), \label{eq:x2}\\
&\frac{\partial r}{ \partial t} = d_3 \Delta r + r\big({{r-m}}\big)\bigg(c-\frac{w_4r} {v+D_3}\bigg). \label{eq:x3}
\end{align}
The spatial domain for the above is $\Omega \subset \mathbb{R}^{n}$, $n=1,2,3$. $\Omega$ is assumed bounded, and we prescribe Neumann boundary conditions $\nabla u \cdot \textbf{n} = \nabla v \cdot \textbf{n} = \nabla r \cdot \textbf{n} = 0$. In the above, $a_2$ is the intrinsic death rate of the specialist predator $v$ in the absence of its only food $u$, and $c$ measures the rate of self-reproduction of the generalist predator $r$. The $w_i$'s are the maximum values which the per capita growth rates can attain. $D_{3}$ reflects that the top predator $r$ is a generalist and can switch its food source in the absence of $v$. Also, we model the top predator via a Leslie-Gower scheme \cite{UR97}, and assume that it is subject to the Allee effect. The parameter $m$ is the Allee threshold, or minimum viable population level. In this work, we limit ourselves to the case where $m > 0$, that is, we assume a strong Allee effect.
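Although the analysis that follows is carried out at the PDE level, the model is straightforward to simulate. The rough sketch below integrates a one-dimensional version of \eqref{eq:x1}-\eqref{eq:x3} with an explicit finite-difference scheme and zero-flux boundary conditions; every parameter value, the domain size and the initial data are assumptions chosen only to illustrate how such a simulation can be set up, and are not the parameter sets used later in the paper.
\begin{verbatim}
import numpy as np

# 1D explicit finite differences for the three-species model with zero-flux boundaries.
nx, Lx = 200, 100.0
dx = Lx / nx
dt, T = 0.01, 200.0
d1, d2, d3 = 1.0, 1.0, 1.0
w1, w2, w3, w4 = 1.0, 1.5, 1.0, 0.5
a2, c, D3, m = 0.5, 0.3, 10.0, 0.1
tiny = 1e-12                                  # guards the ratio-dependent terms

def lap(f):                                   # Neumann (zero-flux) Laplacian
    g = np.pad(f, 1, mode="edge")
    return (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx ** 2

rng = np.random.default_rng(0)
u = 0.5 + 0.01 * rng.standard_normal(nx)
v = 0.3 + 0.01 * rng.standard_normal(nx)
r = 0.3 + 0.01 * rng.standard_normal(nx)

for _ in range(int(T / dt)):
    fu = u - u ** 2 - w1 * u * v / (u + v + tiny)
    fv = -a2 * v + w2 * u * v / (u + v + tiny) - w3 * v * r / (v + r + tiny)
    fr = r * (r - m) * (c - w4 * r / (v + D3))
    u = u + dt * (d1 * lap(u) + fu)
    v = v + dt * (d2 * lap(v) + fv)
    r = r + dt * (d3 * lap(r) + fr)

print(u.mean(), v.mean(), r.mean())           # spatially averaged densities at time T
\end{verbatim}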
In particular we ask: How does the Allee threshold $m$ affect the various dynamical aspects of model system \eqref{eq:x1}-\eqref{eq:x3}? To this end we list our primary findings.
\begin{itemize}
\item We show that there is a $(L^{2}(\Omega),H^{1}(\Omega))$ global attractor for \eqref{eq:x1}-\eqref{eq:x3}, via theorem \ref{thm:ga2}.
\item We show that this attractor is upper semi-continuous w.r.t the Allee threshold $m$, via theorem \ref{thm:gaus1sc}. That is the attractor $\mathcal{A}_{m}$ is robust at $m=0$. This result requires making estimates of various functional norms, independent of the parameter $m$, and then proving and using a chain of theorems that facilitate passing to the limit as $m \rightarrow 0$. The calculations are detailed, so we confine them to an appendix section \ref{app}.
\item The decay rate of the global attractor $\mathcal{A}_{m}, m>0$, for the system \eqref{eq:x1}-\eqref{eq:x3}, to a target attractor $\mathcal{A}_{0}$ (that is when $m=0$) is estimated in terms of $m$ numerically. We find a decay rate of the order $\mathcal{O}(m^{\gamma})$, where $\gamma$ is close to 1.
\item We investigate overexploitation phenomenon in the model system \eqref{eq:x1}-\eqref{eq:x3}, showing that the Allee threshold $m$ can effect overexploitation in the system via theorems \ref{thm:ox}, \ref{thm:ox1} and lemma \ref{lem:ox2}. In particular overexploitation is \emph{not possible} without an Allee effect in place.
\item Turing instability exists in the model system \eqref{eq:x1}-\eqref{eq:x3}, via theorem \ref{thm:tur1}. Furthermore, the Allee threshold $m$ has a significant impact on the type of Turing patterns that form, see figures \ref{fig:turing1}, \ref{fig:turing2}, \ref{fig:turing3}, \ref{fig:turing4}, \ref{fig:turing5}, \ref{fig:Disp1}.
\end{itemize}
\section{Preliminary Estimates}
\subsection{Preliminaries}
We now present various notations and definitions that will be used frequently.
The usual norms in the spaces $\mathbb{L}^{p}(\Omega )$, $\mathbb{L}^{\infty
}(\Omega )$ and $\mathbb{C}\left( \overline{\Omega }\right) $ are
respectively denoted by
\begin{equation*}
\left\Vert u\right\Vert _{p}^{p}\text{=}\frac{1}{\left\vert \Omega
\right\vert }\int\limits_{\Omega }\left\vert u(x)\right\vert ^{p}dx,
\end{equation*}
\begin{equation*}
\left\Vert u\right\Vert _{\infty }=\max_{x\in \Omega }\left\vert u(x)\right\vert .
\end{equation*}
We define the following phase spaces
\begin{equation*}
H= L^{2}(\Omega)\times L^{2}(\Omega) \times L^{2}(\Omega) ,
\end{equation*}
\begin{equation*}
E= H^{1}(\Omega)\times H^{1}(\Omega) \times H^{1}(\Omega),
\end{equation*}
\begin{equation*}
X= H^{2}(\Omega) \times H^{2}(\Omega) \times H^{2}(\Omega) .
\end{equation*}
We now recall the following lemma.
\begin{lemma}[Uniform Gronwall Lemma]
\label{lem:gronwall}
Let $\beta, \zeta$, and $h$ be nonnegative functions in $L^{1}_{loc}([0,\infty);\mathbb{R})$. Assume that $\beta$
is absolutely continuous on $(0,\infty)$ and the following differential inequality is satisfied:
\begin{equation}
\frac{d \beta}{dt} \leq \zeta \beta + h, \ \mbox{for} \ t>0.
\end{equation}
If there exists a finite time $t_{1} > 0$ and some $q > 0$ such that
\begin{equation}
\int^{t+q}_{t}\zeta(\tau) d\tau \leq A, \ \int^{t+q}_{t}\beta(\tau) d\tau \leq B, \ \mbox{and} \ \int^{t+q}_{t} h(\tau) d\tau \leq C,
\end{equation}
for any $t > t_{1}$, where $A, B$, and $C$ are some positive constants, then
\begin{equation}
\beta(t) \leq \left(\frac{B}{q}+C\right)e^{A}, \ \mbox{for \ any} \ t > t_{1}+q.
\end{equation}
\end{lemma}
\subsection{Uniform $L^{2}(\Omega)$ and $H^{1}(\Omega)$ estimates}
In all estimates made henceforth, the constants $C, C_{1}, C_{2}, C_{3}, C_{\infty}$ are generic constants that can change in value from line to line, and sometimes within the same line if so required.
Using positivity of the solution, and comparison arguments, one is easily able to prove global existence of classical solutions to system
\eqref{eq:x1}-\eqref{eq:x3}.
We state the following result.
\begin{proposition}
\label{ge1}
All positive solutions of the system
\eqref{eq:x1}-\eqref{eq:x3}, with initial data in $\mathbb{L}^{\infty }(\Omega )$ are classical and global.
\end{proposition}
See appendix \ref{app0} for the proof.
Although we have global existence of classical solutions via proposition \ref{ge1}, we can actually derive uniform $L^{\infty}(\Omega)$ bounds for initial data only in $L^2(\Omega)$. This will facilitate the estimates required to show existence of bounded absorbing sets and the compactness of trajectories, which will lead to the existence of a global attractor in the phase space $H$. Since the construction of global attractors requires a Hilbert space setting, where the appropriate function spaces are the Hilbert spaces $L^{2}(\Omega)$ and $H^{s}(\Omega)$, $s=1,2$, we want to proceed by showing there is a weak solution in these function spaces. We then make estimates on this class of solutions, to proceed with the requisite analysis for the global attractors. We state the following theorem:
\begin{theorem}
\label{thm:csol}
Consider the three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}. For any initial data $(u_0,v_0,r_0)$ in $L^{2}(\Omega)$, and spatial dimension $n=1, 2, 3$, there exists a global weak solution $(u,v,r)$ to the system, which becomes a strong solution and then a classical solution.
\end{theorem}
The proofs of proposition \ref{ge1} and theorem \ref{thm:csol} follow via remark \ref{ab} and the estimates referred to therein.
We state the following result next,
\begin{lemma}
\label{lem:lemba}
Consider solutions $(u,v,r)$ to the diffusive three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}. For any $(u_{0},v_{0},r_{0}) \in L^{2}(\Omega)$, and spatial dimension $n=1, 2, 3$, there exists a time $t^{*}(||u_{0}||_{2},||v_{0}||_{2})$, and a constant $C$ independent of time, initial data and the Allee threshold $m$,
and dependent only on the other parameters in \eqref{eq:x1}-\eqref{eq:x3}, such that for any $t > t^{*}$ the following uniform estimates hold:
\begin{equation}
\label{eq:xnn1}
||u||^{2}_{2} \leq C, || v||^{2}_{2} \leq C, \int^{t+1}_{t}||\nabla u||^{2}_{2}ds \leq C, \int^{t+1}_{t}||\nabla v||^{2}_{2}ds \leq C, ||\nabla u||^{2}_{2} \leq C, ||\nabla v||^{2}_{2} \leq C.
\end{equation}
\end{lemma}
This follows easily via the estimates and methods of \cite{P10,PK13,PK14}.
Note that what differs here from \cite{P10,PK13,PK14} is the equation for $r$, particularly due to an Allee effect now being modeled.
We now proceed with the estimate on $r$. We multiply \eqref{eq:x3} by $r$ and integrate by parts to obtain
\begin{equation}
\label{eq:x11}
\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + d_{3}||\nabla r||^{2}_{2} + \frac{w_4}{||v||_{\infty}+D_{3}}||r||^{4}_{4} \leq \left(c+\frac{mw_{4}}{D_{3}} \right)||r||^{3}_{3}.
\end{equation}
Thus we obtain
\begin{equation}
\label{eq:x11n7}
\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + d_{3}||\nabla r||^{2}_{2} + cm||r||^{2}_{2} + \frac{w_4}{||v||_{\infty}+D_{3}}||r||^{4}_{4} \leq \left(c+\frac{mw_{4}}{D_{3}} \right)||r||^{3}_{3}.
\end{equation}
We then use H\"{o}lder's inequality followed by Young's inequality to obtain
\begin{eqnarray}
\label{eq:x11r}
&&\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + d_{3}||\nabla r||^{2}_{2} + cm||r||^{2}_{2} + \frac{w_4}{||v||_{\infty}+D_{3}}||r||^{4}_{4}, \nonumber \\
&& \leq \frac{w_4}{||v||_{\infty}+D_{3}}||r||^{4}_{4} + C_{1}(||v||_{\infty} + D_{3} + m^4)|\Omega|. \nonumber \\
\end{eqnarray}
Thus we obtain
\begin{equation}
\label{eq:x11n8}
\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + cm||r||^{2}_{2} \leq C_{1}(||v||_{\infty} + D_{3} + m^4)|\Omega|,
\end{equation}
which implies
\begin{equation}
\label{eq:x11n2n}
||r||^{2}_{2} \leq e^{-cmt}||r_{0}||^{2}_{2} + \frac{ C_{1}(||v||_{\infty} + D_{3} + m^4)|\Omega|}{cm}.
\end{equation}
We now use the estimate for $||v||_{\infty}$ via \eqref{eq:liea} (which does not depend on $r$; see remark \ref{rv}) to obtain
the following estimate for $r$,
\begin{equation}
\label{eq:x11n2}
||r||^{2}_{2} \leq 1 + \frac{ C_{1}(C + D_{3} + m^4)|\Omega|}{cm}, \ \mbox{for} \ t > \max\left(t^{*}+1, \frac{ \ln(||r_{0}||^{2}_{2})}{ cm }\right).
\end{equation}
Integrating \eqref{eq:x11r} in the time interval $[t,t+1]$ we obtain
\begin{equation}
\label{eq:x11n23n}
\int^{t+1}_{t}||\nabla r||^{2}_{2} ds \leq 1 + \frac{ C_{1}(C + D_{3} + m^4)|\Omega|}{cm} + 2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}},
\end{equation}
for $t > \max\left(t^{*}+1, \frac{ \ln(||r_{0}||^{2}_{2})}{ cm }\right)$.
Here, $C, C_{1}$ do not depend on $m$.
\begin{remark}
What we notice is that the estimates via \eqref{eq:x11n2}, \eqref{eq:x11n23n} depend singularly on $m$. That is, they yield no information if we try to pass to the limit as $m \rightarrow 0$.
\end{remark}
We now estimate the gradient of $r$. See appendix \ref{app2} for details. What the standard analysis in appendix \ref{app2} shows is that we have an estimate of the form
\begin{equation}
\label{eq:f1ns}
\mathop{\limsup}_{t \rightarrow \infty} ||\nabla r||^{2}_{2} \leq C.
\end{equation}
\begin{remark}
The constant $C$ in \eqref{eq:f1ns} is independent of time and initial conditions, \emph{but depends singularly on} $m$.
In our estimates we apply the uniform Gronwall lemma, which requires us to use the estimate via \eqref{eq:x11n23n}. This also depends singularly on $m$; thus the estimate \eqref{eq:f1ns} is uniform with respect to time and initial data but depends singularly on $m$.
\end{remark}
\begin{remark}
\label{ab}
The uniform $H^{1}(\Omega)$ estimates via lemma \ref{lem:lemba} and \eqref{eq:f1d4nn} give us uniform $L^{6}(\Omega)$ bounds in $\mathbb{R}^{3}$. We can now prove the reaction terms are in $L^{p}(\Omega)$ for $p>\frac{3}{2}$. It suffices to show that
\begin{equation}
||u-u^{2}-w_{1}\frac{uv}{u+v}||_{2} \leq C||u||^{2}_{4} \leq C,
\end{equation}
\begin{equation}
||-a_{2}v+w_{2}\frac{uv}{u+v}-w_{3}\left(\frac{vr}{v+r}\right)||_{2} \leq C||v||_{2} \leq C,
\end{equation}
\begin{equation}
||r\big({{r-m}}\big)\bigg(c-\frac{w_4r} {v+D_3}\bigg)||_{2} \leq C||r||^{3}_{6} \leq C.
\end{equation}
But the above follows via the uniform $L^{6}(\Omega)$ bounds on $u,v,r$. This proves proposition \ref{ge1}. Furthermore, the estimate via \eqref{eq:f1ns} and lemma \ref{lem:lemba} allow us to derive the appropriate $L^2(\Omega)$
and $H^{1}(\Omega)$ bounds on a Galerkin truncation of \eqref{eq:x1}-\eqref{eq:x3}, extract appropriate subsequences and pass to the limit as is standard \cite{T97, SY02}, to prove theorem \ref{thm:csol}.
\end{remark}
\subsection{Uniform-in-$m$ $L^{2}(\Omega)$ and $H^{1}(\Omega)$ estimates for $r$}
Note the estimates via \eqref{eq:x11n2}, \eqref{eq:f1ns} are singular in $m$. This causes extensive difficulties if we try to pass to the limit as $m \rightarrow 0$, hence making it difficult to prove upper semicontinuity. Our next goal is to derive \emph{uniform in $m$ estimates} on $r$. These are derived in detail in appendix \ref{app3}. However, the key estimate derived therein is
\begin{eqnarray}
\label{eq:naa}
||\nabla r||^{2}_{2} \leq C_{K} e^{K C_{K}}, \ \mbox{for} \ t > t_{1}=t_{0} + 1.
\end{eqnarray}
Here $C_{K}, K$ are independent of $m$. Thus the $H^{1}(\Omega)$ estimate for $r$ can be made independent of $m$.
\subsection{Uniform $H^{2}(\Omega)$ estimates}
We will now estimate the $H^2(\Omega)$ norms of the solution. We rewrite \eqref{eq:x1} as
\[
u_t- d_1 \Delta u = u-u^{2}-w_{1}\left(\frac{u v }{u+v}\right).
\]
\noindent We square both sides of the equation and integrate by parts over $\Omega$ to obtain
\begin{eqnarray}
\label{eq:x1pq}
&&||u_t||^{2}_{2} + d_{1}||\Delta u||^{2}_{2} + \frac{d}{dt}||\nabla u||^{2}_{2} , \nonumber \\
&=& \left\| \left( u-u^{2}-w_{1}\left(\frac{u v }{u+v}\right) \right) \right\|_2^2 ,\nonumber \\
&\leq& C\left(||u||^{2}_{2}+(w_{1})^2||u||^{2}_{2} + ||u||^{4}_{4}\right) \leq C. \nonumber
\end{eqnarray}
\noindent This result follows by the embedding of $H^{1}(\Omega) \hookrightarrow L^4(\Omega) \hookrightarrow L^2(\Omega)$.
We now make uniform in time estimates of the higher order terms. Integrating the estimates of \eqref{eq:x1pq} in the time interval $[T,T+1]$, for $T > t^{*}$
\[
\int^{T+1}_{T}||u_t||^{2}_{2} dt \leq C_1, \ \int^{T+1}_{T}||\Delta u||^{2}_{2}dt \leq C_2.
\]
\noindent Similarly we adopt the same procedure for $v$ to obtain
\[
\int^{T+1}_{T}||v_t||^{2}_{2}dt \leq C_1, \ \int^{T+1}_{T}||\Delta v||^{2}_{2}dt \leq C_2.
\]
Next, consider the gradient of \eqref{eq:x1}. Following the same technique as in deriving \eqref{eq:x1pq} we obtain for the left hand side
\begin{eqnarray}
\label{eq:gut1}
&& ||\nabla u_t||^{2}_{2} + d_{1}||\nabla (\Delta u)||^{2}_{2} + \frac{d}{dt}||\Delta u||^{2}_{2} + \int_{\partial \Omega}\Delta u \nabla u_{t} \cdot \textbf{n} dS, \nonumber \\
&&=||\nabla u_t||^{2}_{2} + d_{1}||\nabla (\Delta u)||^{2}_{2} + \frac{d}{dt}||\Delta u||^{2}_{2} + \int_{\partial \Omega}\Delta u \frac{\partial}{\partial t}(\nabla u \cdot \textbf{n} )dS, \nonumber \\
&&= ||\nabla u_t||^{2}_{2} + d_{1}||\nabla (\Delta u)||^{2}_{2} + \frac{d}{dt}||\Delta u||^{2}_{2}.
\end{eqnarray}
\noindent This follows via the boundary condition. Thus we have
\begin{eqnarray}
\label{eq:x1pq1}
&& ||\nabla u_t||^{2}_{2} + d_{1}||\nabla (\Delta u)||^{2}_{2} + \frac{d}{dt}||\Delta u||^{2}_{2}, \nonumber \\
&& = \left\| \nabla \left( u-u^{2}-w_{1}\left(\frac{u v }{u+v}\right) \right)\right\|_2^2, \nonumber \\
&& \leq C(|| u||^{4}_{4} + || v||^{4}_{4} + || \Delta u||^{4}_{2} + || \nabla v||^{2}_{2} + || \nabla u||^{2}_{2}).
\end{eqnarray}
\noindent This follows via Young's inequality with epsilon, as well as the embedding of $H^{2}(\Omega)\hookrightarrow W^{1,4}(\Omega) \hookrightarrow H^{1}(\Omega)$. This implies that
\[
\frac{d}{dt}||\Delta u||^{2}_{2} \leq C || \Delta u||^{4}_{2} + C(|| u||^{4}_{4} + || v||^{4}_{4} + || \nabla v||^{2}_{2} + || \nabla u||^{2}_{2}).
\]
\noindent We now use the uniform Gronwall lemma with
\[ \beta(t) = ||\Delta u||^{2}_{2}, \ \zeta(t)=||\Delta u||^{2}_{2} , \ h(t)= C(|| u||^{4}_{4} + || v||^{4}_{4} + || \nabla v||^{2}_{2} + || \nabla u||^{2}_{2}),~~ q=1, \]
\noindent to obtain
\[
||\Delta u||_{2} \leq C, \ ||\Delta v||_{2} \leq C, \ \mbox{for} \ t > t^{*} + 1.
\]
The estimate for $v$ is derived similarly.
\begin{remark}
\label{rv}
It is critical to notice that the estimate for $||\Delta v||_{2}$ does not involve $r$. This is because, in order to derive it, we adopt the same procedure as in \eqref{eq:x1pq}, but for the $v$ equation, and square both sides to proceed. Therein notice that the term involving $r$, namely $\frac{vr}{v+r}$, can be estimated as $||\frac{vr}{v+r}||_{2} \leq C||\frac{r}{v+r}||_{\infty} ||v||_{2} \leq C_{1} ||v||_{2}$. So the right hand side has no dependence on $r$.
\end{remark}
In order to estimate the $H^2(\Omega)$ norm of $r$ we proceed similarly to obtain
\begin{eqnarray}
\label{eq:x1pqr}
||r_t||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} + \frac{d}{dt}||\nabla r||^{2}_{2}&=& \left\| \left( \left( c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} - mcr \right) \right\|_2^2, \nonumber \\
&\leq& C\left(||r||^{2}_{2}+||r||^{4}_{4}\right) + ||r||^{6}_{6}, \nonumber \\
&\leq & C \left(||\nabla r||^{2}_{2}\right)^{3}. \nonumber \\
\end{eqnarray}
We now make uniform in time estimates of the higher order terms. Integrating the estimates of \eqref{eq:x1pqr} in the time interval $[T,T+1]$, for $T > t_{1}$, and using the estimates via \eqref{eq:nrn} yields
\[
\int^{T+1}_{T}||r_t||^{2}_{2} dt \leq C_1, \ \int^{T+1}_{T}||\Delta r||^{2}_{2}dt \leq C_2.
\]
Next, consider the gradient of \eqref{eq:x3}. Following the same technique as in deriving \eqref{eq:x1pq} we obtain
\begin{eqnarray}
\label{eq:x1pqr4}
||\nabla r_t||^{2}_{2} + d_{3}||\nabla (\Delta r)||^{2}_{2} + \frac{d}{dt}||\Delta r||^{2}_{2}&& = \left\| \nabla \left( \left( c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} - mcr \right) \right\|_2^2, \nonumber \\
&& \leq C\left( ||\nabla r||^{2}_{2}\right)^{2}. \nonumber \\
\end{eqnarray}
This yields
\begin{equation}
\label{eq:lieah2}
\frac{d}{dt}||\Delta r||^{2}_{2} \leq C\left( ||\nabla r||^{2}_{2}\right)^{2} \left( ||\Delta r||^{2}_{2}\right)+ C_{2}\left( || r||^{2}_{2}\right)^{2}.
\end{equation}
We can now use the estimates via \eqref{eq:rl2n}, \eqref{eq:nrn} in conjunction with the uniform Gronwall lemma to yield a uniform bound on $||\Delta r||_{2}$.
\noindent Thus, via elliptic regularity,
\begin{equation}
\label{eq:h2ea}
|| u||_{H^{2}(\Omega)} \leq C, \ ||v||_{H^{2}(\Omega)} \leq C, \ ||r||_{H^{2}(\Omega)} \ \leq C \ \mbox{for} \ t > \max(t_{1}+1,t^{*} + 1).
\end{equation}
\noindent Since $H^2(\Omega) \hookrightarrow L^{\infty}(\Omega)$ in $\mathbb{R}^2$ and $\mathbb{R}^{3}$, the following estimate is valid in $\mathbb{R}^2$ and $\mathbb{R}^{3}$
\begin{equation}
\label{eq:liea}
||u||_{\infty} \leq C, \ ||v||_{\infty} \leq C, \ ||r||_{\infty} \leq C \ \mbox{for} \ t > \max(t_{1}+1,t^{*} + 1).
\end{equation}
\section{Long Time Dynamics}
\subsection{Existence of global attractor}
In this section we prove the existence of an $(H,H)$ global attractor for system \eqref{eq:x1}-\eqref{eq:x3}, which is subsequently demonstrated to be an $(H,E)$ attractor.
We now state the following theorem,
\begin{theorem}
\label{thm:ga1}
Consider the reaction diffusion system described via \eqref{eq:x1}-\eqref{eq:x3}, where $\Omega$ is of spatial dimension $n=1, 2, 3$. There exists an $(H,H)$ global attractor $\mathcal{A}$ for the system. This is compact and invariant in $H$, and it attracts all bounded subsets of $H$ in the $H$ metric.
\end{theorem}
\textbf{Proof:}
We have shown that the system is well posed via theorem \ref{thm:csol}. Thus there exists a well defined semigroup $\left\{S(t)\right\}_{t \geq 0}:H \rightarrow H$. The estimates derived in lemma \ref{lem:lemba} demonstrate the existence of bounded absorbing sets in $H$ and $E$. Thus, given a sequence $\left\{u_{0,n}\right\}^{\infty}_{n=1}$ that is bounded in $L^{2}(\Omega)$, we know that for $t > t^{*}$,
\begin{equation}
S(t)(u_{0,n}) \subset B \subset H^{1}(\Omega).
\end{equation}
Here $B$ is the bounded absorbing set in $H^{1}(\Omega)$ from lemma \ref{lem:lemba}. Now for $n$ large enough, $t_{n} > t^{*}$; thus for such $t_{n}$ we have
\begin{equation}
S(t_{n})(u_{0,n}) \subset B \subset H^{1}(\Omega).
\end{equation}
This implies that we have the following uniform bound,
\begin{equation}
\label{eq:h1ga}
||S(t_{n})(u_{0,n})||_{H^{1}(\Omega)} \leq C.
\end{equation}
This implies, via standard functional analysis theory (see \cite{T97}), the existence of a subsequence, still labeled $S(t_{n})(u_{0,n})$, such that
\begin{equation}
S(t_{n})(u_{0,n}) \rightharpoonup u \ \mbox{in} \ H^{1}(\Omega),
\end{equation}
which implies, via the compact Sobolev embedding
\begin{equation}
E \hookrightarrow H,
\end{equation}
that
\begin{equation}
S(t_{n})(u_{0,n}) \rightarrow u \ \mbox{in} \ L^{2}(\Omega).
\end{equation}
This yields the asymptotic compactness of the semigroup $\left\{S(t)\right\}_{t \geq 0}$ in $H$. The convergences for the $v,r$ components follow similarly. The theorem is now proved.
$\square$
Next we can state the following theorem
\begin{theorem}
\label{thm:ga2}
Consider the reaction diffusion system described via \eqref{eq:x1}-\eqref{eq:x3}, where $\Omega$ is of spatial dimension $n=1, 2, 3$. There exists an $(H,E)$ global attractor $\mathcal{A}$ for the system. This is compact and invariant in $H$, and it attracts all bounded subsets of $H$ in the $E$ metric.
\end{theorem}
\textbf{Proof:}
The proof essentially follows that of theorem \ref{thm:ga1} verbatim. The existence of a bounded absorbing set in $E$ follows from lemma \ref{lem:lemba}. In order to prove the asymptotic compactness in $E$, we use the uniform $H^2(\Omega)$ estimates in \eqref{eq:h2ea}, and the compact Sobolev embedding of $H^2(\Omega) \hookrightarrow H^1(\Omega)$.
$\square$
\subsection{Upper-semicontinuity of Global attractor with respect to the Allee parameter $m$}
\label{alleeThreshold}
Here we state and prove the main semicontinuity result. This result follows via a series of estimates and theorems, all of which we derive for the reader in appendix \ref{app4}.
\begin{theorem}
\label{thm:gaus1sc}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3} where $\Omega$ is of spatial dimension $n=1, 2, 3$. Given a set of positive parameters excluding $m$, the family of global attractors is upper semicontinuous with respect to the Allee threshold $m \geq 0$ as it converges to zero. That is
\begin{equation}
dist_{E}(\mathcal{A}_{m},\mathcal{A}_{0}) \rightarrow 0, \ \mbox{as} \ m \rightarrow 0^{+}.
\end{equation}
\end{theorem}
\begin{proof}
We know that for $\epsilon > 0$, the global attractor $\mathcal{A}_{0}$ for the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3} attracts $\mathcal{U}$. This follows via theorems \ref{thm:gaus1}, \ref{thm:gaus12} and \ref{thm:gaus13}. Thus there exists a finite time $t_{\epsilon} > 0$, such that
\begin{equation}
S_{0}(t_{\epsilon})\mathcal{U} \subset \mathcal{N}_{E}(\mathcal{A}_{0},\frac{\epsilon}{2}).
\end{equation}
Here, $\mathcal{N}_{E}(\mathcal{A}_{0},\frac{\epsilon}{2})$ is the $\frac{\epsilon}{2}$ ball of $\mathcal{A}_{0}$ in the space $E$.
Now we also know by the uniform convergence theorem, theorem \ref{thm:gaum} that there exists a $m_{\epsilon} \in (0,1]$ such that
\begin{equation}
\sup_{g_{0} \in \mathcal{U}} ||S_{m}(t_{\epsilon})g_{0} - S_{0}(t_{\epsilon})g_{0}||_{E} \leq \frac{\epsilon}{2} , \ \mbox{for} \ \mbox{any} \ m \in (0, m_{\epsilon}].
\end{equation}
Since we know $\mathcal{A}_{m}$ is invariant we have
\begin{equation}
\mathcal{A}_{m} = S_{m}(t_{\epsilon})\mathcal{A}_{m} \subset S_{m}(t_{\epsilon})\mathcal{U} \subset \mathcal{N}_{E}(S_{0}(t_{\epsilon})\mathcal{U},\frac{\epsilon}{2})\subset \mathcal{N}_{E}(\mathcal{A}_{0},\epsilon).
\end{equation}
Thus one obtains the upper semicontinuity,
\begin{equation}
dist_{E}(\mathcal{A}_{m},\mathcal{A}_{0}) \rightarrow 0, \ \mbox{as} \ m \rightarrow 0^{+}.
\end{equation}
This proves the theorem.
\end{proof}
\subsection{Computing the decay rate with respect to the Allee threshold parameter $m$}
In the previous subsection \ref{alleeThreshold}, the role of the Allee threshold parameter $m$ on the long time dynamics of system \eqref{eq:x1}-\eqref{eq:x3} was explored. Theorem \ref{thm:gaus1sc}
shows that the global attractors $\mathcal{A}_{m}$ converge to $\mathcal{A}_{0}$ in the $H^{1}(\Omega)$ norm, as $m$ decreases to zero. It is also seen how changing $m$ affects the Turing dynamics in subsection \ref{turing}.
Theorem \ref{thm:gaus1sc} is a purely theoretical result. It gives no information as to the exact decay rate in terms of the parameter $m$. In general, there are no results in the literature to estimate this decay rate. We now aim to examine the role of
$m$ with respect to convergence, as $m$ decreases to zero as discussed
earlier. We aim to explicitly find the decay rate of $\mathcal{A}_{m}$ to a target attractor $\mathcal{A}_{0}$ (that is, when $m=0$). We choose Turing patterns as the state of interest. In particular, the numerical scheme discussed in subsection \ref{turing} was employed, and the Turing patterns at an extended time are examined. The goal is to
gather numerical evidence as to the dependence on $m$ with respect to
the convergence rate,
\begin{eqnarray}
\label{eq:mBounds}
\| U_m - U_0 \|_{H^1(\Omega)} & < & C m^\gamma,
\end{eqnarray}
where $U_0=(u_0, v_0, r_0)$ is the solution for $m=0$, $U_m=(u_m,v_m,r_m)$ is the solution for any
given value of $m$, and both $C$ and $\gamma$ are constants.
The numerical approximation for the case $m=0$ was established at the
scaled time, $t=10,000$. The approximations, $u_m(x,t)$, $v_m(x,t)$,
and $r_m(x,t)$, were then established for a range of values of $m$
with 121 equally spaced values from 0 to 0.0035. The approximation for
each value of $m$ was established up to $t=10,000$, and it was
determined that for each value of $m$ in this range a fixed spatial
pattern was found. The $H^1(\Omega)$ errors for the numerical approximation
were then calculated with respect to the $m=0$ case,
\begin{eqnarray*}
\| u_m(\cdot,10,000) - u_0(\cdot,10,000)\|_{H^1(\Omega)}, \\
\| v_m(\cdot,10,000) - v_0(\cdot,10,000)\|_{H^1(\Omega)}, \\
\| r_m(\cdot,10,000) - r_0(\cdot,10,000)\|_{H^1(\Omega)}.
\end{eqnarray*}
\begin{figure}
\caption{The $H^1(\Omega)$ errors for the differences
$\|u_m(\cdot,10,000)-u_0(\cdot,10,000)\|$,
$\|v_m(\cdot,10,000)-v_0(\cdot,10,000)\|$, and
$\|r_m(\cdot,10,000)-r_0(\cdot,10,000)\|$ are shown for a range
of values of $m$. The solid line is the linear least squares
regression line. }
\label{fig:l2Error}
\end{figure}
\begin{figure}
\caption{The $H^1(\Omega)$ errors for the differences
$\|u_m(\cdot,10,000)-u_0(\cdot,10,000)\|$,
$\|v_m(\cdot,10,000)-v_0(\cdot,10,000)\|$, and
$\|r_m(\cdot,10,000)-r_0(\cdot,10,000)\|$ are shown for a range
of values of $m$ but are plotted in a log log format. The solid
line is the linear least squares regression line for the log-log
data. }
\label{fig:l2ErrorMLog}
\end{figure}
The errors are shown in Figure \ref{fig:l2Error} and Figure
\ref{fig:l2ErrorMLog}. The first figure, Figure \ref{fig:l2Error},
shows the raw errors, and the second figure, Figure
\ref{fig:l2ErrorMLog}, shows the errors as a log-log plot. A linear
least squares fit for the data was also calculated, and the best fit
straight line is shown on both plots. In both cases the close linear
fit for larger values of $m$ contributes to strong, positive
correlations in the data. However, for smaller values of $m$ the log-log
data moves away from a straight line. For the smaller values of $m$
the errors in the numerical approximation are of the same order as the
error in the $H^1(\Omega)$ norm, and the data no longer follows the same
trend.
Examining the errors for $u_m$, the correlation for the raw data is
0.99995, and the data reveals a strong, positive linear relationship.
In this scenario, it is assumed that the value of $\gamma$ in equation
\eqref{eq:mBounds} is one. The approximation for the slope of the raw
data gives an estimate of 20.6 with a 95\% confidence interval between
20.6206 and 20.6215. The 95\% confidence interval for the intercept is
between -9.0E-4 and -5.1E-5. Due to the large number of samples, the
error in the confidence interval is very small, but the estimate for
the parameter is close to zero.
Examining the log-log data, the correlation of the errors is 0.993.
The approximation for the slope of the log-log
data gives an estimate of 1.003 with a 95\% confidence interval
between 0.72 and 1.28. The 95\% confidence interval for the intercept
is between 2.7 and 3.3. Note that the value of one is within the
confidence interval making it difficult to claim that the relationship
is not linear. The estimate for the slope, in fact, is very close to
one indicating that the value of $\gamma$ in equation
\eqref{eq:mBounds} is close to one.
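To make the fitting procedure above concrete, the following is a minimal sketch (in Python) of how the decay exponent $\gamma$ in \eqref{eq:mBounds} can be estimated from the computed $H^1(\Omega)$ errors via a linear least squares fit, both for the raw data and for the log-log data. The arrays \texttt{m\_values} and \texttt{h1\_errors} are illustrative placeholders; in practice they would hold the 120 nonzero samples of $m$ and the corresponding measured errors.
\begin{verbatim}
import numpy as np

# Hypothetical data: nonzero values of the Allee threshold m and the
# corresponding H^1(Omega) errors ||u_m - u_0||_{H^1} from the simulations.
m_values = np.linspace(0.0035 / 120, 0.0035, 120)   # 120 nonzero samples, illustrative
h1_errors = 20.6 * m_values                         # placeholder; replace with measured errors

# Raw-data fit: error ~ slope * m + intercept (tests the gamma = 1 hypothesis).
slope, intercept = np.polyfit(m_values, h1_errors, 1)

# Log-log fit: log(error) ~ gamma * log(m) + log(C), so the slope estimates gamma.
gamma, logC = np.polyfit(np.log(m_values), np.log(h1_errors), 1)

print(f"raw slope       = {slope:.4f}, intercept = {intercept:.2e}")
print(f"estimated gamma = {gamma:.4f}, C = {np.exp(logC):.4f}")
\end{verbatim}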
\section{Overexploitation Phenomenon}
Overexploitation refers to the phenomenon where excessive harvesting of a species can result in its extinction. This finds applications in conservation, biodiversity, cascade control and so on. Although there is a fair amount of literature on this in two species models \cite{shi11}, it is much less studied in three species models.
To begin, we state the following theorem
\begin{theorem}[Middle predator and Allee effect mediated overexploitation]
\label{thm:ox}
Consider $(u,v,r)$ that are solutions to the diffusive three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}. If $w_{1} > a_{2}+1+w_{3}$, then for any given initial prey density $u_{0}$ there exists a threshold $M_{1}=\frac{1}{\alpha}||u_{0}||_{p}$, such that if $M_{1} <||v_{0}||_{p}$ and $||r_{0}||_{\infty} < \min(m,\frac{cD_{3}}{w_{4}})$, then $(u,v,r) \rightarrow (0,0,0)$ uniformly for $x \in \bar{\Omega}$ as $t \rightarrow \infty$.
\end{theorem}
\textbf{Proof:}
We begin by looking at the equation for $u$, following the ideas in \cite{K98}:
\begin{equation}
\label{eq:x1x0}
\frac{\partial u}{\partial t}= d_1\Delta u + u-u^{2}-w_{1}\frac{uv}{u+v} = d_1\Delta u + u-u^{2}-w_{1}\frac{u}{\frac{u}{v}+1}.
\end{equation}
We claim that for all $t$, $||\frac{u}{v}||_{\infty} < \alpha$, and $\lim_{t \rightarrow \infty}||u||_{p} \rightarrow 0$, $\forall p$, if $\frac{||u_{0}||_{p}}{||v_{0}||_{p}} < \alpha$. If not, there exists a first time $t_{1}$ such that
$\frac{||u||_{p}}{||v||_{p}} < \alpha$ on $t \in [0,t_{1}]$ and $\frac{||u(t_{1})||_{p}}{||v(t_{1})||_{p}} = \alpha$.
Since $w_{1} > a_{2}+1 + w_{3}$, there exists an $\alpha$ such that $\frac{w_{1}}{1+\alpha} = 1 + a_{2}+w_{3}$, thus
\begin{eqnarray}
\label{eq:x1x0}
\frac{\partial u}{\partial t}= d_1\Delta u + u-u^{2}-w_{1}\frac{u}{\frac{u}{v}+1} && \leq d_1\Delta u + u-w_{1}\frac{u}{\alpha+1}, \nonumber \\
&& = d_1\Delta u + u-u -a_{2}u -w_{3}u, \nonumber \\
&& = d_1\Delta u -(a_{2}+w_{3})u.
\end{eqnarray}
Thus $u$ satisfies
\begin{equation}
||u||_{p} \leq ||u_{0}||_{p} e^{-(a_{2}+w_{3})t}.
\end{equation}
This easily follows via multiplying \eqref{eq:x1} by $|u|^{p-1}$, and integrating by parts.
We also notice that $\tilde{v}$, solving the following equation with $\tilde{v}_{0} = \sup_{x} v_{0}(x)$, is a supersolution to \eqref{eq:x2},
\begin{equation}
\label{eq:x2xo}
\frac{\partial \tilde{v}}{ \partial t} = -(a_{2}+w_{3})\tilde{v}.
\end{equation}
Multiplying through by $|v|^{p-1}$, and integrating by parts yields
\begin{equation}
\label{eq:x2xo1}
||v||_{p} \geq ||v_{0}||_{p} e^{-(a_{2}+w_{3})t}.
\end{equation}
This implies
\begin{equation}
\label{eq:x2xo1}
\frac{||u||_{p}}{||v||_{p}} \leq \frac{||u_{0}||_{p}}{||v_{0}||_{p}} < \alpha.
\end{equation}
This implies a contradiction. Thus $\lim_{t \rightarrow \infty}||u||_{p} \rightarrow 0$, which implies uniform convergence of a subsequence, say $u_{n_{j}}$ to 0 (where $u_{n}$ is a Galerkin truncation of $u$ and standard theory is adopted \cite{SY02}), and by uniqueness of solutions,
$u \rightarrow 0$ uniformly, as $t \rightarrow \infty$. This reduces the $v$ equation to
\begin{equation}
\label{eq:x2ne}
\frac{\partial v}{ \partial t}= d_2 \Delta v - a_{2}v -w_{3}\left(\frac{vr}{v+r}\right) \leq d_2 \Delta v - a_{2}v,
\end{equation}
and it is easy to see that $v \rightarrow 0$ uniformly, as $t \rightarrow \infty$.
Now the $r$ equation reduces to
\begin{equation}
\frac{\partial r}{ \partial t} = d_3 \Delta r + r\big({{r-m}}\big)\bigg(c-\frac{w_4r} {D_3}\bigg),
\end{equation}
and if $||r_{0}||_{\infty} < \min(m,\frac{cD_{3}}{w_{4}})$ standard analysis yields
$r \rightarrow 0$ uniformly.
This proves the theorem.
$\square$
\begin{remark}
Note that in the three species food chain model \eqref{eq:x1}-\eqref{eq:x3}, overexploitation has to be middle predator mediated, and also depends on the Allee threshold.
Clearly if $||r_{0}||_{p} > \max(m,\frac{cD_{3}}{w_{4}})$, or $\frac{cD_{3}}{w_{4}} < ||r_{0}||_{p} < m$, or $\frac{cD_{3}}{w_{4}} > ||r_{0}||_{p} > m$, $r$ cannot go extinct.
\end{remark}
Also, if the attack rate of the top predator $r$ is large enough, it could cause $v$ to go extinct, before $u$ does. Once $v$ goes extinct $u \rightarrow 1$, again prohibiting overexploitation.
\begin{theorem}[Top predator mediated overexploitation avoidance]
\label{thm:ox1}
Consider $(u,v,r)$ that are solutions to the diffusive three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}. If $w_{3} > w_{2}+1+w_{1}$, then for any given initial middle predator density $v_{0}$ there exists a threshold $M_{2}=\frac{1}{\alpha_{1}}||v_{0}||_{p}$, such that if $\max \left(M_{2},m, \frac{cD_{3}}{w_{4}}\right) < ||r_{0}||_{p}$ then $(u,v,r) \rightarrow (1,0,r^{*})$ uniformly as $t \rightarrow \infty$.
\end{theorem}
\textbf{Proof:}
We first derive a lower bound on $u$. Trivially we have $||u||_{\infty} \leq 1$. Now
\begin{equation}
\frac{\partial u}{ \partial t} \geq - u^2 -w_{1}\frac{vu}{v+u} \geq -u-w_{1}u.
\end{equation}
This yields
\begin{equation}
||u||_{p} \geq ||u_{0}||_{p} e^{-(1+w_{1})t}.
\end{equation}
Next we claim that for all $t$, $||\frac{v}{r}||_{\infty} < \alpha_{1}$, and $\lim_{t \rightarrow \infty}||v||_{p} \rightarrow 0$, $\forall p$, if $\frac{||v_{0}||_{p}}{||r_{0}||_{p}} < \alpha_{1}$. If not, there exists a first time $t_{2}$ such that
$\frac{||v||_{p}}{||r||_{p}} < \alpha_{1}$ on $t \in [0,t_{2}]$ and $\frac{||v(t_{2})||_{p}}{||r(t_{2})||_{p}} = \alpha_{1}$.
Due to the choice of $w_{3} > w_{2}+1 + w_{1}$, there exists an $\alpha_{1}$ such that $\frac{w_{3}}{1+\alpha_{1}} = 1 + w_{2}+w_{1}$, thus
\begin{eqnarray}
\label{eq:x1x0}
\frac{\partial v}{\partial t} && = d_2\Delta v - a_{2}v + w_{2}\frac{vu}{v+u} - w_{3}\frac{v}{\frac{v}{r}+1}, \nonumber \\
&& \leq d_2\Delta v - a_{2}v + w_{2}v - \frac{w_{3}}{1+\alpha_{1}}v, \nonumber \\
&& = d_2\Delta v - a_{2}v + w_{2}v - (1 + w_{2}+w_{1})v. \nonumber \\
\end{eqnarray}
Thus $v$ satisfies
\begin{equation}
||v||_{p} \leq ||v_{0}||_{p} e^{-(a_{2}+1+w_{1})t}.
\end{equation}
By the restriction on $r_{0}$, we see that trivially $r \rightarrow m$ or $r \rightarrow \frac{cD_{3}}{w_{4}}$.
This implies
\begin{equation}
\label{eq:x2xo1}
\frac{||v||_{p}}{||r||_{p}} \leq \frac{||v_{0}||_{p}}{||r_{0}||_{p}} < \alpha_{1}.
\end{equation}
This implies a contradiction. Thus $\lim_{t \rightarrow \infty}||v||_{p} \rightarrow 0$, which implies uniform convergence to 0 of a subsequence, say $v_{n_{j}}$ (where $v_{n}$ is a Galerkin truncation of $v$ and standard theory is adopted \cite{SY02}), and by uniqueness $v \rightarrow 0$ uniformly as $t \rightarrow \infty$. Looking at the decay rate for $v$, we see it converges to 0 faster than $u$, so $u > 0$, when $v \rightarrow 0$ uniformly, and so then the $u$ equation reduces to
\begin{equation}
\frac{\partial u}{ \partial t}= d_1 \Delta u + u - u^2,
\end{equation}
and we know via standard theory that now
$u \rightarrow 1$ uniformly.
This proves the theorem.
$\square$
\begin{remark}
This also happens if $\frac{cD_{3}}{w_{4}} < ||r_{0}||_{\infty} < m$, or $\frac{cD_{3}}{w_{4}} > ||r_{0}||_{\infty} > m$.
\end{remark}
The following lemma describes persistence in the top predator.
\begin{lemma}[Persistence in top predator]
\label{lem:ox2}
Consider $(u,v,r)$ that are solutions to the diffusive three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}. If $\min(m,\frac{cD_{3}}{w_{4}}) < ||r_{0}||_{\infty}$, then $r \rightarrow \max(m,\frac{cD_{3}}{w_{4}})$ uniformly as $t \rightarrow \infty$, hence persists.
\end{lemma}
\textbf{Proof:}
Note $c- \frac{w_{4}r}{v+D_{3}} > c- \frac{w_{4}r}{D_{3}} $, due to positivity of $r,v$. Thus
we can compare the solution of \eqref{eq:x3} to the solution of the ODE
\begin{equation}
\label{eq:x2nn}
\frac{d\tilde{r}}{dt} = \tilde{r}(\tilde{r}-m)\bigg(c- \frac{w_{4}\tilde{r}}{D_{3}}\bigg),
\end{equation}
with $\tilde{r}_{0}= \min_{x}(r_{0}(x)) > \min(m,\frac{cD_{3}}{w_{4}}) $. Clearly $\tilde{r} \rightarrow \max(m,\frac{cD_{3}}{w_{4}})$ as $t \rightarrow \infty$, and since $r$ solving \eqref{eq:x3} is a supersolution to $\tilde{r}$ solving \eqref{eq:x2nn}, we have $\liminf_{t \rightarrow \infty} \min_{\bar{\Omega}} r(x,t) \geq \max(m,\frac{cD_{3}}{w_{4}})$.
$\square$
\begin{remark}
In the event that there is no Allee effect, or $m=0$, $r \rightarrow \frac{cD_{3}}{w_{4}}$ uniformly as $t \rightarrow \infty$, hence always persists, independent of all other parameters or initial data $(u_{0}, v_{0}, r_{0})$. Thus in a three species system where the top predator is a generalist, and can switch its favorite food source, true overexploitation can only take place if an Allee effect is in place, and not otherwise.
\end{remark}
\begin{remark}
All of the overexploitation theorems were verified numerically, for various ranges of parameter values; a sketch of such a check for the kinetic (ODE) system is given below.
\end{remark}
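For concreteness, the following is a minimal sketch (in Python) of such a numerical check for the kinetic (ODE) version of \eqref{eq:x1}-\eqref{eq:x3} under the hypotheses of theorem \ref{thm:ox}. The parameter values and initial data are illustrative choices satisfying the stated conditions, not the ones used in the paper, and a small regularisation is added to the ratio dependent terms to avoid the indeterminate form near total extinction.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters satisfying the hypotheses of theorem thm:ox:
# w1 > a2 + 1 + w3, u0/v0 < alpha with w1/(1+alpha) = 1 + a2 + w3,
# and r0 < min(m, c*D3/w4).  These values are NOT the ones used in the paper.
w1, w2, w3, w4 = 3.0, 0.5, 0.5, 0.5
a2, c, D3, m = 0.1, 0.1, 0.1, 0.2
eps = 1e-12   # regularises the ratio-dependent terms near total extinction

def rhs(t, y):
    u, v, r = y
    du = u - u**2 - w1 * u * v / (u + v + eps)
    dv = -a2 * v + w2 * u * v / (u + v + eps) - w3 * v * r / (v + r + eps)
    dr = r * (r - m) * (c - w4 * r / (v + D3))
    return [du, dv, dr]

y0 = [0.2, 0.5, 0.01]          # u0/v0 = 0.4 < alpha = 0.875, r0 < c*D3/w4 = 0.02
sol = solve_ivp(rhs, (0.0, 500.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
print("final (u, v, r):", sol.y[:, -1])   # expected to approach (0, 0, 0)
\end{verbatim}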
\section{Turing Instability}
\label{turing}
In this section we establish the condition needed to ensure a positive interior steady state for model system (\ref{eq:x1}-\ref{eq:x3}). We focus on deriving the conditions necessary and sufficient for Turing instability to occur as a result of the introduction of diffusion. This phenomenon is referred to as \emph{diffusion driven instability}, and was first introduced by Alan Turing \cite{Turing52}. We first establish the steady states for model system (\ref{eq:x1od}-\ref{eq:x3od}).
\subsection{Steady States}
Before we can begin the analysis of Turing instability, we have to establish the conditions needed to ensure a positive interior steady state for the ODE version of model system \eqref{eq:x1}-\eqref{eq:x3}, which is given as
\begin{align}
&\frac{du}{dt}= u-u^{2}-w_{1}\frac{uv}{u+v},\label{eq:x1od} \\
&\frac{dv}{ dt}= -a_{2}v+w_{2}\frac{uv}{u+v}-w_{3}\left(\frac{vr}{v+r}\right),\label{eq:x2od}\\
&\frac{dr}{ dt} = r\big({{r-m}}\big)\bigg(c-\frac{w_4r} {v+D_3}\bigg). \label{eq:x3od}
\end{align}
Also all parameters associated with model system (\ref{eq:x1od}-\ref{eq:x3od}) are assumed to be positive constants and have the usual biological meaning. \\
The model system \eqref{eq:x1od}-\eqref{eq:x3od} has the following non-negative steady states
\begin{enumerate}
\item Total Extinction of the three species: $E_0(0,0,0)$.
\item Predator free steady state: $E_1(1,0,0)$.
\item No primary food source for predator steady state: $E_2(0,0,m)$ and $E_3(0,0,\frac{cD_3} {w_4})$.
\item Top Predator free steady state: $E_4(\tilde {u},\tilde {v},0)$ where $\tilde {u}=\frac{w_2-w_1w_2+a_2w_1} {w_2},\tilde {v}=\frac{(w_2-a_2)\tilde{u}}{a_2}$ if $w_2>w_1(w_2-a_2),w_2>a_2$.
\item Middle Predator free steady state: $E_5(1,0,\frac{cD_3} {w_4})$ and $E_6(1,0,m)$.
\item Coexistence of Species: $E_7(\tilde {\tilde {u}},\tilde {\tilde {v}},m)$ where $\tilde {\tilde {v}}= \frac{(1-\tilde {\tilde {u}})\tilde {\tilde {u}}}{(w_1+\tilde {\tilde {u}}-1)}, (1-{w_1})<\tilde {\tilde {u}} <1, w_1<1$ and $\tilde {\tilde {u}}$ is the real positive root of following equation:\\
\begin{equation}\nu_1u^3+\nu_2u^2+\nu_3u+\nu_4=0,
\label{eq:4}
\end{equation} where $ \nu_1=w_2,\, \nu_2 = -[a_2w_1+w_2(-w_1+(2+m))], \nu_3= [a_2w_1-w_1w_2+w_2+mw_1w_3-m(-a_2w_1+2w_2(w_1-1))], \, \nu_4=m(w_1-1)[w_2+w_1(w_3-(w_2-a_2))].$
\item Coexistence of Species with interior equilibrium population: $E_8(u^{*},v^{*},r^{*})$ is the solution of following equations:
\begin{subequations} \label{eq:5}
\begin{eqnarray}
1-u-\frac{w_1v} {u+v}=0, \\
-a_2+\frac{w_2u} {u+ v}-\frac{w_3r} { r+v}=0,\label{eq:5ab}\\
c-\frac{w_4 r} {v+D_3}=0.\label{eq:5ac}
\end{eqnarray}
\end{subequations}
\end{enumerate}
From \eqref{eq:5} we have
\begin{eqnarray}
\label{eq:6}
v=\frac{(1-u)u} {w_1+u-1},\\
\label{eq:1.11}
r=\frac{c(v+D_3)} {w_4}.
\end{eqnarray}
Putting the values of $v$ and $r$ from \eqref{eq:6} and \eqref{eq:1.11} into \eqref{eq:5ab}, we get
\begin{eqnarray}
\label{eq:1.12}
\alpha_1 u^3-\alpha_2 u^2-\alpha_3 u-\alpha_4=0,
\end{eqnarray}
where\\
$\alpha_1=w_2(w_4+c)$,\\ $\alpha_2= [w_4(a_2w_1-w_1w_2+2w_2)+c(w_2(2+ D_3)+w_1(w_3+a_2-w_2))]$,\\$\alpha_3=[-w_2(w_4+c)-cD_3w_1(w_3+(a_2-2w_2))-(w_1(a_2-w_2)w_4+c(2D_3w_2+w_1(w_3+a_2-w_2)))]$,\\
$\alpha_4=-cD_3(w_1-1)[w_2+w_1(w_3+a_2-w_2)]$.\\
$u^*$ is the real positive root of \eqref{eq:1.12}. Hence the interior equilibrium point $ E_8(u^*,v^*,r^*)$ exists if $(1 -w_1)<u^{*}<1$ and $w_1<1$. Knowing the value of $u^*$, we can find the values of $v^*$ and $r^*$ from \eqref{eq:6} and \eqref{eq:1.11} respectively. A numerical sketch of this computation is given below.\\
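The following is a minimal sketch (in Python) of this computation: the coefficients $\alpha_1,\ldots,\alpha_4$ are assembled from a given parameter set, the real positive root of \eqref{eq:1.12} in the admissible range is selected, and $v^*$, $r^*$ are recovered from \eqref{eq:6} and \eqref{eq:1.11}. The parameter values are illustrative.
\begin{verbatim}
import numpy as np

# Illustrative parameter values (not necessarily in the Turing space of the paper).
w1, w2, w3, w4 = 0.96, 0.52, 1.06, 0.37
a2, c, D3 = 0.014, 0.1, 0.1

# Coefficients alpha_1..alpha_4 of the cubic (eq:1.12) in u, copied from the text.
a1_ = w2 * (w4 + c)
a2_ = (w4 * (a2 * w1 - w1 * w2 + 2 * w2)
       + c * (w2 * (2 + D3) + w1 * (w3 + a2 - w2)))
a3_ = (-w2 * (w4 + c) - c * D3 * w1 * (w3 + (a2 - 2 * w2))
       - (w1 * (a2 - w2) * w4 + c * (2 * D3 * w2 + w1 * (w3 + a2 - w2))))
a4_ = -c * D3 * (w1 - 1) * (w2 + w1 * (w3 + a2 - w2))

# eq:1.12 reads  alpha_1 u^3 - alpha_2 u^2 - alpha_3 u - alpha_4 = 0.
roots = np.roots([a1_, -a2_, -a3_, -a4_])
admissible = [z.real for z in roots
              if abs(z.imag) < 1e-12 and (1 - w1) < z.real < 1]
for u_star in admissible:
    v_star = (1 - u_star) * u_star / (w1 + u_star - 1)   # eq:6
    r_star = c * (v_star + D3) / w4                       # eq:1.11
    print(f"E8 = ({u_star:.4f}, {v_star:.4f}, {r_star:.4f})")
\end{verbatim}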
\begin{remark}
From a realistic biological point of view, we are only interested in the dynamical behavior of model system \eqref{eq:x1}-\eqref{eq:x3} around the positive interior equilibrium point $E_8(u^*,v^*,r^*)$. This ensures the coexistence of all three species.
\end{remark}
\begin{remark}
Equilibrium points $E_{0},E_{1}, E_{2}, E_{3}$ exist, even though an indeterminate form appears in \eqref{eq:x2od}, \eqref{eq:x3od} due to the ratio dependent functional response. This has been rigorously proven in \cite{K98}.
\end{remark}
\subsection{Turing conditions}
In this subsection we derive sufficient conditions for Turing instability to occur in \eqref{eq:x1}-\eqref{eq:x3} where the positive interior equilibrium point $E_8(u^*,v^*,r^*)$ is stable in the absence of diffusion and unstable due to the addition of diffusion, under a small perturbation to $E_8(u^*,v^*,r^*)$.
This is achieved by first linearizing model \eqref{eq:x1}-\eqref{eq:x3} about the homogeneous steady state, by introducing both space and time-dependent fluctuations around $E_8(u^*,v^*,r^*)$. This is given as
\begin{subequations}\label{eq:7}
\begin{align}
u=u^* + \hat{u}(\xi,t),\\
v=v^* + \hat{v}(\xi,t),\\
r=r^* + \hat{r}(\xi,t),
\end{align}
\end{subequations}
where $| \hat{u}(\xi,t)|\ll u^*$, $| \hat{v}(\xi,t)|\ll v^*$ and $| \hat{r}(\xi,t)|\ll r^*$. Conventionally we choose
\[
\left[ {\begin{array}{cc}
\hat{u}(\xi,t) \\
\hat{v}(\xi,t) \\
\hat{r}(\xi,t)
\end{array} } \right]
=
\left[ {\begin{array}{cc}
\epsilon_1 \\
\epsilon_2 \\
\epsilon_3
\end{array} } \right]
e^{\lambda t + ik\xi},
\]
where $\epsilon_i$ for $i=1,2,3$ are the corresponding amplitudes, $k$ is the wavenumber, $\lambda$ is the growth rate of perturbation in time $t$ and $\xi$ is the spatial coordinate.
Substituting \eqref{eq:7} into \eqref{eq:x1}-\eqref{eq:x3} and ignoring higher order terms, including nonlinear terms, we obtain the characteristic equation, which is given as
\begin{align}\label{eq:1.2.10}
({\bf J} - \lambda{\bf I} - k^2{\bf D})
\left[ {\begin{array}{cc}
\epsilon_1 \\
\epsilon_2 \\
\epsilon_3
\end{array} } \right]=0,
\end{align}
where
\[
\quad
\bf {D} =
\left[ {\begin{array}{ccc}
d_1 & 0 & 0 \\
0 & d_2 & 0 \\
0 & 0 & d_3
\end{array} } \right],
\]
$ \bf{J}= \begin{bmatrix}
u^*\left(-1+\frac{w_1v^*}{(u^*+v^*)^2}\right) & -\frac{w_1u^{*^2}}{(u^*+v^*)^2} & 0 \\
\frac{w_2v^{*^2}} {(u^*+v^*)^2} & v^*\left(-\frac{w_2u^*}{(u^*+v^*)^2}+\frac{w_3r^*}{(v^*+r^*)^2}\right) & -\frac{w_3v^{*^2}}{(v^*+r^*)^2} \\
0 & \frac{r^{*^2}(r^*-m)w_4}{(v^*+D_3)^2} & -\frac{r^*(r^*-m)w_4}{(v^*+D_3)}\\
\end{bmatrix}
=\begin{bmatrix}
J_{11} & J_{12} & J_{13}\\
J_{21} & J_{22} & J_{23}\\
J_{31} & J_{32} & J_{33}\\
\end{bmatrix},
$ \\
\\and $\bf{I}$ is a $3\times 3$ identity matrix.\\
For the non-trivial solution of \eqref{eq:1.2.10}, we require that
\[
\left|
\begin{array}{ccc}
J_{11}-\lambda -k^2d_1 & J_{12} & J_{13}\\
J_{21} & J_{22}-\lambda -k^2d_2 & J_{23}\\
J_{31} & J_{32} & J_{33}-\lambda -k^2d_3\\
\end{array} \right|=0,
\]
which gives a dispersion relation corresponding to $E_8(u^*,v^*,r^*)$. To determine the stability domain associated with $E_8(u^*,v^*,r^*)$, we rewrite the dispersion relation as a cubic polynomial function given as
\begin{align}\label{eq:1.1.11}
P(\lambda(k^2))=\lambda^3 + \boldsymbol {\mu_2}(k^2)\lambda^2 + \boldsymbol {\mu_1}(k^2)\lambda + \boldsymbol {\mu_0}(k^2),
\end{align}
with coefficients
\begin{align*}
\boldsymbol {\mu_2}(k^2)&=(d_1 + d_2 + d_3)k^2 - (J_{11} + J_{22} + J_{33} ) ,\\
\boldsymbol {\mu_1}(k^2)&=J_{11}J_{33} + J_{11}J_{22} + J_{22}J_{33} - J_{32}J_{23} - J_{12}J_{21}- k^2\big( (d_3 + d_1)J_{22} + (d_2 + d_1)J_{33} + (d_2 + d_3)J_{11}\big) \\
&+k^4(d_2d_3 + d_2d_1 + d_1d_3),\\
\boldsymbol {\mu_0}(k^2)&=J_{11} J_{32} J_{23} - J_{11}J_{22}J_{33} + J_{12} J_{21} J_{33} + k^2\big(d_1( J_{22} J_{33} - J_{32} J_{23} ) + d_2 J_{11} J_{33} + d_3 ( J_{22} J_{11} - J_{12} J_{21} )\big)\\
& - k^4\big( d_2d_1J_{33} + d_1d_3J_{22} + d_2d_3J_{11}\big) + k^6d_1d_2d_3.
\end{align*}
According to the Routh-Hurwitz criterion for stability, $ \mathbb{R}e(\lambda(k^2))<0$ in model \eqref{eq:x1}-\eqref{eq:x3} around equilibrium point $E_8(u^*,v^*,r^*)$ (i.e., $E_8$ is stable) if and only if the following conditions hold:
\begin{align}\label{eq:1.1.12}
\boldsymbol {\mu_2}(k^2)>0,\,\boldsymbol {\mu_1}(k^2)>0,\,\boldsymbol {\mu_0}(k^2)>0\quad \text{and}\quad [\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2) >0.
\end{align}\\
Violating any of the above conditions implies instability (i.e., $\mathbb{R}e(\lambda(k^2))>0$).\\
We now require conditions under which a homogeneous steady state ($E_8(u^*,v^*,r^*)$) will be stable to small perturbations in the absence of diffusion and unstable in the presence of diffusion for certain $k$ values. That is, we require that around the homogeneous steady state $E_8(u^*,v^*,r^*)$
\begin{align*}
\mathbb{R}e(\lambda(k^2>0))>0,\, \text{for some}\, k \, \text{and}\, \mathbb{R}e(\lambda(k^2=0))<0,
\end{align*}
where we consider $k$ to be real and positive even though $k$ can be complex. This behavior is called \emph{diffusion driven instability}. Models that exhibit this sort of behavior with $2$ and $3$ species have been extensively studied in \cite{Gilligan1998}, where several different patterns were observed.\\
In order for the homogeneous steady state $E_8(u^*,v^*,r^*)$ to be stable (in the absence of diffusion) we need
\begin{align*}
\boldsymbol {\mu_2}(k^2=0)>0,\,\boldsymbol {\mu_1}(k^2=0)>0,\,\boldsymbol {\mu_0}(k^2=0)>0\quad \text{and}\quad [\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2=0) >0,
\end{align*}
whereas with diffusion ($k^2>0$) we look for conditions that drive the homogeneous steady state to be unstable. This can be achieved by studying the coefficients of \eqref{eq:1.1.11}; we need to reverse at least one of the signs in \eqref{eq:1.1.12}. We first study $\boldsymbol {\mu_2}(k^2)$. Irrespective of the value of $k^2$, $\boldsymbol {\mu_2}(k^2)$ will be positive since $J_{11}+J_{22}+J_{33}$ is always less than zero. Therefore we cannot rely on $\boldsymbol {\mu_2}(k^2)$ for diffusion driven instability to occur. Hence, for diffusion driven instability to occur in our case, we can only rely on the two conditions
\begin{align}\label{eq:1.1.12a}
\boldsymbol {\mu_0}(k^2)\quad \text{and}\quad [\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2).
\end{align}
Both functions are cubic functions of $k^2$, which are generally of the form
\begin{align*}
G(k^2)=H_H + k^2D_D + (k^2)^2C_C + (k^2)^3B_B,\, \text{with}\, B_B>0,\,\text{and} \, H_H>0.
\end{align*}
The coefficients of $G(k^2)$ are given in Table \ref{tab:coefficientsTable}.
\begin{table}[htb]
\caption{Coefficients of cubic functions $\boldsymbol {\mu_0}(k^2)$ and $[\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2)$ used in determining conditions for Turing instability }
\addtolength{\tabcolsep}{-3pt}
\centering
{\scriptsize
\begin{tabular}{@{}lll@{}}
\toprule
Coefficient of $G(k^2)$ & $\boldsymbol{\mu_0}(k^2)$ & $[\boldsymbol{\mu_2}\boldsymbol{\mu_1}-\boldsymbol{\mu_0}](k^2)$ \\ [0.5ex]
\hline
$H_H$ & $J_{11}J_{32}J_{23}+ J_{12}J_{21}J_{33}-J_{11}J_{22}J_{33}$ & $J_{11}J_{22}J_{33} - (J_{11} + J_{22} + J_{33})(J_{11}J_{22} - J_{12}J_{21} + J_{11}J_{33} + J_{22}J_{33} - J_{23}J_{32})$ \\
 & & $\quad - J_{11}J_{23}J_{32} -J_{12}J_{21}J_{33}$ \\[0.5ex]
$D_D$ & $d_1( J_{22}J_{33} - J_{32}J_{23} ) + d_2J_{11}J_{33}$ & $d_1( 2J_{11}J_{33} + 2J_{11}J_{22} + 2J_{22}J_{33} + J_{33}J_{33} + J_{22}J_{22} - J_{12}J_{21})$ \\
 & $\quad + d_3( J_{11}J_{22} - J_{12}J_{21} )$ & $\quad + d_2( 2J_{22}J_{11} + 2J_{22}J_{33} + 2J_{33}J_{11} + J_{11}J_{11} + J_{33}J_{33} - J_{21}J_{12} - J_{23}J_{32})$ \\
 & & $\quad + d_3( 2J_{22}J_{11} + 2J_{22}J_{33} + 2J_{33}J_{11} + J_{11}J_{11} + J_{22}J_{22} - J_{23}J_{32})$ \\[0.5ex]
$C_C$ & $- d_1d_2J_{33} - d_1d_3J_{22} - d_2d_3J_{11}$ & $- J_{11}(d_2 + d_3)(2d_1 + d_2 + d_3)- J_{22}(d_1 + d_3)( d_1 + 2d_2 + d_3)$ \\
 & & $\quad - J_{33}(d_1 + d_2)( d_1 + d_2 + 2d_3)$ \\[0.5ex]
$B_B$ & $d_1d_2d_3$ & $(d_2 + d_3)( d_1d_1 + d_2d_3 + d_1d_2 + d_1d_3 )$ \\
\bottomrule
\end{tabular}}
\label{tab:coefficientsTable}
\end{table}
To make either $\boldsymbol {\mu_0}(k^2)$ or $[\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2)$ negative for some $k$, we need to find the minimum $k^2$, referred to as the minimum Turing point ($k^2_T$), such that $G(k^2=k^2_T)<0$. This minimum Turing point occurs when
$$ {\partial G / \partial (k^2)}=0,$$
which, when solved for $k^2$, yields
\begin{align*}
k^2=k^2_T={-C_C + \sqrt{C_C^2 - 3B_BD_D} \over 3B_B }.
\end{align*}
To ensure $k^2_T$ is real and positive, and that $ {\partial^2 G / \partial (k^2)^2}>0$, we require either
\begin{align}\label{eq:1.1.13}
D_D<0\quad \text{or}\quad C_C<0,
\end{align}
which ensures that
\begin{align*}
C_C^2 - 3B_BD_D>0.
\end{align*}
Therefore $G(k^2)<0$, if at $k^2=k^2_T$
\begin{align}\label{eq:1.1.14}
G_{min}(k^2)=2C_C^3 - 9D_DC_CB_B - 2(C_C^2- 3D_DB_B)^{3/2} + 27B_B^2H_H<0.
\end{align}
Hence \eqref{eq:1.1.13}-\eqref{eq:1.1.14} are necessary and sufficient conditions for $E_8(u^*,v^*,r^*)$ to produce diffusion driven instability, leading to the emergence of patterns. Also, to first establish stability when $k=0$, $H_H$ in each case has to be positive. We state this via the following theorem.
\begin{theorem}
\label{thm:tur1}
Consider the three species food chain model described via \eqref{eq:x1}-\eqref{eq:x3}, with equilibrium point $E_8(u^*,v^*,r^*)$. Supposing we have a set of parameters such that the following conditions are satisfied,
\begin{enumerate}
\item $H_H>0$ when $k=0$ ,
\item $D_D<0$ or $C_C<0$ (and $C_C^2>3B_BD_D$),
\item $G_{min}(k^2)=2C_C^3 - 9D_DC_CB_B - 2(C_C^2- 3D_DB_B)^{3/2} + 27B_B^2H_H<0.$
\end{enumerate}
Then Turing instability will always occur in the model.
\end{theorem}
See \cite{Gilligan1998} for the connection between the signs of the coefficients in table \ref{tab:coefficientsTable} and the associated patterns.
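As an illustration of how theorem \ref{thm:tur1} can be checked in practice, the following is a minimal sketch (in Python) that, given the Jacobian $\bf{J}$ evaluated at $E_8(u^*,v^*,r^*)$ and the diffusion coefficients, assembles the coefficients of the $\boldsymbol{\mu_0}(k^2)$ cubic from Table \ref{tab:coefficientsTable}, computes the minimum Turing point $k^2_T$, and evaluates $G$ there. The Jacobian entries below are placeholders; in practice they are computed from $E_8$ for the parameter set of interest.
\begin{verbatim}
import numpy as np

def turing_check(J, d1, d2, d3):
    """Check the Turing conditions of theorem thm:tur1 for the mu_0(k^2) cubic.
    J is the 3x3 Jacobian at E8; d1, d2, d3 are the diffusion coefficients."""
    J11, J12, J13 = J[0]
    J21, J22, J23 = J[1]
    J31, J32, J33 = J[2]

    # Coefficients of G(k^2) = H_H + D_D k^2 + C_C k^4 + B_B k^6 (table, first column).
    H_H = J11 * J32 * J23 + J12 * J21 * J33 - J11 * J22 * J33
    D_D = d1 * (J22 * J33 - J32 * J23) + d2 * J11 * J33 + d3 * (J11 * J22 - J12 * J21)
    C_C = -d1 * d2 * J33 - d1 * d3 * J22 - d2 * d3 * J11
    B_B = d1 * d2 * d3

    if H_H <= 0:                      # E8 must be stable without diffusion (k = 0)
        return False, None
    disc = C_C**2 - 3.0 * B_B * D_D
    if disc <= 0 or (D_D >= 0 and C_C >= 0):
        return False, None            # no real positive minimiser k_T^2
    kT2 = (-C_C + np.sqrt(disc)) / (3.0 * B_B)
    G_min = H_H + D_D * kT2 + C_C * kT2**2 + B_B * kT2**3
    return G_min < 0, kT2

# Illustrative Jacobian entries and diffusivities (placeholders, not the paper's values).
J = np.array([[-0.20, -0.45, 0.0],
              [ 0.10, -0.05, -0.30],
              [ 0.00,  0.08, -0.04]])
unstable, kT2 = turing_check(J, 1e-3, 1e-5, 1e-5)
print("diffusion driven instability:", unstable, "at k_T^2 =", kT2)
\end{verbatim}
Whether instability is actually reported depends, of course, on the Jacobian supplied; the placeholder above merely demonstrates the check.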
\subsection{Effect of Allee threshold parameter $m$ on Turing Instability}
In this section we investigate the effects of the Allee threshold $m$ on Turing patterns. We focus on whether $m$ can induce Turing instabilities. We perform numerical simulations of model system \eqref{eq:x1}-\eqref{eq:x3} and make use of the coefficients of the dispersion relation given in Table \ref{tab:coefficientsTable}.\\
A Chebyshev collocation scheme is employed for the approximation of
the time varying, one-dimensional
problem \cite{timeDependentSpectralMethods,doi:10.1137/0720063}.
Our one-dimensional numerical simulation employs the zero-flux boundary condition with a spatial domain of size $(256\times 256)$.
We define the spatial domain to be $X \in (0,L)$, as in the nondimensionalisation in appendix \ref{app1}, where $x={X\pi\over L}$.
The initial condition as defined is a small perturbation around the positive homogeneous steady state, given as
\begin{align*}
u=u^{*} + \epsilon_1 \cos^2(nx)(x > 0)(x < \pi),\\
v=v^{*} + \epsilon_2 \cos^2(nx)(x > 0)(x < \pi),\\
r=r^{*} + \epsilon_3 \cos^2(nx)(x > 0)(x < \pi),
\end{align*}
where $ \epsilon_i=0.05$ $\forall i$.
In the two-dimensional case, we numerically solve model system \eqref{eq:x1}-\eqref{eq:x3} using a finite difference method. A forward difference
scheme is used for the reaction part. A standard five-point explicit finite difference scheme is used for the two-dimensional diffusion terms. Model system \eqref{eq:x1}-\eqref{eq:x3} is numerically solved in two dimensions over $200\times 200$ mesh points with spatial resolution $\Delta x=\Delta y=0.1$ and time step $\Delta t=0.1$. \\
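The following is a minimal sketch (in Python) of one step of the two-dimensional scheme just described: forward Euler for the reaction terms and the standard five-point stencil for the Laplacian, with zero-flux boundaries imposed by edge replication. The grid, time step, parameters and the assumed homogeneous steady state used to seed the perturbation are illustrative placeholders.
\begin{verbatim}
import numpy as np

nx = ny = 200
dx = dy = 0.1
dt = 0.1
d1, d2, d3 = 1e-3, 1e-5, 1e-5
w1, w2, w3, w4 = 0.96, 0.52, 1.06, 0.37
a2, c, D3, m = 0.014, 0.1, 0.1, 0.01
eps = 1e-12   # avoids 0/0 in the ratio-dependent terms

def laplacian(f):
    g = np.pad(f, 1, mode="edge")    # zero-flux boundary via edge replication
    return (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4 * f) / dx**2

def step(u, v, r):
    fu = u - u**2 - w1 * u * v / (u + v + eps)
    fv = -a2 * v + w2 * u * v / (u + v + eps) - w3 * v * r / (v + r + eps)
    fr = r * (r - m) * (c - w4 * r / (v + D3))
    return (u + dt * (d1 * laplacian(u) + fu),
            v + dt * (d2 * laplacian(v) + fv),
            r + dt * (d3 * laplacian(r) + fr))

# Small random perturbation of an (assumed, illustrative) homogeneous steady state.
rng = np.random.default_rng(0)
u = 0.5 + 0.05 * rng.standard_normal((nx, ny))
v = 0.2 + 0.05 * rng.standard_normal((nx, ny))
r = 0.1 + 0.05 * rng.standard_normal((nx, ny))
for _ in range(1000):
    u, v, r = step(u, v, r)
\end{verbatim}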
In our numerical simulations, both in the one-dimensional and two-dimensional cases, with specific parameter sets as shown in figures \ref{fig:turing1}-\ref{fig:turing5}, different types of dynamics are observed as a result of the value of the Allee threshold $m$. \\
From figures \ref{fig:turing1}-\ref{fig:turing2}, we numerically observe that a change in $m$ can result in the disappearance or appearance of patterns. Specifically, in figures \ref{fig:turing1}-\ref{fig:turing2}, patterns were observed when $m=0.01$, but these patterns change as $m$ gets larger, and finally at $m=0.5$ these patterns completely disappear. The critical Allee threshold $m$, for patterns to disappear, depends on the parameter set being used. In some cases, patterns were observed when $m=0$ but as $m\to 0.2$ and beyond, these patterns disappeared. \\
\begin{figure}
\caption{The densities of the three species are shown as contour plots in the x-t plane (1 dimensional in space). The long-time simulation yields a stripes-spots mixture pattern, that is spatio-temporal. The parameters are: $ m=0.01, w_1 = 0.96, w_2 = 0.52, w_3 = 1.06, w_4 = 0.37, a_2 = 0.014, D_3 = 0.1, c = 0.1, d_1 =10^{-3}, \ldots$}
\label{fig:turing1}
\end{figure}
\begin{figure}
\caption{The densities of the three species are shown as contour plots in the x-y plane (2 dimensional in space). The long-time simulation yields a spots pattern. The parameters are: $ m=0.01, w_1 = 0.96, w_2 = 0.52, w_3 = 1.06, w_4 = 0.37, a_2 = 0.014, D_3 = 0.1, c = 0.1, d_1 =10^{-3}, \ldots$}
\label{fig:turing2}
\end{figure}
It is also evident from figures \ref{fig:turing3}-\ref{fig:turing5} that the Allee threshold has an effect on the type of patterns that form. In particular, as in the case of figures \ref{fig:turing3} and \ref{fig:turing5}, changing $m$ can change spatio-temporal patterns to fixed spatial patterns. This behavior can also be observed using the dispersion relation, as shown in figure \ref{fig:Disp1}.
In choosing $\mu_0(k^2)<0$ for some $k$, which will mostly result in fixed spatial patterns, changing the value of $m$ does not result in a change in the type of patterns formed, but the parameters move out of the Turing space. Therefore, in our numerical experiments we do not find parameters such that fixed spatial patterns change to spatio-temporal patterns as $m$ increases from zero.\\
\begin{figure}
\caption{The densities of the three species are shown as contour plots in the x-t plane (1 dimensional in space). The long-time simulation yields spot and arrow type patterns. The parameters are: $ m=0, w_1 = 0.95, w_2 = 0.3, w_3 = 0.82, w_4 = 0.53, a_2 = 0.01, D_3 = 0.1, c = 0.1, d_1 =10^{-3}, \ldots$}
\label{fig:turing3}
\end{figure}\\
\begin{figure}
\caption{The densities of the three species are shown as contour plots in the x-y plane (2 dimensional in space). The long-time simulation yields spot patterns. The parameters are: $ m=0, w_1 = 0.95, w_2 = 0.3, w_3 = 0.82, w_4 = 0.53, a_2 = 0.01, D_3 = 0.1, c = 0.1, d_1 =10^{-3}, \ldots$}
\label{fig:turing4}
\end{figure}\\
\begin{figure}
\caption{The densities of the three species are shown as contour plots in the x-t plane (1 dimensional in space). The long-time simulation yields stripe patterns. The parameters are: $ m=0.1, w_1 = 0.95, w_2 = 0.3, w_3 = 0.82, w_4 = 0.53, a_2 = 0.01, D_3 = 0.1, c = 0.1, d_1 =10^{-3}, \ldots$}
\label{fig:turing5}
\end{figure}
\begin{figure}
\caption{The density of the three species shown in figure \ref{fig:turing1}\ldots}
\label{fig:turing45}
\end{figure}\\
\begin{figure}
\caption{Plot of the coefficients of the dispersion relation with $m=0$ and $m=0.1$ for figure \ref{fig:turing3}.}
\label{fig:Disp1}
\end{figure}
Table \ref{tab:ODE Stability} also displays the fact that no patterns were observed as $m$ increases from $m=0.01$ to $m=0.5$ in figures \ref{fig:turing1} and \ref{fig:turing2}, because $m$ can destabilize an already stable equilibrium point in the ODE model, that is, when $k=0$. Hence, while in figures \ref{fig:turing1} and \ref{fig:turing2} the value $m=0.01$ produces patterns, since $E_8(u^*,v^*,r^*)$ is stable ($+$) when $k=0$, as $m$ increases this equilibrium point becomes unstable ($-$) even when $k=0$.
\begin{table}[htb]
\begin{center}
\begin{tabular*}{.935\textwidth}{| c | c | c |c | }
\hline\hline
$m$ & $H_H=\mu_0(k^2=0)$ & $H_H=[\mu_2\mu_1-\mu_0](k^2=0)$& Pattern\\ \hline
\hline
$0$ & $+$ & $+$ & Patterns may occur \\ \hline
$0.1$ & $+$ & $+$ & Patterns may occur \\ \hline
$0.2$ & $-$ & $-$ & No Patterns \\ \hline
$0.3$ & $-$ & $-$ & No Patterns \\ \hline
$0.4$ & $-$ & $-$ & No Patterns \\ \hline
$0.5$ & $-$ & $-$ & No Patterns \\ \hline
\end{tabular*}
\end{center}
\caption{The influence of $m$ on the sign of $\boldsymbol {\mu_0}(k^2)$ and $[\boldsymbol {\mu_2}\boldsymbol {\mu_1}-\boldsymbol {\mu_0}](k^2)$, describing the stability of $E_8(u^*,v^*,r^*)$ when $k=0$ and how it might lead to pattern formation. The parameters are: $w_1 = 0.96, w_2 = 0.52, w_3 = 1.06, w_4 = 0.37, a_2 = 0.014,D_3 = 0.1, c = 0.1, d_1 =10^{-3}, d_2 = 10^{-5}, d_3 =10^{-5}$.}
\label{tab:ODE Stability}
\end{table}
\section{Discussion}
In this manuscript we have considered a spatially explicit ratio dependent three species food chain model, with a strong Allee effect in the top predator.
The model represents the dynamics of three interacting species in ``well-mixed'' conditions and is applicable to ecological problems in terrestrial, as well as aquatic, systems. Our goal is to understand the effect of the Allee threshold on the system dynamics, as there is not much work on Allee effects in multi-trophic level food chains, and we hope to address this here.
We first prove the existence of a global attractor for the model. In many systems an interesting question is to consider the effect on the global attractor, under perturbation of the physical parameters in the system. Most often continuity cannot be proven, but upper semi-continuity can.
We show that the global attractor is upper semi-continuous w.r.t the Allee threshold parameter $m$, that is
$dist_{E}(\mathcal{A}_{m},\mathcal{A}_{0}) \rightarrow 0, \ \mbox{as} \ m \rightarrow 0^{+}$, via theorem \ref{thm:gaus1sc}.
To the best of our knowledge this is the first robustness result for a three-species model with strong Allee effect.
Unless a system's dynamics are robust, there is no possibility of capturing the same ecological behavior in a laboratory experiment, or natural setting.
The next question of interest, after one has proved such a result, is estimating the rate of decay, in terms of the physical parameter in question. Since there are no theoretical results to estimate the decay rate to the target attractor (that is when $m=0$), we choose to investigate this rate numerically.
We find decay estimates of the order of $\mathcal{O}(m^{\gamma})$, where $\gamma$ is found numerically to be very close to 1.
We also find that $m$ affects the spatiotemporal dynamics of the system in two distinct ways.
\newline
(1) It changes the Turing patterns that occur in the system, in both one and two spatial dimensions.
That is, varying $m$ has a distinct effect on the sorts of Turing patterns that form.
This can have interesting effects on the patchiness of spatially dispersing species. Our results say that if one can affect the Allee threshold in the top predator, one can effectively change the spatial concentrations of the species involved. This is best visualized in figure \ref{fig:turing45}. This could have many potential applications in Allee mediated biological control, and conservation, as recently biologists have begun to consider Allee effects as beneficial in limiting establishment of an invading species \cite{tobin2011exploiting}.
\newline
(2) It facilitates the overexploitation phenomenon. That is, without $m$, or when $m=0$, one does not see extinction of the top predator, as the origin becomes unstable. However,
introducing it stabilises the origin, and low initial concentrations of $r$ may yield extinction, via theorem \ref{thm:ox}. This has many possible applications to top down trophic cascades and cascade control. This also tells us that in a three species system, where the top predator can switch its food source, only having an Allee effect in place can possibly drive it to extinction, thus bringing about true overexploitation. This could have potential applications in Allee mediated biological control, if the top predator is an invasive species.
As future investigations it would be interesting to try to model weak Allee effects in the top predator, and/or Allee effects in the other species as well, be they weak or strong. It has been stipulated, with evidence, that two or more Allee effects can occur simultaneously in the same population \cite{berec2007multiple}. Thus an extremely interesting question would be to consider different types of Allee effects in the different species, and investigate the interplay of the various Allee thresholds, as they affect the dynamics of the system.
All in all, our results would be of interest to both the mathematical and ecological communities, and in particular to groups that are interested in conservation efforts in food chain systems, cascade control (such as in many aquatic systems), and Allee mediated biological control.
\section{Appendix}
\label{app}
\subsection{ Nondimensionalisation}
\label{app1}
The model system in \eqref{eq:x1}-\eqref{eq:x3} is a nondimensionalised version of the following system
\begin{subequations} \label{eq:main}
\begin{align}
{\partial U \over {\partial T }}&=D_U\Delta U + A_1U-B_1U^2-\frac{W_1UV} {\beta_1 V+U},\\
{\partial V \over {\partial T }}&= D_V\Delta V -A_2V+\frac{W_2UV} {\beta_1 V+U}-\frac{W_3 VR} {\beta_3 R+V},\\
{\partial R \over {\partial T }}&= D_R\Delta R +R\big({{R-M}}\big)\bigg(C-\frac{W_4R} {V+A_3}\bigg).\label{eq:main1}
\end{align}
\end{subequations}
$A_1$ is the self-growth rate of the prey population $U$, $A_2$ is the intrinsic death rate of the specialist predator $V$ in the absence of its only food $U$, and $C$ measures the rate of self-reproduction of the generalist predator $R$. $B_1$ measures the strength of competition among individuals of the prey species $U$. The $W_i$'s are the maximum values which the per capita growth rates can attain. $A_3$ represents the residual loss in the $R$ population due to severe scarcity of its favorite food $V$. $\beta_{1}$ is the parameter that describes the handling time of the prey $U$ by the predator $V$, and $\beta_{3}$ is the parameter that describes the handling time of the prey $V$ by the predator $R$.\\
We will now go over the details of the nondimensionalisation.
We consider \eqref{eq:main}, and aim to introduce a change of variables and time scaling which reduces the number of parameters of model system \eqref{eq:main}. We take
\begin{align*}
u&={U B_1\over A_1}, \quad v={VB_1\beta_1 \over A_1},\quad r={RB_1\beta_1\beta_3 \over A_1},\\
t&={T \over {A_1}},\quad w_1={W_1\over \beta_1A_1},\quad a_2={A_2\over A_1},\\
w_2&={W_2\over A_1},\quad w_3={W_3\over \beta_3A_1},\quad c={CA_1\over {A_1B_1\beta_1\beta_3}},\\
m&={MB_1\beta_1\beta_3\over A_1},\quad D_3={A_3B_1\beta_1\over A_1},\quad w_4={W_4B_1\over {B_1^2\beta_1\beta_3^2}},\\
d_1&={D_U\pi^2\over B_1L^2},\quad d_2={D_V\pi^2\over B_1\beta_1L^2 } ,\quad d_3={D_R\pi^2\over {B_1\beta_1\beta_3L^2}}.
\end{align*}
Considering Neumann boundary conditions, model system \eqref{eq:main} reduces to
\begin{align}
&\frac{\partial u}{\partial t}= d_1\Delta u + u-u^{2}-w_{1}\frac{uv}{u+v}, \label{eq:x0a}\\
&\frac{\partial v}{ \partial t}= d_2 \Delta v -a_{2}v+w_{2}\frac{uv}{u+v}-w_{3}\left(\frac{vr}{v+r}\right), \label{eq:x2a}\\
&\frac{\partial r}{ \partial t} = d_3 \Delta r + r\big({{r-m}}\big)\bigg(c-\frac{w_4r} {v+D_3}\bigg).\label{eq:x3a}
\end{align}
Also all parameters associated with model system (\ref{eq:x0a}-\ref{eq:x3a}) are assumed to be positive constants and have the usual biological meaning. \\
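For concreteness, a minimal one-dimensional explicit finite-difference sketch of \eqref{eq:x0a}-\eqref{eq:x3a} with zero-flux (Neumann) boundaries is given below. The parameter values are illustrative (borrowed from table \ref{tab:ODE Stability}), and the discretisation choices here are purely indicative; this is not the scheme used for the simulations reported in the main text.
\begin{verbatim}
import numpy as np

# Illustrative parameters
w1, w2, w3, w4 = 0.96, 0.52, 1.06, 0.37
a2, D3, c, m   = 0.014, 0.1, 0.1, 0.01
d1, d2, d3     = 1e-3, 1e-5, 1e-5

N, length, dt, steps = 200, 100.0, 0.01, 50000
dx = length / (N - 1)
eps = 1e-12  # guards the ratio-dependent terms near u=v=0 or v=r=0

rng = np.random.default_rng(0)
u = 1.0 + 0.01 * rng.standard_normal(N)
v = 0.2 + 0.01 * rng.standard_normal(N)
r = 0.2 + 0.01 * rng.standard_normal(N)

def lap(f):
    # second difference with zero-flux (reflecting) boundaries
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    g[0]  = 2.0 * (f[1] - f[0])
    g[-1] = 2.0 * (f[-2] - f[-1])
    return g / dx**2

for _ in range(steps):
    fu = u - u**2 - w1 * u * v / (u + v + eps)
    fv = -a2 * v + w2 * u * v / (u + v + eps) - w3 * v * r / (v + r + eps)
    fr = r * (r - m) * (c - w4 * r / (v + D3))
    u += dt * (d1 * lap(u) + fu)
    v += dt * (d2 * lap(v) + fv)
    r += dt * (d3 * lap(r) + fr)

print(u.mean(), v.mean(), r.mean())
\end{verbatim}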
\subsection{Global existence}
\label{app0}
Here we prove proposition \ref{ge1}. The positivity of solutions follows trivially from the form of the reaction terms, which are quasi-positive.
By the positivity of the first component $u(t,.)$ of the solution to \eqref{eq:x1} on $
[0,T_{\max })$ $\times \Omega $, we get from equation \eqref{eq:x1}
\begin{equation}
\label{3.1}
\frac{\partial u}{\partial t}-d_{1}\Delta u\leq a_{1}u,\text{ \ on }[0,T_{\max })\times
\Omega ,
\end{equation}
then we use a comparison argument \cite{smoller}. That is, we can compare the solution $u$ of \eqref{3.1} to the solution $u^*$, of the linear heat equation
\begin{equation}
\label{3.1ns}
\frac{\partial u^{*}}{\partial t}-d_{1}\Delta u^{*}= a_{1}u^{*},\text{ \ on }[0,T_{\max })\times
\Omega ,
\end{equation}
where $u^*$ satisfies the same initial and boundary conditions as $u$.
Clearly $u \leq u^*$, and since $u^{*}$ (being the solution of a linear heat equation) is bounded, so is $u$, and we deduce
\begin{equation}
\label{3.2}
u(t,.)\leq C_{1},
\end{equation}
where $C_{1}$ is a constant depending only on $T_{\max }$. Then equation
\eqref{eq:x2} gives
\begin{equation}
\label{3.3}
\frac{\partial v}{\partial t}-d_{2}\Delta v \leq w_{1}C_{1}v, \text{ \ on }[0,T_{\max
}) \times \Omega ,
\end{equation}
which implies by the same arguments
\begin{equation}
\label{3.4}
v(t,.)\leq C_{2},\text{ \ on }[0,T_{\max }) \times \Omega ,
\end{equation}
where $C_{2}$ is a constant depending only on $T_{\max }$.
To get a bound on the component $r$ we apply Young's inequality to yield
\begin{eqnarray}
&& \frac{\partial r}{\partial t}-d_{3}\Delta r + mcr \nonumber \\
&& = \left( c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \left( \frac{w_{4}}{v+D_{3}}\right)r^{3} \nonumber \\
&& \leq \left( c+\frac{mw_{4}}{D_{3}}\right)r^{2} - \left( \frac{w_{4}}{v+D_{3}}\right)r^{3} \nonumber \\
&& = \left( c+\frac{mw_{4}}{D_{3}}\right)\left( \frac{v+D_{3}}{w_{4}}\right)^{\frac{2}{3}} \left( \frac{v+D_{3}}{w_{4}}\right)^{-\frac{2}{3}} r^{2} - \left( \frac{w_{4}}{v+D_{3}}\right)r^{3} \nonumber \\
&& \leq \left( c+\frac{mw_{4}}{D_{3}}\right)^{3} \left( \frac{v+D_{3}}{w_{4}}\right)^{2} + \left( \frac{w_{4}}{v+D_{3}}\right)r^{3} - \left( \frac{w_{4}}{v+D_{3}}\right)r^{3} \nonumber \\
&& \leq \left( c+\frac{mw_{4}}{D_{3}}\right)^{3} \left( \frac{C_{2}+D_{3}}{w_{4}}\right)^{2}.
\end{eqnarray}
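In the penultimate inequality above, Young's inequality has been applied with the conjugate exponents $p=3$ and $q=\frac{3}{2}$, namely
\begin{equation*}
ab \leq \frac{a^{3}}{3}+\frac{2}{3}b^{\frac{3}{2}} \leq a^{3}+b^{\frac{3}{2}}, \qquad a=\left( c+\frac{mw_{4}}{D_{3}}\right)\left( \frac{v+D_{3}}{w_{4}}\right)^{\frac{2}{3}}, \qquad b=\left( \frac{w_{4}}{v+D_{3}}\right)^{\frac{2}{3}}r^{2}.
\end{equation*}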
Here we use the bound on $v$ via \eqref{3.4}. Thus there exists a positive constant $C_{3}$ such that
\begin{equation}
\label{3.9}
r(t,.)\leq C_{3} = \left( c+\frac{mw_{4}}{D_{3}}\right)^{3} \left( \frac{C_{2}+D_{3}}{w_{4}}\right)^{2} ,\text{ \ on }[0,T_{\max }) \times \Omega .
\end{equation}
At this step via \eqref{3.2}, \eqref{3.4}, \eqref{3.9} standard theory \cite{henry} is
applicable and the solution is global ($T_{\max }=+\infty $).
This proves the proposition.
\subsection{A priori estimates}
\label{app2}
In all estimates made henceforth the constants $C, C_{1}, C_{2}, C_{3}, C_{\infty}$ are generic constants that can change in value from line to line, and sometimes within the same line if so required.
We estimate the gradient of $r$ by multiplying \eqref{eq:x3} by $-\Delta r$, and integrating by parts and using standard methods to obtain
\begin{equation}
\label{eq:x11}
\frac{1}{2}\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} = \int_{\Omega}r(r-m)(c-w_{4}\frac{r}{v+D_{3}})(-\Delta r)dx.
\end{equation}
We have to estimate $\int_{\Omega}r(r-m)(c-w_{4}\frac{r}{v+D_{3}})(-\Delta r)dx$. Note
\begin{align*}
\int_{\Omega}r(r-m)(c-w_{4}\frac{r}{v+D_{3}})(-\Delta r)dx
& = \int_{\Omega} \left( \left( c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} - mcr \right) (-\Delta r) dx, \nonumber \\
& = \int_{\Omega} \left( \left( c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right) (-\Delta r) dx - cm||\nabla r||^{2}_{2}. \nonumber \\
\end{align*}
Thus we obtain
\begin{equation}
\frac{1}{2}\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} + cm||\nabla r||^{2}_{2} = \int_{\Omega} \left( \left(c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right) (-\Delta r) dx.
\end{equation}
We now proceed in two steps. We first assume $\left( \left(c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right) \geq 0$.
Young's inequality with epsilon then gives us the following estimate
\begin{align}
\label{eq:x11y}
\left( \left(c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right)&\leq\left( \frac{v+D_{3}}{w_{4}}\right)^{\frac{1}{3}} \left(c+\frac{mw_{4}}{v+D_{3}}\right)r \left(\frac{w_{4}}{v+D_{3}}\right)^{\frac{1}{3}}r -w_{4}\frac{r^{3}}{v+D_{3}},\nonumber \\
& \leq \left(\frac{v+D_{3}}{w_{4}}\right)^{\frac{4}{9}} \left(c+\frac{mw_{4}}{v+D_{3}}\right)^{\frac{4}{3}}r^{\frac{4}{3}} + w_{4}\frac{r^{3}}{v+D_{3}} -w_{4}\frac{r^{3}}{v+D_{3}}, \nonumber \\
& \leq \left(\frac{||v+D_{3}||_{\infty}}{w_{4}}\right)\left(c+\frac{mw_{4}}{v+D_{3}}\right)^{\frac{4}{3}}r^{\frac{4}{3}} + w_{4}\frac{r^{3}}{v+D_{3}} -w_{4}\frac{r^{3}}{v+D_{3}} , \nonumber \\
&\leq \left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{mw_{4}}{D_{3}}\right)^{\frac{4}{3}}r^{\frac{4}{3}} + w_{4}\frac{r^{3}}{v+D_{3}} -w_{4}\frac{r^{3}}{v+D_{3}},\nonumber \\
& = \left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{mw_{4}}{D_{3}}\right)^{\frac{4}{3}}r^{\frac{4}{3}}. \nonumber \\
\end{align}
Then we obtain
\begin{eqnarray}
\label{eq:x11nn}
\frac{1}{2}\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} + cm||\nabla r||^{2}_{2}&=& \int_{\Omega}\left( \left(c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right)(-\Delta r)dx, \nonumber \\
&\leq& \left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{mw_{4}}{D_{3}}\right)^{\frac{4}{3}}\int_{\Omega}r^{\frac{4}{3}}|-\Delta r| dx.\nonumber \\
\end{eqnarray}
This follows via \eqref{eq:x11y}. Thus we obtain
\begin{equation}
\label{eq:x11a}
\frac{1}{2}\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} \leq C_{1}||r||^{\frac{8}{3}}_{\frac{8}{3}} + \frac{d_{3}}{2}||\Delta r||^{2}_{2}.
\end{equation}
Here $C_{1}=\left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{w_{4}}{D_{3}}\right)^{\frac{4}{3}}$.
Now we can use the Sobolev embedding $H^1(\Omega) \hookrightarrow L^{\frac{8}{3}}(\Omega)$, where $C_{3}$ is the embedding constant, to obtain
\begin{equation}
\label{eq:x11b}
\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} \leq C_{3}C_{1}\left(||\nabla r||^{2}_{2}\right)\left(||\nabla r||^{2}_{2}\right).
\end{equation}
Thus
\begin{equation}
\label{eq:x11c}
\frac{d}{dt}||\nabla r||^{2}_{2} \leq C_{3}C_{1}(||\nabla r||^{2}_{2})(||\nabla r||^{2}_{2}).
\end{equation}
Now we invoke the estimate via \eqref{eq:x11n23n}, and the uniform Gronwall lemma to obtain
\begin{equation}
\label{eq:f1r4}
\mathop{\limsup}_{t \rightarrow \infty}||\nabla r||^{2}_{2} \leq C.
\end{equation}
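For the reader's convenience we recall the standard form of the uniform Gronwall lemma being invoked here: if $\beta$, $\zeta$ and $h$ are nonnegative, locally integrable functions on $(t_{0},\infty)$ satisfying $\frac{d\beta}{dt}\leq \zeta\beta + h$, and
\begin{equation*}
\int^{t+q}_{t}\zeta(s)ds \leq a_{1}, \quad \int^{t+q}_{t}h(s)ds \leq a_{2}, \quad \int^{t+q}_{t}\beta(s)ds \leq a_{3}, \qquad \mbox{for all} \ t \geq t_{0},
\end{equation*}
then $\beta(t+q)\leq\left(\frac{a_{3}}{q}+a_{2}\right)e^{a_{1}}$ for all $t \geq t_{0}$.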
Now assume $\left( \left(c+\frac{mw_{4}}{v+D_{3}}\right)r^{2} - \frac{w_{4}}{v+D_{3}}r^{3} \right) < 0$.
Then we obtain
\begin{eqnarray}
\label{eq:x11nn2}
&&\frac{1}{2}\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} + cm||\nabla r||^{2}_{2},\nonumber \\
&\leq& \int_{\Omega}\left(c+\frac{mw_{4}}{D_{3}}\right)r^{2}(-\Delta r)dx, \nonumber \\
&\leq& \frac{2}{d_{3}}\left(c+\frac{mw_{4}}{D_{3}}\right)||r||^{4}_{4} + \frac{d_{3}}{2}||\Delta r||^{2}_{2}. \nonumber \\
\end{eqnarray}
We now use the Sobolev embedding of $H^1(\Omega) \hookrightarrow L^{4}(\Omega)$ to obtain
\begin{equation}
\label{eq:x1mn}
\frac{d}{dt}||\nabla r||^{2}_{2} + d_{3}||\Delta r||^{2}_{2} + cm||\nabla r||^{2}_{2} \leq \frac{2}{d_{3}}\left(c+\frac{w_{4}}{D_{3}}\right) ||\nabla r||^{4}_{2},
\end{equation}
dropping the $cm||\nabla r||^{2}_{2}$ term yields
\begin{equation}
\frac{d}{dt}||\nabla r||^{2}_{2} \leq \frac{2}{d_{3}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(||\nabla r||^{2}_{2}\right)\left(||\nabla r||^{2}_{2}\right),
\end{equation}
and an application of the uniform Gronwall lemma, together with the estimate via \eqref{eq:x11n23n}, again yields
\begin{equation}
\label{eq:f1d4nn}
\mathop{\limsup}_{t \rightarrow \infty} ||\nabla r||^{2}_{2} \leq C.
\end{equation}
\subsection{A priori estimates independent of the parameter $m$}
\label{app3}
Recall the following $L^2$ estimate
\begin{equation*}
\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + d_{3}||\nabla r||^{2}_{2} + cm||r||^{2}_{2} + \int_{\Omega}\frac{w_4}{v+D_{3}}r^{4}dx \leq \int_{\Omega}\left(c+\frac{mw_{4}}{v+D_{3}} \right)r^{3} dx.
\end{equation*}
We then use H\"{o}lder's inequality followed by Young's inequality to obtain
\begin{eqnarray}
\label{eq:x11r}
&&\frac{1}{2}\frac{d}{dt}||r||^{2}_{2} + d_{3}||\nabla r||^{2}_{2} + cm||r||^{2}_{2} + \int_{\Omega}\frac{w_4}{v+D_{3}}r^{4}dx, \nonumber \\
&& \leq \frac{3}{4}\int_{\Omega}\frac{w_4}{v+D_{3}}r^{4}dx + |\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}.\nonumber \\
\end{eqnarray}
Now we drop the $cm||r||^{2}_{2}$ term from the left hand side (to avoid the singularity caused by $\frac{1}{m}$), as well as the $d_{3}||\nabla r||^{2}_{2}$ term, and use the estimate on $||v||_{\infty}$ via \eqref{eq:liea}, and the embedding of $L^{4}(\Omega) \hookrightarrow L^{2}(\Omega)$, to obtain
\begin{equation}
\frac{d}{dt}||r||^{2}_{2} + \frac{Cw_4}{C+D_{3}}||r||^{2}_{2} \leq 2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}.
\end{equation}
Here, we assume $||r||_{2} \geq 1$, else we already have a bound for $||r||_{2} $.
Next we use Gronwall's inequality to obtain
\begin{equation}
||r||^{2}_{2} \leq e^{-\left( \frac{Cw_4}{C+D_{3}}\right)t}||r_{0}||^{2}_{2} + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}}.
\end{equation}
Thus for $t > t_{0}= \max(t^{*}+1,\frac{ \ln(||r_{0}||^{2}_{2})}{\left( \frac{Cw_4}{C+D_{3}}\right)})$, we obtain
\begin{equation}
\label{eq:rl2n}
||r||^{2}_{2} \leq 1 + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}},
\end{equation}
which is a uniform $L^2(\Omega)$ bound in $r$ that is independent of the Allee parameter $m$, time and initial data.
We next focus on making $H^{1}(\Omega)$ estimates, that are independent of the Allee parameter $m$.
We choose the estimate derived in \eqref{eq:rl2n} for $||r||_{2}$ and insert this in \eqref{eq:x11n23n} to obtain
\begin{eqnarray}
\label{eq:nh1na}
&& \int^{t+1}_{t}||\nabla r||^{2}_{2} ds \nonumber \\
&& \leq 1 + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}}+ 2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}, \nonumber \\
&& \ \mbox{for} \ t > t_{0}.
\end{eqnarray}
Now we choose
\begin{equation}
\label{eq:kn1m}
K_{m}=\max \left(\frac{2}{d_{3}}\left(c+\frac{mw_{4}}{D_{3}}\right), C_{3}\left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{mw_{4}}{D_{3}}\right)^{\frac{4}{3}} \right).
\end{equation}
Then, we can majorise the right hand side of the above by plugging in $m=1$. Since the above quantities are not singular in $m$, this is possible. Thus we obtain
\begin{equation}
K=\max \left(\frac{2}{d_{3}}\left(c+\frac{w_{4}}{D_{3}}\right), C_{3}\left(\frac{C+D_{3}}{w_{4}}\right) \left(c+\frac{w_{4}}{D_{3}}\right)^{\frac{4}{3}} \right).
\end{equation}
Going back to the $H^1(\Omega)$ estimates in \eqref{eq:f1r4} and \eqref{eq:f1d4nn}, one obtains
\begin{equation}
\label{eq:nr1}
\frac{d}{dt}||\nabla r||^{2}_{2} \leq K \left(||\nabla r||^{2}_{2} \right) \left(||\nabla r||^{2}_{2}\right).
\end{equation}
Also note the integral in time estimate via \eqref{eq:nh1na} is now independent of $m$.
This allows us to apply the uniform Gronwall lemma on \eqref{eq:nr1} with $\beta(t) = ||\nabla r||^{2}_{2}, \ \zeta(t)=K||\nabla r||^{2}_{2} , \ h(t)= 0, q=1$, and the estimate via \eqref{eq:nh1na} to yield the following bound
\begin{eqnarray}
\label{eq:nrn}
&&||\nabla r||^{2}_{2} \nonumber \\
&&\leq C_{K} e^{K C_{K}}, \ \mbox{for} \ t > t_{1}=t_{0} + 1.
\end{eqnarray}
Thus the $H^{1}(\Omega)$ estimate for $r$ can be made independent of $m$.
\subsection{Upper semi-continuity of global attractors}
\label{app4}
In this section we prove upper semi continuity of global attractors.
We first introduce certain concepts pertinent to the upper semicontinuity of attractors.
\begin{definition}[Uniform dissipativity]
A family of semiflows $\{ \{ S_{\lambda}(t) \}_{t \geq 0}\}_{\lambda \in \Lambda}$ on a Banach space $H$, where $\Lambda$ is an open set in a Euclidean space of the parameter $\lambda$, is called uniformly dissipative at $\lambda_{0} \in \Lambda$ if there is a neighborhood $\mathcal{N}$ of $\lambda_{0}$ in $\Lambda$ and a bounded set $\mathcal{B} \subset H$ such that $\mathcal{B}$ is a common absorbing set for each semiflow $S_{\lambda}(t)$, $\lambda \in \mathcal{N}$.
\end{definition}
\begin{definition}[Upper semicontinuity]
Consider a family of semiflows $\{ \{ S_{\lambda}(t) \}_{t \geq 0}\}_{\lambda \in \Lambda}$ on a Banach space $H$, where $\Lambda$ is an open set in a Euclidean space of the parameter $\lambda$, and suppose that there exists a global attractor $\mathcal{A}_{\lambda}$ in $\mathcal{H}$ for each semiflow $\{ S_{\lambda}(t) \}_{t \geq 0}$, $\lambda \in \Lambda$. If $\lambda_{0} \in \Lambda$ and
\begin{equation*}
dist_{\mathcal{H}}(\mathcal{A}_{\lambda},\mathcal{A}_{\lambda_{0}}) \rightarrow 0, \ \mbox{as} \ \lambda \rightarrow \lambda_{0} \ \mbox{in} \ \Lambda,
\end{equation*}
with respect to the Hausdorff semidistance, then we say that the family of global attractors $\{\mathcal{A}_{\lambda} \}_{\lambda \in \Lambda}$ is upper semicontinuous at $\lambda_{0}$, or that $\mathcal{A}_{\lambda} $ is \emph{robust} at $\lambda_{0}$.
\end{definition}
We next recall the following lemma
\begin{lemma}[Gronwall-Henry Inequality]
Let $\psi(t)$ be a nonnegative function in $L^{\infty}_{loc}([0,T);\mathbb{R})$ and $\zeta(.)$ $\in$ $L^{1}_{loc}([0,T);\mathbb{R})$, such that the following inequality is satisfied:
\begin{equation}
\psi(t) \leq \zeta(t) + \mu\int^{t}_{0} (t-s)^{r-1}\psi(s)ds, \ t \in (0,T),
\end{equation}
where $0 < T \leq \infty$, and $\mu, r$ are positive constants. Then it holds that
\begin{equation}
\psi(t) \leq \zeta(t) + \kappa\int^{t}_{0} \Phi(\kappa(t-s))\psi(s)ds, \ t \in (0,T),
\end{equation}
where $\kappa = (\mu\Gamma(r))^{\frac{1}{r}}$, $\Gamma(.)$ is the gamma function, and the function $\Phi(t)$ is given by
\begin{equation}
\Phi(t)= \sum^{\infty}_{n=1} \frac{nr}{\Gamma(nr+1)} t^{nr-1} = \sum^{\infty}_{n=1} \frac{1}{\Gamma(nr)} t^{nr-1}.
\end{equation}
\end{lemma}
We state the following theorem
\begin{theorem}
\label{thm:gaus1}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3} where $\Omega$ is of spatial dimension $n=1, 2, 3$. There exists a universal constant $K_{\infty} > 0$, independent of the Allee parameter $m$ that bounds uniformly in $L^{\infty}$ the family of global attractors $\mathcal{A}_{m}$. That is,
\begin{equation}
\bigcup_{0\leq m \leq 1} \mathcal{A}_{m} \subset B_{L^{\infty}}(0,K_{\infty}).
\end{equation}
Where $B_{L^{\infty}}(0,K_{\infty})$ is the closed ball of radius $K_{\infty}$ in the space $L^{\infty}(\Omega)$, with $K_{\infty}=3C$, where the $C$ is from \eqref{eq:liea}.
\end{theorem}
This follows from the estimates via \eqref{eq:liea}.
We next focus on the uniform dissipativity in $L^{2}(\Omega)$.
The $L^2(\Omega)$ estimate via \eqref{eq:rl2n}, which is uniform in the parameter $m$, enables us to state the following theorem
\begin{theorem}
\label{thm:gaus1l2}
Consider the reaction diffusion system described via \eqref{eq:x1}-\eqref{eq:x3} where $\Omega$ is of spatial dimension $n=1, 2, 3$. The family of semiflows
$\{ \{ S_{m}(t) \}_{t \geq 0}\}_{m \geq 0}$ for this system on $H$, is uniformly dissipative at $m=0$. Specifically there exists a constant $K_{H} > 0$ such that the ball $ B_{H}(0,K_{H})$, which is the closed ball of radius $K_{H}$ in the space $H$ is a common absorbing set for the semiflows $\{ \{ S_{m}(t) \}_{t \geq 0}\}$ for all $m \in [0,1]$. Here $K_{H}=1 + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}}+ 2C$, where the $2C$ comes from lemma \ref{lem:lemba}.
\end{theorem}
Next, let us choose $(u_{0},v_{0},r_{0})$ $\in$ $\mathcal{U}$. We know $\mathcal{U} \subset B_{H}(0,K_{H})$, the common absorbing ball in $H$ for the semiflow $S_{m}(t)$, $m \in [0,1]$, from theorem \ref{thm:gaus1l2}.
We use the earlier $H^1(\Omega)$ and $L^2(\Omega)$ estimates on $u,v$ from lemma \ref{lem:lemba}, noticing that they do not depend on $m$; we also use \eqref{eq:rl2n} and \eqref{eq:nrn} to obtain
\begin{eqnarray}
&&||S_{m}(t)((u_{0},v_{0},r_{0}))||^{2}_{E} \nonumber \\
&&= ||(\nabla u,\nabla v,\nabla r)||^2 + ||( u, v, r)||^2 \nonumber \\
&& \leq 4C + C_{K} e^{K C_{K}} + 1 + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}}.
\end{eqnarray}
This is true for any $(u_{0},v_{0},r_{0}) \in \mathcal{U}$ and $m \in [0,1]$. Now by invariance $S_{m}(t)\mathcal{A}_{m}=\mathcal{A}_{m}$, and so $\mathcal{A}_{m} \subset B_{E}(0,K_{E})$. Where
\begin{equation}
\label{eq:Ke1}
K_{E} = 4C + C_{K} e^{K C_{K}} + 1 + \frac{2|\Omega|^{\frac{1}{4}}\left(c+\frac{w_{4}}{D_{3}}\right) \left(\frac{w_{4}}{D_{3}}\right)^{\frac{1}{4}}(C+D_{3})}{Cw_{4}}.
\end{equation}
This yields the following theorem
\begin{theorem}
\label{thm:gaus1}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3} where $\Omega$ is of spatial dimension $n=1, 2, 3$. There exists a universal constant $K_{E} > 0$, independent of the Allee parameter $m$ that bounds uniformly in $E$ the family of global attractors $\mathcal{A}_{m}$. That is,
\begin{equation}
\bigcup_{0\leq m \leq 1} \mathcal{A}_{m} \subset B_{E}(0,K_{E}),
\end{equation}
where $ B_{E}(0,K_{E})$ is the closed ball of radius $K_{E}$ explicitly given in \eqref{eq:Ke1}.
\end{theorem}
We next show that the trajectories $S_{m}(t)\mathcal{U}, t\geq 0$, are uniformly $E$-bounded, with respect to $m$. Since the estimate in \eqref{eq:Ke1} does not depend on $m$, we can take a supremum over $m \in [0,1]$ and still achieve the same bound. Thus we have the following result
\begin{theorem}
\label{thm:gaus12}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3}. There exists a universal constant $K_{E} > 0$, independent of the Allee parameter $m$ such that
\begin{equation}
\sup_{0\leq m \leq 1} \sup_{t \geq 0} S_{m}(t)\left(\bigcup_{0\leq m \leq 1} \mathcal{A}_{m} \right) \subset B_{E}(0,K_{E}),
\end{equation}
where, $ B_{E}(0,K_{E})$ is the closed ball of radius $K_{E}$ in the space $E$.
\end{theorem}
We next set $m=0$ in \eqref{eq:x3} and note that the corresponding estimates in lemma \ref{lem:lemba} hold for $u, v$, and do not depend on the parameter $m$. Thus we carry out the analysis as in \eqref{eq:nh1na} - \eqref{eq:Ke1} to still yield
\begin{align}
||\nabla r||^{2}_{2} \leq C_{K} e^{K C_{K}} , \ \mbox{for} \ t > t_{1} + 1.
\end{align}
Where $C_{K} e^{K C_{K}}$ is independent of the parameter $m$. Thus we can state the following result
\begin{theorem}
\label{thm:ga1m0}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3}, when $m=0$, and where $\Omega$ is of spatial dimension $n=1, 2, 3$. There exists a $(H,E)$ global attractor $\mathcal{A}$ for the system. This is compact and invariant in $H$, and it attracts all bounded subsets of $H$ in the $E$ metric.
\end{theorem}
The proof of the above theorem follows essentially via theorem \ref{thm:ga2}.
Note, since $\left(\bigcup_{0\leq m \leq 1} \mathcal{A}_{m} \right)$ is a bounded set in $H$, we can state the following result.
\begin{theorem}
\label{thm:gaus13}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3}, with $m=0$. The global attractor for this system $\mathcal{A}_{0}$, attracts the set $\left(\bigcup_{0\leq m \leq 1} \mathcal{A}_{m} \right) $, with respect to the $E$-norm. That is
\begin{equation}
\lim_{ t \rightarrow \infty} dist_{E} \left(S_{0}(t)\left(\bigcup_{0\leq m \leq 1} \mathcal{A}_{m}\right), \mathcal{A}_{0} \right) = 0.
\end{equation}
\end{theorem}
We now use the Gronwall-Henry lemma to show uniform convergence in the parameter $m$.
\begin{theorem}
\label{thm:gaum}
Consider the reaction diffusion equation described via \eqref{eq:x1}-\eqref{eq:x3}, and associated semi-group $S_{m}(t)$. Also consider \eqref{eq:x1}-\eqref{eq:x3}, when $m=0$, that is with associated semi-group $S_{0}(t)$. For any given $t \geq 0$, it holds that
\begin{equation}
\sup_{g_{0} \in \mathcal{U}} ||S_{m}(t)g_{0} - S_{0}(t)g_{0}||_{E} \rightarrow 0, \ \mbox{as} \ m \rightarrow 0^{+}.
\end{equation}
\end{theorem}
For any given initial data $g_{0} \in \mathcal{U}$, we note that $S_{m}(t)g_{0}$ and $S_{0}(t)g_{0}$ are both classical solutions, hence mild solutions. Thus we denote
\begin{equation}
w(t) = S_{m}(t)g_{0} - S_{0}(t)g_{0}, t \geq 0, \ \mbox{with} \ w(0)=0.
\end{equation}
Using the form of mild solutions, we can write down the following,
\begin{equation}
w(t) = \int^{t}_{0}e^{A(t-\sigma)}[f_{0}(S_{m}(\sigma)g_{0}) - f_{0}(S_{0}(\sigma)g_{0})] d \sigma + m \int^{t}_{0}e^{A(t-\sigma)}h(S_{m}(\sigma)g_{0}) d \sigma, \ t \geq 0.
\end{equation}
Here $e^{A t}$ , $t \geq 0$ is the $C_{0}$ semi-group generated by $A : D(A) \rightarrow H$. Note
\begin{equation}
f_{m}(u,v,r)=
\left(
\begin{array}{c}
u-u^{2}-w_{1}\frac{uv}{u+v}\\
-a_{2}v+w_{2}\frac{uv}{u+v}-w_{3}\left(\frac{vr}{v+r}\right) \\
-mcr+\left(c+\frac{mw_4} {v+D_3}\right)r^{2}-\frac{w_4} {v+D_3}r^{3} \\
\end{array} \
\right),
\quad
h(u,v,r) = \left(
\begin{array}{c}
0\\
0 \\
-cr+\frac{w_4} {v+D_3}r^{2} \\
\end{array}
\right).
\end{equation}
These functions are easily seen to be Lipschitz continuous in $E$, due to the Sobolev embedding of $H^{1}(\Omega) \hookrightarrow L^{6}(\Omega)$.
In particular $f_{0}$, which is obtained by setting $m=0$ in $f_{m}$ above (so that $f_{m}=f_{0}+mh$), is Lipschitz continuous; thus there is a Lipschitz constant $L_{f_{0}}(K_{E}) > 0$, depending only on $K_{E}$, where $K_{E}$ is given in \eqref{eq:Ke1}, such that
\begin{equation}
||f_{0}(g_{1}) - f_{0}(g_{2})|| \leq L_{f_{0}}(K_{E}) || g_{1} - g_{2} ||_{E} ,
\end{equation}
for any $g_{1}, g_{2} \in B_{E}(0,K_{E})$. From theorems \ref{thm:gaus1}, \ref{thm:gaus12} we have
\begin{eqnarray}
&& ||w(t)||_{E} \nonumber \\
&& \leq \int^{t}_{0}||e^{A(t-\sigma)}||_{\mathcal{L}(H,E)} L_{f_{0}}(K_{E}) || S_{m}(\sigma)g_{0} -S_{0}(\sigma)g_{0}||_{E} d \sigma \nonumber \\
&& + m \int^{t}_{0}||e^{A(t-\sigma)}||_{\mathcal{L}(H,E)}||h(S_{m}(\sigma)g_{0})||_{H} d \sigma ,\nonumber \\
&& \leq Cm(K_{E})^{3} \int^{t}_{0} N(t-\sigma)^{-\frac{1}{2}} d \sigma + NL_{f_{0}}(K_{E}) \int^{t}_{0} (t-\sigma)^{-\frac{1}{2}} ||w(\sigma)||_{E} d \sigma, \nonumber \\
&& \leq Cm(K_{E})^{3}t^{\frac{1}{2}} + NL_{f_{0}}(K_{E}) \int^{t}_{0} (t-\sigma)^{-\frac{1}{2}} ||w(\sigma)||_{E} d \sigma.
\end{eqnarray}
This follows via standard decay estimates on the semi-group \cite{SY02}, theorems \ref{thm:gaus1}, \ref{thm:gaus12} and the Sobolev embedding of $H^{1}(\Omega) \hookrightarrow L^{6}(\Omega)$.
We now apply the Gronwall-Henry inequality with $\psi(t)= ||w(t)||_{E}$, inhomogeneity $m\zeta(t)$, $\mu = NL_{f_{0}}(K_{E})$ and $r=\frac{1}{2}$, to obtain
\begin{equation}
||S_{m}(t)g_{0} - S_{0}(t)g_{0}||_{E} = ||w(t)||_{E} \leq m \left(\zeta(t) + k\int^{t}_{0}\Phi(k(t-s))\zeta(s)ds \right), \ t \geq 0.
\end{equation}
Here
\begin{equation}
\zeta(t) = C(K_{E})^{3}t^{\frac{1}{2}}, \ k=\left(NL_{f_{0}}(K_{E})\Gamma \left(\frac{1}{2}\right)\right)^{2} , \ \Phi(t)= \sum^{\infty}_{n=1}\frac{1}{\Gamma(\frac{n}{2})}t^{\frac{n}{2}-1}.
\end{equation}
Thus we get the uniform convergence as required:
\begin{equation}
\sup_{g_{0} \in \mathcal{U}} ||S_{m}(t)g_{0} - S_{0}(t)g_{0}||_{E} \rightarrow 0, \ \mbox{as} \ m \rightarrow 0^{+},
\end{equation}
where the convergence is uniform in the initial data $g_{0} \in \mathcal{U}$.
\end{document}
\begin{document}
\author{Oscar Higgott}
\email{[email protected]}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\author{Matthew Wilson}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\affiliation{Department of Computer Science, University of Oxford, Oxford OX1 3QD, United Kingdom}
\author{James Hefford}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\affiliation{Department of Computer Science, University of Oxford, Oxford OX1 3QD, United Kingdom}
\author{James Dborin}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\affiliation{London Centre for Nanotechnology, University College London,
Gordon St., London WC1H 0AH, United Kingdom}
\author{Farhan Hanif}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\author{Simon Burton}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\author{Dan E. Browne}
\affiliation{Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom}
\title{Optimal local unitary encoding circuits for the surface code}
\date{\today}
\begin{abstract}
The surface code is a leading candidate quantum error correcting code, owing to its high threshold and compatibility with existing experimental architectures. Bravyi \textit{et al.}~\cite{bravyi2006lieb} showed that encoding a state in the surface code using local unitary operations requires time at least linear in the lattice size $L$; however, the most efficient known method for encoding an unknown state, introduced by Dennis \textit{et al.}~\cite{dennis2002topological}, has $O(L^2)$ time complexity. Here, we present an optimal local unitary encoding circuit for the planar surface code that uses exactly $2L$ time steps to encode an unknown state in a distance $L$ planar code. We further show how an $O(L)$ complexity local unitary encoder for the toric code can be found by enforcing locality in the $O(\log L)$-depth non-local renormalisation encoder. We relate these techniques by providing an $O(L)$ local unitary circuit to convert between a toric code and a planar code, and also provide optimal encoders for the rectangular, rotated and 3D surface codes.
Furthermore, we show how our encoding circuit for the planar code can be used to prepare fermionic states in the compact mapping, a recently introduced fermion to qubit mapping that has a stabiliser structure similar to that of the surface code and is particularly efficient for simulating the Fermi-Hubbard model.
\end{abstract}
\maketitle
\section{\label{sec:intro}Introduction}
One of the most promising error correcting codes for achieving fault-tolerant quantum computing is the surface code, owing to its high threshold and low weight check operators that are local in two dimensions~\cite{kitaev2003fault, dennis2002topological}. The stabilisers of the surface code are defined on the faces and sites of an $L \times L$ square lattice embedded on either a torus (the \textit{toric} code) or a plane (the \textit{planar} code). The toric code encodes two logical qubits, while the planar code encodes a single logical qubit.
An important component of any quantum error correction (QEC) code is its encoding circuit, which maps an initial product state of $k$ qubits in arbitrary unknown states (along with $n-k$ ancillas) to the same state on $k$ logical qubits encoded in a quantum code with $n$ physical qubits. The encoding of logical states has been realised experimentally for the demonstration of small-scale QEC protocols using various codes~\cite{ chiaverini2004realization, lu2008experimental, schindler2011experimental, taminiau2014universal, waldherr2014quantum, nigg2014quantum, kelly2015state, ofek2016extending, cramer2016repeated, linke2017fault, vuillot2017error, roffe2018protecting,gong2021experimental}, however one of the challenges of realising larger-scale experimental demonstrations of QEC protocols is the increasing complexity of the encoding circuits with larger system sizes, which has motivated the recent development of compiling techniques that reduce the number of noisy gates in unitary encoding circuits~\cite{xu2021variational}.
Encoding circuits can also be useful for implementing fermion-to-qubit mappings~\cite{seeley2012bravyi}, an important component of quantum simulation algorithms, since some mappings introduce stabilisers in order to mitigate errors~\cite{jiang2019majorana} or enforce locality in the transformed fermionic operators~\cite{bravyi2002fermionic,verstraete2005mapping,steudtner2019quantum,havlivcek2017operator}. Local unitary encoding circuits provide a method to initialise and switch between mappings without the need for ancilla-based stabiliser measurements and feedback.
The best known local unitary circuits for encoding an unknown state in the surface code are far from optimal. Bravyi \textit{et al.}~\cite{bravyi2006lieb} showed that any local unitary encoding circuit for the surface code must take time that is at least linear in the distance $L$; however, the most efficient known local unitary circuit for encoding an unknown state in the surface code was introduced by Dennis \textit{et al.}~\cite{dennis2002topological}, and requires $\Omega(L^2)$ time to encode an unknown state in a distance $L$ planar code. Aguado and Vidal~\cite{aguado_MERA} introduced a Renormalisation Group (RG) unitary encoding circuit for preparing an unknown state in the toric code with $O(\log L)$ circuit depth; however, their method requires non-local gates. More recently, Aharonov and Touati provided an $\Omega(\log L)$ lower bound on the circuit depth of preparing toric code states with non-local gates, demonstrating that the RG encoder is optimal in this setting~\cite{aharonov2018quantum}, and an alternative approach for preparing a specific state in the toric code with non-local gates and depth $O(\log L)$ was recently introduced in Ref.~\cite{liao2021quantum}. Dropping the requirement of unitarity, encoders have been found that use stabiliser measurements~\cite{lodyga2015simple, horsman2012surface,li2015magic} or local dissipative evolution~\cite{dengis2014optimal}, and it has been shown that local dissipative evolution cannot be used to beat the $\Omega(L)$ lower bound for local unitary encoders~\cite{konig2014generating}. If only the logical $\bar{\ket{0}}$ state is to be prepared, then stabiliser measurements~\cite{dennis2002topological} can be used, as well as optimal local unitaries that either use adiabatic evolution~\cite{hamma2008adiabatic} or a mapping from a cluster state~\cite{brown2011generating}. However, encoding circuits by definition should be capable of encoding an arbitrary unknown input state.
In this work, we present local unitary encoding circuits for both the planar and toric code that take time linear in the lattice size to encode an unknown state, achieving the $\Omega(L)$ lower bound given by Bravyi \textit{et al.}~\cite{bravyi2006lieb}. Furthermore, we provide encoding circuits for rectangular, rotated and 3D surface codes, as well as a circuit that encodes a toric code from a planar code. Our circuits also imply optimal encoders for the 2D color code~\cite{kubica2015unfolding}, some 2D subsystem codes~\cite{bombin2012universal, bravyi2012subsystem} and any 2D translationally invariant topological code~\cite{bombin2012universal}. On many Noisy Intermediate-Scale Quantum (NISQ)~\cite{preskill2018quantum} devices, which are often restricted to local unitary operations, our techniques therefore provide an optimal method for experimentally realising topological quantum order.
Another advantage of using a unitary encoding circuit is that it does not require the use of ancillas to measure stabilisers, therefore providing a more qubit efficient method of preparing topologically ordered states ($2\times$ fewer qubits are required to prepare a surface code state of a given lattice size).
Finally, we show how our unitary encoding circuits for the planar code can be used to construct $O(L)$ depth circuits to encode a Slater determinant state in the compact mapping~\cite{derby2021compact}, which can be used for the simulation of fermionic systems on quantum computers.
\section{Stabiliser codes}\label{sec:stab}
An $n$-qubit Pauli operator $P=\alpha P_n$ where $P_n\in\{I,X,Y,Z\}^{\otimes n}$ is an $n$-fold tensor product of single qubit Pauli operators with the coefficient $\alpha\in\{\pm 1, \pm i\}$. The set of all $n$-qubit Pauli operators forms the $n$-qubit Pauli group $\mathcal{P}_n$. The \textit{weight} $\mathrm{wt}(P)$ of a Pauli operator $P\in\mathcal{P}_n$ is the number of qubits on which it acts non-trivially. Any two Pauli operators commute if an even number of their tensor factors anti-commute, and anti-commute otherwise.
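This parity rule is easy to check mechanically; the following minimal sketch (our own illustration, not tied to any particular library) tests it for Pauli operators written as strings over the alphabet $\{I,X,Y,Z\}$, ignoring overall phases.
\begin{verbatim}
def paulis_commute(p, q):
    # p, q: equal-length strings over 'IXYZ'.  Two Pauli operators commute
    # iff an even number of their single-qubit tensor factors anti-commute.
    anticommuting = sum(1 for a, b in zip(p, q)
                        if a != 'I' and b != 'I' and a != b)
    return anticommuting % 2 == 0

assert paulis_commute("XXI", "ZZI")      # two anti-commuting factors
assert not paulis_commute("XII", "ZZI")  # one anti-commuting factor
\end{verbatim}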
Stabiliser codes~\cite{gottesman1997stabilizer} are defined in terms of a stabiliser group $\mathcal{S}$, which is an abelian subgroup of $\mathcal{P}_n$ that does not contain the element $-I$. Elements of a stabiliser group are called \textit{stabilisers}. Since every stabiliser group is abelian and Pauli operators have the eigenvalues~$\pm 1$, there is a joint $+1$-eigenspace of every stabiliser group, which defines the stabiliser code.
The \textit{check operators} of a stabiliser code are a set of generators of $\mathcal{S}$ and hence all measure $+1$ if the state is uncorrupted. Any check operator $M$ that anticommutes with an error $E$ will measure -1 (since $ME\ket{\psi}=-EM\ket{\psi}=-E\ket{\psi}$). The centraliser $C(\mathcal{S})$ of $\mathcal{S}$ in $\mathcal{P}_n$ is the set of Pauli operators which commute with every stabiliser. If an error $E\in C(\mathcal{S})$ occurs, it will be undetectable. If $E\in\mathcal{S}$, then it acts trivially on the codespace, and no correction is required. However if $E\in C(\mathcal{S})\setminus \mathcal{S}$, then an undetectable logical error has occurred. The distance $d$ of a stabiliser code is the smallest weight of any logical operator.
A stabiliser code is a Calderbank-Shor-Steane (CSS) code if there exists a generating set for the stabiliser group such that every generator is in $\{I,X\}^n\cup \{I,Z\}^n$.
\section{The Surface Code}
The surface code is a CSS code introduced by Kitaev~\cite{kitaev2003fault, dennis2002topological}, which has check operators defined on a square lattice embedded in a two-dimensional surface. Each \textit{site} check operator is a Pauli operator in $\{I,X\}^n$ which only acts non-trivially on the edges adjacent to a vertex of the lattice. Each $\textit{plaquette}$ check operator is a Pauli operator in $\{I,Z\}^n$ which only acts non-trivially on the edges adjacent to a face of the lattice. In the toric code, the square lattice is embedded in a torus, whereas in the planar code the lattice is embedded in a plane, without periodic boundary conditions (see \Cref{fig:surface_code}). These site and plaquette operators together generate the stabiliser group of the code. While the toric code encodes two logical qubits, the planar code encodes a single logical qubit.
\begin{figure}
\caption{The check operators for (a) the toric code and (b) the planar code. Opposite edges in (a) are identified and each edge corresponds to a qubit.}
\label{fig:surface_code}
\end{figure}
\section{Encoding an unknown state}\label{sec:encoding_unknown}
We are interested in finding a unitary encoding circuit that maps a product state $\ket{\phi_0}\otimes\ldots\otimes\ket{\phi_{k-1}}\otimes \ket{0}^{\otimes (n-k)}$ of $k$ physical qubits in unknown states (along with ancillas) to the state of $k$ logical qubits encoded in a stabiliser code with $n$ physical qubits. Labelling the ancillas in the initial state $k, k+1,\ldots ,n-1$, we note that the initial product state is a $+1$-eigenstate of the stabilisers $Z_k,Z_{k+1},\ldots,Z_{n-1}$. Thus, we wish to find a unitary encoding circuit that maps the stabilisers $Z_k,Z_{k+1},\ldots,Z_{n-1}$ of the product state to a generating set for the stabiliser group $\mathcal{S}$ of the code. The circuit must also map the logical operators $Z_0,Z_1,\ldots,Z_{k-1}$ and $X_0,X_1,\ldots,X_{k-1}$ of the physical qubits to the corresponding logical operators $\bar{Z}_0,\bar{Z}_1,\ldots,\bar{Z}_{k-1}$ and $\bar{X}_0,\bar{X}_1,\ldots,\bar{X}_{k-1}$ of the encoded qubits (up to stabilisers).
Applying a unitary $U$ to an eigenstate $\ket{\psi}$ of an operator $S$ (with eigenvalue $s$) gives $US\ket{\psi}=sU\ket{\psi}=USU^\dagger U\ket{\psi}$: an eigenstate of $S$ becomes an eigenstate of $USU^\dagger$. Therefore, we wish to find a unitary encoding circuit that, acting under conjugation, transforms the stabilisers and logicals of the initial product state into the stabilisers and logicals of the encoded state.
The CNOT gate, acting by conjugation, transforms Pauli $X$ and $Z$ operators as follows:
\begin{align}\label{eq:cnotstabilisers}
XI \leftrightarrow XX, \quad IZ \leftrightarrow ZZ,
\end{align}
and leaves $ZI$ and $IX$ invariant. Here $\sigma \sigma^\prime$ for $\sigma,\sigma^\prime\in \{I, Z, X\}$ denotes $\sigma_C\otimes \sigma_T$ with $C$ and $T$ the control and target qubit of the CNOT respectively. Since $Z=HXH$ and $X=HZH$, a Hadamard gate $H$ transforms an eigenstate of $Z$ into an eigenstate of $X$ and vice versa. We will show how these relations can be used to generate unitary encoding circuits for the surface code using only CNOT and Hadamard gates.
As an example, consider the problem of generating the encoding circuit for the repetition code, which has stabilisers $Z_0Z_1$ and $Z_1Z_2$. We start in the product state $\ket{\phi}\ket{0}\ket{0}$ which has stabilisers $Z_1$ and $Z_2$. We first apply CNOT$_{01}$ which transforms the stabiliser $Z_1\rightarrow Z_0Z_1$ and leaves $Z_2$ invariant. Then applying CNOT$_{12}$ transforms $Z_2\rightarrow Z_1Z_2$ and leaves $Z_0Z_1$ invariant. We can also verify that the logical $X$ undergoes the required transformation $X_0\rightarrow \bar{X}_0\coloneqq X_0X_1X_2$.
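This bookkeeping can also be verified numerically. The following minimal sketch (a dense-matrix illustration of our own, practical only for a handful of qubits) conjugates the initial stabilisers and logical operator of the repetition-code example through the two CNOT gates and checks the transformations stated above.
\begin{verbatim}
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two-qubit CNOT, control = first tensor factor, target = second.
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

CNOT01 = np.kron(CNOT, I)   # control qubit 0, target qubit 1
CNOT12 = np.kron(I, CNOT)   # control qubit 1, target qubit 2
U = CNOT12 @ CNOT01         # CNOT_01 is applied first

conj = lambda V, P: V @ P @ V.conj().T

# Stabilisers and logical X of the input product state |phi>|0>|0>.
Z1, Z2, X0 = kron(I, Z, I), kron(I, I, Z), kron(X, I, I)

assert np.allclose(conj(U, Z1), kron(Z, Z, I))  # Z_1 -> Z_0 Z_1
assert np.allclose(conj(U, Z2), kron(I, Z, Z))  # Z_2 -> Z_1 Z_2
assert np.allclose(conj(U, X0), kron(X, X, X))  # X_0 -> X_0 X_1 X_2
\end{verbatim}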
\section{General Encoding Methods for Stabiliser Codes}
There exists a general method for generating an encoding circuit for any stabiliser code~\cite{gottesman1997stabilizer, Cleve_1997}, which we review in Appendix A. The specific structure of the output of this method means it can immediately be rearranged to depth $O(n)$. Using general routing procedures presented in \cite{cheung2007translation, beals2013efficient,brierley2015efficient} the output circuit could be adapted to a surface architecture with overhead $O(\sqrt{n})$, giving a circuit with depth $O(n\sqrt{n})$. This matches the scaling $O(\min(2n^2,4nD\Delta))$ in depth for stabiliser circuits achieved in \cite{wu2019optimization}, where $D$ and $\Delta$ are the diameter and degree respectively of the underlying architecture graph. Any stabiliser circuit has an equivalent skeleton circuit \cite{Maslov_2007}, and so can be implemented on a surface architecture with depth $O(n) = O(L^2)$, matching the previously best known scaling \cite{dennis2002topological} for encoding the planar code. $O(n)$ is an optimal bound on the depth of the set of all stabiliser circuits \cite{Maslov_2007}, so we look beyond general methods and work with the specifics of the planar encoding circuit to improve on \cite{dennis2002topological}.
\section{Optimal encoder for the planar code}\label{sec:planar_encoding}
Dennis \textit{et al.}~\cite{dennis2002topological} showed how the methods outlined in section \ref{sec:encoding_unknown} can be used to generate an encoding circuit for the planar surface code. The inductive step in their method requires $\Omega(L)$ time steps and encodes a distance $L+1$ planar code from a distance $L$ code by turning smooth edges into rough edges and vice versa. As a result, encoding a distance $L$ planar code from an unencoded qubit requires $\Omega(L^2)$ time steps, which is quadratically slower than the lower bound given by Bravyi \textit{et al.}~\cite{bravyi2006lieb}.
\begin{figure}
\caption{Circuit to encode a distance 6 planar code from a distance 4 planar code. Each edge corresponds to a qubit. Each arrow denotes a CNOT gate, pointing from control to target. Filled black circles (centred on edges) denote Hadamard gates, which are applied at the beginning of the circuit. The colour of each CNOT gate (arrow) denotes the time step in which it is applied. The first, second, third and fourth time steps correspond to the blue, green, red and black CNOT gates respectively. Solid edges correspond to qubits originally encoded in the L=4 planar code, whereas dotted edges correspond to additional qubits that are encoded in the L=6 planar code.}
\label{fig:planar_L4_to_L6}
\end{figure}
However, here we present a local unitary encoding circuit for the planar code that requires only $2L$ time steps to encode a distance $L$ planar code. The inductive step in our method, shown in \Cref{fig:planar_L4_to_L6} for $L=4$, encodes a distance $L+2$ planar code from a distance $L$ planar code using 4 time steps, and does not rotate the code. This inductive step can then be used recursively to encode an unencoded qubit into a distance $L$ planar code using $2L$ time steps. If $L$ is odd, the base case used is the distance 3 planar code, which can be encoded in 6 time steps. If $L$ is even, a distance 4 planar code is used as a base case, which can be encoded in 8 time steps. Encoding circuits for the distance 3 and 4 planar codes are given in Appendix~\ref{app:additional_planar_encoders}. Our encoding circuit therefore matches the $\Omega(L)$ lower bound provided by Bravyi \textit{et al.}~\cite{bravyi2006lieb}.
\begin{figure}
\caption{The transformation of the stabiliser generators of the $L=4$ planar surface code when the circuit in \Cref{fig:planar_L4_to_L6} is applied.}
\label{fig:planar_generators_mapped}
\end{figure}
Since the circuit for the inductive step in \Cref{fig:planar_L4_to_L6} uses only CNOT and $H$ gates, we can verify its correctness by checking that stabiliser generators and logicals of the distance $L$ surface code are mapped to stabiliser generators and logicals of the distance $L+2$ surface code using the conjugation rules explained in \Cref{sec:encoding_unknown}.
We show how each type of site and plaquette stabiliser generator is mapped by the inductive step of the encoding circuit in \Cref{fig:planar_generators_mapped}.
Note that the site stabiliser generator labelled c (red) is mapped to a weight 7 stabiliser in the $L=6$ planar code: this is still a valid generator of the stabiliser group, and the standard weight four generator can be obtained by multiplication with a site of type b.
Similarly, the plaquette stabiliser generator labelled c becomes weight 7, but a weight four generator is recovered from multiplication by a plaquette of type a.
Therefore, the stabiliser group of the $L=4$ planar code is mapped correctly to that of the $L=6$ planar code, even though minimum-weight generators are not mapped explicitly to minimum-weight generators.
Using \Cref{eq:cnotstabilisers} it is straightforward to verify that the $X$ and $Z$ logical operators of the $L=4$ planar code are also mapped to the $X$ and $Z$ logicals of the $L=6$ planar code by the inductive step.
We can also encode rectangular planar codes with height $H$ and width $W$ by first encoding a distance $\min(H,W)$ square planar code and then using a subset of the gates in \Cref{fig:planar_L4_to_L6} (given explicitly in Appendix~\ref{app:additional_planar_encoders}) to either increase the width or the height as required. Increasing either the width or height by two requires three time steps, therefore encoding a $H\times W$ rectangular planar code from an unencoded qubit requires $2\min(H,W)+3\left\lceil\frac{|H-W|}{2}\right\rceil$ time steps.
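As a simple illustration, the time-step count stated above can be transcribed directly into a helper function (a hypothetical utility of our own, not code accompanying this work):
\begin{verbatim}
import math

def rect_planar_encoding_steps(height, width):
    # Time steps to encode an H x W rectangular planar code, per the
    # counting argument above.
    return 2 * min(height, width) + 3 * math.ceil(abs(height - width) / 2)

assert rect_planar_encoding_steps(4, 4) == 8   # square case: 2L time steps
assert rect_planar_encoding_steps(4, 6) == 11  # grow the width by two: +3
\end{verbatim}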
In Appendix~\ref{app:rotated_code} we also provide an optimal encoder for the \textit{rotated} surface code, which uses fewer physical qubits for a given distance $L$~\cite{bombin2007optimal}. Our encoding circuit also uses an inductive step that increases the distance by two using four time steps, and therefore uses $2L + O(1)$ time steps to encode a distance $L$ rotated surface code.
\section{Local Renormalisation Encoder for the Toric Code}\label{sec:RenormalisationEncoder}
In this section we will describe an $O(L)$ encoder for the toric code based on the multi-scale entanglement renormalisation ansatz (MERA). The core of this method is to enforce locality in the Renormalisation Group (RG) encoder given by Aguado and Vidal~\cite{aguado_MERA}. The RG encoder starts from an $L=2$ toric code and then uses an $O(1)$ depth inductive step which enlarges a distance $2^k$ code to a distance $2^{k+1}$ code, as shown in \Cref{fig:L2_to_L4} for the first step ($k=1$) (and reviewed in more detail in Appendix~\ref{app:renormalisation_group}). The $L=2$ base case toric code can be encoded using the method given by Gottesman in Ref.~\cite{gottesman1997stabilizer}, as shown in Appendix \ref{basetoric}. While the RG encoder takes $O(\log L)$ time, it is non-local in its original form.
\begin{figure}
\caption{Encoding a distance 4 toric code from a distance 2 toric code using the Renormalisation Group encoder of Aguado and Vidal~\cite{aguado_MERA}.}
\label{fig:timing1}
\label{fig:timing2}
\label{fig:timing3}
\label{fig:L2_to_L4}
\end{figure}
In order to enforce locality in the RG encoder, we wish to find an equivalent circuit that implements an identical operation on the same input state, using quantum gates that act locally on the physical architecture corresponding to the final distance $L$ toric code (here a gate is \textit{local} if it acts only on qubits that belong to either the same site or plaquette). One approach to enforce locality in a quantum circuit is to insert SWAP gates into the circuit to move qubits adjacent to each other where necessary. Any time step of a quantum circuit can be made local on a $L\times L$ 2D nearest-neighbour (2DNN) grid architecture using at most $O(L)$ time steps, leading to at most a multiplicative $O(L)$ overhead from enforcing locality~\cite{cheung2007translation, beals2013efficient, brierley2015efficient}. Placing an ancilla in the centre of each site and plaquette, we see that the connectivity graph of our physical architecture has a 2DNN grid as a subgraph. Therefore, using SWAP gates to enforce locality in the RG encoder immediately gives us a $O(L\log L)$ local unitary encoding circuit for the toric code which, while an improvement on the $O(L^2)$ encoder in Ref.~\cite{dennis2002topological}, does not match the $\Omega(L)$ lower bound.
However, we can achieve $O(L)$ complexity by first noticing that all `quantum circuit' qubits which are acted on non-trivially in the first $k$ steps of the RG encoder can be mapped to physical qubits in a $2^{k+1}\times 2^{k+1}$ square region of the physical architecture. Therefore, the required operations in iteration $k$ can all be applied within a $2^{k+1}\times 2^{k+1}$ region that also encloses the regions used in the previous steps. In Appendix \ref{routing} we use this property to provide circuits for routing quantum information using SWAP gates (and no ancillas) that enforce locality in each of the $O(1)$ time steps in iteration $k$ using $O(2^{k+1})$ time steps. This leads to a total complexity of $\sum_{k=1}^{\log_2(L)-1} O(2^{k+1}) = O(L)$ for encoding a distance $L$ code, also achieving the lower bound given by Bravyi \textit{et al.}~\cite{bravyi2006lieb}. In Appendix~\ref{routing} we provide a more detailed analysis to show that the total time complexity is $15L/2 - 6\log_2 L + 7 \sim O(L)$. Unlike the other encoders in this paper (which work for all $L$), the RG encoder clearly can only be applied when $L$ is a power of 2.
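For reference, dropping the constants hidden in the $O(\cdot)$ notation, the geometric sum behind this scaling evaluates to
\begin{equation*}
\sum_{k=1}^{\log_{2}L-1} 2^{k+1} = 2^{\log_{2}L+1}-4 = 2L-4 = O(L).
\end{equation*}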
\begin{figure}
\caption{Circuit to encode a distance 5 toric code from a distance 5 planar code. Solid edges correspond to qubits in the original planar code and dotted edges correspond to qubits added for the toric code. Opposite edges are identified. Arrows denote CNOT gates, and filled black circles denote Hadamard gates applied at the beginning of the circuit. Blue and green CNOT gates correspond to those applied in the first and second time step respectively. Red CNOTs are applied in the time step that they are numbered with. The hollow circles denote the unencoded qubit that is to be encoded into the toric code.}
\label{fig:planar_to_toric}
\end{figure}
\section{Encoding a toric code from a planar code}
While the method in section \ref{sec:planar_encoding} is only suitable for encoding planar codes, we will now show how we can encode a distance $L$ toric code from a distance $L$ planar code using only local unitary operations. Starting with a distance $L$ planar code, $2(L-1)$ ancillas each in a $\ket{0}$ state, and an additional unencoded logical qubit, the circuit in \Cref{fig:planar_to_toric} encodes a distance $L$ toric code using $L+2$ time steps.
The correctness of this step can be verified using \Cref{eq:cnotstabilisers}: each ancilla initialised as $\ket{0}$ (stabilised by $Z$) is mapped to a plaquette present in the toric code but not the planar code.
Likewise, each ancilla initialised in $\ket{+}$ using an $H$ gate (stabilised by $X$) is mapped to a site generator in the toric code but not the planar code.
The weight-three site and plaquette stabilisers on the boundary of the planar code are also mapped to weight four stabilisers in the toric code.
Finally, we see that $X$ and $Z$ operators for the unencoded qubit (the hollow circle in \Cref{fig:planar_to_toric}) are mapped to the second pair of $X$ and $Z$ logicals in the toric code by the circuit, leaving the other pair of $X$ and $Z$ logicals already present from the planar code unaffected.
Therefore, encoding two unencoded qubits in a toric code can be achieved using $3L+2$ time steps using the circuits given in this section and in section \ref{sec:planar_encoding}. Similarly, we can encode a planar code using the local RG encoder for the toric code, before applying the inverse of the circuit in \Cref{fig:planar_to_toric}.
\section{Encoding a 3D Surface Code}
\begin{figure}
\caption{(a) Circuit to encode a $4\times 2$ planar code from a four qubit repetition code (where adjacent qubits in the repetition code are stabilised by $XX$), applied to a column of qubits corresponding to a surface code $\bar{Z}$ logical operator.}
\label{fig:3D_surface_code}
\end{figure}
We will now show how the techniques developed to encode a 2D planar code can be used to encode a distance $L$ 3D surface code using $O(L)$ time steps. We first encode a distance $L$ planar code using the method given in section \ref{sec:planar_encoding}. This planar code now forms a single layer in the $xy$-plane of a 3D surface code (where the $y$-axis is defined to be aligned with a $Z$-logical in the original planar code). Using the circuit given in \Cref{fig:3D_surface_code}(a), we encode each column of qubits corresponding to a $Z$ logical in the planar code into a layer of the 3D surface code in the $yz$-plane (which has the same stabiliser structure as a planar code if the rest of the $x$-axis is excluded). Since each layer in the $yz$-plane can be encoded in parallel, this stage can also be done in $O(L)$ time steps. If we encode each layer in the $yz$-plane such that the original planar code intersects the middle of each layer in the $yz$-plane, then each layer in the $xz$-plane now has the stabiliser structure shown in \Cref{fig:3D_surface_code}(b). Using the circuit in \Cref{fig:3D_surface_code}(b) repeatedly, all layers in the $xz$-plane can be encoded in parallel in $O(L)$ time steps. Therefore, a single unknown qubit can be encoded into a distance $L$ 3D surface code in $O(L)$ time steps.
\section{Encoding circuit for the compact mapping}
Fermion to qubit mappings are essential for simulating fermionic systems using quantum computers, and an encoding circuit for such a mapping is an important subroutine in many quantum simulation algorithms.
We now show how we can use our encoding circuits for the surface code to construct encoding circuits that prepare fermionic states in the compact mapping~\cite{derby2021compact}, a fermion to qubit mapping that is especially efficient for simulating the Fermi-Hubbard model.
A fermion to qubit mapping defines a representation of fermionic states in qubits, as well as a representation of each fermionic operator in terms of Pauli operators.
Using such a mapping, we can represent a fermionic Hamiltonian as a linear combination $H=\sum_i \alpha_i P_i$ of tensor products of Pauli operators $P_i$, where $\alpha_i$ are real coefficients.
We can then simulate time evolution $e^{-iHt}$ of $H$ (e.g.~using a Trotter decomposition), which can be used in the quantum phase estimation algorithm to determine the eigenvalues of $H$.
The mapped Hamiltonian $H$ can also be used in the variational quantum eigensolver algorithm (VQE), where we can estimate the energy $\bra{\psi}H\ket{\psi}$ of a trial state $\ket{\psi}$ by measuring each Pauli term $\bra{\psi}P_i\ket{\psi}$ individually.
The Jordan-Wigner (JW) transformation maps fermionic creation ($a_i^\dagger$) and annihilation ($a_i$) operators to qubit operators in such a way that the canonical fermionic anti-commutation relations
\begin{equation}
\{a_{i}^{\dagger}, a_{j}^{\dagger}\}=0,\quad \{a_{i}, a_{j}\}=0,\quad \{a_{i}^{\dagger}, a_{j}\}=\delta_{i j}
\end{equation}
are satisfied by the encoded qubit operators.
The qubit operators used to represent $a_i^\dagger$ and $a_i$ are
\begin{align}
a_i^\dagger &\rightarrow Z_1\ldots Z_{i-1}\sigma_i^+ \\
a_i &\rightarrow Z_1\ldots Z_{i-1}\sigma_i^-
\end{align}
where $\sigma_i^+\coloneqq (X_i-iY_i)/2$ and $\sigma_i^-\coloneqq (X_i+iY_i)/2$.
Each electronic basis state (with $m$ modes) in the JW transformation is represented by $m$ qubits simply as a computational basis state $\ket{\omega_1,\omega_2,\ldots,\omega_m}$, where $\omega_i=1$ or $\omega_i=0$ indicates that mode $i$ is occupied or unoccupied by a fermion, respectively.
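For readers implementing this mapping, the following short Python sketch (assuming only \texttt{numpy}) builds the JW operators above for a small number of modes and checks the canonical anti-commutation relations numerically:
\begin{verbatim}
# Sketch: Jordan-Wigner a_i for m modes, with a numerical
# check of the anti-commutation relations (small m only).
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sm = (X + 1j * Y) / 2   # sigma^-, empties an occupied mode

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_a(i, m):
    # a_i = Z_1 ... Z_{i-1} sigma^-_i  (modes 1..m)
    return kron_all([Z] * (i - 1) + [sm] + [I2] * (m - i))

m = 3
a = {i: jw_a(i, m) for i in range(1, m + 1)}
d = 2 ** m
for i in a:
    for j in a:
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        want = np.eye(d) if i == j else np.zeros((d, d))
        assert np.allclose(anti, want)
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
print("JW anti-commutation relations hold for m =", m)
\end{verbatim}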
A drawback of the Jordan-Wigner transformation is that, even if a fermionic operator acts on $O(1)$ modes, the corresponding JW-mapped qubit operator can still act on up to $O(m)$ qubits.
When mapped qubit operators have larger weight, the depth and number of gates required to simulate time evolution of a mapped Hamiltonian also tend to increase, motivating the design of fermion-to-qubit mappings that map fermionic operators to qubit operators that are both low weight and geometrically local.
Several methods have been proposed for mapping geometrically local fermionic operators to geometrically local qubit operators~\cite{verstraete2005mapping,whitfield2016local,steudtner2019quantum,bravyi2002fermionic,setia2019superfast,jiang2019majorana,derby2021compact}, all of which introduce auxiliary qubits and encode fermionic Fock space into a subspace of the full $n$-qubit system, defined as the $+1$-eigenspace of elements of a stabiliser group $\mathcal{S}$.
Mappings that have this property are referred to as \textit{local}.
We will now focus our attention on a specific local mapping, the compact mapping~\cite{derby2021compact}, since its stabiliser group is very similar to that of the surface code.
As we will see, this close connection to the surface code allows us to use the encoding circuits we have constructed for the surface code to encode fermionic states in the compact mapping.
The compact mapping maps nearest-neighbour hopping ($a_i^\dagger a_j + a_j^\dagger a_i$) and Coulomb ($a_i^\dagger a_i a_j^\dagger a_j$) terms to Pauli operators with weight at most 3 and 2, respectively, and requires 1.5 qubits for each fermionic mode~\cite{derby2021compact}.
Rather than mapping individual fermionic creation and annihilation operators, the compact mapping instead defines a representation of the fermionic edge ($E_{j k}$) and vertex ($V_{j}$) operators, defined as
\begin{equation}
E_{j k}\coloneqq-i \gamma_{j} \gamma_{k}, \quad V_{j}\coloneqq-i \gamma_{j} \bar{\gamma}_{j},
\end{equation}
where $\gamma_j\coloneqq a_j+a_j^\dagger$ and $\bar{\gamma}_j\coloneqq (a_j-a_j^\dagger)/i$ are Majorana operators.
The vertex and edge operators must satisfy the relations
\begin{equation}
\left[E_{i j}, V_{l}\right]=0, \quad\left[V_{i}, V_{j}\right]=0, \quad\left[E_{i j}, E_{l n}\right]=0
\end{equation}
for all $i\neq j \neq l \neq n$, and
\begin{equation}
\left\{E_{i j}, E_{j k}\right\}=0, \quad\left\{E_{j k}, V_{j}\right\}=0.
\end{equation}
In the compact mapping, there is a ``primary'' qubit associated with each of the $m$ fermionic modes, and there are also $m/2$ ``auxiliary'' qubits.
Each vertex operator $V_j$ is mapped to the Pauli operator $Z_j$ on the corresponding primary qubit.
We denote the mapped vertex and edge operators by $\tilde{V}_j$ and $\tilde{E}_{ij}$, respectively, and so we have $\tilde{V}_j\coloneqq Z_j$.
Each edge operator $E_{ij}$ is mapped (up to a phase factor) to a three-qubit Pauli operator of the form $XYX$ or $XYY$, with support on two vertex qubits and a neighbouring ``face'' qubit.
The precise definition of the edge operators is not important for our purposes, and we refer the reader to Ref.~\cite{derby2021compact} for details.
The vertex and edge operators define a graph (in which they correspond to vertices and edges, respectively), and an additional relation that must be satisfied in the mapping is that the product of any loop of edge operators must equal the identity:
\begin{equation}\label{eq:identity_loop}
i^{(|p|-1)}\prod_{k=1}^{|p|-1}\tilde{E}_{p_k p_{k+1}}=1,
\end{equation}
where here $p=\{p_1,p_2,\ldots\}$ is a sequence of vertices along any cycle in the graph.
The relation of \Cref{eq:identity_loop} can be satisfied by ensuring that the qubit operator corresponding to any mapped loop of edge operators is a stabiliser, if it is not already trivial, thereby ensuring that the relations are satisfied within the $+1$-eigenspace of the stabilisers.
\begin{figure}
\caption{The stabilisers of the compact mapping. A primary qubit is associated with each black circle, and an auxiliary qubit is associated with each edge of the surface code lattice. There is a plaquette stabiliser (blue) associated with each face of the surface code lattice, acting as $YXXY$ on the edges adjacent to the face, and as $Z$ on each of the four closest primary qubits. There is also a site stabiliser (red) associated with each vertex of the surface code lattice, also acting as $YXXY$ on the edges adjacent to the vertex, and as $Z$ on each of the four closest primary qubits.}
\label{fig:compact_mapping_stabilisers}
\end{figure}
The stabiliser group $\mathcal{S}$ of the compact mapping is therefore defined by \Cref{eq:identity_loop} and the definition of each $\tilde{E}_{ij}$.
The $+1$-eigenspace of $\mathcal{S}$ has dimension $2^{m+\Delta}$, where $m$ is the number of modes and $\Delta\in\{-1,0,1\}$ is the \textit{disparity}, which depends on the boundary conditions chosen for the square lattice geometry.
We will only consider the case where $\Delta=1$, since this choice results in a stabiliser structure most similar to the surface code.
In this $\Delta=1$ case the full Fock space is encoded, along with a topologically protected logical qubit.
The stabilisers of the compact mapping (for the case $\Delta=1$) are shown in \Cref{fig:compact_mapping_stabilisers}, from which it is clear that the stabiliser group is very similar to that of the planar surface code, a connection which was first discussed in Ref.~\cite{derby2021compact}.
Indeed, if we consider the support of the stabilisers on only the auxiliary qubits (associated with the edges of the surface code lattice shown in \Cref{fig:compact_mapping_stabilisers}), we recover the stabiliser group of the planar surface code up to single-qubit Clifford gates acting on each qubit.
Using this insight, we can use our surface code encoding circuit to construct a local unitary encoding circuit that prepares a Slater determinant state in the compact mapping, which is often required for its use in quantum simulation algorithms.
Note that we can write each fermionic occupation operator $a_j^\dagger a_j$ for mode $j$ in terms of the corresponding vertex operator $V_j$ as $a_j^\dagger a_j=(I-V_j)/2$, where $I$ is the identity operator.
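As a quick check of this identity, using the Majorana operators defined above together with $a_j^2=(a_j^\dagger)^2=0$ and $a_ja_j^\dagger+a_j^\dagger a_j=I$, we have
\begin{align*}
V_j = -i\gamma_j\bar{\gamma}_j &= -(a_j+a_j^\dagger)(a_j-a_j^\dagger) \\
&= a_j a_j^\dagger - a_j^\dagger a_j = I-2a_j^\dagger a_j,
\end{align*}
which rearranges to $a_j^\dagger a_j=(I-V_j)/2$.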
A Slater determinant state $\ket{\phi_{\mathrm{det}}}$ is then a joint eigenstate of the stabilisers and vertex operators:
\begin{align}
S_i\ket{\phi_{\mathrm{det}}}&=\ket{\phi_{\mathrm{det}}},\quad\forall S_i\in\mathcal{S}, \label{eq:determinant_stabilisers} \\
\tilde{V}_j\ket{\phi_{\mathrm{det}}}&=v_j\ket{\phi_{\mathrm{det}}},\quad\forall \tilde{V}_j\in \tilde{V}, \label{eq:vertex_op_eigenvalues}
\end{align}
where $\mathcal{S}$ is the stabiliser group of the mapping, $\tilde{V}$ is the set of mapped vertex operators, and $v_j\in\{+1,-1\}$ indicates whether mode $j$ is occupied (-1) or unoccupied (+1)~\cite{jiang2019majorana}.
Let us denote the set of generators of $\mathcal{S}$ defined by the sites and plaquettes in \Cref{fig:compact_mapping_stabilisers} by $\{s_1,s_2,\ldots,s_r\}$ (i.e.~$\mathcal{S}=\langle s_1,s_2,\ldots,s_r \rangle$).
For any Pauli operator $c$, we denote its component acting only on the primary qubits as $c^p$, and its component acting only on auxiliary qubits is denoted $c^a$.
With this notation we can decompose each stabiliser generator as $s_i=s_i^p\otimes s_i^a$, where $|s_i^p|=|s_i^a|=4$ in the bulk of the lattice.
For the compact mapping, where $\tilde{V}_j\coloneqq Z_j$, from \Cref{eq:vertex_op_eigenvalues} we see that the primary qubits are in a product state for all Slater determinant states, and so we can write the state of the system on all qubits as $\ket{\phi}=\ket{\phi}_{p}\otimes\ket{\phi}_{a}$, where $\ket{\phi}_{p}$ is the state of the primary qubits and $\ket{\phi}_{a}$ is the state of the auxiliary qubits.
Our circuit to prepare a Slater determinant state in the compact mapping then proceeds in three steps.
In step 1 we prepare each primary qubit in state $\ket{0}$ or $\ket{1}$ if the corresponding fermionic mode is unoccupied or occupied, respectively.
This ensures that the state satisfies \Cref{eq:vertex_op_eigenvalues} as required, and we denote the resultant state on the primary qubits by $\ket{\phi_{det}}_{p}$.
It now remains to show how we can prepare the state on the auxiliary qubits such that \Cref{eq:determinant_stabilisers} is also satisfied.
In step 2, we prepare a state $\ket{\phi_{surf}}_{a}$ on the auxiliary qubits that is in the $+1$-eigenspace of each stabiliser generator restricted to its support only on the auxiliary qubits.
In other words we prepare the state $\ket{\phi_{surf}}_{a}$ satisfying
\begin{equation}
S_i\ket{\phi_{surf}}_{a}=\ket{\phi_{surf}}_{a}\quad\forall S_i\in\mathcal{S}',
\end{equation}
where $\mathcal{S}'\coloneqq \langle s_1^a, s_2^a,\ldots,s_r^a\rangle$.
The generators of $\mathcal{S}'$ are the same as those of the planar surface code up to local Clifford gates, and so we can prepare $\ket{\phi_{surf}}_{a}$ by encoding the planar surface code on the auxiliary qubits using the circuit from \Cref{sec:planar_encoding} and applying $U_V$ ($U_H$) to each vertical (horizontal) edge of the lattice in \Cref{fig:compact_mapping_stabilisers}, where
\begin{align}
U_V &\coloneqq XHS=\frac{1}{\sqrt{2}}\left(\begin{array}{rr}
1 & -i \\
1 & i
\end{array}\right), \\
U_H &\coloneqq XHSH=\frac{1}{2}\left(\begin{array}{rr}
1-i & 1+i \\
1+i & 1-i
\end{array}\right).
\end{align}
This step can be verified by noticing that, under conjugation, $U_V$ maps $X\rightarrow Z$ and $Y\rightarrow X$, and $U_H$ maps $Y\rightarrow Z$ and $X\rightarrow X$, and so the generators of the surface code (\Cref{fig:surface_code}) are mapped to generators of $\mathcal{S}'$.
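The quoted actions can be checked numerically; one consistent reading of ``under conjugation'' here is the pull-back $U^\dagger P U$, which is the convention assumed in the following short Python sketch (requiring only \texttt{numpy}):
\begin{verbatim}
# Check the single-qubit Clifford actions of U_V and U_H,
# in the pull-back convention U^dag P U (assumed here).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])

U_V = X @ H @ S
U_H = X @ H @ S @ H

def pull(U, P):
    return U.conj().T @ P @ U

assert np.allclose(pull(U_V, X), Z)   # U_V: X -> Z
assert np.allclose(pull(U_V, Y), X)   # U_V: Y -> X
assert np.allclose(pull(U_H, Y), Z)   # U_H: Y -> Z
assert np.allclose(pull(U_H, X), X)   # U_H: X -> X
print("Clifford actions of U_V and U_H verified")
\end{verbatim}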
\begin{figure}
\caption{A $-1$ syndrome on any individual plaquette stabiliser (blue) can be generated by a string of Pauli $Y$ operators (labelled in blue) on qubits on vertical edges joining it to the left (or right) boundary. Similarly, a $-1$ syndrome on any site stabiliser (red) can be generated by a string of Pauli $X$ operators (labelled in red) on qubits on vertical edges joining it to the top (or bottom) boundary.}
\label{fig:compact_mapping_syndrome}
\end{figure}
Note that after step 2, the combined state of the primary and auxiliary qubits satisfies
\begin{align}
s_i^p\otimes s_i^a \ket{\phi_{det}}_{p}\otimes\ket{\phi_{surf}}_{a}&=b_i\ket{\phi_{det}}_{p}\otimes\ket{\phi_{surf}}_{a}
\end{align}
for each generator $s_i=s_i^p\otimes s_i^a$ of $\mathcal{S}$, where the eigenvalue $b_i\in\{-1,1\}$ is the parity of the primary qubits acted on non-trivially by $s_i^p$, satisfying $s_i^p\ket{\phi_{det}}_{p}=b_i\ket{\phi_{det}}_{p}$.
We say that $b_i$ is the syndrome of generator $s_i$.
In step 3, we apply a circuit that instead ensures that we are in the $+1$-eigenspace of elements of $\mathcal{S}$.
This can be done by applying a Pauli operator $R$, with support only on the auxiliary qubits, that commutes with each generator $s_i$ if its syndrome $b_i$ is 1 and anti-commutes otherwise.
Such a Pauli operator can always be found for any assignment of each $b_i\in\{1,-1\}$, as shown in \Cref{fig:compact_mapping_syndrome}: for each stabiliser generator $s_i$, we can find a Pauli operator that we denote $V(s_i)$ which, acting only on the auxiliary qubits, anti-commutes with $s_i$ while commuting with all other generators (note that the choice of $V(s_i)$ is not unique).
Taking the product of operators $V(s_i)$ for all $s_i$ with syndrome $b_i=-1$, we obtain a single Pauli operator
\begin{equation}\label{eq:compact_correction}
R=\prod_{i\in\{i: b_i=-1\}}V(s_i)
\end{equation}
that returns the state of our combined system to the $+1$-eigenspace of elements of $\mathcal{S}$, such that it satisfies \Cref{eq:determinant_stabilisers}.
Furthermore, since steps 2 and 3 have acted trivially on the primary qubits, \Cref{eq:vertex_op_eigenvalues} is still satisfied from step 1.
Therefore, a Slater determinant in the compact mapping can be encoded using the $O(L)$ depth unitary encoding circuit for the planar code as well as $O(1)$ layers of single qubit Clifford gates.
Note that the topologically protected logical qubit in the compact mapping is not used to store quantum information.
As a result, we can prepare any state in the codespace of the surface code in step 2, and it does not matter if the Pauli correction $R$ in step 3 acts non-trivially on the logical qubit.
The problem of finding a suitable correction $R$ in step 3 given the syndrome of each generator is essentially the same problem as decoding the XZZX surface code~\cite{wen2003quantum,bonilla2020xzzx} under the quantum erasure channel (and where every qubit is erased).
Therefore, any other suitable decoder could be used instead of using \Cref{eq:compact_correction}, such as the variant of minimum-weight perfect matching used in Ref.~\cite{bonilla2020xzzx}, or an adaptation of the peeling decoder~\cite{delfosse2020linear}.
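In the binary symplectic representation of Pauli operators (overall phases are irrelevant here), the product in \Cref{eq:compact_correction} reduces to a bitwise XOR. A minimal Python sketch, assuming the operators $V(s_i)$ (e.g.\ the boundary strings of \Cref{fig:compact_mapping_syndrome}) have already been tabulated, is:
\begin{verbatim}
# Assemble the correction R defined above from precomputed
# operators V(s_i), each stored as a binary symplectic
# vector (x|z) over the auxiliary qubits.
import numpy as np

def pauli_product(vectors):
    # Product of Paulis in symplectic form: bitwise XOR.
    out = np.zeros_like(vectors[0])
    for v in vectors:
        out = (out + v) % 2
    return out

def correction(V, syndromes):
    # V: {generator index i: symplectic vector of V(s_i)}
    # syndromes: {i: +1 or -1}
    flipped = [V[i] for i, b in syndromes.items() if b == -1]
    n = len(next(iter(V.values())))
    if not flipped:
        return np.zeros(n, dtype=int)
    return pauli_product(flipped)

# Toy example with 3 auxiliary qubits (illustrative vectors):
V = {0: np.array([1, 0, 0, 0, 0, 1]),  # X on q0, Z on q2
     1: np.array([0, 1, 1, 0, 0, 0])}  # X on q1 and q2
print(correction(V, {0: -1, 1: +1}))   # only V(s_0) applied
\end{verbatim}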
The encoding step for the surface code could instead be done using stabiliser measurements. However, since it is not otherwise necessary to measure the stabilisers of the mapping, the additional complexity of using ancillas, mid-circuit measurements and real-time classical logic might make such a measurement-based approach more challenging to implement on either NISQ or fault-tolerant hardware than the simple $O(L)$ depth local unitary encoding circuit we present. Furthermore, the $O(L)$ complexity of our encoding circuit is likely negligible compared to the overall complexity of most quantum simulation algorithms within which it could be used.
Our encoding circuits for the surface code may also be useful for preparing states encoded in other fermion-to-qubit mappings.
As an example, it has previously been observed that the Verstraete-Cirac transform also has a similar stabiliser structure to the surface code~\cite{verstraete2005mapping,steudtner2019quantum}.
\section{Discussion}
We have presented local unitary circuits for encoding an unknown state in the surface code that take time linear in the lattice size $L$. Our results demonstrate that the $\Omega(L)$ lower bound given by Bravyi \textit{et al.}~\cite{bravyi2006lieb} for this problem is tight, and they reduce the resource requirements for experimentally realising topological quantum order and implementing some QEC protocols, especially using NISQ systems restricted to local unitary operations. We have provided a new technique to encode the planar code in $O(L)$ time, as well as showing how an $O(L)$ local unitary encoding circuit for the toric code can be found by enforcing locality in the non-local RG encoder. We unify these two approaches by demonstrating how local $O(L)$-depth circuits can be used to convert between the planar and toric codes, and we generalise our method to rectangular, rotated and 3D surface codes.
We also show that our unitary encoding circuit for the planar code can be used to encode a Slater determinant state in the compact mapping~\cite{derby2021compact}, which has a similar stabiliser structure to the surface code.
This encoding circuit is therefore a useful subroutine for the simulation of fermionic systems on quantum computers, and it may be that similar techniques can be used to encode fermionic states in the Verstraete-Cirac transform, which has a similar stabiliser structure~\cite{verstraete2005mapping}.
Using known local unitary mappings from one or more copies of the surface code, our results also imply the existence of optimal encoders for any 2D translationally invariant topological code, some 2D subsystem codes~\cite{yoshida2011classification, bombin2012universal}, as well as the 2D color code with and without boundaries~\cite{kubica2015unfolding}. As an explicit example, the subsystem surface code with three-qubit check operators can be encoded from the toric code using the four time step quantum circuit given in Ref.~\cite{bravyi2012subsystem}.
The circuits we have provided in this work are not fault-tolerant for use in error correction: a single qubit fault at the beginning of the circuit can lead to a logical error on the encoded qubit.
Nevertheless, since our circuits have a lower depth than local unitary circuits given in prior work, we expect our circuits also to be more resilient to circuit noise (for example, our circuits have fewer locations for an idle qubit error to occur).
Fault-tolerance of the encoding circuit itself is also not required when using it to prepare fermionic states or to study topological quantum order: for these applications, our circuits could be implemented using either physical qubits (on a NISQ device) or logical qubits on a fault-tolerant quantum computer.
It would be interesting to investigate if our circuits could be adapted to be made fault-tolerant, perhaps for the preparation of a known state (e.g.~logical $\ket{0}$ or $\ket{+}$).
Further work could also investigate optimal local unitary encoding circuits for surface codes based on different lattice geometries (such as the hexagonal lattice~\cite{fujii2012error}), or for punctured~\cite{raussendorf2007fault, fowler2009high} or hyperbolic surface codes~\cite{breuckmann2016constructions}.
\begin{acknowledgments}
The authors would like to thank Mike Vasmer for informing us of the method for encoding stabiliser codes in Ref.~\cite{gottesman1997stabilizer}, as well as Charlie Derby and Joel Klassen for insightful discussions on fermion-to-qubit mappings. We are also grateful for helpful discussions with Austin Fowler and Benjamin Brown, and would like to thank Selwyn Simsek and Adam Callison for pointing out a formatting error in an earlier version of this manuscript. We thank the Engineering and Physical Sciences Research Council (EPSRC) for funding this work. Dan Browne and Simon Burton were funded by EPSRC grant EP/R043647/1 and the remaining authors by EPSRC grant number EP/L015242/1. In addition, Farhan Hanif and James Dborin gratefully acknowledge funding from University College London.
\end{acknowledgments}
\textit{Note added}: After the first preprint of this article, Ref.~\cite{satzinger2021realizing} introduced an alternative $O(L)$ unitary encoding circuit for the rotated surface code, using it to experimentally realise topological quantum order.
\appendix
\section{Procedure for Encoding a Stabiliser Code}\label{app:gottesman}
\subsection{Review of the General Method}
\normalsize
In this section we review the general method for constructing an encoding circuit for arbitrary stabiliser codes given in \cite{gottesman1997stabilizer, Cleve_1997}, and show how it can be used to find an encoding circuit for an $L=2$ toric code as an example.
We present the method here for completeness, giving the procedure in full and in the simplified case for which the code is CSS.
From a set of check operators one can produce a corresponding bimatrix
\[
M\coloneqq
\begin{pmatrix}[c|c]
L & R\\
\end{pmatrix}
\]
Rows and columns represent check operators and qubits respectively. $L_{ij} = 1$ indicates that check operator $i$ applies $X$ (rather than the identity) to qubit $j$; similarly, on the right-hand side, $R_{ij} = 1$ indicates that check operator $i$ applies $Z$ to qubit $j$. If both $L_{ij} = 1$ and $R_{ij} = 1$, then check operator $i$ applies $Y$ to qubit $j$.
A CSS code has check operators $P_n \in \{I,X\}^{\otimes n } \cup \{I,Z\}^{\otimes n }$, and its corresponding bimatrix takes the form
\[
\begin{pmatrix}[c|c]
A & 0\\
0 & B
\end{pmatrix}
\]
$A$ and $B$ have full row rank since they each represent an independent subset of the check operators. Labelling the rank of $A$ as $r$, the rank of $B$ is $n-k-r$.
Via row addition, row swaps and column swaps, the left and right matrices of this simplified form can be taken to standard form \cite{gottesman1997stabilizer} without changing the stabiliser group of the code. The standard form of the bimatrix is then
\[
\begin{pmatrix}[ccc|ccc]
I & A_1 & A_2 & B & C_1 & C_2\\
0 & 0 & 0 & D & I & E
\end{pmatrix}
\]
where $I$, $A_1$, $A_2$ and $D$, $I$, $E$ have $r$, $n-k-r$ and $k$ columns respectively. We may also represent the set of logical $X$ operators as a bimatrix, with each row representing the logical $X$ for a particular encoded qubit,
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
U_1 & U_2 & U_3 & V_1 & V_2 & V_3\\
\end{pmatrix}
\]
It is shown in \cite{gottesman1997stabilizer} that the logical $\bar{X}$ operator can be taken to the form
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
0 & U_2 & I & V_1 & 0 & 0\\
\end{pmatrix}
\]
In the CSS case the check operator bimatrix reduces to
\[
\begin{pmatrix}[ccc|ccc]
I & A_1 & A_2 & 0 & 0 & 0\\
0 & 0 & 0 & D & I & E
\end{pmatrix}
\]
and the logical $X$ bimatrix to
\[ \bar{X} =
\begin{pmatrix}[ccc|ccc]
0 & E^T & I & 0 & 0 & 0\\
\end{pmatrix}
\]
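As an illustration of the row reduction described above, the following Python sketch (assuming \texttt{numpy}) brings a binary check matrix to the form $[I\,|\,*]$ over GF(2) using row additions, row swaps and column swaps. For brevity it treats only one half of a CSS bimatrix, and the particular column ordering it returns may differ from the standard form quoted for the $L=2$ example below.
\begin{verbatim}
# Gaussian elimination over GF(2) with column swaps,
# bringing a binary check matrix to the form [I | *].
import numpy as np

def standard_form(A):
    A = A.copy() % 2
    rows, cols = A.shape
    perm = list(range(cols))        # track column swaps
    for r in range(rows):
        pivot = next(((i, j) for j in range(r, cols)
                      for i in range(r, rows) if A[i, j]), None)
        if pivot is None:
            break
        i, j = pivot
        A[[r, i]] = A[[i, r]]       # row swap
        A[:, [r, j]] = A[:, [j, r]] # column swap
        perm[r], perm[j] = perm[j], perm[r]
        for i2 in range(rows):      # clear the rest of column r
            if i2 != r and A[i2, r]:
                A[i2] = (A[i2] + A[r]) % 2
    return A, perm

# X-type checks of the L=2 toric code (left half of the
# first three rows of the bimatrix given below).
A = np.array([[1, 1, 1, 0, 0, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 0, 1],
              [0, 0, 1, 0, 1, 1, 1, 0]])
A_std, perm = standard_form(A)
print(A_std)
print("column order:", perm)
\end{verbatim}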
To produce a circuit which can encode state $\ket{c_1 \dots c_k}$ for any values of the $c_i$ one should find a circuit which applies logical operators $\bar{X}_1^{c_1} \dots \bar{X}_{k}^{c_k}$ to the encoded $\ket{0}$ state $\ket{\bar{0}} \equiv \sum_{S \in \mathcal{S}}S \ket{0 \dots 0 }$.
Let $F_{c}$ be the operator corresponding to row $c$ of bimatrix $F$.
We denote by $F_{c(m)}$ the operator corresponding to $F_c$, with the operator on the $m^{th}$ qubit replaced with identity, and then controlled by the $m^{th}$ qubit.
Since
\[ \bar{X}_1^{c_1} \dots \bar{X}_k^{c_k} \sum_{S \in \mathcal{S}}S \ket{0 \dots 0 } = \sum_{S \in \mathcal{S}}S \bar{X}_1^{c_1} \dots \bar{X}_k^{c_k} \ket{0 \dots 0 } \]
the application of the $X$ gates can be considered before applying the sum of stabiliser operations. Due to the $I$ in the form of $\bar{X}$, we have
\[\bar{X}_{k(n)} \ket{0_1 \dots 0_{n-k}}\ket{0_{n-k+1} \dots 0_{n-1} c_k} = \bar{X}^{c_k}_{k}\ket{0_1 \dots 0_n},\]
and so, independently of $\ket{c_1 \dots c_k}$, we can implement
\begin{align*}
\bar{X}_{1(n-k)} \dots \bar{X}_{k(n)} & \ket{0_1 \dots 0_{n-k}} \ket{c_1 \dots c_k} \\
& = \bar{X}^{c_1}_{1} \dots \bar{X}^{c_k}_{k} \ket{0_1 \dots 0_n} \\
& \equiv \ket{0_1 \dots 0_r}\ket{Xc}
\end{align*}
where the last line emphasises that, since $U_1 = 0$, each $\bar{X}_{i(j)}$ acts trivially on the first $r$ qubits. We next consider $\sum_{S \in \mathcal{S}}S = (I + M_{n-k}) \dots (I + M_r) \dots (I + M_1)$.
We denote the right matrix of the bimatrix $M$ by $R$. In standard form, $M_i$ always performs $X$ on qubit $i$, and it performs $Z$ on qubit $i$ when $R_{ii} = 1$, giving
\begin{align*}
M_{i}\ket{0 \dots 0_{i} \dots 0} = Z_{i}^{R_{ii}}M_{i(i)}\ket{0 \dots 1_{i} \dots 0}
\end{align*}
and so
\begin{align*}
(I+M_{i})\ket{0 \dots 0} & = \ket{0 \dots 0_{i} \dots 0} + M_{i}\ket{0 \dots 0_{i} \dots 0} \\ & = Z_{i}^{R_{ii}}M_{i(i)}H_{i}\ket{0 \dots 0}
\end{align*}
or generally
\begin{align*}
& \prod_{i = 1}^{r}(I + M_{i}) (\ket{0_1 \dots 0_r}\ket{Xc}) \\
= & \prod_{i = 1}^{r} Z_{i}^{R_{ii}}M_{i(i)}H_{i} (\ket{0_1 \dots 0_r}\ket{Xc})
\end{align*}
The remaining products
\begin{equation}
\prod_{i = r+1}^{n-k}(I + M_{i})
\end{equation}
can be ignored since they consist only of $\sigma_{z}$ operations and may be commuted to the front to act on $\ket{0}$ states.
Given initially some $k$ qubits we wish to encode, and some additional $n-k$ auxiliary qubits, initialised in $\ket{0}$, a choice of generators for the stabiliser group is
\[
\begin{pmatrix}[cccccccc|cccccccc]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
\end{pmatrix}
\]
The general circuit which transforms the initial generator set to the standard form bimatrix is given by,
\[
\prod_{i = 1}^{r}Z_{i}^{R_{ii}}M_{i(i)}H_{i} \prod_{j=1}^{k}\bar{X}_{j(n-k+j)}
\]
For CSS codes this reduces to
\[
\prod_{i = 1}^{r}M_{i(i)}H_{i} \prod_{j=1}^{k}\bar{X}_{j(n-k+j)}
\]
In the simplified case all gates are either initial $H$ gates or $CNOT$'s. We may write the circuit in two stages, performing first the $H$ gates and controlled $\bar{X}$ gates.
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \gate{H} & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
& \cdots & & & & & & & \\
& \gate{H} & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
& \qw & \multigate{2}{{E}_{1(1)}} & \qw & \qw & \qw & \multigate{2}{{E}_{k(k)}} & \qw & \qw \\
& \cdots & \nghost{{E}_{1(1)}} & \cdots & & \cdots & \nghost{{E}_{k(k)}} & \cdots & \\
& \qw & \ghost{{E}_{1(1)}} & \qw & \qw & \qw & \ghost{{E}_{k(k)}} & \qw & \qw\\
& \qw & \qw & \qw & \cdots& & \ctrl{-1} & \qw & \qw\\
& \cdots & & & & & & & \\
& \qw & \ctrl{-3} & \qw & \cdots & & \qw & \qw & \qw \\
}
\end{equation*}
and in stage 2 the controlled $X$ gates.
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \qw & \qw & \ctrl{3} & \qw & \cdots & & \qw & \qw\\
& \cdots & & & & & & & \\
& \qw & \qw & \qw & \qw & \cdots & & \ctrl{1} & \qw\\
& \qw & \qw & \multigate{5}{M_{1(1)}} & \qw & \cdots & & \multigate{5}{M_{r(r)}} & \qw \\
& \cdots & & \nghost{M_{1(1)}} & & \cdots & & & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \cdots & & \nghost{M_{1(1)}} & & & & \nghost{M_{r(r)}} & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw \\
}
\end{equation*}
In the general case stage 1 is identical but stage 2 takes the form
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \qw & \multigate{2}{\Omega_{z}} & \ctrl{3} & \qw & \cdots & & \multigate{1}{M_{r(r)}} & \\
& \cdots & \nghost{\Omega_{z}} & \qw & \qw & & & \ghost{M_{r(r)}} & \\
& \qw & \ghost{\Omega_{z}} & \qw & \qw & \cdots & & \ctrl{1} \qwx & \qw\\
& \qw & \qw & \multigate{5}{M_{1(1)}} & \qw & \cdots & & \multigate{5}{M_{r(r)}} & \qw \\
& \cdots & & \nghost{M_{1(1)}} & & \cdots & & & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw\\
& \cdots & & \nghost{M_{1(1)}} & & & & \nghost{M_{r(r)}} & \\
& \qw & \qw & \ghost{M_{1(1)}} & \qw & \cdots & & \ghost{M_{r(r)}} & \qw \\
}
\end{equation*}
where $\Omega_{z}$ consists of $Z$ operations on some of the first $r$ qubits, and each $M_{i(i)}$ consists of controlled-$Z$ gates on some of the first $r$ qubits and controlled Pauli gates on some of the following $n-r$ qubits. In the case of the $L=2$ toric code, with qubits labelled left to right and top to bottom, the bimatrix is
\[
\begin{pmatrix}[cccccccc|cccccccc]
1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\
\end{pmatrix}
\]
The standard form of this bimatrix is
\[
\begin{pmatrix}[cccccccc|cccccccc]
1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
\end{pmatrix}
\]
The circuit which encodes the above stabiliser set is
\begin{equation*}
\Qcircuit @C=1em @R=.7em {
& \gate{H} & \qw & \ctrl{3}& \ctrl{5} & \ctrl{7} & \qw &\qw &\qw &\qw &\qw &\qw \\
& \gate{H} & \qw & \qw & \qw & \qw &\ctrl{3} &\ctrl{4}&\ctrl{6} &\qw &\qw&\qw \\
& \gate{H} & \qw & \qw & \qw& \qw & \qw &\qw &\qw &\ctrl{3} &\ctrl{4}&\ctrl{5} \\
& \qw & \targ & \targ & \qw & \qw &\qw &\qw &\qw &\qw &\qw&\qw \\
& \targ & \qw & \qw & \qw &\qw &\targ &\qw &\qw &\qw &\qw&\qw \\
& \qw & \qw & \qw & \targ &\qw &\qw &\targ &\qw &\targ &\qw&\qw \\
& \ctrl{-2} & \qw & \qw & \qw &\qw &\qw &\qw &\qw &\qw &\targ&\qw \\
& \qw & \ctrl{-4} & \qw & \qw &\targ &\qw &\qw &\targ &\qw &\qw&\targ }
\end{equation*}
It is important to have kept track of which column represents which qubit since column swaps are performed in bringing the matrix to standard form. Taking this into account gives the $L=2$ circuit on the toric architecture.
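For completeness, a short Python sketch of how a gate list can be read off from the standard-form data in the CSS case is given below (one valid temporal ordering; gates acting on disjoint qubits can be parallelised as in the circuit diagrams above). The inputs are the $X$-part of the standard-form check bimatrix and of the logical-$X$ bimatrix, with qubits indexed from 0; the function name and tuple format are ours. It returns a list of \texttt{('H', qubit)} and \texttt{('CNOT', control, target)} entries in temporal order.
\begin{verbatim}
# Read a gate list off the standard-form data (CSS case),
# following prod_i M_{i(i)} H_i  prod_j Xbar_{j(n-k+j)}.
def css_encoding_gates(M_x, Xbar_x, n, k, r):
    gates = []
    # Controlled logical-X operators, control on qubit n-k+j.
    for j in range(k):
        ctrl = n - k + j
        for q in range(n):
            if Xbar_x[j][q] and q != ctrl:
                gates.append(("CNOT", ctrl, q))
    # Hadamards followed by the controlled check operators.
    for i in range(r):
        gates.append(("H", i))
        for q in range(n):
            if M_x[i][q] and q != i:
                gates.append(("CNOT", i, q))
    return gates
\end{verbatim}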
\subsection{Depth of the General Method}
Any stabiliser circuit has an equivalent skeleton circuit~\cite{Maslov_2007} (a circuit containing only generic two-qubit gates, with single-qubit gates ignored) which, after routing on a surface architecture, will have at worst $O(n)$ depth. The output of the general method for encoding a stabiliser code in fact already splits into layers of skeleton circuits. Stage 2 of the method applied to a CSS code has at worst $r(n-r)$ controlled Pauli gates $CP_{ij}$, with $i$ and $j$ in $\{1 \dots r\}$ and $\{r+1 \dots n\}$ respectively, and $CP_{ij}$ is implemented before $CP_{i'j'}$ so long as $i<i'$. Stage 2 then takes the form of a skeleton circuit, and as such the number of time steps needed is $O(n)$ for surface or linear nearest-neighbour architectures. Stage 1 has at most $k(n-k-r)$ gates and also takes the form of a skeleton circuit. In the worst case, stage 2 includes, in addition to the $CP$ gates, controlled-$Z$ gates $CZ$ with targets on the first $r$ qubits. As noted in the errata for \cite{gottesman1997stabilizer}, $i>j$ for any of the additional $CZ_{ij}$ in stage 2. All $CZ_{ij}$ can then be commuted to time steps following all $CP$ gates, since each $CP$ in a time step following $CZ_{ij}$ takes the form $CP_{mn}$ with $n>m>i>j$. The circuit then splits into a layer of $CP$ gates and a layer of $CZ$ gates, each of which is a skeleton circuit, and so can be implemented in $O(n)$ time steps on surface and linear nearest-neighbour architectures.
\section{Additional planar encoding circuits}\label{app:additional_planar_encoders}
\subsection{Planar base cases and rectangular code}
In \Cref{fig:planar_base_cases} we provide encoding circuits for the $L=2$, $L=3$ and $L=4$ planar codes, requiring 4, 6 and 8 time steps respectively. These encoding circuits are used as base cases for the planar encoding circuits described in \Cref{sec:planar_encoding}. In \Cref{fig:rectangular_code} we provide encoding circuits that either increase the width or height of a planar code by two, using three time steps.
\begin{figure}
\caption{Encoding circuits for the L=2, L=3 and L=4 planar codes. Each edge corresponds to a qubit, each arrow denotes a CNOT gate pointing from control to target, and each filled black circle denotes a Hadamard gate applied at the beginning of the circuit. The colour of each CNOT gate corresponds to the time step it is implemented in, with blue, green, red, black, cyan and yellow CNOT gates corresponding to the first, second, third, fourth, fifth and sixth time steps respectively. The hollow circle in each of (a) and (b) denotes the initial unencoded qubit. The circuit in (c) encodes an L=4 planar code from an L=2 planar code, with solid edges denoting qubits initially encoded in the L=2 code.}
\label{subfig:L2_planar}
\label{subfig:L3_planar}
\label{subfig:L4_planar}
\label{fig:planar_base_cases}
\end{figure}
\subsection{Rotated Surface Code}\label{app:rotated_code}
In \Cref{fig:rotated_code} we demonstrate a circuit that encodes an $L=7$ rotated surface code from a distance $L=5$ rotated code. For a given distance $L$, the rotated surface code uses fewer physical qubits than the standard surface code to encode a logical qubit~\cite{bombin2007optimal}. Considering a standard square lattice with qubits along the edges, a rotated code can be produced by removing qubits along the corners of the lattice boundary, leaving a diamond of qubits from the centre of the original lattice. The diagram in \Cref{fig:rotated_code} shows the resultant code, rotated $45^{\circ}$ compared to the original planar code, and with each qubit now denoted by a vertex rather than an edge. For a distance $L$ code the rotated surface code requires $L^2$ qubits compared to $L^2 + (L-1)^2$ for the planar code.
The encoding circuit in \Cref{fig:rotated_code} takes 4 steps to grow a rotated code from a distance $L=5$ to $L=7$. This is a fixed cost for any distance $L$ to $L+2$. To produce a distance $L=2m$ code this circuit would be applied repeatedly $m+O(1)$ times to an $L=2$ or $L=3$ base case, requiring a circuit of total depth $2L + O(1)$.
The circuit in \Cref{fig:rotated_code} can be verified by using \Cref{eq:cnotstabilisers} to see that a set of generators for the $L=5$ rotated code (along with the single qubit $Z$ and $X$ stabilisers of the ancillas) is mapped to a set of generators of the $L=7$ rotated code, as well as seeing that the $X$ and $Z$ logicals of the $L=5$ code map to the $X$ and $Z$ logicals of the $L=7$ rotated code.
\begin{figure}
\caption{(a) Circuit to increase the width of a planar code by two. (b) Circuit to increase the height of a planar code by two. Notation is the same as in \Cref{fig:planar_base_cases}
\label{fig:rectangular_code}
\end{figure}
\begin{figure}
\caption{Encoding circuit for the $L=7$ rotated code from an $L=5$ rotated surface code (shown as a red outline). The colour of each arrow denotes the time step the gate is applied in. The gates are applied in the order: blue, red, black, purple.
The additional qubits are initialised in the $\ket{+}
\label{fig:rotated_code}
\end{figure}
\section{Renormalisation Group encoder}\label{app:renormalisation_group}
\subsection{Toric Code Encoder}\label{basetoric}
Applying the Gottesman encoder to the toric code, as shown in Appendix~\ref{app:gottesman}, and then enforcing locality using SWAP gates, gives the following encoding circuit for the $L=2$ toric code that requires 10 time steps:
\begin{equation*}
\Qcircuit @C=0.8em @R=.7em {
\lstick{\ket{0}}& \gate{H} & \qw & \qw& \ctrl{1} & \qw & \ctrl{4} &\ctrl{3} &\targ &\ctrl{3} &\qw &\ctrl{3}&\targ&\ctrl{3}&\qw \\
\lstick{\ket{0}}& \qw & \qw & \targ & \targ & \qw &\qw &\qw&\qw &\qw &\qw&\qw&\qw&\qw&\qw \\
\lstick{\ket{\psi_0}}& \qw & \ctrl{1} & \qw & \qw& \targ & \qw &\qw &\qw &\qw &\qw&\qw&\qw&\qw&\qw \\
\lstick{\ket{0}}& \qw & \targ & \qw & \targ & \qw &\qw &\targ &\ctrl{-3} &\targ &\ctrl{2}&\targ&\ctrl{-3}&\targ&\qw \\
\lstick{\ket{0}}& \qw & \qw & \qw & \qw &\qw &\targ &\qw &\qw &\qw &\qw&\targ&\targ&\qw&\qw \\
\lstick{\ket{\psi_1}}& \qw & \qw & \ctrl{-4} & \qw &\qw &\targ &\qw &\qw &\qw &\targ&\qw&\qw&\targ&\qw \\
\lstick{\ket{0}}& \gate{H} & \qw & \qw & \qw &\ctrl{-4} &\qw &\qw &\qw &\qw &\qw&\qw&\ctrl{-2}&\ctrl{-1}&\qw\\
\lstick{\ket{0}}& \gate{H} & \qw & \qw & \ctrl{-4} &\qw &\ctrl{-2} &\qw &\qw &\qw &\qw&\ctrl{-3}&\qw&\qw&\qw }
\end{equation*}
where the qubits are numbered $0\ldots 7$ from top to bottom. This circuit encodes the initial unknown qubit states $\ket{\psi_0}$ and $\ket{\psi_1}$ into logical states $\ket{\bar{\psi_0}}$ and $\ket{\bar{\psi_1}}$ of an $L=2$ toric code with stabiliser group generators $X_0X_1X_2X_6$, $X_0X_1X_3X_7$, $X_2X_4X_5X_6$, $Z_0Z_2Z_3Z_4$, $Z_1Z_2Z_3Z_5$ and $Z_0Z_4Z_6Z_7$.
Equipped with an $L=2$ base code emulated as the central core of a $4\times 4$ planar grid, where the surrounding qubits are initially decoupled $+1$ $Z$ eigenstates, one can apply the local routing methods of Appendix \ref{routing} to obtain the initial configuration as depicted in \Cref{fig:L8_renorm}. The ancilla qubits are then initialised as $\ket{0}$ or $\ket{+}$ states as depicted in \Cref{fig:L2_to_L4}(a) by means of Hadamard operations where necessary, before the circuit is implemented through the sequence of CNOT gates as depicted in \Cref{fig:L2_to_L4}(a)-(c).
By recursive application of \Cref{eq:cnotstabilisers}, it is seen that the circuit forms the stabiliser structure of an $L=4$ toric code on the planar architecture. Proceeding inductively, one can exploit the symmetry of a distance $L=2^k$ toric code to embed it in the centre of a $2L\times 2L$ planar grid, ``spread out'' the core qubits in time linear in the distance, and ultimately perform the $L=2 \mapsto L=4$ circuit on each $4\times 4$ square-tessellated sub-grid.
\begin{figure}
\caption{Initial outwards spreading of qubits in a distance 4 toric code to prepare for the encoding of a distance 8 toric code. Solid black and unfilled nodes represent the routed qubits of the distance 4 code, and the ancillae respectively. One then executes the subroutine of \Cref{fig:L2_to_L4}
\label{fig:L8_renorm}
\end{figure}
\subsection{Routing circuits for enforcing locality}\label{routing}
To enforce locality in the Renormalisation Group encoder, which encodes a distance $L$ toric code, one can use SWAP gates to ``spread out'' the qubits between iteration $k$ and $k+1$, such that all of the $O(1)$ time steps in iteration $k+1$ are \textit{almost local} on a $2^{k+1}\times 2^{k+1}$ region of the $L\times L$ torus. By \textit{almost local}, we mean that the time step would be local if the $2^{k+1}\times 2^{k+1}$ region had periodic boundary conditions. Since at each iteration (until the final one) we use a region that is a subset of the torus, we in fact have a planar architecture (no periodic boundaries), and so it is not possible to simultaneously enforce locality in all of the $O(1)$ time steps in an iteration $k<\log L-2$ of the RG encoder, which are collectively local on a toric architecture. Thus it is necessary to emulate a toric architecture on a planar one. In a time step in iteration $k$, this can be achieved by using $3(2^{k}-1)$ time steps to move the top and bottom boundaries together (using SWAP gates) before applying any necessary gates which are now local (where the factor of three comes from the decomposition of a SWAP gate into 3 CNOT gates). Then $3(2^{k}-1)$ time steps are required to move the boundaries back to their original positions. The identical procedure can be applied simultaneously to the left and right boundaries. Thus there is an overhead of $3(2^{k+1}-2)$ to emulate a toric architecture with a planar architecture.
Starting from $L=2$ and ending on a size $L$ code gives an overall overhead for emulating the torus of $6\sum_{i=1}^{\log_2(L)-2} (2^{i+1}-2) = 6L-12\log_2 L$, since from \Cref{fig:L2_to_L4} it can be seen that opposite edges need to be made adjacent twice per iteration to enforce locality. Additionally, the time steps within each iteration must be implemented. Noticing that the red CNOT gates in \Cref{fig:L2_to_L4}(b) can be applied simultaneously with the gates in \Cref{fig:L2_to_L4}(a), this can be done in 6 time steps per iteration, leading to an additional $6\log_2(L) - 6$ time steps in total in the RG encoder.
It is key to our routine to be able to ``spread out'' the qubits between each MERA step. We now show that this can be achieved in linear time by routing qubits through the planar grid. We firstly consider a single step of moving from a $2^k$ to a $2^{k+1}$ sized grid.
Our first observation is that while the qubits lie on the edges of our $2^k \times 2^k$ grid, one can subdivide this grid into one of dimensions $(2^{k+1}+1) \times (2^{k+1}+1)$, such that the qubits lie on corners of this new grid, labelled by their positions $(i,j)$ with the centre of the grid identified with $(0,0)$. Under the taxicab metric we can measure the distance of qubits from the centre as $M_{i,j}:=|(i,j)| = |i|+|j|$ and one can check that qubits only ever lie at odd values of this metric, essentially forming a series of concentric circles with $M_{i,j} = 2n+1, n\in\mathbb{N}$. See \Cref{circles}.
\begin{figure}
\caption{$L=4$ code showing the circles of the $M_{i,j}
\label{circles}
\end{figure}
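The parity claim above is easy to check directly; the following Python sketch enumerates the edge-midpoint coordinates on the subdivided grid (the coordinate conventions here are ours) and confirms that the metric only takes odd values:
\begin{verbatim}
# Edge-midpoint coordinates on the subdivided grid and the
# taxicab metric M_{i,j} = |i| + |j|: qubits have exactly
# one odd coordinate, so M is always odd.
def edge_midpoints(half):
    # one even and one odd coordinate: midpoints of edges
    return [(i, j) for i in range(-half, half + 1)
                   for j in range(-half, half + 1)
                   if (i % 2) != (j % 2)]

points = edge_midpoints(4)
assert all((abs(i) + abs(j)) % 2 == 1 for i, j in points)
print(sorted({abs(i) + abs(j) for i, j in points}))
\end{verbatim}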
A general routing step requires enlarging these circles such that the initial radii $R_I$ are mapped to final radii $R_F$ in the following fashion:
\begin{equation}\label{sorting matrix}
\begin{matrix}
R_I & & R_F & \hspace{1cm} \text{STEPS} \\
2^k-1 & \rightarrow & 2^{k+1}-1 & \hspace{1cm} 2^{k-1}\\
2^k-3 & \rightarrow & 2^{k+1}-7 & \hspace{1cm} 2^{k-1}-2\\
2^k-5 & \rightarrow & 2^{k+1}-9 & \hspace{1cm} 2^{k-1}-2 \\
2^k-7 & \rightarrow & 2^{k+1}-15 & \hspace{1cm} 2^{k-1}-4\\
& \vdots & &\hspace{1cm} \vdots\\
3 & \rightarrow & 7 & \hspace{1cm} 2\\
\end{matrix}
\end{equation}
Routing the qubits requires a series of SWAP gates to iteratively make the circles larger, e.g.\ $3\rightarrow 5\rightarrow 7$; the number of steps this requires is shown in Eq.~(\ref{sorting matrix}). At the initial time step, it is only possible to move the outermost circle ($R_I=2^k-1$) since all smaller circles are adjacent. One can check, though, that the number of steps required to move these smaller circles is sufficiently small that it is possible to start moving them at a later time step. We provide a framework for the required steps in Eq.~(\ref{routingroutine}).
Thus all the qubits can be moved in $2^{k-1}$ steps. Each step requires (possibly) simultaneous SWAP gates, each of which can be decomposed into three CNOT gates. Thus the overall run time of each iteration is $3\cdot 2^{k-1}$. To start from the $L=2$ base code and enlarge to a desired $L=2^m$ requires $\log_2(L)-1$ iterations and thus the overall run time for the routing routine is given by the geometric series $\sum_{k=1}^{\log_2(L)-1} 3\cdot 2^{k-1} = \frac{3}{2}(L-2)$.
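The closed form quoted for this geometric series can be confirmed with a short check (Python, purely illustrative):
\begin{verbatim}
# Check: sum_{k=1}^{log2(L)-1} 3*2^(k-1) = (3/2)(L-2).
for L in [4, 8, 16, 32, 64]:
    m = L.bit_length() - 1   # log2(L), L a power of two
    total = sum(3 * 2 ** (k - 1) for k in range(1, m))
    assert total == 3 * (L - 2) // 2
print("routing-time formula verified")
\end{verbatim}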
Combining this with the time to emulate a toric architecture with a planar architecture, and the 10 time steps required to encode the $L=2$ base case using the Gottesman encoder (see Appendix~\ref{app:gottesman}), the total number of time steps required for the local RG encoder is $15L/2 - 6\log_2 L + 7 \sim O(L)$, where $L$ must be a power of 2.
\onecolumn
\begin{equation}\label{routingroutine}
\begin{matrix*}[r]
\text{Time step} & &&&&\\
1. &\hspace{0.95cm} 2^k-1\rightarrow 2^k+1 & \text{WAIT} & \text{WAIT} & \dots & \text{WAIT} \\
2. & 2^k+1\rightarrow 2^k+3 &\hspace{0.95cm} 2^k-3\rightarrow 2^k-1 & \text{WAIT} & \dots & \text{WAIT}\\
3. & 2^k+3\rightarrow 2^k+5 & 2^k-1\rightarrow 2^k+1 &\hspace{1.3cm} 2^k-5\rightarrow 2^k-3& \hspace{0.5cm}\dots & \text{WAIT}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \\
2^{k-1}-1. & 2^{k+1}-5\rightarrow 2^{k+1}-3 & 2^{k+1}-9\rightarrow 2^{k+1}-7 & 2^{k+1}-13\rightarrow 2^{k+1}-11 & \dots & \hspace{0.5cm}3\rightarrow 5\\
2^{k-1}. & 2^{k+1}-3\rightarrow 2^{k+1}-1 & \text{DONE} & 2^{k+1}-11\rightarrow 2^{k+1}-9 & \dots & 5\rightarrow 7\\
\end{matrix*}
\end{equation}
\end{document}
\begin{document}
\title{Subgroups of free
idempotent generated semigroups need not be free}
\section{Introduction}
Let $S$ be a semigroup with set $E(S)$ of idempotents, and let
$\langle E(S) \rangle$ denote the subsemigroup of $S$ generated by
$E(S)$. We say that $S$ is an {\em idempotent generated} semigroup
if $S = \langle E(S) \rangle$. Idempotent generated semigroups
have received considerable attention in the literature. For
example, an early result of J. A. Erd\"os \cite{Erdos} proves that
the idempotent generated part of the semigroup of $n \times n$
matrices over a field consists of the identity matrix and all
singular matrices. J. M. Howie \cite{Howfulltr} proved a similar
result for the full transformation monoid on a finite set and also
showed that every semigroup may be embedded in an idempotent
generated semigroup. This result has been extended in many
different ways, and many authors have studied the structure of
idempotent generated semigroups. Recently, Putcha \cite{Putch2}
gave necessary and sufficient conditions for a reductive linear
algebraic monoid to have the property that every non-unit is a
product of idempotents, significantly generalizing the results of
J.A. Erd\"os mentioned above.
In 1979 K.S.S. Nambooripad \cite{Namb} published an influential
paper about the structure of (von Neumann) regular semigroups.
Nambooripad observed that the set $E(S)$ of idempotents of a
semigroup carries a certain structure (the structure of a
``biordered set", or a ``regular biordered set" in the case of
regular semigroups) and he provided an axiomatic characterization
of (regular) biordered sets in his paper. If $E$ is a regular
biordered set, then there is a free object, which we will denote
by $RIG(E)$, in the category of regular idempotent generated
semigroups with biordered set $E$. Nambooripad showed how to
study $RIG(E)$ via an associated groupoid ${\mathcal N}(E)$. There
is also a free object, which we will denote by $IG(E)$, in the
category of idempotent generated semigroups with biordered set $E$
for an arbitrary (not necessarily regular) biordered set $E$.
In the present paper we provide a topological approach to
Nambooripad's theory by associating a $2$-complex $K(E)$ to each
regular biordered set $E$. The fundamental groupoid of the
$2$-complex $K(E)$ is Nambooripad's groupoid ${\mathcal N}(E)$.
Our concern in this paper is in analyzing the structure of the
maximal subgroups of $IG(E)$ and $RIG(E)$ when $E$ is a regular
biordered set. It has been conjectured that these subgroups are
free \cite{McElw}, and indeed there are several papers in the
literature (see for example, \cite{Pastijn3}, \cite{NP1},
\cite{McElw}) that prove that the maximal subgroups are free for
certain classes of biordered sets. The main result of this paper
is to use these topological tools to give the first example of
non-free maximal subgroups in free idempotent generated semigroups
over a biordered set.
We give an example of a regular biordered set $E$ associated to a
certain combinatorial configuration such that $RIG(E)$ has a
maximal subgroup isomorphic to the free abelian group of rank 2.
\section
{\bf Preliminaries on Biordered Sets and Regular Semigroups}
One obtains significant information about a semigroup by studying
its ideal structure. Recall that if $S$ is a semigroup and $a,b
\in S$ then the Green's relations ${\mathcal R, \mathcal L,
\mathcal H, \mathcal J}$ and $\mathcal D$ are defined by $a
{\mathcal R} b $ if and only if $aS^{1} = bS^{1}$, $a {\mathcal L}
b$ if and only if $S^{1}a = S^{1}b$, $a {\mathcal J} b$ if and
only if $S^{1}aS^{1} = S^{1}bS^{1}$, ${\mathcal H} = {\mathcal R}
\cap {\mathcal L}$ and ${\mathcal D} = {\mathcal R} \circ
{\mathcal L} = {\mathcal L} \circ {\mathcal R}$, so that
${\mathcal D}$ is the join of ${\mathcal R}$ and $ {\mathcal L}$
in the lattice of equivalence relations on $S$. The corresponding
equivalence classes of an element $a \in S$ are denoted by $R_{a},
L_{a}, H_{a}, J_{a}$ and $D_{a}$ respectively. Recall also that
there are quasi-orders defined on $S$ by $a \mathrel{\leq _{\R}} b$ if $aS^{1}
\subseteq bS^{1}$, and $a \mathrel{\leq _{\L}} b$ if $S^{1}a \subseteq S^{1}b$. As
usual, these induce partial orders on the set of $\mathrel{{\mathcal R}}$-classes and
$\mathrel{{\mathcal L}}$-classes respectively. The restrictions of these quasi-orders
to $E(S)$ will be denoted by ${\omega}^{r}$ and ${\omega}^{l}$
respectively in this paper, in accord with the notation in
Nambooripad's paper \cite{Namb}. It is easy to see that if $e$ and
$f$ are idempotents of $S$ then $e \, \, {\omega}^{r} \, \, f$
(i.e. $eS \subseteq fS$) if and only if $e = fe$, that $e \, \,
{\omega}^{l} \, \, f$ if and only if $e = ef$, that $e \, \,
{\mathcal R} \, \, f$ if and only if $e = fe$ and $f = ef$, and
that $e \, \, {\mathcal L} \, \, f$ if and only if $e = ef$ and $f
= fe$.
Let $e$ be an idempotent of a semigroup $S$. The set $eSe$ is a
submonoid in $S$ and is the largest submonoid (with respect to
inclusion) whose identity element is $e$. The group of units $G_e$
of $eSe$, that is the group of elements of $eSe$ that have two
sided inverses with respect to $e$, is the largest subgroup of $S$
(with respect to inclusion) whose identity is $e$ and is called
the maximal subgroup of $S$ at $e$.
Recall also that if $e$ and $f$ are idempotents of $S$ then the
natural partial order on $E(S)$ is defined by $e \,\, {\omega}
\,\, f$ if and only if $ef = fe = e$. Thus $\omega=\omega^{r}
\cap \omega^{l}$. An element $a \in S$ is called {\it regular} if
$a \in aSa$: in that case there is at least one {\it inverse} of
$a$, i.e. an element $b$ such that $a = aba$ and $b = bab$. Note
that regular semigroups have in general many idempotents: if $a$
and $b$ are inverses of each other, then $ab$ and $ba$ are both
idempotents (in general distinct). Standard examples of regular
semigroups are the semigroup of all transformations on a set (with
respect to composition of functions) and the semigroup of all $n
\times n$ matrices over a field (with respect to matrix
multiplication).
We recall the basic properties of the very important class of
completely 0-simple semigroups. A semigroup $S$ (with 0) is
(0)-simple if ($S^{2}\neq 0$ and) its only ideals are $S$ ($S$ and
$\{0\}$). A (0)-simple semigroup $S$ is completely (0)-simple if
$S$ contains an idempotent and every idempotent is (0)-minimal in
the natural partial order of idempotents defined above. It is a
fundamental fact that every finite (0)-simple semigroup is
completely (0)-simple.
Let $S$ be a completely 0-simple semigroup. The Rees theorem
\cite{CP,Lall} states that $S$ is isomorphic to a regular Rees
matrix semigroup $M^{0}(A,G,B,C)$ and conversely that every such
semigroup is completely (0)-simple. Here $A$ ($B$) is an index set
for the $\mathrel{{\mathcal R}} (\mathrel{{\mathcal L}})$-classes of the non-zero $\mathrel{{\mathcal J}}$-class of $S$ and
$C:B \times A\rightarrow G^{0}$ is a function called the structure
matrix. $C$ has the property that for each $a \in A$ there is a $b
\in B$ such that $C(b,a) \neq 0$ and for each $b \in B$ there is
an $a \in A$ such that $C(b,a) \neq 0$. We always assume that $A$
and $B$ are disjoint. The underlying set of $M^{0}(A,G,B,C)$ is $A
\times G \times B \cup \{0\}$ and the product is given by
$(a,g,b)(a',g',b')=(a,gC(b,a')g',b')$ if $C(b,a')\neq 0$ and 0
otherwise.
We refer the reader to the books of Clifford and Preston \cite{CP}
or Lallement \cite{Lall} for standard ideas and notation about
semigroup theory.
An {\it $E$-path} in a semigroup $S$ is a sequence of idempotents
$(e_{1},e_{2}, \ldots , e_{n})$ of $S$ such that $e_{i} \, \,
({\mathcal R} \cup {\mathcal L}) \, \, e_{i+1}$ for all $i = 1,
\ldots, n-1$. This is just a path in the graph $(E, {\mathcal R}
\cup {\mathcal L})$: the set of vertices of this graph is the set
$E$ of idempotents of $S$ and there is an edge denoted $(e,f)$
from $e$ to $f$ for $e,f \in E$ if $e {\mathcal R} f$ or $e
{\mathcal L} f$. One can introduce an equivalence relation on the
set of $E$-paths by adding or removing ``inessential" vertices: a
vertex (idempotent) $e_{i}$ of a path $(e_{1}, e_{2}, \ldots ,
e_{n})$ is called {\it inessential} if $e_{i-1} \, \, {\mathcal R}
\, \, e_{i} \, \, {\mathcal R} \, \, e_{i+1}$ or $e_{i-1} \, \,
{\mathcal L} \, \, e_{i} \, \, {\mathcal L} \, \, e_{i+1}$.
Following Nambooripad \cite{Namb}, we define an {\it $E$-chain} to
be the equivalence class of an $E$-path relative to this
equivalence relation. It can be proved \cite{Namb} that each
$E$-chain has a unique canonical representative of the form
$(e_{1}, e_{2}, \ldots , e_{n})$ where every vertex is essential.
We will often abuse notation slightly by identifying an $E$-chain
with its canonical representative.
The set ${\mathcal G}(E)$ of $E$-chains forms a {\it groupoid}
with set $E$ of objects (identities) and with an $E$-chain
$(e_{1},e_{2}, \ldots , e_{n})$ viewed as a morphism from $e_{1}$
to $e_{n}$. The product $C_{1}C_{2}$ of two $E$-chains $C_{1} =
(e_{1},e_{2}, \ldots , e_{n})$ and $C_{2} = (f_{1},f_{2}, \ldots,
f_{m})$ is defined and equal to the canonical representative of
$(e_{1}, \ldots , e_{n}, f_{1}, \ldots, f_{m})$ if and only if
$e_{n} = f_{1}$: the inverse of $(e_{1},e_{2}, \ldots , e_{n})$ is
$(e_{n}, \ldots , e_{2},e_{1})$. We refer the reader to
\cite{Namb} for more detail.
For future reference we give a universal characterization of
${\mathcal G}(E)$ in the category of small groupoids. Every
equivalence relation $R$ on a set $X$ can be considered to be a
groupoid with objects $X$ and arrows the ordered pairs of $R$.
There are obvious notions of free products and free products with
amalgamations in the category of small groupoids. See
\cite{Higgoids} for details. Clearly the objects of any groupoid
form a subgroupoid whose morphisms are the identities. We will
identify the objects of a groupoid as this subgroupoid and call it
the trivial subgroupoid. The proof of the following theorem
appears in \cite{Namb}.
\begin{Thm}\label{amalg}
Let $S$ be a semigroup with non-empty set of idempotents $E$. Then
${\mathcal G}(E)$ is isomorphic to the free product with
amalgamation $\mathrel{{\mathcal L}}\ast_{E}\mathrel{{\mathcal R}}$ in the category of small groupoids.
\end{Thm}
As mentioned above, we are considering $E$ to be the trivial
subgroupoid of $\mathcal{G}(E)$.
It is easy to see from the characterizations of ${\mathcal R}$ and
$\mathcal L$ above that if $(f_{1},f_{2}, \ldots ,f_{m})$ is the
canonical representative equivalent to an $E$-path
\\ $(e_{1},e_{2},\ldots , e_{n})$, then $e_{1}e_{2} \ldots e_{n} =
f_{1}f_{2} \ldots f_{m}$ in $S$, since $efg = eg$ if $e {\mathcal R}
f {\mathcal R} g$ or $e {\mathcal L} f {\mathcal L} g$.
Standard results of Miller and Clifford \cite{CP} imply that
\\$e_{1} {\mathcal R} e_{1}e_{2} \ldots e_{n} {\mathcal L} \,
\, e_{n}.$
In 1972, D.G. Fitzgerald \cite{FitzG} proved the following basic
result about the idempotent generated subsemigroup of any
semigroup.
\begin{Thm}\label{fitz}
Let $S$ be any semigroup with non-empty set $E = E(S)$ of
idempotents and let $x$ be a regular element of $\langle E(S)
\rangle$. Then $x$ can be expressed as a product of idempotents $x
= e_{1}e_{2} \ldots e_{n}$ in an $E$-path \\ $(e_{1}, e_{2},
\ldots , e_{n})$ of $S$, and hence as a product of idempotents in
an $E$-chain. If $S$ is regular, then so is $\langle E(S) \rangle$.
\end{Thm}
In 1979, Nambooripad introduced the notion of a {\it biordered
set} as an abstract characterization of the set of idempotents $E$
of a semigroup $S$ with respect to certain basic products that are
forced to be idempotents. We give the details that will be needed
in this paper.
Recall that if $e,f \in E=E(S)$ for some semigroup $S$ then $e \,
\, {\omega}^{r} \, \, f$ if and only if $fe = e$, and $e \, \,
{\omega}^{l} \, \, f$ if and only if $ef = e$. In the former case,
$ef$ is an idempotent that is $\mathcal R$ -related to $e$ and $ef
\,\, {\omega} \,\, f$ in the natural order on $E$: similarly, in
the latter case, $fe$ is an idempotent that is $\mathcal
L$-related to $e$ and $fe \,\, {\omega} \,\, f$. Thus in each case
both products $ef$ and $fe$ are defined within $E$, i.e. such
products of idempotents must always be idempotent. Products of
these type are referred to as {\it basic products}. The partial
algebra $E$ with multiplication restricted to basic products is
called the {\it biordered set} of $S$.
Nambooripad \cite{Namb} characterized the partial algebra of
idempotents of a (regular) semigroup with respect to these basic
products axiomatically. We refer the reader to Nambooripad's article
\cite{Namb} for the details. The axioms are complicated but do arise
naturally in mathematics. For example, Putcha proved that pairs of
opposite parabolic subgroups of a finite group of Lie type have the
natural structure of a biordered set \cite{Putchbook}. We will need
one more concept, the sandwich set $S(e,f)$ of two idempotents $e,f$
of $S$.
If $e,f$ are (not necessarily distinct) idempotents of a semigroup
$S$, then $S(e,f)=\{h \in E|ehf=ef, fhe=h\}$ is called the
sandwich set of $e$ and $f$ (in that order). It is straightforward
to prove that if $h \in S(e,f)$, then $h$ is an inverse of $ef$.
In particular, $S(e,f)$ is non-empty for any $e,f$ if $S$ is a
regular semigroup. Nambooripad also gave an order theoretic
definition of the sandwich set, but we will not need that in this
paper.
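To spell out the inverse property: if $h \in S(e,f)$, then $h = fhe$ gives $he = h$ and $fh = h$, and therefore
$$(ef)h(ef) = e(fhe)f = ehf = ef, \qquad h(ef)h = (he)(fh) = h \cdot h = h,$$
so $h$ is an inverse of $ef$, as claimed.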
As mentioned above, Nambooripad gave a definition of a biordered
set as a partial algebra satisfying a collection of axioms. We
don't need the details of these axioms because of the following
theorems. He called a biordered set {\it regular} if the
(axiomatically defined) sandwich set of any pair of idempotents is
non-empty.
\begin{Thm} (Nambooripad \cite{Namb})
The set $E$ of idempotents of a regular semigroup is a regular
biordered set relative to the basic products in $E$. Conversely,
every regular (axiomatically defined) biordered set arises as the
biordered set of idempotents of some regular semigroup.
\end{Thm}
This was extended to non-regular semigroups and non-regular
biordered sets by Easdown \cite{Eas1}. We will give a more precise
statement of Easdown's result in the next section.
\section
{\bf Free idempotent generated semigroups on biordered sets}
If $E$ is a biordered set we denote by $IG(E)$ the semigroup with
presentation
$IG(E) = \langle E: e^{2} = e$ for all $e \in E$ and $e.f = ef$ if
$ef$ is a basic product in $E \rangle $.
If $E$ is a regular biordered set, then we define
$RIG(E) = \langle E: e^{2} = e$ for all $e \in E$ and $e.f = ef$
if $ef$ is a basic product in $E$ and $ef = ehf$ for all $e,f \in
E$ and $h \in S(e,f) \rangle$.
The semigroup $IG(E)$ is called the {\it free idempotent generated
semigroup on $E$} and the semigroup $RIG(E)$ is called the {\it
free regular idempotent generated semigroup on $E$}. This
terminology is justified by the following results of Easdown
\cite{Eas1}, Nambooripad \cite{Namb} and Pastijn \cite{Past2}.
\begin{Thm}\cite{Eas1}\label{Eas}
The biordered set of idempotents of $IG(E)$ is $E$. In particular,
every biordered set is the biordered set of some semigroup. If $S$
is any idempotent generated semigroup with biordered set of
idempotents isomorphic to $E$, then the natural map $E \rightarrow
S$ extends uniquely to a homomorphism $IG(E) \rightarrow S$.
\end{Thm}
\begin{Thm}\cite{Namb, Past2}\label{Nam}
If $E$ is a regular biordered set then $RIG(E)$ is a regular
semigroup with biordered set of idempotents $E$. If $S$ is any
regular idempotent generated semigroup with biordered set biorder
isomorphic to $E$, then the natural map $E \rightarrow S$ extends
uniquely to a homomorphism $RIG(E) \rightarrow S$.
\end{Thm}
There is an obvious natural morphism ${\phi} : IG(E) \rightarrow
RIG(E)$ if $E$ is a regular biordered set. However, we remark that
this is not an isomorphism, and the semigroups $IG(E)$ and
$RIG(E)$ can be very different when $E$ is a regular biordered
set. Also, the regular elements of $IG(E)$ do not form a
subsemigroup in general, even if $E$ is a regular biordered set.
The following simple examples illustrate these facts.
\noindent {\bf Example 1.} Let $E$ be the (non-regular) biordered
set consisting of two idempotents $e$ and $f$ with trivial
quasi-orders ${\omega}^r$ and ${\omega}^l$. Clearly the rules
$e^{2} \rightarrow e, f^{2} \rightarrow f$ constitute a
terminating confluent rewrite system for the semigroup $IG(E)$.
Canonical forms for words in $IG(E)$ are of the form $efef \ldots
e$ or $efef \ldots f$ or $fefe \ldots f$ or $fefe \ldots e$.
Clearly $IG(E)$ is an infinite semigroup with exactly two
idempotents ($e$ and $f$).
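One way to see the last assertion: for any alternating word $w$ of length at least 2, the word $w^{2}$ reduces to an alternating word strictly longer than $w$ (for instance $(efe)^{2} = efeefe \rightarrow efefe$), so no element of $IG(E)$ other than $e$ and $f$ is idempotent.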
\noindent {\bf Example 2.} Let $F$ be the biordered set $E$ above
with a zero $0$ adjoined. Thus $F$ is a three-element semilattice,
freely generated as a semilattice by $e$ and $f$. It is easy to
see that $RIG(F) = F$ since $ef = e0f = 0 = f0e = fe$ from the
presentation for $RIG(F)$ and since $0\in S(e,f)$. But $IG(F)$ is
$IG(E)^{0}$, where $IG(E)$ is the semigroup in Example 1. Thus
$IG(F)$ is infinite, but $RIG(F)$ is finite.
We will give more information about the relationship between
$IG(E)$ and $RIG(E)$, for $E$ a regular biordered set, at the end
of this section. In particular, we will show that the regular
elements of $IG(E)$ are in one-one correspondence with the
elements of $RIG(E)$ (even though the regular elements of $IG(E)$
do not necessarily form a subsemigroup of $IG(E)$).
Nambooripad studied the free regular idempotent generated
semigroup $RIG(E)$ on a regular biordered set via his
general theory of ``inductive groupoids" in \cite{Namb}. If $S$ is
a regular semigroup, then Nambooripad introduced an associated
groupoid ${\mathcal N}(S)$ (that we refer to as the {\it
Nambooripad groupoid of $S$)} as follows. The set of objects of
${\mathcal N}(S)$ is the set $E = E(S)$ of idempotents of $S$. The
morphisms of ${\mathcal N}(S)$ are of the form $(x,x')$ where $x'$
is an inverse of $x$: $(x,x')$ is viewed as a morphism from $xx'$
to $x'x$ and the composition of morphisms is defined by
$(x,x')(y,y') = (xy,y'x')$ if $x'x = yy'$ (and undefined
otherwise). With respect to this product, ${\mathcal N}(S)$
becomes a groupoid, which in fact is endowed with much additional
structure, making it an {\it inductive groupoid} in the sense of
Nambooripad \cite{Namb}. An inductive groupoid is an ordered
groupoid whose identities (objects) admit the structure of a
regular biordered set $E$, and which admits a way of evaluating
products of idempotents in an $E$-chain as elements of the
groupoid. There is an equivalence between the category of regular
semigroups and the category of inductive groupoids. We refer the
reader to Nambooripad's paper \cite{Namb} for much more detail. In
particular, it follows easily from Nambooripad's results that the
maximal subgroup of $S$ containing the idempotent $e$ is
isomorphic to the local group of ${\mathcal N}(S)$ based at the
object (identity) $e$ (i.e. the group of all morphisms from $e$ to
$e$ in ${\mathcal N}(S)$).
In his paper \cite{Namb}, Nambooripad also showed how to construct
the inductive groupoid ${\mathcal N}(RIG(E))$ associated with the
free regular idempotent generated semigroup on a regular biordered
set $E$ directly from the groupoid of $E$-chains of $E$. We review
this construction here.
Let $E$ be a regular biordered set. An {\it $E$-square} is an $E$
-path $(e,f,g,h,e)$ with $e\mathrel{{\mathcal R}} f \mathrel{{\mathcal L}} g \mathrel{{\mathcal R}} h \mathrel{{\mathcal L}} e$ or $(e,h,g,f,e)$
with $e \mathrel{{\mathcal L}} h \mathrel{{\mathcal R}} g \mathrel{{\mathcal L}} f \mathrel{{\mathcal R}} e$. We draw the square as: $
\left[
\begin{array}{cc}
e & f \\
h & g \\
\end{array}
\right] $. An $E$-square is degenerate if it is of one of the
following three types:
\begin{center}
$
\left[
\begin{array}{cc}
e & e \\
e & e \\
\end{array}
\right]$ $\left[
\begin{array}{cc}
e & f \\
e & f \\
\end{array}
\right] $
$\left[
\begin{array}{cc}
e & e \\
f & f \\
\end{array}
\right] $
\end{center}
Unless mentioned otherwise, all $E$-squares will be
non-degenerate.
An idempotent $t=t^2 \in E$ {\em left to right singularizes} the
$E$-square $
\left[
\begin{array}{cc}
e & f \\
h & g \\
\end{array}
\right] $ if $te=e, th=h, et=f$ and $ht=g$ where all of these
products are defined in the biordered set $E$. Right to left, top
to bottom and bottom to top singularization is defined similarly
and we call the $E$-square {\em singular} if it has a
singularizing idempotent of one of these types. Note that since
$te=e \in E$ if and only if $e\omega^{r}t$, all of these products
can also be defined in terms of the order structure as well.
The importance of singular $E$-squares is given by the next lemma.
\begin{Lem}\label{Trivsing} Let $\SSQ$ be a singular $E$-square in
a semigroup $S$. Then the product of the elements in the $E$-cycle
$(e,f,g,h,e)$ satisfies $efghe=e$.
\end{Lem}
\begin{proof}
Let $t=t^2$ left to right singularize the $E$-square $\SSQ$. Then
in any idempotent generated semigroup with biordered set $E$,
$efghe=fh$ follows from the basic $\mathrel{{\mathcal R}}$ and $\mathrel{{\mathcal L}}$ relations of $E$.
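Explicitly, the reduction uses only the basic products around the square: since $ef=f$, $fg=f$ and $he=h$, we have
$$efghe = (ef)ghe = fghe = (fg)he = fhe = f(he) = fh.$$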
Furthermore, $fh=eth=eh=e$ which follows from the definition of
left to right singularization. The other cases of singularization
are proved similarly.
\end{proof}
In order to build the inductive groupoid of $RIG(E)$, we must
therefore identify any singular $E$-cycle of $\mathcal{G}(E)$ from an
idempotent $e$ to itself with $e$. This is because any inductive
groupoid with biordered set $E$ is an image of $\mathcal{G}(E)$ by
Nambooripad's theory \cite{Namb}. This leads to the following
definition. For two $E$-chains $C = (e_{1},e_{2}, \ldots , e_{n})$
and $C' = (f_{1},f_{2}, \ldots , f_{m})$ define $C \rightarrow C'$
if there are $E$-chains $C_{1}$ and $C_{2}$ and a singular
$E$-square $\gamma$ such that $C = C_{1}C_{2}$ and $C' =
C_{1}{\gamma}C_{2}$ and let $\sim$ denote the equivalence relation
on ${\mathcal G}(E)$ induced by $\rightarrow$. The next theorem
follows from Theorems 6.9 and 6.10 of \cite{Namb} and ensures that the
quotient groupoid ${\mathcal G}(E)/{\sim} $ defined above has an
inductive structure and is isomorphic to the inductive groupoid of
$RIG(E)$.
\begin{Thm}(Nambooripad \cite{Namb}) \label{N(E)}
If $E$ is a regular biordered set, then $ {\mathcal N}(RIG(E))
\cong {\mathcal G}(E)/{\sim} $.
\end{Thm}
It is convenient to provide a topological interpretation of this
theorem of Nambooripad. We remind the reader that just as groups
are presented by a set of generators and a set of words over the
generating set as relators (giving the group as a quotient of the
free group on the generating set), groupoids are presented by a
graph and a set of cycles in the graph as relators (giving the
groupoid as a quotient of the free groupoid on the graph). See
\cite{Higgoids} for more details.
It follows from Theorem \ref{amalg} and Theorem \ref{N(E)} that we
have the following presentation for $ {\mathcal N}(RIG(E)) \cong
{\mathcal G}(E)/{\sim} $.
{\bf Generators:} The graph with vertices $E$ and edges the
relation $\mathrel{{\mathcal R}} \cup \mathrel{{\mathcal L}}$.
{\bf Relators:} There are two types of relators:
\begin{enumerate}
\item {$((e,f),(f,g),(g,e))=1_e$ if $e\mathrel{{\mathcal R}} f\mathrel{{\mathcal R}} g$ or $e \mathrel{{\mathcal L}} f \mathrel{{\mathcal L}} g$}
\item {$((e,f),(f,g),(g,h),(h,e))=1_e$ if $\SSQ$ is a singular
$E$-square}.
\end{enumerate}
We will always assume that there are no trivial relators in the
list above. This means that for relators of type (1) all three
elements $e,f,g$ are distinct and for relators of type (2), all
four elements $e,f,g,h$ are distinct.
If $E$ is a regular biordered set we associate a 2-complex
$K(E)$ which is the analogue of the presentation complex of a
group presentation. The $1$-skeleton of $K(E)$ is the graph
$(E,{\mathcal R} \cup {\mathcal L})$ described above. Since $\mathrel{{\mathcal R}}$
and $\mathrel{{\mathcal L}}$ are symmetric relations we consider the underlying graph
to be undirected in the usual way. The $2$-cells of $K(E)$ are of
the following types:
(1) if $e \, \, {\mathcal R} \, \, f \, \, {\mathcal R} \, \, g$
or $e \, \, {\mathcal L} \, \, f \, \, {\mathcal L} \, \, g$ for
$e,f,g \in E$ then there is a $2$-cell with boundary edges $(e,f),
(f,g), (g,e)$.
(2) all singular $E$-squares bound $2$-cells.
We note that our 2-complexes are combinatorial objects and we follow
the notation of \cite{Rotman}, \cite{Spanier}.
We denote the fundamental groupoid of a $2$-complex $K$ by
${\pi}_{1}(K)$: the fundamental group of $K$ based at $v$ will be
denoted by ${\pi}_{1}(K,v)$. The following corollary is an
immediate consequence of Nambooripad's work and the definition of
the fundamental groupoid of a $2$-complex (see, for example,
\cite{Higgoids}).
\begin{Cor} If $E$ is a regular biordered set, then
${\pi_{1}}(K(E)) \cong {\mathcal G}(E)/{\sim} $ and hence
${\pi_{1}}(K(E)) \cong {\mathcal N}(RIG(E))$.
\end{Cor}
It follows that the maximal subgroup of $RIG(E)$ containing the
idempotent $e$ is isomorphic to the fundamental group of $K(E)$
based at $e$. The next theorem shows that there is a one to one
correspondence between regular elements of $IG(E)$ and $RIG(E)$ if
$E$ is a regular biordered set and that for every $e \in E$, the
maximal subgroup at $e$ in $IG(E)$ is isomorphic to the maximal
subgroup at $e$ in $RIG(E)$.
\begin{Thm}\label{IG=RIG}
Let $E$ be a regular biordered set. Then the natural map ${\phi} :
IG(E) \rightarrow RIG(E)$ is a bijection when restricted to the
regular elements of IG(E). That is, for each element $r \in
RIG(E)$ there exists a unique regular element $s \in IG(E)$ such
that ${\phi}(s) = r$. In particular, the maximal subgroups of
$IG(E)$ and $RIG(E)$ are isomorphic.
\end{Thm}
\begin{proof}
It follows from Fitzgerald's theorem (Theorem \ref{fitz}) that every
element of $RIG(E)$ is the product of the elements in an $E$-chain.
But it follows from the Clifford-Miller theorem \cite{CP} that the
product of the elements in an $E$-chain is a regular element in any
idempotent generated semigroup with biordered set $E$. It follows
immediately that $\phi$ restricts to a surjective map from the
regular elements of $IG(E)$ to $RIG(E)$.
If $u$ and $v$ are regular elements of $IG(E)$, then there are
$E$-chains \\
$(e_{1},e_{2}, \ldots , e_{n})$ and $(f_{1},f_{2},
\ldots , f_{m})$ such that $u = e_{1}e_{2} \ldots e_n$ and \\
$v =
f_{1}f_{2} \ldots f_{m}$ in $IG(E)$. Suppose that ${\phi}(u) =
{\phi}(v)$. Clearly, on applying the morphism $\phi$, $e_{1}e_{2}
\ldots e_{n} = f_{1}f_{2} \ldots f_{m}$ in $RIG(E)$. We mentioned
previously that it follows from the Clifford-Miller theorem
\cite{CP} that $e_{1} {\mathcal R} f_{1}$ and $ e_{n} {\mathcal L}
f_{m}$. Thus without loss of generality, we may assume that $e_{1} =
f_{1}$ since $e_{1}f_{1}f_{2} \ldots f_{m} = f_{1}f_{2} \ldots
f_{m}$ in $IG(E)$, and similarly we may assume that $e_{n} = f_{m}$.
Applying Lemma 4.11 of \cite{Namb} and Theorem \ref{N(E)}, it follows
that $(e_{1},e_{2}, \ldots ,e_{n}) \sim (f_{1}, f_{2}, \ldots
,f_{m})$. Thus it is possible to pass from $(e_{1},e_{2}, \ldots
,e_{n})$ to $(f_{1}, f_{2}, \ldots ,f_{m})$ by a sequence of
operations of two types:
(a) inserting or deleting paths of length 3 corresponding to $\mathrel{{\mathcal R}}$
or $\mathrel{{\mathcal L}}$ related idempotents; and
(b) inserting or deleting $E$-cycles corresponding to singular
$E$-squares.
Note that if $(e,f,g,h,e)$ is a singular $E$-square then $efghe =
e$ in any semigroup $S$ with biordered set $E$ by Lemma
\ref{Trivsing}. It follows easily that if $(g_{1},g_{2}, \ldots
,g_{p})$ is obtained from $(e_{1},e_{2}, \ldots ,e_{n})$ by one
application of an operation of type (a) or (b) above, then
$e_{1}e_{2} \ldots e_{n} = g_{1}g_{2} \ldots g_{p}$ in any
semigroup with biordered set $E$, and in particular this is true
in $IG(E)$. It follows by induction on the number of steps of
types (a) and (b) needed to pass from $(e_{1},e_{2}, \ldots
,e_{n})$ to $(f_{1},f_{2}, \ldots ,f_{m})$ that $u = e_{1}e_{2}
\ldots e_{n} = f_{1}f_{2} \ldots f_{m} = v$ in $IG(E)$, so $\phi$
is one-to-one on regular elements, as desired.
To prove the final statement of the theorem, note that elements of
the maximal subgroup of $IG(E)$ or $RIG(E)$ containing $e$ come
from
$E$-chains that start and end at $e$, since \\
$e_{1} \, \, {\mathcal R} \, \, e_{1}e_{2} \ldots e_{n} \, \,
{\mathcal L} \, \, e_{n}$ for any $E$-chain $(e_{1},e_{2}, \ldots
, e_{n})$. This shows that the map ${\phi}$ is surjective on
maximal subgroups: the first part of the theorem shows that it is
injective on maximal subgroups.
\end{proof}
\section{Connections between the Nambooripad Complex and the
Graham-Houghton Complex}
In this section we use the Bass-Serre theoretic methods of
\cite{HMM} to study the local groups of $\mathcal{G}(E)$ and $\mathcal{N}(E)$. The local
group of a groupoid $G$ at the object $v$ is the group of self
morphisms $G(v,v)$. For $\mathcal{G}(E)$ we give a rapid topological proof of
a result of Nambooripad and Pastijn \cite{NP1} who showed that
the local groups of $\mathcal{G}(E)$ are free groups. By applying \cite{HMM}
we are led directly to the graphs considered by Graham and
Houghton \cite{Gr, Hough} for studying completely 0-simple
semigroups. We put a structure of a complex on top of the
Graham-Houghton graphs in order to have tools to study the vertex
subgroups of $\mathcal{N}(E)$, which by Theorem \ref{N(E)} and Theorem
\ref{IG=RIG} are the maximal subgroups of $IG(E)$ and $RIG(E)$
when $E$ is a regular biordered set.
Throughout this section, $E$ will denote a regular biordered set.
By Theorem \ref{Nam}, $E$ is isomorphic to the biordered set of
idempotents of $RIG(E)$ and we will use this identification
throughout the section as well. Thus, we will refer to the
elements of $E$ as idempotents and talk about their Green classes
within $RIG(E)$. We have seen in Theorem \ref{amalg} that $\mathcal{G}(E)$
decomposes as the free product with amalgamation $\mathcal{G}(E)=\mathrel{{\mathcal L}} \ast_{E}
\mathrel{{\mathcal R}}$, where by abuse of notation, $E$ denotes the trivial
subgroupoid. Since $\mathrel{{\mathcal L}}$ and $\mathrel{{\mathcal R}}$ also have the same objects as
each other and as $E$, we can use the methods of \cite{HMM} to
study the local groups of $\mathcal{G}(E)$, since that paper was
concerned with amalgams of groupoids in which the intersection of
the two factors contains all the identity elements.
For every such amalgam of groupoids $G=A\ast_{U}B$, \cite{HMM}
associates a graph of groups in the sense of Bass-Serre Theory
\cite{Serre} whose connected components are in one to one
correspondence with the connected components of $G$ and such that
the fundamental group of a connected component is isomorphic to
the local group of the corresponding component of $G$.
First note that there is a one to one correspondence between the
$\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ classes of $E$ and the $\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ classes of $RIG(E)$.
This is because every $\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ class of $RIG(E)$ has an
idempotent and the $\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ relation restricted to idempotents
can be defined by basic products. We abuse notation by identifying
an $\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ class of $E$ with the $\mathrel{{\mathcal L}} (\mathrel{{\mathcal R}})$ class of $RIG(E)$
containing it.
We now describe explicitly the graph of groups associated to
$\mathcal{G}(E)$. For more details, see \cite{HMM}. The graph of groups
$\textsf{G}$ of $\mathcal{G}(E)$ consists of the following data: The set of
vertices is the disjoint union of the $\mathrel{{\mathcal L}}$ and $\mathrel{{\mathcal R}}$ classes of $E$
and its positive edges are the elements of $E$. If $e \in E$, its
initial vertex is its $\mathrel{{\mathcal L}}$-class and its terminal vertex is its
$\mathrel{{\mathcal R}}$-class. That is, there is a unique positive edge from an
$\mathrel{{\mathcal L}}$-class $L$ to an $\mathrel{{\mathcal R}}$-class $R$ if and only if the $\mathrel{{\mathcal H}}$-class
$L \cap R$ of $RIG(E)$ contains an idempotent. Each vertex group
of $\textsf{G}$ is the trivial group. This is an exact translation
for $\mathcal{G}(E)$ of the graph of groups defined for an arbitrary amalgam
on page 46 of \cite{HMM}.
Since the vertex groups of $\textsf{G}$ are trivial, we can consider
$\textsf{G}$ to be a graph in the usual sense. Therefore its
fundamental group is a free group and we have the following theorem
of Nambooripad and Pastijn \cite{NP1}.
\begin{Thm}\label{free}
Every local subgroup of $\mathcal{G}(E)$ is a free group.
\end{Thm}
\begin{proof}
It follows from Theorem 3 of \cite{HMM} that for each element $e
\in E$ the local subgroup of $\mathcal{G}(E)$ at $e$ is isomorphic to the
fundamental group of $\textsf{G}$ based at the $\mathrel{{\mathcal L}}$-class of $e$.
Since the latter group is free by the discussion above, the
theorem is proved.
\end{proof}
In the case that a connected component of $\mathcal{G}(E)$ has a finite
number of idempotents, the rank of the free group will be the
Euler characteristic of the corresponding component of
$\textsf{G}$, that is, the number of edges of the graph minus the
number of vertices plus 1. Thus if the connected component of $e
\in E$ of $\mathcal{G}(E)$ has $m$ $\mathrel{{\mathcal R}}$-classes, $n$ $\mathrel{{\mathcal L}}$-classes and $k$
idempotents, then the free group $\mathcal{G}(E)(e,e)$ has rank $k-(m+n)+1$.
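For instance, for the connected component corresponding to the graph $\Gamma$ in the example constructed later in this paper, which has 8 $\mathrel{{\mathcal R}}$-classes, 8 $\mathrel{{\mathcal L}}$-classes and 32 idempotents, the local group of $\mathcal{G}(E)$ at any of its idempotents is free of rank $32-(8+8)+1=17$.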
All the calculations of maximal subgroups of $RIG(E)$ or $IG(E)$
that have appeared in the literature \cite{McElw, NP1, Past2} have
been restricted to cases of biordered sets that have no
non-degenerate singular squares. In this case it follows from
Theorem \ref{N(E)} that $\mathcal{G}(E)$ is isomorphic to $\mathcal{N}(E)$. Since the
local groups of $\mathcal{N}(E)$ are isomorphic to the maximal subgroups of
$RIG(E)$ we have the following result of Nambooripad and Pastijn
\cite{NP1}.
\begin{Thm}\label{nosing}
If $E$ is a biordered set that has no non-degenerate singular
squares, then every subgroup of $RIG(E)$ is free.
\end{Thm}
Nambooripad and Pastijn's proof of Theorem \ref{nosing} uses
combinatorial word arguments. A topological proof of Theorem
\ref{nosing} in the special case that the (not necessarily
regular) biordered set has no nontrivial biorder ideals was given
by McElwee \cite{McElw}. The graph that McElwee uses is the same
as ours in this case, but without reference to the general work of
\cite{HMM} or the connection with the Graham-Houghton graph
\cite{Gr, Hough} that we discuss below. There are a number of
interesting classes of regular semigroups whose biordered sets
have no non-degenerate singular squares including locally inverse
semigroups. See \cite{NP1} for more examples.
Connected components of the graph $\textsf{G}$ associated to $\mathcal{G}(E)$
defined above have arisen in the literature in connection with the
theory of finite 0-simple semigroups and in particular with the
theory of idempotent generated subsemigroups of finite 0-simple
semigroups. Finite idempotent generated 0-simple semigroups have the
property that all non-zero idempotents are connected by an
$E$-chain. This follows from the Clifford-Miller theorem \cite{CP}.
Thus the graph $\textsf{G}$ corresponding to the biordered set of a
finite 0-simple semigroup has a trivial component consisting of 0
and one other connected component. The graph defined independently
by Graham and Houghton \cite{Gr, Hough} associated to a finite
0-simple semigroup is exactly the graph that arises from Bass-Serre
theory associated to $\mathcal{G}(E)$ that we have defined above. Graham and
Houghton did not note the connection to Bass-Serre theory. A number
of papers have given connections between completely 0-simple
semigroups, the theory of graphs and algebraic topology \cite{Gr,
Hough}, \cite{Pollatch}. The monograph \cite{qtheory} gives an
updated version of these connections.
We now add 2-cells to $\textsf{G}$ of the graph associated to $\mathcal{G}(E)$,
one for each singular square $\SSQ$. Given this square and recalling
that the positive edges of $\textsf{G}$ are directed from the
$\mathrel{{\mathcal L}}$-class of an idempotent to its $\mathrel{{\mathcal R}}$-class we sew a 2-cell onto
$\textsf{G}$ with boundary $ef^{-1}gh^{-1}$. We call this 2-complex
the Graham-Houghton complex of $E$ and denote it by $GH(E)$.
We note two important properties of $GH(E)$. Its 1-skeleton is
naturally bipartite as each edge runs between an $\mathrel{{\mathcal L}}$-class and an
$\mathrel{{\mathcal R}}$-class. Furthermore $GH(E)$ is a square complex in that each of
its cells is a square bounded by a 4-cycle.
We now prove that the fundamental group of the connected component
of $GH(E)$ containing the vertex $\mathrel{{\mathcal L}}_{e}$ of an idempotent $e\in
E$ is isomorphic to the fundamental group of the Nambooripad
complex $K(E)$ containing the vertex $e$. We will then be able to
use $GH(E)$ to compute the maximal subgroups of $RIG(E)$.
As we have seen above, the Nambooripad complex $K(E)$ has vertices
$E$, the idempotents of $S$, edges $(e,f)$ whenever $e\mathrel{{\mathcal R}} f$ or
$e\mathrel{{\mathcal L}} f$, and two types of two cells: one triangular 2-cell
$(e,f)(f,g)(g,e)$ for each unordered triple $(e,f,g)$ of distinct
elements satisfying $e\mathrel{{\mathcal R}} f\mathrel{{\mathcal R}} g$ or $e\mathrel{{\mathcal L}} f\mathrel{{\mathcal L}} g$, and one square
2-cell $(e,f)(f,g)(g,h)(h,e)$ for each non-degenerate singular
$E$-square $\SSQ$.
The Graham-Houghton complex $GH(E)$ has one vertex for each $\mathrel{{\mathcal R}}$
or $\mathrel{{\mathcal L}}$-class of $E$, an edge labelled by $e\in E$ between $\mathrel{{\mathcal R}}_a$
and $\mathrel{{\mathcal L}}_b$ if $e\in\mathrel{{\mathcal R}}_a\cap\mathrel{{\mathcal L}}_b$ (giving a bipartite graph), and
square 2-cells attached along $(e,f,g,h)$ when $\SSQ$ is a
non-degenerate singular $E$-square.
We now describe a sequence of transformations of complexes which
starts with $GH(E)$ and ends with $K(E)$. Each step, we shall see,
does not change the isomorphism class of the fundamental groups of
the complex. This will imply that $GH(E)$ and $K(E)$ have
isomorphic fundamental groups. The basic idea is that the vertices
of $K(E)$ are the edges of $GH(E)$, and the vertices of $GH(E)$
are, in some sense, the edges of $K(E)$. The process basically
``blows up'' the vertices of $GH(E)$ to introduce the edges of
$K(E)$, and then crushes the original edges of $GH(E)$ to points
to create the vertices of $K(E)$. The blow-up process introduces
the triangular 2-cells needed for $K(E)$, and the crushing process
turns the square 2-cells of $GH(E)$ into the square 2-cells of
$K(E)$. All of the topological facts used below may be found, for
example, in \cite{Hatcher, Spanier}. More precisely, in the
theorem below, we prove that $K(E)$ is the 2-skeleton of a complex
that is homotopy equivalent to $GH(E)$ and in particular, they
have isomorphic fundamental groups at each vertex.
\begin{Thm} $\pi_1(K(E),e)$ is isomorphic to $\pi_1(GH(E),\mathrel{{\mathcal L}}_{e})$
for each $e \in E$.
\end{Thm}
\begin{proof}
The first step is to blow up each vertex $R$ or $L$ of $GH(E)$ to an
$n$-simplex, where $n$ is the valence of the vertex. Figure
\ref{Mark1} shows the essential details. The basic idea is that the
vertex $R$ or $L$ becomes the $n$-simplex, each edge of $GH(E)$
incident to $R$ or $L$ becomes an edge incident to a distinct vertex
of the $n$-simplex, and any square 2-cell incident to the vertex
receives an added edge of the $n$-simplex in its boundary, joining
the two vertices which its original pair of edges are now incident
to. Carrying out this process for all of the original vertices
results in a complex which we will call $Q_1$. Note that $Q_1$ is
homotopy equivalent to $GH(E)$, since $GH(E)$ may be obtained from
$Q_1$ by crushing each $n$-simplex $\sigma^n$ to a point (literally,
taking the quotient complex $Q_1/\sigma^n$). Since each $n$-simplex
is a contractible subcomplex of $Q_1$, the quotient map
$Q_1\rightarrow Q_1/\sigma^n$ is a homotopy equivalence
(\cite{Hatcher}, Proposition 0.17); the result then follows by
induction, since $Q_1$ with every one of the introduced simplices
crushed to points is isomorphic to $GH(E)$. The original square
2-cells of $GH(E)$ have now become octagons in $Q_1$.
The complex $Q_1$ has a pair of vertices for each original edge of
$GH(E)$, that is, for each element $e\in E$. One of the vertices
lies in the 2-skeleton of the $n$-simplex corresponding to the
$\mathrel{{\mathcal L}}$-class of $e$, and the other in the corresponding $\mathrel{{\mathcal R}}$-class.
Our second step is to crush each of these original edges from
$GH(E)$ to points, resulting in a complex which we will call
$Q_2$; see Figure \ref{Mark2}. Each such edge forms a contractible
subcomplex of $Q_1$, since its vertices are distinct (the
1-skeleton of $GH(E)$ is a bipartite graph, so the vertices of
each edge lie on distinct $n$-simplices), so quotienting out by
each edge is again a homotopy equivalence. $Q_2$ is therefore
homotopy equivalent to $Q_1$. The vertices of $Q_2$ are now in
1-to-1 correspondence with $E$, since there is one vertex for each
edge in $GH(E)$. The edges of $Q_2$ are precisely the edges in the
$n$-simplices, so there is an edge from $e$ to $f$ precisely when
$e$ and $f$ lie in the same $\mathrel{{\mathcal L}}$- or $\mathrel{{\mathcal R}}$-class, which are
precisely the edges of the Nambooripad complex. Under the quotient
map the octagonal 2-cells of $Q_1$ have become square 2-cells,
whose boundaries are edge paths through the vertices $e,f,g,h$
given by the edges in the boundaries of the square 2-cells of
$GH(E)$. That is, they are precisely the singular $E$-squares of
the Nambooripad complex.
Finally, the Nambooripad complex $K(E)$ is isomorphic to the
2-skeleton $Q_2^{(2)}\subseteq Q_2$ of $Q_2$. That is, $Q_2^{(2)}$
consists of the 1-skeleton, which is the 1-skeleton of $K(E)$,
together with the singular squares and all of the 2-faces of the
$n$-simplices, which are precisely the triangular 2-cells of
$K(E)$ for $e,f,g$ three distinct elements in the same $\mathrel{{\mathcal L}}$- or
$\mathrel{{\mathcal R}}$-class. Having the same vertices, edges, and 2-cells, the two
2-complexes are therefore isomorphic.
Since the fundamental groupoid of the 2-skeleton of a complex is
isomorphic to the fundamental groupoid of the complex, we have
$\pi_1(K(E))\cong \pi_1(Q_2^{(2)})\cong \pi_1(Q_2)\cong
\pi_1(Q_1)\cong\pi_1(GH(E))$,
\noindent as desired.
\end{proof}
\section{An example of a free idempotent generated semigroup with
non-free subgroups}
In this section we present an example of a finite regular
biordered set $E$ such that $Z \times Z$, the free Abelian group
of rank 2, is isomorphic to a maximal subgroup of $RIG(E)$. This
is the first example of a subgroup of a free idempotent generated
semigroup that is not a free group.
Before presenting the example, we give more details on the
connection between bipartite graphs and completely 0-simple
semigroups. This will help us explain how we present our example.
Let $S=M^{0}(A,1,B,C)$ be a combinatorial completely 0-simple
semigroup. That is, the maximal subgroup is the trivial group 1.
Thus we can represent elements as pairs $(a,b) \in A \times B$ with
product $(a,b)(a',b')=(a,b')$ if $C(b,a')\neq 0$ and 0 otherwise. As
in the general case of the Graham-Houghton graph that we described
in the previous section, we associate a bipartite graph $\Gamma(S)$
to $S$. The vertices of $\Gamma(S)$ are $A \cup B$ (where as usual,
we assume $A \cap B$ is empty). There is an edge between $b \in B$
and $a \in A$ if and only if $C(b,a)=1$. Clearly $\Gamma(S)$ is a
bipartite graph with no isolated vertices.
Conversely, let $\Gamma$ be a bipartite graph with vertices the
disjoint union of two sets $A$ and $B$ and no isolated vertices.
We then have the incidence matrix $C=C(\Gamma):B \times A
\rightarrow \{0,1\}$ with $C(b,a)=1$ if and only if $\{b,a\}$ is
an edge of $\Gamma$. As usual we write $C$ as a $\{0,1\}$ matrix
with rows labelled by elements of $B$ and columns labelled by
elements of $A$. Define $S(\Gamma)$ to be the Rees matrix
semigroup $S(\Gamma)=M^{0}(A,1,B,C(\Gamma))$. Then it follows from
the fact that $\Gamma$ has no isolated vertices that $S(\Gamma)$
is a combinatorial 0-simple semigroup. Clearly, these assignments
give a one to one correspondence between combinatorial 0-simple
semigroups and directed bipartite graphs with no isolated
vertices. Isomorphisms of graphs are easily seen to correspond to
isomorphisms of the corresponding semigroup and vice versa.
We now explain the idea of our example. We will define a bipartite
graph $\Gamma$ that embeds on the surface of a torus. The graph will
represent the one skeleton of a square complex. We will then define
a finite regular semigroup $S$ that has $\Gamma$ as the bipartite
graph corresponding to a completely 0-simple semigroup that is an
ideal of $S$ and such that if we add the singular squares of the
biordered set $E(S)$ as 2-cells to $\Gamma$ (in the language of the
previous section, we build the Graham-Houghton complex), we obtain a
complex that has the fundamental group of the torus, that is, $Z
\times Z$ as maximal subgroup.
We begin by drawing the graph $\Gamma$ in Figure \ref{Gamma}.
\begin{figure}
\caption{The graph $\Gamma$}\label{Gamma}
\end{figure}
We call the colors of the bipartition $R$ and $L$ to remind the
reader of the Green relations $\mathrel{{\mathcal R}}$ and $\mathrel{{\mathcal L}}$ (but if the reader
insists, s/he can think of them as Red and bLue). Thus there are
16 vertices in the graph and 32 edges. Figure \ref{Gamma} is drawn
in a way that the graph is really drawn on the torus obtained by
identifying the top of the graph with the bottom and the left side
with the right side.
Before continuing we define the incidence matrix of $\Gamma$. For
our purposes, it is more convenient to write the transpose of the
incidence matrix. Thus the matrix in figure \ref{incidence} has
rows labelled by $R_{1}, \ldots, R_{8}$ and columns labelled by
$L_{1}, \ldots, L_{8}$. In particular, the matrix written this way
defines the biordered set of the 0-simple semigroup $S(\Gamma)$
corresponding to $\Gamma$. That is, idempotents correspond to the
$\mathrel{{\mathcal H}}$ classes with entries 1, the $\mathrel{{\mathcal R}}$ relation corresponds to
being idempotents in the same row and the $\mathrel{{\mathcal L}}$ relation
corresponds to being idempotents in the same column.
\begin{figure}
\caption{The transpose of the incidence matrix of the graph
$\Gamma$}\label{incidence}
\end{figure}
Now consider the 2-complex one obtains by sewing on 2-cells
corresponding to the 16 visual 1 by 1 squares that we see in the
diagram of $\Gamma$. Notice that after identifying the graph on the
surface of a torus, there are 24 4-cycles in the graph. There are
the 16 4-cycles bounding 2-cells in our complex (such as
$R_{1},L_{3},R_{4},L_{1}$) that we see in figure \ref{Gamma}: there
are also the 8 4-cycles (such as $R_{1},L_{3},R_{3},L_{4}$) that are
obtained when we fold $\Gamma$ into a torus, but these 4-cycles do
not bound 2-cells in our complex. Clearly the fundamental group of
this complex is $Z \times Z$. We have simply drawn subsquares on the
usual representation of the torus as a square with opposite sides
identified. By attaching 2-cells along these 16 4-cycles we obtain a
space homeomorphic to the torus, and thus its fundamental group is $Z
\times Z$.
Furthermore, each of the 16 visual 1 by 1 squares in the diagram
of the graph $\Gamma$ corresponds to an $E$-square in the
biordered set of the 0-simple semigroup $S(\Gamma)$ corresponding
to $\Gamma$. Thus if we can find a regular semigroup $S$ that has
the biordered set corresponding to $S(\Gamma)$ as a connected
component and also has exactly the 16 visible squares as the
singular squares in this component, it follows from the results of
the previous section that the maximal subgroup of the connected
component corresponding to $\Gamma$ in $RIG(E(S))$ is $Z \times
Z$. We proceed to construct such a regular semigroup.
Let $X=\{L_{1}, \ldots, L_{8}\}$. The semigroup $S$ will be
defined as a subsemigroup of the monoid of partial functions
acting on the right of $X$. Let $C$ be the transpose of the matrix
in figure \ref{incidence}. Thus $C$ is the structure matrix of the
0-simple semigroup $S(\Gamma)$. To each element $s=(R_{i},L_{j})
\in S(\Gamma)$ we associate the partial constant function $f_s:X
\rightarrow X$ defined by $L_{x}f_{s}=L_{j}$ if $C(L_{x},R_{i})=1$
and undefined otherwise. In the language of semigroup theory,
$f_{s}$ is the image of $s$ under the right Schutzenberger
representation of $S(\Gamma)$ \cite{Arbib, qtheory}.
The semigroup generated by $\{f_{s}|s \in S(\Gamma)\}$ is isomorphic
to $S(\Gamma)$. This can be verified by direct computation by
showing that for all $s,t \in S(\Gamma)$, $f_{s}f_{t}=f_{st}$,
(where $st$ is the product of $s$ and $t$ in $S(\Gamma)$) and that
the assignment $s \mapsto f_{s}$ is one to one. This follows
directly from the definition of $f_s$ above. Alternatively, one can
verify this by noting as we did above that the assignment of $s$ to
$f_{s}$ is the right Schutzenberger representation. The structure
matrix of $S(\Gamma)$, that is, the transpose of the matrix in
figure \ref{incidence}, has no repeated rows or columns, and this
implies that both the right and left Schutzenberger representations
are faithful \cite{Arbib, qtheory}.
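To illustrate the identity $f_{s}f_{t}=f_{st}$ on one pair, take $s=(R_{1},L_{6})$ and $t=(R_{2},L_{3})$, where $R_{1}=\{L_{1},L_{2},L_{3},L_{4}\}$ and $R_{2}=\{L_{1},L_{2},L_{6},L_{8}\}$ (these blocks are listed explicitly in the next subsection). Since $L_{6} \in R_{2}$ we have $st=(R_{1},L_{3})$ in $S(\Gamma)$; on the other hand, $f_{s}$ sends every point of $R_{1}$ to $L_{6}$ and $f_{t}$ sends $L_{6}$ to $L_{3}$, so $f_{s}f_{t}$ is the partial constant function with domain $R_{1}$ and value $L_{3}$, that is, $f_{st}$.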
Now we define two more functions $e,k$ by the following tables.
\begin{center}
$e=\left[
\begin{array}{cccccccc}
L_{1} & L_{2} & L_{3} & L_{4} & L_{5} & L_{6} & L_{7} & L_{8} \\
L_{1} & L_{6} & L_{3} & L_{7} & L_{3} & L_{6} & L_{7} & L_{1} \\
\end{array}
\right]$
$k=\left[
\begin{array}{cccccccc}
L_{1} & L_{2} & L_{3} & L_{4} & L_{5} & L_{6} & L_{7} & L_{8} \\
L_{4} & L_{2} & L_{2} & L_{4} & L_{5} & L_{5} & L_{8} & L_{8} \\
\end{array}
\right]$
\end{center}
Let $S$ be the semigroup generated by $\{e,k,f_{s}|s \in
S(\Gamma)\}$. We claim that $S$ is the semigroup that has the
properties we desire. Notice that $e$ and $k$ are idempotents and
that $S(\Gamma)$ is generated by its idempotents (this is known to
be equivalent to the graph $\Gamma$ being connected \cite{Gr,
Namb}), so in fact, $S$ is an idempotent generated semigroup.
The subsemigroup $T$ generated by $\{e,k\}$ has by direct
computation 8 elements
$\{e,k,(ek),(ke),(eke),(kek),h=(ek)^{2},f=(ke)^{2}\}$. This
semigroup consists of functions all of rank 4 and is a completely
simple semigroup whose idempotents are $e,f,k,h$. We claim that
$TS(\Gamma) \cup S(\Gamma)T \subseteq S(\Gamma)$. To see this we
first note that for $(R_{i},L_{j}) \in S(\Gamma)$, we have
$(R_{i},L_{j})t=(R_{i},L_{j}t)$ for $t \in \{e,k\}$. Therefore
$S(\Gamma)T \subseteq S(\Gamma)$ follows by induction on the length
of a product of elements in $\{e,k\}$.
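For example, reading off the table for $e$ above, $(R_{1},L_{2})e=(R_{1},L_{2}e)=(R_{1},L_{6})$, which again lies in $S(\Gamma)$.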
We now list how $e$ and $k$ act on the left of $S(\Gamma)$. In the
charts below, we note, for $t \in \{e,k\}$ and $(R_{i},L_{j}) \in
S(\Gamma)$, that $t(R_{i},L_{j})=(tR_{i},L_{j})$ for the left action
$R_{i} \mapsto tR_{i}$ listed here. Again, all of this can be
verified by direct computation.
\begin{center}
$e:\left[
\begin{array}{cccccccc}
R_{1} & R_{2} & R_{3} & R_{4} & R_{5} & R_{6} & R_{7} & R_{8} \\
R_{4} & R_{2} & R_{3} & R_{4} & R_{3} & R_{6} & R_{2} & R_{6} \\
\end{array}
\right]$
$k:\left[
\begin{array}{cccccccc}
R_{1} & R_{2} & R_{3} & R_{4} & R_{5} & R_{6} & R_{7} & R_{8} \\
R_{1} & R_{5} & R_{7} & R_{8} & R_{5} & R_{1} & R_{7} & R_{8} \\
\end{array}
\right]$
\end{center}
For readers who know the terminology, we have listed the images of
$T$ in the left Schutzenberger representation on $S(\Gamma)$
\cite{Arbib, qtheory}. Our claim that $TS(\Gamma) \cup S(\Gamma)T
\subseteq S(\Gamma)$ follows from these charts by induction on the
length of a product from $T$. It follows that $S$ is the disjoint
union of $T$ and $S(\Gamma)$. Thus $S$ is a regular semigroup with
3 $\mathrel{{\mathcal J}}$-classes: one of them being $T$ and the other 2 coming from
$S(\Gamma)$ (its unique non-zero $\mathrel{{\mathcal J}}$-class and 0). $S(\Gamma)$ is
the unique 0-minimal ideal of $S$. The order of $S$ is 73 and the
order of $E(S)$ is 37.
We now look at the biorder structure on $E(S)$. We summarize the
usual idempotent order relation in figure \ref{order}.
\begin{figure}
\caption{The idempotent order on $E(S)$}\label{order}
\end{figure}
We explain the symbols in this diagram. Each symbol represents an
idempotent in $T$ according to figure \ref{explanation}.
An entry of a symbol in a box in figure \ref{order} denotes a
relation in the usual idempotent order. For example, the idempotent
$(R_{1},L_{1})$ of $S(\Gamma)$ is below $f$ in the idempotent order.
It also follows from the diagram that $(R_{2},L_{1})\,{\omega}^{l}\,f$,
but that $(R_{2},L_{1})$ is not below $f$ in the idempotent order.
The other relations in the regular biordered set $E(S)$ can be
computed directly in $S$. For example, $f(R_{2},L_{1}) =
(R_{7},L_{1}), k(R_{2},L_{2}) = (R_{5},L_{2})$, etc.
The partial order on $E(S)$ has many pleasant properties. For
example, each of the idempotents in $T$ is above exactly 8
idempotents in $S(\Gamma)$ and every idempotent in $S(\Gamma)$ is
below exactly one idempotent in $T$. The 8 idempotents in
$S(\Gamma)$ below a given idempotent in $T$ form an $E$-cycle.
Thus the idempotents in $S(\Gamma)$ decompose into the disjoint
union of 4 $E$-cycles of length 8. Below we give a more geometric
definition of the semigroup $S$ which will help explain some of
these properties.
Finally, in figure \ref{singularization}, we give the precise
information on which idempotents in $T$ singularize squares in
$E(S(\Gamma))$. Again, all of this can be verified by direct
computation.
\begin{figure}
\caption{Singularization of $E$-squares}\label{singularization}
\end{figure}
The explanation of figure \ref{singularization} is as follows. An
entry in a square of the symbol of an idempotent from $T$
indicates that that idempotent singularizes the corresponding $2
\times 2$ rectangular set in $E(S)$. For example, the square,
$\left[
\begin{array}{cc}
(R_{1},L_{1}) & (R_{1},L_{3}) \\
(R_{4},L_{1}) & (R_{4},L_{3}) \\
\end{array}
\right]$, which is the square represented in the top left portion
of figure \ref{singularization}, is singularized (bottom to top) by
$f$ and (top to bottom) by $e$. The diligent reader can verify all
that we claim by direct computation in $E(S)$. In particular,
exactly the 16 squares that we desire to be singularized in
$S(\Gamma)$ are the ones singularized in $S$ and therefore the
free (regular) idempotent generated semigroup on the biordered set $E(S)$
has $Z \times Z$ as a maximal subgroup for the connected component
corresponding to $\Gamma$ as explained at the beginning of this
section. This completes our first description of $S$. We now give
a more geometric description of the semigroup $S$.
\subsection{Incidence structures and
affine geometry over $Z_2$}
In this subsection we show that the semigroup $S$ discussed above
arises from a combinatorial structure related to affine 3-space
over $Z_2$. We first recall some connections between incidence
structures in the sense of combinatorics and finite 0-simple
semigroups.
Up to now, we have used the tight connection between bipartite
graphs and 0-simple semigroups over the trivial group to build our
example. As is well known, $\{0,1\}$-matrices arise naturally to
code information about other combinatorial structures besides
bipartite graphs.
An {\em incidence system} is a pair $D=(V,\mathcal{B})$ where $V$ is
a (usually finite) set of points and $\mathcal{B}$ is a list of
subsets of $V$ called {\em blocks}. We allow for the possibility
that a block, that is a certain subset of $V$, can appear more than
once in the list $\mathcal{B}$. The {\em incidence matrix} of $D$ is
the $|\mathcal{B}| \times |V|$ matrix $I_D$ (we will use the
elements of $\mathcal{B}$ and $V$ to name rows and columns) such
that $I_{D}(b,v) = 1$ if $v \in b$ and 0 otherwise, where $b \in
\mathcal{B}$ and $v \in V$. Sometimes, the transpose of this matrix
is called the incidence matrix, but it is more convenient for our
purposes to define things this way.
The semigroup $S(D)$ associated with $D$ is the Rees matrix
semigroup $M^{0}(\mathcal{B},1,V,C)$ where $C$ is the transpose of
$I_{D}$. It is straightforward to see that $S(D)$ is 0-simple if
and only if the empty set is not a block and every point belongs
to some block. We make these assumptions throughout. Conversely,
it is easy to see that the transpose of the structure matrix of a
combinatorial completely 0-simple semigroup is an incidence system
with these two properties.
For example, if we consider the matrix in figure \ref{incidence}
as an incidence system, the points are $\{L_{1},\ldots,L_{8}\}$.
The blocks are
$R_{1}=\{L_{1},L_{2},L_{3},L_{4}\}$, $R_{2}=\{L_{1},L_{2},L_{6},L_{8}\}$, etc.
Now we show that this incidence system can be coordinatized as a
certain affine configuration over the field of order 2 and that the
semigroup $S$ can be faithfully represented by affine partial
functions that are ``continuous" with respect to this structure in
the sense of \cite{DM1, DM2, DM3}.
Let $F_2$ be the field of order 2 and let $V=F_2^{3}$ be 3-space
over $F_{2}$. Consider the set of planes through the origin (i.e.
2 dimensional subspaces of $V$) that do not contain the vector
$(1,1,1)$. An elementary counting argument shows that there are 4
such planes. We let $\mathcal{B}$ be the set of these 4 planes
plus their 4 translates by the vector (1,1,1). Therefore,
$\mathcal{B}$ has 8 elements. We claim that by suitably ordering
the points in $V$ and the planes in $\mathcal{B}$, the incidence
matrix of $(V,\mathcal{B})$ is the matrix in figure
\ref{incidence}. We do this by making the assignment of vectors to
the points $L_{1}, \ldots , L_{8}$ according to figure
\ref{points}.
\begin{figure}
\caption{The points of the structure}\label{points}
\end{figure}
With this identification of the $L_{i}$ as vectors in $V$, we have
the following way to identify the blocks of our structure. For
simplicity of presentation, we write $i$ in place of $L_{i}$ in
figure \ref{blocks}.
\begin{figure}
\caption{The blocks of the structure}\label{blocks}
\end{figure}
We can see from the preceding table that $R_{1},R_{2},R_{4},R_{7}$
are precisely the 4 planes through the origin in $V$ that do not
contain the vector $(1,1,1)$ and that
$R_{3}=R_{2}+(1,1,1),R_{5}=R_{7}+(1,1,1),R_{6}=R_{4}+(1,1,1),R_{8}=R_{1}+(1,1,1)$
are their translates.
Now we show that the semigroup $S$ defined in the previous
subsection also has a natural interpretation with respect to this
geometric structure. Let $V$ be a vector space over an arbitrary
field. An affine partial function on $V$ is a partial function
$f_{A,w}:V \rightarrow V$ of the form $vf=vA+w$, where $A:V
\rightarrow V$ is a partial linear transformation, that is, a
linear transformation whose domain is an affine subspace of $V$
and range an affine subspace of $V$ and $w \in V$. The collection
of all affine partial functions is a monoid $Aff(V)$. If we
identify $f_{A,w}$ with the pair $(A,w)$, then multiplication in
$Aff(V)$ takes the form $(A,w)(A',w')=(AA',wA'+w')$ so that
$Aff(V)$ is a semidirect product of the monoid of partial linear
transformations on $V$ with the additive group on $V$.
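The multiplication rule is verified by a one-line computation: for $v \in V$,
$$v(f_{A,w}f_{A',w'})=(vA+w)A'+w'=vAA'+(wA'+w'),$$
so $f_{A,w}f_{A',w'}=f_{AA',\,wA'+w'}$, which is the semidirect product multiplication just described.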
We claim that the idempotents $e$ and $k$ defined in the previous
section in defining our semigroup $S$ act as affine functions on
$F_{2}^3$ using our translation of our structure in this section.
Indeed, let $A=\left[
\begin{array}{ccc}
1 & 0 & 1 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right]$ considered as a matrix over $F_2$. Then it is easily
checked that for $1 \leq i \leq 8$, $ie=j$ if and only if
$v_{i}A=v_{j}$ where $v_{i}$ is the vector corresponding to
$L_{i}$ in the table above and that if $B=\left[
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 1 & 0 \\
1 & 1 & 1 \\
\end{array}
\right]$ and $w=(1,1,0)$, then for $1 \leq i \leq 8$, $ik=j$ if
and only if $v_{i}B + w =v_{j}$. Thus the completely simple
subsemigroup $T$ of our semigroup $S$ is faithfully represented by
affine functions over our geometric structure.
Furthermore, each element of $T$ has the following property with
respect to this structure: the inverse image of each plane in the
structure is also in the structure. For example, $R_{1}e^{-1}=
R_{4}, R_{2}e^{-1}= R_{2}, R_{3}e^{-1}=R_{3}, R_{4}e^{-1}=R_{4},
R_{5}e^{-1}=R_{3}, R_{6}e^{-1}=R_{6}, R_{7}e^{-1}=R_{2},
R_{8}e^{-1}=R_{6}$.
Each element $(R_{i},L_{j})$ is also represented as an affine
partial function, namely the partial function whose domain is
$R_{i}$ and sends all points in its domain to $L_{j}$. We can
represent this as an affine partial function by taking $A$ to be
the 0 linear transformation restricted to $R_{i}$ and $w$ to be
$L_j$. Clearly, the inverse image of a block $R$ under this
function is either $R_{i}$ if $L_{j} \in R$ and the empty set
otherwise.
Notice also that for every element of $S$ the closure of blocks
under inverse image encodes left multiplication by $e$ in the
biordered set $E(S)$. For example,
$e(R_{1},L_{1})=(R_{1}e^{-1},L_{1})=(R_{4},L_{1})$,
$(R_{1},L_{1})(R_{3},L_{1})=0$, etc.
Thus, there is an analogue of the action of the partial functions
on our structure to continuous functions on a topological space.
If we consider the blocks of our structure to be ``open", then our
functions preserve open sets under inverse image. The notion of
continuous partial functions on combinatorial structures and its
relationship to the semigroup theoretic notion of translational
hull \cite{CP} has been explored in \cite{DM1,DM2,DM3}. We see
here that there is a close connection between building biordered
sets with a specific connected component and the continuous
partial functions on the corresponding 0-simple semigroup. We will
explore this connection in future work.
\section{Summary and future directions}
We have shown how to represent the maximal subgroups of the free
(regular) idempotent generated semigroup on a regular biordered set
by a 2-complex derived from Nambooripad's \cite{Namb} work. By
applying the Bass-Serre techniques of \cite{HMM}, we are directly
led to the graph defined by Graham and Houghton for finite 0-simple
semigroups \cite{Gr, Hough}. We put a structure of 2-complex on this
graph and use that to construct an example of a finite regular
biordered set that has a maximal subgroup that is isomorphic to the
free abelian group of rank 2. This is the first example of a
non-free group that appears in a free idempotent generated
semigroup.
The biordered set arises from a certain combinatorial structure
defined on a 3 dimensional vector space over the field of order 2.
This suggests looking for further examples by either varying the
field and looking at analogous structures over 3 dimensional
spaces or by looking at higher dimensional analogues of the
structure we have defined.
In related work we have proved, using completely different
techniques, that if $F$ is any field, and $E_{3}(F)$ is the
biordered set of the monoid of $3 \times 3$ matrices over $F$, then
the free idempotent generated semigroup over $E_{3}(F)$ has a
maximal subgroup isomorphic to the multiplicative subgroup of $F$.
In particular, finite cyclic groups of order $p^{n}-1$, $p$ a prime
number, appear as maximal subgroups of free idempotent generated
semigroups.
This last example motivates an intended application of this work. We
would like to apply Nambooripad's powerful theory of inductive
groupoids \cite{Namb} to study reductive linear algebraic monoids
\cite{Putchbook}. This very important class of regular monoids and
their finite analogues have been intensively studied over the last
25 years. A basic example is the monoid of all matrices over a
field.
The above discussion raises the question of describing the class of
groups that are maximal subgroups of $IG(E)$ or $RIG(E)$ for a
biordered set $E$. This seems to be a very difficult question at
this time.
\end{document}
\begin{document}
\title{The Minimum Size of Signed Sumsets \\[.4in]}
\begin{abstract}
For a finite abelian group $G$ and positive integers $m$ and $h$, we let
$$\rho(G, m, h) = \min \{ |hA| \; : \; A \subseteq G, |A|=m\}$$ and
$$\rho_{\pm} (G, m, h) = \min \{ |h_{\pm} A| \; : \; A \subseteq G, |A|=m\},$$ where $hA$ and $h_{\pm} A$ denote the $h$-fold sumset and the $h$-fold signed sumset of $A$, respectively. The study of $\rho(G, m, h)$ has a 200-year-old history and is now known for all $G$, $m$, and $h$. Here we prove that $\rho_{\pm}(G, m, h)$ equals $\rho (G, m, h)$ when $G$ is cyclic, and establish an upper bound for $\rho_{\pm} (G, m, h)$ that we believe gives the exact value for all $G$, $m$, and $h$.
\end{abstract}
\noindent 2010 AMS Mathematics Subject Classification: \\ Primary: 11B75; \\ Secondary: 05D99, 11B25, 11P70, 20K01.
\noindent Key words and phrases: \\ abelian groups, sumsets, Cauchy--Davenport Theorem.
\thispagestyle{empty}
\section{Introduction}
Let $G$ be a finite abelian group written with additive notation. For a nonnegative integer $h$ and a nonempty subset $A$ of $G$, we let $hA$ and $h_{\pm}A$ denote the $h$-fold {\em sumset} and the $h$-fold {\em signed sumset} of $A$, respectively; that is, for an $m$-subset $A=\{a_1, \dots, a_m\}$ of $G$, we let
$$hA=\{ \Sigma_{i=1}^m \lambda_i a_i \; : \; (\lambda_1,\dots,\lambda_m) \in \mathbb{N}_0^m, \; \Sigma_{i=1}^m \lambda_i=h\}$$ and
$$h_{\pm} A=\{ \Sigma_{i=1}^m \lambda_i a_i \; : \; (\lambda_1,\dots,\lambda_m) \in \mathbb{Z}^m, \; \Sigma_{i=1}^m |\lambda_i|=h\}.$$
While signed sumsets are less well-studied in the literature than sumsets are, they come up naturally: For example, in \cite{BajRuz:2003a}, the first author and Ruzsa investigated the {\em independence number} of a subset $A$ of $G$, defined as the maximum value of $t \in \mathbb{N}$ for which $$0 \not \in \cup_{h=1}^t h_{\pm}A$$ (see also \cite{Baj:2000a} and \cite{Baj:2004a}); and in \cite{KloLev:2003a}, Klopsch and Lev discussed the {\em diameter} of $G$ with respect to $A$, defined as the minimum value of $s \in \mathbb{N}$ for which $$\cup_{h=0}^s h_{\pm}A=G$$ (see also \cite{KloLev:2009a}). The independence number of $A$ in $G$ quantifies the ``degree'' to which $A$ is linearly independent in $G$ (no subset is ``completely'' independent), while the diameter of $G$ with respect to $A$ measures how ``effectively'' $A$ generates $G$ (if at all). Note that $h_{\pm}A$ is always contained in $h(A \cup -A)$, but this may be a proper containment when $h \geq 2$.
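A minimal example of this proper containment: if $A=\{a\}$ with $2a \neq 0$, then $2_{\pm}A=\{2a,-2a\}$, while $2(A \cup -A)=\{2a,0,-2a\}$ also contains $0=a+(-a)$.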
For a positive integer $m \leq |G|$, we
let $$\rho(G, m, h) = \min \{ |hA| \; : \; A \subseteq G, |A|=m\}$$ and
$$\rho_{\pm}(G, m, h) = \min \{ |h_{\pm}A| \; : \; A \subseteq G, |A|=m\}$$ (as usual, $|S|$ denotes the size of the finite set $S$).
The value of $\rho(G, m, h)$ has a long and distinguished history and has been determined for all $G$, $m$, and $h$; in this paper we attempt to find $\rho_{\pm}(G, m, h)$.
We start with a brief review of the case of sumsets. In 1813, for prime values of $p$, Cauchy \cite{Cau:1813a} found the minimum possible size of
$$A+B=\{a+b \; : \; a \in A,\; b \in B \}$$ among subsets $A$ and $B$ of given sizes in the cyclic group $\mathbb{Z}_p$. In 1935, Davenport \cite{Dav:1935a} rediscovered Cauchy's result, which is now known as the Cauchy--Davenport Theorem. (Davenport was unaware of Cauchy's work until twelve years later; see \cite{Dav:1947a}.)
\begin{thm}[Cauchy--Davenport Theorem] \label{Cauchy--Davenport}
If $A$ and $B$ are nonempty subsets of the group $\mathbb{Z}_p$ of prime order $p$, then
$$|A+B| \geq \min \{p, |A|+|B|-1\}.$$
\end{thm}
It can easily be seen that the bound is tight for all values of $|A|$ and $|B|$, and thus
$$ \rho (\mathbb{Z}_p, m, 2)=\min\{p,2m-1\}.$$
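(For instance, the arithmetic progression $A=B=\{0,1,\dots,m-1\}$ satisfies $A+B=\{0,1,\dots,2m-2\}$, a set of size exactly $\min\{p,2m-1\}$.)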
After various partial results, the general case was finally solved in 2006 by Plagne \cite{Pla:2006a} (see also \cite{Pla:2003a}, \cite{EliKer:2007a}, and \cite{EliKerPla:2003a}). To state the result, we introduce the function
$$u(n,m,h)=\min \{f_d (m,h) \; : \; d \in D(n)\},$$ where $n$, $m$, and $h$ are positive integers, $D(n)$ is the set of positive divisors of $n$, and
$$f_d(m,h)=\left(h\left \lceil m/d \right \rceil-h +1 \right) \cdot d.$$
(Here $u(n,m,h)$ is a relative of the Hopf--Stiefel function used also in topology and bilinear algebra; see, for example, \cite{EliKer:2005a}, \cite{Kar:2006a}, \cite{Pla:2003a}, and \cite{Sha:1984a}.)
\begin{thm} [Plagne; cf.~\cite{Pla:2006a}] \label{value of u}
Let $n$, $m$, and $h$ be positive integers with $m \leq n$. For any abelian group $G$ of order $n$ we have
$$\rho (G, m, h)=u(n,m,h).$$
\end{thm}
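As a quick illustration of Theorem \ref{value of u}, let $n=12$, $m=5$, and $h=2$: the values of $f_d(5,2)$ for $d=1,2,3,4,6,12$ are $9, 10, 9, 12, 6, 12$, respectively, so $\rho(G,5,2)=u(12,5,2)=6$ for every abelian group $G$ of order 12. In $\mathbb{Z}_{12}$, for example, this minimum is attained by $A=\{0,2,4,6,8\}$, whose 2-fold sumset is the subgroup of order 6.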
Turning now to $\rho_{\pm} (G, m, h)$, we start by observing that $$\rho_{\pm} (G,m,0)=1$$ and $$\rho_{\pm} (G,m,1)=m$$ for all $G$ and $m$. To see the latter equality, it suffices to verify that one can always find a {\em symmetric} subset of size $m$ in $G$, that is, an $m$-subset $A$ of $G$ for which $A=-A$. Therefore, from now on, we assume that $h \geq 2$.
We must admit that our study of $\rho_{\pm} (G, m, h)$ resulted in quite a few surprises. For a start, we noticed that, in spite of the fact that $h_{\pm}A$ is usually much larger than $hA$ is, the equality $$\rho_{\pm} (G, m,h)=\rho (G, m,h)$$ holds quite often; it is an easy exercise to verify that, among groups of order 24 or less, equality holds with only one exception: $\rho_{\pm} (\mathbb{Z}_3^2, 4,2)=8$ while $\rho (\mathbb{Z}_3^2, 4,2)=7.$ In fact, we can prove that $\rho_{\pm} (G, m,h)$ agrees with $\rho (G, m,h)$ for all cyclic groups $G$ and all $m$ and $h$ (see Theorem \ref{cyclic} below).
However, in contrast to $\rho (G, m,h)$, the value of $\rho_{\pm} (G, m,h)$ depends on the structure of $G$ rather than just the order $n$ of $G$. Suppose that $G$ is of type $(n_1,\dots,n_r)$, that is,
$$G \cong \mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_r},$$ where $n_1 \geq 2$ and $n_i$ divides $n_{i+1}$ for each $i \in \{1,\dots, r-1\}$. We exhibit a specific subset $D(G,m)$ of $D(n)$ with which the quantity
$$u_{\pm}(G,m,h)= \min \{f_d(m,h) \; : \; d \in D(G,m) \}$$ provides an upper bound for $\rho_{\pm} (G, m,h)$ (see Theorem \ref{u pm with f} below). Therefore, to get lower and upper bounds for $\rho_{\pm} (G, m,h)$, we minimize $f_d(m,h)$ for all $d \in D(n)$ and for $d \in D(G,m)$, respectively:
$$\min\{ f_d(m,h) \; : \; d \in D(n)\} \leq \rho_{\pm} (G, m,h) \leq \min \{f_d(m,h) \; : \; d \in D(G,m) \}.$$
In fact, we also conjecture that $$\rho_{\pm} (G, m,h)=u_{\pm}(G,m,h)$$ holds in all but one very special situation (see Conjecture \ref{conj for rho pm} below).
Further surprises come from the inverse problem of trying to classify subsets that yield the minimum signed sumset size. To start with, we point out that it is not always symmetric sets that work best. As an example, consider $\rho_{\pm} (\mathbb{Z}_5^2, 9, 2)$.
One can see that if $A$ consists of any 9 elements of $(a+H) \cup (-a+H)$, where $H$ is any subgroup of size 5 and $a \not \in H$, then $$2_{\pm}A=H \cup (\pm 2a +H),$$ so $$\rho_{\pm} (\mathbb{Z}_5^2, 9, 2)=\rho(\mathbb{Z}_5^2, 9, 2)=15.$$ Here $A$ is not symmetric but is {\em near-symmetric}: it becomes symmetric once one of its elements is removed. However, we can verify that for any symmetric subset $A$ of size 9, $2_{\pm}A$ must have size 17 or more, as follows:
If $A$ contains a subgroup $H$ of size 5, then with any $a \in A \setminus H$, the 2-fold signed sumset of $A$ will contain the 17 distinct elements of $H$, $\pm a+H$, and $ \{\pm 2a\};$ while if $A$ contains no subgroup of size 5, then $$A \cap \{2a \; : \; a \in A\} = \{0\},$$ so $$|2_{\pm}A| \geq |A|+|\{2a \; : \; a \in A\}|-1=17.$$
And that's not all: sometimes it is best to take an {\em asymmetric} set, a set $A$ where $A$ and $-A$ are disjoint. It is easy to check that, in the example of $\rho_{\pm}(\mathbb{Z}_3^2,4,2)=8$ mentioned above, with a 4-subset $A$ of $\mathbb{Z}_3^2$ we get $2_{\pm}A=\mathbb{Z}_3^2 \setminus \{0\}$ when $A$ is asymmetric, and $2_{\pm}A=\mathbb{Z}_3^2$ in all other cases.
We have thus seen that sets that minimize signed sumset size may be symmetric, near-symmetric, or asymmetric---we can prove, however, that there is always a set that is of one of these three types
(see Theorem \ref{symmetry thm} below).
With this paper we aim to introduce the question of finding the minimum size of signed sumsets. Our approach here is entirely elementary. In the follow-up paper \cite{BajMat:2014b}, we investigate the question in elementary abelian groups, where, using deeper results from additive combinatorics, we are able to assert more.
\section{The role of symmetry}
Given a group $G$ and a positive integer $m \leq |G|$, we define a certain collection ${\cal A}(G,m)$ of $m$-subsets of $G$.
We let
\begin{itemize}
\item $\mathrm{Sym}(G,m)$ be the collection of {\em symmetric} $m$-subsets of $G$, that is, $m$-subsets $A$ of $G$ for which $A=-A$;
\item $\mathrm{Nsym}(G,m)$ be the collection of {\em near-symmetric} $m$-subsets of $G$, that is, $m$-subsets $A$ of $G$ that are not symmetric, but for which $A\setminus \{a\}$ is symmetric for some $a \in A$;
\item $\mathrm{Asym}(G,m)$ be the collection of {\em asymmetric} $m$-subsets of $G$, that is, $m$-subsets $A$ of $G$ for which $A \cap (-A)=\emptyset$.
\end{itemize} We then let
$${\cal A}(G,m)=\mathrm{Sym}(G,m) \cup \mathrm{Nsym}(G,m)\cup \mathrm{Asym}(G,m).$$ In other words, ${\cal A}(G,m)$ consists of those $m$-subsets of $G$ that have exactly $m$, $m-1$, or $0$ elements whose inverse is also in the set.
\begin{thm} \label{symmetry thm}
For every $G$, $m$, and $h$, we have
$$\rho_{\pm} (G,m,h)= \min \{|h_{\pm} A| \; : \; A \in {\cal A}(G,m)\}.$$
\end{thm}
{\em Proof:} Since our claim is trivial when $m \leq 2$, we assume that $m \geq 3$.
For a subset $S$ of $G$, let us define its {\em degree of symmetry}, denoted by $\mathrm{sdeg}(S)$, as the number of elements of $S$ that are also elements of $-S$. We shall prove that for any $m$-subset $B$ of $G$ with $$1 \leq \mathrm{sdeg}(B) \leq m-2,$$ there is an $m$-subset $B'$ of $G$ with $$\mathrm{sdeg}(B') = \mathrm{sdeg}(B)+2$$ and $|h_{\pm} B'| \leq |h_{\pm} B|$; repeated application of this results in a subset $A \in {\cal A}(G,m)$ with $|h_{\pm} A| \leq |h_{\pm} B|$, from which our result follows.
Let $$B=\{b_1,b_2,b_3,\dots,b_m\}$$ be an $m$-subset of $G$, and suppose that $-b_1 \not \in B$, $-b_2 \not \in B$, but $-b_3 \in B$. Note that we may have $b_3=-b_3$; furthermore, the sets $\{\pm b_1\}$, $\{\pm b_2\}$, and $\{\pm b_3\}$ are pairwise disjoint. Replacing $b_1$ by $-b_2$ in $B$, we let
$$B'=\{-b_2, b_2, b_3, \dots, b_m\}.$$ Then $B'$ has size $m$, and its degree of symmetry is exactly two more than that of $B$; we need to show that $|h_{\pm} B'| \leq |h_{\pm} B|$. We shall, in fact, show that $h_{\pm} B' \subseteq h_{\pm} B$.
By definition, $h_{\pm} B'$ is the collection of all elements of the form
$$g=\lambda_1(-b_2)+\lambda_2b_2+\lambda_3b_3+\cdots+\lambda_mb_m $$ where $\sum_{i=1}^m |\lambda_i|=h$. Clearly, if $\lambda_1$ and $\lambda_2$ are of opposite sign or either one is zero, then $$|\lambda_2-\lambda_1|=|\lambda_1|+|\lambda_2|,$$ so
$$g=(\lambda_2-\lambda_1)b_2+\lambda_3b_3+\cdots+\lambda_mb_m \in h_{\pm} B.$$
Suppose now that $\lambda_1$ and $\lambda_2$ are both positive; the case when they are both negative can be handled similarly. Furthermore, we assume that $\lambda_2 \geq \lambda_1$; again, the reverse case is analogous.
Assume first that $2b_3=0$; in this case we have $\lambda_3b_3=-\lambda_3b_3$, and thus we may assume that $\lambda_3 \geq 0$. Observe that
$$g=(\lambda_2-\lambda_1)b_2+(2\lambda_1+\lambda_3)b_3+\lambda_4b_4+\cdots+\lambda_mb_m,$$ and
$$ |\lambda_2-\lambda_1|+|2\lambda_1+\lambda_3|+|\lambda_4|+\cdots+|\lambda_m|=h,$$
thus $g \in h_{\pm} B.$
Finally, suppose that $2b_3 \neq 0$; since $-b_3 \in B$, we must have $m \geq 4$, and without loss of generality we can assume that $b_4=-b_3$. We can rewrite $g$ as follows:
$$g=\left\{
\begin{array}{ll}
(\lambda_2-\lambda_1)b_2+(\lambda_1+\lambda_3)b_3+(\lambda_1+\lambda_4)(-b_3)+\lambda_5b_5+\cdots+\lambda_mb_m & \mbox{if} \; \lambda_3 \geq 0, \lambda_4 \geq 0; \\ \\
(\lambda_2-\lambda_1)b_2+(\lambda_1+\lambda_3-\lambda_4)b_3+\lambda_1(-b_3)+\lambda_5b_5+\cdots+\lambda_mb_m & \mbox{if} \; \lambda_3 \geq 0, \lambda_4 \leq 0; \\ \\
(\lambda_2-\lambda_1)b_2+\lambda_1b_3+(\lambda_1-\lambda_3+\lambda_4)(-b_3)+\lambda_5b_5+\cdots+\lambda_mb_m & \mbox{if} \; \lambda_3 \leq 0, \lambda_4 \geq 0; \\ \\
(\lambda_2-\lambda_1)b_2+(\lambda_1-\lambda_4)b_3+(\lambda_1-\lambda_3)(-b_3)+\lambda_5b_5+\cdots+\lambda_mb_m & \mbox{if} \; \lambda_3 \leq 0, \lambda_4 \leq 0.
\end{array}
\right.$$
Since the expressions above show that $g \in h_{\pm} B$ in each case, our proof is complete. $\Box$
\section{Cyclic groups}
In this section we prove that, when $G$ is cyclic, then $\rho_{\pm} (G, m, h)$ agrees with $\rho (G, m, h)$ for all $m$ and $h$.
\begin{thm} \label{cyclic} For all positive integers $n$, $m$, and $h$, we have
$$\rho_{\pm} (\mathbb{Z}_n, m, h)= \rho (\mathbb{Z}_n, m, h).$$
\end{thm}
{\em Proof:} Since the reverse inequality is obvious, it suffices to prove that $$\rho_{\pm} (\mathbb{Z}_n, m, h) \leq \rho (\mathbb{Z}_n, m, h).$$ Recall that $$\rho (\mathbb{Z}_n, m, h)=\min\{f_d (m,h) \; : \; d \in D(n)\}.$$ Observe that, for any symmetric subset $R$ of $G$ (that is, for every subset $R$ for which $R=-R$), we have $h_{\pm} R=hR$.
Our strategy is to find, for each $d \in D(n)$, a symmetric subset $R=R_d(n,m)$ of $\mathbb{Z}_n$ so that $|R| \geq m$ and $|hR| \leq f_d$; this will then imply that
$$\rho_{\pm} (\mathbb{Z}_n, m, h) \leq \min\{f_d (m,h) \; : \; d \in D(n)\}=\rho (\mathbb{Z}_n, m, h).$$
We introduce some notations. We write $n=2^an_0$, $d=2^bd_0$, and $\left \lceil m/d \right \rceil =2^cm_0$, where $a$, $b$, and $c$ are nonnegative integers and $n_0$, $d_0$, and $m_0$ are odd positive integers. Our explicit construction of $R$ depends on whether $b+c \leq a$ or not.
Suppose first that $b+c \leq a$. In this case, let $H$ be the subgroup of $G$ that has order $2^{c}d$, and set
$$R=\bigcup_{i=-\left \lfloor m_0/2 \right \rfloor}^{\left \lfloor m_0/2 \right \rfloor} (i+H).$$ Clearly, $R$ is symmetric;
to see that $R$ has size at least $m$, note that for the index of $H$ in $G$ we have
$$|G:H|=n/(2^cd) \geq \left \lceil m/d \right \rceil/2^c =m_0=2 \left \lfloor m_0/2 \right \rfloor+1,$$hence $$|R|=\left( 2 \left \lfloor m_0/2 \right \rfloor+1 \right) \cdot |H|=d \left \lceil m/d \right \rceil \geq m.$$
To verify that $|hR| \leq f_d$, note that
$$hR=\bigcup_{i=-h\left \lfloor m_0/2 \right \rfloor}^{h\left \lfloor m_0/2 \right \rfloor} (i+H),$$ so
\begin{eqnarray*}
|hR|&=& \min\{n,\left( 2h\left \lfloor m_0/2 \right \rfloor+1 \right) \cdot |H|\} \\
& \leq & \left( 2h\left \lfloor m_0/2 \right \rfloor+1 \right) \cdot |H| \\
&=&(hm_0-h+1) \cdot 2^c d \\
&\leq &(2^c h m_0 -h+1)d \\
& = & f_d.
\end{eqnarray*}
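For a concrete instance of this construction, take $n=12$, $m=5$, $h=2$, and $d=2$; then $a=2$, $b=1$, $c=0$, and $m_0=3$, so we are in the case $b+c \leq a$. Here $H=\{0,6\}$ and
$$R=(-1+H) \cup H \cup (1+H)=\{0, \pm 1, 6, 6 \pm 1\},$$
a symmetric set of size $6=d \left \lceil m/d \right \rceil \geq 5$ with $|hR|=|2R|=10=f_2(5,2)$.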
In the case when $b+c \geq a+1$, we let $H$ be the subgroup of $G$ that has order $2^ad_0$, and set
$$R=\bigcup_{i=1}^{2^{b+c-a-1}m_0} \left(\left \lfloor e/2 \right \rfloor +i +H \right) \cup \left(-\left \lfloor e/2 \right \rfloor -i +H\right),$$ where $e=n_0/d_0$. We see that $R$ is symmetric; in order to estimate $|R|$ and $|hR|$, we rewrite $R$ as follows.
Note that $e$ is an odd integer, and thus
$$ -\left \lfloor e/2 \right \rfloor = \left \lfloor e/2 \right \rfloor +1 - e;$$ furthermore, $e=n/|H|$ and thus $e$ is an element of $H$, and so
$$-\left \lfloor e/2 \right \rfloor -i+H= \left \lfloor e/2 \right \rfloor +1-i+H$$ for every integer $i$.
With this, we have
$$R=\bigcup_{i=-2^{b+c-a-1}m_0+1}^{2^{b+c-a-1}m_0} \left(\left \lfloor e/2 \right \rfloor +i +H \right).$$
To show that $R$ has size at least $m$, we see that, for the index of $H$ in $G$, we have
$$|G:H|=n/(2^ad_0) = 2^{b-a}n/d \geq 2^{b-a} \left \lceil m/d \right \rceil = 2^{b+c-a}m_0,$$hence $$|R|=\left( 2^{b+c-a}m_0 \right) \cdot |H|=d \left \lceil m/d \right \rceil \geq m.$$
Finally, $$hR=\bigcup_{i=-2^{b+c-a-1}hm_0+h}^{2^{b+c-a-1}hm_0} \left(H+ h \left \lfloor e/2 \right \rfloor +i \right),$$
so for $|hR|$ we get
\begin{eqnarray*}
|hR|&=& \min\{n,\left( 2^{b+c-a}hm_0-h+1 \right) \cdot |H|\} \\
& \leq & \left( 2^{b+c-a}hm_0-h+1 \right) \cdot |H| \\
&=&\left( 2^{b+c-a}hm_0-h+1 \right) \cdot 2^ad_0 \\
&\leq &(2^c h m_0 -h+1)d \\
& = & f_d,
\end{eqnarray*}
with which our proof is complete. $\Box$
\section{Noncyclic groups}
Let us now turn to noncyclic groups. We say that a finite abelian group $G$ has type $(n_1,\dots,n_r)$ if it is isomorphic to the direct product
$$\mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_r},$$ where $n_1 \geq 2$ and $n_i$ divides $n_{i+1}$ for each $i \in \{1,\dots, r-1\}$. Here $r$ is the rank of $G$, $n_r$ is the exponent of $G$, and we still use the notation $n=\Pi_{i=1}^r n_i$ for the order of $G$.
Recall that for the minimum size of the $h$-fold sumset of an $m$-subset of a group of order $n$ we have
$$\rho(G,m,h)=\min\{f_d (m,h) \; : \; d \in D(n)\}.$$ This, of course, implies that for signed sumsets we have the lower bound
$$\rho_{\pm} (G, m,h) \geq \min \{f_d (m,h) \; : \; d \in D(n) \}.$$
It turns out that we can get an upper bound for $\rho_{\pm} (G, m,h)$ by minimizing $f_d$ for a certain subset of $D(n)$;
more precisely, we establish the following result:
\begin{thm} \label{u pm with f} The minimum size of the $h$-fold signed sumset of an $m$-subset of a group $G$ of type $(n_1,\dots,n_r)$ satisfies
$$\rho_{\pm} (G, m,h) \leq \min \{f_d (m,h) \; : \; d \in D(G,m) \},$$
where $$D(G,m)=\{d \in D(n) \; : \; d= d_1 \cdots d_r, d_1 \in D(n_1), \dots, d_r \in D(n_r), dn_r \geq d_rm \}.$$
\end{thm}
Observe that, for cyclic groups of order $n$, $D(G,m)$ is simply $D(n)$.
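For example, let $G=\mathbb{Z}_3^2$ and $m=4$, so $n_1=n_2=3$ and $n=9$. The divisor $d=1$ does not lie in $D(G,4)$, since then $d_1=d_2=1$ and $dn_2=3<4=d_2m$, while $d=3$ (with $d_1=3$ and $d_2=1$) and $d=9$ satisfy the condition; hence $D(\mathbb{Z}_3^2,4)=\{3,9\}$, and Theorem \ref{u pm with f} gives the upper bound $\min\{f_3(4,2), f_9(4,2)\}=9$. The lower bound coming from $D(9)$ is $f_1(4,2)=7$, and, as noted in the Introduction, the true value is $\rho_{\pm}(\mathbb{Z}_3^2,4,2)=8$, so neither bound is tight in this case; compare Conjecture \ref{conj for rho pm} below, where the corresponding term $d_m-1$ equals 8.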
Theorem \ref{u pm with f} will be the immediate consequence of Propositions \ref{u pm is upper} and \ref{u pm with f prop} below.
\begin{prop} \label{u pm is upper}
For every group $G$ of type $(n_1,\dots,n_r)$ and order $n$, $m \leq n$, and $h \in \mathbb{N}$ we have
$$\rho_{\pm} \left(G, m, h \right) \leq u_{\pm} (G, m,h),$$ where
$$u_{\pm} (G, m,h)= \min \left \{ \Pi_{i=1}^r u(n_i,m_i,h) \; : \; m_1 \leq n_1, \dots, m_r \leq n_r, \Pi_{i=1}^r m_i \geq m \right\}.$$
\end{prop}
{\em Proof:} For each $i=1,2,\dots,r$, let $m_i$ be an integer so that $m_i \leq n_i$ and $m_1 \cdots m_r \geq m$. By Theorem \ref{cyclic}, for each $i$ we can find a symmetric set $A_i \subseteq \mathbb{Z}_{n_i}$ of size at least $m_i$ for which
$$|h_{\pm}A_i|=|hA_i|=u(n_i,m_i,h).$$ Therefore, $A_1 \times \cdots \times A_r$ is a symmetric subset of $\mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_r}$ of size at least $m_1 \cdots m_r$, so we have
\begin{eqnarray*}
\rho_{\pm} \left(\mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_r}, m, h \right) &\leq & \rho_{\pm} \left(\mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_r}, m_1 \cdots m_r, h \right) \\
& \leq & |h_{\pm} (A_1 \times \cdots \times A_r)| \\
& = & |h (A_1 \times \cdots \times A_r)| \\
& \leq & |h A_1 \times \cdots \times h A_r| \\
& = & u(n_1,m_1,h) \cdots u(n_r,m_r,h),
\end{eqnarray*} as claimed. $\Box$
\begin{prop} \label{u pm with f prop} With the notations as introduced above, we have
$$u_{\pm} (G, m,h) = \min \{f_d (m,h) \; : \; d \in D(G,m) \}.$$
\end{prop}
{\em Proof:} First, we prove that
$$u_{\pm} (G, m,h) \leq \min \{f_d (m,h) \; : \; d \in D(G,m) \}.$$
Suppose that $d_1, \dots, d_r$ are positive integers so that $d_1 \in D(n_1), \dots, d_r \in D(n_r),$ and $d n_r \geq d_r m$. Let $m_1=d_1, \dots, m_{r-1}=d_{r-1}$, and $m_r=\lceil d_r m/d \rceil$. By assumption, $m_i \leq n_i$ for all $1 \leq i \leq r$, and we also have $m_1 \cdots m_r \geq m$; we will establish our claim by showing that $$u_{\pm} (G, m,h) \leq f_{d}(m, h).$$
Observe that, for each $1 \leq i \leq r-1$,
$$f_{d_i}(m_i,h)=f_{d_i}(d_i,h)=\left( h \left \lceil d_i/d_i \right \rceil -h+1 \right) d_i=d_i,$$
and
$$f_{d_r} (m_r,h)=f_{d_r} (\left \lceil d_r m/d \right \rceil,h)=\left( h \left \lceil \left \lceil d_rm/d \right \rceil /d_r \right \rceil -h+1 \right) d_r,$$ which, according to an identity for the ceiling function, equals
$$\left( h \left \lceil m/d \right \rceil -h+1 \right) d_r.$$
Therefore, $$f_{d_1}(m_1,h) \cdots f_{d_r} (m_r,h) = \left( h \left \lceil m/d \right \rceil -h+1 \right) d=f_{d}(m, h).$$ Our claim now follows, since
$$u_{\pm} (G, m,h) \leq u(n_1,m_1,h) \cdots u(n_r,m_r,h) \leq f_{d_1}(m_1,h) \cdots f_{d_r} (m_r,h).$$
Conversely, we need to prove that
\begin{eqnarray} \label{upper for r}
u_{\pm} (G, m,h) \geq \min \{f_d (m,h) \; : \; d \in D(G,m) \}.
\end{eqnarray} As we have already mentioned, this holds for cyclic groups. We will now prove that the inequality also holds for $r=2$; that is, for a group of type $(n_1,n_2)$ we have
\begin{eqnarray} \label{upper for r=2}
u_{\pm} (G, m,h) \geq \min \{f_{d_1d_2}(m,h) \; : \; d_1 \in D(n_1), d_2 \in D(n_2), d_1 n_2 \geq m \}.
\end{eqnarray}
Suppose that positive integers $m_1$ and $m_2$ are selected so that $m_1 \leq n_1$, $m_2 \leq n_2$, $m_1 m_2 \geq m$, and
$$u_{\pm} (G, m,h)=u(n_1,m_1,h) \cdot u(n_2,m_2,h);$$ furthermore, suppose that integers $\delta_1$ and $\delta_2$ are chosen so that $\delta_1 \in D(n_1)$, $\delta_2 \in D(n_2)$, $u(n_1,m_1,h)=f_{\delta_1}(m_1,h)$, and $u(n_2,m_2,h)=f_{\delta_2}(m_2,h)$. We need to prove that there are integers $d_1$ and $d_2$, so that $d_1 \in D(n_1)$, $d_2 \in D(n_2)$, $d_1 n_2 \geq m$, and
\begin{eqnarray} \label{f_d_1d_2 leq f_delta1f_delta2} f_{d_1d_2}(m,h) &\leq& f_{\delta_1}(m_1,h) \cdot f_{\delta_2}(m_2,h).\end{eqnarray}
We will separate two cases depending on whether $\delta_1 n_2 \geq m$ or not.
In the case when $\delta_1 n_2 \geq m$, we show that $d_1=\delta_1$ and $d_2=\delta_2$ are appropriate choices. Clearly, $d_1 \in D(n_1)$, $d_2 \in D(n_2)$, and $d_1 n_2 \geq m$, so we just need to show that $$f_{d_1d_2}(m,h) \leq f_{d_1}(m_1,h) \cdot f_{d_{2}}(m_2,h).$$ Since $m \leq m_1m_2$ and the function $f$ is nondecreasing in $m$, it suffices to prove that
$$f_{d_1d_2}(m_1m_2,h) \leq f_{d_1}(m_1,h) \cdot f_{d_{2}}(m_2,h),$$ or, equivalently, that
$$h \left \lceil (m_1m_2)/(d_1 d_2) \right \rceil -h+1 \leq \left( h \left \lceil m_1/d_1 \right \rceil -h+1 \right) \cdot \left( h \left \lceil m_2/d_2 \right \rceil -h+1 \right).$$
Note that $$\lceil (m_1m_2)/(d_1 d_2) \rceil \leq \lceil m_1/d_1 \rceil \cdot \lceil m_2/d_2 \rceil,$$ so our inequality will follow once we prove that
$$h \left \lceil m_1/d_1 \right \rceil \cdot \left \lceil m_2/d_2 \right \rceil -h+1 \leq \left( h \left \lceil m_1/d_1 \right \rceil -h+1 \right) \cdot \left( h \left \lceil m_2/d_2 \right \rceil -h+1 \right).$$ This indeed holds, as subtracting the left-hand side from the right-hand side yields
$$h(h-1) \left( \left \lceil m_1/d_1 \right \rceil -1 \right) \left( \left \lceil m_2/d_2 \right \rceil -1 \right),$$ which is clearly nonnegative.
Suppose now that $\delta_1 n_2 < m$; we consider two subcases: when $m_2 \leq \delta_2$ and when $m_2 > \delta_2$.
When $\delta_1 n_2 < m$ and $m_2 \leq \delta_2$, we set $d_1=\gcd(n_1,\delta_2)$ and $d_2=\delta_1\delta_2/\gcd(n_1,\delta_2)$. Then, clearly, $d_1 \in D(n_1)$; to see that $d_2 \in D(n_2)$, note that $n_1/d_1$ and $\delta_2/d_1$ are relatively prime integers that both divide $n_2/d_1$, so their product $n_1\delta_2/d_1^2$ divides $n_2/d_1$ as well, and therefore $n_1\delta_2/d_1$, and thus its divisor $d_2$, divide $n_2$. Furthermore, since $n_1\delta_2/d_1$ divides $n_2$, we have $$d_1n_2 \geq n_1\delta_2 \geq m_1 m_2 \geq m.$$ It remains to be shown that (\ref{f_d_1d_2 leq f_delta1f_delta2}) holds, but since $d_1d_2=\delta_1\delta_2$, this follows as in the previous case.
Finally, suppose that $\delta_1 n_2 < m$ and $m_2 > \delta_2$; we now set $d_1=n_1$ and $d_2=\delta_1n_2/n_1$. We see that $d_1 \in D(n_1)$, $d_2 \in D(n_2)$, and $d_1n_2 \geq m$; we need to show that (\ref{f_d_1d_2 leq f_delta1f_delta2}) holds.
Let us denote $\left \lceil m_1/\delta_1 \right \rceil$ and $\left \lceil m_2/\delta_2 \right \rceil$ by $k_1$ and $k_2$, respectively; note that $m_2 > \delta_2$ implies that $k_2 \geq 2$, and $\delta_1 n_2 < m$ implies that $k_1 \geq 2$ as well, since
$$m_1 \geq m/m_2 > \delta_1 n_2/m_2 \geq \delta_1.$$ Therefore,
$$2(k_1-1)(k_2-1)=(k_1-2) (k_2-2)+(k_1k_2-2) \geq k_1k_2-2,$$ so, since $h \geq 2$, we get
$$h(h-1)(k_1-1) (k_2-1) \geq k_1k_2-2,$$ or, equivalently,
$$(hk_1-h+1) \cdot (hk_2-h+1) \geq (h+1)(k_1k_2-1).$$ Multiplying by $\delta_1\delta_2$ yields exactly
$$f_{\delta_1}(m_1,h) \cdot f_{\delta_2}(m_2,h)$$ on the left hand side; therefore, to prove (\ref{f_d_1d_2 leq f_delta1f_delta2}), we need to verify that
\begin{eqnarray} \label{f_d1d2 leq (h+1)} f_{d_1d_2}(m,h) \leq (h+1)(k_1k_2-1)\delta_1\delta_2.\end{eqnarray}
By definition,
$$f_{d_1d_2}(m,h)=f_{\delta_1n_2}(m,h)=\left( h\left \lceil m/(\delta_1n_2) \right \rceil -h+1 \right) \delta_1n_2.$$
But
$$ \left \lceil \frac{m}{\delta_1n_2} \right \rceil \leq \left \lceil \frac{m_1m_2}{\delta_1n_2} \right \rceil \leq \left \lceil \frac{k_1k_2\delta_1\delta_2}{\delta_1n_2} \right \rceil = \left \lceil \frac{k_1k_2}{n_2/\delta_2} \right \rceil \leq \frac{k_1k_2+n_2/\delta_2-1}{n_2/\delta_2},$$
hence
\begin{eqnarray} \label{f_d1d2 leq again} f_{d_1d_2}(m,h) \leq \left( h(k_1k_2-1)+n_2/\delta_2 \right) \delta_1\delta_2. \end{eqnarray}
Since we are under the assumption that $\delta_1 n_2 < m$, we have
$$\frac{n_2}{\delta_2} < \frac{m}{\delta_1\delta_2} \leq \frac{m_1m_2}{\delta_1\delta_2} \leq k_1k_2,$$ so the integer $n_2/\delta_2 $ can be at most $k_1k_2-1$, and thus (\ref{f_d1d2 leq again}) implies (\ref{f_d1d2 leq (h+1)}), completing the proof of (\ref{upper for r=2}).
In order to prove that (\ref{upper for r}) holds for any fixed $r>2$, we suppose that positive integers $m_1, \dots, m_r$ are selected so that $m_i \leq n_i$ for each $1 \leq i \leq r$, $m_1 \cdots m_r \geq m$, and
$$u_{\pm} (G, m,h)=u(n_1,m_1,h) \cdots u(n_r,m_r,h).$$ Furthermore, we suppose that integers $\delta_1, \dots, \delta_r$ are chosen so that for each $1 \leq i \leq r$, $\delta_i \in D(n_i)$ and $u(n_i,m_i,h)=f_{\delta_i}(m_i,h)$.
We will prove that there are integers $d_1, \dots, d_r$, so that, for each $1 \leq i \leq r$, $d_i \in D(n_i)$,
\begin{eqnarray} \label{d 1 d r-1} d_1 \cdots d_{r-1} n_r &\geq & m, \end{eqnarray}
and
\begin{eqnarray} \label{f_d_1d_r leq f_delta1f_deltar} f_{d_1 \cdots d_r}(m,h) &\leq& u_{\pm} (G, m,h) =f_{\delta_1}(m_1,h) \cdots f_{\delta_r}(m_r,h).\end{eqnarray}
We proceed by induction, and assume that (\ref{upper for r}) holds for $r-1$ terms and for $m'=m_2 \cdots m_r$; in particular, for a group $G$ of rank $r-1$ and of type $(n_2,\dots,n_r)$ we have
$$u_{\pm} (G, m',h) \geq \min \{f_d (m',h) \; : \; d \in D(G, m') \}.$$ Therefore, we are able to find integers $\mu_2, \dots, \mu_r$ so that $\mu_i \in D(n_i)$ for each $2 \leq i \leq r$,
\begin{eqnarray} \label{mu 1 mu r-1} \mu_2 \cdots \mu_{r-1} n_r &\geq & m', \end{eqnarray}
and
\begin{eqnarray} \label{f_d_1d_r leq f_mu1f_mur} f_{\mu_2 \cdots \mu_r}(m',h) & \leq & u_{\pm} (G, m',h) \leq f_{\delta_2}(m_2,h) \cdots f_{\delta_r}(m_r,h). \end{eqnarray}
Furthermore, observing that by (\ref{mu 1 mu r-1}), $m''=\lceil m'/(\mu_2 \cdots \mu_{r-1}) \rceil $ is at most $n_r$, from (\ref{upper for r=2}), for a group of rank 2 and of type $(n_1,n_r)$ we have
$$u_{\pm} (G, m_1m'',h) \geq \min \{f_d (m_1m'',h) \; : \; d \in D(G,m_1m'') \},$$ and so there are integers $\nu_1 \in D(n_1)$ and $\nu_r \in D(n_r)$ for which
\begin{eqnarray} \label{nu 1 nu r-1} \nu_1 n_r &\geq & m_1m'', \end{eqnarray}
and
\begin{eqnarray} \label{f_d_1d_r leq f_nu1f_nur} f_{\nu_1\nu_r}(m_1m'',h) & \leq & u_{\pm} (G, m_1m'',h) \leq f_{\delta_1}(m_1,h) \cdot f_{\mu_r}(m'',h). \end{eqnarray}
Now let $d_1=\nu_1$, $d_r=\nu_r$, and $d_i=\mu_i$ for $2 \leq i \leq r-1$. We immediately see that, with these notations, (\ref{d 1 d r-1}) holds, since, by (\ref{nu 1 nu r-1}),
$$d_1 \cdots d_{r-1}n_r = \nu_1 \mu_2 \cdots \mu_{r-1} n_r \geq m_1 \mu_2 \cdots \mu_{r-1} m'' \geq m_1 m' = m_1 \cdots m_r \geq m.$$
To see that (\ref{f_d_1d_r leq f_delta1f_deltar}) holds, note that, for the left-hand side we have
\begin{eqnarray*}
f_{d_1 \cdots d_r}(m,h) &=& f_{\nu_1\nu_r\mu_2 \cdots \mu_{r-1}}(m,h) \\
& \leq & f_{\nu_1\nu_r\mu_2 \cdots \mu_{r-1}}(m_1 m'' \mu_2 \cdots \mu_{r-1},h) \\
& = & \left( h \left \lceil (m_1m'')/(\nu_1 \nu_r) \right \rceil -h+1 \right) \nu_1\nu_r\mu_2 \cdots \mu_{r-1} \\
& = & f_{\nu_1 \nu_r} (m_1m'',h) \mu_2 \cdots \mu_{r-1};
\end{eqnarray*}
and, for the right-hand side of (\ref{f_d_1d_r leq f_delta1f_deltar}), using (\ref{f_d_1d_r leq f_mu1f_mur}), we see that
\begin{eqnarray*}
f_{\delta_1}(m_1,h) \cdots f_{\delta_r}(m_r,h) & \geq & f_{\delta_1}(m_1,h) f_{\mu_2 \cdots \mu_r}(m',h) \\
&=& f_{\delta_1}(m_1,h) \left( h \left \lceil m'/(\mu_2 \cdots \mu_r) \right \rceil -h+1 \right)\mu_2 \cdots \mu_r \\
&=& f_{\delta_1}(m_1,h) \left( h \left \lceil m''/\mu_r \right \rceil -h+1 \right)\mu_2 \cdots \mu_r \\
&=& f_{\delta_1}(m_1,h) f_{\mu_r}(m'',h)\mu_2 \cdots \mu_{r-1}.
\end{eqnarray*}
Therefore, (\ref{f_d_1d_r leq f_delta1f_deltar}) follows from (\ref{f_d_1d_r leq f_nu1f_nur}). With this, the proof of (\ref{upper for r}), and thus of Proposition \ref{u pm with f prop}, is complete.
$\Box$
Our next result exhibits a situation where the upper bound of Proposition \ref{u pm is upper}, and thus of Theorem \ref{u pm with f}, is not tight:
\begin{prop}
If $G$ is a noncyclic group of odd order $n$ and type $(n_1, \dots, n_r)$, then
$$\rho_{\pm} \left(G, (n-1)/2, 2 \right) \leq n-1,$$ but $$u_{\pm} (G, (n-1)/2,2)=n.$$
\end{prop}
{\em Proof:} Note that every element of $G \setminus \{0\}$ has order at least 3, thus there is a subset $A$ of $G \setminus \{0\}$ with which $G \setminus \{0\}$ can be partitioned into $A$ and $-A$. Since $|A|=(n-1)/2$ and $0 \not \in 2_{\pm}A$, we have $$\rho_{\pm} \left(G, (n-1)/2, 2 \right) \leq n-1.$$
To prove our second claim, note that for each $i \in \{1,\dots,r\}$,
$$n/n_i \cdot (n_i-1)/2 < (n-1)/2.$$ Therefore, if positive integers $m_1, \dots, m_r$ satisfy $m_i \leq n_i$ for each $i \in \{1,\dots,r\}$ and $$m_1\cdots m_r \geq (n-1)/2,$$ then we must have $m_i \geq (n_i+1)/2$, and thus $u(n_i,m_i,2)=n_i$, for each $i \in \{1,\dots,r\}$, from which our claim follows. $\Box$
A bit more generally, if $d$ is an odd element of $D(n)$ so that $d \geq 2m+1$, then the same argument yields
$$\rho_{\pm} \left(G, m, 2 \right) \leq d-1,$$
and therefore we have the following:
\begin{cor}
Suppose that $G$ is an abelian group of order $n$ and type $(n_1, \dots, n_r)$. Let $m \leq n$, and let $d_m$ be the smallest odd element of $D(n)$ that is at least $2m+1$; if no such element exists, set $d_m=\infty$. We then have
$$\rho_{\pm} \left(G, m, 2 \right) \leq \min\{u_{\pm}(G,m,2), d_m-1\}.$$
\end{cor}
We are not aware of any subsets with smaller signed sumset size, and we believe that the following holds:
\begin{conj} \label{conj for rho pm}
Suppose that $G$ is an abelian group of order $n$ and type $(n_1, \dots, n_r)$. Let $m \leq n$ and $h \geq 2$.
If $h \geq 3$, then $$\rho_{\pm} \left(G, m, h \right) =u_{\pm}(G,m,h).$$
If each odd divisor of $n$ is less than $2m$, then $$\rho_{\pm} \left(G, m, 2 \right) =u_{\pm}(G,m,2).$$
If there are odd divisors of $n$ greater than $2m$, let $d_m$ be the smallest one. We then have
$$\rho_{\pm} \left(G, m, 2 \right) = \min\{u_{\pm}(G,m,2), d_m-1\}.$$
\end{conj}
\section{An example}
Trivially, if $G$ is an elementary abelian 2-group, then $\rho_{\pm} \left(G, m, h \right)$ agrees with $\rho \left(G, m, h \right)$, and it is not hard to see that this is also true if $G$ is any 2-group. More generally still, as an application of Theorem \ref{u pm with f}, we prove the following:
\begin{prop} \label{example}
If there is no odd prime $p$ for which $\mathbb{Z}_p^2$ is isomorphic to a subgroup of $G$, then
$$\rho_{\pm} \left(G, m, h \right) = \rho \left(G, m, h \right).$$
\end{prop}
{\em Proof:} Suppose that $G$ is of order $n$ and of type $(n_1,\dots,n_r)$; by Theorem \ref{cyclic}, we may assume that $r \geq 2$.
Let $d \in D(n)$ be such that $$\rho \left(G, m, h \right)=u(n,m,h)=f_d(m,h).$$ By Theorem \ref{u pm with f}, it suffices to prove that $d \in D(G,m)$.
Our assumption that there is no odd prime $p$ for which $\mathbb{Z}_p^2$ is isomorphic to a subgroup of $G$ is equivalent to saying that $n_1 \cdots n_{r-1}$ is a power of 2; let $$n_1 \cdots n_{r-1}=2^{k_1}.$$ Furthermore, we write $$n_r=2^{k_2} \cdot c_2$$ and $$d=2^{k_3} \cdot c_3,$$ where $k_2$ and $k_3$ are nonnegative integers, and $c_2$ and $c_3$ are odd. Note that
\begin{eqnarray} \label{k1 and k2 and k3}
k_1+k_2 & \geq & k_3,
\end{eqnarray} and $c_2$ must be divisible by $c_3$.
Now if $m \leq n_r$, then clearly $d \in D(G,m)$, so assume that $m \geq n_r+1$, and thus there is a nonnegative integer $k$ for which
$$2^k \cdot n_r +1 \leq m \leq 2^{k+1} \cdot n_r.$$ Note that we must then have
\begin{eqnarray} \label{k1 and k}
k_1 & \geq & k+1.
\end{eqnarray}
We claim that we also have
\begin{eqnarray} \label{k3 and k2 and k}
k_3 &\geq & k_2+k+1.
\end{eqnarray} Indeed,
\begin{eqnarray*}
u(n,m,h) & = & f_d(m,h) \\
& = & \left(h \cdot \left \lceil m/d \right \rceil-h +1 \right) \cdot d \\
& \geq & \left( h \cdot \left \lceil \frac{2^k \cdot n_r + 1}{d} \right \rceil-h +1 \right) \cdot d.
\end{eqnarray*}
On the other hand, from (\ref{k1 and k}) we see that $G$ contains a subgroup of order $2^{k+1} \cdot n_r$, and thus
\begin{eqnarray*}
u(n,m,h) & \leq & 2^{k+1} \cdot n_r \\
& < & h \cdot 2^{k} \cdot n_r +d \\
& = & \left( h \cdot \frac{2^k \cdot n_r + d}{d} -h +1 \right) \cdot d.
\end{eqnarray*}
Therefore,
$$ \left \lceil \frac{2^k \cdot n_r + 1}{d} \right \rceil < \frac{2^k \cdot n_r + d}{d},$$
which yields that $2^k \cdot n_r$ cannot be divisible by $d$, that is, $2^{k+k_2} \cdot c_2$ cannot be divisible by $2^{k_3} \cdot c_3$, proving (\ref{k3 and k2 and k}).
Now let $$d_r=2^{k_2} \cdot c_3.$$ Then $d_r$ is a divisor of $n_r$; furthermore, by (\ref{k3 and k2 and k}), $d/d_r=2^{k_3-k_2}$ is an integer, and by (\ref{k1 and k2 and k3}), it is a divisor of $n_1 \cdots n_{r-1}$. Using (\ref{k3 and k2 and k}) again, we have
$$d \cdot n_r = 2^{k_3} \cdot c_3 \cdot n_r \geq 2^{k_2+k+1} \cdot c_3 \cdot n_r = d_r \cdot 2^{k+1} \cdot n_r \geq d_r \cdot m,$$ so $d \in D(G,m)$, as claimed. $\Box$
Having a subgroup that is isomorphic to $\mathbb{Z}_p^2$ for an odd prime $p$ is thus a necessary condition for $\rho_{\pm} \left(G, m, h \right)$ to be greater than $\rho \left(G, m, h \right)$. We study $\mathbb{Z}_p^2$, and, more generally, elementary abelian groups, in the upcoming paper \cite{BajMat:2014b}.
\end{document}
\begin{document}
\title
{A Ramsey Type problem for highly connected subgraphs}
\author[1]{Chunlok Lo}
\ead{[email protected]}
\address[1]{
College of Computing, Georgia Institute of Technology, Atlanta, Georgia, USA 30332}
\author[2]{Hehui Wu\fnref{fn2}}
\ead{[email protected]}
\address[2]{
Shanghai Center for Mathematical Sciences,
Fudan University, Shanghai, China 200438}
\fntext[fn2]{Supported in part by National Natural Science Foundation of China (Grant No. 11931006), National Key Research and Development Program of China (Grant No. 2020YFA0713200), and the Shanghai Dawn Scholar Program (Grant No. 19SG01).}
\author[3]{Qiqin Xie\fnref{fn3}}
\ead{[email protected]}
\address[3]{
Department of Mathematics, College of Science,
Shanghai University, Shanghai, China 200444}
\fntext[fn3]{Supported in part by National Natural Science Foundation of China (Grant No. 12201390) and National Key R\&D Program of China (Grant No. 2022YFA1006400).}
\begin{abstract}
Bollob\'{a}s and Gy\'{a}rf\'{a}s conjectured that for any $k, n \in \mathbb{Z}^+$ with $n > 4(k-1)$,
every 2-edge-coloring of the complete graph on $n$ vertices
leads to a $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
We find a counterexample with $n = \lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}} \rfloor$,
thus disproving the conjecture,
and we show the conclusion holds for $n > 5k-2.5-\sqrt{8k-\frac{31}{4}}$
when $k \ge 16$.
\end{abstract}
\begin{keyword}
Connectivity \sep Ramsey Theory
\end{keyword}
\maketitle
\section{Introduction}
Ramsey theory is one of the most important research areas in combinatorics.
For any given integers $s, t$,
the Ramsey number $R(s, t)$ is the smallest integer $n$,
such that for any 2-edge-colored (red/blue) $K_n$,
there must exist a red $K_s$ or a blue $K_t$.
In 1930, Ramsey \cite{Ramsey30} proved the existence of Ramsey numbers.
However, estimating Ramsey numbers is known to be notoriously challenging.
There are many variations of the original Ramsey problem,
including the one considering highly-connected subgraphs instead of cliques.
There have been many studies concerning the existence of $k$-connected subgraphs,
including Mader's \cite{Mader72} result from 1972,
which shows that every graph with sufficiently large average degree contains a $k$-connected subgraph.
To consider the existence of $k$-connected monochromatic subgraphs in edge-colored complete graphs,
we let $r_c(k)$ denote the smallest integer such that
every $c$-edge-colored complete graph on $r_c(k)$ vertices
must contain a $k$-connected monochromatic subgraph.
In 1983, Matula \cite{Mat83} proved $2c(k-1)+1 \le r_c(k) < (10/3)c(k-1)+1$.
Moreover, for 2-edge-coloring, Matula \cite{Mat83} improved the upper bound to $r_2(k) < (3+\sqrt{11/3})(k-1)+1$.
However, Matula's result does not have any restriction on the order of the $k$-connected monochromatic subgraph.
In 2008, Bollob\'{a}s and Gy\'{a}rf\'{a}s \cite{BG08} proposed the following conjecture:
\begin{conj}\label{conjBG}
Let $k, n$ be positive integers. For $n > 4(k-1)$, every 2-edge-colored $K_n$ contains a $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
\end{conj}
Note that the conclusion is not true for $n \le 4(k-1)$ by Matula's result \cite{Mat83}
(also see \cite{BG08}).
Moreover, no matter how large $n$ is,
$n-2k+2$ is the best possible lower bound for the order of the $k$-connected subgraph
by the example $B(n, k)$ in \cite{BG08}.
Besides proposing the conjecture, Bollob\'{a}s and Gy\'{a}rf\'{a}s verified the conjecture for $k \le 2$, and showed it is sufficient to prove the conjecture holds for $4k-3 \le n < 7k-5$.
Liu, Morris, and Prince \cite{LMP09} verified the conjecture for $k=3$,
and proved it for $n \ge 13k-15$.
Later, Fujita and Magnant \cite{FM11} improved the bound to $n > 6.5(k-1)$.
In 2016, {\L}uczak \cite{Lu15} claimed a proof of the conjecture.
However, a gap was found in the proof and has not yet been fixed \cite{HL} (also see \cite{Ma19}).
Bollob\'{a}s and Gy\'{a}rf\'{a}s' conjecture could be generalized to multicolored graphs
(see \cite{LMP08, FM13, KMN17, LW18}).
Besides, there are some other approaches to force large highly connected subgraphs.
For example, Fujita, Liu, and Sarkar \cite{FLS16, FLS18} proved the existence of large highly connected subgraphs
with given independence number.
The characterization of 2-edge-colored $K_n$ with no large $k$-connected monochromatic subgraphs has also been studied
(see \cite{JWW14}).
The main result of this paper shows that Conjecture~\ref{conjBG} fails for $n = \lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}} \rfloor$.
On the other hand, we verify the conclusion for any larger $n$.
\begin{theorem}\label{main}
\quad
\begin{enumerate}
\item Let $k,n \in \mathbb{Z}^+$.
If $k \ge 16$ and $n > 5k-2.5-\sqrt{8k-\frac{31}{4}}$,
then for any two spanning subgraphs $G_R$ and $G_B$ of $K_n$, where $E(G_R) \cup E(G_B)$ covers all edges of $K_n$, either $G_R$ or $G_B$ has a $k$-connected subgraph with at least $n-2k+2$ vertices.
\item For every $k \in \mathbb{Z}^+$,
let $n = \lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}} \rfloor$.
There exists a 2-edge-colored $K_n$,
such that there is no $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
\end{enumerate}
\end{theorem}
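To put these bounds in perspective, consider $k=16$: the threshold is $5k-2.5-\sqrt{8k-\frac{31}{4}}=77.5-\sqrt{120.25} \approx 66.5$, so Theorem~\ref{main}~(2) provides a counterexample on $n=66$ vertices (well above the conjectured bound $4(k-1)=60$), while Theorem~\ref{main}~(1) shows that the desired $k$-connected monochromatic subgraph exists whenever $n \ge 67$.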
Note that in Theorem~\ref{main}~(1),
$G_R$ and $G_B$ may have common edges.
It is not hard to see that in our statement, it makes no difference to use either the original or our extended definition of edge-coloring,
which allows every edge to be colored more than once.
In Section 2, we will give a decomposition structure to graphs with no large $k$-connected monochromatic subgraphs.
We will prove Theorem \ref{main}~(1) in Sections 3 and 4,
and demonstrate the counterexample (Theorem~\ref{main}~(2)) in Section 5.
\section{Structures of graphs without large $k$-connected subgraphs}
In this section,
we first introduce a decomposition for graphs with no $k$-connected subgraphs of large order.
We start with some terminology and notation that we will use throughout this note.
We follow the graph-theoretic notation and terminology of \cite{GraphT}.
Let $G = (V, E)$ be a graph.
$G$ is $k$-connected if and only if it has more than $k$ vertices
and does not have a vertex cut of size at most $k-1$.
For $S \subseteq V$, we use $G[S]$ to denote the subgraph of $G$ induced by $S$.
We use $N_{G}(S)$ to denote the vertex set $\{v: v\notin S, \exists u \in S, uv \in E(G)\}$,
and $N_{G}[S]$ to denote $S \cup N_{G}(S)$.
For two sets $S_1$ and $S_2$, we may use $S_1 - S_2$ to denote $S_1 \setminus S_2$. Moreover, we use $G-S$ to denote the subgraph of $G$ induced by $V(G)-S$.
Let $e = uv$ where $u, v \in V(G)$ and $e \notin E(G)$. We use $G+e$ to denote the graph $(V(G), E(G) \cup \{e\})$.
For $S \subseteq V(G)$,
we say $S$ is complete (resp. connected) in $G$ if $G[S]$ is a complete (resp. connected) subgraph of $G$. For disjoint $V_1, V_2 \subseteq V(G)$, we say $[V_1, V_2]$ is complete in $G$ if $G$ has a complete bipartite subgraph with partite sets $V_1$, $V_2$.
We use $E(V_1, V_2)$ to denote the set of edges with one endpoint in $V_1$ and the other endpoint in $V_2$.
For any positive integer $i$, we use $[i]$ to denote the set of all integers in $[1, i]$. Given a mapping $f$ and a set $X$, we define $f(X)$ to be $\sum_{x\in X}f(x)$ if $f$ is a real-valued function, and $f(X)$ to be $\bigcup_{x\in X}f(x)$ if the value of $f$ is a set.
\begin{definition}\label{decomposition}
Let $k \in \mathbb{Z}^+$, $f(k)$ be a non-negative integer.
Let $G$ be a graph on $n$ vertices, where $n > f(k)+k$.
We define an {\bf $(f(k), k)$-decomposition} of $G$ to be a sequence of triples $((A_i, C_i, D_i))_{i=1}^{l}$, such that
\begin{enumerate}
\item $V(G)$ is a disjoint union of $A_1, C_1, D_1$
\item $C_i \cup D_i$ is a disjoint union of $A_{i+1}, C_{i+1}, D_{i+1}$, $i \in [l-1]$
\item $|C_i| \le k-1$, $i \in [l]$
\item $1 \le |A_i| \le |D_i|$, and there is no edge between $A_i$ and $D_i$, $i \in [l]$
\item $|C_i|+|D_i| \ge n-f(k)$, $i \in [l-1]$
\item $|C_l|+|D_l| < n-f(k)$
\end{enumerate}
\end{definition}
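To illustrate Definition \ref{decomposition}, let $k=2$ and $f(k)=2$, and let $G$ be the graph on $\{v_1,\dots,v_8\}$ whose edges form the two triangles $v_2v_3v_4$ and $v_6v_7v_8$ together with the edges $v_1v_2$, $v_4v_5$, and $v_5v_6$. Then $(A_1,C_1,D_1)=(\{v_1\},\{v_2\},\{v_3,\dots,v_8\})$ followed by $(A_2,C_2,D_2)=(\{v_2,v_3,v_4\},\{v_5\},\{v_6,v_7,v_8\})$ is a $(2,2)$-decomposition of $G$: each $|C_i|=1 \le k-1$, there is no edge between $A_i$ and $D_i$, $|C_1|+|D_1|=7 \ge n-f(k)=6$, and $|C_2|+|D_2|=4<6$. Consistently, the largest $2$-connected subgraphs of $G$ are its two triangles.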
By (1) and (2) of Definition \ref{decomposition}, we have:
\begin{prop}\label{partition}
$V(G)$ is a disjoint union of $A_1, \dots, A_i, C_i, D_i$ for any $i \in [l]$.
\end{prop}
We also consider edge partitions of $G$ with respect to the decomposition. For convenience, we will frequently use the following notations.
\begin{notation}\label{edge type}
Let $((A_i, C_i, D_i))_{i=1}^l$ be an $(f(k),k)$-decomposition of $G$.
\begin{enumerate}
\item We use $A_{l+1}$ to denote $C_l \cup D_l$.
\item We say an edge $uv$ is an $AA$-type edge if there exists $i \in [l+1]$ such that $u, v \in A_i$.
We define $AC$-type and $AD$-type for $i \in [l]$ similarly.
\end{enumerate}
\end{notation}
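For instance, in the example following Definition \ref{decomposition} (the two triangles joined by a path), the edge $v_2v_3$ is of $AA$-type, the edge $v_1v_2$ is of $AC$-type, and the pair $v_1v_3$, which is an edge of $\overline{G}$, is of $AD$-type.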
Thus we have the following propositions.
\begin{prop}\label{Epartition}
Let $((A_i, C_i, D_i))_{i=1}^l$ be an $(f(k),k)$-decomposition of $G$.
\begin{enumerate}
\item $E(G)$ is a disjoint union of $AA$-type and $AC$-type edges.
\item All $AD$-type edges are in $\overline{G}$.
\item Let $K = G \cup \overline{G}$. Then $E(K)$ is a disjoint union of all $AA$-type, $AC$-type, and $AD$-type edges.
\end{enumerate}
\end{prop}
\begin{proof}
(2) follows from Definition \ref{decomposition}(4), and (1) is a corollary of (2) and (3). We only need to prove (3).
Let $u, v$ be two distinct vertices in $V(G)$.
By Proposition \ref{partition}, there must exist $i \in [l+1]$, such that $\{u, v\} \cap A_i \ne \emptyset$.
We take the smallest such $i$.
By symmetry, we may assume $u \in A_i$.
Then by Proposition \ref{partition},
$v$ must be in one of the disjoint sets $A_i$, $C_i$, and $D_i$.
Thus the type of $uv$ is unique.
Hence, $E(K)$ is a disjoint union of all $AA$-type, $AC$-type, and $AD$-type edges.
\end{proof}
\begin{lemma}\label{exist decomposition}
Let $k \in \mathbb{Z}^+$, $f(k)$ be a non-negative function on $k$.
Let $G$ be a graph on $n$ vertices with $n \ge f(k)+k+1$.
If $G$ does not have a $k$-connected subgraph with at least $n-f(k)$ vertices, then $G$ has an $(f(k), k)$-decomposition.
\end{lemma}
\begin{proof}
Let $G_0 = G$.
Since $f(k)$ is non-negative, $|G_0| = n \ge n-f(k)$.
We repeat the following steps until $|G_i| < n-f(k)$.
\begin{enumerate}
\item Let $C_{i+1}$ be a vertex cut of $G_i$ of size at most $k-1$. Such a cut must exist: $G_i$ is an induced subgraph of $G$ with $|G_i| \ge n-f(k) \ge k+1$ vertices, and since $G$ has no $k$-connected subgraph with at least $n-f(k)$ vertices, $G_i$ is not $k$-connected.
\item Let $A_{i+1}$ be the vertex set of a smallest component of $G_i - C_{i+1}$, and let $D_{i+1} = V(G_i) - (A_{i+1} \cup C_{i+1})$.
\item Let $G_{i+1}$ be the subgraph of $G_i$ induced by $C_{i+1} \cup D_{i+1}$.
\end{enumerate}
The sequence of triples generated by the above procedure is an $(f(k), k)$-decomposition of $G$.
\end{proof}
\begin{definition}\label{strong decomposition}
We say an $(f(k), k)$-decomposition is {\bf strong} if $|A_i|+|C_i| < n-f(k)$, for any $i \in [l]$.
\end{definition}
\begin{lemma}\label{decomposition to no conn}
Let $k \in \mathbb{Z}^+$, $f(k)$ be a non-negative function on $k$.
Let $G$ be a graph on $n$ vertices, where $n \ge f(k)+k+1$.
If $G$ has a strong $(f(k), k)$-decomposition,
then $G$ does not have a $k$-connected subgraph with at least $n-f(k)$ vertices.
\end{lemma}
\begin{proof}
Let $((A_i, C_i, D_i))_{i=1}^{l}$ be a strong $(f(k), k)$-decomposition of $G$.
Suppose $G$ has a $k$-connected subgraph $H$ such that $|H| \ge n-f(k)$.
Let $i^*$ be the smallest $i \in [l]$ such that $A_i \cap V(H) \ne \emptyset$.
Note that by Proposition \ref{partition} and (6) of Definition \ref{decomposition}, such $i^*$ must exist.
Then $H$ must be a subgraph of $G[A_{i^*} \cup C_{i^*} \cup D_{i^*}]$.
We claim $V(H) \cap D_{i^*} = \emptyset$.
Otherwise by (3) and (4) of Definition \ref{decomposition}, $V(H) \cap C_{i^*}$ is a cut of $H$ of size at most $k-1$,
which is a contradiction to the connectivity of $H$.
Thus $H$ must be a subgraph of $G[A_{i^*} \cup C_{i^*}]$.
However since the decomposition is strong, $|H| \le |A_{i^*}|+|C_{i^*}| < n-f(k)$.
We conclude that $G$ does not have a $k$-connected subgraph with at least $n-f(k)$ vertices.
\end{proof}
For the rest of this section, we apply the decomposition and its properties to 2-edge-colored complete graphs with no large $k$-connected monochromatic subgraphs.
This will help us to set up the proof of Theorem \ref{main}(1).
Let $k, n \in \mathbb{Z}^+$, where $n \ge 4k-3$,
and $G$ be a complete graph on $n$ vertices.
We color each edge of $G$ with at least one of the colors red and blue.
Note that we allow the edges to be colored red and blue simultaneously.
Let $R$ (resp. $B$) be the set of all red (resp. blue) edges.
We set $G_R = (V, R)$ and $G_B = (V, B)$.
For $S \subseteq V$, we use $R(S)$ (resp. $B(S)$) to denote the subgraph of $G_R$ (resp. $G_B$) induced by $S$.
For convenience, we will use $N_{R}(S)$, $N_{B}(S)$, $N_{R}[S]$, and $N_{B}[S]$ to denote $N_{G_R}(S)$, $N_{G_B}(S)$, $N_{G_R}[S]$, and $N_{G_B}[S]$.
Suppose that there exists such a graph $G$ that does not have a $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
By Lemma \ref{exist decomposition}, $G_R$ must have a $(2k-2, k)$-decomposition $((A_i, C_i, D_i))_{i=1}^{l_R}$,
and $G_B$ must have a $(2k-2, k)$-decomposition $((U_s, X_s, Y_s))_{s=1}^{l_B}$. We choose $G$ and the decompositions according to the following rules:
\begin{asp}\label{decomp asp}
We may assume
\begin{enumerate}
\item $l_R$ and $l_B$ are maximized;
\item With respect to (1), $R$ and $B$ are maximized.
\end{enumerate}
\end{asp}
Next, we will characterize the decompositions of $G_R$ and $G_B$ with more details.
Note that the notations $U_{l_B+1}$, $UU$-type, $UX$-type, and $UY$-type are similar to those mentioned in Notation \ref{edge type}.
\begin{prop}\label{basic prop}
We have the following propositions:
\begin{enumerate}
\item All $AD$-type edges are in $\overline{R} \subseteq B$, all $UY$-type edges are in $\overline{B} \subseteq R$, and no edge is both $AD$-type and $UY$-type.
\item $A_i$ is connected in $G_{\overline{B}}$ for all $i \in [l_R]$ and $U_s$ is connected in $G_{\overline{R}}$ for all $s \in [l_B]$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) follows from Proposition~\ref{Epartition}.
For (2), suppose $A_j$ is not connected in $G_{\overline{B}}$ for some $j$ in $[l_R]$.
Let $A_j^*$ be the vertex set of a connected component of $\overline{B}(A_j)$.
In other words, $[A_j^*, A_j-A_j^*]$ is complete in $G_B$.
We obtain $R'$ from $R$ by removing all edges between $A_j^*$ and $A_j-A_j^*$ in $R$.
For all $i < j$, we set $(A_i', C_i', D_i') = (A_i, C_i, D_i)$.
We set $(A_j', C_j', D_j') = (A_j^*, C_j, D_j \cup A_j - A_j^*)$ and $(A_{j+1}', C_{j+1}', D_{j+1}') = (A_j - A_j^*, C_j, D_j)$.
And for all $i > j$, we set $(A_{i+1}', C_{i+1}', D_{i+1}') = (A_i, C_i, D_i)$.
Then $((A_i', C_i', D_i'))_{i = 1}^{l_R+1}$ is a $(2k-2, k)$-decomposition of $G_{R'}$,
which is a contradiction to Assumption \ref{decomp asp}(1).
Thus, $A_i$ is connected in $G_{\overline{B}}$ for all $i \in [l_R]$.
Similarly, we can prove $U_s$ is connected in $G_{\overline{R}}$ for all $s \in [l_B]$.
\end{proof}
By Proposition \ref{basic prop}(1), we have the following corollary:
\begin{coro}\label{AD UY empty}
For any $i \in [l_R]$ and $s \in [l_B]$,
\begin{enumerate}
\item Either $A_i \cap U_s$ or $D_i \cap Y_s$ is empty;
\item Either $A_i \cap Y_s$ or $U_s \cap D_i$ is empty.
\end{enumerate}
\end{coro}
\begin{proof}
For (1), suppose $A_i \cap U_s$ and $D_i \cap Y_s$ are both non-empty for some $i \in [l_R]$ and $s \in [l_B]$.
Let $u \in A_i \cap U_s$ and $v \in D_i \cap Y_s$.
Then $uv$ is an $AD$-type and $UY$-type edge simultaneously,
which is a contradiction to Proposition \ref{basic prop}(1).
Similarly we can prove (2).
\end{proof}
\begin{prop}\label{B ext}
Suppose there exists $i \in [l_R]$ such that $B(C_i \cup D_i)$ has a $k$-connected subgraph $H$ of order at least $2k-1$.
Then $B(A_{i} \cup V(H))$ is $k$-connected.
\end{prop}
\begin{proof}
Since $H$ is a subgraph of $B(C_i \cup D_i)$ and $|C_i| \le k-1$,
we have $|V(H) \cap D_i| = |V(H)| - |V(H) \cap C_i| \ge (2k-1) - (k-1) = k$.
By Proposition \ref{basic prop} (1), $[A_i, V(H) \cap D_i]$ is complete in $B(A_{i} \cup V(H))$.
Thus, $B(A_{i} \cup V(H))$ is $k$-connected.
\end{proof}
By Definition \ref{decomposition} (2), $C_i \cup D_i = A_{i+1} \cup C_{i+1} \cup D_{i+1}$, $i \in [l_R-1]$.
We will have the following corollary if we apply Proposition \ref{B ext} recursively:
\begin{coro}\label{B ext induc}
Suppose there exists $i \in [l_R]$ such that $B(C_i \cup D_i)$ has a $k$-connected subgraph $H$ of order at least $2k-1$,
then $B(A_1 \cup A_2 \cup \dots \cup A_{i} \cup V(H))$ is $k$-connected.
\end{coro}
\begin{claim}\label{Ai Us bound}
$|A_i| \le k-1$, $\forall i \in [l_R]$.
$|U_s| \le k-1$, $\forall s \in [l_B]$.
\end{claim}
\begin{proof}
Suppose there exists $i$ such that $|A_i| \ge k$.
By (4) of Definition \ref{decomposition}, $|D_i| \ge |A_i| \ge k$.
Since $[A_i, D_i]$ is complete in $G_B$, we have $B(A_i \cup D_i)$ is $k$-connected.
If $i = 1$, $B(A_1 \cup D_1)$ is a $k$-connected subgraph of $B$.
If $i \ge 2$,
then by (2) of Definition \ref{decomposition}, $B(A_i \cup D_i)$ is a $k$-connected subgraph of $B(C_{i-1} \cup D_{i-1})$.
Moreover, $|B(A_i \cup D_i)| = |A_i| + |D_i| \ge k + k > 2k-1$.
Thus by applying Corollary \ref{B ext induc} on $(i-1)$ and $H = B(A_i \cup D_i)$,
we have $B(A_1 \cup A_2 \cup \dots \cup A_{i} \cup D_{i})$ is $k$-connected.
However, by Proposition \ref{partition} and Definition \ref{decomposition}(3),
$|A_1 \cup A_2 \cup \dots \cup A_{i} \cup D_{i}| = |V(G)| - |C_i| \ge n - (k-1) \ge n - 2k + 2$, a contradiction.
Thus, $|A_i| \le k-1$, $\forall i \in [l_R]$.
By symmetry, $|U_s| \le k-1$, $\forall s \in [l_B]$.
\end{proof}
For convenience, we use $A$ (resp. $U$) to denote $A_1 \cup A_2 \cup \dots \cup A_{l_R}$ (resp. $U_1 \cup U_2 \cup \dots \cup U_{l_B}$).
Combining (5) and (6) of Definition \ref{decomposition} with Proposition \ref{partition}, we have the following corollary:
\begin{coro}\label{A U bound}
$2k-1 \le |A| \le 3k-3$.
$2k-1 \le |U| \le 3k-3$.
\end{coro}
\begin{proof}
By Definition \ref{decomposition} (6) and Proposition \ref{partition},
$\sum_{i=1}^{l_R} |A_i| = n - (|C_{l_R}|+|D_{l_R}|) \ge n - (n-2k+1) = 2k-1$.
By Definition \ref{decomposition} (5) and Proposition \ref{partition},
$\sum_{i=1}^{l_R} |A_i| = (\sum_{i=1}^{l_R-1} |A_i|)+|A_{l_R}| = n - (|C_{l_R-1}|+|D_{l_R-1}|) + |A_{l_R}| \le n - (n-2k+2) + (k-1) = 3k-3$.
By symmetry, $2k-1 \le |U| \le 3k-3$.
\end{proof}
The last two claims of this section follow from the maximality of $R$ and $B$.
\begin{claim}\label{RB}
$R$ is the disjoint union of all $AA$-type and $AC$-type edges, and $B$ is the disjoint union of all $UU$-type and $UX$-type edges.
\end{claim}
\begin{proof}
We will first prove that all $AA$-type edges are in $R$.
Suppose there exists $i' \in [l_R+1]$ and $u, v \in A_{i'}$, such that $uv \notin R$.
Consider $R' = R + uv$.
Since there does not exist $i^* \in [l_{R}]$ such that $uv$ is an edge between $A_{i^*}$ and $D_{i^*}$,
$((A_i, C_i, D_i))_{i=1}^{l_R}$ is still a $(2k-2, k)$-decomposition of $G_{R'}$.
Moreover, by Claim \ref{Ai Us bound} and (3) of Definition \ref{decomposition},
for any $i \in [l_R]$, $|A_i|+|C_i| \le (k-1) + (k-1) = 2k-2 < n-2k+2$, since $n \ge 4k-3$.
Hence, the decomposition is strong.
And by Lemma \ref{decomposition to no conn},
$G_{R'}$ does not have a $k$-connected subgraph with at least $n-2k+2$ vertices, a contradiction to the maximality of $R$.
Thus all $AA$-type edges are in $R$.
Similarly, we can prove all $AC$-type edges are in $R$,
and all $UU$-type and $UX$-type edges are in $B$.
By Proposition \ref{Epartition},
$R$ is the disjoint union of all $AA$-type and $AC$-type edges, and $B$ is the disjoint union of all $UU$-type and $UX$-type edges.
\end{proof}
\begin{claim}\label{cut size}
$|C_i| = |X_s| = k-1$ for all $i \in [l_R]$ and $s \in [l_B]$. \end{claim}
\begin{proof}
Suppose there exists $i' \in [l_R]$, such that $|C_{i'}| < k-1$.
Let $u$ be a vertex in $D_{i'}$.
Let $R'$ be the edge set consisting of $R$ together with all edges between $A_{i'}$ and $u$.
Let $C'_{i'} = C_{i'} \cup \{u\}$ and $D'_{i'} = D_{i'} \setminus \{u\}$, and let $(C'_i, D'_i) = (C_i, D_i)$ for all $i \ne i'$.
Then $((A_i, C'_i, D'_i))_{i=1}^{l_R}$ is a strong $(2k-2, k)$-decomposition of $G_{R'}$.
By Lemma \ref{decomposition to no conn},
$G_{R'}$ does not have a $k$-connected subgraph with at least $n-2k+2$ vertices, a contradiction to the maximality of $R$.
Thus, $|C_i| = k-1$ for all $i \in [l_R]$.
By symmetry, $|X_s| = k-1$ for all $s \in [l_B]$.
\end{proof}
\section{Proof of Theorem~\ref{main}(1)}
In this section,
we prove Theorem~\ref{main}(1). Suppose it is not true; then there exists a 2-edge-colored $K_n$
that has no $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices, where $n$ and $k$ are integers satisfying $n> 5k-2.5-\sqrt{8k-\frac{31}{4}}$ and $k\ge 16$. Note that by the examples we mentioned in Section 1 (see \cite{BG08, Mat83}),
we may assume $n \ge 4k-3$.
Throughout the proof of Theorem~\ref{main}(1), we follow all the assumptions and claims of Section 2.
We start with the following observation:
\begin{obs}\label{EQ:alledges}
\begin{equation*}
(k-1)(|A|+|U|)+\sum_{i=1}^{l_R+1} \binom{|A_i|}{2}+\sum_{s=1}^{l_B+1}\binom{|U_s|}{2}=|R|+|B|=\binom{n}{2}+|R\cap B|.
\end{equation*}
\end{obs}
\begin{proof}
By Claims~\ref{RB} and \ref{cut size}, we have $|R|=\sum_{i=1}^{l_R} |A_i||C_i|+\sum_{i=1}^{l_R+1}\binom{|A_i|}{2}=(k-1)|A|+\sum_{i=1}^{l_R+1}\binom{|A_i|}{2}$ and $|B|=\sum_{s=1}^{l_B} |U_s||X_s|+\sum_{s=1}^{l_B+1}\binom{|U_s|}{2}=(k-1)|U|+\sum_{s=1}^{l_B+1}\binom{|U_s|}{2}$. Summing these up, we obtain the above equation.
\end{proof}
\begin{definition}
In $R\cap B$, let $P$ consist of all edges that are both $AC$-type and $UX$-type, all the $AC$-type edges in $E(U_{l_B+1},U_{l_B+1})$, and all the $UX$-type edges in $E(A_{l_R+1}, A_{l_R+1})$. Note that $U_{l_B+1}=V-U$ and $A_{l_R+1}=V-A$.
Let $i \in [l_R]$ and $s \in [l_B]$. Given a vertex $v\in A_i\cap U_s$, let $Q_R(v)$ be the family of edges $uv$ with $u$ in $A_i\cap Y_s$. Similarly, we define $Q_B(v)$ and let $Q(v)=Q_R(v)\cup Q_B(v)$.
Let $Q_R = \cup_{v\in A\cap U}Q_R(v)$, $Q_B = \cup_{v\in A\cap U}Q_B(v)$, and $Q=Q_R\cup Q_B$.
\end{definition}
The following is the key formula for the remaining argument in this paper.
\begin{claim}\label{CLM:Formula}
\begin{align*}
&(5k-3-n)|A\cap U|-(2k-1)-\frac{1}{2}|A\cap U|^2
\\=&(|A|-2k+1)(|U|-2k+1)
+(|A|+|U|-4k+2)(k-|A\cap U|)\\
&+\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}((k-1)|A_i\cap U_s|-\frac{1}{2}|A_i\cap U_s|^2-|Q(A_i\cap U_s)|)+|P|.
\end{align*}
\end{claim}
\begin{proof}
Note that in $R-B$, $Q_R$ consists of all $UY$-type edges in $E(A_i,A_i)$ for $i\in [l_R]$, and in $B-R$, $Q_B$ consists of all $AD$-type edges in $E(U_s,U_s)$ for $s\in [l_B]$. Moreover, by the definition of $P$, $(R\cap B)-P$ consists of the edges in $E(A_i, A_i)$ for $i\in [l_R]$ and the edges in $E(U_s, U_s)$ for $s\in [l_B]$ that are not in $Q$, together with the edges in $E(V-A-U, V-A-U)$. Therefore, we have
$$|R\cap B|-|P|=\binom{|V-A-U|}{2}+\sum_{i=1}^{l_R} \binom{|A_i|}{2}+\sum_{s=1}^{l_B}\binom{|U_s|}{2}-\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}\binom{|A_i\cap U_s|}{2}-|Q|.$$
Summing this up with Observation~\ref{EQ:alledges}, we get
\begin{align*}
&(k-1)(|A|+|U|)+\binom{|V-A|}{2}+\binom{|V-U|}{2}-|P|\\
=&\binom{n}{2}+\binom{|V-A-U|}{2}-\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}\binom{|A_i\cap U_s|}{2}-|Q|.
\end{align*}
Therefore
\begin{align*}
&\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}\binom{|A_i\cap U_s|}{2}+|Q|-|P|\\
= &\binom{n}{2}+\binom{n-|A\cup U|}{2}-\binom{n-|A|}{2}-\binom{n-|U|}{2}-(k-1)(|A|+|U|)\\
=&\frac{2n-1}{2}(-|A\cup U|+|A|+|U|)+\frac{1}{2}(|A\cup U|^2-|A|^2-|U|^2)
-(k-1)(|A|+|U|)\\
=&(n-\frac{1}{2})|A\cap U|+\frac{1}{2}((|A|+|U|-|A\cap U|)^2-|A|^2-|U|^2)-(k-1)(|A|+|U|)\\
=&(n-\frac{1}{2})|A\cap U|+|A||U|-(|A|+|U|)|A\cap U|+\frac{1}{2}|A\cap U|^2-(k-1)(|A|+|U|)\\
=&(2k-1)+|A \cap U|(n-4k+1.5) + \frac{1}{2}|A\cap U|^2+(|A|-2k+1)(|U|-2k+1)\\
&+(|A|+|U|-2(2k-1))(k-|A\cap U|).
\end{align*}
The last equality can be verified by direct expansion; it is written in this form because, by Corollary~\ref{A U bound}, both $|A|$ and $|U|$ are at least $2k-1$.
As
\begin{align*}\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}\binom{|A_i\cap U_s|}{2}=&
\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}(\frac{1}{2}|A_i\cap U_s|^2-(k-1)|A_i\cap U_s|+(k-1.5)|A_i\cap U_s|)\\
=&(k-1.5)|A\cap U|+\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}(\frac{1}{2}|A_i\cap U_s|^2-(k-1)|A_i\cap U_s|)
\end{align*}
and
$$|Q| = \sum_{i=1}^{l_R}\sum_{s=1}^{l_B}|Q(A_i \cap U_s)|,$$
we have
\begin{align*}
&(5k-3-n)|A\cap U|-(2k-1)-\frac{1}{2}|A\cap U|^2
\\=&(|A|-2k+1)(|U|-2k+1)
+(|A|+|U|-4k+2)(k-|A\cap U|)\\
&+\sum_{i=1}^{l_R}\sum_{s=1}^{l_B}((k-1)|A_i\cap U_s|-\frac{1}{2}|A_i\cap U_s|^2-|Q(A_i\cap U_s)|)+|P|.
\end{align*}
\end{proof}
For $i\in [l_R]$ and $s\in [l_B]$ with $A_i\cap U_s\not=\emptyset$, if $D_i\cap U_s=\emptyset$, then we put $A_i\cap U_s$ in a set $A^*_i$; if $A_i\cap Y_s=\emptyset$, then we put $A_i\cap U_s$ in a set $U^*_s$. If both $A_i\cap Y_s$ and $D_i\cap U_s$ are empty, then we arbitrarily put $A_i\cap U_s$ in $A^*_i$ or in $U^*_s$. Let $A^*=\bigcup_{i=1}^{l_R} A^*_i$ and let $U^*=\bigcup_{s=1}^{l_B} U^*_s$.
By the fact that every edge is either in $R$ or $B$, we immediately have the following claim.
\begin{claim}\label{CLM:A*U*}
The following hold for $A^*$ and $U^*$:
\begin{enumerate}
\item $A\cap U=A^*\sqcup U^*$.
\item $Q(v)=Q_R(v)$ for all $v\in A^*$ and $Q(v)=Q_B(v)$ for all $v\in U^*$.
\end{enumerate}
\end{claim}
\begin{proof}
(1) By Corollary \ref{AD UY empty}(2), $A_i\cap U_s\subseteq A^*_i$ or $A_i\cap U_s\subseteq U^*_s$. Also, by the definition of $A^*_i$ and $U^*_s$, each nonempty $A_i\cap U_s$ is placed in exactly one of them. Hence, $A\cap U=A^*\sqcup U^*$.
(2) Let $v\in A^*_i$ and suppose $v\in A_i\cap U_s$. By the definition of $A^*_i$ we have $D_i\cap U_s=\emptyset$, so $Q_B(v)=\emptyset$ and hence $Q(v)=Q_R(v)$. Similarly, if $v\in U^*$, then $Q(v)=Q_B(v)$.
\end{proof}
\begin{claim}
For any $i\in [l_R]$ and $s\in [l_B]$, the following hold:
\begin{enumerate}
\item $|P|\ge \sum_{i}|A_i-U||C_i-U|+\sum_{s}|U_s-A||X_s-A|$.
\item $\sum_{A_i\cap U_s\subseteq A^*_i}((k-1)|A_i\cap U_s|-\frac{1}{2}|A_i\cap U_s|^2-|Q(A_i\cap U_s)|)
\ge |A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2.$
\item $\sum_{A_i\cap U_s\subseteq U^*_s}((k-1)|A_i\cap U_s|-\frac{1}{2}|A_i\cap U_s|^2-|Q(A_i\cap U_s)|)
\ge |U^*_s|(k-1-|U_s|)+\frac{1}{2}|U^*_s|^2.$
\end{enumerate}
\end{claim}
\begin{proof}
(1) follows from the definition of $P$: it contains all the $AC$-type edges in $E(U_{l_B+1},U_{l_B+1})$ and all the $UX$-type edges in $E(A_{l_R+1}, A_{l_R+1})$.
By symmetry, we only need to show (2).
\begin{align*}
&\sum_{A_i\cap U_s\subseteq A^*_i}(|A_i\cap U_s||A_i|-|Q(A_i\cap U_s)|)\\
=&\sum_{A_i\cap U_s\subseteq A^*_i}|A_i\cap U_s|(|A_i|-|A_i\cap Y_s|)\\
\ge&\sum_{A_i\cap U_s\subseteq A^*_i}|A_i\cap U_s|\sum_{\substack{A_i\cap U_t\subseteq A^*_i\\t\le s}}|A_i\cap U_t|\\
=&\frac{1}{2}\left(\sum_{A_i\cap U_s\subseteq A^*_i}|A_i\cap U_s|\right)^2+\frac{1}{2}\sum_{A_i\cap U_s\subseteq A^*_i}|A_i\cap U_s|^2\\
=&\frac{1}{2}|A^*_i|^2+\frac{1}{2}\sum_{A_i\cap U_s\subseteq A^*_i}|A_i\cap U_s|^2
\end{align*}
Therefore, we have
\begin{align*}
&\sum_{A_i\cap U_s\subseteq A^*_i}((k-1)|A_i\cap U_s|-\frac{1}{2}|A_i\cap U_s|^2-|Q(A_i\cap U_s)|)\\
&\ge \sum_{A_i\cap U_s\subseteq A^*_i}(|A_i\cap U_s|(k-1-|A_i|)+(|A_i\cap U_s||A_i|-|Q(A_i\cap U_s)|-\frac{1}{2}|A_i\cap U_s|^2))\\
&\ge |A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2.
\end{align*}
\end{proof}
\begin{claim}
$\sum_{i=1}^{l_R} \left(|A_i^*|(k-1-|A_i|)+\frac{1}{2}|A_i^*|^2\right)+\sum_{s=1}^{l_B} \left(|U_s^*|(k-1-|U_s|)+\frac{1}{2}|U_s^*|^2\right)\ge \frac{1}{12}|A\cap U|^2$.
\end{claim}
\begin{proof}
Assume $|A_{j_1}^*|\ge |A_{j_2}^*|\ge |A_{j_3}^*|\ge\dots$; then we have
\begin{align*}
&\sum_{i=1}^{l_R} |A_i^*|(k-1-|A_i|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
\ge &|A_{j_1}^*|(k-1-(k-1))+|A_{j_2}^*|(k-1-(k-1))+|A_{j_3}^*|(k-1-(|A|-2(k-1)))\\
&+\sum_{i=4}^{l_R}|A_{j_i}^*|(k-1-0)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
\ge & \sum_{i=4}^{l_R}|A_{j_i}^*|(k-1)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
\ge &\frac{1}{6}(|A_{j_1}^*|+|A_{j_2}^*|+|A_{j_3}^*|)^2+\frac{1}{3}\sum_{i=4}^{l_R}|A_{j_i}^*|(\sum_{i=1}^{l_R}|A_{j_i}^*|)\\
\ge & \frac{1}{6}(\sum_{i=1}^{l_R}|A_{j_i}^*|)^2=\frac{1}{6}|A^*|^2.
\end{align*}
Similarly, we have $\sum_{s=1}^{l_B} \left(|U_s^*|(k-1-|U_s|)+\frac{1}{2}|U_s^*|^2\right)\ge \frac{1}{6}|U^*|^2$. Hence $$\sum_{i=1}^{l_R} \left(|A_i^*|(k-1-|A_i|)+\frac{1}{2}|A_i^*|^2\right)+\sum_{s=1}^{l_B} \left(|U_s^*|(k-1-|U_s|)+\frac{1}{2}|U_s^*|^2\right)\ge \frac{1}{6}(|A^*|^2+|U^*|^2)\ge \frac{1}{12}|A\cap U|^2.$$
\end{proof}
Together with Claim~\ref{CLM:Formula}, we immediately have the following:
\begin{coro}
\begin{align*}
&(5k-3-n)|A\cap U|-(2k-1)-\frac{7}{12}|A\cap U|^2\\
\ge& (|A|-2k+1)(|U|-2k+1)
+(|A|+|U|-4k+2)(k-|A\cap U|).
\end{align*}
\end{coro}
For convenience, let $\lambda=5k-3-n$. Since $n > 5k-2.5-\sqrt{8k-\frac{31}{4}}$, we have $\lambda<\sqrt{8k-\frac{31}{4}}-\frac{1}{2}$, hence $\lambda^2+\lambda<8k-8$ if $\lambda\ge 0$.
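To spell out the last implication, note that for $\lambda\ge 0$ both sides of $\lambda+\frac12<\sqrt{8k-\frac{31}{4}}$ are nonnegative, so squaring gives
$$\lambda^2+\lambda+\tfrac14<8k-\tfrac{31}{4},\qquad\text{i.e.,}\qquad \lambda^2+\lambda<8k-8.$$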
\begin{claim}
$|A \cap U|\le k-1$.
\end{claim}
\begin{proof}
If $|A\cap U|\ge k$, as $0\le |A|-2k+1, |U|-2k+1\le k-1$, we have
\begin{align*}
&\lambda|A\cap U|-(2k-1)-\frac{7}{12}|A\cap U|^2\\
\ge& (|A|-2k+1)(|U|-2k+1)+(|A|+|U|-4k+2)(k-|A\cap U|)\\
=&((|A|-2k+1)+(k-|A\cap U|))((|U|-2k+1)+(k-|A\cap U|))- (k-|A\cap U|)^2\\
\ge& ((k-1)+k-|A\cap U|)(k-|A\cap U|)- (k-|A\cap U|)^2\\
=&(k-1)(k-|A\cap U|).
\end{align*}
That is,
$$0\ge (k^2+k-1)-(k-1+\lambda)|A\cap U|+\frac{7}{12}|A\cap U|^2.$$
So we should have $k-1+\lambda>0$ and $(k-1+\lambda)^2\ge \frac{7}{3}(k^2+k-1)$, hence $\lambda\ge \lceil\sqrt{\frac{7}{3}(k^2+k-1)}-(k-1)\rceil\ge \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$ (the last inequality holds when $k\ge 16$), contradicting $\lambda<\sqrt{8k-\frac{31}{4}}-\frac{1}{2}$.
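As a numerical illustration of the boundary case (not needed for the argument): for $k=16$ one has $\lceil\sqrt{\frac{7}{3}\cdot 271}-15\rceil=11\ge\sqrt{120.25}-\frac12\approx 10.47$, while for $k=15$ the left-hand side equals $10$, which is smaller than $\sqrt{112.25}-\frac12\approx 10.10$; this is why $k\ge 16$ is assumed.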
Thus we must have $|A\cap U|\le k-1$.
\begin{comment}In this case, we have
$$\lambda |A \cap U|\ge (2k-1) + \frac{7}{12}|A \cap U|^2.$$
As $\lambda<\sqrt{8k-\frac{31}{4}}-\frac{1}{2}<2\sqrt{2k-1}$, we will have
$$0\ge \lambda^2-4\lambda|A \cap U| + \frac{7}{3}|A \cap U|^2.$$
Therefore $|A \cap U|< (1+\frac{\sqrt{2}}{2})\lambda$.
\end{comment}
\end{proof}
Let $\tau=k-1-|V-A-U|$, let $I^*=\{i: A^*_i\not=\emptyset\}$, let $S^*=\{s: U^*_s\not=\emptyset\}$.
\begin{claim}
We have the following:
\begin{enumerate}
\item There exists an $i_1\in I^*$ such that $$\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2+|A_i-U||C_i-U|)
\ge \frac{1}{2}|A^*|^2+\sum_{i\not=i_1}|A^*_i|(n-|A|-|U|).$$
\item There exists an $s_1\in S^*$ such that $$\sum_{s=1}^{l_B}(|U^*_s|(k-1-|U_s|)+\frac{1}{2}|U^*_s|^2+|U_s-A||X_s-A|)
\ge \frac{1}{2}|U^*|^2+\sum_{s\not=s_1}|U^*_s|(n-|A|-|U|).$$
\end{enumerate}
\end{claim}
\begin{proof}
By symmetry, we just need to prove (1).
Let $i \in [l_R]$ with $A_i\cap U\not=\emptyset$; in particular, suppose $A_i\cap U_s\not=\emptyset$ for some $s \in [l_B]$. By Corollary \ref{AD UY empty}(1), we have $D_i\cap Y_s=\emptyset$, therefore $D_i-U\subseteq X_s$, hence $|D_i-U|\le |X_s|\le k-1$. We have $|C_i-U|=\sum_{j=i+1}^{l_R+1}|A_j-U|-|D_i-U|\ge \sum_{j=i+1}^{l_R}|A_j-U|+|V-A-U|-(k-1)\ge \sum_{j=i+1}^{l_R}|A_j-U|-\tau$.
Let $i_0=\max\{i\in I^*: \sum_{j=i}^{l_R}|A_j-U|>\tau\}$, and let $a=\sum_{j=i_0}^{l_R}|A_j-U|-\tau$. Let $I^*_{<i_0}=\{i\in I^*, i<i_0\}$.
We have $\sum_{i=1}^{l_R}|A_i-U||C_i-U|\ge \sum_{i\in I^*_{<i_0}}|A_i-U|(\sum_{i<j<i_0, j\in I^*}|A_j-U|+a)=\sum_{i, j \in I^*_{<i_0}, i<j}|A_i-U||A_j-U|+\sum_{i\in I^*_{<i_0}}|A_i-U|a$.
So
\begin{align*}
&\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2+|A_i-U||C_i-U|)\\
\ge &\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i\cap U|))+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=1}^{l_R}|A^*_i||A_i-U|\\
&+\sum_{i,j\in I^*_{<i_0}, i<j}|A_i-U||A_j-U|+\sum_{i\in I^*_{<i_0}}|A_i-U|a\\
\ge &\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i\cap U|))+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=i_0}^{l_R}|A^*_i||A_i-U|\\
& +\sum_{i \in I^*_{<i_0}}(a-|A^*_i|)|A_i-U|+\sum_{i,j\in I^*_{<i_0}, i<j}|A_i-U||A_j-U|
\end{align*}
Let $f=\sum_{i\in I^*_{<i_0}}(a-|A^*_i|)|A_i-U|+\sum_{i,j\in I^*_{<i_0}, i<j}|A_i-U||A_j-U|$. We regard $f$ as a function of $\{|A_i-U|:i\in I^*_{<i_0}\}$, where each $|A_i-U|$ ranges from $0$ to $k-1-|A_i\cap U|$. Since $f$ is linear in each $|A_i-U|$ with $1\le i<i_0$, it attains its extremal values at the endpoints of these ranges. In particular, when $f$ attains its minimum we may assume that $|A_i-U|=0$ if $\frac{\partial f}{\partial |A_i-U|}>0$ and $|A_i-U|=k-1-|A_i\cap U|$ if $\frac{\partial f}{\partial |A_i-U|}<0$. Suppose $|A_{i_1}-U|=k-1-|A_{i_1}\cap U|$ for some $i_1$. Then for $i\not=i_1$, since $\frac{\partial f}{\partial |A_i-U|}=a-|A_i^*|+\sum_{j\not=i, j<i_0}|A_j-U|\ge (k-1-|A_{i_1}\cap U|)-|A_i^*|\ge k-1-|A\cap U|>0$, we have $|A_i-U|=0$ for all $1\le i\le i_0$ with $i\not=i_1$. Therefore $f\ge (a-|A^*_{i_1}|)(k-1-|A_{i_1}\cap U|)$. If there is no such $i_1$, then $|A_i-U|=0$ for all $1\le i<i_0$, hence $f\ge 0$.
Case 1. There exists such an $i_1$. We have
\begin{align*}
&\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2+|A_i-U||C_i-U|)\\
\ge &\sum_{i=1}^{l_R}|A^*_i|(k-1-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=i_0}^{l_R}|A^*_i||A_i-U|+f\\
\ge &\sum_{i=1}^{l_R}|A^*_i|(k-1-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=i_0}^{l_R}|A^*_i||A_i-U|+(a-|A^*_{i_1}|)(k-1-|A_{i_1}\cap U|)\\
\ge &\sum_{i\not=i_1}|A^*_i|(k-1-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=i_0}^{l_R}|A^*_i|(a+\tau)+a(k-1-|A_{i_1}\cap U|)\\
\ge &\sum_{i\not=i_1}|A^*_i|(|V-A-U|-|A\cap U|+\sum_{j\not= i}|A_j\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2+a(k-1-|A\cap U|)\\
\ge &\sum_{i\not=i_1}|A^*_i|(|V-A-U|-|A\cap U|+\sum_{j\not= i}|A_j\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
\ge &\sum_{i\not=i_1}|A^*_i|(n-|A|-|U|)+\sum_{1\le i<j\le l_R}|A_i^*||A_j^*|+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
= & \frac{1}{2}|A^*|^2+\sum_{i\not=i_1}|A^*_i|(n-|A|-|U|).
\end{align*}
Case 2. There is no such $i_1$; thus $f\ge 0$. Let $i_1=i_0$. We have
\begin{align*}
&\sum_{i=1}^{l_R}(|A^*_i|(k-1-|A_i|)+\frac{1}{2}|A^*_i|^2+|A_i-U||C_i-U|)\\
\ge &\sum_{i=1}^{l_R}|A^*_i|(k-1-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i=i_0}^{l_R}|A^*_i||A_i-U|+f\\
\ge &\sum_{i=1}^{l_R}|A^*_i|(k-1-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2-\sum_{i\not=i_0}|A^*_i|\tau-|A_{i_0}^*||A_{i_0}-U|\\
\ge &\sum_{i\not=i_0}|A^*_i|(k-1-\tau-|A_i\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2+|A^*_{i_0}|(k-1-|A_{i_0}|)\\
\ge &\sum_{i\not=i_0}|A^*_i|(|V-A-U|-|A\cap U|+\sum_{j\not= i}|A_j\cap U|)+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
\ge &\sum_{i\not=i_0}|A^*_i|(n-|A|-|U|)+\sum_{1\le i<j\le l_R}|A_i^*||A_j^*|+\frac{1}{2}\sum_{i=1}^{l_R}|A_i^*|^2\\
= & \frac{1}{2}|A^*|^2+\sum_{i\not=i_1}|A^*_i|(n-|A|-|U|).
\end{align*}
\end{proof}
\begin{claim}
$|I^*|=1, |S^*|=1, |A|=|U|=2k-1$, and $|A\cap U|\le \lambda$.
\end{claim}
\begin{proof}
We have
\begin{align*}
&\lambda|A\cap U|\\
\ge & (2k-1)+ \frac{1}{2}|A\cap U|^2
+(|A|+|U|-4k+2)(k-|A\cap U|)\\
&+\frac{1}{2}|A^*|^2+\frac{1}{2}|U^*|^2+\sum_{i\not=i_1}|A^*_i|(n-|A|-|U|)+\sum_{s\not=s_1}|U^*_s|(n-|A|-|U|)\\
\ge& (2k-1)+ \frac{1}{2}|A\cap U|^2+\frac{1}{4}|A\cap U|^2\\
&+(|A|+|U|-4k+2)(k-|A\cap U|)+(n-|A|-|U|)(\sum_{i\not=i_1}|A^*_i|+\sum_{s\not=s_1}|U^*_s|).
\end{align*}
If $|A|+|U|-4k+2\ge 1$, we have
$$\lambda|A\cap U|\ge (2k-1)+ \frac{3}{4}|A\cap U|^2+k-|A\cap U|.$$
Therefore $(\lambda+1)^2-3(3k-1)\ge 0$, and hence $\lambda\ge \sqrt{9k-3}-1> \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$ for every positive integer $k$, a contradiction. So we have $|A|=|U|=2k-1$.
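For completeness, the inequality $\sqrt{9k-3}-1> \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$ used here can be checked directly: rearranging and squaring once (all quantities involved are positive) gives $k+\frac92>\sqrt{8k-\frac{31}{4}}$, and squaring again reduces this to
$$k^2+9k+\tfrac{81}{4}>8k-\tfrac{31}{4}\quad\Longleftrightarrow\quad k^2+k+28>0,$$
which holds for every positive integer $k$.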
If $|I^*|\ge 2$ or $|S^*|\ge 2$, we will have
$$\lambda|A\cap U|\ge (2k-1)+ \frac{3}{4}|A\cap U|^2+(5k-3-\lambda-(4k-2))=3k-2-\lambda+\frac{3}{4}|A\cap U|^2.$$
We will have $\lambda^2-3(3k-2-\lambda)\ge 0.$ Hence $\lambda\ge \sqrt{9k-\frac{15}{4}}+\frac{3}{2}> \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$ for all positive integer $k$.
Furthermore, if one of $A^*$ and $U^*$ is empty (equivalently, one of $I^*$ and $S^*$ is empty), we will have
$\lambda|A\cap U|
\ge (2k-1)+ \frac{1}{2}|A\cap U|^2
+\frac{1}{2}(|A^*|^2+|U^*|^2)=2k-1+|A\cap U|^2$.
We will have $\lambda^2-4(2k-1)\ge 0.$ Hence $\lambda\ge \sqrt{8k-4}> \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$ for all positive integer $k$.
Thus we should have $|I^*|=|S^*|=1$.
Furthermore, we have
$$\lambda |A \cap U|\ge (2k-1) + \frac{3}{4}|A \cap U|^2.$$
As $\lambda<\sqrt{8k-\frac{31}{4}}-\frac{1}{2}<2\sqrt{2k-1}$, we will have
$$0\ge \lambda^2-4\lambda|A \cap U| + 3|A \cap U|^2=(|A\cap U|-\lambda)(3|A\cap U|-\lambda).$$
Therefore $|A \cap U|<\lambda$.
\end{proof}\begin{claim}\label{I=S=1}
If $|I^*|=|S^*|=1$, then we must have $n \le 5k-2.5-\sqrt{8k-\frac{31}{4}}$.
\end{claim}
We will leave the proof of Claim \ref{I=S=1} to the next section,
which will complete the proof of Theorem \ref{main} (1).
\section{The case when $|I^*|=|S^*|=1$}
Assume $I^*=\{j\}$ and $S^*=\{t\}$. Then $A^*=A_j^*$, $U^*=U_t^*$ and $A\cap U=A_j^*\sqcup U_t^*$. We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+ |A^*|(k-1-|A_j|)\\
&+|U^*|(k-1-|U_t|)+|A_j-U||C_j-A|+|U_t-A||X_t-U|.
\end{align*}
We say $x>_R y$ if $x\in A_i$ and $y\in A_{i'}$ and $i>i'$. A set $X>_R Y$ if $\forall x\in X, y\in Y$ we have $x>_R y$. Similarly we define $\ge_R$, $>_B$, $\ge_B$.
As we assume $\lambda < \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$, we have $\lambda^2+\lambda\le 8k-4$, hence $2k-1>\frac{\lambda^2}{4}$. Note that by Corollary \ref{AD UY empty}(2), either $U_t\cap D_j=\emptyset$ or $A_j\cap Y_t=\emptyset$.
\begin{claim}\label{CLM:UtinCj}
We have the following proposition:
\begin{enumerate}
\item If $U_t\cap D_j=\emptyset$ then $U_t\subseteq C_j$. If $A_j\cap Y_t=\emptyset$, then $A_j\subseteq X_t$. In particular, we always have $A_j\cap U_t=\emptyset$, $A^*=A_j\cap U$, $U^*=U_t\cap A$.
\item $C_j\subseteq U, X_t\subseteq A$.
\item $V-A-U\subseteq D_j\cap Y_t$.
\end{enumerate}
\end{claim}
\begin{proof}
(1) By symmetry, we just need to consider the case that $U_t\cap D_j=\emptyset$. By the definition of $A_j^*$ and $U_t^*$, we may assume that $A_j\cap U_t$ is a subset of $A_j^*$ if it is not empty, hence $A^*=A_j\cap U$. If there is a vertex $v\in A_i\cap U_t$ with $i<j$, then the set $W=D_i\cap ((A_j-U)\cup (V-A-U))$ has size at least $|A_j-U|+|V-A-U|-|C_j|$. The edges between $v$ and $W$ are $AD$-type and therefore should be in $B$, and hence $W\subseteq X_t-U$.
So $|X_t-U|\ge |A_j-U|+|V-A-U|-|C_j|\ge |A_j|-(|A \cap U|-|U^*|)+n-|A\cup U|-(k-1)=|A_j|+|U^*|+(4k-2-\lambda)-|A|-|U|=|A_j|+|U^*|-\lambda$.
We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+ |A^*|(k-1-|A_j|)\\
&+|U^*|(k-1-|U_t|)+|U_t-A||X_t-U|.
\end{align*}
If $|A_j|\le \lambda-|U^*|$, then we will have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+ |A^*|(k-1-\lambda+|U^*|)\\
> & \frac{\lambda^2}{4}-\lambda(|A^*|+|U^*|)+|A^*|^2+2|A^*||U^*|+|U^*|^2\\
= & (|A^*|+|U^*|-\frac{\lambda}{2})^2\ge 0
\end{align*}
Contradiction! We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+ |A^*|(k-1-|A_j|)\\
&+|U^*|(k-1-|U_t|)+|U_t-A|(|A_j|+|U^*|-\lambda).
\end{align*}
The right-hand side above can be considered as a linear function of $|A_j|$, which ranges from $\lambda-|U^*|$ to $k-1$; hence its minimum is achieved at an endpoint of this range. So we just need to consider the case $|A_j|=k-1$. We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+|U^*|(k-1-|U_t|)\\
+&|U_t-U_t\cap A_j-U^*|(k-1+|U^*|-\lambda).
\end{align*}
If $|U_t|\le |A^*|+|U^*|$, then
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+|U^*|(k-1-|A^*|-|U^*|)\\
> & \frac{\lambda^2}{4}-\lambda|A^*|+|A^*|^2+(k-1-\lambda)|U^*|\\
\ge& (|A^*|-\frac{\lambda}{2})^2\ge 0
\end{align*}
Contradiction! We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2+|U^*|(k-1-|U_t|)\\
+&(|U_t|-|A^*|-|U^*|)(k-1+|U^*|-\lambda).
\end{align*}
Again, the right-hand side above can be considered as a linear function of $|U_t|$, which ranges from $|A^*|+|U^*|$ to $k-1$; hence its minimum is achieved at an endpoint of this range. So we just need to consider the case $|U_t|=k-1$. We have
\begin{align*}
0\ge & (2k-1)-\lambda(|A^*|+|U^*|)+|A^*|^2+|A^*||U^*|+|U^*|^2\\
&+(k-1-|A^*|-|U^*|)(k-1-\lambda+|U^*|)\\
\ge & (2k-1)+(k-1)(k-1-\lambda)-(k-1-2\lambda)|A^*|+|A^*|^2
\end{align*}
So we should have $(k-1-2\lambda)^2\ge 4(2k-1)+4(k-1)(k-1-\lambda)$, so
$$0\ge 3(k-1)^2+4(2k-1)-4\lambda^2\ge 3(k-1)^2+3(2k-1)>0.$$ Contradiction.
From the above, every vertex $v \in U_t$ satisfies $v\ge_R A_j$.
If there existed a vertex $v\in A_j\cap U_t$, then $v$ would have no $AD$-type edge inside $U_t$, contradicting Proposition~\ref{basic prop}(2), namely that $U_t$ is connected in $G_{\overline{R}}$. So $U_t>_R A_j$, and therefore $U_t\subset C_j$. Hence $A_j\cap U_t=\emptyset$.
(2) As $A_j\cap U_t=\emptyset$, and $A\cap U=A^*_j\sqcup U^*_t$, we have $A^*_j=A_j\cap U$ and $U^*_t=U_t\cap A$.
If $|C_j-U|\ge 1$, we will have
$|A^*|(k-1-|A_j|)+|A_j-U||C_j-U|\ge |A^*|(k-1-|A_j|)+|A_j|-|A^*|\ge k-1-|A^*|$.
Hence we have
\begin{align*}
0\ge &2k-1-\lambda(|A^*|+|U^*|)+|A^*|^2+|U^*|^2 +|A^*||U^*|+k-1-|A^*|\\
=&\frac{1}{4}\lambda^2-(\frac{1}{2}|A^*|+|U^*|)\lambda+(\frac{1}{2}|A^*|+|U^*|)^2+\frac{3}{4}|A^*|^2-(1+\frac{1}{2}\lambda)|A^*|+3k-2-\frac{1}{4}\lambda^2\\
=&(\frac{1}{2}\lambda-(\frac{1}{2}|A^*|+|U^*|))^2+\frac{3}{4}(|A^*|-\frac{2}{3}-\frac{1}{3}\lambda)^2+3k-2-\frac{1}{4}\lambda^2-\frac{3}{4}(\frac{2}{3}+\frac{1}{3}\lambda)^2\\
\ge& 3k-2-\frac{1}{3}(\lambda^2+\lambda+1)\\
\ge& 3k-2-\frac{1}{3}(8k-7)\\
=&\frac{k}{3}+\frac{1}{3}
>0
\end{align*}
Contradiction! So we have $C_j\subseteq U$. Similarly, we have $X_t\subseteq A$.
(3) is implied by (2) immediately.
\end{proof}
Recall that $n=5k-3-\lambda$, $|V-A-U|=k-1-\tau$. Let $\omega=|A_j\cap U|=|A^*|$, $\sigma=|U_t\cap A|=|U^*|$, then $|A\cap U|=\omega+\sigma$. $n=|A|+|U|+|V- A- U|-|A\cap U|=2k-1+2k-1+k-1-\tau-(\omega+\sigma)=5k-3-(\tau+\omega+\sigma)$. So $\lambda=5k-3-n=\tau+\omega+\sigma$.
Let $A'=A-A_j-(U_t\cap A)$, $U'=U-U_t-(A_j\cap U)$.
\begin{claim}
$|A_j|\ge k-\sigma$, $|U_t|\ge k-\omega$, $|A'|\le k-1$, $|U'|\le k-1$.
\end{claim}
\begin{proof}
\begin{align*}
0>&\frac{\lambda^2}{4}-\lambda(\omega+\sigma)+\omega^2+\omega\sigma+\sigma^2+\omega(k-1-|A_j|)+\sigma(k-1-|U_t|)\\
=& (\frac{1}{2}\lambda-(\omega+\sigma))^2-\omega\sigma+\omega(k-1-|A_j|)+\sigma(k-1-|U_t|)\\
\ge&\omega(k-1-|A_j|)+\sigma(k-1-|U_t|)-\omega\sigma.
\end{align*}
We have $k-1-|A_j|<\sigma$, hence $|A_j|\ge k-\sigma$. $|A'|=|A|-|A_j|-|U_t\cap A|\le 2k-1-(k-\sigma)-\sigma=k-1$. Similarly we have $|U_t|\ge k-\omega$, $|U'|\le k-1$.
\end{proof}
By the following observation, we may assume $\tau(\omega+\sigma)+\frac{\omega+\sigma}{2}< 2k-2$.
\begin{obs}
If $\tau(\omega+\sigma)+\frac{\omega+\sigma}{2}\ge 2k-2$, then $\lambda=\tau+\omega+\sigma\ge \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$.
\end{obs}
\begin{proof}
Since $\tau, \omega, \sigma$ are all integers, we have
$$(\tau+\frac{1}{2}+(\omega+\sigma))^2=4(\tau(\omega+\sigma)+\frac{\omega+\sigma}{2})+(\tau+\frac{1}{2}-(\omega+\sigma))^2\ge 4(2k-2)+\frac{1}{4}=8k-\frac{31}{4}.$$ Therefore $\tau+\omega+\sigma\ge \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$.
\end{proof}
Given a vertex $v\in A_i$ with $1\le i\le l_R$, let $R(v)$ be the edges between $v$ and $C_i$. For a set $S\subseteq A$, let $R(S)=\cup_{v\in S}R(v)$. Similarly we define $B(v)$ and $B(S)$ for $v\in U, S\subseteq U$ in the blue graph.
\begin{center}
\resizebox{8.5cm}{!}
{
\begin{tikzpicture}
\draw (-5,2.5) ellipse (1.5 and 1);
\node at (-5,2.6) {$V-A-U$};
\filldraw [fill = gray!50](-8.15,0.5) ellipse (0.75 and 0.5);
\node at (-8.15,0.5) {$U_t \cap A$};
\draw (-7.6,0.35) ellipse (1.5 and 1);
\node at (-6.2,-0.45) {$C_j$};
\draw plot[smooth] coordinates {(-7.3,1.35) (-7.3,-0.65)};
\node at (-7.7,-0.35) {$U_t$};
\node at (-6.7,0.2) {$C_j \cap U'$};
\draw (-2.45,0.35) ellipse (1.5 and 1);
\node at (-0.6,-0.45) {$U'-C_j$};
\draw (-6.7,-2.5) ellipse (1.5 and 1);
\node at (-5.5,-1.55) {$A'$};
\fill [fill = gray!50](-3.3,-2.5) ellipse (1.5 and 1);
\fill [fill = white] (-3,-3.6) rectangle (-4.9,-1.4);
\draw (-3.3,-2.5) ellipse (1.5 and 1);
\node at (-2.1,-1.55) {$A_{j}$};
\draw plot[smooth] coordinates {(-3,-1.5) (-3,-3.5)};
\node at (-2.4,-2.5) {$A_j \cap U$};
\filldraw [fill = gray!50] (0.5,2.2) rectangle (1.5,2.8);
\node at (2.5, 2.5) {$A \cap U$};
\end{tikzpicture}
}
\end{center}
W.L.O.G., we assume $U_t\cap D_j=\emptyset$. By Claim~\ref{CLM:UtinCj}, we have $U_t\subseteq C_j\subseteq U$. We have the following claim.
\begin{claim}\label{CLM:DBCount}
$|R(A)|+|B(A_j\cap U)|+|B(U'-C_j)|\ge |E(A',A_j\cap U)|+|E(A_j,C_j)|+|E(A,U'-C_j)|+|E(A_j\cap U,V-A-U)|+|E(U_t\cap A, V-A-U)|$.
\end{claim}
\begin{proof}
We have $U'-C_j=U-U_t-(A_j\cap U)-C_j=U-C_j-(A_j\cap U)$. Hence $U=(A_j\cap U)\sqcup (U'-C_j)\sqcup C_j$. The left-hand side counts all the $AC$-type edges and $UX$-type edges except those in $B(C_j)$. All edges of $E(A',A_j\cap U)$, $E(A_j,C_j)$, $E(A, U'-C_j)$, $E(A_j\cap U,V-A-U)$ and $E(U_t\cap A, V-A-U)$ are disjoint and either $AC$-type or $UX$-type (as they are not $AA$-type or $UU$-type, with the only possible exceptions in $E(A_j, C_j)$, which are $AC$-type), and these sets do not contain edges of $B(C_j)$. Here the only non-trivial cases are the edges between $U_t\cap A$ and $U'-C_j$ in $E(A, U'-C_j)$ and the edges in $E(U_t\cap A, V-A-U)$, for which we use the fact that $X_t\subseteq A$. Hence we have the above claim.
\end{proof}
\begin{claim}
We have the following:
\begin{enumerate}
\item $\sigma\tau+\omega\tau+\omega\ge 2k-1.$
\item $|A'|=k-1$.
\end{enumerate}
\end{claim}
\begin{proof}
By definition, $E(A_j,C_j)=R(A_j)$, $A=A'\sqcup A_j\sqcup (U_t\cap A)$, so we have
$|R(A)|-|E(A_j,C_j)|=|R(A')|+|R(U_t\cap A)|$.
$|R(U_t\cap A)|-|E(U_t\cap A,V-A-U)|=\sigma(k-1-|V-A-U|)=\sigma\tau$. $|B(A_j\cap U)|-|E(A_j\cap U,V-A-U)|=\omega(k-1-|V-A-U|)=\omega\tau$.
Since $U_t\subset C_j\subset U$ and $U=U_t\sqcup (A_j\cap U)\sqcup U'$, we have $|U'-C_j|=|U|-|A_j\cap U|-|C_j|=2k-1-\omega-(k-1)=k-\omega$.
By Claim~\ref{CLM:DBCount}, we have $(|R(A)|-|E(A_j,C_j)|-|E(U_t\cap A, V-A-U)|)+(|B(A_j\cap U)|-|E(A_j\cap U,V-A-U)|)
\ge |E(A',A_j\cap U)|+(|E(A,U'-C_j)|-|B(U'-C_j)|)$.
That is,
$$|R(A')|+\sigma\tau+\omega\tau\ge |A'|\omega+(2k-1-(k-1))|U'-C_j|.$$
Therefore
$$|A'|(k-1-\omega)+\sigma\tau+\omega\tau\ge (k-\omega)k.$$
As $|A'|\le k-1$, we have $$(k-1)(k-1-\omega)+\sigma\tau+\omega\tau\ge k(k-\omega),$$
that is
$$\sigma\tau+\omega\tau+\omega\ge 2k-1.$$
If $|A'|\le k-2$, we will have
$$(k-2)(k-1-\omega)+\sigma\tau+\omega\tau\ge k(k-\omega),$$ that is $\sigma\tau+\omega\tau+2\omega\ge 3k-2$, hence $$\sigma+\omega+\tau+2\ge 2\sqrt{(\sigma+\omega)(\tau+2)}\ge 2\sqrt{3k-2}.$$
However $$\sigma+\omega+\tau+2=\lambda+2<\sqrt{8k-\frac{31}{4}}+1.5\le 2\sqrt{3k-2}.$$ Contradiction. So we should have $|A'|=k-1$.
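For completeness, the inequality $\sqrt{8k-\frac{31}{4}}+1.5\le 2\sqrt{3k-2}$ used above can be verified as follows (all quantities below are nonnegative, so squaring is legitimate):
$$\Big(\sqrt{8k-\tfrac{31}{4}}+\tfrac32\Big)^2\le 4(3k-2)
\;\Longleftrightarrow\;
3\sqrt{8k-\tfrac{31}{4}}\le 4k-\tfrac52
\;\Longleftrightarrow\;
16k^2-92k+76\ge 0,$$
and $16k^2-92k+76=4(k-1)(4k-19)\ge 0$ for every integer $k\ge 5$.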
\end{proof}
\begin{claim}
$\sigma\tau+\tau+\omega\ge k$.
\end{claim}
\begin{proof}
Among vertices in $A_j\cap U$, let $z\in U_s$ be the one with minimum $s$. Since $C_j\subset U$, we have $V-A-U\subseteq D_j$, hence by Proposition~\ref{basic prop}, $V-A-U\subset X_s$. For every vertex $u$ in $(U'-C_j)\cap U_{>s}$, since $uz$ is not in $R$, we have $u \in X_s$.
So there are at most $k-1-|V-A-U|=\tau$ vertices of $(U'-C_j)\cap U_{>s}$. Therefore there are at least $|U'-C_j|-\tau=k-\omega-\tau$ vertices in $(U'-C_j)\cap U_{\le s}$.
Since $V-A-U\subseteq Y_t$, we have $E(A\cap U_t, V-A-U)\subseteq R(A\cap U_t)$, so there are at most $\sigma\tau$ edges in $R(A\cap U_t)-E(A\cap U_t, V-A-U)$.
If $k-\omega-\tau>\sigma\tau$, then there is at least one vertex $u$ in $(U'-C_j)\cap U_{\le s}$ with no edge in $R(A\cap U_t)$. Since $u\in U'-C_j\subseteq V-A$, all the edges between $u$ and $A\cap U_t$ must be in $B$. Since $X_t\subseteq A$, we have $u\not\in X_t$.
Suppose $u\in U_{s'}$ for some $s'\le s$; then $A\cap U_t\subseteq X_{s'}$. As $u\in U'-C_j$, the edges between $u$ and $A_j$ are all in $B$ and $A_j\ge_B u$. If $s'<s$, we also have $A_j\subseteq X_{s'}$. But $|A_j\cup (A\cap U_t)|=2k-1-|A'|=k>|X_{s'}|$, a contradiction.
If $s'=s$, then $(V-A-U)\cup (A_j-U_s)\cup (A\cap U_t)\subseteq X_s$, we have $|X_s|\ge |A_j|+|A\cap U_t|+|V-A-U|-|A_j\cap U_s|\ge k+k-\tau-\omega> k$. Contradiction!
So we must have $\sigma\tau+\tau+\omega\ge k$.
\end{proof}
Now we are ready to complete the proof that $n=5k-3-(\omega+\sigma+\tau)\le 5k-2.5-\sqrt{8k-\frac{31}{4}}$.
\begin{proof}[Proof of Claim~\ref{I=S=1}]
If $\sigma\ge \omega-2$, then by $\sigma\tau+\omega\tau+\omega\ge 2k-1$ we have $(\sigma+\omega)(\tau+\frac{1}{2})\ge 2k-1-\frac{\omega-\sigma}{2}\ge 2k-2$, hence $(\sigma+\omega+\tau+\frac{1}{2})^2=4(\sigma+\omega)(\tau+\frac{1}{2})+(\sigma+\omega-\tau-\frac{1}{2})^2\ge 8k-8+\frac{1}{4}=8k-\frac{31}{4}$, since $\sigma+\omega-\tau$ is an integer.
Then we are done.
If $\sigma\le \omega-3$, since $\sigma\tau+\tau+\omega\ge k$, we have
$$2k\le (\omega+\sigma-1)\tau+(\omega+\sigma-1)+4-(\omega-\sigma-3)(\tau-1)\le (\omega+\sigma-1)(\tau+1)+4.$$
This implies $\omega+\sigma+\tau\ge \sqrt{8k-16}\ge \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$. The last inequality is true when $k\ge 10$.
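To see why the assumption $k\ge 10$ suffices here, note that since $16-\frac{31}{4}=\frac{33}{4}$,
$$\sqrt{8k-\tfrac{31}{4}}-\sqrt{8k-16}
=\frac{33/4}{\sqrt{8k-\tfrac{31}{4}}+\sqrt{8k-16}}\le\frac12
\quad\Longleftrightarrow\quad
\sqrt{8k-\tfrac{31}{4}}+\sqrt{8k-16}\ge\tfrac{33}{2},$$
and at $k=10$ the left-hand side already equals $\sqrt{72.25}+\sqrt{64}=8.5+8=16.5$ and is increasing in $k$.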
So we always have $\omega+\sigma+\tau\ge \sqrt{8k-\frac{31}{4}}-\frac{1}{2}$, and $n=5k-3-(\omega+\sigma+\tau)\le 5k-2.5-\sqrt{8k-\frac{31}{4}}$.
\end{proof}
\begin{center}
\resizebox{13.5cm}{!}
{
\begin{tikzpicture}
\filldraw [fill = gray!50] (-5,2.5) ellipse (1.5 and 1);
\node at (-5,2.6) {$V-A-U$};
\node at (-5,2.2) {$k-1-\tau$};
\fill [fill = gray!50] (-7.6,0.35) ellipse (1.5 and 1);
\fill [fill = white] (-7.3,1.35) rectangle (-6.1,-0.65);
\draw (-7.6,0.35) ellipse (1.5 and 1);
\node at (-6.2,-0.45) {$U_{t}$};
\draw plot[smooth] coordinates {(-7.3,1.35) (-7.3,-0.65)};
\node at (-6.7,0.35) {$U_t \cap A$};
\node at (-6.75,-0.05) {$\sigma$};
\node at (-8.1,0.05) {$k-1-\sigma$};
\filldraw [fill = gray!50] (-2.45,0.35) ellipse (1.5 and 1);
\node at (-1,-0.45) {$U'$};
\draw plot[smooth] coordinates {(-2.1,1.35) (-2.1,-0.65)};
\node at (-2.9,0.35) {$U'_{a}$};
\node at (-2.9,-0.05) {$\sigma\tau$};
\node at (-1.65,0.6) {$U'_{b}$};
\node at (-1.55,0.25) {$k-\omega$};
\node at (-1.65,-0.1) {$-\sigma\tau$};
\draw (-6.7,-2.5) ellipse (1.5 and 1);
\node at (-5.5,-1.55) {$A'$};
\node at (-6.7,-2.8) {$k-1$};
\filldraw [fill = gray!50] (-3.3,-2.5) ellipse (1.5 and 1);
\node at (-2.1,-1.55) {$A_{k}$};
\draw plot[smooth] coordinates {(-3,-1.5) (-3,-3.5)};
\node at (-3.8,-2.8) {$k-\sigma-\omega$};
\node at (-2.4,-2.5) {$A_k \cap U$};
\node at (-2.4,-2.9) {$\omega$};
\fill (-2.3, -2) circle(0.05);
\node at (-2.6,-2) {$z$};
\draw plot[smooth] coordinates {(-6.5,2.5) (-7.3,1.35)};
\draw plot[smooth] coordinates {(-3.5,2.5) (-2.1,1.35)};
\draw plot[smooth, tension = 1.5] coordinates {(-8.6,1.1) (-5.2,4) (-2.1,1.35)};
\draw[dashed] plot[smooth] coordinates {(-6.15,0.55) (-3.9,0.55)};
\node at (-5.9,0.75) {$\tau$};
\node at (-4.1,0.75) {$1$};
\draw plot[smooth] coordinates {(-7.3,-0.65) (-3,-1.5)};
\draw plot[smooth] coordinates {(-2.1,-0.65) (-7,-1.5)};
\draw[dashed] plot[smooth, tension = 1.7] coordinates {(-7.7,-3.25) (-5,-4) (-2.3,-3.25)};
\node at (-7.4,-4) {$\omega-1$};
\node at (-2.5,-4) {$k-1-\tau$};
\draw[dashed] plot[smooth, tension = 1] coordinates {(-2.3,-2) (-0.8,-3.05) (-2,-4.5) (-5,-4)};
\node at (-2,-4.8) {$(\omega-1)\tau$};
\node at (-5,-4.7) {\LARGE $R$};
\draw [fill = gray!50] (5,2.5) ellipse (1.5 and 1);
\node at (5,2.6) {$V-A-U$};
\node at (5,2.2) {$k-1-\tau$};
\filldraw [fill = gray!50] (2.4,0.35) ellipse (1.5 and 1);
\node at (3.8,-0.45) {$U_{t}$};
\draw plot[smooth] coordinates {(2.7,1.35) (2.7,-0.65)};
\node at (3.3,0.35) {$U_t \cap A$};
\node at (3.25,-0.05) {$\sigma$};
\node at (1.9,0.05) {$k-1-\sigma$};
\draw (7.55,0.35) ellipse (1.5 and 1);
\node at (9,-0.45) {$U'$};
\draw plot[smooth] coordinates {(7.9,1.35) (7.9,-0.65)};
\node at (7.1,0.35) {$U'_{a}$};
\node at (7.1,-0.05) {$\sigma\tau$};
\node at (8.35,0.6) {$U'_{b}$};
\node at (8.45,0.25) {$k-\omega$};
\node at (8.35,-0.1) {$-\sigma\tau$};
\filldraw [fill = gray!50] (3.3,-2.5) ellipse (1.5 and 1);
\node at (4.5,-1.55) {$A'$};
\node at (3.3,-2.8) {$k-1$};
\fill [fill = gray!50] (6.7,-2.5) ellipse (1.5 and 1);
\fill [fill = white] (7,-1.5) rectangle (8.2,-3.5);
\draw (6.7,-2.5) ellipse (1.5 and 1);
\node at (7.9,-1.55) {$A_{k}$};
\draw plot[smooth] coordinates {(7,-1.5) (7,-3.5)};
\node at (6.2,-2.8) {$k-\sigma-\omega$};
\node at (7.6,-2.5) {$A_k \cap U$};
\node at (7.6,-2.9) {$\omega$};
\fill (7.7, -2) circle(0.05);
\node at (7.4,-2) {$z$};
\draw plot[smooth, tension = 1] coordinates {(4.8,1.5) (4.2,-0.3) (3.7,-1.55)};
\draw plot[smooth, tension = 1] coordinates {(5.2,1.5) (6,-0.3) (7,-1.5)};
\draw plot[smooth] coordinates {(4.5,-1.9) (5.4,-1.9)};
\draw plot[smooth, tension = 1.5] coordinates {(2.9,1.3) (5.2,4) (8.55,1.1)};
\draw[dashed] plot[smooth, tension = 1.5] coordinates {(3,1.3) (5,3.8) (7.55,1.35)};
\node at (3.65,1.5) {$\sigma\tau-\tau$};
\node at (7.3,1.6) {$\sigma-1$};
\draw plot[smooth] coordinates {(2.1,-0.65) (3,-1.5)};
\draw plot[smooth] coordinates {(7.9,-0.65) (7,-1.5)};
\draw[dashed] plot[smooth, tension = 1.7] coordinates {(2.3,-3.25) (5,-4) (7.7,-3.25)};
\node at (2.6,-4) {$1$};
\node at (7.1,-4) {$\tau$};
\draw[dashed] plot[smooth, tension = 1] coordinates {(7.7,-2) (9.2,-3.05) (8,-4.5) (5,-4)};
\node at (8,-4.8) {$k-1-(\omega-1)\tau$};
\node at (5,-4.7) {\LARGE $B$};
\draw (-4.5,-5.7) rectangle (-3.5,-6.3);
\node at (-2.3, -6) {independent};
\filldraw [fill = gray!50] (-0.5,-5.7) rectangle (0.5,-6.3);
\node at (1.5, -6) {complete};
\node at (4.5, -6) {$z = a_k^{\omega} = u_{\sigma\tau+1}$};
\end{tikzpicture}
}
\end{center}
\section{The counterexample}
In this section, we illustrate a sharp example for Theorem \ref{main} (2).
That is, for given integers $k$ and $n$,
where $4k-3 \le n = \lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}} \rfloor$,
we construct a 2-edge-colored $K_n$,
which contains no $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
Note that we may assume $k \ge 6$;
otherwise, the example with $n = 4k-4$ vertices,
which was mentioned in \cite{BG08} and \cite{Mat83},
can serve as the counterexample we desire.
Let $G = (V, E)$ be a complete graph on $n$ vertices.
We demonstrate the coloring of our example
by specifying and verifying the two strong $(2k-2,k)$-decompositions:
$((A_i, C_i, D_i))_{i=1}^{l_R}$ for the red graph $G_R$,
and $((U_s, X_s, Y_s))_{s=1}^{l_B}$ for the blue graph $G_B$.
We then obtain $G_R$ and $G_B$
by coloring all AA-type and AC-type edges red,
and all UU-type and UX-type edges blue,
and we justify that every edge in $G$ receives at least one color.
Note that some of the edges might be colored red and blue simultaneously.
The example was inspired by the case $|I^*| = |S^*| = 1$ in our proof.
Suppose $5k-3-n=\lceil \sqrt{8k-\frac{31}{4}}-\frac{1}{2}\rceil=4x+b$, where $x, b\in \mathbb{Z}$ with $b\in \{-2,-1,0, 1\}$. That is, $x=\lceil\frac{ \sqrt{8k-\frac{31}{4}}-\frac{3}{2}}{4}\rceil$, and $b=5k-3-n-4x$.
Note that $x \ge 2$, since $k \ge 6$.
Let $\sigma=x-1, \omega=x+1, \tau=2x+b$. Then $5k-3-n=\sigma+\omega+\tau$.
We have $$k\le \frac{(4x+b+\frac{1}{2})^2+\frac{31}{4}}{8}=x(2x+b+\frac{1}{2})+\frac{(b+\frac{1}{2})^2+\frac{31}{4}}{8}\le x(2x+b+\frac{1}{2})+\frac{5}{4}.$$
Therefore $k\le \lfloor x(2x+b)+\frac{x}{2}+\frac{5}{4}\rfloor\le x(2x+b)+\frac{x}{2}+1=\frac{\sigma+\omega}{2}\tau+\frac{\omega+1}{2}.$ Furthermore $x(2x+b)+\frac{x}{2}+1\le x(2x+b)+x+1=(\sigma+1)\tau+\omega$. Hence
we will have $\sigma\tau+\tau+\omega\ge k$ and $\sigma\tau+\omega\tau+\omega\ge 2k-1$.
Moreover, this implies $\omega\tau \ge k-1$
since $\omega\tau + 1 = \sigma\tau + 2\tau + 1 \ge \sigma\tau + \tau + \omega \ge k$.
On the other hand, $\sqrt{8k-\frac{31}{4}}+\frac{1}{2}> 4x+b$. Therefore, when $x \ge 2$,
$k>\frac{(4x+b-\frac{1}{2})^2+\frac{31}{4}}{8}=x(2x+b-\frac{1}{2})+\frac{(b-\frac{1}{2})^2+\frac{31}{4}}{8}=((x-1)(2x+b)+x)+\frac{x}{2}+b+\frac{b^2-b+8}{8}
= \sigma\tau+\omega-1+\frac{x}{2}+\frac{b^2+7b+8}{8}
= \sigma\tau+\omega+\frac{x}{2}+\frac{b^2+7b}{8}$.
If $(x, b) \ne (2, -2)$,
then $k> \sigma\tau+\omega+\frac{x}{2}+\frac{b^2+7b}{8}
\ge \sigma\tau+\omega$.
Furthermore, when $(x, b) = (2, -2)$,
$k\ge6>5=\sigma\tau+\omega$.
Hence we have $k \ge \sigma\tau+\omega+1$.
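As a concrete illustration of these parameters (not needed for the construction itself), take $k=16$: then $\sqrt{8k-\frac{31}{4}}=\sqrt{120.25}\approx 10.97$, so $n=\lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}}\rfloor=66$ and $5k-3-n=11=4x+b$ with $x=3$ and $b=-1$. Hence $\sigma=2$, $\omega=4$, $\tau=5$, and indeed $\sigma\tau+\tau+\omega=19\ge k$, $\sigma\tau+\omega\tau+\omega=34\ge 2k-1$ and $k=16\ge \sigma\tau+\omega+1=15$.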
Let $t = k-\omega+2$.
We first define four disjoint vertex sets: $A'$, $A_k$, $U'$, and $U_t$.
Let $A' = \{a_1, \dots, a_{k-1}\}$,
$A_k = \{a_k^1, \dots, a_k^{k-\sigma}\}$,
$U' = U'_{a} \cup U'_{b}$
where $U'_{a} = \{u_1, \dots, u_{\sigma\tau}\}$
and $U'_{b} = \{u_{\sigma\tau+2}, \dots, u_{k-\omega+1}\}$,
and $U_t = \{u_t^1, \dots, u_t^{k-1}\}$.
Note that $U'_{b}$ is well-defined and non-empty
since $k \ge \sigma\tau + \omega + 1$.
Moreover, we let $A = A' \cup A_k \cup \{u_t^1, \dots, u_t^\sigma\}$,
and $U = U' \cup U_t \cup \{a_k^1, \dots, a_k^\omega\}$.
Here $A$ and $U$ are both well-defined,
since $\sigma \le k-1$, and $\omega \le k-\sigma\tau-1 \le k-\sigma$.
Note that $A \cap U$ consists of two parts:
$A_k \cap U = \{a_k^1, \dots, a_k^\omega\}$ and $U_t \cap A = \{u_t^1, \dots, u_t^\sigma\}$.
For convenience, let $z = a_k^{\omega} = u_{\sigma\tau+1}$.
Besides $A'$, $A_k$, $U'$, and $U_t$, we still have
$5k-3-(\sigma+\omega+\tau)-(k-1)-(k-\sigma)-(k-\omega)-(k-1) = k-1-\tau$
vertices left.
Let $V-A-U = \{v_1, \dots, v_{k-1-\tau}\}$.
We now set $A_i, C_i, D_i$ for $i \in [k+\sigma]$ to construct $G_R$.
For $i \in [k-1]$,
we set $A_i = \{a_i\}$,
and $C_i = U' \cup (A_k \cap U \setminus \{a_k^{\lceil\frac{i}{\tau}\rceil}\})$.
Note that $\omega-1 = \sigma+1 \le \lceil\frac{k-1}{\tau}\rceil \le \omega$,
since $\sigma\tau+\omega \le k-1 \le \omega\tau$.
We set $C_k = U_t$.
For $i=k+i'$ where $i' \in [\sigma]$,
we set $A_i = \{u_t^{i'}\}$,
and let $C_i$ contain all vertices in $V-A-U$,
and $\tau$ vertices in $U'_a$,
such that $C_i \cap U'_a=\{u_{(i'-1)\tau+1}, \dots, u_{i'\tau}\}$.
We set $D_i = V(G) \setminus (\bigcup_{i'=1}^i A_{i'} \cup C_i)$ for $i \in [k+\sigma]$.
The red graph $G_R$ consists of all AA-type and AC-type edges.
Next, we set $U_s, X_s, Y_s$ for $s \in [k+1]$ to construct $G_B$.
For $s \in [t-1]$,
we set $U_s = \{u_s\}$.
For $s \in [\sigma\tau]$,
where $u_s \in U'_{a}$,
let $X_s = A_k \cup (U_t \cap A \setminus \{u_t^{\lceil \frac{s}{\tau}\rceil}\})$.
Notice that for each $s \in [\sigma\tau]$,
$u_su_t^{\lceil \frac{s}{\tau}\rceil}$ is always an AC-type edge.
For $s = \sigma\tau+1$,
where $u_s = z$,
let $X_{\sigma\tau+1} = (V-A-U) \cup U'_{b} \cup \{a_{\tau(\omega-1)+1}, \dots, a_{k-1}\}$.
Note that $|X_{\sigma\tau+1}| = (k-1-\tau)+(k-\omega-\sigma\tau)+(k-1-\tau(\omega-1))
= 3k - 2 - \sigma\tau - \omega\tau - \omega
\le k-1$,
as $\sigma\tau+\omega\tau+\omega\ge 2k-1$.
For $s \in [\sigma\tau+2, t-1]$,
where $u_s \in U'_{b}$,
let $X_s = A_k \cup (U_t \cap A \setminus \{z\})$.
We set $X_t = A'$.
For $s = t+s'$ where $s' \in [\omega-1]$,
we set $U_s = \{a_k^{s'}\}$,
and let $X_s$ contain all vertices in $V-A-U$,
and $\tau$ vertices in $A'$,
such that $X_s \cap A'=\{a_{(s'-1)\tau+1}, \dots, a_{s'\tau}\}$.
Notice that this covers all the non-neighbours of $a_k^{s'}$ in $A'$ in $G_R$.
We set $Y_s = V(G) \setminus (\bigcup_{s'=1}^s U_{s'} \cup X_s)$ for $s \in [k+1]$.
The blue graph $G_B$ consists of all UU-type and UX-type edges.
By definition,
it is not difficult to see that $((A_i, C_i, D_i))_{i=1}^{k+\sigma}$ is a strong $(2k-2, k)$-decomposition of $G_R$,
and $((U_s, X_s, Y_s))_{s=1}^{k+1}$ is a strong $(2k-2, k)$-decomposition of $G_B$.
Moreover, we can verify $R \cup B$ covers all edges.
Hence, there does not exist a $k$-connected monochromatic subgraph with at least $n-2k+2$ vertices.
Thus, the example we propose confirms Theorem \ref{main}(2).
\section{Conclusion}
In this paper,
we presented a counterexample of Bollob\'{a}s and Gy\'{a}rf\'{a}s' conjecture with
$n = \lfloor 5k-2.5-\sqrt{8k-\frac{31}{4}} \rfloor$.
We also verified the conjecture for $n > 5k-2.5-\sqrt{8k-\frac{31}{4}}$ for $k \ge 16$.
We believe the requirement of $k\ge 16$ can be relaxed.
We also provided a shorter proof for $n \ge 5k-\min\{\sqrt{4k-2}+3,0.5k+4\}$ and a simpler counterexample with order $n=5k-3-2\lceil\sqrt{2k-1}\rceil$ in an earlier version of this paper, which can be found on arXiv \cite{2008.09001}.
Recall that Matula showed that when $n> (3+\sqrt{11/3})(k-1)\approx 4.91k$, any 2-edge-coloring of $K_n$ has a $k$-connected monochromatic subgraph. Even though our requirement on $n$ is slightly stricter, our result additionally guarantees the order of the $k$-connected monochromatic subgraph. We believe our technique may also be enough to improve Matula's result.
\begin{comment}
More generally, consider the statement
`` For $k, n \in \mathbb{Z}^+$ with $n \ge g(k)$,
every 2-edge-colored $K_n$ must contain a $k$-connected monochromatic subgraph
with at least $n-f(k)$ vertices".
For a given $f(k) \ge 2k-2$, what is the minimum $g(k)$ for the statement to be true?
Note that when $f(k) \le 2k-1$, the example $B(n, k)$ in \cite{BG08} can always serve as a counterexample of the statement.
On the other hand, if $g(k) \in [4k-3, 5k-4]$ is fixed, what is the correlated $f(k)$?
In other words, given the number of vertices $n$,
what is the order of the largest $k$-connected monochromatic subgraph we can guarantee in a 2-edge-colored $K_n$?
\end{comment}
Moreover, our result improves the bounds for some other related problems.
For example, since every $k$-connected graph has minimum degree at least $k$,
Theorem \ref{main} (1) leads to the following corollary:
\begin{coro}
If $n \ge 5k-2.5-\sqrt{8k-\frac{31}{4}}$ where $k$ is sufficiently large,
then for any 2-edge-colored $K_n$,
there exists a monochromatic subgraph with minimum degree at least $k$,
which contains at least $n-2k+2$ vertices.
\end{coro}
This problem concerning monochromatic large subgraphs with a specified minimum degree in edge-colored graphs
has been studied by Caro and Yuster \cite{CY03}.
By applying their conclusion on 2-edge-colored complete graphs,
the corollary holds when $n \ge 7k+4$,
which is covered by our result.
Furthermore,
there are some open problems related to Bollob\'{a}s and Gy\'{a}rf\'{a}s' conjecture,
such as the multicoloring version of the conjecture,
and forcing large highly connected subgraphs with given independence number.
We believe the decomposition and calculation technique we introduced in this paper could also be applied to improve the results of those topics.
\section{Acknowledgments}
This work was partially supported by the National Natural Science Foundation of China [Grant No.11931006, 12201390], the National Key R\&D Program of China [Grant No. 2020YFA0713200, 2022YFA1006400], and the Shanghai Dawn Scholar Program [Grant No. 19SG01].
We would like to show our gratitude to Prof. Xingxing Yu, Georgia Institute of Technology,
for sharing his pearls of wisdom with us during the course of this research.
We are immensely grateful to Henry Liu, Sun Yat-sen University,
for his comments on an earlier version of the manuscript.
We would also like to express our thanks to Chaoliang Tang, Fudan University for his careful proofreading.
\end{document}
\begin{document}
\title{Maximum number of sum-free colorings in finite abelian groups}
\begin{abstract}
An $r$-coloring of a subset $A$ of a finite abelian group $G$ is called
sum-free if it does not induce a monochromatic Schur triple, i.e., a triple of elements $a,b,c\in A$ with $a+b=c$.
We investigate $\kappa_{r,G}$, the maximum number of sum-free $r$-colorings admitted by subsets of $G$, and our results show a close relationship between
$\kappa_{r,G}$ and largest sum-free sets of $G$.
Let $G$ be a sufficiently large abelian group of type~I, i.e., such that $|G|$ has a prime divisor $q$ with $q\equiv 2\pmod 3$.
For $r=2,3$ we show that a subset $A\subset G$ achieves $\kappa_{r,G}$ if and only if $A$ is a largest sum-free set of $G$.
For even order $G$ the result extends to $r=4,5$, where the phenomenon persists only if $G$ has a unique largest sum-free set.
On the contrary, if the largest sum-free set in $G$ is not unique then
$A$ attains $\kappa_{r,G}$ if and only if it is the union of two largest sum-free sets (in case $r=4$)
and the union of three (``independent'') largest sum-free sets (in case $r=5$).
Our approach relies on the so called container method and can be extended to larger $r$ in case $G$ is of even order and contains sufficiently many largest sum-free sets.
\end{abstract}
\section{Introduction}
A \emph{Schur triple} in an abelian group $G$ is a triple $(a,b,c)$ with $a+b=c$, and
a set $A\subset G$ is \emph{sum-free} if $A$ contains no such triple.
Given a not necessarily sum-free set $A\subset G$, a coloring of the elements of $A$ with $r$ colors is called a
\emph{sum-free $r$-coloring} if each of the color classes is a sum-free set. Sum-free colorings are among the classical objects studied in extremal combinatorics
and can be traced back to Schur's theorem~\cite{Schur}, one of the first results in Ramsey theory.
In this paper we investigate the maximum number of sum-free colorings admitted by subsets of a given finite abelian group.
This is a variant of a problem posed by Erd\H{o}s and Rothschild~\cite{Erdosproblem1, Erdosproblem2} for graphs; see Section~\ref{sec:related}.
Let $\kappa_r(A)$ denote the number of all sum-free $r$-colorings of $A\subset G$ and let the maximum over all $A\subset G$ be denoted by \[\kappa_{r,G}=\max\{\kappa_r(A)\colon A\subset G\}.\]
We are interested in the questions of how large $\kappa_{r,G}$ can be, given $r\geq 2$ and $G$, and of which subsets of $G$ achieve the maximum.
A straightforward lower bound for $\kappa_{r,G}$ is obtained by
considering a largest sum-free set $B\subset G$, which gives $\kappa_{r,G}\geq r^{|B|}$.
The size of largest sum-free sets of $G$, denoted by $\mu(G)$, is a classical and well-understood quantity which depends only on the factorization of $G$.
The characterization of $\mu(G)$ distinguishes the following three types.
\begin{definition}
Let $G$ be a finite abelian group of order $n$.
If $n$ has a prime divisor $q$ such that $q \equiv 2 \pmod 3$,
then we say that $G$ is a type~I group. In addition, $G$ is called type I($q$) if $q$ is the smallest such prime.
If $G$ is not of type I and $3|n$ then we say that $G$ is of type II. Otherwise $G$ is called a type III group.
\end{definition}
For groups $G$ of type I and type II the quantity $\mu(G)$ was determined by Diananda and Yap~\cite{DianandaYap}.
The problem for groups of type III appears to be far more complicated and was only resolved decades later by Green and Ruzsa~\cite{GreenRuzsa} (see \cite{RhemtullaStreet,Yap1,Yap2} for partial results
for type III groups and~\cite{BPR} for the characterization of the largest sum-free sets therein). The results in~\cite{DianandaYap,GreenRuzsa}
determine $\mu(G)$ as follows:
\[\mu(G)=\begin{cases}\left(\frac13+\frac1{3q}\right)n&\text{if $G$ is of type I($q$),}\\
\frac n3 &\text{if $G$ is of type II,}\\
\left(\frac13-\frac1{3m}\right)n&\text{if $G$ is of type III and $m$ is the largest order of an element in $G$.}
\end{cases}\]
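To illustrate the three types with small groups (these examples are easy to verify directly and are not needed later): $\mathbb{Z}_{10}$ is of type I($2$) and the set of odd residues $\{1,3,5,7,9\}$ is sum-free of size $5=(\frac13+\frac16)\cdot 10$; $\mathbb{Z}_9$ is of type II and the coset $\{1,4,7\}$ of the subgroup $\{0,3,6\}$ is sum-free of size $3=\frac93$; and $\mathbb{Z}_7$ is of type III with $m=7$, where $\{2,3\}$ is a largest sum-free set, of size $2=(\frac13-\frac1{21})\cdot 7$.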
For arbitrary abelian groups, we have the following upper bounds which are asymptotically sharp in the exponent for $r=2,3$.
\begin{proposition}\label{rem:contr}
Let $r\geq 2$ and let $G$ be a finite abelian group of order $n$. Then
\[\log_2 (\kappa_{2,G})\leq {\mu(G)+ O(n(\log n)^{-\frac 1{45}})},\]
and for $r\geq 3$
\[\log_3(\kappa_{r,G}) \leq \frac{r \mu(G)}3 + O(n(\log n)^{-\frac 1{45}}).\]
\end{proposition}
Although $\mu(G)$ is known for all finite abelian groups, it is safe to claim that those of type~I are much better understood (see Section~\ref{sec:groups}).
This additional knowledge allows us to completely resolve the problem for two and three colors in groups of type I of sufficiently large order.
In these cases the straightforward lower bound from above is indeed sharp and only the largest sum-free sets achieve the maximum.
\begin{theorem}
\label{thm:main23}
Let $r \in \{2,3\}$, $q \in \mathbb{N}$ and
let $G$ be a type~I($q$) group of sufficiently large order.
Then $\kappa_{r,G}=r^{\mu(G)}$ and $\kappa_r(A)=\kappa_{r,G}$ if and only if $A$ is a largest sum-free set in~$G$.
\end{theorem}
For more than three colors this phenomenon does not persist in general and the problem becomes considerably more complicated. We therefore restrict our consideration to
type I(2) groups, i.e., those of even order.
For these groups our second result resolves the problem for $r=4,5$.
Further, we shall see in the course of the paper that our method can be extended to more than five colors.
We refer to Section~\ref{sec:concludingremarks} for further discussions.
Before stating the result we note that two largest sum-free sets $B_1, B_2$ in an abelian group of even order give rise to another
through $B_3=B_1\triangle B_2$ (see Corollary~\ref{remark:reduction}). In particular, $B_3\subset B_1\cup B_2$ holds in this case and there
is no even order group with exactly two largest sum-free sets. To distinguish the two possible cases we call a tuple $(B_1,\dots, B_t)$
of largest sum-free sets \emph{independent}, if none of the
$B_i$'s is contained in the union of the remaining ones, or \emph{dependent}, if the opposite holds.
\begin{theorem}\label{thm:main45}
Let $G$ be a sufficiently large group of even order.
If $G$ contains a unique largest sum-free set then this set and only this set maximizes the number of sum-free $r$-colorings for
$r=4,5$. Otherwise
\begin{itemize}
\item $A\subset G$ maximizes the number of sum-free $4$-colorings if and only if
$A$ is the union of two largest sum-free sets.
\item $A\subset G$ maximizes the number of sum-free $5$-colorings if and only if
$A$ is the union of three largest sum-free sets $B_1,B_2, B_3$.
Moreover, if $(B_1,B_2,B_3)$ is independent, then $\kappa_5(A)=(1+o(1))181440\cdot6^{n/2}$ and if $(B_1,B_2,B_3)$ is dependent, then $\kappa_5(A)=(1+o(1))90\cdot6^{n/2}$.
\end{itemize}
\end{theorem}
To emphasize the last point of the theorem note that the number of sum-free $5$-colorings admitted by
an independent triple and that of a dependent one are within one another by a multiplicative
constant independent of $n$. In contrast, the stability type Theorem~\ref{thm:stability5} implies that any set which differs from these extremal configurations by
$\Omega\big(n(\log n)^{-1/27}\big)$ elements admits exponentially fewer
sum-free $5$-colorings. This phenomenon neither appears for $r=4$ nor for $r=6$ or $r=7$ (see Section~\ref{sec:concludingremarks}).
\subsection{Related results}\label{sec:related}
Problems analogous to the ones considered in this paper were investigated for many other discrete structures (see, e.g.,
\cite{Yuster,ABKS,PikhurkoYilma, LPRS,LPS,LefmannPerson,HKL}).
Among them the one concerning clique-free edge colorings of graphs is the most prominent one, which moreover seems closest to the problem studied here
due to the relationship between triangles and Schur triples\footnote{In $\mathbb{F}_2^n$, for example, consider for given $A\subset \mathbb{F}_2^n$ the Cayley graph $G_A$ which consists of the vertex set $\mathbb F_2^n$ and in which
$\{a,b\}$ forms an edge if and only if $a+b\in A$. A triangle $a,b,c$ in $G_A$ then corresponds to the Schur triple $a+b$, $b+c$ and $c+a=c-a$ in $A$.}.
This problem was raised by Erd\H{o}s and Rothschild in~\cite{Erdosproblem1} (see also \cite{Erdosproblem2}) and in~\cite{Yuster} Yuster
showed that, among all graphs on $n$ vertices, only the largest triangle-free graphs
maximize the number of triangle-free $2$-colorings.
Using Szemer\'edi's regularity lemma Alon, Balogh, Keevash and Sudakov~\cite{ABKS} generalized the result to $r=2, 3$ colors and cliques $K_k$ of size $k\geq 3$. In this case
the Tur\'an graphs $T_{k-1}(n)$, i.e., the balanced complete $(k-1)$-partite graphs on $n$ vertices, are the unique graphs which attain
the maximum number of $K_k$-free $r$-colorings.
Similar to sum-free colorings this phenomenon does not persist for $r>3$ and the question becomes significantly harder.
Building on the work of Alon et al., Pikhurko and Yilma~\cite{PikhurkoYilma} determine the unique maximizers
for $(r,k)=(4,3)$ and for $(r,k)=(4,4)$. These turn out to be the Tur\'an graphs
$T_{4}(n)$ for $k=3$ and $T_{9}(n)$ for $k=4$.
For any other pair $(r,k)$, in particular for $r=5$ and $k=3$, the problem remains open.
\section{Outline of the proofs and stability theorems}
One possible approach to our problem is to use Green's regularity lemma for abelian groups~\cite{GreenRL}.
In various analogous contexts regularity lemmas have proven to be a suitable tool~\cite{ABKS,PikhurkoYilma,LPRS,LPS,LefmannPerson}.
While this may work well here for groups with many subgroups such
as $\mathbb F_p^n$ the technical difficulties are considerable for those lacking subgroups.
A novel aspect of our work is to avoid these difficulties by employing the so-called container method.
For sum-free sets this comes in the form of a result by Green and Ruzsa~\cite{GreenRuzsa}
which we state in a slightly modified form.
We also note that instead of the results of Green and Ruzsa one could also use the container results of
Balogh, Morris and Samotij in~\cite{BMS}, those of Saxton and Thomason in~\cite{SaxtonThomason}, or a version by Alon et al.\ in~\cite{ABMS} which is closely related to that in~\cite{BMS}.
\begin{theorem}[Proposition~2.1 in~\cite{GreenRuzsa}]\label{thm:container}
Let $G$ be a finite abelian group of sufficiently large order~$n$.
For every subset $A\subset G$ there is a family $\mathcal{F}=\mathcal{F}(A)$ of subsets of
$A$ (called container family of~$A$) with the following properties
\begin{enumerate}
\item \label{it:container1}$\log_2 |\mathcal{F}|\leq n(\log n)^{-1/18}$;
\item \label{it:container2}Every sum-free set $I\subset A$ is contained in some $F\in\mathcal{F}$;
\item \label{it:container3}If $F\in\mathcal{F}$ then $F$ contains at most $n^2(\log n)^{-1/9}$ Schur triples.
\end{enumerate}
\end{theorem}
Roughly speaking the theorem states that all sum-free sets in $A$ can be ``captured'' by a small family of almost sum-free sets.
The result of Green and Ruzsa is stated for $A=G$,
however, we obviously obtain the family $\mathcal{F}$ as above by taking the intersection of each $F$ with $A$. Here and in what follows
the dependence on $A$ is regularly suppressed as it is clear from the context.
With Theorem~\ref{thm:container} as the starting point we make the following simple but crucial observation.
\begin{observation}
\label{obs:PhiInContainer}
Let $\mathcal{F}=\mathcal{F}(A)$ be a container family as in Theorem~\ref{thm:container} and let $\Phi_r(A)$ denote the set of all sum-free $r$-colorings of $A$.
To each $\varphi\in\Phi_r(A)$ assign a tuple $(F_1,\dots,F_r) \in \mathcal{F}^r$ such that $\varphi^{-1}(i) \subseteq F_i$ for every $i \in [r]$ and
let $\Phi(F_1,\dots,F_r)$ denote the set of all $\varphi\in\Phi_r(A)$ assigned to $(F_1,\dots,F_r)$.
Note that this assignment is possible due to~(\ref{it:container2}) of Theorem~\ref{thm:container} and we have
\begin{align}\label{eq:PhiInContainer1}\Phi_r(A)=\bigcup_{(F_1,\dots,F_r)\in\mathcal{F}^r}\Phi(F_1,\dots,F_r).\end{align}
Further, let $n_k$ denote the number of elements in $F_1\cup\dots\cup F_r=A$ which are contained in exactly~$k$ sets of $(F_1,\dots,F_r)$.
Then \begin{align}\label{eq:PhiInContainer2}|\Phi(F_1,\dots,F_r)|\leq \prod_{k\in[r]}k^{n_k}\qquad \text{and} \qquad \sum_{k\in[r]}k\cdot n_k=\sum_{k\in[r]} |F_k|,\end{align}
where the first inequality holds since an element contained in exactly $k$ of the sets $F_1,\dots,F_r$ can receive at most $k$ colors, and the second identity follows by double counting.
\end{observation}
As the number of $r$-tuples $(F_1,\dots,F_r)$ is at most $|\mathcal{F}|^r\leq 2^{rn(\log n)^{-1/18}}\ll r^{\mu(G)}\leq \kappa_{r,G}$,
a set~$A$ which admits about as many sum-free $r$-colorings as $\kappa_{r,G}$ must give rise to a ``substantial''
$r$-tuple $(F_1,\dots,F_r)$, i.e., one for which $|\Phi(F_1,\dots,F_r)|$ is about as large as $\kappa_{r,G}$.
This information will be used to derive stability type results which form an important step in the proofs.
The case $r=2,3$ reads as follows.
\begin{theorem}
\label{thm:stability23}
Suppose that $r\in\{2,3\}$, $q\in\mathbb{N}$ and
$0<\varepsilon< \frac1{(q+1)}$.
Let $G$ be a type~I($q$) group of sufficiently large order $n$ and let
$A\subset G$ be such that $\kappa_r(A)>r^{\mu(G)-\frac{\varepsilon n}{200}}$. Then there is a largest sum-free set $B\subset G$
such that $|A\setminus B|<\varepsilon n$.
\end{theorem}
For $r=4,5$ and even order groups we will need a stronger notion of stability which is harder to establish. One reason for the complication is that in
these cases the straightforward lower bound $\kappa_{r,G}\geq r^{\mu(G)}$ is far from
best possible, as shown by the following.
\begin{proposition}
\label{prop:lowerbounds}
Let $G$ be an abelian group of even order with at least two largest sum-free sets, and let $B_1,B_2,B_3$ be largest sum-free sets in $G$ with $|\{B_1,B_2,B_3\}|\geq 2$. Then
\[\kappa_{4}(B_1\cup B_2) \geq (3\sqrt 2)^{n/2}\qquad\text{ and }\qquad \kappa_{5}(B_1\cup B_2\cup B_3) \geq 6^{n/2}.\]
\end{proposition}
Proposition~\ref{prop:lowerbounds} will be proven at the end of Section~\ref{sec:groups}. We continue with the description of the structure of ``substantial'' tuples,
i.e., those $(F_1,\dots,F_r)$ such that $|\Phi(F_1,\dots,F_r)|$ is about as large as the bounds from above. The statement
of the results requires some notation.
Let $G$ be a finite abelian group of even order and
let $\mathcal{B}=(B_1,\dots,B_t)$ be an ordered tuple of not necessarily distinct largest sum-free sets of $G$.
For a largest sum-free set $B$ let $B^1=B$ and let $B^0=G\setminus B$ be its complement.
For an $\boldsymbol{\eps}=(\varepsilon_1,\dots,\varepsilon_t)\in\{0,1\}^t$ define the \emph{atom} $\mathcal{B}(\boldsymbol{\eps})$ via
\[\mathcal{B}(\boldsymbol{\eps})=\bigcap_{i\in[t]}B_i^{\varepsilon_i}.\]
If $\sum_{i\in[t]}\varepsilon_i=k$ then
$\mathcal{B}(\boldsymbol{\eps})$ is referred to as a $k$-\emph{atom}. Hence, the elements in a $k$-atom are contained in exactly $k$ of the $B_i$'s.
Finally, we say that $\mathcal{B}=(B_1,\dots,B_t)$ consists of a collection of certain atoms if
these atoms partition $\cup_{i\in[t]} B_i$.
For four colors the intersection structure of ``substantial'' tuples are identified as follows.
\begin{theorem}
\label{thm:stability4}
Given an abelian group $G$ of sufficiently large even order~$n$.
Let~$A \subseteq G$ and let $\mathcal{F}=\mathcal{F}(A)$ be a container family as in Theorem~\ref{thm:container}.
If $B$ is the unique largest sum-free set in $G$ and $A$ satisfies $\kappa_4(A)>3.999^{n/2}$, then $|A\setminus B|<12 n(\log n)^{-1/27}$.
If $G$ has at least two largest sum-free sets
and the tuple $(F_1,\dots,F_4)\in\mathcal{F}^4$ satisfies \[|\Phi(F_1,\dots,F_4)|> \left(3\sqrt{2}-\frac1{25}\right)^{n/2},\]
then there exist three largest sum-free sets $B_1, B_2$ and $B_3=B_1\triangle B_2$
and a function \mbox{$f:[4]\to [3]$} such that the following holds
\begin{itemize}
\item $|F_i\setminus B_{f(i)}| <3 n (\log n)^{-1/27}$ for all $i\in[4]$, and
\item $(B_{f(1)},\dots, B_{f(4)})$ consists of one $2$-atom, and two $3$-atoms.
\end{itemize}
\end{theorem}
To formulate the corresponding result for five colors, we first note that
for an independent triple of largest sum-free sets $(B_1,B_2, B_3)$, there are seven largest sum-free sets contained in $B_1\cup B_2\cup B_3$.
This will be shown in Section~\ref{sec:groups} (see Corollary~\ref{remark:reduction}).
The structure of substantial tuples for five colors then reads as follows.
\begin{theorem}
\label{thm:stability5}
Let $G$ be an abelian group of sufficiently large even order~$n$. Let~$A \subseteq G$
and let $\mathcal{F}$ be a container family of $A$ as in Theorem~\ref{thm:container}.
If $B$ is the unique largest sum-free set in $G$,
and $A$ satisfies $\kappa_5(A)>4.999^{n/2}$, then $|A\setminus B|<15 n(\log n)^{-1/27}$.
If $G$ has at least two largest sum-free sets
and the tuple $(F_1,\dots,F_5)\in\mathcal{F}^5$ satisfies \[|\Phi(F_1,\dots,F_5)|> 5.9^{n/2},\] then
there are
three largest sum-free sets $B_1, B_2, B_3$ of $G$ such that one of the following holds.
\begin{enumerate}
\item\label{it:stability51}
$B_3= B_1 \triangle B_2$ and there is a function $f: [5]\to[3]$ such that
\begin{itemize}
\item $|F_i\setminus B_{f(i)}|<3 n(\log n)^{-1/27}$ for all $i\in[5]$ and
\item $(B_{f(1)},\dots, B_{f(5)})$ consists of one $4$-atom, and two $3$-atoms.
\end{itemize}
\item\label{it:stability52} There are four distinct largest sum-free sets $B_4,\dots, B_7$ contained in $B_1\cup B_2\cup B_3$
and a function $f: [5]\to [7]$ such that
\begin{itemize}
\item $|F_i\setminus B_{f(i)}|<3 n(\log n)^{-1/27}$ for all $i\in[5]$ and
\item $(B_{f(1)},\dots, B_{f(5)})$ consists of two $2$-atoms, four $3$-atoms, and one $4$-atom.
\end{itemize}
\end{enumerate}
\end{theorem}
Our approach to Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5} is to reduce the problems we want to solve
for general abelian even order groups to a related problem in $\mathbb F_2^t$ for some $t\leq r\leq 5$.
This reduction and the proof of Theorem~\ref{thm:stability23} will require further group theoretic facts which we introduce in the next section.
\section{Group theoretic facts}
\label{sec:groups}
The size and the structure of largest sum-free sets in type I abelian groups
were determined by Diananda and Yap as follows.
\begin{theorem}[Diananda and Yap~\cite{DianandaYap}]\label{lem:mu}
Let $G$ be a type I($q$) group of order~$n$.
Then the size of the largest sum-free set $\mu(G)$ satisfies
$\mu(G) = \left(\frac{1}{3} + \frac{1}{3q} \right) n$.
Moreover, if $B$ is a largest sum-free set in $G$,
then $B$ is a union of cosets of some subgroup $H$
of order $n/q$ of $G$, $B/H$ is in arithmetic progression and
$B \cup (B+B)=G$.
\end{theorem}
The following results are due to Green and Ruzsa~\cite{GreenRuzsa}.
\begin{lemma}[Proposition~2.2~in~\cite{GreenRuzsa}] \label{lem:supersaturation}
If $G$ is abelian and $A \subset G$ contains at most $\delta n^2$ distinct Schur triples,
then $|A| \leq \mu(G) + 2^{20} \delta^{1/5} n$.
\end{lemma}
\begin{lemma}[Lemma~4.2~in~\cite{GreenRuzsa}]\label{lem:removal}
Let $\varepsilon > 0$ and let $G$ be an abelian group. If $A \subseteq G$ has size at least $n/3 + \varepsilon n$
and contains at most $\varepsilon^3 n^2 /27$ distinct sums, then there is a sum-free set $S \subseteq A$
such that $|S| \geq |A| - \varepsilon n$.
\end{lemma}
\begin{lemma}[Lemma~5.6~in~\cite{GreenRuzsa}]\label{prop:re1}
Let~$G$ be of type I($q$).
If $S\subset G$ is a sum-free set and $|S| > \left(\frac{1}{3} + \frac{1}{3(q+1)}\right)n$,
then $S \subset B$ for some largest sum-free set $B \subset G$.
\end{lemma}
\begin{lemma}[Lemma~5.2~in~\cite{GreenRuzsa}] \label{prop:kneser}
Suppose that $G$ is abelian,
$S \subset G$ a sum-free set, $r > 0$ and $|S| \geq n/3 + r$. Then
there is a subgroup $H$ of $G$ with $|H| \geq 3r$, and a sum-free set $T \subset G/H$ such that
$S \subseteq \pi^{-1}(T)$, where $\pi: G \rightarrow G/H$ is the canonical homomorphism.
\end{lemma}
Further we will need the following result from~\cite{ABMS} (see page 19).
\begin{lemma}\label{lem:proof23exact}
Let $G$ be a type I($q$) group of odd order and let $B\subset G$ be a largest sum-free set.
If $x \in G\setminus B$, then
\[ \left| \left\{ \{a,b\} \in \binom{B}{2}: x=a+b \right\} \right| \geq \frac{n}{2q} - 1.\]
\end{lemma}
\subsection*{Even order groups}
For even order groups we need further results. The first one refines and improves Lemma~\ref{prop:re1} for even order groups as follows.
\begin{lemma}\label{lem:3over8}
Let~$G$ be an abelian group of even order and let
$S\subset G$ be sum-free. If $|S| > \frac{3}{8} n$ and $S$ is~not contained in a largest sum-free set of $G$,
then there exist a subgroup $H$ of $G$ with $|H|=\frac{n}{5}$
and a largest sum-free set $T\subset G/H$ such that
$S \subseteq \pi^{-1}(T)$, where $\pi: G \rightarrow G/H$ is the canonical homomorphism.
Moreover, for all largest sum-free sets $B$ of $G$ we have $|\pi^{-1}(T) \cap B|= \frac n5$.
In particular, if $|S| > \frac{2}{5} n$, then $S $ is contained in a largest sum-free set of~$G$.
\end{lemma}
\begin{proof}
Let $H$ be the subgroup of $G$ and $T$ be the sum-free set of $G/H$
obtained by the application of Lemma~\ref{prop:kneser} with $r=\frac n{24}$.
Then, $S\subset \pi^{-1}(T)$ and $|H| = \frac{n}{\ell}$ for some $\ell\in \{2, \ldots, 8\}$.
If $\ell$ is even, then $G/H$ has a largest sum-free set of size $\ell/2$.
Hence, if $T$ is a largest sum-free set in $G/H$, then $S$ is contained in a largest sum-free set in $G$, a contradiction.
If, on the other hand, $T$ is not a largest sum-free
set in $G/H$, then trivially $|\pi^{-1}(T)|\leq \frac n\ell \left(\frac\ell2 -1\right)\leq \frac{3}{8} n$,
a contradiction to the size of $S$.
Suppose now that $\ell \in \{3,7\}$.
Then $G/H$ is isomorphic to $\mathbb{Z}_3$ or to $\mathbb{Z}_7$.
As the largest sum-free sets in $\mathbb{Z}_3$ and $\mathbb{Z}_7$
have sizes one and two, respectively, we have $|\pi^{-1}(T)| \leq \frac{1}{3} n < \frac{3}{8} n$, which is a contradiction
to the size of $S$. Hence, $\ell=5$ which finishes the proof of the first part of the lemma.
Suppose now that $B$ is a largest sum-free set in $G$. Then $B^0=G\setminus B$ is an index two subgroup of $G$ due to Theorem~\ref{lem:mu}.
Since $G/ (H\cap B^0) \simeq G/H \oplus G/B^0 \simeq \mathbb{Z}_5 \oplus \mathbb{Z}_2$,
the set $B$ must correspond to $\mathbb{Z}_5 \oplus \{1\}$ and
$\pi^{-1}(T)$ corresponds to $\{1,4\} \oplus \mathbb{Z}_2$ or to
$\{2,3\} \oplus \mathbb{Z}_2$.
We conclude $|\pi^{-1}(T) \cap B|= \frac n5$.
\end{proof}
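To illustrate the lemma, let $G=\mathbb{Z}_{10}$ and $S=\{2,3,7,8\}$. Then $S$ is sum-free, $|S|=4>\frac38 n$, and $S$ is not contained in the unique largest sum-free set $\{1,3,5,7,9\}$; here $H=\{0,5\}$, $T=\{2,3\}\subset G/H\simeq\mathbb{Z}_5$, $\pi^{-1}(T)=\{2,3,7,8\}\supseteq S$ and $|\pi^{-1}(T)\cap\{1,3,5,7,9\}|=2=\frac n5$. Since $|S|=\frac25 n$, this example also shows that the constant $\frac25$ in the last assertion of the lemma cannot be lowered.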
\begin{lemma}
\label{lem:matching}
Let $G$ be an abelian group of even order, let
$\mathcal{B}=(B_1,\dots,B_t)$ be an ordered tuple of largest sum-free sets in $G$ and
$x\in \mathcal{B}(\boldsymbol{0}_t)$ where $\boldsymbol{0}_t$ is the zero vector of length $t$. Then for every $\boldsymbol{\eps}\in\{0,1\}^t$ and every
$b\in\mathcal{B}(\boldsymbol{\eps})$, we have $x-b \in \mathcal{B}(\boldsymbol{\eps})$.
\end{lemma}
\begin{proof}
Let $b'=x-b$ and for a contradiction suppose that $b'\in\mathcal{B}(\boldsymbol{\eps}')$ for an
$\boldsymbol{\eps}'=(\varepsilon_1',\dots,\varepsilon_t')\neq\boldsymbol{\eps}=(\varepsilon_1,\dots,\varepsilon_t)$.
Then there is an index $i$ such that $\varepsilon_i'\neq\varepsilon_i$, i.e.,
we have that one of the two elements $b$ or $b'$ is contained in $B_i^0$
but not both.
However, as $x\in B_i^0$, this yields a contradiction to the fact that, due to Theorem~\ref{lem:mu},
$B_i^0=G\setminus B_i$ is a subgroup of $G$. The lemma follows.
\end{proof}
Given an even order group $G$ and a tuple of largest sum-free sets, the following lemma
determines the size of the atoms and characterizes all largest sum-free sets contained in this tuple.
It is a crucial part in our reduction of the original problem in arbitrary even order groups to a related one in $\mathbb F_2^t$ for some $t\leq r$.
\begin{lemma}
\label{lem:intersectionlarg}
Let $G$ be an abelian group of even order and let $\mathcal{B}=(B_1, \ldots, B_t)$, $t\geq 2$,
be an independent tuple of largest sum-free sets in $G$.
Further, let~$\boldsymbol{0}_t$ denote the zero vector of length~$t$.
Then the size of each atom of $\mathcal{B}$ is $n/2^{t}$ and,
$B \subseteq \bigcup_{i\in [t]}B_i$
is a largest sum-free set in $G$ if and only if there is a largest sum-free set $S$ of $\mathbb{F}_2^t$ such that
\[ B = \bigcup_{\boldsymbol{\eps} \in S} \mathcal{B}(\boldsymbol{\eps}).\]
\end{lemma}
\begin{proof}
To show the lemma we will prove that
the atoms $\mathcal{B}(\boldsymbol{\eps})$, $\boldsymbol{\eps}\in\{0,1\}^t$ are exactly the cosets of $\mathcal{B}(\boldsymbol{0}_t)$, i.e.,
$G/\mathcal{B}(\boldsymbol{0}_t)=\{\mathcal{B}(\boldsymbol{\eps})\colon \boldsymbol{\eps}\in\{0,1\}^t\}$,
that $\mathcal{B}(\boldsymbol{0}_t)$ has size $n/2^t$
and that the map $\pi:G/\mathcal{B}(\boldsymbol{0}_t)\to\mathbb{F}_2^t$ defined via $\pi(\mathcal{B}(\boldsymbol{\eps}))=\boldsymbol{\eps}$ is a group isomorphism.
To see that this implies
the conclusion of the lemma note first that a (largest) sum-free set $S\subset \mathbb{F}_2^t$ of size $2^{t-1}$
maps to the sum-free set $\pi^{-1}(S)\subset G/\mathcal{B}(\boldsymbol{0}_t)$ of the same size.
The union $B$ of the cosets of $\pi^{-1}(S)$ is then a (largest) sum-free set in $G$ of size $n/2$.
As $\pi^{-1}(S)$ cannot contain $\boldsymbol{0}_t$ we have $B \subseteq \bigcup_{i\in [t]}B_i$.
On the other hand, if $B \subseteq \bigcup_{i\in [t]}B_i$ is a largest sum-free set in $G$, then
$B^0=G\setminus B$ is a subgroup of $G$ due to Theorem~\ref{lem:mu}, which properly contains
$\mathcal{B}(\boldsymbol{0}_t)$ since $t\geq 2$. By the third isomorphism theorem $B^0/\mathcal{B}(\boldsymbol{0}_t)$ is then a non-trivial
subgroup of $G/\mathcal{B}(\boldsymbol{0}_t)$ and the union of the cosets
in $B'=\big(G/\mathcal{B}(\boldsymbol{0}_t)\big)\setminus \big(B^0/\mathcal{B}(\boldsymbol{0}_t)\big)$ is $B$.
As $B$ is largest sum-free and each coset has size $n/2^t$ we have that $B'$ is sum-free in $G/\mathcal{B}(\boldsymbol{0}_t)$ and has
size $2^{t-1}$.
Hence, $B'$ corresponds to a largest sum-free set in $\mathbb{F}_2^t$ via $\pi$.
We proceed with the proof of the lemma and first note that each non-empty atom $\mathcal{B}(\boldsymbol{\eps})$, $\boldsymbol{\eps}\in\{0,1\}^t$, is a
subset of a coset of $\mathcal{B}(\boldsymbol{0}_t)$ in $G$, i.e., that
$a-a'\in B_i^0$ holds for all $a,a'\in\mathcal{B}(\boldsymbol{\eps})$ and all $i\in[t]$. Indeed, recall that $B_i^0=G\setminus B_i$ is a subgroup of $G$.
Hence, if $\varepsilon_i=0$ then $a, a'\in B_i^0$ and therefore $a-a'\in B_i^0$. On the other hand, if $\varepsilon_i=1$, then $a$ and $a'$ are in the largest sum-free set
$B_i$. Thus, $a+a'$ and $2a'$ are in its complement, the subgroup $B_i^0$.
Therefore $-2a'$ is also in $B_i^0$ and we have $a-a'=a+a'-2a'\in B_i^0$. This shows that $\mathcal{B}(\boldsymbol{\eps})$ is a subset of a coset of $\mathcal{B}(\boldsymbol{0}_t)$.
Further, it is easily seen that two non-empty atoms $\mathcal{B}(\boldsymbol{\eps})\neq\mathcal{B}(\boldsymbol{\eps}')$ are contained in different cosets of $\mathcal{B}(\boldsymbol{0}_t)$.
Indeed, as $\boldsymbol{\eps}\neq\boldsymbol{\eps}'$ there is an index $i\in[t]$ such that, say, $\varepsilon_i=0$ and
$\varepsilon_i'=1$. Given $a\in \mathcal{B}(\boldsymbol{\eps})$ and $a'\in\mathcal{B}(\boldsymbol{\eps}')$, then $a-a'$ cannot belong to $\mathcal{B}(\boldsymbol{0}_t)$ since $B_i^0$ is a subgroup.
As the atoms form a partition of $G$
we conclude that they are exactly the cosets of $\mathcal{B}(\boldsymbol{0}_t)$.
To show $|\mathcal{B}(\boldsymbol{0}_t)|=n/2^t$ we note that for all $k\in[t-1]$ the following holds
\[\mathcal{B}(\boldsymbol{0}_k)=\big(\mathcal{B}(\boldsymbol{0}_{k})\cap B_{k+1}^0\big)\cup \big(\mathcal{B}(\boldsymbol{0}_k)\cap B_{k+1}\big)=\mathcal{B}(\boldsymbol{0}_{k+1})\cup \mathcal{B}(0,\dots,0,1).\]
Here $\mathcal{B}(\boldsymbol{0}_{k+1})$ is a subgroup of $\mathcal{B}(\boldsymbol{0}_k)$ and $\mathcal{B}(0,\dots,0,1)$ is a non-empty set due to the lemma's assumption that
$B_{k+1}$ is not contained in the union of the remaining $B_i$'s. The argument from above yields therefore that $\mathcal{B}(0,\dots,0,1)$ is a
coset of $\mathcal{B}(\boldsymbol{0}_{k+1})$ in the group $\mathcal{B}(\boldsymbol{0}_{k})$ which implies that $|\mathcal{B}(\boldsymbol{0}_{k+1})|=|\mathcal{B}(0,\dots,0,1)|=|\mathcal{B}(\boldsymbol{0}_{k})|/2$.
With $|\mathcal{B}(\boldsymbol{0}_1)|=|B_1^0|=n/2$, due to Theorem~\ref{lem:mu}, we obtain that $|\mathcal{B}(\boldsymbol{0}_{t})|=n/2^t$.
It is left to verify that the bijective map
from $G/\mathcal{B}(\boldsymbol{0}_t)$ to $\mathbb{F}_2^t$
defined by $\mathcal{B}(\boldsymbol{\eps})\mapsto\boldsymbol{\eps}$
is a group homomorphism, i.e.,
that for all $\boldsymbol{\eps},\boldsymbol{\eps}'\in\{0,1\}^t$ we have $\mathcal{B}(\boldsymbol{\eps})+\mathcal{B}(\boldsymbol{\eps}')=\mathcal{B}(\boldsymbol{\eps}+\boldsymbol{\eps}')$ where the first addition is in $G/\mathcal{B}(\boldsymbol{0}_t)$ and
the second is in $\mathbb{F}_2^t$.
This can be verified component-wise, noting that for a fixed $i\in[t]$ we have $B_i^0+B_i^0=B_i^0$, because $B_i^0$ is a subgroup, and that
$B_i^1+B_i^1=B_i^0$, due to Theorem~\ref{lem:mu}.
Further, as $B_i^1$ is the only coset of $B_i^0$ in $G$ we have $B_i^1+B_i^0=B_i^1$.
The lemma follows.
\end{proof}
The characterization of the largest sum-free sets in the additive group~$\mathbb{F}_2^t$
is immediate from Theorem~\ref{lem:mu} and the fact
that $\mathbb{F}_2^t$ has exactly $2^t-1$ subgroups
of index two, given by
$\left\{ (a_1, \ldots, a_t) \in \mathbb{F}_2^t: \sum_{i \in I} a_i \equiv 0 \pmod 2\right\}$ for non-empty
$I \subseteq [t]$.
\begin{lemma}\label{prop:chaf2t}
A subset $S \subseteq \mathbb{F}_2^t$ is a largest sum-free set in $\mathbb{F}_2^t$
if and only if there exists a non-empty $I \subseteq [t]$ such that
$$S = \left\{ (a_1, \ldots, a_t) \in \mathbb{F}_2^t: \sum_{i \in I} a_i \equiv 1\pmod2 \right\}.$$
\end{lemma}
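For instance, for $t=2$ the three largest sum-free sets of $\mathbb{F}_2^2$ are $\{(1,0),(1,1)\}$, $\{(0,1),(1,1)\}$ and $\{(1,0),(0,1)\}$, corresponding to $I=\{1\}$, $I=\{2\}$ and $I=\{1,2\}$, respectively.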
Let $\mathcal{B}=(B_1, \ldots, B_t)$, $t\geq 2$, be an independent tuple
of largest sum-free sets in $G$.
As a consequence of Lemmas~\ref{lem:intersectionlarg} and~\ref{prop:chaf2t} there is a correspondence between
largest sum-free sets $B \subset \cup_{i\in[t]} B_i$ of $G$
and non-empty subsets $I_B \subset [t]$. The one which assigns $B_i$ to $\{i\}$ for each $i\in[t]$ is called the \emph{canonical correspondence}. It is easily seen that this indeed yields a correspondence, e.g., by induction on $t$.
We summarize the properties of such a correspondence.
\begin{corollary}
\label{remark:reduction}
Let $G$ be an abelian group of even order and let $\mathcal{B}=(B_1, \ldots, B_t)$, $t\geq 2$,
be an independent tuple of largest sum-free sets in $G$.
Consider the canonical correspondence between largest sum-free sets
$B \subset \cup_{i\in[t]} B_i$ of $G$
and non-empty subsets $I_B \subset [t]$.
Then the size of each atom of $\mathcal{B}$ is $n/2^{t}$ and an atom
$\mathcal{B}(\boldsymbol{\eps})$ is contained in $B$ if and only if
$\boldsymbol{\eps}=(\varepsilon_1,\dots,\varepsilon_t)$ evaluates odd on $I_B$, i.e. $\sum_{i\in I_B}\varepsilon_i$ is odd.
\end{corollary}
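In particular, for an independent triple $(B_1,B_2,B_3)$ of largest sum-free sets the largest sum-free sets contained in $B_1\cup B_2\cup B_3$ correspond to the $2^3-1=7$ non-empty subsets of $[3]$; this establishes the fact, used before Theorem~\ref{thm:stability5}, that there are exactly seven such sets.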
We finish the section with the proof of Proposition~\ref{prop:lowerbounds}
showing that the following holds for sum-free sets $B_1,B_2,B_3$ in even order groups which satisfy $|\{B_1,B_2,B_3\}|\geq 2$:
\[\kappa_{4}(B_1\cup B_2) \geq (3\sqrt2)^{n/2}\qquad\text{ and }\qquad\kappa_{5}(B_1\cup B_2\cup B_3) \geq 6^{n/2}.\]
\begin{proof}[Proof of Proposition~\ref{prop:lowerbounds}]
Let $G$ be of even order with an independent tuple $\mathcal{B}=(B_1,B_2)$ of two largest sum-free sets and consider the canonical correspondence (see Corollary~\ref{remark:reduction}),
relating $B_i$, $i\in[2]$, to the set~$\{i\}$.
Each atom of $\mathcal{B}$ has size $n/4$ and $\mathcal{B}(1,0),\mathcal{B}(1,1)$ are those of $B_1$ whereas
those of $B_2$ are $\mathcal{B}(0,1),\mathcal{B}(1,1)$.
Further, there is a largest sum-free set $B_3$ corresponding to $\{1,2\}$ with the atoms $\mathcal{B}(1,0)$ and $\mathcal{B}(0,1)$, i.e.,
$B_3=B_1\triangle B_2$.
For the first part of the proposition consider all $\varphi:B_1\cup B_2\to [4]$
such that $\varphi^{-1}(1)\subseteq B_1$, $\varphi^{-1}(2)\subseteq B_2$ and $\varphi^{-1}(3),\varphi^{-1}(4)\subseteq B_3$. These
colorings are clearly sum-free and there are $3^{n/2}2^{n/4}$ of them, as the elements in $\mathcal{B}(1,1)$ can be colored with the colors 1 and 2, the elements in $\mathcal{B}(1,0)$
with colors 1, 3 and 4 and the elements in $\mathcal{B}(0,1)$ with colors 2, 3 and 4.
For the second part first consider the case that $B_1$ and $B_2$ are two sum-free sets and $B_3=B_1\triangle B_2$ as above.
Consider the colorings $\varphi:B_1\cup B_2\to [5]$ such that $\varphi^{-1}(1),\varphi^{-1}(2)\subseteq~B_1$, $\varphi^{-1}(3),\varphi^{-1}(4)\subseteq B_2$
and $\varphi^{-1}(5)\subseteq B_3$. This gives rise to $3^{n/2}4^{n/4}$ sum-free 5-colorings.
Finally, consider the independent triple $\mathcal{B}=(B_1,B_2,B_3)$, whose atoms have size $n/8$ each
because of Corollary~\ref{remark:reduction}, together with the canonical correspondence
relating $B_i$, $i\in[3]$, to the set $\{i\}$. Then there are four further largest sum-free sets contained in $B_1\cup B_2\cup B_3$. One of these is $B_4=B_1\triangle B_2$
corresponding to $\{1,2\}$ and consisting of the atoms $\mathcal{B}(1,0,0), \mathcal{B}(1,0,1),\mathcal{B}(0,1,0),\mathcal{B}(0,1,1)$.
Another one is $B_5=(B_1\triangle B_2\triangle B_3)\cup (B_1\cap B_2\cap B_3)$ corresponding to $\{1,2,3\}$
with the atoms $\mathcal{B}(1,0,0), \mathcal{B}(0,1,0),\mathcal{B}(0,0,1),\mathcal{B}(1,1,1)$.
Consider all colorings $\varphi: B_1\cup B_2\cup B_3\to[5]$ such that $\varphi^{-1}(i)\subset B_i$, $i\in[5]$. This gives rise to
$3^{4\cdot \frac n8}2^{2\cdot\frac n8}4^{\frac n8}$ sum-free 5-colorings.
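Indeed, $3^{n/2}2^{n/4}=(3\sqrt2)^{n/2}$, while $3^{n/2}4^{n/4}=3^{4\cdot\frac n8}2^{2\cdot\frac n8}4^{\frac n8}=3^{n/2}2^{n/2}=6^{n/2}$, so all three constructions attain the lower bounds claimed in the proposition.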
\end{proof}
\section{Proofs of stability: Theorem~\ref{thm:stability23}, \ref{thm:stability4}, \ref{thm:stability5}}
With the group theoretic results from previous section we are now ready to give the proofs
of Theorem~\ref{thm:stability23}, Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5}.
\begin{proof}[Proof of Theorem~\ref{thm:stability23}]
We give the proof for $r=3$ only. The proof for $r=2$ follows the same line.
For given $\varepsilon>0$ let $\delta=\varepsilon/200$ and let $|G|=n$ be sufficiently large.
Let $A\subset G$ with $\kappa_3(A)\geq 3^{\mu(G)-\delta n}$ and let $\mathcal{F}=\mathcal{F}_A$ be a container family as in Theorem~\ref{thm:container}.
According to \eqref{it:container3} of this theorem each $F_i$ contains at most $n^2(\log n)^{-1/9}$ Schur triples
and Lemma~\ref{lem:supersaturation} implies for large enough $n$ that $|F_i|\leq \mu(G)+3n(\log n)^{-1/27}$ for each $F_i\in\mathcal{F}$.
Let $(F_1,F_2,F_3)$ be a triple maximizing $|\Phi(F'_1,F'_2,F'_3)|$ over all $(F'_1,F'_2,F'_3)\in\mathcal{F}^3$.
Together with~\eqref{eq:PhiInContainer1} of Observation~\ref{obs:PhiInContainer} we infer $3^{\mu(G)-\delta n}\leq \kappa_3(A)\leq |\mathcal{F}|^3\cdot |\Phi(F_1,F_2,F_3)|$.
Further,~\eqref{it:container1} of Theorem~\ref{thm:container} states that $\log_2 |\mathcal{F}|\leq n(\log n)^{-1/18}$ which implies $\log_3|\Phi(F_1,F_2,F_3)|\geq \mu(G)-2\delta n$
for large enough $n$.
Let $n_i$, $i=1,2,3$, be the number of elements contained in exactly~$i$ members from $(F_1,F_2,F_3)$.
Then $n_1+2n_2+3n_3= |F_1|+|F_2|+|F_3|\leq 3\mu(G)+3\delta n$, hence, $n_2\leq \frac{3}{2}(\mu(G)+\delta n-n_3)$.
We conclude that $n_3 \geq \mu(G)- 59 \delta n$ must hold otherwise the first part of~\eqref{eq:PhiInContainer2} of Observation~\ref{obs:PhiInContainer}
together with $\frac 32\log_32<\frac{19}{20}<1$ would yield the following contradiction
\begin{align*}
\mu(G)-2\delta n\leq \log_3 |\Phi(F_1,F_2,F_3)| \leq \,\, & n_3 +\big(\mu(G)+\delta n - n_3\big)\frac{3}{2}\log_32\\
< \,\, & \mu(G) - 59\delta n + 60 \delta n\cdot \frac{3}{2}\log_32< \mu(G) - 2\delta n.
\end{align*}
Let $F=F_1\cap F_2\cap F_3$ and note that
since $A=F_1 \cup F_2 \cup F_3$ and $|F_i\setminus F|= |F_i|-|F|\leq 60 \delta n$ for $i=1,2,3$, we have $|A\setminus F|\leq 180\delta n$.
To conclude the proof, all that is needed is to show that $|F\setminus B|\leq \delta n$ for some largest sum-free set $B$, since this then implies $|A\setminus B|\leq 181\delta n<\varepsilon n$.
Note to this end that $|F|=n_3\geq \mu(G)-59\delta n=\left(\frac 13+\frac1{3q}-59\delta\right)n>\frac n3+\delta n$ due to $\delta\leq \frac{1}{200(q+1)}$. Lemma~\ref{lem:removal} then
implies that for sufficiently large $n$ there is a sum-free set $S$ of size
$|S|\geq |F|-\delta n\geq \mu(G)-60\delta n > \left(\frac 13+\frac1{3(q+1)}\right)n$ that is contained in $F$.
By Lemma~\ref{prop:re1} there must exist a largest sum-free set $B\subset G$ such that $S\subset B$ showing that $|F\setminus B|\leq\delta n$, as claimed.
\end{proof}
The proofs of Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5} will be more involved and we divide the argument into two parts reflected by
two lemmas, Lemma~\ref{lem:alllargest} and Lemma~\ref{lem:OptStructure}.
In the following we state the two lemmas which will immediately imply
Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5}. The proofs of the lemmas will be shown subsequently.
The first lemma states that $|\Phi(F_1,\dots,F_r)|$ being large implies that each of the $F_i$'s is essentially contained in a largest sum-free set in $G$.
\begin{lemma}
\label{lem:alllargest}
Let $G$ be an abelian group of sufficiently large even order $n$, let
$A\subset G$ and $\mathcal{F}=\mathcal{F}(A)$ be as in Theorem~\ref{thm:container}, and
suppose that $G$, $r$ and $(F_1,\dots,F_r)\in\mathcal{F}^r$ are such that
\begin{itemize}
\item $G$ has a unique largest sum-free set and $|\Phi(F_1,\dots, F_r)|>\left(\sqrt{r}-\frac1{1000}\right)^{n}$ or
\item $r=4$ and $G$ has at least two largest sum-free sets and $|\Phi(F_1,\dots, F_r)|>2.01^n$ or
\item $r=5$ and $G$ has at least two largest sum-free sets and $|\Phi(F_1,\dots,F_r)|>2.41^n$.
\end{itemize}
Then
there is an $r$-tuple of largest sum-free sets $(B_1,\dots,B_r)$ in $G$ such that
\[|F_i\setminus B_i|\leq3 n(\log n)^{-1/27}\quad\text{ for all $i\in[r]$}.\]
\end{lemma}
Note that Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5} immediately follow
in case $G$ has a unique largest sum-free set.
For the case that $G$ has at least
two largest sum-free sets we put forth the following lemma, which determines the intersection structure
of the optimal configuration of the $B_i$'s.
\begin{lemma}\label{lem:OptStructure}
Let $G$ be an abelian group of even order.
If $\mathcal{B} = (B_1, \ldots, B_4)$ is a tuple of largest sum-free sets
such that
$|\Phi({B_1,\dots,B_4})| > 2^{n}$,
then there are two sets among $B_1,\dots,B_4$ whose union contains all $B_i$, $i=1,\dots,4$, and
$\mathcal{B}$ consists of one 2-atom and two 3-atoms. In particular, $|\Phi(\mathcal{B})|= (3\sqrt2)^{n/2}$.
If $\mathcal{B}=(B_1,\dots,B_5)$ is a tuple of largest sum-free sets with
$|\Phi({B_1,\dots,B_5})|>\left(3\cdot2^{7/2}\right)^{n/4}$,
then either
\begin{itemize}
\item there are two sets among $B_1,\dots,B_5$ whose union contains all $B_i$, $i=1,\dots,5$, and
$\mathcal{B}$ consists of two 3-atoms and one 4-atom, or
\item there are three sets among $B_1,\dots,B_5$ whose union contains all $B_i$, $i=1,\dots,5$, and
$\mathcal{B}$ consists of two 2-atoms, four 3-atoms and one 4-atom.
\end{itemize}
In both cases we have $|\Phi(\mathcal{B})|=6^{n/2}$.
\end{lemma}
Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5} are now easy consequences of the two lemmas.
\begin{proof}[Proof of Theorem~\ref{thm:stability4} and Theorem~\ref{thm:stability5}]
Let $G$ be an abelian group of sufficiently large even order~$n$ and let $\gamma= 3{(\log n)^{-1/27}}$.
Let $A\subset G$, $\kappa_r(A)$ and $\mathcal{F}=\mathcal{F}(A)$ be as stated in Theorem~\ref{thm:stability4} or Theorem~\ref{thm:stability5}, respectively.
In the case that $G$ has a unique largest sum-free set $B$ let $(F_1,\dots,F_r)$ be the tuple among all $(F_1',\dots,F_r')\in\mathcal{F}^r$ which maximizes $|\Phi(F_1',\dots, F_r')|$.
Owing to \eqref{eq:PhiInContainer1} of Observation~\ref{obs:PhiInContainer} we have
$\kappa_r(A)=|\Phi_r(A)|\leq |\mathcal{F}|^r|\Phi(F_1,\dots,F_r)|$. As $|\mathcal{F}|^r\leq 2^{rn(\log n)^{-1/18}}\ll \kappa_r(A)$
we conclude for sufficiently large $n$ that $|\Phi(F_1,\dots, F_r)|> |\mathcal{F}|^{-r}\left(r-\frac1{1000}\right)^{n/2}>\left(\sqrt{r}-\frac 1{1000}\right)^n$ for
$r=4,5$.
On the other hand, if $G$ has at least two largest sum-free sets let $(F_1,\dots,F_r)$ be a tuple given by the theorems, i.e.,
\begin{itemize}
\item for $r=4$ we have $|\Phi(F_1,\dots,F_4)|> \left(3\sqrt 2-\frac1{25}\right)^{n/2}> 2.01^n > 4^{4\gamma n} \cdot 2^{n}$ and
\item for $r=5$ we have $|\Phi(F_1,\dots,F_5)|>5.9^{n/2}>5^{5\gamma n}\left(3\cdot2^{7/2}\right)^{n/4}>2.41^n$.
\end{itemize}
In particular, the presumptions of Lemma~\ref{lem:alllargest} are satisfied in all cases and we infer
that there is a tuple $\mathcal{B}=(B_1,\dots, B_r)$ of largest sum-free sets in $G$ such that
$|F_i\setminus B_i|\leq \gamma n$ for all $i\in[r]$.
As $A=F_1\cup \dots\cup F_r$ we have $|A\setminus B|\leq r\gamma n$ and the theorems follow in the case $G$ has a unique largest sum-free set.
Suppose now that $G$ has at least two largest sum-free sets. We aim to apply Lemma~\ref{lem:OptStructure} to~$\mathcal{B}$ from above and therefore now verify its presumption.
Define for all $i\in[r]$ the sets $F_i^1=F_i\cap B_i$ and $F_i^0=F_i\setminus B_i$.
Then
$|\Phi(F_1,\dots, F_r)|\leq|\Phi(F_1^0,\dots, F_r^0)|\cdot |\Phi(F_1^1,\dots, F_r^1)|$ and
$|\Phi(F_1^0,\dots, F_r^0)|\leq r^{r\gamma n}$. Further, any coloring
$\varphi\in\Phi(F_1^1,\dots, F_r^1)$ can be extended to a coloring $\varphi'\in \Phi(B_1,\dots, B_r)$
as $F_i^1\subset B_i$ and $B_i$ is sum-free. This implies $|\Phi(F_1^1,\dots, F_r^1)|\leq |\Phi(B_1,\dots, B_r)|$ and we
infer $|\Phi(\mathcal{B})|\geq r^{-r\gamma n}|\Phi(F_1,\dots, F_r)|$. Considering the bounds on $|\Phi(F_1,\dots, F_r)|$ from above, this shows
that $\mathcal{B}$ satisfies the presumptions of Lemma~\ref{lem:OptStructure} and
the theorems follow from the conclusions of this
lemma for the case $G$ has at least two largest sum-free sets.
\end{proof}
\subsection{Proofs of Lemma~\ref{lem:alllargest} and Lemma~\ref{lem:OptStructure}}
We first show the proof of Lemma~\ref{lem:OptStructure}. Note that
Corollary~\ref{remark:reduction} reduces the problem to
a finite task which one could check exhaustively. For completeness we give a self-contained proof.
\begin{proof}[Proof of Lemma~\ref{lem:OptStructure}]
Let $\mathcal{B}=(B_1,\dots,B_r)$ be a tuple with the properties stated in the lemma.
Let $n_i = \sum_{\boldsymbol{\eps} \,: \, \sum \varepsilon_j =i } |\mathcal{B}(\boldsymbol{\eps})|$ be the number of elements contained in $i$-atoms, i.e., those belonging to exactly
$i$ sets from $\mathcal{B}$. Recall from \eqref{eq:PhiInContainer2} of Observation~\ref{obs:PhiInContainer} that
\begin{align}\label{eq:phibis0}\log_3 |\Phi(\mathcal{B})|\leq \sum_{i \in [r]} {n_i}\log_3 i \qquad \text{ and }\qquad \sum_{i\in[r]}i \cdot n_i=\sum_{i\in[r]}|B_i|=r\mu(G)=\frac {rn}2.
\end{align}
As $\frac i3\geq\log_3i$ for all integers $i\geq 1$, we derive
for $\eta_4=\log_32$ and $\eta_5=13/16$ that
\begin{equation} \label{eq:phibis}
\eta_rn< \log_3 |\Phi(\mathcal{B})|\leq \sum_{i \in [r]} {n_i}\log_3 i\leq \frac13\sum_{i\in[r]}i\cdot n_i - \frac{1}{3} n_1 \leq \frac13\left(\frac{r n}2-n_1\right).
\end{equation}
Let $t\in[r]$ be the largest number such that there is an independent $t$-tuple of elements in $\mathcal{B}$, which we denote by $\mathcal{A}=(A_1,\dots,A_t)$.
Recall the correspondence of the structure of $\mathcal{A}$ to that of $\mathbb F_2^t$ as described in Corollary~\ref{remark:reduction}.
According to this each largest sum-free set $B$ contained in $\cup A_i$ corresponds to a largest sum-free set in $\mathbb F_2^t$ and is associated to a set $I=I(B)\subset [t]$.
The atoms of $\mathcal{A}$ then correspond to elements $\boldsymbol{\eps}\in \mathbb F_2^t$ and an atom $\mathcal{A}(\boldsymbol{\eps})$ is contained in $B$ if and only if
$\boldsymbol{\eps}=(\varepsilon_1,\dots,\varepsilon_t)$ evaluates odd on $I=I(B)$, i.e. $\sum_{i\in I}\varepsilon_i$ is odd.
Throughout the proof we will consider the canonical correspondence assigning $A_i$, $i\in[t]$, to the set $I=\{i\}$, so that the 1-atoms correspond to those $t$ elements
$\boldsymbol{\eps}\in\mathbb F_2^t$
with exactly one $1$-entry.
Using this correspondence we first show that $1<t\leq r-2$ and $n_1=0$. The bound $t>1$ is trivial, since otherwise
we would have $|\Phi(\mathcal{B})|=r^{\frac n2}\leq3^{\eta_rn}$ which contradicts \eqref{eq:phibis}.
Suppose that $t=r$.
Then the atoms have size $\frac n{2^r}$ due to Corollary~\ref{remark:reduction} and the number of 1-atoms is $r$. Hence,
$n_1=r\frac n{2^r}$ which yields the contradiction $\log_3|\Phi(\mathcal{B})|\leq\eta_r n$. On the other hand, if $t\leq r-1$ and $n_1>0$ then
$n_1$ is at least the size of one atom, i.e., $n_1\geq n/2^{r-1}$,
which again yields a violation of $\log_3|\Phi(\mathcal{B})|>\eta_r n$. Having $t=r-1$, say $A_1=B_1,\dots, A_{r-1}=B_{r-1}$, and $n_1=0$,
however, means that the remaining largest sum-free set $B\in \{B_1,\dots, B_r\}\setminus \{A_1,\dots, A_{r-1}\}$ must cover all 1-atoms of $\mathcal{A}$.
Since $A_i$ corresponds to $\{i\}$ the set $B$ must correspond to $I=[t]$.
This determines the atom structure of $\mathcal{B}$ completely and it is readily checked that
$\mathcal{B}$ then consists of six $2$-atoms and one $4$-atom in case $r=4$ and
of ten $2$-atoms and five $4$-atoms in case $r=5$. With \eqref{eq:phibis0} this yields a contradiction to the lower bound $\log_3|\Phi(\mathcal{B})|>{\eta_r n}$.
Hence, let $1<t\leq r-2$ and $n_1=0$ in both cases $r=4$ and $r=5$.
For $t=2$ and $r=4$ let $a_i$, $i=2,3,4$, denote the number of $i$-atoms of $\mathcal{B}=(B_1,\dots,B_r)$.
Similarly define $b_i$, $i=2,\dots,5$ for $t=2$, $r=5$ and $c_i$, $i=2,\dots,5$, for $t=3$ and $r=5$.
Due to Corollary~\ref{remark:reduction} we know that in the case $t=2$ ($t=3$, respectively) the set $\cup_{i \in [r]}B_i$ is the union of three (seven, respectively)
atoms, each of size $n/2^t$. This and the second part of \eqref{eq:phibis0} implies that
\begin{align}\label{eq:sys}
\sum_{i=2}^4a_i=\sum_{i=2}^5b_i=3,\quad \sum_{i =2}^{5}c_i =7,\qquad \sum_{i=2}^4i a_i=8,\qquad \sum_{i=2}^{5}i b_i=10, \qquad\sum_{i=2}^{5}i c_i =20.
\end{align}
Solving the equations yields that $(a_2,a_3,a_4)\in\{(2,0,1),(1,2,0)\}$.
If $(a_2,a_3,a_4)=(2,0,1)$ then the first part of~\eqref{eq:phibis0} yields
$|\Phi(\mathcal{B})|\leq (4\cdot4)^{n/4}=2^{n}$ which violates the lower bound
on $|\Phi(\mathcal{B})|$. For $(a_2,a_3,a_4)=(1,2,0)$ we have $|\Phi(\mathcal{B})|\leq (2\cdot 9)^{n/4}$.
Further, by considering largest sum-free sets in
$\mathbb F_2^2$ corresponding to $I_1=\{1\}$, $I_2=\{2\}$, $I_3=I_4=\{1,2\}$,
one obtains via the canonical correspondence
a tuple $(B_1,\dots,B_4)$ which consists of one $2$-atom and two $3$-atoms, hence realizing the optimum $(a_2,a_3,a_4)=(1,2,0)$.
The first part of the lemma follows.
Turning to the second part of the lemma we first solve \eqref{eq:sys} for $(b_2,\dots,b_5)$ and obtain its solution set $\{(1,1,0,1),(1,0,2,0),(0,2,1,0)\}$.
It is readily checked that $|\Phi(\mathcal{B})|\leq 6^{n/2}$ where the maximum is achieved only for $(b_2,\dots,b_5)=(0,2,1,0)$. The second largest value is attained by
$(1,0,2,0)$ which would yield $|\Phi(\mathcal{B})|\leq (2\cdot 16)^{n/4}$, a contradiction to the lower bound.
Further, by considering largest sum-free sets in $\mathbb F_2^2$ corresponding to $I_1=I_2=\{1\}$, $I_3=I_4=\{2\}$
and $I_5=\{1,2\}$ we see that the optimum $(b_2,\dots,b_5)=(0,2,1,0)$ can indeed be realized.
Finally, we solve \eqref{eq:sys} for $(c_2,\ldots, c_5)$ and obtain its solution set $\{(1, 6, 0, 0), (2, 4, 1, 0),(3, 2, 2, 0),$
$(3, 3, 0, 1),(4, 0, 3, 0), (4, 1, 1, 1), (5, 0, 0, 2) \}$. However, the maximum is attained by $(1, 6, 0, 0)$,
which cannot be realized as
an atom structure. To see this, suppose that $\mathcal{B}=(B_1,\ldots,B_5)$ has this atom structure. Assume that
$\mathcal{B}'=(B_1, B_2, B_3)$ is independent and corresponds to $I_i=\{i\}$.
Note that $\mathcal{B}'(1,1,1)$ is a 3-atom of $\mathcal{B}$, hence $B_4$ and $B_5$
cannot contain $\mathcal{B}'(1,1,1)$ since it would be a 4-atom or a 5-atom of $\mathcal{B}$ otherwise.
However, largest sum-free sets of $\mathbb{F}_2^3$ which do not contain $(1,1,1)$
correspond to two-element sets $I\subset[3]$. In particular, they consist of two 1-atoms and two 2-atoms of $\mathcal{B}'$. As there are only three 2-atoms in $\mathcal{B}'$
we conclude that $\mathcal{B}$ would contain a 4-atom.
Within the solutions for $(c_2,\dots,c_5)$, the third largest
value is achieved by $(3,2,2,0)$ giving $|\Phi(\mathcal{B})|\leq (8\cdot9\cdot16)^{n/8}$, a contradiction to the lower bound.
The second largest is obtained by
$(2,4,1,0)$ giving $|\Phi(\mathcal{B})|\leq (4\cdot 81\cdot 4)^{n/8}=6^{n/2}$.
Moreover, the tuple of five largest sum-free sets corresponding to $I_i=\{i\}$, $i\in[3]$, $I_4=\{1,2\}$ and $I_5=\{1,3\}$ indeed consists of
two 2-atoms, four 3-atoms and one 4-atom, showing that $(2,4,1,0)$ is realizable. The lemma follows.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:alllargest}]
Let $\gamma=3(\log n)^{-1/27}$.
Given an even order abelian group $G$ of sufficiently large order $n$ together with a container family $\mathcal{F}$ as in Theorem~\ref{thm:container} and let
$(F_1,\dots,F_r)$ be a tuple as in the lemma.
A set $F$ is called good if there
is a largest sum-free set $B$ in $G$ such that $|F\setminus B| \leq \gamma n$.
Hence, to establish the lemma we need to show that all $F_i$, $i=1,\dots,r$, are good.
Note to this end that $|F_i|> \frac 25n+\gamma n$ readily implies that $F_i$ is good.
Indeed, by property \eqref{it:container3} of Theorem~\ref{thm:container} we know that $F_i$
contains at most $n^2(\log n)^{-1/9}$ Schur triples. Lemma~\ref{lem:removal} then implies that
one can obtain a sum-free set $S_i\subset F_i$ by removing at most $\gamma n$ elements from $F_i$.
As $|S_i|>\frac 25n$ we conclude by the last part of Lemma~\ref{lem:3over8} that there is a largest sum-free set $B_i$ such that
$S_i\subset B_i$, showing that $F_i$ is good.
For any given tuple $(F_1,\dots, F_r)\in\mathcal{F}^r$ of containers let $n_i$ denote the number of elements contained in exactly $i$ sets
from $\{F_1,\dots,F_r\}$. Recall from~\eqref{eq:PhiInContainer2} that
\begin{align}\label{eq:trivcol}
|\Phi(F_1,\dots,F_r)|\leq \prod_{i\in[r]}i^{n_i}\qquad \text{and}\qquad \sum_{i\in[r]}i\cdot n_i=\sum_{i\in[r]}|F_i|
\end{align}
holds. Using $\frac i3\geq\log_3i$ for all integers $i\geq 1$ we derive
\begin{align}\label{eq:n3triv}\log_3|\Phi(F_1,\dots,F_r)|\leq \frac13\sum_{i\in[r]}|F_i|.\end{align}
Further, by~\eqref{it:container3} of Theorem~\ref{thm:container} and Lemma~\ref{lem:supersaturation} we have $|F_i|\leq\frac n2+2^{20}n(\log n)^{-1/45}$ for all $i\in[r]$.
We first consider the case that $G$ has at least two largest sum-free sets for which the lemma easily follows.
Indeed, suppose that there is an $i\in[r]$ such that $|F_i|\leq\frac 25n+\gamma n$. Then we have
\[\sum_{i\in[r]}|F_i|\leq (r-1)\left(\frac n2+2^{20}\frac n{(\log n)^{\frac1{45}}}\right)+\frac 25n+\gamma n,\]
which, with \eqref{eq:n3triv} and sufficiently large $n$, yields $|\Phi(F_1,\dots,F_r)|< 2.01^{n}$ in the case $r=4$ and
$|\Phi(F_1,\dots,F_r)|< 2.41^{n}$ in the case $r=5$. This, however, yields a contradiction to the fact that $(F_1,\dots,F_r)$ is substantial and we conclude
that $|F_i|>\frac 25n+\gamma n$ for all $i\in[r]$ implying that $F_i$ is good, as claimed.\footnote{We note that the same argument also works for $r=6$ in conjunction with
the lower bound $\kappa_{r,G}\geq \kappa_6(B_1\cup B_2\cup B_3)\geq 2^{3n/4} 3^{n/2} \geq 2.9129^n$ obtained by an independent triple $(B_1,B_2,B_3)$ of largest sum-free sets.
Assuming that there is an $i\in[6]$ with $|F_i|\leq \frac 25n+\gamma n$ would imply
$|\Phi(F_1,\dots,F_r)|< 2.91^{n}$.}
Next we prove the lemma in case the group has a unique largest sum-free set which is somewhat more complicated as the lower bound is significantly smaller in this case.
Let $G$ be a group of even order with the unique largest sum-free set $B$ and
let $(F_1,\dots,F_r)$ be a substantial tuple, i.e., $|\Phi(F_1,\dots,F_r)|\geq (\sqrt{r}-0.01)^n$.
Note first that if $F_1,\dots,F_{r-1}$ of the tuple $(F_1,\dots,F_r)$ are good, then with $F_i^1=F_i\cap B$ we have
\begin{align}
\label{eq:allgood} (r^{\frac12}-0.01)^{n}\leq |\Phi(F_1,\dots, F_r)| \leq & \,\,r^{r\gamma n} \,\,|\Phi(F_1^1,\dots, F_{r-1}^1, F_r)| \leq \,\, r^{r \gamma n} r^{|F_r\cap B|}(r-1)^{|B\setminus F_r|}.
\end{align}
This implies that $F_r$ must have size larger than $\frac 25n+\gamma n$ for sufficiently large $n$, and hence $F_r$ is good as well.
Further, by the same argument as above, i.e., using $|F_i|\leq\frac n2+2^{20}n(\log n)^{-1/45}$ for all $i\in[r]$ together with property~\eqref{eq:n3triv}, we derive the following properties.
\begin{claim}\label{claim:uniqueB}\mbox{}
\begin{enumerate}
\item \label{it:unique1}Let $k_4=3$ and $k_5=2$; then at least $k_r$ sets from $(F_1,\dots,F_r)$ have size larger than $\frac 25n+\gamma n$, implying that they are good,
\item \label{it:unique2} at least four sets from $(F_1,\dots,F_5)$ have size larger than $\frac n3+\gamma n$, and
\item \label{it:unique3}if three sets from $(F_1,\dots,F_5)$ have size at most $\frac 25n+\gamma n$ then all five sets have size larger than $\frac38n+\gamma n.$
\end{enumerate}
\end{claim}
In particular, as $k_4=3$ the lemma follows in the case $r=4$ and it is left to prove that for
$r=5$ there are four sets out of $(F_1,\dots,F_5)$ which have size larger than $\frac 25n+\gamma n$.
For a given tuple $(F_1,\dots, F_5)$ let $F_i^1=F_i\cap B$ and $F_i^0=F_i\setminus B$ and let $m_i$ be the number of elements contained in exactly $i$ sets from $(F_1^1,\dots,F_5^1)$.
Then
\begin{align}
\label{eq:2good}
|\Phi(F_1,\dots, F_5)|\leq |\Phi(F_1^0,\dots, F_5^0)|\cdot |\Phi(F_1^1,\dots, F_5^1)| \qquad\text{and}\qquad m_1+\dots+m_5=|B|=\frac n2.
\end{align}
Note also that $ |\Phi(F_1^1,\dots, F_5^1)|\leq |\Phi(B,\dots, B,F_i^1,\dots, F_5^1)|$, $i\in[5]$,
since $F_j^1\subset B$, for all $j\in[5]$.
By~\eqref{it:unique1} of Claim~\ref{claim:uniqueB} we may assume that $F_1$ and $F_2$ are good, and for a contradiction suppose that
$|F_i|\leq \frac 25n+\gamma n$, $i=3,4,5$.
Then due to~\eqref{it:unique3} of Claim~\ref{claim:uniqueB}
each of these $F_i$'s has size larger than $\frac 38n + \gamma n$.
Property~\eqref{it:container3} of Theorem~\ref{thm:container}
and Lemma~\ref{lem:removal} implies that for each such $i$ there is a sum-free set $S_i \subseteq F_i$
such that $|F_i \setminus S_i| \leq \gamma n$.
By Lemma~\ref{lem:3over8} there is a sum-free set $L_i$ containing $S_i$ such that $|L_i\cap B|=|L_i\setminus B|=\frac n5$. Therefore
$|F_i^0|, |F_i^1|\leq \frac n5+\gamma n$ for $i \in \{3,4,5\}$ and~(\ref{eq:n3triv}) yields
$|\Phi(F_1^0,\dots,F_r^0)|\leq 3^{\frac{n}{5}+2\gamma n }.$
Consider now the tuple $(B,B,F_3^1,F_4^1 ,F_5^1)$ with $f=|F_3^1|+|F_4^1|+|F_5^1|\leq \frac 35n+ 3\gamma n$ and let $m_i$ be defined as above. Then
$m_3+2m_4+3m_5=f$ and together with the second part of~\eqref{eq:2good} this implies $m_4 + 2 m_5-m_2+\frac n2-f= 0$.
Using the first part of \eqref{eq:2good} and $\sqrt{\frac{20}{27}}<\frac {8}{9}$
we obtain for $G$ of sufficiently large order
\begin{align*}
|\Phi(F_1,\dots, F_5)| \leq &\,\,
|\Phi(F_1^0, \dots, F^0_5)| \,\, |\Phi(B, B, F^1_3,F^1_4, F^1_5)| \\
\leq & \,\, 3^{\frac n5+2\gamma n}~2^{m_2}~3^{m_3}~4^{m_4}~5^{m_5}\\
= &\,\, 3^{\frac n5+2\gamma n}~2^{m_4 +2 m_5 + \frac n{2}-f}~3^{f - 2m_4 -3m_5}~4^{m_4}~5^{m_5}\\
\leq &~~3^{ \frac n5+2\gamma n}~2^{\frac n{2}-f} \,\, 3^{f} \left(\frac 89\right)^{m_4}\left(\frac{20}{27}\right)^{m_5}\\
\leq& \,\,3^{ \frac n5+2\gamma n}~2^{\frac n{2}-f} \,\, 3^{f} \, \left(\frac{8}{9}\right)^{f-\frac n2}\\
\leq &\,\, 3^{ \frac n5+2\gamma n} \left(\frac{4}{3}\right)^{f} \left(\frac{3}{2}\right)^{n}
\,\, < \,\, 2.23^{n}.
\end{align*}
This contradicts the lower bound on $|\Phi(F_1,\dots, F_5)|$ and we conclude that one set among $F_3, F_4, F_5$, say $F_3$, has size larger than
$\frac 25n+\gamma n$ and is therefore good.
Finally we show that $F_4$ or $F_5$ is good, assuming that $F_1, F_2, F_3$ are. Let
$f_0=|F_4^0\cap F_5^0|$ and note that $|\Phi(F_1^0,\dots,F_5^0)|\leq 5^{4\gamma n}2^{f_0}$ due to the goodness of $F_1$, $F_2$ and $F_3$.
Further, consider $m_1,\dots,m_5$ for the tuple $(B,B,B,F_4^1,F_5^1)$.
Then with $f=|F_4|+|F_5|$ we have $m_1=m_2=0$ and
\begin{align}
\label{eq:F4m}
m_3+m_4+m_5=\frac n2, \quad m_4+2m_5=|F_4^1|+|F_5^1|,\quad\text{and}\quad |F_4^1|+|F_5^1|+2f_0\leq f.
\end{align}
Using $\sqrt{5/3}< \frac 43$ we obtain
\begin{align}
\begin{split}
\label{eq:F4}
|\Phi(F_1,\dots, F_5)|
\leq &\,\,
|\Phi(F_1^0, \dots, F_5^0)| \,\, |\Phi(B,B,B,F^1_4, F_5^1)| \\
\leq &\,\, 5^{4\gamma n} 2^{f_0} \,\, 3^{\frac n2 - m_4 - m_5}
\,\, 4^{m_4} \,\,5^{m_5}\\
\leq&\,\, 5^{4 \gamma n} 2^{f_0} \,\, 3^{\frac n2} \,\, \left(\frac 43\right)^{f -2f_0}\\
\leq&~~5^{4 \gamma n} \left(\frac 98\right)^{f_0} \,\, 3^{\frac n2} \,\, \left(\frac 43\right)^{f}.
\end{split}
\end{align}
This easily implies that $F_4$ or $F_5$ must have size larger than $\frac38n+\gamma n$.
Indeed, otherwise $f\leq\frac 34n+2\gamma n$ and~\eqref{it:unique2} of Claim~\ref{claim:uniqueB} implies that either $F_4$ or $F_5$ has size larger than $\frac n3 + \gamma n$.
By~\eqref{it:container2} of Theorem~\ref{thm:container} and
Lemma~\ref{lem:removal} one can then make, say, $F_4$ sum-free by removing at most $\gamma n$ elements.
This shows that $f_0\leq|F_4^0|\leq \frac n4+\gamma n$ as the largest sum-free set in $B^0$ has size at most $|B^0|/2=n/4$.
Plugging these bounds for $f$ and $f_0$ into \eqref{eq:F4}, however, yields
$|\Phi(F_1,\dots, F_5)| <2.22^n$ for $G$ with sufficiently large order. This contradicts the lower bound, hence, we can assume that $F_4$ has size larger than $\frac38n + \gamma n$.
Finally assume that $|F_4|, |F_5|\leq \frac 25 n+\gamma n$. Then $f\leq \frac 45n+2\gamma n$ and by using~\eqref{it:container3}
of Theorem~\ref{thm:container} with Lemma~\ref{lem:removal} and Lemma~\ref{lem:3over8}
we further conclude that there exist sum-free sets $S_4$ and $L_4$ with $S_4\subset L_4$ such that
$|F_4 \setminus S_4| \leq \gamma n$ and $|L_4\cap B|=|L_4\setminus B|=\frac n5$. This implies $f_0\leq |F_4^0|\leq |L_4\cap B^0|+\gamma n\leq\frac n5+\gamma n$
and we obtain from \eqref{eq:F4} for $G$ of sufficiently large order that $|\Phi(F_1,\dots, F_5)|<2.233^n$. This again contradicts the lower bound and
we conclude that $F_4$ is good and so is $F_5$ as shown in the beginning. This finishes the proof of the lemma.
\end{proof}
\section{Exact results}
\label{sec:exact}
In this section we prove Theorem~\ref{thm:main23} and Theorem~\ref{thm:main45} based on Theorem~\ref{thm:stability23},~\ref{thm:stability4} and~\ref{thm:stability5}.
\begin{proof}[Proof of Theorem~\ref{thm:main23} and proof of Theorem~\ref{thm:main45} in case that $G$ has a unique largest sum-free set]
We show the proof of Theorem~\ref{thm:main23} for $r=3$ only, the proof for $r=2$ and that
of Theorem~\ref{thm:main45} in case that $G$ has a unique largest sum-free set follow the same line.
Let $G$ be a type~I($q$) group of sufficiently large order $n$
and suppose that $A \subset G$ maximizes the number of sum-free $3$-colorings.
Then $\kappa_3(A) \geq 3^{\mu(G)}$ as this bound is achieved by any largest sum-free set in $G$. Hence, we have $|A|\geq \mu(G)$.
Moreover, with $\varepsilon = \frac1{80(q+1)}$ Theorem~\ref{thm:stability23} implies that there
exists a largest sum-free set $B$ such that $|A \setminus B| < \varepsilon n$.
Assume that there is an $x\in A\setminus B$ otherwise $A=B$ and the theorem follows.
If $|G|$ is odd then by Lemma~\ref{lem:proof23exact}
there are at least $\frac{n}{2q}-1$ pairs
$\{a,b\}\in\binom{B}{2}$ such that $x=a+b$.
If $|G|$ is even then by Lemma~\ref{lem:matching} (with $t=1$) for each $a\in B$ there exists $b\in B$ such that $x=a+b$, where possibly $a=b$.
If $\varphi(x)=i \in[3]$ then $\varphi$ can be extended to at most 8 sum-free colorings of $\{a,b\}$ if $a\neq b$ and $a,b\in A$, to at most~2
if $a=b$, $a\in A$ (hence $4<8$ ways for two such pairs $a=b$ and $a'=b'$), and to at most 3 if $a\not\in A$ or $b\not\in A$.
Using $|A\setminus B|<\varepsilon n$ and $\left(1-\frac12\log_38\right)>\frac{1}{20}$ we conclude therefore
\begin{align*}
\kappa_3(A)< 3^{\varepsilon n}\cdot8^{\frac{n}{2q}} \cdot3^{\left(|A|-\frac n{q}+\varepsilon n+2\right)}< 3^{\mu(G)+4\varepsilon n-\frac n{20q}}\leq 3^{\mu(G)}.
\end{align*}
This contradicts the lower bound. Thus, $A= B$ and the theorem follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main45}] It is left to prove Theorem~\ref{thm:main45} in the case that $G$ has at least two largest sum-free sets.
The proofs are quite involved but similar for $r=4$ and for $r=5$. Therefore we show the case $r=5$ only.
Suppose that $A\subset G$ maximizes the number of sum-free $5$-colorings.
Let $\gamma =3(\log n)^{-1/27}$ and let $\mathcal{F}$ be a container family of $A$ as in Theorem~\ref{thm:container}.
In particular,
$\kappa_5(A) \geq 6^{n/2}$ due to Proposition~\ref{prop:lowerbounds}.
Let $(F_1,\dots, F_5)$ be the quintuple maximizing $|\Phi(F'_1,\dots,F'_5)|$ over all $(F'_1,\dots, F'_5)\in\mathcal{F}^5$.
From \eqref{eq:PhiInContainer1} of Observation~\ref{obs:PhiInContainer}
we infer $6^{n/2}\leq\kappa_5(A)\leq |\mathcal{F}|^5|\Phi(F_1,\dots, F_5)|$ and~(\ref{it:container1}) of Theorem~\ref{thm:container} implies
\begin{align}\label{eq:substantial5}\log_6|\Phi(F_1,\dots,F_5)|\geq {\frac n2-2n(\log n)^{-1/18}}.\end{align}
Hence, we are in the position to apply Theorem~\ref{thm:stability5} to $(F_1,\dots, F_5)$ which entails that there are largest sum-free sets $B_1,B_2,B_3$ in $G$
which satisfy the following.
\begin{enumerate}
\item \label{it:stability1} $B_3= B_1 \bigtriangleup B_2$ and there is a function $f:[5]\to[3]$ such that
$|F_j\setminus B_{f(j)}|<\gamma n$ for all $j\in[5]$ and
$\mathcal{B}=(B_{f(1)},\dots, B_{f(5)})$ consists of one $4$-atom, and two $3$-atoms, or
\item\label{it:stability2} there are four distinct largest sum-free sets $B_4,\dots, B_7$ contained in
$B_1\cup B_2\cup B_3$ and a function $f:[5]\to [7]$ such that
$|F_j\setminus B_{f(j)}|<\gamma n$ for all $j\in[5]$ and
$\mathcal{B}=(B_{f(1)},\dots, B_{f(5)})$ consists of two $2$-atoms, four $3$-atoms, and one $4$-atom.
\end{enumerate}
\begin{claim}\label{claim:AcontU}
If \eqref{it:stability1} applies then $A\subseteq B_1\cup B_2$ and if \eqref{it:stability2} applies then we have $A\subseteq B_1\cup B_2\cup B_3$.
\end{claim}
\begin{proof}
The argument is similar to the one in the proof of Theorem~\ref{thm:main23}, i.e., we show that the existence of an $x\in A\setminus (B_1\cup B_2)$
or $x\in A\setminus (B_1\cup B_2\cup B_3)$, respectively, gives a substantial restriction on how the elements of $A$ may be colored. This then leads to a contradiction to the bound on $\kappa_5(A)$.
As the proofs of these two cases are similar, we show the second case only.
Assume that there is an $x\in A \setminus (B_1\cup B_2 \cup B_3)$; note that there are at most $5\gamma n$ such elements. As $B_{f(i)}\in \{B_1,B_2,B_3\}$ for all $i\in[5]$ the atoms
of $\mathcal{B}$ and of $\mathcal{B}'=(B_1,B_2,B_3)$ are the same and each has size $n/8$ due to Corollary~\ref{remark:reduction}.
Suppose $\varphi(x)=j \in [5]$ so that $x\in F_j$ and let $\mathcal{B}(\boldsymbol{\eps})$ be a $3$-atom contained in $B_{f(j)}$. This
indeed exists as $B_{f(j)}$ consists of four atoms and $\mathcal{B}$ consists of seven non-zero atoms,
four of which are 3-atoms.
By applying Lemma~\ref{lem:matching} to $\mathcal{B}'$ we infer that
for each $a\in \mathcal{B}(\boldsymbol{\eps})$ there is a $b\in \mathcal{B}(\boldsymbol{\eps})$ such that $x=a+b$.
Hence $\varphi$ can be extended to at most 8 sum-free colorings of $\{a,b\}$ if $a\neq b$ and $a,b\in F_j$, to at most~2
if $a=b$, $a\in F_j$ (hence $4<8$ ways for two such pairs $a=b$ and $a'=b'$), and at most 3 if $a\not\in F_j$ or $b\not\in F_j$.
Together with Observation~\ref{obs:PhiInContainer} and the atom structure of $\mathcal{B}$ this yields
\begin{align} \label{eq:upbound}
|\Phi(F_1, \ldots, F_5)| \leq 5^{5\gamma n}\cdot 2^{n/4} \cdot 3^{3n/8} \cdot 4^{n/8} \cdot 8^{n/16}= 5^{5\gamma n}\cdot 6^{n/2}\cdot\left(\frac{\sqrt 8}3\right)^{n/8},
\end{align}
which is a contradiction to \eqref{eq:substantial5}.
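(In the bound above, the factor $5^{5\gamma n}$ accounts for the at most $5\gamma n$ elements of $F_1\cup\dots\cup F_5$ outside $B_1\cup B_2\cup B_3$, the factor $8^{n/16}$ replaces the contribution $3^{n/8}=9^{n/16}$ of the distinguished $3$-atom $\mathcal{B}(\boldsymbol{\eps})$, and the remaining two $2$-atoms, three $3$-atoms and one $4$-atom contribute $2^{n/4}\,3^{3n/8}\,4^{n/8}$.)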
\end{proof}
We next establish that $B_1\cup B_2\subset A$ in the first case and $B_1\cup B_2\cup B_3\subset A$ in the second case.
The following distinction of the colorings will be crucial for this purpose.
\begin{definition}\label{def:goodbadcol}
A coloring $\varphi \in \Phi_5(A)$ is called \emph{good}
if for each $i\in[5]$ we have $\varphi^{-1}(i)\subset L_i$ for some largest sum-free set $L_i$ of $G$, and
\begin{itemize}
\item the tuple $\mathcal{L}=(L_1,\dots,L_5)$ consists of one $4$-atom and two $3$-atoms, or
\item the tuple $\mathcal{L}=(L_1,\dots,L_5)$ consists
of two 2-atoms, four 3-atoms and one 4-atom.
\end{itemize}
Otherwise $\varphi$ is called a \emph{bad} coloring.
\end{definition}
Note that Definition~\ref{def:goodbadcol}
distinguishes the two cases \eqref{it:stability1} and \eqref{it:stability2}
as the number of atoms in $(L_1,\dots,L_5)$ is three in the first case and seven in the second case.
Assume for contradiction that $x\in (B_1\cup B_2)\setminus A$
or $x\in (B_1\cup B_2\cup B_3)\setminus A$, respectively.
The crucial point about the definition is that any good coloring $\varphi$ of $A$ can be extended to at least two sum-free $5$-colorings of $A\cup\{x\}$.
Indeed, as $x$ belongs to some $k$-atom $\mathcal{L}(\boldsymbol{\eps})$, $\boldsymbol{\eps}\in\{0,1\}^5$, of
$\mathcal{L}=(L_1,\ldots,L_5)$ for $k\geq 2$,
the coloring $\varphi$ can be extended
to colorings of $A\cup\{x\}$ by assigning to $x$ one of the $k$ colors associated with $\mathcal{L}(\boldsymbol{\eps})$.
As $\varphi^{-1}(i)\subset L_i$ for all $i\in[5]$
these extensions are sum-free.
Therefore, assuming the existence of $x$ implies
\[|\Phi_5(A\cup\{x\})|\geq 2 |\{ \varphi \in \Phi_5(A): \varphi \text{ is good}\}|.\]
As good and bad colorings of $A$ partition $\Phi_5(A)$ it is sufficient to show the following.
\begin{claim}\label{claim:goodbadcol}
For $n$ sufficiently large
\[|\{ \varphi \in \Phi_5(A): \varphi \text{ is good}\}|> 6^{n/2}2^{-24\gamma n} > \left(3\cdot2^{7/2}+\frac1{50}\right)^{n/4}>|\{ \varphi \in \Phi_5(A): \varphi \text{ is bad}\}|.\]
In particular, if \eqref{it:stability1} holds then $B_1\cup B_2\subseteq A$ and if
\eqref{it:stability2} holds then $B_1\cup B_2\cup B_3\subseteq A$.
\end{claim}
\begin{proof}
As the proofs for the two cases are similar we present the proof for the second case only, i.e., when $A \subset B_1\cup B_2\cup B_3$.
Recall that $|F_i\setminus B_{f(i)}|<\gamma n$ and $\mathcal{B}=(B_{f(1)},\dots, B_{f(5)})$ consists of two $2$-atoms, four $3$-atoms and one $4$-atom, each of which has size $n/8$. We deduce that
$\sum_{i\in[5]}|B_{f(i)}\setminus F_i|<12 \gamma n$ since the following contradiction to \eqref{eq:substantial5} would arise otherwise:
\[6^{n/2-2n(\log n)^{-1/18}}\leq |\Phi(F_1,\dots,F_5)|< 5^{5\gamma n}2^{n/4-12\gamma n}3^{n/2}4^{n/8}<6^{n/2-\gamma n}.\]
Due to the atom structure of $\mathcal{B}$ note that $\varphi\in\Phi_5(A)$ is
good if $\varphi^{-1}(i)\subset F_i\cap B_{f(i)}$ holds for all $i\in[5]$. Hence
\begin{align}
\label{eq:boundgood}
|\{ \varphi \in \Phi_5(A): \varphi \text{ is good}\}| > 2^{n/4}3^{n/2} 4^{n/8-12 \gamma n}=6^{n/2}2^{-24\gamma n}.
\end{align}
We now turn to bound the number of bad colorings of $A$.
Suppose that $(C_1,\dots, C_5)\in\mathcal{F}^5$ maximizes $|\{ \varphi \in \Phi(F'_1, \dots, F'_5): \varphi \text{ is bad}\}|$ over all $(F'_1, \dots, F'_5)\in\mathcal{F}^5$.
By~\eqref{eq:PhiInContainer1} of Observation~\ref{obs:PhiInContainer} and $\log_2|\mathcal{F}|\leq n(\log n)^{-1/18}$
due to~\eqref{it:container1} of Theorem~\ref{thm:container} we have
\begin{align}
\begin{split}
\label{eq:boundbad}
|\{ \varphi \in \Phi_5(A): \varphi \text{ is bad}\}|& \leq\sum_{(F_1,\dots,F_5)\in\mathcal{F}^5} \left|\{ \varphi \in \Phi(F_1, \ldots, F_5): \varphi \text{ is bad}\}\right|\\
&\leq 2^{5n(\log n)^{-1/18}} \left|\{ \varphi \in \Phi(C_1, \ldots, C_5): \varphi \text{ is bad}\}\right|.
\end{split}
\end{align}
Therefore suppose that $|\Phi(C_1,\dots, C_5)|\geq \left(3\cdot2^{7/2}+\frac1{100}\right)^{n/4}$ otherwise the claim follows. Theorem~\ref{thm:stability5}
then applies and there are largest sum-free sets $L_1, L_2, L_3$ and a $g:[5]\to [3]$ or $g:[5]\to [7]$
such that \eqref{it:stability51} or \eqref{it:stability52} of Theorem~\ref{thm:stability5} holds with $F_i,B_i, f,\mathcal{B}$ replaced by $C_i,L_i,g$ and $\mathcal{L}=(L_{g(1)},\dots, L_{g(5)})$.
As $C_1\cup\dots \cup C_5=A=F_1\cup \dots\cup F_5$, the sets
$L_1,L_2,L_3$ must be contained in $B_1\cup B_2\cup B_3$ (i.e., $L_i\in\{B_1,\dots ,B_7\}$)
and no $L_i$, $i\in[3]$, is contained in the union of the other two.
Hence, case~\eqref{it:stability52} of Theorem~\ref{thm:stability5} applies to $(C_1, \ldots, C_5)$ and
$\mathcal{L}$ consists of two $2$-atoms, four $3$-atoms and one $4$-atom.
In particular, a $\varphi \in \Phi(C_1, \ldots,C_5)$ satisfying $\varphi^{-1}(i) \subset L_{g(i)}$ for all $i \in [5]$ is a good
coloring and a bad
$\varphi\in\Phi(C_1, \dots, C_5)$ must therefore exhibit a $k\in[5]$ and an $x\in C_k\setminus L_{g(k)}$ such that $\varphi(x)= k$.
Call $(x,k)$ a bad pair and for such a pair let $\Phi(C_1, \dots, C_5|x \mapsto k)$ be
the set of all (bad) $\varphi \in \Phi(C_1, \dots, C_5)$ with $\varphi(x) =k$. Suppose $(y,\ell)$ maximizes $|\Phi(C_1, \dots, C_5|x \mapsto k)|$ over all bad pairs $(x,k)$, then
an argument as in the proof of Claim~\ref{claim:AcontU} will show that $|\Phi(C_1, \dots, C_5|y \mapsto \ell)|$ is small.
Indeed, fix a $3$-atom $\mathcal{L}(\boldsymbol{\eps})$ in $L_{g(\ell)}$ which exists since $L_{g(\ell)}$ consists of four atoms and $\mathcal{L}$ consists of seven atoms, four of which are $3$-atoms.
By Lemma~\ref{lem:matching} for each $a\in \mathcal{L}(\boldsymbol{\eps})$ there is a $b\in \mathcal{L}(\boldsymbol{\eps})$ such that $y=a+b$.
Coloring $y$ with $\ell$ therefore extends to at most 8 sum-free colorings of $\{a,b\}$ if $a\neq b$ and $a,b\in C_\ell$, to at most~2
if $a=b$, $a\in C_\ell$ (hence $4<8$ ways for two such pairs $a=b$ and $a'=b'$), and to at most 3 if $a\not\in C_\ell$ or $b\not\in C_\ell$. Hence,
as $|C_i\setminus L_{ g(i)}|<\gamma n$, $i\in[5]$, we have
\begin{align*}
|\{ \varphi \in \Phi(C_1, \dots,C_5): \varphi \text{ is bad}
\}| &\leq {5\gamma n} \cdot |\Phi(C_1, \dots, C_5|y \mapsto \ell)|\\
&\leq {5\gamma n} \cdot 5^{5 \gamma n} \cdot 2^{n/4} \cdot 3^{3n/8} \cdot 4^{n/8} \cdot 8^{n/16}\\
&= {5\gamma n} \cdot 5^{5\gamma n}\cdot 6^{n/2}\cdot\left(\frac{\sqrt 8}3\right)^{n/8}<5.93^{n/2}< \left(3\cdot2^{7/2}+\frac1{100}\right)^{n/4}
\end{align*}
for $n$ sufficiently large. In view of \eqref{eq:boundbad} the claim follows.
\end{proof}
We have established that
either $A=B_1\cup B_2$ for an independent $(B_1,B_2)$ or $A=B_1\cup B_2\cup B_3$ for an independent $(B_1,B_2,B_3)$.
Moreover, the proof shows that the number of colorings in $\Phi_5(A)$ is largely dominated by the good colorings.
Call a quintuple $\mathcal{D}=(D_1,\dots, D_5)$ of largest sum-free sets in $A$
\emph{substantial} if $\mathcal{D}$ consists of one $4$-atom and two $3$-atoms, or of two $2$-atoms, four $3$-atoms and one $4$-atom. By definition a coloring $\varphi\in\Phi_5(A)$ is good if and only if
$\varphi\in \Phi(\mathcal{D})$ for some substantial $\mathcal{D}$. Further, if $\mathcal{D}$ and $\mathcal{D}'$ are two distinct substantial quintuples, then $|\Phi(\mathcal{D})\cap \Phi(\mathcal{D}')|=o(|\Phi(\mathcal{D})|)$, i.e., most of the
good colorings of $A$ are assigned to a substantial $\mathcal{D}$ in a unique way.
Indeed, a coloring $\varphi\in\Phi(\mathcal{D})$ belongs to $\Phi(\mathcal{D})\setminus\Phi(\mathcal{D}')$
if for every $k$-atom $\mathcal{D}(\boldsymbol{\eps})$, $k\in\{2,3,4\}$, all $k$ colors are represented in $\mathcal{D}(\boldsymbol{\eps})$ under $\varphi$, i.e., $|\varphi(\mathcal{D}(\boldsymbol{\eps}))|=k$. It is easily seen that most $\varphi\in\Phi(\mathcal{D})$
satisfy this property.
Finally note that the substantial tuples $(D_1,\dots,D_5)$ in $A=B_1\cup B_2$ are exactly those having $B_1,B_2$ and $B_3$ as members, two of them twice.
There are $\binom{3}{2}\binom{5}{2}\cdot3=90$ ways to choose such a tuple.
Similarly, a substantial tuple $(D_1,\dots,D_5)$ in $A=B_1\cup B_2\cup B_3$ must contain an independent triple, say $\mathcal{B}'=(B_1',B_2',B_3')$,
for which there are $\binom72\cdot4$ many choices.
These either extend to a quintuple consisting of pairwise distinct largest sum-free sets, all of which are substantial,
or to a quintuple in which exactly one member is repeated, from which all but those containing $\mathcal{B}'(1,1,1)$ are substantial.
No other tuple is substantial. Together this yields $\binom72\cdot4\left(4\cdot3\cdot 5!+3\cdot4\cdot \frac{5!}2\right)=181440$
ways to choose a substantial tuple in $A=B_1\cup B_2\cup B_3$. This finishes the proof.
\end{proof}
\section{Concluding remarks}
\label{sec:concludingremarks}
In the case that $G$ is an even order abelian group the proof of Theorem~\ref{thm:main45} consists of two parts, a stability part and an exact part.
Both parts can be extended to more colors provided $G$ contains sufficiently many
largest sum-free sets.
Indeed, for $r=4,5$ the stability results, Theorem~\ref{thm:stability4} and~\ref{thm:stability5}, follow from Lemma~\ref{lem:alllargest} and~\ref{lem:OptStructure}.
The short argument in Lemma~\ref{lem:alllargest} extends to $r=6$ without
change using the lower bound $\kappa_{6,G}\geq 2^{3n/4}3^{n/2}$ (see the footnote on page~\pageref{eq:allgood}).
For $r=7$ we have the lower bound $\kappa_{7,G}\geq\kappa_7(B_1\cup B_2\cup B_3)\geq 4^{7n/8}$ obtained by an independent triple $(B_1,B_2,B_3)$
of largest sum-free sets in $G$. The same argument as in Lemma~\ref{lem:alllargest} then implies that for $|\Phi(F_1,\dots, F_7)|\geq 3.99^{7n/8}$ to hold
at least six of the seven sets from $(F_1,\dots, F_7)$ must be good and it can then be easily argued that the last $F_i$ must be good as well.
The second ingredient for the stability result is Lemma~\ref{lem:OptStructure} which identifies
the tuples $\mathcal{B}=(B_1,\dots, B_r)$ maximizing $|\Phi(\mathcal{B}')|$ over all tuples $\mathcal{B}'$ of largest sum-free sets.
This lemma can be extended to $r=6, 7$ along the same line. Alternatively, as Corollary~\ref{remark:reduction} reduces
the problem in Lemma~\ref{lem:OptStructure} for an arbitrary $G$ to a related one in $\mathbb F_2^t$ for $t\leq r$,
the problem might also be solved by computer search, if needed.
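One possible such search, included here only as an illustrative sketch (the function name \texttt{best\_structure} and the chosen parameters are ours), enumerates for given $r$ and $t$ all $r$-tuples of largest sum-free sets of $\mathbb{F}_2^t$ via Lemma~\ref{prop:chaf2t} and maximizes the per-element exponent $2^{-t}\sum_{\boldsymbol{\eps}\neq\boldsymbol 0}\log\big(\mathrm{mult}(\boldsymbol{\eps})\big)$, where $\mathrm{mult}(\boldsymbol{\eps})$ denotes the number of sets of the tuple containing the atom $\boldsymbol{\eps}$; up to the base of the logarithm this is the right-hand side of~\eqref{eq:phibis0} divided by $n$. In applications one would take the maximum over $2\le t\le r$.
\begin{verbatim}
from itertools import combinations_with_replacement
from math import log

def best_structure(r, t):
    # Largest sum-free sets of F_2^t are the sets {x : <x,v> = 1} for a
    # non-zero v; vectors are encoded as the bit masks 1, ..., 2^t - 1.
    vectors = range(1, 2 ** t)
    best = (float('-inf'), None, None)
    for tup in combinations_with_replacement(vectors, r):
        score, atoms = 0.0, []
        for x in range(1, 2 ** t):        # x = 0 lies in no such set
            m = sum(1 for v in tup if bin(x & v).count('1') % 2)
            if m:                         # an m-atom of relative size 2^(-t)
                score += log(m)
                atoms.append(m)
        score /= 2 ** t
        if score > best[0]:
            best = (score, tup, sorted(atoms))
    return best

# For r = 5 and t = 3 the maximal score is log(6)/2, attained by the
# atom structures described in the stability results above.
print(best_structure(5, 3))
\end{verbatim}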
It can be verified that for $r=6$ the optimal structure is obtained for $t=3$ and $\mathcal{B}$ consisting of four 3-atoms and three 4-atoms,
and for $r=7$ it is obtained for $t=3$ and $\mathcal{B}$ consisting of seven 4-atoms.
The optimal structure for an arbitrary $r$ is unknown.
Given the stability result the exact configuration can then be derived using the argument given in Section~\ref{sec:exact}
which works for all fixed $r$. Together, we obtain the following for $r=6,7$.
\begin{theorem}
Let $r\in\{6,7\}$ and let $G$ be an abelian group of sufficiently large even order.
Then $\kappa_r(A)=\kappa_{r,G}$ if and only if $A=B_1\cup B_2\cup B_3$ for an independent triple $(B_1,B_2,B_3)$ of largest sum-free sets, provided such a triple exists in $G$.
\end{theorem}
It would be interesting to extend the results concerning even order groups to arbitrary $r$, and from even order groups to arbitrary type I($q$) groups (for $r\geq 4$).
The methods presented here might be pushed further to give an answer in certain cases of $r>7$ for even order groups. They
might also be adapted to give an answer in particular cases of type I($q$) groups and $r\geq 4$. However, new ideas are needed to solve these problems in general.
\end{document}
|
\begin{document}
\title[critical points]{On the number of critical points of solutions of semilinear equations in $\mathbb{R}^2$}
\thanks{This work was supported by Prin-2015KB9WPT, by Universit\`a di Roma ``La Sapienza'' and partially supported by Indam-Gnampa}
\author[Gladiali]{Francesca Gladiali}
\address{Dipartimento di Chimica e Farmacia, Universit\`a di Sassari, via Piandanna 4 - 07100 Sassari, e-mail: {\sf [email protected]}.}
\author[Grossi]{Massimo Grossi}
\address{Dipartimento di Matematica, Universit\`a di Roma ``La Sapienza'', P.le A. Moro 2 - 00185 Roma, e-mail: {\sf [email protected]}.}
\maketitle
\begin{abstract}
In this paper we construct families of bounded domains $\Omega_{\varepsilon}$ and solutions $u_{\varepsilon}$ of
\[\begin{cases}
-\Delta u_{\varepsilon}=1&\text{ in }\ \Omega_{\varepsilon}\\
u_{\varepsilon}=0&\text{ on }\ \partial\Omega_{\varepsilon}
\end{cases}\]
such that, for any integer $k\ge2$, $u_{\varepsilon}$ admits at least $k$ maximum points for $\varepsilon$ small enough. The domain $\Omega_{\varepsilon}$ is ``not far'' from being convex, in the sense that it
is starshaped, the curvature of $\partial\Omega_{\varepsilon}$ vanishes at exactly $two$ points and the minimum of the curvature of $\partial\Omega_{\varepsilon}$ goes to $0$ as $\varepsilon\to0$.
\end{abstract}
\section{Introduction}\label{s0}
The computation of the number and of the nature of critical points of positive solutions of the problem
\begin{equation}\label{i0}
\begin{cases}
-\Delta u=f(u)&\text{ in }\ \Omega\\
u=0&\text{ on }\ \partial\Omega
\end{cases}
\end{equation}
where $\Omega\subset\mathbb{R}^n$, $n\ge2$, is a smooth bounded domain and $f$ is a smooth nonlinearity, is a classic and fascinating problem.\\
Many techniques and important results were developed in the literature (Morse theory, degree theory, etc.) to address this problem. In these few lines it is impossible to mention all these contributions, so we will limit ourselves to recalling some of them that are closer to the purpose of this paper.\\
One of the first major results concerns the case $f(u)=\lambda u$, so that $u$ is the first eigenfunction of the Laplacian with zero Dirichlet boundary condition. It was proved by
Brascamp and Lieb \cite{bl} and by Acker, Payne and Phillippin \cite{app} in dimension $n=2$ that if $\Omega\subset\mathbb{R}^n$ is strictly convex then $-\log u$ is convex, so that the superlevel sets are convex and $u$ admits a unique critical point in $\Omega$. Other results on the shape of level sets for various nonlinearities $f$ can be found in \cite{Ko83}, \cite{CS82}, \cite{k}, \cite{Ka86}, \cite{Ga55}, \cite{Ga57}, \cite{Cslin94} and references therein.
\\
A second seminal result that we want to mention is the fundamental theorem by Gidas, Ni and Nirenberg \cite{gnn}, which holds in domains that are convex in the direction $x_i$ for any $i=1,..,n$. Recall that a domain is convex in the direction $x_1$ (say) if, whenever $P=(p_1,x')\in\Omega$ and $Q=(q_1,x')\in\Omega$, the line segment $\overline{PQ}$ is contained in $\Omega$.
\\
\begin{theorem*}[{\bf Gidas, Ni, Nirenberg}]
Let $\Omega\subset\mathbb{R}^n$ be a bounded, smooth domain which is symmetric with respect to the plane $x_i=0$ for any $i=1,..,n$ and convex in the $x_i$ direction for $i=1,..,n$. Suppose that $u$ is a positive
solution to \eqref{i0}
where $f$ is a locally Lipschitz nonlinearity. Then
\begin{itemize}
\item $u$ is symmetric with respect to $x_1,..,x_n$. (Symmetry)
\item $\frac {\partial u}{\partial x_i}<0$ for $x_i>0$ and $i=1,\dots,n$. (Monotonicity)
\end{itemize}
\end{theorem*}
\noindent An easy consequence of the symmetry and monotonicity properties in the previous theorem is that \[
\sum_{i=1}^nx_i\frac{\partial u}{\partial x_i}<0 \ \ \forall x\ne0,\] that is, all the superlevel sets are $starshaped$ with respect to the origin.\\
This theorem holds in symmetric domains. Although it is expected that the uniqueness of the critical point (as well as the starlikeness of superlevel sets) holds in more general convex domains, this is a very difficult hypothesis to remove. \\
Next we mention another important result which holds for a wide class of nonlinearities $f$, without the symmetry assumption on $\Omega$, and for semi-stable solutions. To this end we recall that a solution $u$ to \eqref{i0} is semi-stable if the linearized operator at $u$ admits a nonnegative first eigenvalue.
\begin{theorem*}[{\bf Cabr\'e, Chanillo \cite{cc}}]
Assume $\Omega$ is a smooth, bounded and convex domain of $\mathbb{R}^2$ whose boundary has positive curvature. Suppose $f\ge0$ and $u$ is a semi-stable positive solution to \eqref{i0}.
Then $u$ has a unique critical point, which is non-degenerate.
\end{theorem*}
As a consequence the superlevel sets of $u$ are strictly convex in a neighborhood of the critical point and in a neighborhood of the boundary. It is thought that they are all convex, but this is certainly not true for suitable nonlinearities like in the following surprising result:
\begin{theorem*}[{\bf Hamel, Nadirashvili, Sire \cite{hns}}]
In dimension $n=2$ there are some smooth bounded convex domains $\Omega$ and some $C^{\infty}$ functions $f:[0,+\infty)\to \mathbb{R}$ for which problem \eqref{i0} admits a solution $u$ which is not quasiconcave.
\end{theorem*}
We recall that a function is called quasiconcave if its superlevel sets are all convex. We can then conclude that the convexity of the domain is not always preserved by the superlevel sets. Nevertheless, since the domain $\Omega$ in \cite{hns} is symmetric, by the Gidas, Ni, Nirenberg theorem the superlevel sets in this example are still starshaped and the maximum point of the solution is unique.\\
\noindent The previous results suggest the following questions:
\vskip0.2cm
\noindent {\bf Question 1}: {\em Assume $\Omega$ is starshaped. Are the superlevel sets of any positive solution to \eqref{i0} starshaped?}
\vskip0.2cm
\noindent {\bf Question 2}: {\em Assume that $u$ is a positive solution to \eqref{i0}
in a smooth bounded domain $\Omega\subset\mathbb{R}^2$ whose boundary curvature is negative somewhere. What about the number of critical points of $u$?}
\vskip0.2cm
\noindent Of course the interesting examples deal with contractible domains $\Omega$, otherwise it is not difficult to construct examples of solutions $u$ to \eqref{i0} with many critical points.
Some results in the direction of Question 1 were obtained for non-symmetric domains, in a perturbative setting, by Grossi and Molle \cite{gm} and by Gladiali and Grossi \cite{gg1,gg2}.\\
In this paper we answer Question 1, showing that the starshapedness of the domain is not inherited by the superlevel sets.
Moreover we also consider Question 2, showing that in general there is no bound on the number of critical points. \\
Of course this last result is very sensitive to the shape of $\Omega$. In a recent paper \cite{lr} it was shown that if $\partial\Omega$ is contained in $\left\{z\in\mathbb{C}:|z|^2=f(z)+\overline{f(z)}\right\}$, where $f(z)$ is a rational function, then, differently from our case, there is a bound on the number of critical points. We refer to \cite{am} for other results in this direction.
\\
Actually we will construct a family of domains $\Omega_{\varepsilon}$,
starshaped with respect to an interior point, and solutions $u_{\varepsilon}$ of the classical torsion problem, namely
\begin{equation}\label{eq:torsion}
\begin{cases}
-\Delta u=1&\text{ in } \ \Omega\\
u=0&\text{ on }\ \partial\Omega
\end{cases}
\end{equation}
with an arbitrarily large number of maxima and of disjoint superlevel sets.
Moreover the curvature of $\partial\Omega_{\varepsilon}$ vanishes at exactly two points and its minimum value goes to $0$ as $\varepsilon\to0$.
In some sense our domains $\Omega_{\varepsilon}$ are not ``far'' from being convex.
More precisely, our result is the following.
\begin{theorem}\label{i1}
For any integer $k\geq 2$ there exist a family of smooth bounded domains $\Omega_{\varepsilon,k}\subset\mathbb{R}^2$ and smooth functions $u_{\varepsilon,k}:\Omega_{\varepsilon,k}\to\mathbb{R}^+$ which solve the torsion problem \eqref{eq:torsion} in $\Omega_{\varepsilon,k}$, such that for $\varepsilon$ small enough,
\begin{itemize}
\item $\Omega_{\varepsilon,k}$ is starshaped with respect to an interior point.
$(P0)$
\item There exists $c>0$ such that the superlevel set $\{u_{\varepsilon,k}>c\}$ is non-empty and has at least
$k$ connected components; in particular $u_{\varepsilon,k}$ has at least $k$ maximum points.
$(P1)$
\item If $S$ is the strip $S=\{(x,y)\in\mathbb{R}^2\hbox{ such that }|y|<1\}$
and $Q$ is any compact set of $\mathbb{R}^2$ then \ $\Omega_{\varepsilon,k}\cap Q\xrightarrow[\varepsilon\to0]{}S\cap Q$.\hskip5cm(P2)
\item The curvature of $\partial\Omega_{\varepsilon,k}$ changes sign and vanishes at exactly two points. Moreover $\min\Big(Curv_{\partial\Omega_{\varepsilon,k}}\Big)\xrightarrow[\varepsilon\to0]{}0$.
$(P3)$
\end{itemize}
\end{theorem}
A picture of $\Omega_{\varepsilon,2}$ for $\varepsilon$ small is given in Fig.~1.
\begin{figure}[h]
\centering
\includegraphics[scale=0.15]{graf.png}
\caption{Domain $\Omega_{\varepsilon,2}$ with level set $\{u_{\varepsilon,2}=c\}$}
\end{figure}
Of course $(P2)$ implies that the superlevel set $\{u_{\varepsilon,k}>c\}$ is not starshaped.
We recall that every solution to \eqref{eq:torsion} is positive by the Maximum principle and semi-stable as in \cite{cc}.
We point out that the solution $u_{\varepsilon,k}$ will be explicitly provided and the domain $\Omega_{\varepsilon,k}$ will be the superlevel set $\{u_{\varepsilon,k}>0\}$.\\
In some sense our result shows that the assumption on the positivity of the curvature of $\partial\Omega$ in
Cabr\'e and Chanillo's Theorem cannot be relaxed, because it is enough that the curvature of $\partial\Omega_{\varepsilon,k}$ satisfies $(P3)$ for there to exist a semi-stable solution of a (simple) PDE with an arbitrary number of critical points. By $(P2)$ our domain is ``locally'' close to a strip and $u_{\varepsilon,k}\xrightarrow[\varepsilon\to0]{}\frac12-\frac{y^2}2$ in $C^2_{loc}(\mathbb{R}^2)$. Note that the function $\frac12-\frac{y^2}2$, which solves $-\Delta u=1$ in the strip $S$, was also used by Hamel, Nadirashvili and Sire \cite{hns}.
We point out that, when $\varepsilon$ is small enough, the domain $\Omega_{\varepsilon,k}$ in Theorem \ref{i1} looks like the one in \cite{hns}, even if it has negative curvature somewhere.
Before describing the construction of the solution $u_{\varepsilon,k}$ let us make some remarks on $(P2)$. It shows that the starshapedness of $\Omega_{\varepsilon,k}$ is not enough to guarantee that the superlevel sets are starshaped, thus answering Question 1. To our knowledge this is the first example with this property.
Theorem \ref{i1} also shows that there cannot exist a starshaped rearrangement which associates to a smooth function $u$ another function $u^*$ with starshaped superlevel sets verifying the standard properties of rearrangements, i.e.
\begin{equation}\label{i2}
\int_{\Omega^*}|u^*|^p= \int_\Omega|u|^p\quad\forall p\ge1\qquad\hbox{ and }\qquad\int_{\Omega^*}|\nabla u^*|^2\le \int_\Omega|\nabla u|^2.
\end{equation}
A starshaped rearrangement which verifies, under additional assumptions, properties \eqref{i2} was introduced by Kawohl in \cite{k1} and \cite{k}. The existence of such a rearrangement would imply, jointly with \eqref{i2}, that
\begin{equation}\label{i99}
\inf\limits_{u\in H^1_0(\Omega)}\frac12\int_\Omega|\nabla u|^2-\int_\Omega u
\end{equation}
is achieved at a unique function $u$ with starshaped superlevel sets. However this type of rearrangement does not always exist; it depends on the shape of $\Omega$.
An example (see \cite{k} and \cite{g}) is the so-called {\em ``Grabmüller's long nose''} \cite{g}. Since our pair $(\Omega_{\varepsilon,k},u_{\varepsilon,k})$ satisfies \eqref{i99} and $u_{\varepsilon,k}$ has some superlevel sets which are not starshaped, the requested starshaped rearrangement cannot exist for $\Omega_{\varepsilon,k}$.\\
Finally we remark that in Makar-Limanov \cite{ml} it was proved that if $\Omega$ is a smooth bounded strictly convex domain of $\mathbb{R}^2$ and $u$ solves the torsion problem in $\Omega$, then the superlevel sets are strictly convex too. It seems then that the torsion problem is a {\em ``good''} problem, in which the properties of $\Omega$ are preserved by the superlevel sets. It is then even more unexpected that this does not hold for starshapedness.
Next we say some words about the construction of $u_{\varepsilon,k}$. The starting point is given by the function
$$\phi(y)=\frac 12-\frac12y^2$$
which solves
\begin{equation}
\begin{cases}
-\Delta\phi=1&\text{ in }\ |y|<1\\
\phi=0&\text{ on }\ y=\pm1.
\end{cases}
\end{equation}
Our function $u_{\varepsilon,k}$ is a perturbation of $\phi$ with suitable $harmonic$ functions. The choice of the harmonic functions is quite delicate: let us consider the holomorphic function $F_k:\mathbb{C}\to\mathbb{C}$,
\begin{equation}\label{eq:F}
F_k(z)=-\prod_{i=1}^{2k}(z-x_i)
\end{equation}
for arbitrary real numbers $x_1<x_2<..<x_{2k}$ and define
$$v_k(x,y)=Re\Big(F_k(z)\Big).$$
Next we define $u_{\varepsilon,k}$ as
$$u_{\varepsilon,k}(x,y)=\frac 12-\frac12y^2+\varepsilon(y^3-3yx^2)+\varepsilon^\frac32v_k(x,y).$$
We trivially have that $-\Delta u_{\varepsilon,k}=1$, and the proof of Theorem \ref{i1} reduces to showing that for $\varepsilon$
small enough the set $\Omega_{\varepsilon,k}=\{u_{\varepsilon,k}>0\}$ is a bounded smooth domain which verifies $(P0)$-$(P3)$. Although the function $u_{\varepsilon,k}$ is explicitly provided, the proof of Theorem \ref{i1} involves delicate computations. Note that the power $\frac32$ appearing in the definition of $u_{\varepsilon,k}$ can be replaced with any real number $\alpha\in(1,2)$. However $\alpha=2$ is not allowed for technical reasons (``bad'' interactions occur).\\
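For the reader's convenience we record why $-\Delta u_{\varepsilon,k}=1$ holds (a short check, not strictly needed for the sequel): the first term has Laplacian $-1$, while both perturbations are harmonic,
\[
\Delta\Big(\frac12-\frac12 y^2\Big)=-1,\qquad
\Delta\big(y^3-3x^2y\big)=6y-6y=0,\qquad
\Delta v_k=0,
\]
the last equality because $v_k=Re\big(F_k\big)$ is the real part of a holomorphic function (and $y^3-3x^2y=-Im(z^3)$ is itself harmonic).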
There is some flexibility in the choice of the holomorphic function $F_k$; indeed it can be replaced by any other one whose restriction to the real line has $k$ maximum points and verifies a suitable growth condition at $\pm\infty$. \\
Theorem \ref{i1} can be extended to semi-stable solutions of more general nonlinear problems. Let us consider a solution $u$ to
{\beta}egin{equation}{\lambda}abel{f1}
{\beta}egin{cases}
-\Deltaelta u={\lambda}ambda f(u)&\hbox{in }\Omega\\
u>0&\hbox{in }\Omega\\
u=0&\hbox{on }\partial\Omega
{\varepsilon}nd{cases}
{\varepsilon}nd{equation}
where $\Omega{\sigma}ubset\mathbb{R}^2$ is a bounded smooth domain, $f:\mathbb{R}^+\thetaetao\mathbb{R}$ is a smooth nonlinearity (say $C^1$) with $f(0)>0$ and $u_{\lambda}$ is a family of solutions of {\varepsilon}qref{f1} satisfying
{\beta}egin{equation}{\lambda}abel{f2}
||u_{\lambda}||_i^{*}nfty{\lambda}e C\quad\hbox{for ${\lambda}$ small},
{\varepsilon}nd{equation}
with $C$ independent of ${\lambda}$.
A classical example of solutions satisfying {\varepsilon}qref{f1} and {\varepsilon}qref{f2} was given by Mignot and Puel \cite{mp} when $f$ is a positive, increasing and convex nonlinearity and $0<{\lambda}<{\lambda}^*$.
See \cite[Theorem 10]{gg1} for some results about convexity and uniqueness of the critical point to solutions to {\varepsilon}qref{f1}.
Then we have the following result,
\begin{theorem}\label{i3}
Let $\varepsilon>0$, $k\ge 2$ and $\Omega_{\varepsilon,k}$ be as in Theorem \ref{i1}. Then there exists $\bar \lambda$ (depending on $\varepsilon$) such that, if $u_{\lambda,\varepsilon,k}$
is a solution to \eqref{f1} in $\Omega_{\varepsilon,k}$ that satisfies \eqref{f2}, then, for any $0<\lambda<\bar \lambda$, $u_{\lambda,\varepsilon,k}$
is semi-stable and satisfies $(P1)$-$(P3)$.
\end{theorem}
\section{The holomorphic function $F(z)$}
Here and in the next sections, to simplify the notation, we omit the index $k$ when we define the functions $v(x,y)$, $u_{\varepsilon}(x,y)$ and the domains $\Omega_{\varepsilon}$.\\
For $k\ge2$ let us consider arbitrary real numbers
$x_1<x_2<..<x_{2k}$ and the holomorphic function $F:\mathbb{C}\to\mathbb{C}$ in \eqref{eq:F}, given by
\begin{equation}\label{b1}
F(z)=-\prod_{i=1}^{2k}(z-x_i)=-\sum_{i=0}^{2k}a_iz^i
\end{equation}
where of course $a_{2k}=1$.\\
Let us denote by $f$ the restriction of $F$ to the {\em real line}. We immediately get that $f(x_1)=..=f(x_{2k})=0$ and that $f$ has $k$ maximum points.
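As a simple illustration (the specific zeros below are chosen only as an example and are not used elsewhere), for $k=2$ and $x_i=i$, $i=1,..,4$, one has
\[
f(x)=-(x-1)(x-2)(x-3)(x-4),
\]
which vanishes at $1,2,3,4$, has two local maxima, one in $(1,2)$ and one in $(3,4)$, a local minimum in $(2,3)$, and satisfies $f(x)\to-\infty$ as $|x|\to\infty$.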
Let us consider the function $v:\mathbb{R}^2\to\mathbb{R}$ defined as
\begin{equation}\label{eq:v-re}
v(x,y)=Re\Big(F(z)\Big)
\end{equation}
which is harmonic in $\mathbb{R}^2$ and satisfies $v(x,0)=f(x)$. By construction
\begin{equation}\label{eq:v-pj}
v(x,y)=-\sum _{j=0}^{2k} a_jP_j(x,y)
\end{equation}
with $a_{2k}=1$ and where the $P_j$ are homogeneous harmonic polynomials of degree $j$. Finally we introduce the function
\begin{equation}\label{eq:u-epsilon}
\boxed{u_{\varepsilon}(x,y)=\frac 12-\frac12y^2+\varepsilon(y^3-3x^2y)+\varepsilon^\frac 32v(x,y)}
\end{equation}
which satisfies
\[-\Delta u_{\varepsilon}=1\ \ \text{ in }\mathbb{R}^2.\]
The function $v$ coincides with $f(x)$ along the $x$-axis, while $u_{\varepsilon}(x,0)=\frac 12+\varepsilon^{\frac32}f(x)$. We end this section with a brief comment on the term $\varepsilon(y^3-3x^2y)$: it appears in the definition of $u_{\varepsilon}$ so that the curvature of our domain vanishes at exactly two points. It
breaks the symmetry of the domain with respect to $y$; otherwise the curvature would vanish at $four$ points.
\section{Proof of Theorem \ref{i1}}
In this section we show that the function $u_{\varepsilon}$ in \eqref{eq:u-epsilon} verifies the claim of Theorem \ref{i1}. In the rest of the paper we let $o(1)$ denote a quantity that goes to zero as $\varepsilon$ goes to zero, and we let $k\geq 2$.
\begin{theorem}\label{b2}
For $\varepsilon$ small enough the superlevel set
\[
\{(x,y)\in\mathbb{R}^2\hbox{ such that }u_{\varepsilon}(x,y)>0\},
\]
with $u_{\varepsilon}(x,y)$ as in \eqref{eq:u-epsilon}, admits a connected component (that we call $\Omega_{\varepsilon}$)
which satisfies:\\
i) $\Omega_{\varepsilon}$ is a smooth bounded domain;\\
ii) $\Omega_{\varepsilon}$ is starshaped with respect to one of its points;\\
iii) $\Omega_{\varepsilon}$ contains $k$ disjoint connected components $Z_{1,\varepsilon},..,Z_{k,\varepsilon}$ of the superlevel set
$\{(x,y)\in\mathbb{R}^2\hbox{ such that } u_{\varepsilon}(x,y)>\frac 12\}$.
\end{theorem}
{\beta}egin{proof}
{\bf Step 1:} Let $x_{\varepsilon}=\left(\frac 3{\varepsilon^\frac 32}\right)^\frac 1{2k}$. We want to show that
\[u_{\varepsilon}(\pm x_{\varepsilon},y)\leq -2 \ \ \text{ for } |y|<1+h\]
when ${\varepsilon}$ is small enough and $0<h<1$. In {\varepsilon}qref{eq:v-pj}
let us consider the polynomial of degree $2k$, namely
\[P_{2k}(x,y)={\sigma}um_{j=0}^k b_j x^{2k-2j}y^{2j}\]
for some suitable coefficients $b_j$ such that $b_0=b_{k}=1$. Then
\[{\varepsilon}^\frac 32 P_{2k}(x_{\varepsilon},y)={\varepsilon}^\frac 32 {\sigma}um_{j=0}^k b_j {\lambda}eft(\frac 3{{\varepsilon}^\frac 32}\rhoight)^\frac {2k-2j}{2k} y^{2j}
=3+o(1)\quad\hbox{as }{\varepsilon}\thetaetao0\]
uniformly with respect to $-1-h<y<1+h$.\\
In a very similar manner, for any $0{\lambda}e j{\lambda}e2k-1$ we have that
\[{\varepsilon}^\frac 32 P_j(x_{\varepsilon},y)=o(1)\]
and
\[{\lambda}eft| {\varepsilon} (y^3-x_{\varepsilon}^2y)\rhoight|=O{\lambda}eft({\varepsilon}^\frac {2k-3}{2k}\rhoight)\]
for ${\varepsilon}\thetaetao 0$ uniformly with respect to $-1-h<y<1+h$.
Considering all these estimates we obtain
\[{\lambda}eft| u_{\varepsilon}(x_{\varepsilon},y)+\frac 52+\frac 12 y^2\rhoight|=o(1)\quad\hbox{as }{\varepsilon}\thetaetao0.
\]
The very same computation shows also that
\[ u_{\varepsilon}(-x_{\varepsilon},y){\lambda}eq -2
\]
when ${\varepsilon}$ is small enough and concludes the proof.
\
\noindent {{\beta}f Step 2:} We show that $u_{\varepsilon}(x,y)< 0$ on the segments
$$T_{\pm h}=\{(x,y)i^{*}n\mathbb{R}^2\hbox{ such that } y=\pm(1+ h), xi^{*}n[-x_{\varepsilon},x_{\varepsilon}]\}$$
for some $0<h<1$ when ${\varepsilon}$ is small enough.\\
First let us observe that for
$(x,y)i^{*}n T_{\pm h}$,
\[{\lambda}eft|{\varepsilon}(y^3-3x^2y)\rhoight|{\lambda}eq {\varepsilon} (8+6x_{\varepsilon}^2){\lambda}eq 8{\varepsilon}+12{\varepsilon}^\frac{2k-3}{2k}=O{\lambda}eft({\varepsilon}^\frac{2k-3}{2k}\rhoight)\]
when ${\varepsilon}\thetaetao 0$.
Next, note that, by {\varepsilon}qref{eq:v-pj}
\[v(x,\pm(1+h))=-{\sigma}um_{j=1}^{2k}a_jP_j(x,\pm( 1+h))\]
and since $a_{2k}=1$ we get that
\[{\sigma}up_{xi^{*}n \mathbb{R}}v(x,\pm(1+h))=Ci^{*}n\mathbb{R}.\]
Then we obtain
\[u_{\varepsilon}(x,\pm(1+h))=-\frac 12h^2-h+{\varepsilon} ((\pm(1+h))^3\pm3x^2(1+h))+{\varepsilon} ^\frac 32 v(x,\pm(1+h))<-\frac 12 h^2<0\]
for ${\varepsilon} $ small enough.\\
\noindent {{\beta}f Step 3:}
We have proved that
for every ${\varepsilon}$ small enough $u_{\varepsilon}(x,y)<0$
on the boundary of the rectangle $R_{\varepsilon}=\{(x,y)\in\mathbb{R}^2\hbox{ such that } -x_{\varepsilon}\le x\le x_{\varepsilon}, -(1+h)\le y\le1+h \}$. Since $u_{\varepsilon}(x_1,0)=\frac 12+\varepsilon^{\frac32} v(x_1,0)=\frac 12+\varepsilon^{\frac32} f(x_1)=\frac 12$,
this implies that there is a connected component of the superlevel set $\{u_{\varepsilon}(x,y)>0\}$,
that we call $\Omega_{\varepsilon}$, which is contained in the interior of $R_{\varepsilon}$ and contains the point $(x_1,0)$.
Since $u_{\varepsilon}$ is continuous then $\Omegaega_{{\varepsilon}}$ is a connected
open set with non empty interior.\\
Furthermore when ${\varepsilon}$ satisfies
{\beta}egin{equation}{\lambda}abel{cond-ep-1}
{\varepsilon}<{\lambda}eft(\frac 1{2{\sigma}up_{xi^{*}n[x_1,x_{2k}] }(-f(x))}\rhoight)^\frac 23
{\varepsilon}nd{equation}
then all the segment $[x_1,x_{2k}]\thetaetaimes \{0\}$ belongs to $\Omegaega_{\varepsilon}$.\\
\noindent {{\beta}f Step 4:}
In this step we prove that when ${\varepsilon}$ is small enough $\Omegaega_{{\varepsilon}}$ is $smooth$ and $starshaped$ with respect to the point $(x_1,0)$, which is equivalent to show that
$$(x-x_1,y)\cdot\nu(x,y){\lambda}e-{\alpha}lpha<0\hbox{ for any }(x,y)i^{*}n\partial \Omegaega_{{\varepsilon}},$$
where $\nu(x,y)$ is the outer normal of $\partial \Omegaega_{\varepsilon}$ at the point $(x,y)$. In particular we will show that
{\beta}egin{equation}\nonumber
(x-x_1)\frac{\partial u_{\varepsilon}}{\partial x}+y\frac{\partial u_{\varepsilon}}{\partial y}{\lambda}e-{\alpha}lpha\quadf_0rall (x,y)i^{*}n R_{\varepsilon}\hbox{ such that } u_{\varepsilon}(x,y)=0.
{\varepsilon}nd{equation}
It is easily seen that
\[(x-x_1)\frac{\partial u_{\varepsilon}}{\partial x}+y\frac{\partial u_{\varepsilon}}{\partial y}=-y^2+{\varepsilon}{\lambda}eft( -9x^2y+6xx_1y+3y^3\rhoight) +{\varepsilon}^\frac 32 {\lambda}eft((x-x_1)\frac{\partial v}{\partial x}+y\frac{\partial v}{\partial y}\rhoight).\]
On the other hand since $u_{\varepsilon}(x,y)=0$ on $\partial \Omegaega_{{\varepsilon}}$ we get that
\[-y^2=-1-2{\varepsilon}(y^3-3x^2y) -2{\varepsilon}^\frac 32v(x,y)\]
and
{\beta}egin{equation}{\lambda}abel{eq:stell}\nonumber
(x-x_1)\frac{\partial u_{\varepsilon}}{\partial x}+y\frac{\partial u_{\varepsilon}}{\partial y}=-1+{\varepsilon}{\lambda}eft(y^3-3x^2y+6xx_1y \rhoight)
+{\varepsilon}^\frac 32 {\lambda}eft((x-x_1)\frac{\partial v}{\partial x}+y\frac {\partial v}{\partial y}-2v(x,y)\rhoight).
{\varepsilon}nd{equation}
By {\varepsilon}qref{eq:v-pj} and Euler Theorem we get
{\beta}egin{equation}\nonumber
x\frac{\partial v}{\partial x}+y\frac{\partial v}{\partial y}=-{\sigma}um_{j=0}^{2k}a_j{\lambda}eft(
x\frac{\partial P_j}{\partial x}+
y\frac{\partial P_j}{\partial y}\rhoight)=- {\sigma}um_{j=1}^{2k} j a_j P_j(x,y)
{\varepsilon}nd{equation}
and so, recalling that $a_{2k}=1$,
\[
-2v(x,y)+(x-x_1)\frac{\partial v}{\partial x}+y\frac {\partial v}{\partial y}=- {\sigma}um_{j=0}^{2k} (j-2) a_j P_j(x,y)+x_1{\sigma}um_{j=1}^{2k} a_j\frac{\partial P_j}{\partial x}\xrightarrow[|x|\thetaetaoi^{*}nfty]\ -i^{*}nfty
\]
uniformly for $yi^{*}n[-1-h,1+h]$. Hence
\[{\sigma}up_{(x,y)i^{*}n (-i^{*}nfty,i^{*}nfty)\thetaetaimes [-1-h,1+h]}{\lambda}eft(-2v(x,y)+(x-x_1)\frac{\partial v}{\partial x}+y\frac {\partial v}{\partial y}\rhoight)=d<i^{*}nfty.\]
In addition
\[{\sigma}up_{(x,y)i^{*}n [-x_{\varepsilon},x_{\varepsilon}]\thetaetaimes [-1-h,1+h]} {\lambda}eft(y^3-3x^2y+6xx_1y\rhoight){\lambda}e Cx_{\varepsilon}^2=O{\lambda}eft({\varepsilon}^{-\frac 3{2k}}\rhoight)\]
as ${\varepsilon}\thetaetao 0$, so that
\[\varepsilon \left(y^3-3x^2y+6xx_1y\right)=O\left(\varepsilon^{\frac {2k-3}{2k}}\right)\]
in the rectangle $R_{\varepsilon}$.
Summarizing again we have that
\[{\sigma}up_{\partial \Omegaega_{\varepsilon}{\sigma}ubset R_{\varepsilon}}{\lambda}eft( (x-x_1)\frac{\partial u_{\varepsilon}}{\partial x}+y\frac{\partial u_{\varepsilon}}{\partial y}\rhoight){\lambda}e -1+o(1)<-\frac12\]
for ${\varepsilon}\thetaetao 0$ which gives the claim.\\
Of course $(x-x_1)\frac{\partial u}{\partial x}+y\frac{\partial u}{\partial y}\neq 0$ on $\partial \Omegaega_{{\varepsilon}}$ implies that $\partial \Omegaega_{{\varepsilon}}$ is a smooth curve. \\
\noindent {{\beta}f Step 5:} Here we prove that the superlevel set
\[L_{{\varepsilon}}:=
\{(x,y)i^{*}n \mathbb{R}^2 \hbox{ such that }
u_{\varepsilon}(x,y)> \frac 12
\}
\]
admits in $\Omegaega_{\varepsilon}$ at least $k$ disjoint components $Z_{1,{\varepsilon}},\partialltaots, Z_{k,{\varepsilon}}$.\\
Since $f(x_j)=0$ for $j=1,\partialltaots,2k$ and $f(x)\thetaetao -i^{*}nfty$ as $|x|\thetaetao i^{*}nfty$, there exist points $s_ji^{*}n (x_{2j},x_{2j+1})$ for $j=1,\partialltaots,k-1$ and points ${\beta}ar s_ji^{*}n (x_{2j+1},x_{2j+2})$ for $j=0,\partialltaots,k$ such that
{\beta}egin{align*}
f(s_j)=\min_{xi^{*}n [ x_{2j},x_{2j+1}]}f(x)<0 & \thetaetaext{ for }j=1,\partialltaots,k-1\\
f({\beta}ar s_j)=\max_{xi^{*}n [ x_{2j+1},x_{2j+2}]}f(x)>0 & \thetaetaext{ for }j=0,\partialltaots,k-1
{\varepsilon}nd{align*}
First observe that
\[u_{\varepsilon}({\beta}ar s_j,0)=\frac 12 +{\varepsilon}^\frac 32 v({\beta}ar s_j,0)=\frac 12 +{\varepsilon}^\frac 32 f({\beta}ar s_j)>\frac 12\]
for $j=0,\partialltaots,k$ so that the points $({\beta}ar s_j,0)$ are contained in $L_{{\varepsilon}}$ for every ${\varepsilon}$.
Next we want to prove that
{\beta}egin{equation}{\lambda}abel{eq:u-delta}
u_{\varepsilon}(s_j,y)<\frac 12
{\varepsilon}nd{equation}
for $j=1,\partialltaots,k-1$ and $-1-h<y<1+h$.\\
In this way since ${\beta}ar s_j<x_{2j}<s_{j+1}$ and the segment $[x_1,x_{2k}]\thetaetaimes \{0\}$ is contained in $\Omegaega_{\varepsilon}$ by Step 3, we also obtain that the superlevel set $L_{{\varepsilon}}$ admits at least $k$ disjoint components.\\
To prove {\varepsilon}qref{eq:u-delta} we argue by contradiction and assume that there exists
a sequence ${\varepsilon}_n\thetaetao 0$ and points $y_ni^{*}n [-1-h,1+h]$ such that
{\beta}egin{equation}{\lambda}abel{eq:passaggio}
u_{{\varepsilon}_n}(s_j,y_n)=\frac 12-\frac 12 y_n^2+{\varepsilon}_n(y_n^3-3s_j^2y_n)+{\varepsilon}_n^\frac 32 v(s_j,y_n)\gammae\frac 12
{\varepsilon}nd{equation}
for $n\thetaetao i^{*}nfty$, for a fixed value of $j$. Formula {\varepsilon}qref{eq:passaggio} easily implies that $y_n\thetaetao 0$ as $n\thetaetao i^{*}nfty$ since $(y_n^3-3s_j^2y_n)$ and $v(s_j,y_n)$ are uniformly bounded and ${\varepsilon}_n\thetaetao 0$. Next we observe that since $v(s_j,0)=f(s_j)<0$
then $v(s_j,y_n){\lambda}e \frac {f(s_j)} 2<0$ for $n$ large enough. Moreover, using that $-\frac 12y_n^2+{\varepsilon}_n y_n^3<-\frac 14y_n^2$ for $n$ large enough we have
\[{\beta}egin{split}
u_{{\varepsilon}_n}(s_j,y_n)&=\frac 12-\frac 12 y_n^2+{\varepsilon}_n(y_n^3-3s_j^2y_n)+{\varepsilon}_n^\frac 32 v(s_j,y_n)\\
&{\lambda}e \frac 12-\frac 14y_n^2-3{\varepsilon}_n s_j^2y_n+{\varepsilon}_n^\frac 32 \frac {f(s_j)}2 <\frac 12
{\varepsilon}nd{split}
\]
since, completing the square, $-\frac 14y_n^2-3\varepsilon_n s_j^2y_n\le 9\varepsilon_n^2s_j^4$ and hence $-\frac 14y_n^2-3\varepsilon_n s_j^2y_n+\varepsilon_n^\frac 32 \frac {f(s_j)}2\le\varepsilon_n^\frac 32\left( \frac {f(s_j)}2+9\varepsilon_n^\frac 12s_j^4\right)<0$
for $n$ large enough. This contradiction ends the proof.
{\varepsilon}nd{proof}
Our next aim is to derive additional information about the shape of $\Omega_{\varepsilon}$, in particular regarding the oriented curvature of $\partial \Omega_{\varepsilon}$. Since $\partial \Omega_{\varepsilon}$ is a level curve of $u_{\varepsilon}(x,y)$, its oriented curvature at the point $(x,y)$ is given by
\begin{equation} \label{eq:curv}
Curv_{\partial \Omega_{\varepsilon}}(x,y)=-\frac{(u_{\varepsilon})_{xx}(u_{\varepsilon})_y^2-2(u_{\varepsilon})_{xy}(u_{\varepsilon})_x(u_{\varepsilon})_y+(u_{\varepsilon})_{yy}(u_{\varepsilon})_x^2}{\left((u_{\varepsilon})_x^2+(u_{\varepsilon})_y^2\right)^\frac32}.
\end{equation}
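As a quick sanity check of \eqref{eq:curv} (a minimal symbolic computation, not needed for the proofs; the test function below is chosen only for illustration), one can verify that for $u=1-x^2-y^2$, whose level curves are circles of radius $r=\sqrt{x^2+y^2}$, the formula returns the expected value $1/r$:
\begin{verbatim}
# Sanity check of the level-curve curvature formula on u = 1 - x^2 - y^2,
# whose level sets are circles; the expected oriented curvature is 1/r.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = 1 - x**2 - y**2

ux, uy = sp.diff(u, x), sp.diff(u, y)
uxx, uxy, uyy = sp.diff(u, x, 2), sp.diff(sp.diff(u, x), y), sp.diff(u, y, 2)

curv = -(uxx*uy**2 - 2*uxy*ux*uy + uyy*ux**2) / (ux**2 + uy**2)**sp.Rational(3, 2)
print(sp.simplify(curv))   # expected: 1/sqrt(x**2 + y**2)
\end{verbatim}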
In particular we want to prove
the following result.
\begin{lemma}\label{lem:curv-2}
The oriented curvature of $\partial \Omega_{\varepsilon}$ vanishes at exactly two points when $\varepsilon$ is small enough.
\end{lemma}
Let us start examining the behavior of some points $(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})i^{*}n \partial \Omegaega_{\varepsilon}$ when ${\varepsilon}$ goes to zero.
{\beta}egin{lemma}{\lambda}abel{lem:behavior}
Let $(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})$ be a point on $\partial \Omegaega_{\varepsilon}$. Then if $|\zeta_{\varepsilon}|\thetaetao i^{*}nfty$ we have
{\beta}egin{equation}{\lambda}abel{e2}
|\zeta_{\varepsilon}|= {\lambda}eft(\frac 12 (1-{\varepsilon}ta_{\varepsilon}^2)\rhoight)^{\frac 1{2k}} {\varepsilon}^{-\frac 3{4k}}(1+o(1)).
{\varepsilon}nd{equation}
{\varepsilon}nd{lemma}
{\beta}egin{proof}
First
$(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})i^{*}n \partial \Omegaega_{\varepsilon}$ implies that
{\beta}egin{equation}{\lambda}abel{e3}
{\varepsilon} ({\varepsilon}ta_{\varepsilon}^3-3\zeta_{\varepsilon}^2{\varepsilon}ta_{\varepsilon})+{\varepsilon}^\frac 32 v(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=\frac 12 ({\varepsilon}ta_{\varepsilon}^2-1).
{\varepsilon}nd{equation}
Next we observe that, since ${\beta}ar \Omegaega_{\varepsilon}{\sigma}ubset R_{\varepsilon}$, where $R_{\varepsilon}$ is the rectangle introduced in Step 3 in the proof of Theorem \rhoef{b2}, then $|\zeta_{\varepsilon}|<3^\frac 1{2k}{\varepsilon}^{-\frac 3{4k}}$ and this implies that
{\beta}egin{equation}
{\lambda}abel{e4}
{\varepsilon}({\varepsilon}ta_{\varepsilon}^3-3\zeta_{\varepsilon}^2{\varepsilon}ta_{\varepsilon})=O{\lambda}eft({\varepsilon}^\frac{2k-3}{2k}\rhoight)
{\varepsilon}nd{equation}
for ${\varepsilon}\thetaetao 0$ and {\varepsilon}qref{e3} becomes, since $|\zeta_{\varepsilon}|\thetaetao i^{*}nfty$
\[{\varepsilon}^\frac 32 v(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=\frac 12 ({\varepsilon}ta_{\varepsilon}^2-1)+O{\lambda}eft({\varepsilon}^\frac{2k-3}{2k}\rhoight).\]
Finally by {\varepsilon}qref{eq:v-pj}, when $|\zeta_{\varepsilon}|\thetaetao i^{*}nfty$
\[v(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=-\zeta_{\varepsilon}^{2k}(1+o(1))\]
which jointly with the previous estimate gives
{\beta}egin{equation}{\lambda}abel{eq:exp-x}
{\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k}=\frac 12 (1-{\varepsilon}ta_{\varepsilon}^2)(1+o(1))
{\varepsilon}nd{equation}
when ${\varepsilon}\thetaetao 0$, from which {\varepsilon}qref{e2} follows.
{\varepsilon}nd{proof}
\vskip0cm\noindentkip0.3cm
{\beta}egin{proof}[Proof of Lemma \rhoef{lem:curv-2}]
Here let us denote by $(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})i^{*}n\partial\Omega_{\varepsilon}$ a point such that $Curv_{\partial\Omega_{\varepsilon}}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=0$. By {\varepsilon}qref{eq:curv}
{\beta}egin{equation}{\lambda}abel{e9}\nonumber
Curv_{\partial\Omega_{\varepsilon}}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=-\frac {N_{1,{\varepsilon}}+N_{2,{\varepsilon}}+N_{3,{\varepsilon}}}{D_{\varepsilon}^\frac32}
{\varepsilon}nd{equation}
with
{\beta}egin{align*}
N_{1,{\varepsilon}}&=(-6{\varepsilon} {\varepsilon}ta_{\varepsilon}+{\varepsilon}^\frac32v_{xx}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}))
{\lambda}eft(-{\varepsilon}ta_{\varepsilon}+3{\varepsilon}({\varepsilon}ta_{\varepsilon}^2-\zeta_{\varepsilon}^2)+{\varepsilon}^\frac32v_y(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})\rhoight)^2\\
N_{2,{\varepsilon}}&=-2(- 6{\varepsilon} \zeta_{\varepsilon}+{\varepsilon}^\frac32v_{xy}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}))\Big(-6{\varepsilon} \zeta_{\varepsilon} {\varepsilon}ta_{\varepsilon}+{\varepsilon}^\frac32v_x(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})\Big)\\
&(-{\varepsilon}ta_{\varepsilon}+3{\varepsilon}({\varepsilon}ta_{\varepsilon}^2-\zeta_{\varepsilon}^2)+{\varepsilon}^ \frac32v_y(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}))\\
N_{3,{\varepsilon}}&=(-1+6 {\varepsilon} {\varepsilon}ta_{\varepsilon}+{\varepsilon}^\frac32v_{yy}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}))\Big(-6{\varepsilon} \zeta_{\varepsilon}{\varepsilon}ta_{\varepsilon}+{\varepsilon}^\frac32v_x(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})\Big)^2\\
D_{\varepsilon}&= {\beta}ig(-6{\varepsilon}\zeta_{\varepsilon} {\varepsilon}ta_{\varepsilon}+{\varepsilon}^\frac32v_x(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}){\beta}ig)^2+{\beta}ig(-{\varepsilon}ta_{\varepsilon}+3{\varepsilon}(\zeta_{\varepsilon}^2-{\varepsilon}ta_{\varepsilon}^2)+{\varepsilon}^\frac32v_y(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon}){\beta}ig)^2.
{\varepsilon}nd{align*}
We divide the proof in some steps.
\vskip0cm\noindentkip0.2cm
{{\beta}f Step 1: $|\zeta_{\varepsilon}|\thetaetao+i^{*}nfty$}
\vskip0cm\noindentkip0.1cm
We reason by contradiction. If the claim does not hold we can take sequences ${\varepsilon}_n$, $\zeta_n$, ${\varepsilon}ta_n$ such that ${\varepsilon}_n\thetaetao 0$, $\zeta_n\thetaetao \zeta_0$, ${\varepsilon}ta_n\thetaetao {\varepsilon}ta_0$ (since $|{\varepsilon}ta_{\varepsilon}|<2$ by definition of $R_{\varepsilon}$) and such that
$Curv_{\partial\Omega_{{\varepsilon}_n}}(\zeta_n,{\varepsilon}ta_n)=0$. Since $(\zeta_n,{\varepsilon}ta_n)i^{*}n \partial \Omegaega_n$ then {\varepsilon}qref{e3} holds and, passing to the limit, we have that ${\varepsilon}ta_n\thetaetao \pm1$ and
{\beta}egin{equation}{\lambda}abel{eq:somma-N}
{\beta}egin{split}
&0=\frac{N_{1,{\varepsilon}_n}+N_{2,{\varepsilon}_n}+N_{3,{\varepsilon}_n}}{\varepsilon}_n=\\
&-6{\beta}ig({\varepsilon}ta_n^3+o(1){\beta}ig)+72{\varepsilon}_n{\beta}ig(\zeta_0^2{\varepsilon}ta_n^2+o(1){\beta}ig)-36{\varepsilon}{\beta}ig(\zeta_0^2{\varepsilon}ta_n^2+o(1){\beta}ig)=-6{\beta}ig(\pm1+o(1){\beta}ig)
{\varepsilon}nd{split}
{\varepsilon}nd{equation}
which gives a contradiction.
\vskip0cm\noindentkip0.2cm
{{\beta}f Step 2}: We have that there exists $two$ values $\zeta_{\varepsilon}$ given by
$
\zeta_{\varepsilon}^\pm{\sigma}im\pm{\lambda}eft(\frac3{k(2k-1){\varepsilon}^\frac12}\rhoight)^\frac1{2k-2}
$
\vskip0cm\noindentkip0.1cm
First, observe that, by {\varepsilon}qref{b1}, {\varepsilon}qref{eq:v-re}, {\varepsilon}qref{eq:v-pj}
and Step 1 we have that
{\beta}egin{align*}
&v_x=-2k\zeta_{\varepsilon}^{2k-1}(1+o(1)) & v_y=c_k \zeta_{\varepsilon}^{2k-2} {\varepsilon}ta_{\varepsilon}(1+o(1))\\
& v_{xx}=-2k(2k-1) \zeta_{\varepsilon}^{2k-2}(1+o(1)) & v_{xy}=c'_k \zeta_{\varepsilon}^{2k-3} {\varepsilon}ta_{\varepsilon}(1+o(1)) \\
& v_{yy}=c_k \zeta_{\varepsilon}^{2k-2}(1+o(1))
{\varepsilon}nd{align*}
where $c_k,c'_k\ne0$ are constants depending on $k$.
Using {\varepsilon}qref{e2} and ${\varepsilon} \zeta_{\varepsilon}^2\thetaetao 0$ as ${\varepsilon} \thetaetao 0$, we obtain (denoting again by ${\varepsilon}ta_0={\lambda}im {\varepsilon}ta_{\varepsilon}$)
\[{\beta}egin{split}
&N_{1,{\varepsilon}}{\sigma}im {\lambda}eft(-6{\varepsilon}{\varepsilon}ta_{\varepsilon}-2k(2k-1){\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}\rhoight){\lambda}eft(-{\varepsilon}ta_{\varepsilon}-3{\varepsilon} \zeta_{\varepsilon}^2+c_k{\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}{\varepsilon}ta_{\varepsilon}
\rhoight)^2(1+o(1))\\
&={\beta}egin{cases}
{\varepsilon}ta_0^2 {\lambda}eft( -6{\varepsilon} {\varepsilon}ta_0-2k(2k-1){\varepsilon}^\frac 32\zeta_{\varepsilon}^{2k-2}\rhoight)(1+o(1)) & \thetaetaext{ when }{\varepsilon}ta_0\neq 0\\
o({\varepsilon}+{\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}) & \thetaetaext{ when }{\varepsilon}ta_0=0
{\varepsilon}nd{cases}
{\varepsilon}nd{split}\]
Using that ${\varepsilon}^\frac 32\zeta_{\varepsilon}^{2k}=O(1)$ (see {\varepsilon}qref{eq:exp-x}),
\[{\beta}egin{split}
&N_{2,{\varepsilon}}{\sigma}im -2 {\lambda}eft(-6{\varepsilon} \zeta_{\varepsilon}+c'_k {\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-3}{\varepsilon}ta_{\varepsilon}\rhoight){\lambda}eft(-6{\varepsilon}\zeta_{\varepsilon}{\varepsilon}ta_{\varepsilon}-2k {\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-1}\rhoight)\cdot\\
&{\lambda}eft( -{\varepsilon}ta_{\varepsilon} -3{\varepsilon} \zeta_{\varepsilon}^2+c_k{\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}{\varepsilon}ta_{\varepsilon}\rhoight)(1+o(1))\\
&={\beta}egin{cases}
2{\varepsilon}ta_0^2 {\lambda}eft( 12k {\varepsilon}^{1+\frac 32} \zeta_{\varepsilon}^{2k}-2k c'_k{\varepsilon}^3 \zeta_{\varepsilon}^{4k-4}{\varepsilon}ta_{\varepsilon}\rhoight) (1+o(1)) & \thetaetaext{ when }{\varepsilon}ta_0\neq 0\\
o({\varepsilon}+{\varepsilon}^3 \zeta_{\varepsilon}^{4k-4}) & \thetaetaext{ when }{\varepsilon}ta_0=0
{\varepsilon}nd{cases}
{\varepsilon}nd{split}\]
\[{\beta}egin{split}
&N_{3,{\varepsilon}}{\sigma}im {\lambda}eft( -1+6{\varepsilon} {\varepsilon}ta_{\varepsilon}+c_k{\varepsilon}^\frac 32\zeta_{\varepsilon}^{2k-2}\rhoight){\lambda}eft( -6{\varepsilon} \zeta_{\varepsilon}{\varepsilon}ta_{\varepsilon}-2k{\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-1}\rhoight)^2(1+o(1))\\
&=-1{\lambda}eft(4k^2{\varepsilon}^3\zeta_{\varepsilon}^{4k-2}+12 {\varepsilon}^{1+\frac 32}\zeta_{\varepsilon}^{2k}{\varepsilon}ta_{\varepsilon}^2\rhoight)(1+o(1)).
{\varepsilon}nd{split}\]
Hence if ${\beta}oxed{{\lambda}im{\lambda}imits_{{\varepsilon}\thetaetao0}{\varepsilon}ta_{\varepsilon}={\varepsilon}ta_0\neq \pm1}$, then ${\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}{\sigma}im {\varepsilon}^{\frac 3{2k}}{\lambda}eft( \frac 12 (1-{\varepsilon}ta_0^2)\rhoight)^{\frac {k-1}k}$ and
\[{\beta}egin{split}
N_{1,{\varepsilon}}+N_{2,{\varepsilon}}+N_{3,{\varepsilon}}
&={\varepsilon}^{\frac 3{2k}}{\lambda}eft( -{\varepsilon}ta_0^2 2k(2k-1) {\lambda}eft( \frac 12 (1-{\varepsilon}ta_0^2)\rhoight)^{\frac {k-1}k}\rhoight.\\
&{\lambda}eft. -4k^2{\lambda}eft( \frac 12 (1-{\varepsilon}ta_0^2)\rhoight)^{\frac {2k-1}k}\rhoight)(1+o(1))<0
{\varepsilon}nd{split}\]
showing that the curvature is {{\varepsilon}m strictly positive} in this case.\\
So we necessarily have that ${\beta}oxed{{\lambda}im{\lambda}imits_{{\varepsilon}\thetaetao0} {\varepsilon}ta_{\varepsilon}=\pm 1}$ and by {\varepsilon}qref{e2} we have that ${\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k}=o(1)$. This implies that
\[
N_{1,{\varepsilon}}={\lambda}eft( \mp 6{\varepsilon}-2k(2k-1){\varepsilon}^\frac 32 \zeta_{\varepsilon}^{2k-2}\rhoight)(1+o(1));\]
\[
N_{2,{\varepsilon}}=O{\lambda}eft( {\varepsilon}^2\zeta_{\varepsilon}^2+{\varepsilon}^{1+\frac 32}\zeta_{\varepsilon}^{2k}+{\varepsilon}^3\zeta_{\varepsilon}^{4k-4}\rhoight);\]
\[N_{3,{\varepsilon}}=O{\lambda}eft( {\varepsilon}^3\zeta_{\varepsilon}^{4k-2} +{\varepsilon}^3\zeta_{\varepsilon}^{4k-4}\rhoight)
\]
and, since ${\varepsilon}^2\zeta_{\varepsilon}^2,{\varepsilon}^{1+\frac 32}\zeta_{\varepsilon}^{2k}=o({\varepsilon})$ and ${\varepsilon}^3 \zeta_{\varepsilon}^{4k-4},{\varepsilon}^3\zeta_{\varepsilon}^{4k-2}
=o({\varepsilon}^\frac 32\zeta_{\varepsilon}^{2k-2})$ then $N_{2,{\varepsilon}},N_{3,{\varepsilon}}
=o(N_{1,{\varepsilon}})$ and
{\beta}egin{equation}{\lambda}abel{eq:curv-fin}
N_{1,{\varepsilon}}+N_{2,{\varepsilon}}+N_{3,{\varepsilon}}
={\lambda}eft(\mp6{\varepsilon}-2k(2k-1){\varepsilon}^\frac32\zeta_{\varepsilon}^{2k-2}\rhoight)(1+o(1)).
{\varepsilon}nd{equation}
Since $Curv_{\partial \Omegaega_{\varepsilon}}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=0$ we deduce
{\beta}egin{equation}{\lambda}abel{e10}
{\lambda}eft(\mp6{\varepsilon}-2k(2k-1){\varepsilon}^\frac32\zeta_{\varepsilon}^{2k-2}\rhoight)(1+o(1))=0
{\varepsilon}nd{equation}
as ${\varepsilon}\thetaetao 0$.
First we get that if ${\varepsilon}ta_{\varepsilon}\thetaetao1$ {\varepsilon}qref{e10} does not have solutions. Hence ${\varepsilon}ta_{\varepsilon}\thetaetao-1$ and {\varepsilon}qref{e10} becomes
{\beta}egin{equation}
{\lambda}eft(6-2k(2k-1){\varepsilon}^\frac12\zeta_{\varepsilon}^{2k-2}\rhoight)(1+o(1))=0
{\varepsilon}nd{equation}
that implies
{\beta}egin{equation}
{\varepsilon}^\frac12\zeta_{\varepsilon}^{2k-2}=\frac3{k(2k-1)}(1+o(1)).
{\varepsilon}nd{equation}
Correspondingly we get $two$ solutions $\zeta_{\varepsilon}$ whose behavior is given by
{\beta}egin{equation}
\zeta_{\varepsilon}^\pm{\sigma}im\pm{\lambda}eft(\frac3{k(2k-1){\varepsilon}^\frac12}\rhoight)^\frac1{2k-2}.
{\varepsilon}nd{equation}
\vskip0cm\noindentkip0.2cm
{{\beta}f Step 3: conclusion}
\vskip0cm\noindentkip0.1cm
We end the proof showing that, corresponding to $\zeta_{\varepsilon}^+$ there exists only one ${\varepsilon}ta_{\varepsilon}^+$ which verifies $0=Curv_{\partial\Omega_{\varepsilon}}(\zeta_{\varepsilon},{\varepsilon}ta_{\varepsilon})=N_{1,{\varepsilon}}+N_{2,{\varepsilon}}+N_{3,{\varepsilon}}$ and the same is true for $\zeta_{\varepsilon}^-$. We apply the implicit function theorem to $u_{\varepsilon}$.
We have that $u_{\varepsilon}(\zeta_{\varepsilon}^+,{\varepsilon}ta_{\varepsilon}^+)=0$ and, recalling that ${\varepsilon}ta_{\varepsilon}^+\thetaetao-1$,
\[
(u_{\varepsilon})_y(\zeta_{\varepsilon}^+,{\varepsilon}ta_{\varepsilon}^+)=-{\varepsilon}ta_{\varepsilon}+3{\varepsilon}(\zeta_{\varepsilon}^2{\varepsilon}-{\varepsilon}ta_{\varepsilon}^2)+{\varepsilon}^\frac32v_y=1+o(1)\]
for ${\varepsilon}\thetaetao 0$.
So by the implicit function theorem we deduce that the equation $u_{\varepsilon}(x,y)=0$ has only one solution for $x=\zeta^+_{\varepsilon}$ and $y$ close to $-1$ which ends the proof.
{\varepsilon}nd{proof}
\begin{proof}[Proof of Theorem \ref{i1}]
The existence of the family of solutions $u_{\varepsilon,k}$ to \eqref{eq:torsion} and of the domains $\Omega_{\varepsilon,k}$, as well as the properties $(P0)$ and $(P1)$, follow from Theorem \ref{b2}.\\
Property $(P2)$ is a consequence of the definition of $u_{\varepsilon,k}(x,y)$ and of the fact that locally $u_{\varepsilon,k}(x,y)\to \frac 12 (1-y^2)$ as $\varepsilon\to 0$.\\
Concerning $(P3)$, the curvature of $\partial\Omega_{\varepsilon,k}$ does change sign because, denoting by $q_{\varepsilon}=(0,\beta_{\varepsilon})\in\partial\Omega_{\varepsilon,k}$ a point with $\beta_{\varepsilon}\to-1$, we have
$$Curv_{\partial\Omega_{\varepsilon,k}}(q_{\varepsilon})=\big(-6+o(1)\big)\varepsilon<0.$$
Next the fact that the curvature of $\partial \Omegaega_{{\varepsilon},k}$ vanishes exactly at two points follows by Lemma \rhoef{lem:curv-2}.
To prove that $\min{\lambda}eft(Curv_{\partial\Omega_{{\varepsilon},k}}\rhoight)\thetaetao 0$ as ${\varepsilon}\thetaetao 0$
we proceed as in the proof of Lemma \rhoef{lem:curv-2}. Denote by $(\thetaetailde\zeta_{\varepsilon},\thetaetailde{\varepsilon}ta_{\varepsilon})i^{*}n\partial\Omegaega_{\varepsilon}$ a point which achieves the minimum of the curvature of $\partial\Omegaega_{\varepsilon}$ (recall that $|\thetaetailde{\varepsilon}ta_{\varepsilon}|{\lambda}e C $). If, up to some subsequence, $\thetaetailde\zeta_{{\varepsilon}_n}\thetaetao\thetaetailde\zeta_0 $, since $\Omegaega_{\varepsilon}$ converges to a strip on compact set, the claim follows. On the other hand, if $|\thetaetailde\zeta_{{\varepsilon}_n}|\thetaetao+i^{*}nfty$, repeating step by step the computation in Case $2$ of Lemma \rhoef{lem:curv-2} we again get $Curv_{\partial\Omega_{{\varepsilon},k}}(\thetaetailde\zeta_{{\varepsilon}_n},\thetaetailde{\varepsilon}ta_{{\varepsilon}_n})\thetaetao0$. This ends the proof.
{\varepsilon}nd{proof}
{\sigma}ection{More general nonlinearities}
In this section we consider solutions to {\varepsilon}qref{f1} which satisfy {\varepsilon}qref{f2}. The existence is guaranteed for example if the assumptions in \cite{mp} are satisfied. Next lemma studies the behavior as ${\lambda}\thetaetao0$.
{\beta}egin{lemma}{\lambda}abel{lem:convergence}
Let $u_{\lambda}$ be a family of solutions to {\varepsilon}qref{f1} satisfying {\varepsilon}qref{f2}. Then we have that
{\beta}egin{equation}{\lambda}abel{f4}
\frac{u_{\lambda}}{{\lambda} f(0)}\thetaetao u_0\quad\hbox{as ${\lambda}\thetaetao0$ in }C^2(\Omega)
{\varepsilon}nd{equation}
where $u_0$ is a solution to
{\beta}egin{equation}{\lambda}abel{f6}
{\beta}egin{cases}
-\Deltaelta u=1&\hbox{in }\Omega\\
u=0&\hbox{on }\partial\Omega.
{\varepsilon}nd{cases}
{\varepsilon}nd{equation}
{\varepsilon}nd{lemma}
{\beta}egin{proof}
Let us show that
{\beta}egin{equation}{\lambda}abel{f3}
|u_{\lambda}|{\lambda}e C{\lambda}\quad\hbox{in }\Omega
{\varepsilon}nd{equation}
where $C$ is a constant independent of ${\lambda}$. By the Green representation formula we have that
\[
|u_{\lambda}(x)|{\lambda}e{\lambda}i^{*}nt_\Omega G(x,y){\lambda}eft|f{\beta}ig(u_{\lambda}(y){\beta}ig)\rhoight|dy{\lambda}e{\lambda}\max{\lambda}imits_{si^{*}n[0,C]}|f(s)|i^{*}nt_\Omega G(x,y)dy{\lambda}e C{\lambda}
\]
where $C$ is independent of ${\lambda}$. Next by {\varepsilon}qref{f2}, {\varepsilon}qref{f3} and the standard regularity theory we derive that
\[
u_{\lambda}\thetaetao0\quad\hbox{in }C^2(\Omega)
\]
as ${\lambda}\thetaetao 0$ so that $f(u_{\lambda})\thetaetao f(0)$.
Finally the standard regularity theory, applied to $\frac{u_{\lambda}}{{\lambda} f(0)}$, and
{\varepsilon}qref{f1} gives the claim.
{\varepsilon}nd{proof}
Theorem \ref{i3} is a straightforward consequence of the previous lemma.
\begin{proof}[Proof of Theorem \ref{i3}]
Assume $\varepsilon$ is small enough to satisfy the assumptions of Theorem \ref{i1}. By Lemma \ref{lem:convergence}, $\frac {u_{\lambda}}{\lambda f(0)}\to u_{\varepsilon,k}$ as $\lambda\to 0$. Then the claim follows from the $C^2$ convergence of $\frac {u_{\lambda}}{\lambda f(0)}$ to $u_{\varepsilon,k}$ and from the semi-stability of all solutions to \eqref{f6}.
\end{proof}
\bibliography{GladialiGrossiFinal.bib}
\bibliographystyle{abbrv}
\end{document}
|
\begin{document}
\title{The entropy of quantum causal networks}
\author{Xian Shi}
\email[]{[email protected]}
\affiliation{School of Mathematical Sciences, Beihang University, Beijing 100191, China}
\author{Lin Chen}
\email[]{[email protected] (corresponding author)}
\affiliation{School of Mathematical Sciences, Beihang University, Beijing 100191, China}
\affiliation{International Research Institute for Multidisciplinary Science, Beihang University, Beijing 100191, China}
\begin{abstract}
\indent Quantum networks play a key role in many scenarios of quantum information theory. Here we consider quantum causal networks from the point of view of entropy. First we present a revised smooth max-relative entropy of quantum combs; then we present lower and upper bounds on the type \uppercase\expandafter{\romannumeral2} error of hypothesis testing. Next we present a lower bound on the smooth max-relative entropy for quantum combs, with asymptotic equipartition. At last, we consider the score used to quantify the performance of an operator, and we present a quantity equal to the smooth asymptotic version of the performance of a quantum positive operator.
\end{abstract}
\maketitle
\section{Introduction}
\indent Nowadays, quantum networks attract much attention from researchers at both the technological and the theoretical level. Technologically, the development of quantum communication \cite{wang2015quantum,pirandola2015advances} and computation \cite{llewellyn2020chip} prompts the realization of quantum networks \cite{chiribella2013quantum,gutoski2018fidelity}. Theoretically, quantum networks provide a framework for quantum games \cite{gutoski2007toward} and for the discrimination and transformation of quantum channels \cite{chiribella2008memory,lloyd2011quantum,chiribella2012perfect}. They also prompt advances in models of quantum computation \cite{bisio2010optimal}.\\
\indent One of the fundamental concepts in quantum information theory is the relative entropy of two quantum states. In 1962, Umegaki introduced the quantum relative entropy \cite{umegaki1954conditional}; the relative entropy was then extended in terms of Renyi entropy \cite{renyi1961measures} and of min- and max-entropy \cite{datta2009min}. The max-relative entropy was shown to admit a number of operational interpretations \cite{brandao2011one,napoli2016robustness,anshu2018quantifying,takagi2019general,seddon2020quantifying}. Another approach to the relative entropy is through smooth entropies \cite{renner2008security}. The smooth max-relative entropy can also be used to interpret one-shot cost tasks in the frameworks of entanglement and coherence \cite{brandao2011one,zhu2017operational}. A further smooth entropy is related to quantum hypothesis testing. This is a fundamental task in statistics: an experimentalist has to make a decision in a binary test, and the aim is to find an optimal strategy minimizing the error probability. This task corresponds to the discrimination of quantum states \cite{ogawa2005strong,li2014second} and channels \cite{cooney2016strong, gour2019quantify}. It also provides a way to compute the capacity of quantum channels \cite{buscemi2010quantum,anshu2018hypothesis}. Recently, some work was done on quantum channels in terms of entropy \cite{gour2019quantify,fang2020no}. As far as we know, quantum networks have not been studied much from this perspective.\\
\indent Quantum networks can be seen as transformations from states to channels, from channels to channels, and from sequences of channels to channels. In \cite{chiribella2009theoretical}, the authors presented a framework for quantum networks, where they also introduced two important concepts, quantum combs and the link product. Due to the importance of optimizing quantum networks, \cite{chiribella2016optimal} proposed a semidefinite programming method for this problem. There the authors showed that a measure of quantum performance corresponds to the max-relative entropy, and they also presented an operational interpretation of the max-relative entropy of quantum combs.\\
\indent In this paper, we first present a quantity similar to the smooth max-relative entropy of a quantum comb $C_0$ with respect to $C_1.$ We also give a bound between the two max-relative entropies of quantum combs. Then we present a relation between the type \uppercase\expandafter{\romannumeral2} error of quantum hypothesis testing and the smooth max-relative entropy of quantum combs we defined. At last, we present a smooth asymptotic version of the performance of a quantum network.\\
\indent This paper is organized as follows. In Section \uppercase\expandafter{\romannumeral2}, we recall the preliminary knowledge needed. In Section \uppercase\expandafter{\romannumeral3}, we present the main results. First we present the bound on the type \uppercase\expandafter{\romannumeral2} error of quantum hypothesis testing on quantum combs. Then we present a quantity equal to the smooth asymptotic version of the performance of a quantum positive operator. In Section
\uppercase\expandafter{\romannumeral4}, we end with a conclusion.
\section{Preliminary knowledge}
In this section, we first recall the definition and some properties of quantum operations and of the link product. We next recall the definition and properties of quantum causal networks. At last, we recall the max-relative entropy and present the revised type \uppercase\expandafter{\romannumeral2} error of hypothesis testing on quantum combs that we will consider in the following.
\subsection{Linear Maps and Link Products}
\indent In this article, we denote by $L(\mathcal{H}_0,\mathcal{H}_1)$ the set of linear operators from a finite dimensional Hilbert space $\mathcal{H}_0$ to $\mathcal{H}_1,$ by $L(\mathcal{H})$ the set of linear operators from a Hilbert space $\mathcal{H}$ to $\mathcal{H}$, and by $Pos(\mathcal{H})$ the set of positive semidefinite operators on $\mathcal{H}$. When $M$ is a positive semidefinite operator on a Hilbert space $\mathcal{H},$ we denote by $\lambda_{\max}(M)$ the maximal eigenvalue of $M.$ When $S, T\in Pos(\mathcal{H}),$ $\{S\ge T\}$ is the projection operator onto the space $span\{\ket{v}\,|\,\bra{v}(S-T)\ket{v}\ge 0\}$.\\
\indent Assume $\mathcal{C}$ is a map from operators on a Hilbert space $\mathcal{H}_A$ to those on $\mathcal{H}_B.$ When $\mathcal{C}$ is completely positive and trace non-increasing,
we say $\mathcal{C}$ is a quantum operation. When $\mathcal{C}$ is completely positive and trace-preserving, we say $\mathcal{C}$ is a quantum channel. In the following, we use the diagrammatic notation
\Qcircuit@C=1.5em @R=1.2em {
&\ustick{A} \qw&\gate{\mathcal{C}}&\ustick{B}\qw&\qw}, where $\mathcal{C}$ in the above diagram is of type $A\rightarrow B.$\\
\indent Next we recall the isomorphism between a quantum operation $\mathcal{M}$ in $L(L(\mathcal{H}_0),L(\mathcal{H}_1))$ and an operator $M$ in $L(\mathcal{H}_1\otimes\mathcal{H}_0).$
\begin{lemma}(Choi-Jamiolkowski (C-J) isomorphism)\cite{choi1975completely}
The bijective correspondence $\mathcal{L}: \mathcal{M}\rightarrow M$ is defined as follows:
\begin{align}
M=\mathcal{L}(\mathcal{M})=\mathcal{M}\otimes I_{\mathcal{H}_1}(\ket{\Psi}\bra{\Psi}),
\end{align}
where we denote $\ket{\Psi}=\sum_i\ket{i}_{\mathcal{H}_0}\ket{i}_{\mathcal{H}_1},$ and $\{\ket{i}_{\mathcal{H}_0}\},\{\ket{i}_{\mathcal{H}_1}\}$ are orthonormal bases of the Hilbert spaces $\mathcal{H}_0$ and $\mathcal{H}_1.$
Its inverse is defined as
\begin{align}\label{im}
[\mathcal{L}^{-1}(M)](X)=tr_{\mathcal{H}_0}[(I_{\mathcal{H}_1}\otimes X^T)M].
\end{align}\\
\end{lemma}\par
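As a concrete illustration of this correspondence (a minimal numerical sketch; the qubit dimension, the ordering of the tensor factors with the output space $\mathcal{H}_1$ first, and the dephasing channel used as a test case are assumptions made only for this example), one can build $M$ from a channel and recover the channel through \eqref{im}:
\begin{verbatim}
# Minimal sketch of the C-J isomorphism for a qubit channel (assumed conventions:
# output space first, M = sum_{ij} M(E_ij) (x) E_ij, inverse as in Eq. (im)).
import numpy as np

d = 2

def choi(channel):
    M = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex); E[i, j] = 1.0
            M += np.kron(channel(E), E)        # output factor first, input second
    return M

def apply_from_choi(M, X):
    # [L^{-1}(M)](X) = tr_{H_0}[(I_{H_1} (x) X^T) M]
    T = (np.kron(np.eye(d), X.T) @ M).reshape(d, d, d, d)
    return np.trace(T, axis1=1, axis2=3)       # partial trace over the input factor

dephase = lambda X: np.diag(np.diag(X))        # toy channel used as a test case
rho = np.array([[0.5, 0.5], [0.5, 0.5]])
print(np.allclose(apply_from_choi(choi(dephase), rho), dephase(rho)))   # True
\end{verbatim}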
In the following, we denote $\mathcal{L}(\mathcal{M})$ by $M.$ Next we present some facts needed later; readers interested in the proofs of these results may refer to \cite{chiribella2009theoretical}.
\betaegin{lemma}
(i). A linear map $\muathcal{M}\iotan L(\muathcal{H}_0,\muathcal{H}_1)$ is trace preserving if and only if its C-J operator satisfies the following equality
\betaegin{align}
\thetar_{\muathcal{H}_1}M=I_{\muathcal{H}_0}.
\epsilonnd{align}
(ii). A linear map $\muathcal{M}$ is Hermitian preserving if and only if its C-J operator is Hermitian.\\
(iii). A linear map $\muathcal{M}$ is completely positive if and only if its C-J operator is positive semidefinite.
\epsilonnd{lemma}\piar
\iotandent Two maps $\muathcal{M}$ and $\muathcal{N}$ are composed if the output space of $\muathcal{M}$ is the input space of $\muathcal{N}.$ If $\muathcal{M}:L(\muathcal{H}_0)\rhoightarrow L(\muathcal{H}_1),$ $\muathcal{N}:L(\muathcal{H}_1)\rhoightarrow L(\muathcal{H}_2),$ next we denote $\muathcal{C}=\muathcal{N}\chiirc\muathcal{M},$ then
\betaegin{align}
C(X)=&\thetar_{\muathcal{H}_1}[(I_{\muathcal{H}_2}\omegatimes \thetar_{\muathcal{H}_0}[(I_{\muathcal{H}_1}\omegatimes X^T)M]^T)N]\nuonumber\\
=&\thetar_{\muathcal{H}_1,\muathcal{H}_0}[(I_{\muathcal{H}_2}\omegatimes I_{\muathcal{H}_1}\omegatimes X)(I_{\muathcal{H}_2}\omegatimes M^{T_1})(N\omegatimes I_{\muathcal{H}_0})],
\epsilonnd{align}
comparing with the equality (\rhoef{im}), we have
\betaegin{align}
C=\thetar_{\muathcal{H}_1}[(I_{\muathcal{H}_2}\omegatimes M^{T_1})(N\omegatimes I_{\muathcal{H}_0})],
\epsilonnd{align}
then we denote
\begin{align}\label{lp}
N*M=\mathrm{tr}_{\mathcal{H}_1}[(I_{\mathcal{H}_2}\otimes M^{T_1})(N\otimes I_{\mathcal{H}_0})],
\end{align}
here we call (\ref{lp}) the link product of $N$ and $M.$\\
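For orientation, the link product (\ref{lp}) can be evaluated as a single tensor contraction. The following Python sketch is ours (the function name and the explicit dimension arguments are hypothetical conventions, not from the original text); combined with the sketch after the C-J lemma above, one can check numerically that $N*M$ equals the C-J operator of the composition $\mathcal{N}\circ\mathcal{M}$.
\begin{verbatim}
import numpy as np

def link_product(N, M, d0, d1, d2):
    # N*M = tr_{H1}[(I_{H2} tensor M^{T_1})(N tensor I_{H0})].
    # M: C-J operator of a map L(H0) -> L(H1), stored on H1 tensor H0;
    # N: C-J operator of a map L(H1) -> L(H2), stored on H2 tensor H1.
    M4 = M.reshape(d1, d0, d1, d0)       # indices (out, in, out', in')
    N4 = N.reshape(d2, d1, d2, d1)
    # the partial transpose over H1 and the trace over H1 in one contraction
    C4 = np.einsum('pcqd,apeq->aced', M4, N4)
    return C4.reshape(d2 * d0, d2 * d0)  # C-J operator of the composition, on H2 tensor H0
\end{verbatim}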
\indent Next we present some properties of the link product; readers interested in the proofs of these properties may refer to \cite{chiribella2009theoretical}.
\betaegin{lemma}\leftarrowbel{lpl}
1. If $M$ and $N$ are Hermitian, then $M*N$ is Hermitian.\\
2. If $M$ and $N$ are positive, then $M*N$ is positive.\\
3. The link product is associative, $i. e.$ $A*(B*C)=(A*B)*C.$\\
4. $N*M=SWAP_{\muathcal{H}_0,\muathcal{H}_2}(M*N)SWAP_{\muathcal{H}_0,\muathcal{H}_2},$ here $SWAP_{\muathcal{H}_0,\muathcal{H}_2}$ is the unitary operator that swaps the $\muathcal{H}_0$ and $\muathcal{H}_2.$
\epsilonnd{lemma}
\sigmaubsection{Quantum Networks}
\indent A quantum network is a collection of quantum devices connected with each other. If there are no loops connecting the output of a device back to the input of the same device, the quantum network is causal. A network is deterministic if all of its devices are channels. As shown in \cite{chiribella2009theoretical}, a quantum causal network can always be represented as an ordered sequence of quantum devices, as in Fig. \ref{qc}.
\betaegin{figure}
\chientering
\iotancludegraphics[width=100mm]{qcb.png}
\chiaption{Here we plot an example of quantum causal network}\leftarrowbel{qc}
\epsilonnd{figure}
Here $A_i^{in}(A_i^{out})$ is the input (output) system of the network at the $i$-th step. Assume the C-J operator of $\muathcal{C}_i$ is $C_i$, then the C-J operator of the above network $(\rhoef{qc})$ is
\betaegin{align}\leftarrowbel{qcc}
C=C_1*C_2*\chidots*C_N.
\epsilonnd{align}
The C-J operator of a deterministic network is called a quantum comb \cite{chiribella2009theoretical}; it is a positive operator on $\otimes_{j=1}^N(\mathcal{H}_j^{out}\otimes\mathcal{H}_j^{in}),$ where $\mathcal{H}_j^{out}$ ($\mathcal{H}_j^{in}$) is the Hilbert space of $A_j^{out}$ ($A_j^{in}$). Next we recall a result characterizing quantum combs.
\betaegin{lemma}\chiite{chiribella2009theoretical}\leftarrowbel{qcb}
A positive operator $C$ is a quantum comb if and only if
\begin{align}
\mathrm{tr}_{A_n^{out}} C^{(n)}=I_{A_n^{in}}\otimes C^{(n-1)}, \hspace{3mm}\forall n\in\{1,2,\dots, N\}\label{qc'},
\end{align}
here $C^{(n)}$ is a suitable operator on $\muathcal{H}_n=\omegatimes_{j=1}^n(\muathcal{H}_j^{out}\omegatimes\muathcal{H}_j^{in}),$ $C^{(N)}=C,$ $C^{(0)}=1.$
\epsilonnd{lemma}\piar
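The normalization conditions of the lemma are straightforward to verify numerically. The following sketch is ours and assumes that the tensor factors of $C$ are ordered as $\mathcal{H}_1^{out}\otimes\mathcal{H}_1^{in}\otimes\cdots\otimes\mathcal{H}_N^{out}\otimes\mathcal{H}_N^{in}$; it peels off one tooth at a time, reconstructing the candidate operators $C^{(n-1)}$ by partial traces.
\begin{verbatim}
import numpy as np

def is_comb(C, dims, tol=1e-9):
    # dims = [(d_1_out, d_1_in), ..., (d_N_out, d_N_in)]
    # checks C >= 0, tr_{A_n^out} C^(n) = I_{A_n^in} tensor C^(n-1), and C^(0) = 1
    if np.any(np.linalg.eigvalsh((C + C.conj().T) / 2) < -tol):
        return False
    Cn = np.asarray(C, dtype=complex)
    for d_out, d_in in reversed(dims):
        d_rest = Cn.shape[0] // (d_out * d_in)
        X = Cn.reshape(d_rest, d_out, d_in, d_rest, d_out, d_in)
        Y = np.einsum('aobcod->abcd', X)             # trace over H_n^out
        Cprev = np.einsum('abcb->ac', Y) / d_in      # candidate C^(n-1)
        target = np.einsum('ac,bd->abcd', Cprev, np.identity(d_in))
        if not np.allclose(Y, target, atol=tol):     # Y must equal C^(n-1) tensor I_{A_n^in}
            return False
        Cn = Cprev
    return np.allclose(Cn, np.ones((1, 1)), atol=tol)
\end{verbatim}
For $N=1$ this reduces to the familiar condition that the C-J operator of a channel satisfies $\mathrm{tr}_{out}C=I_{in}$.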
\indent A quantum network $\mathcal{T}$ with C-J operator $T$ is non-deterministic if there exists a deterministic quantum network $\mathcal{R}$ whose C-J operator $R$ satisfies $R\ge T.$ An example of a non-deterministic network is shown in Fig. \ref{f1}, where $\rho$ is a quantum state, the $\mathcal{D}_i$ are quantum channels, and $\{P_x\}_x$ is a positive operator-valued measure (POVM). A network of the type in Fig. \ref{f1} can be used to probe a network of the type in Fig. \ref{qc}; the combined network can then be represented as in Fig. \ref{f2}, and the probability of the outcome $x$ is
\betaegin{align}
p_x=\rhoho*C_1*D_1*C_2*D_2*\chidots*D_{N-1}*C_N*P_x^T\leftarrowbel{px}
\epsilonnd{align}
By Lemma \ref{lpl}, omitting the $SWAP$ operations (that is, assuming the Hilbert spaces have been reordered as needed), $(\ref{px})$ can be written as
\betaegin{align}
p_x=&(\rhoho*D_1*D_2*\chidots*D_{N-1}*P_x^T)*(C_1*C_2*\chidots*C_{N})\nuonumber\\
=&T_x*C\nuonumber\\
\epsilonnd{align}
here
\betaegin{align}
T_x=\rhoho*D_1*D_2*\chidots*D_{N-1}*P_x^T,
\epsilonnd{align}
we call the set of operators $\boldsymbol{T}=\{T_x\}_x$ a quantum tester. Next we recall the mathematical structure of a quantum tester, which was given in \cite{chiribella2009theoretical}.
\betaegin{lemma}\chiite{chiribella2009theoretical}\leftarrowbel{qut}
Let $\boldsymbol{T}$ be a collection of positive operators on $\otimes_{j=1}^N(\mathcal{H}_j^{out}\otimes\mathcal{H}_j^{in}).$ Then $\boldsymbol{T}$ is a quantum tester if and only if
\begin{align}
\sum_{x\in X} T_x=&I_{A^{out}_N}\otimes \Gamma^{(N)},\label{qt}\\
\mathrm{tr}_{A_n^{in}}[\Gamma^{(n)}]=&I_{A_{n-1}^{out}}\otimes\Gamma^{(n-1)},\hspace{3mm}n=2,3,\dots,N,\label{qt1}\\
\mathrm{tr}_{A_1^{in}}[\Gamma^{(1)}]=&1,\label{qt2}
\end{align}
here $\Gammaamma^{(n)},$ $n=1,2,\chidots,N$ is a positive operator on $\muathcal{H}^{in}_n\omegatimes[\omegatimes_{j=1}^n(\muathcal{H}_j^{out}\omegatimes \muathcal{H}_j^{in})]$.
\epsilonnd{lemma}
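As a minimal sanity check (our example, using the same ordering conventions as the sketches above), for a single-step network ($N=1$) a tester element reduces to $T_x=P_x^T\otimes\rho$, and the outcome probability $p_x=T_x*C$ becomes $\mathrm{tr}[(P_x\otimes\rho^T)C]$:
\begin{verbatim}
import numpy as np

# N = 1: prepare rho, send it through the channel with C-J operator C, measure {P0, P1}
rho = np.array([[0.75, 0.0], [0.0, 0.25]])
P0 = np.diag([1.0, 0.0]); P1 = np.identity(2) - P0

psi = np.identity(2).reshape(4)
C = np.outer(psi, psi)          # C-J operator of the identity channel

for P in (P0, P1):
    print(np.trace(np.kron(P, rho.T) @ C).real)   # 0.75 and 0.25, i.e. tr[P rho]
\end{verbatim}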
\betaegin{figure*}
\chientering
\iotancludegraphics[width=170mm]{qct.png}
\chiaption{Here we present an example of a non-deterministic network.} \leftarrowbel{f1}
\epsilonnd{figure*}
\betaegin{figure*}
\chientering
\iotancludegraphics[width=170mm]{qcp.png}
\chiaption{Here we present the type of the network when we probe the quantum comb $(\rhoef{qc})$ by network of Fig. $\rhoef{f1}$.} \leftarrowbel{f2}
\epsilonnd{figure*}
\sigmaubsection{Max-Relative Entropy and Hypothesis Testing}
\iotandent Here we first recall the definition of max-relative entropy.
\betaegin{definition}
Assume $M$ and $N$ are two quantum combs on a Hilbert space $\otimes_{j=1}^n(\mathcal{H}_j^{out}\otimes \mathcal{H}_j^{in})$; the max-relative entropy of $M$ with respect to $N$ is defined as
\begin{align}
D_{\max}(M||N)=\log\min\{\lambda\mid\lambda N-M\ge 0\},
\end{align}
when $\supp(M)\not\subset \supp(N),$ $D_{\max}(M||N)=\infty.$\\
Assume $\epsilon\in (0,1);$ the $\epsilon$-smooth max-relative entropy of $M$ with respect to $N$ is defined as
\begin{align}
D_{\max}^{\epsilon}(M||N)=\min_{\tilde{M}\in S} D_{\max}(\tilde{M}||N),
\end{align}
where the minimum is taken over the set $S=\{\tilde{M}\mid\frac{1}{2}||\tilde{M}-M||_1\le \epsilon,\ \tilde{M} \textit{ is a quantum comb}\}.$
\epsilonnd{definition}
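Numerically, when $\supp(M)\subset\supp(N)$ the minimization in the definition reduces to an eigenvalue computation, $D_{\max}(M||N)=\log\lambda_{\max}(N^{-1/2}MN^{-1/2})$ with the inverse taken on the support of $N$. The following sketch is ours; we take logarithms to base $2$, which matches the later use of $2^{D_{\max}}$.
\begin{verbatim}
import numpy as np

def d_max(M, N, tol=1e-12):
    # D_max(M||N) = log2 min{lam : lam*N - M >= 0}
    w, V = np.linalg.eigh(N)
    on = w > tol                                      # support of N
    K = V[:, ~on]                                     # kernel of N
    if K.size and np.linalg.norm(K.conj().T @ M @ K) > tol:
        return np.inf                                 # supp(M) not inside supp(N)
    Ninv_sqrt = (V[:, on] / np.sqrt(w[on])) @ V[:, on].conj().T
    lam = np.linalg.eigvalsh(Ninv_sqrt @ M @ Ninv_sqrt).max()
    return np.log2(max(lam, tol))
\end{verbatim}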
Next we recall quantum hypothesis testing on states. Assume
\begin{align}
\textit{Null hypothesis } H_0:&\hspace{3mm}\rho,\nonumber\\
\textit{Alternative hypothesis } H_1:&\hspace{3mm}\sigma;
\end{align}
for a POVM $\{\Pi,I-\Pi\},$ the error probability of type \MakeUppercase{\romannumeral1} and the error probability of type \MakeUppercase{\romannumeral2} are defined as
\begin{align}
\alpha(\Pi)=&\mathrm{tr}[(I-\Pi)\rho],\nonumber\\
\beta(\Pi)=&\mathrm{tr}[\Pi\sigma].
\end{align}
Here $\alpha(\Pi)$ is the probability of accepting $\sigma$ when $\rho$ is true, and $\beta(\Pi)$ is the probability of accepting $\rho$ when $\sigma$ is true. Next we recall the quantity $\beta_{\epsilon}(\rho||\sigma),$
\begin{align}
\beta_{\epsilon}(\rho||\sigma)=\min\{\beta(\Pi)\mid\hspace{3mm}\alpha(\Pi)\le \epsilon\},
\end{align}
where the minimum is taken over all POVMs $\{\Pi,I-\Pi\}.$ \\
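For states, $\beta_{\epsilon}(\rho||\sigma)$ is a semidefinite program. A minimal sketch (ours; it assumes the cvxpy package with an SDP solver is available, and the function name is hypothetical) reads:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def beta_eps(rho, sigma, eps):
    # min tr[Pi sigma]  s.t.  tr[(I - Pi) rho] <= eps,  0 <= Pi <= I
    d = rho.shape[0]
    Pi = cp.Variable((d, d), hermitian=True)
    cons = [Pi >> 0, np.identity(d) - Pi >> 0,
            cp.real(cp.trace((np.identity(d) - Pi) @ rho)) <= eps]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(Pi @ sigma))), cons)
    prob.solve()
    return prob.value
\end{verbatim}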
\indent We now generalize hypothesis testing to quantum combs,
\betaegin{align}
\thetaextit{Null hypothesis } H_0: \etaspace{3mm}C_0,\nuonumber\\
\thetaextit{Alternative hypothesis } H_1: \etaspace{3mm}C_1.
\epsilonnd{align}
We define a quantity on quantum combs similar to $\betaeta_{\epsilonpsilon}(\rhoho||\sigmaigma),$
\betaegin{align}
\betaeta_{\epsilonpsilon}(C_0||C_1)=&\muin\{\betaeta_{\Gammaamma}(\Pii)|\alphalpha_{\Gammaamma}(\Pii)\le \epsilonpsilon\},\nuonumber\\
\betaeta_{\Gammaamma}(\Pii)=&\{\thetar[\Pii\Gammaamma^{1/2}C_1\Gammaamma^{1/2}]|\Gammaamma\iotan \thetaextit{DualComb}\}\nuonumber\\
\alphalpha_{\Gammaamma}(\Pii)=&\{\thetar[(I-\Pii)\Gammaamma^{1/2}C_0\Gammaamma^{1/2}]|\Gammaamma\iotan \thetaextit{DualComb}\}.
\epsilonnd{align}
here we denote that $\{\Pii,I-\Pii\}$ is a POVM and
\betaegin{align}\leftarrowbel{duc}
DualComb=\{\Gammaamma=&I_{A^{out}_N}\omegatimes \Gammaamma^{(N)},\nuonumber\\
\thetar_{A_n^{in}}[\Gammaamma^{(n)}]=&I_{A_{n-1}^{out}}\omegatimes\Gammaamma^{(n-1)},\etaspace{3mm}n=2,3,\chidots,N.\nuonumber\\
\thetar_{A_1^{in}}[\Gammaamma^{(1)}]=&1.\}
\epsilonnd{align}
\sigmaection{main results}
\indent In this section, we first present a quantity $\tilde{D}_{\max}^{\epsilon}$ for two combs, analogous to the smooth max-relative entropy, and we present bounds for quantum hypothesis testing between quantum combs in terms of this quantity. Then we present a bound on the regularized $\tilde{D}_{\max}^{\epsilon}$ of two combs in terms of the relative entropy of the two combs. At last, we present a quantum asymptotic equipartition property for the maximum score of performance operators.\\
\indent When $C_0$ and $C_1$ are two quantum combs, we define the following quantity, analogous to the smooth max-relative entropy of $C_0$ with respect to $C_1$:
\betaegin{align}
\thetailde{D}_{\muax}^{\epsilonpsilon}(C_0||C_1)=&\muax_{\Gammaamma}\muin_{C\iotan \thetailde{S^{^{\Gammaamma}}_{\epsilonpsilon}}}D_{\muax}(C||\Gammaamma^{1/2}C_1\Gammaamma^{1/2})\nuonumber\\
\thetailde{S^{^{\Gammaamma}}_{\epsilonpsilon}}=\{C|\frac{1}{2}||&\Gammaamma^{1/2}C_0\Gammaamma^{1/2}-C||_1\le \epsilonpsilon,tr C=1\}\etaspace{3mm}\Gammaamma \iotan \thetaextit{DualComb}
\epsilonnd{align}
Next we present a relation between the smooth max-relative entropy of $C_0$ with respect to $C_1$ and the quantity defined above.
\betaegin{theorem}
Assume $C_0$ and $C_1$ are two quantum combs on $\omegatimes_{j=1}^n[\muathcal{H}_j^{out}\omegatimes\muathcal{H}_j^{in}],$ then we have
\begin{align}
\tilde{D}_{\max}^{\epsilon\prod_{j} d_{j}^{out}}(C_0||C_1)\le D_{\max}^{\epsilon}(C_0||C_1).
\end{align}
\epsilonnd{theorem}
\betaegin{proof}
By the result in \chiite{chiribella2016optimal}, we have when $E_0$ and $E_1$ are two quantum combs,
\betaegin{align}
D_{\muax}(E_0||E_1)=\muax_{\Gammaamma}D_{\muax}(\Gammaamma^{1/2}E_0\Gammaamma^{1/2}||\Gammaamma^{1/2}E_1\Gammaamma^{1/2}),
\epsilonnd{align}
here $\Gammaamma\iotan DualComb$.\piar
Assume $C$ is optimal for $D_{\max}^{\epsilon}(C_0||C_1)$; as $C$ is a quantum comb, $\mathrm{tr}[\Gamma^{1/2}C\Gamma^{1/2}]=1,$ and
\betaegin{align}
&\thetar|\Gammaamma^{1/2}C_0\Gammaamma^{1/2}-\Gammaamma^{1/2}C\Gammaamma^{1/2}|\nuonumber\\
\le &\thetar [\Gammaamma^{1/2}(C_0-C)_{+}\Gammaamma^{1/2}]\nuonumber\\
\le &\epsilon\, \mathrm{tr}\Gamma=\epsilon\prod_{j} d_{j}^{out}
\epsilonnd{align}
here the second inequality is due to $\frac{1}{2}||C-C_0||_1\le\epsilon$. Then $\Gamma^{1/2}C\Gamma^{1/2}$ is in $\tilde{S}_{\epsilon\prod_j d_j^{out}}^{\Gamma};$ at last, by the definitions of $\tilde{D}_{\max}^{\epsilon}$ and $D_{\max}^{\epsilon}(C_0||C_1)$, we finish the proof.
\epsilonnd{proof}
\betaegin{theorem}
Assume $C_0$ and $C_1$ are two quantum combs. Let $\leftarrowmbda>0,$ $ \thetariangle_{\Gammaamma}(\leftarrowmbda)=[\Gammaamma^{1/2}(C_0-\leftarrowmbda C_1)\Gammaamma^{1/2}]_{+},$ here $\Gammaamma\iotan$ $DualComb,$ then we have
\betaegin{align}
\thetailde{D}_{\muax}^{g(\leftarrowmbda)}(C_0||C_1)\le \muax_{\Gammaamma}\log\frac{\leftarrowmbda}{\sigmaqrt{1-g_{\Gammaamma}^2(\leftarrowmbda)}}, \leftarrowbel{qcdmaxht}
\epsilonnd{align}
here $g_{\Gamma}(\lambda)=\sqrt{\mathrm{tr}[\triangle_{\Gamma}(\lambda)(2-\mathrm{tr}\triangle_{\Gamma}(\lambda))]}$ and the maximum is taken over all $\Gamma\in {DualComb}$.
\epsilonnd{theorem}
\betaegin{proof}
As $C_x$, $x=0,1,$ is a quantum comb, $C_x$ is a positive semidefinite operator, and hence $\Gamma^{1/2}C_x\Gamma^{1/2}$ is also positive semidefinite, with
\betaegin{align}
&tr \Gammaamma^{1/2} C_x\Gammaamma^{1/2}\nuonumber\\
=&tr\Gammaamma C_x\nuonumber\\
=&1\etaspace{5mm} x=0,1,
\epsilonnd{align}
here the second equality is due to Lemmas $\ref{qcb}$ and $\ref{qut}$; that is, $\Gamma^{1/2}C_x\Gamma^{1/2}$ is a state.
\par The theorem can then be proven by a method similar to that of \cite{datta2013smooth}; we present the proof for completeness.\par
Here we denote
\betaegin{align}
\thetariangle_{\Gammaamma}(\leftarrowmbda)=[\Gammaamma^{1/2}(C_0-\leftarrowmbda C_1)\Gammaamma^{1/2}]_{+},
\epsilonnd{align}
then we have
\betaegin{align}
\Gammaamma^{1/2}C_0\Gammaamma^{1/2}\le \Gammaamma^{1/2}\leftarrowmbda C_1\Gammaamma^{1/2}+\thetariangle_{\Gammaamma}(\leftarrowmbda),
\epsilonnd{align}
by the Lemma C. 5 in \chiite{brandao2010generalization}, there exists a state $\rhoho$ such that
\betaegin{align}
\rhoho\le (1-tr\thetariangle_{\Gammaamma}(\leftarrowmbda))^{-1}&\Gammaamma^{1/2}\leftarrowmbda C_1\Gammaamma^{1/2},\nuonumber\\
D_{\muax}(\rhoho||\Gammaamma^{1/2}C_1\Gammaamma^{1/2})&\le \log[\leftarrowmbda(1-tr\thetariangle_{\Gammaamma}(\leftarrowmbda))^{-1}],\nuonumber\\
\frac{1}{2}||\rhoho-\Gammaamma^{1/2}C_0\Gammaamma^{1/2}||&\le g_{\Gammaamma}(\leftarrowmbda),
\epsilonnd{align}
when $\mathrm{tr}\triangle_{\Gamma}(\lambda)$ attains its maximum over the variable $\Gamma$, the function $\log[\lambda(1-\mathrm{tr}\triangle_{\Gamma}(\lambda))^{-1}]$ attains its maximum,
which finishes the proof.
\epsilonnd{proof}
\betaegin{remark}\leftarrowbel{trianglelambda}
By the same method in \chiite{datta2013smooth}, we have when $\sigmaupp C_0\sigmaubset\sigmaupp C_1,$ $tr\thetariangle_{\Gammaamma}(\leftarrowmbda)$ is strictly decreasing and continuous on $[0,2^{D_{\muax}(\Gammaamma^{1/2}C_0\Gammaamma^{1/2}||\Gammaamma^{1/2}C_1\Gammaamma^{1/2})}]$ with range $[0,1],$ and $g_{\Gammaamma}(\leftarrowmbda)$ is strictly decreasing and continuous on $[0,2^{D_{\muax}(\Gammaamma^{1/2}C_0\Gammaamma^{1/2}||\Gammaamma^{1/2}C_1\Gammaamma^{1/2})}]$ with range $[0,1]$.
\epsilonnd{remark}
\piar\iotandent Next we present a relationship between the max-relative entropy and the hypothesis testing in terms of quantum combs.
\betaegin{theorem}
Assume $C_0$ and $C_1$ are two quantum combs, then we have
\begin{align}
\tilde{D}_{\max}^{f(\epsilon)}(C_0||C_1) \le-\log\beta_{1-\epsilon}(C_0||C_1)\le \tilde{D}_{\max}^{\epsilon^{'}}(C_0||C_1)+\log\frac{1}{\epsilon-\epsilon^{'}},
\end{align}
here $f(\epsilonpsilon)=\sigmaqrt{1-(1-\epsilonpsilon)^2}.$
\epsilonnd{theorem}
\betaegin{proof}
\indent First we prove the upper bound. It suffices to show that, for every $\Pi$ with $0\le \Pi\le I$ satisfying
\betaegin{align}
\log\betaeta_{\Gammaamma}(\Pii)<-\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)-\log\frac{1}{\epsilonpsilon-\epsilonpsilon^{'}},\leftarrowbel{assume}
\epsilonnd{align}
we have
\betaegin{align}
\alphalpha_{\Gammaamma}(\Pii)>1-\epsilonpsilon.
\epsilonnd{align}
By the definition of $\tilde{D}_{\max}^{\epsilon^{'}}(C_0||C_1),$ there exist a state $C$ and an operator $\Gamma\in DualComb$ such that
\betaegin{align}
C\le 2^{\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)}\Gammaamma^{1/2}C_1\Gammaamma^{1/2},\leftarrowbel{dmax'}
\epsilonnd{align}
then we have
\betaegin{align}
tr\Pii C\le& 2^{\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)}tr \Pii \Gammaamma^{1/2}C_1\Gammaamma^{1/2}\nuonumber\\
=&2^{\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)}\betaeta_{\Gammaamma}(\Pii)\nuonumber\\
<&2^{\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)}2^{-[{\thetailde{D}_{\muax}^{\epsilonpsilon^{'}}(C_0||C_1)}+\log(\epsilonpsilon-\epsilonpsilon^{'})]}\nuonumber\\
=&\epsilonpsilon-\epsilonpsilon^{'},
\end{align}
here the first inequality is due to $(\ref{dmax'})$ and the strict inequality is due to $(\ref{assume}).$
Then we have
\betaegin{align}
1-\alphalpha_{\Gammaamma}(\Pii)=&tr(\Pii \Gammaamma^{1/2}C_0\Gammaamma^{1/2})\nuonumber\\
=&tr(\Pii C)+\thetar[\Pii (\Gammaamma^{1/2}C_0\Gammaamma^{1/2}-C)]\nuonumber\\
\le &\epsilonpsilon-\epsilonpsilon^{'}+|| \Gammaamma^{1/2}C_0\Gammaamma^{1/2}-C||/2\nuonumber\\
\le &\epsilonpsilon-\epsilonpsilon^{'}+\epsilonpsilon^{'}\le \epsilonpsilon,
\epsilonnd{align}
\indent Next we prove the lower bound. Assume $\Gamma$ is optimal for $\tilde{D}^{f(\epsilon)}_{\max}(C_0||C_1).$ By Remark $\ref{trianglelambda},$ there exists $\lambda$ such that $\mathrm{tr}\triangle_{\Gamma}(\lambda)=\epsilon;$ we denote $\Pi_{\Gamma}=\{\Gamma^{1/2}C_0\Gamma^{1/2}\ge\Gamma^{1/2}\lambda C_1\Gamma^{1/2}\}$, then
\betaegin{align}
tr\Pii_{\Gammaamma} \Gammaamma^{1/2}C_0\Gammaamma^{1/2}\gammae tr\Pii_{\Gammaamma}\Gammaamma^{1/2}(C_0-\leftarrowmbda C_1)\Gammaamma^{1/2}=\epsilonpsilon,
\epsilonnd{align}
that is, $\alphalpha_{\Gammaamma}(\Pii)\le 1-\epsilonpsilon,$ hence we have
\betaegin{align}
-\log\betaeta_{1-\epsilonpsilon}(C_0||C_1)\gammae -\log\betaeta_{\Gammaamma}(\Pii),
\epsilonnd{align}
as
\betaegin{align}
\epsilonpsilon=&tr\thetariangle(\leftarrowmbda)\nuonumber\\=&tr\Pii_{\Gammaamma}\Gammaamma^{1/2}(C_0-\leftarrowmbda C_1)\Gammaamma^{1/2}\nuonumber\\\le& 1-\leftarrowmbda tr\Pii_{\Gammaamma} \Gammaamma^{1/2}C_1\Gammaamma^{1/2},
\epsilonnd{align}
that is, $\betaeta_{\Gammaamma}(\Pii)\le \frac{1-\epsilonpsilon}{\leftarrowmbda},$
then we have there exists a $\Gammaamma$ such that
\betaegin{align}
&-\log\beta_{\Gamma}(\Pi)\nonumber\\
\ge& \log\lambda-\log(1-\epsilon)\nonumber\\
\ge& \tilde{D}_{\max}^{f(\epsilon)}(C_0||C_1)+\log \sqrt{1-g_{\Gamma}^2(\lambda)}-\log(1-\epsilon)\nonumber\\=&\tilde{D}_{\max}^{f(\epsilon)}(C_0||C_1),
\end{align}
here the second inequality is due to $(\ref{qcdmaxht})$ together with the choice $\mathrm{tr}\triangle_{\Gamma}(\lambda)=\epsilon$, which gives $g_{\Gamma}(\lambda)=\sqrt{1-(1-\epsilon)^2}=f(\epsilon)$ and hence $\log\sqrt{1-g_{\Gamma}^2(\lambda)}=\log(1-\epsilon)$; the existence of such $\lambda$ uses the continuity of $g_{\Gamma}(\lambda)$ and that its range is $[0,1]$, as presented in Remark \ref{trianglelambda}.
\epsilonnd{proof}
\par Next we present a result on the max-relative entropy of quantum combs, using the generalized quantum Stein's lemma \cite{brandao2010generalization}.
\betaegin{theorem}
Assume $C_0$ is a quantum comb, $C_1$ is a quantum comb with full rank, then we have
\betaegin{align}
&\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\frac{1}{k}\thetailde{D}^{\epsilonpsilon}_{\muax}(C^{\omegatimes k}_0||C^{\omegatimes k}_1)\nuonumber\\
\gammae&\muax_{\Gammaamma\iotan DualComb} D(\Gammaamma^{1/2}C_0\Gammaamma^{1/2}||\Gammaamma^{1/2} C_1\Gammaamma^{1/2}), \leftarrowbel{gsl}
\epsilonnd{align}
\epsilonnd{theorem}
\betaegin{proof}
Here we assume
\betaegin{align}
M_k^{\Gammaamma}=\{(\Gammaamma^{1/2})^{\omegatimes k}C_1^{\omegatimes k}(\Gammaamma^{1/2})^{\omegatimes k}|\Gammaamma\iotan DualComb.\},
\epsilonnd{align}
\piar
Next we prove that $M_k^{\Gamma}$ satisfies properties 1-5 in \cite{brandao2010generalization}.\\
(i) The set $\{\Gamma^{(1)}\mid\mathrm{tr}_{A^{in}_1}[\Gamma^{(1)}]=1\}$ is closed; since the preimage of a closed set under a continuous map is closed, the set $M_1^{\Gamma}$ is closed, and by the definition of $Comb^{\otimes k}$ the set $M_k^{\Gamma}$ is closed.\\
(ii) The set $M_k^{\Gamma}$ contains an operator with full rank.\\
(iii) Assume $D_{k+1}$ is an element of the set $M_{k+1}^{\Gamma}.$ As $$\mathrm{tr}_{l}\,(\Gamma^{1/2})^{\otimes k+1}C_1^{\otimes k+1}(\Gamma^{1/2})^{\otimes k+1}=(\Gamma^{1/2})^{\otimes k}C_1^{\otimes k}(\Gamma^{1/2})^{\otimes k},$$
for an arbitrary copy $l$ we have $\mathrm{tr}_{l}D_{k+1}\in M_k^{\Gamma}.$\\
(iv) as \betaegin{align}
(\Gammaamma^{1/2})^{\omegatimes s}C_1^{\omegatimes s}(\Gammaamma^{1/2})^{\omegatimes s} \iotan M_s^{\Gammaamma},\nuonumber\\ (\Gammaamma^{1/2})^{\omegatimes t}C_1^{\omegatimes t}(\Gammaamma^{1/2})^{\omegatimes t } \iotan M_t^{\Gammaamma},
\epsilonnd{align}
then from the definition of $M_k^{\Gammaamma},$ then we have
\betaegin{align}
(\Gammaamma^{1/2})^{\omegatimes s}C_1^{\omegatimes s}(\Gammaamma^{1/2})^{\omegatimes s}\omegatimes(\Gammaamma^{1/2})^{\omegatimes t}C_1^{\omegatimes t}(\Gammaamma^{1/2})^{\omegatimes t }\iotan M^{\Gammaamma}_{s+t}.
\epsilonnd{align}
(v) Assume $(\Gammaamma^{1/2})^{\omegatimes k}C_1^{\omegatimes k}(\Gammaamma^{1/2})^{\omegatimes k}\iotan M_k^{\Gammaamma},$ then $P_{\pii}[(\Gammaamma^{1/2})^{\omegatimes k}C_1^{\omegatimes k}(\Gammaamma^{1/2})^{\omegatimes k}]=(\Gammaamma^{1/2})^{\omegatimes k}C_1^{\omegatimes k}(\Gammaamma^{1/2})^{\omegatimes k},$ $\pii$ is a permutation.\\
Next as
\betaegin{align}
\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\frac{1}{k}\thetailde{D}^{\epsilonpsilon}_{\muax}(C^{\omegatimes k}_0||C^{\omegatimes k}_1)\gammae &\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muax_{\Gammaamma\iotan DualComb}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} D_{\muax}(C||(\Gammaamma^{1/2})^{\omegatimes k}(C_1)^{\omegatimes k}(\Gammaamma^{1/2})^{\omegatimes k})\nuonumber\\
\gammae& \lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} D_{\muax}(C||(\Gammaamma^{1/2})^{\omegatimes k}C^{\omegatimes k}_1(\Gammaamma^{1/2})^{\omegatimes k}),
\epsilonnd{align}
then
\betaegin{align}
&\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muax_{\Gammaamma\iotan DualComb}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} D_{\muax}(C||(\Gammaamma^{1/2}C_1\Gammaamma^{1/2})^{\omegatimes k})\nuonumber\\
\gammae& \lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} D_{\muax}(C||(\Gammaamma^{1/2}C_1\Gammaamma^{1/2})^{\omegatimes k}),
\epsilonnd{align}
and we have
\betaegin{widetext}
\betaegin{align}
\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muax_{\Gammaamma\iotan DualComb}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} {D}_{\muax}(C||(\Gammaamma^{1/2}C_1\Gammaamma^{1/2})^{\omegatimes k})\gammae \muax_{\Gammaamma\iotan DualComb}\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} {D}_{\muax}(C||(\Gammaamma^{1/2}C_1\Gammaamma^{1/2})^{\omegatimes k}),\leftarrowbel{mutual}
\epsilonnd{align}
\epsilonnd{widetext}
At last, by the Proposition $\upsilonppercase\epsilonxpandafter{\rhoomannumeral2}.1$ in \chiite{brandao2010generalization}, we have
\betaegin{align}
&\muax_{\Gammaamma\iotan DualComb}\lim_{\epsilonpsilon\rhoightarrow 0}\lim_{k\rhoightarrow\iotanfty}\muin_{C\iotan \thetailde{S^{{\Gammaamma}^{\omegatimes k}}_{\epsilonpsilon}}}\frac{1}{k} {D}_{\muax}(C||(\Gammaamma^{1/2}C_1\Gammaamma^{1/2})^{\omegatimes k})\nuonumber\\
=&\muax_{\Gammaamma\iotan DualComb} \lim_{k\rhoightarrow\iotanfty}\frac{1}{k}D((\Gammaamma^{1/2}C_0\Gammaamma^{1/2})^{\omegatimes k}||(\Gammaamma^{1/2} C_1\Gammaamma^{1/2})^{\omegatimes k})\nuonumber\\
=&\muax_{\Gammaamma\iotan DualComb} D(\Gammaamma^{1/2}C_0\Gammaamma^{1/2}||\Gammaamma^{1/2} C_1\Gammaamma^{1/2}).
\epsilonnd{align}
\epsilonnd{proof}
\par \indent At last, we recall a scenario of assessing the performance of an unknown quantum network \cite{chiribella2016optimal}. There a quantum network is connected to a quantum tester; when the tester returns an outcome $x$, we assign a weight $w_x$. Let $C$ be the quantum comb and $T=\{T_x\mid x\in Y\}$ be the quantum tester; then the average score is given by
\betaegin{align}
w=&\sigmaum_x w_x(T_x*C)\nuonumber\\
=&\Omegamega*C,
\epsilonnd{align}
here $\Omega=\sum_x w_x T_x$ is called the performance operator; the performance of a quantum network $C$ is thus determined by $\Omega$. The maximum score is
\betaegin{align}
w_{\muax}=&\muax_{C\iotan \muathcal{C}}\Omegamega*C\nuonumber\\
=&\muax_{C\iotan\muathcal{C}}tr\Omegamega C^T\nuonumber\\
=&\muax_{C\iotan\muathcal{C}}tr \Omegamega C.
\epsilonnd{align}
Here $\mathcal{C}$ denotes the set of quantum combs.
The last equality is due to the fact that the set of quantum combs is closed under transposition.
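For the simplest case $N=1$, where the quantum combs are exactly the C-J operators of channels, the maximum score is itself a semidefinite program, $w_{max}=\max\{\mathrm{tr}[\Omega C] : C\ge 0,\ \mathrm{tr}_{out}C=I_{in}\}$. The sketch below is ours (assuming cvxpy and the $out\otimes in$ ordering used earlier):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def w_max_one_slot(Omega, d_out, d_in):
    # maximize tr[Omega C] over Choi operators C of channels L(H_in) -> L(H_out)
    C = cp.Variable((d_out * d_in, d_out * d_in), hermitian=True)
    ket = np.identity(d_out)
    # tr_out C written as sum_i (<i| tensor I) C (|i> tensor I)
    tr_out = sum(np.kron(ket[i:i + 1, :], np.identity(d_in)) @ C
                 @ np.kron(ket[:, i:i + 1], np.identity(d_in)) for i in range(d_out))
    prob = cp.Problem(cp.Maximize(cp.real(cp.trace(Omega @ C))),
                      [C >> 0, tr_out == np.identity(d_in)])
    prob.solve()
    return prob.value
\end{verbatim}
For $N=1$ the dual characterization in the lemma below specializes (in our reading) to $w_{max}=\min\{\lambda : \lambda(I_{out}\otimes\gamma)\ge \Omega,\ \gamma\ge0,\ \mathrm{tr}\gamma=1\}$, which can serve as a numerical cross-check.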
Next we present another characterization of the maximum score $w_{\max}$ of a performance operator $\Omega.$
\betaegin{lemma}\leftarrowbel{max}\chiite{chiribella2016optimal}
Let $\Omega$ be an operator on $\otimes_{j=1}^N(\mathcal{H}_j^{out}\otimes\mathcal{H}_j^{in})$ and let $w_{max}$ be the maximum of $\mathrm{tr}[\Omega C]$ over all quantum combs $C$; then $w_{max}$ is given by
\betaegin{align}
w_{max}=\muin\{\leftarrowmbda\iotan \muathbb{R}|\leftarrowmbda\thetaheta\gammae \Omegamega,\thetaheta\iotan DualComb\},
\epsilonnd{align}
When $\Omegamega$ is positive, $w_{max} $ can be written as
\betaegin{align}
w_{max}=2^{D_{max}(\Omegamega||DualComb)}.\leftarrowbel{maxsc}
\epsilonnd{align}
\epsilonnd{lemma}
\par \indent Next we present a smooth asymptotic result for the maximum score of a performance operator; we first present a lemma that will be used in Theorem $\ref{asy}$.
\betaegin{lemma}\leftarrowbel{cdc}
Assume $P(M_1,M_2)\le \epsilon,$ where $M_1,M_2$ are positive operators with the same trace; then $|w_{\max}(M_1)- w_{\max}(M_2)|\le d_{2n-2}d_{2n-4}\cdots d_{0}\epsilon$, where $d_{2i-2}$ is the dimension of the Hilbert space $A_i^{in}.$
\epsilonnd{lemma}
\betaegin{proof}
As $||M_1-M_2||_1\le 2\epsilonpsilon,$ and $\thetar(M_1-M_2)=0,$ ($\thetar(M_1-M_2)_{+}=\thetar(M_1-M_2)_{-}$) then $\epsilonpsilon\thetariangle=(M_1-M_2)_{+}\le \epsilonpsilon I,$ here $(M_1-M_2)_{+}$ is the positive part of $M_1-M_2,$ $\thetariangle$ is a bona fide state.
\par \indent Assume $\Theta_1$ is optimal for $M_1$ in terms of $(\ref{maxsc})$, with corresponding value $\lambda_1=w_{\max}(M_1);$ then
\betaegin{align}
\leftarrowmbda_1\Thetaheta_1+\epsilonpsilon I\gammae &
\leftarrowmbda_1\Thetaheta_1+(M_2-M_1)_{+}\nuonumber\\
\gammae& M_1+(M_2-M_1)_{+}\gammae M_2,
\epsilonnd{align}
as $DualComb$ is the set of operators satisfying $(\ref{qt})-(\ref{qt2})$, and these constraints are linear, the set $DualComb$ is convex; moreover $I/(d_{2n-2}d_{2n-4}\cdots d_{0})\in DualComb,$ hence $|w_{\max}(M_2)-w_{\max}(M_1)|\le d_{2n-2}d_{2n-4}\cdots d_{0}\epsilon.$
\epsilonnd{proof}
\betaegin{theorem}\leftarrowbel{asy}
Let $\Omega$ be a positive operator on $\otimes_{j=1}^N(\mathcal{H}_j^{out}\otimes\mathcal{H}_j^{in})$; then $\lim_{\epsilon\rightarrow0}\lim_{n\rightarrow\infty}\frac{1}{n}\log w_{max}^{\epsilon}(\Omega^{\otimes n})=\log w_{max}(\Omega)$.
Here
\begin{align}
w_{max}^{\epsilon}(\Omega)=&\sup_{\substack{\frac{1}{2}||\Omega-\Omega^{'}||_1\le \epsilon\\
\mathrm{tr}\Omega=\mathrm{tr}\Omega^{'}}}w_{max}(\Omega^{'})\nonumber\\=&\sup_{\substack{\frac{1}{2}||\Omega-\Omega^{'}||_1\le \epsilon\\
\mathrm{tr}\Omega=\mathrm{tr}\Omega^{'}}}\min_{\theta\in DualComb}\min\{\lambda\in \mathbb{R}\mid\lambda\theta\ge \Omega^{'}\},
\end{align}
\epsilonnd{theorem}
\betaegin{proof}
Here we will show that $\lim\limits_{n\rhoightarrow\iotanfty}\frac{1}{n}\log w_{max}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\gammae \log w_{max}(\Omegamega).$\\
\iotandent From the definition and $0\le \epsilonpsilon$, we have
\betaegin{align}
\log w^0_{\muax}(\Omegamega)=&\log w_{\muax}(\Omegamega)
\epsilonnd{align}
\betaegin{align}\leftarrowbel{wmax}
w_{max}(\Omegamega)\nuonumber=&\iotanf\{\leftarrowmbda\iotan\muathbb{R}|\leftarrowmbda\thetaheta\gammae \Omegamega,\thetaheta\iotan DualComb\}\nuonumber\\
=&\sigmaup\{\thetar\Omegamega M|M\iotan \muathcal{C}\},
\epsilonnd{align}
Here the first equality is due to the Lemma \rhoef{max}, the second equality is due to the definition of $w_{max}.$\\
\indent The set of combs is closed under the tensor product: if $\Theta\in Comb,$ then $\Theta^{\otimes n}$ satisfies the equality $(\ref{qc'})$, that is, $\Theta^{\otimes n}\in Comb.$ Next assume $M$ is optimal for the second equality of $(\ref{wmax})$ for $w_{\max}(\Omega)$; then from the second equality of (\ref{wmax}) we have
\betaegin{align}\leftarrowbel{wmaxn}
w_{\muax}(\Omegamega^{\omegatimes n})\gammae& \thetar\Omegamega^{\omegatimes n}M^{\omegatimes n}\nuonumber\\=&(\thetar\Omegamega M)^n=w_{\muax}(\Omegamega)^n.
\epsilonnd{align}
The first inequality in $(\ref{wmaxn})$ holds because the comb set is closed under the tensor product. \\
Then we have
\betaegin{align}
w_{\muax}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\gammae &w^0_{\muax}(\Omegamega^{\omegatimes n})\gammae w_{\muax}(\Omegamega)^n,
\epsilonnd{align} \betaegin{align}
\lim_{n\rhoightarrow\iotanfty}\frac{1}{n}\log w_{max}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\gammae &\log w_{\muax}(\Omegamega)
\epsilonnd{align}
Next we prove
\betaegin{align}
\lim\limits_{\epsilonpsilon\rhoightarrow 0}\lim_{n\rhoightarrow\iotanfty}\frac{1}{n}\log w_{\muax}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\le \log w_{\muax}(\Omegamega).\leftarrowbel{wmaxg}
\epsilonnd{align}
As \betaegin{align}\leftarrowbel{max1}
w_{\muax}(\Omegamega)=\iotanf\{\leftarrowmbda\iotan\muathbb{R}|\leftarrowmbda\thetaheta\gammae\Omegamega,\thetaheta\iotan DualComb\},
\epsilonnd{align}
let $\theta$ be optimal for $w_{\max}(\Omega)$ in terms of (\ref{max1}); as $\theta\in DualComb,$ by the equality (\ref{duc}), $\theta$ can be written as $I\otimes \Gamma,$ then
\betaegin{align}
&(w_{\muax}(\Omegamega) \thetaheta)^{\omegatimes 2}-\Omegamega^{\omegatimes 2}\nuonumber\\
=&(w_{max}I\omegatimes\Gammaamma)^{\omegatimes 2}-\Omegamega\omegatimes(w_{\muax} I\omegatimes\Gammaamma)\nuonumber\\+&\Omegamega\omegatimes(w_{\muax} I\omegatimes\Gammaamma)-\Omegamega^{\omegatimes2}\nuonumber\\
=&(w_{\muax}I\omegatimes\Gammaamma-\Omegamega)\omegatimes(w_{\muax}I\omegatimes\Gammaamma)\nuonumber\\+&\Omegamega\omegatimes(w_{\muax}I\omegatimes\Gammaamma-\Omegamega)\gammae0,
\epsilonnd{align}
similarly, we have $(w_{\max}(\Omega) \theta)^{\otimes n}-\Omega^{\otimes n}\ge0$ for all $n,$ that is, $w_{\max}(\Omega^{\otimes n})\le w_{\max}^n(\Omega);$ by Lemma \ref{cdc}, we have
\betaegin{align}
&\frac{1}{n}\log w_{\muax}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\nuonumber\\
\le &\frac{1}{n}\log(d_{2n-2}d_{2n-4}\cdots d_{0}\epsilon+w_{\max}(\Omega^{\otimes n}))\nonumber\\
\le &\frac{1}{n}\log(d_{2n-2}d_{2n-4}\chidots d_{0}\epsilonpsilon+w_{\muax}^n(\Omegamega)),
\epsilonnd{align}
then we have
\betaegin{align}
\lim\limits_{\epsilonpsilon\rhoightarrow 0}\lim_{n\rhoightarrow\iotanfty}\frac{1}{n}\log w_{\muax}^{\epsilonpsilon}(\Omegamega^{\omegatimes n})\le \log w_{\muax}(\Omegamega).
\epsilonnd{align}
\epsilonnd{proof}
\sigmaection{Conclusion}
\indent In this article, we have presented a quantity analogous to the smooth max-relative entropy for two quantum combs, and we have established a relationship between this quantity and the type \uppercase\expandafter{\romannumeral2} error of the quantum hypothesis testing on quantum networks introduced above. We have also presented a lower bound on the regularized smooth max-relative entropy of two quantum combs using the result in \cite{brandao2010generalization}. At last, we have shown that the smooth asymptotic version of the maximum score of a positive performance operator coincides with the maximum score of the operator.
\sigmaection{Acknowledgements}
The authors thank Mengyao Hu for her help with plotting the network figures. The authors were supported by the NNSF of China (Grant No. 11871089) and the Fundamental Research Funds for the Central Universities (Grant Nos. KG12080401 and ZG216S1902).
\betaibliographystyle{IEEEtran}
\betaibliography{reff}
\epsilonnd{document}
\begin{document}
\title{Tracking chains revisited}
\author{Gunnar Wilken
\footnote{This article is a pre-print of a chapter in {\it Sets and Computations}, Lecture Notes Series Vol.\ 33,
Institute for Mathematical Sciences, National University of Singapore, \copyright World Scientific Publishing Company (2017), see \cite{W17}.
The author would like to acknowledge the Institute for Mathematical Sciences of the National University of Singapore
for its partial support of this work during the ``Interactions'' week of the workshop {\it Sets and Computations} in April 2015.
}\\
Structural Cellular Biology Unit\\
Okinawa Institute of Science and Technology\\
1919-1 Tancha, Onna-son, 904-0495 Okinawa, Japan\\
{\tt [email protected]}
}
\maketitle
\betagin{abstract}
The structure ${\operatorname{C}}two:=(1^\infty,\le,\le_1,\le_2)$, introduced and first analyzed in \cite{CWc}, is shown to be elementary recursive.
Here, $1^\infty$ denotes the proof-theoretic ordinal of the fragment $\Pi^1_1$-$\mathrm{CA}_0$ of second order number theory,
or equivalently the set theory ${\operatorname{KP}\!\ell_0}$, which axiomatizes limits of models of Kripke-Platek set theory with infinity.
The partial orderings $\le_1$ and $\le_2$ denote the relations of $\Sigma_1$- and $\Sigma_2$-elementary substructure, respectively.
In a subsequent article \cite{W} we will show that the structure ${\operatorname{C}}two$ comprises the core of the structure ${\cal R}two$ of pure
elementary patterns of resemblance of order $2$.
In \cite{CWc} the stage has been set by showing that the least ordinal containing a cover of each pure pattern of order $2$
is $1^\infty$. However, it is not obvious from \cite{CWc} that ${\operatorname{C}}two$ is an elementary recursive structure. This is shown here
through a considerable disentanglement in the description of connectivity components of $\le_1$ and $\le_2$.
The key to and starting point of our analysis is the apparatus of ordinal arithmetic developed in \cite{W07a} and in Section $5$
of \cite{CWa}, which was enhanced in \cite{CWc}, specifically for the analysis of ${\operatorname{C}}two$.
\end{abstract}
\section{Introduction}
Let ${\cal R}two=\left({\mathrm{Ord}};\le,\le_1,\le_2\right)$ be the structure of ordinals with standard linear ordering $\le$ and
partial orderings $\le_1$ and $\le_2$, simultaneously defined by induction on $\beta$ in
\[\alpha\le_i\beta:\Leftrightarrow \left(\alpha;\le,\le_1,\le_2\right) \preceq_{\Sigma_i} \left(\beta;\le,\le_1,\le_2\right)\]
where $\preceq_{\Sigma_i}$ is the usual notion of $\Sigma_i$-elementary substructure (without bounded quantification), see \cite{C99,C01}
for fundamentals and groundwork on elementary patterns of resemblance.
Pure patterns of order $2$ are the finite isomorphism types of ${\cal R}two$. The \emph{core}
of ${\cal R}two$ consists of the union of \emph{isominimal realizations} of these patterns within ${\cal R}two$, where a finite
substructure of ${\cal R}two$ is called isominimal, if it is pointwise minimal (with respect to increasing enumerations)
among all substructures of ${\cal R}two$ isomorphic to it, and where an isominimal substructure of ${\cal R}two$ realizes a pattern $P$, if
it is isomorphic to $P$. It is a basic observation, cf.\ \cite{C01}, that the class of pure patterns of order $2$ is contained in the class
$\mathcal{RF}_2$ of \emph{respecting forests of order $2$}:
finite structures $P$ over the language $(\le_0,\le_1,\le_2)$ where $\le_0$ is a linear ordering and $\le_1,\le_2$ are forests such that
$\le_2\subseteq\le_1\subseteq\le_0$ and $\le_{i+1}$ \emph{respects} $\le_i$, i.e.\ $p\le_i q\le_i r\:\&\: p\le_{i+1}r$ implies $p\le_{i+1}q$
for all $p,q,r\in P$, for $i=0,1$.
In \cite{CWc} we showed that every pattern has a cover below $1^\infty$, the least such ordinal.
Here, an order isomorphism (embedding) is a cover (covering, respectively) if it maintains the relations $\le_1$ and $\le_2$.
The ordinal of ${\operatorname{KP}\!\ell_0}$ is therefore least such that there exist arbitrarily long finite $\le_2$-chains.
Moreover, by determination of enumeration functions of (relativized) connectivity components of $\le_1$ and $\le_2$ we
were able to describe these relations in terms of classical ordinal notations. The central observation in connection with
this is that every ordinal below $1^\infty$ is the greatest element in a $\le_1$-chain in which $\le_1$- and $\le_2$-chains alternate.
We called such chains \emph{tracking chains} as they provide all $\le_2$-predecessors and the greatest $\le_1$-predecessors
insofar as they exist.
In the present article we will review and slightly extend the ordinal arithmetical toolkit and then
verify through a disentangling reformulation, that \cite{CWc} in fact yields an elementary recursive characterization of the restriction of ${\cal R}two$ to the structure
${\operatorname{C}}two=(1^\infty;\le,\le_1,\le_2)$. It is not obvious from \cite{CWc} that ${\operatorname{C}}two$ is an elementary recursive structure since several proofs
there make use of transfinite induction up to $1^\infty$, which allowed for a somewhat shorter argumentation there.
We will summarize the results in \cite{CWc} to a sufficient and convenient degree.
As a byproduct, \cite{CWc} will become considerably more accessible. We will prove the equivalence of the arithmetical descriptions of
${\operatorname{C}}two$ given in \cite{CWc} and here.
Note that the equivalence of this elementary recursive characterization with the original structure based on elementary substructurehood is proven in Section $7$ of \cite{CWc}, using full transfinite induction up to the
ordinal of ${\operatorname{KP}\!\ell_0}$. In this article we rely on this result and henceforth identify ${\operatorname{C}}two$ with its arithmetical characterization given in \cite{CWc} and further illuminated in Section \ref{revisitsec} of the present article, where we also show that the finite isomorphism types
of the arithmetical ${\operatorname{C}}two$ are respecting forests of order $2$, without relying on semantical characterization of the arithmetical ${\operatorname{C}}two$.
With these preparations out of the way we will be able to provide, in a subsequent article \cite{W}, an algorithm that assigns
an isominimal realization within ${\operatorname{C}}two$ to each respecting forest
of order $2$, thereby showing that each such respecting forest is in fact (up to isomorphism)
a pure pattern of order $2$.
The approach is to formulate the corresponding theorem flexibly so that isominimal realizations above certain relativizing
tracking chains are considered.
There we will also define an elementary recursive function that assigns descriptive patterns $P(\alpha)$ to
ordinals $\alpha\in 1^\infty$. A descriptive pattern for an ordinal $\alpha$ in the above sense is a pattern, the isominimal
realization of which contains $\alpha$.
Descriptive patterns will be given in a way that makes a canonical choice for normal
forms, since in contrast to the situation in ${\cal R}onepl$, cf.\ \cite{W07c,CWa}, there is no unique notion of normal form in ${\cal R}two$.
The chosen normal forms will be of least possible cardinality.
The mutual order isomorphisms between hull and pattern notations that will be given in \cite{W} enable classification of a new independence
result for ${\operatorname{KP}\!\ell_0}$: We will demonstrate that the result by Carlson in \cite{C16}, according to which the collection of respecting forests of
order $2$ is well-quasi ordered with respect to coverings,
cannot be proven in ${\operatorname{KP}\!\ell_0}$ or, equivalently, in the restriction $\Pi^1_1\mathrm{-CA}_0$ of second order number theory to
$\Pi^1_1$-comprehension and set induction. On the other hand,
we know that transfinite induction up to the ordinal $1^\infty$ of ${\operatorname{KP}\!\ell_0}$ suffices to show that every pattern is covered \cite{CWc}.
This article therefore delivers the first part of an in depth treatment of the insights and results presented in a lecture during
the ``Interactions'' week
of the workshop {\it Sets and Computations}, held at the Institute for Mathematical Sciences of the National University of Singapore in April 2015.
\section{Preliminaries}
The reader is assumed to be familiar with basics of ordinal arithmetic (see e.g.\ \cite{P09})
and the ordinal arithmetical tools developed in \cite{W07a} and Section 5 of \cite{CWa}. See the index at the
end of \cite{W07a} for quick access to its terminology. Section 2 of \cite{W07c} (2.1--2.3) provides a summary of results
from \cite{W07a}. As mentioned before, we will build upon \cite{CWc}, the central concepts and results of which will be reviewed
here and in the next section. For detailed reference see also the index of \cite{CWc}.
\subseteqsection{Basics}
Here we recall terminology already used in \cite{CWc} (Section 2) for the reader's convenience.
Let ${\mathbb P}$ denote the class of additive principal numbers, i.e.\ nonzero ordinals that are closed under ordinal addition,
that is the image of ordinal exponentiation to base $\omega$.
Let ${\mathbb L}$ denote the class of limits of additive principal numbers, i.e.\ the limit points of ${\mathbb P}$,
and let ${\mathbb M}$ denote the class of multiplicative principal numbers, i.e.\ nonzero ordinals closed under ordinal multiplication.
By ${\mathbb E}$ we denote the class of epsilon numbers, i.e.\ the fixed-points of $\omega$-exponentiation.
We write $\alpha=_{\mathrm{\scriptscriptstyle{ANF}}}\alpha_1+\ldots+\alpha_n$ if $\alpha_1,\ldots,\alpha_n\in{\mathbb P}$ such that $\alpha_1\ge\ldots\ge\alpha_n$, which is called the
representation of $\alpha$ in additive normal form,
and $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\beta+\gamma$ if the expansion of $\beta$ into its additive normal form (ANF) in the sum $\beta+\gamma$ syntactically results in the
additive normal form of $\alpha$.
The Cantor normal form representation of an ordinal $\alpha$ is given by $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}$ where $\alpha_1\ge\ldots\ge\alpha_n$
with $\alpha>\alpha_1$ unless $\alpha\in{\mathbb E}$.
For $\alpha=_{\mathrm{\scriptscriptstyle{ANF}}}\alpha_1+\ldots+\alpha_n$, we define ${\operatorname{mc}}(\alpha):=\alpha_1$\index{${\operatorname{mc}}$} and ${\mathrm{end}}(\alpha):=\alpha_n$.
We set ${\mathrm{end}}(0):=0$.
Given ordinals $\alpha, \beta$ with $\alpha\le\beta$ we write $-\alpha+\beta$\index{$-\alpha+\beta$} for the unique $\gamma$ such that $\alpha+\gamma=\beta$.
As usual let $\alpha\minusp\beta$\index{$-p$@$\alpha\minusp\beta$} be $0$ if $\beta\ge\alpha$, $\gamma$ if $\beta<\alpha$ and there exists the minimal $\gamma$ s.t.\ $\alpha=\gamma+\beta$, and $\alpha$ otherwise.
For $\alpha\in{\mathrm{Ord}}$ we denote the least multiplicative principal number greater than $\alpha$ by $\alpha^{\mathbb M}$.
Notice that if $\alpha\in{\mathbb P}$, $\alpha>1$, say $\alpha=\omega^\alphapr$, we have $\alpha^{\mathbb M}=\alpha^\omega=\omega^{\alphapr\cdot\omega}$.
For $\alpha\in{\mathbb P}$ we use the following notations for \emph{multiplicative normal form}:\index{multiplicative normal form}
\betagin{enumerate}
\item $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\eta\cdot\xi$\index{$=_{\mathrm{\scriptscriptstyle{NF}}}$} if and only if $\xi=\omega^{\xi_0}\in{\mathbb M}$ (i.e.\ $\xi_0\in\sigmangleton{0}\cup{\mathbb P}$) and
either $\eta=1$ or $\eta=\omega^{\eta_1+\ldots+\eta_n}$ such that $\eta_1+\ldots+\eta_n+\xi_0$ is in
additive normal form. When ambiguity is unlikely, we sometimes allow $\xi$ to be of a form $\omega^{\xi_1+\ldots+\xi_m}$ such that
$\eta_1+\ldots+\eta_n+\xi_1+\ldots+\xi_m$ is in additive normal form.
\item $\alpha=_{\mathrm{\scriptscriptstyle{MNF}}}\alphae\cdot\ldots\cdot\alphak$\index{$=_{\mathrm{\scriptscriptstyle{MNF}}}$} if and only if $\alphae,\ldots,\alphak$ is the unique decreasing sequence of multiplicative principal numbers, the product of which is equal to $\alpha$.
\end{enumerate}
For $\alpha\in{\mathbb P}$, $\alpha=_{\mathrm{\scriptscriptstyle{MNF}}}\alphae\cdot\ldots\cdot\alphak$, we write ${\operatorname{mf}}(\alpha)$ for $\alphae$ and ${\operatorname{lf}}(\alpha)$\index{${\operatorname{lf}}$} for $\alphak$.
Note that if $\alpha\in{\mathbb P}-{\mathbb M}$ then ${\operatorname{lf}}(\alpha)\in{\mathbb M}^{>1}$ and $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\alphabar\cdot{\operatorname{lf}}(\alpha)$
where the definition of $\alphabar$ given in \cite{W07a} for limits of additive principal numbers is
extended to ordinals $\alpha$ of a form $\alpha=\omega^{\alphapr+1}$ by $\alphabar:=\omega^\alphapr$, see Section 5 of \cite{CWa}.
Given $\alpha,\beta\in{\mathbb P}$ with $\alpha\le\beta$ we write $(1/\alpha)\cdot\beta$\index{$/$@$(1/\alpha)\cdot\beta$} for the uniquely determined ordinal $\gamma\le\beta$
such that $\alpha\cdot\gamma=\beta$. Note that with the representations $\alpha=\omega^\alphapr$ and $\beta=\omega^\betapr$ we have
\[(1/\alpha)\cdot\beta=\omega^{-\alphapr+\betapr}.\]
For any $\alpha$ of a form $\omega^\alphapr$ we write $\log(\alpha)$\index{$\log$} for $\alphapr$, and we set $\log(0):=0$. For an arbitrary ordinal $\beta$ we
write ${\mathrm{logend}}(\beta)$ for $\log({\mathrm{end}}(\beta))$.
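To illustrate the notation of this subsection with a small worked example of our own: for $\alpha:=\omega^{\omega+2}$ we have $\alpha\in{\mathbb P}$, so its additive normal form is trivial and ${\operatorname{mc}}(\alpha)={\mathrm{end}}(\alpha)=\alpha$, while $\log(\alpha)={\mathrm{logend}}(\alpha)=\omega+2$. Its multiplicative normal form is $\alpha=_{\mathrm{\scriptscriptstyle{MNF}}}\omega^\omega\cdot\omega\cdot\omega$, hence ${\operatorname{mf}}(\alpha)=\omega^\omega$ and ${\operatorname{lf}}(\alpha)=\omega$, and $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\omega^{\omega+1}\cdot\omega$ with $\alphabar=\omega^{\omega+1}$, in accordance with $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\alphabar\cdot{\operatorname{lf}}(\alpha)$. Finally, $(1/\omega^\omega)\cdot\alpha=\omega^{-\omega+(\omega+2)}=\omega^2$.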
\subseteqsection{Relativized notation systems ${\operatorname{T}}t$}
Settings of relativization are given by ordinals from ${\mathbb E}one:=\sigmangleton{1}\cup{\mathbb E}$\index{$\ezo$@${\mathbb E}one$} and frequently indicated by Greek letters, preferably $\sigma$ or $\tau$.
Clearly, in this context $\tau=1$ denotes the trivial setting of relativization. For a setting $\tau$ of relativization we define
${\tau_i}nf:={\operatorname{T}}t\cap\Omegaega_1$\index{${\tau_i}nf$}
where ${\operatorname{T}}t$ is defined as in \cite{W07a} and reviewed in Section 2.2 of \cite{W07c}.
${\operatorname{T}}t$ is the closure of parameters below $\tau$ under addition and the stepwise, injective, and fixed-point free collapsing functions
$\varthetak$ the domain of which is ${\operatorname{T}}t\cap\Omega_{k+2}$, where $\varthetat:=\vartheta_0$ is relativized to $\tau$.
As in \cite{CWc}, most considerations will be confined to the segment $1^\infty$.
Translation between different settings of relativization, see Section 6 of \cite{W07a}, is effective on the term syntax and enjoys convenient invariance properties regarding the operators described below, as was verified in \cite{W07a}, \cite{CWa}, and \cite{CWc}.
We therefore omit the purely technical details here.
\subseteqsection{Refined localization}\lambdabel{reflocsubsec}
The notion of $\tau$-localization (Definition 2.11 of \cite{W07c}) and its refinement to $\tau$-fine-localization by iteration of
the operator $\bar{\cdot}$, see Definitions 5.1 and 5.5 of \cite{CWa}, continue to be essential as they locate ordinals in terms of
closure properties (fixed-point level and limit point thinning). These notions are effectively derived from the term syntax.
We refer to Subsection 2.3 of \cite{W07c} and Section 5 of \cite{CWa} for a complete picture of these concepts. In the present article,
the operator $\bar{\cdot}$ is mostly used to decompose ordinals that are not multiplicative principal, i.e. if $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\eta\cdot\xi$ where $\eta>1$ and $\xi\in{\mathbb M}$,
then $\alphabar=\eta$. The notion of $\tau$-localization enhanced with multiplicative decomposition turns out to be the appropriate tool
for the purposes of the present article, whereas general $\tau$-fine-localization will re-enter the picture through the notion of closedness
in a subsequent article \cite{W}.
\subseteqsection{Operators related to connectivity components}
The function $\log$ (${\mathrm{logend}}$) is described in ${\operatorname{T}}t$-notation in Lemma 2.13 of \cite{W07c}, and for $\beta=\varthetat(\eta)\in{\operatorname{T}}t$
where $\eta<\Omega_1$ we have
\betagin{equation}\lambdabel{logred}
\log((1/\tau)\cdot\beta)=\left\{\begin{array}{l@{\quad}l}
\eta+1&\mbox{ if }\eta=\varepsilon+k\mbox{ where }\varepsilon\in{\mathbb E}^{>\tau}, k<\omega\\[2mm]
\eta&\mbox{ otherwise.}
\end{array}\right.
\end{equation}
The foregoing distinction reflects the property of $\vartheta$-functions to omit fixed points.
The operators $\iota_{\tau,\al}$ indicating the fixed-point level, ${\ze^\tau}atal$ displaying the degree of limit point thinning, and their combination
$\lambdatal$ measuring closure properties of ordinals $\alpha\in{\operatorname{T}}t$ are as in Definitions 2.14 and 2.18 of \cite{W07c}, which also reviews the notion of
base transformation $\pi_{\si,\tau}$ and its smooth interaction with these operators.
The operator $\lambdatal$ already played a central role in the analysis of ${\cal R}onepl$-patterns as it displays the number of $\le_1$-connectivity components that are $\le_1$-connected to the component with index $\alpha$ in a setting of relativization $\tau\in{\mathbb E}one$,
cf.\ Lemma 2.31 part (a) of \cite{W07c}. It turns out that $\lambdatal$ plays a similar role in ${\cal R}two$, see below.
In order to avoid excessive repetition of formal definitions from \cite{CWc} we continue to describe operators and functions introduced for analysis of ${\operatorname{C}}two$ in \cite{CWc} in terms of their meaning in the context of ${\operatorname{C}}two$.
Those (relativized) $\le_1$-components, the enumeration index of which is an epsilon number, give rise to infinite $\le_1$-chains,
along which new $\le_2$-components arise.
Omitting from these $\le_1$-chains those elements that have a $\le_2$-predecessor in the chain and enumerating the remaining elements, we obtain
the so-called $\nu$-functions, see Definition 4.4 of \cite{CWc}, and the $\mu$-operator provides the length of such enumerations up to the \emph{final newly arising $\le_2$-component}, cf.\ the remark before Definition 4.4 of \cite{CWc}. This {\it terminal point} on a main line, at which the largest
newly arising $\le_2$-component originates, is crucial for understanding the structure ${\operatorname{C}}two$. Note
that in general the terminal point has an infinite increasing continuation in the $\le_1$-chain under consideration, leading to $\le_2$-components which have isomorphic copies below, i.e.\ which are
{\it not} new.
Recall Convention 2.9 of \cite{W07c}.
\betagin{defi}[3.4 of \cite{CWc}]
Let $\tau\in{\mathbb E}one$ and $\alpha\in(\tau,{\tau_i}nf)\cap{\mathbb E}$, say
$\alpha=\varthetat(\Delta+\eta)$ where $\eta<\Omega_1$ and $\Delta=\Omegaega_1\cdot(\lambda+k)$ such that $\lambda\in\sigmangleton{0}\cup\mathrm{Lim}$ and $k<\omega$.
We define
\[\mu^\tall:=\omega^{\iota_{\tau,\al}(\lambda)+\chi^\al(\iota_{\tau,\al}(\lambda))+k}.\]\index{$\mu^\tall$}
\end{defi}
The $\chi$-indicator occurring above is given in Definition 3.1 of \cite{CWc} and indicates whether the maximum $\le_2$-component
starting from an ordinal on the infinite $\le_1$-chain under consideration itself $\le_1$-reconnects to that chain which we called a {\it main line}.
The question remains which $\le_1$-component starting from such a point on a main line is the largest that is also $\le_2$-connected to it.
This is answered by the $\varrho$-operator:
\betagin{defi}[3.9 of \cite{CWc}]
Let $\alpha\in{\mathbb E}$, $\beta<\alpha^\infty$, and $\lambda\in\sigmangleton{0}\cup\mathrm{Lim}$, $k<\omega$ be such that ${\mathrm{logend}}(\beta)=\lambda+k$.
We define
\[{\varrho^\al_\be}:=\alpha\cdot(\lambda+k\minusp\chi^\al(\lambda)).\]
\end{defi}
Now, the terminal point on a main line, given as, say, $\nu^{{\vec{\tau}},\alpha}_{\mu^\tall}$ with
a setting of relativization ${\vec{\tau}}=(\tau_1,\ldots,\tau_n)$ that will be discussed later and $\tau=\tau_n$, $\alpha\in{\mathbb E}^{>\tau}$,
connects to $\lambdatal$-many $\le_1$-components. The following lemma is a direct consequence of the respective definitions.
\betagin{lem}[3.12 of \cite{CWc}]\lambdabel{lamurholem}
Let $\tau\in{\mathbb E}one$ and $\alpha=\varthetat(\Delta+\eta)\in(\tau,{\tau_i}nf)\cap{\mathbb E}$. Then we have
\betagin{enumerate}
\item $\iota_{\tau,\al}(\Delta)=\varrho^\al_{\mutal}$ and hence $\lambdatal=\varrho^\al_{\mutal}+{\ze^\tau}atal$.
\item\lambdabel{mainlinecondpart} ${\varrho^\al_\be}\le\lambdatal$ for every $\beta\le\mu^\tall$. For $\beta<\mu^\tall$ such that
\footnote{This condition is missing in \cite{CWc}. However, that inequality was only applied under this condition, cf.\ Def.\ 5.1 and L.\ 5.7 of \cite{CWc}.}
$\chi^\al(\beta)=0$ we even have ${\varrho^\al_\be}+\alpha\le\lambdatal$.
\item If $\mu^\tall<\alpha$ we have $\mu^\tall<\alpha\le\lambdatal<\alpha^2$, while otherwise \[\max\left((\mu^\tall+1)\cap{\mathbb E}\right)=\max\left((\lambdatal+1)\cap{\mathbb E}\right).\]
\item If $\lambdatal\in{\mathbb E}^{>\alpha}$ we have $\mu^\tall=\lambdatal\cdot\omega$ in case of $\chi^\al(\lambdatal)=1$ and $\mu^\tall=\lambdatal$ otherwise.
\end{enumerate}
\end{lem}
Notice that we have $\mu^\tall=\iota_{\tau,\al}(\Delta)={\operatorname{mc}}(\lambdatal)$ whenever $\mu^\tall\in{\mathbb E}^{>\alpha}$.
\section{Tracking sequences and their evaluation}
\subseteqsection{Maximal and minimal \boldmath$\mu$\unboldmath-coverings}
The following sets of sequences are crucial for the description of settings of relativization, which in turn is the key to
understanding the structure of connectivity components in ${\operatorname{C}}two$.
\betagin{defi}[4.2 of \cite{CWc}]
Let $\tau\in{\mathbb E}one$. A nonempty sequence $(\alphae,\ldots,\alphan)$ of ordinals in the interval $[\tau,{\tau_i}nf)$ is called a $\tau$-tracking sequence\index{$\tau$-tracking sequence} if
\betagin{enumerate}
\item $(\alphae,\ldots,\alphanmin)$ is either empty or a strictly increasing sequence of epsilon numbers greater than $\tau$.
\item $\alphan\in{\mathbb P}$, $\alphan>1$ if $n>1$.
\item $\alphaie\le\mu^\talli$ for every $i\in\sigmangleton{1,\ldots,n-1}$.
\end{enumerate}
By ${\operatorname{T}}St$\index{$\tst$@${\operatorname{T}}St$} we denote the set of all $\tau$-tracking sequences. Instead of ${\operatorname{T}}Se$ we also write ${\operatorname{T}}S$.\index{$\tst$@${\operatorname{T}}St$!${\operatorname{T}}S$}
\end{defi}
According to Lemma 3.5 of \cite{CWc} the length of a tracking sequence is bounded in terms of the largest index of $\vartheta$-functions in the term representation of the first element of the sequence.
\betagin{defi}\lambdabel{mucovering}
Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, and $\beta\in{\mathbb P}\cap(\alpha,\alphainf)$.
A sequence $(\alpha_0,\dots,\alpha_{n+1})$ where $\alpha_0=\alpha$, $\alpha_{n+1}=\beta$,
$(\alpha_1,\ldots,\alpha_{n+1})\in{\operatorname{T}}Sal$, and $\alpha<\alpha_1\le\mu^\tall$
is called a {\it $\mu$-covering from $\alpha$ to $\beta$}.
\end{defi}
\betagin{lem}\lambdabel{mucovloc}
Any $\mu$-covering from $\alpha$ to $\beta$ is a subsequence of the $\alpha$-localization of $\beta$.
\end{lem}
{\bf Proof.} Let $(\alpha_0,\ldots,\alphane)$ be a $\mu$-covering from $\alpha$ to $\beta$.
Stepping down from $\alphane$ to $\alpha_0$, Lemmas 3.5 of \cite{CWc} and 4.9, 6.5 of \cite{W07a} apply to
show that the $\alpha$-localization of $\beta$ is the successive concatenation of the $\alphai$-localization of $\alphaie$ for $i=1,\ldots,n$, modulo translation between the respective
settings of relativization.
\mbox{ }
$\Box$
\betagin{defi}\lambdabel{maxminmucov}
Let $\tau\in{\mathbb E}one$.
\betagin{enumerate}
\item For $\alpha\in{\mathbb P}\cap(\tau,{\tau_i}nf)$ we define $\operatorname{max-cov}tau(\alpha)$ to be the longest subsequence $(\alphae,\ldots,\alphane)$ of the $\tau$-localization of $\alpha$
which satisfies $\tau<\alphae$, $\alphane=\alpha$, and which is {\it $\mu$-covered}, i.e.\ which satisfies $\alphaie\le\mu^\talli$ for $i=1,\ldots,n$.
\item For $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ and $\beta\in{\mathbb P}\cap(\alpha,\alphainf)$ we denote the shortest subsequence $(\beta_0,\beta_1,\ldots,\beta_{n+1})$
of the $\alpha$-localization of $\beta$ which is a $\mu$-covering from $\alpha$ to $\beta$ by $\operatorname{min-cov}^\al(\beta)$, if such sequence exists.
\end{enumerate}
\end{defi}
We recall the notion of the tracking sequence of an ordinal, for greater clarity only for
multiplicative principals at this stage.
\betagin{defi}[cf.\ 3.13 of \cite{CWc}]\lambdabel{trsofmzdefi}
Let $\tau\in{\mathbb E}one$ and $\alpha\in{\mathbb M}\cap(\tau,{\tau_i}nf)$ with $\tau$-localization $\tau=\alpha_0,\ldots,\alpha_n=\alpha$.
The tracking sequence of $\alpha$ above $\tau$\index{tracking sequence}, ${\mathrm{ts}}t(\alpha)$\index{${\mathrm{ts}}tmz$}, is defined as follows.
If there is a largest index $i\in\{1,\ldots,n-1\}$ such that $\alpha\le\mu^\talli$, then
\[{\mathrm{ts}}t(\alpha):={\mathrm{ts}}t(\alphai)^\frown(\alpha),\]
otherwise ${\mathrm{ts}}t(\alpha):=(\alpha)$.
\end{defi}
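\noindent For instance, if the $\tau$-localization of $\alpha$ is simply $\tau=\alpha_0,\alpha_1=\alpha$, then the index set $\{1,\ldots,n-1\}$ is empty, so the second clause applies and ${\mathrm{ts}}t(\alpha)=(\alpha)$.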
\betagin{defi}\lambdabel{mts}
Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, $\beta\in{\mathbb P}\cap(\alpha,\alphainf)$, and let $\alpha=\alpha_0,\ldots,\alpha_{n+1}=\beta$ be the $\alpha$-localization
of $\beta$.
If there is a least index $i\in\{0,\ldots,n\}$ such that $\alphai<\beta\le\mu^\talli$, then
\[\operatorname{mts}al(\beta):=\operatorname{mts}al(\alphai)^\frown(\beta),\]
otherwise $\operatorname{mts}al(\beta):=(\alpha)$.
\end{defi}
Note that $\operatorname{mts}al(\beta)$ reaches $\beta$ if and only if it is a $\mu$-covering from $\alpha$ to $\beta$.
\betagin{lem}\lambdabel{covcharlem} Fix $\tau\in{\mathbb E}one$.
\betagin{enumerate}
\item For $\alpha\in{\mathbb P}\cap(\tau,{\tau_i}nf)$ let $\operatorname{max-cov}tau(\alpha)=(\alphae,\ldots,\alphane)=\alphavec$. If $\alphae<\alpha$ then $\alphavec$ is a $\mu$-covering from $\alphae$ to $\alpha$
and $\operatorname{mts}ale(\alpha)\subseteqseteq\alphavec$.
\item If $\alpha\in{\mathbb M}\cap(\tau,{\tau_i}nf)$ then $\operatorname{max-cov}tau(\alpha)={\mathrm{ts}}t(\alpha)$.
\item Let $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ and $\beta\in{\mathbb P}\cap(\alpha,\alphainf)$. Then $\operatorname{min-cov}^\al(\beta)$ exists if and only if $\operatorname{mts}al(\beta)$ is a $\mu$-covering from $\alpha$ to $\beta$,
in which case these sequences are equal, characterizing the lexicographically maximal $\mu$-covering from $\alpha$ to $\beta$.
\end{enumerate}
\end{lem}
{\bf Proof.} These are immediate consequences of the definitions.
\mbox{ }
$\Box$
\noindent Recall Definition 3.16 from \cite{CWc}, which for $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ defines $\alphahat$ to be the minimal $\gamma\in{\mathbb M}^{>\alpha}$
such that ${\mathrm{ts}}al(\gamma)=(\gamma)$ and $\mu^\tall<\gamma$.
\betagin{lem}\lambdabel{mtshatlem}
Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, and $\beta\in{\mathbb M}\cap(\alpha,\alphainf)$. Then $\operatorname{mts}al(\beta)$ is a $\mu$-covering from $\alpha$ to $\beta$
if and only if $\beta<\alphahat$. This holds if and only if for ${\mathrm{ts}}al(\beta)=(\beta_1,\ldots,\beta_m)$ we have $\beta_1\le\mu^\tall$.
\end{lem}
{\bf Proof.}
Suppose first that $\operatorname{mts}al(\beta)$ is a $\mu$-covering from $\alpha$ to $\beta$. Then $\alpha$ is an element of the $\tau$-localization of $\beta$, and modulo
term translation we obtain the $\tau$-localization of $\beta$ by concatenating the $\tau$-localization of $\alpha$ with the $\alpha$-localization of $\beta$.
By Lemma \ref{covcharlem} we therefore have $\operatorname{mts}al(\beta)\subseteqseteq(\alpha)^\frown{\mathrm{ts}}al(\beta)$ where $\beta_1\le\mu_\al$.
Let $\gamma\in{\mathbb M}\cap(\alpha,\beta]$ be given. Then by Lemma 3.15 of \cite{CWc} we have \[(\gamma_1,\ldots,\gamma_k):={\mathrm{ts}}al(\gamma)\le_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}al(\beta),\]
so $\gamma_1\le\beta_1\le\mu_\al$, and hence $\beta<\alphahat$.
Toward proving the converse, suppose that $\beta<\alphahat$.
We have ${\mathrm{ts}}al(\beta_1)=(\beta_1)$, so $\beta_1\le\mu_\al$ since $\beta_1<\alphahat$. This implies that $\operatorname{mts}al(\beta)$ reaches $\beta$ as a subsequence of $(\alpha)^\frown{\mathrm{ts}}al(\beta)$.
\mbox{ }
$\Box$
\betagin{defi}[4.3 of \cite{CWc}]
Let $\tau\in{\mathbb E}one$. A sequence $\alphavec$ of ordinals below ${\tau_i}nf$ is a $\tau$-reference sequence\index{$\tau$-reference sequence} if
\betagin{enumerate}
\item $\alphavec=()$ or
\item $\alphavec=(\alphae,\ldots,\alphan)\in{\operatorname{T}}St$ such that $\alphan\in{\mathbb E}^{>\alphanmin}$ (where $\alphanod:=\tau$).
\end{enumerate}
We denote the set of $\tau$-reference sequences by ${\cal R}St$.\index{$\rst$@${\cal R}St$} In case of $\tau=1$ we simply write ${\cal R}S$\index{$\rst$@${\cal R}St$!${\cal R}S$} and call its elements reference sequences.\index{reference sequence}
\end{defi}
\betagin{defi}[c.f.\ 4.9 of \cite{CWc}]
For $\gamma\in{\mathbb M}\cap1^\infty$ and $\varepsilon\in{\mathbb E}\cap1^\infty$ let $\operatorname{sk}_\gamma(\varepsilon)$\index{$\operatorname{sk}$} be the maximal sequence $\delta_1,\ldots,\delta_l$ such that (setting $\delta_0:=1$)
\betagin{enumerate}
\item $\delta_1=\varepsilon$ and
\item if $i\in\{1,\ldots,l-1\}\:\&\:\delta_i\in{\mathbb E}^{>\delta_{i-1}}\:\&\:\gamma\le\mu_{\delta_i}$, then
$\delta_{i+1}=\overline{\mu_{\delta_i}\cdot\gamma}$.
\end{enumerate}
\end{defi}
\noindent{\bf Remark(\cite{CWc}).} Lemma 3.5 of \cite{CWc} guarantees that the above definition terminates.
We have $(\delta_1,\ldots,\delta_{l-1})\in{\cal R}S$ and $(\delta_1,\ldots,\delta_l)\in{\operatorname{T}}S$. Notice that $\gamma\le\delta_i$ for $i=2,\ldots,l$.
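\noindent{\bf Example.} A case worth keeping in mind is $\gamma=\omega$, the instance used later in the proof of part 4 of Lemma \ref{kdpmainlem}. Assuming, as is used there, that $\mu_\delta\ge\omega$ for all $\delta\in{\mathbb E}$ and that $\overline{\mu_\delta\cdot\omega}=\mu_\delta$, the definition unfolds to
\[\operatorname{sk}_\omega(\varepsilon)=(\varepsilon,\mu_\varepsilon,\mu_{\mu_\varepsilon},\ldots),\]
a maximal strictly increasing chain of epsilon numbers of this form, followed by one final entry which is no longer an epsilon number above its predecessor.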
\betagin{defi}\lambdabel{hgamalbe}
Let $\alphavec^\frown\beta\in{\cal R}S$ and $\gamma\in{\mathbb M}$.
\betagin{enumerate}
\item If $\gamma\in(\beta,\betahat)$, let $\operatorname{mts}be(\gamma)={\vec{\eta}}^\frown(\varepsilon,\gamma)$ and define
\[\operatorname{h}_\ga(\alphavecbe):=\alphavec^\frown{\vec{\eta}}^\frown\operatorname{sk}ga(\varepsilon).\]
\item If $\gamma\in(1,\beta]$ and $\gamma\le\mu_\be$ then
\[\operatorname{h}_\ga(\alphavecbe):=\alphavec^\frown\operatorname{sk}ga(\beta).\]
\item If $\gamma\in(1,\beta]$ and $\gamma>\mu_\be$ then
\[\operatorname{h}_\ga(\alphavecbe):=\alphavec^\frown\beta.\]
\end{enumerate}
\end{defi}
\noindent{\bf Remark.} In 1.\ let ${\mathrm{ts}}be(\gamma)=:(\gamma_1,\ldots,\gamma_m)$. Then we have $\gamma_1\le\mu_\be$ according to Lemma 3.17 of \cite{CWc},
so that $\operatorname{mts}be(\gamma)$ reaches $\gamma$, $\beta\le\varepsilon<\gamma\le\mu_\varepsilon$ and $\beta<\mu_\be$.
In 2.\ we have $\operatorname{sk}ga(\beta)=(\beta, \overline{\mu_\be\cdot\gamma})$ with $\gamma\le \overline{\mu_\be\cdot\gamma}\le\beta$ in case of $\mu_\be\le\beta$.
In 3.\ we have $\mu_\be<\gamma\le\beta$ and hence $\operatorname{sk}ga(\beta)=(\beta)$.
\betagin{lem}\lambdabel{hgalem}
Let $\alphavec^\frown\beta\in{\cal R}S$ and $\gamma\in{\mathbb M}$.
Then $\operatorname{h}_\ga(\alphavecbe)$ is of a form $\alphavec^\frown{\vec{\eta}}^\frown\operatorname{sk}ga(\varepsilon)$ where
${\vec{\eta}}=(\eta_1,\ldots,\eta_r)$, $r\ge 0$, $\eta_1=\beta$, $\eta_{r+1}:=\varepsilon$,
and $\operatorname{sk}ga(\varepsilon)=(\delta_1,\ldots,\delta_{l+1})$, $l\ge 0$, with $\delta_1=\varepsilon$.
We have \[{\operatorname{lf}}(\delta_{l+1})\ge\gamma,\]
and for ${\vec{\tau}}si\in{\operatorname{T}}S$, where ${\vec{\tau}}=(\tau_1,\ldots,\tau_s)$ and $\tau_{s+1}:=\sigma$,
such that $\alphavecbe\subseteqseteq{\vec{\tau}}si$ and $\operatorname{h}_\ga(\alphavecbe)<_\mathrm{\scriptscriptstyle{lex}}{\vec{\tau}}si$
we either have
\betagin{enumerate}
\item ${\vec{\tau}}=\alphavec^\frown{\vec{\eta}}^\frown\deltavec_{\restriction_i}$ for some $i\in\{1,\ldots,l+1\}$ and
$\sigma=_{\mathbb N}F\delta_{i+1}\cdot{\sigma^\prime}$ for some ${\sigma^\prime}<\gamma$, setting $\delta_{l+2}:=1$, or
\item ${\vec{\tau}}_{\restriction_{s_0}}=\alphavec^\frown{\vec{\eta}}_{\restriction_{r_0}}$ for some
$r_0\in[1,r]$ and $s_0\le s$ such that $\eta_{r_0+1}<\tau_{s_0+1}$, in which case we have
$\mu_{\tau_j}<\gamma$ for all $j\in\{s_0+1,\ldots,s\}$, and $\mu_\sigma<\gamma$ if $\sigma\in{\mathbb E}^{>\tau_s}$.
\end{enumerate}
\end{lem}
{\bf Proof.} Suppose first that ${\vec{\tau}}$ is a maximal initial segment $\alphavec^\frown{\vec{\eta}}^\frown\deltavec_{\restriction_i}$ for some $i\in\{1,\ldots,l+1\}$.
In the case $i=l+1$ we have $\delta_{l+1}\in{\mathbb E}^{>\delta_l}$ and $\mu_{\delta_{l+1}}<\gamma\le\delta_{l+1}$, so $\sigma\le\mu_{\delta_{l+1}}<\gamma$, and we also observe that
${\vec{\tau}}si$ could not be extended further.
Now suppose that $i\le l$. Then $\delta_{i+1}<\sigma\le\mu_{\delta_i}$, and since $\gamma\le\delta_{i+1}=\overline{\mu_{\delta_i}\cdot\gamma}\le\mu_{\delta_i}$, we obtain
$\sigma=\delta_{i+1}\cdot{\sigma^\prime}$ for some ${\sigma^\prime}\in(1,\gamma)$.
Otherwise ${\vec{\tau}}$ must be of the form given in part 2 of the claim. This implies $\gamma\in{\mathbb M}\cap(\beta,\betahat)$ and $\operatorname{mts}be(\gamma)={\vec{\eta}}^\frown(\varepsilon,\gamma)$.
Assume, toward a contradiction, that there exists a least $j\in\{s_0+1,\ldots,s+1\}$ such that $\tau_j\in{\mathbb E}^{>\tau_{j-1}}$ and $\gamma\le\mu_{\tau_j}$.
Then ${{\vec{\eta}}_{\restriction_{r_0}}}^\frown(\tau_{s_0+1},\ldots,\tau_j,\gamma)$ is a $\mu$-covering from $\beta$ to $\gamma$, hence by part 3 of Lemma \ref{covcharlem}
it must be lexicographically less than or equal to $\operatorname{mts}be(\gamma)$ and therefore ${{\vec{\eta}}_{\restriction_{r_0}}}^\frown(\tau_{s_0+1},\ldots,\tau_j)\le_\mathrm{\scriptscriptstyle{lex}}{\vec{\eta}}^\frown\varepsilon$: contradiction.
\mbox{ }
$\Box$
\betagin{cor}\lambdabel{hgacor}
For $\alphavec^\frown\beta\in{\cal R}S$ and $\gamma,\delta\in{\mathbb M}$ such that $\delta\in(1,\gamma)$ we have
\[\operatorname{h}_\ga(\alphavecbe)\le_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_\de(\alphavecbe).\]
\end{cor}
\subseteqsection{Evaluation}
\betagin{defi}\lambdabel{odef}
Let $\alphavecbe\in{\operatorname{T}}S$, where $\alphavec=(\alphae,\ldots,\alphan)$, $n\ge 0$, $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$,
and set $\alpha_0:=1$, $\alphane:=\beta$, $h:={\operatorname{ht}_1}(\alphae)+1$, and
$\gammavec_i:={\mathrm{ts}}^{\alphaimin}(\alphai)$, $i=1,\ldots,n$,
\[\gammavec_{n+1}:=\left\{\betagin{array}{ll}
(\beta)&\mbox{if }\beta\le\alphan\\[2mm]
{\mathrm{ts}}aln(\beta_1)^\frown\beta_2&\mbox{if } k>1,\beta_1\in{\mathbb E}^{>\alphan}\:\&\:\beta_2\le\mu_{\beta_1}\\[2mm]
{\mathrm{ts}}aln(\beta_1)&\mbox{otherwise,}
\end{array}\right.\]
and write $\gammavec_i=(\gamma_{i,1},\ldots,\gamma_{i,m_i})$, $i=1,\ldots,n+1$.
Then define
\[\mathrm{lSeq}(\alphavecbe):=(m_1,\ldots,m_{n+1})\in[h]^{\le h}.\]
Let $\betapr:=1$ if $k=1$ and $\betapr:=\beta_2\cdot\ldots\cdot\beta_k$ otherwise.
We define $\mathrm{o}(\alphavecbe)$ recursively in $\mathrm{lSeq}(\alphavecbe)$, as well as auxiliary parameters
$n_0(\alphavecbe)$ and $\gamma(\alphavecbe)$, which are set to $0$ where not defined explicitly.
\betagin{enumerate}
\item $\mathrm{o}((1)):=1$.
\item If $\beta_1\le\alphan$, then $\mathrm{o}(\alphavecbe):=_{\mathbb N}F\mathrm{o}(\alphavec)\cdot\beta$.
\item If $\beta_1\in{\mathbb E}^{>\alphan}$, $k>1$, and $\beta_2\le\mu_{\beta_1}$, then
set $n_0(\alphavecbe):=n+1$, $\gamma(\alphavecbe):=\beta_1$, and define
\[\mathrm{o}(\alphavecbe):=_{\mathbb N}F\mathrm{o}(\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1))\cdot\betapr.\]
\item Otherwise. Then setting
\[n_0:=n_0(\alphavecbe):=\max\left(\{i\in\{1,\ldots,n+1\}\mid m_i>1\}\cup\{0\}\right),\]
define
\[\mathrm{o}(\alphavecbe):=_{\mathbb N}F\left\{\betagin{array}{ll}
\beta&\mbox{if } n_0=0\\[2mm]
\mathrm{o}(\operatorname{h}_{\beta_1}({\alphavec_{\restriction_{n_0-1}}}^\frown\gamma))\cdot\beta&\mbox{if } n_0>0,
\end{array}\right.\]
where $\gamma:=\gamma(\alphavecbe):=\gamma_{n_0,m_{n_0}-1}$.
\end{enumerate}
\end{defi}
\noindent{\bf Remark.} As indicated in writing $=_{\mathbb N}F$ in the above definition, we obtain terms in multiplicative normal form
denoting the values of $\mathrm{o}$. The {\bf fixed points of $\mathrm{o}$}, i.e.\ those $\alphavecbe$ that satisfy $\mathrm{o}(\alphavecbe)=\beta$ are therefore
characterized by 1.\ and 4.\ for $n_0=0$.
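\noindent{\bf Example.} For a simple instance of clause 2, consider the sequence $(\varepsilon_0,\omega)$, where $\varepsilon_0$ denotes the least epsilon number, under the assumptions ${\mathrm{ts}}(\varepsilon_0)=(\varepsilon_0)$ and $\omega\le\mu_{\varepsilon_0}$ (so that $(\varepsilon_0,\omega)\in{\operatorname{T}}S$). Here $\mathrm{lSeq}((\varepsilon_0,\omega))=(1,1)$, clause 4 with $n_0=0$ gives $\mathrm{o}((\varepsilon_0))=\varepsilon_0$, and clause 2 yields
\[\mathrm{o}((\varepsilon_0,\omega))=\mathrm{o}((\varepsilon_0))\cdot\omega=\varepsilon_0\cdot\omega.\]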
Recall Definition 3.13 of \cite{CWc}, extending Definition \ref{trsofmzdefi} to additive principal numbers that are not multiplicative
principal ones.
\betagin{defi}[cf.\ 3.13 of \cite{CWc}]\lambdabel{trsofhzdefi}
Let $\tau\in{\mathbb E}one$ and $\alpha\in[\tau,{\tau_i}nf)\cap{\mathbb P}$.
The tracking sequence of $\alpha$ above $\tau$\index{tracking sequence}, ${\mathrm{ts}}t(\alpha)$\index{${\mathrm{ts}}thz$}, is defined as in Definition \ref{trsofmzdefi}
if $\alpha\in{\mathbb M}^{>\tau}$, and otherwise recursively in the multiplicative decomposition of $\alpha$ as follows.
\betagin{enumerate}
\item If $\alpha\le\tau^\omega$ then ${\mathrm{ts}}t(\alpha):=(\alpha)$.
\item Otherwise. Then $\alphabar\in[\tau,\alpha)$ and $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\alphabar\cdot\beta$ for some $\beta\in{\mathbb M}^{>1}$.
Let ${\mathrm{ts}}t(\alphabar)=(\alphae,\ldots,\alphan)$ and set $\alpha_0:=\tau$.\footnote{As verified in part 2 of the lemma below we have
$\beta\le\alpha_n$.}
\betagin{enumerate}
\item[2.1.] If $\alphan\in{\mathbb E}^{>\alpha_{n-1}}$ and $\beta\le\mu^\talln$ then ${\mathrm{ts}}t(\alpha):=(\alphae,\ldots,\alphan,\beta)$.
\item[2.2.] Otherwise. For $i\in\{1,\ldots,n\}$ let $(\beta^i_1,\ldots,\beta^i_{m_i})$ be ${\mathrm{ts}}^{\alphai}(\beta)$
provided $\beta>\alphai$, and set $m_i:=1$, $\beta^i_1:=\alphai\cdot\beta$ if $\beta\le\alphai$.
We first define the critical index
\[i_0(\alpha)=i_0:=\max\left(\{1\}\cup\set{j\in\{2,\ldots,n\}}{\beta^j_1\le\mu^\tau_{\alpha_{j-1}}}\right).\]
Then ${\mathrm{ts}}t(\alpha):=(\alphae,\ldots,\alpha_{i_0-1},\beta^{i_0}_1,\ldots,\beta^{i_0}_{m_{i_0}})$.
\end{enumerate}
\end{enumerate}
Instead of ${\mathrm{ts}}^1(\alpha)$ we also simply write ${\mathrm{ts}}(\alpha)$.\index{${\mathrm{ts}}t$!${\mathrm{ts}}$}
\end{defi}
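\noindent{\bf Example.} Under the assumptions of the example following Definition \ref{odef}, i.e.\ ${\mathrm{ts}}(\varepsilon_0)=(\varepsilon_0)$ and $\omega\le\mu_{\varepsilon_0}$, case 2.1 applies to $\alpha=\varepsilon_0\cdot\omega$ (with $\alphabar=\varepsilon_0$ and $\beta=\omega$) and yields ${\mathrm{ts}}(\varepsilon_0\cdot\omega)=(\varepsilon_0,\omega)$, which inverts the computation $\mathrm{o}((\varepsilon_0,\omega))=\varepsilon_0\cdot\omega$ given there, in accordance with Theorems \ref{thma} and \ref{thmb} below.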
\betagin{lem}\lambdabel{trsestimlem}
If in the above definition, part 2.2, we have $\beta>\alphainod$, then for all $j\in(i_0,n]$ we have \[\beta^{i_0}_1\le\alphaj\le\mu_{\alpha_{j-1}},\]
in particular $\beta^{i_0}_1\le\mu_\alphainod$.
\end{lem}
{\bf Proof.} Assume toward a contradiction that there is a maximal $j\in(i_0,n]$ such that $\alphaj<\beta^{i_0}_1$.
Since $\beta^{i_0}_1\le\beta\le\alphan$ we have $j<n$ and obtain \[\alphainod<\alphaj<\beta^{i_0}_1\le\alpha_{j+1}\le\mu_\alphaj,\]
implying that $\alphaj\in{\mathrm{ts}}^\alphainod(\beta)$, contradicting the minimality of $\beta^{i_0}_1$ in ${\mathrm{ts}}^\alphainod(\beta)$. \mbox{ }
$\Box$
\betagin{lem}[3.14 of \cite{CWc}]
Let $\tau\in{\mathbb E}one$ and $\alpha\in[\tau,{\tau_i}nf)\cap{\mathbb P}$. Let further $(\alphae,\ldots,\alphan)$ be
${\mathrm{ts}}t(\alpha)$, the tracking sequence of $\alpha$ above $\tau$.
\betagin{enumerate}
\item If $\alpha\in{\mathbb M}$ then $\alphan=\alpha$ and ${\mathrm{ts}}t(\alphai)=(\alphae,\ldots,\alphai)$ for $i=1,\ldots,n$.
\item If $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\eta\cdot\xi\not\in{\mathbb M}$ then $\alphan\in{\mathbb P}\cap[\xi,\alpha]$ and $\alphan=_{\mathrm{\scriptscriptstyle{NF}}}\alphanbar\cdot\xi$.
\item $(\alphae,\ldots,\alpha_{n-1})$ is either empty or a strictly increasing sequence of epsilon numbers in the interval $(\tau,\alpha)$.
\item For $1\le i\le n-1$ we have $\alphaie\le\mu^\talli$, and if $\alphai<\alphaie$ then $(\alphae,\ldots,\alphaie)$ is a subsequence of the $\tau$-localization of $\alphaie$.
\end{enumerate}
\end{lem}
{\bf Proof.} The proof proceeds by straightforward induction along the definition of ${\mathrm{ts}}t(\alpha)$, i.e. along the length of the
$\tau$-localization of multiplicative principal numbers and
the number of factors in the multiplicative decomposition of additive principal numbers. In part 4 Lemma 6.5 of \cite{W07a} and the previous remark apply.\mbox{ }
$\Box$
\betagin{lem}[3.15 of \cite{CWc}]\lambdabel{citedinjtrslem}
Let $\tau\in{\mathbb E}one$ and $\alpha,\gamma\in[\tau,{\tau_i}nf)\cap{\mathbb P}$, $\alpha<\gamma$. Then we have
\[{\mathrm{ts}}t(\alpha)<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}t(\gamma).\]
\end{lem}
{\bf Proof.} The proof given in \cite{CWc} is in fact an induction along the inductive definition of ${\mathrm{ts}}t(\gamma)$ with a subsidiary induction along the
inductive definition of ${\mathrm{ts}}t(\alpha)$.
\mbox{ }
$\Box$
\betagin{theo}\lambdabel{thma}
For all $\alpha\in{\mathbb P}\cap1^\infty$ we have \[\mathrm{o}({\mathrm{ts}}(\alpha))=\alpha.\]
\end{theo}
{\bf Proof.} The theorem is proved by induction along the inductive definition of ${\mathrm{ts}}(\alpha)$.\\[2mm]
{\bf Case 1:} $\alpha\in{\mathbb M}$. Then $\mathrm{lSeq}({\mathrm{ts}}(\alpha))=(1,\ldots,1)$ and hence $\mathrm{o}({\mathrm{ts}}(\alpha))=\alpha$ immediately by definition.\\[2mm]
{\bf Case 2:} $\alpha=_{\mathbb N}F\alphabar\cdot\beta\in{\mathbb P}-{\mathbb M}$. Let ${\mathrm{ts}}(\alphabar)=:(\alphae,\ldots,\alphan)$ and $\alpha_0:=1$. By the i.h.\ $\mathrm{o}(\alphavec)=\alphabar$.
We have $\beta\le{\operatorname{lf}}(\alphan)\le\alphan$, $n\ge 1$.\\[2mm]
{\bf Subcase 2.1:} $\alphan\in{\mathbb E}^{>\alphanmin}\:\&\:\beta\le\mu_\aln$. Then ${\mathrm{ts}}(\alpha)=\alphavec^\frown\beta$, and since $\beta\in{\mathbb M}^{\le\alphan}$ according to the definition of $\mathrm{o}$
we obtain $\mathrm{o}(\alphavecbe)=\mathrm{o}(\alphavec)\cdot\beta=\alphabar\cdot\beta=\alpha$.\\[2mm]
{\bf Subcase 2.2:} Otherwise. Let $(\beta^i_1,\ldots,\beta^i_{m_i})$ for $i=1,\ldots,n$ as well as the index $i_0$ be defined as in case 2.2 of Definition \ref{trsofhzdefi},
so that ${\mathrm{ts}}(\alpha)=(\alphae,\ldots,\alpha_{i_0-1},\beta^{i_0}_1,\ldots,\beta^{i_0}_{m_{i_0}})$.\\[2mm]
{\bf 2.2.1:} $i_0=n$. Then we have ${\mathrm{ts}}(\alpha)=(\alphae,\ldots,\alphanmin,\alphan\cdot\beta)$, and using the i.h.\ we obtain $\mathrm{o}({\mathrm{ts}}(\alpha))=\mathrm{o}(\alphavec)\cdot\beta=\alpha$.\\[2mm]
{\bf 2.2.2:} $i_0<n$ and $\beta\le\alphainod$. Then we have $\alphainod\in{\mathbb E}^{>\alpha_{i_0-1}}$, $\alphainod\cdot\beta\le\mu_{\alpha_{i_0-1}}$, and
${\mathrm{ts}}(\alpha)=(\alphae,\ldots,\alpha_{i_0-1},\alphainod\cdot\beta)$. It follows that for all $j\in(i_0,n]$ we have $\beta\le\alphaj$ and $\alphaj\cdot\beta>\mu_{\alpha_{j-1}}$, hence $\beta\le\mu_\alphainod$
and thus \[\mathrm{o}({\mathrm{ts}}(\alpha))=\mathrm{o}(\operatorname{h}_\be(\alphavec_{\restriction_{i_0}}))\cdot\beta.\] The sequence $\operatorname{h}_\be(\alphavec_{\restriction_{i_0}})$ is of the form
${\alphavec_{\restriction_{i_0-1}}}^\frown\deltavec$ where $\deltavec:=\operatorname{sk}be(\alphainod)$. Since $\mu_\alphainod<\alpha_{i_0+1}\cdot\beta$ we have
$\overline{\mu_\alphainod\cdot\beta}=\alpha_{i_0+1}=\delta_2$. In the case $i_0+1=n$ we obtain $\deltavec=(\alphanmin,\alphan)$, hence $\operatorname{h}_\be(\alphavec_{\restriction_{i_0}})=\alphavec$,
otherwise we iterate the above argument to see that $\deltavec=(\alphainod,\ldots,\alphan)$. Hence $\mathrm{o}({\mathrm{ts}}(\alpha))=\mathrm{o}(\alphavec)\cdot\beta$ as desired.\\[2mm]
{\bf 2.2.3:} $i_0<n$ and $\alphainod<\beta$. Then we have $\betavec^{i_0}={\mathrm{ts}}^\alphainod(\beta)$ with $\beta^{i_0}_1\le\mu_{\alpha_{i_0-1}}$ if $i_0>1$.
By Lemma \ref{trsestimlem} $\alphainod$ is the immediate predecessor of $\beta^{i_0}_1$ in ${\mathrm{ts}}^{\alpha_{i_0-1}}(\beta)$.
By definition of $\mathrm{o}$ we have $\mathrm{o}({\mathrm{ts}}(\alpha))=\mathrm{o}(\operatorname{h}_\be(\alphavec_{\restriction_{i_0}}))\cdot\beta$ and therefore have to show that $\operatorname{h}_\be(\alphavec_{\restriction_{i_0}})=\alphavec$.
We define \[j_0:=\min\{j\in\{i_0,\ldots,n-1\}\mid\beta\le\mu_\alphaj\},\]
which exists, because $\beta\le\alphan\le\mu_\alphanmin$.\\[2mm]
{\bf Claim:} $\operatorname{mts}^\alphainod(\beta)=(\alphainod,\ldots,\alphajnod,\beta)$.\\
{\bf Proof.} For every $j\in\{i_0,\ldots,j_0-1\}$ the minimality of $j_0$ implies $\beta>\mu_\alj\ge\alphaje$, and thus by the maximality of $i_0$ also $\beta^{j+1}_1>\mu_\alj$.
Moreover, we have \[\beta^{j+1}_1\le\alpha_{j+2}:\] Assume otherwise and let $j$ be maximal in $\{i_0,\ldots,j_0-1\}$ such that $\beta^{j+1}_1>\alpha_{j+2}$. Since
$\beta^{j+1}_1\le\beta\le\alphan$ we must have $j\le n-3$. But then $\alpha_{j+1}<\alpha_{j+2}<\beta^{j+1}_1\le\alpha_{j+3}\le\mu_{\alpha_{j+2}}$ and hence $\alpha_{j+2}\in{\mathrm{ts}}^{\alpha_{j+1}}(\beta)$,
contradicting the minimality of $\beta^{j+1}_1$ in ${\mathrm{ts}}^{\alpha_{j+1}}(\beta)$.
Therefore \[\mu_\alj<\beta^{j+1}_1\le\alpha_{j+2}\le\mu_{\alphaje},\]
which concludes the proof of the claim.\mbox{ }
$\Box$
It remains to be shown that $\operatorname{sk}be(\alphajnod)=(\alphajnod,\ldots,\alphan)$, i.e.\ to successively check for $j=j_0,\ldots,n-1$ that $\beta\le\alphaje\le\mu_\alj$ and $\alphaje\cdot\beta>\mu_\alj$,
whence $\overline{\mu_\alj\cdot\beta}=\alphaje$. This concludes the verification of $\operatorname{h}_\be(\alphavec_{\restriction_{i_0}})=\alphavec$ and consequently the proof of the theorem.
\mbox{ }
$\Box$
\betagin{theo}\lambdabel{thmb}
For all $\alphavecbe\in{\operatorname{T}}S$ we have \[{\mathrm{ts}}(\mathrm{o}(\alphavecbe))=\alphavecbe.\]
\end{theo}
{\bf Proof.} The theorem is proved by induction on $\mathrm{lSeq}(\alphavecbe)$ along the ordering $(\mathrm{lSeq},<_\mathrm{\scriptscriptstyle{lex}})$.
Let $\beta=_{\mathbb N}F\beta_1\cdot\ldots\cdot\beta_k$ and set $n_0:=n_0(\alphavecbe)$, $\gamma:=\gamma(\alphavecbe)$ according to Definition \ref{odef}, which
provides us with an ${\mathbb N}F$-representation of $\mathrm{o}(\alphavecbe)$, where in the interesting cases the i.h.\ applies to the term ${\mathrm{ts}}\left(\overline{\mathrm{o}(\alphavecbe)}\right)$.
\\[2mm]
{\bf Case 1:} $n=0$ and $\beta=1$. Trivial.\\[2mm]
{\bf Case 2:} $1<\beta_1\le\alphan$. Then $\mathrm{o}(\alphavecbe)=_{\mathbb N}F\mathrm{o}(\alphavec)\cdot\beta$, and it is straightforward to verify the claim from the i.h.\ applied to $\alphavec$
by inspecting case 2.1 of Definition \ref{trsofhzdefi}.\\[2mm]
{\bf Case 3:} $k>1$ with $\beta_1\in{\mathbb E}^{>\alphan}$ and $\beta_2\le\mu_{\beta_1}$. Then by definition $\mathrm{o}(\alphavecbe)=_{\mathbb N}F\mathrm{o}(\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1))\cdot\betapr$
where $\betapr=(1/\beta_1)\cdot\beta$. According to part 2 of Definition \ref{hgamalbe} we have \[\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1)=\alphavec^\frown\deltavec,\] where
$\deltavec=(\delta_1,\ldots,\delta_{l+1}):=\operatorname{sk}_{\beta_2}(\beta_1)$. Assume first that $k=2$. Then the maximality of the length of $\deltavec$ excludes the possibility
$\delta_{l+1}\in{\mathbb E}^{>\delta_l}\:\&\:\beta_2\le\mu_{\delta_{l+1}}$. We have $\beta_2\le\beta_1$ and $\beta\le\mu_\aln$. For $j\in\{2,\ldots,l+1\}$ we have
$\beta_2\le\delta_j=\overline{\mu_{\delta_{j-1}}\cdot\beta_2}\le\mu_{\delta_{j-1}}$ and $\delta_j\cdot\beta_2=\mu_{\delta_{j-1}}\cdot\beta_2>\mu_{\delta_{j-1}}$.
This implies that ${\mathrm{ts}}(\mathrm{o}(\alphavecbe))=\alphavec^\frown\beta$. The claim now follows easily for $k>2$ since $\beta\le\mu_\aln$.\\[2mm]
{\bf Case 4:} Otherwise.\\[2mm]
{\bf Subcase 4.1:} $n_0=0$. Then we have $\mathrm{o}(\alphavecbe)=\beta$, ${\mathrm{ts}}^\alphaimin(\alphai)=(\alphai)$ for $i=1,\ldots,n$, and ${\mathrm{ts}}^\alphan(\beta_1)=(\beta_1)$, whence
${\mathrm{ts}}(\beta)=(\beta)$.\\[2mm]
{\bf Subcase 4.2:} $n_0>0$. Using the abbreviation $\alphavecpr:=\alphavec_{\restriction{n_0-1}}$ we then have $\mathrm{o}(\alphavecbe)=_{\mathbb N}F\mathrm{o}(\operatorname{h}_{\beta_1}(\alphavecpr^\frown\gamma))\cdot\beta$.
Setting $\operatorname{mts}^\gamma(\beta_1)=:(\gamma_1,\ldots,\gamma_{m+1})$ where $\gamma_1=\gamma$, $\gamma_{m+1}=\beta_1$, and $\operatorname{sk}_{\beta_1}(\gamma_m)=:\deltavec=(\delta_1,\ldots,\delta_{l+1})$ where $\delta_1=\gamma_m$,
we have \[\operatorname{h}_{\beta_1}(\alphavecpr^\frown\gamma)={\alphavecpr}^\frown{\gammavec_{\restriction_{m-1}}}^\frown\deltavec,\]
which by the i.h.\ is the tracking sequence of $\overline{\mathrm{o}(\alphavec^\frown\beta_1)}$.
Assuming first that $k=1$, we now verify that the tracking sequence of $\mathrm{o}(\alphavecbe)$ actually is $\alphavecbe$, by checking
that case 2.2 of Definition \ref{trsofhzdefi} applies, with $n_0$ playing the role of the critical index $i_0(\mathrm{o}(\alphavecbe))$.
Note first that the maximality of the length of $\deltavec$ rules out the possibility $\delta_{l+1}\in{\mathbb E}^{>\delta_l}\:\&\:\beta_1\le\mu_{\delta_{l+1}}$ and hence case 2.1 of
Definition \ref{trsofhzdefi}.
According to the choice of $n_0$ and part 2 of Lemma \ref{covcharlem} we have ${\mathrm{ts}}^\gamma(\beta_1)=\operatorname{max-cov}^\gamma(\beta_1)=(\alpha_{n_0},\ldots,\alphan,\beta_1)$ and
of course $\alpha_{n_0}\le\mu_{\alpha_{n_0-1}}$. Thus $n_0$ qualifies for the critical index, once we show its maximality:
Firstly, for any $i\in\{2,\ldots,m\}$ we have $\gamma_i<\beta_1$, and setting $\betavec^i:={\mathrm{ts}}^{\gamma_i}(\beta_1)$ the assumption $\beta^i_1\le\mu_{\gamma_{i-1}}$ would imply
that ${\gammavec_{\restriction_{i-1}}}^\frown\betavec^i$ is a $\mu$-covering from $\gamma$ to $\beta_1$ such that
\[\operatorname{mts}^\gamma(\beta_1)<_\mathrm{\scriptscriptstyle{lex}}{\gammavec_{\restriction_{i-1}}}^\frown\betavec^i,\]
contradicting part 3 of Lemma \ref{covcharlem}.
Secondly, for any $j\in\{2,\ldots,l+1\}$ we have $\delta_j=\overline{\mu_{\delta_{j-1}}\cdot\beta_1}$, so $\delta_j\cdot\beta_1=\mu_{\delta_{j-1}}\cdot\beta_1>\mu_{\delta_{j-1}}$.
These considerations entail \[{\mathrm{ts}}(\alphavec^\frown\beta_1)=\alphavecpr^\frown{\mathrm{ts}}^\gamma(\beta_1),\]
and it is easy now to verify the claim for arbitrary $k$, again since $\beta\le\mu_\aln$.
\mbox{ }
$\Box$
\betagin{cor}\lambdabel{ocontcor}
$\mathrm{o}$ is strictly increasing with respect to the lexicographic ordering on ${\operatorname{T}}S$ and
continuous in the last vector component.
\end{cor}
{\bf Proof.} The first statement is immediate from Lemma \ref{citedinjtrslem} and Theorems \ref{thma} and \ref{thmb}.
In order to verify continuity, let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$ and $\beta\in{\mathbb L}^{\le\mu_\aln}$ be given.
For any $\gamma\in{\mathbb P}\cap\beta$, we have $\gammati:=\mathrm{o}(\alphavecga)<\mathrm{o}(\alphavecbe)=:\betati$ and
$\alphavecga={\mathrm{ts}}(\gammati)<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}(\betati)=\alphavecbe$. For given $\deltati\in{\mathbb P}\cap(\gammati,\betati)$ set
$\deltavec:={\mathrm{ts}}(\deltati)$, so that \[\alphavecga<_\mathrm{\scriptscriptstyle{lex}}\deltavec<_\mathrm{\scriptscriptstyle{lex}}\alphavecbe,\]
whence $\alphavec\subseteqseteq\deltavec$ is an initial segment. Writing $\deltavec=\alphavec^\frown(\zeta_1,\ldots,\zeta_m)$
we obtain $\gamma<\zeta_1<\beta$. For $\deltavecpr:=\alphavec^\frown\zeta_1\cdot\omega$ we then have $\deltati<\mathrm{o}(\deltavecpr)<\betati$.
\mbox{ }
$\Box$
\noindent{\bf Remark.} Theorems \ref{thma} and \ref{thmb} establish Lemma 4.10 of \cite{CWc} in a weak theory,
adjusted to our redefinition of $\mathrm{o}$. Its equivalence with the definition in \cite{CWc} follows, since
the definition of ${\mathrm{ts}}$ has not been modified. In the next section we will continue in this way in order to obtain suitable
redefinitions of the $\kappa$- and $\nu$-functions.
The following lemma will not be required in the sequel, but it
has been included to further illuminate the approach.
\betagin{lem}\lambdabel{olem}
Let $\alphavecbe\in{\cal R}S$, $\gamma\in{\mathbb M}\cap(1,\betahat)$, and let ${\vec{\tau}}si\in{\operatorname{T}}S$ be such that $\alphavecbe\subseteqseteq{\vec{\tau}}si$ and $\operatorname{h}_\ga(\alphavecbe)<_\mathrm{\scriptscriptstyle{lex}}{\vec{\tau}}si$.
Then we have \[\mathrm{o}({\vec{\tau}}si)=_{\mathbb N}F\mathrm{o}(\operatorname{h}_\ga(\alphavecbe))\cdot\delta\]
for some $\delta\in{\mathbb P}\cap(1,\gamma)$.
\end{lem}
{\bf Proof.} The lemma is proved by induction on $\mathrm{lSeq}({\vec{\tau}}si)$, using Lemma \ref{hgalem}.
In order to fix some notation, set ${\vec{\tau}}=(\tau_1,\ldots,\tau_s)$ and $\tau_{s+1}:=\sigma$.
Write $\operatorname{h}_\ga(\alphavecbe)=\alphavec^\frown{\vec{\eta}}^\frown\deltavec$ where ${\vec{\eta}}=(\eta_1,\ldots,\eta_r)$ and $\deltavec=(\delta_1,\ldots,\delta_{l+1})=\operatorname{sk}ga(\varepsilon)$,
$\delta_1=\varepsilon=:\eta_{r+1}$, and $\delta_{l+2}:=1$.
Note that since $\alphavecbe\in{\cal R}S$, we have $\varepsilon\in{\mathbb E}^{>\eta_r}$. According to Lemma \ref{hgalem} either one of the two following cases applies
to ${\vec{\tau}}si$. \\[2mm]
{\bf Case 1:} ${\vec{\tau}}=\alphavec^\frown{\vec{\eta}}^\frown\deltavec_{\restriction_i}$ for some $i\in\{1,\ldots,l+1\}$ and $\sigma=_{\mathbb N}F\delta_{i+1}\cdot\zeta$ for some
$\zeta\in{\mathbb P}\cap(1,\gamma)$. Let $\zeta=_{\mathrm{\scriptscriptstyle{MNF}}}\zeta_1\cdot\ldots\cdot\zeta_j$.\\[2mm]
{\bf Subcase 1.1:} $i=l+1$. Then $\operatorname{h}_\ga(\alphavecbe)={\vec{\tau}}$, $\delta_{l+1}\in{\mathbb E}^{>\delta_l}$, and
$\sigma\le\mu_{\delta_{l+1}}<\gamma\le\delta_{l+1}$. According to the definition of $\mathrm{o}$ we have
$\mathrm{o}({\vec{\tau}}si)=\mathrm{o}(\operatorname{h}_\ga(\alphavecbe))\cdot\sigma$.\\[2mm]
{\bf Subcase 1.2:} $i\le l$ where $\delta_{i+1}\in{\mathbb E}^{>\delta_i}$ and $\zeta_1\le\mu_{\delta_{i+1}}$.
By definition, $\mathrm{o}({\vec{\tau}}si)=\mathrm{o}(\operatorname{h}_{\zeta_1}({\vec{\tau}}^\frown\delta_{i+1}))\cdot\zeta$.
As ${\vec{\tau}}^\frown\delta_{i+1}$ is an initial segment of $\operatorname{h}_\ga(\alphavecbe)$ and $\zeta_1<\gamma$, we have
\[\operatorname{h}_\ga(\alphavecbe)=\operatorname{h}_\ga({\vec{\tau}}^\frown\delta_{i+1})\le_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_{\zeta_1}({\vec{\tau}}^\frown\delta_{i+1})\]
by Corollary \ref{hgacor}. The claim now follows by the i.h.\ (if necessary) applied
to $\operatorname{h}_{\zeta_1}({\vec{\tau}}^\frown\delta_{i+1})$.\\[2mm]
{\bf Subcase 1.3:} Otherwise. Here we must have $i=l$, since if $i<l$ it follows that $\delta_{i+1}\in{\mathbb E}^{>\delta_i}$
and $\zeta_1<\gamma\le\overline{\mu_{\delta_{i+1}}\cdot\gamma}=\delta_{i+2}\le\mu_{\delta_{i+1}}$, which has been covered by the
previous subcase. We therefore have $\mathrm{o}({\vec{\tau}}si)=\mathrm{o}(\operatorname{h}_\ga(\alphavecbe))\cdot\zeta$.\\[2mm]
{\bf Case 2:} ${\vec{\tau}}_{\restriction_{s_0}}=\alphavec^\frown{\vec{\eta}}_{\restriction_{r_0}}$ for some $r_0\in[1,r]$, $s_0\le s$, $\eta_{r_0+1}<\tau_{s_0+1}$,
whence according to Lemma \ref{hgalem} we have $\sigma\le\mu_{\tau_s}<\gamma$, $\mu_{\tau_j}<\gamma$ for $j=s_0+1,\ldots,s$, and $\mu_\sigma<\gamma$ if $\sigma\in{\mathbb E}^{>\tau_s}$.
Let $\sigma=_{\mathrm{\scriptscriptstyle{MNF}}}\sigma_1\cdot\ldots\cdot\sigma_k$ and ${\sigma^\prime}:=(1/\sigma_1)\cdot\sigma$.
In the case $s_0<s$ we have $\operatorname{h}_\ga(\alphavecbe)<_\mathrm{\scriptscriptstyle{lex}}{\vec{\tau}}$, and noting that $\sigma<\gamma$, the i.h.\ straightforwardly applies to ${\vec{\tau}}$. Let us therefore assume that $s_0=s$,
whence ${\vec{\tau}}=\alphavec^\frown{\vec{\eta}}_{\restriction_{r_0}}$ and $\eta_{r_0+1}<\sigma$. Then we have ${\vec{\eta}}^\frown\varepsilon<_\mathrm{\scriptscriptstyle{lex}}{\vec{\tau}}si$ and
$\eta_{r_0}<\eta_{r_0+1}<\sigma\le\mu_{\eta_{r_0}}<\gamma$.\\[2mm]
{\bf Subcase 2.1:} $k>1$ where $\sigma_1\in{\mathbb E}^{>\eta_{r_0}}$ and $\sigma_2\le\mu_{\sigma_1}$.
Then $\mathrm{o}({\vec{\tau}}si)=\mathrm{o}(\operatorname{h}_{\sigma_2}({\vec{\tau}}^\frown\sigma_1))\cdot{\sigma^\prime}$ and
$\gamma>\sigma_1\ge\eta_{r_0+1}\in{\mathbb E}^{>\eta_{r_0}}$. In the case $\sigma_1>\eta_{r_0+1}$ the i.h.\ applies to
$\operatorname{h}_{\sigma_2}({\vec{\tau}}^\frown\sigma_1)$. Now assume $\sigma_1=\eta_{r_0+1}$. As in Subcase 1.2 we obtain
\[\operatorname{h}_\ga(\alphavecbe)=\operatorname{h}_\ga({\vec{\tau}}^\frown\sigma_1)\le_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_{\sigma_2}({\vec{\tau}}^\frown\sigma_1),\]
and (if necessary) the i.h.\ applies to $\operatorname{h}_{\sigma_2}({\vec{\tau}}^\frown\sigma_1)$.\\[2mm]
{\bf Subcase 2.2:} Otherwise, i.e.\ Case 4 of Definition \ref{odef} applies. Let $n_0:=n_0({\vec{\tau}}si)$.
We first assume that $k=1$, i.e.\ $\sigma\in{\mathbb M}\cap(\eta_{r_0+1},\gamma)$.\\[2mm]
{\bf 2.2.1:} $n_0\le s$. We obtain $\mathrm{o}({\vec{\tau}})=\xi\cdot\tau_s$ and $\mathrm{o}({\vec{\tau}}si)=\xi\cdot\sigma$ where $\xi=1$ if
$n_0=0$ and $\xi=\mathrm{o}(\operatorname{h}_{\tau_s}({{\vec{\tau}}_{\restriction_{n_0-1}}}^\frown\gamma({\vec{\tau}})))$ otherwise.
The $<,<_\mathrm{\scriptscriptstyle{lex}}$-order isomorphism between ${\operatorname{T}}S$ and ${\mathbb P}^{<1^\infty}$ established by
Lemma \ref{citedinjtrslem} and Theorems \ref{thma} and \ref{thmb} yields
\[\mathrm{o}({\vec{\tau}})<\mathrm{o}(\operatorname{h}_\ga(\alphavecbe))<\mathrm{o}({\vec{\tau}}si),\]
and hence the claim.\\[2mm]
{\bf 2.2.2:} $n_0=s+1$. We have $\sigma\in{\mathbb M}\cap(\eta_{r_0+1},\gamma)$. Let $\zeta$ be the immediate predecessor
of $\sigma$ in ${\mathrm{ts}}^{\eta_{r_0}}(\sigma)$. Then $\mathrm{o}({\vec{\tau}}si)=\mathrm{o}(\operatorname{h}_\sigma({\vec{\tau}}^\frown\zeta))\cdot\sigma$.\\[2mm]
{\bf 2.2.2.1:} $\eta_{r_0+1}<\zeta$. Then the i.h.\ applies to ${\vec{\tau}}^\frown\zeta$.\\[2mm]
{\bf 2.2.2.2:} $\eta_{r_0+1}=\zeta$. Then we argue as before, since
\[\operatorname{h}_\ga(\alphavecbe)=\operatorname{h}_\ga({\vec{\tau}}^\frown\eta_{r_0+1})\le_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_\sigma({\vec{\tau}}^\frown\eta_{r_0+1}),\]
so that the i.h.\ (if necessary) applies to $\operatorname{h}_\sigma({\vec{\tau}}^\frown\eta_{r_0+1})$.\\[2mm]
{\bf 2.2.2.3:} $\eta_{r_0+1}>\zeta$. Then $\zeta$ is an element of ${\mathrm{ts}}^{\eta_{r_0}}(\eta_{r_0+1})$, and
by a monotonicity argument as in 2.2.1 we obtain the claim as a consequence of
\[\mathrm{o}(\operatorname{h}_\sigma({\vec{\tau}}^\frown\zeta))<\mathrm{o}(\operatorname{h}_\ga(\alphavecbe))<\mathrm{o}({\vec{\tau}}si).\]
This concludes the proof for $k=1$, and for $k>1$ the claim now follows easily.
\mbox{ }
$\Box$
\section{Enumerating relativized connectivity components}\lambdabel{conncompsec}
Recall Definition 4.4 of \cite{CWc}. We are now going to characterize the functions $\kappa$ and $\nu$
by giving an alternative definition which is considerably less intertwined. The first step is to define
the restrictions of $\kappa^\alvec$ and $\nu^\alvec$ to additive principal indices. Recall part 3 of Lemma \ref{lamurholem}.
\betagin{defi}\lambdabel{kappanuprincipals}
Let $\alphavec\in{\cal R}S$ where $\alphavec=(\alphae,\ldots,\alphan)$, $n\ge 0$, $\alpha_0:=1$.
We define $\kappa^\alvecbe$ and $\nu^\alvecbe$ for additive principal $\beta$ as follows, writing $\kappa_\beta$ instead of $\kappa^{()}_\beta$.\\[2mm]
{\bf Case 1:} $n=0$. For $\beta<1^\infty$ define \[\kappa_\beta:=\mathrm{o}((\beta)).\]
{\bf Case 2:} $n>0$. For $\beta\le \mu_\aln$, i.e.\ $\alphavec^\frown\beta\in{\operatorname{T}}S$, define \[\nu^\alphavec_\beta:=\mathrm{o}(\alphavecbe).\]
$\kappa^\alvecbe$ for $\beta\le \lambdaaln$ is defined by cases. If $\beta\le\alphan$ let $i\in\{0,\ldots,n-1\}$ be maximal such that $\alphai<\beta$.
If $\beta>\alphan$ let $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$ and set $\betapr:=(1/\beta_1)\cdot\beta$.
\[\kappa^\alvecbe:=\left\{\betagin{array}{ll}
\kappa^{{\alphavec_{\restriction_i}}}_\beta&\mbox{if } \beta\le\alphan\\[2mm]
\mathrm{o}(\alphavec)\cdot\betapr&\mbox{if } \beta_1=\alphan\:\&\: k>1\\[2mm]
\mathrm{o}(\alphavecbe)&\mbox{if } \beta_1>\alphan.
\end{array}\right.\]
\end{defi}
\noindent{\bf Remark.} Note that in the case $n>0$ we have the following inequalities between $\kappa^\alvecbe$ and $\nu^\alvecbe$, which are consequences
of the monotonicity of $\mathrm{o}$ proved in Theorems \ref{thma} and \ref{thmb}.
\betagin{enumerate}
\item If $\beta\le\alphan$ then $\kappa^\alvecbe\le\kappa^\alvecnminaln=\mathrm{o}(\alphavec)$. Later we will define $\nu^\alvec_0:=\mathrm{o}(\alphavec)$.
\item If $\beta_1=\alphan$ and $k>1$ then $\kappa^\alvecbe=\mathrm{o}(\alphavec)\cdot\betapr=\nu^\alvec_\betapr$, which is less than $\nu^\alvecbe$ if $\beta\le\mu_\aln$.
\item Otherwise we have $\kappa^\alvecbe=\nu^\alvecbe$.
\end{enumerate}
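\noindent As a simple illustration of Case 1 of Definition \ref{kappanuprincipals}: for additive principal $\beta<1^\infty$ with ${\mathrm{ts}}(\beta)=(\beta)$, Theorem \ref{thma} yields
\[\kappa_\beta=\mathrm{o}((\beta))=\mathrm{o}({\mathrm{ts}}(\beta))=\beta.\]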
\betagin{cor}\lambdabel{kappanucor}
$\mbox{ }$
\betagin{enumerate}
\item $\kappa$ and $\nu$ are strictly increasing with respect to their $<_\mathrm{\scriptscriptstyle{lex}}$-ordered arguments $\alphavecbe\in{\operatorname{T}}S$.
\item Each branch $\kappa^\alvec$ (where $\alphavec\in{\cal R}S$) and $\nu^\alvec$ (where $\alphavec\in{\cal R}S-\{()\}$) is continuous at arguments $\beta\in{\mathbb L}$.
\end{enumerate}
\end{cor}
{\bf Proof.} This is a consequence of Corollary \ref{ocontcor}.
\mbox{ }
$\Box$
We now prepare for the conservative extension of $\kappa$ and $\nu$ to their entire domain as well as
the definition of $\mathrm{dp}$, which is in accordance with Definition 4.4 of \cite{CWc}.
\betagin{defi}\lambdabel{Ttauvec}
Let ${\vec{\tau}}\in{\cal R}S$, ${\vec{\tau}}=(\tau_1,\ldots,\tau_n)$, $n\ge0$, $\tau_0:=1$.
The term system ${\operatorname{T}}tvec$ is obtained from ${\operatorname{T}}tn$ by successive substitution of parameters in $(\tau_i,\tau_{i+1})$ by their ${\operatorname{T}}ti$-representations,
for $i=n-1,\ldots,1$. The parameters ${\tau_i}$ are represented by the terms $\varthetati(0)$.
The length ${\operatorname{l}^\tauvec}(\alpha)$ of a ${\operatorname{T}}tvec$-term $\alpha$ is defined inductively by
\betagin{enumerate}
\item ${\operatorname{l}^\tauvec}(0):=0$,
\item ${\operatorname{l}^\tauvec}(\beta):={\operatorname{l}^\tauvec}(\gamma)+{\operatorname{l}^\tauvec}(\delta)$ if $\beta=_{\mathbb N}F\gamma+\delta$, and
\item ${\operatorname{l}^\tauvec}(\vartheta(\eta)):=\left\{\betagin{array}{l@{\quad}l}
1&\mbox{ if }\quad\eta=0\\
{\operatorname{l}^\tauvec}(\eta)+4&\mbox{ if }\quad\eta>0
\end{array}\right.$\\[2mm]
where $\vartheta\in\{\vartheta^{\tau_i}\mid 0\le i\le n\}\cup\{\vartheta_{i+1}\mid i\in{\mathbb N}\}$.
\end{enumerate}
\end{defi}
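\noindent{\bf Example.} Directly from the above clauses we get, for any of the admissible $\vartheta$-functions, ${\operatorname{l}^\tauvec}(\vartheta(0))=1$ and ${\operatorname{l}^\tauvec}(\vartheta(\vartheta(0)))={\operatorname{l}^\tauvec}(\vartheta(0))+4=5$, and, assuming that $\vartheta(0)+\vartheta(0)$ is a ${\operatorname{T}}tvec$-term in additive normal form, ${\operatorname{l}^\tauvec}(\vartheta(0)+\vartheta(0))=1+1=2$.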
\noindent{\bf Remark.} Recall Equation (\ref{logred}) as well as Lemma 2.13 and Definitions 2.14 and 2.18 of \cite{W07c}.
\betagin{enumerate}
\item For $\beta=\vartheta^{\tau_n}(\Delta+\eta)\in{\mathbb E}$ such that $\beta\le\mu_{\tau_n}$ we have
\betagin{equation}\lambdabel{iotalen} {\operatorname{l}^\tauvec}(\Delta)={\operatorname{l}^\tauvec}be(\iota_{\tau_n,\beta}(\Delta))<{\operatorname{l}^\tauvec}(\beta).
\end{equation}
\item For $\beta\in{\operatorname{T}}tvec\cap{\mathbb P}^{>1}\cap\Omega_1$ let $\tau\in\{\tau_0,\ldots,\tau_n\}$ be maximal such that $\tau<\beta$.
Clearly,
\betagin{equation}\lambdabel{barlen} {\operatorname{l}^\tauvec}(\betabar) < {\operatorname{l}^\tauvec}(\beta),
\end{equation}
cf.\ Subsection \ref{reflocsubsec}, and
\betagin{equation}\lambdabel{zelen}
{\operatorname{l}^\tauvec}({\ze^\tau}atbe) < {\operatorname{l}^\tauvec}(\beta).
\end{equation}
In case of $\beta\not\in{\mathbb E}$ we have
\betagin{equation}\lambdabel{loglen} {\operatorname{l}^\tauvec}(\log(\beta)), {\operatorname{l}^\tauvec}(\log((1/\tau)\cdot\beta)) < {\operatorname{l}^\tauvec}(\beta),
\end{equation}
and for $\beta\in{\mathbb E}$ we have
\betagin{equation}\lambdabel{lalen}
{\operatorname{l}^\tauvec}be(\lambdatbe) < {\operatorname{l}^\tauvec}(\beta).
\end{equation}
\end{enumerate}
Finally, the definition of the enumeration functions of relativized connectivity components can be
completed. This is easily seen to be a sound, elementary recursive definition.
\betagin{defi}[cf.\ 4.4 of \cite{CWc}]
Let $\alphavec\in{\cal R}S$ where $\alphavec=(\alpha_1,\ldots,\alpha_n)$, $n\ge0$, and set $\alpha_0:=1$.
We define the functions
\[\kappa^\alvec, {\mathrm{dp}_\alvec}: \mathrm{dom}kval\to1^\infty,\]
\index{$\kappa^\alvec$}\index{${\mathrm{dp}_\alvec}$}
\noindent where $\mathrm{dom}kval:=1^\infty$ if $n=0$ and $\mathrm{dom}kval:=[0,\lambdaaln]$ if $n>0$, simultaneously by recursion on ${\operatorname{l}^{\alphavec}}(\beta)$, extending Definition \ref{kappanuprincipals}.
The clauses extending the definition of $\kappa^\alvec$ are as follows.
\betagin{enumerate}
\item $\kappa^\alvec_0:=0$, $\kappa^\alvec_1:=1$,
\item\lambdabel{kappapl} $\kappa^\alvecbe:=\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma)+\kappa^\alvecde$ for $\beta=_{\mathbb N}F\gamma+\delta$.
\end{enumerate}
\noindent ${\mathrm{dp}_\alvec}$ is defined as follows, using $\nu$ as already defined on ${\operatorname{T}}S$.
\betagin{enumerate}
\item ${\mathrm{dp}_\alvec}(0):=0$, ${\mathrm{dp}_\alvec}(1):=0$, and ${\mathrm{dp}_\alvec}(\alphan):=0$ in case of $n>0$,
\item ${\mathrm{dp}_\alvec}(\beta):={\mathrm{dp}_\alvec}(\delta)$ if $\beta=_{\mathbb N}F\gamma+\delta$,
\item\lambdabel{dpred} ${\mathrm{dp}_\alvec}(\beta):=\mathrm{dp}_{\alphavecrestrnmin}(\beta)$ if $n>0$ for $\beta\in{\mathbb P}\cap(1,\alphan)$,
\item for $\beta\in{\mathbb P}^{>\alphan}-{\mathbb E}$ let $\gamma:=(1/\alphan)\cdot\beta$ and $\log(\gamma)=_{\mathrm{\scriptscriptstyle{ANF}}}\gamma_1+\ldots+\gamma_m$ and set
\[{\mathrm{dp}_\alvec}(\beta):=\kappa^\alvec_{\gamma_1}+{\mathrm{dp}_\alvec}(\gamma_1)+\ldots+\kappa^\alvec_{\gamma_m}+{\mathrm{dp}_\alvec}(\gamma_m),\]
\item\lambdabel{dpeps} for $\beta\in{\mathbb E}^{>\alphan}$ let $\gammavec:=(\alphae,\ldots,\alphan,\beta)$, and set
\[{\mathrm{dp}_\alvec}(\beta):=\nu^\gavec_{\mu^\alphan_\beta}+\kappa^\gavec_{\lambdaalnbe}+{\mathrm{dp}_\gavec}(\lambdaalnbe).\]
\end{enumerate}
\end{defi}
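\noindent{\bf Example.} As a minimal sanity check of the above clauses, take $\alphavec=()$: since $\mathrm{dp}_{()}(m)=\mathrm{dp}_{()}(1)=0$ for $0<m<\omega$, we obtain $\kappa_2=\kappa_1+\mathrm{dp}_{()}(1)+\kappa_1=2$ and inductively $\kappa_{m+1}=\kappa_m+\mathrm{dp}_{()}(m)+\kappa_1=\kappa_m+1$, hence $\kappa_m=m$ for all $m<\omega$.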
\betagin{defi}[cf.\ 4.4 of \cite{CWc}]
Let $\alphavec\in{\cal R}S$ where $\alphavec=(\alpha_1,\ldots,\alpha_n)$, $n>0$, and set $\alpha_0:=1$.
We define
\[\nu^\alvec:\mathrm{dom}nuval\to1^\infty\]\index{$\nu^\alvec$}
\noindent where $\mathrm{dom}nuval:=[0,\mu_\alphan]$,
extending Definition \ref{kappanuprincipals} and setting $\alpha:=\mathrm{o}(\alphavec)$, by
\betagin{enumerate}
\item $\nu^\alvec_0:=\alpha$,
\item\lambdabel{nuple} $\nu^\alvec_{\beta}:=\nu^\alvec_\gamma+\kappa^\alvec_{{\varrho^\aln_\ga}}+{\mathrm{dp}_\alvec}({\varrho^\aln_\ga})+\chi^\alncheck(\gamma)\cdot\alpha$ if $\beta=\gamma+1$,
\item $\nu^\alvec_\beta:=\nu^\alvec_\gamma+\kappa^\alvec_{{\varrho^\aln_\ga}}+{\mathrm{dp}_\alvec}({\varrho^\aln_\ga})+\nu^\alvec_{\delta}$ if $\beta=_{\mathbb N}F\gamma+\delta\in\mathrm{Lim}$.
\end{enumerate}
\end{defi}
In the sequel we want to establish the results of Lemma 4.5 of \cite{CWc} for the new definitions within a weak theory, avoiding the long transfinite induction
used in the corresponding proof in \cite{CWc}.
Then the agreement of the definitions of $\kappa,\mathrm{dp}$, and $\nu$ in \cite{CWc} and here can be shown in a weak theory as well. This also includes Lemma 4.7 of \cite{CWc}
and extends to the relativization of tracking sequences to contexts as laid out in Definition 4.13 through Lemma 4.17 of \cite{CWc}.
\betagin{lem}\lambdabel{kdpmainlem}
Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$ and set $\alpha_0:=1$.
\betagin{enumerate}
\item Let $\gamma\in\mathrm{dom}kval\cap{\mathbb P}$. If $\gamma=_{\mathrm{\scriptscriptstyle{MNF}}}\gamma_1\cdot\ldots\cdot\gamma_k\ge\alphan$, setting
$\gammapr:=(1/\gamma_1)\cdot\gamma$, we have
\[(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega=\left\{\betagin{array}{ll}
\mathrm{o}(\alphavec)\cdot\gammapr\cdot\omega&\mbox{if } \gamma_1=\alphan\\[2mm]
\mathrm{o}(\alphavec^\frown\gamma\cdot\omega)&\mbox{otherwise.}
\end{array}\right.\]
If $\gamma<\alphan$ we have $(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega<\mathrm{o}(\alphavec)$.
\item For $\gamma\in\mathrm{dom}kval-({\mathbb E}\cup\{0\})$ we have \[{\mathrm{dp}_\alvec}(\gamma)<\kappa^\alvecga.\]
\item For $\gamma\in{\mathbb E}^{>\alphan}$ such that $\mu_\ga<\gamma$ we have \[{\mathrm{dp}_\alvec}(\gamma)<\mathrm{o}(\alphavec^\frown\gamma)\cdot\mu_\ga\cdot\omega.\]
\item For $\gamma\in\mathrm{dom}kval\cap{\mathbb E}^{>\alphan}$ we have
\[\kappa^\alvecga\cdot\omega\le{\mathrm{dp}_\alvec}(\gamma) \quad\mbox{ and }\quad {\mathrm{dp}_\alvec}(\gamma)\cdot\omega=_{\mathbb N}F\mathrm{o}(\operatorname{h}_\om(\alphavecga))\cdot\omega.\]
\item Let $\gamma\in\mathrm{dom}nuval\cap{\mathbb P}$, $\gamma=_{\mathrm{\scriptscriptstyle{MNF}}}\gamma_1\cdot\ldots\cdot\gamma_k$. We have
\[(\nu^\alvecga+\kappa^\alphavec_{\varrho^\alphan_\gamma}+{\mathrm{dp}_\alvec}(\varrho^\alphan_\gamma))\cdot\omega=\left\{\betagin{array}{ll}
\mathrm{o}(\alphavec)\cdot\gamma\cdot\omega&\mbox{if } \gamma_1\le\alphan\\[2mm]
\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\gamma))\cdot\omega&\mbox{if } \gamma\in{\mathbb E}^{>\alphan}\\[2mm]
\mathrm{o}(\alphavec^\frown\gamma)\cdot\omega&\mbox{otherwise.}
\end{array}\right.\]
\end{enumerate}
\end{lem}
{\bf Proof.} The lemma is shown by simultaneous induction on ${\operatorname{l}^{\alphavec}}(\gamma)$ over all parts.
\\[2mm]
{\bf Ad 1.} The claim is immediate if $\gamma=\alphan$, and if $\gamma_1=\alphan$ and $k>1$, we have $\kappa^\alvecga=\mathrm{o}(\alphavec)\cdot\gammapr$ and by part 2
the claim follows. Now assume that $\gamma_1>\alphan$, whence $\kappa^\alvecga=\mathrm{o}(\alphavecga)$.
The case $\gamma\not\in{\mathbb E}$ is handled again by part 2. If $\gamma\in{\mathbb E}$, we apply part 4 to see that
\[(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega={\mathrm{dp}_\alvec}(\gamma)\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavecga))\cdot\omega,\]
and since $\mu_\ga\ge\omega$, the latter is equal to $\mathrm{o}(\alphavec^\frown\gamma\cdot\omega)$.
Now consider the situation where $\gamma<\alphan$. Let $i\in\{0,\ldots,n-1\}$ be maximal such that $\alphai<\gamma$.
The same argument as above yields the corresponding claim for $\alphavec_{\restriction_i}$ instead of $\alphavec$, and by the
monotonicity of $\mathrm{o}$ we see that the resulting ordinal is strictly below $\mathrm{o}(\alphavec_{\restriction_{i+1}})\le\mathrm{o}(\alphavec)$.
Note that in the case $\gamma\cdot\omega\in\mathrm{dom}kval$ we have $(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega=\kappa^\alvec_{\gamma\cdot\omega}$ as a direct consequence
of the definitions.\\[2mm]
{\bf Ad 2.} We may assume that $\gamma>\alphan$ (otherwise replace $n$ by the suitable $i<n$ and $\alphavec$ by $\alphavec_{\restriction_i}$).
Set $\gammapr:=\log((1/\alphan)\cdot\gamma)=_{\mathrm{\scriptscriptstyle{ANF}}}\gamma_1+\ldots+\gamma_m$, so that $\gamma=\alphan\cdot\omega^{\gamma_1+\ldots+\gamma_m}$ and
${\mathrm{dp}_\alvec}(\gamma)=\sum^m_{i=1}(\kappa^\alvec_{\gamma_i}+{\mathrm{dp}_\alvec}(\gamma_i))$. According to the definition, $\kappa^\alvecga$ is either
$\mathrm{o}(\alphavec)\cdot\omega^{\gamma_1+\ldots+\gamma_m}$ if $\gamma_1\le\alphan$, or $\mathrm{o}(\alphavecga)$ if $\gamma_1>\alphan$.
In the case $\gamma_i\cdot\omega<\gamma$ for $i=1,\ldots,m$ an application
of part 1 of the i.h.\ to the $\gamma_i$ yields the claim, thanks to the monotonicity of $\mathrm{o}$.
Otherwise we must have $\gamma=\gamma_1\cdot\omega$ where $\gamma_1\in{\mathbb E}^{>\alphan}$, and
applying part 1 of the i.h.\ to $\gamma_1$ yields
\[{\mathrm{dp}_\alvec}(\gamma)=\kappa^\alvec_{\gamma_1}+{\mathrm{dp}_\alvec}(\gamma_1)+1<(\kappa^\alvec_{\gamma_1}+{\mathrm{dp}_\alvec}(\gamma_1))\cdot\omega=\mathrm{o}(\alphavecga)=\kappa^\alvecga.\]
{\bf Ad 3.} Let $\lambda_\gamma=_{\mathrm{\scriptscriptstyle{ANF}}}\lambda_1+\ldots+\lambda_r$, $\lambda\in\{\lambda_1,\ldots,\lambda_r\}$, and note that $\lambda\le\gamma\cdot\mu_\ga$.
Setting $\lambdapr:=(1/{\operatorname{mf}}(\lambda))\cdot\lambda$, we have $\lambdapr\le\mu_\ga$ and applying part 1 of the i.h.\ to $\lambda$ we obtain
$(\kappa^\alvec_\lambda+{\mathrm{dp}_\alvec}(\lambda))\cdot\omega\le\mathrm{o}(\alphavecga)\cdot\mu_\ga\cdot\omega$.
Since $\nu^{\alphavecga}_{\mu_\ga}=\mathrm{o}(\alphavec^\frown(\gamma,\mu_\ga))=\mathrm{o}(\alphavecga)\cdot\mu_\ga$, we obtain the claim.\\[2mm]
{\bf Ad 4.} The inequality is seen by a quick inspection of the respective definitions. We have $\kappa^\alvecga=\mathrm{o}(\alphavecga)$,
and since $\mu_\ga\ge\omega$ we obtain
\[\mathrm{o}(\alphavecga)\cdot\omega=\mathrm{o}(\alphavec^\frown(\gamma,\omega))\le\mathrm{o}(\alphavec^\frown(\gamma,\mu_\ga))\le{\mathrm{dp}_\alvec}(\gamma).\]
In order to verify the claimed equation, note that \[\operatorname{h}_\om(\alphavecga)=\alphavec^\frown\operatorname{sk}_\omega(\gamma),\]
where $\operatorname{sk}_\omega(\gamma)=(\delta_1,\ldots,\delta_{l+1})$ consists of a maximal strictly increasing chain
$\deltavec:=(\delta_1,\ldots,\delta_l)=(\gamma,\mu_\ga,\mu_{\mu_\ga},\ldots)$ of ${\mathbb E}$-numbers and $\delta_{l+1}=\mu_{\delta_l}\not\in{\mathbb E}^{>\delta_l}$.
We have $\lambda_{\delta_i}=\varrho_{\mu_{\delta_i}}+\zeta_{\delta_i}$ where $\zeta_{\delta_i}<\delta_i$ and, for $i<l$,
$\varrho_{\mu_{\delta_i}}=\mu_{\delta_i}=\delta_{i+1}\in{\mathbb E}^{>\delta_i}$. Applying the i.h.\ to (the additive decompositions) of these
terms $\lambda_{\delta_i}$ we obtain
\[{\mathrm{dp}_\alvec}(\gamma)\cdot\omega=\mathrm{dp}_{\alphavec^\frown\deltavec_{\restriction_{l-1}}}(\delta_l)\cdot\omega=
(\nu^{\alphavec^\frown\deltavec}_{\mu_{\delta_l}}+\kappa^{\alphavec^\frown\deltavec}_{\lambda_{\delta_l}}+\mathrm{dp}_{\alphavec^\frown\deltavec}(\lambda_{\delta_l}))\cdot\omega,\]
and consider the additive decomposition of the term $\lambda_{\delta_l}$. The components below $\delta_l$ from $\zeta_{\delta_l}$ are easily handled
using the inequality of part 1 of the i.h., while for $\mu_{\delta_l}=_{\mathrm{\scriptscriptstyle{MNF}}}\mu_1\cdot\ldots\cdot\mu_j$ we have
$\varrho_{\mu_{\delta_l}}\le\delta_l\cdot\log(\mu_{\delta_l})=\delta_l\cdot(\log(\mu_1)+\ldots+\log(\mu_j))$ (where the only possible difference is
$\delta_l$) and consider the summands separately.
Let $\mu\in\{\mu_1,\ldots,\mu_j\}$.\\[2mm]
{\bf Case 1:} $\varrho_{\mu_{\delta_l}}\in{\mathbb E}^{>\delta_l}$. Let $\log(\mu_{\delta_l})=\lambda+k$ where $\lambda\in\mathrm{Lim}\cup\{0\}$ and $k<\omega$.
We must have $\chi^{\delta_l}(\lambda)=1$, since otherwise $\varrho_{\mu_{\delta_l}}=\mu_{\delta_l}\in{\mathbb E}^{>\delta_l}$, which would contradict the
maximality of the length of $\deltavec$.
It follows that $k=1$, hence $\mu_{\delta_l}=\lambda\cdot\omega$, $\lambda=\varrho_{\mu_{\delta_l}}\in{\mathbb E}^{>\delta_l}$, and applying the i.h.\ to $\lambda$
yields \[(\kappa^{\alphavec^\frown\deltavec}_{\lambda}+\mathrm{dp}_{\alphavec^\frown\deltavec}(\lambda))\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\deltavec^\frown\lambda))\cdot\omega=
\mathrm{o}(\alphavec^\frown\deltavec^\frown\lambda\cdot\omega)=\nu^{\alphavec^\frown\deltavec}_{\mu_{\delta_l}},\]
whence ${\mathrm{dp}_\alvec}(\gamma)\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\gamma))\cdot\omega$ as claimed.
\\[2mm]
{\bf Case 2:} Otherwise.\\[2mm]
{\bf Subcase 2.1:} $\mu<\delta_l$. Then applying part 2 of the i.h.\ to $\delta_l\cdot\log(\mu)$ we see that
\[\mathrm{dp}_{\alphavec^\frown\deltavec}(\delta_l\cdot\log(\mu))<\kappa^{\alphavec^\frown\deltavec}_{\delta_l\cdot\log(\mu)}=\mathrm{o}(\alphavec^\frown\deltavec)\cdot\log(\mu).\]
{\bf Subcase 2.2:} $\mu=\delta_l$. We calculate $\kappa^{\alphavec^\frown\deltavec}_{\delta_l^2}=\mathrm{o}(\alphavec^\frown\deltavec)\cdot\delta_l$ and
$\mathrm{dp}_{\alphavec^\frown\deltavec}(\delta_l^2)=\mathrm{o}(\alphavec^\frown\deltavec)$.
\\[2mm]
{\bf Subcase 2.3:} $\mu>\delta_l$.
\\[2mm]
{\bf 2.3.1:} $\mu\not\in{\mathbb E}^{>\delta_l}$. Then $\delta_l<\delta_l\cdot\log(\mu)\not\in{\mathbb E}^{>\delta_l}$, hence by the i.h., applied to $\delta_l\cdot\log(\mu)$,
which is a summand of $\lambda_{\delta_l}$,
\[\mathrm{dp}_{\alphavec^\frown\deltavec}(\delta_l\cdot\log(\mu))<\kappa^{\alphavec^\frown\deltavec}_{\delta_l\cdot\log(\mu)}\le\nu^{\alphavec^\frown\deltavec}_\mu\le\nu^{\alphavec^\frown\deltavec}_{\mu_{\delta_l}}.\]
{\bf 2.3.2:} $\mu\in{\mathbb E}^{>\delta_l}$. Then we have $\delta_l\cdot\log(\mu)=\mu$, $\mu\cdot\omega\le\mu_{\delta_l}$, and applying the i.h.\ to $\mu$
\[\mathrm{dp}_{\alphavec^\frown\deltavec}(\mu)\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\deltavec^\frown\mu))\cdot\omega=\nu^{\alphavec^\frown\deltavec}_{\mu\cdot\omega}\le\nu^{\alphavec^\frown\deltavec}_{\mu_{\delta_l}}.\]
{\bf Ad 5.} We have $\varrho_\gamma\le\alphan\cdot(\log(\gamma_1)+\ldots+\log(\gamma_k))$. \\[2mm]
{\bf Case 1:} $\gamma\in{\mathbb E}^{>\alphan}$. Here we have $\varrho_\gamma=\gamma$ and $\nu^\alvecga=\kappa^\alvecga<{\mathrm{dp}_\alvec}(\gamma)=\nu^\alphavecga_{\mu_\ga}+\kappa^\alphavecga_{\lambda_\gamma}+\mathrm{dp}_\alphavecga(\lambda_\gamma)$,
and by part 4 we have ${\mathrm{dp}_\alvec}(\gamma)\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavecga))\cdot\omega$.
\\[2mm]
{\bf Case 2:} $\gamma_1\le\alphan$. Here we argue similarly as in the proof of part 4, case 2. However, the access to the i.h.\ is different.
Clearly, $\kappa^\alvec_{\alphan\cdot\log(\gamma_i)}=\mathrm{o}(\alphavec)\cdot\log(\gamma_i)$. For given $i$, let $\log(\log(\gamma_i))=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_s$. In the case $\gamma_i=\alphan$ we have ${\mathrm{dp}_\alvec}(\alpha_n^2)=\mathrm{o}(\alphavec)$. Now assume that $\gamma_i<\alphan$. An application of part 1 of the i.h.\ to $\xi_j$ yields $(\kappa^\alvec_{\xi_j}+{\mathrm{dp}_\alvec}(\xi_j))\cdot\omega<\mathrm{o}(\alphavec)$
for $j=1,\ldots,s$.
Therefore \[(\kappa^\alvec_{\alphan\cdot\log(\gamma_i)}+{\mathrm{dp}_\alvec}(\alphan\cdot\log(\gamma_i)))\cdot\omega=\kappa^\alvec_{\alphan\cdot\log(\gamma_i)}\cdot\omega=\mathrm{o}(\alphavec)\cdot\log(\gamma_i)\cdot\omega.\]
These considerations show that we obtain \[(\nu^\alvecga+\kappa^\alphavec_{\varrho^\alphan_\gamma}+{\mathrm{dp}_\alvec}(\varrho^\alphan_\gamma))\cdot\omega=\nu^\alvecga\cdot\omega=\mathrm{o}(\alphavec)\cdot\gamma\cdot\omega.\]
{\bf Case 3:} Otherwise.
\\[2mm]
{\bf Subcase 3.1:} $\varrho_\gamma\in{\mathbb E}^{>\alphan}$. Since $\gamma\not\in{\mathbb E}^{>\alphan}$ we have $\log(\gamma)=\lambda+1$ where $\lambda\in{\mathbb E}^{>\alphan}$ and $\chi^\alphan(\lambda)=1$.
Hence $\gamma=\lambda\cdot\omega$ and $\varrho_\gamma=\lambda$, for which part 4 of the i.h. yields
\[(\kappa^\alvec_\lambda+{\mathrm{dp}_\alvec}(\lambda))\cdot\omega={\mathrm{dp}_\alvec}(\lambda)\cdot\omega=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\lambda))\cdot\omega=\mathrm{o}(\alphavec^\frown\gamma),\]
implying the claim.\\[2mm]
{\bf Subcase 3.2:} $\varrho_\gamma\not\in{\mathbb E}^{>\alphan}$. Here we extend the argument from case 2, where the situation $\gamma_i\le\alphan$ has been resolved.
In the case $\gamma_i\in{\mathbb E}^{>\alphan}$ we have $\gamma_i\cdot\omega\le\gamma$ and apply part 4 of the i.h.\ to $\gamma_i$ to obtain
\[(\kappa^\alvec_{\alphan\cdot\log(\gamma_i)}+{\mathrm{dp}_\alvec}(\alphan\cdot\log(\gamma_i)))\cdot\omega={\mathrm{dp}_\alvec}(\gamma_i)\cdot\omega=\mathrm{o}(\alphavec^\frown\gamma_i\cdot\omega)\le\mathrm{o}(\alphavecga).\]
We are left with the cases where $\gamma_i\in{\mathbb M}^{>\alphan}-{\mathbb E}$. Writing $\log(\log(\gamma_i))=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_s$, which resides in $(\alphan,\log(\gamma_i))$, we
have ${\mathrm{dp}_\alvec}(\alphan\cdot\log(\gamma_i))=\sum^s_{j=1}(\kappa^\alvec_{\xi_j}+{\mathrm{dp}_\alvec}(\xi_j))$, where for each $j$ part 1 of the i.h.\ applied to $\xi_j$ together with the
monotonicity of $\kappa^\alvec$ on additive principal arguments yields \[(\kappa^\alvec_{\xi_j}+{\mathrm{dp}_\alvec}(\xi_j))\cdot\omega=\kappa^\alvec_{\xi_j\cdot\omega}\le\kappa^\alvec_{\alphan\cdot\log(\gamma_i)}<\nu^\alvecga,\]
and we conclude as in case 2.\mbox{ }
$\Box$
\betagin{cor}\lambdabel{kappanuhzcor} Let $\alphavec\in{\cal R}S$. We have
\betagin{enumerate}
\item $\kappa^\alvec_{\gamma\cdot\omega}=(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega$ for $\gamma\in{\mathbb P}$ such that $\gamma\cdot\omega\in\mathrm{dom}kval$.
\item $\nu^\alvec_{\gamma\cdot\omega}=(\nu^\alvecga+\kappa^\alphavec_{\varrho^\alphan_\gamma}+{\mathrm{dp}_\alvec}(\varrho^\alphan_\gamma))\cdot\omega$ for $\gamma\in{\mathbb P}$ such that $\gamma\cdot\omega\in\mathrm{dom}nuval$.
\end{enumerate}
Both $\kappa^\alvec$ and, for $\alphavec\not=()$, $\nu^\alvec$ are strictly monotonically increasing and continuous.
\end{cor}
{\bf Proof.} Parts 1 and 2 follow from Definitions \ref{odef} and \ref{kappanuprincipals} using parts 1 and
5 of Lemma \ref{kdpmainlem}, respectively.
In order to see general monotonicity and continuity we can build upon Corollary \ref{kappanucor}.
The missing argument is as follows.
For any $\beta\in{\mathbb P}$ in the respective domain and any $\gamma=_{\mathrm{\scriptscriptstyle{ANF}}}\gamma_1+\ldots+\gamma_m<\beta$
we have $\kappa^\alvecga<\kappa^\alvecbe$ using part 1, and $\nu^\alvecga<\nu^\alvecbe$ using part 2, since $\gamma_i\cdot\omega\le\beta$
for $i=1,\ldots,m$.\mbox{ }
$\Box$
\begin{theo}\label{agreementthm}
Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$, $n\ge 0$, and set $\alpha_0:=1$.
For $\beta\in{\mathbb P}$ let $\delta:=(1/\betabar)\cdot\beta$, so that $\beta=_{\mathrm{NF}}\betabar\cdot\delta$ if $\beta\not\in{\mathbb M}$.
\begin{enumerate}
\item For all $\beta\in\mathrm{dom}(\kappa^\alvec)\cap{\mathbb P}^{>\alphan}$ we have \[\kappa^\alvecbe=\kappa^\alvec_{\betabar+1}\cdot\delta.\]
\item For all $\beta\in\mathrm{dom}(\nu^\alvec)\cap{\mathbb P}^{>\alphan}$ (where $n>0$) we have \[\nu^\alvecbe=\nu^\alvec_{\betabar+1}\cdot\delta.\]
\end{enumerate}
Hence, the definitions of $\kappa,\nu$, and $\mathrm{dp}$ given in \cite{CWc} and here fully agree.
\end{theo}
{\bf Proof.} We rely on the monotonicity of $\mathrm{o}$. Note that Corollary \ref{kappanuhzcor} has already shown the theorem for $\beta$ of the form $\gamma\cdot\omega$,
i.e.\ successors of additive principal numbers. Let $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k\in{\mathbb P}^{>\alphan}$.\\[2mm]
{\bf Case 1:} $k=1$. Then $\delta=\beta$, and setting $n_0:=n_0(\alphavecbe)$ and $\gamma:=\gamma(\alphavecbe)$ according to Definition \ref{odef}, we have
\[\kappa^\alvecbe=\nu^\alvecbe=\mathrm{o}(\alphavecbe)=\left\{\begin{array}{ll}
\beta&\mbox{if } n_0=0\\[2mm]
\mathrm{o}(\alphavecpr)\cdot\beta&\mbox{if }n_0>0,
\end{array}\right.\]
where $\alphavecpr:=\operatorname{h}_\be({\alphavec_{\restriction_{n_0-1}}}^\frown\gamma)$, and parts 1 and 5 of Lemma \ref{kdpmainlem} yield
\[\kappa^\alvec_{\betabar+1}\cdot\beta=\nu^\alvec_{\betabar+1}\cdot\beta=\left\{\begin{array}{ll}
\mathrm{o}(\alphavec)\cdot\beta&\mbox{if } \betabar=\alphan\\[2mm]
\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))\cdot\beta&\mbox{if }\betabar>\alphan.
\end{array}\right.\]
If $n_0=0$ the claim is immediate since $\mathrm{o}(\alphavec)\cdot\beta=\beta$ if $\betabar=\alphan$ and
$1<\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))<\mathrm{o}(\alphavecbe)=\beta$ if $\betabar>\alphan$. Now assume that $n_0>0$.\\[2mm]
{\bf Subcase 1.1:} $\betabar=\alphan$. This implies $n_0\le n$ and therefore $\alphavecpr <_\mathrm{\scriptscriptstyle{lex}}\alphavec<_\mathrm{\scriptscriptstyle{lex}}\alphavecbe$.
By the monotonicity of $\mathrm{o}$ we obtain
\[\mathrm{o}(\alphavecpr )<\mathrm{o}(\alphavecbe)=\mathrm{o}(\alphavecpr )\cdot\beta,\]
which implies the claim since $\beta\in{\mathbb M}$.\\[2mm]
{\bf Subcase 1.2:} $\betabar>\alphan$. This implies $\betabar\in{\mathbb E}^{>\alphan}$ and $\kappa^\alvec_{\betabar+1}\cdot\beta=\nu^\alvec_{\betabar+1}\cdot\beta=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))\cdot\beta$,
where $\operatorname{h}_\om(\alphavec^\frown\betabar)<_\mathrm{\scriptscriptstyle{lex}}\alphavecbe$.\\[2mm]
{\bf 1.2.1:} $n_0\le n$. Then we obtain
$\alphavecpr <_\mathrm{\scriptscriptstyle{lex}}\alphavec<_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_\om(\alphavec^\frown\betabar)<_\mathrm{\scriptscriptstyle{lex}}\alphavecbe$ and hence
\[\mathrm{o}(\alphavecpr )<\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))<\mathrm{o}(\alphavecbe)=\mathrm{o}(\alphavecpr )\cdot\beta,\]
which implies the claim.\\[2mm]
{\bf 1.2.2:} $n_0=n+1$. Then we have $\gamma\le\betabar$, $\alphavecpr=\operatorname{h}_\be(\alphavecga)$, and using Corollary \ref{hgacor} it follows that
\[\kappa^\alvecbe=\nu^\alvecbe=\mathrm{o}(\alphavecbe)=\mathrm{o}(\alphavecpr)\cdot\beta>\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))\ge\mathrm{o}(\operatorname{h}_\be(\alphavecga)),\]
which again implies the claim.\\[2mm]
{\bf Case 2:} $k>1$. Then we have $\betabar=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_{k-1}\ge\alphan$, $\delta=\beta_k$, and set $\betapr:=(1/\beta_1)\cdot\beta$ and $\betabarpr:=(1/\beta_1)\cdot\betabar$.\\[2mm]
{\bf Subcase 2.1:} $\beta_1=\alphan$. Using Lemma \ref{kdpmainlem} we obtain
\[\kappa^\alvec_{\betabar+1}\cdot\delta=\mathrm{o}(\alphavec)\cdot\betapr=\kappa^\alvecbe\]
and
\[\nu^\alvec_{\betabar+1}\cdot\delta=\mathrm{o}(\alphavec)\cdot\beta=\nu^\alvecbe.\]
{\bf Subcase 2.2:} $\beta_1>\alphan$. Then we have $\kappa^\alvecbe=\nu^\alvecbe$.
\\[2mm]
{\bf 2.2.1:} $\betabar\not\in{\mathbb E}^{>\alphan}$. Then by the involved definitions
\[\kappa^\alvec_{\betabar+1}\cdot\delta=\nu^\alvec_{\betabar+1}\cdot\delta=\mathrm{o}(\alphavec^\frown\betabar)\cdot\delta=\nu^\alvecbe=\kappa^\alvecbe.\]
{\bf 2.2.2:} $\betabar\in{\mathbb E}^{>\alphan}$. This implies $k=2$, $\delta=\beta_2$, and we see that
\[\kappa^\alvec_{\betabar+1}\cdot\delta=\nu^\alvec_{\betabar+1}\cdot\delta=\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))\cdot\delta.\]
In the case $\delta>\mu_{\beta_1}$ we have $\operatorname{h}_{\delta}(\alphavec^\frown\betabar)=\alphavec^\frown\betabar$ and hence obtain uniformly
\[\kappa^\alvecbe=\nu^\alvecbe=\mathrm{o}(\operatorname{h}_{\delta}(\alphavec^\frown\betabar))\cdot\delta.\]
By Corollary \ref{hgalem} we have $\operatorname{h}_{\delta}(\alphavec^\frown\betabar)\le_\mathrm{\scriptscriptstyle{lex}}\operatorname{h}_\om(\alphavec^\frown\betabar)$, hence
\[\mathrm{o}(\operatorname{h}_{\delta}(\alphavec^\frown\betabar))\le\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\betabar))<\mathrm{o}(\alphavecbe)=\mathrm{o}(\operatorname{h}_{\delta}(\alphavec^\frown\betabar))\cdot\delta,\]
which implies the claim since $\delta\in{\mathbb M}$.
\mbox{ }
$\Box$
\begin{lem}\label{hatmainlem}
Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$, $n>0$.
\begin{enumerate}
\item For all $\beta$ such that $\alphavecbe\in{\operatorname{T}}S$ we have
\[\mathrm{o}(\alphavecbe)<\mathrm{o}(\alphavec)\cdot\widehat{\alphan}.\]
\item For all $\gamma$ such that $\alphavecga\in{\cal R}S$ we have
\[\mathrm{o}(\operatorname{h}_\om(\alphavecga))<\mathrm{o}(\alphavecga)\cdot\gammahat.\]
\end{enumerate}
\end{lem}
{\bf Proof.} We prove the lemma by simultaneous induction on $\mathrm{lSeq}(\alphavecbe)$ and $\mathrm{lSeq}(\operatorname{h}_\om(\alphavecga))$, respectively.\\[2mm]
{\bf Ad 1.}
Let $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$ and $\betapr:=(1/\beta_1)\cdot\beta$.
Note that $\beta\le\mu_\aln<\alphahat$.
\\[2mm]
{\bf Case 1:} $\beta_1\le\alphan$. Immediate, since $\mathrm{o}(\alphavecbe)=\mathrm{o}(\alphavec)\cdot\beta$.
\\[2mm]
{\bf Case 2:} $k>1$ where $\beta_1\in{\mathbb E}^{>\alphan}$ and $\beta_2\le\mu_{\beta_1}$. Then we have
$\mathrm{o}(\alphavecbe)=\mathrm{o}(\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1))\cdot\betapr$ and apply the i.h.\ to $\alphavec^\frown\beta_1$,
which clearly satisfies $\mathrm{lSeq}(\alphavec^\frown\beta_1)<_\mathrm{\scriptscriptstyle{lex}}\mathrm{lSeq}(\alphavecbe)$.
By Corollaries \ref{hgacor} and \ref{ocontcor} we have
$\mathrm{o}(\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1))\le\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\beta_1))$,
and by the i.h., parts 1 (for $\alphavec^\frown\beta_1$) and 2 (for $\operatorname{h}_\om(\alphavec^\frown\beta_1)$) we obtain
\[\mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\beta_1))<\mathrm{o}(\alphavec^\frown\beta_1)\cdot\widehat{\beta_1}<\mathrm{o}(\alphavec)\cdot\widehat{\alphan},\]
where we have used that $\widehat{\beta_1}\le\widehat{\alphan}$ according to Lemma 3.17 of \cite{CWc}.
This implies the desired inequality.
\\[2mm]
{\bf Case 3:} Otherwise. Let $n_0:=n_0(\alphavecbe)$ and $\gamma:=\gamma(\alphavecbe)$ according to Definition \ref{odef}.
\\[2mm]
{\bf Subcase 3.1:} $n_0=0$. Immediate.
\\[2mm]
{\bf Subcase 3.2:} $n_0>0$. By definition we have
$\mathrm{o}(\alphavecbe)=\mathrm{o}(\operatorname{h}_{\beta_1}({\alphavec_{\restriction_{n_0-1}}}^\frown\gamma))\cdot\beta$.
\\[2mm]
{\bf 3.2.1:} $n_0\le n$. The monotonicity of $\mathrm{o}$ then yields
$\mathrm{o}(\alphavecbe)\le\mathrm{o}(\alphavec)\cdot\beta<\mathrm{o}(\alphavec)\cdot\widehat{\alphan}$.
\\[2mm]
{\bf 3.2.2:} $n_0=n+1$. Then $\gamma$ is the immediate predecessor of $\beta$ in ${\mathrm{ts}}^\alphan(\beta)$.
We apply the i.h.\ for parts 1 (to $\alphavecga$) and 2 (to $\operatorname{h}_\om(\alphavecga)$) and argue as in Case 2 to see that
$\mathrm{o}(\alphavecbe)=\mathrm{o}(\operatorname{h}_{\beta_1}(\alphavecga))\cdot\beta<\mathrm{o}(\alphavec)\cdot\widehat{\alphan}$.\\[2mm]
{\bf Ad 2.} We have $\operatorname{h}_\om(\alphavecga)=\alphavec^\frown\operatorname{sk}_\omega(\gamma)$, and setting
$\operatorname{sk}_\omega(\gamma)=:(\delta_1,\ldots,\delta_{l+1})$ we obtain a strictly increasing sequence of ${\mathbb E}$-numbers
$\deltavec:=(\delta_1,\ldots,\delta_l)=(\gamma,\mu_\ga,\mu_{\mu_\ga},\ldots)$ such that
$\delta_{l+1}=\mu_{\delta_l}\not\in{\mathbb E}^{>\delta_l}$. For $i=1,\ldots,l$ we have
\[\operatorname{h}_\om(\alphavec^\frown\deltavec_{\restriction_i})=\alphavec^\frown\deltavec^\frown\delta_{l+1},\]
\[\mathrm{lSeq}(\alphavec^\frown\deltavec_{\restriction_i})\le_\mathrm{\scriptscriptstyle{lex}}\mathrm{lSeq}(\operatorname{h}_\om(\alphavecga)),\]
and the i.h., part 1 (up to $\operatorname{h}_\om(\alphavecga)$), yields
\[\mathrm{o}(\alphavec^\frown\deltavec_{\restriction_{i+1}})<\mathrm{o}(\alphavec^\frown\deltavec_{\restriction_i})\cdot\widehat{\delta_i}.\]
Appealing to Lemma 3.17 of \cite{CWc} we obtain
\[\widehat{\delta_l}\le\widehat{\delta_{l-1}}\le\ldots\le\widehat{\delta_1}=\gammahat,\]
and finally conclude that $\mathrm{o}(\operatorname{h}_\om(\alphavecga))<\mathrm{o}(\alphavecga)\cdot\gammahat$.
\mbox{ }
$\Box$
\begin{cor}\label{kdpnuestimcor}
For all $\alphavecga\in{\cal R}S$ the ordinal $\mathrm{o}(\alphavecga)\cdot\gammahat$ is a strict upper bound of
\[\mathrm{Im}(\kappa^{\alphavecga}),\: \mathrm{Im}(\nu^{\alphavecga}),\: {\mathrm{dp}_\alvec}(\gamma),\: \mbox{and }
\nu^{\alphavecga}_{\mu_\gamma}+\kappa^{\alphavecga}_{\lambda_\gamma}+\mathrm{dp}_{\alphavecga}(\lambda_\gamma).\]
\end{cor}
{\bf Proof.} This directly follows from Lemmas \ref{kdpmainlem} and \ref{hatmainlem}.
\mbox{ }
$\Box$
\section{Revisiting tracking chains}\label{revisitsec}
\subsection{Preliminary remarks}\label{prelimsubsec}
Our preparations in the previous sections are almost sufficient to demonstrate that the characterization of
${\operatorname{C}}two$ provided in Section 7 of \cite{CWc} is elementary recursive. We first provide a brief
argumentation based on the previous sections showing that the structure ${\operatorname{C}}two$ is elementary recursive.
In the following subsections we will elaborate on the characterization of $\le_1$ and $\le_2$
within ${\operatorname{C}}two$, further illuminating the structure.
In Section 5 of \cite{CWc} the termination of the process of maximal extension (see Definition 5.2 of \cite{CWc})
follows by applying the ${\operatorname{l}^\tauvec}$-measure from the second step on, since clause 2.3.1 of Def.\ 5.2 in \cite{CWc} can
only be applied at the beginning of the process of maximal extension.
Lemma 5.4, part a), of \cite{CWc} is not needed in full generality; the ${\operatorname{l}^\tauvec}$-measure suffices, cf.\
Lemma 5.5 of \cite{CWc}.
The proof of Lemma 5.12 of \cite{CWc}, parts a), b), and c), actually proceeds by induction on the number
of 1-step extensions; an ``induction on ${\mathrm{cs}}pr(\alphavec)$ along $<_\mathrm{\scriptscriptstyle{lex}}$'' is not needed.
In Section 6 of \cite{CWc} the proof of Lemma 6.2 actually proceeds by induction on the length of the additive decomposition of $\alpha$. Definition 6.1 of \cite{CWc}, which assigns to each $\alpha<1^\infty$ its unique tracking chain ${\mathrm{tc}}(\alpha)$, involves the evaluation function $\mathrm{o}$
(in the guise of $\tilde{\cdot}$, cf.\ Definition 5.9 of \cite{CWc}), which we have shown to be elementary recursive.
\subsection{Pre-closed and spanning sets of tracking chains}
For the formal definition of tracking chains recall Definition 5.1 of \cite{CWc}. We will rely on its detailed terminology,
including the notation $\alphavec\subseteq\betavec$ when $\alphavec$ is an initial chain of $\betavec$.
The following definitions of pre-closed and spanning sets of tracking chains provide a generalization of the notion of maximal
extension, denoted by $\operatorname{me}$, cf.\ Definition 5.2 of \cite{CWc}.
\begin{defi}[Pre-closedness]\label{precldefi} Let $M\subseteq_\mathrm{fin}{\operatorname{T}}C$. $M$ is \emph{pre-closed} if and only if $M$
\begin{enumerate}
\item is \emph{closed under initial chains:} if $\alphavec\in M$ and $(i,j)\in\mathrm{dom}(\alphavec)$ then $\alphavec_{\restriction_{(i,j)}}\in M$,
\item is \emph{$\nu$-index closed:} if $\alphavec\in M$, $m_n>1$, $\alphacp{n,m_n}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k$ then
$\alphavec[\mu_{{\tau^\prime}}], \alphavec[\xi_1+\ldots+\xi_l]\in M$ for $1\le l\le k$,
\item \emph{unfolds minor $\le_2$-components:} if $\alphavec\in M$, $m_n>1$, and $\tau<\mu_{\tau^\prime}$ then:
\begin{enumerate}
\item[3.1.] ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},\mu_\tau)\in M$ in the case $\tau\in{\mathbb E}^{>{\tau^\prime}}$, and
\item[3.2.] otherwise $\alphavec^\frown(\varrho^{\tau^\prime}_\tau)\in M$, provided that $\varrho^{\tau^\prime}_\tau>0$,
\end{enumerate}
\item is \emph{$\kappa$-index closed:} if $\alphavec\in M$, $m_n=1$, and $\alphacp{n,1}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k$, then:
\begin{enumerate}
\item[4.1.] if $m_{n-1}>1$ and $\xi_1=\taucp{n-1,m_{n-1}}\in{\mathbb E}^{>\taucp{n-1,m_{n-1}-1}}$ then
${\alphavec_{\restriction_{n-2}}}^\frown(\alphacp{n-1,1},\ldots,\alphacp{n-1,m_{n-1}},\mu_{\xi_1})\in M$,
else ${\alphavec_{\restriction_{n-1}}}^\frown(\xi_1)\in M$, and
\item[4.2.] ${\alphavec_{\restriction_{n-1}}}^\frown(\xi_1+\ldots+\xi_l)\in M$ for $l=2,\ldots,k$,
\end{enumerate}
\item \emph{maximizes $\operatorname{me}$-$\mu$-chains:} if $\alphavec\in M$, $m_n\ge 1$, and $\tau\in{\mathbb E}^{>{\tau^\prime}}$, then:
\begin{enumerate}
\item[5.1.] if $m_n=1$ then ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\mu_\tau)\in M$, and
\item[5.2.] if $m_n>1$ and $\tau=\mu_{\tau^\prime}=\lambda_{\tau^\prime}$ then ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1}\ldots,\alphacp{n,m_n},\mu_\tau)\in M$.
\end{enumerate}
\end{enumerate}
\end{defi}
\noindent{\bf Remark.}
Pre-closure of some $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ is obtained by closing under clauses 1 -- 5 in this order once, and is therefore a finite process:
in clause 5 note
that $\mu$-chains are finite since the $\htarg{}$-measure of terms strictly decreases with each application of $\mu$. Note further that intermediate indices
are of the form $\lambda_{\tau^\prime}$, whence we have a decreasing $\operatorname{l}$-measure according to inequality \ref{lalen} in the remark following Definition \ref{Ttauvec}.
\begin{defi}[Spanning sets of tracking chains]\label{spanningdefi}
$M\subseteq_\mathrm{fin}{\operatorname{T}}C$ is \emph{spanning} if and only if it is pre-closed and closed under
\begin{enumerate}
\item[6.] \emph{unfolding of $\le_1$-components:} for $\alphavec\in M$, if $m_n=1$ and $\tau\not\in{\mathbb E}^{\ge{\tau^\prime}}$
(i.e.\ $\tau=\taucp{n,m_n}\not\in{\mathbb E}one$, ${\tau^\prime}={\tau_n^\star}$), let
\[\log((1/{\tau^\prime})\cdot\tau)=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k,\]
if otherwise $m_n>1$ and $\tau=\mu_{\tau^\prime}$ such that $\tau<\lambda_{\tau^\prime}$ in the case $\tau\in{\mathbb E}^{>{\tau^\prime}}$, let
\[\lambda_{\tau^\prime}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k.\]
Set $\xi:=\xi_1+\ldots+\xi_k$, unless $\xi>0$ and $\alphavec^\frown(\xi_1+\ldots+\xi_k)\not\in{\operatorname{T}}C$,
\footnote{This is the case if clause 6 of Def.\ 5.1 of \cite{CWc} does not hold.}
in which case we set $\xi:=\xi_1+\ldots+\xi_{k-1}$.
Suppose that $\xi>0$. Let $\alphavecpl$ denote the vector $\alphavec^\frown(\xi)$ if this is a tracking chain,
or otherwise the vector ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},\mu_\taucp{n,m_n})$.
\footnote{This case distinction, due to clause 5 of Def.\ 5.1 of \cite{CWc}, is missing in \cite{W17}.}
Then the closure of $\{\alphavecpl\}$ under clauses 4 and 5 is contained in $M$.
\end{enumerate}
\end{defi}
\noindent{\bf Remark.}
Closure of some $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ under clauses 1 -- 6 is a finite process since pre-closure is finite and since
the $\kappa$-indices added in clause 6 strictly decrease in $\operatorname{l}$-measure.
Semantically, the above notion of spanning sets of tracking chains and closure under clauses 1 -- 6 leaves
some redundancy, in the sense that certain
$\kappa$-indices could be omitted. This will be addressed elsewhere, since the current formulation is advantageous
for technical reasons.
\begin{defi}[Relativization]\label{reldefi} Let $\alphavec\in{\operatorname{T}}C\cup\{()\}$ and $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ be a set of tracking chains
that properly extend $\alphavec$.
$M$ is \emph{pre-closed above $\alphavec$} if and only if it is pre-closed
with the modification that clauses 1 -- 5 only apply when the respective resulting tracking chains $\betavec$
properly extend $\alphavec$.
$M$ is \emph{weakly spanning above $\alphavec$} if and only if $M$ is pre-closed above $\alphavec$ and closed under
clause 6.
\end{defi}
\begin{lem}\label{meclosurelem} If $M$ is spanning (weakly spanning above some $\alphavec$), then it is closed under $\operatorname{me}$
(closed under $\operatorname{me}$ for proper extensions of $\alphavec$).
\end{lem}
{\bf Proof.} This follows directly from the definitions involved. \mbox{ }
$\Box$
\subsection{Characterizing $\le_1$ and $\le_2$ in ${\operatorname{C}}two$}
The purpose of this section is to provide a detailed picture of the restriction of ${\cal R}two$ to $1^\infty$ on the basis of the
results of \cite{CWc}, and to conclude with the extraction of an elementary recursive arithmetical characterization of this
structure, given in terms of tracking chains, which we will refer to as ${\operatorname{C}}two$.
We begin with a few observations that follow from the results in Section 7 of \cite{CWc} and explain the concept of tracking chains.
The evaluations of all initial chains of some tracking chain $\alphavec\in{\operatorname{T}}C$ form a $<_1$-chain.
Evaluations of initial chains $\alphavec_{\restriction_{i,j}}$ where $(i,j)\in\mathrm{dom}(\alphavec)$ and $j=2,\ldots,m_i$ with fixed index
$i$ form $<_2$-chains.
Recall that indices $\alphacp{i,j}$ are $\kappa$-indices for $j=1$ and $\nu$-indices otherwise, cf.\ Definitions 5.1 and 5.9 of \cite{CWc}.
According to Theorem 7.9 of \cite{CWc}, an ordinal $\alpha<1^\infty$ is $\le_1$-minimal if and only if its tracking chain consists of
a single $\kappa$-index, i.e.\ if its tracking chain $\alphavec$ satisfies $(n,m_n)=(1,1)$. Clearly, the least $\le_1$-predecessor of
any ordinal $\alpha<1^\infty$ with tracking chain $\alphavec$ is $\mathrm{o}(\alphavec_{\restriction_{1,1}})=\kappa_\alphacp{1,1}$.
According to Corollary 7.11 of \cite{CWc} the ordinal $1^\infty$ is $\le_1$-minimal.
An ordinal $\alpha>0$ has a non-trivial $\le_1$-reach if and only if $\taucp{n,1}>{\tau_n^\star}$, hence in particular when $m_n>1$,
cf.\ condition 2 in Definition 5.1 of \cite{CWc}.
We now turn to a characterization of the \emph{greatest immediate $\le_1$-successor}, ${\operatorname{gs}}(\alpha)$,
of an ordinal $\alpha<1^\infty$ with tracking chain $\alphavec$.
Recall the notations $\rho_i$ and $\alphavec[\xi]$ from Definition 5.1 of \cite{CWc}. The largest $\alpha$-$\le_1$-minimal ordinal is the root
of the $\lambda$th $\alpha$-$\le_1$-component for $\lambda:=\rho_n\minusp1$. Therefore, if $\alpha$ has a non-trivial $\le_1$-reach, its greatest immediate
$\le_1$-successor ${\operatorname{gs}}(\alpha)$ has the tracking chain $\alphavec^\frown(\lambda)$, \emph{unless} we either
have $\taucp{n,m_n}<\mu_{{\tau^\prime_n}}\:\&\:\chi^{\tau^\prime_n}(\taucp{n,m_n})=0$, where
${\mathrm{tc}}({\operatorname{gs}}(\alpha))=\alphavec[\alphacp{n,m_n}+1]$, or $\alphavec^\frown(\lambda)$ is in conflict with either condition 5 of Definition 5.1 of \cite{CWc},
in which case we have ${\mathrm{tc}}({\operatorname{gs}}(\alpha))={\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},1)$, or condition 6
of Definition 5.1 of \cite{CWc}, in which case we have ${\mathrm{tc}}({\operatorname{gs}}(\alpha))=\alphavec_{\restriction_{i,j+1}}[\alphacp{i,j+1}+1]$.
\footnote{This condition is missing in \cite{W17}.}
In case $\alpha$ does not have any $<_1$-successor, we set ${\operatorname{gs}}(\alpha):=\alpha$.
$\alpha$ is $\le_2$-minimal if and only if for its tracking chain $\alphavec$ we have $m_n\le2$ and ${\tau_n^\star}=1$,
and $\alpha$ has a non-trivial $\le_2$-reach if and only if $m_n>1$ and $\taucp{n,m_n}>1$. Note that any $\alpha\in{\mathrm{Ord}}$ with
a non-trivial $\le_2$-reach is the proper supremum of its $<_1$-predecessors, hence $1^\infty$ does not possess any $<_2$-successor.
Iterated closure under the relativized notation system ${\operatorname{T}}t$ for $\tau=1^\infty, (1^\infty)^\infty, \ldots$ results in the infinite $<_2$-chain
through ${\mathrm{Ord}}$. Its $<_1$-root is $1^\infty$, the root of the ``master main line'' of ${\cal R}two$, which lies outside the core of ${\cal R}two$, i.e.\ outside $1^\infty$.
According to part (a) of Theorem 7.9 of \cite{CWc} $\alpha$ has a greatest $<_1$-predecessor if and only if it is not
$\le_1$-minimal and has a trivial $\le_2$-reach (i.e.\ does not have any $<_2$-successor). This is the case if
and only if either $m_n=1$ and $n>1$, where we have $\operatorname{pred}_1(\alpha)=\ordcp{n-1,m_{n-1}}(\alphavec)$, or $m_n>1$ and
$\taucp{n,m_n}=1$. In this latter case $\alphacp{n,m_n}$ is of a form $\xi+1$ for some $\xi\ge 0$, and using again the notation
from Definition 5.1 of \cite{CWc} we have
$\operatorname{pred}_1(\alpha)=\mathrm{o}(\alphavec[\xi])$ if $\chi^\taucp{n,m_n-1}(\xi)=0$, whereas $\operatorname{pred}_1(\alpha)=\mathrm{o}(\operatorname{me}(\alphavec[\xi]))$ in
the case $\chi^\taucp{n,m_n-1}(\xi)=1$.
Recall Definition 7.12 of \cite{CWc}, defining for $\alphavec\in{\operatorname{T}}C$ the notation $\alphavec^\star$ and the index pair $\operatorname{gbo}(\alphavec)=:(n_0,m_0)$,
which according to Corollary 7.13 of \cite{CWc} enables us to express the $\le_1$-reach ${\operatorname{lh}}(\alpha)$ of $\alpha:=\mathrm{o}(\alphavec)$,
cf.\ Definition 7.7 of \cite{CWc}, by
\begin{equation}\label{lhequation}{\operatorname{lh}}(\alpha)=\mathrm{o}(\operatorname{me}(\betavec^\star)),
\end{equation}
where $\betavec:=\alphavec_{\restriction_{n_0,m_0}}$, which in the case $m_0=1$ is equal to
$\mathrm{o}(\operatorname{me}(\betavec))=\ordcp{n_0,1}(\alphavec)+\mathrm{dp}_{\tilde{\tau}cp{n_0,0}}(\taucp{n_0,1})$
and in the case $m_0>1$ equal to $\mathrm{o}(\operatorname{me}(\betavec[\mu_\taucp{n_0,m_0-1}]))$.
Note that if $\operatorname{cml}(\alphavec^\star)$ does not exist we have
\[{\operatorname{lh}}(\alpha)=\mathrm{o}(\operatorname{me}(\alphavec^\star)),\]
and the tracking chain $\betavec$ of any ordinal $\beta$ such that $\mathrm{o}(\alphavec^\star)\le_1\beta$ is then an extension of $\alphavec$,
$\alphavec\subseteq\betavec$, as will follow from Lemma \ref{subresplem}.
The relation $\le_1$ can be characterized by
\begin{equation}\label{leoequation}\alpha\le_1\beta
\quad\Leftrightarrow\quad\alpha\le\beta\le{\operatorname{lh}}(\alpha),
\end{equation}
showing that $\le_1$ is a forest contained in $\le$, which \emph{respects} the ordering $\le$, i.e.\
if $\alpha\le\beta\le\gamma$ and $\alpha\le_1\gamma$ then $\alpha\le_1\beta$: indeed, $\alpha\le_1\gamma$ yields $\gamma\le{\operatorname{lh}}(\alpha)$, hence $\alpha\le\beta\le{\operatorname{lh}}(\alpha)$ and thus $\alpha\le_1\beta$. On the basis of Lemma \ref{meclosurelem}, Equation \ref{lhequation} has the following
\begin{cor}\label{leoclosurecor} Let $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ be spanning (weakly spanning above some $\alphavec\in{\operatorname{T}}C$) and $\betavec\in M$,
$\beta:=\mathrm{o}(\betavec)$.
Then \[{\mathrm{tc}}({\operatorname{lh}}(\beta))\in M,\] provided that $\mathrm{o}(\betavec_{\restriction_{\operatorname{gbo}(\betavec)}})$ is a proper extension of $\alphavec$
in the case that $M$ is weakly spanning above $\alphavec$.
\mbox{ }
$\Box$
\end{cor}
We now recall how to retrieve the greatest $<_2$-predecessor of an ordinal below $1^\infty$, if it exists, and the iteration of this procedure to obtain the maximum
chain of $<_2$-predecessors. Recall Definition 5.3 and Lemma 5.10 of \cite{CWc}.
Using the following proposition we can prove two other useful characterizations of the relationship $\alpha\le_2\beta$.
\begin{prop}\label{letwopredprop} Let $\alpha<1^\infty$ with ${\mathrm{tc}}(\alpha)=:\alphavec$.
We define a sequence ${\vec{\sigma}}\in{\cal R}S$ as follows.
\begin{enumerate}
\item If $m_n\le2$ and ${\tau_n^\star}=1$, whence $\alpha$ is $\le_2$-minimal according to Theorem 7.9 of \cite{CWc}, set ${\vec{\sigma}}:=()$. Otherwise,
\item if $m_n>2$, whence $\operatorname{pred}_2(\alpha)=\ordcp{n,m_n-1}(\alphavec)$ with base $\taucp{n,m_n-2}$ according to Theorem 7.9 of \cite{CWc}, we set
${\vec{\sigma}}:={\mathrm{cs}}(\alphavec_{\restriction_{n,m_n-2}})$,
\item and if $m_n\le2$ and ${\tau_n^\star}>1$, whence $\operatorname{pred}_2(\alpha)=\ordcp{i,j+1}(\alphavec)$ with base $\taucp{i,j}$ where $(i,j):=n^\star$, again according to Theorem 7.9 of \cite{CWc}, we set
${\vec{\sigma}}:={\mathrm{cs}}(\alphavec_{\restriction_{i,j}})$.
\end{enumerate}
Each $\sigma_i$ is then of a form $\taucp{k,l}$ where $1\le l< m_k$, $1\le k \le n$. The corresponding $<_2$-predecessor of $\alpha$ is $\ordcp{k,l+1}(\alphavec)=:\beta_i$.
We obtain sequences ${\vec{\sigma}}=(\sigma_1,\ldots,\sigma_r)$ and $\betavec=(\beta_1,\ldots,\beta_r)$ with $\beta_1<_2\ldots<_2\beta_r<_2\alpha$, where $r=0$ if $\alpha$ is $\le_2$-minimal,
so that $\operatorname{pred}s_2(\alpha)=\{\beta_1,\ldots,\beta_r\}$ and hence
$\beta<_2\alpha$ if and only if $\beta\in\operatorname{pred}s_2(\alpha)$, displaying that $\le_2$ is a forest contained in $\le_1$.\mbox{ }
$\Box$
\end{prop}
\begin{lem}\label{letwosuclem} Let $\alpha,\beta<1^\infty$ with tracking chains ${\mathrm{tc}}(\alpha)=\alphavec=(\alphavec_1,\ldots,\alphavec_n)$,
$\alphavec_i=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$, $1\le i\le n$, and ${\mathrm{tc}}(\beta)=\betavec=(\betavec_1,\ldots,\betavec_l)$,
$\betavec_i=(\betacp{i,1},\ldots,\betacp{i,k_i})$, $1\le i\le l$.
Assume further that $\alphavec\subseteq\betavec$ with
associated chain ${\vec{\tau}}$ and that $m_n>1$. Set $\tau:=\taucp{n,m_n-1}$. The following are equivalent:
\begin{enumerate}
\item $\alpha\le_2\beta$
\item $\tau\le\taucp{j,1}$ for $j=n+1,\ldots,l$
\item $\tilde{\tau}\mid\beta$.
\end{enumerate}
\end{lem}
{\bf Proof.} Note that $\alphavec\subseteq\betavec$ is a necessary condition for $\alpha\le_2\beta$, cf.\ Proposition \ref{letwopredprop}.
\\[2mm]
{\bf 1 $\Rightarrow$ 2:} Iterating the operation $(\cdot)^\prime$ (which for $\taucp{j,1}$ is $\taucppr{j,1}={\tau_j^\star}$) then reaches $\tau$.
This implies $\taucp{j,1}\ge\tau$ for $j=n+1,\ldots,l$.
\\[2mm]
{\bf 2 $\Rightarrow$ 1:} Iterating the procedure to find the greatest $\le_2$-predecessor, cf.\ Proposition \ref{letwopredprop},
from $\beta$ downward satisfies ${\tau_j^\star}\ge\tau$ at each $j\in(n,l]$, where we therefore have $(n,m_n-1)\le_\mathrm{\scriptscriptstyle{lex}} j^\star$.
\\[2mm]
{\bf 2 $\Rightarrow$ 3:} Note that according to Lemma 5.10(c) of \cite{CWc} we have \[{\mathrm{ts}}(\tilde{\tau})={\mathrm{cs}}(\alphavec_{\restriction_{n,m_n-1}}).\]
We argue by induction along $<_\mathrm{\scriptscriptstyle{lex}}$ on $(l,k_l)$. The case $k_l>1$ is trivial since $\beta$ is then a multiple of $\mathrm{o}(\betavec_{\restriction_{l,k_l-1}})$.
Assume now that $k_l=1$, so that $\taucp{l,1}\ge\tau$. Let $(u,v):=l^\star$, so $(n,m_n-1)\le_\mathrm{\scriptscriptstyle{lex}}(u,v)$ and $\tau_l^\star\ge\tau$. By Lemma 5.10(b) of \cite{CWc}
we have \[\tilde{\tau}cp{l,1}=\kappa^{\tilde{\tau}^\star_l}_\taucp{l,1}.\] If $(n,m_n-1)=(u,v)$ we obtain $\tilde{\tau}\mid\kappa^{\tilde{\tau}}_\taucp{l,1}$ since $\taucp{l,1}\ge\tau$.
By the i.h.\ we have $\tilde{\tau}\mid\mathrm{o}(\betavec_{\restriction_{u,v}})$, so $\tilde{\tau}\mid\tilde{\tau}cp{u,v}$, and since $\taucp{l,1}\ge\tau^\star_l$ we also have
$\tilde{\tau}\mid\tilde{\tau}cp{l,1}$, which implies that $\tilde{\tau}\mid\beta$, cf.\ Definition \ref{kappanuprincipals}.
\\[2mm]
{\bf 3 $\Rightarrow$ 2:} Assume there exists a maximal $j\in\{n+1,\ldots,l\}$ such that $\taucp{j,1}<\tau$. Then we have ${\tau_j^\star}\le\taucp{j,1}<\tau$, so
$(u,v):=j^\star<_\mathrm{\scriptscriptstyle{lex}}(n,m_n-1)$. Let ${\mathrm{ts}}(\tilde{\tau})=:(\sigma_1,\ldots,\sigma_s)$ and recall Theorem \ref{thma}.
\\[2mm]
{\bf Case 1:} $\taucp{j,1}\not\in{\mathbb E}^{>{\tau_j^\star}}$. Then it follows that $(j,1)=(l,k_l)$ since $j$ was chosen maximally.
In the case ${\tau_j^\star}=1$ we have $\taucp{j,1}<\sigma_1$ and hence $\tilde{\tau}cp{j,1}=\kappa_\taucp{j,1}<\tilde{\tau}$. Otherwise, i.e.\ ${\tau_j^\star}=\taucp{u,v}>1$, we
obtain ${\mathrm{ts}}(\tilde{\tau}cp{u,v})={\mathrm{cs}}(\alphavec_{\restriction_{u,v}})<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}(\tilde{\tau})$, so \[\tilde{\tau}cp{j,1}=\kappa^{{\mathrm{ts}}(\tilde{\tau}cp{u,v})}_\taucp{j,1}<\tilde{\tau}.\]
{\bf Case 2:} $\taucp{j,1}\in{\mathbb E}^{>{\tau_j^\star}}$. Then ${\mathrm{ts}}(\tilde{\tau}cp{j,1})={\mathrm{cs}}(\betavec_{\restriction_{j,1}})={\mathrm{cs}}(\betavec_{\restriction_{u,v}})^\frown\taucp{j,1}<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}(\tilde{\tau})$,
hence $\tilde{\tau}cp{j,1}<\tilde{\tau}$. In the case $\taucp{l,k_l}\in{\mathbb E}^{>\taucppr{l,k_l}}$ we see that ${\mathrm{ts}}(\tilde{\tau}cp{j,1})\subseteq{\mathrm{ts}}(\tilde{\tau}cp{l,k_l})$, hence
$\tilde{\tau}cp{l,k_l}<\tilde{\tau}$. The other cases are easier, cf.\ Case 1.
\mbox{ }
$\Box$
Applying the mappings ${\mathrm{tc}}$, cf.\ Section 6 of \cite{CWc}, and $\mathrm{o}$, which we verified in the present article to be elementary recursive,
cf.\ Subsection \ref{prelimsubsec},
we are now able to formulate the arithmetical characterization of ${\operatorname{C}}two$.
\begin{cor}\label{elemreccharcor} The structure ${\operatorname{C}}two$ is characterized elementary recursively by
\begin{enumerate}
\item $(1^\infty,\le)$ is the standard ordering of the classical notation system $1^\infty={\operatorname{T}}arg{1}\cap\Omega_1$, cf.\ \cite{W07a},
\item $\alpha\le_1\beta$ if and only if $\alpha\le\beta\le{\operatorname{lh}}(\alpha)$ where ${\operatorname{lh}}$ is given by equation \ref{lhequation}, and
\item $\alpha\le_2\beta$ if and only if ${\mathrm{tc}}(\alpha)\subseteq{\mathrm{tc}}(\beta)$ and condition 2 of Lemma \ref{letwosuclem} holds.\mbox{ }
$\Box$
\end{enumerate}
\end{cor}
\begin{cor}\label{lhtwoclscor}
Let $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ be spanning (weakly spanning above some $\alphavec\in{\operatorname{T}}C$). Then $M$ is closed under ${\operatorname{lh}}_2$.
\end{cor}
{\bf Proof.} This follows from Lemma \ref{meclosurelem} using Lemma \ref{letwosuclem}, cf.\ Corollaries 5.6 and 7.13 of \cite{CWc}.
\mbox{ }
$\Box$
Recall Definition 5.13 of \cite{CWc} which characterizes the standard linear ordering $\le$ on $1^\infty$
by an ordering $\le_\mathrm{TC}$ on the corresponding tracking chains.
We can formulate a characterization of the relation $\le_1$ (below $1^\infty$) in terms of the corresponding
tracking chains as well. This follows from an inspection of the ordering $\le_\mathrm{TC}$ in combination with the above
statements. Let $\alpha,\beta<1^\infty$ with tracking chains ${\mathrm{tc}}(\alpha)=\alphavec=(\alphavec_1,\ldots,\alphavec_n)$,
$\alphavec_i=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$, $1\le i\le n$, and ${\mathrm{tc}}(\beta)=\betavec=(\betavec_1,\ldots,\betavec_l)$,
$\betavec_i=(\betacp{i,1},\ldots,\betacp{i,k_i})$, $1\le i\le l$.
We have $\alpha\le_1\beta$ if and only if either $\alphavec\subseteq\betavec$ or there exists
$(i,j)\in\mathrm{dom}(\alphavec)\cap\mathrm{dom}(\betavec)$, $j<\min\{m_i,k_i\}$, such that
$\alphavec_{\restriction_{i,j}}=\betavec_{\restriction_{i,j}}$ and $\alphacp{i,j+1}<\betacp{i,j+1}$,
and we either have $\chi^\taucp{i,j}(\alphacp{i,j+1})=0\:\&\:(i,j+1)=(n,m_n)$ or
$\chi^\taucp{i,j}(\alphacp{i,j+1})=1\:\&\:\alpha\le_1\mathrm{o}(\operatorname{me}(\alphavec_{\restriction_{i,j+1}}))<_1\beta$.
Iterating this argument and recalling Lemma 5.5 of \cite{CWc} we obtain the following
\begin{prop}\label{gbocharprop} Let $\alpha$ and $\beta$ with tracking chains $\alphavec$ and $\betavec$, respectively, as above.
We have $\alpha\le_1\beta$ if and only if either $\alphavec\subseteq\betavec$ or
there exists the $<_\mathrm{\scriptscriptstyle{lex}}$-increasing chain of index pairs $(i_1,j_1+1),\ldots,(i_s,j_s+1)\in\mathrm{dom}(\alphavec)$ of maximal length $s\ge 1$ where $j_r\ge 1$ for $r=1,\ldots,s$,
such that $(i_1,j_1+1)\in\mathrm{dom}(\betavec)$, $\alphavec_{\restriction_{i_1,j_1}}=\betavec_{\restriction_{i_1,j_1}}$, $\alphacp{i_1,j_1+1}<\betacp{i_1,j_1+1}$,
\[\alphavec_{\restriction_{i_s,j_s+1}}\subseteq\alphavec\subseteq\operatorname{me}(\alphavec_{\restriction_{i_s,j_s+1}}),\]
and $\chi^\taucp{i_r,j_r}(\alphacp{i_r,j_r+1})=1$ at least whenever $(i_r,j_r+1)\not=(n,m_n)$.
Setting $\alpha_r:=\mathrm{o}(\alphavec_{\restriction_{i_r,j_r+1}})$ for $r=1,\ldots,s$ as well as $\alpha^+_r:=\mathrm{o}(\operatorname{me}(\alphavec_{\restriction_{i_r,j_r+1}}))$
for $r$ such that $\chi^\taucp{i_r,j_r}(\alphacp{i_r,j_r+1})=1$ and $\alpha^+_s:=\alpha$ if $\chi^\taucp{i_s,j_s}(\alphacp{i_s,j_s+1})=0$ we have
\[\alpha_1<_2\ldots<_2\alpha_s\le_2\alpha\le_1\alpha^+_s<_1\ldots<_1\alpha^+_1<_1\mathrm{o}(\betavec_{\restriction_{i_1,j_1+1}})\le_1\beta.\]
For $\beta={\operatorname{lh}}(\alpha)$ the cases $\alphavec\subseteq\betavec$ and $s=1$ with $(i_1,j_1+1)=(n,m_n)$ correspond to the situation
$\operatorname{gbo}(\alphavec)=(n,m_n)$, while otherwise we have $\operatorname{gbo}(\alphavec)=(i_1,j_1+1)$.\mbox{ }
$\Box$
\end{prop}
\noindent{\bf Remark.}
Note that the above index pairs characterize the relevant sub-maximal $\nu$-indices in the initial chains of $\alphavec$
with respect to $\betavec$, and leave implicit the intermediate steps of maximal ($\operatorname{me}$-) extension along the iteration.
Using Lemma 5.5 of \cite{CWc} we observe that the sequence $\taucp{i_1,j_1},\ldots,\taucp{i_s,j_s}$ of bases in the above proposition satisfies
\begin{equation}\label{baseseqeq}
\taucp{i_1,j_1}<\ldots<\taucp{i_s,j_s}\quad\mbox{ and }\quad\taucp{i_s,j_s}<\taucp{i,1} \quad\mbox{ for every }i\in(i_s,n],
\end{equation}
so that in the case where $\alpha<_1\beta$ and $\alphavec\not\subseteq\betavec$ we have $\alpha<_1{\operatorname{gs}}(\alpha)\le_1\beta$ with $\taucp{i_s,j_s}\mid\rho_n\minusp1$.
\begin{lem}\label{subresplem}
The relation $\subseteq$ of initial chain on ${\operatorname{T}}C$ respects the ordering $\le_\mathrm{TC}$ and hence also the characterization of $\le_1$ on ${\operatorname{T}}C$.
\end{lem}
{\bf Proof.}
Suppose tracking chains $\alphavec,\betavec,\gammavec\in{\operatorname{T}}C$ satisfy $\alphavec\le_\mathrm{TC}\betavec\le_\mathrm{TC}\gammavec$ and $\alphavec\subseteq\gammavec$.
The case where any two chains are equal is trivial. We may therefore assume that $\mathrm{o}(\alphavec)<\mathrm{o}(\betavec)<\mathrm{o}(\gammavec)$ as
$({\operatorname{T}}C,\le_\mathrm{TC})$ and $(1^\infty,<)$ are order-isomorphic. Since $\alphavec\subseteq\gammavec$ we have $\mathrm{o}(\alphavec)<_1\mathrm{o}(\gammavec)$, hence
$\mathrm{o}(\alphavec)<_1\mathrm{o}(\betavec)$ as $\le_1$ respects $\le$. Assume toward contradiction that $\alphavec\not\subseteq\betavec$,
whence by Proposition \ref{gbocharprop} there exists $(i,j+1)\in\mathrm{dom}(\alphavec)\cap\mathrm{dom}(\betavec)$ such that
$\alphavec_{\restriction_{i,j}}=\betavec_{\restriction_{i,j}}$ and $\alphacp{i,j+1}<\betacp{i,j+1}$.
But then for any $\deltavec\in{\operatorname{T}}C$ such that $\alphavec\subseteq\deltavec$ we have
\[\mathrm{o}(\deltavec)<\mathrm{o}(\alphavecpl)\le\mathrm{o}(\betavec)<\mathrm{o}(\gammavec),\]
where the tracking chain $\alphavecpl:=\alphavec_{\restriction_{i,j+1}}[\alphacp{i,j+1}+1]$
is the result of changing the terminal index $\alphacp{i,j+1}$ of $\alphavec_{\restriction_{i,j+1}}$ to $\alphacp{i,j+1}+1$.
\mbox{ }
$\Box$
We may now return to the issue of closure under ${\operatorname{lh}}$, which has a convenient sufficient condition on the basis of the following
\begin{defi} A tracking chain $\alphavec\in{\operatorname{T}}C$ is called \emph{convex} if and only if every $\nu$-index in $\alphavec$
is maximal, i.e.\ given by the corresponding $\mu$-operator.
\end{defi}
\begin{cor}\label{lhclscor}
Let $\alphavec\in{\operatorname{T}}C$ be convex and $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ be weakly spanning above $\alphavec$.
Then $M$ is closed under ${\operatorname{lh}}$.
\end{cor}
{\bf Proof.}
This is a consequence of Proposition \ref{gbocharprop}, Corollary \ref{leoclosurecor}, Lemma \ref{meclosurelem},
and equation \ref{lhequation}.
\mbox{ }
$\Box$
While it is easy to observe that in ${\cal R}two$ the relation $\le_1$ is a forest that respects $\le$ and the relation $\le_2$ is a forest contained in $\le_1$
which respects $\le_1$, we can now conclude that this also holds for the arithmetical formulations of $\le_1$ and $\le_2$ in ${\operatorname{C}}two$,
without referring to the results in Section 7 of \cite{CWc}.
\begin{cor} Consider the arithmetical characterizations of $\le_1$ and $\le_2$ on $1^\infty$.
The relation $\le_2$ respects $\le_1$, i.e.\ whenever $\alpha\le_1\beta\le_1\gamma<1^\infty$ and $\alpha\le_2\gamma$, then $\alpha\le_2\beta$.
\end{cor}
{\bf Proof.} In the case $\betavec\subseteq\gammavec$ this directly follows from Lemma \ref{letwosuclem},
while otherwise we additionally employ Proposition \ref{gbocharprop} and property \ref{baseseqeq}.
\mbox{ }
$\Box$
\section{Conclusion}
In the present article, the arithmetical characterization of the structure ${\operatorname{C}}two$, which was established in Theorem 7.9 and
Corollary 7.13 of \cite{CWc},
has been analyzed and shown to be elementary recursive. We have seen that finite isomorphism
types of ${\operatorname{C}}two$ (in its arithmetical formulation) are contained in the class of ``respecting forests'', cf.\ \cite{C01}, over the language
$(\le,\le_1,\le_2)$. In a subsequent article \cite{W} we will establish the converse by providing an effective assignment of isominimal realizations
in ${\operatorname{C}}two$ to arbitrary respecting forests. We will provide an algorithm to find pattern notations for the ordinals in ${\operatorname{C}}two$, and will
conclude that the union of isominimal realizations of respecting forests is indeed the core of ${\cal R}two$, i.e.\ the structure ${\operatorname{C}}two$ in its
semantical formulation based on $\Sigma_i$-elementary substructures, $i=1,2$.
As a corollary we will see that the well-quasi orderedness of respecting forests with respect to coverings, which was shown by Carlson in \cite{C16},
implies (in a weak theory) transfinite induction up to the proof-theoretic ordinal $1^\infty$ of ${\operatorname{KP}\!\ell_0}$.
We expect that the approaches taken here and in our treatment of the structure ${\cal R}onepl$, see \cite{W07b} and \cite{W07c}, will naturally
extend to an analysis of the structure ${\cal R}twopl$ and possibly to structures of patterns of higher order.
A subject of ongoing work is to verify that the core of ${\cal R}twopl$ matches the proof-theoretic strength of a limit of $\mathrm{KPI}$-models.
\end{document}
\begin{document}
\title{On certain maximal hyperelliptic curves related to Chebyshev polynomials}
\author{Saeed Tafazolian and Jaap Top
}
\date{}
\address{University of Campinas (UNICAMP)\\
Institute of Mathematics, Statistics and Computer Science (IMECC)\\
Rua S\'{e}rgio Buarque de Holanda, 651, Cidade Universit\'{a}ria\\
13083-859, Campinas, SP, Brazil}
\address{Johan Bernoulli Institute for Mathematics and Computer Science\\
Nijenborgh~9\\9747 AG Groningen\\ the Netherlands}
\email{[email protected]}
\email{[email protected]}
\begin{abstract}
We study hyperelliptic curves arising from Chebyshev polynomials. The aim of this paper is to characterize the pairs $(q,d)$ such that the
hyperelliptic curve ${\mathcal C}$ given by $y^2 = \varphi_{d}(x)$ is maximal
over the finite field ${\mathbb F}_{q^2}$ of cardinality $q^2$.
Here $\varphi_{d}(x)$ denotes the Chebyshev polynomial of degree $d$.
The same question is studied for the curves given by
$y^2=(x\pm 2) \varphi_{d}(x)$, and also for
$y^2=(x^2-4)\varphi_d(x)$. Our results generalize some of the statements in \cite{KJW}.
\end{abstract}
{\bf\em Keywords:} finite field, maximal curves, hyperelliptic curves,
Chebyshev polynomials, Dickson polynomials.
\emph{2000 Mathematics Subject Classification}: 11G20, 11M38, 14G15, 14H25.
\maketitle
\section{Introduction}
Let $p$ be an odd prime number, let $q$ be a power of $p$, and denote by ${\mathbb F}_{q^2}$ the finite field with $q^2$ elements.
Let $\mathcal{C}$ be a curve (complete, smooth, and
geometrically irreducible) of
genus $g \geq 0$ over the finite field ${\mathbb F}_{q^2}$. We call the curve ${\mathcal C}$ maximal over ${\mathbb F}_{q^2}$ if the number of rational points of ${\mathcal C}$ over ${\mathbb F}_{q^2}$ attains the Hasse-Weil upper bound, i.e.,
$$\#\mathcal{C}({\mathbb F} _{q^2}) = 1 + q^2 + 2g q.$$
Maximal curves not only have several intrinsic geometrical properties, but they have also been investigated in connection with Coding Theory:
in some cases the best known linear codes over finite fields of square order are obtained as one-point AG-codes from maximal curves.
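To illustrate the bound numerically: a maximal curve of genus $2$ over ${\mathbb F}_{49}$ (so $q=7$) has exactly $1+q^2+2gq=1+49+28=78$ rational points.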
In this note we consider hyperelliptic curves given by
one of the equations $y^2 = \varphi_{d}(x)$ or
$y^2=(x\pm 2) \varphi_{d}(x)$ or
$y^2=(x^2-4)\varphi_d(x)$ over ${\mathbb F}_{q^2}$. Here $\varphi_d(x)$ denotes
the Chebyshev polynomial of degree $d$ over
${\mathbb F}_p\subset{\mathbb F}_{q^2}$: recall that this is the reduction modulo $p$ of the unique polynomial $\phi(X) \in {\mathbb Z}[X]$ such that
\[
x^d+x^{-d}=\phi(x+x^{-1})
\]
in ${\mathbb Z}[x,x^{-1}]$.
\begin{remark}
\rm{Note that $\varphi_d(x)=D_d(x,1)$ with $D_d$ the $d$-th Dickson polynomial of the first kind with parameter $1$, defined recursively by
\[D_n(x,1) =xD_{n-1}(x,1)-D_{n-2}(x,1)\]
for $n \geq 2$, and $D_0(x,1) = 2$ and $D_1(x,1) =x$.
Dickson polynomials are related to the classical Chebyshev polynomials $T_n(x)$, defined for each integer $ n \geq 0$ by $T_n(x)= \cos ( n ~ \arccos x)$; indeed we have that $D_n(x,1)=2T_n(x/2).$ Because of this connection, these Dickson polynomials are also called Chebyshev polynomials (see \cite[Page 355]{LN}), a convention we follow here.
}
\end{remark}
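For small degrees the recursion gives, for instance,
\[
\varphi_2(x)=x^2-2,\qquad \varphi_3(x)=x^3-3x,\qquad \varphi_4(x)=x^4-4x^2+2,
\]
in accordance with the defining property: e.g.\ $x^2+x^{-2}=(x+x^{-1})^2-2$ and $x^3+x^{-3}=(x+x^{-1})^3-3(x+x^{-1})$.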
In Lemma~\ref{sep} we describe the pairs $(q,d)$ such that
$\varphi_d(x)$ is a separable polynomial (over ${\mathbb F}_q$).
Our main goal is to determine
for which pairs $(q,d)$ the curve in question, given by one of the equations
$y^2=\varphi_d(x)$ or $y^2=(x\pm 2)\varphi_d(x)$ or $y^2=(x^2-4)\varphi_d(x)$, is maximal over ${\mathbb F}_{q^2}$.
Throughout, when we write ``curve with (affine) equation $y^2=f(x)$'' or even $C\colon y^2=f(x)$ we mean that we
consider the smooth, complete curve birational (over the ground field) to the curve given by the affine equation. We have the following results.
\begin{thm}\label{maineven}
Let $d>0$ be an even integer and let $q$ be a prime power
with $\gcd(q,d)=1$. Then the hyperelliptic curve ${\mathcal C}$
given by
\[y^2=(x+2) \varphi_{d}(x)\] is maximal over ${\mathbb F}_{q^2}$ if and only if either $q \equiv -1~(\bmod~4d) ~\mbox{or} ~ q\equiv 2d+1 ~ (\bmod~4d).$
\end{thm}
\begin{remark}\label{rem1.2}{\rm
The definition of the polynomials $\varphi_d$ implies
that $\varphi_d(-x)=(-1)^d\varphi_d(x)$.
As a consequence, for $d$ even and $q$ odd, using
a primitive $4$-th root of unity $i\in{\mathbb F}_{q^2}$ one
obtains an isomorphism over ${\mathbb F}_{q^2}$ given by
$(u,v)\mapsto (-u,iv)$ from the curve $C$ described above,
to the curve with equation $y^2=(x-2)\varphi_{d}(x)$.
Hence for this curve, the same maximality criteria over
${\mathbb F}_{q^2}$ hold as those described in Theorem~\ref{maineven}.
Another property that is immediate from the definition of
the polynomials $\varphi_d(x)$ is that if $d=a\cdot b$ for
positive integers $a,b$, then $\varphi_d(x)=
\varphi_a(\varphi_b(x))$.
Applying this in the situation of Theorem~\ref{maineven}
with $a=d/2$ and $b=2$, one obtains $\varphi_d(x)=
\varphi_{d/2}(x^2-2)$.
Writing the equation for ${\mathcal C}$ as
$y^2=(x+2)\varphi_{d/2}(x^2-2)$, it already appears
in \cite[Proposition~3]{TTV}. In fact this will be
used in Section~\ref{five}.
}
\end{remark}
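As a concrete instance of Theorem~\ref{maineven} and of the composition property just mentioned, take $d=2$ and $q=7$. Then $\varphi_2(x)=x^2-2$ (and, for $d=4$, indeed $\varphi_4=\varphi_2\circ\varphi_2=(x^2-2)^2-2=x^4-4x^2+2$), the curve ${\mathcal C}\colon y^2=(x+2)(x^2-2)$ has genus $1$, and since $7\equiv -1~(\bmod~8)$ the theorem asserts that ${\mathcal C}$ is maximal over ${\mathbb F}_{49}$, with exactly $1+49+2\cdot 7=64$ rational points.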
The analog of Theorem~\ref{maineven}
for odd $d$ is as follows.
\begin{thm}\label{x2mainodd}
Let $d\geq 1$ be an odd integer and let $q$ be a prime power
with $\gcd(q,2d)=1$. Then the hyperelliptic curve ${\mathcal C}$
given by
\[
y^2=(x+2)\varphi_d(x)
\]
is maximal over ${\mathbb F}_{q^2}$ if and only if $q\equiv -1
~(\bmod~2d)$.
\end{thm}
For the hyperelliptic curve given by $y^2=\varphi_d(x)$ our strongest
results are obtained in the case that $d$ is even:
\begin{thm}\label{evencheb}
Suppose $d>0$ is an even integer and $q$ is a prime power
with $\gcd(q,d)=1$.
Then the following statements are equivalent.
\begin{itemize}
\item[{\rm (i)}] the hyperelliptic curve ${\mathcal C}_1\colon y^2= (x^2-4)\varphi_{d}(x)$ is maximal over ${\mathbb F}_{q^2}$;
\item[{\rm (ii)}] $q\equiv -1~(\bmod~4)$ and the hyperelliptic curve ${\mathcal C}\colon y^2=\varphi_d(x)$
is maximal over ${\mathbb F}_{q^2}$;
\item[{\rm (iii)}] $q\equiv -1~(\bmod~2d)$.
\end{itemize}
\end{thm}
For odd $d>0$ we have the following somewhat weaker result.
\begin{thm}\label{mainodd}
Let $d>0$ be an odd integer and let $q$ be a
prime power. Assume that $q$ is coprime to $2d$.
If $q \equiv -1~(\bmod~4d)$ or $q\equiv 2d+1~(\bmod~4d)$, then the curve ${\mathcal C}\colon y^2= \varphi_{d}(x)$ is maximal over ${\mathbb F}_{q^2}$,
and so is the curve ${\mathcal C}_1\colon y^2=(x^2-4)\varphi_d(x)$.
If both ${\mathcal C}$ and ${\mathcal C}_1$ are maximal over
${\mathbb F}_{q^2}$, then either
$q \equiv -1~(\bmod~4d)$ or $q\equiv 2d+1~(\bmod~4d)$.
\end{thm}
Based on considering small cases and experiments
using Magma (see also the discussion in Remark~\ref{rem4.3} and the special case based on a result of Kohel and Smith
which we discuss in Remark~\ref{kohelsmithresult}), we in fact have a stronger expectation for odd $d>0$:
\begin{con}\label{conj}
For any prime power $q$ and any odd $d>0$ with $\gcd(q,2d)=1$,
the following statements are equivalent.
\begin{itemize}
\item[{\rm (i)}] the hyperelliptic curve ${\mathcal C}_1\colon y^2= (x^2-4)\varphi_{d}(x)$ is maximal over ${\mathbb F}_{q^2}$;
\item[{\rm (ii)}] $q\equiv -1~(\bmod~4)$ and the hyperelliptic curve ${\mathcal C}\colon y^2=\varphi_d(x)$
is maximal over ${\mathbb F}_{q^2}$.
\end{itemize}
\end{con}
Clearly, if Conjecture~\ref{conj} holds, then a more complete
and simpler criterion follows (using Theorem~\ref{mainodd}), analogous
to Theorem~\ref{evencheb}.
In Sections~\ref{two} and \ref{three}
some necessary background is recalled and a
general necessary condition on the characteristic
is shown (Proposition~\ref{p2.3}) in order for
a hyperelliptic curve with
equation $y^2=xg(x^2)$ over ${\mathbb F}_{q^2}$ (of positive genus) to be maximal.
Section~\ref{proofs} contains the proofs
of most results announced in this introduction.
In Section~\ref{five} we prove Theorem~\ref{evencheb}
and discuss Conjecture~\ref{conj}. We finish with a small
application/illustration of Complex Multiplication theory (Proposition~\ref{CMresult}).
\section{Preliminaries}\label{two}
The \emph{zeta function} of a curve $\mathcal{C}$
over a finite field $k$ of cardinality $q$ is a
rational function of the form
\[Z(\mathcal{C}/k)=\dfrac{L(t)}{(1-t)(1-qt)},\]
\noindent where $L(t) \in {\mathbb Z}[t]$ is a polynomial of degree $2g=2\cdot\mbox{genus}({\mathcal C})$
with integral coefficients (see \cite[Chapter V]{St}).
We call this polynomial the $L$-\textit{polynomial} of $\mathcal{C}$ over $k$.
We recall the following fact about maximal curves which can be deduced by extending the argument on p. 182 of \cite{St}.
\begin{pro}\label{p2.1}
Suppose $q$ is a square. For a smooth projective curve $\mathcal{C}$
of genus $g$, defined over $ k={\mathbb F}_{q}$, the following conditions
are equivalent:
\begin{itemize}
\item $\mathcal{C}$ is maximal over ${\mathbb F}_{q}$.
\item $L(t)= (1+\sqrt{q}t)^{2g} $.
\end{itemize}
\end{pro}
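For orientation, recall that if $L(t)=\prod_{i=1}^{2g}(1-\alpha_i t)$ with the $\alpha_i$ the Frobenius eigenvalues, then $\#\mathcal{C}({\mathbb F}_q)=q+1-\sum_i\alpha_i$; since each $\alpha_i$ has absolute value $\sqrt{q}$, the equality $\#\mathcal{C}({\mathbb F}_q)=q+1+2g\sqrt{q}$ forces $\alpha_i=-\sqrt{q}$ for all $i$, i.e.\ $L(t)=(1+\sqrt{q}\,t)^{2g}$, and conversely.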
A common method to construct (explicit) maximal curves is via the following remark, which, although commonly attributed to J-P. Serre (cf.\ Lachaud \cite{La}), is implicitly already contained in Tate's seminal paper \cite{Tate}:
\begin{remark}\label{r1}
\rm{Given a non-constant morphism $f\colon\mathcal{C} \longrightarrow
\mathcal{D}$ defined over
the finite field $k$, the $L$-polynomial of $\mathcal{D}$
over $k$ divides
the one of $\mathcal{C}$ over $k$. Hence a subcover $\mathcal{D}$ over ${\mathbb F}_{q^2}$ of a
maximal curve $\mathcal{C}$ over ${\mathbb F}_{q^2}$ is also maximal.}
Many examples of maximal curves have been found in this way starting from `standard' known ones.
In various cases this is done including explicit equations for the subcover, in other cases by
merely identifying appropriate subfields (and the genus of the corresponding curve) of
a function field ${\mathbb F}_{q^2}(\mathcal{C})$ of a maximal $\mathcal{C}/{\mathbb F}_{q^2}$.
From the abundant literature on this, we mention \cite{GSX}, \cite{AQ}, \cite{CO}, \cite{ABB},
\cite{FF2}, \cite{JPAA}, \cite{SFN}, \cite{GQZ}, \cite{GMQZ}, \cite{BM}, \cite{ASF}.
In the present paper we work in some sense `the other way around': the curves we study are
indeed subcovers $\mathcal{D}$ (by a morphism of degree $2$) of curves $\mathcal{C}$ for which maximality
properties are precisely known. By identifying the $L$-polynomial of $\mathcal{C}$ essentially in terms of
that of $\mathcal{D}$ in the cases at hand, which is done by `understanding' up to isogeny
the Jacobian variety of $\mathcal{C}$ in terms of that of $\mathcal{D}$,
we obtain necessary (and not only sufficient) maximality criteria for $\mathcal{D}$.
\end{remark}
The following result yields a necessary condition
for maximality of a special type of hyperelliptic
curves.
\begin{pro}\label{p2.3}
Let $q=p^n$ be the cardinality of a finite field
${\mathbb F}_q$ of characteristic $p>2$.
Suppose $g(x)\in{\mathbb F}_q[x]$ is separable
of degree $d\geq 1$, and $g(0)\neq 0$. Let ${\mathcal C}$ be the hyperelliptic curve
over ${\mathbb F}_q$ with equation
$y^2=xg(x^2)$.\\
If the Jacobian of ${\mathcal C}$ is supersingular, then $p\equiv 3~(\bmod~4)$.\\
As a consequence, if ${\mathcal C}$ is maximal over ${\mathbb F}_{q^2}$, then
$p\equiv 3~(\bmod~4)$.
\end{pro}
\begin{proof}
The assumptions imply that ${\mathcal C}$ is a curve
of genus $d\geq 1$.
Let $i$ be a primitive $4$-th root of unity in
some extension of ${\mathbb F}_q$. The curve ${\mathcal C}$ admits the automorphism $\iota$ given by $\iota(x,y)=(-x,iy)$.
The action of $\iota$ on the vector space of regular
$1$-forms on ${\mathcal C}$ is diagonalizable, and has as
eigenvalues $\pm i$.
We claim that maximality of ${\mathcal C}$ over ${\mathbb F}_{q^2}$
implies that the characteristic $p$ of ${\mathbb F}_q$
satisfies $p\equiv 3(\bmod~4)$. Indeed, if
$p\equiv 1(\bmod~4)$ then take integers $a,b$
such that $p=a^2+b^2$.
As endomorphisms of ${\mathcal J}=\mbox{Jac}({\mathcal C})$ this yields
a factorization $p=(a+b\iota)(a-b\iota)$.
Since multiplication by $p$ is inseparable, at least one
of the endomorphisms $a\pm b\iota$ is inseparable as well.
However, it is not possible that both are inseparable since
that would imply the sum $2a$ to be inseparable as well,
which clearly is not the case.
This means that after changing the sign of $b$ if necessary,
we have that $a+b\iota$ is separable. Hence its kernel
${\mathcal J}[a+b\iota](\overline{{\mathbb F}_q})$ is a nontrivial
subgroup of the $p$-torsion of ${\mathcal J}$, which shows that ${\mathcal J}$ is not supersingular.
Since the $p$-torsion of the Jacobian of any maximal
curve over ${\mathbb F}_{q^2}$ is trivial, ${\mathcal C}$ cannot be maximal over any finite field
of characteristic $\equiv 1(\bmod~4)$.
So we have $p\equiv 3(\bmod~4)$.
\end{proof}
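(For a minimal concrete instance of the factorization used in the proof: for $p=5$ one may take $a=1$, $b=2$; since $\iota^2$ is the hyperelliptic involution, which acts as $-1$ on ${\mathcal J}$, we indeed have $(1+2\iota)(1-2\iota)=1-4\iota^2=5$ in $\mbox{End}({\mathcal J})$.)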
\begin{remark}{\rm
The assumption that a curve ${\mathcal C}$ of genus $g$ is maximal over ${\mathbb F}_{q^2}$
implies that the $L$-polynomial of ${\mathcal C}$ over ${\mathbb F}_q$
(which has as zeros square roots of the zeros of the
$L$-polynomial of ${\mathcal C}$ over ${\mathbb F}_{q^2}$) must be
$(1+qt^2)^{g}$. In the situation described in
Proposition~\ref{p2.3} this means that if $q$
is a square, then the quartic
twist of ${\mathcal C}$ over ${\mathbb F}_q$ corresponding to the
cocycle $F_q\mapsto\iota$ (with $F_q$ the $q$-th power
Frobenius) has $L$-polynomial $(1-qt^2)^{g}$.
In case $q$ is not a square, the analogous cocycle
results in a twist that has (again) $L$-polynomial $(1+qt^2)^{g}$.
}\end{remark}
We finish this section with a preliminary result generalizing parts of
\cite[Theorem~6.1(b)]{GS} and \cite[Theorem~7.2(a)]{ASF}
(in fact it is based on essentially the same ideas
already present in \cite{GS}).
\begin{lem}\label{sep}
For $d\in{\mathbb Z}_{>0}$ and $q$ a prime power,
the Chebyshev polynomial $\varphi_d$ considered
over the finite field ${\mathbb F}_q$ of cardinality $q$
is separable if and only if $\gcd(q,2d)=1$ or $d=1$.
\end{lem}
\begin{proof}
Consider the morphism $\alpha\colon {\mathbb P}^1\to{\mathbb P}^1 $ given (in terms of local
coordinates) by $\alpha(x)=x^d+x^{-d}$.
One factors $\alpha=\beta\circ \gamma$ with
$\gamma\colon{\mathbb P}^1\to{\mathbb P}^1$ given by
$\gamma(x)=x^d$ and
$\beta\colon{\mathbb P}^1\to{\mathbb P}^1$ by $\beta(x)=x+x^{-1}$.
Regarding $\varphi_d$ as the morphism ${\mathbb P}^1\to{\mathbb P}^1$
given by $x\mapsto\varphi_d(x)$, by
definition $\alpha=\beta\circ\gamma=\varphi_d\circ\beta$. We study separability of the
{\em polynomial} $\varphi_d$, which means we examine
whether the {\em morphism} $\varphi_d$ is separable
and moreover has no ramification points over
$0\in{\mathbb P}^1$. To this end, first consider separability
(and ramification) of the two morphisms $\gamma$ and $\beta$.
Clearly $\beta$ is a separable morphism of degree $2$, in every characteristic. It is only ramified in
$\pm 1$, and this is one point in characteristic $2$
and two points in every other characteristic.
The morphism $\gamma$ is inseparable precisely when
$\gcd(q,d)\neq 1$. If this holds then also
$\alpha=\beta\circ\gamma$ is inseparable. As
a consequence, so is $\varphi_d$, since
$\alpha=\varphi_d\circ\beta$ and $\beta$ is separable. So
\[\gcd(q,d)\neq 1\quad\Rightarrow\quad \mbox{the~polynomial}
\;\; \varphi_d\;\; \mbox{is~inseparable~over}\;\;{\mathbb F}_q.\]
Next, assume $\gcd(q,d)=1$ so that $\gamma$
is separable (over ${\mathbb F}_q$). Then $\alpha$ and hence
$\varphi_d$ are separable morphisms as well.
To obtain the ramification points of $\varphi_d$ in
this case, we compute the ramification of
$\alpha=\beta\circ\gamma$. First consider the case
that $q$ is odd. Then $\beta$ is only ramified
in $\pm 1$ (both points with ramification index
$e_{\pm1}=2$ and $\beta(\pm 1)=\pm 2$).
Moreover $\gamma$ is only ramified in $0$ and in
$\infty$ (both with ramification index $d$)
and $\gamma^{-1}(\pm 1)$ consists of the
$2d$ pairwise distinct solutions of $x^{2d}=1$.
Since $\gamma^{-1}(0)=0$ and $\gamma^{-1}(\infty)
=\infty$, the conclusion is that the total map
$\alpha$ is ramified only in the following points:
$\{0,\infty\}$, each with ramification index $d$,
and in the $2d$-th roots of unity, each with
ramification index $2$. Moreover the image of these
points under $\alpha$ is $\{\infty,\pm 2\}$.
Since $0\not\in\{\infty,\pm 2\}$ and
$\alpha=\varphi_d\circ\beta$, one concludes
\[q\;\;\mbox{is~odd~and}\;\;\gcd(q,d)=1
\quad\Rightarrow\quad \mbox{the~polynomial}
\;\; \varphi_d\;\; \mbox{is~separable~over}\;\;{\mathbb F}_q.
\]
Now consider the case $2|q$ and $\gcd(q,d)=1$.
This implies that the map $\alpha$ is separable
over ${\mathbb F}_q$. As in the previous case, the ramification
of $\alpha$ is easily found using $\alpha=\beta\circ\gamma$. Now $\beta$ is only ramified at $1$,
with $\beta(1)=0$ (ramification index $2$).
We conclude that $\alpha$ is ramified only in the
following points: $\{0,\infty\}$, each with ramification
index $d$, and in the $d$-th roots of unity,
each with ramification index $2$.
The image of these points under $\alpha$ is
$\{\infty,0\}$.
As $\alpha^{-1}(0)$ consists of the $d$-th roots of unity and only $1\in\alpha^{-1}(0)$ is a ramification
point of $\beta$, the decomposition $\alpha=
\varphi_d\circ\beta$ shows that whenever $d>1$
then $\varphi_d\colon{\mathbb P}^1\to{\mathbb P}^1$ is ramified in
some points over $0$ (namely, in
$\zeta+\zeta^{-1}$ with $\zeta\neq 1$ satisfying
$\zeta^d=1$). We showed:
\[
2|q\;\;\mbox{and}\;\;\gcd(q,d)=1\;\;\mbox{and}
\;\; d>1
\quad\Rightarrow\quad \mbox{the~polynomial}
\;\; \varphi_d\;\; \mbox{is~inseparable~over}\;\;{\mathbb F}_q.
\]
Since the case $d=1$ (so $\varphi_d(x)=x$) is trivial,
the lemma follows.
\end{proof}
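As a simple illustration of Lemma~\ref{sep} (not needed in what follows), take $d=3$. From $\varphi_3(x+x^{-1})=x^3+x^{-3}$ one finds $\varphi_3(x)=x^3-3x$. In characteristic $3$ this polynomial equals $x^3$, and indeed the morphism $x\mapsto x^3$ is inseparable. In characteristic $2$ it equals
\[
x^3+x=x(x+1)^2,
\]
so the morphism is separable but the polynomial has the repeated root $1=\zeta+\zeta^{-1}$ (with $\zeta$ a primitive cube root of unity), which is exactly the ramification over $0$ found in the proof. For $\gcd(q,6)=1$ the polynomial $x^3-3x$ is separable, in accordance with the lemma.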
\section{The curves $y^2=x^{2d+1}+x$ and $y^2=x^{2d}+1$}\label{three}
Let $d\geq 1$ be an integer, and let $q$ be a
prime power such that $\gcd(q,2d)=1$.
We consider the complete non-singular curve ${\mathcal X}$
over ${\mathbb F}_{q^2}$ birational to the plane affine curve given by
$$
y^2=x^{2d+1}+x\, .
$$
The condition on the pair $(q,d)$ implies that ${\mathcal X}$
has genus $d$.
The following result is crucial for us (see \cite[Theorem 1]{FF2}).
\begin{thm}\label{main}
The smooth complete hyperelliptic curve ${\mathcal X}$ given by
\[y^2=x^{2d+1}+x\]
is maximal over ${\mathbb F}_{q^2}$ if and only if either
$q \equiv -1~(\bmod~4d)~\mbox{or}~
q\equiv 2d+1~(\bmod~4d).$
\end{thm}
Now let ${\mathcal Y}$ be the complete non-singular curve
over ${\mathbb F}_q$ given by $y^2=x^{2d}+1$.
Note that the condition $\gcd(q,2d)=1$ implies
that ${\mathcal Y}$ has genus $d-1$.
One more result which will be used in our proofs is recalled from \cite{JPAA}:
\begin{thm}\label{main2}
The smooth complete hyperelliptic curve ${\mathcal Y}\colon y^2=x^{2d}+1$ is maximal over ${\mathbb F}_{q^2}$ if and only if
$q\equiv -1 (\bmod~2d)$.
\end{thm}
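To illustrate these two results in the smallest nontrivial case, take $d=2$ and $q=7$ (so that $\gcd(q,2d)=1$). Since $7\equiv -1~(\bmod~8)$, Theorem~\ref{main} shows that the genus~$2$ curve ${\mathcal X}\colon y^2=x^5+x$ is maximal over ${\mathbb F}_{49}$, and since $7\equiv -1~(\bmod~4)$, Theorem~\ref{main2} shows that the genus~$1$ curve ${\mathcal Y}\colon y^2=x^4+1$ is maximal over ${\mathbb F}_{49}$. Hence these curves have
\[
49+1+2\cdot 2\cdot 7=78\qquad\mbox{and}\qquad 49+1+2\cdot 1\cdot 7=64
\]
rational points over ${\mathbb F}_{49}$, respectively.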
\section{Hyperelliptic curves from Chebyshev
polynomials}\label{proofs}
In this section we prove Theorems~\ref{maineven}, \ref{x2mainodd}, \ref{mainodd}, and we present and prove
some preliminary results which will be used in the proof of Theorem~\ref{evencheb}.
\subsection*{Case ${d}$ even and ${v^2=(u+2)\varphi_d(u)}$}
\begin{proof} (of Theorem~\ref{maineven}).
Take $d>0$ an even integer, and let $q$ be a prime power with $\gcd(d,q)=1$. We will
show that the curve ${\mathcal C}$ with affine
equation
\[v^2=(u+2) \varphi_{d}(u)\] is maximal over ${\mathbb F}_{q^2}$ if and only if the curve ${\mathcal X}$ introduced in
Section~\ref{three} (with equation $y^2=x^{2d+1}+x$)
is maximal over ${\mathbb F}_{q^2}$. Theorem~\ref{maineven} is then a
consequence of Theorem~\ref{main}.
The main idea is to decompose the Jacobian
variety ${\mathcal J}({\mathcal X})$ up to isogeny over ${\mathbb F}_{q^2}$.
Let $\tau\in\mbox{Aut}({\mathcal X})$ be the involution
given by $\tau(x,y)=(1/x,y/x^{d+1})$.
The quotient of ${\mathcal X}$ by $\tau$ is the curve ${\mathcal C}={\mathcal X}/<\tau>$ with equation
$$v^2=(u+2) \varphi_{d}(u);$$
indeed, the functions $u=x+1/x$ and $v=y(x+1)x^{-1-d/2}$ generate the subfield of $\tau$-invariants
in the function field of ${\mathcal X}$, as is seen as follows.
Write ${\mathbb F}_p(x,y)$ for the function field of ${\mathcal X}$
over the prime field ${\mathbb F}_p$ of ${\mathbb F}_q$.
We have the inclusions of fields (where the numbers
describe the degree of the given extensions)
\[
\begin{array}{ccc}
{\mathbb F}_p(x,y) & {\supset} & {\mathbb F}_p(u,v)\\
\stackrel{ \scriptstyle \phantom{2}}{\cup{\scriptstyle 2}} && \cup{\scriptstyle 2} \\
{\mathbb F}_p(x) & \stackrel{{\scriptstyle 2}}{\supset}
& {\mathbb F}_p(u)
\end{array}
\]
Since $[{\mathbb F}_p(x,y):{\mathbb F}_p(x,y)^{<\tau>}]=2$ and
$u,v\in{\mathbb F}_p(x,y)^{<\tau>}$, one has ${\mathbb F}_p(x,y)^{<\tau>}={\mathbb F}_p(u,v)$.
Moreover, $u,v$ satisfy
\[v^2=(x^{2d+1}+x)(x+2+x^{-1})x^{-d-1}=(u+2)\varphi_d(u).\]
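For completeness, the $\tau$-invariance of $v$ can also be checked directly: since $\tau(x,y)=(1/x,y/x^{d+1})$,
\[
\tau^{*}(v)=\frac{y}{x^{d+1}}\cdot\Big(\frac{1}{x}+1\Big)\cdot\Big(\frac{1}{x}\Big)^{-1-d/2}
=\frac{y(1+x)}{x^{d+2}}\cdot x^{1+d/2}=y(x+1)\,x^{-1-d/2}=v ,
\]
while $\tau^{*}(u)=1/x+x=u$.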
We have the basis
$$\{ \omega_j:=\frac{x^{j-1} dx}{y} \;|\; 1 \leq j \leq d \}$$
for the space of regular differentials on ${\mathcal X}$. A basis for the differentials invariant under $\tau$ is $$\{ \omega_j-\omega_{d-j+1}\;|\; 1 \leq j \leq d/2 \},$$
which also generate the pull-backs of the regular differentials on ${\mathcal C}$ (note that since $d$ is even and we assume
$\gcd(q,d)=1$, also $\gcd(q,2d)=1$, so Lemma~\ref{sep} implies that
$\varphi_d$ is separable over ${\mathbb F}_q$. Also,
$\varphi_d(-2)=\varphi_d((-1)+(-1))=(-1)^d+(-1)^d=2\neq 0$, so $(u+2)\varphi_d(u)$ is separable of odd degree $d+1$ and ${\mathcal C}$ has genus $d/2$).
Let $\iota$ be the hyperelliptic involution on ${\mathcal X}$,
so $\iota(x,y)=(x,-y)$.
The quotient of ${\mathcal X}$ by $\tau\iota$ (this map is an
involution defined over the prime field) is the curve ${\mathcal C}_1={\mathcal X}/<\tau\iota>$ with equation
$$\eta^2=(\xi -2) \varphi_{d}(\xi);$$
indeed, the invariants under $\tau\iota$ in the function field
of ${\mathcal X}$ are generated by $\xi:=x+x^{-1}$ and
$\eta:=y(x-1)x^{-1-d/2}$. These functions satisfy
\[\eta^2=(x^{2d+1}+x)(x-2-x^{-1})x^{-d-1}=
(\xi-2)\varphi_d(\xi).\]
A basis for the differentials invariant under $\tau\iota$ is
$$\{ \omega_j+\omega_{d-j+1}| 1 \leq j \leq d/2 \},$$
which also generate the pull-backs to ${\mathcal X}$ of the regular differentials on ${\mathcal C}_1$.
Fixing a primitive $4$-th root of unity $i\in{\mathbb F}_{q^2}$,
the map
$(u,v)\mapsto (-u, iv)$ yields an isomorphism
${\mathcal C}\cong {\mathcal C}_1$ defined over ${\mathbb F}_{q^2}$. The discussion above
shows, with $\sim$ denoting isogeny defined over
${\mathbb F}_{q^2}$, that
\[{\mathcal J}({\mathcal X}) \sim {\mathcal J}({\mathcal C})\times{\mathcal J}({\mathcal C}_1)\cong {\mathcal J}({\mathcal C})^2.\]
As a consequence $L_{{\mathcal X}}(t)=L_{{\mathcal C}}(t)^2$
with $L$ denoting an $L$-polynomial over ${\mathbb F}_{q^2}$. Now
Proposition \ref{p2.1} implies that the curve ${\mathcal X}$ is maximal if and only if the curve ${\mathcal C}$ is maximal. This completes the proof.
\end{proof}
\begin{remark}
\rm{Theorem~\ref{maineven} generalizes a part of \cite[Proposition 6]{KJW}.}
The decomposition up to isogeny of the Jacobian variety ${\mathcal J}({\mathcal X})$ as a product of Jacobians of quotient
curves, can also be obtained using results of Kani and Rosen \cite{Kani-Rosen}.
There are various examples in the literature illustrating this technique; we refer to
\cite[\S~3.1.1]{Paulhus} and
\cite[p.~36]{Soomro} for situations very similar to the ones discussed in the present paper.
\end{remark}
\subsection*{Case $d$ odd and $v^2=(u+2)\varphi_d(u)$}
\begin{proof} (of Theorem~\ref{x2mainodd}).
This is very similar to the proof of Theorem~\ref{maineven}.
Take $d>0$ an odd integer, and let $q$ be a prime power with
$\gcd(q,2d)=1$.
We will
show that the curve ${\mathcal C}$ with affine
equation
\[v^2=(u+2) \varphi_{d}(u)\]
is maximal over ${\mathbb F}_{q^2}$ if and only if the curve ${\mathcal Y}\colon y^2=x^{2d}+1$ is maximal over ${\mathbb F}_{q^2}$. Theorem~\ref{x2mainodd} is then a
consequence of Theorem~\ref{main2}.
Let $\sigma$ be the involution on ${\mathcal Y}$ defined by $\sigma(x,y)=(1/x,y/x^d)$.
The quotient of ${\mathcal Y}$ by $\sigma$ is the hyperelliptic curve
${\mathcal C}$. Indeed, the functions $u=x+x^{-1}$ and $v=y(1+x)x^{-(d+1)/2}$
generate the field of functions invariant under $\sigma$,
and one computes
\[ v^2=y^2\cdot x^{-(d+1)}\cdot(x+1)^2=(x^d+x^{-d})(x+2+x^{-1})=(u+2)\varphi_d(u).\]
Multiplying $\sigma$ by the hyperelliptic involution on ${\mathcal Y}$
one obtains another quotient curve which we denote by ${\mathcal C}_1$. The invariant functions
under the new involution are generated by $u=x+x^{-1}$ and $w=y(1-x)x^{-(d+1)/2}$.
They satisfy $w^2=(u-2)\varphi_d(u)$.
The map $(u,w)\mapsto (-u,w)$ defines an isomorphism ${\mathcal C}_1\cong {\mathcal C}$.
Analogous to the previous proof one concludes
$L_{{\mathcal Y}}(t)=L_{{\mathcal C}}(t)\cdot L_{{\mathcal C}_1}(t)=L_{{\mathcal C}}(t)^2$, in this case for the $L$-polynomials
over ${\mathbb F}_q$ as well as for those over ${\mathbb F}_{q^2}$.
This implies the result.
\end{proof}
\subsection*{Case $d$ odd and $y^2=\varphi_d(x)$}
\begin{proof}(of Theorem~\ref{mainodd}).
Let $d\geq 1$ be an odd integer. Take a prime power
$q$ such that $\gcd(q,2d)=1$. We will consider curves
over (the prime field of) ${\mathbb F}_{q^2}$.
Recall (see the proof of Theorem~\ref{maineven}) that the hyperelliptic curve ${\mathcal X}$
with affine equation $y^2=x^{2d+1}+x$ admits the involution $\tau$ defined by $\tau(x,y)=(1/x, y/x^{d+1})$.
For odd $d$, the quotient of $ {\mathcal X}$ by $\tau$ is the hyperelliptic curve ${\mathcal C}$ with equation $$y^2= \varphi_{d}(x);$$
indeed, a quotient map is given by
$$(x,y) \mapsto (x+1/x, y/x^{(d+1)/2})$$
(compare \cite[Proposition~3]{TTV}).
Now if either
$q \equiv -1 ~(\bmod~4d)~\mbox{or}~q\equiv 2d+1~(\bmod~4d)$,
then by Theorem \ref{main} the curve ${\mathcal X}$ is maximal over ${\mathbb F}_{q^2}$, which, since ${\mathcal X}$ covers ${\mathcal C}$, implies that the curve ${\mathcal C}$ is also maximal over ${\mathbb F}_{q^2}$. This proves
the first assertion of Theorem~\ref{mainodd}.
To show the remaining parts, we will decompose
up to isogeny the Jacobian ${\mathcal J}({\mathcal X})$ of the curve ${\mathcal X}$.
With the basis $\omega_j:= x^{j-1} dx/y$ (for $1 \leq j\leq d$) for the regular differentials on $ {\mathcal X}$, one checks that a basis for the differentials invariant under $\tau$ is
$$\omega_1 - \omega_d, \omega_2 - \omega_{d-1}, \cdots, \omega_{(d-1)/2} - \omega_{(d+3)/2},$$
which also generate the pull-backs of the regular differentials on ${\mathcal C}$; note that by Lemma~\ref{sep}
the condition $\gcd(q,2d)=1$ implies that $\varphi_d$
is separable over ${\mathbb F}_q$ hence ${\mathcal C}$ has genus $(d-1)/2$.
Let $\iota$ be the hyperelliptic involution on ${\mathcal X}$. The quotient of ${\mathcal X}$ by $\tau\iota$ (this automorphism
has order $2$ and it is defined over the prime field) is the curve ${\mathcal C}_1={\mathcal X}/<\tau\iota>$ with equation
\[y^2=(x^2-4)\varphi_d(x);\]
indeed, the functions $\xi:=x+1/x$ and
$\eta:=\frac{y}{x^{(d+1)/2}}(x-\frac{1}{x})\in
{\mathbb F}_q({\mathcal X})$ are invariant under
the action of $\tau\iota$ and
$[{\mathbb F}_q({\mathcal X}):{\mathbb F}_q(\xi,\eta)]=2$. Hence
$\xi,\eta$ generate the function field of ${\mathcal C}_1$.
We have
\[\eta^2=\frac{y^2}{x^{d+1}}(x^2-2+x^{-2})=(x^d+x^{-d})\left((x+\frac1x)^2-4\right)=(\xi^2-4)\varphi_d(\xi).\]
From this, the second assertion in Theorem~\ref{mainodd}
follows: namely, by Theorem~\ref{main} the
congruence condition on $q$ implies that
${\mathcal X}$ is maximal over ${\mathbb F}_{q^2}$. Since ${\mathcal X}$
covers ${\mathcal C}_1$, the same is true for ${\mathcal C}_1$ over
${\mathbb F}_{q^2}$.
Note that $\varphi_d(2)=\varphi_d(1+1)=1^d+1^d=2$
and similarly $\varphi_d(-2)=-2$. Using
Lemma~\ref{sep} this implies that in every
characteristic coprime to $2d$ the polynomial
$(x^2-4)\varphi_d(x)$ is separable.
A basis for the differentials invariant under $\tau\iota$ is
$\{ \omega_i+ \omega_{d-i+1}| 1 \leq i \leq (d+1)/2 \},$
which also generate the pull-backs of the regular differentials on ${\mathcal C}_1$.
Since the pull-backs of a basis of the regular differentials on ${\mathcal C}$ together with the pull-backs of
a similar basis on ${\mathcal C}_1$ yield a basis for the
regular differentials on ${\mathcal X}$, one concludes that the Jacobian ${\mathcal J}({\mathcal X})$ of ${\mathcal X}$ is isogenous to a product $${\mathcal J}({\mathcal C}) \times {\mathcal J}({\mathcal C}_1),$$
where ${\mathcal J}({\mathcal C})$ and ${\mathcal J}({\mathcal C}_1)$ are the Jacobians of the curves ${\mathcal C}$ and ${\mathcal C}_1$, respectively.
This implies that $L_{{\mathcal X}}(t)=L_{{\mathcal C}}(t)\cdot L_{{\mathcal C}_1}(t)$
(for $L$-polynomials over any extension of ${\mathbb F}_q$).
Hence if both ${\mathcal C}$ and ${\mathcal C}_1$ are maximal over ${\mathbb F}_{q^2}$
then so is ${\mathcal X}$, which by Theorem~\ref{main} implies
that $q\equiv -1(\bmod~4d)$ or $q\equiv 2d+1(\bmod~4d)$.
This finishes the proof.
\end{proof}
\begin{remark}\label{Conjd=3}
\rm{The special case $d=3$ of Theorem~\ref{mainodd}
is a part of \cite[Proposition 4]{KJW}.
In fact for $d=3$ one finds ({\it loc.\ cit.}) that ${\mathcal J}({\mathcal C})$
is the elliptic curve ${\mathcal E}_1$ with equation $y^2=x^3-3x$
and (up to isogeny) ${\mathcal J}({\mathcal C}_1)$ is
a product ${\mathcal E}_2\times {\mathcal E}_3$ where
${\mathcal E}_2$ is the elliptic curve
with equation $y^2=x^3+x$ and ${\mathcal E}_3$
is the one
with equation $y^2=x^3+108x$. These two elliptic curves ${\mathcal E}_1$ and ${\mathcal E}_2$
are isogenous over ${\mathbb F}_{q^2}$ (for $q$ any prime power
with $\gcd(q,6)=1$). So in this case
maximality of any one of them over
${\mathbb F}_{q^2}$ is equivalent to
$q\equiv 3(\bmod~4)$ and to maximality
of any one of the curves ${\mathcal C}$ or ${\mathcal C}_1$
over ${\mathbb F}_{q^2}$. In particular, Conjecture~\ref{conj}
holds for $d=3$.
}\end{remark}
\subsection*{Case $d$ even and $y^2=\varphi_d(x)$}
A preliminary result, obtained by reasoning analogous to
that used above and needed in
the proof of Theorem~\ref{evencheb}, is the following.
\begin{lem}\label{mainevench}
Let $d>0$ be an even integer and let $q$ be a prime power.
Assume $\gcd(q,d)=1$. The next two statements
are equivalent.
\begin{itemize}
\item[{\rm (i)}] $q\equiv -1(\bmod~2d)$;
\item[{\rm (ii)}] the curve ${\mathcal C}$ with affine equation
\[ v^2=\varphi_d(u)\]
is maximal over ${\mathbb F}_{q^2}$, and so is the curve ${\mathcal C}_1$
over ${\mathbb F}_{q^2}$ given by
\[v^2=(u^2-4)\varphi_d(u).\]
\end{itemize}
\end{lem}
\begin{proof}
Take $d=2e$ for some integer $e>0$ and let $q$ be a prime
power with $\gcd(q,d)=1$.
The curve ${\mathcal Y}$ over ${\mathbb F}_q$ with affine equation
$y^2=x^{2d}+1$ admits the involution $\sigma$ given by
$\sigma(x,y)=(1/x,y/x^d)$.
The functions in ${\mathbb F}_q({\mathcal Y})$ which are invariant under
$\sigma$ are generated by $u=x+1/x$ and $v=\frac{y}{x^e}$.
We have
\[
v^2=x^{-d}(x^{2d}+1)=\varphi_d(x+\frac1x)=\varphi_d(u),
\]
so the quotient of ${\mathcal Y}$ by $\sigma$ is the curve ${\mathcal C}$
given by $v^2=\varphi_d(u)$.
If $q\equiv -1~(\bmod~2d)$ then by Theorem~\ref{main2}
the curve ${\mathcal Y}$ is maximal over ${\mathbb F}_{q^2}$. Since
this curve covers ${\mathcal C}$, it follows from Remark~\ref{r1}
that also ${\mathcal C}$ is maximal over ${\mathbb F}_{q^2}$.
This shows the part of the implication {\rm (i)}$\Rightarrow${\rm (ii)} of Lemma~\ref{mainevench} concerning ${\mathcal C}$.
For the second claim we use the product $\sigma'$
of $\sigma$ and the hyperelliptic involution on ${\mathcal Y}$,
so $\sigma'(x,y)=(1/x,-y/x^d)$.
The invariants in ${\mathbb F}_q({\mathcal Y})$ under $\sigma'$ are
generated by $u=x+1/x$ and $w=\frac{y}{x^e}(x-\frac1x)$,
and they are related by
\[
w^2=(x^{2d}+1)x^{-d}(x^2-2+x^{-2})=(x^d+x^{-d})((x+\frac1x)^2-4)=(u^2-4)\varphi_d(u).
\]
So also ${\mathcal C}_1\colon w^2=(u^2-4)\varphi_d(u)$ is covered by ${\mathcal Y}$.
Hence by Theorem~\ref{main2}, if $q\equiv -1~(\bmod~2d)$
then
${\mathcal C}_1$ is maximal over ${\mathbb F}_{q^2}$, proving the part of the implication
{\rm (i)}$\Rightarrow${\rm (ii)} of Lemma~\ref{mainevench} concerning ${\mathcal C}_1$.
For the converse implication {\rm (ii)}$\Rightarrow${\rm (i)}, observe that, analogously to the other results shown in this section, the Jacobian ${\mathcal J}({\mathcal Y})$
is isogenous over ${\mathbb F}_q$ to the product
${\mathcal J}({\mathcal C})\times{\mathcal J}({\mathcal C}_1)$. Hence the $L$-polynomial of
${\mathcal Y}$ over any extension of ${\mathbb F}_q$ is the product of the
$L$-polynomials of ${\mathcal C}$ and ${\mathcal C}_1$ (over the same
extension). So if both curves in {\rm (ii)} are maximal over ${\mathbb F}_{q^2}$, then so is ${\mathcal Y}$, and Theorem~\ref{main2} yields $q\equiv -1~(\bmod~2d)$, which is statement {\rm (i)} of Lemma~\ref{mainevench}.
\end{proof}
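As a small check, consider the case $d=2$, where the lemma can be made completely explicit: ${\mathcal C}\colon v^2=u^2-2$ has genus $0$ and is therefore trivially maximal over every ${\mathbb F}_{q^2}$, while ${\mathcal C}_1\colon v^2=(u^2-4)(u^2-2)$ has genus $1$. Since ${\mathcal J}({\mathcal C})=(0)$, the decomposition used in the proof gives ${\mathcal J}({\mathcal Y})\sim{\mathcal J}({\mathcal C}_1)$ for ${\mathcal Y}\colon y^2=x^4+1$, so by Theorem~\ref{main2} the curve ${\mathcal C}_1$ is maximal over ${\mathbb F}_{q^2}$ precisely when $q\equiv -1~(\bmod~4)$, in accordance with the statement of the lemma.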
\section{Relating $y^2=\varphi_d(x)$ and $y^2=(x^2-4)\varphi_d(x)$}\label{five}
Here we prove Theorem~\ref{evencheb} and
we make some remarks concerning Conjecture~\ref{conj}.
The following lemma
turns out to be useful.
\begin{lem}\label{Lpols}
Let $d\geq 1$ be any integer and let $q$ be a prime power with
$\gcd(q,2d)=1$. Then the $L$-polynomial of the elliptic curve
over ${\mathbb F}_q$ given by $y^2=x^3+x$ divides the $L$-polynomial
of the curve ${\mathcal C}_1$ over ${\mathbb F}_q$ with affine equation $y^2=(x^2-4)\varphi_d(x)$.
\end{lem}
\begin{proof}
First consider the case that $d$ is odd.
We use the notations from the proof
of Theorem~\ref{mainodd} and we let $\zeta$ in some
extension of ${\mathbb F}_q$ be a primitive $4d$-th root of unity.
The curve ${\mathcal X}$ admits an automorphism $\rho$ given by
$\rho(x,y)=(\zeta^2x,\zeta y)$.
The quotient of ${\mathcal X}$ by the group generated by $\rho^4$ is the elliptic curve ${\mathcal E}$ given by $$y^2=x^3+x$$
and an explicit quotient map is given by $$(x,y)\mapsto(x^d, x^{(d-1)/2}y).$$
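A direct verification, using $y^2=x^{2d+1}+x$: writing $X=x^d$ and $Y=x^{(d-1)/2}y$, one has
\[
Y^2=x^{d-1}y^2=x^{d-1}\big(x^{2d+1}+x\big)=x^{3d}+x^{d}=X^3+X .
\]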
Note that although the elements of the group generated by $\rho^4$ may not be defined
over ${\mathbb F}_q$, the group is, which explains why the
quotient curve and the map to it are defined over ${\mathbb F}_q$.
A regular differential on ${\mathcal X}$ invariant under $\rho^4$ is $ \omega_{(d+1)/2}=x^{(d-1)/2}dx/y$; observe that this
differential is a pull back of a regular differential on ${\mathcal C}_1$.
As a consequence, the elliptic curve ${\mathcal E}$
is up to isogeny contained in the
Jacobian ${\mathcal J}({\mathcal C}_1)$. This implies
the lemma for $d$ odd.
Now assume $d=2e$ is even. The curve ${\mathcal Y}\colon y^2=x^{4e}+1$ covers the given elliptic curve, with an explicit
covering map given by $(x,y)\mapsto (x^{2e},x^ey)$.
Note that $x^{e-1}\frac{dx}{y}$ is a pull-back to ${\mathcal Y}$ of a regular
differential on the elliptic curve.
The proof of Lemma~\ref{mainevench} shows that
${\mathcal J}({\mathcal C}_1)$ is up to isogeny an abelian subvariety of ${\mathcal J}({\mathcal Y})$,
and the regular differentials on ${\mathcal Y}$ coming from ${\mathcal C}_1$
are the ones invariant under the action of the automorphism
denoted $\sigma'$. As the differential
$x^{e-1}\frac{dx}{y}$ is invariant under $\sigma'$, it follows that
the elliptic curve is up to isogeny contained in ${\mathcal J}({\mathcal C}_1)$.
This implies the lemma for $d$ even.
\end{proof}
\begin{pro}\label{2mod4}
The analogue of Conjecture~\ref{conj} holds in the special case $d\equiv2~(\bmod~4)$.
\end{pro}
\begin{proof}
Write $d=2e$ with $e$ a positive, odd integer and let $q$ be a prime power
satisfying $\gcd(q,2e)=1$.
One decomposes, up to isogeny, the Jacobian ${\mathcal J}({\mathcal C}_1)$ of
the curve ${\mathcal C}_1$ given by $y^2=(x^2-4)\varphi_{2e}(x)$ as follows.
Note that ${\mathcal C}_1$ admits the involution $\alpha$ given by $\alpha(x,y)=(-x,y)$.
Since $\varphi_{2e}(x)=\varphi_e(\varphi_2(x))=\varphi_e(x^2-2)$,
the quotient by $\alpha$ is the curve ${\mathcal D}$ with affine equation
$v^2=(t-4)\varphi_e(t-2)$ (with quotient map $(x,y)\mapsto (x^2,y)$).
Using the variables $v$ and $u:=t-2$, this equation becomes $v^2=(u-2)\varphi_e(u)$.
Now if ${\mathcal C}_1$ is maximal over ${\mathbb F}_{q^2}$, then so is the covered curve ${\mathcal D}$.
Using that ${\mathcal D}$ is isomorphic to the curve with equation $v^2=(u+2)\varphi_e(u)$ (just change the sign of $u$ and use that $e$ is odd),
Theorem~\ref{x2mainodd} therefore yields $q\equiv -1~(\bmod~2e)$.
From Lemma~\ref{Lpols}, the maximality of ${\mathcal C}_1$ over ${\mathbb F}_{q^2}$
implies maximality of the elliptic curve given by $y^2=x^3+x$ over ${\mathbb F}_{q^2}$.
The latter maximality is equivalent to $q\equiv -1~(\bmod~4)$.
Using that $e$ is odd (so that the two congruences combine modulo $\mbox{lcm}(2e,4)=4e$), one concludes that if ${\mathcal C}_1$ is maximal over
${\mathbb F}_{q^2}$, then $q\equiv -1~(\bmod~4e)$. Hence Lemma~\ref{mainevench}
implies the implication $\mbox{\rm (i)}\Rightarrow\mbox{\rm (ii)}$ of
Conjecture~\ref{conj} in this case.
For the other implication, assume that ${\mathcal C}\colon y^2=\varphi_{2e}(x)$
is maximal over ${\mathbb F}_{q^2}$ and that $q\equiv -1~(\bmod~4)$.
Writing $\varphi_{2e}(x)=\varphi_e(x^2-2)$ it is clear that
the map $(x,y)\mapsto (x^2-2,xy)$ yields a nonconstant morphism from ${\mathcal C}$
to the curve with equation $s^2=(t+2)\varphi_e(t)$.
Hence the latter curve is maximal over ${\mathbb F}_{q^2}$, which by
Theorem~\ref{x2mainodd} implies $q\equiv -1~(\bmod~2e)$.
So again one concludes $q\equiv -1~(\bmod~4e)$,
and the maximality of ${\mathcal C}_1$ over ${\mathbb F}_{q^2}$ follows
from Lemma~\ref{mainevench}.
\end{proof}
Similar ideas allow one to obtain some results in the case $d\equiv 0~(\bmod~4)$:
\begin{pro}\label{0mod4}
If the integer $d>0$ satisfies $4|d$, then the analogue of Conjecture~\ref{conj} holds.
\end{pro}
\begin{proof}
With notations as above, write $d=2e$.
The map $(x,y)\mapsto (x^2-2, y)$ shows that
${\mathcal C}_1$ covers the curve given by $v^2=(u-2)\varphi_{e}(u)$, which over ${\mathbb F}_{q^2}$ is isomorphic to the curve $v^2=(u+2)\varphi_{e}(u)$ (change the sign of $u$ and multiply $v$ by a primitive $4$-th root of unity, using that $e$ is even).
Hence as before, by Theorem~\ref{maineven} one concludes
that if ${\mathcal C}_1$ is maximal over ${\mathbb F}_{q^2}$,
then either $q\equiv -1~(\bmod~4e)$ or $q\equiv 2e+1~(\bmod~4e)$.
Similarly, the map $(x,y)\mapsto (x^2-2,xy)$ shows that
${\mathcal C}_1$ covers the curve with affine equation
$w^2=(u^2-4)\varphi_e(u)$.
Hence maximality of ${\mathcal C}_1$ over ${\mathbb F}_{q^2}$ implies
using Lemma~\ref{Lpols} that the elliptic curve with
equation $y^2=x^3+x$ is maximal over ${\mathbb F}_{q^2}$.
As a consequence $q\equiv -1~(\bmod~4)$.
Combining the congruences for $q$ we conclude that
maximality implies $q\equiv -1(\bmod~2d)$.
Hence by Lemma~\ref{mainevench} the curve given
by $y^2=\varphi_d(x)$ is maximal over ${\mathbb F}_{q^2}$,
which is what we wanted to show.
Conversely, if ${\mathcal C}$ with affine equation
$y^2=\varphi_d(x)$ is maximal over ${\mathbb F}_{q^2}$ and moreover
$q\equiv -1~(\bmod~4)$,
then since $(x,y)\mapsto (x^2-2,xy)$ shows that ${\mathcal C}$
covers the curve given by $v^2=(u+2)\varphi_e(u)$,
we conclude using Theorem~\ref{maineven} that
either $q\equiv -1~(\bmod~4e)$ or $q\equiv 2e+1~(\bmod~4e)$.
However, the additional condition on $q$ shows that
the latter congruence is impossible,
so one concludes $q\equiv -1~(\bmod~4e)$.
But then Lemma~\ref{mainevench} implies maximality
of ${\mathcal C}_1$ over ${\mathbb F}_{q^2}$, which is what we wished to show.
\end{proof}
Evidently, combining Lemmas~\ref{mainevench} and \ref{Lpols}, and
Propositions~\ref{2mod4} and \ref{0mod4}
one obtains a proof of Theorem~\ref{evencheb}.
We will now discuss Conjecture~\ref{conj}.
To this end, we first describe an attempt
to prove the conjecture which unfortunately
seems to fail.
\begin{remark}\label{rem4.3}{\rm
We continue with the notations introduced
in the proofs of Theorem~\ref{mainodd}
and Lemma~\ref{Lpols}; in particular, the integer $d>0$ is assumed to be odd.
A natural way to describe a decomposition of a Jacobian variety such
as ${\mathcal J}({\mathcal X})$ is in terms of suitable endomorphisms of this Jacobian.
We refer to the paper of Kani and Rosen \cite{Kani-Rosen} which studies the special
endomorphisms generated by those coming from automorphisms of the curve.
Consider the action of $1+\tau$ and of
$1+\rho^4+\rho^8+\ldots+\rho^{4d-4}$ on ${\mathcal J}({\mathcal X})$.
As endomorphisms on ${\mathcal J}({\mathcal X})$ these maps are
defined over the prime field of ${\mathbb F}_q$.
Moreover since $1+\tau$ acts as $0$ on the
regular differentials on ${\mathcal X}$ which are pulled back
from ${\mathcal C}$, and as multiplication by $2$ on the
regular differentials pulled back from ${\mathcal C}_1$,
it follows that $(1+\tau)({\mathcal J}({\mathcal X}))$ is isogenous
to ${\mathcal J}({\mathcal C}_1)$.
An analogous argument shows that
\[
(1+\rho^4+\ldots+\rho^{4d-4})(1+\tau)({\mathcal J}({\mathcal X}))
\]
is isogenous to the elliptic curve ${\mathcal E}$.
Since $1+\rho^4+\ldots+\rho^{4d-4}$ acts as
multiplication by $d$ on the differential
$\omega_{(d+1)/2}$ and as $0$ on the
differentials $\omega_j+\omega_{d+1-j}$ ($1\leq j\leq (d-1)/2$),
it follows that the abelian variety ${\mathcal A}\subset {\mathcal J}({\mathcal X})$
defined by
\[{\mathcal A}:= (-d+1+\rho^4+\ldots+\rho^{4d-4})(1+\tau)({\mathcal J}({\mathcal X}))\]
is defined over the prime field of ${\mathbb F}_q$, and
$\dim({\mathcal A})=(d-1)/2$, and ${\mathcal J}({\mathcal C}_1)\sim{\mathcal E}\times {\mathcal A}$
(an isogeny defined over the prime field of ${\mathbb F}_q$).
As a result,
\[{\mathcal J}({\mathcal X})\sim {\mathcal J}({\mathcal C})\times{\mathcal E}\times {\mathcal A}.\]
Suppose we knew that ${\mathcal A}$ and ${\mathcal J}({\mathcal C})$
are isogenous over ${\mathbb F}_{q^2}$. Then in particular
the $L$-polynomial $L_{{\mathcal C}}(t)$
divides $L_{{\mathcal C}_1}(t)$ (here we take $L$-polynomials
over ${\mathbb F}_{q^2}$).
Clearly, this would imply the case $d$ odd of
Conjecture~\ref{conj}.
A rather natural idea for showing that
indeed the
abelian varieties ${\mathcal A}$ and ${\mathcal J}({\mathcal C})$ are isogenous
over ${\mathbb F}_{q^2}$, is to look for endomorphisms in
the subalgebra ${\mathbb Z}[\rho,\tau]\subset\mbox{End}({\mathcal J}({\mathcal X}))$ and restrict those to ${\mathcal A}$ or to
$(1-\tau)({\mathcal J}({\mathcal X}))\sim {\mathcal J}({\mathcal C})$. Unfortunately, this cannot work,
as is seen by the following argument.
Consider the regular differentials on ${\mathcal X}$ that
correspond to ${\mathcal A}$ and to ${\mathcal J}({\mathcal C})$.
The action of ${\mathbb Z}[\rho,\tau]$ on the regular
differentials on ${\mathcal X}$ has the invariant
subspaces $V_j$ spanned by $\omega_j$ and $\omega_{d+1-j}$. If $d>1$ then $\dim(V_1)=2$ and $\tau,\rho$ act
on $V_1$ by the matrices
$\left(\begin{array}{cc}0&-1\\-1&0\end{array}\right)$ and
$\left(\begin{array}{cc}\zeta&0\\0&-\zeta^{-1}\end{array}\right)$, respectively.
We look for an element in the ${\mathbb Z}$-algebra generated
by these two matrices that sends one of the two
lines spanned by $\left(\begin{array}{c}1\\1\end{array}\right)$ or by $\left(\begin{array}{c}1\\-1\end{array}\right)$,
to the other.
However, such an element does not exist.
}\end{remark}
\begin{remark}\label{kohelsmithresult}
{\rm In fact Conjecture~\ref{conj} is true in the case that $d$ is (an odd) prime.
Namely, as a special case of Proposition~14 in the paper \cite{KS} by Kohel and Smith,
one obtains that ${\mathcal J}({\mathcal X})$ is isogenous to $\mathcal{E}\times {\mathcal J}({\mathcal C})\times {\mathcal J}({\mathcal C})$
with $\mathcal{E}$ the elliptic curve given by $y^2=x^3-x$ and
${\mathcal C}\colon y^2=\varphi_d(x)$. This means that the $L$-polynomial of ${\mathcal X}$ over
${\mathbb F}_{q^2}$ is the product of that of $\mathcal{E}$ and two copies of that of ${\mathcal C}$.
As we saw in the proof of Theorem~\ref{evencheb}, the $L$-polynomial of ${\mathcal X}$
is also the product of that of ${\mathcal C}$ and that of ${\mathcal C}_1\colon y^2=(x^2-4)\varphi_d(x)$.
Combining the two factorizations, one concludes that for $d>2$ prime, the $L$-polynomial
of ${\mathcal C}_1$ equals the product of that of ${\mathcal C}$ and that of $\mathcal{E}$.
This shows Conjecture~\ref{conj} in this case.
And so by Tate's classical work \cite{Tate} we have that ${\mathcal J}({\mathcal C}_1)$ is isogenous to $\mathcal{E}\times {\mathcal J}({\mathcal C})$.
A natural approach to proving Conjecture~\ref{conj} would be, to show that also for
{\em composite} odd $d$ one has an isogeny ${\mathcal J}({\mathcal X})\sim \mathcal{E}\times {\mathcal J}({\mathcal C})\times {\mathcal J}({\mathcal C})$
defined over ${\mathbb F}_{q^2}$. Although we have not been able to show this, we can in fact
prove the weaker statement that these abelian varieties are isogenous over the
algebraic closure $\overline{{\mathbb F}_q}$. Indeed, consider the subgroup $G$ of
$\mbox{Aut}({\mathcal X})$ generated by $r:=\rho^4$ and $s:=\tau$. Then $r$ has order $d$
and $s$ has order $2$. Moreover $srs=r^{-1}$, so $G$ is a dihedral group of order $2d$ (and in the
case considered here, $d$ is odd).
Following Paulhus \cite[\S~3.1.2]{Paulhus}, who applies Kani-Rosen theory (specifically, \cite[Theorem~B]{Kani-Rosen})
to the subgroups $H_1=\langle r\rangle$ and $H_j=\langle sr^j \rangle$ ($2\leq j\leq 2d+1$) of $G$,
and who observes that because $d$ is odd, all groups $H_j$ ($j\neq 1$) are conjugate in $G$ and therefore
the quotients ${\mathcal X}/H_j$ are isomorphic, one concludes
\[
{\mathcal J}({\mathcal X})\times {\mathcal J}({\mathcal X}/G) \times {\mathcal J}({\mathcal X}/G) \sim {\mathcal J}({\mathcal X}/\langle r\rangle)\times {\mathcal J}({\mathcal X}/\langle s\rangle)\times {\mathcal J}({\mathcal X}/\langle s\rangle).
\]
We analyze the quotient curves appearing here.
As we saw in the proof of Theorem~\ref{mainodd}, ${\mathcal X}/\langle s\rangle={\mathcal X}/\langle \tau\rangle\cong {\mathcal C}$
since $d$ is odd.
Moreover, the proof of Lemma~\ref{Lpols} shows ${\mathcal X}/\langle r\rangle={\mathcal X}/\langle\rho^4\rangle\cong {\mathcal E}$,
and up to scalars, $x^{(d-1)/2}dx/y$ is the only regular differential on ${\mathcal X}$ invariant under
the action of $r$. As this differential is not fixed by $s=\tau$, no regular differentials fixed by
every automorphism in the group $G$ exist. Therefore the genus of ${\mathcal X}/G$ equals $0$, so
${\mathcal J}({\mathcal X}/G)=(0)$. So the displayed isogeny in fact reads
\[ {\mathcal J}({\mathcal X})\sim {\mathcal E} \times {\mathcal J}({\mathcal C})\times {\mathcal J}({\mathcal C}),
\]
which is what we wished to show. Adapting this line of reasoning so that it works over ${\mathbb F}_{q^2}$ as well,
would lead to a proof of Conjecture~\ref{conj} but unfortunately, so far we have not been able
to do so.
}\end{remark}
\begin{remark}
\rm{Let $d>0$ be any integer, and let $q$
be a prime power with $\gcd(q,2d)=1$. If $3|d$, then the curve given by $v^2= \varphi_{d}(u)$ covers the elliptic
curve $v^2= \varphi_{3}(u)=u^3-3u$ via $(u,v)\mapsto(\varphi_{d/3}(u),v)$, since in this case (see Remark~\ref{rem1.2}) we have
$\varphi_{d}(u)=\varphi_{3}(\varphi_{d/3}(u))$. Hence if
in this case the curve given by $v^2= \varphi_d(u)$ is maximal over ${\mathbb F}_{q^2}$, then the elliptic curve
$v^2= \varphi_{3}(u)$ is also maximal over ${\mathbb F}_{q^2}$.
The latter maximality occurs precisely
when $q \equiv -1(\bmod~4)$.
As a consequence, for $d$ a multiple of $3$
the assumption $q\equiv -1(\bmod~4)$
mentioned in statement (ii) of
Conjecture~\ref{conj} can be deleted.
}\end{remark}
\begin{remark}\label{cheb3mod4}{\rm
In \cite{TTV}, the curve ${\mathcal C}\colon y^2=\varphi_d(x)$ is denoted by ${\mathcal C}_0$; one of
the results of that paper (\cite[Section~3.2]{TTV})
is that in case $d=\ell$ is an odd prime number,
then the endomorphism algebra of ${\mathcal J}({\mathcal C})$ contains
the field $K:={\mathbb Q}(\sqrt{-1},\zeta_\ell+\zeta_\ell^{-1})$.
Note that $[K:{\mathbb Q}]=\ell-1=2g$ where $g$ is the genus of ${\mathcal C}$.
Moreover, provided $\ell\neq 5$, regarding
${\mathcal J}({\mathcal C})$ as an abelian variety in characteristic~$0$,
by \cite[Proposition~5]{TTV} it has no nontrivial abelian subvarieties (over any
field extension). This means that ${\mathcal J}({\mathcal C})$
is a so-called CM abelian variety. The extension
$K/{\mathbb Q}$ is Galois (even abelian), with Galois
group $G\cong {\mathbb Z}/2{\mathbb Z}\times {\mathbb F}_\ell^\times/\pm 1$;
note that this group is cyclic precisely when
$\ell\equiv 3(\bmod~4)$.
The CM type corresponding to ${\mathcal J}({\mathcal C})$ is computed
in \cite{TTV}. One identifies it with the
subset $\Phi\subset G$ given by
\[
\Phi=\left\{(0,\pm 1),\,(1,\pm 2),\,(0,\pm 3), \ldots\right\}
\]
of cardinality $(\ell-1)/2$.
In \cite[Theorem~3.1]{Blake} it is explained how
the slopes of the Newton polygon of Frobenius on a
reduction of ${\mathcal C}$ modulo a prime $p$ can be
determined from the decomposition group $D\subset G$
at $p$: the possible slopes are $\#(Dg\cap \Phi)/\#Dg$
with $g$ an element of $G$.
Note that the group $D$ (at any prime $p$
with $\gcd(p,2\ell)=1$ which means, at any
prime that does not ramify in $K$) is
the cyclic group generated by
$\left((p-1)/2(\bmod~2), \pm p(\bmod~\ell)\right)$.
In particular, taking $p\equiv 1(\bmod~4)$ one has
that $D\subset (0)\times {\mathbb F}_\ell^\times/\pm 1$.
Hence taking $g=(1,\pm 1)\in G$ one finds $Dg\cap \Phi=
\emptyset$. As a result, one of the slopes is $0$,
implying that the $p$-rank of ${\mathcal J}({\mathcal C})$ is positive.
In particular, this provides an alternative
proof of Proposition~\ref{p2.3} for the
special case of the polynomial $\varphi_d(x)$
with $d>1$ odd. Indeed,
taking $\ell$ any prime divisor of $d$, the
equality $\varphi_d(x)=\varphi_\ell(\varphi_{d/\ell}(x))$ implies that the curve
with equation $y^2=\varphi_\ell(x)$
is covered by the curve given by
$y^2=\varphi_d(x)$.
Hence if the latter curve is maximal over ${\mathbb F}_{q^2}$ (and $\gcd(q,2\ell)=1$),
then so is the first, and therefore
the characteristic of ${\mathbb F}_{q^2}$ is
$\equiv 3(\bmod~4)$.
\vspace*{.4cm}
We illustrate the use of CM theory
also in the next result.
}\end{remark}
\begin{pro}\label{CMresult}
Let $q$ be a prime power
with $\gcd(q,10)=1$. If the hyperelliptic curve given by
$y^2=x^5-5x^3+5x$ is maximal over ${\mathbb F}_{q^2}$, then
the characteristic of ${\mathbb F}_q$ is congruent to either
$11$ or $19$ modulo $20$.
\end{pro}
\begin{proof}
Note that $x^5-5x^3+5x=\varphi_5(x)$.
We will show the result by using the CM theory
described above in Remark~\ref{cheb3mod4}.
We therefore use the notations introduced
in that remark, for the special case $\ell=5$.
Let $p$ be the characteristic of ${\mathbb F}_q$.
By Proposition~\ref{p2.3} (and
alternatively, by Remark~\ref{cheb3mod4}),
maximality of the given curve ${\mathcal C}$ implies
that $p\equiv 3(\bmod~4)$.
Hence the decomposition group $D$ at $p$
in $G=\mbox{Gal}(K/{\mathbb Q})\cong {\mathbb Z}/2{\mathbb Z}\times{\mathbb F}_5^\times/\pm1$ is generated by
$\left(1,\pm p~(\bmod~5)\right)$.
In case $p\equiv\pm 2(\bmod~5)$, this means
\[D=\{(1,\pm 2),\,(0,\pm 1)\}.\]
Clearly $D\cdot (0,\pm 2)\cap\Phi=\emptyset$,
where $\Phi\subset G$ describes the
CM-type of the curve ${\mathcal C}$.
As before, this implies that ${\mathcal C}$
cannot be maximal in characteristic $p$.
So a necessary condition for maximality
in characteristic $p$ is that, besides
$p\equiv 3~(\bmod~4)$, also $p\equiv \pm 1~(\bmod~5)$ holds.
Combining the two congruences gives $p\equiv 11~(\bmod~20)$ or $p\equiv 19~(\bmod~20)$, and the result follows.
\end{proof}
\begin{remark}{\rm
In the proof above we only used the fact
that for a maximal curve, the slopes of
Frobenius are all positive. A stronger
condition is that in fact they need to be
equal to $\frac12$. Exploiting that, one
obtains similar results for other
values of $\ell$. For example, with
$\ell=17$ one can exclude characteristic
$p\equiv \pm 2(\bmod~17)$ in this way.
}\end{remark}
We finish this manuscript by briefly
mentioning some small cases of Conjecture~\ref{conj}.
\begin{itemize}
\item[$d=1$:] here statement~(i)
asserts the maximality of the elliptic curve
given by $y^2=x^3-4x$ over ${\mathbb F}_{q^2}$.
This holds precisely when $q\equiv -1(\bmod~4)$.
Statement~(ii) asserts, besides this congruence
condition, also the maximality of the
hyperelliptic curve given by $v^2=u$. Since
this maximality holds over any ${\mathbb F}_{q^2}$
(the curve has genus~$0$),
Conjecture~\ref{conj} holds for $d=1$.
\item[$d \geq 5$:] we verified using Magma that Conjecture~\ref{conj} holds for
all prime powers $q<100$ and $d \in \{9,15,21\}$ (a small stand-alone numerical check is sketched after this list).
In fact, the experiments show that in these cases,
just as in the case where $d$ is an odd prime (Remark~\ref{kohelsmithresult}),
the curves ${\mathcal C}\colon y^2=\varphi_d(x)$ and ${\mathcal C}_1\colon y^2=(x^2-4)\varphi_d(x)$
over ${\mathbb F}_{q}$ are related by ${\mathcal J}({\mathcal C}_1)\sim {\mathcal J}({\mathcal C})\times {\mathcal E}$ with ${\mathcal E}\colon y^2=x^3+x$.
\end{itemize}
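For readers who would like to reproduce a quick consistency check of the last item without Magma, the following minimal Python sketch (an illustration only, not the verification code used above, and with helper names of our own choosing) compares traces of Frobenius over a prime field ${\mathbb F}_p$ via quadratic character sums. If the experimentally observed relation ${\mathcal J}({\mathcal C}_1)\sim {\mathcal J}({\mathcal C})\times {\mathcal E}$ indeed holds over ${\mathbb F}_p$, then each line printed below should read \texttt{True}.
\begin{verbatim}
# Quadratic-character point counts over F_p for
#   C  : y^2 = phi_d(x),
#   C_1: y^2 = (x^2 - 4) phi_d(x),
#   E  : y^2 = x^3 + x.
# If J(C_1) ~ J(C) x E over F_p, the traces of Frobenius add up,
# i.e. the character sums below satisfy sC1 == sC + sE.

def chi(a, p):
    # quadratic character of a modulo the odd prime p (0 if p | a)
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def phi(d, t, p):
    # phi_d(t) mod p via phi_0 = 2, phi_1 = t, phi_{n+1} = t*phi_n - phi_{n-1}
    v0, v1 = 2 % p, t % p
    if d == 0:
        return v0
    for _ in range(d - 1):
        v0, v1 = v1, (t * v1 - v0) % p
    return v1

def char_sum(f, p):
    # sum of the quadratic character of f(x) over x in F_p;
    # the affine point count of y^2 = f(x) equals p plus this sum
    return sum(chi(f(x), p) for x in range(p))

d = 9
for p in [5, 7, 11, 13, 17, 19, 23]:
    if d % p == 0:      # keep gcd(p, 2d) = 1 (all listed p are odd)
        continue
    sC  = char_sum(lambda x: phi(d, x, p), p)
    sC1 = char_sum(lambda x: (x * x - 4) * phi(d, x, p), p)
    sE  = char_sum(lambda x: x ** 3 + x, p)
    print(p, sC1 == sC + sE)
\end{verbatim}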
\end{document}
\begin{document}
\allowdisplaybreaks
\newcommand{1803.11230}{1803.11230}
\renewcommand{\thefootnote}{}
\renewcommand{095}{095}
\FirstPageHeading
\ShortArticleName{Tronqu{\'e}e Solutions of the Third and Fourth Painlev{\'e} Equations}
\ArticleName{Tronqu{\'e}e Solutions\\ of the Third and Fourth Painlev{\'e} Equations\footnote{This paper is a~contribution to the Special Issue on Painlev\'e Equations and Applications in Memory of Andrei Kapaev. The full collection is available at \href{https://www.emis.de/journals/SIGMA/Kapaev.html}{https://www.emis.de/journals/SIGMA/Kapaev.html}}}
\Author{Xiaoyue XIA}
\AuthorNameForHeading{X.~Xia}
\Address{Department of Mathematics, The Ohio State University,\\ 100 Math Tower, 231 West 18th Avenue, Columbus OH, 43210-1174, USA}
\Email{\href{mailto:[email protected]}{[email protected]}}
\ArticleDates{Received April 04, 2018, in final form August 30, 2018; Published online September 08, 2018}
\Abstract{Recently in a paper by Lin, Dai and Tibboel, it was shown that the third and fourth Painlev{\'e} equations have tronqu{\'e}e and tritronqu{\'e}e solutions. We obtain global information about these tronqu{\'e}e and tritronqu{\'e}e solutions. We find their sectors of analyticity, their Borel summed representations in these sectors as well as the asymptotic position of the singularities near the boundaries of the analyticity sectors. We also correct slight errors in the paper mentioned.}
\Keywords{the third and fourth Painlev{\'e} equations; asymptotic position of singularities; tronqu{\'e}e solutions; tritronqu{\'e}e solutions; Borel summed representation}
\Classification{34M25; 34M40; 34M55}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}\label{intro}
The well-known Painlev{\'e} equations were first introduced by Painlev{\'e} more than a century ago and have been investigated by many researchers. The Painlev{\'e} equations define new functions called Painlev{\'e} transcendents, which are considered as special nonlinear functions and their asymptotic behavior is of particular importance. For an overview of Painlev{\'e} equations and the asymptotic behavior of Painlev{\'e} transcendents please see, e.g.,~\cite{Clarkson} and~\cite{FIKN}. In recent decades there has been revived interest in Painlev{\'e} equations as they play important roles in various mathematical and physical applications (see, e.g., \cite{Clarkson,CC2,FFW,FIKN,K2,K3} for references to applications).
Boutroux first studied a family of particular solutions of the first Painlev{\'e} equation ${\rm P_I}$, which he named ``tronqu{\'e}e'' and ``tritronqu{\'e}e'' solutions in~\cite{Boutroux}. These special solutions of Painlev{\'e} equations have pole-free sectors while generic solutions have poles accumulating at $\infty$ in all sectors. Tronqu{\'e}e and tritronqu{\'e}e solutions receive attention not only for their interesting analytic properties but also because they appear in a number of problems such as the Ising model~\cite{MTW}, the critical behavior in the NLS/Toda lattices~\cite{Dubrovin, DGK} and the analysis of the cubic oscillator~\cite{Masoero}.
For the first Painlev{\'e} equation some pioneering works based on the powerful techniques of isomonodromic deformation and reduction to a Riemann--Hilbert problem were done in the study of tronqu{\'e}e solutions by Kapaev and coauthors. In \cite{Kapaev} and \cite{KK} the Stokes constant for the tritronqu{\'e}e solution of ${\rm P_I}$ was calculated for the first time. In \cite{K2} the global asymptotic behavior of the tronqu{\'e}e solutions of ${\rm P_I}$ was described with connection formulae presented. In~\cite{K3} and~\cite{IK} the global asymptotic behavior of the tronqu{\'e}e solutions of $\rm P_{II}$ was described with connection formulae presented. In \cite{K4} the global asymptotics of the solutions of the fourth Painlev{\'e} equation $\rm P_{IV}$ including its tronqu{\'e}e solutions was analyzed in detail. In \cite{DK} a fourth-order nonlinear ODE which controls the pole dynamics in the general solution of equation $\mathrm{P}_{\mathrm{I}}^{2}$ was studied. See also the monograph~\cite{FIKN} for a summary of recent developments in the theory of Painlev{\'e} equations based on this Riemann--Hilbert-isomonodromy method.
There is an impressive body of work on tronqu{\'e}e solutions and we only mention a few contributions here. Using approaches different from the Riemann--Hilbert-isomonodromy method Costin and coauthors analyzed tronqu{\'e}e solutions of ${\rm P_I}$ in \cite{CHT} and \cite{CCH} and obtained similar results to those in \cite{Kapaev} and \cite{KK}. In~\cite{GKK} the existence of the tritronqu{\'e}e solutions of~$\mathrm{P}_{\mathrm{I}}^{2}$, the second member in the $\rm P_I$ hierarchy, was proved. In~\cite{Mazzocco} the existence of tronqu{\'e}e solutions of the second Painlev{\'e} hierarchy was proved. For the location of poles for the Hastings--McLeod solution to the second Painlev{\'e} equation please see~\cite{HXZ}, in which a special case of the Novokshenov conjecture~\cite{Novo} was also proved. For the tronqu{\'e}e solutions to the third Painlev{\'e} equation please see~\cite{Lin}, which followed the idea in~\cite{Joshi}.
In this paper the tronqu{\'e}e and tritronqu{\'e}e solutions of the third and fourth Painlev{\'e} equations are studied:
\begin{gather}
{\rm P_{III}}\colon \ {\frac {{\rm d}^{2}y}{{\rm d}{x}^{2}}} = \frac{1}{y}\left(\frac{{\rm d}y}{{\rm d}x}\right)^2-\frac{1}{x}\frac{{\rm d}y}{{\rm d}x}+\frac{1}{x}\left(\alpha y^2+\beta \right)+\gamma y^3+\frac{\delta}{y},
\nonumber\\
\label{P4}
{\rm P_{IV}}\colon \ {\frac {{\rm d}^{2}y}{{\rm d}{x}^{2}}} = \frac{1}{2y}\left(\frac{{\rm d}y}{{\rm d}x}\right)^2+\frac{3}{2}y^3+4xy^2+2\left(x^2-\alpha \right)y+\frac{\beta}{y},
\end{gather}
where $\alpha$, $\beta$, $\gamma$ and $\delta$ are arbitrary complex numbers. By B{\"a}cklund transformations (see \cite{Milne}) ${\rm P_{III}}$ can be reduced to
\begin{gather}\label{P31}
{\rm P}^{(i)}_{\rm III}\colon \ {\frac {{\rm d}^{2}y}{{\rm d}{x}^{2}}} = \frac{1}{y}\left(\frac{{\rm d}y}{{\rm d}x}\right)^2-\frac{1}{x}\frac{{\rm d}y}{{\rm d}x}+\frac{1}{x}\big(\alpha y^2+\beta \big)+ y^3-\frac{1}{y},\\
\label{P32}
{\rm P}^{(ii)}_{\rm III}\colon \ {\frac {{\rm d}^{2}y}{{\rm d}{x}^{2}}} = \frac{1}{y}\left(\frac{{\rm d}y}{{\rm d}x}\right)^2-\frac{1}{x}\frac{{\rm d}y}{{\rm d}x}+\frac{1}{x}\big( y^2+\beta \big)-\frac{1}{y}.
\end{gather}
In a famous paper \cite{MTW} by McCoy, Tracy and Wu, a one-parameter family of tronqu{\'e}e solutions of a special case of~\eqref{P31} where $\alpha = 2\nu$, $\beta = -2\nu$ was constructed, whose asymptotic behavior at $\infty$ agrees with ours (see~\eqref{transseriesh} and~\eqref{P31cov}), and an asymptotic expansion for small~$x$ was obtained. Furthermore, in a recent paper~\cite{FFW} by Fasondini et al.\ a comprehensive computer simulation of the McCoy--Tracy--Wu solution was given. The computer pictures of the pole distributions in~\cite{FFW} provide a good illustration of our description of the asymptotic position of poles in, e.g.,~\eqref{poledistrn}.
Equation~\eqref{P32} was studied as the degenerate $\rm P_{III}$ in \cite{KV1} and \cite{KV2}, and the position of the first array of poles was found in~\cite{KV2} via isomonodromy methods.
We base our methods on the results in \cite{OCIMRN} and \cite{OCInv}, which used the technique of Borel summation to describe the Stokes phenomenon. We obtain representations of tronqu{\'e}e solutions as Borel summed transseries (see also~\cite{OCCONM}), as well as the position of the first array of poles, bordering the sector of analyticity. We will first use a simple example to briefly illustrate some concepts in the Borel summation method. Please see also \cite[Section~5]{CCH} for an introduction.
In the following, we denote by $\mathcal{L}_{\phi}$ the Laplace transform
\begin{gather*}
f\longmapsto \int_{0}^{\infty {\rm e}^{{\rm i}\phi}} f(p){\rm e}^{-xp}{\rm d}p,
\end{gather*}
where $\phi \in \mathbb{R}$. See also \cite[p.~8]{OCIMRN} for the notation.
Assume that we have a formal series
\begin{gather*}
\tilde{f}(w) = \sum_{n=0}^{\infty}a_n w^{-r-n},\qquad \operatorname{Re}(r)>0,
\end{gather*}
where the series $\sum\limits_{n=0}^{\infty}a_n x^{n}$ has a positive radius of convergence. The Borel transform of $\tilde{f}$ is defined to be the formal power series
\begin{gather*}
\big(\mathcal{B} \tilde{f} \big)(p) := \sum_{n=0}^{\infty}\frac{a_n p^{n+r-1}}{\Gamma(n+r)}.
\end{gather*}
In most cases the explicit solution of a differential equation is not known. We may obtain classical asymptotic series as formal power series solutions, but these formal solutions do not contain parameters that help us distinguish between actual solutions. This is illustrated by the following simple ordinary differential equation, which has an irregular singularity at $x=\infty$:
\begin{gather}\label{linearexample}
y'+y=\frac{1}{x^2}.
\end{gather}
The unique formal power series solution for $x\to \infty$ is
\begin{gather*}
\tilde{y}_0(x) = \sum_{n=1}^{\infty}\frac{n!}{x^{n+1}}
\end{gather*}
and the general solution to \eqref{linearexample} is
\begin{gather*}
y(x;C) = y_0(x)+C{\rm e}^{-x}, \qquad \textrm{where} \quad y_0(x) = {\rm e}^{-x} \int_{x_1}^{x}\frac{{\rm e}^s}{s^2} {\rm d}s \sim \tilde{y}_0(x) \quad \textrm{as} \quad x\to +\infty.
\end{gather*}
The idea of a transseries solution is to complete the classical formal power series solution, in the sense that the transseries representation includes the free parameters which appear in the actual solutions. In the example above, if we let
\begin{gather}\label{transseries0}
\tilde{y}(x) = \tilde{y}_0(x) + C {\rm e}^{-x} \qquad \textrm{for} \quad x\to +\infty,
\end{gather}
then \eqref{transseries0} is a formal solution to \eqref{linearexample} and the simplest example of a transseries.
Under appropriate conditions (see \cite{OCIMRN,OCDuke,OCInv}), given $\phi$, the operator $\mathcal{L}_{\phi}\mathcal{B}$ is a one-to-one map between the transseries solutions and actual solutions. In the example~\eqref{linearexample}, the actual solutions have representation:
\begin{gather*}
y(x) =
\begin{cases}
\mathcal{L}_{\phi} \mathcal{B}\tilde{y}_0(x) + C_{+} {\rm e}^{-x}, & -\phi = \arg(x) \in \big(0,\tfrac{\pi}{2}\big),\\
\mathcal{L}_{\phi} \mathcal{B}\tilde{y}_0(x) + C_{-} {\rm e}^{-x}, & -\phi = \arg(x) \in \big({-}\frac{\pi}{2},0\big).
\end{cases}
\end{gather*}
The value $C_+-C_-$ is called the Stokes constant. This representation is a trivial example of Borel summed representation of solutions. In the case of nonlinear systems such as~\eqref{NF}, the transseries solution is of the form~\eqref{NFsys} and the Borel summed representation of actual solutions is of the form~\eqref{solnsys}.
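For the model equation \eqref{linearexample} everything can be computed in closed form, which may help fix ideas. Writing $\tilde{y}_0(x)=\sum_{n\geq 1} n!\,x^{-n-1}$ and applying the definition of the Borel transform above (with $r=2$) gives
\begin{gather*}
\big(\mathcal{B}\tilde{y}_0\big)(p)=\sum_{n=1}^{\infty}\frac{n!\,p^{n}}{\Gamma(n+1)}=\sum_{n=1}^{\infty}p^{n}=\frac{p}{1-p},
\end{gather*}
which is analytic except for a simple pole at $p=1$. For $\phi\neq 0$ the Laplace transform $\mathcal{L}_{\phi}\mathcal{B}\tilde{y}_0$ is therefore well defined, and the two representations above differ exactly by the contribution of this pole, which by the residue theorem is a multiple of ${\rm e}^{-x}$; this pole is thus the source of the jump $C_+-C_-$.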
In this paper we study tronqu{\'e}e solutions of~\eqref{P31}, \eqref{P32} and~\eqref{P4} by first transforming each of them into a~second-order differential equation of the following form
\begin{gather}\label{eqh}
h''(w) - h(w)+\frac{1}{w}\big[\left(\beta_2-\beta_1\right) h(w)+\left(\beta_2+\beta_1\right) h'(w)\big] = g(w,h,h'),
\end{gather}
where $\beta_1$ and $\beta_2$ are constants and $g(w,h,h')$ is analytic at $(\infty,0,0)$. See \eqref{P31cov} for the change of variable for \eqref{P31}, see \eqref{P32cov} for the change of variable for \eqref{P32} and see \eqref{P41cov}, \eqref{P42cov} and \eqref{P43cov} for the change of variable for~\eqref{P4}. Next we make the substitution
\begin{gather}\label{htou}
\begin{bmatrix}
h(w)\\
h'(w)
\end{bmatrix}=
\begin{bmatrix}
1-\dfrac{\beta_1}{2w}&1+\dfrac{\beta_2}{2w}
\\
-1-\dfrac{\beta_1}{2w}&1-\dfrac{\beta_2}{2w}
\end{bmatrix}
\mathbf{u}(w).
\end{gather}
Then $\mathbf{u}$ is a solution to the following normalized (see \cite{OCInv}) 2-dimensional differential system:
\begin{gather}\label{NF}
\mathbf{u}'+\left(\hat{\Lambda} +\frac{\hat{B}}{w}\right) \mathbf{u}=\mathbf{g} (w,\mathbf{u} ),
\end{gather}
where
\begin{gather*}
\hat{\Lambda} =
\begin{bmatrix}
1&0\\
0&-1
\end{bmatrix},\qquad
\hat{B} =
\begin{bmatrix}
\beta_1&0\\
0&\beta_2
\end{bmatrix},
\end{gather*}
and $\mathbf{g} (w,\mathbf{u} )$ is analytic at $(\infty,\mathbf{0})$ with $\mathbf{g}(w,\mathbf{u}) = O\big(w^{-2}\big)+O\big(|\mathbf{u}|^2\big)$ as $w\to \infty$ and $\mathbf{u} \to \mathbf{0}$.
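As a guide to the reader (a heuristic remark only), the role of the two components of $\mathbf{u}$ can be read off by inverting \eqref{htou} as $w\to\infty$:
\begin{gather*}
\mathbf{u}(w)=\left(\frac12\begin{bmatrix}1&-1\\1&1\end{bmatrix}+O\big(w^{-1}\big)\right)\begin{bmatrix}h(w)\\h'(w)\end{bmatrix},
\end{gather*}
so that, to leading order, $u_1\approx\frac12(h-h')$ and $u_2\approx\frac12(h+h')$. The linear part of \eqref{NF} then shows that $u_1$ carries the recessive behavior ${\rm e}^{-w}w^{-\beta_1}$, which reappears in the transseries \eqref{transseriesh} below, while $u_2$ carries the growing behavior ${\rm e}^{w}w^{-\beta_2}$, which is absent from solutions of \eqref{eqh} that decay as $w\to\infty$ in $S_1$.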
We obtain information about the tronqu{\'e}e and tritronqu{\'e}e solutions of the normalized sys\-tem~\eqref{NF}, such as their existence, regions of analyticity and the asymptotic position of poles, through which we obtain corresponding results for the tronqu{\'e}e and tritronqu{\'e}e solutions of~$\rm P_{III}$ and~$\rm P_{IV}$. See also~\cite{CCH}, in which a similar approach was used to study the tronqu{\'e}e solutions of the first Painlev{\'e} equation.
\section{Tronqu{\'e}e solutions of (\ref{eqh})}\label{NFinfo}
\subsection{Formal solutions and tronqu{\'e}e solutions of (\ref{eqh})}
In Proposition~\ref{NFformal} and Theorem~\ref{NFactual} we present formal and actual solutions on the right half $w$-plane $S_1 := \big\{w\colon \arg(w) \in \big({-}\frac{\pi}{2},\frac{\pi}{2}\big)\big\}$. Then through a~simple symmetry transformation we obtain solutions in the left half plane $S_2 := \big\{w\colon \arg(w) \in \big(\frac{\pi}{2},\frac{3\pi}{2}\big)\big\}$. We start with the formal expansions of the solutions.
Assume that $d$ is a ray of the form ${\rm e}^{{\rm i}\phi} \mathbb{R}^+$ with $\phi \in \big({-}\frac{\pi}{2},\frac{\pi}{2}\big)$. We have the following results on transseries solutions of~\eqref{eqh}, that is, formal expansions in powers of~$1/w$ and ${\rm e}^{-w}$ (see~\cite{OCInv}), valid on~$d$ and, moreover, in the sector~$S_1$:
\begin{Proposition} \label{NFformal}
Assume that $d$ is a ray of the form ${\rm e}^{{\rm i}\phi} \mathbb{R}^+$ with $\phi\in\big({-}\frac{\pi}{2},\frac{\pi}{2}\big)$. Then
\begin{enumerate}[$(i)$]\itemsep=0pt
\item the one-parameter family of transseries solutions of \eqref{eqh} satisfying $h(w) \to 0$ as $|w| \to \infty$ on $d$ is
\begin{gather}\label{transseriesh}
\tilde{h}(w) = \tilde{h}_{0}(w) +\sum_{k=1}^{\infty} C^k {\rm e}^{-kw}w^{-\beta_1k}\tilde{s}_{k}(w),
\end{gather}
where for each $k\geq 1$
\begin{gather*}
\tilde{s}_{k}(w) = \sum_{j=0}^{\infty}\frac{{s}_{k,j}}{w^j}
\end{gather*}
is a formal power series in $w^{-1}$.
\item The formal power series in $w^{-1}$
\begin{gather*}
\tilde{h}_{0}(w)= \sum_{j=2}^{\infty}\frac{{h}_{0,j}}{w^j}
\end{gather*}
is the unique formal power series solution of \eqref{eqh}.
\end{enumerate}
\end{Proposition}
The results in \cite{OCInv} provide us with the relation between these transseries solutions and actual solutions.
\begin{Theorem}\label{NFactual} Let $d$, $\tilde{h}_{0}(w)$ and $\tilde{s}_{k}(w)$ be as in Proposition~{\rm \ref{NFformal}}. Let $h(w)$ be a solution to~\eqref{eqh} on~$d$ for~$|w|$ large enough satisfying
\begin{gather*}
h(w) \to 0,\qquad w\in d, \qquad |w| \to \infty.
\end{gather*}
Then
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a unique pair of constants $(C_+,C_-)$ associated with $h(w)$, and $h(w)$ has the following representations
\begin{gather}\label{truesolnup}
h(w) = \mathcal{L}_{\phi} {H}_0(w) + \sum_{k=1}^{\infty}C_{+}^k {\rm e}^{-kw} w^{k M_1} \mathcal{L}_{\phi}{H}_k(w),
\qquad -\phi = \arg(w) \in \left(0,\frac{\pi}{2}\right),\\
\label{truesolndown}
h(w) = \mathcal{L}_{\phi} {H}_0(w) + \sum_{k=1}^{\infty}C_{-}^k {\rm e}^{-kw} w^{k M_1} \mathcal{L}_{\phi}{H}_k(w), \qquad -\phi = \arg(w) \in \left(-\frac{\pi}{2},0\right),
\end{gather}
where
\begin{gather*}
M_1 = \lfloor \operatorname{Re}(-\beta_1)\rfloor+1,\qquad H_0 = \mathcal{B} \tilde{h}_{0},\\
H_k = \mathcal{B} \tilde{h}_{k} = \mathcal{B}\big(w^{-k\beta_1-kM_1}\tilde{s}_{k}\big), \qquad k=1,2,\dots,
\end{gather*}
where each $H_k$ is analytic on the Riemann surface of $\mathbb{C}\backslash\left(\mathbb{Z}^+\cup \mathbb{Z}^-\right)$, and the branch cut for each ${H}_k$, $k\geq 1$, is chosen to be $(-\infty,0]$.
\item There exists $\epsilon_0>0$ such that for each $0<\epsilon \leq \epsilon_0$ there exist $\delta_\epsilon>0$, $R_\epsilon>0$ such that $h(w)$ can be analytically continued to $($at least$)$ the following region
\begin{gather}\label{Sanep}
S_{{\rm an},\epsilon}(h(w)) = S^+_{\epsilon}\cup S^-_{\epsilon},
\end{gather}
where
\begin{gather}
S^{-}_{\epsilon} = \left\{w\colon |w|>R_\epsilon , \, \arg(w) \in \left[-\frac{\pi}{2}-\epsilon,\frac{\pi}{2}-\epsilon\right] \; \mathrm{and}\; |C_{-} {\rm e}^{-w} w^{-\beta_1}| < \delta_\epsilon^{-1} \right\},\nonumber\\
S^{+}_{\epsilon} = \left\{w\colon |w|>R_\epsilon , \, \arg(w) \in \left[-\frac{\pi}{2}+\epsilon,\frac{\pi}{2}+\epsilon \right] \; \mathrm{and}\; |C_{+} {\rm e}^{-w} w^{-\beta_1}| < \delta_\epsilon^{-1}\right\}.\label{Spm}
\end{gather}
Consequently, $h(w)$ is analytic $($at least$)$ in
\begin{gather}\label{San}
S_{\rm an}(h) = \bigcup_{0<\epsilon \leq \epsilon_0}\big(S^{-}_{\epsilon}\cup S^{+}_{\epsilon}\big).
\end{gather}
\item $h(w) \sim \tilde{h}_{0}(w)$ in $S_1$.
\end{enumerate}
\end{Theorem}
\begin{Note}\label{NFshape}\quad
\begin{enumerate}[(i)]\itemsep=0pt
\item It is straightforward to check that if $\operatorname{Re}(\beta_1) >0$, $S_{\rm an}$ contains all but a compact subset of~${\rm i}\mathbb{R}$. In other words there exists $R_0>0$ such that $h(w)$ is analytic in the closure of~${S}_1 \backslash {\mathbb{D}}_{R_0}$, where $S_1$ is the open right half plane and $\mathbb{D}_{R_0} = \{|w| < R_0\}$ is the open disk centered at origin with radius~$R_0$.
\item On the other hand if $\operatorname{Re}(\beta_1) <0$, $S^c_{\rm an}$ contains all but a compact subset of ${\rm i} \mathbb{R}$. We point out that in particular, the solution is not analytic in ${S}_1 \backslash {\mathbb{D}}_{R_0}$ for any $R_0>0$, contrary to the claim in~\cite{Lin}. Singularities of the tronqu{\'e}e solutions exist for large $w$ in ${S}_1$ as seen in Theorem~\ref{sing},~\ref{NFtri}, \ref{P31sing}, \ref{P31tri}, \ref{P42sing} and \ref{P42tri}.
\end{enumerate}
\end{Note}
\begin{Theorem}[asymptotic position of singularities]\label{sing} Let $h$, $C_{+}$ and $C_{-}$ be as in Theorem~{\rm \ref{NFactual}}.
\begin{enumerate}[$(i)$]\itemsep=0pt
\item Assume $C_{+} \neq 0$. Denote
\begin{gather}\label{xip}
\xi_+(w) = C_{+} w^{-\beta_1} {\rm e}^{-w}.
\end{gather}
Then
\begin{gather*}
h(w) \sim \sum_{m=0}^{\infty} \frac{F_m(\xi_+(w))}{w^m},\qquad |w| \to \infty, \qquad w\in \mathcal{D}^+_w,
\end{gather*}
where for each $m\geq 0$, $F_m$ is analytic at $\xi = 0$ and
\begin{gather}\label{Dwp}
\mathcal{D}^+_w =\left\{|w| > R\colon \arg w \in \left(-\frac{\pi}{2}+\delta,\frac{\pi}{2}+\delta \right), \,\operatorname{dist}\left(\xi_+(w),\Xi \right) >\epsilon ,\, |\xi_+(w)|<\epsilon^{-1}\right\}\!\!\!
\end{gather}
for any $\delta,\epsilon>0$ small enough and $R$ large enough, and where $\Xi$ is the set of singularities of $F_0(\xi)$. $F_0(\xi)$ satisfies
\begin{gather}\label{conF0}
F_0(0) =0,\qquad F'_0(0)=1.
\end{gather}
\item Assume $C_{+} \neq 0$, and $\xi_s \in \Xi$ is a singularity of $F_0$. Then the singular points of $h$, $w^{+}_n$, near the boundary $\{w\colon \arg(w) = {\pi}/{2} \}$ of the sector of analyticity are given asymptotically by
\begin{gather}\label{wpn}
w^{+}_{n} = 2n\pi{\rm i} -\beta_1 \ln(2n\pi{\rm i})+\ln(C_+)-\ln(\xi_s)+o(1)
\end{gather}
as $n \to \infty$.
\item Assume $C_{-} \neq 0$. Denote
\begin{gather}\label{xim}
\xi_-(w) = C_{-} w^{-\beta_1} {\rm e}^{-w}.
\end{gather}
Then
\begin{gather*}
h(w) \sim \sum_{m=0}^{\infty} \frac{F_m(\xi_-(w))}{w^m},\qquad |w| \to \infty, \qquad w\in \mathcal{D}^-_w,
\end{gather*}
where
\begin{gather}\label{Dwm}
\mathcal{D}^-_w =\left\{|w| > R\colon \arg w \in \left(-\frac{\pi}{2}-\delta,\frac{\pi}{2}-\delta \right), \,\operatorname{dist} (\xi_-(w),\Xi ) >\epsilon , \, |\xi_-(w)|<\epsilon^{-1}\right\}\!\!\!
\end{gather}
for any $\delta,\epsilon>0$ small enough and $R$ large enough, and where $F_m$, $m\geq 0$, and $\Xi$ are as described in $(i)$.
\item Assume $C_{-} \neq 0$, and $\xi_s \in \Xi$ is a singularity of $F_0$. Then the singular points of $h$, $w^{-}_n$, near the boundary $ \{w\colon \arg(w) = {-\pi}/{2} \}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
w^{-}_{n} = -2n\pi{\rm i} -\beta_1 \ln(-2n\pi{\rm i})+\ln(C_-)-\ln(\xi_s)+o(1)
\end{gather*}
as $n \to \infty$.
\end{enumerate}
\end{Theorem}
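The expression \eqref{wpn} (and its analogue in $(iv)$) can be checked by direct substitution into \eqref{xip}: with $w^{+}_{n}$ as in \eqref{wpn} one has ${\rm e}^{-w^{+}_{n}}=(2n\pi{\rm i})^{\beta_1}\,\xi_s\,C_{+}^{-1}\big(1+o(1)\big)$ and $\big(w^{+}_{n}\big)^{-\beta_1}=(2n\pi{\rm i})^{-\beta_1}\big(1+o(1)\big)$, so that
\begin{gather*}
\xi_+\big(w^{+}_{n}\big)=C_{+}\big(w^{+}_{n}\big)^{-\beta_1}{\rm e}^{-w^{+}_{n}}=\xi_s\big(1+o(1)\big),\qquad n\to\infty,
\end{gather*}
i.e., along the points $w^{+}_{n}$ the variable $\xi_+$ approaches the singularity $\xi_s$ of $F_0$, consistently with the description of the singularities of $h$ in terms of those of $F_0$.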
The expression of $F_0$ (see \eqref{P31F0}, \eqref{P32F0}, \eqref{P41F0} and \eqref{P42F0}) is obtained explicitly in each case where the asymptotic position of singularities is presented.
\subsection{Tritronqu{\'e}e solutions of (\ref{eqh})}\label{NFtrit}
The information on formal and actual tronqu\'{e}e solutions of~\eqref{eqh} in the left half plane $S_2 := \big\{w\colon \arg(w) \in \big(\frac{\pi}{2},\frac{3\pi}{2}\big)\big\}$ is obtained by means of a simple transformation
\begin{gather*}
h(w)=\hat{h}(-w),\qquad \tilde{w} = -w.
\end{gather*}
In the new variable, \eqref{eqh} is rewritten as
\begin{gather} \label{eqhhat}
\hat{h}''(\tilde{w})-\hat{h}(\tilde{w})+\frac{1}{\tilde{w}}\big[(\beta_1-\beta_2) \hat{h}(\tilde{w})+(\beta_1+\beta_2) \hat{h}'(\tilde{w})\big] = g\big({-}\tilde{w},\hat{h},-\hat{h}'\big),
\end{gather}
which is of the form \eqref{eqh} with $\beta_1$ and $\beta_2$ exchanged, and thus all results in Proposition~\ref{NFformal}, Theorems~\ref{NFactual} and~\ref{sing} apply. Without repeating all of the results, we introduce some notations needed for describing the tritronqu\'{e}e solutions of~\eqref{eqh}.
The small transseries solutions of \eqref{eqhhat} in the right half $\tilde{w}$-plane are
\begin{gather*}
\tilde{h}_l(\tilde{w}) = \tilde{h}_{0}(-\tilde{w}) +\sum_{k=1}^{\infty} C^k {\rm e}^{-k\tilde{w}}\tilde{w}^{-\beta_2 k}\tilde{t}_{k}(\tilde{w}),
\end{gather*}
where for each $k\geq 1$, $\tilde{t}_{k}(\tilde{w})$ is a formal power series in $\tilde{w}^{-1}$.
Assume that $\hat{h}(\tilde{w})$ is an actual solution to \eqref{eqhhat} on $d = {\rm e}^{{\rm i}\theta} \mathbb{R}^+$ with $\cos \theta >0$, such that $\hat{h}(\tilde{w})=o(1)$ as $|\tilde{w}| \to \infty$. Then there exists a unique pair of constants $\big(\hat{C}_+,\hat{C}_-\big)$ such that
\begin{gather*}
\hat{h}(\tilde{w}) =
\begin{cases}
\displaystyle \mathcal{L}_{\phi} {\hat{H}}_0(\tilde{w}) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{-k\tilde{w}} \tilde{w}^{k M_2} \mathcal{L}_{\phi}{\hat{H}}_k(\tilde{w}), & \displaystyle -\phi = \arg(\tilde{w}) \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle \mathcal{L}_{\phi} {\hat{H}}_0(\tilde{w}) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{-k\tilde{w}} \tilde{w}^{k M_2} \mathcal{L}_{\phi}{\hat{H}}_k(\tilde{w}), & \displaystyle -\phi = \arg(\tilde{w}) \in \left(-\frac{\pi}{2},0\right),
\end{cases}
\end{gather*}
where
\begin{gather}
M_2 = \lfloor \operatorname{Re}(-\beta_2)\rfloor+1,\!\qquad \hat{H}_0(p) = -H_0(-p),\!\qquad
\hat{H}_k = \mathcal{B}\big(w^{-k\beta_2-kM_2}\tilde{t}_{k}\big),\!\qquad k\geq1,\!\!\!\label{hathcdn}
\end{gather}
where each $\hat{H}_k$ is analytic in the Riemann surface of $\mathbb{C}\backslash (\mathbb{Z}^+\cup \mathbb{Z}^-)$, and the branch cut for each~${\hat{H}}_k$, $k\geq 1$, is chosen to be~$(-\infty,0]$. Note that the second equation in~\eqref{hathcdn} holds because the power series solution of~\eqref{eqhhat} must be~$\tilde{h}_0(-\tilde{w})$. By the definition of the Borel transform (see Appendix~\ref{Appx}) we have $\hat{H}_0(p) = -H_0(-p)$.
By Theorem~\ref{NFactual}(ii), $h$ is analytic at least on
\begin{gather*}
\hat{S}_{\rm an}(h):=-{S}_{\rm an}\big(\hat{h}\big),
\end{gather*}
where $S_{\rm an}(\hat{h})$ is given by \eqref{Sanep}--\eqref{San} with $\beta_1$ replaced by $\beta_2$. Denote $\hat{\xi}_{\pm} = \hat{C}_{\pm} \tilde{w}^{-\beta_2} {\rm e}^{-\tilde{w}}$ as in~\eqref{xip} and~\eqref{xim}. By Theorem~\ref{sing} if $\hat{C}_+ \neq 0$ then
\begin{gather*}
\hat{h}(\tilde{w}) \sim \sum_{m=0}^{\infty} \frac{\hat{F}_m\big(\hat{\xi}_+(\tilde{w})\big)}{\tilde{w}^m}, \qquad |\tilde{w}| \to \infty, \qquad \tilde{w}\in \mathcal{D}^+_{\tilde{w}},
\end{gather*}
where $\hat{F}_m$ are analytic at $\xi = 0$. If $\hat{C}_- \neq 0$ then
\begin{gather*}
\hat{h}(\tilde{w}) \sim \sum_{m=0}^{\infty} \frac{\hat{F}_m(\hat{\xi}_-(\tilde{w}))}{\tilde{w}^m}, \qquad |\tilde{w}| \to \infty, \qquad \tilde{w}\in \mathcal{D}^-_{\tilde{w}},
\end{gather*}
where $\mathcal{D}^{\pm}_{\tilde{w}}$ are defined by~\eqref{Dwp} and~\eqref{Dwm} with~$\xi_{\pm}$ replaced by $\hat{\xi}_{\pm}$ respectively and~$\Xi$ replaced by~$\hat{\Xi}$, which is defined to be the set of singularities of $\hat{F}_0$.
Tritronqu\'{e}e solutions are special cases of tronqu\'{e}e solutions with $C_{+}=0$ or $C_{-}=0$. Denote
\begin{alignat*}{3}
& h^{+}(w) = \mathcal{L}_{\phi} H_0 (w), \qquad && -\phi = \arg(w) \in (0,\pi),&\\
& h^{-}(w) = \mathcal{L}_{\phi}H_0(w), \qquad && -\phi = \arg(w) \in (-\pi,0),&\\
&\hat{h}^{+}(w)= \mathcal{L}_{\phi} \hat{H}_0 (w), \qquad && -\phi = \arg(w) \in (0,\pi),&\\
&\hat{h}^{-}(w)= \mathcal{L}_{\phi} \hat{H}_0(w), \qquad && -\phi = \arg(w) \in (-\pi,0).&
\end{alignat*}
\begin{Corollary} \label{NFtri}Assume $\phi \in \big(0,\frac{\pi}{2}\big)$. Let ${C}^{t}_{1}$, ${C}^{t}_{2}$, ${C}^{t}_{3}$, ${C}^{t}_{4}$ be the constants in the transseries of~$h^{\pm}$ and~$\hat{h}^{\pm}$, namely,
\begin{gather}
h^{+}(w) = \mathcal{L}_{-\phi} H_0 (w) = \mathcal{L}_{\phi} H_0 (w) + \sum_{k=1}^{\infty}\big({C}^{t}_{1}\big)^k {\rm e}^{-kw} w^{M_1 k} \mathcal{L}_{\phi} H_k (w),\nonumber \\
h^{-}(w) = \mathcal{L}_{\phi} H_0 (w) = \mathcal{L}_{-\phi} H_0 (w) + \sum_{k=1}^{\infty}\big({C}^{t}_{2}\big)^k {\rm e}^{-kw} w^{M_1 k} \mathcal{L}_{-\phi} H_k (w),\nonumber \\
\hat{h}^{+}(w)= \mathcal{L}_{-\phi} \hat{H}_0 (w) = \mathcal{L}_{\phi} \hat{H}_0 (w) + \sum_{k=1}^{\infty}\big({C}^{t}_{3}\big)^k {\rm e}^{-kw} w^{M_2 k} \mathcal{L}_{\phi} \hat{H}_k (w),\nonumber \\
\hat{h}^{-}(w)= \mathcal{L}_{\phi} \hat{H}_0 (w) = \mathcal{L}_{-\phi} \hat{H}_0 (w) + \sum_{k=1}^{\infty}\big({C}^{t}_{4}\big)^k {\rm e}^{-kw} w^{M_2 k} \mathcal{L}_{-\phi} \hat{H}_k (w).\label{htritrep}
\end{gather}
\begin{enumerate}[$(i)$]\itemsep=0pt
\item We have
\begin{gather*}
h^+(w) = \hat{h}^-(-w),\qquad h^-(w) = \hat{h}^+(-w).
\end{gather*}
A consequence of Theorem~{\rm \ref{NFactual}}$(ii)$ is that for any $\delta>0$ there exists $R>0$ such that $h^+$ is analytic in the sector
\begin{gather*}
T^+_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[-\frac{\pi}{2}+\delta,\frac{3\pi}{2}-\delta\right] \right\},
\end{gather*}
and $h^-$ is analytic in the sector
\begin{gather*}
T^-_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[-\frac{3\pi}{2}+\delta,\frac{\pi}{2}-\delta\right] \right\}.
\end{gather*}
\item Assume $\xi_s \in \Xi$ is a singularity of $F_0$ $($see Theorem~{\rm \ref{sing}}$(ii))$ and $\hat{\xi}_s \in \hat{\Xi}$ is a singularity of~$\hat{F}_0$. Then the singular points of $h^+$, $w^-_{1,n}$ near the boundary $\big\{w\colon \arg w =- \frac{\pi}{2}\big\}$ and $w^+_{1,n}$ near the boundary $\big\{w\colon \arg w = \frac{3\pi}{2}\big\}$, are given asymptotically by
\begin{gather}
w^-_{1,n} = -2 n \pi {\rm i} -\beta_1 \ln(-2n\pi {\rm i})+\ln \big({C}^{t}_{1}\big) - \ln(\xi_s)+o(1), \nonumber\\
w^+_{1,n} = -2 n \pi {\rm i} +\beta_2 \ln(2n\pi {\rm i})-\ln \big({C}^{t}_{4}\big) +\ln\big(\hat{\xi}_s\big)+o(1),\label{poledistrn}
\end{gather}
as $n \to \infty$. The singular points of $h^-$, $w^-_{2,n}$ near the boundary $\big\{w\colon \arg w =- \frac{3\pi}{2}\big\}$ and~$w^+_{2,n}$ near the boundary $\big\{w\colon \arg w = \frac{\pi}{2}\big\}$, are given asymptotically by
\begin{gather*}
w^-_{2,n} = 2 n \pi {\rm i} +\beta_2 \ln(-2n\pi {\rm i})-\ln \big({C}^{t}_{3}\big) + \ln\big(\hat{\xi}_s\big)+o(1), \\
w^+_{2,n} = 2 n \pi {\rm i} -\beta_1 \ln(2n\pi {\rm i})+\ln \big({C}^{t}_{2}\big) - \ln({\xi}_s)+o(1).
\end{gather*}
\end{enumerate}
\end{Corollary}
\section[Normalizations and Tronqu{\'e}e solutions of $\rm P_{III}$ and $\rm P_{IV}$]{Normalizations and Tronqu{\'e}e solutions of $\boldsymbol{\rm P_{III}}$ and $\boldsymbol{\rm P_{IV}}$}\label{BS3}
\subsection[Tronqu{\'e}e solutions of ${\rm P}^{(i)}_{\rm III}$]{Tronqu{\'e}e solutions of $\boldsymbol{{\rm P}^{(i)}_{\rm III}}$}
If $y(x)$ is a solution of \eqref{P31} which is asymptotic to a formal power series on a ray~$d$ which is not an antistokes line (a line on which $\arg{w} = \pm \frac{\pi}{2}$, where $w$ is the independent variable in the normalized equation), then by dominant balance we have
\begin{gather*}
y(x) \sim l(x), \qquad |x|\to \infty, \qquad x\in d,
\end{gather*}
where
\begin{gather*}
l(x) = A - \left(\frac{\alpha+A^2 \beta}{4} \right)\frac{1}{x}
\end{gather*}
for some $A$ satisfying $A^4=1$. Fix such an $A$ and make the change of variables
\begin{gather}\label{P31cov}
w=2Ax, \qquad y(x)=h(w)+l\left(\frac{w}{2A}\right).
\end{gather}
Then the equation \eqref{P31} is transformed into an equation for $h$ of the form \eqref{eqh} with
\begin{gather*}
\beta_1 = \frac{1}{2}+\frac{\alpha}{4}-\frac{A^2 \beta}{4}, \qquad \beta_2 = \frac{1}{2}-\frac{\alpha}{4} +\frac{A^2 \beta}{4}.
\end{gather*}
Results in Section~\ref{NFinfo} apply. Let the notations be the same as in Section~\ref{NFinfo}.
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a unique formal power series solution
\begin{gather*}
\tilde{y}_0(x) = \sum_{k=0}^{\infty} \frac{y_{0,k}}{x^k}
\end{gather*}
to \eqref{P31}, where
\begin{gather*}
y_{0,0}=A,\qquad y_{0,1}= -\frac{\alpha+A^2\beta}{4}.
\end{gather*}
\item There is a one-parameter family $\mathcal{F}_{A,1}$ of tronqu{\'e}e solutions of~\eqref{P31} in $A^{-1}S_1$ with representations
\begin{gather}\label{P31FA1}
y(x) = \begin{cases}
\displaystyle l(x)+ h^{+}(2Ax) + \sum_{k=1}^{\infty}C_{+}^k {\rm e}^{-2Akx} (2Ax)^{kM_1} \mathcal{L}_{\phi}{H}_k(2Ax), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle l(x)+ h^{-}(2Ax) + \sum_{k=1}^{\infty}C_{-}^k {\rm e}^{-2Akx} (2Ax)^{kM_1} \mathcal{L}_{\phi}{H}_k(2Ax), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}\hspace*{-10mm}
\end{gather}
\item There is a one-parameter family $\mathcal{F}_{A,2}$ of tronqu{\'e}e solutions of \eqref{P31} in $A^{-1}S_2$ with representations
\begin{gather}\label{P31FA2}
y(x) =
\begin{cases}
\displaystyle l(x)+ h^{-}(2Ax) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{2Akx} (-2Ax)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-2Ax), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle l(x)+ h^{+}(2Ax) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{2Akx} (-2Ax)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-2Ax), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}\hspace*{-10mm}
\end{gather}
\item For each tronqu\'{e}e solution in $(ii)$ or $(iii)$ we have
\begin{gather*}
y(x) \sim \tilde{y}_0(x),\qquad x\in d = A^{-1}{\rm e}^{{\rm i}\theta} \mathbb{R}^+, \qquad |x|\to\infty,
\end{gather*}
and the solution is analytic at least in $(2A)^{-1}{S}_{\rm an}$ if $\cos\theta>0$, in $(2A)^{-1}\hat{S}_{\rm an}$ if $\cos\theta <0$. $S_{\rm an}$ and $\hat{S}_{\rm an}$ are as defined in Theorem~{\rm \ref{NFactual}} and Section~{\rm \ref{NFtrit}}.
\end{enumerate}
\end{Theorem}
From Theorem~\ref{sing} we obtain information about the singularities of $y$. Assume that $y$ is a~tronqu{\'e}e solution with representation~\eqref{P31FA1} or~\eqref{P31FA2}. Let $\xi_+ = C_+ {\rm e}^{-w} w^{-\beta_1}$, $\xi_- = C_- {\rm e}^{-w} w^{-\beta_1}$, $F_m$ and $\hat{F}_m$ be as in Section~\ref{NFinfo}. Then the equation satisfied by $F_0$ is
\begin{gather}\label{P31eqF0}
{\xi}^{2}{\frac {{\rm d}^{2}}{{\rm d}{\xi}^{2}}}F_0 (\xi) +\xi {\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) -{\frac {{\xi}^{2} \big( {\frac {{\rm d}}{{\rm d} \xi}}F_0 (\xi) \big) ^{2}}{A+F_0 (\xi) }}- {\frac {( A+F_0 (\xi)) ^{3}}{4{A}^{2}}}+ {\frac {1}{4{A}^{2} ( A+F_0 (\xi)) }} =0.
\end{gather}
The equation satisfied by $\hat{F}_0$ is the same as \eqref{P31eqF0}. The solution satisfying \eqref{conF0} is
\begin{gather}\label{P31F0}
F_0(\xi) = \frac{2A \xi}{2A-\xi}.
\end{gather}
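As a quick consistency check (not needed in the proofs), \eqref{P31F0} satisfies \eqref{conF0}:
\begin{gather*}
F_0(0)=0,\qquad F'_0(\xi)=\frac{4A^2}{(2A-\xi)^2},\qquad F'_0(0)=1,
\end{gather*}
and its only singularity is the simple pole at $\xi_s=2A$; this is the value of $\xi_s$ that enters the asymptotics below through the terms $-\ln(2A)$.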
\begin{Theorem}\label{P31sing}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item Assume $y(x) \in \mathcal{F}_{A,1}$ is given by the representation~\eqref{P31FA1}. If $C_{+} \neq 0$, then the singular points of $y$, $x^{+}_n$, near the boundary $\{x\colon \arg(2Ax) = {\pi}/{2} \}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
(2A)x^+_n = 2n\pi {\rm i} -\beta_1 \ln(2n\pi {\rm i})+\ln(C_+)-\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
If $C_{-} \neq 0$, then the singular points of $y$, $x^{-}_n$, near the boundary $\{x\colon \arg(2Ax) = {-\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
(2A)x^-_n = -2n\pi {\rm i} -\beta_1 \ln(-2n\pi {\rm i})+\ln(C_{-})-\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
\item Assume $y(x) \in \mathcal{F}_{A,2}$ is given by the representation \eqref{P31FA2}. If $\hat{C}_{+} \neq 0$, then the singular points of $y$, $\tilde{x}^{+}_n$, near the boundary $\{\tilde{x}\colon \arg(-2A\tilde{x}) = {\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
(-2A)\tilde{x}^+_n = 2n\pi {\rm i} -\beta_2 \ln(2n\pi {\rm i})+\ln\big(\hat{C}_+\big)-\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
If $\hat{C}_{-} \neq 0$, then the singular points of $y$, $\tilde{x}^{-}_n$, near the boundary $\{\tilde{x}\colon \arg(-2A\tilde{x}) = -{\pi}/{2} \}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
(-2A)\tilde{x}^-_n = -2n\pi {\rm i} -\beta_2 \ln(-2n\pi {\rm i})+\ln\big(\hat{C}_{-}\big)-\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
From Theorem~\ref{sing} we obtain the following results about tritronqu{\'e}e solutions of \eqref{P31}:
\begin{Theorem}\label{P31tri}
\eqref{P31} has two tritronqu{\'e}e solutions $y^+(x)$ and $y^-(x)$ given by
\begin{gather*}
y^+(x) = l(x) +h^{+}(2Ax),\qquad y^-(x)= l(x)+h^{-}(2Ax).
\end{gather*}
Let $C^t_{j}$, $1\leq j\leq 4$ be as in \eqref{htritrep}. Then
\begin{enumerate}[$(i)$]\itemsep=0pt
\item $\mathcal{F}_{A,1} \cap \mathcal{F}_{A,2} = \{ y^+,y^-\}$.
\item For each $\delta>0$ there exists $R>0$ such that $y^+(x)$ is analytic in $A^{-1}T^+_{\delta,R}$, and $y^+$ is asymptotic to $\tilde{y}_0(x)$ in the sector
\begin{gather*}
\bigcup_{-\frac{\pi}{2}< \phi < \frac{3\pi}{2} } \big(A^{-1}{\rm e}^{{\rm i}\phi} \mathbb{R}^+\big) .
\end{gather*}
The singular points of $y^+(x)$, $x^{\pm}_{1,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
(2A)x^-_{1,n} = -2n\pi {\rm i} -\beta_1 \ln(-2n\pi {\rm i})+\ln\big(C^{t}_{1}\big)-\ln(2A)+o(1), \qquad n \to \infty,\\
(2A){x}^+_{1,n} = -2n\pi {\rm i} +\beta_2 \ln(2n\pi {\rm i})-\ln\big({C}^{t}_4\big)+\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
\item For each $\delta>0$ there exists $R>0$ such that $y^-(x)$ is analytic in $A^{-1}T^-_{\delta,R}$, and $y^-$ is asymptotic to $\tilde{y}_0(x)$ in the sector
\begin{gather*}
\bigcup_{-\frac{3\pi}{2}< \phi < \frac{\pi}{2} } \big(A^{-1}{\rm e}^{{\rm i}\phi} \mathbb{R}^+\big).
\end{gather*}
The singular points of $y^-(x)$, $x^{\pm}_{2,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
(2A){x}^-_{2,n} = 2n\pi {\rm i} +\beta_2 \ln(-2n\pi {\rm i})-\ln\big({C}^{t}_3\big)+\ln(2A)+o(1), \qquad n \to \infty,\\
(2A)x^+_{2,n} = 2n\pi {\rm i} -\beta_1 \ln(2n\pi {\rm i})+\ln\big(C^{t}_{2}\big)-\ln(2A)+o(1), \qquad n \to \infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
\subsection[Tronqu{\'e}e solutions of ${\rm P}^{(ii)}_{\rm III}$]{Tronqu{\'e}e solutions of $\boldsymbol{{\rm P}^{(ii)}_{\rm III}}$}
If $y(x)$ is a solution of \eqref{P32} which is asymptotic to a formal power series on a ray $d$ which is not an antistokes line, then by dominant balance we have
\begin{gather*}
y(x) \sim l(x), \qquad |x|\to \infty, \qquad x\in d,
\end{gather*}
where
\begin{gather}\label{P32lx}
l(x) = A x^{1/3} -\frac{\beta}{3A x^{1/3}}
\end{gather}
for some $A$ satisfying $A^3=1$. Fix such an $A$ and make the change of variables
\begin{gather}\label{P32cov}
w=(27A/4)^{1/2}x^{2/3}, \qquad y(x)=x^{1/3}h(w)+l(x).
\end{gather}
Then the equation \eqref{P32} is transformed into an equation for $h$ of the form \eqref{eqh} with
\begin{gather*}
\beta_1 =\beta_2 = \frac{1}{2}.
\end{gather*}
Let the notations be the same as in Section~\ref{NFinfo}. In view of the transformation \eqref{P32cov}, we denote
\begin{gather*}
S^{(0)}_{R}:=\left\{x\colon |x|\geq R, \, \arg(x) \in \left[-\frac{3\pi}{4}-\frac{3\arg{{A}}}{4},\frac{3\pi}{4}-\frac{3\arg{{A}}}{4}\right]\right\}, \\
S^{(1)}_{R}:=\left\{x\colon |x|\geq R, \,\arg(x) \in \left[\frac{3\pi}{4}-\frac{3\arg{{A}}}{4},\frac{9\pi}{4}-\frac{3\arg{{A}}}{4}\right]\right\},\\
S^{(2)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[\frac{9\pi}{4}-\frac{3\arg{{A}}}{4},\frac{15\pi}{4}-\frac{3\arg{{A}}}{4}\right]\right\},\\
S^{(3)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[\frac{15\pi}{4}-\frac{3\arg{{A}}}{4},\frac{21\pi}{4}-\frac{3\arg{{A}}}{4}\right]\right\}.
\end{gather*}
We notice that for $j \in\{ 0,2\}$, $S^{(j)}_R$ is mapped under the transformation~\eqref{P32cov} bijectively to the closed sector $\overline{S}_1 \backslash {\mathbb{D}}_{R_0}$ in the $w$-plane, where $R_0 = R^{2/3}$ (see also Note~\ref{NFshape}); for $j \in\{ 1,3\}$, $S^{(j)}_R$ is mapped bijectively to the closed sector $\overline{S}_2 \backslash {\mathbb{D}}_{R_0}$ in the $w$-plane.
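This is a routine computation with arguments: taking the principal branch of the square root in $K=(27A/4)^{1/2}$, so that $\arg K = \frac{1}{2}\arg A$, we have $\arg w = \frac{1}{2}\arg A+\frac{2}{3}\arg x$, and therefore
\begin{gather*}
\arg w \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right]\ \Longleftrightarrow\ \arg x \in \left[-\frac{3\pi}{4}-\frac{3\arg A}{4},\,\frac{3\pi}{4}-\frac{3\arg A}{4}\right],
\end{gather*}
which is exactly the condition defining $S^{(0)}_{R}$; the remaining sectors $S^{(j)}_{R}$ correspond in the same way to successive intervals of $\arg w$ of length $\pi$.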
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a unique formal power series solution
\begin{gather*}
\tilde{y}_0(x) = x^{1/3} \sum_{k=0}^{\infty} \frac{y_{0,k}}{x^{2k/3}}
\end{gather*}
to \eqref{P32}, where
\begin{gather*}
y_{0,0}=A,\qquad y_{0,1}= -\frac{\beta}{3A}.
\end{gather*}
\item For each $j \in \{0,1,2,3\}$, there is a one-parameter family $\mathcal{F}_{A,j}$ of tronqu{\'e}e solutions of~\eqref{P32} in~$S^{(j)}_R$ where
\begin{gather*}
y(x) = l(x)+ x^{1/3} h(w), \qquad w = K x^{2/3},\qquad K=(27A/4)^{1/2}.
\end{gather*}
If $j$ is even, then $h(w)$ has the representations
\begin{gather}\label{P32FA02}
h(w) = \begin{cases}
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}C_{+}^k {\rm e}^{-kw} \mathcal{L}_{\phi}{H}_k(w), &\displaystyle -\phi \in \left(0,\frac{\pi}{2}\right],\\
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}C_{-}^k {\rm e}^{-kw} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left[-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
If $j$ is odd, then $h(w)$ has the representations
\begin{gather}\label{P32FA13}
h(w) = \begin{cases}
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{kw} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right],\\
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{kw} \mathcal{L}_{\phi}{\hat{H}}_k(-w), &\displaystyle -\phi \in \left[-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
\item Let $y(x)$ be a tronqu{\'e}e solution in $\mathcal{F}_{A,j}$. If $j$ is even, the region of analyticity contains the corresponding branch of $\big(K^{-1} S_{\rm an}(h)\big)^{2/3}$, which contains~$S^{(j)}_R$ for~$R$ large enough. If~$j$ is odd, the region of analyticity contains the corresponding branch of $\big(K^{-1} \hat{S}_{\rm an}(h)\big)^{2/3}$, which contains~$S^{(j)}_R$ for $R$ large enough, and
\begin{gather*}
y(x) \sim \tilde{y}_0(x),\qquad x\in d, \qquad |x|\to\infty,
\end{gather*}
where $d$ is a ray whose infinite part is contained in the interior of~$S^{(j)}_R$.
\end{enumerate}
\end{Theorem}
Assume that $y(x)$ is a tronqu{\'e}e solution to \eqref{P32} and $h$ is defined by~\eqref{P32cov}. Then $h$ has the representation~\eqref{P32FA02} or~\eqref{P32FA13}. From Theorem~\ref{sing} we obtain information about singularities of $h$. Let $\xi_+ = C_+ {\rm e}^{-w} w^{-1/2}$, $\xi_- = C_- {\rm e}^{-w} w^{-1/2}$, $F_m$ and $\hat{F}_m$ be as in Section~\ref{NFinfo}. Then $F_0$ and $\hat{F}_0$ satisfy the same equation, namely
\begin{gather}\label{P32eqF0}
{\xi}^{2}{\frac {{\rm d}^{2}}{{\rm d}{\xi}^{2}}}F_0 (\xi) + \left( {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) \right) \xi-{\frac { \big( {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) \big) ^{2}{\xi}^{2}}{{A}+
F_0 (\xi) }}- {\frac {( {A}+F_0 (\xi)) ^{2}}{3{A}}}+ {\frac {1}{3{A} ( {A}+F_0 (\xi) ) }} =0.
\end{gather}
The solution satisfying \eqref{conF0} is
\begin{gather}\label{P32F0}
F_0(\xi) = \frac{36A^2 \xi}{(6A-\xi)^2}.
\end{gather}
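As in the previous case, one checks directly that \eqref{P32F0} satisfies \eqref{conF0}:
\begin{gather*}
F_0(0)=0,\qquad F'_0(\xi)=\frac{36A^2(6A+\xi)}{(6A-\xi)^3},\qquad F'_0(0)=1,
\end{gather*}
and its only singularity is the double pole at $\xi_s=6A$, which accounts for the terms $\pm\ln(6A)$ in the asymptotics below.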
\begin{Theorem} \label{P32sing}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item If $j\in\{0,2\}$, then $h$ has representation \eqref{P32FA02} for a unique pair of constants $(C_+,C_-)$. If $C_{+} \neq 0$, then the singular points of $h$, $w^{+}_n$, near the boundary $\{w\colon \arg w = {\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
w^{+}_{n} = 2n\pi {\rm i} -\frac{\ln(2n\pi {\rm i})}{2} +\ln(C_+)-\ln(6A)+o(1), \qquad n \to \infty.
\end{gather*}
If $C_{-} \neq 0$, then the singular points of $h$, $w^{-}_n$, near the boundary $\{w\colon \arg w = -{\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
w^{-}_{n} = -2n\pi {\rm i} -\frac{\ln(-2n\pi {\rm i})}{2}+\ln(C_-)-\ln(6A)+o(1),\qquad n \to \infty.
\end{gather*}
\item If $j\in\{1,3\}$, then $h$ has representation \eqref{P32FA13} for a unique pair of constants $\big(\hat{C}_+,\hat{C}_-\big)$. If $\hat{C}_{+} \neq 0$, then the singular points of $h$, $\tilde{w}^{+}_n$, near the boundary $\{\tilde{w}\colon \arg \tilde{w} = -{\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
\tilde{w}^+_n = -2n\pi {\rm i} +\frac{\ln(2n\pi {\rm i})}{2} -\ln\big(\hat{C}_+\big)+\ln(6A)+o(1), \qquad n \to \infty.
\end{gather*}
If $\hat{C}_{-} \neq 0$, then the singular points of~$h$, $\tilde{w}^{-}_n$, near the boundary $\{\tilde{w}\colon \arg \tilde{w} = {\pi}/{2} \}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
\tilde{w}^{-}_{n} = 2n\pi {\rm i} +\frac{\ln(-2n\pi {\rm i})}{2}-\ln\big(\hat{C}_-\big)+\ln(6A)+o(1), \qquad n \to \infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
\begin{Theorem}\label{P32tri}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item For each $j\in\{0,2\}$ we have a tritronqu{\'e}e solution $y^+_{j}$ analytic in $S^{(j)}_R \bigcup S^{(j+1)}_R$ for $R$ large enough, given by
\begin{gather*}
y^+_{j}(x) = l(x) +h^{+}(w).
\end{gather*}
\item For each $j\in\{1,3\}$ we have a tritronqu{\'e}e solution $y^-_{j}$ analytic in $S^{(j)}_R \bigcup S^{(j+1)}_R$, where $S^{(4)}_R = S^{(0)}_R$ and $R$ is large enough, given by
\begin{gather*}
y^-_{j}(x) = l(x) +h^{-}(w).
\end{gather*}
\end{enumerate}
Let $C^t_{j}$, $1\leq j\leq 4$ be as in \eqref{htritrep}. Then
\begin{enumerate}[(i)]\itemsep=0pt
\item[$(iii)$] The singular points of $h^+(w)$, $w^{\pm}_{1,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
w^-_{1,n} = -2 n \pi {\rm i} -\frac{\ln(-2n\pi {\rm i})}{2}+\ln \big({C}^{t}_{1}\big) - \ln(6A)+o(1), \\
w^+_{1,n} = -2 n \pi {\rm i} +\frac{ \ln(2n\pi {\rm i})}{2}-\ln \big({C}^{t}_{4}\big) +\ln(6A)+o(1).
\end{gather*}
\item[$(iv)$] The singular points of $h^-(w)$, $w^{\pm}_{2,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
w^-_{2,n} = 2 n \pi {\rm i} +\frac{ \ln(-2n\pi {\rm i})}{2}-\ln \big({C}^{t}_{3}\big) + \ln(6A)+o(1), \\
w^+_{2,n} = 2 n \pi {\rm i} - \frac{\ln(2n\pi {\rm i})}{2}+\ln \big({C}^{t}_{2}\big) - \ln(6A)+o(1).
\end{gather*}
\end{enumerate}
\end{Theorem}
\subsection[Tronqu{\'e}e solutions of $\rm P_{IV}$]{Tronqu{\'e}e solutions of $\boldsymbol{\rm P_{IV}}$}
By dominant balance there are four possibilities for the leading behavior of solutions of $\rm P_{IV}$. We shall study them one by one. In each case,
\begin{gather*}
y(x) \sim l(x), \qquad |x|\to \infty, \qquad x\in d.
\end{gather*}
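For the reader's convenience we indicate how these balances arise; this is a heuristic computation, assuming \eqref{P4} is taken in the standard form $y''=\frac{(y')^2}{2y}+\frac{3}{2}y^3+4xy^2+2\big(x^2-\alpha\big)y+\frac{\beta}{y}$. Substituting $y\sim cx$ and collecting the $O\big(x^3\big)$ terms gives
\begin{gather*}
\frac{3}{2}c^3+4c^2+2c=0,\qquad \text{i.e.}\qquad c\in\left\{0,-\frac{2}{3},-2\right\};
\end{gather*}
the nonzero roots give Cases~1 and~2 below, while the decaying balance $y\sim A/x$ (Case~3) requires the $O(x)$ terms $2x^2y$ and $\beta/y$ to cancel, and the two resulting choices of the sign of $A$ account for the remaining possibilities.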
\subsubsection{Case 1}
\begin{gather*}
l(x) = -\frac{2x}{3} + \frac{\alpha}{x}.
\end{gather*}
Make the change of variables
\begin{gather}\label{P41cov}
x=\big(\sqrt{3} {\rm i} w\big)^{1/2}, \qquad y(x)=x h(w)+l(x).
\end{gather}
Then the equation \eqref{P4} is transformed into an equation for $h$ of the form \eqref{eqh} with
\begin{gather*}
\beta_1 = \beta_2 = \frac{1}{2}.
\end{gather*}
Let the notations be the same as in Section~\ref{NFinfo}. In view of the transformation \eqref{P41cov}, we denote
\begin{gather}
S^{(0)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[0,\frac{\pi}{2}\right]\right\}, \qquad S^{(1)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[\frac{\pi}{2},\pi \right]\right\},\!\!\!\label{P41SjR}\\
S^{(2)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[\pi,\frac{3\pi}{2}\right]\right\},\qquad S^{(3)}_{R}:=\left\{x\colon |x|\geq R,\, \arg(x) \in \left[\frac{3\pi}{2},2 \pi\right]\right\}.\nonumber
\end{gather}
We notice that for $j \in\{ 0,2\}$, $S^{(j)}_R$ is mapped under the transformation~\eqref{P41cov} bijectively to the closed sector $\overline{S}_1 \backslash {\mathbb{D}}_{R^2}$ in the $w$-plane (see also Note~\ref{NFshape}); for $j \in\{ 1,3\}$, $S^{(j)}_R$ is mapped bijectively to the closed sector~$\overline{S}_2 \backslash {\mathbb{D}}_{R^2}$ in the $w$-plane.
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a formal power series solution of \eqref{P4} of the form
\begin{gather*}
\tilde{y}_0(x) = x \sum_{k=0}^{\infty} \frac{y_{0,k}}{x^{2k}},
\end{gather*}
where
\begin{gather*}
y_{0,0}=-\frac{2}{3},\qquad y_{0,1}= {\alpha}.
\end{gather*}
\item For each $j \in \{0,1,2,3\}$, there is a one-parameter family $\mathcal{F}_{A,j}$ of tronqu{\'e}e solutions of~\eqref{P4} in~$S^{(j)}_R$, where
\begin{gather*}
y(x) = l(x)+ x h(w), \qquad w=\frac{x^2}{\sqrt{3}{\rm i}}.
\end{gather*}
If $j$ is even, then $h(w)$ has the representations
\begin{gather}\label{P41FA02}
h(w) =
\begin{cases}
\displaystyle h^{+}(w) + \sum\limits_{k=1}^{\infty}C_{+}^k {\rm e}^{-kw} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right],\\
\displaystyle h^{-}(w) + \sum\limits_{k=1}^{\infty}C_{-}^k {\rm e}^{-kw} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left[-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
If $j$ is odd, then $h(w)$ has the representations
\begin{gather}\label{P41FA13}
h(w) =
\begin{cases}
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{kw} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right],\\
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{kw} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left[-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
\item Let $y(x)$ be a tronqu{\'e}e solution in $\mathcal{F}_{A,j}$. If $j$ is even, the region of analyticity contains the corresponding branch of $\big(\sqrt{3}{\rm i} S_{\rm an}(h)\big)^{1/2}$, which contains~$S^{(j)}_R$ for~$R$ large enough. If~$j$ is odd, the region of analyticity contains the corresponding branch of $\big(\sqrt{3}{\rm i} \hat{S}_{\rm an}(h)\big)^{1/2}$, which contains~$S^{(j)}_R$ for~$R$ large enough, and
\begin{gather*}
y(x) \sim \tilde{y}_0(x),\qquad x\in d, \qquad |x|\to\infty,
\end{gather*}
where $d$ is a ray whose infinite part is contained in the interior of $S^{(j)}_R$.
\end{enumerate}
\end{Theorem}
Assume that $y(x)$ is a tronqu{\'e}e solution to \eqref{P4} satisfying $y(x) \sim -\frac{2x}{3}$ and $h$ is defined by~\eqref{P41cov}. Then $h$ has representation~\eqref{P41FA02} or~\eqref{P41FA13}. From Theorem~\ref{sing} we obtain information about singularities of $h$. Let $\xi_+ = C_+ {\rm e}^{-w} w^{-1/2}$, $\xi_- = C_- {\rm e}^{-w} w^{-1/2}$, $F_m$ and $\hat{F}_m$ be as in Section~\ref{NFinfo}. Then $F_0$ and $\hat{F}_0$ satisfy the same equation, namely
\begin{gather}
{\xi}^{2}{\frac {{\rm d}^{2}}{{\rm d}{\xi}^{2}}}F_0 (\xi) +\xi {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) - {\frac {3 {\xi}^{2} \big( {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) \big) ^{2}}{2(3 F_0 (\xi) -2)}}\nonumber\\
\qquad{} + \frac{( 3 F_0 (\xi) -2) ^{3}}{24}+ \frac{( 3 F_0 (\xi) -2) ^{2}}{3}+\frac{3 F_0 ( \xi)-2}{2} =0.\label{P41eqF0}
\end{gather}
The solution satisfying \eqref{conF0} is
\begin{gather}\label{P41F0}
F_0(\xi) = \frac{4 \xi}{\xi^2+2\xi+4}
\end{gather}
with simple poles at $\xi^{(1)}_s =-1-\sqrt{3}{\rm i}$ and $\xi^{(2)}_s =-1+\sqrt{3}{\rm i}$. Hence the statements in Theorems~\ref{P32sing} and~\ref{P32tri} hold true for the function~$h$, with $S^{(j)}_R$ as defined in \eqref{P41SjR} and $6A$ in the formulas replaced by $\xi^{(i)}_s$, $i=1$ or $i=2$.
\subsubsection{Case 2}
\begin{gather*}
l(x) = -{2x} - \frac{\alpha}{x}.
\end{gather*}
Make the change of variables
\begin{gather}\label{P42cov}
x=\left( w\right)^{1/2}, \qquad y(x)=x h(w)+l(x).
\end{gather}
Then the equation \eqref{P4} is transformed into an equation for $h$ of the form \eqref{eqh} with
\begin{gather*}
\beta_1 =\alpha+\frac{1}{2}, \qquad \beta_2 = -\alpha+\frac{1}{2}.
\end{gather*}
Let the notations be the same as in Section~\ref{NFinfo}. In view of the transformation~\eqref{P42cov}, we denote
\begin{gather}
S^{(0)} :=\left\{x\colon \arg(x) \in \left(-\frac{\pi}{4},\frac{\pi}{4}\right)\right\}, \qquad S^{(1)}:=\left\{x\colon \arg(x) \in \left(\frac{\pi}{4},\frac{3\pi}{4} \right)\right\},\nonumber\\
S^{(2)} :=\left\{x\colon \arg(x) \in \left(\frac{3\pi}{4},\frac{5\pi}{4}\right)\right\}, \qquad S^{(3)}:=\left\{x\colon \arg(x) \in \left(\frac{5\pi}{4},\frac{7\pi}{4}\right)\right\}.\label{P42SjR}
\end{gather}
For $j \in\{ 0,2\}$, $S^{(j)}$ is mapped under the transformation~\eqref{P42cov} bijectively to the right half $w$-plane~${S}_1$; for $j \in\{ 1,3\}$, $S^{(j)}$ is mapped bijectively to the left half $w$-plane ${S}_2$.
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a formal power series solution of \eqref{P4} of the form
\begin{gather*}
\tilde{y}_0(x) = x \sum_{k=0}^{\infty} \frac{y_{0,k}}{x^{2k}},
\end{gather*}
where
\begin{gather*}
y_{0,0}=-2,\qquad y_{0,1}= -{\alpha}.
\end{gather*}
\item For each $j \in \{0,1,2,3\}$, there is a one-parameter family $\mathcal{F}_{A,j}$ of tronqu{\'e}e solutions of~\eqref{P4} in $S^{(j)}$, where
\begin{gather*}
y(x) = l(x)+ x h(w), \qquad w=x^2.
\end{gather*}
If $j$ is even, then $h(w)$ has the representations
\begin{gather}\label{P42FA02}
h(w) =
\begin{cases}
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}C_{+}^k {\rm e}^{-kw} w^{kM_1} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}C_{-}^k {\rm e}^{-kw} w^{kM_1} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
If $j$ is odd, then $h(w)$ has the representations
\begin{gather}\label{P42FA13}
h(w) = \begin{cases}
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{kw} (-w)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-w), &\displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{kw} (-w)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}
\end{gather}
\item Let $y(x)$ be a tronqu{\'e}e solution in $\mathcal{F}_{A,j}$. If $j$ is even, then the region of analyticity contains the corresponding branch of $(S_{\rm an}(h))^{1/2}$. If $j$ is odd, then the region of analyticity contains the corresponding branch of $\big(\hat{S}_{\rm an}(h)\big)^{1/2}$, and
\begin{gather*}
y(x) \sim \tilde{y}_0(x),\qquad\ x\in d \subset S^{(j)}, \qquad |x|\to\infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
Assume that $y(x)$ is a tronqu{\'e}e solution to \eqref{P4} satisfying $y(x) \sim -2 x$ and $h$ is defined by~\eqref{P42cov}. Then $h$ has representation~\eqref{P42FA02} or~\eqref{P42FA13}. From Theorem~\ref{sing} we obtain information about singularities of $h$. Let $F_m$ and $\hat{F}_m$ be as in Section~\ref{NFinfo}. Then $F_0$ and $\hat{F}_0$ satisfy the same equation, namely
\begin{gather}
{\xi}^{2}{\frac {{\rm d}^{2}}{{\rm d}{\xi}^{2}}}F_0 (\xi) +\xi {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) - {\frac {{\xi}^{2} \big( {
\frac {{\rm d}}{{\rm d}\xi}}F_0 (\xi) \big) ^{2}}{2(F_0 ( \xi) -2)}}-\frac{3 ( F_0 (\xi) -2 ) ^{3}}{8}\nonumber\\
\qquad{} -( F_0 (\xi) -2) ^{2}-\frac{ F_0 (\xi)-2}{2}=0.\label{P42eqF0}
\end{gather}
The solution satisfying \eqref{conF0} is
\begin{gather}\label{P42F0}
F_0(\xi) = \frac{2 \xi}{\xi+2}
\end{gather}
with a simple pole at $\xi_s =-2 $.
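Since the only singularity of \eqref{P42F0} is $\xi_s=-2$, the terms $\pm\ln(-2)$ appearing below are simply $\pm\ln(\xi_s)$ from \eqref{wpn}, with a fixed choice of the branch of the logarithm; as before, one checks that $F_0(0)=0$ and $F'_0(0)=4/(0+2)^2=1$, as required by \eqref{conF0}.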
\begin{Theorem} \label{P42sing}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item If $j\in\{0,2\}$, then $h$ has representation \eqref{P42FA02} for a unique pair of constants $(C_+,C_-)$. If $C_{+} \neq 0$, then the singular points of~$h$, $w^{+}_n$, near the boundary $\{w\colon \arg w ={\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
w^{+}_{n} = 2n\pi {\rm i} -(\alpha+1/2){\ln(2n\pi {\rm i})} +\ln(C_+)-\ln(-2)+o(1), \qquad n \to \infty.
\end{gather*}
If $C_{-} \neq 0$, then the singular points of $h$, $w^{-}_n$, near the boundary $\{w\colon \arg w =-{\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
w^{-}_{n} = -2n\pi {\rm i} - (\alpha+1/2 ){\ln(-2n\pi {\rm i})}+\ln(C_-)-\ln(-2)+o(1),\qquad n \to \infty.
\end{gather*}
\item If $j\in\{1,3\}$, then $h$ has representation~\eqref{P42FA13} for a unique pair of constants $\big(\hat{C}_+,\hat{C}_-\big)$. If $\hat{C}_{+} \neq 0$, then the singular points of~$h$, $\tilde{w}^{+}_n$, near the boundary~$\{\tilde{w}\colon \arg \tilde{w} =-{\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
\tilde{w}^+_n = -2n\pi {\rm i} +(-\alpha+1/2){\ln(2n\pi {\rm i})}-\ln\big(\hat{C}_+\big)+\ln(-2)+o(1), \qquad n \to \infty.
\end{gather*}
If $\hat{C}_{-} \neq 0$, then the singular points of~$h$, $\tilde{w}^{-}_n$, near the boundary $\{\tilde{w}\colon \arg \tilde{w} ={\pi}/{2}\}$ of the sector of analyticity are given asymptotically by
\begin{gather*}
\tilde{w}^{-}_{n} = 2n\pi {\rm i} +(-\alpha+1/2){\ln(-2n\pi {\rm i})}-\ln\big(\hat{C}_-\big)+\ln(-2)+o(1), \qquad n \to \infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
Denote
\begin{gather}
T^{(0)}_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[-\frac{\pi}{4}+\delta,\frac{3\pi}{4}-\delta\right] \right\},\nonumber\\
T^{(1)}_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[\frac{\pi}{4}+\delta,\frac{5\pi}{4}-\delta\right] \right\},\nonumber\\
T^{(2)}_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[\frac{3\pi}{4}+\delta,\frac{7\pi}{4}-\delta\right] \right\},\nonumber\\
T^{(3)}_{\delta,R}:=\left\{w\colon |w| > R,\, \arg (w) \in \left[-\frac{3\pi}{4}+\delta,\frac{\pi}{4}-\delta\right] \right\}.\label{P42Tj}
\end{gather}
\begin{Theorem}\label{P42tri}\quad{}
\begin{enumerate}[$(i)$]\itemsep=0pt
\item Let $j\in\{0,2\}$. For each $\delta>0$ there exists $R$ large enough such that we have a tritronqu{\'e}e solution $y^+_{j}$ analytic in $T^{(j)}_{\delta,R}$ given by
\begin{gather*}
y^+_{j}(x) = -2x-\frac{\alpha}{x} +h^{+}\big(x^2\big).
\end{gather*}
\item Let $j\in\{1,3\}$. For each $\delta>0$ there exists $R$ large enough such that we have a tritronqu{\'e}e solution $y^-_{j}$ analytic in $T^{(j)}_{\delta,R}$ given by
\begin{gather*}
y^-_{j}(x) = -2x-\frac{\alpha}{x} +h^{-}\big(x^2\big).
\end{gather*}
\end{enumerate}
Let $C^t_{j}$, $1\leq j\leq 4$ be as in \eqref{htritrep}. Then
\begin{enumerate}\itemsep=0pt
\item[$(iii)$] The singular points of $h^+(w)$, $w^{\pm}_{1,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
w^-_{1,n} = -2 n \pi {\rm i} -(\alpha+1/2){\ln(-2n\pi {\rm i})}+\ln ({C}^{t}_{1}) - \ln(-2)+o(1), \\
w^+_{1,n} = -2 n \pi {\rm i} +(-\alpha+1/2){ \ln(2n\pi {\rm i})}-\ln ({C}^{t}_{4}) +\ln(-2)+o(1).
\end{gather*}
\item[$(iv)$] The singular points of $h^-(w)$, $w^{\pm}_{2,n}$, near the boundary of the sector of analyticity are given asymptotically by
\begin{gather*}
w^-_{2,n} = 2 n \pi {\rm i} +(-\alpha+1/2){ \ln(-2n\pi {\rm i})}-\ln ({C}^{t}_{3}) + \ln(-2)+o(1), \\
w^+_{2,n} = 2 n \pi {\rm i} - (\alpha+1/2){\ln(2n\pi {\rm i})}+\ln ({C}^{t}_{2}) - \ln(-2)+o(1).
\end{gather*}
\end{enumerate}
\end{Theorem}
\subsubsection{Case 3}
\begin{gather*}
l(x) = \frac{A}{ x}+ \frac{\alpha A+\beta }{2x^3}, \qquad A^2 = -\beta/2.
\end{gather*}
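This value of $A$ again comes from dominant balance (a heuristic check, assuming \eqref{P4} is taken in the standard form $y''=\frac{(y')^2}{2y}+\frac{3}{2}y^3+4xy^2+2\big(x^2-\alpha\big)y+\frac{\beta}{y}$): for $y\sim A/x$ the two largest terms are $2x^2y\sim 2Ax$ and $\beta/y\sim (\beta/A)x$, all remaining terms being $O\big(x^{-1}\big)$, so that
\begin{gather*}
2A+\frac{\beta}{A}=0\ \Longleftrightarrow\ A^2=-\frac{\beta}{2}.
\end{gather*}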
Make the change of variables
\begin{gather}\label{P43cov}
x=\left( w\right)^{1/2}, \qquad y(x)=x^{-1} h(w)+l(x).
\end{gather}
Then the equation \eqref{P4} is transformed into an equation for $h$ of the form \eqref{eqh} with
\begin{gather*}
\beta_1 =-\frac{\alpha}{2}+\frac{3A}{2},\qquad \beta_2 = \frac{\alpha}{2}-\frac{3A}{2}.
\end{gather*}
Let the notations be as in Section~\ref{NFinfo}, $S^{(j)}$ be as in \eqref{P42SjR} and $T^{(j)}_{\delta,R}$ be as in~\eqref{P42Tj}.
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item There is a formal power series solution of \eqref{P4} of the form
\begin{gather*}
\tilde{y}_0(x) = \frac{1}{x} \sum_{k=0}^{\infty} \frac{y_{0,k}}{x^{2k}},
\end{gather*}
where
\begin{gather*}
y_{0,0}=A,\qquad y_{0,1}= \frac{\alpha A+\beta}{2}.
\end{gather*}
\item For each $j \in \{0,1,2,3\}$, there is a one-parameter family $\mathcal{F}_{A,j}$ of tronqu{\'e}e solutions of~\eqref{P4} in~$S^{(j)}$, where
\begin{gather*}
y(x) = l(x)+ x^{-1} h(w), \qquad w=x^2.
\end{gather*}
If $j$ is even, then $h(w)$ has the representations
\begin{gather*}
h(w) =
\begin{cases}
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}C_{+}^k {\rm e}^{-kw} w^{kM_1} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}C_{-}^k {\rm e}^{-kw} w^{kM_1} \mathcal{L}_{\phi}{H}_k(w), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}
\end{gather*}
If $j$ is odd, then $h(w)$ has the representations
\begin{gather*}
h(w) =
\begin{cases}
\displaystyle h^{-}(w) + \sum_{k=1}^{\infty}\hat{C}_{+}^k {\rm e}^{kw} (-w)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left(0,\frac{\pi}{2}\right),\\
\displaystyle h^{+}(w) + \sum_{k=1}^{\infty}\hat{C}_{-}^k {\rm e}^{kw} (-w)^{kM_2} \mathcal{L}_{\phi}{\hat{H}}_k(-w), & \displaystyle -\phi \in \left(-\frac{\pi}{2},0\right).
\end{cases}
\end{gather*}
\item Let $y(x)$ be a tronqu{\'e}e solution in $\mathcal{F}_{A,j}$. If $j$ is even, then the region of analyticity contains the corresponding branch of $(S_{\rm an}(h))^{1/2}$. If $j$ is odd, then the region of analyticity contains the corresponding branch of $\big(\hat{S}_{\rm an}(h)\big)^{1/2}$, and
\begin{gather*}
y(x) \sim \tilde{y}_0(x),\qquad x\in d \subset S^{(j)}, \qquad |x|\to\infty.
\end{gather*}
\end{enumerate}
\end{Theorem}
\begin{Theorem}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item Let $j\in\{0,2\}$. For each $\delta>0$ there exists $R$ large enough such that we have a tritronqu{\'e}e solution $y^+_{j}$ analytic in $T^{(j)}_{\delta,R}$ given by
\begin{gather*}
y^+_{j}(x) = \frac{A}{ x}+ \frac{\alpha A+\beta }{2x^3} +h^{+}\big(x^2\big).
\end{gather*}
\item Let $j\in\{1,3\}$. For each $\delta>0$ there exists $R$ large enough such that we have a tritronqu{\'e}e solution $y^-_{j}$ analytic in $T^{(j)}_{\delta,R}$ given by
\begin{gather*}
y^-_{j}(x) = \frac{A}{ x}+ \frac{\alpha A+\beta }{2x^3} +h^{-}\big(x^2\big).
\end{gather*}
\end{enumerate}
\end{Theorem}
\begin{Note}In this case, the corresponding $F_0$ and $\hat{F}_0$ turn out to be $F_0(\xi)=\hat{F}_0(\xi)=\xi$, which yields no singularities for $h$ at this order of the expansion. However, this does not imply that $h$ has no poles; this case requires further investigation.
\end{Note}
\section{Proofs and further results}
\subsection{Proof of Proposition~\ref{NFformal}}
Let $h$ and $\mathbf{u}$ be as defined in Section~\ref{intro}. We have a system of differential equations \eqref{NF} for $\mathbf{u}$. It is known (see \cite{OCIMRN,OCDuke, OCInv}) that it admits transseries solutions (i.e., formal exponential power series solutions) of the form
\begin{gather}\label{NFsys}
\tilde{\mathbf{u}}(w) = \tilde{\mathbf{u}}_0(w) +\sum_{k=1}^{\infty} C^k {\rm e}^{-kw}w^{-\beta_1k}\tilde{\mathbf{u}}_{k}(w),
\end{gather}
where $\tilde{\mathbf{u}}_0(w)$ and $\tilde{\mathbf{u}}_{k}(w)$ are formal power series in $w^{-1}$, namely
\begin{gather*}
\tilde{\mathbf{u}}_{k}(w) = \sum_{r=0}^{\infty}\frac{{\mathbf{u}}_{k,r}}{w^r},\qquad k \geq 1,\qquad
\tilde{\mathbf{u}}_{0}(w) = \sum_{r=2}^{\infty}\frac{{\mathbf{u}}_{0,r}}{w^r}.
\end{gather*}
Also, $\tilde{\mathbf{u}}_{0}(w)$ is the unique power series solution of~\eqref{NF}. The coefficients in the series $\tilde{\mathbf{u}}_{k}$ can be determined by substitution of the formal exponential power series $\tilde{\mathbf{u}}(w)$ into~\eqref{NF} and identification of each coefficient of~${\rm e}^{-kw}$. Proposition~\ref{NFformal} is then obtained through~\eqref{htou}. Furthermore,
\begin{gather}\label{serieshtou}
\tilde{h}_0(w) = \mathbf{r}_1 \cdot \tilde{\mathbf{u}}_{0}(w),\qquad
\tilde{s}_k(w) =\mathbf{r}_1 \cdot \tilde{\mathbf{u}}_{k}(w),\qquad \mathbf{r}_1 = \begin{bmatrix}
1-\dfrac{\beta_1}{2w}&1+\dfrac{\beta_2}{2w}\end{bmatrix}.
\end{gather}
\subsection{Proof of Theorem~\ref{NFactual}}
Let $d = {\rm e}^{{\rm i}\theta}\mathbb{R}^+$ with $\cos \theta>0$, and let $\mathbf{u}$ be a solution to~\eqref{NF} on $d$ for $w$ large enough, satisfying
\begin{gather*}
\mathbf{u}(w) \to \mathbf{0},\qquad w\in d,\qquad |w| \to \infty.
\end{gather*}
Theorem 3 in \cite{OCDuke}, Theorem 16, Lemma 17 and Theorem 19 in~\cite{OCInv} imply the following results:
\begin{Proposition}\label{sysu}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item For any $d'= {\rm e}^{{\rm i}\theta'}\mathbb{R}^+$ where $\cos \theta'>0$, the solution $\mathbf{u}(w)$ is analytic on $d'$ for $w$ large enough and $\mathbf{u} \sim \tilde{\mathbf{u}}_0(w)$ on~$d'$.
\item Given $\phi \in \big({-}\frac{\pi}{2},0\big) \cup\big(0, \frac{\pi}{2}\big)$, there exists a unique constant~$C(\phi)$ such that $\mathbf{u}$ has the following representation:
\begin{gather}\label{solnsys}
\mathbf{u}(w) = \mathcal{L}_{\phi} \mathbf{U}_0(w) + \sum_{k=1}^{\infty} (C(\phi) )^k {\rm e}^{-kw} w^{kM_1} \mathcal{L}_{\phi}\mathbf{U}_k(w),
\end{gather}
where
\begin{gather}\label{solnsys1}
\mathbf{U}_0 = \mathcal{B} \tilde{\mathbf{u}}_{0},\qquad
\mathbf{U}_k = \mathcal{B}\big(w^{-k\beta_1-kM_1}\tilde{\mathbf{u}}_{k}\big),\qquad k=1,2,\dots,
\end{gather}
where for each $k\geq 1$, $\mathbf{U}_k$ is analytic in the Riemann surface of $\mathbb{C}\backslash\left(\mathbb{Z}^+\cup \mathbb{Z}^-\right)$, and the branch cut for $\mathbf{U}_k$ is chosen to be $(-\infty,0]$. The function $C(\phi)$ is constant on $\big({-}\frac{\pi}{2},0\big)$ and also constant on $\big(0, \frac{\pi}{2}\big)$.
\item Let $\epsilon$ be small. There exist $\delta$, $R>0$ such that $\mathbf{u}(w)$ is analytic on
\begin{gather*}
S_{an,\epsilon}(\mathbf{u}(w)) = S^+_{\epsilon}\cup S^-_{\epsilon},
\end{gather*}
where $S^{\pm}$ is as defined in~\eqref{Spm}.
\end{enumerate}
\end{Proposition}
We now return to the proof of Theorem~\ref{NFactual}. Assume that $h(w)$ is a solution of~\eqref{eqh} on $d = {\rm e}^{{\rm i}\phi} \mathbb{R}^+$ with $\cos \phi>0$ for $|w|>w_0$, where $w_0>0$ is large enough. Without loss of generality we may assume that $w_0>\frac{\sqrt{|\beta_1 \beta_2|}}{2}$. Thus the vector function $\mathbf{u}(w)$ defined by
\begin{gather}\label{utoh}
\mathbf{u}(w) =\begin{bmatrix}
1-\dfrac{\beta_1}{2w}&1+\dfrac{\beta_2}{2w}
\\
-1-\dfrac{\beta_1}{2w}&1-\dfrac{\beta_2}{2w}
\end{bmatrix}^{-1}\begin{bmatrix}
h(w)\\
h'(w)
\end{bmatrix}
\end{gather}
is a solution of the differential system \eqref{NF}, and $h(w) = \mathbf{r}_1 \cdot \mathbf{u}(w)$.
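We note in passing that the matrix in \eqref{utoh} is indeed invertible for such $w$ (this is the reason for the lower bound imposed on $w_0$): its determinant is
\begin{gather*}
\Big(1-\frac{\beta_1}{2w}\Big)\Big(1-\frac{\beta_2}{2w}\Big)+\Big(1+\frac{\beta_1}{2w}\Big)\Big(1+\frac{\beta_2}{2w}\Big)=2+\frac{\beta_1\beta_2}{2w^2},
\end{gather*}
which can only vanish when $|w|=\frac{\sqrt{|\beta_1\beta_2|}}{2}$.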
Next we use the basic properties (see Lemmas \ref{L51} and~\ref{L52}) of the operators $\mathcal{B}$ and $\mathcal{L}_{\phi}$ and obtain the following
\begin{gather}
\mathcal{L}_{\phi}\mathcal{B} \big(\mathbf{r}_1\cdot \mathbf{\tilde{u}}_0\big) = \mathbf{r}_1\cdot \mathcal{L}_{\phi}\mathcal{B} \big( \mathbf{\tilde{u}}_0\big) =\mathbf{r}_1\cdot \mathcal{L}_{\phi} \mathbf{U}_0,\nonumber \\
\mathcal{L}_{\phi}\mathcal{B} \big(w^{-k\beta_1-kM_1}\mathbf{r}_1\cdot \mathbf{\tilde{u}}_k\big) = \mathbf{r}_1\cdot \mathcal{L}_{\phi}\mathcal{B} \big( w^{-k\beta_1-kM_1}\mathbf{\tilde{u}}_k\big)=\mathbf{r}_1\cdot \mathcal{L}_{\phi} \mathbf{U}_k.\label{r1pull}
\end{gather}
By Proposition~\ref{sysu}(i), given a ray $d'$ in the right half $w$-plane, $h(w) = \mathbf{r}_1 \cdot \mathbf{u}(w)$ is analytic on~$d'$ for $|w|$ large enough and is asymptotic to $\tilde{h}_0(w) = \mathbf{r}_1 \cdot \mathbf{\tilde{u}}_0(w)$ on $d'$. From the representations~\eqref{solnsys},~\eqref{solnsys1} in Proposition~\ref{sysu}(ii) of~$\mathbf{u}(w)$ and~\eqref{r1pull} we obtain the representations for $h(w)=\mathbf{r}_1 \cdot \mathbf{u}(w)$ as in~\eqref{truesolnup} and~\eqref{truesolndown}. For $|w|$ large enough, $h(w)$ is analytic where~$\mathbf{u}(w)$ is analytic, hence Proposition~\ref{sysu}(iii) implies Theorem~\ref{NFactual}(ii). Thus Theorem~\ref{NFactual} is proved.
\subsection{Proof of Theorem~\ref{sing}}
Let $h(w)$ and $\mathbf{u}(w)$ be as in the proof of Theorem~\ref{NFactual}. $h(w)$ has representations~\eqref{truesolnup} and~\eqref{truesolndown}. We will consider the case $C_+ \neq 0$ and prove~(i) and~(ii). The statements~(iii) and~(iv) about the case $C_- \neq 0$ follow by symmetry.
By Theorem~1 in~\cite{OCInv}, there exists $\delta_1>0$ such that for $|\xi_+|<\delta_1$ the power series
\begin{gather*}
\mathbf{G}_m(\xi_+) = \sum_{k=0}^{\infty}\xi_+^k\mathbf{u}_{k,m},\qquad m=0,1,2,\dots
\end{gather*}
converges, where $\mathbf{G}_0$ satisfies
\begin{gather*}
\mathbf{G}_0(0) =\mathbf{0},\qquad \mathbf{G}'_0(0) =\mathbf{e}_1.
\end{gather*}
Furthermore,
\begin{gather}\label{usim}
\mathbf{u}(w) \sim \sum_{m=0}^{\infty}w^{-m}\mathbf{G}_m\left(\xi_+(w)\right),\qquad |w| \to \infty
\end{gather}
holds uniformly in
\begin{gather*}
\mathcal{S}_{\delta_1} = \left\{w\colon \arg(w) \in \left(-\frac{\pi}{2}+\delta,\frac{\pi}{2}+\delta \right), \, |\xi_+(w)|<\delta_1\right\}.
\end{gather*}
By Theorem 2 in \cite{OCInv}, for $R$ large enough and $\delta$, $\epsilon$ small enough, $\mathbf{u}(w)$ is analytic in $\mathcal{D}_w^+$ (see~\eqref{Dwp}). Also, the asymptotic representation~\eqref{usim} holds in~$\mathcal{D}_w^+$. Moreover, if~$\mathbf{G}_0$ has an isolated singularity at $\xi_s$, then $\mathbf{u}(w)$ is singular at a distance at most $o(1)$ of $w_n^+$ given in~\eqref{wpn}, as $w_n^+ \to \infty$.
Since $h(w) = \mathbf{r}_1 \cdot \mathbf{u}(w)$, Theorem~\ref{sing}(i) follows from the results cited.
Assume $|w|> {\sqrt{|\beta_1 \beta_2|}}\slash{2}$. Both \eqref{htou} and \eqref{utoh} hold. While \eqref{htou} implies that $h$ is analytic at least where $\mathbf{u}$ is analytic, \eqref{utoh} implies that $h$ is singular where $\mathbf{u}$ is singular. Thus the asymptotic position of the singularities (poles) of $h(w)$ is the same as that of $\mathbf{u}(w)$, which is given in equation~\eqref{wpn}. This proves Theorem~\ref{sing}(ii).
\subsection{Proof of Corollary~\ref{NFtri}}
Let the notations be the same as in Section~\ref{NFtrit}. First we point out some properties of $\mathbf{U}_0(p)$. See also~\cite{OCIMRN}.
We apply the formal inverse Laplace transform to the system \eqref{NF}. To be precise, assume that the analytic function $\mathbf{g}(w,\mathbf{u})$ has the following Taylor expansion at $(\infty,\mathbf{0})$:
\begin{gather*}
\mathbf{g}(w,\mathbf{u}) = \sum_{m\geq0;|\mathbf{l}|\geq 0} \mathbf{g}_{m,\mathbf{l}}w^{-m} \mathbf{u}^{\mathbf{l}}, \qquad \big\vert w^{-1}\big\vert < \xi_0, \qquad \vert \mathbf{u} \vert < \xi_0.
\end{gather*}
Note that by assumption $\mathbf{g}_{m,\mathbf{l}}=\mathbf{0}$ if~$|\mathbf{l}|\leq 1$ and $m\leq 1$. Denote $\mathbf{U} = \mathcal{L}^{-1}\mathbf{u}$. Then the formal inverse Laplace transform of the differential system~\eqref{NF} is the system of convolution equations
\begin{gather}\label{ILTNF}
-p \mathbf{U}(p) = -\left[\hat{\Lambda} \mathbf{U}(p)+\hat{B} \int_{0}^{p}\mathbf{U}(s)\mathrm{d}s\right] +\mathcal{N}\left(\mathbf{U}\right)(p),
\end{gather}
where
\begin{gather*}
\mathcal{N}\left(\mathbf{U}\right)(p) = \sum_{m=2}^{\infty} \frac{\mathbf{g}_{m,\mathbf{0}}}{(m-1)!}p^{m-1}+\sum_{|\mathbf{l}|\geq 2} \mathbf{g}_{0,\mathbf{l}} \mathbf{U}^{\ast \mathbf{l}}+\sum_{|\mathbf{l}|\geq 1}\left(\sum_{m=1}^{\infty}\frac{\mathbf{g}_{m,\mathbf{l}}}{(m-1)!}p^{m-1}\right) \ast \mathbf{U}^{\ast \mathbf{l}}.
\end{gather*}
Let $\mathbf{v}(p) = \left(v_1(p),\dots,v_n(p)\right)$ be an $n$-dimensional complex vector function, $f(p)$~be a locally integrable complex function and $\mathbf{l} = (l_1,\dots,l_n)$ be an $n$-dimensional multi-index. Then
\begin{gather*}
\mathbf{v}^{\ast \mathbf{l}} := v_1^{\ast l_1} \ast v_2^{\ast l_2} \ast\cdots \ast v_n^{\ast l_n},\qquad
\left(\mathbf{v}\ast f\right) (p) \in \mathbb{C}^{n}, \qquad (\mathbf{v} \ast f )_{j} = v_j \ast f, \qquad j=1,\dots,n.
\end{gather*}
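For instance, in the two-dimensional case relevant here, $\mathbf{v}^{\ast(2,1)}=v_1\ast v_1\ast v_2$ and $(\mathbf{v}\ast f)(p)=\big((v_1\ast f)(p),(v_2\ast f)(p)\big)$; the terms $\mathbf{U}^{\ast\mathbf{l}}$ in $\mathcal{N}(\mathbf{U})$ are built in the same way from the components of $\mathbf{U}$.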
We gather the following facts about $\mathbf{U}_0$.
\begin{Proposition}\label{U0}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item Let ${K} \subset \mathcal{O}$ be a closed set such that for every point $p \in K$, the line segment connecting the origin and $p$ is contained in~$K$. Then $\mathbf{U}_0$ is the unique solution to \eqref{ILTNF} in $K$.
\item $\mathbf{U}_0 = \mathcal{B} \tilde{\mathbf{u}}_0$. $\mathbf{U}_0$ is analytic in the domain $\mathcal{O}= \mathbb{C} \backslash \left[(-\infty,-1] \cup [1,\infty)\right]$, and is Laplace transformable along any ray ${\rm e}^{{\rm i}\phi} \mathbb{R}^+$ contained in $\mathcal{O}$. $\mathcal{L}_{\phi} \mathbf{U}_0$ is a solution of~\eqref{NF} for each~$\phi$ such that $|\cos(\phi)|<1$.
\item
Let $K$ be as in $(i)$. There exists $b_K>0$ large enough such that
\begin{gather*}
\sup_{p \in K} \int_{[0,p]} \vert \mathbf{U}_0(s) \vert {\rm e}^{-b_K |s|} |\mathrm{d}s| < \infty.
\end{gather*}
\end{enumerate}
\end{Proposition}
Proposition~\ref{U0}(i) and (ii) come from Proposition 6 in~\cite{OCIMRN}. Although (iii) is not stated expli\-cit\-ly in~\cite{OCIMRN}, it can be easily obtained by the same approach used to prove Proposition~6. Let $K$ be as in~(i). Consider the Banach space
\begin{gather*}
L_{\rm ray}(K):=\left\{\mathbf{f}\colon \mathbf{f} \ \text{is locally integrable on }[0,p]\text{ for each }p\in K \right\}
\end{gather*}
equipped with the norm $\| \cdot \|_{b,K}$ defined by
\begin{gather*}
\|\mathbf{f}\|_{b,K}:=\sup_{p\in K} \int_{[0,p]}\| \mathbf{f}(s) \| {\rm e}^{-b |s|} |\mathrm{d}s|,
\end{gather*}
where $\| \mathbf{f}(s) \| = \max\{|f_1(s)|,|f_2(s)|\}$. We can show that for $b$ large enough, the operator
\begin{gather*}
\mathcal{N}_1:=\mathbf{U}(p) \mapsto \big(\hat{\Lambda} - p I\big)^{-1} \left(-\hat{B} \int_{0}^{p}\mathbf{U}(s) \mathrm{d}s+\mathcal{N} (\mathbf{U} )(p)\right)
\end{gather*}
is contractive in the closed ball $\mathcal{S}:= \{\mathbf{f}\in L_{\rm ray}(K)\colon \|\mathbf{f}\|_{b,K} \leq \delta\}$ of $L_{\rm ray}(K)$ if $\delta$ is small enough; the key estimates are $\|f\ast g\|_{b,K}\leq \|f\|_{b,K}\,\|g\|_{b,K}$ for scalar functions and $\big\|p^{m-1}/(m-1)!\big\|_{b,K}\leq b^{-m}$, which make the polynomial and convolution terms of $\mathcal{N}$ small in norm for $b$ large. By the contraction mapping theorem there is a unique solution of $\mathcal{N}_1 \mathbf{U} =\mathbf{U}$ in $\mathcal{S}$, which must be~$\mathbf{U}_0$ by the uniqueness in~(i). Using integration by parts and~(iii) we have the following:
\begin{Corollary}\label{LU0facts}\quad
\begin{enumerate}[$(i)$]\itemsep=0pt
\item If $\phi \in (0,\pi)$ or $\phi \in (-\pi,0)$, $\mathcal{L}_{\phi} \mathbf{U}_0(w)$ is analytic $($at least$)$ in the region
\begin{gather*}
\mathcal{A}_{\phi}:= \{w\colon |w|\cos (\phi+\arg(w) )>b \},
\end{gather*}
where $b=b_K$ is as in Proposition~{\rm \ref{U0}}$(iii)$ with $K={\rm e}^{{\rm i}\phi}\mathbb{R}^+$.
\item If $0<\phi_1< \phi_2 < \pi$ or $0<-\phi_1< -\phi_2 < \pi$, then $\mathcal{L}_{\phi_1}\mathbf{U}_0$ and $\mathcal{L}_{\phi_2}\mathbf{U}_0$ are analytic continuations of each other.
\end{enumerate}
\end{Corollary}
Since $H_0 = \mathcal{B} \tilde{h}_0$, by \eqref{serieshtou} and Lemma~\ref{L51} we have
\begin{gather}
H_0(p) = (\mathcal{B} u_{0,1} )(p)+ (\mathcal{B} u_{0,2} )(p) -\frac{\beta_1}{2} [1\ast (\mathcal{B} u_{0,1} ) ](p)+\frac{\beta_2}{2} [1\ast (\mathcal{B} u_{0,2} ) ](p)\nonumber\\
\hphantom{H_0(p)}{} =U_{0,1}(p)+U_{0,2}(p) -\frac{\beta_1}{2} (1\ast U_{0,1} )(p)+\frac{\beta_2}{2} (1\ast U_{0,2} )(p),\label{H0U0}
\end{gather}
where ${u}_{0,i}$, $i = 1,2$, is the $i$-th component of the vector function $\mathbf{u}_0$ and ${U}_{0,i}$, $i = 1,2$, is the $i$-th component of $\mathbf{U}_0$. It is clear from \eqref{H0U0} that Proposition~\ref{U0}(iii) and Corollary~\ref{LU0facts} hold with~$\mathbf{U}_0$ replaced by~$H_0$.
Directly from $\hat{H}_0(p) = -H_0(-p)$ and Corollary~\ref{LU0facts}(ii) with $\mathbf{U}_0$ replaced by $H_0$ we obtain Corollary~\ref{NFtri}(i). Moreover, both~$h^+$ and~$h^-$ are special cases of tronqu{\'e}e solutions, thus Theorems~\ref{NFactual} and~\ref{sing} apply. $h^+$ is analytic at least on $S_{\rm an}(h^+)\cup\big({-}S_{\rm an}\big(\hat{h}^+\big)\big) $ and $h^-$ is analytic at least on $S_{\rm an}(h^-)\cup\big({-}S_{\rm an}\big(\hat{h}^-\big)\big) $. We also obtain the asymptotic position of singularities of the tritronqu{\'e}e solutions as in Corollary~\ref{NFtri}(ii).
\subsection{Proof of the results in Section~\ref{BS3}}
Once the equations \eqref{P31}, \eqref{P32} and \eqref{P4} have been normalized to the form \eqref{eqh}, the results in Section~\ref{BS3} follow from the results in Section~\ref{NFinfo}. Here we present the details of finding the solutions of \eqref{P31eqF0}, \eqref{P32eqF0}, \eqref{P41eqF0} and \eqref{P42eqF0} satisfying \eqref{conF0}.
\subsubsection{Solving (\ref{P31eqF0})}
Make the substitution $Q(s) = A+F_0({\rm e}^s)$; then \eqref{P31eqF0} transforms into
\begin{gather*}
{\frac {\mathrm{d}^{2}}{\mathrm{d}{s}^{2}}}Q (s) -{\frac { \big( {\frac {\mathrm{d}
}{\mathrm{d}s}}Q (s) \big) ^{2}}{Q (s) }}- {\frac { (Q(s)) ^{3}}{4{A}^{2}}}+ {\frac {1}{4{A}^{2}Q (s) }} =0.
\end{gather*}
Multiplying both sides by $1/Q(s)$ we obtain
\begin{gather*}
\frac{\mathrm{d}}{\mathrm{d}s}\left(\frac{Q'(s)}{Q(s)}\right) = \frac{1}{4A^2}\left(Q^2(s)-\frac{1}{Q^2(s)}\right).
\end{gather*}
Multiplying both sides by ${2Q'(s)}/{Q(s)}$ and integrating with respect to $s$ we have
\begin{gather}
\left(\frac{Q'(s)}{Q(s)}\right)^2 = \frac{1}{4A^2}\left(Q^2(s)+\frac{1}{Q^2(s)}+C_1\right),\qquad \text{i.e.},\nonumber\\
(Q'(s))^2 = \frac{1}{4A^2}\big(Q^4(s)+C_1Q^2(s)+1\big).\label{P31eqQ}
\end{gather}
By a linear transformation $Q(s) = \tilde{Q}(s)/(2A)$, \eqref{P31eqQ} is reduced to the Jacobi normal form which is solved by Jacobi elliptic functions unless $C_1 \in \{-2,2\}$. Since $Q(s) = A+F_0({\rm e}^s)$ and $F_0(0)=0$ (see~\eqref{conF0}), the solution we look for cannot be an elliptic function. Moreover, as $\operatorname{Re}(s) \to -\infty$, $Q(s) \to A$ implies $Q'(s) \to 0$, so~\eqref{P31eqQ} needs to be of the form
\begin{gather*}
(Q'(s))^2 = \frac{1}{4A^2}\big(Q^2(s)-A^2\big)^2.
\end{gather*}
The solution satisfying $Q(s) \to A$ as $\operatorname{Re}(s) \to -\infty$ is
\begin{gather*}
Q(s) = A \cdot\frac{C_2-{\rm e}^s}{C_2+{\rm e}^s}, \qquad C_2\neq 0.
\end{gather*}
Thus the solution to \eqref{P31eqF0} is
\begin{gather*}
F_0(\xi) =-\frac{2A\xi}{C_2+\xi}.
\end{gather*}
Imposing $F'_0(0)=1$ forces $C_2=-2A$, hence the solution to~\eqref{P31eqF0} satisfying~\eqref{conF0} is
\begin{gather*}
F_0(\xi) =\frac{2A\xi}{2A-\xi}.
\end{gather*}
\subsubsection{Solving (\ref{P32eqF0})}
Make the substitution $Q(s) = A+F_0({\rm e}^s)$; then \eqref{P32eqF0} transforms into
\begin{gather*}
{\frac {{\rm d}^{2}}{{\rm d}{s}^{2}}}Q (s) -{\frac {\big( {\frac {\rm d}{{\rm d}s}}Q (s) \big) ^{2}}{Q
(s) }}- {\frac { (Q(s))^{2}}{3A}}+ {\frac {1}{3AQ (s) }}=0.
\end{gather*}
Multiplying both sides by $1/Q(s)$ we obtain
\begin{gather*}
\frac{\mathrm{d}}{\mathrm{d}s}\left(\frac{Q'(s)}{Q(s)}\right) = \frac{1}{3A}\left(Q(s)-\frac{1}{Q^2(s)}\right).
\end{gather*}
Multiplying both sides by ${2Q'(s)}/{Q(s)}$ and integrating with respect to $s$ we have
\begin{gather}
\left(\frac{Q'(s)}{Q(s)}\right)^2 = \frac{1}{3A}\left(2Q(s)+\frac{1}{Q^2(s)}+C_1\right),\qquad\text{i.e.},\nonumber\\
(Q'(s))^2 = \frac{1}{3A}\big(2Q^3(s)+C_1Q^2(s)+1\big).\label{P32eqQ}
\end{gather}
Notice that if the equation $2x^3+C_1x^2+1 =0$ has three distinct roots then~\eqref{P32eqQ} is known to have Weierstrass $\wp$-functions as general solutions, in which case the corresponding $F_0(\xi) = Q(\ln \xi)-A$ fails to satisfy the condition~\eqref{conF0}. Hence $C_1$ must be such that the equation $2x^3+C_1x^2+1 =0$ has a multiple root. Denote the multiple root by~$r_1$. Then
\begin{gather*}
2x^3+C_1x^2+1 =2(x-r_1)^2(x-r_2).
\end{gather*}
Then we obtain
\begin{gather*}
r_1 = -2 r_2, \qquad r^3_1=1, \qquad C_1 = -3 r_1.
\end{gather*}
Since $Q(s) \to A$ as $\operatorname{Re}(s) \to -\infty$, $Q'(s) \to 0$. Hence $r_1 = A$ or $r_2 = A$. We know from the normalization (see~\eqref{P32lx}) that $A^3=1$; if $r_2=A$ then $r_1=-2A$ and $r_1^3=-8\neq 1$, a contradiction. Thus $r_1=A$, $r_2 = -A/2$, and $C_1 = -3A$. Hence~\eqref{P32eqQ} is of the form
\begin{gather}\label{P32eqQ2}
(Q'(s))^2 = \frac{2}{3A} (Q(s)-A )^2\left(Q(s)+\frac{A}{2}\right).
\end{gather}
The solution to \eqref{P32eqQ2} is
\begin{gather*}
Q(s) = -\frac{A}{2}+\frac{3A}{2}\left(\frac{C_2-{\rm e}^s}{C_2+{\rm e}^s}\right)^2.
\end{gather*}
In terms of $\xi={\rm e}^s$ this gives $F_0(\xi)=Q(\ln\xi)-A=-6AC_2\xi/(C_2+\xi)^2$, and imposing $F'_0(0)=1$ forces $C_2=-6A$. Hence the solution of \eqref{P32eqF0} satisfying \eqref{conF0} is
\begin{gather*}
F_0(\xi) = \frac{36A^2\xi}{(6A-\xi)^2}.
\end{gather*}
\subsubsection{Solving (\ref{P41eqF0})}
Make the substitution $Q(s) = F_0({\rm e}^s)-2/3$; then \eqref{P41eqF0} transforms into
\begin{gather*}
{\frac {{\rm d}^{2}}{{\rm d}{s}^{2}}}Q (s) -{\frac {\big( {\frac {\rm d}{{\rm d}s}}Q (s) \big)^2}{2 Q
(s) }}+ {\frac { 9 Q^3 (s) }{8}}+3 Q^2(s)+ {\frac {3 Q(s)}{2}}=0.
\end{gather*}
Multiplying both sides by $2Q'(s)/Q(s)$, we have
\begin{gather*}
\frac{{\rm d}}{{\rm d}{s}}\left[\frac{(Q'(s))^2}{Q(s)}\right] = \frac{{\rm d}}{{\rm d}{s}}\left(-\frac{3}{4}Q^3(s)-3 Q^2(s)-3 Q(s)\right).
\end{gather*}
Integrating with respect to $s$ we have
\begin{gather}\label{P41eqQ}
(Q'(s))^2 = -\frac{3}{4}Q^4(s)-3 Q^3(s)-3 Q^2(s) +C_1 Q(s).
\end{gather}
Letting $\operatorname{Re}(s) \to -\infty$ we have $Q(s) \to -2/3$ and $Q'(s) \to 0$. Thus $C_1 = -8/9$ and the equation~\eqref{P41eqQ} is of the form
\begin{gather*}
(Q'(s))^2 = -\frac{3}{4} Q(s)\left(Q(s)+\frac{8}{3}\right)\left(Q(s)+\frac{2}{3}\right)^2.
\end{gather*}
This is a separable differential equation with general solutions
\begin{gather*}
Q(s) = -\frac{2}{3} \frac{{\rm e}^{2s}-C_2^2-2{\rm i}C_2 {\rm e}^{s}}{{\rm e}^{2s}-C_2^2+{\rm i}C_2{\rm e}^{s}}.
\end{gather*}
In terms of $\xi={\rm e}^s$ this gives $F_0(\xi)=Q(\ln\xi)+2/3=2{\rm i}C_2\xi/\big(\xi^2-C_2^2+{\rm i}C_2\xi\big)$, and imposing $F'_0(0)=1$ forces $C_2=-2{\rm i}$. Hence the solution of \eqref{P41eqF0} satisfying \eqref{conF0} is
\begin{gather*}
F_0(\xi) = \frac{4 \xi}{\xi^2+2\xi+4}.
\end{gather*}
\subsubsection{Solving (\ref{P42eqF0})}
Make the substitution $Q(s) = F_0({\rm e}^s)-2$; then \eqref{P42eqF0} transforms into
\begin{gather*}
{\frac {{\rm d}^{2}}{{\rm d}{s}^{2}}}Q (s) -{\frac {\big( {\frac {\rm d}{{\rm d}s}}Q (s) \big)^2}{2 Q(s) }}- {\frac { 3 Q^3 (s) }{8}}- Q^2(s)- {\frac {Q(s)}{2}}=0.
\end{gather*}
Multiplying both sides by $2Q'(s)/Q(s)$ we have
\begin{gather*}
\frac{{\rm d}}{{\rm d}{s}}\left[\frac{(Q'(s))^2}{Q(s)}\right] = \frac{{\rm d}}{{\rm d}{s}}\left(\frac{1}{4}Q^3(s)+ Q^2(s)+ Q(s)\right).
\end{gather*}
Integrating with respect to $s$ we have
\begin{gather}\label{P41eqQn}
(Q'(s))^2 = \frac{1}{4}Q^4(s)+ Q^3(s)+ Q^2(s) +C_1 Q(s).
\end{gather}
Letting $\operatorname{Re}(s) \to -\infty$ we have $Q(s) \to -2$ and $Q'(s) \to 0$. Thus $C_1 = 0$ and the equation~\eqref{P41eqQn} is of the form
\begin{gather*}
(Q'(s))^2 = \frac{1}{4} Q^2(s)(Q(s)+2)^2.
\end{gather*}
This differential equation has general solutions
\begin{gather*}
Q(s) = -\frac{2 C_2}{C_2+{\rm e}^{s}}.
\end{gather*}
Hence the solution of \eqref{P42eqF0} satisfying \eqref{conF0} is
\begin{gather*}
F_0(\xi) = \frac{2 \xi}{\xi+2}.
\end{gather*}
\appendix
\section{Appendix}\label{Appx}
Recall that the Borel transform of a formal series
\begin{gather*}
\tilde{f}(w) = \sum_{n=0}^{\infty}a_n w^{-r-n},\qquad \operatorname{Re}(r)>0,
\end{gather*}
where the series $\sum\limits_{n=0}^{\infty}a_n x^{n}$ has a positive radius of convergence, is defined to be the formal power series
\begin{gather*}
\big(\mathcal{B} \tilde{f} \big)(p) := \sum_{n=0}^{\infty}\frac{a_n p^{n+r-1}}{\Gamma(n+r)}.
\end{gather*}
\begin{Lemma}\label{L51}Assume that we have two formal series $\tilde{f}$ and $\tilde{g}$,
\begin{gather*}
\tilde{f}(w) = \sum_{n=0}^{\infty}a_n w^{-r-n},\qquad \operatorname{Re}(r)>0,\\
\tilde{g}(w) = \sum_{n=0}^{\infty}b_n w^{-s-n},\qquad \operatorname{Re}(s)>0,
\end{gather*}
where both series $\sum\limits_{n=0}^{\infty}a_n x^{n}$ and $\sum\limits_{n=0}^{\infty}b_n x^{n}$ have positive radii of convergence. Then
\begin{gather*}
\mathcal{B}\big(\tilde{f} \tilde{g}\big)(p) = \big(\mathcal{B}\tilde{f} \ast \mathcal{B}\tilde{g}\big) (p)= p^{r+s-1} \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_k b_{n-k}\right)\frac{p^n}{\Gamma(n+r+s)},
\end{gather*}
where
\begin{gather*}
\big(\mathcal{B}\tilde{f} \ast \mathcal{B}\tilde{g}\big)(p) := \int_{0}^{p}\big(\mathcal{B}\tilde{f}\big)(t)\big(\mathcal{B}\tilde{g}\big)(p-t) \mathrm{d}t.
\end{gather*}
\end{Lemma}
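As a small illustration of the Lemma (not needed in the sequel), consider the toy case $\tilde f(w)=w^{-1}+w^{-2}$ ($r=1$, $a_0=a_1=1$, $a_n=0$ otherwise) and $\tilde g(w)=w^{-1}$ ($s=1$, $b_0=1$, $b_n=0$ otherwise). The following sketch, assuming Python's \texttt{sympy}, checks the convolution identity directly.
\begin{verbatim}
import sympy as sp

p, t = sp.symbols('p t', positive=True)
Bf = 1 + t      # (B f~)(t) = a_0 t^0/Gamma(1) + a_1 t^1/Gamma(2)
Bg = 1          # (B g~)(p - t) = b_0 (p-t)^0/Gamma(1)
conv = sp.integrate(Bf*Bg, (t, 0, p))         # (B f~ * B g~)(p)

# f~ g~ = w^{-2} + w^{-3}, so B(f~ g~)(p) = p/Gamma(2) + p^2/Gamma(3).
direct = p/sp.gamma(2) + p**2/sp.gamma(3)
print(sp.simplify(conv - direct))             # prints 0
\end{verbatim}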
Recall that the Laplace transform $\mathcal{L}_{\phi}$ is defined as follows:
\begin{gather*}
f\longmapsto \int_{0}^{\infty {\rm e}^{{\rm i}\phi}} f(p){\rm e}^{-xp}{\rm d}p,
\end{gather*}
where $\phi \in \mathbb{R}$.
\begin{Lemma}\label{L52}
Assume that the function $f$ is integrable over the ray ${\rm e}^{{\rm i}\phi} \mathbb{R}^+$, namely
\begin{gather*}
\int_{0}^{\infty {\rm e}^{{\rm i}\phi} } \vert f(p) \vert \vert \mathrm{d}p \vert < \infty.
\end{gather*}
Then for $\operatorname{Re}(w {\rm e}^{{\rm i}\phi})>0 $,
\begin{gather*}
\mathcal{L}_{\phi}(1\ast f) (w) = \frac{1}{w} \mathcal{L}_{\phi}(f) (w).
\end{gather*}
\end{Lemma}
\LastPageEnding
\end{document}
\begin{document}
\begin{frontmatter}
\title{\textbf{Ms.FPOP}: An Exact and Fast Segmentation Algorithm
With a Multiscale Penalty}
\cortext[corresp_author]{Corresponding author}
\author[evryPS,ips2]{Liehrmann Arnaud\corref{corresp_author}}
\ead{[email protected]}
\author[evryPS,ips2]{Rigaill Guillem}
\address[evryPS]{Universit\'e Paris-Saclay, CNRS, Univ Evry, Laboratoire de Math\'ematiques et Mod\'elisation d'Evry, 91037, Evry-Courcouronnes, France.}
\address[ips2]{Universit\'e Paris-Saclay, CNRS, INRAE, Universit\'e Evry, Institute of Plant Sciences Paris-Saclay (IPS2), 91190, Gif sur Yvette, France.
}
\begin{abstract}
Given a time series in $\mathbb{R}^n$ with a piecewise constant mean and independent noises, we propose an exact dynamic programming algorithm to minimize a least square criterion with a multiscale penalty promoting well-spread changepoints. Such a penalty has been proposed in Verzelen \textit{et al.}~(2020), and it achieves optimal rates for changepoint detection and changepoint localization.
Our proposed algorithm, named \textbf{Ms.FPOP}{}, extends functional pruning ideas of Rigaill~(2015) and Maidstone \textit{et al.}~(2017) to multiscale penalties. For large signals, $n \geq 10^5$, with relatively few real changepoints, \textbf{Ms.FPOP}{} is typically quasi-linear and an order of magnitude faster than \textbf{PELT}{}. We propose an efficient C++ implementation of \textbf{Ms.FPOP}{}, interfaced with R, that allows segmenting a profile of up to $n=10^6$ in a matter of seconds.
Finally, we illustrate on simple simulations that for large enough profiles ($n\geq 10^4$) \textbf{Ms.FPOP}{} using the multiscale penalty of Verzelen \textit{et al.}~(2020) is typically more powerful than \textbf{FPOP}{} using the classical BIC penalty of Yao (1989).
\end{abstract}
\begin{keyword}
changepoint detection, multiscale penalty, maximum likelihood inference, discrete optimization, dynamic programming, functional pruning
\end{keyword}
\end{frontmatter}
\section{Introduction}
A National Research Council report \cite{national2013frontiers} identifies changepoint detection as one of the ``inferential giants'' in massive data analysis.
Detecting changepoints, whether a posteriori or online, is important in areas as diverse as
bioinformatics \cite{olshen2004circular,Picard2005}, econometrics and finance \cite{bai2003computation,Thies2018}, climate \cite{Reeves2007}, autonomous
driving \cite{galceran2017multipolicy}, computer vision \cite{ranganathan2012pliss} and neuroscience \cite{jewell2020fast}.
The most common and prototypical changepoint detection problem is that of detecting changes in the mean of a univariate Gaussian signal:
\begin{equation}\label{eq:piecewisemodel}
y_t = f_t + \varepsilon_t, \quad \text{for } t=1, \ldots, n,
\end{equation}
where $f_t$ is a deterministic piecewise constant with changepoints whose number $D$ and locations, $0 < \tau_1 < \ldots < \tau_D < n$, are unknown, and $\varepsilon_t$ are independent and follow a Gaussian distribution of mean 0 and variance 1. A large number of approaches have been proposed to solve this problem (amongst many others \cite{Yao,emilie,harchaoui2010multiple,frick,fryzlewicz2020detecting}, see \cite{aminikhanghahi2017survey,Truong2020} for a review).
Recently, \cite{verzelen2020optimal} characterized optimal rates for changepoint detection and changepoint localization and proposed
a least-squares estimator with a multiscale penalty achieving these optimal rates.
This multiscale penalty depends on minus the log-length of the segments, which promotes well-spread changepoints. It can be written as:
\begin{equation}\label{eq:criteria_verzelenetal}
\sum_{d=1}^{D+1} \gamma + \beta \log(n) - \beta \log(\tau_d - \tau_{d-1}),
\end{equation}
where $\gamma=q L$ and $\beta=2L$ with $q$ positive and $L>1$, and with the convention that $\tau_0=0$ and $\tau_{D+1}=n$.
Up to a multiplicative constant this penalty is always smaller than the BIC penalty ($2\log(n)$) \cite{Yao}.
Intuitively, it favors balanced segmentations, as illustrated by the short numerical sketch below:
\begin{itemize}
\item the penalty of a segment of fixed size $r$ increases with $n$: $\beta \log(n/r)$;
\item while the penalty of a segment whose size is proportional to $n$ (say $\alpha n$) does not depend on $n$: $\beta \log(1/\alpha)$.
\end{itemize}
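The following short numerical sketch (illustrative only; the default constants $\gamma=9$ and $\beta=2.25$ are the values calibrated later in the paper) computes the penalty \eqref{eq:criteria_verzelenetal} for a balanced and for a clustered segmentation of the same signal.
\begin{verbatim}
import math

def multiscale_penalty(changepoints, n, gamma=9.0, beta=2.25):
    # changepoints: sorted positions in 1..n-1 (with tau_0 = 0, tau_{D+1} = n)
    tau = [0] + list(changepoints) + [n]
    return sum(gamma + beta*math.log(n) - beta*math.log(tau[d] - tau[d-1])
               for d in range(1, len(tau)))

n = 10000
print(multiscale_penalty([2500, 5000, 7500], n))  # balanced: about 48.5
print(multiscale_penalty([1, 2, 3], n))           # clustered: about 98.2
\end{verbatim}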
\paragraph{Contribution}
In this paper, we propose a dynamic programming algorithm, named \textbf{Ms.FPOP}{}, optimizing a slightly more general penalty where $\log(\tau_d - \tau_{d-1})$ is replaced by $g(\tau_d - \tau_{d-1})$ for an arbitrary function $g$ satisfying assumption A\ref{assumpt1}.
\paragraph{Existing works}
\textbf{Ms.FPOP}{} extends functional pruning techniques as in \textbf{PDPA}{} or \textbf{FPOP}{} \cite{PDPA,FPOP} to the case of multiscale penalties. A key condition for \textbf{FPOP}{} and \textbf{PDPA}{} is that the cost function is point additive (condition C1 in \cite{FPOP}). As we will explain in more detail later, this condition is not satisfied for the multiscale penalty \eqref{eq:criteria_verzelenetal}, making the extension non-trivial. The key idea behind functional pruning is to store the set of parameter values for which a particular change is optimal. For a classical penalty (i.e. with a point additive cost function) this set gets smaller with every new datapoint. This is not the case with the multiscale penalty, making the update more complex. A key insight of \textbf{Ms.FPOP}{} is to store a slightly larger set that is easy to update.
Importantly, it is possible to optimize the multiscale criterion of \cite{verzelen2020optimal} using inequality-based pruning as in \textbf{PELT}{}. We call this strategy \textbf{Ms.PELT}{}. However, for large signals with relatively few true changepoints it is our experience that \textbf{Ms.PELT}{} is quadratic while \textbf{Ms.FPOP}{} is quasi-linear. For example, it can be seen in Figure \ref{fig:simple_runtime}.A that it takes about 193 seconds for \textbf{Ms.PELT}{} to process a signal of size $n=128000$ without any changepoint. In the same amount of time \textbf{Ms.FPOP}{} can process signals of size larger than $n=4\times10^6.$ In the presence of true changepoints (one every thousand datapoints), \textbf{Ms.PELT}{} is, as expected, much faster but still slower than \textbf{Ms.FPOP}{} (see Figure \ref{fig:simple_runtime}.B).
\begin{figure}
\caption{\textbf{Runtimes of \textbf{PELT}
\label{fig:simple_runtime}
\end{figure}
\paragraph{Outline}
In the rest of the paper we will (1) introduce our notations, (2) review the key idea behind \textbf{FPOP}, (3) explain how and under which conditions we extend \textbf{FPOP}\ to multiscale penalty, (4) study the performance of \textbf{Ms.FPOP}\ relative to \textbf{FPOP}\ for various signals and (5) conclude with a discussion.
\subsection{Multiple Changepoint Model}
In this section we describe our changepoint notations and the multiscale criteria we want to optimize.
\paragraph{Segmentations and set of segmentations} For any $n$ in $\mathbb{N}$ we write $1:n=\{1, \cdots, n\}$. For any integer $D \geq 0$ we define a segmentation with $D$ changes of $1:n$ as an ordered subset of $1:(n-1)$ of size $D$, with $\tau_{j}$ the location of the $j^{th}$ change for $j$ in $1, \ldots, D$.
It will be useful to also consider the dummy indices $\tau_{0} = 0$ and $\tau_{D + 1} = n$. We call $\mathcal{M}^{D}_{1: n}$ the set of all such segmentations with $D$ changes and $\mathcal{M}_{1: n}$ the union of all these sets: $\bigcup_{0 \leq D \leq n-1} \mathcal{M}^{D}_{1: n}.$ For any segmentation $\tau$ in $\mathcal{M}_{1:n}$ we denote by $|\tau|$ the number of segments of $\tau$. In other words, if $\tau$ is in $\mathcal{M}^{D}_{1:n}$ then $|\tau| = D+1.$ Enumerating the elements of $\mathcal{M}_{1:n}$ we get:
\begin{displaymath}
| \mathcal{M}_{1:n}| = \sum_{D = 0}^{n-1} | \mathcal{M}^{D}_{1:n} | = \sum_{D = 0}^{n-1} \binom{n-1}{D} = 2^{n-1}
\end{displaymath}
\paragraph{Multiscale penalized likelihood} Under the piecewise constant model \ref{eq:piecewisemodel} a classical method to estimate the position and the number of changes is to optimize a penalized likelihood criterion.
It is common to use a penalty that is linear in the number of changepoints \cite{Yao,PELT,FPOP} and optimization wise the goal is to compute:
\begin{align*}
{\Large{\boldsymbol{\tau}}}^{*}_n & = &\argmin_{\tau \in \mathcal{M}_{1:n}}\ \left\{ \displaystyle\sum_{j=1}^{|\tau|} \min_{\mu} \left( \displaystyle\sum_{i=\tau_{j-1}+1}^{\tau_{j}} (y_{i} -\mu)^2 \right) + \alpha|\tau| \right\}, \\
F_n & = & \min_{\tau \in \mathcal{M}_{1:n}}\ \left\{ \displaystyle\sum_{j=1}^{|\tau|} \min_{\mu} \left( \displaystyle\sum_{i=\tau_{j-1}+1}^{\tau_{j}} (y_{i} -\mu)^2 \right) + \alpha|\tau| \right\},
\end{align*}
\begin{equation}\label{mainl2pen}
\end{equation}
where $\alpha$ is a constant to be calibrated (\textit{e.g.} $\alpha = 2 \log(n)$).
Here we consider a more general penalty that depends on the length of the segments:
\begin{align*}
{\Large{\boldsymbol{\tau}}}^{*}_n & = &\argmin_{\tau\in \mathcal{M}_{1:n}}\ \left\{ \displaystyle\sum_{j=1}^{|\tau|} \min_{\mu} \left( \displaystyle\sum_{i=\tau_{j-1}+1}^{\tau_{j}} (y_{i} -\mu)^2 -\beta g(\tau_{j}-\tau_{j-1})\right) + \alpha |\tau|\right\}, \\
F_n & = & \min_{\tau\in \mathcal{M}_{1:n}}\ \left\{ \displaystyle\sum_{j=1}^{|\tau|} \min_{\mu} \left( \displaystyle\sum_{i=\tau_{j-1}+1}^{\tau_{j}} (y_{i} -\mu)^2 -\beta g(\tau_{j}-\tau_{j-1})\right) + \alpha |\tau|\right\},
\end{align*}
\begin{equation}\label{mainfpoppsd}
\end{equation}
where $g$ is a function satisfying assumption A\ref{assumpt1} described in the next paragraph, and $\alpha$ and $\beta$ are constants to be calibrated.
We recover the multiscale criterion proposed in \cite{verzelen2020optimal} by taking $g=\log$, $\alpha = \gamma + \beta g(n)$, and $\gamma$ a constant that remains to be chosen.
We recover the classical penalty of \cite{Yao} taking $g=0$, $\alpha = 2\log(n).$
\begin{assumption}\label{assumpt1}
$h(t, s, s') = g (t-s') - g(t-s)$, for $s < s'$, is a non-decreasing function of $t$ and $\lim_{t \to \infty} h(t, s, s') = 0$; therefore $h(t, s, s') \leq 0$.
\end{assumption}
This assumption will be useful later to bound the difference between the cost of two changes $s$ and $s'$. Intuitively, assumption A\ref{assumpt1} states that $g$ favors older changes but that asymptotically (large enough $t$ relative to $s$ and $s'$) this advantage for older changes vanishes. Importantly, this assumption is true for the multiscale penalty proposed in \cite{verzelen2020optimal} as $\beta > 0$ and $g (t-s') - g(t-s)= \log(1 - (s'-s)/(t-s))$ is increasing with $t$.
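A quick numerical illustration of this monotonicity for $g=\log$ (a throwaway sketch):
\begin{verbatim}
import math
s, sprime = 10, 15
for t in (16, 20, 50, 1000, 10**6):
    print(t, math.log(1 - (sprime - s)/(t - s)))
# values are negative, increase with t, and tend to 0
\end{verbatim}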
\subsection{Optimization with Dynamic Programming}
In this section we explain how one can optimize equation \eqref{mainfpoppsd} using dynamic programming ideas with (i) inequality based pruning and (ii) functional pruning.
\paragraph{Dynamic programming with inequality based pruning} The penalised cost of a segmentation $\tau$ inside the $\arg\min$ of equation \eqref{mainfpoppsd} can be written as a sum over all segments of $\tau$ :
\begin{displaymath}
\sum_{j=1}^{|\tau|} \min_{\mu} \left( \displaystyle\sum_{i=\tau_{j-1}+1}^{\tau_{j}} (y_{i} -\mu)^2 -\beta g(\tau_{j}-\tau_{j-1}) + \alpha \right),
\end{displaymath}
therefore the optimisation can be done iteratively using the Optimal Partitioning (\textbf{OP}) algorithm proposed in \cite{OP}, building on dynamic programming ideas developed in \cite{law} and \cite{bell}. It is possible to speed up the calculations using the \textbf{PELT}\ algorithm \cite{PELT} because equation (4) of \cite{PELT} is true at least for the constant $K = -\beta (\max_{1 \leq \ell \leq n}\{g(\ell)\} - 2 \min_{1 \leq \ell \leq n} \{ g(\ell) \})$ (see \ref{app:phi_pelt}).
If $g$ is concave (as in the penalty \eqref{eq:criteria_verzelenetal} proposed in \cite{verzelen2020optimal}), $K$ can be chosen much closer to zero: $K = - \beta (g(2) - 2g(1))$ (see \ref{app:phi_pelt}), or adaptively to the last segment length $\ell$: $K_{\ell} = \beta (g(\ell)+g(1)-g(\ell+1))$ (see \ref{phi_pelt_2}). Our implementation of \textbf{PELT}{} optimizing (\ref{mainfpoppsd}) with $g=\log$ and $K_{\ell}=-\beta\log(\frac{1}{\ell}+1)$ is called \textbf{Ms.PELT}. Note that $-\beta\log(2)\leq K_{\ell} < 0$.
As shown in Figure \ref{fig:simple_runtime}, if the number of real changepoints is not linear in $n$, for $g = \log$ and a positive $\beta$, \textbf{Ms.PELT}\ is quadratic. This makes the analysis of large profiles with $10^5$ or $10^6$ datapoints slow and impractical (\textit{e.g.} $>100$ seconds for a profile with $10^5$ datapoints and 1 changepoint, $>1$ hour for a profile with $10^6$ datapoints and 1 changepoint\footnote{Runtimes observed on an Intel Core i7-10810U CPU @ 1.10GHzx12 computer.}).
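For readers who prefer code to recursions, the following is a deliberately naive sketch of the optimal-partitioning recursion for criterion \eqref{mainfpoppsd}: quadratic in $n$, with no pruning of any kind. It only makes the recursion explicit; it is not the \textbf{Ms.PELT}{} or \textbf{Ms.FPOP}{} implementation, and its indexing conventions differ slightly from the rest of the paper.
\begin{verbatim}
import math

def op_multiscale(y, alpha, beta, g=math.log):
    """Naive O(n^2) optimal partitioning for the multiscale criterion."""
    n = len(y)
    S1 = [0.0]*(n + 1)  # prefix sums of y
    S2 = [0.0]*(n + 1)  # prefix sums of y^2
    for i, v in enumerate(y, start=1):
        S1[i] = S1[i-1] + v
        S2[i] = S2[i-1] + v*v

    def seg_cost(s, t):  # least-squares cost of y_{s+1..t} plus its penalty
        m = t - s
        sse = S2[t] - S2[s] - (S1[t] - S1[s])**2/m
        return sse - beta*g(m) + alpha

    F = [0.0]*(n + 1)
    last = [0]*(n + 1)
    for t in range(1, n + 1):
        F[t], last[t] = min((F[s] + seg_cost(s, t), s) for s in range(t))
    cps, t = [], n       # backtrack the changepoints
    while last[t] > 0:
        cps.append(last[t])
        t = last[t]
    return F[n], sorted(cps)
\end{verbatim}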
\paragraph{Dynamic programming with functional pruning} In the rest of the paper, we present a functional pruning algorithm (called \textbf{Ms.FPOP}), in the sense of the \textbf{PDPA}\ \cite{PDPA} or \textbf{FPOP}\ \cite{FPOP}, to solve (\ref{mainfpoppsd}), making it possible to optimize (\ref{mainfpoppsd}) in a matter of seconds even for $n=10^6$. As the cost of equation (\ref{mainfpoppsd}) is not point-additive, condition C1 of \cite{FPOP} is not satisfied, and maintaining the set of means for which a change is optimal is more complex. Our key idea is to maintain a slightly larger set that is easier to update.
\section{Functional Pruning}
\subsection{Functional Pruning Optimal Partioning (\textbf{FPOP}{})}
To better explain \textbf{Ms.FPOP}{} we first review some of the key elements of \textbf{FPOP}{} to optimize equation \eqref{mainl2pen}.
\textbf{FPOP}{} introduces for every change $s$ its best cost as function of the last parameter $\mu$ at time $t$, $\widetilde{f}_{t, s} (\mu)$.
Formally this is:
\begin{equation}\label{fpop_f}
\widetilde{f}_{t,s}(\mu) = F_{s} + \displaystyle\sum_{i=s+1}^{t} (y_{i}-\mu)^{2} + \alpha, \qquad \text{with} \qquad \widetilde{f}_{t,t}(\mu) = F_{t}+\alpha \qquad \text{and} \qquad F_{0}=-\alpha.
\end{equation}
$\widetilde{f}_{t,s}(\mu)$ is a second degree polynomial in $\mu$ defined by three coefficients : $a_2\mu^2 + a_1\mu + a_0$ with
$a_2 = t-s,$ $a_1 =-2\sum_{i=s+1}^{t}y_i$ and
$a_0 = F_{s}+\alpha+\displaystyle\sum_{i=s+1}^{t} y_{i}^2$.
The update of these coefficients is straightforward using the following formula:
\begin{equation}
\widetilde{f}_{t,s}(\mu) = \widetilde{f}_{t-1,s}(\mu) + (y_t-\mu)^2.
\end{equation}
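In code, this bookkeeping amounts to storing and updating three numbers per candidate change; the following is a minimal illustrative sketch (not the actual C++ implementation).
\begin{verbatim}
class QuadCost:
    """f~_{t,s}(mu) stored as a2*mu^2 + a1*mu + a0."""
    def __init__(self, F_s, alpha):
        # at t = s: f~_{s,s}(mu) = F_s + alpha, constant in mu
        self.a2, self.a1, self.a0 = 0.0, 0.0, F_s + alpha

    def add_point(self, y_t):
        # f~_{t,s}(mu) = f~_{t-1,s}(mu) + (y_t - mu)^2
        self.a2 += 1.0
        self.a1 += -2.0*y_t
        self.a0 += y_t*y_t

    def value(self, mu):
        return (self.a2*mu + self.a1)*mu + self.a0

    def minimum(self):
        # unconstrained minimum over mu (only valid once a2 > 0)
        return self.a0 - self.a1*self.a1/(4.0*self.a2)
\end{verbatim}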
At each time step $t$, \textbf{FPOP}{} updates the minimum of all $\widetilde{f}_{t, s} (\mu) $, denoted $\widetilde{F}_{t}(\mu) = \min_{s\leq t}\ \left\{ \widetilde{f}_{t,s}(\mu) \right\}.$ The key idea behind \textbf{FPOP}{} is that to compute and update $\widetilde{F}_{t}(\mu)$ one only needs to consider changes $s$ with a non-empty ``living-set'': $\mathcal{F}_{t}= \{ s \leq t | Z_{t, s}^{*} \neq \emptyset\}$ where the ``living-set'' of change $s$ is $Z_{t, s}^{*} = \{ \mu | \widetilde{f}_{t, s}(\mu) = \widetilde{F}_{t}(\mu)\}$. Given those definitions we have $\widetilde{F}_{t}(\mu) = \min_{s \in \mathcal{F}_{t}}\ \left\{ \widetilde{f}_{t,s}(\mu) \right\}$. In other words, $s$ is pruned as soon as its ``living-set'' is empty, which is justified because
\begin{equation}
\label{14}
Z_{t,s}^{*} \supset Z_{t+1,s}^{*} \qquad \text{and} \qquad Z_{t,s}^{*} = \emptyset \implies Z_{t+1,s}^{*}=\emptyset .
\end{equation}
Note that we can then retrieve $F_t$ by minimizing $\widetilde{F}_{t}(\mu)$ on $\mu$.
\subsection{\textbf{Ms.FPOP}\ : functional Pruning for a Multiscale Penalty}\label{new}
\textbf{Ms.FPOP}{} optimizes equation \eqref{mainfpoppsd}.
As for \textbf{FPOP}{} we introduce for every change $s$ its best cost as a function of the last parameter $\mu$ at time $t$, $\widetilde{f}_{t, s} (\mu)$. Formally this is :
\begin{equation}\label{new_f_cost}
\widetilde{f}_{t,s}(\mu) = F_{s} + \displaystyle\sum_{i=s+1}^{t} (y_{i}-\mu)^{2} + \alpha - \beta g\left(t-s\right),
\end{equation}
with $\widetilde{f}_{t,t}(\mu) = F_{t}+\alpha$ and $F_{0}=-\alpha$. As in \textbf{FPOP}{}, $\widetilde{f}_{t,s}(\mu)$ can be stored as a second degree polynomial in $\mu$. The update is also straightforward using the following formula:
\begin{equation}\label{mise_a_jour_cost_f_fpoppsd}
\widetilde{f}_{t,s}(\mu) = \widetilde{f}_{t-1,s}(\mu) + (y_{t}-\mu)^{2} + \beta g\left(t-1-s\right) - \beta g\left(t-s\right)
\end{equation}
Analogously to \textbf{FPOP}{} we can calculate $F_t$ by minimizing $\widetilde{f}_{t, s}$ both on $\mu$ and $s$. The main difference with \textbf{FPOP}{} is that the rule (\ref{14}) is no longer true for \textbf{Ms.FPOP}{} because $\widetilde{f}_{t, s}(\mu) - \widetilde{f}_{t, s'}(\mu)$ depends on $t$:
\begin{equation} \label{etude_de_fct}
\widetilde{f}_{t,s}(\mu) - \widetilde{f}_{t,s^{'}}(\mu) = F_{s}-F_{s^{'}} + \displaystyle\sum_{i=s+1}^{s^{'}} (y_{i}-\mu)^{2}
+\beta\underbrace{\big(g(t-s^{'}) - g(t-s)\big)}_{\substack{\text{\textit{a function varying}}\\ \text{\textit{with $t$, $s$ and $s'$}}}}.
Because of that, in the course of the algorithm we need to re-evaluate the set on which the candidate change $s$ is better than $s'$ at various $t$, $I_{t,s,s^{'}}$ with $s < s^{'}$:
\begin{equation}
I_{t,s,s^{'}} = \{ \mu\ |\ \widetilde{f}_{t,s}(\mu)\ \leq\ \widetilde{f}_{t,s^{'}}(\mu)\}.
\end{equation}
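Since both costs are quadratics in $\mu$ with leading coefficients $t-s$ and $t-s'$, their difference is a quadratic with positive leading coefficient $s'-s$, so $I_{t,s,s'}$ is an interval (possibly empty). A small sketch of this computation (illustrative only; in \textbf{Ms.FPOP}{} the $-\beta g(\cdot)$ terms only shift the constant coefficients):
\begin{verbatim}
import math

def interval_leq(q_s, q_sp):
    """I_{t,s,s'} for costs given as coefficient triples (a2, a1, a0), s < s'.
    Returns (lo, hi) = {mu : f~_{t,s}(mu) <= f~_{t,s'}(mu)}, or None if empty."""
    a = q_s[0] - q_sp[0]   # = (t - s) - (t - s') = s' - s > 0
    b = q_s[1] - q_sp[1]
    c = q_s[2] - q_sp[2]
    disc = b*b - 4.0*a*c
    if disc < 0:
        return None        # f~_{t,s} lies strictly above f~_{t,s'}
    r = math.sqrt(disc)
    return ((-b - r)/(2.0*a), (-b + r)/(2.0*a))
\end{verbatim}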
For arbitrary functions $g$ the set $I_{t,s,s^{'}}$ may vary drastically from one $t$ to the next. Using assumption A\ref{assumpt1} we can control those variations.
\subsubsection{Update of The Candidate Changes Living Set ($ Z_{t, s}$)}
Rather than evaluating the exact living set $Z_{t, s}^{*}$ of all changes, we are seeking to update a slightly larger set, $Z_{t, s}$, including $Z_{t, s}^{*}$ and such that if $Z_{t, s}$ is empty we can guarantee that $Z_{t+h,s}^{*}$ is also empty for all $h>0$. The possibility of defining such a $Z_{t,s}$ depends on the property of the function $g$.
Assuming A\ref{assumpt1}, we propose to update $Z_{t+1,s}$ as follows:
\begin{equation} \label{update_croissant}
Z_{t+1,s} = Z_{t,s}\ \cap \overbrace{(\ \bigcap_{s' \in \mathcal{A}_{t,s}} I_{t+1,s,s{'}})}^{\text{\textit{comparison with future changes}}} \backslash \overbrace{(\ \bigcup_{s'' \in \mathcal{B}_{s}} I_{\infty,s{''},s})}^{\text{\textit{comparison with past changes}}}\ ,
\end{equation}
where $\mathcal{A}_{t, s}$ is any subset of $\{s+1, ..., t\}$,
$\mathcal{B}_{s}$ is any subset of $\{1, ..., s-1\}$, and
$I_{\infty,s,s^{'}}$ corresponds to $I_{t,s,s^{'}}$ as $t \to \infty$ (which is properly defined under assumption A\ref{assumpt1}).
\paragraph{Pruning} Based on update \eqref{update_croissant} it should be clear that if $Z_{t,s}$ is empty so are all $Z_{t+h,s}, $ for $h>0.$ In the next lemma we show that $Z_{t,s}$ includes $Z^*_{t,s}.$ Therefore we further have that if $Z_{t,s}$ is empty so are all $Z^*_{t+h,s}, $ and change $s$ can be pruned.
\begin{lemma}
Taking $Z_{s,s} = \left]\min_i y_i, \max_i y_i\right[$, updating $Z_{t+1,s}$ using equation \eqref{update_croissant} and assuming A\ref{assumpt1}, we have
\begin{equation}\label{eq:lemmamain}
Z_{t,s}^{*} \subset Z_{t,s}\ ,
\end{equation}
and for an integer $h > 0$
\begin{equation}\label{eq:lemmacorro}
Z_{t+h,s}^{*} \subset Z_{t+1,s}\ .
\end{equation}
\end{lemma}
\begin{proof}
For any $t$, we will prove by induction that for any $t'$ in $\{s, \cdots, t\}$ we have $Z_{t,s}^{*} \subset Z_{t',s}.$
For $t'=s$ and for any $t$ larger or equal to $s$ we have (by definition of $Z_{s, s}$) that $Z_{t,s}^{*} \subset Z_{t',s} = Z_{s, s}.$
Now assume that for $t' < t$ we have $Z_{t,s}^{*} \subset Z_{t',s}.$
As $h$ is non-decreasing, for any $t'+1 \leq t$ we have the following two inclusions:
\begin{align}
I_{t, s, s^{'}} & \subset I_{t'+1, s, s^{'}}. \label{inclus2} \\
I_{\infty, s, s^{'}} & \subset I_{t'+1, s, s^{'}} \label{inclus1}
\end{align}
Therefore for $t'< t$ we have
\begin{eqnarray*} \label{h_croissant}
Z^{*}_{t,s} & = & \qquad \quad (\ \bigcap_{s < s^{'} \leq t} I_{t,s,s^{'}}) \backslash (\ \bigcup_{s^{''} < s} I_{t,s{''},s}) \qquad \quad \quad \text{ by definition of } Z^*_{t,s} \\
Z^{*}_{t,s} & \subset & Z_{t',s}\ \cap (\ \bigcap_{s < s^{'} \leq t} I_{t,s,s^{'}}) \backslash (\ \bigcup_{s^{''} < s} I_{t,s{''},s}) \qquad \quad \quad \text{by induction} \\
& \subset & Z_{t',s}\ \cap (\ \bigcap_{s < s^{'} \leq t} I_{t'+1,s,s^{'}}) \backslash (\ \bigcup_{s^{''} < s} I_{\infty,s^{''},s}) \quad \text{using equations \eqref{inclus2} and \eqref{inclus1}} \\
& \subset & Z_{t',s}\ \cap (\ \bigcap_{s^{'} \in \ \mathcal{A}_{t', s}} I_{t^{'}+1,s,s^{'}}) \backslash (\ \bigcup_{s^{''} \in \ \mathcal{B}_{s}} I_{\infty,s^{''},s}) \quad \text{by definition of } \mathcal{A}_{t', s} \text{ and } \mathcal{B}_{s}.\\
\end{eqnarray*}
Using equation \eqref{update_croissant} we thus get that $Z^{*}_{t,s} \subset Z_{t'+1, s}, $ proving the induction.
To recover equation \eqref{eq:lemmacorro} we notice from update \eqref{update_croissant} that $Z_{t+1, s} \subset Z_{t, s}$ and apply equation \eqref{eq:lemmamain}.
\end{proof}
\subsubsection{\textbf{Ms.FPOP}\ Algorithm, Choice of $\mathcal{A}_{t, s}$ and $\mathcal{B}_{s}$}\label{sec:subsampling}
The update rule \eqref{update_croissant} suggests that each candidate change $s$ should be compared to future changes $s'$ in $\mathcal{A}_{t,s}$ and to past changes $s''$ in $\mathcal{B}_{s}$. For past candidate changes $s''$ this comparison can be done once and for all by letting $t$ go to infinity ($I_{\infty,s'',s}$). For future candidate changes $s'$, on the contrary, it might be useful to update the interval $I_{t,s,s'}$.
Performing at each time step, for each $s$, a comparison with all $s'$ is time consuming. Intuitively, the complexity of each time step is in $\mathcal{O}(\text{number of candidate changes}^2).$ Ideally, for each $s$, one would like to make the minimum number of comparisons that would result in its pruning. In Algorithm \ref{algo1} we consider a generic \textit{sampling} function of $s'$ that returns $\mathcal{A}_{t,s}$ (see the Sampling Strategies paragraph in section \ref{sec:Rcpp}). \\[0.1cm]
{\footnotesize
\begin{algorithm}[H]
\SetAlgoLined
\KwIn{$Y=(y_{1}, ..., y_{n})$, $\alpha$, $\beta$, $g=\log(\cdot)$}
\KwOut{set of last best changes $cp_{n}$}
$n \gets |Y|$\;
$F_{0} \gets -\alpha$\;
$cp_{0} \gets \emptyset$\;
$R_{1} \gets \{0\}$\;
$D \gets [min(Y),\ max(Y)]$\;
$Z_{0,0} \gets D$\;
$\widetilde{f}_{0,0} \gets F_{0} + \alpha\ (=0)$\;
\For{$t \gets 1,...,n$}{
\For{$s \in R_{t}$}{
$\widetilde{f}_{t,s}(\mu) \gets \widetilde{f}_{t,s}(\mu) + (y_{t}-\mu)^{2}+\beta \times g(t-1-s) - \beta \times g(t-s)$\;
}
$F_{t} \gets \min_{s \in R_{t}}(\min_{\mu \in Z_{t,s}}(\widetilde{f}_{t,s}(\mu)))$\;
$s_{t} \gets \argmin_{s \in R_{t}}(\min_{\mu \in Z_{t,s}}(\widetilde{f}_{t,s}(\mu)))$\;
$cp_{t} \gets (cp_{s_{t}}, s_{t})$\;
$\widetilde{f}_{t,t} \gets F_{t} + \alpha$\;
$Z_{t,t} \gets D$\;
\For{$s \in R_{t}$}{
$Z_{t,t} \gets Z_{t,t}\ \backslash\ I_{\infty, s, t}$\;
$\mathcal{A}_{t,s} \gets sample(\{s^{'} \in \{R_{t} \cup \{t\}\}: s^{'} > s\})$\;
$Z_{t, s} \gets Z_{t,s} \cap (\bigcap_{s^{'} \in\ \mathcal{A}_{t,s}} I_{t,s,s^{'}})$\;
}
$R_{t+1} \gets \{s \in \{R_{t} \cup \{t\}\}: Z_{t,s} \neq \emptyset\}$\;
}
\caption{\textbf{Ms.FPOP}{} \label{algo1}}
\end{algorithm}}
\section{Rcpp Implementation of \textbf{Ms.FPOP}\ Algorithm}\label{sec:Rcpp}
\paragraph{\textbf{Ms.FPOP}\ R package} The dynamic programming and functional pruning procedures described in Algorithm \ref{algo1} are implemented in \textit{C++}. The input and output operations are interfaced with the R programming language thanks to the \textit{Rcpp} R package. The main function \verb|MsFPOP()| takes as input the sequence of observations, a vector of weights for these observations, and the parameters $\beta$ and $\alpha$ of the multiscale penalty. The function returns the set of optimal changepoints in the sense of (\ref{mainfpoppsd}). Analogously, we implemented a version of the \textbf{PELT}\ algorithm, \verb|MsPELT()|, that optimizes (\ref{mainfpoppsd}).
\paragraph{Sampling Strategies} To recover $\mathcal{A}_{t,s}$ we consider either an exhaustive sampling of all future changes $s'>s$ in $R_t$ or a uniform random subsampling of them without replacement. The main function parameter \verb|size| can be set by the user to specify for each $s$ the number of sampled $s'$. In the appendix we compare the runtime of different sampling strategies (see \ref{supp_speed_benchmark}).\\[0.1cm]
\section{Simulation Study}
\subsection{Calibration of Constants $\gamma$ and $\beta$ from The Multiscale Penalty}\label{calibration}
Paper \cite{verzelen2020optimal} does not recommend values for $\gamma$ and $\beta$ in their penalty \eqref{eq:criteria_verzelenetal}. As explained in detail below, we calibrated those values to control the percentage of falsely detecting at least one change in profiles simulated without any actual change.
\paragraph{No change simulation} We repeatedly simulate \textit{iid} Gaussian signals of mean 0, variance 1 and varying lengths $n$ ($n \in \{10^2, 10^3, 10^4, 10^5, 2.5\times10^5\}$). On these profiles we run \textbf{Ms.FPOP}\ for different $\gamma$ values (ten $\gamma$ values evenly spaced on the interval $[1,20]$) and different $\beta$ values ($\beta \in \{2, 2.25, 2.5, 2.75, 3\})$.
\paragraph{Percentage of false detection} We denote by $R_{>0}$ the proportion of replicates for which \textbf{Ms.FPOP}\ returns at least 1 changepoint. These changepoints are false positives. Our goal is to find a combination of $\beta$ and $\gamma$ such that
\begin{equation} \label{signi}
R_{>0} < 0.05\ (\text{significance level}) \quad.
\end{equation}
\paragraph{Empirical Results} In Figure \ref{fig:cal_phifpop_main} we observe that, by setting $\beta = 2.25$, a conservative range of $\gamma$ satisfying inequality (\ref{signi}) is $\gamma \in [7.5,10]$. Note that this interval satisfies inequality (\ref{signi}) for all tested $n$ and $\beta$ (see \ref{cal_phifpop_main_supp}).
\noindent Based on these results, in the following simulations we set $\gamma = 9$ and $\beta = 2.25$\footnote{This is equivalent to setting $L = 1.125$ and $q = 8$ in equations (31) and (32) of \cite{verzelen2020optimal}} for all methods optimizing (\ref{mainfpoppsd}) (\textbf{Ms.FPOP}, \textbf{Ms.PELT}). We set $\alpha=2\sigma^2\log(n)$ for all methods optimizing (\ref{mainl2pen}) (\textbf{FPOP}, \textbf{PELT}).
\begin{figure}
\caption{\textbf{Proportion of stationary Gaussian signal replicates on which \textbf{Ms.FPOP}
\label{fig:cal_phifpop_main}
\end{figure}
\subsection{Evaluation of \textbf{Ms.FPOP}: Speed Benchmark} \label{speed_bench}
\paragraph{Design of Simulations} We repeatedly simulate \textit{iid} Gaussian signals with $10^5$ datapoints. The profiles are affected by one or more changepoints in their mean ($D \in \{1, 5, 10, 15, 20, 25, 30, 45, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000\}$). The means of the segments alternate between 0 and 1, starting with 0. The variance of each segment is fixed at 1. On these profiles we run two methods optimizing the penalized likelihood defined in (\ref{mainl2pen}): \textbf{PELT}\ \cite{PELT} and \textbf{FPOP}\ \cite{FPOP}, as well as methods optimizing the multiscale penalized likelihood defined in (\ref{mainfpoppsd}): \textbf{Ms.PELT}{} and \textbf{Ms.FPOP}{}. For \textbf{Ms.FPOP}{}, after comparisons with other sampling strategies (see \ref{supp_speed_benchmark}), we choose to randomly sample 1 future candidate change.
\paragraph{Metric} For each replicate we time in seconds the compared methods.
\paragraph{Empirical Results} In Figure \ref{fig:runtime} we first observe that for both criteria (multiscale penalized likelihood and penalized likelihood), functional pruning methods are always faster than inequality-based pruning ones. Indeed, \textbf{Ms.FPOP}{} and \textbf{FPOP}{} are always faster than \textbf{Ms.PELT}{} and \textbf{PELT}, respectively. The smaller $D$, the larger the time difference between functional pruning methods and inequality-based pruning ones. For $D=1$, \textbf{Ms.FPOP}{} runs in 2.4 seconds on average and is about 50 times faster than \textbf{Ms.PELT}{} (121.3 seconds on average). For $D=1000$, \textbf{Ms.FPOP}{} runs in 0.7 seconds on average and is about 1.3 times faster than \textbf{Ms.PELT}{} (0.9 seconds on average). Regardless of $D$, \textbf{FPOP}{} always runs in under 0.05 seconds. Similar trends can be observed on \textit{iid} Gaussian signals with $10^6$ datapoints (see Figure \ref{fig:speed1M}).
\begin{figure}
\caption{\textbf{Runtimes as a function of the true number of changepoints.}
\label{fig:runtime}
\end{figure}
\subsection{Evaluation of \textbf{Ms.FPOP}{} relative to \textbf{FPOP}: Accuracy Benchmarks}
In this section we seek to illustrate, using minimalist simulations, the performance of the multiscale criterion proposed in \cite{verzelen2020optimal} and implemented in \textbf{Ms.FPOP}{} relative to the BIC criterion proposed in \cite{Yao} and implemented in \textbf{FPOP}{}.
\subsubsection{Hat Simulations}
\label{text:hat_simu}
\paragraph{Design of Simulations} We repeatedly simulate \textit{iid} Gaussian signals of varying size $n \in \{10^3, 10^4, 10^5\}$. Each signal is affected by 2 changepoints. The second changepoint ($\tau_2$) is fixed at position $\lfloor\frac{2n}{3}\rfloor$ while we vary the position of the first changepoint ($\tau_1$) (see Figure \ref{fig:hat_fig}.A). $\tau_1$ takes a series of 30 positive integers evenly spaced on the $\log$ scale on the interval $[1, \lfloor\frac{n}{3}\rfloor]$. We also look at the series obtained by symmetry around $\lfloor\frac{n}{3}\rfloor$ (i.e. $\lfloor\frac{2n}{3}\rfloor-\tau_1$, see dotted line in Figure \ref{fig:supphat_simu}). Note that for $\tau_1 = \lfloor\frac{n}{3}\rfloor$ the segmentation is balanced. The means of the three resulting segments are set to $\mu_1 = 0$, $\mu_2=\sqrt{\frac{100}{n}}$ and $\mu_3 = 0$. We run both \textbf{Ms.FPOP}{} and \textbf{FPOP}{} on these profiles. \textbf{Ms.FPOP}{} incorporates a multiscale penalty, while \textbf{FPOP}{} assigns equal weight to all segment sizes and serves as a reference point for comparison with \textbf{Ms.FPOP}. We anticipate that the multiscale penalty in \textbf{Ms.FPOP}{} will lead to more accurate segmentations of profiles with well-spread changepoints compared to \textbf{FPOP}. Additionally, as the size of the data ($n$) increases, we expect \textbf{Ms.FPOP}{} to match or outperform \textbf{FPOP}{} in terms of accuracy for all segment sizes.
\paragraph{Metric} We denote $R_{2}$ the proportion of replicates for which a method returns exactly two changepoints. We also denote $\Delta_{R_2}$, the $\log_2$-ratio between $R_{2}$ of \textbf{Ms.FPOP}{} and \textbf{FPOP}.
\paragraph{Empirical Results}
In Figure \ref{fig:hat_fig}.B and \ref{fig:supphat_simu} we observe that with both \textbf{Ms.FPOP}{} and \textbf{FPOP}, $R_{2}$ increases when $\tau_1$ tends towards $\lfloor\frac{n}{3}\rfloor$ (balanced segmentation). Note that the maximum is reached before $\tau_1 = \lfloor\frac{n}{3}\rfloor$.
Furthermore, in agreement with our expectations, in Figure \ref{fig:hat_fig}.B we observe that $\Delta_{R_2}$ increases when $\tau_1$ tends towards $\lfloor\frac{n}{3}\rfloor$. When $n$ increases, the differences observed on small segments in favor of \textbf{FPOP}{} ($\Delta_{R_2}<0$) disappear ($\Delta_{R_2}\to 0$) and the differences on other segments in favor of \textbf{Ms.FPOP}{} ($\Delta_{R_2}>0$) are accentuated.
\begin{figure}
\caption{\textbf{\textbf{Ms.FPOP}
\label{fig:hat_fig}
\end{figure}
\subsubsection{Extended Range of Simulation Scenarios}
\label{extended_sim}
\paragraph{Design of Simulations} Following the protocol of Fearnhead \textit{et al.} (2020), we simulate different scenarios of \textit{iid} Gaussian signals. Each scenario is defined by a combination of $D$, $n$, $\tau$, $\mu$. For each scenario we vary the variance $\sigma^2$ (see Supplementary Data of \cite{Fearnhead2020}). All the simulated profiles, with variance one, can be seen in \ref{supp:other_simu_default}. Based on these initial scenarios we simulate another set of profiles in which profile lengths are multiplied so that each segment contains at least 300 datapoints. This new set of simulated profiles can be seen in \ref{supp:other_simu_min300}. For each scenario and tested $\sigma^2$ we simulate 300 replicates.
\paragraph{Metric} We denote by $AE\%$ the average number of times a method is at least as good as the other methods in terms of the absolute difference between the true and estimated number of changes ($\Delta_D$), the mean squared error (MSE) or the adjusted Rand index (ARI). The closer $AE\%$ is to 100, the better the method. See Supplementary Data of \cite{Fearnhead2020} for a formal definition of this criterion.
\paragraph{Empirical Results} On the simulations of \cite{Fearnhead2020}, in which a large portion of the segments have a length under 100, the performance of \textbf{Ms.FPOP}\ is worse than that of \textbf{FPOP}\ and \textbf{MOSUM} \cite{meier2021mosum} on almost all scenarios except \textit{Dt7}, which does not contain any changepoint (see \ref{supp:other_simu_default}).
On the second set of profiles, using $\Delta_D$ as comparison criterion, we observe in Figure \ref{fig:AE_perc_K_min300} that \textbf{Ms.FPOP}\ achieves similar or better performance than \textbf{FPOP}\ and \textbf{MOSUM} in all scenarios, regardless of $\sigma^2$. The results are similar when we use MSE or ARI as the comparison criterion (see \ref{supp:other_simu_min300}).
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : $\Delta_D$).}
\label{fig:AE_perc_K_min300}
\end{figure}
\section{Discussion}
\paragraph{Extending Functional Pruning Techniques to the Multiscale Penalty} In section \ref{new} we have explained how to extend functional pruning techniques to the case of multiscale penalty. In Figures \ref{fig:simple_runtime} and \ref{fig:runtime} we have seen that for large signals ($n\geq10^5$) with few changepoints, \textbf{Ms.FPOP}\ is an order of magnitude faster than \textbf{Ms.PELT}\ (which relies on inequality based pruning, see \ref{app:phi_pelt}). Even when the number of changepoints increased linearly with the size of the data, \textbf{Ms.FPOP}\ was still faster than \textbf{Ms.PELT}.
The main update rule \eqref{update_croissant} of our dynamic programming algorithm suggests comparing each candidate change $s$ with a set of future candidate changes $s'$. As we have seen in \ref{supp_speed_benchmark}, randomly drawing one $s'$ according to a uniform distribution is the best strategy and allows us to tackle large signals. It is likely that uniform sampling is not optimal. The algorithm alternates between good draws (leading to a strong reduction of $Z_{t,s}$ or even the pruning of $s$) and bad draws (leading to a weak reduction of $Z_{t,s}$). On average this is sufficient, but improvements are possible. In particular, the study of $h(t,s,s')=\log(\frac{t-s'}{t-s})$ (see Assumption A\ref{assumpt1}) suggests disfavoring $s'$ that are too recent or that have been compared recently.
\paragraph{Calibration of $\gamma$ and $\beta$ from the Multiscale Penalty} The least-squares estimator with multiscale penalty proposed by \cite{verzelen2020optimal} involves two constants $\gamma$ and $\beta$ that still need to be investigated. Using signals simulated under the null hypothesis (no changepoint) we have seen that it is possible to find a pair of constants $\gamma = 9$ and $\beta = 2.25$ for which \textbf{Ms.FPOP}{} controls $R_{>0}$. Under this setting we have shown on \textit{hat} (see section \ref{text:hat_simu}) and \textit{step} (see Figure \ref{fig:suppstep_simu}) simulations that \textbf{Ms.FPOP}\ is more powerful than \textbf{FPOP}\ on segmentations with well-spread changepoints. This difference in power grows with $n$. For segmentations with small segments \textbf{FPOP}\ is more powerful than \textbf{Ms.FPOP}\ when $n$ is small ($\approx 10^3$), but for larger $n$ ($\geq 10^4$) this difference disappears.
We also tested \textbf{Ms.FPOP}{} on the benchmark proposed in \cite{Fearnhead2020}. The performances of \textbf{Ms.FPOP}{} are not so good on the original benchmark containing mostly small profiles with small segments but much better for an extended benchmark with larger profiles (see section \ref{extended_sim}).
Without additional work on the calibration of the constants, we would thus recommend using \textbf{Ms.FPOP} for large profiles ($\geq 10^4$).
\paragraph{Unknown Variance} All our simulations have been done on signals with known variance, $\sigma^2$. However, in real-world situations, this may not always be the case. One approach is to estimate $\sigma^2$ and then plug it into the problem, \textit{i.e.} scaling the signal or the penalty by $\frac{1}{\sigma^2}$ or $\sigma^2$, respectively. A robust estimate of $\sigma^2$ can be obtained by estimating the variance of $\Delta_Y = Y_{i+1} - Y_i$ using either the median absolute deviation or the estimator suggested in \cite{hall1990asymptotically}. As an alternative, \cite{verzelen2020optimal} pointed out that one could calibrate the multiplicative constant $L$ of the penalized least-squares estimator using the slope heuristic \cite{Arlot2019}. Investigating the performance of these various approaches is outside the scope of this paper.
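For concreteness, one standard version of the first of these options (plug-in estimation of $\sigma$ from the differenced signal via the median absolute deviation) is sketched below; this is only a common recipe, not part of the \textbf{Ms.FPOP}{} package.
\begin{verbatim}
import numpy as np

def sigma_hat_mad(y):
    d = np.diff(y)   # differencing removes the piecewise mean away from the changes
    mad = np.median(np.abs(d - np.median(d)))
    return mad / (0.6745*np.sqrt(2.0))   # 0.6745 ~ Phi^{-1}(3/4); sqrt(2): Var of a difference

# e.g. rescale the data, then segment:  y_scaled = y / sigma_hat_mad(y)
\end{verbatim}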
\section{Availability of Materials}
The scripts used to generate the figures are available in the following GitHub repository: \url{https://github.com/aLiehrmann/MsFPOP_paper}
. A reference implementation of the \textbf{Ms.FPOP}{} (and \textbf{Ms.PELT}) algorithm is available in the R package of the same name: \url{https://github.com/aLiehrmann/MsFPOP}.
\appendix
\section{\textbf{PELT}\ for Multiscale Penalized Likelihood}\label{app:phi_pelt}
\noindent Following the notation of the PELT paper \cite{PELT} the cost of a segment from $s+1$ to $s'$, $s+1:s'$ is defined as $\mathcal{C}_{s+1:s'} = \sum_{i=s+1}^{s'} (y_{i}-\bar{y}_{s+1:s'})^{2} - \beta g(s'-s).$
In what follows we consider three time points $s<s'<t$. Let $\ell = s'-s$ denote the length of the sequence of observations between times $s$ and $s'$, and $\ell' = t-s'$ the length of the sequence of observations between times $s'$ and $t$.
The key condition to apply the PELT algorithm \cite{PELT} is that, up to a constant $K$, adding a changepoint always reduces the cost, that is:
\begin{assumption}\label{eq:peltcondition}
\begin{equation}
\mathcal{C}_{s+1:s'} + \mathcal{C}_{s'+1:t} + K \leq \mathcal{C}_{s+1:t}
\end{equation}
\end{assumption}
The following lemma ensures that such a $K$ exists for any $n$ and provides explicit values for $K$ in general and when $g$ is concave.
\begin{lemma}\label{lemma:peltbound}
(a) For any function $g$ from $\mathbb{R}$ to $\mathbb{R}$, $\beta \geq 0$, and any $n$, Assumption \ref{eq:peltcondition} is true at least for $K=2 \beta \min_{1 \leq \ell \leq n } \{g(\ell)\} -\beta \max_{1 \leq \ell \leq n } \{g(\ell)\}$.
(b) If $g$ is concave the condition is true for $K=-\beta g(2) + 2\beta g(1).$
\end{lemma}
\begin{proof}
We first note that
\begin{displaymath}
m_n = \underset{1 \leq \ell < n}{\min} \left\{ \underset{\substack{1 \leq \ell' < n \\ \ell+\ell' \leq n}}{\min} \left\{ g(\ell)+g(\ell') - g(\ell+\ell') \right\} \right\}
\end{displaymath}
is well defined as the minimum of a finite set.
By definition of $m_n$ we thus have, for any $1 \leq s < s' < t \leq n$ and for any $K \leq \beta m_n$, that
\begin{eqnarray*}
-\beta g(s'-s) - \beta g(t-s') + K & \leq & -\beta g(t-s)
\end{eqnarray*}
Combining this with
\begin{eqnarray*}
\sum_{i=s+1}^{s'} (y_{i}-\bar{y}_{s+1:s'})^{2} + \sum_{i=s'+1}^{t} (y_{i}-\bar{y}_{s'+1:t})^{2} & \leq & \sum_{i=s+1}^{t} (y_{i}-\bar{y}_{s+1:t})^{2}.
\end{eqnarray*}
we recover that equation \eqref{eq:peltcondition} is true for any $K \leq \beta m_n.$
Now for any $\ell$, $\ell'$ in $\{1, \ldots, n\}^2$ such that $\ell + \ell' \leq n$ we have
\begin{displaymath}
2 \min_{1 \leq \ell \leq n } \{g(\ell)\} - \max_{1 \leq \ell \leq n } \{g(\ell)\} \leq g(\ell) + g(\ell') - g(\ell+\ell').
\end{displaymath}
Hence we get
\begin{displaymath}
2 \min_{1 \leq \ell \leq n } \{g(\ell)\} - \max_{1 \leq \ell \leq n } \{g(\ell)\} \leq m_n,
\end{displaymath}
and we recover (a).
In case $g$ is concave, using the technical Lemma \ref{lemma:concavitydecreasing} twice we get:
\begin{equation}\label{eq:lspecificPELT}
\underset{\substack{1 \leq \ell' < n \\ \ell+\ell' \leq n}}{\min} \left\{ g(\ell)+g(\ell') - g(\ell+\ell') \right\} = g(\ell) + g(1) - g(\ell+1)
\end{equation}
and
\begin{displaymath}
\underset{1 \leq \ell < n}{\min} \left\{ \underset{\substack{1 \leq \ell' < n \\ \ell+\ell' \leq n}}{\min} \left\{ g(\ell)+g(\ell') - g(\ell+\ell') \right\} \right\} = 2g(1) - g(2)
\end{displaymath}
For example, if $g=\log$ we get $K= -\beta\log(2)$.
\end{proof}
\begin{lemma}\label{lemma:concavitydecreasing}
If $g$ is concave then for any $\delta>0$, the function $h: x \rightarrow g(x+\delta) - g(x)$ is non-increasing.
\end{lemma}
\begin{proof}
Consider any $\delta' > 0$.
We have
$x+\delta = (1-\alpha) x + \alpha(x+\delta+\delta')$ for $\alpha = \delta/(\delta+\delta')$ and similarly
$x+\delta' = (1-\alpha') x + \alpha'(x+\delta+\delta')$ with $\alpha' = \delta'/(\delta+\delta').$
Using concavity we have
\begin{eqnarray*}
g(x+\delta) & \geq &(1-\alpha) g(x) + \alpha g(x+\delta+\delta') \\
g(x+\delta') & \geq & (1-\alpha') g(x) + \alpha' g(x+\delta+\delta'). \\
\end{eqnarray*}
Summing these two lines and noting that $\alpha+\alpha'=1$ we get
$g(x+\delta) - g(x) \geq g(x+\delta'+\delta) - g(x+\delta')$.
\end{proof}
\section{Adaptive \textbf{PELT}{} for a Concave Multiscale Penalty}\label{phi_pelt_2}
In the following lemma we show that, for our multiscale penalty and assuming the function $g$ is concave, the constant $K$ in Theorem 3.1 of \cite{PELT} can be chosen adaptively to the length of the last segment.
\begin{lemma}
If $g$ is concave and $\beta \geq 0.$
then if at time $s'$ we have,
$$ F_s + \sum_{i=s+1}^{s'} (y_{i}-\bar{y}_{s+1:s'})^{2} - \beta g(\ell) + K_{\ell} \geq F_{s'}, $$ with $\ell = s'-s$ and $K_{\ell} = \beta( g(\ell) + g(1) - g(\ell+1))$,
then for any time $t$ larger than $s'$ we have :
$$ F_s + \sum_{i=s+1}^{t} (y_{i}-\bar{y}_{s+1:t})^{2} - \beta g(\ell+\ell') \geq F_{s'} + \sum_{i=s'+1}^{t} (y_{i}-\bar{y}_{s'+1:t})^{2} - \beta g(\ell'), $$
and thus
for any time $t\geq s'$, a change at $s$ can never be optimal. Taking $g=\log$ we get $K_{\ell}= -\beta\log(\frac{1}{\ell}+1)$, so that $-\beta\log(2) \leq K_{\ell} < 0$.
\end{lemma}
\begin{proof}
We follow the proof of Theorem 3.1 of \cite{PELT}, using the fact that if $g$ is concave then equation \eqref{eq:lspecificPELT} holds.
\end{proof}
\section{\textbf{Ms.FPOP}\ : Calibration of Constants $\gamma$ and $\beta$ from The Multiscale Penalty}
\label{cal_phifpop_main_supp}
The following plots were generated to calibrate the constants in the multiscale penalty of \cite{verzelen2020optimal}. They are generated as explained in section \ref{calibration}.
\begin{figure}
\caption{\textbf{Proportion of stationary Gaussian process replicates on which \textbf{Ms.FPOP}
\end{figure}
\section{\textbf{Ms.FPOP}{} Speed Benchmark} \label{supp_speed_benchmark}
\paragraph{Sampling Strategies} We compared the runtime of \textbf{Ms.FPOP}{} for various sampling strategies (see section \ref{sec:subsampling}).
We tested sampling 1, 2, 3 and all future changes. We call these strategies respectively rand 1, rand 2, rand 3 and all. We tested them on the simulation described in section \ref{speed_bench}.
It can be seen in Figure \ref{fig:subsamplingspeed} that sampling 1 future change uniformly at random is the fastest for all true numbers of changes and $n=10^5.$
\begin{figure}
\caption{\textbf{Runtimes as a function of the true number of changepoints.}
\label{fig:subsamplingspeed}
\end{figure}
\paragraph{Larger Profile Lengths $(n=10^6)$} Figure \ref{fig:speed1M} is obtained as explained in section \ref{speed_bench}, with $n=10^6$ and $D \in \{1, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500,$\\$ 5000, 5500, 6000, 6500, 7000, 7500, 8000, 8500, 9000, 9500, 10000\}$.
\begin{figure}
\caption{\textbf{Runtimes as a function of the true number of changepoints.}
\label{fig:speed1M}
\end{figure}
\section{\textbf{FPOP}\ vs \textbf{Ms.FPOP}\ : Simulations on Hat Profiles} \label{supp:hat_simu}
Figure \ref{fig:supphat_simu} is obtained as explained in section \ref{text:hat_simu}.
\begin{figure}
\caption{\textbf{\textbf{Ms.FPOP}
\label{fig:supphat_simu}
\end{figure}
\section{\textbf{FPOP}\ vs \textbf{Ms.FPOP}\ : Simulations on Step Profiles}
\label{supp:step_simu}
\begin{figure}
\caption{\textbf{\textbf{Ms.FPOP}
\label{fig:suppstep_simu}
\end{figure}
\section{\textbf{FPOP}\ vs \textbf{Ms.FPOP}\ vs \textbf{MOSUM} : Simulations on Several Scenarios of Gaussian Signals (segments length $>300$)}
\label{supp:other_simu_min300}
Figures \ref{fig:all_scenarios_min300}, \ref{fig:AE_perc_K_min300}, \ref{fig:AE_perc_ARI_min300}, \ref{fig:AE_perc_MSE_min300} were obtained as explained in section \ref{extended_sim} when considering the extension of the benchmark in \cite{Fearnhead2020}, in which profile lengths are multiplied so that each segment contains at least 300 datapoints.
\begin{figure}
\caption{\textbf{Simulated scenarios of Gaussian signals with minimum segments length equal to 300.}
\label{fig:all_scenarios_min300}
\end{figure}
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : ARI).}
\label{fig:AE_perc_ARI_min300}
\end{figure}
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : MSE).}
\label{fig:AE_perc_MSE_min300}
\end{figure}
\section{\textbf{FPOP}\ vs \textbf{Ms.FPOP}\ vs \textbf{MOSUM} : Simulations on Several Scenarios of Gaussian Signals} \label{supp:other_simu_default}
Figures \ref{fig:all_scenarios}, \ref{fig:AE_perc_K}, \ref{fig:AE_perc_ARI}, \ref{fig:AE_perc_MSE} were obtained as explained in section \ref{extended_sim} when considering the original benchmark in \cite{Fearnhead2020}.
On these simulations a large portion of the segments have a length under 100.
\begin{figure}
\caption{\textbf{Simulated scenarios of Gaussian signals.}
\label{fig:all_scenarios}
\end{figure}
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : $\Delta_D$).}
\label{fig:AE_perc_K}
\end{figure}
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : ARI).}
\label{fig:AE_perc_ARI}
\end{figure}
\begin{figure}
\caption{\textbf{AE\% as a function of the scaling factor for the variance (comparison criterion : MSE).}
\label{fig:AE_perc_MSE}
\end{figure}
\end{document}
\begin{document}
\title[On the convergence of the Calabi flow]{On the convergence of the Calabi flow}
\author{Weiyong He}
\address{Department of Mathematics, University of Oregon, Eugene, Oregon, 97403}
\email{[email protected]}
\begin{abstract}Let $(M, [\omega_0], J)$ be a compact Kahler manifold without holomorphic vector field. Suppose $\omega_0$ is (the unique) constant scalar curvature metric. We show that the Calabi flow with any smooth initial metric converges to the constant scalar curvature metric $\omega_0$ with the assumption that the Ricci curvature stays uniformly bounded.
\end{abstract}
\maketitle
\section{Introduction}
Let $(M, [\omega_0], J)$ be a compact Kahler manifold.
The space of Kahler potentials is given by
\[
\mathcal{H}=\{\phi\in C^\infty: \omega=\omega_0+\sqrt{-1} \partial\bar \partial \phi>0\}.
\]
The Calabi flow was defined by E. Calabi in his seminal paper \cite{Calabi82} as follows:
\[
\frac{\partial \phi}{\partial t}=R_\phi-\underline{R},
\]
where $R_\phi$ denotes the scalar curvature of $\omega_\phi$ and $\underline{R}$ is the average of the scalar curvature, a topological constant depending only on $(M, [\omega_0], J)$.
It is a natural equation for seeking a canonical representative in a fixed Kahler class (called an extremal metric in \cite{Calabi82}), which includes the Kahler metrics with constant scalar curvature as a special case.
The Calabi flow is the gradient flow of Mabuchi's $K$-energy $K$ and is also a reduced gradient flow for the Calabi energy. One of the most challenging problems is whether the Calabi flow exists for all time (say, with any smooth initial Kahler metric). X.X. Chen has made an ambitious conjecture as follows.
\begin{conj}[Chen]\label{C-1}The Calabi flow exists for all time for any initial potential in $\mathcal{H}$.\end{conj}
Donaldson \cite{Donaldson03} gives a conjectural picture of the asymptotic behavior of the Calabi flow and relates it to the stability conjecture regarding the existence of constant scalar curvature metrics. In particular, it is expected that when there exists a constant scalar curvature metric in $(M, [\omega_0], J)$, the Calabi flow exists for all time and converges to a metric with constant scalar curvature (this is attributed to Donaldson's conjecture in the literature). Note that by Chen-Tian's theorem \cite{Chentian05, Chentian06}, constant scalar curvature metrics (more generally extremal metrics) in $(M, [\omega_0], J)$ are unique up to diffeomorphism. The longtime existence problem is still largely open (in complex dimension 2 or higher) except for a few special cases. In the present paper, we assume the following. \\
{\bf Assumption}: suppose the Calabi flow with any initial potential $\phi$ exists for all time with a uniform Ricci bound (for example, we assume a bound like $|Ric(t)|\leq C(\omega, \omega_\phi, |\phi|_{C^4})$ depending only on the initial data and the background geometry, but not on time). Note that by a result joint with X.X. Chen \cite{Chen-He}, the assumption on the Ricci curvature actually implies the longtime existence.
\\
The main result in this paper is the following.
\begin{thm}\label{T-1} Under the assumption above, if there is a constant scalar curvature metric in $(M, [\omega_0], J)$ and there is no holomorphic vector field on $(M, J)$, then the Calabi flow converges to a constant scalar curvature metric for any initial metric in $(M, [\omega_0], J)$.
\end{thm}
\begin{rmk}The main motivation of the result is that Conjecture \ref{C-1} should imply that the Calabi flow converges to a constant scalar curvature metric when such a metric is assumed to exist. The Ricci curvature assumption is only technical in the proof. It would be interesting to drop this assumption.
\end{rmk}
{\bf Acknowledgement:} I thank Bing Wang and Song Sun for valuable discussions, and I thank Prof. X.X. Chen for encouragement. I am also grateful to J. Streets for sending me his recent preprint \cite{Streets}, which inspired the considerations in Section 2. The author is partially supported by an NSF grant.
\section{Evolution variational inequality along the Calabi flow}
In this section we prove a version of the evolution variational inequality along the Calabi flow.
\begin{prop}Let $\phi_t$ be a smooth solution of the Calabi flow and let $\psi\in \mathcal{H}$ be any point (not on the curve $\phi_t$). Then
\begin{equation}\label{EVI-1}
K(\psi)-K(\phi_t)\geq d(\psi, \phi_t) \frac{d}{dt} d(\psi, \phi_t).
\end{equation}
As a consequence, we have
\begin{equation}\label{EVI}
2s(K(\psi)-K(\phi_{t+s}))\geq d^2(\psi, \phi_{t+s})-d^{2}(\psi, \phi_t),
\end{equation}
where $d$ is a natural distance function on $\mathcal{H}$, which will be reviewed later.
\end{prop}
The evolution variational inequality is well known for the gradient flow of convex functionals on Hilbert spaces. In a recent preprint, J. Streets \cite{Streets} proved \eqref{EVI} in the framework of $K$-energy minimizing movements (see Mayer \cite{Mayer02}, for example, for some general results on minimizing movements of convex functionals on NPC spaces, and Streets \cite{Streets12} for the $K$-energy minimizing movement). As a consequence of this inequality, Streets also proved that the Calabi flow, when it exists for all time, minimizes the $K$-energy (when it is bounded below) and also minimizes the Calabi energy.
We shall give an alternative proof of the evolution variational inequality \eqref{EVI-1}; \eqref{EVI} is a direct consequence.
Our proof does not rely on the framework of minimizing movements. Actually it is rather straightforward given the results of Chen \cite{Chen00, Chen09} on the geometric structure of $\mathcal{H}$ and the weak convexity of the $K$-energy.
One motivation for this proof is to understand the similar picture for the Kahler-Ricci flow on Fano manifolds.
In particular, we are interested in
\begin{prob}\label{Q-3}Suppose $(M, [\omega_0], J)$ is a Fano manifold and suppose the $\mathcal{F}$ functional is bounded below. Does the Kahler-Ricci flow minimize the $\mathcal{F}$ functional? In particular, we would like to know whether the Kahler-Ricci flow minimizes
\[
H(\omega)=\int_M he^h \omega^n,
\]
where $h$ is the normalized Ricci potential such that $Ric(\omega)-\omega=\sqrt{-1} \partial\bar \partial h$ and
\[
\int_M e^h \omega^n=\int_M \omega^n.
\]
\end{prob}
We first recall the metric structure on $\mathcal{H}$, the space of Kahler potentials. Mabuchi \cite{Mabuchi86} introduced a Riemannian metric on $\mathcal{H}$ and proved that $\mathcal{H}$ is formally a symmetric space of nonpositive curvature (see also \cite{Semmes, Donaldson99}). Chen \cite{Chen00} proved that $\mathcal{H}$ is actually a metric space and that it is convex by $C^{1, 1}$ geodesics (potentials with bounded mixed derivatives), partially confirming conjectures of Donaldson \cite{Donaldson99}, who set up an ambitious program tying the existence of constant scalar curvature metrics to the geometry of $\mathcal{H}$.
Our proof of the evolution variational inequality relies on two results of Chen \cite{Chen00, Chen09} and is a rather straightforward consequence of them.
The first result we need is the following weak convexity of the $K$-energy, proved in \cite{Chen09}.
Let $A(t), 0\leq t\leq 1$, be a geodesic in $\mathcal{H}$ with endpoints $A(0)=\phi, A(1)=\psi$; then the $K$-energy is weakly convex in the sense that
\begin{equation}\label{E-chen1}
K(\psi)-K(\phi)\geq \frac{d}{dt} K(A(t))|_{t=0}=\int_M \dot A(0) (\underline{R}-R_{\phi})\omega_\phi^n.
\end{equation}
The second result concerns the derivative of the distance function $d$ on $\mathcal{H}$ and is proved in \cite{Chen00}.
Suppose $\phi=\phi(s)$ is a smooth curve $C$ in $\mathcal{H}$; then for any $\psi\in \mathcal{H}$ not on the curve $C$, $d(\psi, \phi)$ is $C^1$ in $s$ and, in particular,
\begin{equation}\label{E-chen2}
\frac{d}{ds}d(\psi, \phi)= -d(\psi, \phi)^{-1}\int_M \dot A(0) \frac{\partial\phi}{\partial s} \omega_{\phi}^n,
\end{equation}
where $A(t)$ is the geodesic with $A(0)=\phi(s), A(1)=\psi$.
Given \eqref{E-chen1} and \eqref{E-chen2}, suppose $\phi(s)$ solves the Calabi flow. It follows that
\[
K(\psi)-K(\phi(s))-d(\psi, \phi(s))\frac{d}{ds}d(\psi, \phi(s))\geq \int_M \dot A(0) \left(\underline{R}-R_\phi+\frac{\partial\phi}{\partial s}\right)\omega^n_{\phi(s)}=0.
\]
This proves \eqref{EVI-1}, while \eqref{EVI} is obtained directly by integrating from $t$ to $t+s$ and noting that the $K$-energy is decreasing along the flow.
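Spelled out, the integration step reads as follows: by \eqref{EVI-1} and the monotonicity of the $K$-energy along the flow,
\[
d^2(\psi, \phi_{t+s})-d^{2}(\psi, \phi_t)=\int_t^{t+s}\frac{d}{d\tau}d^2(\psi,\phi_\tau)\,d\tau
\leq 2\int_t^{t+s}\big(K(\psi)-K(\phi_\tau)\big)\,d\tau
\leq 2s\big(K(\psi)-K(\phi_{t+s})\big).
\]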
As a corollary, it follows that
\begin{cor}For any initial potential $\phi_0\in \mathcal{H}$, if the Calabi flow has a long time solution $\phi(t)$ with $\phi(0)=\phi_0$, then
\[\lim_{t\rightarrow \infty}K(\phi(t))=\inf_{\psi\in \mathcal{H}}K(\psi).\]
\end{cor}
\begin{proof}This is proved in \cite{Streets}. Since the proof is rather short, we repeat it here. Suppose otherwise; then there exist $\delta>0$ and $\psi\in \mathcal{H}$ such that for all $t$ sufficiently large,
\[
K(\psi)-K(\phi(t))\leq -\delta.
\]
Taking $t=0$ in \eqref{EVI} and letting $s\rightarrow \infty$, we get
\[
-2s\delta \geq -d(\psi, \phi_0)^2,
\]
which is a contradiction for $s$ large.
\end{proof}
Suppose the Calabi flow $\phi(t)$ exists for all time with initial metric $\phi_0$. We denote
\[
A(\phi_0)=\lim_{t\rightarrow \infty}\mathcal{C}(\phi(t)),
\]
where $\mathcal{C}$ denotes the Calabi energy. Since the Calabi energy is nonincreasing along the flow, the limit exists and $A(\phi_0)$ is a nonnegative number.
\begin{cor}\label{C-C}
Suppose $\phi(t)$ and $\psi(t)$ are two long-time solutions of the Calabi flow with initial metrics $\phi_0$ and $\psi_0$ respectively. Then
\[
A(\phi_0)=A(\psi_0).
\]
In other words, if the Calabi flow exists for all time, then the Calabi energy has the same energy level at infinity, independent of the initial metric.
\end{cor}
\begin{proof}This is also proved in \cite{Streets}.
We may assume that the curves $\phi(t)$ and $\psi(t)$ do not intersect; otherwise we may assume, for example, that $\psi_0=\phi(t)$ for some $t$, in which case it is clear that $A(\phi_0)=A(\psi_0)$.
Suppose $A(\phi_0)=A(\psi_0)-3\delta$ for some $\delta>0$. Since the statement only concerns the asymptotic behavior of the Calabi energy along $\phi(t)$ and $\psi(t)$, we can assume that
for any $t$ (otherwise let $t$ be sufficiently large),
\begin{equation}\label{E-2.5}
\mathcal{C}(\phi(t))\leq \mathcal{C}(\psi(t))-\delta.
\end{equation}
Note that if $\phi(t)$ is a solution of the Calabi flow, then by \eqref{EVI} we have, for $\psi$ not on the curve $\phi(t)$,
\begin{equation}
2s(K(\psi)-K(\phi(t+s)))\geq d^2(\psi, \phi(t+s))-d^2(\psi, \phi(t)).
\end{equation}
Taking $s=1$ and $\psi=\psi(t)$, we obtain
\begin{equation}\label{E-2.6}
d^2(\psi(t), \phi(t))+2(K(\psi(t))-K(\phi(t)))+2(K(\phi(t))-K(\phi(t+1)))\geq 0.
\end{equation}
By a result of Calabi-Chen \cite{CalabiChen}, the distance is nonincreasing along the Calabi flow, so
\[
d^2(\psi(t), \phi(t))\leq d^2(\phi_0, \psi_0).
\]
Also we have
\[
K(\phi(t))-K(\phi(t+1))=\int_t^{t+1} \mathcal{C}(\phi(s)) ds\leq \mathcal{C}(\phi_0).
\]
On the other hand,
\[
K(\psi_0)-K(\psi(t))=\int_0^t \mathcal{C}(\psi(s))ds
\]
and
\[
K(\phi_0)-K(\phi(t))=\int_0^t\mathcal{C}(\phi(s))ds.
\]
It follows that
\[
K(\psi(t))-K(\phi(t))=\int_0^t (\mathcal{C}(\phi(s))-\mathcal{C}(\psi(s)))ds+K(\psi_0)-K(\phi_0).
\]
By \eqref{E-2.5}, we have
\[
K(\psi(t))-K(\phi(t))\leq -\delta t +C.
\]
This contradicts \eqref{E-2.6} when $t$ is large enough.
\end{proof}
\begin{rmk}
The results in this section are mostly proved by J. Streets \cite{Streets} using the framework of minimizing movements. Our main motivation in this section is to establish \eqref{EVI-1} along the Calabi flow by a direct approach. We hope this argument will give an approach to Question \ref{Q-3}.
\end{rmk}
\section{The convergence of the Calabi flow}
We prove our main theorem in this section. Tian-Zhu \cite{TZ05} proved the convergence of the Kahler-Ricci flow to a Kahler-Ricci soliton using a continuity method applied to the Kahler-Ricci flow (see also \cite{TZ3}). Our method mimics their argument, using a continuity method and the level set of a certain functional: for the Kahler-Ricci flow they use Perelman's entropy, while here we use the Calabi energy.
First we establish a finite time stability result. Given a fixed background metric $\omega$, we define a set $B=B(\lambda, \Lambda, K, \omega)$ as follows,
\[
B=\{\phi\in \mathcal{H}: \lambda\omega\leq \omega_\phi\leq \Lambda\omega, \|\phi\|_{C^{3, \alpha}}\leq K\}.
\]
In \cite{Chen-He}, we proved a short time existence result for the Calabi flow with a smoothing property and smooth dependence on the initial data (see Theorem 3.2 in \cite{Chen-He}).
\begin{prop}\label{P-2}
Suppose the Calabi flow exists on the maximal time interval $[0, T)$ with initial potential $\phi_0$, and denote by $\phi_0(t)$ the corresponding solution. Then for any $0<T_0<T$, there exists $\epsilon_0=\epsilon_0(\phi_0, T_0, \omega)$ such that
for any potential $\psi$ with $|\psi-\phi_0|_{C^5}\leq \epsilon_0$, the Calabi flow with initial potential $\psi$ exists on $[0, T_0]$ and
\[
|\psi(T_0)-\phi_0(T_0)|_{C^5}\leq C=C(\epsilon_0, \phi_0, T_0, \omega),
\]
where $C\rightarrow 0$ as $\epsilon_0\rightarrow 0$.
\end{prop}
\begin{proof}For fixed $T_0$, $\phi_0(t)$ is a smooth path in $\mathcal{H}$ for $t\in [0, T_0]$. We can then pick uniform constants $\lambda, \Lambda, K$ depending on $\phi_0, T_0$ such that
$\phi_0(t)\in B=B(\lambda, \Lambda, K, \omega)$.
We shall also assume that there exists a small number $\delta$ such that if \[\min_{t\in [0, T_0]}|\psi-\phi_0(t)|_{C^{3, \alpha}} \leq \delta,\]
then $\psi \in B$.
By Theorem 3.2 in \cite{Chen-He}, for any potential $\psi\in B$ there exist uniform constants $t_0, \epsilon_0$ and $C_0$, depending only on $\lambda, \Lambda, K$ and $\omega$, such that the Calabi flow solution $\psi(t)$ exists on $[0, t_0]$; moreover, if
\[
|\psi-\phi_0|_{C^{3, \alpha}} \leq \epsilon_0,
\]
then for any $t\in (0, t_0]$,
\begin{equation}\label{E-3.5}
\begin{split}
|\psi(t)-\phi_0(t)|_{C^{3, \alpha}} &\leq C_0 |\psi-\phi_0|_{C^{3, \alpha}},\\
|\psi(t)-\phi_0(t)|_{C^{4, \alpha}} &\leq C_0 t^{-1/4} |\psi-\phi_0|_{C^{3, \alpha}}.
\end{split}
\end{equation}
The smooth dependence statement \eqref{E-3.5} is only emphasized around a constant scalar curvature metric in Theorem 3.2 of \cite{Chen-He} (there it is mainly used to prove the stability of a constant scalar curvature metric along the Calabi flow), but it holds around any smooth metric $\omega_{\phi_0}$ (it even holds for initial metrics in $C^{2, \alpha}$, see Theorem 3.1 in \cite{Chen-He} and Theorem 2.1 in \cite{He}).
We shall also assume $\epsilon_0\leq \delta$. Let $k$ be the least integer such that $k\geq T_0/t_0$. Take $\epsilon$ small enough that $(C_0)^k \epsilon<\epsilon_0$; then for any $\psi\in B$ satisfying
\[
|\psi-\phi_0|_{C^5}\leq \epsilon,
\]
we get that
\[
|\psi(t_0)-\phi_0(t_0)|_{C^{3, \alpha}}\leq C_0\epsilon.
\]
Clearly $\psi(t_0)$ and $\phi_0(t_0)$ are both in $B$. We can then apply Theorem 3.2 in \cite{Chen-He} to the initial potentials $\phi_0(t_0)$ and $\psi(t_0)$ to extend the Calabi flow solution $\psi(t)$ to $[t_0, 2t_0]$. Repeating this argument, we obtain a solution $\psi(t)$ on $[0, kt_0]$, which contains $[0, T_0]$. (We emphasize that there is no contradiction since $t_0$ depends on $T_0$ and we know that $T_0\leq kt_0<T$; we cannot repeat the argument for $t>T_0$.) It then follows from \eqref{E-3.5} that, for any $t\in [0, T_0]$,
\[
|\psi(t)-\phi_0(t)|_{C^{3, \alpha}}\leq (C_0)^k\epsilon<\epsilon_0.
\]
By the smoothing property, we get that
\[
|\psi(T_0)-\phi_0(T_0)|_{C^5}\leq C (\epsilon_0, \lambda, \Lambda, K, T_0, \omega),
\]
where $C$ goes to zero as $\epsilon_0$ goes to zero.
\end{proof}
\begin{rmk}With the assumption on the Ricci curvature, the above finite time stability becomes immediate from the compactness theorem in \cite{Chen-He}.
\end{rmk}
Now we are in a position to prove Theorem \ref{T-1}.
\begin{proof}Note that we assume there is no holomorphic vector field. Let $\omega_0$ be the unique constant scalar curvature metric in $(M, [\omega_0])$ (uniqueness is proved by Chen-Tian \cite{Chentian05, Chentian06}).
Let $\omega_\phi=\omega_0+\sqrt{-1}\partial\bar \partial \phi$ (we assume the normalization condition $I(\phi)=0$). Denote by
\[G=\{\phi\in \mathcal{H}: |\phi(t)|_{C^5}\rightarrow 0, t\rightarrow \infty\}\]
the set of initial potentials such that the Calabi flow solution $\phi(t)$ converges to zero (hence $\omega_{\phi(t)}$ converges to $\omega_0$). Clearly $0\in G$. Let $\phi\in \mathcal{H}$ and let $\phi_s, 0\leq s\leq 1$, be a smooth path in $\mathcal{H}$ such that $\phi_0=0, \phi_1=\phi$ (we can choose $\phi_s=s\phi$, say). Denote the corresponding Calabi flow solutions by $\phi_s(t)$. We want to show that $\phi_s\in G$ for every $s$. Now let
\[
S=\{s\in [0, 1]: \phi_s\in G\}.
\]
We want to prove $S=[0, 1]$. By the method of continuity, we need to show that $S$ is open and closed. The openness follows essentially from the finite time stability and the stability theorem around the constant scalar curvature metric proved in \cite{Chen-He}. In particular, we have
\begin{prop}Suppose $s\in S$. Then for $\delta$ small enough, $(s-\delta, s+\delta)\cap [0, 1] \subset S$.
\end{prop}
Since $\omega_{\phi_s(t)}$ converges to $\omega_0$, we have $|\phi_s(t)|_{C^5}\rightarrow 0$ as $t\rightarrow\infty$. By the stability of the constant scalar curvature metric along the Calabi flow, there exists $\epsilon=\epsilon(\omega_0)$ such that if
\[
|\phi|_{C^5}\leq 2 \epsilon,
\]
then $\phi\in G$. Let $T$ be sufficiently large that $|\phi_s(T)|_{C^5}\leq \epsilon$. With $T$ fixed, by Proposition \ref{P-2} there exists $\epsilon_1$ small enough such that if $|\psi-\phi_s|_{C^5}\leq \epsilon_1$, then
\[
|\phi_s(T)-\psi(T)|_{C^5}\leq c(\epsilon_1, \phi_s, T).
\]
We choose $\epsilon_1$ small enough that $c(\epsilon_1, \phi_s, T)\leq \epsilon$. Then
\[
|\psi(T)|_{C^5}\leq 2\epsilon.
\]
Applying Theorem 4.1 in \cite{Chen-He} to $\psi(T)$, we get that $\psi(T)\in G$, and hence $\psi\in G$. It follows that there exists $\epsilon_1$ such that $\psi\in G$ provided
\[
|\psi-\phi_s|_{C^5}\leq \epsilon_1.
\]
Hence if $s\in S$, we can choose $\delta$ small enough that $\phi_r\in G$ for $r\in (s-\delta, s+\delta)\cap [0, 1]$. \\
To show that $S$ is closed, it suffices to show that if $[0, 1)\subset S$ then $1\in S$. In fact, we will show that if $[0, 1)\subset S$, then the following uniform estimate holds for $\phi_s(t)$, $s\in [0, 1)$.
\begin{prop}For any $\delta>0$, there exists $T>0$ such that for any $s\in [0, 1)$ and $t\geq T$,
\[
|\phi_s(t)|_{C^5}\leq \delta.
\]
\end{prop}
We argue by contradiction. Suppose otherwise; then there exist $\delta>0$ and sequences $s_i\rightarrow 1$ and $t_i\rightarrow \infty$ such that
\[
|\phi_{s_i}(t_i)|_{C^5}\geq \delta.
\]
Since $|\phi_{s_i}(t)|_{C^5}\rightarrow 0$ as $t\rightarrow \infty$ for each $s_i\in [0, 1)$, we can choose $t_i$ such that
\[
|\phi_{s_i}(t_i)|_{C^5}=\delta\geq |\phi_{s_i}(t)|_{C^5}, \quad t\geq t_i.
\]
Let $\psi_i(t)=\phi_{s_i}(t_i+t)$ for $t\in [-1, 1]$. Then $\psi_i(t)$ is a sequence of solutions of the Calabi flow on $(M, [\omega_0], J)$ satisfying, for $t\in [0, 1]$,
\[
|\psi_i(t)|_{C^5}\leq \delta.
\]
By the compactness theorem, $\psi_i(t)$ converges to $\psi_\infty(t)$, which is still a solution of the Calabi flow on $[0, 1]$. Note that at the moment the convergence
\[
\psi_i(t)\rightarrow \psi_\infty(t)
\]
is only in $C^{4, \alpha}$ for any $\alpha\in (0, 1)$. Now we claim that $\psi_\infty(t)\equiv 0$ for $t\in [0, 1]$.
When $s=1$, let $\phi_1(t)=\phi(t)$ be the corresponding solution of the Calabi flow. By Corollary \ref{C-C}, for any solution of the Calabi flow the Calabi energy has the same energy level at infinity; since the flows $\phi_s(t)$ with $s\in S$ converge to the constant scalar curvature metric, this common level is zero. Hence $\mathcal{C}(\phi(t))\rightarrow 0$ as $t\rightarrow \infty$.
Therefore, for any $\epsilon>0$ we can find $t_0$ sufficiently large that
\[
\mathcal{C}(\phi(t_0))<\epsilon/2.
\]
By Proposition \ref{P-2}, we get that for $i$ sufficiently large,
\[
\mathcal{C}(\phi_{s_i}(t_0))<\epsilon.
\]
Since the Calabi energy is nonincreasing along the flow, it then follows that for $i$ sufficiently large and $t\in [0, 1]$,
\[
\mathcal{C}(\psi_i(t))=\mathcal{C}(\phi_{s_i}(t_i+t))<\epsilon.
\]
As $\epsilon$ is arbitrary, we conclude that for any $t\in [0, 1]$,
\[
\mathcal{C}(\psi_\infty(t))=0.
\]
Hence the $\psi_\infty(t)$ are all potentials of constant scalar curvature metrics. By the uniqueness theorem (recall that we assume there is no holomorphic vector field), we know that $\psi_\infty(t)\equiv 0$. In particular,
\[
\psi_i(t) \rightarrow 0
\]
in $C^{4, \alpha}$ as $i\rightarrow \infty$. To get a contradiction, we use the assumption that the Ricci curvature is uniformly bounded for all $i$.
Consider $\psi_i(t)$ on $[-1, 1]$. Since the Ricci curvature, and hence the scalar curvature, is uniformly bounded, we have
\[
|\partial_t \psi_i(t)|\leq C.
\]
Hence $\psi_i(t)$ is uniformly bounded on $[-1, 1]$ ($\psi_i(0)$ is uniformly small in $C^5$). By the compactness theorem in \cite{Chen-He}, we know that $\psi_i(t)$ is uniformly bounded in $C^{3, \alpha}$ on $[-1, 1]$.
Now, applying the smoothing property of the Calabi flow to $\psi_i(t)$ on $t\in [-1, 1]$ for $i$ sufficiently large, it follows that for any $t\geq 0$,
\[
|\psi_i(t)|_{C^k}\leq C(k, t).
\]
In other words, $\psi_i(t)$ converges to $0$ in $C^\infty$ for any $t\geq 0$ as $i\rightarrow \infty$. This clearly contradicts the fact that $|\psi_i(0)|_{C^5}=\delta$, and completes the proof of Theorem \ref{T-1}.
\end{proof}
\begin{rmk}The Ricci curvature assumption is only used to get improved regularity at time $t=0$ for $\psi_i(t)$. Such an assumption is rather technical; we hope to overcome this difficulty and drop the Ricci curvature assumption in the future. The assumption on the nonexistence of holomorphic vector fields is rather superfluous, and a similar strategy should also apply to extremal metrics. We leave these problems for future study, since the Ricci curvature assumption seems to be the more serious one.
\end{rmk}
\begin{thebibliography}{99}
\bibitem{Calabi82}E. Calabi, {\it Extremal K\"ahler metrics}, in {\it Seminar on Differential Geometry},
ed. S. T. Yau, Annals of Mathematics Studies 102, Princeton
University Press (1982), 259-290.
\bibitem{CalabiChen}E. Calabi, X.-X. Chen, \emph{The space of K\"ahler metrics. II.} J. Differential Geom. 61 (2002), no. 2, 173-193.
\bibitem{Chen00}X.-X. Chen, {\it The space of K\"ahler metrics}, J. Differential Geom. 56 (2000), no. 2, 189-234.
\bibitem{Chen09}X.-X. Chen, \emph{Space of K\"ahler metrics. III. On the lower bound of the Calabi energy and geodesic distance.} Invent. Math. 175 (2009), no. 3, 453-503.
\bibitem{Chen-He}X.-X. Chen, W.-Y. He, \emph{On the Calabi flow}, Amer. J. Math. 130 (2008), no. 2, 539-570.
\bibitem{Chentian05}X.-X. Chen, G. Tian, \emph{Partial regularity for homogeneous complex Monge-Amp\`ere equations.} C. R. Math. Acad. Sci. Paris 340 (2005), no. 5, 337-340.
\bibitem{Chentian06}X.-X. Chen, G. Tian, \emph{Geometry of K\"ahler metrics and foliations by holomorphic discs.} Publ. Math. Inst. Hautes \'Etudes Sci. No. 107 (2008), 1-107.
\bibitem{Donaldson99}S. K. Donaldson, {\it Symmetric spaces, K\"ahler geometry and Hamiltonian dynamics.} Northern California Symplectic Geometry Seminar, 13-33, Amer. Math. Soc. Transl. Ser. 2, 196, Amer. Math. Soc., Providence, RI, 1999.
\bibitem{Donaldson03}S. K. Donaldson, {\it Conjectures in K\"ahler geometry}, Strings and
geometry, 71--78, Clay Math. Proc., {\bf 3}, Amer. Math. Soc.,
Providence, RI, 2004.
\bibitem{He}W.-Y. He, \emph{Local solution and extension to the Calabi flow}, to appear in J. Geom. Anal. (2012).
\bibitem{Mabuchi86}T. Mabuchi, {\it Some symplectic geometry on compact K\"ahler manifolds. I}, Osaka J. Math. 24 (1987), no. 2, 227-252.
\bibitem{Mayer02}U. F. Mayer, \emph{Gradient flows on nonpositively curved metric spaces and harmonic maps}, Comm. Anal. Geom. 6 (1998), 199-253.
\bibitem{Semmes}S. Semmes, \emph{Complex Monge-Amp\`ere and symplectic manifolds.} Amer. J. Math. 114 (1992), no. 3, 495-550.
\bibitem{Streets12}J. Streets, \emph{Long time existence of minimizing movement solutions of Calabi flow}, arxiv.org/abs/1208.2718.
\bibitem{Streets}J. Streets, \emph{The consistency and convergence of K-energy minimizing movement}, preprint.
\bibitem{TZ05}G. Tian, X.-H. Zhu, \emph{Convergence of K\"ahler-Ricci flow on Fano manifolds, II}, arxiv.org/abs/1102.4798.
\bibitem{TZ3}G. Tian, S. J. Zhang, Z. L. Zhang, X.-H. Zhu, \emph{Supremum of Perelman's entropy and K\"ahler-Ricci flow on a Fano manifold}, arxiv.org/abs/1107.4018.
\end{thebibliography}
\end{document}
\begin{document}
\ifnum0=0
\title{Approximate Quantum Circuit Synthesis using Block-Encodings}
\author{Daan Camps}
\email{[email protected]}
\affiliation{Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA}
\author{Roel Van Beeumen}
\email{[email protected]}
\affiliation{Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA}
\date{\today}
\begin{abstract}
One of the challenges in quantum computing is the synthesis of unitary
operators into quantum circuits with polylogarithmic gate complexity.
Exact synthesis of generic unitaries requires an exponential
number of gates in general.
We propose a novel approximate quantum circuit synthesis technique by relaxing the
unitary constraints and interchanging them for ancilla qubits via block-encodings.
This approach combines smaller block-encodings, which are
easier to synthesize, into quantum circuits for larger operators.
Due to the use of block-encodings, our technique is not limited to unitary
operators and can also be applied for the synthesis of arbitrary operators.
We show that operators which can be approximated by a canonical polyadic
expression with a polylogarithmic number of terms can be synthesized with polylogarithmic gate complexity
with respect to the matrix dimension.
\end{abstract}
\maketitle
\fi
\section{Introduction}
Quantum computing holds the promise of speeding up computations
in a wide variety of fields \cite{Nielsen:2011:QCQI}.
After early breakthroughs such as Shor's algorithm \cite{shor1994} for factoring
and Grover's algorithm \cite{grov1996} for searching,
there have been substantial developments in various quantum algorithms over the past two decades.
Noteworthy are the quantum walk algorithm of Szegedy \cite{quant-ph/0401053,szeg2004},
and the quantum linear systems
algorithm by Harrow, Hassidim, and Lloyd \cite{haha2009}.
These developments have led to quantum linear systems \cite{chko2017} and Hamiltonian simulation \cite{bech2015} algorithms inspired by quantum walks.
A unifying framework called the quantum singular value transformation, which combines
the notion of qubitization \cite{loch2019} and quantum signal processing \cite{loch2017} by Low and Chuang,
was recently proposed by Gily\'en et al.~\cite{gisu2018,gisu2019}.
The quantum singular value transformation can describe all aforementioned quantum algorithms except factoring.
Besides that, it has sparked an interest in the use of block-encodings since they can directly be used as input for a quantum singular value transformation. A block-encoding is the embedding of a --not necessarily unitary-- operator as the leading principal block in a larger unitary
\begin{myequation}
U = \begin{bmatrix} A/\alpha & *\, \\ * & *\, \end{bmatrix}
\ \ \Longleftrightarrow \ \
A = \alpha \left(\bra0 \otimes \eye\right) U \left(\ket0 \otimes \eye\right),
\end{myequation}
where $*$ denotes arbitrary matrix elements.
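For illustration only (a hedged sketch that is not part of the synthesis scheme of this paper; it assumes NumPy and SciPy, and the helper name \texttt{block\_encode} is our own), an arbitrary matrix can be embedded as the leading block of a one-ancilla unitary by the standard dilation:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def block_encode(A, slack=1.01):
    """Unitary U whose leading block is A/alpha (one ancilla qubit)."""
    alpha = slack * np.linalg.norm(A, 2)   # subnormalization factor
    Ah = A / alpha
    I = np.eye(A.shape[0])
    U = np.block([[Ah, sqrtm(I - Ah @ Ah.conj().T)],
                  [sqrtm(I - Ah.conj().T @ Ah), -Ah.conj().T]])
    return U, alpha

A = np.random.default_rng(0).standard_normal((4, 4))
U, alpha = block_encode(A)
assert np.allclose(U.conj().T @ U, np.eye(8))   # U is unitary
assert np.allclose(alpha * U[:4, :4], A)        # leading block encodes A
\end{verbatim}
Here the ancilla is taken as the most significant qubit, so the leading $2^s \times 2^s$ block of $U$ is the encoded operator $A/\alpha$.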
In this paper, we propose the use of block-encodings, not as a building block for quantum algorithms, but as a technique for \emph{approximate} quantum circuit synthesis and, more generally, the synthesis of arbitrary operators into quantum circuits.
One of the major challenges on noisy intermediate-scale quantum (NISQ)
devices is the limited circuit depth \cite{pres2018}.
In general, exact synthesis of generic unitary operators requires
exponentially many quantum gates \cite{Kitaev:2002:CQC,shbu2006,dani2006}.
The noise in NISQ devices limits the circuit depth but also relaxes the
need for exact synthesis.
In other words, we only need to approximate the action of some $n$-qubit
operator up to an error proportional to the noise level.
A polynomial dependence of the circuit depth on $n$ is necessary to obtain
efficient quantum circuits.
Examples of other approximate synthesis approaches have been proposed in
\cite{pasv2014,boro2015,mamo2016,khlr2019,yose2020}.
We show that, under certain assumptions, an efficient quantum circuit can be devised if the operator can be $\epsilon$-approximated by a canonical polyadic (CP) expression \cite{koba2009,hitc1927} with a number of terms that depends polylogarithmically on the operator dimension.
We denote these by \emph{PLTCP matrices}.
CP decompositions have found applications in many scientific disciplines because they can often be computed approximately using optimization algorithms. However, their calculation is an NP-hard problem in general.
We also demonstrate that the class of operators that we can efficiently synthesize is a linear combination of terms with Kronecker product structure, which is more general than standard CP decompositions.
We call these \emph{CP-like} decompositions.
The proposed technique uses two operations to efficiently combine block-encodings: the Kronecker product of block-encodings and a linear combination of block-encodings.
This allows us to combine block-encodings of small matrices into quantum circuits for larger operators.
We show that in practice the scheme requires at most a logarithmic number of ancilla qubits,
study the relation between the errors on the individual encodings and the overall circuit,
and analyze the CNOT complexity of the circuits.
Finally, we show three examples of
non-unitary operators that naturally have a CP-like structure and can efficiently be encoded using the proposed technique.
\section{Block-encodings}
Since an $n$-qubit quantum circuit performs a unitary operation, non-unitary
operations cannot directly be handled by quantum computers.
One way to overcome this limitation is to encode the non-unitary matrix into
a larger unitary one, a so-called \emph{block-encoding}~\cite{gisu2018,gisu2019}.
We define an \emph{approximate} block-encoding of an operator on $s$ signal qubits, $\eA_s$, in a unitary $\eU_n$ on $n$ qubits as follows.
\begin{definition}
\label{def:BE}
Let $a,s,n \inN$ such that $n = a + s$, and $\epsilon\inR^+$.
Then an $n$-qubit unitary $\eU_n$ is an $(\alpha,a,\epsilon)$-block-encoding of an $s$-qubit operator $\eA_s$ if
\begin{myequation}
\tilde \eA_s = \left(\bra{0}^{\otimes a} \otimes \eI_{s} \right) \eU_n
\left(\ket{0}^{\otimes a} \otimes \eI_{s} \right),
\end{myequation}
and
\(
\normtwo[\big]{\eA_s - \alpha \tilde \eA_s} \leq \epsilon.
\)
\end{definition}
The parameters $(\alpha, a, \epsilon)$ of the block-encoding are, respectively, the \emph{subnormalization factor} to encode matrices of arbitrary norm, the number of \emph{ancilla} qubits, and the \emph{error} of the block-encoding.
Since $\normtwo{\eU_n} = 1$, we have that $\normtwo{\tilde \eA_s} \leq 1$ and $\normtwo{\eA_s} \leq \alpha + \epsilon$.
Note that every unitary $U_s$ is already a $(1,0,0)$-block-encoding of itself and every non-unitary matrix $\eA_s$ can be embedded in a $(\normtwo{\eA_s},1,0)$-block-encoding \cite{QI:Alber:2001}.
This does not guarantee the existence of an efficient quantum circuit.
An equivalent interpretation of \Cref{def:BE} is that $\tilde \eA_s$ is the partial trace of $\eU_n$ over the zero state of the ancilla space. This naturally partitions the Hilbert space $\cH_n$ into $\cH_a \otimes \cH_s$.
Given an $s$ qubit signal state, $\ket{\psi_s} \in \cH_s$,
the action of $\eU_n$ on $\ket{\psi_n} = \ket{0}^{\otimes a} \otimes \ket{\psi_s} \in \cH_n$ becomes
\begin{myequation}
\eU_n \ket{\psi_n} = \ket{0}^{\otimes a} \otimes \tilde \eA_s \ket{\psi_s}
+ \sqrt{1 - \normtwo{ \tilde \eA_s \ket{\psi_s}}^2} \ket{\phi^{\perp}_n},
\end{myequation}
with
\begin{myequation}
\left(\bra{0}^{\otimes a} \otimes \eI_{s}\right) \ket{\phi^{\perp}_n} =0,
\qquad \normtwo[\big]{\ket{\phi^{\perp}_n}} =1,
\end{myequation}
and $\ket{\phi^{\perp}_n}$ the normalized state for which the ancilla register has a state orthogonal to $\ket{0}^{\otimes a}$.
By construction, we see that a partial measurement of the ancilla register projects out $\ket{\phi^{\perp}_n}$ and results in
$(\ket{0}^{\otimes a} \otimes \tilde \eA_s \ket{\psi_s})/\normtwo{\tilde \eA_s \ket{\psi_s}}$ with probability $\normtwo{\tilde \eA_s \ket{\psi_s}}^2$.
In this case, the ancilla register is measured in the zero state and the signal register is in the target state $\tilde \eA_s \ket{\psi_s}$, see \Cref{fig:BE}.
An inadmissible state orthogonal to the desired outcome is obtained with probability $1 - \normtwo{\tilde \eA_s \ket{\psi_s}}^2$.
Using amplitude amplification, on the order of $1/\normtwo{\tilde \eA_s \ket{\psi_s}}$ repetitions of the process are required on average for success.
This makes our proposed synthesis technique probabilistic.
\begin{figure}
\caption{Quantum circuit for $\eU_n$. The thick quantum wire carries the \emph{signal} register.}
\label{fig:BE}
\end{figure}
\section{Combining block-encodings}
We introduce two operations on block-encodings that in combination allow us to build encodings of larger operators from encodings of small operators.
The first operation creates a block-encoding of a Kronecker product of two matrices from the block-encodings of the individual matrices.
We denote a SWAP-gate on the $i$th and $j$th qubits as $\swap_j^i$.
\begin{lemma}
\label{lemma:KP}
Let $\eU_{n}$ and $\eU_{m}$ be $(\alpha,a,\epsilon_1)$- and $(\beta,b,\epsilon_2)$-block-encodings of $\eA_{s}$ and $\eA_{t}$, respectively, and define
$\eS_{n + m} = \prod_{i=1}^{s} \, \swap^{a+i}_{a+b+i}$.
Then,
\begin{myequation}\label{eq:TP}
\eS_{n + m}
\left( \eU_{n} \otimes \eU_{m} \right)
\eS_{n + m}^{\dagger}
\end{myequation}
is an $(\alpha\beta, a + b, \alpha \epsilon_2 + \beta \epsilon_1 + \epsilon_1 \epsilon_2)$-block-encoding of $\eA_{s} \otimes \eA_{t}$.
\end{lemma}
The proof of \Cref{lemma:KP} is given in \Cref{app:proof-lemma1}.
This lemma shows how two individual block-encodings can be combined to encode the Kronecker product of two matrices.
The method requires no additional ancilla qubits and the approximation error scales as a weighted sum of the individual errors up to first order.
The operation requires only $2s$ additional SWAP operations.
\Cref{fig:TPBE} shows the quantum circuit for a Kronecker product of block-encodings.
It shows that, in order to combine block-encodings into Kronecker products, the signal qubits of the leading block-encoding have to be swapped with the ancilla qubits of the second block-encoding such that the $s+t$ signal qubits become the least-significant qubits of the combined circuit and the mutual ordering of the signal qubits is preserved.
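This wire reordering can be checked numerically. The following minimal Python sketch (our own illustration, assuming NumPy and SciPy; \texttt{permute\_qubits} is a hypothetical helper and the ancillas are taken as the most significant qubits) verifies \Cref{lemma:KP} for two single-ancilla block-encodings with $\epsilon_1=\epsilon_2=0$:
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

UA = unitary_group.rvs(4, random_state=1)  # (1,1,0)-encoding of its 2x2 leading block
UB = unitary_group.rvs(4, random_state=2)
A, B = UA[:2, :2], UB[:2, :2]

def permute_qubits(U, perm):
    """Relabel wires: new slot k carries old qubit perm[k] (qubit 0 most significant)."""
    n = len(perm)
    T = U.reshape([2] * (2 * n)).transpose(list(perm) + [n + p for p in perm])
    return T.reshape(2 ** n, 2 ** n)

# kron(UA, UB) orders the wires as [anc_A, sig_A, anc_B, sig_B]; swap sig_A and anc_B
U = permute_qubits(np.kron(UA, UB), [0, 2, 1, 3])
print(np.allclose(U[:4, :4], np.kron(A, B)))  # True: leading block is A (x) B
\end{verbatim}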
\Cref{lemma:KP} trivially extends to Kronecker products of more than two block-encodings.
Let $\eU_{n_i}$ be $(\alpha_i,a_i,\epsilon_i)$-block-encodings of $\eA_{s_i}$
for $i \in \lbrace 1,\dots,d \rbrace$.
Define $n = \sum_i n_i$, and $S_n$ as a SWAP register that
swaps all signal qubits of each block-encoding $\eU_{n_i}$
to the least significant qubits of the $n$-qubit unitary while
preserving the mutual ordering between the signal qubits.
Then, ignoring the second order error terms,
\begin{myequation}
\label{eq:TPnd}
\eS_{n} \left( \eU_{n_1} \otimes \eU_{n_2} \otimes \cdots \otimes \eU_{n_{d}} \right) \eS_{n}^{\dagger}
\end{myequation}
is an $(\prod_i \alpha_{i}, \sum_i a_{i}, \sum_i \epsilon_{i} \prod_{k \neq i} \alpha_{k})$-block-encoding of $\eA_{s_1} \otimes \eA_{s_2} \otimes \cdots \otimes \eA_{s_d}$.
In order for the subnormalization factor and approximation error on the Kronecker product not to grow too large, the subnormalization factors of the individual block-encodings should be small enough.
\begin{figure}
\caption{Block-encoding of the Kronecker product of 2 block-encoded matrices: (a) quantum circuit for $a = 3$, $s = 3$, $b = 2$, $t = 2$, and (b) equivalent multi-qubit gate $\eU_p$ with $p = n + m$.}
\label{fig:TPBE}
\end{figure}
The second operation used in the proposed technique constructs a block-encoding of a linear combination of block-encodings.
To this end, we review the notion of a \emph{state preparation pair of unitaries}~\cite{gisu2019}.
\begin{definition}
\label{def:SP}
Let $\exy \inC[m]$, with $\normone[]{\exy} \leq \beta$, and define $\underline{\exy} = \left[ \exy^T \, 0 \right]^T \inC[2^b]$, where $2^b \geq m$.
Then the pair of unitaries $(\eP_b,\eQ_b)$ is called a $(\beta, b,\epsilon)$-state-preparation-pair for $\exy$ if
$\eP_b \ket{0}^{\otimes b} = \ket{p}$ and $\eQ_b \ket{0}^{\otimes b} = \ket{q}$, such that
\begin{myequation}
\sum_{j=0}^{2^b-1} | \beta (p_{j}^* q_j) - \underline{y_j} | \leq \epsilon.
\end{myequation}
\end{definition}
The following lemma is a known result \cite{chwi2012}, but we provide a sharper upper bound on the approximation error compared to \cite{gisu2019}.
\begin{lemma}
\label{lemma:LCU}
Let $\eB_s = \sum_{j=0}^{m-1} y_j \eA_{s}\p{j}$ be an $s$-qubit operator
and assume that $(\eP_b,\eQ_b)$ is a $(\beta,b,\epsilon_1)$-state-preparation-pair for $\exy$.
Further, let $\eU_{n}\p{j}$ be $(\alpha, a, \epsilon_2)$-block-encodings for $\eA_{s}\p{j}$ for $j \in [m]$ and
define the following select oracle
\begin{myequation}
\eW_{b+n} = \sum_{j=0}^{m-1} \ket{j}\bra{j} \otimes \eU_{n}\p{j} + \sum_{j=m}^{2^b-1} \ket{j} \bra{j} \otimes \eI_n.
\label{eq:selor}
\end{myequation}
Then,
\begin{myequation}
\eU_{b+n} = (\eP_{b}^{\dagger} \otimes \eI_a \otimes \eI_s) \, \eW_{b+n} \, (\eQ_{b} \otimes \eI_a \otimes \eI_s),
\end{myequation}
is an $(\alpha\beta, a+b,\alpha \epsilon_1 + \beta \epsilon_2)$-block-encoding of $\eB_s$.
\end{lemma}
The proof is provided in \Cref{app:proof-lemma2}.
This lemma shows that, if an efficient state preparation pair exists for the coefficient vector $\exy$,
then we can efficiently implement a linear combination of block-encodings from the individual block-encodings.
\Cref{fig:LCUBE} shows the corresponding quantum circuit.
Note that this operation requires $b$ additional ancilla qubits. The approximation error again scales as a weighted sum of the (maximum) error on the block-encodings and the error on the state-preparation pair.
\begin{figure}
\caption{Block-encoding of linear combinations of block-encodings: (a) quantum circuit where the
white control nodes are controlled on the $\ket{0}$ state.}
\label{fig:LCUBE}
\end{figure}
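As a numerical sanity check of \Cref{lemma:LCU} (our own illustration with hypothetical variable names, assuming NumPy and SciPy), the leading block of the LCU circuit can be reproduced by contracting a block-diagonal select oracle with the state-preparation amplitudes:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag
from scipy.stats import unitary_group

m, s = 4, 2                               # 4 terms, 2 signal qubits (b = 2 LCU ancillas)
y = np.random.default_rng(2).standard_normal(m)
Us = [unitary_group.rvs(2 ** s, random_state=j) for j in range(m)]

beta = np.sum(np.abs(y))
p = q = np.sqrt(np.abs(y) / beta)         # state-preparation pair: p_j* q_j = |y_j| / beta
signs = np.sign(y)                        # absorb the signs of y into the select oracle

W = block_diag(*[signs[j] * Us[j] for j in range(m)])     # select oracle
bra_p = np.kron(p[None, :], np.eye(2 ** s))               # (<p| (x) I_s)
ket_q = np.kron(q[:, None], np.eye(2 ** s))               # (|q> (x) I_s)
B_tilde = bra_p @ W @ ket_q                                # leading block of the LCU circuit
print(np.allclose(beta * B_tilde, sum(y[j] * Us[j] for j in range(m))))  # True
\end{verbatim}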
The combination of \Cref{lemma:LCU} and \eqref{eq:TPnd}
shows that we can directly construct
a block-encoding of an $s$-qubit operator with the CP-like form
\begin{myequation}\label{eq:CP}
\eB_s = \sum_{j=0}^{m-1} y_j \ \eA_{s_1}\p{j} \otimes \eA_{s_2}\p{j} \otimes \dots \otimes \eA_{s_{d_j}}\p{j},
\end{myequation}
if $\sum_{i=1}^{d_j} s_i = s$ for $j \in [m]$, i.e., all terms in the sum in \eqref{eq:CP} are of the same
dimension, and if we have a block-encoding $\eU_{n_i}\p{j}$ for each $\eA_{s_i}\p{j}$
where $j \in [m]$, and $i \in \lbrace 1,\dots,d_j \rbrace$.
To quantify the subnormalization factor, the number of ancilla qubits, and the approximation error in the block-encoding for \eqref{eq:CP}, we assume that each $\eU_{n_i}\p{j}$ is an
$(\alpha_i\p{j},a_i\p{j},\epsilon_i\p{j})$-block-encoding for $\eA_{s_i}\p{j}$.
Let
\begin{myequation}
\alpha\p{j} = \myprod_i \alpha_{i}\p{j}, \
a\p{j} = \mysum_i a_{i}\p{j}, \
\epsilon\p{j} = \mysum_i \epsilon_{i}\p{j} \myprod_{k \neq i} \alpha_{k}\p{j},
\label{eq:TPparam}
\end{myequation}
for $j \in [m]$.
Then, using \eqref{eq:TPnd}, we can combine these into
$(\alpha\p{j},a\p{j},\epsilon\p{j})$-block-encodings
for each term in \eqref{eq:CP}.
Notice that while the number of signal qubits has to be the same for each term in the linear combination,
we do not assume the same number of ancilla qubits here.
If we define $a = \max_j a\p{j}$, then each block-encoding for $\eA_s\p{j}$ can simply be extended to
$a$ ancilla qubits by adding the missing ancilla qubits at the top of the register. This does not change the leading block of the unitary.
The properties of a block-encoding for \eqref{eq:CP}
under these assumptions are formalized in the following theorem.
\begin{theorem}
Let $\eB_s$ be the $s$-qubit operator in \eqref{eq:CP} with
$(\alpha\p{j},a\p{j},\epsilon\p{j})$-block-encodings of
$ \eA_{s_1}\p{j} \otimes \eA_{s_2}\p{j} \otimes \dots \otimes \eA_{s_{d_j}}\p{j}$, for $j \in [m]$,
constructed according to \eqref{eq:TPnd} with parameters given by \eqref{eq:TPparam}.
Assume that all block-encodings are extended to $a = \max_j a\p{j}$ ancilla qubits,
$\alpha = \max_j \alpha\p{j}$,
and $\epsilon_1 =\max_j \epsilon\p{j}$.
Then, by \Cref{lemma:LCU}, we can construct a unitary
$\eU_{b+n}$ that is an $(\alpha\beta, a+b,\alpha \epsilon_2 + \beta \epsilon_1)$-block-encoding of $\eB_s$.
\label{thm:CP}
\end{theorem}
\Cref{thm:CP} follows directly from the combination of \Cref{lemma:KP}
and \Cref{lemma:LCU}.
Without loss of generality, the subnormalization factors $\alpha\p{j} \leq \alpha$ can be incorporated in
the vector $\exy$ encoding the coefficients of the linear combination.
The circuit construction can be simplified for operators with CP structure instead of CP-like structure.
The combination of the SWAP registers from \eqref{eq:TPnd} with the select oracle
in \Cref{lemma:LCU} introduces generalized Fredkin gates \cite{frto1982}.
Fredkin gates are difficult to realize experimentally \cite{onok2017}
and can be avoided if every Kronecker product of the block-encodings in the linear combination uses
the same SWAP register.
In this case, the select oracle becomes
\begin{myequation}
\eW_{b+n} = \left( \eI_b \otimes S_n \right) \tilde\eW_{b+n}
\left( \eI_b \otimes S^{\dagger}_n \right),
\label{eq:SWAPselect}
\end{myequation}
where
\begin{myequation}
\tilde\eW_{b+n} = \sum_{j=0}^{m-1} \ket{j}\bra{j} \otimes \tilde \eU_{n}\p{j} + \sum_{j=m}^{2^b-1} \ket{j} \bra{j} \otimes \eI_n,
\end{myequation}
with $\tilde \eU_{n}\p{j} = \eU_{n_1}\p{j} \otimes \dots \otimes \eU_{n_d}\p{j}$.
\section{Discussion}
Our technique combines block-encodings of small matrices to create block-encodings of larger operators that can be represented as in \eqref{eq:CP}.
This decomposition is closely related to the CP decomposition
of a tensor \cite{koba2009} and allows for more generality.
The sizes of the individual block-encoded matrices can differ in each term of the linear combination but
they must all have the same size when combined into a Kronecker product.
Optimization algorithms, such as alternating least squares, have been used successfully to compute approximate CP decompositions in many applications, even though exact CP decompositions are NP-hard to compute in general.
These optimization algorithms can be extended to accommodate the different sizes of the block-encodings in each
of the terms and
could incorporate this flexibility in size in their objective.
As such, they can be used for approximate quantum circuit synthesis.
As NISQ devices suffer from noise \cite{pres2018}, the approximate nature of algorithms for CP-like decompositions can be exploited to obtain shorter circuits for less precise decompositions with fewer terms.
Under a given noise level, the error on the approximate CP-like decomposition can be balanced with the error on the individual block-encodings to find a tradeoff with short circuit depth.
One of the major challenges with using block-encodings is the introduction of an ancilla register.
This removes the constraint of strictly unitary approximations and allows for
linear combinations, but at the same time it introduces a probabilistic nature in the synthesis process and requires that the circuit is repeatedly executed until success.
This makes our strategy related to the Repeat-Until-Success (RUS) synthesis technique for single-qubit unitaries \cite{pasv2014,boro2015}.
A RUS circuit is a block-encoding of the desired operator in combination with a set of recovery operators
to recover the input state if a failure state is measured.
In our work we do not consider
recovery operators and assume that the computation is repeated if a failure state is measured.
Another related work is \cite{zhzh2019}, which proposes basic linear algebra subroutines for quantum computers.
Their method relies on Hamiltonian simulation of embeddings of arbitrary matrices
and also makes it possible to approximate the action of PLTCP-like matrices using Trotter splitting to simulate
sums and Kronecker products of matrices.
\subsection{CNOT complexity}
The asymptotic gate complexity of the resulting quantum circuit synthesis technique depends on two factors:
the number of terms $m$ in the CP-like decomposition in \eqref{eq:CP} and the gate count of each individual block-encoding in the select oracle.
If we assume that $m = \bigO(\poly(s))$,
then $b = \bigO(\polylog(s))$ and quantum circuits with $\bigO(\poly(s))$ gates
for the state-preparation unitaries always exist \cite{plbr2011}.
Also the select oracle of \Cref{lemma:LCU} can in this case be implemented with
$\bigO(\poly(s))$ gates.
We call operators that can be expressed as \eqref{eq:CP} \emph{PLTCP-like matrices}
if the linear combination consists of $\bigO(\poly(s))$ terms,
a polylogarithmic number of terms in the matrix dimension.
PLTCP-like matrices can be synthesized with
polylogarithmic gate complexity if each term is efficiently implementable.
The precise asymptotic complexity depends on the size of every block $\eA_{s_i}\p{j}$ and the number
of gates required for their block-encoding.
The CNOT complexity for the simplest case where $B_s$ is a PLTCP matrix with $s$ terms and where every term is
a Kronecker product of $s$ $2 \times 2$ matrices is summarized in \Cref{tab:gc}.
The CNOT complexity of the select oracle is determined from the decomposition of $2$-qubit unitaries \cite{vida2004}
and the synthesis of controlled $1$-qubit unitaries \cite{babe1995}.
\begin{table*}[t]
\centering\ifnum0=0
\begin{tabular}{L{0.36\linewidth}C{0.02\linewidth}C{0.2\linewidth}C{0.15\linewidth}C{0.22\linewidth}}
\toprule
\textbf{Circuit element} & \textbf{\#} & \textbf{Gates} & \multicolumn{2}{c}{\textbf{Total CNOT complexity}}\\
& & & Exact & Approximate\\
\toprule
\emph{State preparation} $(P_{\log(s)},Q_{\log(s)})$ \cite{plbr2011} & & & $\frac{23}{24} s$ & -- \\
\midrule
\emph{SWAP registers} \cite{Nielsen:2011:QCQI} & $2s$ & $\swap$ gates & $6s$ & -- \\
\midrule
\emph{Select oracle} & $s$ & controlled $2s$-qubit & $\Theta(11s^2 \log(s)^2 )$ & $\Theta(11 s^2 \log(s) \log(1/\epsilon)) $ \\
\hspace{1.2em}$2s$-qubit with $\log(s)$ controls & $s$ & controlled $2$-qubit & $\Theta(11s \log(s)^2)$ & $\Theta(11 s \log(s) \log(1/\epsilon))$ \\
\hspace{2.4em} $2$-qubit with $\log(s)$ controls \cite{vida2004,babe1995} & 11 & controlled $1$-qubit & $\Theta(11 \log(s)^2)$ & $\Theta(11 \log(s) \log(1/\epsilon))$ \\
\hspace{3.6em} $1$-qubit with $\log(s)$ controls \cite{babe1995} & & & $\Theta(\log(s)^2)$ & $\Theta(\log(s)\log(1/\epsilon))$ \\
\hspace{3.6em} Toffoli with $\log(s)+1$ controls \cite{babe1995} & & & $\Theta((\log(s)+1)^2)$ & $\Theta((\log(s)+1)\log(1/\epsilon))$ \\
\bottomrule
\end{tabular}
\fi
\caption{Asymptotic CNOT complexity for a quantum circuit that block-encodes a PLTCP matrix $B_s$ with $s$ terms in the linear combination and every term a Kronecker product of $s$ $2 \times 2$ matrices.
The third column lists the CNOT complexity for an exact synthesis of a controlled single qubit gate, the fourth column for an approximate synthesis \cite{babe1995}.}
\label{tab:gc}
\end{table*}
For PLTCP-like matrices with more complicated structures we still maintain a $\bigO(\poly(s))$ CNOT complexity
as long as the gate complexity for the synthesis of the individual block-encodings scales at most with $\bigO(\poly(s))$.
An advantage of this method is that the synthesis of the $\bigO(\poly(s))$ small block-encoding unitaries requires fewer classical resources than the synthesis of larger blocks.
The strength of the technique lies in the ability to combine small-scale block-encodings to build larger operators.
\subsection{Examples}
We stress that unitarity of $\eB_s$ is not required because of the embedding as a block-encoding
and that even if $\eB_s$ is unitary, the individual terms in \eqref{eq:CP} clearly are not unitary.
One class of PLTCP matrices
is the Laplace-like operators \cite{krst2014}
\begin{myequation}
\sum_{j=1}^d M\p{1} \otimes \cdots \otimes M\p{j-1} \otimes L\p{j}
\otimes M\p{j+1} \otimes \cdots \otimes M\p{d},
\end{myequation}
and they can directly be encoded from block-encodings of the individual terms.
For example in the Laplace operator itself,
all $M\p{j}$ are identities and $L\p{j} = L$ for $j \in \lbrace 1,\dots,d \rbrace$.
In this case
we only need one block-encoding of $L$, which is repeated $d$ times, to encode the full operator.
This is an improvement over the $d^2$ block-encodings that are required in general.
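As a purely classical illustration of this structure (a hedged sketch with our own helper names; it builds the matrix, not a circuit), the $d$-dimensional Laplacian below is assembled from a single one-dimensional block $L$:
\begin{verbatim}
import numpy as np
from functools import reduce

def kron_all(mats):
    return reduce(np.kron, mats)

def laplace_like(L, d):
    """sum_j I (x) ... (x) L (x) ... (x) I with L in the j-th slot (all M^(j) = I)."""
    n = L.shape[0]
    return sum(kron_all([L if i == j else np.eye(n) for i in range(d)])
               for j in range(d))

L1 = np.diag([2.0] * 4) - np.diag([1.0] * 3, 1) - np.diag([1.0] * 3, -1)  # 1D Laplacian
A = laplace_like(L1, 3)   # d = 3: a 64 x 64 operator built from d terms and one block L1
\end{verbatim}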
Local Hamiltonians are another example of PLTCP operators.
The Hamiltonian of a transverse field Ising model (TFIM) on a one-dimensional chain of $s$ spin-$1/2$ particles is given by
\begin{myequation}
H_{\mathrm{TFIM}} = - \sum_{i=1}^{s-1} \sigma_z\p{i}\sigma_z\p{i+1}
- h\sum_{i=1}^{s} \sigma_x\p{i},
\label{eq:TFIM}
\end{myequation}
where $\sigma_x$ and $\sigma_z$ are the Pauli-$X$ and $Z$ matrices.
Since this Hamiltonian is a linear combination of $2s-1$ unitaries,
no ancilla qubits are required to encode the $2 \times 2$ matrices, and
no $\swap$ operations are necessary to form the Kronecker products.
The complexity of block-encoding $H_{\mathrm{TFIM}}$ lies in forming the
linear combination.
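For concreteness, the following short Python sketch (our own illustration; the helper names are hypothetical and no quantum library is used) assembles $H_{\mathrm{TFIM}}$ as exactly this linear combination of $2s-1$ Pauli strings:
\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def pauli_string(ops, s):
    """Kronecker product over an s-spin chain; `ops` maps site -> 2x2 operator."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(s)])

def h_tfim(s, h):
    """Transverse-field Ising Hamiltonian as a sum of 2s-1 Pauli strings."""
    H = sum(-pauli_string({i: sz, i + 1: sz}, s) for i in range(s - 1))
    H += sum(-h * pauli_string({i: sx}, s) for i in range(s))
    return H

H = h_tfim(4, 1.0)   # 16 x 16; the coefficients (-1 and -h) form the LCU vector y
\end{verbatim}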
We have simulated block-encoding circuits for $H_{\mathrm{TFIM}}$
under three different error scenarios: a $1\%$ error on the $\sigma_x$ and $\sigma_z$ gates,
a $1\%$ error on the state preparation for the linear combination
of unitaries (LCU), and the combination of both.
The results are summarized in \Cref{fig:TFIM} with the theoretical
upper bound derived from \Cref{thm:CP} denoted by the dotted lines.
\begin{figure}
\caption{Results of $1000$ simulations of $H_{\mathrm{TFIM}}$ block-encodings under the three error scenarios described in the text.}
\label{fig:TFIM}
\end{figure}
We observe that errors on the Pauli gates have
a smaller effect on the accuracy of the block-encoding than errors
on the state preparation unitaries.
The upper bound slightly overestimates the effect of the errors on the Pauli gates.
This happens because the error is not uniformly
distributed over the terms in the linear combination in \eqref{eq:TFIM}.
The expected number of repetitions until success lies between $1.2$ and $1.4$ for $2$ to $10$ spins
and is not sensitive to errors.
The Hamiltonian for the spin-1 Heisenberg model is equal to
\begin{myequation}
H_{\mathrm{XYZ}} = \sum_{i=1}^{s-1} X\p{i}X\p{i+1} + Y\p{i}Y\p{i+1} + Z\p{i}Z\p{i+1},
\label{eq:HIAM}
\end{myequation}
where $X, Y,$ and $Z$ are the spin-1 generators of SU(2).
These $3 \times 3$ matrices can be embedded in $4 \times 4$ matrices by zero-padding and block-encoded
in $2$ signal qubits and $1$ ancilla qubit.
In order to compress the CP rank,
we have \emph{tensorized} $H_{\mathrm{XYZ}}$ to an $s$-way $9 \times 9 \times \cdots \times 9$ array
and numerically computed an approximate CP decomposition using the alternating least squares
algorithm from tensor toolbox \cite{TTB_Software}.
The results for $3$ to $6$ spins are shown in \Cref{fig:HIAM}.
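The tensorization and compression step can be reproduced classically. The hedged sketch below assumes the open-source \texttt{tensorly} package (version 0.5 or later) instead of the MATLAB tensor toolbox used for \Cref{fig:HIAM}; the helper \texttt{tensorize} and the toy operator are our own:
\begin{verbatim}
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def tensorize(M, s, d):
    """Reshape a (d^s x d^s) matrix into an s-way tensor with modes of size d*d."""
    T = M.reshape([d] * (2 * s))                       # indices (i_1..i_s, j_1..j_s)
    order = [k for i in range(s) for k in (i, s + i)]  # regroup as (i_1,j_1,...,i_s,j_s)
    return T.transpose(order).reshape([d * d] * s)

# toy Kronecker-structured operator with 3 terms on s = 3 sites of local dimension d = 3
rng = np.random.default_rng(3)
terms = [[rng.standard_normal((3, 3)) for _ in range(3)] for _ in range(3)]
M = sum(np.kron(np.kron(a, b), c) for a, b, c in terms)

T = tensorize(M, s=3, d=3)                             # exact CP rank <= 3
cp = parafac(tl.tensor(T), rank=2, n_iter_max=500)     # ALS-based compression to rank 2
rel_err = np.linalg.norm(T - tl.cp_to_tensor(cp)) / np.linalg.norm(T)
\end{verbatim}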
\begin{figure}
\caption{Compression of the CP rank of $H_{\mathrm{XYZ}}$ with tensor toolbox \cite{TTB_Software}.}
\label{fig:HIAM}
\end{figure}
We observe that the relative error on the approximation of the Hamiltonian
decreases with increasing CP rank. A stagnation occurs at the exact CP rank of
the operator, signaling convergence. If an approximation with a relative error of
$1\%$ is sufficient, a CP rank reduction of $20\%-30\%$ can be achieved.
This directly translates to shorter quantum circuits as each term appears in the
select oracle.
For example, in the case $s=4$ it also leads to a reduction
in ancilla qubits: the exact expression
is a linear combination of $9$ terms, requiring $4$ ancilla qubits for encoding the linear
combination, and this can be compressed to $7$ terms, or only $3$ ancilla qubits.
\section{Conclusions}
In this paper we showed how block-encodings of small matrices, which are easier to synthesize, can be combined together to create block-encodings of larger operators with CP-like structure.
Under the assumption of $\bigO(\poly(s))$ terms in the decomposition and small individual block-encodings, this scheme has a polynomial dependence on the number of signal qubits both for gate complexity and ancilla qubits.
We reviewed three examples of PLTCP matrices, showed that the CP rank can be compressed if a larger
approximation error is acceptable and found that the circuits behave well under errors.
Further research is required to study the class of operators with PLTCP-like structure and operators that can be well-approximated in this form.
The modification of optimization algorithms for CP decompositions \cite{koba2009} to admit decompositions like \eqref{eq:CP} is another interesting research direction.
\ifnum0=0
\begin{acknowledgments}
This work was supported by the
Laboratory Directed Research and Development Program
of Lawrence Berkeley National Laboratory under
U.S. Department of Energy Contract No. DE-AC02-05CH11231.
\end{acknowledgments}
\fi
\ifnum0=0
\onecolumngrid
\appendix
\section{Proof of \cref{lemma:KP}}
\label{app:proof-lemma1}
\emph{Proof.}
From \Cref{def:BE} and the mixed-product property of the Kronecker product $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, we obtain
\begin{myequation}\label{eq:permutedTP}
\tilde \eA_{s} \otimes \tilde \eA_{t} =
\left(\bra{0}^{\otimes a} \otimes \eI_{s} \otimes \bra{0}^{\otimes b} \otimes \eI_{t} \right)
\left( \eU_{n} \otimes \eU_{m} \right)
\left( \ket{0}^{\otimes a} \otimes \eI_{s} \otimes \ket{0}^{\otimes b} \otimes \eI_{t} \right).
\end{myequation}
The Kronecker product $\tilde \eA_{s} \otimes \tilde \eA_{t}$ is encoded in $\eU_{n} \otimes \eU_{m}$, but not as the leading principal block.
We use the property,
\begin{myequation*}
\swap^1_2 \left(\eI_1 \otimes \ket{0}\right) = \ket{0} \otimes \eI_1,
\end{myequation*}
to show that $\eS_{n + m}$ recovers the correct order by swapping
the $s$ signal qubits:
\begin{myalign*}
\eS_{n + m}
\left( \ket{0}^{\otimes a} \otimes \eI_{s} \otimes \ket{0}^{\otimes b} \otimes \eI_{t} \right)
\ & = \
\prod_{i=1}^{s} \swap^{a+i}_{a+b+i}
\left( \ket{0}^{\otimes a} \otimes \eI_{s} \otimes \ket{0}^{\otimes b} \otimes \eI_{t} \right),\\
\ & = \
\prod_{i=1}^{s-1} \swap^{a+i}_{a+b+i} \, \swap^{a+s}_{a+b+s}
\left( \ket{0}^{\otimes a} \otimes \eI_{s} \otimes \ket{0}^{\otimes b} \otimes \eI_{t} \right),\\
\ & = \
\prod_{i=1}^{s-1} \swap^{a+i}_{a+b+i}
\left( \ket{0}^{\otimes a} \otimes \eI_{s-1} \otimes \ket{0}^{\otimes b} \otimes \eI_1 \otimes \eI_{t} \right),\\
\ & = \
\dots\\
\ & = \
\ket{0}^{\otimes a} \otimes \ket{0}^{\otimes b}\otimes \eI_{s} \otimes \eI_{t}.
\end{myalign*}
Taking the Hermitian conjugate yields
\begin{myequation*}
\left(\bra{0}^{\otimes a} \otimes \eI_{s} \otimes \bra{0}^{\otimes b} \otimes \eI_{t} \right)
\eS_{n + m}^{\dagger}
\ = \
\bra{0}^{\otimes a} \otimes \bra{0}^{\otimes b} \otimes \eI_{s} \otimes \eI_{t}.
\end{myequation*}
Combining this with \eqref{eq:permutedTP} shows
\begin{myalign*}
\tilde \eA_{s} \otimes \tilde \eA_{t}
\ & = \
\left(\bra{0}^{\otimes a} \otimes \eI_{s} \otimes \bra{0}^{\otimes b} \otimes \eI_{t} \right)
\eS_{n+m}^{\dagger} \eS_{n+m} \left( \eU_{n} \otimes \eU_{m} \right)
\eS_{n+m}^{\dagger} \eS_{n+m}
\left( \ket{0}^{\otimes a} \otimes \eI_{s} \otimes \ket{0}^{\otimes b} \otimes \eI_{t} \right),\\
\ & = \
\left(\bra{0}^{\otimes a} \otimes \bra{0}^{\otimes b} \otimes \eI_{s} \otimes \eI_{t} \right)
\eS_{n+m} \left( \eU_{n} \otimes \eU_{m} \right)\eS_{n+m}^{\dagger}
\left( \ket{0}^{\otimes a} \otimes \ket{0}^{\otimes b}\otimes \eI_{s} \otimes \eI_{t} \right),
\end{myalign*}
such that \eqref{eq:TP} has $\tilde \eA_{s} \otimes \tilde \eA_{t}$
as principal leading block.
The subnormalization and approximation error of $\tilde \eA_{s} \otimes \tilde \eA_{t}$ satisfy:
\begin{myalign*}
\normtwo[\big]{\eA_{s} \otimes \eA_{t}
- \alpha \beta \tilde \eA_{s} \otimes \tilde \eA_{t}}
&\leq
\normtwo[\big]{ \big( \alpha \tilde \eA_{s} + \epsilon_1 \eI_{s} \big)
\otimes \big( \beta \tilde \eA_{t} + \epsilon_2 \eI_{t} \big)
- \alpha \tilde \eA_{s} \otimes \beta \tilde \eA_{t} }, \\
&=
\normtwo[\big]{ \alpha \tilde \eA_{s} \otimes \epsilon_2 \eI_{t}
+ \epsilon_1 \eI_{s} \otimes \beta \tilde \eA_{t}
+ \epsilon_1 \eI_{s} \otimes \epsilon_2 \eI_{t} }, \\
&\leq \alpha \epsilon_2 \normtwo[\big]{\tilde\eA_{s}}
+ \beta \epsilon_2 \normtwo[\big]{\tilde\eA_{t}}
+ \epsilon_1 \epsilon_2, \\
&\leq \alpha \epsilon_2 + \beta \epsilon_1 + \epsilon_1 \epsilon_2,
\end{myalign*}
where we used that $\normtwo{\eA_{s}} \leq \alpha \normtwo{\tilde \eA_{s}} + \epsilon_1$, and $\normtwo{\tilde \eA_{s}} \leq 1$ and analogous results for $\tilde \eA_t$.
This completes the proof.
$\square$
\section{Proof of \cref{lemma:LCU}}
\label{app:proof-lemma2}
\emph{Proof.}
We have that the leading $s$-qubit block of $\eU_{b+n}$ is given by
\begin{myalign*}
\tilde \eB_s = \ &
\big( \bra{0}^{\otimes b} \otimes \bra{0}^{\otimes a} \otimes \eI_s \big)
\ \eU_{b+n} \
\big( \ket{0}^{\otimes b} \otimes \ket{0}^{\otimes a} \otimes \eI_s \big),\\
=\ &
\big( \bra{0}^{\otimes b} \otimes \bra{0}^{\otimes a} \otimes \eI_s \big)
\ \big(\eP_{b}^{\dagger} \otimes \eI_a \otimes \eI_s) \ \eW_{b+n} \ (\eQ_{b} \otimes \eI_a \otimes \eI_s \big) \
\big( \ket{0}^{\otimes b} \otimes \ket{0}^{\otimes a} \otimes \eI_s \big),\\
=\ &
\big(\bra{0}^{\otimes b} \eP_{b}^{\dagger} \otimes \bra{0}^{\otimes a} \otimes \eI_s \big)
\ \eW_{b+n} \
\big(\eQ_{b}\ket{0}^{\otimes b} \otimes \ket{0}^{\otimes a} \otimes \eI_s\big),\\
=\ &
\big(\bra{p} \otimes \bra{0}^{\otimes a} \otimes \eI_s \big)
\ \eW_{b+n} \
\big(\ket{q} \otimes \ket{0}^{\otimes a} \otimes \eI_s\big).
\end{myalign*}
Plugging in the expression for the select oracle, \eqref{eq:selor}, this yields
\begin{myalign*}
\tilde \eB_s & =
\sum_{j=0}^{m-1} \braket {p | j} \braket{j | q} \otimes (\bra{0}^{\otimes a} \otimes \eI_s) \eU_{n}\p{j} (\ket{0}^{\otimes a} \otimes \eI_s)
\ + \
\sum_{j=m}^{2^b-1} \braket {p | j} \braket{j | q} \otimes \bra{0}^{\otimes a} \ket{0}^{\otimes a} \otimes \eI_s,\\
& =
\sum_{j=0}^{m-1} p_{j}^* q_j \, \tilde \eA_{s}\p{j}
\ + \
\sum_{j=m}^{2^b-1} p_{j}^* q_j \, \eI_s.
\end{myalign*}
By \Cref{def:BE} and \Cref{def:SP}, we get that
\begin{myalign*}
\normtwo[\Big]{\eB_s - \alpha \beta \tilde \eB_s}
& =
\normtwo[\Bigg]{
\sum_{j=0}^{m-1} y_j \eA_{s}\p{j} -
\alpha \beta \sum_{j=0}^{m-1} p_{j}^* q_j \, \tilde \eA_{s}\p{j}
\ - \ \alpha \beta
\sum_{j=m}^{2^b-1} p_{j}^* q_j \, \eI_s}, \\
& =
\normtwo[\Bigg]{
\sum_{j=0}^{m-1} y_j \eA_{s}\p{j} - \alpha \beta p_{j}^* q_j \, \tilde \eA_{s}\p{j}
\ - \
\alpha \sum_{j=m}^{2^b-1} \beta p_{j}^* q_j \, \eI_s},\\
& \leq
\alpha \epsilon_1 +
\normtwo[\Bigg]{\sum_{j=0}^{m-1} \underline{y_j} (\eA_{s}\p{j} - \alpha \tilde \eA_{s}\p{j})} +
\alpha \normtwo[\Bigg]{\sum_{j=m}^{2^b-1} \underline{y_j} \, \eI_s}, \\
& \leq
\alpha \epsilon_1 + \beta \epsilon_2.
\end{myalign*}
The penultimate inequality approximates all $\beta p_{j}^*q_j$ terms by $\underline{y_j}$ in the two sums. The error of each individual approximation is bounded by $\epsilon_1$, such that the total error is bounded from above by $\alpha \epsilon_1$ as $\normtwo{\tilde \eA_{s}\p{j}} \leq 1$ and $\normtwo{\eI_s} = 1$.
The last term in the penultimate line is equal to zero by \Cref{def:SP}.
The final equality directly follows from the block-encoding property and $\normone[]{\exy} \leq \beta$.
$\square$
\fi
\end{document}
\begin{document}
\centerline{\bf A Note on the Riemann $\xi$-Function}\vskip .4in
\centerline{M.L. Glasser}\vskip .1in
\centerline{Department of Theoretical Physics, University of Valladolid, Valladolid (Spain)}\vskip .2in
\centerline{Department of Physics, Clarkson University, Potsdam, NY(USA)}\vskip 1in
\centerline{\bf Abstract}
\vskip .1in
This note investigates a number of integrals of, and integral equations satisfied by, Riemann's $\xi$-function. A different, less restrictive, derivation of one of his key identities is provided. This work centers on the critical strip and it is argued that the line $s=3/2+i t$, e.g., contains a holographic image of this region. \vskip 1in
\noindent
MSC classes: 11M06, 11M26, 11M99, 26A09, 30B40, 30E20, 30C15, 33C47, 33B99, 33F99
\vskip .8in
\noindent
{\bf Keywords:} $\xi$ function, $\zeta$ function, integral equations
\vskip 1in
\noindent
email: [email protected]
\centerline{\bf Introduction}\vskip .2in
The notation used throughout this note is:
$$\xi(s)=(s-1)\pi^{-s/2}\Gamma(1+s/2)\zeta(s)$$
$$\rho=\sigma+i\tau$$
$$\Xi(\tau)=\xi(1/2+i\tau)$$
$$E_z(a)=\int_1^{\infty}\frac{dt}{t^z}e^{-at},$$
$$\psi(x)=\sum_{n=1}^{\infty}e^{-\pi n^2 x}=\frac{1}{2}[\theta_3(0,e^{-\pi x})-1]$$
$$ \quad J(\rho)=\int_0^1 dt [t^{\rho-2}+t^{(1-\rho)-2}]\psi(1/t^2).$$
$\gamma$ is the contour consisting of the two parallel lines $[c-i\infty,c+i\infty]$, $[1-c+i\infty,1-c-i\infty]$, $1<c<2$, which enclose the {\it critical strip} $0<\sigma<1$; $\rho_n=1/2+i\tau_n$ denotes the $n$-th zero of $\zeta(s)$ on the critical line in the upper half plane.\vskip .2in
The function $\xi(s)$, introduced by Riemann[1], satisfies the simple functional equation $\xi(1-s)=\xi(s)$, is analytic, decays exponentially as $|\rho|\rightarrow\infty$ and possesses the same zeros in the critical strip as $\zeta(s)$. By Cauchy's theorem, one has, for $0<Re[s]<1$
$$\xi(s)=\int_{\gamma}\frac{dt}{2\pi i}\frac{\xi(t)}{t-s},\eqno(1)$$
which, in view of the functional equation, can be written
$$\xi(s)=\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\xi(t)\left(\frac{1}{t-s}+\frac{1}{t-1+s}\right)\eqno(2)$$
and expresses the values of $\xi$ inside the critical strip entirely in terms of its values in a region where $\zeta(s)$ is completely known from its defining series, say. In the following section this feature will be exploited to obtain several known and some, perhaps, unfamiliar identities.\vskip .2in
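As a quick numerical illustration of this notation (an informal check, not needed in what follows), the definition of $\xi(s)$ and the functional equation $\xi(1-s)=\xi(s)$ can be verified with the Python library {\tt mpmath}:
\begin{verbatim}
from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30                                   # working precision

def xi(s):
    # xi(s) = (s-1) pi^(-s/2) Gamma(1+s/2) zeta(s), as defined above
    return (s - 1) * pi**(-s / 2) * gamma(1 + s / 2) * zeta(s)

s = mpc('0.3', '7.1')                         # an arbitrary point in the critical strip
print(xi(s))
print(abs(xi(1 - s) - xi(s)))                 # ~1e-30: the functional equation
\end{verbatim}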
\centerline{\bf Calculation}\vskip .1in
We begin by recalling the tabulated inverse Mellin transform[2]
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}x^{-t}\Gamma(t)\zeta(2t)=\sum_{n=1}^{\infty}e^{-n^2 x}\eqno(3)$$
from which, by differentiation, one finds the useful inverse Mellin transform
$$F(x)=\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}x^{-t}\xi(t)=4\pi^2 x^4\sum_{n=1}^{\infty}n^4 e^{-\pi n^2 x^2}-6\pi x^2\sum_{n=1}^{\infty}n^2e^{-\pi n^2x^2}.\eqno(4)$$
Eq.(4) has been presented, in different form, by Patkowski[3].
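Eq. (4) is easy to check numerically by evaluating the contour integral along the line $Re\,t=c$ directly; the following {\tt mpmath} sketch (an informal verification with an arbitrarily chosen $x$ and $c=3/2$) compares the two sides:
\begin{verbatim}
from mpmath import mp, mpf, pi, gamma, zeta, exp, quad, inf

mp.dps = 15

def xi(t):
    return (t - 1) * pi**(-t / 2) * gamma(1 + t / 2) * zeta(t)

def F_series(x, N=60):
    # right-hand side of (4), truncated theta sums
    return sum((4 * pi**2 * x**4 * n**4 - 6 * pi * x**2 * n**2)
               * exp(-pi * n**2 * x**2) for n in range(1, N))

def F_contour(x, c=mpf('1.5')):
    # left-hand side of (4): integral over the vertical line Re(t) = c
    integrand = lambda y: (x**(-(c + 1j * y)) * xi(c + 1j * y)).real
    return quad(integrand, [-inf, inf]) / (2 * pi)

x = mpf('1.3')
print(F_series(x), F_contour(x))              # the two values agree
\end{verbatim}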
Parenthetically, we note that if $f$ is integrable and odd, then $f(1-2t)\xi(t)$ is odd under $t\rightarrow(1-t)$ so that
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}f(1-2t)\xi(t)=\int_{\gamma}\frac{dt}{2\pi i}f(1-2t)\xi(t)=0.\eqno(5)$$
Thus, by noting Romik's formulas[4] for the values of the theta function $\theta_3(0,q)$ and its derivatives we have, from (4),
$$ \int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\frac{\xi(t)}{t}=\frac{1}{2}-\frac{\Gamma(5/4)}{\sqrt{2}\pi^{3/4}}\eqno(6)$$
and from (5)
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\xi(t)\left[\frac{(1-2t)}{4\tau_n^2+(1-2t)^2}\right]=0\eqno(7)$$
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\xi(t)(2t-1)^{2n+1}=0,\quad n=0,1,2,\ldots\eqno(8a)$$
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}t\xi(t)=\frac{1}{2}\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\xi(t)=\eqno(8b)$$
$$\pi^2\sum_{n=-\infty}^{\infty} n^4e^{-\pi n^2}-\frac{3\pi}{2}\sum_{n=-\infty}^{\infty}n^2e^{-\pi n^2}\eqno(8c)$$
$$=\frac{\Gamma(5/4)}{128\sqrt{2}\pi^{19/4}}[\Gamma^8(1/4)-96\pi^4].\eqno(8d)$$
$$\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\frac{\xi(t)}{t(1-t)}=\frac{1}{2}\left(1-\frac{\pi^{1/4}}{\Gamma(3/4)}\right)=\int_{1-c-i\infty}^{1-c+i\infty}\frac{dt}{2\pi i}\frac{\xi(t)}{t(1-t)}\eqno(9)$$
None of these appears to have been recorded previously.
Next, by rewriting (2), we have \vskip .2in
\noindent
{\bf Theorem 1}\vskip .1in
Within the critical strip Riemann's function $\xi(s)$ obeys the integral equation
$$\xi(s)=1-\frac{\pi^{1/4}}{2\Gamma(3/4)}-\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\frac{\xi(t)}{t}\left[\frac{2s(1-s)-t}{s(1-s)-t(1-t)}\right], \quad 1<c<2,\eqno(10)$$
or
$$\xi(s)=\frac{1}{2}+\int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}\frac{\xi(t)}{t}\left[\frac{1}{1-t}-\frac{2s(1-s)-t}{s(1-s)-t(1-t)}\right].\eqno(11)$$\vskip .2in
\noindent
From this, one finds \vskip .1in
\noindent
{\bf Corollary 1}\vskip .1in
$$\xi(s)=2\pi^2\sum_{n=1}^{\infty}\int_1^{\infty}dt\left(t^{s/2}+t^{(1-s)/2}\right)\left(n^4t-\frac{3}{2\pi}n^2\right)e^{-n^2\pi t}.\eqno(12)$$
$$=\frac{\pi^{1/4}}{2\Gamma(3/4)}-
\pi\sum_{n=1}^{\infty}n^2\left[sE_{(1-s)/2}(\pi n^2)+(1-s)E_{-s/2}(\pi n^2)\right],\eqno(13)$$
\vskip .1in
Eq. (10) is equivalent to the very important Eq. (3.10) in Milgram's paper [5], and (13), apart from having summed a series, is LeClair's key formula (15) in [6].\vskip .2in
To explore further consequences of (2), note that
the Mellin transform
$$\phi(x)= \int_{c-i\infty}^{c+i\infty}\frac{dt}{2\pi i}x^{-t}\frac{\xi(t)}{t-s}\eqno(14)$$
satisfies the linear differential equation
$$\phi'(x)+\frac{s}{x}\phi(x)=-\frac{1}{x}F(x),\quad \phi(\infty)=0\eqno(15)$$
where $F$ is defined in (4), so after a bit of easy analysis
$$\phi(x)=2\pi^2\sum_{n=1}^{\infty} E_{\frac{x}{2}+1}(\pi n^2)-3\pi\sum_{n=1}^{\infty}E_{\frac{x}{2}}(\pi n^2).\eqno(16)$$
By applying (16) to (2) one has (note that $\tau$ here is not restricted to be real)\vskip .2in
\noindent{\bf Theorem 2}
\vskip .1in
In the critical strip, i.e., for $|Im[\tau]|<1/2$, Riemann's function $\Xi(\tau)$ satisfies the integral equation
$$\Xi(\tau)=\frac{1}{\pi i}\int_{-\infty+ic}^{\infty+ic}\frac{t\, \Xi(t)}{t^2-\tau^2}dt,\quad -3/2<c<-1/2.\eqno(17)$$\vskip .2in
\noindent
{\bf Corollary 2}\vskip .1in
$$\Xi(\tau)=4\pi^2\sum_{n=1}^{\infty}\int_1^{\infty}dt\, t^{1/4}\cos(\tau \ln\sqrt{t})\left(n^4t-\frac{3}{2\pi}n^2\right)e^{-n^2\pi t}.\eqno(17a)$$
$$=4\int_1^{\infty}dt\, t^{1/4}\cos[\frac{1}{2}\tau\ln t][t\psi''(t)+\frac{3}{2}\psi'(t)]\eqno(17b)$$
$$=\frac{1}{2}-(\tau^2+1/4)\sum_{n=1}^{\infty}Re\, E_{\frac{3}{4}+i\frac{\tau}{2}} (\pi n^2)\eqno(17c)$$
So
$$\Xi(\tau)=1/2-(\tau^2+1/4)\int_1^{\infty}\frac{dt}{t^{3/4}}\cos(\frac{\tau}{2}\ln t)\psi(t)\eqno(18)$$
\vskip .2in
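As an informal numerical check of (18), the right-hand side can be compared with a direct evaluation of $\Xi(\tau)=\xi(1/2+i\tau)$; the following {\tt mpmath} sketch does this for the arbitrarily chosen value $\tau=10$:
\begin{verbatim}
from mpmath import mp, mpf, pi, gamma, zeta, exp, cos, log, quad, inf

mp.dps = 15

def xi(s):
    return (s - 1) * pi**(-s / 2) * gamma(1 + s / 2) * zeta(s)

def psi(x, N=60):
    # psi(x) = sum_{n>=1} exp(-pi n^2 x), truncated
    return sum(exp(-pi * n**2 * x) for n in range(1, N))

tau = mpf(10)
lhs = xi(mpf('0.5') + 1j * tau)               # Xi(tau) = xi(1/2 + i tau)
rhs = mpf('0.5') - (tau**2 + mpf('0.25')) * quad(
    lambda t: t**mpf('-0.75') * cos(tau * log(t) / 2) * psi(t), [1, inf])
print(lhs, rhs)                               # agree (lhs is real up to rounding)
\end{verbatim}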
Now, (18), which appears in Riemann's paper[7], can be rewritten
$$\xi(\rho)=\frac{1}{2}-(\alpha+i\beta)\int_0^1dt [t^{\rho-2}+t^{(1-\rho)-2}]\psi(1/t^2).\eqno(19)$$
where $\alpha=\sigma(1-\sigma)+\tau^2$ and $\beta=(1-2\sigma)\tau.$
For $\sigma=1/2$, thus restricting $\tau$ to real values, (18) gives\vskip .1in
\noindent
{\bf Corollary 3}\vskip .1in
$\rho$ is a zero of $\zeta(s)$
on the critical line, $\sigma=1/2$, if and only if,
$$Re\int_0^{1}t^{\rho-2}\psi(1/t^2)dt=\frac{1}{4|\rho|^2}.\eqno(20)$$
\vskip .1in
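Corollary 3 is also easy to test numerically; the following {\tt mpmath} sketch (an informal check using {\tt zetazero}) evaluates both sides of (20) at the first nontrivial zero:
\begin{verbatim}
from mpmath import mp, zetazero, quad, exp, pi

mp.dps = 20

def psi(x, N=60):
    # truncated theta sum psi(x) = sum_{n>=1} exp(-pi n^2 x)
    return sum(exp(-pi * n**2 * x) for n in range(1, N))

rho = zetazero(1)                             # 1/2 + i*14.134725...
lhs = quad(lambda t: (t**(rho - 2) * psi(1 / t**2)).real, [0, 1])
rhs = 1 / (4 * abs(rho)**2)
print(lhs, rhs)                               # both ~0.00125, as (20) predicts
\end{verbatim}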
In our key result (17) for simplicity we choose $c=-1$ and, to emphasize that $\tau$ is not restricted to real values, write
$$\xi(\rho)=\frac{1}{\pi }\int_{-\infty}^{\infty}dt\, \xi(3/2+i t)\frac{1+it}{(1+it)^2+(\frac{1}{2}-\sigma+i\tau)^2}\eqno(21)$$
\vskip .2in
Before leaving this section, we note that (17) is the source of a great number of fascinating integral identities found by multiplying both sides by a suitable function $g(\tau)$ and integrating over $\tau$. In the next section a few of these are presented omitting details.\vskip .2in
\centerline{\bf Additional integrals}\vskip .1in
$$ \int_0^{\infty}\cos(xt)\Xi(t)dt=\frac{1}{2}e^{-x}\int_{-\infty}^{\infty}e^{-ixt}\xi(3/2+it)dt,\quad x>0\eqno(i),$$
so, from [2], for $x>0$
$$\int_{-\infty}^{\infty}e^{-ixt}\xi(3/2+it)dt=8\pi e^{-3x/2}[e^{-2x}\psi''(e^{-2x})-\frac{3}{2}\psi'(e^{-2x})]\eqno(ii)$$
and
$$\int_{-\infty}^{\infty}[\xi(1/2+i t)-\xi(3/2+it)]dt=0.\eqno(iii)$$
Similarly,
$$\int_0^{\infty}\Xi(t)\frac{\cos(xt)}{t^2+b^2}dt=\frac{e^{-x}}{2b}\int_{-\infty}^{\infty}dt\frac{\xi(3/2+it)}{(1+it)^2+b^2}[(1+it)e^{(1-b)x}-be^{-ixt}],\eqno(iv)$$
so by [9]
$$\int_{-\infty}^{\infty}dt\frac{\xi(3/2+it)}{(1+it)^2+1/4}[(1+it)e^{x/2}-\frac{1}{2}e^{-ixt}]=\frac{\pi}{2}e^{x/2}[e^x-2\psi(e^{-2x})].\eqno(v)$$
Thus
$$\int_{-\infty}^{\infty}dt\frac{\xi(3/2+it)}{(1+it)^2+1/4}(\frac{1}{2}+it)=\frac{\pi}{2}(1-\sqrt{2}\Gamma(1/4)\pi^{-3/4}).\eqno(vi)$$
$$\int_0^{\infty} J_0(a t)\Xi(t)dt=\frac{e^{-x}}{2}\int_{-\infty}^{\infty}\xi(3/2+it)I_0[a(1+it)]dt.\eqno(vii)$$
$$\int_0^T\Xi(t)dt=\frac{1}{\pi}\int_{-\infty}^{\infty}dt\,\xi(3/2+it)\tan^{-1}\left(\frac{T}{1+it}\right).\eqno(viii)$$
Eq.(viii) is equivalent to
$$\int_0^T\xi(\sigma+it)dt=\frac{1}{\pi}\int_{-\infty}^{\infty}dt\xi(3/2+it)\tan^{-1}\left(\frac{T+
i( \frac{1}{2}-\sigma)}{1+it}\right)+g_0(\sigma)\eqno(ix)$$
where $0<\sigma<1$ and
$$g_0(\sigma)=(\sigma-1/2)i\int_0^1\xi[1/2+(\sigma-1/2) t]\,dt\eqno(ixa)$$
is independent of $T$.
What these examples have in common is that all information on the critical line is equivalent to information on the line $\sigma=3/2$.
\vskip .2in
\centerline{\bf Discussion}\vskip .1in
The thrust of the preceding sections (the key formulas have all been confirmed numerically with Mathematica)
is that all the features of $\xi$ in the critical strip are encoded on the lines $\sigma=c$, $1<c<2$, in a holographic manner made explicit by (21). This has the form
$$\xi(\rho)=\int \xi(c+it)R(\sigma,\tau;t)dt\eqno(22)$$
where $R$ is a {\it rational} function. As functions go, $\xi(3/2+it)$ is uncomplicated: it has no zeros or poles, it is infinitely differentiable and it decays exponentially. Its real and imaginary parts are even and odd, respectively, and related to each other by the Cauchy-Riemann equations. Both of the latter functions are, except near $t=0$, almost featureless. Furthermore, the zeta function component is nearly equal to unity over most of the integration range. Thus any interesting feature in the critical strip must be ascribed largely to features of $R$, which ought to be trivial to analyze.
Equation (20) (in essence due to Riemann) has been confirmed in its exact form for the first 1000 critical zeros, which are available in Mathematica; even if $\psi(x)$ is truncated to a single exponential it is satisfied to many decimal places of accuracy for zeros of large magnitude. In this case, the integral is $E_{3/4+i\tau_n/2}(\pi)$ and by asymptotic expansion should be capable of producing a formula for $\tau_n$ similar to LeClair's[6] and Milgram's[7], but with less complexity. That is, to derive an expression for $\rho_n$, one expresses (20) in the form
$$Re\left[e^{-(\rho+1/2)\ln\sqrt{\pi}}\Gamma\left[\frac{1}{2}(\rho+\frac{1}{2})\right]+g(\rho)\right]=0\eqno(23 a)$$
$$g(\rho)=\frac{\rho+1/2}{|\rho+1/2|^2}\;_1F_1(\rho+1/2;\rho+3/2;-\pi)-\frac{1}{4|\rho|^2}\eqno(23b)$$
Now, along the critical line the real part of the function $g( \rho)$ in (23b) is non-oscillatory, monotonically decreasing and smaller than the accuracy we are trying to achieve, although much larger than the first term in (23a), which is oscillatory. However, in the spirit of [6] one ignores $g$ and thus approximates (23a) as
$$Re\left[e^{-i\tau\ln(\sqrt{\pi})}\Gamma\left(\frac{1}{2}+i\frac{\tau}{2}\right)\right]=0\eqno(23c)$$
Following [6] one now applies Stirling's formula in (23c) and solves for $\tau_n$ to obtain an analogue of LeClair's formula [6, Eq. (20)].
Finally, as anyone who writes on the zeta function must find irresistible, I conclude this note with speculations on the elephant in the room, the Riemann hypothesis (RH). In essence, the latter claims nothing more than that $\xi(\rho)$ does not vanish for $0<\sigma<1/2$. However, as remarked above, this is a question of the simultaneous vanishing of the real and imaginary parts of the integral (22), for which the zeta function itself is nearly irrelevant. For $\sigma=1/2$, the imaginary part goes away and it is known that the real part has countably many roots $\tau_n$. Otherwise, the situation appears to depend more on the nature of the rational function $R$, which depends on $\sigma$, than on $\xi$, which does not, so the validity of the RH should be easy to resolve. It could be that the vast literature concerning $\zeta$ in the critical strip is a red herring.
Another possible approach is to break the integral $J(\rho)$ in (19) into real and imaginary parts. Then if $\rho$ is a zero of $\zeta$ with $0<\sigma<1/2$,
$$\alpha Re[J]-\beta Im[J]=\frac{1}{2}\eqno(24a)$$
$$\beta Re[J]+\alpha Im[J]=0\eqno(24b)$$
These two relations lead to
\vskip .1in
\noindent
{\bf Corollary 4}\vskip .1in
The Riemann conjecture is true if and only if
$$Re\int_0^1 dt [t^{\rho-2}+t^{(1-\rho)-2}]\psi(1/t^2)=\frac{\alpha}{2(\alpha^2+\beta^2)}\eqno(25)$$
has no solution for $0<\sigma<1/2$.\vskip .1in
Since the critical strip is known to be free of such zeros to astronomical values of $|\rho|$, the resolution of this matter might be settled by extracting a low order asymptotic estimate of the Mellin transform $J(\rho)$ and analyzing the resulting algebraic equation. Since truncating $\psi$ to a single term appears to yield accurate results for large $n$, it may be reasonable to conjecture:\vskip .2in
The Riemann hypothesis is true if for large $t$ the equation
$$\frac{(1-s) s+t^2}{\left((1-s) s+t^2\right)^2+(1-2 s)^2
t^2}-\Re\left(E_{-\frac{s}{2}-\frac{i t}{2}+1}(\pi )+E_{\frac{1}{2} (s+i t+1)}(\pi
)\right)=0$$
has no real solution $t$ for $0<s<1/2$. However, such simple expedients tend to be illusory since no matter how small it is, a positive number is not zero.
\vskip .1in
\vskip .4in\noindent{\bf Acknowledgements}\vskip .1in
This work was inspired by reading [5] and I thank its author for this opportunity. I also thank Dr. Michael Milgram and Prof. Richard Brent for valuable comments and suggestions. This work was partially supported by the Spanish grant MINECO (MTM2014-57129-C2-1-P), Junta de Castilla y Le\'on, FEDER projects (BU229P18, VA057U16, and VA137G18).\vskip .2in
\centerline{\bf References}\vskip .1in
\noindent
[1] E.C. Titchmarsh and D.R Heath-Brown. The Theory of the Riemann Zeta-Function. Oxford
Science Publications, Oxford, Second edition, 1986. \vskip .1in
\noindent
[2] Ibid (2.16.1)\vskip .1in
\noindent
[3] A.E. Patkowski, A New Integral Equation and Some Integrals Associated with Number Theory, arXiv e-prints, March 2018. arXiv:1407.2983v6\vskip .1in
\noindent
[4] Dan Romik. The Taylor coefficients of the Jacobi theta constant $\theta_3$, arXiv e-prints, July
2018. arXiv:1807.06130.\vskip .1in
\noindent
[5] Michael Milgram, An Integral Equation for Riemann's Zeta Function and its Approximate Solution,
arXiv e-prints, January 2019. arXiv:1901.01256v1 [math.CA]\vskip .1in
\noindent
[6] Andre LeClair. An electrostatic depiction of the validity of the Riemann Hypothesis and a
formula for the N-th zero at large N. Int. J. Mod. Phys., A28:1350151, 2013. Also available
from http://arxiv.org/abs/1305.2613v3\vskip .1in
\noindent
[7] Michael Milgram, Exploring Riemann's functional equation, Cogent Math. (2016), 3: 1179246
http://dx.doi.org/10.1080/23311835.2016.1179246\vskip .1in
\noindent
[8] H.M. Edwards. Riemann's Zeta Function. Dover, 2001. \vskip .1in
\noindent
[9] (2.16.2) of reference[1]. \vskip .1in
\end{document}
\begin{document}
\title{Functions with Pick matrices having bounded number of
negative eigenvalues}
\author{V. Bolotnikov, A. Kheifets and L. Rodman}
\date{}
\maketitle
\vskip 12pt
\begin{abstract}
A class is studied of complex valued functions defined on the unit disk
(with a possible
exception of a discrete set) with the property that all their Pick
matrices have not more than a prescribed number of negative eigenvalues.
Functions in this class, known to appear as pseudomultipliers
of the Hardy space, are
characterized in several other ways. It turns out that
a typical function in the class is meromorphic with a possible modification at
a finite
number of points, and total number of poles and of points of modification does
not exceed the prescribed number of negative eigenvalues.
Bounds are given for the number of points that generate Pick matrices that are
needed to achieve the requisite number of negative eigenvalues.
A result analogous to
Hindmarsh's theorem is proved for functions in the class.
\end{abstract}
\vskip 10pt
{\baselineskip=10pt
{\bf Key Words}: Schur functions, Pick matrices.}
\vskip 10pt
{\baselineskip=10pt
{\bf 2000 Mathematics Subject Classification}: 30E99, 15A63.}
\vspace*{.12in}
\baselineskip=15pt
\section{Introduction and main results}
\setcounter{equation}{0}
A complex valued function $S$ is called a {\em Schur function} if $S$ is
defined on the open unit disk ${\mathbb D}$, is analytic, and satisfies $|S(z)|\leq 1$
for every $z\in{\mathbb D}$. Schur functions and their generalizations have been
extensively studied in the literature. They admit various useful
characterizations; one such well-known characterization is
the following:
{\em A function $S$ defined on ${\mathbb D}$ is a Schur function
if and only if
the kernel
\begin{equation}
K_S(z,w)=\frac{1-S(z)S(w)^*}{1-z{w}^*}
\label{1.2}
\end{equation}
is positive on} ${\mathbb D}$.
The positivity of (\ref{1.2}) on ${\mathbb D}$ means that
for every choice of positive integer $n$ and of distinct points
$z_1,\ldots,z_n\in{\mathbb D}$,
the {\em Pick matrix}
$$
P_n(S; \, z_1,\ldots,z_n)=
\left[\frac{1-S(z_i)S(z_j)^*}{1-z_i{z}^*_j}\right]_{i,j=1}^n
$$
is positive semidefinite: $P_n(S; \, z_1,\ldots,z_n)\geq 0.$
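As an informal numerical illustration (not used in the sequel), this positivity is easily observed for a concrete Schur function; the Python sketch below uses the sample function $S(z)=(2z+1)/(z+2)$ and randomly chosen points in ${\mathbb D}$:
\begin{verbatim}
import numpy as np

def pick_matrix(f, pts):
    # P_ij = (1 - f(z_i) conj(f(z_j))) / (1 - z_i conj(z_j))
    z = np.asarray(pts, dtype=complex)
    fv = np.array([f(w) for w in z])
    return (1 - np.outer(fv, fv.conj())) / (1 - np.outer(z, z.conj()))

rng = np.random.default_rng(1)
pts = 0.9 * np.sqrt(rng.random(6)) * np.exp(2j * np.pi * rng.random(6))  # points in D
S = lambda z: (2 * z + 1) / (z + 2)          # a Schur function (|S| <= 1 on D)
print(np.linalg.eigvalsh(pick_matrix(S, pts)).min() >= -1e-10)           # True: PSD
\end{verbatim}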
The following remarkable theorem due to Hindmarsh \cite{hind} (see also
\cite{D}) implies that if all Pick matrices of order $3$ are positive
semidefinite, then the function is necessarily analytic.
\begin{Tm}
Let $S$ be a function defined on an open set $U\subseteq {\mathbb D}$, and
assume that $P_3(S; \, z_1,z_2,z_3)$ is positive
semidefinite for every choice of $z_1, \, z_2, \, z_3\in U$. Then
$S$ is analytic on $U$, and $|S(z)|\leq 1$ for every $z\in U$.
\label{T:1.2}
\end{Tm}
A corollary of Theorem \ref{T:1.2} will be useful; we say that a set
${\Lambda}\subset {\mathbb D}$ is {\em discrete} (in ${\mathbb D}$) if ${\Lambda}$ is at most countable,
with accumulation points (if any) only on the boundary of ${\mathbb D}$.
\begin{Cy}
Let $S$ be a function defined on ${\mathbb D}\setminus {\Lambda}$, where ${\Lambda}$ is a
discrete set. If $P_3(S; \, z_1,z_2,z_3)$ is positive
semidefinite for every choice of $z_1, \, z_2, \, z_3\in{\mathbb D}\setminus {\Lambda}$,
then $S$ admits analytic continuation to a Schur function (defined on ${\mathbb D}$).
\label{T:1.2a}
\end{Cy}
In this paper we prove, among other things, an analogue of Hindmarsh's
theorem
for the class of functions whose Pick matrices have a bounded number
of negative eigenvalues.
These functions need not be analytic, or
meromorphic, on ${\mathbb D}$. It will be generally assumed that such functions
are defined on ${\mathbb D}\setminus {\Lambda}$, where ${\Lambda}$ is a discrete set, which
may depend on the function. More precisely:
\begin{Dn}
Given a nonnegative integer $\kappa$, the class ${\mathcal S}_\kappa$ consists
of (complex valued) functions $f$ defined on ${\rm Dom}\,(f)={\mathbb D}\setminus
{\Lambda}$, where ${\Lambda}={\Lambda}(f)$ is a discrete set, and such that all Pick
matrices
\begin{equation}
P_n(f; \, z_1,\ldots,z_n):=
\left[\frac{1-f(z_i)f(z_j)^*}{1-z_i{z}^*_j}\right]_{i,j=1}^n, \quad
z_1, \ldots ,z_n\in {\mathbb D}\setminus {\Lambda} \ \ {\rm are} \ \ {\rm distinct},
\label{1.3a}
\end{equation}
have at most $\kappa$ negative eigenvalues, and at least one such Pick
matrix has exactly $\kappa$ negative eigenvalues (counted with
multiplicities).
\label{D:1.4}
\end{Dn}
The Pick matrices (\ref{1.3a}) are clearly Hermitian. Note also that in
Definition \ref{D:1.4} the points $z_1, \ldots ,z_n\in {\mathbb D}\setminus {\Lambda}$
are assumed to be distinct; an
equivalent definition is obtained if we omit the requirement that the points
$z_1,\ldots,z_n$ are distinct.
Meromorphic functions in the class ${\mathcal S}_{\kappa}$ have been
studied before in various contexts: approximation problems \cite{akh},
spectral theory of unitary operators in Pontryagin spaces \cite{kl},
Schur--Takagi problem \cite{AAK}, Nevanlinna - Pick problem
\cite{N}, \cite{gol}, \cite{BH}, model theory
\cite{ADRS}. Recently,
functions with jumps in
${\mathcal S}_{\kappa}$
appeared in the theory of almost multipliers (or pseudomultipliers)
\cite{AY}; this connection
will be discussed in more detail later on.
Throughout the paper, ${\rm Dom}\,(f)$ stands for the domain of definition
of
$f$ and $Z(f)$ denotes the zero set for $f$:
$$
Z(f)=\{z\in{\rm Dom}\,(f): \; \; f(z)=0\}.
$$
The number of negative eigenvalues (counted with multiplicities) of
a Hermitian matrix $P$ will be denoted by ${\rm{sq}_-} P$.
For a function $f$ defined on ${\mathbb D}\setminus {\Lambda}$, where ${\Lambda}$
is a discrete set, we let ${\bf k}_n(f)$ denote the maximal number
of negative eigenvalues (counted with multiplicities) of all Pick
matrices of order $n$:
\begin{equation}
{\bf k}_n(f):=\max_{z_1,\ldots,z_n\in{\mathbb D}\setminus {\Lambda}}
{\rm{sq}_-} P_n(f; \, z_1,\ldots,z_n).
\label{1.4}
\end{equation}
We also put formally ${\bf k}_0(f)=0$. It is clear that the sequence
$\{{\bf k}_n(f)\}$ is nondecreasing and
if $f\in{\mathcal S}_\kappa$, then
\begin{equation}
{\bf k}_n(f)=\kappa
\label{1.5}
\end{equation}
for all sufficiently large integers $n$. Note that the class ${\mathcal S}_0$
coincides with the Schur class; more precisely,
every function $f$ in ${\mathcal S}_0$
admits an extension to a Schur
function. Here and elsewhere, we say that a function $g$ is an
{\em extension} of a function $f$ if ${\rm Dom}\,(g)\supseteq {\rm
Dom}\,(f)$
and $g(z)=f(z)$ for every $z\in {\rm Dom}\,(f)$.
The following result by Krein - Langer \cite{kl} gives a characterization
of
{\em meromorphic} functions in ${\mathcal S}_\kappa$.
\begin{Tm}
Let $f$ be a meromorphic function on ${\mathbb D}$. Then $f\in{\mathcal S}_\kappa$ if and
only if $f(z)$ can be represented as $f(z)=
{\displaystyle \frac{S(z)}{B(z)}},$ where $S$ is a
Schur function and $B$ is a Blaschke product of degree $\kappa$ such that
$S$ and $B$ have no common zeros.
\label{T:1.2b}
\end{Tm}
Recall that a {\em Blaschke product} (all Blaschke products in this paper are
assumed to be finite) is a rational function $B(z)$ that is
analytic on ${\mathbb D}$ and unimodular on the unit circle ${\mathbb T}$ : $|B(z)|=1$ for
$|z|=1$; the {\em degree} of $B(z)$ is the number of zeros (counted with
multiplicities) of $B(z)$ in ${\mathbb D}$.
See also \cite{gol}, \cite{DLS}, \cite{BR} for various proofs of matrix
and
operator--valued versions of Theorem \ref{T:1.2b}.
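The following Python sketch illustrates Theorem \ref{T:1.2b} informally: for a sample function $f=S/B$ with a Blaschke factor of degree one (the particular $S$ and $B$ below are arbitrary choices made only for this illustration), randomly generated Pick matrices have at most one negative eigenvalue, and generically exactly one.
\begin{verbatim}
import numpy as np

def pick_matrix(f, pts):
    z = np.asarray(pts, dtype=complex)
    fv = np.array([f(w) for w in z])
    return (1 - np.outer(fv, fv.conj())) / (1 - np.outer(z, z.conj()))

w0 = 0.4 + 0.2j                                      # the zero of B
B = lambda z: (z - w0) / (1 - np.conj(w0) * z)       # Blaschke product of degree 1
S = lambda z: 0.7 * np.exp(z - 1)                    # a Schur function with S(w0) != 0
f = lambda z: S(z) / B(z)                            # meromorphic, f in S_1

rng = np.random.default_rng(2)
pts = 0.9 * np.sqrt(rng.random(8)) * np.exp(2j * np.pi * rng.random(8))
eig = np.linalg.eigvalsh(pick_matrix(f, pts))
print(int((eig < -1e-9).sum()))                      # 1 negative eigenvalue (generically)
\end{verbatim}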
However, not all functions in ${\mathcal S}_{\kappa}$ are meromorphic, as a
standard example shows:
\begin{Ex}
{\rm Let the function $f$ be defined on ${\mathbb D}$ as $f(z)=1$ if $z\neq 0$;
$f(0)=0$.
Then $P_n(f;z_1, \ldots ,z_n)=0$ if $z_j$'s are all nonzero; if one of
the points is zero (up to a permutation similarity we can assume that
$z_1=0$), then $P_n(f;z_1, \ldots ,z_n)$ is the matrix with ones in
the first column and in the first row and with zeroes elsewhere. This
matrix clearly is of rank two and has one negative eigenvalue.
Thus, $f\in {\mathcal S}_{1}$. That $f$ has a jump discontinuity at $z=0$ is
essential; removing the discontinuity
would result in the constant function $\tilde{f}(z)=1$,
which does not belong to} ${\mathcal S}_1$.
\label{E:2.1}
\end{Ex}
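The computation in Example \ref{E:2.1} can be reproduced numerically; the Python sketch below builds the Pick matrices with and without the point $z=0$ and counts negative eigenvalues:
\begin{verbatim}
import numpy as np

def pick_matrix(f, pts):
    z = np.asarray(pts, dtype=complex)
    fv = np.array([f(w) for w in z])
    return (1 - np.outer(fv, fv.conj())) / (1 - np.outer(z, z.conj()))

def neg_eigs(P, tol=1e-10):
    return int((np.linalg.eigvalsh(P) < -tol).sum())

f = lambda z: 0.0 if z == 0 else 1.0          # the function of this example
pts = [0.3, -0.5, 0.2 + 0.4j]
print(neg_eigs(pick_matrix(f, pts)))          # 0: all z_j nonzero, P is the zero matrix
print(neg_eigs(pick_matrix(f, [0.0] + pts)))  # 1: one of the points is zero
\end{verbatim}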
As it was shown in \cite{AY}, functions in ${\mathcal S}_{\kappa}$
that are defined in ${\mathbb D}$ except for a finite set of singularities,
are exactly the $\kappa$-pseudomultipliers of the Hardy space $H_2$
(see \cite{AY} for the definitions of pseudomultipliers and their
singularities; roughly speaking, a pseudomultiplier maps a subspace
of finite codimension in Hilbert space of functions into the Hilbert
space, by means of the corresponding multiplication operator). In fact,
similar results were obtained in \cite{AY} for more general reproducing
kernel Hilbert spaces of functions. Theorem 2.1 of \cite{AY} implies in
particular that every function in the class ${\mathcal S}_{\kappa}$ defined on a
set of uniqueness of $H_2$ can be extended to a function in ${\mathcal S}_{\kappa}$
defined everywhere in ${\mathbb D}$ except for a finite number of singularities.
Theorem 3.1 of \cite{AY} implies that the pseudomultipliers of $H_2$ are
meromorphic in ${\mathbb D}$ with a finite number of poles and jumps, some of which
may coincide. This result applies to other reproducing kernel Hilbert
spaces of analytic functions as well, as proved in \cite{AY}.
For the Hardy space $H_2$, the pseudomultipliers can be characterized in a
more concrete way, namely as standard functions defined below, see Theorem
\ref{T:1.4}(2). This characterization is based
largely on Krein - Langer theorem \ref{T:1.2b}.
In this paper we focus on a more detailed study of the class
${\mathcal S}_{\kappa}$. Our proofs depend on techniques used in
interpolation, such as matrix inequalities and Schur complements,
and allow us to obtain generalizations of Hindmarsh's theorem
(Theorem \ref{T:1.4}(3))
and precise
estimates of the size of Pick matrices needed
to attain the required number of
negative eigenvalues (Theorem \ref{T:3.1}).
\begin{Dn}
A function $f$ is said to be a {\em standard function}
if it admits the representation
\begin{equation}
f(z)=\left\{\begin{array}{cl} {\displaystyle\frac{S(z)}{B(z)}}
& \quad \mbox{if } \; z\not \in {\mathcal W}\cup{\mathcal Z}, \\
\gamma_j & \quad \mbox{if } \; z=z_j\in{\mathcal Z}, \end{array}\right.
\label{1.6a}\end{equation}
for some complex numbers $\gamma_1 ,\ldots , \gamma_{\ell}$, where:
\begin{enumerate}
\item ${\mathcal Z}=\{z_1, \ldots ,z_{\ell}\}$ and ${\mathcal W}=\{w_1,\ldots ,w_p\}$ are
disjoint sets of distinct points in ${\mathbb D}$;
\item $B(z)$ is a Blaschke product of degree $q\geq 0$ and $S(z)$ is a
Schur function with the zero sets $Z(B)$ and $Z(S)$, respectively, such
that
$$
{\mathcal W}\subseteq Z(B)\subseteq {\mathcal W}\cup{\mathcal Z}\quad\mbox{and}\quad
Z(B)\cap Z(S)=\emptyset;
$$
\item if $z_j\in {\mathcal Z}\setminus Z(B)$, then $
{\displaystyle \frac{S(z_j)}{B(z_j)}}\neq \gamma_j.$
\end{enumerate}
\label{D:1.7}
\end{Dn}
For the standard function $f$ of the form (\ref{1.6a}), ${\rm Dom}\,(f)=
{\mathbb D}\setminus {\mathcal W}$.
More informally, ${\mathcal Z}$ is the set of points $z_0$ at which
$f$ is defined and $f(z_0)\neq \lim_{z \rightarrow z_0} f(z)$.
The case when $\lim_{z \rightarrow z_0} |f(z)|=\infty$ is not excluded here; in
this case $f(z_0)$ is defined nevertheless.
The set ${\mathcal W}$
consists of those poles of $
{\displaystyle \frac{S(z)}{B(z)}}$ at which $f$ is not defined.
In reference to the properties (1)-(3) in Definition
\ref{D:1.7} we will say that $f(z)$ has $q$ poles (the zeros of $B(z)$),
where each pole is counted according to its multiplicity as a zero of $B(z)$,
and $\ell$ jumps $z_1, \ldots ,z_{\ell}$. Note that the poles and jumps
need not be disjoint.
\begin{Tm}
Let $f$ be defined on ${\mathbb D}\setminus {\Lambda}$, where ${\Lambda}$ is a discrete set.
Fix a nonnegative integer $\kappa$. Then the following statements are
equivalent:
\begin{enumerate}
\item $f$ belongs to ${\mathcal S}_\kappa$.
\item $f$ admits an extension to a
standard function with $\ell$ jumps (for
some $\ell$, $0\leq \ell\leq \kappa$) and
$\kappa - \ell$ poles, and
the jumps of the standard function
are contained in ${\mathbb D}\setminus {\Lambda}$.
\item \begin{equation} {\bf k}_n(f)={\bf k}_{n+3}(f)=\kappa
\quad {\rm for} \quad {\rm some} \quad {\rm integer} \ \ n\geq 0.
\label{2.10}
\end{equation}
\end{enumerate}
\label{T:1.4}
\end{Tm}
Note that the extension to a standard function in the second statement is
unique. Note also that the equivalence $ {\bf 1 \Leftrightarrow 3}$ is a
generalization of Hindmarsh's theorem
to the class of functions whose Pick matrices have a bounded number of
negative eigenvalues,
and
coincides with Theorem \ref{T:1.2} if $\kappa=0$.
The third statement in
Theorem \ref{T:1.4} raises the question of the minimal possible
value of $n$ for which (\ref{1.5}) holds. For convenience of future
reference we denote this minimal value by $N(f)$:
\begin{equation}
N(f)=\min_{n}\{n: \; {\bf k}_{n}(f)=\kappa\}, \qquad f\in{\mathcal S}_{\kappa}.
\label{1.9}
\end{equation}
\begin{Tm}
Let $f$ be a standard function with $q$ poles and $\ell$
jumps. Then $f\in {\mathcal S}_{\kappa}$, where $\kappa=q+\ell$, and
\begin{equation}
q+\ell\leq N(f)\leq q +2\ell.
\label{3.1}
\end{equation}
\label{T:3.1}
\end{Tm}
The left inequality in (\ref{3.1}) is self-evident (to have $\kappa$
negative eigenvalues, a Hermitian matrix must be of size at least
$\kappa\times
\kappa$), while the right inequality is
the essence of the theorem.
We shall show
that inequalities (\ref{3.1}) are exact; here we present a simple example
showing that these inequalities can be strict.
\begin{Ex}\label{E:last} {\rm Let
$$
f(z)=\left\{\begin{array}{cc} 1, & z\neq 0, \\
a\neq 1, & z=0.\end{array}\right.
$$
Then $f(z)$ is a standard function from ${\mathcal S}_1$ with no
poles and one jump and Theorem \ref{T:3.1} guarantees that
$1\leq N(f)\leq 2$. More detailed analysis shows that if $|a|>1$, then
${\bf k}_n(f)=1$ for $n\ge 1$ and therefore $1=N(f)<2$. If $|a|\leq 1$,
then ${\bf k}_1(f)=0$, ${\bf k}_n(f)=1$
for $n\ge 2$ and therefore $1<N(f)=2$.}
\end{Ex}
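The case distinction in Example \ref{E:last} is easily checked numerically; the Python sketch below computes ${\rm{sq}_-} P_1(f;0)$ and ${\rm{sq}_-} P_2(f;0,\,0.3)$ for a value of $a$ with $|a|>1$ and one with $|a|\leq 1$:
\begin{verbatim}
import numpy as np

def pick_matrix(f, pts):
    z = np.asarray(pts, dtype=complex)
    fv = np.array([f(w) for w in z])
    return (1 - np.outer(fv, fv.conj())) / (1 - np.outer(z, z.conj()))

def neg_eigs(P, tol=1e-10):
    return int((np.linalg.eigvalsh(P) < -tol).sum())

for a in (2.0, 0.5):                          # |a| > 1 versus |a| <= 1
    f = lambda z, a=a: a if z == 0 else 1.0
    print(a, neg_eigs(pick_matrix(f, [0.0])),
             neg_eigs(pick_matrix(f, [0.0, 0.3])))
# output: 2.0 1 1  (so N(f)=1)  and  0.5 0 1  (so N(f)=2)
\end{verbatim}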
The following addition to Theorem \ref{T:3.1} is useful:
\begin{Rk}
Let $f$ be as in
Theorem \ref{T:3.1}.
Let $w_1, \ldots ,w_k$ be the distinct poles of $f$ with multiplicities
$r_1, \ldots ,r_k$, respectively, and let $z_1,\ldots ,z_{\ell}$ be the
(distinct) jumps
of $f$. Then there exists an $\epsilon>0$ such that
every set
$Y=\{y_1,\ldots,y_m\}$ of
$m:=q+2\ell$ distinct points
satisfying the conditions 1 - 3 listed below
has the property that
$$
{\rm{sq}_-} P_m(f; y_1,\ldots ,y_m)=q+\ell.$$
\begin{enumerate}
\item $\ell$ points in the set $Y$ coincide with $z_1,\ldots,
z_{\ell}$;
\item
$\ell$ points in $Y$, different from $z_1,\ldots,
z_{\ell}$, are at a
distance less than $\epsilon$
from
the
set $\{z_1, \ldots, z_{\ell}\}$;
\item the remaining $q$ points in $Y$ are
distributed
over $k$ disjoint subsets $Y_{1}, \ldots, Y_k$ such that the set $Y_j$
consists
of $r_j$ points all at a positive distance less than $\epsilon$ from
$w_j$, for
$j=1,\ldots ,k$.
\end{enumerate} \label{R:last}
\end{Rk}
The proof of Remark \ref{R:last} is
obtained in the course of the proof of Theorem \ref{T:3.1}.
Still another characterization of the class ${\mathcal S}_{\kappa}$ is given in the
following theorem:
\begin{Tm} Let $f$ be as in Theorem $\ref{T:1.4}$, and fix a nonnegative
integer $\kappa$. Then:
\begin{enumerate}
\item If
${\bf k}_{2\kappa}(f)={\bf k}_{2\kappa+3}(f)=\kappa$, then $f\in {\mathcal S}_{\kappa}$.
\item If $f\in {\mathcal S}_{\kappa}$, then ${\bf k}_{2\kappa}(f)=\kappa.$
\end{enumerate}
\label{T:1.new}
\end{Tm}
Example \ref{E:last} (with $|a|\le 1$) shows that for a fixed
$\kappa$,
the subscript $2\kappa$ in Theorem \ref{T:1.new}
cannot be replaced by any smaller nonnegative integer.
Theorems \ref{T:1.4}, \ref{T:3.1}, and \ref{T:1.new} comprise the main results
of this paper.
Sections 2 through 4 contain the proofs of Theorems \ref{T:1.4} and
\ref{T:3.1}. In Section 5, a local result is proved to the effect that
for a function $f\in{\mathcal S}_{\kappa}$ and any open set $\Omega$ in
${\rm Dom}\, (f)$, the requisite number of negative eigenvalues of Pick
matrices $P_n(f;z_1, \ldots, z_n)$ can be achieved when the points $z_j$
are restricted to belong to $\Omega$. Hindmarsh sets and their elementary
properties are introduced in Section 6, where we also state an open
problem.
If not stated explicitly otherwise, all functions are
assumed to be scalar (complex
valued). The superscript $^*$ stands for the conjugate of a complex
number or the conjugate transpose of a matrix.
\section{Theorem \ref{T:1.4}: part of the proof}
\setcounter{equation}{0}
In this section we prove implication ${\bf 3 \Rightarrow 1}$ of Theorem
\ref{T:1.4}. We start with some auxiliary lemmas.
{\bf e}gin{La}
If an $m \times m$ matrix $X$ has rank $k<m$ and the algebraic
multiplicity of zero as an eigenvalue of $X$ is $m-k$, then
there exists a $k \times k$ nonsingular principal submatrix of $X$.
\label{L:4.1}
\end{La}
{\bf Proof:} The characteristic polynomial of $X$ has the form
$\lambda^m +a_{m-1}\lambda^{m-1}+\cdots + a_{m-k}\lambda^{m-k}$,
where $a_{m-k}\neq 0$. Since $\pm a_{m-k}$ is the sum of all
determinants of $k \times k$ principal submatrices of $X$, at least
one of those determinants must be nonzero. {\eproof}
{\bf e}gin{La}
\label{L:new}
Let $X$ be a Hermitian $n\times n$ matrix. Then
{\bf e}gin{enumerate}
\item ${\rm{sq}_-} Y\geq{\rm{sq}_-} X$ for all Hermitian matrices
$Y\in{\mathbb C}^{n\times n}$ in some small neighborhood of $X$.
\item ${\rm{sq}_-} P^*XP \leq {\rm{sq}_-} X$ for every $P\in{\mathbb C}^{n\times m}$.
In particular, if $X_0$ is any principal submatrix of $X$, then
${\rm{sq}_-} X_0 \leq {\rm{sq}_-} X$.
\item If $\{X_m\}_{m=1}^\infty$ is a sequence of Hermitian matrices such that
${\rm{sq}_-} X_m\leq k$ for $m=1,2, \ldots$, and $\lim_{m\rightarrow \infty}
X_m=X$, then also ${\rm{sq}_-} X\leq k$.
\end{enumerate}
\end{La}
{\bf Proof}: The first and third statements follow easily from the
continuity properties of eigenvalues of Hermitian matrices.
The second statement is a consequence of the interlacing properties of
eigenvalues of Hermitian matrices, see, e.g. \cite[Section 8.4]{LT}. {\eproof}
{\bf e}gin{La} \label{L:last1}
Let $f$ be a function defined on ${\mathbb D}\setminus {\Lambda}$, where ${\Lambda}$
is a discrete set.
{\bf e}gin{enumerate}
\item If $g$ is defined on a set $({\mathbb D}\setminus {\Lambda})\cup
\{w_1,\ldots,w_k\}$
(the points $w_1,\ldots,w_k\in{\mathbb D}$ are not assumed to be distinct or to be
disjoint with ${\Lambda}$) and $g(z)=f(z)$ for every $z\in {\mathbb D}\setminus {\Lambda}$,
then
{\bf e}gin{equation}
{\bf k}_n(f)\leq {\bf k}_n(g) \leq
\max_{0\leq r\leq \min \{k,n\}} \{{\bf k}_{n-r}(f) +r\}
\leq
{\bf k}_n(f) +k, \qquad n=1,2,
\ldots.
\label{4.1a}
\end{equation}
\item If $g$ is defined on ${\mathbb D}\setminus {\Lambda}$, and $g(z)=f(z)$ for every
$z\in {\mathbb D}\setminus {\Lambda}$, except for $k$
distinct points $z_1, \ldots z_k\in
{\mathbb D}\setminus {\Lambda}$, then
$$
{\bf k}_n(g) \leq {\bf k}_n(f)+k,
\qquad n=1,2,
\ldots.
$$
\end{enumerate}
\end{La}
{\bf Proof:} We prove the first statement.
The first inequality in (\ref{4.1a}) follows from the definition
(\ref{1.4}) of
${\bf k}_n(f)$. Let $y_1, \ldots ,y_n$ be a set of
distinct points in $({\mathbb D}\setminus {\Lambda})\cup
\{w_1,\ldots,w_k\}$, and assume that exactly $r$ points among
$y_1, \ldots ,y_n$ are in the set $\{w_1, \ldots w_k\}\setminus
({\mathbb D}\setminus {\Lambda})$. For notational convenience, suppose that
$y_1, \ldots ,y_r \in \{w_1, \ldots w_k\}\setminus
({\mathbb D}\setminus {\Lambda})$. We have
$$ P_n(g;y_1, \ldots ,y_n)=\left[{\bf e}gin{array}{cc} ^* & ^* \\ ^* &
P_{n-r}(f;y_{r+1},\ldots ,y_n) \end{array}\right], $$
where $*$ denotes blocks of no immediate interest. By the
interlacing properties of eigenvalues of Hermitian matrices, we have
$${\rm{sq}_-} P_n(g;y_1, \ldots ,y_n)\leq
{\rm{sq}_-} P_{n-r}(f;y_{r+1},\ldots ,y_n) +r \leq {\bf k}_{n-r}(f) +r, $$
and, since $0\leq r\leq \min\{n,k\}$, the second and third inequalities in
(\ref{4.1a}) follow.
For the second statement, using induction on $k$, we need to prove only the
case $k=1$. Let $y_1, \ldots ,y_n$ be a set of
distinct points in ${\mathbb D}\setminus {\Lambda}$. Then
{\bf e}gin{equation}\label{ag} P_n(g;y_1, \ldots ,y_n)=P_n(f;y_1, \ldots ,y_n)+Q,
\end{equation}
where $Q$ is either the zero matrix (if $y_j\neq z_1$), or $Q$ has
an $(n-1) \times (n-1)$ principal zero submatrix (if $y_j=z_1$ for some $j$),
and in any case $Q$ has at most one negative eigenvalue.
Then (\ref{ag}) implies, using the Weyl inequalities for eigenvalues of the sum
of two Hermitian matrices (see, e.g., \cite[Section III.2]{B}), that
${\rm{sq}_-} P_n(g;y_1, \ldots ,y_n)\leq {\rm{sq}_-} P_n(f;y_1, \ldots ,y_n)+1\leq {\bf k}_n(f)+1$.
\eproof
{\bf Proof of $3\Rightarrow 1$ of Theorem \ref{T:1.4}.} Let $f$ be
defined on ${\mathbb D}\backslash {\Lambda}$, where ${\Lambda}$ is discrete. Furthermore,
let (\ref{2.10}) hold for some integer $n\geq 0$ and let the set
$$
{\mathcal Z}=\{z_1,\ldots,z_n\}\subset{\mathbb D}\backslash{\Lambda}
$$
be such that
$$
{\rm{sq}_-} P_n(f; \, z_1,\ldots,z_n)=\kappa.
$$
Without loss of generality we can assume that
{\bf e}gin{equation}
P_n(f; \, z_1,\ldots,z_n)=
\left[\frac{1-f(z_i)f(z_j)^*}{1-z_i{z}^*_j}\right]_{i,j=1}^n
\label{2.1}
\end{equation}
is not singular.
Indeed, if it happens that the matrix (\ref{2.1}) is singular, and its rank is
$m < n$, then by Lemma \ref{L:4.1}, there is a nonsingular $m\times m$
principal submatrix $P_m(f;z_{i_1},\ldots,z_{i_m})$ of (\ref{2.1}).
A Schur complement argument shows that
$$
{\rm{sq}_-} P_m(f; \, z_{i_1},\ldots,z_{i_m})=\kappa.
$$
It follows then that ${\bf k}_m(f)=\kappa$. But then, in view of
(\ref{2.10}) and the nondecreasing property of the sequence
$\{{\bf k}_j(f)\}_{j=1}^{\infty}$,
we have ${\bf k}_{m+3}(f)=\kappa$, and the subsequent proof may be repeated
with $n$ replaced by $m$.
Setting $P_n(f; \, z_1,\ldots,z_n)=P$ for short, note the identity
{\bf e}gin{equation}
P-TPT^*=FJF^*,
\label{2.2}
\end{equation}
where
{\bf e}gin{equation}
T=\left[{\bf e}gin{array}{ccc}{z}_1 && \\ &\ddots & \\
&& {z}_n\end{array}\right],\quad J=\left[{\bf e}gin{array}{cr}1 &0\\
0&-1\end{array}\right],\quad
F=\left[{\bf e}gin{array}{cc} 1 & f(z_1) \\
\vdots & \vdots \\ 1 & f(z_n)\end{array}\right].
\label{2.3}
\end{equation}
Consider the function
{\bf e}gin{equation}
{\mathbb T}heta(z)=I_2-(1-z)F^*(I_n-zT^*)^{-1}P^{-1}(I_n-T)^{-1}FJ
\label{2.3a}
\end{equation}
It follows from (\ref{2.2}) by a straightforward standard
calculation (see, e.g., \cite[Section 7.1]{bgr}) that
{\bf e}gin{equation}
J-{\mathbb T}heta(z)J{\mathbb T}heta(w)^*=(1-z{w}^*)F^*(I_n-zT^*)^{-1}P^{-1}
(I_n-{w}^*T)^{-1}F.
\label{2.3b}
\end{equation}
Note that ${\mathbb T}heta(z)$ is a rational function taking $J$--unitary values
on ${\mathbb T}$: ${\mathbb T}heta(z)J{\mathbb T}heta(z)^*=J$ for $z\in {\mathbb T}$. Therefore, by the symmetry
principle,
$$
{\mathbb T}heta(z)^{-1}=J{\mathbb T}heta(\frac{1}{{z}^*})^*J=
I_2+(1-z)F^*(I_n-T^*)^{-1}P^{-1}(zI_n-T)^{-1}FJ,
$$
which implies, in particular, that ${\mathbb T}heta(z)$ is invertible
at each point $z\not\in{\mathcal Z}$.
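As an informal numerical aside (not part of the proof), the Stein identity (\ref{2.2}) and the $J$-unitarity of $\Theta(z)$ on ${\mathbb T}$ can be verified for sample data; the Python sketch below uses arbitrarily chosen points and function values:
\begin{verbatim}
import numpy as np

z_pts = np.array([0.1 + 0.2j, -0.4j, 0.5, -0.3 + 0.1j])   # distinct points in D
f_vals = np.array([2.0, 0.5 + 0.5j, -1.3, 0.2], dtype=complex)
n = len(z_pts)

T = np.diag(z_pts)
F = np.column_stack([np.ones(n), f_vals]).astype(complex)
J = np.diag([1.0, -1.0]).astype(complex)
P = (1 - np.outer(f_vals, f_vals.conj())) / (1 - np.outer(z_pts, z_pts.conj()))

# Stein identity: P - T P T^* = F J F^*
print(np.allclose(P - T @ P @ T.conj().T, F @ J @ F.conj().T))

def Theta(z):
    M = F.conj().T @ np.linalg.inv(np.eye(n) - z * T.conj().T) \
        @ np.linalg.inv(P) @ np.linalg.inv(np.eye(n) - T) @ F
    return np.eye(2) - (1 - z) * M @ J

z = np.exp(0.7j)                                           # a point on the unit circle
print(np.allclose(Theta(z) @ J @ Theta(z).conj().T, J))    # J-unitarity on T
\end{verbatim}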
By (\ref{2.10}) and by Lemma \ref{L:new},
{\bf e}gin{equation}
{\rm{sq}_-} P_{n+3}(f; \, z_1,\ldots,z_n,\zeta_1,\zeta_2,\zeta_3)=\kappa
\label{2.4}
\end{equation}
for every choice of $\zeta_1, \, \zeta_2, \, \zeta_3\in{\mathbb D}\backslash{\Lambda}$.
The matrix in (\ref{2.4}) can be written in the block form as
{\bf e}gin{equation}
P_{n+3}(f; \, z_1,\ldots,z_n,\zeta_1,\zeta_2,\zeta_3)=
\left[{\bf e}gin{array}{cc} P & \Psi^*\\ \Psi & P_3(f; \, \zeta_1, \zeta_2,
\zeta_3)\end{array}\right],
\label{2.6}
\end{equation}
where
$$
\Psi=\left[{\bf e}gin{array}{c}\Psi_1 \\ \Psi_2 \\ \Psi_3 \end{array}\right]
\quad\mbox{and}\quad
\Psi_i=\left[{\bf e}gin{array}{ccc}{\displaystyle
\frac{1-f(\zeta_i)f(z_1)^*}
{1-\zeta_i{z}^*_1}} &\ldots &
{\displaystyle\frac{1-f(\zeta_i)f(z_n)^*}{1-\zeta_i{z}^*_n}}
\end{array}\right] \; (i=1,2,3).
$$
The last formula for $\Psi_i$ can be written in terms of (\ref{2.3}) as
{\bf e}gin{equation}
\Psi_i=[1 \; \; -f(\zeta_i)]F^*(I_n-\zeta_i T^*)^{-1}\qquad (i=1,2,3).
\label{2.5}
\end{equation}
By (\ref{2.10}), it follows from (\ref{2.6}) that
$$
P_3(f; \, \zeta_1, \zeta_2, \zeta_3)-\Psi P^{-1}\Psi^*\geq 0,
$$
or more explicitly,
{\bf e}gin{equation}
\left[ \frac{1-f(\zeta_i)f(\zeta_j)^*}{1-\zeta_i{\zeta^*_j}}-
\Psi_i P^{-1}\Psi_j^*\right]_{i,j=1}^3\geq 0.
\label{2.8}
\end{equation}
By (\ref{2.5}) and (\ref{2.3b}),
{\bf e}gin{eqnarray}
&&\frac{1-f(\zeta_i)f(\zeta_j)^*}{1-\zeta_i{\zeta^*_j}}-
\Psi_i P^{-1}\Psi_j^*\nonumber\\
&&=\left[{\bf e}gin{array}{cc}1 &
-f(\zeta_i)\end{array}\right]\left\{\frac{J}{1-\zeta_i{\zeta}^*_j}-
F^*(I_n-\zeta_i
T^*)^{-1}P^{-1}(I_n-{\zeta}^*_jT)^{-1}F\right\}\left[{\bf e}gin{array}{c}
1 \\ -f(\zeta_j)^*\end{array}\right]\nonumber \\
&&=\left[{\bf e}gin{array}{cc}1 &
-f(\zeta_i)\end{array}\right]\frac{{\mathbb T}heta(\zeta_i)J{\mathbb T}heta(\zeta_j)^*}
{1-\zeta_i{\zeta}^*_j}\left[{\bf e}gin{array}{c}
1 \\ -f(\zeta_j)^*\end{array}\right],\label{2.8a}
\end{eqnarray}
which allows us to rewrite (\ref{2.8}) as
{\bf e}gin{equation}
\left[ \left[{\bf e}gin{array}{cc}1 &
-f(\zeta_i)\end{array}\right]\frac{{\mathbb T}heta(\zeta_i)J{\mathbb T}heta(\zeta_j)^*}
{1-\zeta_i{\zeta}^*_j}\left[{\bf e}gin{array}{c}
1 \\ -f(\zeta_j)^*\end{array}\right]\right]_{i,j=1}^3\geq 0.
\label{2.9}
\end{equation}
Introducing the block decomposition
$$
{\mathbb T}heta(z)=\left[{\bf e}gin{array}{cc}\theta_{11}(z) & \theta_{12}(z)\\
\theta_{21}(z) & \theta_{22}(z)\end{array}\right]
$$
of ${\mathbb T}heta$ into four scalar blocks, note that
{\bf e}gin{equation}
d(z):=\theta_{21}(z)f(z)-\theta_{11}(z)\neq 0, \quad
z\in {\mathbb D}\setminus ({\mathcal Z}\cup{\Lambda}).
\label{2.9a}
\end{equation}
Indeed, assuming that $\theta_{21}(\zeta)f(\zeta)=\theta_{11}(\zeta)$
for some $\zeta\in{\mathbb D}\backslash({\mathcal Z}\cup{\Lambda})$, we get
{\bf e}gin{eqnarray*}
&&\left[{\bf e}gin{array}{cc}1 &
-f(\zeta)\end{array}\right]\frac{{\mathbb T}heta(\zeta)J{\mathbb T}heta(\zeta)^*}
{1-|\zeta|^2}\left[{\bf e}gin{array}{c}
1 \\ -f(\zeta)^*\end{array}\right]\nonumber\\
&&=-\left[{\bf e}gin{array}{cc}1 &
-f(\zeta)\end{array}\right]\frac{\left[{\bf e}gin{array}{c}\theta_{12}(\zeta)\\
\theta_{22}(\zeta)\end{array}\right]\left[{\bf e}gin{array}{cc}
\theta_{12}(\zeta)^* & \theta_{22}(\zeta)^*\end{array}\right]}
{1-|\zeta|^2}\left[{\bf e}gin{array}{c}1 \\ -f(\zeta)^*\end{array}\right]\leq
0,
\end{eqnarray*}
which contradicts (\ref{2.9}), unless $\det {\mathbb T}heta(\zeta)=0$. But as we
have mentioned above, ${\mathbb T}heta(z)$ is invertible at each point
$\zeta\not\in{\mathcal Z}$.
Thus, the function
{\bf e}gin{equation}
\sigma(z)=\frac{\theta_{12}(z)-f(z)\theta_{22}(z)}
{\theta_{21}(z)f(z)-\theta_{11}(z)}
\label{2.10z}
\end{equation}
is defined on ${\mathbb D}\backslash({\mathcal Z}\cup{\Lambda})$. Moreover,
$$
\left[{\bf e}gin{array}{cc}1 & -f(\zeta)\end{array}\right]{\mathbb T}heta(\zeta)=-
d(\zeta)^{-1}\left[{\bf e}gin{array}{cc}1
& -\sigma(\zeta)\end{array}\right]
$$
and thus, inequality (\ref{2.9}) can be written in terms of $\sigma$ as
$$
\left[ d(\zeta_i)^{-1}\left[{\bf e}gin{array}{cc}1
& -\sigma(\zeta_i)\end{array}\right]\frac{J}
{1-\zeta_i{\zeta}^*_j}\left[{\bf e}gin{array}{c}
1 \\ -\sigma(\zeta_j)^*\end{array}\right]
(d(\zeta_j)^*)^{-1}\right]_{i,j=1}^3\geq 0,
$$
or equivalently, (since $d(\zeta_i)\neq 0$) as
$$
\left[ \frac{1-\sigma(\zeta_i)\sigma(\zeta_j)^*}
{1-\zeta_i{\zeta}^*_j}\right]_{i,j=1}^3\geq 0.
$$
The latter inequality means that
$\; P_3(\sigma; \, \zeta_1,\zeta_2,\zeta_3)\geq 0 \;$
for every choice of points $\zeta_1, \, \zeta_2, \, \zeta_3$ in
${\mathbb D}\backslash({\mathcal Z}\cup{\Lambda})$. By Corollary
\ref{T:1.2a}, $\sigma(z)$ is a Schur function. Although
$\sigma(z)$ has been defined via (\ref{2.10z}) on
${\mathbb D}\backslash({\mathcal Z}\cup\{z_1,\ldots,z_n\})$, it admits an analytic
continuation to all of ${\mathbb D}$, which still will be denoted by $\sigma(z)$.
It follows from (\ref{2.10z}) that $f$ coincides with the function
{\bf e}gin{equation}
F(z)=\frac{\theta_{11}(z)\sigma(z)+\theta_{12}(z)}
{\theta_{21}(z)\sigma(z)+\theta_{22}(z)}.
\label{2.11}
\end{equation}
at every point $z\in{\mathbb D}\setminus({\mathcal Z}\cup{\Lambda})$.
Since $f$ has not been defined on ${\Lambda}$, one can consider $F$
as a (unique) meromorphic extension of $f$. However,
$F$ need not coincide with $f$ on ${\mathcal Z}$.
Now we prove that $f\in{\mathcal S}_{\kappa}$. To this end it suffices to show that
{\bf e}gin{equation}
{\rm{sq}_-} P_{n+r}(f; \, z_1,\ldots,z_n,\zeta_1,\ldots,\zeta_r)=\kappa
\label{2.13}
\end{equation}
for every choice of $\zeta_1,\ldots,\zeta_r\in{\mathbb D}\setminus ({\mathcal Z}\cup{\Lambda})$.
Note that all possible ``jumps'' of $f$ are in ${\mathcal Z}$ and at all other
points of ${\mathbb D}\setminus {\Lambda}$, it holds that $f(\zeta)=F(\zeta)$. Thus,
writing
$$
P_{n+r}(f; \, z_1,\ldots,z_n,\zeta_1,\ldots,\zeta_r)=
\left[{\bf e}gin{array}{cc} P & {\bf\Psi}^*\\ {\bf\Psi} & P_{r}(f; \,
\zeta_1, \ldots, \zeta_r)\end{array}\right],
$$
where
$$
{\bf\Psi}=\left[{\bf e}gin{array}{c}\Psi_1 \\ \vdots \\ \Psi_r
\end{array}\right]
$$
and the $\Psi_i$'s are defined via (\ref{2.5}) for $i=1,\ldots,r$, we
conclude by the Schur complement argument that
{\bf e}gin{equation}
{\rm{sq}_-} P_{n+r}(f; \, z_1,\ldots,z_n,\zeta_1,\ldots,\zeta_r)=
{\rm{sq}_-} P + {\rm{sq}_-} (P_{r}(f; \, \zeta_1, \ldots, \zeta_r)-\Psi
P^{-1}\Psi^*).
\label{2.14}
\end{equation}
It follows from the calculation (\ref{2.8a}) that
$$
P_{r}(f; \, \zeta_1, \ldots, \zeta_r)-\Psi
P^{-1}\Psi^*=\left[ \left[{\bf e}gin{array}{cc}1 &
-f(\zeta_i)\end{array}\right]\frac{{\mathbb T}heta(\zeta_i)J{\mathbb T}heta(\zeta_j)^*}
{1-\zeta_i{\zeta}^*_j}\left[{\bf e}gin{array}{c}
1 \\ -f(\zeta_j)^*\end{array}\right]\right]_{i,j=1}^r,
$$
which can be written in terms of functions $\sigma$ and $d$ defined in
(\ref{2.10z}) and (\ref{2.9a}), respectively, as
$$
P_{r}(f; \, \zeta_1, \ldots, \zeta_r)-\Psi
P^{-1}\Psi^*=\left[
d(\zeta_i)^{-1}\frac{1-\sigma(\zeta_i)\sigma(\zeta_j)^*}
{1-\zeta_i{\zeta}^*_j}(d(\zeta_j)^*)^{-1}\right]_{i,j=1}^r.
$$
We have already proved that $\sigma$ is a Schur function and therefore,
$$
P_{r}(f; \, \zeta_1, \ldots, \zeta_r)-\Psi
P^{-1}\Psi^*\geq 0,
$$
which together with (\ref{2.14}) implies (\ref{2.13}).
{\eproof}
\section{Proof of Theorem \ref{T:3.1}}
\setcounter{equation}{0}
Since the first inequality in
(\ref{3.1}) is obvious, we need to prove only the second
inequality.
If $\tilde{f}$ is the meromorphic part of $f$, then
by Theorem \ref{T:1.2b} $\tilde{f}\in{\mathcal S}_{q}$, and therefore
by Lemma \ref{L:last1}, ${\bf k}_n(f)\leq q+\ell,$ for $n=1,2, \ldots $.
Therefore, it suffices to prove that there exist $q+2\ell$ distinct
points
$u_1, \ldots , u_{q+2\ell}\in{\rm Dom}\,(f)$ such that
$$
{\rm{sq}_-} P_{q + 2\ell}(f;u_1, \ldots ,u_{q +2\ell})\geq q+\ell.
$$
We start with notation and some preliminary
results. Let
$$
J_r(a)=\left[{\bf e}gin{array}{ccccc} a &0 & 0 & \cdots &0 \\
1& a & 0 & \cdots &0 \\ 0&1&a& \cdots &0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots 1 & a
\end{array}\right], \qquad a\in {\mathbb C}
$$
be the lower triangular $r \times r$ Jordan block with
eigenvalue $a$ and let $E_r$ and $G_r$ be vectors from ${\mathbb C}^r$ defined by
{\bf e}gin{equation}
\label{6.01}
E_r=\left[{\bf e}gin{array}{c} 1 \\ 0 \\ \vdots \\ 0\end{array}\right]\in{\mathbb C}^r
\quad \mbox{and}\quad
G_r=\left[{\bf e}gin{array}{c} 1 \\ 1 \\ \vdots \\ 1\end{array}\right]\in{\mathbb C}^r.
\end{equation}
Given an ordered set ${\mathcal Z}=\{z_1, \ldots ,z_k\}$ of
distinct points in the complex plane, we denote by $\Phi({\mathcal Z})$ the lower
triangular $k \times k$ matrix defined by
{\bf e}gin{equation}\label{x.1}
\Phi({\mathcal Z})=\left[\Phi_{i,j}\right]_{i,j=1}^k, \qquad
\Phi_{i,j}=\left\{ {\bf e}gin{array}{cl}
{\displaystyle\frac{1}{\phi_i^{\prime} (z_j)}} &
{\rm if} \ i\geq j, \\ 0 & {\rm if} \ i<j, \end{array}\right.
\end{equation}
where $\phi_i(z)=\prod_{j=1}^i (z-z_j)$. Furthermore, given a set ${\mathcal Z}$ as
above, for a complex valued function $v(z)$ we define recursively the divided
differences $[z_1, \ldots ,z_j]_v$ by
$$
[z_1]_v=v(z_1), \qquad [z_1, \ldots ,z_{j+1}]_v=
\frac{[z_1, \ldots ,z_{j}]_v-[z_2, \ldots ,z_{j+1}]_v}{z_1- z_{j+1}}, \qquad
j=1, \ldots ,k,
$$
and use the following notation for associated matrices and vectors:
$$ D({\mathcal Z})=
\left[{\bf e}gin{array}{cccc} z_1 &0 & \cdots &0 \\
0& z_2 & \cdots &0 \\
\vdots & \vdots & \ddots & \vdots \\ 0&0& \cdots & z_k
\end{array}\right], \quad J({\mathcal Z})=
\left[{\bf e}gin{array}{ccccc} z_1 &0 & 0 & \cdots &0 \\
1& z_2 & 0 & \cdots &0 \\ 0&1&z_3& \cdots &0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\ 0&0&0& \cdots 1 & z_k
\end{array}\right],
$$
{\bf e}gin{equation}
v({\mathcal Z})=\left[{\bf e}gin{array}{c} v(z_1) \\
v(z_2) \\ \vdots \\
v(z_k)\end{array}\right], \quad
[{\mathcal Z}]_v= \left[{\bf e}gin{array}{c} [z_1]_v \\ \left[z_1, z_2\right]_v \\
\vdots \\ \left[z_1, \ldots ,z_k\right]_v \end{array}\right].
\label{x.2.2a}
\end{equation}
{\bf e}gin{La}
\label{L:x.1}
Let ${\mathcal Z}=\{z_1, \ldots,z_k\}$ be an ordered set of distinct points,
and let $\Phi({\mathcal Z})$ be defined as in $(\ref{x.1})$. Then:
{\bf e}gin{equation}
\Phi({\mathcal Z})D({\mathcal Z})=J({\mathcal Z})\Phi({\mathcal Z}) \qquad {\rm and } \qquad
\Phi({\mathcal Z})v({\mathcal Z})=[{\mathcal Z}]_v.
\label{x.2.3}
\end{equation}
Moreover, if the function $v(z)$ is analytic at
$z_0\in{\mathbb C}$, with the Taylor series $v(z)=\sum_{k=0}^{\infty} v_k(z-z_0)^k$,
then
{\bf e}gin{equation}
\label{x.4}
\lim_{z_1,\ldots,z_k\rightarrow z_0}
\Phi({\mathcal Z})v({\mathcal Z})=\left[{\bf e}gin{array}{c} v_0\\ v_1 \\ \vdots \\ v_{k-1}
\end{array}\right]. \end{equation}
\end{La}
{\bf Proof:} Formulas (\ref{x.2.3}) are verified by direct computation;
formula (\ref{x.4}) follows from (\ref{x.2.3}), since
$$
\lim_{z_i\rightarrow z_0,\ i=1,\ldots ,j} [z_1, \ldots ,z_j]_v=
\frac{v^{(j-1)}(z_0)}{(j-1)!}=v_{j-1}, \quad j=1, \ldots ,k .
$$
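Both formulas of Lemma \ref{L:x.1} can be checked numerically; the Python sketch below (an informal illustration with $v=\exp$ and arbitrarily chosen points) compares $\Phi({\mathcal Z})v({\mathcal Z})$ with the recursively computed divided differences and with the Taylor coefficients at a nearby point:
\begin{verbatim}
import math
import numpy as np

def phi_matrix(z):
    # Phi_{i,j} = 1/phi_i'(z_j) for i >= j, where phi_i(z) = (z-z_1)...(z-z_i)
    k = len(z)
    Phi = np.zeros((k, k), dtype=complex)
    for i in range(k):
        for j in range(i + 1):
            Phi[i, j] = 1.0 / np.prod([z[j] - z[m] for m in range(i + 1) if m != j])
    return Phi

def divdiff(z, v):
    # [z_1,...,z_k]_v via the recursive definition above
    if len(z) == 1:
        return v(z[0])
    return (divdiff(z[:-1], v) - divdiff(z[1:], v)) / (z[0] - z[-1])

v, z0 = np.exp, 0.3 + 0.1j
for eps in (1e-1, 1e-3):
    z = z0 + eps * np.arange(1, 5)                   # four points clustering at z0
    lhs = phi_matrix(z) @ v(z)                       # Phi(Z) v(Z)
    dd = [divdiff(z[:j + 1], v) for j in range(4)]   # the vector [Z]_v
    taylor = [v(z0) / math.factorial(j) for j in range(4)]
    print(np.allclose(lhs, dd), np.abs(lhs - taylor).max())
\end{verbatim}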
In the sequel it will be convenient to use the following notation:
${\rm diag}\, \left(X_1, \ldots ,X_k\right)$ stands for the block diagonal
matrix with the diagonal blocks $X_1, \ldots ,X_k$ (in that order).
{\bf e}gin{La}
\label{L:x.2}
Let $w_1, \ldots ,w_k$ be distinct points in ${\mathbb D}$, and let $K$ be the unique
solution of the Stein equation
{\bf e}gin{equation}\label{x.4a}
K -AKA^*=EE^*, \end{equation}
where
{\bf e}gin{equation}
A=\left[{\bf e}gin{array}{ccc}J_{r_1}(w_1) && \\ & \ddots & \\ &&
J_{r_k}(w_k)\end{array}\right], \quad E=\left[{\bf e}gin{array}{c}E_{r_1} \\
\vdots \\ E_{r_k}\end{array}\right]
\label{x.40}
\end{equation}
and $E_{r_j}$ are defined via the first formula in $(\ref{6.01})$.
Then the normalized Blaschke product
{\bf e}gin{equation}\label{x.4b}
b(z)=e^{i\alpha} \prod_{j=1}^k \left(\frac{z-w_j}{1-zw_j^*}\right)^{r_j},\qquad
b(1)=1,
\end{equation}
admits a realization
{\bf e}gin{equation}\label{x.4c}
b(z)=1+(z-1)E^*(I-zA^*)^{-1}K^{-1}(I-A)^{-1}E,\end{equation}
and the following formula holds:
{\bf e}gin{equation}\label{x.4d}
1-b(z)b(w)^*=(1-zw^*)E^*(I-zA^*)^{-1}K^{-1}(I-w^*A)^{-1}E, \quad z,w\in {\mathbb D}.
\end{equation}
\end{La}
{\bf Proof:} First we note that
$K$ is given by the convergent series
{\bf e}gin{equation}\label{x.4e}
K=\sum_{j=0}^{\infty} A^jEE^*(A^*)^j.
\end{equation}
Since the pair $(E^*, A^*)$ is observable, i.e, $\cap_{j=0}^{\infty}
{\rm Ker}\, (E^*(A^*)^j)=\{0\}$, the matrix $K$ is positive definite. Equality
(\ref{x.4d}) follows from (\ref{x.4c}) and (\ref{x.4a}) by a standard
straightforward calculation (or from (\ref{2.3b}) upon setting $J=I_2$
in (\ref{2.3})--(\ref{2.3b})).
It follows from (\ref{x.4d}) that the function $b(z)$ defined via
(\ref{x.4c}) is inner.
Using the fact that ${\rm det}\, (I+XY)={\rm det}\,(I+YX)$ for
matrices $X$ and $Y$ of sizes $u \times v$ and $v \times u$ respectively,
we obtain from (\ref{x.4a}) and (\ref{x.4c}):
{\bf e}gin{eqnarray*}
b(z)&= & {\rm det}\, \left(
I+(z-1)(I-zA^*)^{-1}K^{-1}(I-A)^{-1}(K-AKA^*)\right) \\
&=& {\rm det}\, \left((I-zA^*)^{-1}K^{-1}(I-A)^{-1}\right)
\\ && \times \ {\rm det} \, \left((I-A)K(I-zA^*)+(z-1)(K-AKA^*)\right) \\
&= & {\rm det}\, \left((I-zA^*)^{-1}K^{-1}(I-A)^{-1}\right)\, \cdot\,
{\rm det}\, \left((zI-A)K(I-A^*)\right) \\
&= & c\, \frac{{\rm det}\,(zI -A)}{{\rm det}\, (I-zA^*)} \qquad
{\rm for}\ \ {\rm some} \ \ c\in {\mathbb C}, \ |c|=1.
\end{eqnarray*}
It follows that the degree of $b(z)$ is equal to $r:=\sum_{j=1}^k
r_j$, and $b(z)$ has zeros at $w_1, \ldots ,w_k$ of multiplicities
$r_1, \ldots ,r_k$, respectively.
Since $b(1)=1$,
the function $b(z)$ indeed
coincides with (\ref{x.4b}). {\eproof}
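Lemma \ref{L:x.2} can be illustrated numerically as well; the Python sketch below computes $K$ from the series (\ref{x.4e}) for a single Jordan block $J_2(w)$ (an arbitrary sample choice) and compares the realization (\ref{x.4c}) with the normalized Blaschke product (\ref{x.4b}):
\begin{verbatim}
import numpy as np

w = 0.5 + 0.3j                                     # a double zero: A = J_2(w), E = E_2
A = np.array([[w, 0], [1, w]], dtype=complex)      # lower-triangular Jordan block
E = np.array([[1.0], [0.0]], dtype=complex)

# K solves K - A K A^* = E E^*; computed from the convergent series above
K = np.zeros((2, 2), dtype=complex)
M = E @ E.conj().T
for _ in range(500):
    K += M
    M = A @ M @ A.conj().T

I2 = np.eye(2, dtype=complex)

def b_realized(z):
    val = E.conj().T @ np.linalg.inv(I2 - z * A.conj().T) \
          @ np.linalg.inv(K) @ np.linalg.inv(I2 - A) @ E
    return 1 + (z - 1) * val[0, 0]

c = ((1 - np.conj(w)) / (1 - w))**2                # unimodular constant giving b(1) = 1
b_product = lambda z: c * ((z - w) / (1 - z * np.conj(w)))**2

for z in (0.2, -0.3 + 0.4j, np.exp(1.1j)):
    print(abs(b_realized(z) - b_product(z)))       # ~0: the two expressions agree
print(abs(b_realized(np.exp(2.0j))))               # ~1: |b| = 1 on the unit circle
\end{verbatim}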
The result of Lemma \ref{L:x.2} is known also for matrix valued inner functions
(see
\cite[Section 7.4]{bgr}, where it is given in a slightly different form for
$J$-unitary matrix functions), in which case it can be interpreted as a formula
for
such functions having a prescribed left null pair with respect to the unit disk
${\mathbb D}$ (see \cite{bgr} and relevant references there).
Let
{\bf e}gin{equation}
\label{x.5}
f(z)=\left\{{\bf e}gin{array}{cl} {\displaystyle \frac{S(z)}{b(z)}} &
{\rm if}\, z\not\in \{z_1,\ldots,z_{\ell}, w_{t+1},\ldots ,w_k\} \\
f_j & {\rm if }\, z=z_j, j=1, \ldots , \ell \end{array}\right.
\end{equation}
where $S(z)$ is a Schur function not vanishing at any of the $z_j$'s, $b(z)$
is the Blaschke product given by (\ref{x.4b}). The points $z_1, \ldots
,z_{\ell}$ are assumed to be distinct points in ${\mathbb D}$, and $w_1,\ldots ,w_k$ are
also assumed to be distinct in ${\mathbb D}$. Furthermore, we assume that
$w_j=z_j$ for $j=1, \ldots , t$, $\{w_{t+1}, \ldots ,w_k\}\cap
\{z_{t+1},\ldots
,z_{\ell}\}=\emptyset$, and ${\displaystyle \frac{S(z_j)}{b(z_j)}}\neq f_j$ for
$j=t+1, \ldots ,\ell.
$ The cases when $t=0$, i.e.,
$\{w_{1}, \ldots ,w_k\}\cap \{z_{1},\ldots
,z_{\ell}\}=\emptyset$, and when $t=\min\{k,\ell\}$ are not excluded; in these
cases the subsequent arguments should be modified in obvious ways.
We let $N=2\ell+\sum_{j=1}^kr_j$.
Take $N$ distinct points in the unit disk sorted into $k+2$ ordered sets
{\bf e}gin{equation}\label{x.5a}
{\mathcal M}_j=\{\mu_{j,1},\ldots ,\mu_{j,r_j}\}, \ j=1, \ldots, k; \quad
{\mathcal N}=\{\nu_1, \ldots ,\nu_{\ell} \}; \quad {\mathcal Z}=\{z_1,\ldots ,z_{\ell}\}
\end{equation}
and such that ${\mathcal M}_j\cap {\mathcal W}=\emptyset$, $j=1, \ldots ,k$, and
${\mathcal N}\cap {\mathcal W}=\emptyset$, where ${\mathcal W}=\{w_1, \ldots , w_k\}$. Consider the
corresponding Pick matrix
$$ P=P_N(f;\mu_{1,1},\ldots , \mu_{k,r_k}, \nu_1, \ldots , \nu_{\ell},z_1,
\ldots , z_{\ell}). $$
We shall show that if $\mu_{j,i}$ and $\nu_j$ are sufficiently close to
$w_j$ and $z_{j}$, respectively, then ${\rm{sq}_-} P\geq N-\ell$.
It is readily seen that $P$ is the unique solution of the Stein equation
{\bf e}gin{equation}
\label{x.6}
P-TPT^*=G_NG_N^*-CC^* \end{equation}
where $G_N\in{\mathbb C}^N$ is defined via the second formula in (\ref{6.01}) and
$$
T=\left[{\bf e}gin{array}{ccccc}D({\mathcal M}_1) &&&&\\ &\ddots &&& \\
&&D({\mathcal M}_k)&&\\ &&& D({\mathcal N}) &\\ &&&& D({\mathcal Z})\end{array}\right],\qquad
C= \left[{\bf e}gin{array}{c} f({\mathcal M}_1) \\ \vdots \\ f({\mathcal M}_k) \\ f({\mathcal N}) \\ f({\mathcal Z})
\end{array}\right].
$$
We recall that by definition (\ref{x.2.2a}) and in view of (\ref{x.5}),
$$
f({\mathcal M}_j)=\left[{\bf e}gin{array}{c}f(\mu_{j,1}) \\ \vdots \\ f(\mu_{j,r_j})
\end{array}\right],\quad
f({\mathcal N})=\left[{\bf e}gin{array}{c}f(\nu_1)\\ \vdots \\
f(\nu_\ell)\end{array}\right],\quad
f({\mathcal Z})=\left[{\bf e}gin{array}{c}f_1\\ \vdots \\ f_{\ell}\end{array}\right].
$$
Consider the matrices
$$
B_j=\Phi({\mathcal M}_j){\rm diag}\,\left(b(\mu_{j,1}), \ldots ,
b(\mu_{j,r_j})\right)\quad (j=1,\ldots,k)
$$
and note that by Lemma \ref{L:x.1},
{\bf e}gin{eqnarray}
B_jD({\mathcal M}_j)&=&\Phi({\mathcal M}_j)D({\mathcal M}_j){\rm diag}\,\left(b(\mu_{j,1}), \ldots ,
b(\mu_{j,r_j})\right)\nonumber\\
&=&J({\mathcal M}_j)\Phi({\mathcal M}_j){\rm diag}\,\left(b(\mu_{j,1}),\ldots,b(\mu_{j,r_j})
\right)\nonumber\\ &=&J({\mathcal M}_j)B_j,\nonumber\\
B_jf({\mathcal M}_j)&=&\Phi({\mathcal M}_j)S({\mathcal M}_j)=\left[{\mathcal M}_j\right]_S,\nonumber\\
B_jG_{r_j}&=&\Phi({\mathcal M}_j)b({\mathcal M}_j)=\left[{\mathcal M}_j\right]_b.\nonumber
\end{eqnarray}
The three last equalities together with block structure of the
matrices $T$, $C$ and $G_N$, lead to
{\bf e}gin{equation}
\label{x.7}
BT=T_1B, \quad Y_1=BY, \quad {\rm and} \quad C_1=BC,
\end{equation}
where
{\bf e}gin{equation}
\label{x.6ab}
B={\rm diag}\,\left(B_1,\ldots,B_j, \, I_{2\ell}\right), \quad
T_1={\rm diag}\, \left(J({\mathcal M}_1), \ldots,
J({\mathcal M}_k), D({\mathcal N}), D({\mathcal Z})\right), \end{equation}
{\bf e}gin{equation} \label{x.6ac} Y_1=\left[{\bf e}gin{array}{c}
\left[{\mathcal M}_1\right]_b \\ \vdots \\
\left[{\mathcal M}_k\right]_b \\ G_{2\ell}\end{array}\right]\quad\mbox{and}\quad
C_1 \ = \ \left[\begin{array}{c} \left[{\mathcal M}_1\right]_S \\
\vdots \\
\left[{\mathcal M}_k\right]_S \\ f({\mathcal N}) \\ f({\mathcal Z}) \end{array}\right].
\end{equation}
Pre- and post-multiplying (\ref{x.6}) by
$B$ and $B^*$, respectively, we conclude, on account of (\ref{x.7}), that the
matrix $
P_1:=BPB^* $
is the unique solution of the Stein equation
\begin{equation}\label{x.9}
P_1-T_1P_1T_1^*=Y_1Y_1^*-C_1C_1^*,
\end{equation}
where $T_1$, $Y_1$, and $C_1$ are given by (\ref{x.6ab}) and
(\ref{x.6ac}).
Recall that all the entries in (\ref{x.9}) depend on the $\mu_{j,i}$'s.
We now let $\mu_{j,i} \rightarrow w_j$, for $i=1, \ldots ,r_j$,
$j=1,\ldots , k$. Since $b$ has a zero of order $r_j$ at $w_j$, it follows
by (\ref{x.4}) that
$$
\lim_{\mu_{j,i}\rightarrow w_j}[{\mathcal M}_j]_b=0, \quad j=1, \ldots, k,
$$
and thus,
$$
Y_2:=\lim_{\mu_{j,i}\rightarrow w_j} Y_1=\left[\begin{array}{c}
0 \\ G_{2\ell}\end{array}\right]\in{\mathbb C}^{N}.
$$
Similarly, we get by (\ref{x.2.2a}),
$$
C_2:=\lim_{\mu_{j,i}\rightarrow w_j} C_1=\left[\begin{array}{c}
\widehat{S}_1 \\ \vdots \\ \widehat{S}_k \\ f({\mathcal N}) \\ f({\mathcal Z})
\end{array}\right], \quad\mbox{where}\quad
\widehat{S}_j=\left[\begin{array}{c} S(w_j) \\[1mm]
{\displaystyle \frac{S'(w_j)}{1!}}
\\[1mm]
\vdots \\[1mm]
{\displaystyle \frac{S^{(r_j-1)}(w_j)}{(r_j-1)!}}\end{array}\right]
\equiv \left[\begin{array}{c}s_{j,0} \\ s_{j,1} \\
\vdots \\ s_{j,r_j-1}\end{array}\right].
$$
Furthermore, taking the limit in (\ref{x.6ab}) as $\mu_{j,i} \rightarrow
w_j$, we get
\begin{eqnarray*}
T_2:=\lim_{\mu_{j,i}\rightarrow w_j} T_1 &=&{\rm diag} \,
\left(J_{r_1}(w_1),
\ldots , J_{r_k}(w_k),D({\mathcal N}),D({\mathcal Z}) \right) \nonumber\\
&=& {\rm diag}\, (A, D({\mathcal N}), D({\mathcal Z})),
\end{eqnarray*}
where $A$ is given in (\ref{x.40}).
Since the above three limits exist, there also exists the limit
\begin{equation}
\label{x.13}
P_2:= \lim_{\mu_{j,i}\rightarrow w_j} P_1,
\end{equation}
which serves to define a unique solution $P_2$ of the Stein equation
\begin{equation}\label{x.14}
P_2-T_2P_2T_2^*=Y_2Y_2^* - C_2C_2^*.
\end{equation}
Let ${\mathcal S}_j$ be the lower triangular Toeplitz matrices defined by:
$$
{\mathcal S}_j=\left[\begin{array}{ccccc} s_{j,0} & 0& \ldots &0&0 \\
s_{j,1} & s_{j,0} & \ldots &0&0 \\ \vdots & \ddots & \ddots & \vdots & \vdots
\\ s_{j,r_j-2} & &\ldots & s_{j,0}&0 \\ s_{j,r_j-1} & s_{j,r_j-2} & \ldots
& s_{j,1} & s_{j,0} \end{array}\right], \qquad j=1, \ldots ,k,
$$
which are invertible because $S(w_j)=s_{j,0}\neq 0$, $j=1,\ldots, k$. Let
$$
R:={\rm diag}\,({\mathcal S}_1^{-1}, \ldots , \, {\mathcal S}_{k}^{-1}, \,
I_{2\ell})\quad\mbox{and}\quad
C_3=\left[\begin{array}{c}E \\ f({\mathcal N})\\f({\mathcal Z})\end{array}\right],
$$
where $E$ is given in (\ref{x.40}). The block structure of the matrices
$R$, $T_2$, $C_2$, $C_3$, $E$, and $Y_2$, together with the self-evident
relations
$$
{\mathcal S}_j^{-1}J_{r_j}(w_j)=J_{r_j}(w_j){\mathcal S}_j^{-1},\quad
{\mathcal S}_j^{-1}\widehat{S}_j=E_{r_j}\quad (j=1,\ldots,k),
$$
lead to
$$
RT_2=T_2R,\quad \quad RC_2=C_3\quad {\rm and }\quad RY_2=Y_2.
$$
Taking into account the last three equalities, we pre- and post-multiply
(\ref{x.14}) by $R$ and $R^*$, respectively, to conclude that the matrix
$P_3:=RP_2R^*$ is the unique solution of the Stein equation
\begin{equation}
\label{x.16}
P_3-T_2P_3T_2^*=Y_2Y_2^*-C_3C_3^*.
\end{equation}
Denote
$$
{\mathcal N}_1=\{\nu_1,\ldots ,\nu_t\}, \qquad {\mathcal Z}_1 =\{z_1, \ldots ,z_t\},
$$ $$
{\mathcal N}_2=\{\nu_{t+1},\ldots ,\nu_\ell\}, \qquad {\mathcal Z}_2 =\{z_{t+1}, \ldots
,z_\ell\}.
$$
By the hypothesis, ${\mathcal Z}_2 \cap {\mathcal W}=\emptyset.$ Let $\nu_j \rightarrow
z_j$ for $j=t+1, \ldots , \ell$. Then $$f({\mathcal N}_2)\rightarrow \frac{S}{b}({\mathcal Z}_2),
$$ and note that by the assumptions on $f$,
\begin{equation}\label{x.16a}
\frac{S}{b} (z_j)\neq f(z_j), \qquad j=t+1, \ldots ,\ell. \end{equation}
Taking limits in (\ref{x.16}) as $\nu_j \rightarrow z_j$ ($j=t+1,\ldots
,\ell$), we arrive at the equality
\begin{equation}\label{x.17}
P_4- T_3P_4T_3^*=Y_2Y_2^*- C_4C_4^*,
\qquad P_4:=\lim_{\nu_j\rightarrow z_j,\ j=t+1,\ldots ,\ell} P_3,
\end{equation}
where
$$
T_3={\rm diag}\, \left( A, D({\mathcal N}_1), D({\mathcal Z}_2), D({\mathcal Z}_1), D({\mathcal Z}_2) \right),
\quad C_4 = \left[\begin{array}{c} E \\ f({\mathcal N}_1)\\ {\displaystyle\frac{S}{b}
({\mathcal Z}_2)}\\
f({\mathcal Z}_1) \\
f({\mathcal Z}_2)\end{array} \right].
$$
By (\ref{x.16a}), the matrix
$$
L:={\rm diag} \, \left( \left(\frac{S}{b} - f\right)(z_{t+1}), \ldots ,
\left(\frac{S}{b}-f\right)(z_{\ell}) \right)
$$
is invertible. Let
\begin{equation}\label{x.18a}
T_4=\left[\begin{array}{cccc} A &0 &0 &0 \\ 0& D({\mathcal Z}_2)&0&0\\
0&0& D({\mathcal Z}_1)&0 \\ 0&0&0& D({\mathcal N}_1)\end{array}\right],\quad
C_5= \left[\begin{array}{c} E \\ G_{\ell-t} \\ f({\mathcal Z}_1) \\ f({\mathcal N}_1)
\end{array}\right],\quad
Y_3= \left[\begin{array}{c} 0 \\ G_{2t}\end{array}\right]
\end{equation}
and
$$
X=\left[\begin{array}{ccccc} I_{N-2\ell} & 0 & 0 & 0 & 0\\
0 & 0 & L^{-1} & 0& -L^{-1} \\
0 & 0 & 0& I_t & 0 \\ 0 & I_t &0 & 0&0 \end{array}\right].
$$
Since
$$
XT_3=T_4X, \quad XC_4=C_5,\quad XY_2=Y_3,
$$
pre- and post-multiplication of (\ref{x.17}) by
$X$ and by $X^*$, respectively,
leads to the conclusion that the matrix $\; P_5:=XP_4X^* \;$
is the unique solution of the Stein equation
\begin{equation}\label{x.19}
P_5-T_4P_5T_4^*=Y_3Y_3^*-C_5C_5^*. \end{equation}
By (\ref{x.18a}) and (\ref{x.19}), $P_5$ has
the form
$$ P_5=\left[\begin{array}{c|ccc} -K & Q_1^* & Q_2^*& Q_3^* \\
\hline Q_1 & R_{11}&R_{12} & R_{13}\\ Q_2 & R_{21}& R_{22} & R_{23}\\
Q_3 & R_{31}& R_{32}& R_{33}
\end{array}\right], $$ where
$K$ is the matrix given by (\ref{x.4e}), and $Q_j$, $j=1,2,3$, are solutions
of
\begin{eqnarray*}
Q_1-D({\mathcal Z}_2)Q_1A^*&= &-G_{\ell-t}E^*, \\
Q_2-D({\mathcal Z}_1)Q_2A^*&=&-f({\mathcal Z}_1)E^*,
\\
Q_3-D({\mathcal N}_1)Q_3A^*&=& -f({\mathcal N}_1)E^*, \end{eqnarray*}
respectively. Thus, denoting by $\left[Q_{\alpha}\right]_j$ the $j$th row of
$Q_{\alpha}$, $\alpha=1,2,3$, we have from the last three equalities (recall
that
$w_j=z_j$ for $j=1, \ldots ,t$):
\begin{eqnarray*}
\left[Q_1\right]_j&=& -E^*(I-z_{j+t}A^*)^{-1}, \qquad j=1, \ldots, \ell-t; \\
\left[Q_2\right]_j&=& -f_jE^*(I-w_{j}A^*)^{-1}, \qquad j=1, \ldots, t; \\
\left[Q_3\right]_j&=& -f(\nu_j)E^*(I-\nu_jA^*)^{-1}, \qquad j=1, \ldots,
t. \end{eqnarray*}
Furthermore, in view of the last three relations and (\ref{x.4d}), and
since $b(w_j)=0$, the following equalities are obtained:
\begin{eqnarray*}
\left[Q_1\right]_jK^{-1}\left[Q_1\right]_i^*&= &
\frac{1-b(z_{t+j})b(z_{t+i})^*}{1-z_{t+j}z_{t+i}^*} , \quad j,i=1, \ldots
,\ell-t,\\
\left[Q_2\right]_jK^{-1}\left[Q_1\right]_i^*&= &
\frac{f_j}{1-w_{j}z_{t+i}^*} , \quad j=1, \ldots ,t; \ i=1, \ldots ,\ell-t,\\
\left[Q_3\right]_jK^{-1}\left[Q_1\right]_i^*&= &
f(\nu_j)\,\frac{1-b(\nu_{j})b(z_{t+i})^*}{1-\nu_{j}z_{t+i}^*} ,
\quad j=1, \ldots ,t; \ i=1, \ldots ,\ell-t,\\
\left[Q_2\right]_jK^{-1}\left[Q_2\right]_i^*&= &
\frac{f_jf_i^*}{1-w_{j}w_{i}^*} , \quad j,i=1, \ldots ,t, \\
\left[Q_3\right]_jK^{-1}\left[Q_2\right]_i^*&= &
\frac{f(\nu_j)f_i^*}{1-\nu_{j}w_{i}^*} , \quad j,i=1, \ldots ,t,\\
\left[Q_3\right]_jK^{-1}\left[Q_3\right]_i^*&= &
f(\nu_j)f(\nu_i)^*\frac{1-b(\nu_{j})b(\nu_{i})^*}{1-\nu_{j}\nu_{i}^*},
\quad j,i=1, \ldots ,t.
\end{eqnarray*}
Next, the blocks of the Schur complement are computed:
\begin{eqnarray*}
\left[R_{11}+Q_1K^{-1}Q_1^*\right]_{ji}&=&
\frac{-1}{1-z_{t+j}z_{t+i}^*}+
\frac{1-b(z_{t+j})b(z_{t+i})^*}{1-z_{t+j}z_{t+i}^*}\\
&=&-\frac{b(z_{t+j})
b(z_{t+i})^*}{1-z_{t+j}z_{t+i}^*}, \\
\left[R_{21}+Q_2K^{-1}Q_1^*\right]_{ji}&= &0 , \\
\left[ R_{31}+Q_3K^{-1}Q_1^*\right]_{ji} & = &
-\frac{f(\nu_j)}{1-\nu_jz_{t+i}^*}+f(\nu_j)
\frac{1-b(\nu_j)b(z_{t+i})^*}{1-\nu_jz_{t+i}^*} \\
&= & -f(\nu_j)\frac{b(\nu_j)b(z_{t+i})^*}{1-\nu_jz_{t+i}^*}\\
&=& -\frac{S(\nu_j)b(z_{t+i})^*}{1-\nu_jz_{t+i}^*},
\end{eqnarray*} \begin{eqnarray*}
\left[R_{22}+Q_2K^{-1}Q_2^*\right]_{ji}
&=& \frac{1-f_jf_i^*}{1-w_jw_i^*}+
\frac{f_jf_i^*}{1-w_jw_i^*} = \frac{1}{1-w_jw_i^*}, \\
\left[R_{32}+Q_3K^{-1}Q_2^*\right]_{ji}&=&
\frac{1-f(\nu_j)f_i^*}{1-\nu_jw_i^*}+
\frac{f(\nu_j)f_i^*}{1-\nu_jw_i^*} = \frac{1}{1-\nu_jw_i^*}, \\
\left[ R_{33}+Q_3K^{-1}Q_3^*\right]_{ji} & = &
\frac{1-f(\nu_j)f(\nu_i)^*}{1-\nu_j\nu_i^*}+f(\nu_j)f(\nu_i)^*
\frac{1-b(\nu_j)b(\nu_i)^*}{1-\nu_j\nu_i^*} \\
&= & \frac{1-S(\nu_j)S(\nu_i)^*}{1-\nu_j\nu_i^*}\, .
\end{eqnarray*}
In the obtained Schur complement $\left[R_{ij}+Q_iK^{-1}Q_j^*
\right]_{i,j=1}^3$, we let $\nu_j\rightarrow w_j$ ($j=1,2,\ldots ,t$) and
then pre- and post-multiply by the matrix $\left[\begin{array}{cc}
b({\mathcal Z}_2)^{-1} & 0 \\ 0 & I_{2t}\end{array}\right]$ and by its adjoint,
respectively, resulting in the matrix
\begin{equation}
\label{x.20}
\left[\begin{array}{ccc} -K_{11}&0 & K_{31}^*\\
0 & K_{22} & K_{22}\\ K_{31} & K_{22} &
\left[{\displaystyle \frac{1-S(w_j)S(w_i)^*}{1-w_jw_i^*}}\right]_{j,i=1}^t
\end{array}\right],
\end{equation}
where
\begin{equation}\label{x.20a}
K_{11}= \left[\frac{1}{1-z_{t+j}z_{t+i}^*}
\right]_{j,i=1}^{\ell-t}, \ \
K_{22}=\left[\frac{1}{1-w_jw_i^*}\right]_{j,i=1}^t,
\ \ K_{31}=\left[-\frac{S(w_j)}{1-w_jz_{t+i}^*}\right]_{j,i=1}^{t,\ell-t}.
\end{equation}
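(For the eigenvalue count at the end of the proof it is worth observing that
$K_{11}$ and $K_{22}$ are positive definite: they are the Gram matrices, in
the Hardy space $H^2$, of the linearly independent kernel functions
$(1-zz_{t+j}^*)^{-1}$, $j=1,\ldots,\ell-t$, and $(1-zw_j^*)^{-1}$,
$j=1,\ldots,t$, respectively.)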
Note that the $j$-th row $[K_{31}]_j$ of $K_{31}$ can be written in the
form
$$
[K_{31}]_j=-S(w_j)G_{\ell-t}^*(I-w_jD({\mathcal Z}_2)^*)^{-1}\quad (j=1,\ldots,t).
$$
Using Lemma \ref{L:new}, in view of
the reductions made so far from $P$ to $P_5$, to prove that ${\rm{sq}_-}
P\geq N-\ell$, we only have to show that
the Schur complement ${\bf S}$ to the block
$\left[\begin{array}{cc} -K_{11}&0\\ 0 &K_{22} \end{array}\right]$ in
(\ref{x.20}) has at least $t$ negative eigenvalues, i.e., is negative
definite. This Schur complement is equal to
\begin{eqnarray}
{\bf S}&=&\left[\frac{1-S(w_j)S(w_i)^*}{1-w_jw_i^*}\right]_{j,i=1}^t
+K_{31}K_{11}^{-1}K_{31}^* -K_{22}\nonumber\\
&=&-\left[\frac{S(w_j)S(w_i)^*}{1-w_jw_i^*}\right]_{j,i=1}^t
+K_{31}K_{11}^{-1}K_{31}^*\nonumber\\
&=&-\left[\frac{S(w_j)S(w_i)^*}{1-w_jw_i^*}-[K_{31}]_jK_{11}^{-1}
\left([K_{31}]_i\right)^*\right]_{j,i=1}^t\nonumber \\
&=&-\left[\frac{S(w_j)S(w_i)^*}{1-w_jw_i^*}\right.\nonumber\\
&&\quad \left. -S(w_j)G_{\ell-t}^*(I-w_jD({\mathcal Z}_2)^*)^{-1}K_{11}^{-1}
(I-w_i^*D({\mathcal Z}_2))^{-1}G_{\ell-t}S(w_i)^*\right]_{j,i=1}^t.
\label{x.22}
\end{eqnarray}
Introduce the rational function
\begin{equation}
\label{x.20b}
\vartheta(z)=1+(z-1)G_{\ell-t}^*(I-zD({\mathcal Z}_2)^*)^{-1}K_{11}^{-1}
(I-D({\mathcal Z}_2))^{-1}G_{\ell-t}
\end{equation}
and note that $K_{11}$ satisfies the Stein equation
$$
K_{11}-D({\mathcal Z}_2)K_{11}D({\mathcal Z}_2)^*=G_{\ell-t}G_{\ell-t}^*.
$$
Upon setting $k=\ell-t$ and $r_1=\ldots=r_k=1$ and
picking points $z_{t+1},\ldots,z_\ell$ instead of $w_1,\ldots,w_k$
in Lemma \ref{L:x.2}, we get $A=D({\mathcal Z}_2)$, $K=K_{11}$, $E=G_{\ell-t}$
and thus, by Lemma
\ref{L:x.2}, we conclude that
\begin{equation}
\label{x.22c}
\vartheta(z)=e^{i\alpha} \prod_{j=1}^{\ell-t}
\frac{z-z_{t+j}}{1-zz_{t+j}^*},
\end{equation}
where $\alpha\in{\mathbb R}$ is chosen to provide the normalization
$\vartheta(1)=1$, and moreover, relation (\ref{x.4d}) in the present
setting takes the form
\begin{equation}
\label{x.22a}
1-\vartheta(z)\vartheta(w)^*=(1-zw^*)G_{\ell-t}^*(I-zD({\mathcal Z}_2)^*)^{-1}
K_{11}^{-1}(I-w^*D({\mathcal Z}_2))^{-1}G_{\ell-t}.
\end{equation}
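For instance (a consistency check, not needed for the proof), if $\ell-t=1$
then $D({\mathcal Z}_2)=z_{t+1}$, $K_{11}=(1-|z_{t+1}|^2)^{-1}$ and, by the Stein
equation above, $|G_{\ell-t}|^2=1$; a direct computation then shows that
(\ref{x.20b}) gives
$$
\vartheta(z)=\frac{1-z_{t+1}^*}{1-z_{t+1}}\cdot\frac{z-z_{t+1}}{1-zz_{t+1}^*},
$$
and that (\ref{x.22a}) becomes the elementary identity
$$
1-\vartheta(z)\vartheta(w)^*=\frac{(1-zw^*)\left(1-|z_{t+1}|^2\right)}
{(1-zz_{t+1}^*)(1-w^*z_{t+1})}.
$$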
It follows from (\ref{x.22c}) that $Z(\vartheta)={\mathcal Z}_2$ (more precisely,
$\vartheta$ is of degree $\ell-t$ and has simple poles at $z_{t+1},
\ldots,z_\ell$). Since ${\mathcal Z}_1\cap{\mathcal Z}_2=\emptyset$, it follows that
$\vartheta(w_j)\neq 0$, $j=1, \ldots, t$. Upon making use of
(\ref{x.22a}), we rewrite (\ref{x.22}) as
\begin{equation}
\label{x.23}
{\bf S}= -\left[\frac{S(w_j)\vartheta(w_j)\vartheta(w_i)^*S(w_i)^*}{1
-w_jw_i^*}\right]_{j,i=1}^t
\end{equation}
and conclude, since $S(w_j)\vartheta(w_j)\neq 0$ ($j=1, \ldots, t$)
that ${\bf S}$ is negative definite. Summarizing, we have
\begin{eqnarray*}
{\rm{sq}_-} P\geq {\rm{sq}_-} P_5&=&{\rm{sq}_-} (-K)+{\rm{sq}_-}
\left[\begin{array}{cc} -K_{11}&0\\ 0 &K_{22} \end{array}\right]
+{\rm{sq}_-} {\bf S}\\ &=&(N-2\ell)+(\ell-t)+t=N-\ell,
\end{eqnarray*}
which completes the proof.{\eproof}
\section{Theorems \ref{T:1.4} and \ref{T:1.new}: proofs}
\setcounter{equation}{0}
{\bf Proof of Theorem \ref{T:1.4}:}
The implication {\bf 3} $\Rightarrow$ {\bf 1} was already proved,
and
{\bf 1} $\Rightarrow$ {\bf 3} follows from the definition of ${\mathcal S}_{\kappa}$.
Assume {\bf 2} holds, and let $\tilde{f}$ be the standard function that
extends $f$ as stated in {\bf 2}. By Theorem \ref{T:3.1} $\tilde{f}\in
{\mathcal S}_{\kappa}$. It is obvious from the definition of ${\mathcal S}_{\kappa}$ that
we have $f\in {\mathcal S}_{\kappa'}$ for some $\kappa'\leq\kappa$. However,
Remark \ref{R:last} implies that in fact $\kappa'=\kappa$, and {\bf 1}
follows.
It remains to prove {\bf 3} $\Rightarrow$ {\bf 2}.
Assume that $f$ satisfies {\bf 3}. Arguing as in the proof of
{\bf 3.} $\Rightarrow$ {\bf 1}, we obtain
the meromorphic function $F(z)$ given by (\ref{2.11}) such that
$F(z)=f(z)$ for
$z\in{\mathbb D}\setminus({\mathcal Z}\cup{\Lambda})$ (in the notation of the proof of
{\bf 3} $\Rightarrow$ {\bf 1}). By
the already proved part of Theorem \ref{T:1.4}, we know that $f\in
{\mathcal S}_{\kappa}$. By the second statement of
Lemma \ref{L:last1}, $\tilde{F}\in{\mathcal S}_{\kappa'}$ for some $\kappa'\leq
\kappa +n$, where $\tilde{F}$ is the restriction of $F$ to the set
${\mathbb D}\setminus({\mathcal Z}\cup{\Lambda})$. By continuity (statement 3. of Lemma \ref{L:new}),
the function $F$, considered on its natural domain of definition
${\mathbb D}\setminus P(F)$, where $P(F)$ is the set of poles of $F$, also
belongs to ${\mathcal S}_{\kappa'}$. Using Theorem \ref{T:1.2b}, write
$F=\frac{S}{B}$, where $S$ is a Schur function and $B$ is a Blaschke
product of degree $\kappa'$, and $S$ and $B$ have no common zeros. Thus, $f$ admits an
extension to a standard function $\tilde{f}$ with $\kappa'$ poles and
$\rho$ jumps, for some nonnegative integer $\rho$. By
Theorem \ref{T:3.1}
and Remark \ref{R:last} we have
\begin{equation}\label{aga}
{\bf k}_m (f)=\kappa'+\rho, \qquad
{\rm for} \ \ {\rm every} \ \ m\geq \kappa'+ 2\rho. \end{equation}
On the other hand, since $f\in{\mathcal S}_{\kappa}$, we have ${\bf k}_n(f)=\kappa$
for all sufficiently large $n$. Comparing with (\ref{aga}), we see that
$\kappa' +\rho=\kappa$, and {\bf 2} follows.
{\bf Proof of Theorem \ref{T:1.new}:}
If $f\in {\mathcal S}_{\kappa}$, and $\tilde{f}$ is the standard function that extends
$f$ (as in statement {\bf 2} of Theorem \ref{T:1.4}), then
by Theorem \ref{T:3.1}
$$ N(f)\leq N(\tilde{f})\leq 2\kappa.$$
If
${\bf k}_{2\kappa}(f)={\bf k}_{2\kappa+3}(f)=\kappa$, then obviously {\bf 3}
of Theorem \ref{T:1.4} holds, and $f\in {\mathcal S}_{\kappa}$ by Theorem \ref{T:1.4}.
\section{A local result}
\setcounter{equation}{0}
Let $\Omega\subseteq{\mathbb D}$ be an open set, and let
$$
{\bf k}_n(f;\Omega):=\max_{z_1,\ldots,z_n\in\Omega}{\rm{sq}_-} P_n(f; \,
z_1,\ldots,z_n),
$$
where the domain of definition of the function $f$ contains
$\Omega$.
A known local result
(see \cite[Theorem 1.1.4]{ADRS}, where it is proved in the more general
setting of operator valued functions)
states that for a meromorphic
function $f$ from
the class ${\mathcal S}_\kappa$ and for every open set $\Omega\subseteq{\mathbb D}$
that does not contain any poles of $f$, there is
an integer $n$ such that ${\bf k}_n(f;\Omega)=\kappa$. However, the minimal
such $n$
can be arbitrarily large, in contrast with Theorem \ref{T:3.1}, as the
following example shows.
\begin{Ex}
{\rm Fix a positive integer $n$, and let
$$f_n(z)={\displaystyle \frac{z^n(2-z)}{2z-1}}
=\frac{S(z)}{b(z)}, $$
where $S(z)=z^n$ is a Schur function, and $b(z)=\frac{z-1/2}{1-z/2}$ is a
Blaschke factor. By Theorem \ref{T:1.2b},
$f_n\in {\mathcal S}_1$.
Thus, the kernel
$$
K(z,w):=\frac{1-f_n(z)f_n(w)^*}{1-z{w}^*}=1+z{w}^*+\ldots
+z^{n-1}({w}^*)^{n-1}-\frac{3z^n(w^*)^n}{(2z-1)(2{w}^*-1)}
$$
has one negative square on ${\mathbb D}\setminus\{\frac{1}{2}\}$.
Select $n$ distinct points ${\mathcal Z}=\{z_1, \ldots ,z_n\}$ (in some order) near
zero, and multiply
the Pick matrix $P_n(f;z_1, \ldots ,z_n)$
by $\Phi({\mathcal Z})$ on the left and by $\Phi({\mathcal Z})^*$ on the right, where
$\Phi({\mathcal Z})$ is defined in (\ref{x.1}).
By Lemma \ref{L:x.1},
\begin{equation}\label{xyz}
\lim_{z_1, \ldots ,z_n\rightarrow 0} \Phi({\mathcal Z})
P_n(f;z_1, \ldots ,z_n) \Phi({\mathcal Z})^*=
\left[\frac{1}{i!}\frac{1}{j!} \left(\frac{\partial^{i+j}}{\partial z^i\partial
({w}^*)^j} K(z,w)\right)_{z,w=0}\right]_{i,j=0}^{n-1},
\end{equation}
which is the identity matrix (the last term in the formula for $K(z,w)$
contributes only to the coefficients of $z^i({w}^*)^j$ with $i,j\geq n$).
Therefore, by Lemma \ref{L:new}
there exists a $\delta_n>0$ such that
$P_n(f;z_1, \ldots ,z_n)\geq 0$ if $|z_1|, \ldots, |z_n|< \delta_n$.
On the
other hand, since
$$ \frac{1}{(n!)^2}\left(\frac{\partial^{2n}}{\partial z^n\partial
({w}^*)^n} K(z,w)\right)_{z,w=0}=-3, $$
selecting an ordered set of $n+1$ distinct points ${\mathcal Z}'=\{z_1, \ldots
,z_{n+1}\}$ in a neighborhood
of zero, analogously to (\ref{xyz}) we obtain
\begin{eqnarray*}
\lim_{z_1, \ldots ,z_{n+1}\rightarrow 0} \Phi({\mathcal Z}')
P_{n+1}(f;z_1, \ldots ,z_{n+1}) \Phi({\mathcal Z}')^*&=&
\left[\frac{1}{i!}\frac{1}{j!} \left(\frac{\partial^{i+j}}{\partial z^i\partial
({w}^*)^j}
K(z,w)\right)_{z,w=0}\right]_{i,j=0}^{n}
\\ &
=&\left[\begin{array}{cc}I_n &0 \\ 0 &
-3\end{array}\right]. \end{eqnarray*}
By Lemma \ref{L:new} again, there exists a $\delta'_n>0$ such that
$P_{n+1}(f;z_1, \ldots ,z_{n+1})$ has exactly one negative eigenvalue if
$|z_1|, \ldots, |z_{n+1}|< \delta'_n$. (Note that
$P_{n+1}(f;z_1, \ldots ,z_{n+1})$ cannot have more than one negative eigenvalue
because $f\in {\mathcal S}_{1}$.) }
\label{E:3.1a}
\end{Ex}
Theorem \ref{T:1.4}, or more precisely its proof, allows us to extend the local
result to
the more general classes
${\mathcal S}_{\kappa}$:
\begin{Tm} If $f\in {\mathcal S}_{\kappa}$ is defined
on ${\mathbb D}\setminus \Lambda$, where $\Lambda$ is a discrete set, then for every
open set $\Omega\subseteq {\mathbb D}\setminus\Lambda$ there exists a positive integer
$n$
such that
\begin{equation}\label{5.z}
{\bf k}_{n}(f;\Omega)=q + \ell_{\Omega},
\end{equation}
where
$\ell_{\Omega}$ is the
number of
jump points of $f$
that belong to $\Omega$,
and $q$ is the number of poles of $f$ in ${\mathbb D}$, counted with multiplicities.
\end{Tm}
Note that in view of {\bf 1} $\Leftrightarrow$ {\bf 2} of Theorem
\ref{T:1.4}, the right hand side of (\ref{5.z}) is equal to
$\kappa -\ell_{\not\in\Omega}$,
where $\ell_{\not\in\Omega}$
is the
number of
jump points of $f$
that do not belong to $\Omega$. Therefore,
since ${\bf k}_{n}(f;\Omega)$ is completely
independent of the values of $f$ at jump points outside $\Omega$,
it is easy to see that
\begin{equation}\label{5.z0}
{\bf k}_{m}(f;\Omega)\leq q + \ell_{\Omega} \qquad
{\rm for} \ \ {\rm every} \ \ {\rm integer}\ \ m>0. \end{equation}
{\bf Proof}: We may assume by Theorem \ref{T:1.4} that $f$ is a standard
function. We may also assume that $f$ has no jumps outside $\Omega$,
because the values of $f$ at jump points outside $\Omega$ do not
contribute to
${\bf k}_{n}(f;\Omega)$.
Let $w_1, \ldots ,w_k$ be the distinct poles of $f$ in $\Omega$ (if $f$ has no
poles in $\Omega$, the subsequent argument is simplified accordingly), of
orders $r_1, \ldots, r_k$, respectively, and let $z_1, \ldots, z_{\ell}$
be the jumps of $f$. Then analogously to (\ref{x.5}) we have
\begin{equation}
\label{5.z1}
f(z)=\left\{\begin{array}{cl} {\displaystyle \frac{\widehat{S}(z)}{b(z)}} &
{\rm if}\, z\not\in \{z_1,\ldots,z_{\ell}, w_{t+1},\ldots ,w_k\} \\
f_j & {\rm if }\, z=z_j, j=1, \ldots , \ell \end{array}\right.
\end{equation}
where $b(z)$ is the Blaschke product given by (\ref{x.4b}), having zeros
$w_1, \ldots ,w_k$ of orders $r_1, \ldots, r_k$, respectively. We assume that
$w_j=z_j$ for $j=1, \ldots , t$, $\{w_{t+1}, \ldots ,w_k\}\cap
\{z_{t+1},\ldots
,z_{\ell}\}=\emptyset$, and $\widehat{S}(z_j)/b(z_j)\neq f_j$ for $j=t+1,
\ldots
,\ell.$ The function $\widehat{S}(z)$ is of the form
$\widehat{S}(z)=S(z)/b_{{\rm
out}}(z)$,
where $S(z)$ is a Schur function that does not vanish at $w_1, \ldots ,w_k$,
and $b_{{\rm out}}(z)$ is the Blaschke product whose zeros coincide with the
poles of $f$ outside $\Omega$, with matched orders.
Let $N=2\ell+\sum_{j=1}^kr_j$, and select $N$ distinct points
as in (\ref{x.5a}), with the additional proviso that all these points belong to
$\Omega$.
Select also $n$ distinct points $\Xi=\{\xi_1,\ldots ,\xi_n\}$,
$\xi_j\in\Omega$,
disjoint from (\ref{x.5a}), in a vicinity of some $z_0\in \Omega$; we
assume that $f$ is analytic at $z_0$. The number $n$ of
points $\xi_j$, and further specifications concerning the set $\Xi$, will be
determined later.
Let
$$ P=P_{N+n}(f; \mu_{j,i}, \nu_1, \ldots, \nu_\ell, z_1, \ldots, z_\ell, \xi_1,
\ldots , \xi_n ). $$
We now repeat the steps between (\ref{x.5a}) and (\ref{x.20}) of the proof of
Theorem \ref{T:3.1}, applied to the top left $N \times N$ corner of $P$.
As a result, we obtain the following matrix:
\begin{equation}\label{5.z2}
P_6=\left[\begin{array}{cccc}
-K_{11} & 0 & K_{31}^* & K_{41}^*\\
0& K_{22} &K_{22} & K_{42}^* \\
K_{31} & K_{22} &
K_{33} & K_{43}^* \\
K_{41}&K_{42} & K_{43} & K_{44} \end{array}\right], \end{equation}
where $K_{11}$, $K_{22}$, and $K_{31}$
are as in (\ref{x.20a}), and
\begin{eqnarray*}
K_{41}&=&\left[{\displaystyle-\frac{\widehat{S}(\xi_j)}{1-\xi_jz_{t+i}^*}}
\right]_{j,i=1}^{n,\ell-t}, \quad
K_{42}=\left[\frac{1}{1-\xi_jw^*_i}\right]_{j,i=1}^{n,t},
\end{eqnarray*}
\begin{eqnarray*}
K_{43}&=&\left[{\displaystyle\frac{1-\widehat{S}(\xi_j)\widehat{S}(w_i)^*}
{1-\xi_jw^*_i}}\right]_{j,i=1}^{n,t}, \quad
K_{44}=\left[{\displaystyle\frac{1-\widehat{S}(\xi_j)\widehat{S}(\xi_i)^*}
{1-\xi_j\xi^*_i}}\right]_{j,i=1}^{n}, \\[3mm]
K_{33}&=&\left[{\displaystyle\frac{1-
\widehat{S}(w_j)\widehat{S}(w_i)^*}{1-w_jw^*_i}}
\right]_{j,i=1}^{t}.
\end{eqnarray*}
The sizes of matrices $K_{11}$, $K_{22}$, $K_{33}$, and $K_{44}$ are
$(\ell -t)\times (\ell -t)$, $t \times t$, $t \times t$, and $n \times n$,
respectively.
It follows that
\begin{equation}\label{5.z2a}
{\rm{sq}_-} P \geq N-2\ell+ {\rm{sq}_-} P_6= \sum_{j=1}^k r_j + {\rm{sq}_-} P_6.
\end{equation}
Take the Schur complement ${\bf S}$
to the block
$\left[\begin{array}{cc} -K_{11}&0\\ 0 &K_{22} \end{array}\right]$ in
(\ref{5.z2}):
$$ {\bf S}=\left[\begin{array}{cc} {\bf S}_{11} &
{\bf S}_{12} \\ {\bf S}_{21} & {\bf S}_{22}\end{array}\right],
$$
where
$$
{\bf S}_{22}=K_{44}-K_{42}K_{22}^{-1}K_{42}^*+K_{41}K_{11}^{-1}K_{41}^*.
$$
We have
\begin{equation}\label{5.z2b}
{\rm{sq}_-} P_6= \ell -t +{\rm{sq}_-} {\bf S}. \end{equation}
Analogously to formulas (\ref{x.22}) and (\ref{x.23}) we obtain
\begin{equation}
\label{5.z4}
{\bf S}_{11}=
-\left[\frac{\widehat{S}(w_j)\vartheta(w_j)\vartheta(w_i)^*
\widehat{S}(w_i)^*}{1 -w_jw_i^*}\right]_{j,i=1}^t,\quad
{\bf S}_{21}=
-\left[\frac{\widehat{S}(\xi_j)\vartheta(\xi_j)\vartheta(w_i)^*
\widehat{S}(w_i)^*}{1-\xi_jw_i^*}\right]_{j,i=1}^{n,t},
\end{equation}
where the rational function $\vartheta$ is given by (\ref{x.20b}).
Note that $\widehat{S}(w_i)\vartheta(w_i)\neq 0$, for $i=1, \ldots ,t$.
Multiply ${\bf S}$ by the matrix
\begin{equation}
\label{5.z5}
{\rm diag}\, \left(\frac{1}{\widehat{S}(w_1)\vartheta(w_1)}, \ldots ,
\frac{1}{\widehat{S}(w_t)\vartheta(w_t)}, I\right) \end{equation}
on the left and by the adjoint of (\ref{5.z5}) on the right. We arrive at
\begin{equation}\label{5.z6}
P_7=\left[\begin{array}{cc}
-\left[{\displaystyle\frac{1}{1-w_jw_i^*}}\right]_{j,i=1}^t &
\left(-\left[{\displaystyle\frac{\widehat{S}(\xi_j)\vartheta(\xi_j)}{1-
\xi_jw_i^*}}\right]_{j,i=1}^{n,t}\right)^*\\
-\left[{\displaystyle\frac{\widehat{S}(\xi_j)\vartheta(\xi_j)}{1-
\xi_jw_i^*}}\right]_{j,i=1}^{n,t} &
K_{44}-K_{42}K_{22}^{-1}K_{42}^*+K_{41}K_{11}^{-1}K_{41}^* \end{array}\right].
\end{equation}
Finally, we take the Schur complement, denoted $P_8$, to the block
$-\left[{\displaystyle \frac{1}{1-w_jw_i^*}}\right]_{j,i=1}^t$ of $P_7$.
Then
\begin{equation}\label{5.z2c}
{\rm{sq}_-} {\bf S}=t+ {\rm{sq}_-} P_8. \end{equation}
The $(j,i)$ entry
of $P_8$ is
\begin{eqnarray*}
&&\frac{1-\widehat{S}(\xi_j)\widehat{S}(\xi_i)^*}{1-\xi_j\xi_i^*}
-G_{t}^*(I-\xi_jD({\mathcal W}_1)^*)^{-1}K_{22}^{-1}(I-\xi_i^*D({\mathcal W}_1))^{-1}G_{t}\\
&&+ \widehat{S}(\xi_j)G_{\ell-t}^*
(I-\xi_jD({\mathcal Z}_2)^*)^{-1}K_{11}^{-1}(I-\xi_i^*D({\mathcal Z}_2))^{-1}G_{\ell-t}
\widehat{S}(\xi_i)^* \\
&&+\widehat{S}(\xi_j)\vartheta(\xi_j)
G_{t}^*(I-\xi_jD({\mathcal W}_1)^*)^{-1}K_{22}^{-1}
(I-\xi_i^*D({\mathcal W}_1))^{-1}G_{t}\vartheta(\xi_i)^*\widehat{S}(\xi_i)^*.
\end{eqnarray*}
Since
$$
K_{22} -D({\mathcal W}_1)K_{22}D({\mathcal W}_1)^*=G_{t}G_{t}^*,
$$
we conclude (as it was done for the function $\vartheta$ given in
(\ref{x.20b})) that the rational function
$$
\widehat{\vartheta}(z)=1 +(z-1) G^*_{t}
(I-zD({\mathcal W}_1)^*)^{-1}K_{22}^{-1}
(I-D({\mathcal W}_1))^{-1}G_{t}
$$
is inner, and its set of zeros is $Z(\widehat{\vartheta})={\mathcal W}_1$.
Using (\ref{x.22a}) and the formula
$$
1-\widehat{\vartheta}(z)\widehat{\vartheta}(w)^*=(1-zw^*)G^*_{t}
(I-zD({\mathcal W}_1)^*)^{-1}K_{22}^{-1}(I-w^*D({\mathcal W}_1))^{-1}G_{t}
$$
similar to (\ref{x.22a}), the expression for the $(j,i)$ entry of $P_8$ takes
the form
\begin{eqnarray*}
&&\frac{1-\widehat{S}(\xi_j)\widehat{S}(\xi_i)^*}{1-\xi_j\xi_i^*}-
\frac{1-\widehat{\vartheta}(\xi_j)\widehat{\vartheta}(\xi_i)^*}
{1-\xi_j\xi_i^*}+\widehat{S}(\xi_j)
\frac{1-\vartheta(\xi_j)\vartheta(\xi_i)^*}{1-\xi_j\xi_i^*}
\widehat{S}(\xi_i)^*\\
&&+\widehat{S}(\xi_j)\vartheta(\xi_j)
\frac{1-\widehat{\vartheta}(\xi_j)\widehat{\vartheta}(\xi_i)^*}{1-\xi_j\xi_i^*}
\vartheta(\xi_i)^*\widehat{S}(\xi_i)^*\\
&&= \widehat{\vartheta}(\xi_j)
\frac{1-
\widehat{S}(\xi_j)\vartheta(\xi_j)\vartheta(\xi_i)^*\widehat{S}(\xi_i)^*}
{1-\xi_j\xi_i^*}\widehat{\vartheta}(\xi_i)^*.
\end{eqnarray*}
We now assume that the point $z_0\in\Omega$ in a neighborhood of which the set
$\Xi$ is selected is such that $\widehat{\vartheta}(z_0)\neq 0$.
Then we may assume that $\widehat{\vartheta}(\xi_j)\neq 0$, $j=1, \ldots,
n$. Multiply $P_8$ on the left by the matrix
${\rm diag}\, (\widehat{\vartheta}(\xi_1)^{-1}, \ldots ,
\widehat{\vartheta}(\xi_n)^{-1})$ and on the right by
its adjoint, resulting in the matrix
$$
P_9=\left[\frac{1-T(\xi_j)T(\xi_i)^*}{1-\xi_j\xi_i^*}\right]_{j,i=1}^n,
$$
where the function $T(z)$ is given by $T(z)=\widehat{S}(z)\vartheta(z)$.
We have
\begin{equation}\label{5.z2d}
{\rm{sq}_-} P_8={\rm{sq}_-} P_9. \end{equation}
Note that $T(z)$ is a meromorphic function with no poles in $\Omega$.
By \cite[Theorem 1.1.4]{ADRS}, there exist a positive integer $n$ and
points $\xi_1, \ldots , \xi_n$ in a neighborhood of $z_0$ such that
$P_9$ has $q - \sum_{j=1}^k r_j$ (the number of poles of $T(z)$) negative
eigenvalues.
Using this choice of $\xi_j$, and
combining (\ref{5.z2a}), (\ref{5.z2b}), (\ref{5.z2c}), and
(\ref{5.z2d}), we obtain
$$ {\rm{sq}_-} P\geq q+\ell. $$
Now equality (\ref{5.z}) follows in view of (\ref{5.z0}).
{\eproof}
\section{An open problem}
\setcounter{equation}{0}
Fix an integer $k\geq 3$.
An open set $D\subseteq {\mathbb D}$ is said to have the $k$-{\em th
Hindmarsh property} if every function $f$ defined on $D$ and such that
the matrices $P_k(f;z_1,\ldots ,z_k)$
are positive semidefinite for every $k$-tuple of points
$z_1, \ldots ,z_k\in D$, admits a (necessarily unique) extension
to a Schur function (defined on ${\mathbb D}$). Theorem \ref{T:1.2}
shows that
${\mathbb D}$ has the $3$-rd Hindmarsh property.
Example \ref{E:3.1a} shows that the open disk of radius $\delta_n$ centered at
the origin
does not have the $n$-th Hindmarsh property, if $\delta_n$ is sufficiently
small.
Denote by ${{\mathcal H}}_k$ the collection of all sets with the $k$-th
Hindmarsh
property. Clearly,
$$ {{\mathcal H}}_3 \subseteq {{\mathcal H}}_4 \subseteq \cdots \subseteq {{\mathcal H}}_k
\subseteq \cdots $$
\begin{Pn}
Let $D\in {{\mathcal H}}_k$. If ${\mathcal Z}\subset D$ is a discrete set in $D$, then
$D\setminus {\mathcal Z}$ also has the $k$-th Hindmarsh property.
\end{Pn}
{\bf Proof.} Let $f$ be a function with the domain of definition
$D\setminus {\mathcal Z}$ and such that $P_k(f;z_1,\ldots ,z_k)\geq 0$
for every set of distinct points $z_1, \ldots ,z_k\in D\setminus {\mathcal Z}$.
In particular,
$P_3(f;z_1,z_2,z_3)\geq 0$
for every triple of distinct points $z_1, z_2,z_3\in D\setminus {\mathcal Z}$.
By Hindmarsh's theorem, $f$ is analytic on $D\setminus {\mathcal Z}$.
Also, $$|f(z)|^2=(|f(z)|^2-1)+1=-P_1(f;z)(1-|z|^2)+1 \leq 1 $$
for every $z\in D\setminus {\mathcal Z}$. Thus, $f$ admits an analytic
continuation to a function $\widehat{f}$ on $D$. By continuity,
$P_k(\widehat{f};z_1,\ldots ,z_k)\geq 0$ for every $k$-tuple of
distinct points $z_1,\ldots ,z_k \in D$. Since $D\in {{\mathcal H}}_k$,
$\widehat{f}$ admits an extension to a Schur function.
{\eproof}
\begin{Cy} Let $D_0={\mathbb D}$, $D_j=D_{j-1}\setminus {\mathcal Z}_{j-1}$, $j=1,2,
\ldots$, where ${\mathcal Z}_{j-1}$ is a discrete set in $D_{j-1}$. Then all the
sets $D_j$, $j=1,2,\ldots$, have the $3$-rd Hindmarsh property.
\end{Cy}
Using the sets with the Hindmarsh property,
the implication {\bf 3.} $\Rightarrow$ {\bf 1.} of Theorem \ref{T:1.4}
can be extended to a larger class of functions, as follows:
\begin{Tm}
Let $f$ be defined on $D$, where $D\in {{\mathcal H}}_p$. Then
$f$ belongs to ${\mathcal S}_\kappa (D)$ if and only if
\begin{equation}
{\bf k}_n(f)={\bf k}_{n+p}(f)=\kappa
\label{1.8'}
\end{equation}
for some integer $n$. \label{T:last}
\end{Tm}
The definition of ${\mathcal S}_\kappa (D)$ is the same as ${\mathcal S}_\kappa$, with the only
difference being that ${\mathcal S}_\kappa (D)$ consists of functions defined on $D$.
The proof of Theorem \ref{T:last} is essentially the same as that of
{\bf 3.} $\Rightarrow$ {\bf 1.} of Theorem \ref{T:1.4}.
\begin{Pb}
Describe the structure of sets with the $k$-th Hindmarsh property.
\end{Pb}
In the general case this seems to be a very difficult unsolved problem.
We do not address this problem in the present paper.
{\bf Acknowledgement}. The research of LR was supported in
part by an
NSF grant.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\begin{thebibliography}{10}
\bibitem{AAK}
V. M. Adamyan, D. Z. Arov, and M. G. Kre{\u\i}n.
\emph{Analytic properties of the Schmidt pairs of a Hankel operator and the
generalized Schur-Takagi problem}, (Russian) Mat. Sb. (N.S.) \textbf{86(128)}
(1971),
34--75.
\bibitem{AY} J.~Agler and N.~J.~Young. \emph{Functions that are almost
multipliers
of Hilbert function spaces}, Proc. of London Math. Soc., \textbf{76}
(1998), 453--475.
\bibitem{akh}
N.~I.~Akhiezer. \emph{On a minimum problem in function theory and the
number of
roots of an algebraic equation inside the unit disc},
{I}zv. {A}kad. {N}auk {S}{S}{S}{R} {M}at. {E}stestv. {N}auk {\bf 9}
(1930), 1169--1189, English transl. in:
{\em Topics in interpolation theory}, Oper. Theory Adv. Appl., {\bf 95},
19--35.
\bibitem{ADRS} D.~Alpay, A.~Dijksma, J.~Rovnyak, and H. de Snoo.
\emph{Schur functions, operator colligations and reproducing kernel Pontryagin
spaces}, OT96, Birkh\"{a}user, 1997.
\bibitem{aron}
N.~Aronszajn. \emph{ Theory of reproducing kernels},
Trans. {A}mer. {M}ath. {S}oc., \textbf{68} (1950), 337--404.
\bibitem{bgr} J.~A.~Ball, I.~Gohberg, and L.~Rodman.
\emph{Interpolation of rational matrix functions},
OT45, Birkh\"{a}user Verlag, 1990.
\bibitem{BH} J. A. Ball and J. W. Helton. \emph{Interpolation problems of
Pick--Nevanlinna and Loewner type for meromorphic matrix functions:
parametrizations of the set of all solutions}, Integral Equations and
Operator Theory {\bf 9} (1986), 155--203.
\bibitem{B} R.~Bhatia. \emph{Matrix analysis}, Springer--Verlag, New York,
1996.
\bibitem{BR} V. Bolotnikov and L. Rodman.
\emph{Krein--Langer factorizations via pole triples}, Integral
Equations and Operator Theory, to appear.
\bibitem{DLS} A. Dijksma, H. Langer, and H. S. de Snoo.
\emph{Characteristic functions of unitary operator colligations in
$\Pi_{\kappa}$ spaces}, Operator Theory: Advances and Applications,
{\bf 19} (1986), 125--194.
\bibitem{D} W.~F.~Donoghue, Jr.
\emph{Monotone matrix functions and analytic continuation}, Springer--Verlag,
New York--Heidelberg, 1974.
\bibitem{gol} L. B. Golinskii. \emph{A generalization of the matrix
{N}evanlinna--{P}ick
problem}, Izv. Akad. Nauk Armyan. SSR Ser.
Mat. \textbf{18} (1983), 187--205. (Russian).
\bibitem{hind}
A.~Hindmarsh. \emph{Pick conditions and analyticity}, Pacific J. Math.
\textbf{27} (1968), 527--531.
\bibitem{kl}
M.~G.~Kre{\u\i}n and H.~Langer.
\emph{\"{U}ber die verallgemeinerten {R}esolventen und die
charakteristische {F}unktion eines isometrischen {O}perators
im {R}aume $\Pi \sb{\kappa }$},
Colloq. Math. Soc. J\'anos Bolyai {\bf 5} (1972), 353--399.
\bibitem{LT}
P.~Lancaster and M.~Tismenetsky. \emph{The theory of matrices with
applications,} 2nd Edition, Academic Press, 1985.
\bibitem{N} A. A. Nudelman. \emph{A generalization of classical
interpolation problems}, Dokl. Akad. Nauk. SSSR {\bf 4} (1981), 790--793.
(Russian).
\end{thebibliography}
\parbox[t]{7 cm}{
Department of Mathematics \\
P. O. Box 8795 \\
The College of William and Mary\\
Williamsburg VA 23187-8795, USA \\
vladi@@math.wm.edu, sykhei@@wm.edu, lxrodm@@math.wm.edu }
\end{document}
\begin{document}
\title{The Penrose transform in quaternionic geometry}
\author{Radu Pantilie}
\email{\href{mailto:[email protected]}{[email protected]}}
\address{R.~Pantilie, Institutul de Matematic\u a ``Simion~Stoilow'' al Academiei Rom\^ane,
C.P. 1-764, 014700, Bucure\c sti, Rom\^ania}
\subjclass[2010]{Primary 53C28, Secondary 53C26}
\keywords{the Penrose transform, twistor theory, quaternionic geometry}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\newtheorem{rem}[thm]{Remark}
\newtheorem{exm}[thm]{Example}
\numberwithin{equation}{section}
\begin{abstract}
We study the Penrose transform for the `quaternionic objects' whose twistor spaces are complex manifolds
endowed with locally complete families of embedded Riemann spheres with positive normal bundles.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section*{Introduction}
\indent
The obvious aim, when trying to solve an equation, is to find all the solutions, explicitly.
However, one is usually satisfied with a subclass of solutions and a procedure by which
all the solutions can be obtained from those in the subclass.\\
\indent
The (arche)typical example is given by the Laplacian on $\R^2(=\!\C)$, where any complex-valued harmonic
function is, locally, the sum of a holomorphic and an anti-holomorphic function.\\
\indent
On $\R^3$, the situation was settled, essentially, in \cite{Whi-1903}\,: for any complex-valued
harmonic function $f$\,, locally defined on $\R^3$, there exists a family of maps
$\phi_t:\R^3\to\C$, with $t\in I$\,, and $I\subseteq\R$ a closed interval, such that $\phi_t=\psi_t\circ\p_t$\,, with
$\p_t:\R^3\to\C$ orthogonal projections, $\psi_t$ locally defined holomorphic functions on $\C$, and
\begin{equation} \label{e:Penrose_prehistory}
f=\int_I\phi_t\dif\!t\;.
\end{equation}
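\indent
For instance (we recall this only as an elementary illustration), one may take
$I=[0,2\pi]$ and $\p_t(x,y,z)=z+{\rm i}(x\cos t+y\sin t)$\,; since
$1+({\rm i}\cos t)^2+({\rm i}\sin t)^2=0$\,, each $\phi_t=\psi_t\circ\p_t$ is harmonic and,
consequently, so is the right hand side of \eqref{e:Penrose_prehistory}\,.\\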
\indent
On $\R^4$, with the Lorentzian signature, this was done by R.~Penrose (see \cite{Pen-69} and the references therein).
Up to a complexification (in Twistor Theory, the objects are usually real-analytic) we may assume Riemannian signature.\\
\indent
The procedure by which all of the harmonic functions on $\R^4$ can be found is given, again, by \eqref{e:Penrose_prehistory}\,,
whilst the particular class of solutions is formed of all the functions $\phi$ for which there exists a (positive) K\"ahler structure on $\R^4$,
with respect to which $\phi$ is holomorphic.\\
\indent
The first example, above, admits the obvious higher dimensional generalization through the classical notion of pluriharmonic function, of complex analysis.
Furthermore, the unicity is easy to deal with.\\
\indent
On the other hand, neither the unicity nor the adequate level of generality can be easily described for the other two examples.
Nevertheless, the former problem can be understood through the classical fact that the integral representations above can be obtained, through the \v Cech cohomology,
from elements of $H^1\bigl(Z,\mathcal{L}^2\bigr)$\,, where $Z=\mathcal{O}(2)$\,, for $\R^3$, $Z=\mathcal{O}(1)\oplus\mathcal{O}(1)$\,, for $\R^4$,
and $\mathcal{L}=\p^*\bigl(\mathcal{O}(-1)\bigr)$\,, with $\mathcal{O}(k)$ the holomorphic line bundle of Chern number $k$ over the Riemann sphere,
and $\p$ the bundle projection of $Z$. That is, \emph{the Penrose transform} establishes a correspondence between certain
cohomology classes over the twistor space $Z$ of $M$ and the `pluriharmonic functions' on~$M$.\\
\indent
In this paper, we study the Penrose transform in the case when, up to conjugations, the twistor space is a complex manifold endowed
with a locally complete family of embedded Riemann spheres with positive normal bundles. On the differential geometric side,
these correspond to what we call `quaternionic objects' (see Section \ref{section:q_objects}\,, below). This way, we retrieve, for example,
the already known cases of (classical) quaternionic manifolds \cite{Bas-q_complexes}\,, and, in particular, the anti-self-dual manifolds \cite{Hit-80}\,.\\
\indent
The fact that the parameter spaces of families of embedded Riemann spheres are, canonically, `quaternionic' has its roots in
the fact that the quaternions arise from the Riemann sphere endowed with the antipodal map. This is explained in Section \ref{section:quaternions}
which we believe will help the reader to understand the quaternionic objects and the corresponding Penrose transform.\\
\indent
The relevant notion of `quaternionic pluriharmonicity' is constructed in Section \ref{section:q-ph}\,. This is done in two steps,
by using the fact that any suitable holomorphic line bundle over the twistor space of $M$ corresponds to a
`hypercomplex object' which is the total space of a principal bundle over $M$ (see Definition \ref{defn:hypercomplex_object}\,,
and the comments before it).\\
\indent
Finally, the quaternionic Penrose transform is discussed in Section \ref{section:qPt}\,. We believe that our approach is
as explicit as possible, based on a cohomology result of \cite{EGW}\,, and in the particular case of anti-self-dual manifolds
it, essentially, reduces to the approach of \cite{W'd'se-85} (see, also, \cite{Raw-79}\,).
\section{Where do the quaternions come from?} \label{section:quaternions}
\indent
The idea is that \emph{the quaternions arise from the Riemann sphere} (endowed with the antipodal map), as we shall now explain.\\
\indent
A \emph{Riemann sphere} is a compact Riemann surface $Y$, with zero first Betti number,
endowed with an involutive antiholomorphic diffeomorphism $\s$ without fixed points.
From the exact sequence of cohomology groups associated to the exponential sequence
$\{0\}\longrightarrow\mathbb{Z}\longrightarrow\C\longrightarrow\C^{\!*}\longrightarrow\{1\}$
we deduce that $H^1(Y,\C^{\!*})=\mathbb{Z}$\,. In particular, each holomorphic line bundle over $Y$ is determined,
up to isomorphisms, by its Chern number. Furthermore, each element of $H^1(Y,\C^{\!*})$ is determined by a divisor.
Thus, on denoting by $\overline{Y}$ the Riemann surface with the same underlying conformal structure as $Y$
but with the opposite orientation, $\s$ induces an isomorphism $H^1(Y,\C^{\!*})=H^1(\overline{Y},\C^{\!*})$ which
preserves the Chern numbers.\\
\indent
On the other hand, if $\mathcal{L}$ is a holomorphic line bundle over $Y$ then its conjugate $\overline{\mathcal{L}}$
is a holomorphic line bundle over $\overline{Y}$. Consequently, $\mathcal{L}$ is isomorphic to $\s^*(\overline{\mathcal{L}})$\,.
Equivalently, there exists an antiholomorphic diffeomorphism $\t:\mathcal{L}\to\mathcal{L}$\,, covering $\s$, which is
complex-conjugate linear on each fibre. Moreover, any two such morphisms differ by multiplication by a nonzero complex number.
Furthermore, as $\t^2$ is an isomorphism of $\mathcal{L}$ (covering the identity map of $Y$) we have $\t^2=\l\,{\rm Id}_{\mathcal{L}}$\,,
for some nonzero complex number $\l$\,. But $\t^2$ is, also, an isomorphism of $\overline{\mathcal{L}}$ which implies that $\l$ is real.
Therefore, by suitably multiplying, if necessary, $\t$ with a complex number, we may assume $\t^2=\pm{\rm Id}_{\mathcal{L}}$\,.
Note that, $\t$ is unique, with this property, up to a factor of modulus~$1$.
\begin{prop}[cf.~\cite{Qui-QJM98}\,] \label{prop:Q-J}
$\t^2=(-1)^d\,{\rm Id}_{\mathcal{L}}$\,, where $d$ is the Chern number of $\mathcal{L}$\,.
\end{prop}
\begin{proof}
Suppose that $\mathcal{L}$ has Chern number $1$. By using the Kodaira vanishing and the Riemann--Roch theorems,
we obtain that $H=H^0(Y,\mathcal{L})$ has (complex) dimension two. Consequently, the elements of $H$ which vanish at a point of $Y$
form a one-dimensional subspace of $H$. Therefore we may identify $Y=PH$ and, together with the fact that $\s$ has no fixed points,
we deduce $\t^2=-{\rm Id}_{\mathcal{L}}$\,.\\
\indent
Now, the proof follows quickly from the fact that any holomorphic line bundle over $Y$, of Chern number $d$\,,
is of the form $\mathcal{L}^d$, where $\mathcal{L}$ has Chern number $1$.
\end{proof}
\begin{rem} \label{rem:Q-J}
Let $\mathcal{L}$ be a holomorphic line bundle, of Chern number $-1$, over a Riemann sphere $Y$, and let $H=H^0(Y,\mathcal{L}^*)$\,.
The proof of Proposition \ref{prop:Q-J} shows that, under the identification $Y=PH(=P(H^*)\,)$\,,
$\mathcal{L}$ becomes the tautological line bundle, canonically embedded into the trivial bundle $Y\times H^*$.
\end{rem}
\indent
We can now make the following:
\begin{defn}
Let $Y$ be the Riemann sphere and let $H=H^0(Y,\mathcal{L})$\,, where $\mathcal{L}$ is a holomorphic line bundle over $Y$ of Chern number $1$.\\
\indent
The \emph{algebra of quaternions} is the unital associative subalgebra $\mathscr{H}q\!\subseteq{\rm End}H$ formed of the elements
which commute with the complex-conjugate isomorphism induced by~$\t$. (Briefly, $\mathscr{H}q$ is formed of the `real' elements of ${\rm End}H$.)
\end{defn}
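\indent
To make this explicit (a routine verification, included only for the reader's convenience),
choose a basis $\{e_1,e_2\}$ of $H$ with $e_2=\t e_1$\,, so that
$\t(z_1e_1+z_2e_2)=-\overline{z_2}\,e_1+\overline{z_1}\,e_2$\,. An element of ${\rm End}H$
commutes with $\t$ if and only if its matrix is of the form
$$\left(\begin{array}{cc} a & -\overline{b}\,\\ b & \overline{a}\end{array}\right),\qquad a,b\in\C\,,$$
which is the usual matrix model of the quaternions; in particular, $\mathscr{H}q$ is a real
associative algebra of dimension four, and the determinant $|a|^2+|b|^2$ is the square of its
Euclidean norm.\\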
\indent
The real multiples of ${\rm Id}_H$ are the \emph{real quaternions}, whilst the trace-free elements of $\mathscr{H}q\!\subseteq{\rm End}H$ are
the \emph{imaginary quaternions}. Also, the (complex) determinant on ${\rm End}H$ restricts to give the Euclidean structure of $\mathscr{H}q$.
Then the imaginary quaternions of modulus $1$ are linear complex isomorphisms of $H$ of square $-{\rm Id}_H$\,. Thus, we obtain
the identification $S^2=PH(=Y)$ through which any imaginary quaternion of modulus $1$ is identified with its eigenspace
corresponding to $-{\rm i}$\,. Then, under this identification, $\s$ is the antipodal map.\\
\indent
Note that, $\mathscr{H}q$ does not depend on the choice of $\t$, but only on $Y$, $\s$, and $\mathcal{L}$\,.
Moreover, if $\mathcal{L}'$ is another holomorphic line bundle, of Chern number $1$, and $H'=H^0(Y,\mathcal{L}')$
then the two embeddings of $\mathscr{H}q$ into ${\rm End}H$ and ${\rm End}H'$ differ by a composition with an element of
${\rm SO}(3)$ acting trivially on $\R$ and canonically on $\R^3(={\rm Im}\mathscr{H}q)$.
In particular, the automorphism group of $\mathscr{H}q$ is ${\rm SO}(3)$\,.
\section{The main objects of quaternionic geometry} \label{section:q_objects}
\indent
In this section, we define the class of manifolds for which we want to describe the Penrose transform.
Up to integrability, the corresponding geometric structure is given by the almost co-CR quaternionic structures of \cite{fq,fq_2}\,,
whilst to formulate the integrability we need the generalized connections of \cite{qgfs}\,. We start by recalling the latter.
\subsection{Principal $\r$-connections}
Let $(P,M,G)$ be a principal bundle and let $F$ be a vector bundle over $M$, endowed with a morphism of vector bundles $\r:F\to TM$.
A \emph{principal $\r$-connection} on $P$ is a $G$-invariant morphism of vector bundles $C:\p^*F\to TP$ such that $\widetilde{\dif\!\p}\circ C=\p^*\r$\,,
where $\p:P\to M$ is the projection and $\widetilde{\dif\!\p}:TP\to\p^*(TM)$ is the morphism of vector bundles induced by $\dif\!\p$\,.\\
\indent
The notion of associated connection has a straightforward generalization to this setting, as we shall now explain. Let $S$ be a manifold on which
$G$ is acting to the left. Denote by $Y=P\times_GS$ the associated bundle, by $\p_Y$ its projection, and by $\mu:TP\times S\to TY$
the obvious $G$-invariant morphism of vector bundles, covering the projection $P\times S\to Y$.\\
\indent
On identifying, as usual, $\p_Y^*(F)$ with a submanifold of $Y\times F$, we may define the \emph{associated $\r$-connection} $c:\p_Y^*(F)\to TY$,
as follows: $c\bigl([u,s],f\bigr)=\mu\bigl(C(u,f),s\bigr)$\,, for any $(u,f)\in\p^*F$ and $s\in S$,
where $[u,s]\in Y$ is the equivalence class determined by $(u,s)\in P\times S$. Note that, if we compose $c$ with the morphism of vector bundles
from $TY$ onto $\p_Y^*(TM)$\,, induced by $\dif\!\p_Y$, we obtain $\p_Y^*(\r)$\,. Indeed, as $\dif\!\p_Y\circ\mu$ is the composition
of the projection from $TP\times S$ onto $TP$ followed by $\dif\!\p$\,, we have $(\dif\!\p_Y\circ c)\bigl([u,s],f\bigr)=(\dif\!\p\circ C)(u,f)=\r(f)$\,,
for any $(u,f)\in\p^*F$ and $s\in S$.
\begin{rem}
Let $(P,M,G)$ be a principal bundle endowed with a principal $\r$-connection $C:\p^*F\to TP$, where $F$ is a vector bundle over $M$,
and $\r:F\to TM$ a morphism of vector bundles.\\
\indent
Suppose that $E\subseteq F$ is a vector subbundle mapped isomorphically by $\r$ onto a distribution on $M$. Then, by restriction, $C$ induces
a partial principal connection on $P$ over $\r(E)$\,. This, obviously, works, also, if $E\subseteq F^{\C\!}$ is a complex vector subbundle
mapped isomorphically by (the complexification of) $\r$ onto a complex distribution on $M$.
\end{rem}
\indent
If $S$ is a vector space on which $G$ acts by linear isomorphisms then any (associated) $\r$-connection on the associated vector bundle
$Y=P\times_GS$ corresponds to a covariant derivation $\nabla:\G(Y)\to\G\bigl({\rm Hom}(F,Y)\bigr)$ which is a linear map satisfying
$\nabla(fs)=\r^*(\dif\!f)\otimes s+f(\nabla s)$\,, for any function $f$ on $M$ and any section $s$ of $Y$.
If $Y=F$ then we can define the \emph{torsion} of $\nabla$ which is the section $T$ of $TM\otimes\Lambda^2F^*$ characterised by
$T(s_1,s_2)=\r\bigl(\nabla_{s_1}s_2-\nabla_{s_2}s_1\bigr)-\bigl[\r(s_1),\r(s_2)\bigr]$\,, for any sections $s_1$\,, $s_2$ of $F$.
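\indent
For instance (included only to fix the conventions), if $F=TM$ and $\r={\rm Id}_{TM}$ then a
$\r$-connection on $F$ is just a connection on $TM$ and $T$ is its usual torsion,
$T(X,Y)=\nabla_XY-\nabla_YX-[X,Y]$\,.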
\subsection{Quaternionic objects}
A \emph{linear quaternionic structure} on a (real) vector space $F$ is an equivalence class of morphisms of
associative algebras from $\mathscr{H}q$ to ${\rm End}F$, where two such morphisms $\s_1$ and $\s_2$ are equivalent if $\s_2=\s_1\circ a$\,, for some
$a\in{\rm SO}(3)$\,. If $\s:\mathscr{H}q\to{\rm End}F$ is a morphism of associative algebras then the induced linear quaternionic structure on $F$
is determined by $\s(S^2)$\,, whose elements are the \emph{admissible linear complex structures} on $F$. Thus, a linear quaternionic structure is
given by a family of linear complex structures, parameterized by the Riemann sphere.\\
\indent
A \emph{linear co-CR quaternionic structure} on a vector space $U$ is a pair $(F,\r)$\,, where $F$ is a quaternionic vector space, and $\r:F\to U$
is a surjective linear map such that $({\rm ker}\r)\cap J({\rm ker}\r)=\{0\}$\,, for any admissible linear complex structure $J$ on $F$; consequently,
$C=\r\bigl({\rm ker}(J+{\rm i})\bigr)$ is a linear co-CR structure on $U$ (that is, $C+\overline{C}=U^{\C}$). Thus, a linear co-CR quaternionic structure
is given by a family of linear co-CR structures, parameterized by the Riemann sphere.\\
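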
\indent
These notions extend in the obvious way to vector bundles, thus, giving the notion of almost co-CR quaternionic structure on a manifold.
Note that, if $F$ is a quaternionic vector bundle then the corresponding space $Y$ of admissible linear complex structures on (the fibres of) $F$
is a sphere bundle, with structural group ${\rm SO}(3)$\,.\\
\indent
Now, let $(F,\r)$ be an almost co-CR quaternionic structure on $M$ and suppose that $F$ is endowed with a $\r$-connection (compatible with its
structural group). Denote by $Y$ the bundle of admissible linear complex structures on $F$ and let $c:\p^*F\to TY$ be the associated $\r$-connection on it,
where $\p:Y\to M$ is the projection.\\
\indent
Let $\mathcal{B}\subseteq T^{\C\!}Y$ be the complex distribution given by $\mathcal{B}_J=c_J\bigl({\rm ker}(J+{\rm i})\bigr)$\,, for any $J\in Y$.
Then $\mathcal{C}=\mathcal{B}\oplus({\rm ker}\dif\!\p)^{0,1}$ is an almost co-CR structure on $Y$; that is, $\mathcal{C}+\overline{\mathcal{C}}=T^{\C\!}Y$.
\begin{defn} \label{defn:q_object}
We say that $(M,F,\r,c)$ is a \emph{quaternionic object} if $\mathcal{C}$ is integrable (that is, its space of sections is closed under the usual bracket).
\end{defn}
\indent
Note that, a quaternionic object is a (real) $\r$-quaternionic manifold \cite{qgfs} with $\r$ surjective. In particular, if $\r$ is an isomorphism then we obtain
the classical notion of quaternionic manifold.\\
\indent
To define the corresponding twistor spaces, suppose that $(M,F,\r,c)$ is a quaternionic object for which there exists a surjective submersion
$\psi:Y\to Z$ such that:\\
\indent
\quad(1) $({\rm ker}\dif\!\psi)^{\C}=\mathcal{C}\cap\overline{\mathcal{C}}$\,,\\
\indent
\quad(2) $\mathcal{C}$ is projectable with respect to $\psi$ (note that, this is a consequence of (1)\,, if the fibres of $\psi$ are connected),\\
\indent
\quad(3) $\psi$ restricted to each fibre of $\p$ is injective.\\
\indent
Then $Z$ endowed with $\dif\!\psi(\mathcal{C})$ is a complex manifold (with $T^{0,1}Z=\dif\!\psi(\mathcal{C})$\,) which is the \emph{twistor space}
of $(M,F,\r,c)$\,. Furthermore, $\bigl(\psi(Y_x)\bigr)_{x\in M}$\,, where $Y_x=\p^{-1}(x)$\,, is a smooth family of embedded Riemann spheres on $Z$\,,
whose members are the \emph{real twistor spheres}.
\begin{exm}
Let $(U,F,\r)$ be a co-CR quaternionic vector space. Then $U\times F$ is a quaternionic vector bundle over $U$ and ${\rm Id}_U\times\r$
is a morphism of vector bundles from $U\times F$ onto $TU\,(=U\times U)$\,, giving an almost co-CR quaternionic structure on $U$.\\
\indent
Moreover, on endowing $U\times F$ with the obvious (classical) flat connection, we obtain a quaternionic object whose twistor space is
the holomorphic vector bundle $\mathcal{U}$\,, over the Riemann sphere, which is the quotient of the trivial holomorphic vector bundle $S^2\times U^{\C}$
through $\bigl(\{J\}\times\r\bigl({\rm ker}(J+{\rm i})\bigr)\bigr)_{J\in S^2}$\,; furthermore, $\mathcal{U}$ determines $(U,F,\r)$ \cite{fq,fq_2}\,.
\end{exm}
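\indent
For instance (cf.\ the Introduction; we record these identifications only for orientation),
let $U={\rm Im}\mathscr{H}q\,(=\R^3)$ and $F=\mathscr{H}q$\,, with the linear quaternionic
structure given by left multiplication, and let $\r$ be the projection onto the imaginary part;
then $({\rm ker}\r)\cap J({\rm ker}\r)=\{0\}$\,, for any admissible $J$, and the corresponding
twistor space is $\mathcal{U}=\mathcal{O}(2)$\,. Similarly, $U=F=\mathscr{H}q\,(=\R^4)$\,,
with $\r={\rm Id}$\,, gives $\mathcal{U}=\mathcal{O}(1)\oplus\mathcal{O}(1)$\,.\\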
\indent
Returning to the general case, let $Z$ be the twistor space of the quaternionic object $(M,F,\r,c)$\,. Then, for any $x\in M$, the normal
bundle of the corresponding twistor sphere $\psi(Y_x)\subseteq Z$\,, where $\psi:Y\to Z$ is as above, is (isomorphic to)
the holomorphic vector bundle of $(T_xM,F_x,\r_x)$\,. Consequently, by applying classical results (see \cite{qgfs} and the references therein)
we obtain that $\bigl(\psi(Y_x)\bigr)_{x\in M}$ is contained in a locally complete family of embedded Riemann spheres parameterized
by a complexification of $M$ (in particular, $M$ is real-analytic); the members of this family are the \emph{twistor spheres}.
Conversely, up to a conjugation and on restricting, if necessary, to an open subset, \emph{any complex manifold endowed with a locally complete family
of Riemann spheres, with positive normal bundles, is the twistor space of a quaternionic object} (consequence of \cite{qgfs}\,).\\
\indent
There is only one more ingredient needed to start describing the Penrose transform for quaternionic objects: a holomorphic line bundle
over the twistor space.\\
\indent
Let $Z$ be the twistor space of the quaternionic object $(M,F,\r,c)$\,. Denoting by $K_Z$ the canonical line bundle of $Z$ (that is, $K_Z$ is the determinant
of the holomorphic cotangent bundle of $Z$) and with $\psi:Y\to Z$ as above, we may assume, by passing, if necessary, to an open neighbourhood
of any point of $M$, that there exists a line bundle $\mathcal{L}$ over $Z$ which is a root of $K_Z$ and whose restriction
to each twistor sphere has Chern number $-1$. Furthermore, as the antipodal map on $Y$ induces an antiholomorphic involution $\s:Z\to Z$,
without fixed points, we have that $\s$ is covered by an antiholomorphic involution $\t_Z:K_Z\to K_Z$\,.
It follows that, we may assume that there exists an antiholomorphic anti-involution $\t:\mathcal{L}\to\mathcal{L}$ covering $\s$\,.
Note that, if we restrict $\mathcal{L}$ to the real twistor spheres, then $\t$ gives antiholomorphic anti-involutions as in Section \ref{section:quaternions}\,.
Also, for classical quaternionic manifolds of dimensions at least eight it is known that $\mathcal{L}^2$ exists globally \cite{Sal-dg_qm}\,, \cite{PePoSw-98}\,.
\begin{rem}
\emph{The quaternionic objects are abundant.} Various classes of examples can be found in \cite{Pan-integrab_co-cr_q}\,, \cite{qgfs}\,.
Furthermore, if the twistor space of a quaternionic object
is a complex projective manifold then it is a `rationally connected manifold', one of the main objects of study of algebraic geometry
(see \cite{Paltin-2005}\,, and the references therein).
\end{rem}
\section{Quaternionic pluriharmonicity} \label{section:q-ph}
\indent
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$. Suppose that $\mathcal{L}$ is a holomorphic line bundle over $Z$
whose restriction to each twistor sphere has Chern number $-1$, and which is endowed with an anti-holomorphic anti-involution $\t$
which covers the conjugation $\s$ on $Z$ induced by the antipodal map on $Y$. As already mentioned, such a line bundle
always exists locally (that is, if we pass to an open subset of $M$).
Let $H$ be the dual of the direct image through $\p$ of $\psi^*(\mathcal{L}^*)$\,;
that is, the space of sections of $H^*$ over each open set $U\subseteq M$ is the space of
sections of $\psi^*(\mathcal{L}^*)$\,, over $\p^{-1}(U)$\,, which are holomorphic when restricted to the fibres of $\p$.\\
\indent
We may assume $F^{\C\!}=H\otimes E$, where $E$ is a complex vector bundle.
Indeed, the tensor product of $\psi^*(\mathcal{L}^*)$ and $\mathcal{B}$
(note that, the latter is equal to the quotient of $({\rm ker}\dif\!\psi)^{\C}$
through $({\rm ker}\dif\!\p)^{0,1}$) is trivial when restricted to each fibre of $\p$\,. Therefore
$\psi^*(\mathcal{L}^*)\otimes\mathcal{B}=\p^*(E^*)$\,, for some vector bundle $E$ over $M$.
Now (cf.~\cite{qgfs}\,), $F^{\C\!}$ is equal to the dual of the direct image through $\p$ of $\mathcal{B}^*=\psi^*(\mathcal{L}^*)\otimes\p^*(E)$
and is therefore equal to $H\otimes E$.\\
\indent
Note that (compare Remark \ref{rem:Q-J}), $Y=PH$, and $\psi^*(\mathcal{L}\setminus0)=H\setminus0$\,, as principal bundles over $Y$,
with structural group $\C^{\!*}$. We shall denote by $\psi_H:H\setminus0\to\mathcal{L}\setminus0$ the corresponding bundle-map.\\
\indent
Furthermore, $H\setminus0$ is a principal bundle over $M$, with group $\mathscr{H}q^{\!*}$; that is, $H$ is a hypercomplex vector bundle of real rank four over $M$.
Indeed, for any $u\in H\setminus0$ and $z_1+{\rm j}z_2\in\mathscr{H}q^{\!*}$, we may define $u\cdot(z_1+{\rm j}z_2)=z_1u+z_2\t u$\,.\\
\indent
Let $\p_H:H\to M$ and $p:H\setminus0\to Y$ be the projections. We have the following diagram:
\begin{equation} \label{diagram:for_Penrose}
\begin{gathered}
\xymatrix{
& H\setminus0 \ar[dl]_{\psi_H} \ar[d]^p \ar@/^/[ddr]^{\p_H} & \\
\mathcal{L}\setminus0 \ar[d] & \hspace{4mm}Y(=PH) \ar[dl]_{\psi} \ar[dr]^{\p} & \\
Z & & M
}
\end{gathered}
\end{equation}
\indent
Obviously, $\mathcal{C}_H=(\dif\!\psi_H)^{-1}\bigl(T^{0,1}(\mathcal{L}\setminus0)\bigr)$ is a co-CR structure on $H\setminus0$\,.
(Note that, $\dif\!p(\mathcal{C}_H)$ is the co-CR structure involved in Definition \ref{defn:q_object}\,.)
If $q\in\mathscr{H}q^{\!*}$ we denote by $[q]$ its image under the projection onto $\mathscr{H}q^{\!*}\!/\C^{\!*}=\C\!P^1$ (where $\C^{\!*}\subseteq\mathscr{H}q^{\!*}$
is acting to the right on $\mathscr{H}q^{\!*}$). Then $[q]\mapsto\mathcal{C}_H\cdot q^{-1}$ is a family of co-CR structures on $H\setminus0$ parametrized
by the Riemann sphere. It follows that $H\setminus0$ is a quaternionic object such that its twistor space $Z(H\setminus0)$\,,
with respect to its underlying smooth structure, is diffeomorphic to $S^2\times(\mathcal{L}\setminus0)$\,.
Furthermore, $\mathscr{H}q^{\!*}$ acts by twistorial diffeomorphisms on $H\setminus0$ (corresponding to holomorphic diffeomorphisms on $Z(H\setminus0)$\,).\\
\indent
In fact, $H\setminus0$ is a special type of quaternionic object which we define, next.
\begin{defn} \label{defn:hypercomplex_object}
A quaternionic object $(M,F,\r,c)$ is \emph{hypercomplex} if the bundle $Y$ of admissible linear complex structures on $F$
is trivial and $c$ is the corresponding trivial flat connection.
\end{defn}
\indent
For the twistor space $Z$ of a hypercomplex object $(M,F,\r,c)$\,, defined by $\psi:Y\to Z$, we shall assume that the projection $Y=M\times S^2\to S^2$
factorises into $\psi$ followed by a holomorphic submersion from $Z$ onto $S^2$ (obviously, this is automatically satisfied if the fibres of $\psi$ are
connected).\\
\indent
Note that, a hypercomplex object $(M,F,\r,c)$ with $\r$ an isomorphism is a classical hypercomplex manifold.
\begin{prop} \label{prop:twistor_hyper}
Let $Z$ be the twistor space of a quaternionic object $(M,F,\r,c)$ defined by $\psi:Y\to Z$\,. If the fibres of $\psi$ are connected
then the following assertions are equivalent:\\
\indent
{\rm (i)} There exists a $\r$-connection $c_1$ with respect to which $(M,F,\r,c_1)$ is hypercomplex.\\
\indent
{\rm (ii)} There exists a holomorphic submersion $\phi:Z\to S^2$ whose restriction to each twistor sphere is a diffeomorphism
and which intertwines the conjugation on $Z$ and the antipodal map on $S^2$.
\end{prop}
\begin{proof}
If (ii) holds, then $\phi\circ\psi:Y\to S^2$ induces a trivialization $Y=M\times S^2$.
Conversely, if (i) holds then the fibres of $\psi$ are contained in the fibres of the projection $Y\to S^2$ induced by the trivialization $Y=M\times S^2$.
Thus, $Y\to S^2$ factorises as $\phi\circ\psi$\,, where $\phi$ is as in (ii)\,.
\end{proof}
\indent
For a hypercomplex object $(M,F,\r,c)$ with twistor space $Z$ we have a distinguished class of holomorphic line bundles over $Z$, namely
those which are pull backs through $\phi$ of holomorphic line bundles over $S^2$. Any such holomorphic line bundle $\mathcal{L}$
is characterised by the fact that $\psi^{*\!}\mathcal{L}$ is trivial along the fibres of the projection $\phi\circ\psi:Y\to S^2$.
\begin{prop} \label{prop:hyper_right_connection}
Let $Z$ be the twistor space of a hypercomplex object $(M,F,\r,c)$\,, with a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over $Z$.\\
\indent
Then there exists a unique $\r$-connection on $H$ which induces the trivial flat connection on~$Y$ and which induces an `anti-self-dual' connection
on $\Lambda^2H$ (that is, flat along the admissible co-CR structures defined by the constant sections of $Y(=M\times S^2)$\,).
\end{prop}
\begin{proof}
On identifying $S^2=\C\!P^1$, assume, firstly, that $\mathcal{L}=\phi^*\bigl(\mathcal{O}(-1)\bigr)$\,, and recall that the space of sections of $H^*$
over any open subset $U\subseteq M$ is, by definition, the space of sections of $\mathcal{L}^*$ over $\p^{-1}(U)$
which are holomorphic when restricted to the fibres of $\p$. We shall denote in the same way the local sections of $H^*$
and the corresponding local sections of $\mathcal{L}^*$.\\
\indent
Let $X\in T_xM$, $(x\in M)$, and let $\widetilde{X}$ be tangent to the fibres of $\phi\circ\psi$\,, along $\p^{-1}(x)$\,,
and such that $\dif\!\p(\widetilde{X})=X$. As $\psi^{*\!}\mathcal{L}$ is trivial along the fibres of $\phi\circ\psi$ we
may define $\widetilde{X}(s)$\,, for any local section $s$ of $H^*$, thus obtaining a holomorphic section of $\mathcal{L}^*$ along $\p^{-1}(x)$\,;
that is, an element of $H^*_x$\,. We have, thus, obtained a connection on $H^*$ with respect to which the sections obtained as pull backs
through $\phi\circ\psi$ of holomorphic sections of $\mathcal{O}(1)$ are covariantly constant. As these sections generate each fibre of $H^*$,
this shows that the obtained connection on $H^*$ is trivial. Dualizing, we obtain a trivializing flat connection on $H$.\\
\indent
Now, let $\mathcal{L}_1$ be any other holomorphic line bundle over $Z$, whose restriction to each twistor sphere has Chern number $-1$,
and endowed with an antiholomorphic anti-involution covering $\s$.
Let $F^{\C\!}=H_1\otimes E_1$ be the induced decomposition.
Then the Ward transform admits a straightforward generalization to this setting, showing that $\mathcal{L}_1\otimes\mathcal{L}^*$
corresponds to a complex line bundle $L$\,, over $M$, endowed with an anti-self-dual $\r$-connection.
Consequently, $H_1=H\otimes L$\,, which, on being endowed with the tensor product $\r$-connection, completes the proof.
\end{proof}
\begin{rem} \label{rem:line_bundle_of_H}
Let $Z$ be the twistor space of a quaternionic object $(M,F,\r,c)$\,, with a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over $Z$.\\
\indent
As we have seen, $H\setminus0$ is a hypercomplex object so that there exists a holomorphic submersion $\phi$ from its twistor space
$Z(H\setminus0)$ onto $\C\!P^1$.\\
\indent
Also, $\p_H:H\setminus0\to M$ is a twistorial map, corresponding to a surjective holomorphic submersion from $Z(H\setminus0)$
onto $Z$. Then the pull back through this submersion of $\mathcal{L}$ is $\phi^*(\mathcal{O}(-1))$\,.
\end{rem}
\indent
By using Proposition \ref{prop:hyper_right_connection} and Remark \ref{rem:line_bundle_of_H}\,, we can now define the relevant
`quaternionic pluriharmonic' sheaf, in two steps, as follows.
\begin{defn} \label{defn:q_h_hyper}
Let $(M,F,\r,c)$ be a hypercomplex object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over~$Z$.\\
\indent
Let $I$ and $J$ be anti-commuting admissible linear complex structures on $F$ given by constant sections of $Y(=PH=M\times S^2)$\,.
Then $\mathcal{C}=\r\bigl({\rm ker}(I+{\rm i})\bigr)$ is a complex distribution on $M$ over which the $\r$-connection on $H$
induces a partial connection. As $\mathcal{C}$ is integrable, we may define the exterior covariant differential $\overline{\partial}$ mapping
$(\Lambda^2H)$-valued $r$-forms on $\mathcal{C}$ to $(\Lambda^2H)$-valued $(r+1)$-forms on $\mathcal{C}$\,, $(r\in\mathbb{N})$\,.\\
\indent
A section $f$ of $\Lambda^2H$ is \emph{hypercomplex pluriharmonic (with respect to the pair $(I,J)$)} if
$\bigl(\overline{\partial}\circ J\circ\partial\bigr)(f)=0$\,.
\end{defn}
\indent
For the particular case of (classical) hypercomplex manifolds, the pluriharmonic functions, also, appear in \cite{AleVer-hyper-psh}\,.
However, the operator defining them is due to \cite{W'd'se-85}\,.
\begin{rem} \label{rem:q_h_hyper}
With the same notations as in Definition \ref{defn:q_h_hyper}\,, suppose that the connection on $H$ together with some $\r$-connection on $E$
induces a torsion free $\r$-connection $\nabla$ on $F$.
Then a quick calculation shows that, for any section $f$ of $\Lambda^2H$, the following assertions are equivalent
(recall that, in dimension four, a linear quaternionic structure is just an oriented linear conformal structure):\\
\indent
\quad(i) $f$ is hypercomplex pluriharmonic;\\
\indent
\quad(ii) $\nabla^{2\!}f$ restricted to any quaternionic subspace, of real dimension four, of any fibre of $F$,
is trace free.\\
\indent
In particular, in this case, the notion of hypercomplex pluriharmonicity does not depend on the pair $(I,J)$\,.
\end{rem}
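\indent
For instance, if $M$ is an open subset of $\mathscr{H}q$ endowed with its flat hypercomplex structure (so that $F=TM$, $\r$ is the identity, $\Lambda^2H$ is trivialised, and the hypothesis of Remark \ref{rem:q_h_hyper} is satisfied by the flat connection) then assertion (ii) just says that the Hessian of $f$ is trace free; that is, $f$ is hypercomplex pluriharmonic if and only if
\begin{equation*}
\sum_{k=1}^{4}\frac{\partial^2\!f}{\partial x_k^{\,2}}=0\;,
\end{equation*}
where $(x_1,\ldots,x_4)$ are the standard coordinates on $\mathscr{H}q\,(=\mathbb{R}^4)$\,; that is, if and only if $f$ is a harmonic function.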
\begin{defn} \label{defn:q_h}
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over~$Z$.\\
\indent
A section $f$ of $\Lambda^2H$ is \emph{quaternionic pluriharmonic} if the corresponding equivariant function $\widetilde{f}:H\setminus0\to\C$
is hypercomplex pluriharmonic.
\end{defn}
\indent
In the particular cases of hyper-K\"ahler and quaternionic manifolds, our definition reduces to the ones considered
in \cite{HarLaw-some_psh} and \cite{Ale-q_pluripotential}\,, respectively (note that, the latter is based on \cite{Bas-q_complexes}\,).\\
\indent
The fact that $\mathscr{H}q^{\!*}$ acts by twistorial diffeomorphisms on $H\setminus0$ implies that Definition \ref{defn:q_h}
does not depend on the choice of $I$ and $J$ on $H\setminus0$\,.
\section{The Penrose transform} \label{section:qPt}
\indent
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over~$Z$. We make the following definition,
where $\psi:Y(=PH)\to Z$ is the corresponding surjective co-CR submersion (recall the diagram \eqref{diagram:for_Penrose}\,).
\begin{defn}
The \emph{Penrose transform} is the complex linear map associating to each $\g\in H^1\bigl(Z,\mathcal{L}^2\bigr)$ the section
$\bigl(\int_{\p^{-1}(x)}\psi^*\g\bigr)_{x\in M}$ of $\Lambda^2H$.
\end{defn}
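\indent
For instance, as in the example following Remark \ref{rem:q_h_hyper}\,, if $M$ is an open subset of $\mathscr{H}q$ with its flat structure, then $\Lambda^2H$ is trivial, the restriction of $\mathcal{L}^2$ to each twistor sphere has Chern number $-2$\,, and the Penrose transform becomes, essentially, the classical correspondence associating to each class in $H^1\bigl(Z,\mathcal{L}^2\bigr)$ a harmonic function on $M$ (compare \eqref{e:Penrose_prehistory}\,).\\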
\indent
The injectivity of the Penrose transform is quite straightforward.
\begin{prop} \label{prop:qPt_inj}
If the fibres of $\psi$ are connected then, by passing to an open neighbourhood of each point of $M$, the Penrose transform is injective.
\end{prop}
\begin{proof}
If $M$ is a hypercomplex object then this follows quickly from standard arguments, involving \v Cech cohomology and by using \cite{Siu-Stein_nbds}\,.
In general, through the pull back, the proof for $H\setminus0$ gives, in particular, the required injectivity, locally, on $M$.
\end{proof}
\indent
We say that the quaternionic object $(M,F,\r,c)$ is of \emph{constant type} if the co-CR quaternionic vector spaces $(T_xM,F_x,\r_x)$\,, $(x\in M)$\,, are the
same, up to isomorphisms.\\
\indent
Here is the main result of this paper.
\begin{thm} \label{thm:q_Penrose}
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over~$Z$.\\
\indent
The following statements hold:\\
\indent
\quad{\rm (i)} If on $H\setminus0$ the sheaf of quaternionic pluriharmonic functions is equal to the sheaf of hypercomplex pluriharmonic functions
(in particular, if $H\setminus0$ admits a compatible torsion free $\r$-connection)
then the image of the Penrose transform is contained in the space of quaternionic pluriharmonic sections of $\Lambda^2H$.\\
\indent
\quad{\rm (ii)} If $(M,F,\r,c)$ is of constant type, then, by passing, if necessary, to an open neighbourhood of each point of $M$,
the Penrose transform admits a partial section defined on the space of quaternionic pluriharmonic sections of $\Lambda^2H$.
\end{thm}
\indent
The proof of Theorem \ref{thm:q_Penrose}\,, essentially, relies on the fact that it is sufficient to prove it for hypercomplex objects.
To show this, we need the following result.
\begin{prop} \label{prop:q_a_connection}
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over~$Z$.\\
\indent
Then there exists a principal $\r$-connection $C:\p_H^*F\to T(H\setminus0)$ on $(H\setminus0,M,\mathscr{H}q^{\!*})$ such that the associated
$\r$-connection on $Y(=PH)$ induces the same co-CR structure on $Y$ as the one defining $\psi:Y\to Z$.
\end{prop}
\begin{proof}
This is a consequence of the fact that $\p_H:H\setminus0\to M$ is twistorial and in particular, at each point, its differential is a co-CR quaternionic linear map
(see \cite{fq,fq_2} for the definition of the latter). Therefore, on denoting by $(F_H,\r_H)$ the almost co-CR quaternionic structure of $H\setminus0$
we have $\dif\!\p_H\circ\r_H=\r\circ\Pi$ where $\Pi:F_H\to F$ is a morphism of quaternionic vector bundles, covering $\p_H$\,.
Furthermore, $\Pi$ factorises into a morphism of quaternionic vector bundles $\widetilde{\Pi}:F_H\to(\p_H)^*(F)$ followed by the bundle-map $(\p_H)^*(F)\to F$.\\
\indent
Now, $(\p_H)^*\bigl(F^{\C\!}\bigr)=\bigl((H\setminus0)\times\C^{\!2}\bigr)\otimes E$, whilst $F_H^{\C\!}=\bigl((H\setminus0)\times\C^{\!2}\bigr)\otimes E_H$\,,
for some complex vector bundle $E_H$ over $H\setminus0$\,, such that, $\widetilde{\Pi}$ is given by the tensor product of the identity morphism of
$(H\setminus0)\times\C^{\!2}$ and a morphism of complex vector bundles from $E_H$ onto $(\p_H)^*(E)$\,. Hence, by taking a section of the latter,
we may embed $(\p_H)^*(F)\subseteq F_H$\,, as a quaternionic subbundle, so that the restriction of $\widetilde{\Pi}$ to it is the identity morphism.\\
\indent
On the other hand, as a structural group, $\mathscr{H}q^{\!*}$ induces right actions on $F_H$ and on $T(H\setminus0)$\,,
where the former is given by the action on (the first components) of $(\p_H)^*(H)\bigl(=(H\setminus0)\times\C^{\!2}\bigr)$.
Moreover, $\r_H$ is equivariant with respect to these actions.\\
\indent
Then the restriction $C$ of $\r_H$ to $(\p_H)^*(F)$ is a principal $\r$-connection, as claimed.
\end{proof}
\indent
Another ingredient needed to prove Theorem \ref{thm:q_Penrose} is a natural extension, to this setting, of the classical notion
of `standard (horizontal) vector field'. For this, let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over $Z$. Endow $H\setminus0$ with a principal $\r$-connection $C$ as in
Proposition \ref{prop:q_a_connection}\,.
\begin{defn} \label{defn:q_standard}
For any $\xi\in\C^{\!2}$ let $B(\xi)$ be the morphism of complex vector bundles, from $(\p_H)^*(E)$ to $T^{\C\!}(H\setminus0)$
given by $B(\xi)(u,s)=C\bigl(u,(u\xi)\otimes s\bigr)$\,, for any $u\in H\setminus0$ and $s\in E_{\p_H(u)}$\,.
\end{defn}
\begin{prop} \label{prop:q_standard_invariance}
On denoting by $R_q$ the right translation by $q\in\mathscr{H}q^{\!*}$ on $H\setminus0$ (and $(\p_H)^*(E)$\,), we have
$\dif\!R_q\circ B(\xi)=B(q^{-1}\xi)\circ R_q$\,, for any $\xi\in\C^{\!2}$.
\end{prop}
\begin{proof}
Let $u\in H\setminus0$ and $s\in E_{\p_H(u)}$\,. By the equivariance of $C$, we have
$$\dif\!R_q\bigl(B(\xi)(u,s)\bigr)=\dif\!R_q\bigl(C\bigl(u,(u\xi)\otimes s\bigr)\bigr)=C\bigl(u\cdot q,(u\xi)\otimes s\bigr)\;.$$
\indent
On the other hand,
$$\bigl(B(q^{-1}\xi)\circ R_q\bigr)(u,s)=B(q^{-1}\xi)(u\cdot q,s)=C\bigl(u\cdot q, \bigl((u\cdot q)(q^{-1}\xi)\bigr)\otimes s\bigr)=C\bigl(u\cdot q,(u\xi)\otimes s\bigr)\;,$$
thus completing the proof.
\end{proof}
\indent
The morphism of Lie algebras from $\mathscr{H}q$ (seen as the Lie algebra of $\mathscr{H}q^{\!*}$) to the Lie algebra of vector fields on $H\setminus0$ extends
to a morphism of complex Lie algebras from $\mathfrak{gl}(2,\C\!)$ to the space of sections of $T^{\C\!}(H\setminus0)$\,. We shall denote in the same way
the elements of $\mathfrak{gl}(2,\C\!)$ and the corresponding (fundamental) complex vector fields on $H\setminus0$\,.
Then, from Proposition \ref{prop:q_standard_invariance}\,, we quickly obtain the following fact.
\begin{cor} \label{cor:q_standard_invariance}
If $\xi\in\C^{\!2}$ and $s$ is a section of $E$ then the vector field $B(\xi)(s)$\,, which at any $u\in H\setminus0$ is equal to $B(\xi)(u,s_{\p_H(u)})$\,,
satisfies $\bigl[A,B(\xi)(s)\bigr]=B(A\xi)(s)$\,, for any $A\in\mathfrak{gl}(2,\C\!)$\,.
\end{cor}
\indent
We now have a convenient way to write $\mathcal{C}_H=(\dif\!\psi_H)^{-1}\bigl(T^{0,1}(\mathcal{L}\setminus0)\bigr)$\,, namely,
$\mathcal{C}_H=\mathcal{B}_H\oplus({\rm ker}\dif\!\p_H)^{0,1}$, where $\mathcal{B}_H$ is generated by $B(e_1)(s)$\,, for any (local) section $s$ of $E$,
with $(e_1,e_2)$ the canonical basis of $\C^{\!2}$.
\begin{prop} \label{prop:q_h_harmonic}
Let $(M,F,\r,c)$ be a quaternionic object with twistor space $Z$, and a decomposition $F^{\C\!}=H\otimes E$
corresponding to a holomorphic line bundle $\mathcal{L}$ over $Z$. Endow $H\setminus0$ with a principal $\r$-connection $C$ as in
Proposition \ref{prop:q_a_connection}\,.\\
\indent
Then a section $f$ of $\Lambda^2H$ is quaternionic pluriharmonic if and only if the corresponding equivariant function $\widetilde{f}$ on $H\setminus0$
satisfies $\bigl(\bigl(\overline{\partial}\circ J\circ\partial\bigr)(\widetilde{f}\,)\bigr)(X,Y)=0$\,, for any $X,Y\in\mathcal{B}_H$\,.
\end{prop}
\begin{proof}
The necessity of that condition for $f$ to be quaternionic pluriharmonic is trivial. For the sufficiency, firstly, note that
$f$ is quaternionic pluriharmonic if and only if
\begin{equation} \label{e:q_h_harmonic}
X\bigl((JY)(\widetilde{f}\,)\bigr)-Y\bigl((JX)(\widetilde{f}\,)\bigr)-\bigl(J[X,Y]\bigr)(\widetilde{f}\,)=0\;,
\end{equation}
for any sections $X$ and $Y$ of $\mathcal{C}_H$\,.\\
\indent
By writing $\widetilde{f}$ with respect to a local section of $H\setminus0$\,, a straightforward calculation shows that \eqref{e:q_h_harmonic} always holds for any
$X$ and $Y$ sections of $({\rm ker}\dif\!\p_H)^{0,1}$.\\
\indent
Note that, $({\rm ker}\dif\!\p_H)^{0,1}$ is generated by (the fundamental complex vector fields corresponding to) those $A\in\mathfrak{gl}(2,\C\!)$
for which the first column is zero. Take $X$ to be such an $A$\,, and $Y=B(e_1)(s)$ for some section $s$ of $E$. Then Corollary \ref{cor:q_standard_invariance}
shows that $[X,Y]=0$\,. Further, $JX$ is given by the matrix whose first column is the opposite of the second column of $A$\,, whilst its second column is zero.
Also, $JY=B(e_2)(s)$\,.\\
\indent
On denoting by $\xi\bigl(\in\C^{\!2}\bigr)$ the second column of $A$ we have that $(JX)(\widetilde{f}\,)=\xi_1\widetilde{f}$. Hence,
$Y\bigl((JX)(\widetilde{f}\,)\bigr)=\bigl(B(\xi_1e_1)(s)\bigr)(\widetilde{f}\,)$\,.\\
\indent
Now, by applying, again, Corollary \ref{cor:q_standard_invariance}\,, we obtain that $[X,JY]=B(\xi)(s)$\,. Hence,
\begin{equation*}
\begin{split}
X\bigl((JY)(\widetilde{f}\,)\bigr)&=JY\bigl((X)(\widetilde{f}\,)\bigr)+\bigl(B(\xi)(s)\bigr)(\widetilde{f}\,)\\
&=-\xi_2\bigl(B(e_2)(s)\bigr)(\widetilde{f}\,)+\bigl(B(\xi)(s)\bigr)(\widetilde{f}\,)=\bigl(B(\xi_1e_1)(s)\bigr)(\widetilde{f}\,)\;.
\end{split}
\end{equation*}
\indent
We have, thus, shown that \eqref{e:q_h_harmonic} is automatically satisfied, also, when $X$ is a section of $({\rm ker}\dif\!\p_H)^{0,1}$
and $Y$ is a section of $\mathcal{B}_H$\,. The proof is complete.
\end{proof}
\begin{rem} \label{rem:q_hyper_ph}
A quick consequence of Proposition \ref{prop:q_h_harmonic} is that, for a hypercomplex object, the quaternionic pluriharmonic sheaf
is the intersection of the hypercomplex pluriharmonic sheaves, with respect to all possible (anti-commuting) pairs $(I,J)$\,.\\
\indent
Consequently, the following assertions are equivalent, for a hypercomplex object:\\
\indent
\quad(i) The hypercomplex pluriharmonicity does not depend on the pair $(I,J)$\,.\\
\indent
\quad(ii) The hypercomplex and the quaternionic pluriharmonicities coincide.
\end{rem}
\indent
Next, we give the following:
\begin{proof}[Proof of Theorem \ref{thm:q_Penrose}]
To prove (i)\,, by passing, through pull back, to $H\setminus0$\,, we may suppose that $M$ is hypercomplex and that, on it, the
sheaf of quaternionic pluriharmonic sections of $\Lambda^2H$ is equal to the sheaf of hypercomplex pluriharmonic sections of $\Lambda^2H$.
Then, by using \v Cech cohomology, integral formulae similar to \eqref{e:Penrose_prehistory} can be, locally, obtained
for the sections of $\Lambda^2H$ which are in the image of the Penrose transform. Consequently, the image
of the Penrose transform is contained in the space of quaternionic pluriharmonic sections of $\Lambda^2H$.\\
\indent
To prove (ii)\,, we may complexify $M$ so that $H$ and (consequently) $Y$ are the restrictions to $M$ of holomorphic bundles over $M^{\C\!}$.
For simplicity, we denote these bundles over $M^{\C\!}$ again by $H$ and $Y$, respectively; similarly for the notation of the corresponding
bundle projections. We, thus, have the same diagram as \eqref{diagram:for_Penrose}\,, up to the fact that $M$ is replaced by $M^{\C\!}$.
Furthermore, we have the following diagram (which, in part, is the complexification of \eqref{diagram:for_Penrose}\,),
\begin{equation} \label{diagram:for_Penrose_complex}
\begin{gathered}
\xymatrix{
& {\rm GL}(H) \ar[dl]_{\psi_{H,j}} \ar[d]^p \ar@/^/[ddr]^{\p_H} & \\
\mathcal{L}\setminus0 \ar[d] & \hspace{3mm}(Y+Y)\setminus Y \ar[dl]_{\psi_j} \ar[dr]^{\p} & \\
Z & & M^{\C}
}
\end{gathered}
\end{equation}
where ${\rm GL}(H)$ is the frame bundle of $H$, $Y(=PH)$ is diagonally embedded into $Y+Y$, whilst $\psi_{H,j}$ is the composition
of $\psi_H$ with the projection onto the $j$-component of any frame on $H$, and similarly for $\psi_j$\,, $(j=1,2)$\,, and we have, also, denoted
by $\p_H$ the projection from ${\rm GL}(H)$ onto $M^{\C\!}$.\\
\indent
Let $\partial_j$ be the partial exterior differential determined by the holomorphic foliation $\mathcal{C}_j$ given by (the connected components of) the fibres
of $\psi_{H,j}$\,, $(j=1,2)$\,. As ${\rm GL}(H)$ is the complexification of $(H\setminus0)|_M$\,, we obtain (through a complexification)
a morphism of holomorphic vector bundles $J:\mathcal{C}_2^*\to\mathcal{C}_1^*$.
Then a holomorphic section $f$ of $\Lambda^2H$ is (complex-)quaternionic pluriharmonic if and only if the corresponding equivariant holomorphic function
$\widetilde{f}$ on ${\rm GL}(H)$ satisfies $(\partial_1\circ J\circ\partial_2)(\widetilde{f}\,)=0$\,.\\
\indent
Note that, ${\rm GL}(H)$\,, as a bundle over $(Y+Y)\setminus Y$, is a principal bundle with group $\C^{\!*}\times\C^{\!*}$, where the action
is induced, by restriction to the diagonal matrices, from the action of ${\rm GL}(2,\C\!)$\,. Furthermore,
$\psi_{H,j}$ is a morphism of principal bundles, covering $\psi_j$\,, with respect to the morphism of Lie groups $\C^{\!*}\times\C^{\!*}\to\C^{\!*}$,
$(\l_1,\l_2)\mapsto\l_j$\,, $(j=1,2)$\,.\\
\indent
Let $f$ be a section of $\Lambda^2H$ and let $\widetilde{f}$ be the corresponding equivariant function on ${\rm GL}(H)$\,.
Then, for any $(\l_1,\l_2)\in\C^{\!*}\times\C^{\!*}$, we have
\begin{equation} \label{e:cocycle_invariance}
R_{(\l_1,\l_2)}^*\bigl((J\circ\partial_2)(\widetilde{f}\,)\bigr)=\l_1^{-2}(J\circ\partial_2)(\widetilde{f}\,)\;,
\end{equation}
where $R^*$ denotes the pull back transformation induced by the right translation. This is straightforward if we restrict to the fibres of $\p_H$\,.
Further, the proof of Proposition \ref{prop:q_a_connection} works in this setting as well by passing, if necessary, to an open neighbourhood
of each point of $M$. Consequently, Proposition \ref{prop:q_standard_invariance} can be easily adapted to this setting, and by using it
\eqref{e:cocycle_invariance} quickly follows.\\
\indent
Consequently, if $f$ is a quaternionic pluriharmonic section of $\Lambda^2H$ then the partial $1$-form $(J\circ\partial_2)(\widetilde{f}\,)$
defines a $1$-cocycle, for the relative de Rham cohomology (see \cite{EGW}\,) of $\mathcal{L}^2$, with respect to $\psi_1$\,, whose first group we denote by
$H_{\psi_1\!}^1\bigl(Z,\mathcal{L}^2\bigr)$\,. We have, thus, obtained an injective linear map, into
this cohomology group, from the space of quaternionic pluriharmonic sections of $\Lambda^2H$ (on integrating $(J\circ\partial_2)(\widetilde{f}\,)$
along the fibres of $\p_H$ we obtain $f$, up to a nonzero constant factor).\\
\indent
Therefore (ii) is proved if, locally,
we have $H^1\bigl(Z,\mathcal{L}^2\bigr)=H_{\psi_1\!}^1\bigl(Z,\mathcal{L}^2\bigr)$\,. From the main result of \cite{EGW}\,, we deduce that, locally,
this holds if the fibres of $\psi$ are contractible. Now, if $(M,F,\r,c)$ is of constant type, we may assume that the
images through $\p_H$ of the fibres of $\psi$ are totally geodesic with respect to a (classical) connection on $M^{\C\!}$. Thus, in this case,
by passing to any convex neighbourhood over which $Y$ is trivial, we have the desired cohomology isomorphism, and the proof is complete.
\end{proof}
\begin{rem}
From Proposition \ref{prop:qPt_inj} and Theorem \ref{thm:q_Penrose} we obtain that, locally, the Penrose transform is an isomorphism in the following cases:\\
\indent(1) for (classical) quaternionic manifolds \cite{Bas-q_complexes} (see, also, \cite{Hit-80}\,, for the particular case of anti-self-dual manifolds).\\
\indent(2) for co-CR quaternionic vector spaces (this was, essentially, proved in \cite{Tsai}\,).
\end{rem}
\end{document}
\begin{document}
\twocolumn[
\icmltitle{Regret Guarantees for Adversarial Online Collaborative Filtering}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Stephen Pasteris}{yyy}
\icmlauthor{Fabio Vitale}{xxx}
\icmlauthor{Mark Herbster}{zzz}
\icmlauthor{Claudio Gentile}{www}
\end{icmlauthorlist}
\icmlaffiliation{yyy}{The Alan Turing Institute, UK}
\icmlaffiliation{xxx}{CENTAI, Italy}
\icmlaffiliation{zzz}{University College London, UK}
\icmlaffiliation{www}{Google Research, USA}
\icmlcorrespondingauthor{Stephen Pasteris}{[email protected]}
\icmlcorrespondingauthor{Fabio Vitale}{[email protected]}
\icmlcorrespondingauthor{Mark Herbster}{[email protected]}
\icmlcorrespondingauthor{Claudio Gentile}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{\icmlEqualContribution}
\begin{abstract}
We investigate the problem of online collaborative filtering under no-repetition constraints, whereby users need to be served content in an online fashion and a given user cannot be recommended the same content item more than once. We design and analyze a fully adaptive algorithm that works under biclustering assumptions on the user-item preference matrix, and show that this algorithm exhibits an optimal regret guarantee, while requiring no prior knowledge of the sequence of users, the universe of items, or the biclustering parameters of the preference matrix.
We further propose a more robust version of the algorithm which addresses the scenario when the preference matrix is adversarially perturbed. We then give regret guarantees that scale with the amount by which the preference matrix is perturbed from a biclustered structure.
To our knowledge, these are the first results on online collaborative filtering that hold at this level of generality and adaptivity under no-repetition constraints.
\end{abstract}
\section{Introduction}\label{s:intro}
Helping customers identify their preferences is essential for businesses with a diverse product offering. Many companies rely on recommendation systems (RS)~\cite{resnick1997recommender}, which allow users to browse, search, or receive suggestions from online services. Recommendation algorithms let us narrow down massive amounts of information into personalized choices.
This is especially relevant in
online businesses, where the capabilities of interactive RS have become of paramount importance.
Customers can obtain suggestions for movies (e.g., Netflix, YouTube TV), music (e.g., Spotify, YouTube Music), job openings (e.g., LinkedIn, Indeed), or various products (e.g., Amazon, eBay), while their feedback is tracked and exploited to improve future recommendations tailored to specific user interests.
In most cases, the goal is to improve user experience as measured by the amount of ``likes''
given by the user over time.
A standard approach to content recommendation is the one provided by Collaborative Filtering (CF), where personalized recommendations are generated based on both content data and aggregate user activity.
CF algorithms are either user-based or item-based.
A user-based CF algorithm suggests items that similar users enjoy. Item-based algorithms suggest items that are similar to items enjoyed by the user in the past. In many practical applications, suggesting an item that has already been consumed is often useless~\cite{bresler2014latent, bresler2016collaborative, ariu2020regret}. For instance, it is pointless to keep advertising the same movie or book to a user after she has already watched or read it. In the recent online CF literature, this assumption is often called the \enquote{\em{no-repetition constraint}}~\cite{ariu2020regret}.
The aim of this paper is to design and analyze novel learning algorithms for online collaborative filtering
under the no-repetition constraint assumption, still forcing the algorithms to leverage the collaborative effects in the user-item structure.
In this sense, the algorithms we propose combine both user-based and item-based online CF approaches.
Our learning problem can be described as follows. Learning proceeds in a sequence of interactive {\em trials} (or {\em rounds}). At each round, a single user shows up, and the RS is compelled to recommend content (an individual item) to them. We make no assumptions whatsoever on the way the sequence of users gets generated across rounds.
The user then provides feedback encoding their opinion about the selected item, and the RS uses this signal to update its internal state.
The kind of feedback we expect is binary (click/no-click, thumbs up/down, like/dislike), which is very common in online media services (TikTok, YouTube, etc.).
In any trial, the RS is constrained to recommend to the user at hand an item that {\em has not} been recommended to that user in the past.
In order to leverage non-trivial collaborative effects, we
investigate a latent model of user preferences based on {\em biclustering}~\cite{H72}.
Specifically, we shall assume that each user falls under one user {\em type} (or {\em cluster}) and that, in the absence of noise, users within the same type prefer the same items. At the same time,
items are also clustered so that, in the absence of noise, all items belonging to the same cluster are liked or disliked in the same fashion by each user. Modern user-based (item-based) CF algorithms often operate under these assumptions, which are supported by a number of experiments in the RS literature, for both user~\cite{das2007google, bellogin2012using, bresler2014latent, bresler2021regret} and item~\cite{sarwar2001item, bresler2016collaborative, bresler2021regret} clustering.
The set of all user preferences is generally viewed as a matrix called a {\em preference matrix}.
\subsection{Our contributions}
We consider a sequential and adversarial learning setting where users and user preferences can be generated arbitrarily. We initially investigate the case where the preference matrix is perfectly biclustered, and then relax this assumption to allow an adversarial perturbation of the preference matrix. In all cases, we quantify online performance in terms of cumulative {\em regret}, that is, the extent to which the number of recommendation mistakes made by our algorithms across a sequence of rounds exceeds those made by an omniscient oracle that knows the preference matrix beforehand. In the perfect biclustering case, we fully characterize the problem: we both provide a regret lower bound and describe an algorithm that achieves a regret guarantee that matches this lower bound. In the adversarially perturbed case, we introduce a more robust algorithm whose regret performance is expressed in terms of the degree by which the perturbed preference matrix diverges from a perfectly biclustered one.
Our algorithms are scalable and {\em fully adaptive}, in that they need not know the parameters of the underlying biclustering structure, the time horizon, or the amount of perturbation in the preference matrix. Moreover, the algorithms can be naturally run in situations where both the set of users and the universe of items increase (arbitrarily) over time.
\subsection{Discussion and related work}
As far as we are aware, this is the first work that provides theoretical performance guarantees under the no-repetition constraint in a non-stochastic setting where both the sequence of users and the user-item preference matrix are generated adversarially.
Because user preferences can be observed solely for items recommended in the past, we must address the classical problem of how to quickly determine the
users' interests without affecting the quality of our recommendations.
That is,
should we provide recommendations according to the users' interests observed thus far,
or should we obtain new feedback signals so as to better profile the users? This well-known exploitation-exploration dilemma
has received wide attention in the machine learning and statistics fields. In particular, the research in multi-armed bandit (MAB) problems and methods applied to RS tasks
is interested in how to strike an optimal balance
between exploitation of current knowledge of user preferences and exploration of new potential interests~(see, e.g., the monographs \cite{bubeck2012regret, lattimore2020bandit}).
One thing to emphasize, though, is that
none of the variants of MABs which are readily available in the literature includes all the core elements of the problem we are considering here. An item can only be suggested {\em once} to a user in our setting, while an arm (item) in MAB problems can typically be recommended multiple times.
This is a significant difference between our setting and standard MAB formulations where, once the best arm is discovered for a given user, the problem for that user is deemed to be solved.
Clustering~(e.g., \cite{bui2012clustered,maillard2014latent, gentile2014online, 10.1145/2911451.2911548, kwon2017sparse, jedor2019categorized}) as well as low-rank (e.g., \cite{katariya+17,10.1145/3240323.3240408,NEURIPS2020_9b7c8d13,pmlr-v97-jun19a,pmlr-v117-trinh20a,pmlr-v130-lu21a,kang2022efficient}) assumptions are also widespread in the stochastic MAB literature as applied to recommendation problems, but these works do not consider the no-repetition constraint on the items, nor do they address adversarial perturbations of the preference matrix.
Our problem also shares similarities with the classical problem of {\em Matrix Completion} (MC);
see, e.g.,~\cite{keshavan2010matrix, biau2010statistical, rohde2011estimation, aditya2011channel, candes2012exact, negahban2012restricted, jain2013low}.
The objective in a MC problem is to estimate the missing entries of a given matrix, having at one's disposal only a subset of its entries. It is often assumed that the matrix meets specific criteria (like low rank). Again, a closer inspection reveals that the typical conditions under which a MC algorithm works do not properly reflect the sequence of user events in a RS. For instance, whereas MC algorithms typically require the subset of the entries to be drawn at random according to some distribution, we are not bound to observe the matrix entries according to such a benign criterion, since users may visit and revisit a RS in an arbitrary order.
Furthermore, we impose differing structural assumptions, such as the adversarial perturbation of the preference matrix.
The closest MC problem to ours is the one where the entries need to be predicted online~\cite{ hazan2012near, herbster2020online}. However, this problem is fundamentally different from ours in that on each trial an entry of the preference matrix must be predicted, rather than an item being selected for recommendation to a given user.
Matrix factorization and structured bandit formulations have often been used to frame the design of RS algorithms -- see, e.g., \cite{koren2009matrix, dabeer2013adaptive, zhao2013interactive,wang2006unifying, verstrepen2014unifying}, and references therein. The relevant literature on RS is abundant, and we can hardly do it justice here. Yet, we observe that most of the classical investigations on RS are experimental in nature, while those on structural bandits (e.g., \cite{bui2012clustered,maillard2014latent, gentile2014online,10.1145/2911451.2911548, kwon2017sparse, jedor2019categorized,katariya+17,10.1145/3240323.3240408,NEURIPS2020_9b7c8d13,pmlr-v97-jun19a,pmlr-v117-trinh20a,pmlr-v130-lu21a,kang2022efficient,pal2023optimal}) or on MC (e.g., \cite{keshavan2010matrix, biau2010statistical, rohde2011estimation, aditya2011channel, candes2012exact, hazan2012near, negahban2012restricted, jain2013low, herbster2020online}) do not readily apply to our adversarial scenarios.
The closest references to our work are perhaps \cite{bresler2014latent, bresler2016collaborative, ariu2020regret,bresler2021regret}, where online CF problems are investigated under the no-repetition constraint assumption. As in our paper, algorithm performance is measured
by comparing the proposed algorithm against an omniscient RS that knows the preferences of all users on all items. However, in~\cite{bresler2016collaborative} the sequence of users is generated in a quite benign way (uniformly at random), while in \cite{bresler2014latent, ariu2020regret,bresler2021regret} all users need to receive a recommendation and provide feedback simultaneously.
\iffalse
As discussed in the first part of this section, one of the main differences between our problem and the ones addressed so far is the no-repetition constraint, which do not allow to apply RS algorithms developed for matrix factorization and bandit problems. The feedback biclustering is often also studied by separating the user-user from the item-item cluster paradigm-- see, e.g., \cite{wang2006unifying, verstrepen2014unifying, bresler2016collaborative, bresler2021regret}, while we naturally combine the two cluster problems when dealing with the exploitation-exploration trade-off.
Additionally, many RS papers are different from other significant viewpoints. In particular, the online RS problem is not studied in an adversarial setting simultaneously for the target user selection and the feedback biclustering structure. Often, the target user is selected uniformly at random from the user set-- see, e.g., \cite{dabeer2013adaptive, bresler2016collaborative, ariu2020regret, pal2023optimal}, or all the RS algorithms can provide a recommendation to all users at each time step, thereby receiving all user feedback simultaneously~\cite{bresler2014latent, heckel2017sample, bresler2021regret}. These assumptions make the problem easier, especially in providing strong theoretical performance guarantees. Indeed, a major strength of our work that differentiates it from the ones in the RS literature is the capability of providing theoretical guarantees, optimal in the noiseless case, within a very general and adversarial framework.
It is also worth mentioning that many online CF algorithms cannot deal with the exploration-exploitation tradeoff in the first trials~\cite{zhao2013interactive, heckel2017sample, huleihel2021learning}. Hence, they have the cold start time, which is another remarkable difference from our algorithm.
Furthermore, another difference is that we extend our problem setting to incorporate new users and items adversarially generated over time (dynamic inventory), a desirable feature often disregarded in the RS literature.
\fi
Finally, it is worth mentioning that standard ranking problems, where the RS is required to produce a ranked list of diverse items (see, e.g., classical references, like \cite{10.5555/2567709.2502595}), can be seen as a way to implement the no-repetition constraint, but only within a given user session, not across multiple sessions of the same user. Hence this gives rise to a substantially different RS problem than the one we consider here.
\section{Preliminaries and Learning Tasks}
All the tasks considered here involve the recommendation of $\nma$ items to $\nmu$ users. In order to define our problems we must first introduce what it means for a matrix to be \emp{(bi)clustered}.
Given an $\nmu\times \nma$ matrix $\lmat$, we say that two users $\au,\au'\in\na{\nmu}$ are \emp{equivalent} if and only if $L_{\au,\ai}=L_{\au',\ai}$ for all $\ai\in\na{\nma}$, that is, if and only if the rows corresponding to the two users are identical. Similarly, two items $\ai,\ai'\in\na{\nma}$ are \emp{equivalent} if and only if $L_{\au,\ai}=L_{\au,\ai'}$ for all $\au\in\na{\nmu}$. A matrix $\bs{L}$ is $C$-user clustered and $D$-item clustered if the numbers of equivalence classes under these two equivalence relations are no more than $C$ and $D$, respectively. For brevity, we shall refer to such a matrix as a $(C,D)$-biclustered matrix.
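As a small illustration, the $4\times4$ matrix
\[
\bs{L}=\left(\begin{array}{cccc}
1 & 1 & 0 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 1 & 1\\
1 & 1 & 0 & 0
\end{array}\right)
\]
is $(2,2)$-biclustered: users $1,2,4$ have identical rows and user $3$ forms a second user class, while items $1,2$ and items $3,4$ form the two item classes.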
\subsection{The Basic Problem}\label{ss:basic}
Here we shall describe the simplest problem that we study. We have an unknown binary matrix $\bs{L}$ which is $(C,D)$-biclustered for some unknown $C$ and $D$. We say that user $i$ \emp{likes} item $j$ if and only if $L_{i,j}=1$. Our learning problem proceeds sequentially in trials (or rounds) $t=1,2,\ldots,T$, where on trial $t$ a learning agent (henceforth called ``Learner'')
interacts with its environment as follows:
\ben
\item The environment reveals user $i_t$ to Learner;
\item Learner chooses an item $j_t$ to recommend to $i_t$. However, Learner is restricted in that it cannot have recommended item $j_t$ to user $i_t$ on some earlier trial.
\item $L_{\ut{\tim},\itt{\tim}}$ is revealed to Learner.
\een
Note that the problem restricts the environment in that a given user cannot be queried more than $\nma$ times. For any trial $t$, if $L_{\ut{\tim},\itt{\tim}}=0$ then user $\ut{\tim}$ does not like item $\itt{\tim}$ and we say that Learner incurs a \emp{mistake}. The aim of Learner is to minimize the total number of mistakes made throughout the $T$ rounds for the given matrix $\bs{L}$ and the sequence of users $i_1,\ldots, i_T$ generated by the environment.
In fact, since the binary matrix $\bs{L}$ can be arbitrary (it may contain a lot of zeros) and the sequence $i_1,\ldots, i_T$ may be generated adversarially, our goal will be to bound the learner's \emp{regret} $R$, which is defined as the difference between the number of mistakes made by Learner and the number of mistakes that would have been made by an {\em omniscient} oracle, i.e., one that has a-priori knowledge of $\bs{L}$. Formally, given a user $\au\in\na{\nmu}$, let $\nqu{\au}$ be the number of rounds $t\in\na{\ntr}$ in which $\au=\ut{\tim}$, and let $\nli{\au}$ be the number of items in $\na{\nma}$ that $\au$ likes.
Here and throughout, we denote by $\indi{\cdot}$ the indicator function of the predicate at its argument. The regret $\reg$ is then defined as:
\be
\reg:=\sum_{\tim\in\na{\ntr}}(1-\lrel{\ut{\tim}}{\itt{\tim}})-\sum_{\au\in\na{\nmu}} \max\bigl\{0,\,\nqu{\au}-\nli{\au}\bigr\}~,
\ee
where the first summation is the number of mistakes made by Learner and the second summation is the number of mistakes made by the omniscient oracle for the given sequence of users $i_1,\ldots, i_T$ generated by the environment.
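As a concrete example, suppose a user $\au$ is served content on $\nqu{\au}=5$ rounds and likes $\nli{\au}=3$ of the $\nma$ items. The omniscient oracle can recommend the three liked items first but is then forced, by the no-repetition constraint, to recommend two disliked items, so it makes exactly $\max\{0,\nqu{\au}-\nli{\au}\}=2$ mistakes on this user, irrespective of the order in which these rounds occur.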
We shall prove for this problem that our
randomized algorithm \algn\ ({\bf O}ne-time {\bf R}e{\bf C}ommendation {\bf A}lgorithm -- Section \ref{ss:orcbandit}) has an expected regret bound of the form:
\be
\expt{R} = \mathcal{O}\left(\min\{C,D\}(\nmu+\nma)\right)~,
\ee
and a time complexity of only $\mathcal{O}(\nma)$ per round. We will also prove that the above regret bound is essentially optimal.
\subsection{Adversarial biclustering perturbation}\label{ss:advnoise}
We now turn to the problem of incorporating adversarial perturbation into matrix $\lmat$. In this case we will only exploit similarities between items and leave it as an open problem to adapt our methodology to exploit similarities between users. Our algorithm \algnn\ (Section \ref{ss:orcbanditstar}) takes a parameter $\icr\in\nat\setminus\{1\}$. We will discuss how to combine instances of \algnn\ in such a way that we obtain a parameter-free algorithm whose regret bound is only a $\bo{\ln(M)}$ factor off that of \algnn\ with the optimal value of $\icr$.
We shall assume now that we have a \emp{hidden} $D$-item clustered $\nmu\times\nma$ binary matrix $\lmap$ which is perturbed arbitrarily to form the matrix $\lmat$. The matrix $\lmap$ and parameter $\icr$ induce the following concept of {\em bad} users and items:
\begin{itemize}
\item Given a user $i\in[M]$, its \emp{perturbation level} $\ucp{i}$ is the number of items $j\in[N]$ for which $\lrel{i}{j}\neq\lrep{i}{j}$. Recall that $\nqu{i}$ and $\nli{i}$ are the number of times this user is queried and the number of items that it likes, respectively. User $i$ is \emp{bad} if and only if both $\ucp{i}>0$ and $\nqu{i}>\nli{i}-2\ucp{i}$.
\item Given an item $j\in[N]$, its \emp{perturbation level} $\icp{j}$ is the number of users $i\in[M]$ for which $\lrel{i}{j}\neq\lrep{i}{j}$. Item $j$ is \emp{bad} if and only if $\icp{j}>\icr$.
\end{itemize}
Note that the parameter $\icr$ affects the definition of a bad item. We can hence view $\icr$ as a tolerance of the algorithm to item perturbation. A bad user is one to whom the learner is compelled to serve content ``too often'': observe that this notion depends not only on the difference between $\bs{L}$ and $\bs{L^*}$, but also on the specific sequence of users $i_1,\ldots,i_T$ generated by the environment. We also stress that in relevant real-world scenarios there will often be no bad users. E.g., there are usually many more books that a person would enjoy than those they have time to read.
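To illustrate these definitions, suppose $\lmat$ and $\lmap$ differ in exactly three entries, all lying in a single column $j$. Then $\icp{j}=3$, so item $j$ is bad if and only if the tolerance satisfies $\icr<3$, whereas all other items have perturbation level zero and are never bad. The three affected users each have perturbation level $\ucp{i}=1$, and each of them is bad only if it is queried more than $\nli{i}-2$ times; in particular, there are no bad users whenever every such user is served content on at most $\nli{i}-2$ rounds.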
Given that we have $m$ bad users and $n = n(\psi)$ bad items, \algnn\ enjoys the following regret bound:
\begin{align*}
\expt{\reg} = {\mathcal O}\Bigl(\min\Bigl\{& (D\icr+m+n)(M+N),\\ &(D+n+m/\icr)(M+N\icr)\Bigr\}\Bigr)\,.
\end{align*}
Notice that the two terms in the above minimum are generally incomparable, even when solely viewed as a function of parameter $\psi$. When $M=\bo{N}$ the first term is better due to the reduced influence of $n$, whilst when $M=\Omega(N\icr)$ the second term is better. Hence the above bound expresses a best-of-both-worlds guarantee which is independent of the relative size of $M$ and $N$.
\subsection{Dynamic Inventory}\label{diss}
\iffalse
A further extension of our problem is to allow the set of users and items to grow
over time. We first note that since the user sequence $i_1,\ldots, i_T$ is determined adversarially, it is essentially {\em automatic}\FV{I would prefer a different term, or avoiding emphasizing it} for the set of users to grow over time. Hence in what follows we restrict to the set of items. The way the problem formulation changes is that now on each round $t$ we have a set of items $\mathcal{I}_t\subseteq\na{\nma}$ which is chosen adversarially but satisfies $\mathcal{I}_{t-1}\subseteq \mathcal{I}_t$, and we incorporate the restriction on Learner that $\itt{\tim}\in \mathcal{I}_t$. Note that in this formulation the learner need to know $\nmu$ or $\nma$.
\fi
A natural extension of our problem is to allow new users and items to appear dynamically over time. Thus, on a given trial we may or may not see a (single) new user, but the set of items $\mathcal{I}_t$ that we may recommend from on round $t$ is a superset of the items from the previous round, i.e., $\mathcal{I}_{t-1}\subseteq \mathcal{I}_t$, and there is no limit on the number of added items. Conventionally, at the final round $T$ the set of distinct users is $[M]$ and the set of distinct items is $[N]= \mathcal{I}_T$. Our algorithms do not need to know $M$ and $N$ in advance.
For simplicity of presentation, we will only consider here the perturbation-free case (Section \ref{ss:basic}), but the same methodology can also be applied in the adversarially perturbed case (Section \ref{ss:advnoise}).
The notion of regret generalizes to this dynamic case as follows. For all trials $t = 1,\ldots, T$, let $\hta{t}$ be the number of trials $t' \leq t$ in which $\ut{t'}=\ut{t}$, and let $\hx{t}$ be the number of items in $\mathcal{I}_{t}$ that user $\ut{t}$ likes. The regret is then defined as
\be
\reg:=\sum_{\tim\in\na{\ntr}}(1-\lrel{\ut{\tim}}{\itt{\tim}})-\sum_{\tim\in\na{\ntr}}\indi{\hta{t}>\hx{t}}\,.
\ee
We note that this definition of regret is again the difference between the number of mistakes made by Learner and those of an omniscient oracle; in particular, when $\ist{\tim}=\na{\nma}$ for all $\tim\in\na{\ntr}$, it reduces to the regret defined in Section \ref{ss:basic}.
For this dynamic case \algn\ enjoys {\em the same} regret guarantee as it did for the static case. However, its running time increases from $\mathcal{O}(\nma)$ to
$\mathcal{O}(\nma^2)$ per round.\footnote
{
More precisely, $\mathcal{O}(|\mathcal{I}_{T}|^2)$ per round, where $|\mathcal{I}|$ is the cardinality of set $\mathcal{I}$.
}
\section{The Perturbation-Free Algorithm}
We start off by describing the basic algorithm
\algn\ which is designed to work in the absence of perturbation, when the inventory is either static (Section \ref{ss:basic}) or dynamic (Section \ref{diss}). In the dynamic case the time complexity of the algorithm is $\mathcal{O}(|\mathcal{I}_{T}|^2) = \mathcal{O}(\nma^2)$ per trial. On the other hand, in the static case (i.e., when $\ist{\tim}=\na{\nma}$ for all $\tim\in\na{\ntr}$) the time complexity decreases to $\mathcal{O}(\nma)$ per trial. We shall analyze both the static and dynamic cases together.
\subsection{\algn}\label{ss:orcbandit}
\algn\ has two variants -- \algn-UC and \algn-IC, which exploit user and item clusters, respectively. We will refer for brevity to these algorithms as \uca\ and \ica, respectively. These algorithms will be designed in such a way that they can be fused together (into the resulting \algn) as follows. We run both \uca\ and \ica\ in parallel and maintain a $0/1$-flag.
On any round $t$, if the flag is set to $0$ then we select $\itt{\tim}$ with, and update, \uca. If the flag is set to $1$ we do so with \ica. The flag gets flipped if and only if $\lrel{\ut{\tim}}{\itt{\tim}}=0$.
The algorithms \uca\ and \ica\ share much of the same code (contained in Algorithm \ref{a:orca}), and analysis. Hence, when we describe and analyze the two algorithms we are considering both at the same time, unless we state otherwise. The pseudo-code of the two algorithms only differ in the lines marked ``\uca" and ``\ica" at the very end.
The algorithm(s) maintains over time a number $\lel$ of \emp{levels}, which can only increase as the algorithm runs. $\lel$ is initialized to zero, and hence we start with no levels. The levels are ordered and hence enumerated as $\{1,2,3,\ldots,\lel\}$. At any point in time each user $i$ is associated with an integer $\pos{i}$ that is initialized as $0$ and can only increase over time. We will always have that $\pos{i}\leq\lel$ and, when $\pos{i}>0$, we say that user $i$ is \emp{at} level $\pos{i}$. For all levels $\lii$, we denote by $\cru{\lii}$ the user $\ut{\tim}$ of the trial $\tim$ on which level $\lii$ gets created, that is, on which $\lel$ becomes equal to $\lii$ (during step \ref{tp3} of the pseudo-code). Each level $\lii$ is associated with a \emp{representative} item $\lir{\lii}$ and a set of users $\les{\lii}$. The only difference between \uca\ and \ica\ is how this set $\les{\lii}$ is defined:
\begin{itemize}
\item In \uca, $\les{\lii}$ is the set of all users $\au$ in which $\lrel{\au}{\lir{\lij}}=\lrel{\cru{\lii}}{\lir{\lij}}$ for all $\lij\leq \lii$.
\item In \ica, $\les{\lii}$ is the set of all users $\au$ with $\lrel{\au}{\lir{\lii}}=1$.
\end{itemize}
For each level $\lii$ we maintain a \emp{pool} $\zi{\lii}$ which is the set of all items that we believe are liked by all users in $\les{\lii}$. In essence, the algorithm is biased towards the initial assumption that all users like all items, and then operates by keeping track of any bias violation to recover the biclustering structure. This is because we initialize $\zi{\lii}$ to be equal to $\nat$ (the universe of potential items). When we encounter a user $\au$ that we know to be in $\les{\lii}$ and that we know does not like an item $\ai$, we then know that $\ai$ is not liked by all users in $\les{\lii}$, and hence we remove it from $\zi{\lii}$.\footnote
{
To keep the algorithm simple and fast we actually do not always remove this item, but for the purposes of this discussion assume we do.
}
\begin{algorithm}[!h]
{\bf Init :}
\begin{itemize}
\item Set $\lel\la0$
\item Set $\pos{\au}\la0$ for all $\au\in\nat$
\end{itemize}
{\bf For} $t = 1, \ldots, T$ :
\begin{enumerate}
\item Observe user $\ut{\tim}$ and set of items $\ist{\tim}$
\item Set $\uco\la\ist{\tim}\setminus\{j_{s}~|~s\leq t-1\,,\,i_{s}=i_t\}$
\item Set $\cpo\la\pos{\ut{\tim}}$
\item \label{tp1} {\bf If} there exists $\lii\leq \cpo$ with $\ut{\tim}\in\les{\lii}$ and $\uco\cap\zi{\lii}\neq\emptyset$
{\bf then :}\\ \comm{In the static inventory model $\lii=\cpo$}
\begin{itemize}
\item Pick any $\lse\leq \cpo$ with $\ut{\tim}\in\les{\lse}$ and $\uco\cap\zi{\lse}\neq\emptyset$
\item Pick any item $\itt{\tim}$ from $\uco\cap\zi{\lse}$
\item Observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item If $\lrel{\ut{\tim}}{\itt{\tim}}=0$ remove $\itt{\tim}$ from $\zi{\lse}$
\end{itemize}
\item \label{tp2} \label{trt2} {\bf Else if} $\cpo\neq\lel$
{\bf then :}
\begin{itemize}
\item If $\lir{\cpo+1}\in\uco$ select item $\itt{\tim}\la\lir{\cpo+1}$, otherwise pick any $\itt{\tim}$ from $\uco$
\item Observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item Set $\pos{\ut{\tim}}\la\cpo+1$
\end{itemize}
\item \label{tp3} {\bf Else :}
\begin{itemize}
\item Draw item $\itt{\tim}$ uniformly at random from $\uco$
\item Observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item If $\lrel{\ut{\tim}}{\itt{\tim}}=1$
\comm{Create new level}
\begin{itemize}
\item Set $\lel\la\lel+1$
\comm{Define $\cru{\lel}:=i_t$}
\item Set $\pos{\ut{\tim}}\la\lel$
\item Set $\lir{\lel}\la\itt{\tim}$
\item Set $\zi{\lel}\la\nat$
\item Define $\les{\lel}$ to be the set of all users $\au\in\us$ with:\\
$\rightarrow$ \uca:\,\,
$\lrel{\au}{\lir{\lii}}=\lrel{\ut{\tim}}{\lir{\lii}}~~\forall a\in\na{\lel}$\\
$\rightarrow$ \ica\,\,:\,\,\, $\lrel{\au}{\lir{\lel}}=1$
\end{itemize}
\end{itemize}
\end{enumerate}
\caption{One time recommendation alg. (\algn).\label{a:orca}}
\end{algorithm}
When a user $i$ moves into a level $\lii$ -- in that $\pos{i}$ becomes equal to $\lii$ (during step \ref{tp2} of the pseudo-code in Algorithm \ref{a:orca}) -- we check whether $i$ likes $\lir{\lii}$. This means that at any point in time we know, for all $\lij\leq\pos{i}$, whether or not $i\in\les{\lij}$.
On a trial $t$, let $\ell:=\pos{i_t}$. We recommend our item $\itt{\tim}$ as follows. If there exists some item $\ai\in\ist{\tim}$, not recommended to $\ut{\tim}$ before, such that there exists $\lii\leq\ell$ with both $\ut{\tim}\in\les{\lii}$ and $\ai\in\zi{\lii}$ (i.e., the condition in Line \ref{tp1} is true), then we believe that user $\ut{\tim}$ likes item $\ai$, so we choose $\itt{\tim}$ to be such an item. If no such item $\ai$ exists, or if $\ell=0$ (i.e., the condition in Line \ref{tp1} is false), then, provided $\ell<\lel$ (so that the condition in Line \ref{tp2} is true), we move $\ut{\tim}$ up a level: we first check whether it likes the next level's representative item $\lir{(\ell+1)}$ (by choosing $j_t:=\lir{(\ell+1)}$ if this item has not already been recommended to it), and then update $\pos{\ut{\tim}}\la\ell+1$. This process is illustrated in Figure~\ref{f:1}.
Now suppose we want to move $i_t$ up a level but $\pos{\ut{\tim}}=\lel$ so there is no level to move up to (that is, the condition in Line \ref{tp3} is true). In this case we draw $\itt{\tim}$ uniformly at random from those items in $\ist{\tim}$ not yet recommended to $\ut{\tim}$. If $\ut{\tim}$ likes $\itt{\tim}$ then we increment $\lel$ by one and then create a new level $\lel$ with $\lir{\lel}:=\itt{\tim}$ and $\cru{\lel}:=\ut{\tim}$.
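To make the above bookkeeping concrete, the following minimal Python sketch mirrors the pseudo-code of Algorithm \ref{a:orca} for the \ica\ variant under the static inventory model (so that, in Line \ref{tp1}, only the user's current level needs to be inspected). The function \texttt{likes(i, j)} stands in for observing $\lrel{i}{j}$; all identifiers are ours and purely illustrative, not part of the formal specification.
\begin{verbatim}
import random

def orc_ica(user_stream, items, likes):
    """Illustrative sketch of the one-time recommendation pseudo-code
    (item-clustering variant, static inventory)."""
    L = 0                  # number of levels created so far
    pos = {}               # pos[i]: current level of user i (0 = no level yet)
    rep, pool = {}, {}     # rep[l]: representative item; pool[l]: items believed liked by U_l
    shown, fb = {}, {}     # shown[i]: items recommended to i; fb[(i, j)]: recorded feedback

    def recommend(i, j):
        shown[i].add(j)
        fb[(i, j)] = likes(i, j)
        return fb[(i, j)]

    for i in user_stream:
        pos.setdefault(i, 0)
        shown.setdefault(i, set())
        fresh = [j for j in items if j not in shown[i]]
        if not fresh:
            continue
        l = pos[i]
        in_level = l > 0 and fb.get((i, rep[l])) == 1   # membership rule of the ICA variant
        if in_level and (pool[l] - shown[i]):
            # exploit: recommend an item from the pool of the user's level
            j = next(iter(pool[l] - shown[i]))
            if recommend(i, j) == 0:
                pool[l].discard(j)       # not liked by all of U_l: drop it from the pool
        elif l != L:
            # climb: probe the next level's representative, then move the user up
            j = rep[l + 1] if rep[l + 1] in fresh else fresh[0]
            recommend(i, j)
            pos[i] = l + 1
        else:
            # explore: a liked random item opens a new level with i as its creator
            j = random.choice(fresh)
            if recommend(i, j) == 1:
                L += 1
                pos[i], rep[L], pool[L] = L, j, set(items)
\end{verbatim}
The sketch keeps all past feedback in \texttt{fb}, so that when a user climbs into a level its membership can be read off from the (possibly earlier) recommendation of that level's representative item.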
\begin{figure}
\caption{The behavior of \algn\ (with a static inventory) at trial $t$ when $\ell:=\pos{\ut{\tim}}>0$.}
\label{f:1}
\end{figure}
\subsection{Analysis}
We analyze the properties of \algn\ by giving an upper bound on its regret as well as a matching lower bound on the regret of {\em any} algorithm (hence showing the optimality of \algn).
\begin{theorem}\label{thm:noisefree}
Let \algn\ be run on a $(C,D)$-biclustered matrix $\bs{L}$ of size $M\times N$ with an arbitrary sequence of users $i_1,\ldots, i_T$ and an arbitrary monotonically increasing sequence of item sets $\ist{1},\ldots, \ist{T}\subseteq[N]$. Then the expected regret of \algn\ is upper bounded as
\be
\expt{\reg} = \mathcal{O}(\min\{C,D\}(\nmu+\nma))~,
\ee
the expectation being over the internal randomization of the algorithm. \algn\ is parameter-free in that $C,D,M,N$ need not be known.\footnote
{
Full proofs of our results are given in the appendix.
}
\end{theorem}
\begin{proof}[Proof sketch]
Without loss of generality we assume that on every trial $\tim$ there exists an item $\ai\in\ist{\tim}$ which $\ut{\tim}$ likes and has not been recommended to $\ut{\tim}$ before so that the regret equals the number of mistakes. Let $\pfin$ be the total number of levels created by Algorithm \ref{a:orca}.
Given a level $\lii\in[\pfin]$ we count the following mistakes. There are at most $N$ mistakes made on trials corresponding to Line \ref{tp1} of Algorithm \ref{a:orca} with $\lse=\lii$, at most $M$ mistakes on trials corresponding to Line \ref{tp2} with $\cpo=\lii-1$ and, in expectation, at most $N$ mistakes made on trials corresponding to Line \ref{tp3} with $\cpo=\lii-1$. This means that the total number of mistakes is $\mathcal{O}(\pfin(M+N))$.
Next, we proceed to bounding $\pfin$. A crucial property here is what we call the \emp{separation property}, stating that given $\lii,\lij\in\na{\pfin}$ with $\lij>\lii$ and $\cru{\lij}\in\les{\lii}$ there exists some $\au\in\les{\lii}$ with $\lrel{\au}{\lir{\lij}}=0$. With this property in hand, we then analyze the algorithms \uca\ and \ica\ separately.
In \ica\ the separation property implies that given distinct levels $\lii,\lij\in[\pfin]$, their representative items $\lir{\lii}$ and $\lir{\lij}$ are not equivalent. This directly implies that $\pfin\leq\ic$.
In \uca\ the separation property leads to what we call the \emp{tree property}, stating that given levels $\lii,\lij\in[\pfin]$ with $\lii<\lij$ we have either $\les{\lij}\cap\les{\lii}=\emptyset$ or $\les{\lij}\subset\les{\lii}$. We build a forest (whose nodes are sets) as follows. First, for every level $\lij\in[\pfin]$ we make $\les{\lij}$ a node. If there exists $\lii<\lij$ with $\les{\lij}\subset\les{\lii}$ then for the maximum such $\lii$ we make $\les{\lii}$ the parent of $\les{\lij}$. Otherwise $\les{\lij}$ has no parent. Finally, if there exists $\lii\in[\pfin]$ such that $\les{\lii}$ has a single child $\les{\lij}$ then we add the set $\les{\lii}\setminus\les{\lij}$ to the forest and make it a child of $\les{\lii}$. Figure 2 in the appendix gives an example of this forest. By the tree property all leaves are disjoint and non-empty, which implies that there are at most $\uc$ leaves. Since each internal node of the forest has at least two children, and there are at least $\pfin$ nodes, this yields $\pfin\leq 2\uc$.
We have hence shown that the mistakes of \ica\ and \uca\ are $\mathcal{O}(\ic(M+N))$ and $\mathcal{O}(\uc(M+N))$, respectively. When combined together into \algn\ we hence get the desired bound.
\end{proof}
\nc{\mcl}{E}
\nc{\vv}[1]{\bs{v}_{#1}}
\nc{\vc}[2]{v_{#1,#2}}
\nc{\xs}[1]{\mathcal{X}_{#1}}
\nc{\wv}[1]{\bs{w}_{#1}}
\nc{\wc}[2]{w_{#1,#2}}
\nc{\ys}[1]{\mathcal{W}_{#1}}
\nc{\ysp}{\mathcal{W}'}
\nc{\ws}{\mathcal{Y}}
\nc{\zs}[1]{\mathcal{Z}_{#1}}
\nc{\wst}[1]{\mathcal{Y}_{#1}}
\nc{\vs}{\mathcal{V}}
\nc{\bm}{\mu}
\nc{\s}{s}
\nc{\oc}{\theta}
As for the lower bound, we have the following result that proves the optimality of \algn\ in the static inventory model of Section \ref{ss:basic}. As the static model is a special case of the dynamic one, this also proves optimality in the dynamic inventory model.
\begin{theorem}\label{thm:noisefree_lowerbound}
For any algorithm and any $M,N,C,D\in\nat$ with $\min(C,D)\leq\sqrt{\min(M,N)}$ there exists a $(C,D)$-biclustered matrix $\lmat$
of size $M\times N$, a time-horizon $T\in\nat$, and a user sequence $i_1,\ldots, i_T$ such that the algorithm has, in the static inventory model, a regret of
\be
\Omega(\min\{C,D\}(M+N))~.
\ee
\end{theorem}
\section{Adversarial Perturbation}
We now turn to the problem of incorporating arbitrary perturbations of the biclustered matrix. For simplicity we will only consider the static inventory model but note that the dynamic-inventory methodology from the perturbation-free case can be applied to this case also. Our algorithm \algnn\ only exploits similarities among items -- we leave it as an open problem to adapt our methodology in order to exploit similarities among users.
We give two algorithms, \algnn-\uie\ and \algnn-\ue, which have regret bounds of
\be
\expt{\reg} = \bo{(D\icr+m+n)(M+N)}
\ee
and
\be
\expt{\reg} = \bo{(D+n+m/\icr)(M+N\icr)}~,
\ee
respectively. These algorithms can be fused together (into the algorithm \algnn) in the same way as we did for \algn-\uca\ and \algn-\ica, in order to obtain a best-of-both guarantee. These two algorithms, which we will refer to as \uie\ and \ue\ respectively, differ only by whether the instruction labelled \uie\ in the pseudo-code of Algorithm \ref{a:orcastar} (check Line 8 therein) is included or not.
\subsection{Removing the parameter}
We can remove the parameter $\psi$ in
\algnn\ by a standard doubling trick as follows. For each value of $\icr$ in $\{2^a~|~a\in\nat\,,\, a\leq \log_2(M)+1\}$ take an instance of \algnn\ with that parameter value. On any trial we predict with and update only one instance $a$. We stay with instance $a$ until a mistake is made. Once a mistake is made we set $a\la a+1$ modulo $\lfloor\log_2(M)+1\rfloor$. This method allows us to achieve a regret bound that is only an $\bo{\ln(M)}$ factor off the regret bound of \algnn\ with the optimal $\icr$ in hindsight.
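The following short Python sketch illustrates this doubling trick; \texttt{make\_instance(psi)} is a hypothetical constructor for a fresh copy of the base algorithm exposing \texttt{recommend} and \texttt{feedback} methods, and all names are ours rather than part of any formal interface.
\begin{verbatim}
import math

class DoublingWrapper:
    """Round-robin over instances of the base algorithm, one per candidate psi."""

    def __init__(self, M, make_instance):
        # candidate parameter values: psi in {2, 4, 8, ...}, up to about 2M
        self.psis = [2 ** a for a in range(1, int(math.log2(M)) + 2)]
        self.instances = [make_instance(psi) for psi in self.psis]
        self.a = 0                       # index of the currently active instance

    def recommend(self, user, available_items):
        return self.instances[self.a].recommend(user, available_items)

    def feedback(self, user, item, liked):
        self.instances[self.a].feedback(user, item, liked)
        if not liked:                    # a mistake: move on to the next parameter value
            self.a = (self.a + 1) % len(self.instances)
\end{verbatim}
Only the active instance is queried and updated on each trial, matching the description above.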
\subsection{\algnn}\label{ss:orcbanditstar}
\algnn\ is a modification of \algn-\ica. Lines \ref{ntp1}, \ref{ntp2}, and \ref{ntp3} of the pseudo-code of \algnn\ correspond to Lines \ref{tp1}, \ref{tp2}, and \ref{tp3} of the pseudo-code of \algn. To arrive at \algnn\ we make the following two changes to \algn-\ica:
\begin{itemize}
\item In \algn-\ica\ we removed an item $\ai$ from a set $\zi{\lii}$ as soon as we determined that there exists some user $\au\in\les{\lii}$ with $\lrel{\au}{\ai}=0$. In \algnn\ we only remove $\ai$ from the set $\zi{\lii}$ (in Step \ref{ntp1} of the pseudo-code in Algorithm \ref{a:orcastar}) when we determine that there exist over $2\icr$ such users. The variable $\nnul{\lii}{\ai}$ keeps track of how many of these users have been found so far. Note that the set $\les{\lii}$ is not explicitly defined in the pseudo-code of \algnn.
\item The next modification is rather counter-intuitive. In \algn-\ica\ (under the static inventory model) if $i_t$ is at level $\lel$ and there are no items in $\zi{\lel}$
that are not yet recommended to $i_t$, then we draw $j_t$ uniformly at random from the unrecommended items and if $\lrel{i_t}{j_t}=1$ we create a new level. In \uie\ however, we only create a new level (in Step \ref{ntp3}) with probability $1/\icr$. Otherwise we \emp{exclude} $i_t$ and $j_t$. When a user $i$ is excluded future items are recommended to it arbitrarily (in Step \ref{ntp0}) and trials $t$ in which $i_t=i$ no longer have an effect on the rest of the algorithm. When an item $j$ is excluded it will be recommended to all users as soon as possible (in Step \ref{ntp-1}). Excluded users and items are recorded in the sets $\exc$ and $\exi$, respectively. \ue\ differs slightly in that only users are excluded, not items.
\end{itemize}
\begin{algorithm}[!h]
{\bf Input:} $ \psi \in \nat\setminus \{1\}$\\
{\bf Init :}
\begin{itemize}
\item Set $\lel\la0$
\item Set $\pos{\au}\la0$ for all $\au\in\us$
\item Set $\exc, \exi \la\emptyset$
\end{itemize}
{\bf For} $t = 1,\ldots, T$ :
\begin{enumerate}
\item Observe $\ut{\tim}$
\item Set $\uco\la[N]\setminus\{j_{s}~|~s\leq t-1\,,\,i_{s}=i_t\}$
\item Set $\cpo\la\pos{\ut{\tim}}$
\item \label{ntp-1} {\bf If} $\uco\cap\exi\neq\emptyset${\bf~~then :}\\
Pick any $j_t$ from $\uco\cap\exi$ and observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item \label{ntp0} {\bf Else if} $\ut{\tim}\in\exc${\bf~~then :}\\
Pick any $\itt{\tim}$ from $\uco$ and observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item \label{ntp1} {\bf Else if} $\cpo>0$ and $\lrel{\ut{\tim}}{\lir{\cpo}}=1$ and $\uco\cap\zi{\cpo}\neq\emptyset${\bf~~then :}
\begin{itemize}
\item Pick any $\itt{\tim}$ from $\uco\cap\zi{\cpo}$ and observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item {\bf If} $\lrel{\ut{\tim}}{\itt{\tim}}=0${\bf~~then :}
\begin{itemize}
\item Set $\nnul{\cpo}{\itt{\tim}}\la\nnul{\cpo}{\itt{\tim}}+1$
\item If $\nnul{\cpo}{\itt{\tim}}>2\icr$ then remove $\itt{\tim}$ from $\zi{\cpo}$
\end{itemize}
\end{itemize}
\item \label{ntp2} {\bf Else if} $\cpo\neq\lel${\bf~~then :}
\begin{itemize}
\item If $\lir{\cpo+1}\in\uco$ then select $\itt{\tim}\la\lir{\cpo+1}$, otherwise pick any $\itt{\tim}$ from $\uco$
\item Observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item Set $\pos{\ut{\tim}}\la\cpo+1$
\end{itemize}
\item \label{ntp3} {\bf Else :}
\begin{itemize}
\item Draw $\itt{\tim}$ uniformly at random from $\uco$
\item Observe $\lrel{\ut{\tim}}{\itt{\tim}}$
\item Draw $\coin \sim $ Bernoulli($1/\icr$)
\item {\bf If} $\lrel{\ut{\tim}}{\itt{\tim}}=1$ and $\coin=0${\bf~~then :}
\begin{itemize}
\item Add $\ut{\tim}$ to $\exc$
\item \uie: Add $\itt{\tim}$ to $\exi$
\end{itemize}
\item {\bf If} $\lrel{\ut{\tim}}{\itt{\tim}}=1$
and $\coin = 1$ {\bf~~then}\\
Set:
\begin{align*}
\lel&\la\lel+1,\quad \pos{\ut{\tim}}\la\lel,\quad \lir{\lel}\la\itt{\tim}\\
\zi{\lel}&\la\as\\
\nnul{\lel}{\ai}&\la 0\,\,\,\, \forall\ai\in\as
\end{align*}
\end{itemize}
\end{enumerate}
\caption{One time recommendation algorithm for adversarial perturbation (\algnn).\label{a:orcastar}}
\end{algorithm}
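As an informal illustration of the exploration step and of the exclusion mechanism just described, the following Python fragment sketches what happens when Line \ref{ntp3} of the pseudo-code is reached; the \texttt{State} container and all field names are ours, chosen only to make the sketch self-contained, and \texttt{likes(i, j)} again stands in for observing $\lrel{i}{j}$.
\begin{verbatim}
import random
from dataclasses import dataclass, field

@dataclass
class State:                     # illustrative bookkeeping container
    all_items: list
    levels: int = 0              # current number of levels
    pos: dict = field(default_factory=dict)
    rep: dict = field(default_factory=dict)
    pool: dict = field(default_factory=dict)
    miss_count: dict = field(default_factory=dict)   # plays the role of the counters n_{l,j}
    excluded_users: set = field(default_factory=set)
    excluded_items: set = field(default_factory=set)

def explore_step(i, fresh, likes, psi, state, uie=True):
    """Random exploration with exclusion (the last branch of the pseudo-code above)."""
    j = random.choice(fresh)
    liked = likes(i, j)
    coin = random.random() < 1.0 / psi          # Bernoulli(1/psi) coin
    if liked and not coin:
        # with probability 1 - 1/psi: exclude the user (and, when uie=True, the item)
        state.excluded_users.add(i)
        if uie:
            state.excluded_items.add(j)
    if liked and coin:
        # with probability 1/psi: open a new level, as in the perturbation-free algorithm,
        # and reset the disagreement counters attached to the new level
        state.levels += 1
        state.pos[i] = state.levels
        state.rep[state.levels] = j
        state.pool[state.levels] = set(state.all_items)
        state.miss_count[state.levels] = {k: 0 for k in state.all_items}
    return j, liked
\end{verbatim}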
\subsection{Analysis}
The following is the main result of this section.
\begin{theorem}\label{t:orcastar}
Let \algnn\ be run with parameter $\psi$ on an $M\times N$ matrix $\bs{L}$ and an arbitrary sequence of users $i_1,\ldots,i_T$. Suppose that there exists a $(C,D)$-biclustered ground-truth matrix $\bs{L^*}$ that induces $m$ bad users and $n = n(\psi)$ bad items on $\bs{L}$ (see Section \ref{ss:advnoise}). Then the expected regret of \algnn\ is upper bounded as
\begin{align*}
\expt{\reg} = \mathcal{O}\Bigl(\min\Bigl\{ &(D\icr+m+n)(M+N),\\ &(D+n+m/\icr)(M+N\icr)\Bigr\}\Bigr)\,,
\end{align*}
the expectation being over the internal randomization of the algorithm. Note that $C$ and $D$ need not be known.\footnote
{
Here we can only give an idea of the proof; the reader is referred to the appendix for a detailed analysis.
}
\end{theorem}
\begin{proof}[Proof sketch]
Let us first consider \uie. As in the analysis of \algn, we assume without loss of generality that for all trials $t$ there exists an item that $i_t$ likes and that has not been recommended to $i_t$ so far. Following similar steps as in the analysis of \algn, we first bound the contribution of each level to the regret, which is now $\mathcal{O}(M+\icr N)$.
Let $\fnd$ be the set of trials $t$ in which Line \ref{ntp3} of Algorithm \ref{a:orcastar} is executed and $\lrel{i_t}{j_t}=1$. Let $\fnl$ be the set of trials $t\in\fnd$ in which $\coin=1$ on trial $t$. Note that on each trial in $\fnl$ a level is created and on each trial $t$ in $\fnd\setminus\fnl$ user $i_t$ and item $j_t$ are excluded, which contributes $\mathcal{O}(M+N)$ to the total regret.
Trials $t\in\fnl$ in which $\lrep{i_t}{j_t}=0$ can hurt the algorithm by creating an unnecessary level. Given a non-bad user $i$, trials $t\in\fnd$ with $i_t=i$ and $\lrep{i_t}{j_t}=0$ happen infrequently enough that we can effectively ignore their contribution to the regret. However, given a bad user $i$, trials $t\in\fnd$ in which $\lrep{i_t}{j_t}=0$ can be more likely to happen. Yet, on such a trial, with probability $1-1/\icr$ user $i_t$ becomes excluded which implies that bad users contribute, in expectation, only $\mathcal{O}(M+N)$ to the regret. Hence we need not be concerned with trials $t\in\fnd$ with $\lrep{i_t}{j_t}=0$. In a similar fashion, one can show that bad items contribute $\mathcal{O}(M+N)$ to the overall regret.
Let a \emp{cluster} $\clus$ be a set of items that are not bad and are all equivalent in the matrix $\lmap$. We will be interested in the contribution to the regret of trials $t\in\cls:=\{t\in\fnd~|~j_t\in\clus\}$ with $\lrep{i_t}{j_{t}}=1$. Define $\fst:=\min(\cls\cap\fnl)$. Note that there are, in expectation, at most $\icr$ trials in $\{t\in\cls~|~t<\fst\}$, and all such trials are in $\fnd\setminus\fnl$. Due to the fact that a level is created on trial $\fst$ (if $\fst$ is finite), and that its representative item $j_{\fst}$ is not bad, we can prove that for all trials $t\in\cls$ with $t>\fst$ we have $\lrel{i_t}{j_{\fst}}=0$. Since $j_{\fst}$ is not bad and $j_{\fst}$ and $j_t$ are equivalent in $\lmap$, there can be at most $\icr$ such $t$ in which $\lrep{i_t}{j_{t}}=1$. This implies that in expectation at most one of them will be in $\fnl$. Summing up, this allows us to conclude that trials $t\in\cls$ with $\lrep{i_t}{j_{t}}=1$ have an expected contribution to \algnn's regret of
$\mathcal{O}(\icr(M+N))$.
Taking a sum over all clusters, bad users and bad items gives us the required regret bound. In \ue\ we can, without loss of generality, assume that there are no bad items. Noting the fact that each trial in $\fnd\setminus\fnl$ now only contributes $\mathcal{O}(N)$, the same analysis as \uie\ gives us the desired regret guarantee.
\end{proof}
\section{Conclusions and Work in Progress}
Motivated by real-world RS applications, we have considered a sequential content recommendation problem where items can only be recommended once to each user (``no-repetition constraint"). Unlike the abundant literature on MAB under low-rank and clustering assumptions, we have handled, through a regret analysis, the more general situation where users show up in the system in an arbitrary (possibly adversarial) order. We have proposed an algorithm, \algn, that works under biclustering assumptions, and have shown that it exhibits an optimal (up to constant factors) regret guarantee against an omniscient oracle that knows the user-item preference matrix ahead of time. \algn\, can be naturally made adaptive to changes in the user population and in the content universe, and requires no prior knowledge of the biclustering parameters.
We have then extended \algn\, to \algnn\,, a more robust version which is able to handle adversarial perturbations to the biclustering structure in the preference matrix. \algnn's regret is shown to scale with the amount of this perturbation, as measured by the extent to which the observed user preferences depart from a biclustering structure in the form of induced bad users and bad items.
We close by mentioning a couple of research directions we are currently working on.
\begin{enumerate}
\item We are trying to extend \algnn\, so as to be able to {\em jointly} leverage similarities among items and similarities among users. This is likely to require a substantial redesign and rethinking of our algorithm.
\item Among the relevant extensions that would allow us to better address real-world scenarios are: i. The case where the learner has access to side information (i.e., features) about users and/or items,
and ii. The case where the user feedback is non-binary (e.g., relevance scores rather than clicks).
\end{enumerate}
\begin{thebibliography}{40}
\providecommand{\natexlab}[1]{#1}
\providecommand{\url}[1]{\texttt{#1}}
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{doi: #1}\else
\providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi
\bibitem[Aditya et~al.(2011)Aditya, Dabeer, and Dey]{aditya2011channel}
Aditya, S., Dabeer, O., and Dey, B.~K.
\newblock A channel coding perspective of collaborative filtering.
\newblock \emph{IEEE Transactions on Information Theory}, 57\penalty0
(4):\penalty0 2327--2341, 2011.
\bibitem[Ariu et~al.(2020)Ariu, Ryu, Yun, and Prouti{\`e}re]{ariu2020regret}
Ariu, K., Ryu, N., Yun, S.-Y., and Prouti{\`e}re, A.
\newblock Regret in online recommendation systems.
\newblock \emph{Advances in Neural Information Processing Systems},
33:\penalty0 21141--21150, 2020.
\bibitem[Bellog{\'\i}n \& Parapar(2012)Bellog{\'\i}n and
Parapar]{bellogin2012using}
Bellog{\'\i}n, A. and Parapar, J.
\newblock Using graph partitioning techniques for neighbour selection in
user-based collaborative filtering.
\newblock In \emph{Proceedings of the sixth ACM conference on Recommender
systems}, pp.\ 213--216, 2012.
\bibitem[Biau et~al.(2010)Biau, Cadre, and Rouviere]{biau2010statistical}
Biau, G., Cadre, B., and Rouviere, L.
\newblock Statistical analysis of k-nearest neighbor collaborative
recommendation.
\newblock \emph{The Annals of Statistics}, 38\penalty0 (3):\penalty0
1568--1592, 2010.
\bibitem[Bresler \& Karzand(2021)Bresler and Karzand]{bresler2021regret}
Bresler, G. and Karzand, M.
\newblock Regret bounds and regimes of optimality for user-user and item-item
collaborative filtering.
\newblock \emph{IEEE Transactions on Information Theory}, 67\penalty0
(6):\penalty0 4197--4222, 2021.
\bibitem[Bresler et~al.(2014)Bresler, Chen, and Shah]{bresler2014latent}
Bresler, G., Chen, G.~H., and Shah, D.
\newblock A latent source model for online collaborative filtering.
\newblock \emph{Advances in neural information processing systems}, 27, 2014.
\bibitem[Bresler et~al.(2016)Bresler, Shah, and
Voloch]{bresler2016collaborative}
Bresler, G., Shah, D., and Voloch, L.~F.
\newblock Collaborative filtering with low regret.
\newblock In \emph{Proceedings of the 2016 ACM SIGMETRICS International
Conference on Measurement and Modeling of Computer Science}, pp.\ 207--220,
2016.
\bibitem[Bubeck et~al.(2012)Bubeck, Cesa-Bianchi, et~al.]{bubeck2012regret}
Bubeck, S., Cesa-Bianchi, N., et~al.
\newblock Regret analysis of stochastic and nonstochastic multi-armed bandit
problems.
\newblock \emph{Foundations and Trends{\textregistered} in Machine Learning},
5\penalty0 (1):\penalty0 1--122, 2012.
\bibitem[Bui et~al.(2012)Bui, Johari, and Mannor]{bui2012clustered}
Bui, L., Johari, R., and Mannor, S.
\newblock Clustered bandits.
\newblock \emph{arXiv preprint arXiv:1206.4169}, 2012.
\bibitem[Candes \& Recht(2012)Candes and Recht]{candes2012exact}
Candes, E. and Recht, B.
\newblock Exact matrix completion via convex optimization.
\newblock \emph{Communications of the ACM}, 55\penalty0 (6):\penalty0 111--119,
2012.
\bibitem[Dabeer(2013)]{dabeer2013adaptive}
Dabeer, O.
\newblock Adaptive collaborating filtering: The low noise regime.
\newblock In \emph{2013 IEEE International Symposium on Information Theory},
pp.\ 1197--1201. IEEE, 2013.
\bibitem[Das et~al.(2007)Das, Datar, Garg, and Rajaram]{das2007google}
Das, A.~S., Datar, M., Garg, A., and Rajaram, S.
\newblock Google news personalization: scalable online collaborative filtering.
\newblock In \emph{Proceedings of the 16th international conference on World
Wide Web}, pp.\ 271--280, 2007.
\bibitem[Gentile et~al.(2014)Gentile, Li, and Zappella]{gentile2014online}
Gentile, C., Li, S., and Zappella, G.
\newblock Online clustering of bandits.
\newblock In \emph{International Conference on Machine Learning}, pp.\
757--765. PMLR, 2014.
\bibitem[Hartigan(1972)]{H72}
Hartigan, J.~A.
\newblock {Direct Clustering of a Data Matrix}.
\newblock \emph{Journal of the American Statistical Association}, 67\penalty0
(337):\penalty0 123--129, 1972.
\newblock ISSN 01621459.
\newblock \doi{10.2307/2284710}.
\newblock URL \url{http://dx.doi.org/10.2307/2284710}.
\bibitem[Hazan et~al.(2012)Hazan, Kale, and Shalev-Shwartz]{hazan2012near}
Hazan, E., Kale, S., and Shalev-Shwartz, S.
\newblock Near-optimal algorithms for online matrix prediction.
\newblock In \emph{Conference on Learning Theory}, pp.\ 38--1. JMLR Workshop
and Conference Proceedings, 2012.
\bibitem[Herbster et~al.(2020)Herbster, Pasteris, and Tse]{herbster2020online}
Herbster, M., Pasteris, S., and Tse, L.
\newblock Online matrix completion with side information.
\newblock \emph{Advances in Neural Information Processing Systems},
33:\penalty0 20402--20414, 2020.
\bibitem[Hong et~al.(2020)Hong, Kveton, Zaheer, Chow, Ahmed, and
Boutilier]{NEURIPS2020_9b7c8d13}
Hong, J., Kveton, B., Zaheer, M., Chow, Y., Ahmed, A., and Boutilier, C.
\newblock Latent bandits revisited.
\newblock In \emph{Advances in Neural Information Processing Systems},
volume~33, pp.\ 13423--13433. Curran Associates, Inc., 2020.
\bibitem[Jain et~al.(2013)Jain, Netrapalli, and Sanghavi]{jain2013low}
Jain, P., Netrapalli, P., and Sanghavi, S.
\newblock Low-rank matrix completion using alternating minimization.
\newblock In \emph{Proceedings of the forty-fifth annual ACM symposium on
Theory of computing}, pp.\ 665--674, 2013.
\bibitem[Jedor et~al.(2019)Jedor, Perchet, and Louedec]{jedor2019categorized}
Jedor, M., Perchet, V., and Louedec, J.
\newblock Categorized bandits.
\newblock \emph{Advances in Neural Information Processing Systems}, 32, 2019.
\bibitem[Jun et~al.(2019)Jun, Willett, Wright, and Nowak]{pmlr-v97-jun19a}
Jun, K.-S., Willett, R., Wright, S., and Nowak, R.
\newblock Bilinear bandits with low-rank structure.
\newblock In \emph{Proceedings of the 36th International Conference on Machine
Learning}, volume~97 of \emph{Proceedings of Machine Learning Research}, pp.\
3163--3172. PMLR, 2019.
\bibitem[Kang et~al.(2022)Kang, Hsieh, and Chun Man~Lee]{kang2022efficient}
Kang, Y., Hsieh, C., and Chun Man~Lee, T.
\newblock Efficient frameworks for generalized low-rank matrix bandit problems.
\newblock In \emph{Advances in Neural Information Processing Systems}. PMLR,
2022.
\bibitem[Katariya et~al.(2017)Katariya, Kveton, Szepesvári, Vernade, and
Wen]{katariya+17}
Katariya, S., Kveton, B., Szepesvári, C., Vernade, C., and Wen, Z.
\newblock Bernoulli rank-$1$ bandits for click feedback.
\newblock In \emph{Proceedings of the Twenty-Sixth International Joint
Conference on Artificial Intelligence (IJCAI-17)}, 2017.
\bibitem[Keshavan et~al.(2010)Keshavan, Montanari, and Oh]{keshavan2010matrix}
Keshavan, R.~H., Montanari, A., and Oh, S.
\newblock Matrix completion from a few entries.
\newblock \emph{IEEE transactions on information theory}, 56\penalty0
(6):\penalty0 2980--2998, 2010.
\bibitem[Koren et~al.(2009)Koren, Bell, and Volinsky]{koren2009matrix}
Koren, Y., Bell, R., and Volinsky, C.
\newblock Matrix factorization techniques for recommender systems.
\newblock \emph{Computer}, 42\penalty0 (8):\penalty0 30--37, 2009.
\bibitem[Kwon et~al.(2017)Kwon, Perchet, and Vernade]{kwon2017sparse}
Kwon, J., Perchet, V., and Vernade, C.
\newblock Sparse stochastic bandits.
\newblock \emph{arXiv preprint arXiv:1706.01383}, 2017.
\bibitem[Lattimore \& Szepesv{\'a}ri(2020)Lattimore and
Szepesv{\'a}ri]{lattimore2020bandit}
Lattimore, T. and Szepesv{\'a}ri, C.
\newblock \emph{Bandit algorithms}.
\newblock Cambridge University Press, 2020.
\bibitem[Li et~al.(2016)Li, Karatzoglou, and Gentile]{10.1145/2911451.2911548}
Li, S., Karatzoglou, A., and Gentile, C.
\newblock Collaborative filtering bandits.
\newblock In \emph{Association for Computing Machinery}, SIGIR '16, pp.\
539–548, New York, NY, USA, 2016.
\bibitem[Lu et~al.(2018)Lu, Wen, and Kveton]{10.1145/3240323.3240408}
Lu, X., Wen, Z., and Kveton, B.
\newblock Efficient online recommendation via low-rank ensemble sampling.
\newblock In \emph{Proceedings of the 12th ACM Conference on Recommender
Systems}, RecSys '18, pp.\ 460–464. Association for Computing Machinery,
2018.
\newblock ISBN 9781450359016.
\bibitem[Lu et~al.(2021)Lu, Meisami, and Tewari]{pmlr-v130-lu21a}
Lu, Y., Meisami, A., and Tewari, A.
\newblock Low-rank generalized linear bandit problems.
\newblock In \emph{Proceedings of The 24th International Conference on
Artificial Intelligence and Statistics}, volume 130 of \emph{Proceedings of
Machine Learning Research}, pp.\ 460--468. PMLR, 2021.
\bibitem[Maillard \& Mannor(2014)Maillard and Mannor]{maillard2014latent}
Maillard, O.-A. and Mannor, S.
\newblock Latent bandits.
\newblock In \emph{International Conference on Machine Learning}, pp.\
136--144. PMLR, 2014.
\bibitem[Negahban \& Wainwright(2012)Negahban and
Wainwright]{negahban2012restricted}
Negahban, S. and Wainwright, M.~J.
\newblock Restricted strong convexity and weighted matrix completion: Optimal
bounds with noise.
\newblock \emph{The Journal of Machine Learning Research}, 13\penalty0
(1):\penalty0 1665--1697, 2012.
\bibitem[Pal et~al.(2023)Pal, Suggala, Shanmugam, and Jain]{pal2023optimal}
Pal, S., Suggala, A.~S., Shanmugam, K., and Jain, P.
\newblock Optimal algorithms for latent bandits with cluster structure.
\newblock \emph{arXiv preprint arXiv:2301.07040}, 2023.
\bibitem[Resnick \& Varian(1997)Resnick and Varian]{resnick1997recommender}
Resnick, P. and Varian, H.~R.
\newblock Recommender systems.
\newblock \emph{Communications of the ACM}, 40\penalty0 (3):\penalty0 56--58,
1997.
\bibitem[Rohde \& Tsybakov(2011)Rohde and Tsybakov]{rohde2011estimation}
Rohde, A. and Tsybakov, A.~B.
\newblock Estimation of high-dimensional low-rank matrices.
\newblock \emph{The Annals of Statistics}, 39\penalty0 (2):\penalty0 887--930,
2011.
\bibitem[Sarwar et~al.(2001)Sarwar, Karypis, Konstan, and
Riedl]{sarwar2001item}
Sarwar, B., Karypis, G., Konstan, J., and Riedl, J.
\newblock Item-based collaborative filtering recommendation algorithms.
\newblock In \emph{Proceedings of the 10th international conference on World
Wide Web}, pp.\ 285--295, 2001.
\bibitem[Slivkins et~al.(2013)Slivkins, Radlinski, and
Gollapudi]{10.5555/2567709.2502595}
Slivkins, A., Radlinski, F., and Gollapudi, S.
\newblock Ranked bandits in metric spaces: Learning diverse rankings over large
document collections.
\newblock \emph{J. Mach. Learn. Res.}, 14\penalty0 (1):\penalty0 399–436,
2013.
\bibitem[Trinh et~al.(2020)Trinh, Kaufmann, Vernade, and
Combes]{pmlr-v117-trinh20a}
Trinh, C., Kaufmann, E., Vernade, C., and Combes, R.
\newblock Solving bernoulli rank-one bandits with unimodal thompson sampling.
\newblock In \emph{Proceedings of the 31st International Conference on
Algorithmic Learning Theory}, volume 117 of \emph{Proceedings of Machine
Learning Research}, pp.\ 862--889. PMLR, 2020.
\bibitem[Verstrepen \& Goethals(2014)Verstrepen and
Goethals]{verstrepen2014unifying}
Verstrepen, K. and Goethals, B.
\newblock Unifying nearest neighbors collaborative filtering.
\newblock In \emph{Proceedings of the 8th ACM Conference on Recommender
systems}, pp.\ 177--184, 2014.
\bibitem[Wang et~al.(2006)Wang, De~Vries, and Reinders]{wang2006unifying}
Wang, J., De~Vries, A.~P., and Reinders, M.~J.
\newblock Unifying user-based and item-based collaborative filtering approaches
by similarity fusion.
\newblock In \emph{Proceedings of the 29th annual international ACM SIGIR
conference on Research and development in information retrieval}, pp.\
501--508, 2006.
\bibitem[Zhao et~al.(2013)Zhao, Zhang, and Wang]{zhao2013interactive}
Zhao, X., Zhang, W., and Wang, J.
\newblock Interactive collaborative filtering.
\newblock In \emph{Proceedings of the 22nd ACM international conference on
Information \& Knowledge Management}, pp.\ 1411--1420, 2013.
\end{thebibliography}
\appendix
\onecolumn
\section{Proofs}
This appendix contains the complete proofs of all our claims.
\subsection{Proof of Theorem \ref{thm:noisefree}}
\begin{proof}
We shall show that \uca\ and \ica\ have expected regret bounds of
\be
\expt{\reg}=\mathcal{O}(\uc(\nmu+\nma))~~~\operatorname{and}~~~\expt{\reg}=\mathcal{O}(\ic(\nmu+\nma))~,
\ee
respectively.
Let us assume that on every trial $\tim$ there exists an item $\ai\in\ist{\tim}$ which $\ut{\tim}$ likes and has not been recommended to $\ut{\tim}$ before. This is without loss of generality since, on any trial in which this does not hold, the omniscient oracle (when suggesting liked items before disliked items) is forced to make a mistake.
Let $\pfin$ be the value of $\lel$ on trial $\ntr$. Given a level $\lii\in\na{\pfin}$ we will bound the number of mistakes made on trials of the following types:
\begin{itemize}
\item Trials corresponding to Line \ref{tp1} (of the pseudocode in Algorithm \ref{a:orca}) with $\lse=\lii$. Given such a trial $\tim$, we have $\itt{\tim}\in\zi{\lii}$ and if a mistake is made $\itt{\tim}$ is removed from $\zi{\lii}$ so only one mistake can be made per item. Hence, no more than $\nma$ mistakes are made on such trials.
\item Trials corresponding to Line \ref{tp2} with $\cpo=\lii-1$. There is at most one such trial per user and hence no more than $\nmu$ mistakes are made on such trials.
\item Trials corresponding to Line \ref{tp3} with $\cpo=\lii-1$. On such a trial $\tim$, item $\itt{\tim}$ is selected uniformly at random from $\uco$. Since (by the initial assumption) there exists an item $\ai\in\uco$ with $\lrel{\ut{\tim}}{\ai}=1$, there are in expectation at most $\nma$ such trials $\tim$ until $\lrel{\ut{\tim}}{\itt{\tim}}=1$. Once this happens, there are no more trials of this type, and hence there are at most $\nma$ such trials in expectation.
\end{itemize}
Thus there are in expectation $\mathcal{O}(\nmu+\nma)$ mistakes in trials of the above types for each $\lii\in\na{\pfin}$, implying
$$
\expt{\reg}=\mathcal{O}(\expt{\pfin}(\nmu+\nma))~.
$$
Therefore, all we now need to prove is that $\pfin=\mathcal{O}(\uc)$ and $\pfin=\mathcal{O}(\ic)$ for \uca\ and \ica, respectively.
To this effect, recall that given a level $\lii\in[\pfin]$ we denote by $\cru{\lii}$ the user that created that level (i.e. $\cru{\lii}:=i_t$ when $t$ is the trial on which level $\lii$ was created).
A crucial property needed to prove this is what we call the \emp{separation property}:
\begin{center}
Given $\lii,\lij\in\na{\pfin}$ with $\lij>\lii$ and $\cru{\lij}\in\les{\lii}$, there exists some $\au\in\les{\lii}$ with $\lrel{\au}{\lir{\lij}}=0$.
\end{center}
To see why this property holds, first let $\tim$ be the trial on which $\cru{\lij}$ creates level $\lij$ (so $\cru{\lij}=\ut{\tim}$). Since $\lii<\lij$, on such a round $\tim$ we have, directly from the algorithm, that either $\cru{\lij}\notin\les{\lii}$ or there is no item in $\zi{\lii}\cap\ist{\tim}$ that has not yet been recommended to $\cru{\lij}$. So since $\cru{\lij}\in\les{\lii}$, and since $\lir{\lij}$ is recommended to $\cru{\lij}$ on trial $\tim$ and hence not recommended to $\cru{\lij}$ before, we must have that $\lir{\lij}\notin\zi{\lii}$ on trial $\tim$. But for this to happen there must exist a user $\au\in\les{\lii}$ with $\lrel{\au}{\lir{\lij}}=0$, as claimed.
Now let the symbol $\eqi$ denote that two users or items are equivalent with respect to the matrix $\lmat$.
Let us first focus on \ica. Suppose, for contradiction, that we have $\lii,\lij\in\na{\pfin}$ with $\lij>\lii$ and $\lir{\lij}\eqi\lir{\lii}$. Since $\lrel{\cru{\lij}}{\lir{\lij}}=1$ we also have $\lrel{\cru{\lij}}{\lir{\lii}}=1$, and hence $\cru{\lij}\in\les{\lii}$. This implies, via the separation property, that there exists $\au\in\les{\lii}$ with $\lrel{\au}{\lir{\lij}}=0$, so choose such a $\au$. Since $\au\in\les{\lii}$ this gives $\lrel{\au}{\lir{\lii}}=1$, which contradicts the fact that $\lir{\lii}\eqi\lir{\lij}$. So all the representatives of the different levels come from different clusters which gives us $\pfin\leq\ic$, as required.
We now turn our attention to \uca. We will show that for all $\lii,\lij\in\na{\pfin}$ with $\lij>\lii$ we have either $\les{\lij}\cap\les{\lii}=\emptyset$ or $\les{\lij}\subset\les{\lii}$, where the subset property is strict (i.e., $\les{\lij}\neq\les{\lii}$). To show this, suppose that $\les{\lij}\cap\les{\lii}\neq\emptyset$. Choose $\au\in\les{\lij}\cap\les{\lii}$. Since $\au\in\les{\lii}$ we have $\lrel{\au}{\lir{\lii'}}=\lrel{\cru{\lii}}{\lir{\lii'}}$ for all $\lii'\in\na{\lii}$, and since $\au\in\les{\lij}$ we have $\lrel{\au}{\lir{\lii'}}=\lrel{\cru{\lij}}{\lir{\lii'}}$ for all $\lii'\in\na{\lij}$. Thus, since $\lij>\lii$ we have $\lrel{\cru{\lij}}{\lir{\lii'}}=\lrel{\au}{\lir{\lii'}}=\lrel{\cru{\lii}}{\lir{\lii'}}$ for all $\lii'\in\na{\lii}$, which in turn implies that $\cru{\lij}\in\les{\lii}$. Hence, for all $\au\in\les{\lij}$ and $\lii'\in\na{\lii}$ we have $\lrel{\au}{\lir{\lii'}}=\lrel{\cru{\lij}}{\lir{\lii'}}=\lrel{\cru{\lii}}{\lir{\lii'}}$ so that $\au\in\les{\lii}$. This implies that $\les{\lij}\subseteq\les{\lii}$. Hence, since $\cru{\lij}$ is trivially contained in $\les{\lij}$ it is also contained in $\les{\lii}$ which implies, by the separation property, that there exists a user $\au\in\les{\lii}$ with $\lrel{\au}{\lir{\lij}}=0$. Consider such an $\au$. Since (directly from the algorithm) we have $\lrel{\cru{\lij}}{\lir{\lij}}=1$ we then have $\lrel{\au}{\lir{\lij}}\neq\lrel{\cru{\lij}}{\lir{\lij}}$, so that $\au\notin\les{\lij}$. Hence $\les{\lij}\subseteq\les{\lii}$, and there exists $\au\in\les{\lii}\setminus\les{\lij}$ which implies that $\les{\lij}\subset\les{\lii}$, as required.
We have just shown that for all $\lii,\lij\in\na{\pfin}$ with $\lij>\lii$ we have either $\les{\lij}\cap\les{\lii}=\emptyset$ or $\les{\lij}\subset\les{\lii}$. We call this property the \emp{tree property}. We will now construct a directed graph (see Figure 2 for an example), whose nodes are sets, as follows. For all $\lij\in\na{\pfin}$ we have that $\les{\lij}$ is a node in the graph and that:
\begin{itemize}
\item If there exists $\lii\in\na{\pfin}$ with $\les{\lij}\subset\les{\lii}$ then the (unique) parent of $\les{\lij}$ is $\les{\lii}$ for the maximum such $\lii$;
\item If there does not exist such a level $\lii$ then $\les{\lij}$ has no parent.
\end{itemize}
Note that, by the tree property, if $\les{\lii}$ is the parent of $\les{\lij}$ then (since $\les{\lij}\subset\les{\lii}$) we have $\lii<\lij$, thereby making the graph acyclic. Moreover, since each node has at most one parent the graph is a forest.
Suppose we have $\lii,\lij\in[\pfin]$ such that $\lij>\lii$ and that $\les{\lii}$ and $\les{\lij}$ are both roots. Since $\les{\lii}$ is a root we must have that $\les{\lii}\not\subset\les{\lij}$ so by the tree property $\les{\lii}\cap\les{\lij}=\emptyset$ holds. Now suppose we have $\lii,\lij\in[\pfin]$ such that $\lij>\lii$ and that $\les{\lii}$ and $\les{\lij}$ are siblings. Let $\lii'$ be such that $\les{\lii'}$ is the parent of these siblings. We must have $\les{\lii}\subset\les{\lii'}$ so, again by the tree property, we have $\lii'<\lii$. We also must have that $\lii'$ is the maximum element $\lij'$ of $[\pfin]$ such that $\les{\lij}\subset\les{\lij'}$. These two properties imply that $\les{\lij}\not\subset\les{\lii}$ and hence, by the tree property, $\les{\lii}\cap\les{\lij}=\emptyset$. This shows that any two roots, and any two siblings, correspond to disjoint sets.
For all $\lii,\lij\in[\pfin]$ such that $\les{\lij}$ is the only child of $\les{\lii}$, create a new node $\les{\lii}\setminus\les{\lij}$ and make it a child of $\les{\lii}$. Since, here, $\les{\lij}\subset\les{\lii}$, such new nodes are non-empty, and hence, because for all $\lii\in\na{\pfin}$ we have $\cru{\lii}\in\les{\lii}$, all nodes are non-empty. Note that the property that any pair of siblings or roots are disjoint still holds so, since any node is a subset of each of its ancestors, all leaves are disjoint. Also, for all $\lii\in\na{\pfin}$ and $\au\in\les{\lii}$ we have $\au'\in\les{\lii}$ for all $\au'\in\as$ with $\au'\eqi\au$. This implies that each leaf of the forest contains a user cluster as a subset and hence that $\uc$ is at least as large as the number of leaves. Since all the internal nodes of the forest have at least two children and the number of nodes in the forest is no less than $\pfin$, we must have at least $\pfin/2$ leaves. Hence $\pfin\leq2\uc$, as required.
\end{proof}
\begin{figure}
\caption{An example of the forest constructed in the proof of Theorem \ref{thm:noisefree}.}
\label{treefig}
\end{figure}
\subsection{Proof of Theorem \ref{thm:noisefree_lowerbound}}
\begin{proof}
Let $\mcl:=\min\{C,D\}$. We will construct our $M\times N$ matrix $\lmat$ such that it is both $\mcl$-user clustered and $\mcl$-item clustered, which implies $\lmat$ is also $(C,D)$-biclustered.
First consider the case that $M\geq N$. Without loss of generality, assume that $N$ is a multiple of $\mcl$. For all $a\in[\mcl]$ let $\vv{a}$ be the $N$-component vector such that for all $j\in[N]$ we have $\vc{a}{j}:=1$ if and only if
$$
(a-1)(N/\mcl)<j\leq aN/\mcl~.
$$
Define $T:=M\mcl$ and for all $i\in[M]$ and $t\in[T]$ with $(i-1)\mcl<t\leq i\mcl$, let $i_t:=i$. For all $i\in[M]$ we will choose some $a_i\in[\mcl]$ in a way that is dependent on the algorithm and set the $i$-th row of $\lmat$ equal to $\vv{a_i}$. Note that we can always choose $a_i$ such that in expectation $\Omega(\mcl)$ mistakes are made in the $\mcl$ trials $t$ for which $i_t=i$. Since $N/E\geq E$, an omniscient oracle would make no mistakes, and hence the expected regret of the learner is equal to its expected number of mistakes, which is $\Omega(M\mcl)$ as required.
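As a concrete illustration of this construction (a toy instance of ours, not part of the proof), take $N=4$ and $\mcl=2$, so that $N/\mcl=2$ and
$$
\vv{1}=(1,1,0,0)~,\qquad \vv{2}=(0,0,1,1)~.
$$
Every row of $\lmat$ is then one of these two vectors, so $\lmat$ has at most two distinct rows and at most two distinct columns.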
We now turn to the case that $N\geq M$. Without loss of generality, assume that $N$ is a multiple of $\mcl$ and assume $M=\mcl^2$ (since for any $i\in[M]$ with $i>\mcl^2$ we will be able to choose the $i$-th row of $\lmat$ arbitrarily).
For all $a\in[\mcl]$ define $\xs{a}$ to be the set of all $i\in[M]$ such that
$$
(a-1)\mcl<i\leq a\mcl~.
$$
We will construct our matrix $\lmat$ so that for all $a\in[\mcl]$ there exists an $N$-component vector $\wv{a}$ such that for all $i\in\xs{a}$ the $i$-th row of $\lmat$ is equal to $\wv{a}$. Note that $\lmat$ will then be $\mcl$-user clustered as required. We set our time horizon $T:=NE$. Our user sequence is defined as follows. For all $i\in[M]$ we have that $i_t:=i$ for all $t\in[T]$ with
$$
(i-1)N/\mcl<t\leq iN/\mcl~.
$$
For all $a\in[\mcl]$, let $\zs{a}$ be the set of trials $t\in[T]$ with $i_t\in\xs{a}$, noting that $|\zs{a}|=N$ and the trials in $\zs{a}$ come directly before those of $\zs{a+1}$.
We now turn to the construction of the vectors $\{\wv{a}~|~a\in[\mcl]\}$. To do so, we will construct, in order, a sequence of sets $\{\ys{a}~|~a\in[\mcl]\}$ where, for all $a,a'\in[\mcl]$ with $a'\neq a$, we have $\ys{a}\subseteq[N]$ and $|\ys{a}|=N/\mcl$ and $\ys{a'}\cap\ys{a}=\emptyset$.
For all $a\in[\mcl]$ the vector $\wv{a}$ is defined so that for all $j\in[N]$ we have
$$
\wc{a}{j}:=\indi{j\in\ys{a}}~.
$$
Suppose we have constructed $\ys{a'}$ for all $a'$ less than some $a\in[\mcl]$. Take an arbitrary set $\ysp\subseteq[N]$ with $|\ysp|=N/\mcl$ and $\ysp\cap\ys{a'}=\emptyset$ for all $a'\in[a-1]$. Suppose that $\ys{a}$ is set equal to $\ysp$ and the learning algorithm is run. Given some trial $t\in\zs{a}$, let $\wst{t}$ be the set of all items $j\in\ysp$ such that there exists a trial $t'\in\zs{a}$ with $t'<t$ and $j_{t'}=j$. Let $\s$ be the first trial in $\zs{a}$ in which
$$
|\wst{\s}|=N/(4\mcl)
$$
(or $\max\zs{a}$ if no such $\s$ exists), and let $\vs$ be the set of all $t\in\zs{a}$ with $t<\s$.
The only trials $t\in\vs$ in which the algorithm (with knowledge of $\ys{a'}$ for all $a'\in[a-1]$) can be assured of not making a mistake are contained in the set of trials $t'\in\vs$ such that there exists an item $j\in\wst{\s}$ that has not been recommended to $i_{t'}$ before trial $t'$. Since $|\wst{\s}|\leq N/(4\mcl)$ and there are only $\mcl$ users $i$ for which $i_t=i$ for some $t\in\zs{a}$, there are at most $N/4$ trials in $\vs$ in which the algorithm is assured of not making a mistake. Similarly, there are at most $N/4$ trials in $\vs$ in which no mistakes are made. As we shall see, $N/4$ is small enough that we can ignore such trials. On all other trials in $\vs$ the algorithm must search for an item in $\ysp$. Since $\ysp$ is an arbitrary subset, of cardinality $N/\mcl$, of a set of $N-(a-1)N/\mcl$ elements, we can choose $\ysp$ in such a way that there are, in expectation,
$$
\Omega\Bigl(|\wst{\s}|(N-(a-1)N/\mcl)/(N/\mcl)\Bigl)=\Omega(N(\mcl-a)/\mcl)
$$
such trials in which mistakes are made. This is because (from above) there are at most $N/4$ trials in $\vs$ in which mistakes are not made, and there exists a constant $\oc$ such that
$$
\oc N(\mcl-a)/\mcl+N/4 \leq N~,
$$
$N$ being the number of trials in $\zs{a}$.
Summing the above mistake lower-bounds over all $a\in[\mcl]$ gives us a total mistake bound of $\Omega(N\mcl)$. Since, for all $a\in[\mcl]$, we have $|\ys{a}|=N/\mcl$, and each user is queried $N/\mcl$ times, an omniscient oracle would make no mistakes so the expected regret is equal to the expected number of mistakes which is $\Omega(N\mcl)$ as claimed. Moreover, for any item $j\in[N]$, any $a\in[\mcl]$ and any user $i\in\xs{a}$ we have $\lrel{i}{j}=\indi{j\in\ys{a}}$ so since the sets $\{\ys{a}~|~a\in[\mcl]\}$ partition $[N]$ we have that $\lmat$ is $\mcl$-item clustered as required.
\end{proof}
\subsection{Proof of Theorem \ref{t:orcastar}}
\begin{proof}
We will first analyze \uie\ and then show how to modify the analysis for \ue.
Recall that for all users $i\in[M]$ the values $\nqu{i}$ and $\nli{i}$ are the number of times that user $i$ is queried and the number of items that user $i$ likes, respectively. We can assume without loss of generality that $\nqu{i}\leq\nli{i}$ for all users $i\in\na{M}$, so that the regret is the number of mistakes. This is because if, on some trial $t$, there is no item that $i_t$ likes and has not been recommended to $i_t$ so far, then on such a trial the omniscient oracle (when suggesting liked items before disliked items) would incur a mistake. Note that this assumption means that on every trial $t$ there exists an item $j$ that $i_t$ likes and has not been recommended to $i_t$ so far. This assumption also entails that the regret of \algnn\, is equal to its number of mistakes.
Let $\fnd$ be the set of trials $t$ in which Line \ref{ntp3} of the pseudocode in Algorithm \ref{a:orcastar} is invoked and $\lrel{i_t}{j_t}=1$. Let $\fnl$ be the set of trials $t\in\fnd$ in which $\coin=1$ on trial $t$. Note that on each trial in $\fnl$ a level is created, and hence $|\fnl|=\pfin$, where $\pfin$ is the value of $\lel$ on the final trial $\ntr$.
We will now bound the expected number of mistakes in terms of the cardinality of the above sets. To do this, we consider a trial $t$ in which a mistake is made. We have the following possibilities on trial $t$:
\begin{itemize}
\item The condition in Line \ref{ntp-1} of Algorithm \ref{a:orcastar} is true. In this case $j_t\in\exi$. For all $j\in\exi$ we have that $j$ was added to $\exi$ on some trial in $\fnd$, and we know that the number of rounds $t$ in which $j_t=j$ is bounded from above by $M$. Hence there can be at most $M|\fnd|$ such trials $t$.
\item The condition in Line \ref{ntp0} holds. In this case $i_t\in\exc$. For all $i\in\exc$ we have that $i$ was added to $\exc$ on some trial in $\fnd$, and we know that the number of trials $t$ in which $i_t=i$ is bounded from above by $N$. Hence there can be at most $N|\fnd|$ such trials $t$.
\item The condition in Line \ref{ntp1} holds. Let $j:=j_t$ and let $\ell$ be the value of $\ell_{i_t}$ on trial $t$. We must have that $j_t\in\zi{\ell}$, and hence that
$$
\nnul{\cpo}{\itt{\tim}}\leq 2\icr
$$
at the start of trial $t$. But $\nnul{\cpo}{\itt{\tim}}$ is increased by one on such a trial, which means there can be no more than $2\icr$ trials $t$ with $j_t=j$ and $\ell_{i_t}=\ell$. Note that each level $\ell\in[\pfin]$ is created on a trial in $\fnl$ so there are at most
$$
2\icr N|\fnl|
$$
mistakes made on trials in which Line \ref{ntp1} applies.
\item The condition in Line \ref{ntp2} is true. For every level $\ell\in[\pfin]$ and every user $i\in[M]$ there is at most one such trial $t$ with $i_t=i$ and $\ell_{i}=\ell$ (on trial $t$). Hence there are no more than $M\pfin$ such trials in total. Note that each level $\ell\in[\pfin]$ is created on a trial in $\fnl$ so there are at most
$$
M|\fnl|
$$
mistakes made on trials in which Line \ref{ntp2} applies.
\item The condition in Line \ref{ntp3} holds. Suppose $t'$ is a trial in which the condition in Line \ref{ntp3} is true but $\lrel{i_{t'}}{j_{t'}}=1$. This means that $t'\in\fnd$, so there cannot be more than $|\fnd|$ such trials $t'$. But given an arbitrary trial $t'$ in which that condition holds, the probability that $\lrel{i_{t'}}{j_{t'}}=1$ is at least $1/N$ (since $\nqu{i_{t'}}\leq\nli{i_{t'}}$). This implies that there are, in expectation, at most
$$
N\expt{|\fnd|}
$$
trials in which Line \ref{ntp3} applies and a mistake is made.
\end{itemize}
Putting everything together, we have so far shown that:
\be
\expt{\reg} = \bo{(M+\icr N)\expt{|\fnl|}+(M+N)\expt{|\fnd|}}~.
\ee
Recall that given a user $i\in[M]$, its perturbation level $\ucp{i}$ is the number of items $j\in[N]$ for which $\lrel{i}{j}\neq\lrep{i}{j}$.
Given a trial $t\in\na{T}$, let $\rem{t}$ be the number of items that user $i_t$ likes and have not been recommended to them so far. Let $\fnb$ be the set of trials $t\in\fnd$ with $\lrep{i_t}{j_t}=0$ and $\rem{t}>2\ucp{i_t}$, and $\fnc$ be the set of trials $t\in\fnd$ with $\lrep{i_t}{j_t}=0$ and $\rem{t}\leq2\ucp{i_t}$.
Let $\gi$ be the set of items which are \emp{good} (i.e., not bad). We call a non-empty set $\clus\subseteq\gi$ a \emp{cluster} if and only if for all pairs of items $j,j'\in\clus$ we have that the $j$-th and $j'$-th columns of $\lmap$ are identical and, in addition, for all items $j''\in\gi\setminus\clus$, the $j$-th and $j''$-th columns of $\lmap$ differ. Note that there are no more than $D$ clusters. Given a cluster $\clus$, we define $\cls$ to be the set of all $t\in\fnd$ with $j_t\in\clus$ and such that $i_t$ is \emp{good} (that is, not bad).
Let us now focus on a specific cluster $\clus$ and define
$$
\fst:=\min(\cls\cap\fnl)~,
$$
with the convention that the minimum of the empty set is $\infty$. Let $\fsl$ be the level created on trial $\fst$. We then partition $\cls$ into the following sets:
\begin{itemize}
\item $\cl{1}$ is the set of all $t\in\cls$ with $t<\fst$;
\item $\cl{2}$ is the set of all $t\in\cls$ with $t\notin\fnb\cup\fnc\cup\fnl$ and $t\geq\fst$;
\item $\cl{3}$ is the set of all $t\in\cls\cap\fnl$ with $t\notin\fnb\cup\fnc$ (noting this implies $t\geq\fst$);
\item $\cl{4}$ is the set of all $t\in\cls\cap\fnb$ with $t\geq\fst$;
\item $\cl{5}$ is the set of all $t\in\cls\cap\fnc$ with $t\geq\fst$.
\end{itemize}
We will next analyze how much each of these sets contributes to the above regret bound.
Every $t\in\fnd$ has a $1/\icr$ probability of being in $\fnl$, which implies that the expected cardinality of $\cl{1}$ is at most $\icr$. Since each element of $\cl{1}$ is not in $\fnl$, it contributes $\bo{M+N}$ to the regret bound. Hence, the overall contribution of $\cl{1}$ to the regret bound is in expectation equal to
$$
\bo{\icr (M+N)}~.
$$
We will now show that for all $j\in\clus$ we always have
$$
\nnul{\fsl}{j}\leq2\icr~.
$$
To see this, take such a $j$ and suppose we have a round $t\in[T]$ in which $\nnul{\fsl}{j}$ is incremented. Note that on such a $t$ we necessarily have $\lrel{i_t}{\lir{\fsl}}=1$ and $\lrel{i_t}{j}=\lrel{i_t}{j_t}=0$. We have the following two possibilities:
\begin{itemize}
\item If $\lrep{i_t}{\lir{\fsl}}=0$ then since $\lrel{i_t}{\lir{\fsl}}=1$ we have $\lrep{i_t}{\lir{\fsl}}\neq\lrel{i_t}{\lir{\fsl}}$. Since $\lir{\fsl}=j_{\fst}$ and $\fst\in\cls$ we have that $\lir{\fsl}\in\clus$ so $\lir{\fsl}$ is good, and hence there can be no more than $\icr$ such trials.
\item If $\lrep{i_t}{\lir{\fsl}}=1$ then since $\lir{\fsl}=j_{\fst}\in\clus$ and $j\in\clus$ we have $\lrep{i_t}{j}=\lrep{i_t}{\lir{\fsl}}=1$. Since $\lrel{i_t}{j}=0$ we then have $\lrel{i_t}{j}\neq\lrep{i_t}{j}$. So, since $j$ is good, there can be no more than $\icr$ such trials.
\end{itemize}
This has proven our claim that for all $j\in\clus$ the inequality $\nnul{\fsl}{j}\leq2\icr$ holds deterministically.
We now analyze the cardinality of $\cl{2}$. To do this consider some arbitrary $t\in\cl{2}$. Since $t\in\cls$ we have $j_t\in\clus$. Since $t>\fst$ and $j_t$ was not recommended to $i_t$ before trial $t$ we must have that either $\lrel{i_t}{\lir{\fsl}}=0$ or $\nnul{\fsl}{j_t}>2\icr$ (at some point). But $j_t\in\clus$ so, by above, $\nnul{\fsl}{j_t}\leq2\icr$ always holds so we must have $\lrel{i_t}{\lir{\fsl}}=0$. As $t\notin\fnb\cup\fnc$ we have $\lrep{i_t}{j_t}=1$ so since $j_t,\lir{\fsl}\in\clus$ we have $\lrep{i_t}{\lir{\fsl}}=1$. This implies $\lrel{i_t}{\lir{\fsl}}\neq\lrep{i_t}{\lir{\fsl}}$ and hence there can be at most $\icr$ possible values of $i_t$.
We have just shown that the cardinality of $\{i_{t}~|~t\in\cl{2}\}$ is at most $\icr$. Now note that on any $t\in\cl{2}$ we have $t\notin\fnl$, so $i_t$ is added to $\exc$ and hence cannot be equal to $i_{t'}$ for any future trial $t'\in\cl{2}$ with $t'>t$. Hence, for each $t\in\cl{2}$ we have that $i_t$ is unique, so the cardinality of $\cl{2}$ is equal to that of $\{i_{t}~|~t\in\cl{2}\}$, which is at most $\icr$. Since each $t\in\cl{2}$ is in $\fnd$ but not $\fnl$ it contributes $\bo{M+N}$ to the regret, so that $\cl{2}$ contributes
$$
\bo{\icr (M+N)}
$$
to \algnn's regret bound.
Suppose we have a trial $t\in(\cl{2}\cup\cl{3})\setminus\{\fst\}$. If $\coin=0$ on trial $t$ then $t\in\cl{2}$, while if $\coin=1$ we have $t\in\cl{3}$. Since the probability that $\coin=1$ is $1/\icr$ and
$|\cl{2}|\leq\icr$ we have
$$
|\cl{3}|=\bo{1+|\cl{2}|/\icr}=\bo{1}
$$
in expectation. Since each trial $t\in\cl{3}$ contributes $\bo{M+\icr N}$ to the regret bound, this allows us to conclude that in expectation $\cl{3}$ contributes
$$
\bo{M+\icr N}
$$
to \algnn's regret bound.
We now argue that we can exclude the contributions of $\fnb$ (and hence also $\cl{4}$) to the regret bound. To this effect, suppose we have some $t\in\fnd$ with $\rem{t}>2\ucp{i_t}$. Note that $j_t$ is drawn uniformly at random from the items not yet recommended to $i_t$ so far, and $\lrel{i_t}{j_t}=1$. This implies that each of the $\rem{t}$ items $j$ that user $i_t$ likes and that have not been recommended to $i_t$ so far has a $1/\rem{t}$ probability of being $j_t$. Since at most $\ucp{i_t}$ of these items $j$ satisfy $\lrep{i_t}{j}=0$, we have that there is at most a
$$
\ucp{i_t}/\rem{t}<1/2
$$
probability that $\lrep{i_t}{j_t}=0$ (so that $t\in\fnb$). Hence, $|\fnb|$ affects the regret bound by a constant factor only, so that we can exclude the contribution of $\cl{4}$.
Finally, suppose we have some $t\in\cl{5}$. Note that $t\in\fnc$ which implies that $i_t$ is bad. This means that $t\notin\cls$ which leads to a contradiction. The set $\cl{5}$ is therefore empty, hence it does not contribute to the regret.
Hence, the set $\cls$ contributes $\bo{\icr(M+N)}$ to the regret. Since the union of $\cls$ over all clusters $\clus$ is equal to the set of all $t\in\fnd$ such that $i_t$ and $j_t$ are both good, we have shown that the total expected regret is bounded by $\bo{D\icr(M+N)}$ plus the contribution (to the regret) of all $t\in\fnd$ in which either $i_t$ or $j_t$ is bad. Hence, we now analyze the contribution to the regret of the latter kinds of rounds.
Consider some bad user $i$ or bad item $j$, and let $\fbd$ be the set of trials $t\in\fnd$ with $i_t=i$ or $j_t=j$, respectively. Note that given $t\in\fbd$ with $t\notin\fnl$, in trial $t$ it must happen that both $i_t$ is added to $\exc$ and $j_t$ is added to $\exi$. This means that there can be no further trials in $\fbd$, hence there is at most one trial in $\fbd\setminus\fnl$. This also implies that whenever we encounter a round $t\in\fbd$, there is a $1-1/\icr$ probability that there are no further trials in $\fbd$ and a $1/\icr$ probability that $t\in\fnl$. This implies that the expected number of trials in $\fbd\cap\fnl$ is bounded from above by
$$
\sum_{a\in\nat}1/\icr^a = \frac{\psi}{\psi-1}-1\leq 2/\icr~,
$$
where the inequality uses the condition $\icr\geq 2$. Since trials in $\fbd\setminus\fnl$ contribute $\bo{M+N}$ to the regret, and trials in $\fbd\cap\fnl$ contribute $\bo{M+\icr N}$, this shows that in expectation $\fbd$ contributes overall
$$
\bo{M+N}
$$
to the regret.
Since there are $m$ bad users and $n$ bad items, the above shows that the set of trials $t\in\fnd$ such that $i_t$ or $j_t$ is bad contributes $\bo{(m+n)(M+N)}$ to the regret. Hence,
$$
\expt{\reg}=\bo{(D\icr+m+n)(M+N)}~,
$$
as claimed.
We now single out the analysis changes for \ue. First note that we can, without loss of generality, assume there are no bad items. This is because for any bad item $j$ we can modify $\lmap$ so that its $j$-th column is equal to that of $\lmat$ noting that $\lmap$ becomes $(D+n)$-item clustered and there are now no bad items.
Now observe that, since no items are ever added to $\exi$, the condition in Line \ref{ntp-1} of Algorithm \ref{a:orcastar} is never true, so our regret is now
$$
\expt{\reg} = \bo{(M+\icr N)|\fnl|+N|\fnd|}
$$
so trials in $\fnd\setminus\fnl$ now only contribute $\bo{N}$ instead of $\bo{M+N}$. Since there are no bad items (so $n=0$ and we can ignore in the analysis the fact that items are never added to $\exi$) this change leads to a regret bound of the form
$$
\expt{\reg}=\bo{(D+m/\icr)(M+N\icr)}~,
$$
as claimed.
\end{proof}
\end{document}
\begin{document}
\title{A generalized wave-particle duality relation for finite groups}
\author{Emilio Bagan$^{1,3}$, John Calsamiglia$^{3}$, J\'anos A. Bergou$^{1,2}$, and Mark Hillery$^{1,2}$}
\affiliation{$^{1}$Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, NY 10065 USA \\
$^{2}$Graduate Center of the City University of New York, 365 Fifth Avenue, New York, NY 10016 \\
$^{3}$F\'{i}sica Te\`{o}rica: Informaci\'{o} i Fen\`{o}mens Qu\`antics, Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, 08193 Bellaterra (Barcelona), Spain}
\begin{abstract}
Wave-particle duality relations express the fact that knowledge about the path a particle took suppresses information about its wave-like properties, in particular, its ability to generate an interference pattern. Recently, duality relations in which the wave-like properties are quantified by using measures of quantum coherence have been proposed. Quantum coherence can be generalized to a property called group asymmetry. Here we derive a generalized duality relation involving group asymmetry, which is closely related to the success probability of discriminating between the actions of the elements of a group. The second quantity in the duality relation, the one generalizing which-path information, is related to information about the irreducible representations that make up the group representation.
\end{abstract}
\maketitle
\section{Introduction}
Resource theories, in particular the resource theory of coherence, have been an area of considerable recent activity. In a resource theory, one has a set of free states, which do not possess the resource, and free operations that do not create the resource. In addition, there is a measure of the extent to which a state that is not a free state does possess the resource. The first such theory was that of entanglement. In that case, the free states are the separable states. In the case of coherence, one specifies a basis, and the free states are those that are diagonal in that basis \cite{baumgratz}.
The resource theory of coherence is an example of a broader class of resource theories that are characterized by asymmetry under a group of transformations \cite{gour,marvian}. One starts with a group, $G$, and a unitary representation of the group, $U(g)$ for $g\in G$ acting on a Hilbert space~$\mathcal{H}$. States, $\rho$, that are invariant under the action of the group, i.e., $U(g)\rho\, U^{\dagger}(g) = \rho$ for all $g\in G$, constitute the free states, and the free operations are those that satisfy $\mathcal{E}[U(g)\rho\, U^{\dagger}(g)]=U(g)\mathcal{E}(\rho)U^{\dagger}(g)$ for all $g\in G$ and all $\rho$, where $\mathcal{E}$ is a completely positive, trace-preserving map. Maps with this property are called G-covariant \cite{marvian}. States for which ${\mathcal U}_g(\rho):=U(g)\rho\, U^{\dagger}(g) \neq \rho$ for at least one $g\in G$ are said to possess asymmetry. The resource theory of coherence results when the group is taken to be a cyclic group.
A useful measure of asymmetry is the robustness of asymmetry \cite{piani1,piani2}. For a given state $\rho$, it is given by
\begin{equation}
\mathcal{A}_{\mathcal{R}}(\rho )= \min_{\tau\in\mathcal{D}(\mathcal{H})} \left\{ s\geq 0 \left| \frac{\rho+s\tau}{1+s} \in \mathcal{S}\right. \right\} ,
\end{equation}
where $\mathcal{D}(\mathcal{H})$ is the set of density matrices on $\mathcal{H}$ and $\mathcal{S}$ is the set of free states. It has the following useful property. If one is trying to discriminate among the states $U(g)\rho\, U^{\dagger}(g)$ for $g\in G$, and each of the states is equally probable, the robustness of asymmetry of $\rho$ is closely related to the optimal minimum-error probability of successfully discriminating among the states, $P_{\rm s}(\rho )$. In particular, we have that \cite{piani2}
\begin{equation}
P_{\rm s}(\rho ) = \frac{1+ \mathcal{A}_{\mathcal{R}}(\rho ) }{|G|},
\end{equation}
where $|G|$ is the number of elements in $G$. This relation suggests that in this scenario, i.e., discriminating among the equally probable states $U(g)\rho\, U^{\dagger}(g)$, $P_{\rm s}(\rho )$ itself is a good measure of asymmetry. It has a clear operational interpretation. It tells us how good a state is for discriminating the quantum channels ${\mathcal U}_g$. In channel discrimination, one sends an input state into a channel and then discriminates, as best one can, among the output states~\cite{dariano1,chiribella}. In general, the input states can be in a Hilbert space that is larger than the one the channel acts on, but we will only consider states in the carrier space for the representation~$U(g)$. If $P_{\rm s}(\rho )$ is small, then the state~$\rho$ is a poor input state to use for channel discrimination, which means that its asymmetry must be small, too. If $P_{\rm s}(\rho )$ is close to one, then it is a good input state and also very asymmetric. It is also the case that $P_{\rm s}(\rho )$ has some additional properties that are desirable for a measure of asymmetry. It decreases under G-covariant quantum maps and it is convex. These properties follow from those of the robustness of asymmetry, but, for completeness, short proofs are provided in Appendix~\ref{App A}.
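A simple limiting case may help orient the reader: for a free (symmetric) state, which we denote by $\sigma$, the states $U(g)\sigma\, U^{\dagger}(g)$ to be discriminated are all identical, so the best strategy is random guessing and
\begin{equation}
P_{\rm s}(\sigma )=\frac{1}{|G|} , \qquad \mathcal{A}_{\mathcal{R}}(\sigma )=0 ,
\end{equation}
consistent with the relation above, since the robustness of asymmetry vanishes for free states.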
In a wave-particle duality experiment, a particle goes through an interferometer, and there are detectors that provide some information about which path the particle took. There is a tradeoff, expressed by the duality relation, between how much information one has about the path and the visibility of the interference pattern produced by the particle \cite{wootters,greenberger,jaeger,englert,durr,bimonte,englert2,jakob,englert3,angelo}. The higher the visibility, the easier it is to discriminate among different phases imprinted on the particle state by, e.g., phase-shift plates placed in the paths. In the case considered here, the paths, or rather the orthogonal one-dimensional subspaces that represent~them, are replaced by the invariant subspaces that carry the irreducible representations, and the phases by the channels ${\mathcal U}_g$. If one tags these subspaces with ancillary states, which can be thought of as detector states and are not, in general, orthogonal, we find that the probability of discriminating the tagging states places a limit on the probability of discriminating the channels~${\mathcal U}_g$, for $g\in G$.
The tagging states, then, affect the asymmetry of the input state by affecting the coherence between the different subspaces. This duality notion for asymmetry could be implemented, e.g., by an optical network such as that in Fig.~\ref{f-1}, where detectors tell which of several parts of the network a photon has gone through, the parts corresponding to the subspaces. The tags are then the states of the detectors. Our complementarity relation can be viewed as providing a tradeoff between being able to identify which network we have, expressed by $U(g)$ [in the figure $U(g)$ is the direct sum of three irreducible representations $\Gamma_1(g)$, $\Gamma_2(g)$, $\Gamma_3(g)$, of dimension~1, 1 and 2, respectively], and knowing which part of the network the photon went through.
\begin{figure}
\caption{Schematic representation of a network for the second example in Sec.~\ref{sec examples}.}
\label{f-1}
\end{figure}
The paper is organized as follows. In Section~\ref{sec 2} we present and discuss a formula for $P_{\rm s}(\rho )$ in the case that~$\rho$ is a pure state and there are no repeated irreducible representations.
In Sec.~\ref{sec 3} we derive a duality relation in the simplest case where irreducible subspaces have the same probability, i.e., the particle can be found in each part of the network with equal probability. The general case, including also that of irreducible representations with multiplicity greater than one and the possible use of entanglement with an idler particle, is left for a separate publication~\cite{us next}.
\section{A simple expression for the success probability}
\label{sec 2}
The discrimination of states generated by the action of a representation of a group acting on a single state, i.e., the states $\{U(g)|\phi\rangle\, |\, g\in G\}$, has been studied by a number of people. The case of cyclic groups was treated by Ban \emph{et al.}\ and this was extended to abelian groups by Eldar and Forney \cite{ban,eldar}. The problem for general groups has been studied by Chiribella \emph{et al.} \cite{chiribella1,chiribella2} and Krovi \emph{et al.} \cite{krovi}. We shall make use of a formula for the probability of successfully discriminating among the states $\{U(g)|\phi\rangle\, |\, g\in G\}$ with minimum error that was obtained in~\cite{krovi}. For completeness, we present a proof in Appendix~\ref{App B}.
Suppose that when the representation $U(g)$ is expressed as a direct sum of irreducible representations, each irreducible representation appears at most once. For any state $|\phi\rangle$, we then have
\begin{equation}
\label{Ps}
P_{\rm s}(|\phi\rangle\langle\phi |) = \left( \sum_{p} \sqrt{\frac{d_{p}}{|G|}} \|\phi_{p}\| \right)^{2} .
\end{equation}
Here the sum is over the irreducible representations that appear in $U(g)$, $d_{p}$ is the dimension of the p$^{\rm th}$ irreducible representation, and $|\phi_{p}\rangle$ is the component of $|\phi\rangle$ in the subspace ${\mathcal H}_p$ that carries the p$^{\rm th}$ irreducible representation. Note that this relation plus the convexity property of~$P_{\rm s}$ (see Appendix~\ref{App A}) can be used to find an upper bound on $P_{\rm s}$ for mixed states.
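As a simple illustration of Eq.~(\ref{Ps}), if $|\phi\rangle$ lies entirely within a single invariant subspace ${\mathcal H}_p$, then $\|\phi_{p}\|=1$ and
\begin{equation}
P_{\rm s}(|\phi\rangle\langle\phi |)=\frac{d_{p}}{|G|} ,
\end{equation}
so a state carried by a one-dimensional irreducible representation allows nothing better than random guessing among the $|G|$ channels.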
We can use the above expression to find the best pure state to discriminate the channels $U(g)$ by maximizing the right-hand side. The Schwarz inequality and the fact that $\sum_{p} \|\phi_{p}\|^{2}=1$ imply that
\begin{equation}
P_{\rm s}(|\phi\rangle\langle\phi |) \leq \frac{1}{|G|} \sum_{p} d_{p} ,
\label{Ps coh}
\end{equation}
and that this bound is achieved when
\begin{equation}
\|\phi_{p}\| = \left(\frac{d_{p}}{\sum_{p^{\prime}}d_{p^{\prime}} } \right)^{\kern-.3em1/2} .
\end{equation}
To attain the success probability in Eq.~\eqref{Ps coh}, coherence among the various irreducible subspaces is required. If no such coherence exists, the maximum success probability is given by Eq.~(\ref{D}) below.
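For a concrete illustration, consider a representation containing irreducible representations of dimensions $d_{1}=d_{2}=1$ and $d_{3}=2$, with $|G|=6$ (the situation encountered in Sec.~\ref{sec examples}). The bound in Eq.~\eqref{Ps coh} then reads
\begin{equation}
P_{\rm s}(|\phi\rangle\langle\phi |) \leq \frac{1+1+2}{6}=\frac{2}{3} ,
\end{equation}
and it is attained by a state with $\|\phi_{1}\|=\|\phi_{2}\|=1/2$ and $\|\phi_{3}\|=1/\sqrt{2}$.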
\section{A duality relation}
\label{sec 3}
There are duality relations that limit one's ability to both know which path a particle took and to produce an interference pattern with that particle. More recently, duality relations originating from entropic uncertainty relations~\cite{coles1,coles2} or incorporating coherence measures have been derived~\cite{pati,bagan}. As we have mentioned and will show in Sec.~\ref{sec examples}, the $l_{1}$ coherence measure is closely related to the optimal success probability of discriminating among states generated by the action of a cyclic group. This suggests that it should be possible to find a duality relation for more general groups.
Let us consider a representation $U(g)=\bigoplus_{p=1}^{N} \Gamma_{p}(g) $, where each irreducible representation appears at most once, and a pure state of the system as input, given by
\begin{equation}
|\psi\rangle_{\rm S}=\frac{1}{\sqrt{N}}\sum_{p=1}^N |u_p\rangle_{\rm S},
\label{psi & u's}
\end{equation}
where $|u_{p}\rangle_{\rm S}$ is a normalized state in the subspace ${\mathcal H}_p$ corresponding to $\Gamma_{p}$. We use an ancillary system to tag the~$N$ subspaces by applying a controlled-unitary gate to system plus ancilla, the latter having been prepared in an initial state $|\eta_0\rangle_{\rm A}$. If $\openone_{p}$ is the projector onto the invariant subspace ${\mathcal H}_p$, then the gate has the form \mbox{$\sum_{p=1}^N\openone_p\otimes V_p$}, where the unitaries $\{V_p\}_{p=1}^N$ acting on the ancillary system are such that $V_p|\eta_0\rangle_{\rm A}=|\eta_p\rangle_{\rm A}$. The resulting state in ${\mathcal H}={\mathcal H}_{\rm S}\otimes{\mathcal H}_{\rm A}$, ${\mathcal H}_{\rm S}=\oplus_{p=1}^N{\mathcal H}_p$, is
\begin{equation}
|\Psi\rangle= \frac{1}{\sqrt{N}}\sum_{p=1}^{N} |u_{p}\rangle_{\rm S} |\eta_{p}\rangle_{\rm A} .
\label{Psi}
\end{equation}
To simplify the notation, we will drop the indexes ${\rm S}$ (system) and ${\rm A}$ (ancilla) wherever no confusion may arise. The ancillary states $\{ |\eta_{p}\rangle\}_{p=1}^N$ are normalized but, in general, not orthogonal. If the channel ${\mathcal U}_g$ is applied, the state becomes
\begin{equation}
|\Psi_g\rangle=\frac{1}{\sqrt N}\sum_{p=1}^N [U(g)|u_p\rangle] |\eta_p\rangle.
\end{equation}
We note that the tagging and the channel application commute, so tagging after the channel application would lead to the same result. Let
\begin{eqnarray}
\rho_g& = & {\rm Tr}_{\rm A}\left(|\Psi_g\rangle\langle\Psi_g |\right) \nonumber \\
& = & \frac{1}{N} \sum_{p,p^{\prime}=1}^{N}\!U(g)|u_{p}\rangle \langle u_{p^{\prime}}|U^{\dagger}(g) \,\langle\eta_{p^{\prime}}|\eta_{p}\rangle \nonumber \\
& = & U(g)\rho_{e}U^{\dagger}(g) ,
\label{rho_g}
\end{eqnarray}
where $\rho_{e}$ corresponds to the identity element of the group. We want to find a relation between our ability to discriminate the states $\{ \rho_{g}\}_{g\in G}$ and our ability to discriminate the states $\{ |\eta_{p}\rangle\}_{p=1}^N$.
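Before doing so, it is useful to record the extreme case of orthonormal tags: setting $\langle\eta_{p^{\prime}}|\eta_{p}\rangle=\delta_{pp^{\prime}}$ in Eq.~(\ref{rho_g}) gives
\begin{equation}
\rho_{e}=\frac{1}{N}\sum_{p=1}^{N}|u_{p}\rangle\langle u_{p}| ,
\end{equation}
so that all coherence between different invariant subspaces is erased; at the opposite extreme, if all the tags are identical, $\rho_{e}$ remains the pure state $|\psi\rangle\langle\psi|$.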
With no loss of generality, we can discriminate the states $\{\rho_{g}\}_{g\in G}$ with a covariant POVM $\{\Pi_g\}_{g\in G}$, where $\Pi_{g}=U(g)\Pi_{e}U^{\dagger}(g)$, and $\Pi_{e}$ is the POVM element corresponding to the identity element of the group, $e\in G$. This implies that our probability of successfully discriminating the channels ${\mathcal U}_g$ with the input state $|\psi\rangle$ is
\begin{equation}
P_{{\mathcal U}_g}:=\frac{1}{|G|} \sum_{g\in G}{\rm Tr}(\Pi_{g}\rho_{g}) = {\rm Tr}(\Pi_{e}\rho_{e}) .
\end{equation}
Now, using that $\Pi_e\ge 0$,
\begin{eqnarray}
\hspace{-1.5em}{\rm Tr}(\Pi_{e}\rho_{e})\! & = &\! \frac{1}{N}\!\!\sum_{p,p^{\prime}=1}^{N}\!\!\langle u_{p^{\prime}}|\Pi_{e}|u_{p}\rangle\langle \eta_{p^{\prime}}|\eta_{p}\rangle \nonumber \\
\!& \leq &\! \frac{1}{N}\!\!\!\sum_{p,p^{\prime}=1}^{N}\!\!\!\!\sqrt{\!\langle u_{p^{\prime}}\!|\Pi_{e}|u_{p^{\prime}}\!\rangle\!\langle u_{p}|\Pi_{e}|u_{p}\rangle} |\langle \eta_{p^{\prime}}|\eta_{p}\rangle| .
\end{eqnarray}
From Appendix B, we have that
\begin{equation}
\langle u_{p}|\Pi_{e}|u_{p}\rangle \leq \frac{d_{p}}{|G|} ,
\end{equation}
so that
\begin{equation}
\label{pi-rho}
P_{{\mathcal U}_g}=
{\rm Tr}(\Pi_{e}\rho_{e}) \leq \frac{1}{N|G|} \sum_{p,p^{\prime}=1}^{N} \sqrt{d_{p}d_{p^{\prime}}}
|\langle \eta_{p^{\prime}}|\eta_{p}\rangle| .
\end{equation}
This inequality can be satisfied as an equality in some circumstances. Let us choose $\Pi_{e}=|X\rangle\langle X|$, where $|X\rangle = \sum_{p=1}^{N} |X_{p}\rangle$, and $|X_{p}\rangle = e^{i\theta_{p}}\sqrt{d_{p}/|G|}|u_{p}\rangle$ is the component of $|X\rangle$ in the carrier space of $\Gamma_{p}$. This will satisfy $\sum_{g\in G} \Pi_{g} = \openone_{\rm S}$ (see Appendix~\ref{App B}), and we then have that
\begin{equation}
{\rm Tr}(\Pi_{e}\rho_{e}) = \frac{1}{N|G|} \sum_{p,p^{\prime}=1}^{N} \sqrt{d_{p}d_{p^{\prime}}}
e^{i(\theta_{p^{\prime}}-\theta_{p})} \langle \eta_{p^{\prime}}|\eta_{p}\rangle .
\end{equation}
If the $\{ \theta_{p}\}_{p=1}^N$ can be chosen so that
\begin{equation}
e^{i(\theta_{p^{\prime}}-\theta_{p})} \langle \eta_{p^{\prime}}|\eta_{p}\rangle= | \langle \eta_{p^{\prime}}|\eta_{p}\rangle | ,
\end{equation}
then the inequality in Eq.~(\ref{pi-rho}) will be satisfied as an equality. One case in which this happens is when the vectors $\{ |\eta_{p}\rangle\}_{p=1}^N$ are orthonormal. This implies that
\begin{equation}
D:=P_{{\mathcal U}_g}^{\rm orth}=\frac{1}{N|G|} \sum_{p=1}^{N} d_{p} .
\label{D}
\end{equation}
It follows that in this case $|\Psi\rangle$ is maximally entangled, and thus $\rho_g$ is the least informative state, i.e., the one with minimum asymmetry, among those arising from Eq.~(\ref{Psi}). Hence, $D$ is the minimum value of $P_{{\mathcal U}_g}$. We read off from Eq.~(\ref{D}) that $D=1/|G|$ (random guessing) if all irreducible representations are one-dimensional (as is the case for coherence). We will come back to this below.
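For instance, for a representation made up of irreducible representations of dimensions $1$, $1$ and $2$ with $|G|=6$ (the case analyzed in Sec.~\ref{subsec non-abelian}), Eq.~(\ref{D}) gives
\begin{equation}
D=\frac{1+1+2}{3\cdot 6}=\frac{2}{9}>\frac{1}{6}=\frac{1}{|G|} .
\end{equation}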
Now let us move on to the duality relation. In addition to $P_{{\mathcal U}_g}$, the relation involves the probability that one correctly infers the value of $p$ from the discrimination of the tagging states. In the example of Fig.~\ref{f-1}, this would amount to inferring which part of the network the particle went through. Tracing out the system, we find that the state of the ancilla is
\begin{equation}
\rho_{\rm A}={\rm Tr}_{\rm S}\left(|\Psi_g\rangle\langle\Psi_g|\right)=\frac{1}{N}\sum_{p=1}^N |\eta_p\rangle\langle\eta_p| ,
\end{equation}
independently of the channel ${\mathcal U}_g$ that has been applied.
Clearly, $\rho_{\rm A}$ corresponds to the specific ensemble where each of the states $\{|\eta_p\rangle\}_{p=1}^N$ is equally likely.
The optimal probability~of discriminating them with minimum error, $P_{{\mathcal H}_p}$, satisfies~\cite{bagan},
\begin{equation}
P_{{\mathcal H}_p}-\frac{1}{N} \leq \frac{1}{N^{2}} \sum_{p,p^{\prime}=1}^{N} \sqrt{1-|\langle \eta_{p}|\eta_{p^{\prime}}\rangle |^{2}} .
\label{P_p}
\end{equation}
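As a check of Eq.~(\ref{P_p}), note that for orthonormal tagging states the right-hand side equals $(N^{2}-N)/N^{2}=1-1/N$, and perfect discrimination of the tags indeed saturates the bound,
\begin{equation}
P_{{\mathcal H}_p}-\frac{1}{N}=1-\frac{1}{N} .
\end{equation}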
Let us define the two-component vectors
\begin{equation}
\vecc v_{pp'}\!:=\!\left( \!\frac{1}{N} \sqrt{1\!-\!|\langle \eta_{p}|\eta_{p^{\prime}}\rangle |^{2}},\ \frac{\sqrt{d_{p}d_{p^{\prime}}}}{|G|} |\langle \eta_{p}|\eta_{p^{\prime}}\rangle | \right) ,
\end{equation}
such that
\begin{eqnarray}
\|\vecc v_{pp'}\|^{2} \!& = &\! \frac{1}{N^{2}}\!\! \left[ 1\!+\! \left(\! \frac{N^{2}d_{p}d_{p^{\prime}}}{|G|^{2}}\!-\!1\!\right) |\langle \eta_{p}|\eta_{p^{\prime}}\rangle |^{2}\right] \nonumber \\
\!& \leq &\! \frac{M}{N^{2}} ,
\end{eqnarray}
where $M$ is the maximum over all possible sets $\{|\eta_p\rangle\}_{p=1}^N$ of the term in square brackets. An obvious upper bound for $M$~is
\begin{equation}
M\le \tilde M:=1+ \max_{\two{p,p^{\prime}}{ p\neq p^{\prime}}}\left\{\left( \frac{N^{2}d_{p}d_{p^{\prime}}}{|G|^{2}}-1 \right), 0 \right\} .
\end{equation}
We then have from Eqs.~(\ref{pi-rho}) and~(\ref{P_p}) that
\begin{eqnarray}
\left(P_{{\mathcal U}_g}-D\right)^{2} + \left(P_{{\mathcal H}_p}-\frac{1}{N}\right)^{2} \leq \nonumber \\
\frac{1}{N^{2}} \sum_{\two{p,p^{\prime} =1}{p\neq p^{\prime}}}^{N} \sum_{\two{q,q^{\prime}=1}{q\neq q^{\prime}}}^{N} \vecc v_{pp'}\cdot\vecc v_{qq'} .
\end{eqnarray}
Making use of the Schwarz inequality we find
\begin{equation}
\left(P_{{\mathcal U}_g}-D\right)^{2} + \left(P_{{\mathcal H}_p}-\frac{1}{N}\right)^{2} \leq M\left( 1-\frac{1}{N}\right)^{2} .
\end{equation}
This is the desired duality relation which constitutes the central result of the paper. It places a limit on our ability to tell which channel, ${\mathcal U}_g$, we have and which invariant subspace ${\mathcal H}_p$ the particle went through.
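As a simple consistency check, suppose the tagging states are orthonormal (which is possible whenever the ancilla has dimension at least $N$). Then $P_{{\mathcal U}_g}=D$ and $P_{{\mathcal H}_p}=1$, so the left-hand side reduces to $(1-1/N)^{2}$, and the relation is respected because $M\geq 1$ in this situation:
\begin{equation}
\left(P_{{\mathcal U}_g}-D\right)^{2} + \left(P_{{\mathcal H}_p}-\frac{1}{N}\right)^{2} = \left(1-\frac{1}{N}\right)^{2} \leq M\left(1-\frac{1}{N}\right)^{2} .
\end{equation}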
\section{Examples}
\label{sec examples}
\subsection{Cyclic group}
\label{subsec cyclic}
Let us first look at the case in which $G$ is the cyclic group of order $N$. Let $a$ be the generator of the group, so that $G=\{ a^{n}\}_{\raisebox{.05em}{$\scriptstyle n=0$}}^{N-1}$. We have that~\mbox{$a^{0}=e$}, the identity element, and~$a^{N}=e$. The irreducible representations of $G$ are one-dimensional, and there are~$N$ of them. We shall denote the elements of the~p$^{\rm th}$ irreducible representation by $\Gamma_{p}(a^{n})$. If the state~$|u_p\rangle$ transforms according to the p$^{\rm th}$ irreducible representation, then
\begin{equation}
\Gamma_{p}(a^{n})|u_p\rangle = e^{2\pi i pn/N}|u_p\rangle .
\end{equation}
Now consider the representation
\begin{equation}
U=\bigoplus_{p=0}^{N-1} \Gamma_{p} .
\end{equation}
This is an $N$-dimensional representation of $G$, and its carrier space is spanned by the orthonormal states~$\{ |u_p\rangle\}_{p=0}^{N-1}$.
For a vector $|\phi\rangle$ in the carrier space of $U$, we have, from Eq.\ (\ref{Ps}) that
\begin{equation}
P_{\rm s} (|\phi\rangle\langle\phi |)= \frac{1}{N}\left( \sum_{p=0}^{N-1} |\langle u_p|\phi\rangle |\right)^{2} .
\end{equation}
If we now set $\rho=|\phi\rangle\langle\phi |$ and subtract $1/N$, which is just the probability of guessing which state $U(a^{n})|\phi\rangle$ we have, we obtain
\begin{equation}
P_{\rm s}(\rho)-\frac{1}{N} = \frac{1}{N}\sum_{\two{p,q=0}{p\neq q}}^{N-1}| \langle u_p|\rho |u_q\rangle |
= \frac{1}{N}\sum_{\two{p,q=0}{p\neq q}}^{N-1}| \rho_{pq} |.
\end{equation}
Up to the overall factor of $1/N$, this is just the $l_{1}$ measure of coherence defined in \cite{baumgratz} in the basis $\{u_p\}_{p=0}^{N-1}$. Physically, we can interpret the states~$|u_p\rangle$ as corresponding to different paths in an interferometer and the factors of $\exp (2\pi i pn/N)$ as resulting from phase shifters placed in those paths.
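As the simplest illustration, take $N=2$, so that $U(a)={\rm diag}(1,-1)$ in the basis $\{|u_0\rangle,|u_1\rangle\}$. For the maximally coherent state $|\phi\rangle=(|u_0\rangle+|u_1\rangle)/\sqrt{2}$ we find
\begin{equation}
P_{\rm s}=\frac{1}{2}\left(\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}\right)^{2}=1 ,
\end{equation}
as expected, since $|\phi\rangle$ and $U(a)|\phi\rangle=(|u_0\rangle-|u_1\rangle)/\sqrt{2}$ are orthogonal and hence perfectly distinguishable.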
\subsection{\boldmath Non-Abelian groups: dihedral group $D_3$ or symmetric group~$S_3$}
\label{subsec non-abelian}
Next, let us look at a non-abelian group. A simple non-abelian group is the dihedral group $D_{3}$, which consists of rotations and reflections in the plane that leave an equilateral triangle invariant. It has six elements, $\{ e,r,r^{2}, s, rs, r^{2}s \}$, where $r^{3}=e$ and $s^{2}=e$. The dihedral group $D_3$ is isomorphic to the symmetric group~$S_3$, i.e., the group of permutations of three elements. The mapping is defined by $s\mapsto(12)$, $r\mapsto(123)$.
The group has three conjugacy classes $C_{e}=\{ e\}$, $C_{r}=\{ r,r^{2}\}$, and~$C_{s}=\{ s, rs, r^{2}s \}$. It has three irreducible representations, $\Gamma_p$ for $p=1,2,3$, where $\Gamma_1$ and $\Gamma_2$ are one-dimensional and $\Gamma_3$ is two-dimensional. The character table for the group is given in Table~\ref{t-1} for completeness.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|} \hline
& $C_{e}$ & $C_{r}$ & $C_{s}$ \\ \hline $\Gamma_1$ & $1$ & $1$& $1$ \\ \hline $\Gamma_2$ & $1$ & $1$ & $-1$ \\ \hline $\Gamma_3$ & $2$ & $-1$ & $0$ \\ \hline
\end{tabular}
\caption{\label{t-1}Character table for $D_{3}$.}
\end{table}
The one-dimensional representations are the trivial representation, $\Gamma_1(g)=1$ for all $g\in D_3$, and the so-called sign or alternate representation, defined by $
\Gamma_2(r)=1$ and $\Gamma_2(s)=-1$ for the generators of the group $r$ and $s$.
For the representation $\Gamma_3$, we can take the matrices,
\begin{equation}
\Gamma_3(r)=\left( \begin{array}{cc} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{array} \right),\ \Gamma_3(s)=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) ,
\end{equation}
expressed in the computational basis $\{ |0\rangle ,|1\rangle \}$.
Suppose we have two qubits, which transform according to the representation $\Gamma_3\otimes \Gamma_3$. We find that
\begin{equation}
\Gamma_3\otimes \Gamma_3=\Gamma_1 \oplus \Gamma_2 \oplus \Gamma_3 ,
\label{D3rep}
\end{equation}
where $|v_{1}\rangle = (|00\rangle + |11\rangle )/\sqrt{2}$ transforms as $\Gamma_{1}$, $|v_{2}\rangle = (|01\rangle - |10\rangle )/\sqrt{2}$ transforms as $\Gamma_{2}$, and the subspace that transforms as $\Gamma_{3}$ is spanned by $|v_{3}\rangle = (|00\rangle - |11\rangle )/\sqrt{2}$ and $|v_{4}\rangle = (|01\rangle + |10\rangle )/\sqrt{2}$. For a state $|\phi\rangle$ in the carrier space of $\Gamma_1\oplus\Gamma_2\oplus\Gamma_3$, we have
\begin{equation}
|\phi\rangle = \sum_{j=1}^{4} c_{j}|v_{j}\rangle .
\end{equation}
We can write $|\phi\rangle$ in the form,
\begin{equation}
|\phi\rangle=c_1|u_1\rangle+c_2|u_2\rangle+\sqrt{|c_3|^2+|c_4|^2}|u_3\rangle ,
\end{equation}
where
\begin{equation}
|u_p\rangle:=|v_p\rangle,\ p=1,2;\quad
|u_3\rangle:=\frac{c_3 |v_3\rangle+c_4|v_4\rangle}{\sqrt{|c_3|^2+|c_4|^2}},
\end{equation}
so $\|\phi_p\|=|c_p|$, for $p=1,2$, and $\|\phi_3\|=\sqrt{|c_3|^2+|c_4|^2}$.
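The decomposition in Eq.~(\ref{D3rep}) can also be checked at the level of characters. Denoting by $\chi_{p}$ the character of $\Gamma_{p}$, the character of $\Gamma_3\otimes\Gamma_3$ is $\chi_{3}^{2}$, and Table~\ref{t-1} gives, on the conjugacy classes $(C_{e},C_{r},C_{s})$,
\begin{equation}
\chi_{3}^{2}=(4,1,0)=\chi_{1}+\chi_{2}+\chi_{3} .
\end{equation}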
Using Eq.~(\ref{Ps}) we find
\begin{equation}
P_{\rm s}(\rho) = \frac{1}{6} \left( |c_{1}|+|c_{2}|+\sqrt{2}\,\sqrt{|c_{3}|^{2}+|c_{4}|^{2}}\,\right)^{2} .
\end{equation}
In this case, the maximum value of $P_{\rm s}$ is $2/3$.
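Indeed, writing $t:=\sqrt{|c_{3}|^{2}+|c_{4}|^{2}}$ as a shorthand, the Cauchy-Schwarz inequality together with the normalization $|c_{1}|^{2}+|c_{2}|^{2}+t^{2}=1$ gives
\begin{equation}
\left(|c_{1}|+|c_{2}|+\sqrt{2}\,t\right)^{2}\leq(1+1+2)\left(|c_{1}|^{2}+|c_{2}|^{2}+t^{2}\right)=4 ,
\end{equation}
so that $P_{\rm s}\leq 4/6=2/3$, in agreement with the general bound of Eq.~\eqref{Ps coh}.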
If we apply our duality relation to the representation in Eq.\ (\ref{D3rep}), we find $D=2/9$ and $M=\tilde M=1$ (for $\{|\eta_p\rangle\}_{p=1}^3$ orthogonal), giving us
\begin{equation}
\left( P_{{\mathcal U}_g}-\frac{2}{9}\right)^{2} + \left(P_{{\mathcal H}_p}-\frac{1}{3}\right)^{2} \leq \frac{4}{9} .
\end{equation}
Note that in the case that the states $\{|\eta_p\rangle\}_{p=1}^3$ are orthogonal, we have $P_{{\mathcal U}_g}^{\rm orth}=D =2/9$ and $P_{{\mathcal H}_p}=1$, so that in this case the inequality becomes an equality. Unlike in the case of a cyclic group, where all of the invariant subspaces are one-dimensional, and we can do no better than guessing which ${\mathcal U}_g$ we have when the states~$|\eta_{p}\rangle$ are orthogonal, in this case, since one of the subspaces is two-dimensional, there is some information about the transformation that survives ($D=2/9>1/6=1/|G|$).
\section{Conclusion}
We have derived a duality relation for finite groups, which generalizes those for wave-particle duality. One of the quantities in the duality relation, $P_{{\mathcal U}_g}$, i.e., the probability of successfully discriminating the channels ${\mathcal U_g}$, is a measure of the asymmetry of a state under the action of the group. If the group is cyclic, which corresponds to the usual case of phase information versus path information, $P_{{\mathcal U}_g}$ is determined by the $l_{1}$ measure of coherence. The other quantity, $P_{{\mathcal H}_p}$, reflects our ability to discriminate the tags $\{|\eta_p\rangle\}$ attached to the irreducible representations that act on the system. Since irreducible representations act in invariant subspaces, the tags can be interpreted as labelling these subspaces. If we have a network that implements the group transformations, such as that in Fig.~\ref{f-1}, the tags tell us which part of the network the particle went through, and so in the case of one-dimensional irreducible representations, they simply tell us about the path the particle took. In the usual case, if we have complete information about the path, the quantum coherence is zero and no information about the phases is left, while in the more general case considered here, since some of the subspaces have dimension greater than one, we can know which subspace the particle went through, and there will be some information left about the group transformation, that is, the state of the particle will still have some asymmetry.
\section{}
\label{App A}
First, we will show that $P_{\rm s}(\rho )$ decreases under G-covariant quantum maps, that is, $P_{\rm s}(\mathcal{E}(\rho )) \leq P_{\rm s}(\rho )$, where $\mathcal{E}$ is a trace-preserving, completely positive and G-covariant map. Let the Kraus operators for $\mathcal{E}$ be $A_{j}$,
\begin{equation}
\mathcal{E}(\rho ) =\sum_{j} A_{j}\rho A^{\dagger}_{j} ,
\end{equation}
and set $\rho_{g}=U(g)\rho U^{\dagger}(g)$. Let $\{\Pi_{g}\}_{g\in G}$ be the optimal minimum-error POVM that discriminates among the states $\rho_{g}$. We then have that
\begin{equation}
P_{\rm s}(\rho ) = \frac{1}{|G|}\sum_{g\in G} {\rm Tr}(\rho_{g}\Pi_{g}) ,
\end{equation}
where $|G|$ is the number of elements in $G$. Now let $\{\Pi_{g}^{(\mathcal{E})}\}_{g\in G}$ be the optimal minimum-error POVM that discriminates among the states $\mathcal{E}(\rho_{g})$. We then have that
\begin{eqnarray}
P_{\rm s}(\mathcal{E}(\rho ) )& = & \frac{1}{|G|}\sum_{g\in G} {\rm Tr}(U(g)\mathcal{E}(\rho )U^{\dagger}(g)\Pi_{g}^{(\mathcal{E})} ) \nonumber \\
& = & \frac{1}{|G|}\sum_{g\in G} {\rm Tr}(\mathcal{E}(\rho_{g}) \Pi_{g}^{(\mathcal{E})}) \nonumber \\
& = & \frac{1}{|G|}\sum_{g\in G} \sum_{j} {\rm Tr}( \Pi_{g}^{(\mathcal{E})} A_{j}\rho_{g}A_{j}^{\dagger})
\nonumber \\
& = & \frac{1}{|G|}\sum_{g\in G} {\rm Tr}\left( \left[ \sum_{j} A_{j}^{\dagger} \Pi_{g}^{(\mathcal{E})} A_{j}\right] \rho_{g}\right) .
\end{eqnarray}
Define
\begin{equation}
\Pi_{g}^{\prime} = \sum_{j} A_{j}^{\dagger} \Pi_{g}^{(\mathcal{E})} A_{j} .
\end{equation}
These operators are positive and sum to the identity, and, therefore, constitute a POVM. Because $\{\Pi_{g}\}_{g\in G}$ is the optimal minimum-error POVM that discriminates among the states $\rho_{g}$, we have that
\begin{equation}
\sum_{g\in G} {\rm Tr}(\Pi_{g}^{\prime}\rho_{g}) \leq \sum_{g\in G} {\rm Tr}(\Pi_{g}\rho_{g}) ,
\end{equation}
which implies that $P_{\rm s}(\mathcal{E}(\rho )) \leq P_{\rm s}(\rho )$ .
Next, we would like to show that $P_{\rm s}(\rho )$ is convex. Let $\rho = \sum_{n} p_{n}\rho_{n}$ and let
$\{\Pi_{g}^{(n)}\}_{g\in G}$ be the optimal POVM for discriminating the states $U(g)\rho_{n} U(g)^{\dagger}$ for $g\in G$. If~$\{\Pi_{g}\, | \, g\in G\}$ is the optimal POVM for discriminating the states
$U(g)\rho U(g)^{\dagger}$ for $g\in G$, then
\begin{eqnarray}
\frac{1}{|G|} \sum_{g\in G} {\rm Tr}(U(g)\rho_{n}U(g)^{\dagger}\Pi_{g}) \nonumber \\
\leq \frac{1}{|G|} \sum_{g\in G} {\rm Tr}(U(g)\rho_{n}U(g)^{\dagger}\Pi_{g}^{(n)}) ,
\end{eqnarray}
and this implies that
\begin{equation}
P_{\rm s}(\rho ) \leq \sum_{n} p_{n} P_{\rm s} (\rho_{n}) .
\end{equation}
\section{}
\label{App B}
Suppose we have a set of states $\{ U(g)|\phi\rangle\, |\, g\in G\}$, where $G$ is a group and $U(g)$ is a unitary representation of $G$. We will denote the identity element of the group by $e$. Our object is to find a POVM that will optimally discriminate among these states with minimum error. We can assume that the POVM can be expressed as $\Pi_{g}=U(g)\Pi_{e}U(g)^{\dagger}$, where $\Pi_{g}$ is the POVM element corresponding to $U(g)|\phi\rangle$ and $\Pi_{e}$ corresponds to $|\phi\rangle$ (see Appendix~\ref{App C}). Assuming the states are equally likely, the success probability for the measurement is
\begin{equation}
P_{s} = \frac{1}{|G|}\sum_{g\in G} \langle\phi |U(g)^{\dagger}\Pi_{g}U(g)|\phi\rangle = \langle\phi |\Pi_{e}|\phi\rangle ,
\end{equation}
where $|G|$ is the order of the group.
As in the main text, let us assume that $U(g)$ is a representation of $G$ in which each irreducible representation appears at most once and denote the p$^{\rm th}$ irreducible representation by $\Gamma_{p}(g)$. If $\openone_{p}$ is the projector onto the invariant subspace on which the p$^{\rm th}$ irreducible representation acts, and $|X\rangle$ is a vector, then
\begin{equation}
\frac{1}{|G|} \sum_{g\in G}U(g)|X\rangle\langle X|U(g)^{\dagger} = \sum_{p}\frac{1}{d_{p}}\| \openone_{p}X\|^{2} \openone_{p} ,
\end{equation}
where the sum is over the irreducible representations occurring in $U(g)$, and we recall that $d_{p}$ is the dimension of the p$^{\rm th}$ irreducible representation.
Now let $|\phi_{p}\rangle = \openone_{p}|\phi\rangle$. We then have that
\begin{eqnarray}
\langle\phi |\Pi_{e}|\phi\rangle & = & \sum_{p,q}\langle\phi_{p}|\Pi_{e}|\phi_{q}\rangle \nonumber \\
& = & \sum_{p,q} \langle\sqrt{\Pi_{e}}\phi_{p}|\sqrt{\Pi_{e}}\phi_{q}\rangle \nonumber \\
& \leq & \sum_{p,q} \left(\langle \phi_{p}|\Pi_{e}|\phi_{p}\rangle \langle \phi_{q}|\Pi_{e}|\phi_{q}\rangle \right)^{1/2} \nonumber \\
& \leq & \left( \sum_{p} \langle \phi_{p}|\Pi_{e}|\phi_{p}\rangle^{1/2} \right)^{2} .
\label{phiPiphi}
\end{eqnarray}
Now let's make use of the fact that the sum of the POVM elements is the identity. Let $|X_{p}\rangle$ be a vector of norm one in the invariant subspace corresponding to the p$^{\rm th}$ irreducible representation. Then
\begin{eqnarray}
\hspace{-2em}
\sum_{g\in G}\!{\rm Tr}(\Pi_{g} |X_{p}\rangle\langle X_{p}|)\! &=&
\!{\rm Tr}\left[\left(\sum_{g\in G}\Pi_g\right)|X_p\rangle\langle X_p|\right]\nonumber\\[.5em]
&=&\! 1 .
\end{eqnarray}
However, we also have
\begin{eqnarray}
\hspace{-.5em}
\sum_{g\in G}\!{\rm Tr}(\Pi_{g} |X_{p}\rangle\langle X_{p}|)
\! &=&\!\! \sum_{g\in G}\!{\rm Tr}(U(g)\Pi_{e}U(g)^{\dagger} |X_{p}\rangle\langle X_{p}|) \nonumber\\
\!&=&\! {\rm Tr}\!\left[\! \Pi_{e}\!\! \sum_{g\in G} \!U\!(g)^{\!\dagger}|X_{p}\rangle\langle X_{p}| U\!(g)\!\right] \nonumber \\[.5em]
& =&\! {\rm Tr}\!\left[ \Pi_{e}\!\! \sum_{g\in G} \!U\!(g^{-1}\!)|X_{p}\rangle\langle X_{p}| U\!(g^{-1})^{\!\dagger}\!\right] \nonumber \\[.5em]
& =& \frac{|G|}{d_{p}} {\rm Tr}(\openone_{p}\Pi_{e}\openone_{p}) .
\end{eqnarray}
Therefore,
\begin{equation}
{\rm Tr}(\openone_{p}\Pi_{e}\openone_{p}) = \frac{d_{p}}{|G|}\ .
\end{equation}
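As a consistency check, summing this relation over the irreducible representations appearing in $U(g)$ gives ${\rm Tr}\,\Pi_{e}=\sum_{p}d_{p}/|G|$, and hence
\begin{equation}
{\rm Tr}\sum_{g\in G}\Pi_{g}=|G|\,{\rm Tr}\,\Pi_{e}=\sum_{p}d_{p}={\rm Tr}\,\openone ,
\end{equation}
as required for a POVM on the carrier space of $U(g)$.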
We also have that
\begin{equation}
\frac{1}{\|\phi_{p}\|^{2}} \langle \phi_{p}| \Pi_{e}|\phi_{p}\rangle \leq {\rm Tr}(\openone_{p}\Pi_{e}\openone_{p}) ,
\end{equation}
which finally gives us that
\begin{equation}
\label{upbound}
\langle\phi |\Pi_{e}|\phi\rangle \leq \left( \sum_{p} \sqrt{\frac{d_{p}}{|G|}} \|\phi_{p}\| \right)^{2} .
\end{equation}
Now let us find a POVM that achieves this bound. Choose $\Pi_{e}=|X\rangle\langle X|$ for some vector $|X\rangle$. Then the requirement that the POVM elements sum to the identity gives us
\begin{equation}
\sum_{g\in G}U(g)|X\rangle\langle X|U(g)^{\dagger}= \sum_{p}\frac{|G|}{d_{p}} \|X_{p}\|^{2} \openone_{p} = \openone,
\end{equation}
where $|X_{p}\rangle = \openone_{p}|X\rangle$. This implies that
\begin{equation}
\|X_{p}\| = \sqrt{\frac{d_{p}}{|G|}} .
\end{equation}
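Note that this choice has the correct overall normalization,
\begin{equation}
\langle X|X\rangle=\sum_{p}\|X_{p}\|^{2}=\sum_{p}\frac{d_{p}}{|G|}=\frac{{\rm Tr}\,\openone}{|G|} ,
\end{equation}
consistent with ${\rm Tr}\sum_{g\in G}\Pi_{g}={\rm Tr}\,\openone$.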
Now assume that we choose $|X_{p}\rangle$ parallel to $|\phi_{p}\rangle$. This implies that
\begin{equation}
\langle\phi_{p}|X_{p}\rangle = \sqrt{\frac{d_{p}}{|G|}} \|\phi_{p}\| ,
\end{equation}
and
\begin{eqnarray}
P_{\rm s} & = & \langle\phi |\Pi_{e}|\phi\rangle = \sum_{p,q}\langle\phi |X_{p}\rangle\langle X_{q}|\phi\rangle \nonumber \\
& = & \left( \sum_{p} \sqrt{\frac{d_{p}}{|G|}} \|\phi_{p}\| \right)^{2} .
\label{Ps no-rep}
\end{eqnarray}
Therefore, this POVM achieves the upper bound in Eq.~(\ref{upbound}) and is the minimum-error POVM of covariant form, which is the optimal minimum-error POVM.
\section{}\label{App C}
Suppose we have a set of states $\{ U(g)|\phi\rangle\}_{g\in G}$, where~$G$ is a group and $U(g)$ is a unitary representation of $G$.
Our object is to show that in this case we can assume with no loss of generality that the optimal POVM for discriminating the given states is of the form $\Pi_g=U(g)\Pi_eU^\dagger(g)$, which we call covariant. We will do so by proving that for any POVM, $\{\tilde\Pi_g\}_{g\in G}$, we can always find a covariant POVM that attains the very same success probability~$\tilde P_{\rm s}$.
Assuming that the states are equally likely we have
\begin{eqnarray}
\tilde P_{\rm s}&=&\frac{1}{|G|}\sum_{g\in G} {\rm Tr}\!\left[U(g)|\phi\rangle\langle\phi|U^\dagger(g) \, \tilde\Pi_g\right] \nonumber \\
&=&{\rm Tr}\left[ |\phi\rangle\langle\phi|\; \frac{1}{|G|}\sum_{g\in G} U^\dagger(g)\tilde\Pi_g U(g)\right] \nonumber \\
&=&{\rm Tr}\left( |\phi\rangle\langle\phi|\Omega\right)=\langle \phi|\Omega|\phi\rangle,
\label{Ptilde}
\end{eqnarray}
where we recall that $|G|$ is the order of the group and we have defined
\begin{equation}
\Omega=\frac{1}{|G|}\sum_{g\in G} U^\dagger(g)\tilde\Pi_g U(g) .
\end{equation}
We further define $\Pi_g=U(g)\Omega U^\dagger(g)$. Each $\Pi_g$ is positive and
\begin{eqnarray}
\sum_{g\in G}\Pi_g\!\!&=&\!\frac{1}{|G|}\!\!\sum_{g,g'\!\in G} \!\!U(g)U^\dagger(g')\tilde\Pi_{g'}U(g')U^\dagger(g)\nonumber\\
&=&\!\frac{1}{|G|}\!\!\sum_{g,g'\!\in G}\!\!U(gg'^{-1})\tilde\Pi_{g'}U^\dagger(gg'^{-1})\nonumber\\
&=&\!\frac{1}{|G|}\!\!\sum_{g''\!\!,g'\in G}\!\!\! U(g'')\tilde\Pi_{g'}U^\dagger(g'')\nonumber\\
&=&\!\frac{1}{|G|}\!\!\sum_{g''\!\in G}\!\!U(g'')\!\Bigg(\sum_{g'\in G}\tilde\Pi_{g'}\!\Bigg)U^\dagger(g'')\!=\!
\openone.
\end{eqnarray}
This shows that the set $\{\Pi_g\}_{g\in G}$ defines a proper POVM, where~$\Pi_{e}=\Omega$ is the POVM element corresponding to~$|\phi\rangle$ and $\Pi_g$ corresponds to $U(g)|\phi\rangle$. Moreover, this POVM gives the same success probability as $\tilde\Pi_g$, since
\begin{eqnarray}
P_{\rm s}&=&\frac{1}{|G|}\sum_{g\in G}{\rm Tr}\left[U(g)|\phi\rangle\langle\phi|U^\dagger(g)\Pi_g\right]\nonumber\\
&=&
\frac{1}{|G|}\sum_{g\in G}{\rm Tr}\left[|\phi\rangle\langle\phi|U^\dagger(g)\Pi_g U(g)\right]\nonumber\\
&=&
\frac{1}{|G|}\sum_{g\in G}{\rm Tr}\left(|\phi\rangle\langle\phi|\Omega\right)
=\langle\phi|\Omega|\phi\rangle=\tilde P_{\rm s},
\end{eqnarray}
as Eq.~(\ref{Ptilde}) shows. The proof also works for mixed states.
\end{document}